Computers in Industry 56 (2005) 905–917
Grading of construction aggregate through machine vision: Results and prospects

Fionn Murtagh a,*, Xiaoyu Qiao b, Paul Walsh c, P.A.M. Basheer c, Danny Crookes b, Adrian Long c

a Department of Computer Science, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK
b School of Computer Science, Queen's University Belfast, Belfast BT7 1NN, UK
c School of Civil Engineering, Queen's University Belfast, Belfast BT7 1NN, UK

* Corresponding author. E-mail address: [email protected] (F. Murtagh).

Received 1 December 2004; received in revised form 31 March 2005; accepted 31 May 2005. Available online 10 October 2005.
Abstract

Traditionally, crushed aggregate to be used in construction is graded using sieves. We describe an innovative machine vision approach to such grading. Our operational scenario is one where a camera takes images from directly overhead of a layer of aggregate on a conveyor belt. In this article, we describe effective solutions for (i) image segmentation, allowing larger pieces of aggregate to be measured, and (ii) supervised classification from wavelet entropy features, for class assignment of both finer and coarse aggregate.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Machine vision; Construction industry; Wavelet transform; Supervised and unsupervised classification; Image database
1. Introduction

The construction industry in the UK currently uses in excess of 218 million tons of crushed aggregate per year [1]. Approximately £1.25 billion is spent yearly in getting this material and processing it to the quality required by the end user. In terms of quality control, the sector is relatively underdeveloped, with only very simple manual tests being applied to the end product. As a result, product may be produced that fails to meet the specification and must be reworked. Similarly, excessive, and wasteful, crushing and processing effort may be used to produce the desired product. From a cost-benefit perspective it is therefore essential that maximum economic value be obtained from the quarried stone, which will require wastage to be eliminated from each stage of the processing chain. The quality of the aggregate produced in terms of the consistency of its size and shape also has a major influence on the quality (particularly in relation to
workability and durability) of the concrete and blacktop mixes subsequently produced. The recent introduction of an aggregate tax is also going to drive the need for more efficient usage of quarried rock. Round or cubic aggregate particles have traditionally been considered the most suitable for meeting the needs of industry, although it has also been suggested that bituminous mixes including non-cubic fractions can lead to better road pavement layer stability [2,3]. The development of a rapid and efficient means of classifying aggregate size and shape could therefore enable the beneficial properties of an aggregate to be more fully exploited.

Aggregate sizing is carried out in the industrial context by passing the material over sieves or screens of particular sizes. Aggregate is a three-dimensional material and as such need not meet the screen aperture size in all directions in order to pass through that screen. The British Standard specification suggests (as do American and other European specifications) that any single-size aggregate may contain a percentage of larger and smaller sizes, the magnitude of this percentage depending on the use to which the aggregate is to be put. BS 63 Part 2, 1987 provides typical specifications.

To monitor the range of sizes of aggregate particles produced from any particular screen, regular laboratory testing is carried out. This involves sampling the aggregate either from the moving conveyor belt or from the stockpile produced. A sieve analysis test is carried out to assess the range of particle sizes present, in accordance with BS 812 Part 103 1985. This test is time-consuming and therefore only a relatively small fraction (2 kg per 400–500 tonnes) of the aggregate produced is ever tested. The quality of the result also relies heavily on good sampling technique, which means that feedback to the quarry operators can be slow and in many cases unrepresentative. Certain shape parameters are also specified for particular uses, the most common being the Flakiness and Elongation indices (BS 812 Section 105.2 1990). These tests are also very labor intensive and time consuming, and are carried out on an even more limited number of samples.

An ability to measure the size and shape characteristics of an aggregate or mix of aggregate,
ideally quickly, is therefore desirable to enable the most efficient use to be made of the aggregate and binder available. In this article, we will present a range of innovative results relating to the methodology we are using to address these issues. We will illustrate the capabilities of image processing methods and tools. Firstly, we consider image segmentation for handling coarse aggregate, mixed with finer aggregate. Then we consider fine aggregate mixes. For both, we illustrate the methodology used with examples, and show that the approaches proposed are effective and computationally efficient.
2. The image data

This area of application is an ideal one for image content-based matching and retrieval, in support of automated grading. Compliance with a mixture specification is tested by matching against an image database of standard images. An important aim in our work is to assess how well 2D imaging can replicate the functionality of a 3D physical sieve.

For our work, image data capture took place in an experimental environment whose conditions can be replicated in an operational setting: limitation of 3D effects and occlusion; use of diffuse, homogeneous light; and avoidance of shadow. For each class of aggregate mix, four separate samples were taken. Following each imaging, the aggregate mix was randomized. To provide a good sample in the case of each image, a subimage was extracted from a central region of each image.

We took 50 images to represent each of the following 12 aggregate classes, giving 600 images analyzed. A further set of 108 images (9 from each class) was used for further testing. Classes were defined as passing: 6 mm sieve hole diameter; 30/40 mix; 50/10 mix; 10 mm; 14 mm; 28 mm drb; 40 mm; 28 mm dbc; 20 mm; 50–14 wc; 35 14 mm; and 510 bc mm. (Acronyms used here: drb = dense roadbase; wc = wearing course; dbc = dense binder course.) Figs. 1–5 show representative images from the first five of these classes.

Our objective is to create an image-based "virtual sieve" which, through image matching against an image database of standard images, will provide automated grading.
Fig. 1. Image from class 1.
Fig. 3. Image from class 3.
3. Segmentation using texture features

Local energy in a range of wavelet transform bands is often used to provide a feature set [4–7]. The input for segmentation therefore becomes a multi-band image of wavelet coefficient energies. Since texture is a locally defined property, we must average these wavelet coefficient energies prior to applying the segmentation algorithm. Because neighbourhood information is already included in our averaged
Fig. 4. Image from class 4.
Fig. 2. Image from class 2.
Fig. 5. Image from class 5.
wavelet coefficient energies (by virtue of the dilated wavelet function at successive scales), the clustering algorithm applied does not need to take additional neighbourhood information into consideration. Our algorithm consists of a wavelet transform; the taking of average energies of coefficients; and the clustering of the multiband wavelet coefficient energies. It is now described in greater detail.

1. Take the wavelet transform of the image, using an undecimated Mallat transform with three bands per scale, and using Antonini 7/9 biorthogonal filters. Use four scales. This provides 10 bands. Note that the smooth (or DC, "direct current") component is included here.
2. Determine the energies of all wavelet and smooth band coefficients. We used the absolute value rather than the squared value (i.e., the 1-norm as against the 2-norm).
3. Texture is characteristic of a local region. Hence, we average the coefficient energies by convolving each band with a box filter. On the five-segment image of dimensions 512 × 512, we used a box with side length 31. On the four-segment image of dimensions 200 × 200, we used a box of side length 17. Other side lengths used are noted below.
4. Given the box filter convolution, it is best to remove image boundary pixels. Therefore, from the 512 × 512 image, we used a 448 × 448 image for further analysis; from the 200 × 200 image, we used a 162 × 162 image. We cropped image boundaries.
5. The multiband images to be segmented were therefore of dimensions 448 × 448 × 10 and 162 × 162 × 10, respectively, for the given 512 × 512 and 200 × 200 images. K-means clustering was used in the 10-dimensional space. Rather than some variant of the commonly used EM (expectation-maximization) algorithm, we used the more robust and very efficient exchange algorithm [8]. This algorithm is described below.

Fig. 6 shows the usual structure of the wavelet transform. The lateral and horizontal orientations at each resolution scale result from the use of the biorthogonal filters. We have three bands at each resolution level. Decimation of the bands, however, causes difficulty for sensitive demarcation of segment
Fig. 6. The ten bands, shown here in decimated form, used following the wavelet transform. In this work, however, we used a non-decimated wavelet transform.
boundaries. Such aliasing is a well-known side effect of decimation. Therefore, we used this wavelet transform without decimation at each resolution scale. The MR multiresolution environment [9] (program mr_transform) was used for this. From an image of input dimensions 200 × 200, with 10 bands, the output "shaved" or cropped image to be clustered was a multiband image of dimensions 162 × 162 × 10.

Generally, the minimum distance algorithm is used for k-means, a method which dates back to the separate work of MacQueen and Forgy in the 1960s. K-means is also close [10] to competitive learning, and indeed to the Kohonen self-organizing map method. The two steps are: update cluster centres, and assign each vector to the closest centre. These two steps are referred to as expectation and maximization in the EM algorithm [11]. However, in many applications, we have found the exchange algorithm of Späth [8], originally due to S. Régnier in the 1960s, to give efficient and more stable results. In particular, the chances of a cluster becoming empty are small with Späth's exchange algorithm. From an initial partition, each observation vector is considered in turn for possible assignment to another cluster. For image dimensions of 500–700 squared, in 10-dimensional space, we generally find convergence with this method after 15–20 epochs. (An epoch is the consideration, once, of all observation vectors.) Initialization is important in k-means: we fit a Gaussian mixture model to the marginal density of the first image plane of the multiband image. The mixture model uses the EM algorithm. Note, however, that the partitioning algorithm itself is deterministic.
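As a concrete illustration of the exchange algorithm, the following is a minimal sketch in Python with numpy. The function name and the incremental update formulas are ours, following standard presentations of Späth's method [8]; this is illustrative rather than the code used in our experiments.

```python
import numpy as np

def kmeans_exchange(X, labels, max_epochs=25):
    """K-means by the exchange heuristic: each vector is considered in
    turn for reassignment, and a move is accepted only if it lowers the
    within-cluster sum of squares. Centres and counts are updated
    incrementally, so a cluster can never be emptied silently."""
    n, d = X.shape
    k = int(labels.max()) + 1
    counts = np.bincount(labels, minlength=k).astype(float)
    centres = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    for epoch in range(max_epochs):      # an epoch = one pass over all vectors
        moved = 0
        for i in range(n):
            a = labels[i]
            if counts[a] <= 1:           # never empty a cluster
                continue
            # decrease in the criterion if x_i leaves cluster a ...
            loss_a = counts[a] / (counts[a] - 1) * np.sum((X[i] - centres[a]) ** 2)
            # ... and increase if it joins each candidate cluster b
            cost = np.array([counts[b] / (counts[b] + 1) *
                             np.sum((X[i] - centres[b]) ** 2) for b in range(k)])
            cost[a] = np.inf
            b = int(np.argmin(cost))
            if cost[b] < loss_a:         # the exchange improves the partition
                centres[a] = (counts[a] * centres[a] - X[i]) / (counts[a] - 1)
                centres[b] = (counts[b] * centres[b] + X[i]) / (counts[b] + 1)
                counts[a] -= 1.0
                counts[b] += 1.0
                labels[i] = b
                moved += 1
        if moved == 0:                   # converged: a full pass with no move
            break
    return labels, centres
```

In line with the description above, the initial labels would come from a Gaussian mixture fit to the first band's marginal density; given that initialization, the exchange passes themselves are deterministic.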
The computational requirement of the clustering depends linearly on image size, number of epochs (capped at 25), and number of segments. Typically, for an image of size 200 × 200 × 10, the clustering takes 4 s, with about half of this time being for the initialization.
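For concreteness, steps 1–5 can be sketched end to end as follows. We used the MR software [9] for the undecimated transform; the sketch substitutes PyWavelets' stationary wavelet transform with its 'bior4.4' (7/9 Antonini) filter pair and SciPy's uniform (box) filter, so the library choices and parameter defaults here are illustrative assumptions rather than our implementation.

```python
import numpy as np
import pywt                               # PyWavelets: stationary (undecimated) transform
from scipy.ndimage import uniform_filter  # box filter

def texture_feature_image(image, levels=3, box=31):
    """Multiband feature image for segmentation: per-band absolute
    wavelet coefficients (1-norm 'energies') from an undecimated
    transform, box-filtered so that the measure is local-region based,
    with filter-affected boundary pixels cropped."""
    # swt2 needs side lengths divisible by 2**levels, so crop first
    h, w = (s - s % 2 ** levels for s in image.shape)
    image = image[:h, :w].astype(float)
    coeffs = pywt.swt2(image, 'bior4.4', level=levels)
    bands = [coeffs[0][0]]                # smooth (DC) band, coarsest level
    for _, (cH, cV, cD) in coeffs:        # three detail bands per scale
        bands.extend([cH, cV, cD])        # 3 scales x 3 + smooth = 10 bands
    feats = np.stack([uniform_filter(np.abs(b), size=box) for b in bands],
                     axis=-1)
    m = box // 2                          # crop the box filter's half-width
    return feats[m:-m, m:-m, :]
```

The result, reshaped to a list of 10-dimensional vectors, is what we cluster with the exchange algorithm sketched earlier.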
4. Image analysis of aggregate and sand mixtures

Figs. 7–11 show examples of the segmentation of widely used test images of textures. Fig. 7 is a particularly difficult image, and both test cases are troublesome in the regions where the textures touch one another. The results in Figs. 8, 10 and 11 are good, even if not 100% correct. The four- versus five-class results shown in Figs. 10 and 11 are further confirmation of the quality of the results. Having appraised our approach on test image data for comparative assessment purposes, we now move on to real image data.

The use of texture measures to model coarse and fine aggregate is demonstrated with Fig. 12. This figure is a composite from three images using sand passing 600 μm, 300 μm, and 75 μm; see Figs. 13–15. Averaging of wavelet coefficient energies with a box filter of side length 31 leads to the segmentation shown in Fig. 16. Fig. 17 shows the result when the box filter is of side length 27. In both cases, the analyzed images were of dimensions 164 × 164 × 10. Fig. 16 has somewhat thick inter-segment boundary regions, whereas Fig. 17 is a little better in this regard but at the expense of intra-cluster segmentation. This example shows (i) that the result is not very sensitive to the box filter size and (ii) that the size needs to be defined for the given texture characteristics of the data.
Fig. 7. Test image with five textured regions.
Fig. 9. Test image with four textured regions.
Fig. 8. Result of five-class segmentation of Fig. 7.
Fig. 10. Result of four-class segmentation of Fig. 9.
Next, a realistic image of coarse aggregate and sand was segmented. The image was originally of dimensions 420 × 449; with removal of overall image boundaries, the images analyzed were of dimensions 357 × 385. The latter are the image sizes shown in Figs. 18–20. We looked at both two-cluster and three-cluster solutions, to assess robustness. We find that we can derive a slightly more accurate result from the three-cluster solution. However, analysis of the two-cluster solution in Fig. 19 provides us with the easier way to remove unwanted regions from consideration. Segmentation is rendered difficult by partial obscuration. It appears that we have captured all visible, non-minuscule pieces of aggregate. Have some pieces of aggregate been lost? Doubtless yes, and a quantitative answer to this question would require use of receiver operating characteristic (ROC) analysis, in which segmentation algorithm parameters are varied in order to trade off the quality of recovered segments against the false alarm rate. For our purposes here, it is adequate to find most of the visible aggregate.
Fig. 11. Result of five-class segmentation of Fig. 9.
Fig. 13. Grades of aggregate. 1—pebbles passing 600 μm.
Fig. 12. Test image with three grades of sand.
Fig. 16. Result of three-class segmentation of Fig. 12. A box filter of side length 31 was used.
Fig. 14. Grades of aggregate. 2—gravel passing 300 μm.
A report file on each object – a large piece of aggregate – currently includes the following information: the object sequence number, shown in a corresponding plot; pixel coordinates; "peak-min", i.e. the maximum value minus the minimum value in the object region, with reference to original pixel values;
Fig. 15. Grades of aggregate. 3—sand passing 75 μm.
the size of the object region in pixels; χ² goodness-of-fit values for the fit of Gaussian profiles in the horizontal (or x) and vertical (or y) directions; and the full widths at half maximum (FWHM) of the x and y profiles. The FWHM is a measure of spread and, in the case of a Gaussian distribution, is equal to 2.35 times the standard deviation. The Gaussian profile fits are of course based on image intensity, which in turn represents the physical reflectance property of the aggregate. If objects are too close to the overall image boundary, we reject them as unreliable. If objects do not pass a size threshold, here set at 200 pixels, they are also rejected as uninteresting. In this case, the object size is noted, and also the average intensity value.
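To make the report format concrete, one record might be assembled as sketched below; the field names, the rejection logic, and the use of intensity-weighted spreads in place of full 1D Gaussian profile fits (which would also yield the χ² values) are our illustrative assumptions.

```python
import numpy as np

FWHM_PER_SIGMA = 2.0 * np.sqrt(2.0 * np.log(2.0))  # ~2.355: the 2.35 factor above

def object_report(pixels, values, seq, min_size=200, border=5, shape=(357, 385)):
    """One report-file record for a segmented object. `pixels` is an
    (n, 2) array of (row, col) coordinates; `values` holds the original
    image intensities at those pixels."""
    rows, cols = pixels[:, 0], pixels[:, 1]
    if (rows.min() < border or cols.min() < border or
            rows.max() >= shape[0] - border or cols.max() >= shape[1] - border):
        return {'seq': seq, 'status': 'rejected: too close to image boundary'}
    if len(values) < min_size:
        return {'seq': seq, 'status': 'rejected: below size threshold',
                'size': len(values), 'mean_intensity': float(values.mean())}
    # intensity-weighted centroid and spreads along x and y; a full
    # implementation would instead fit Gaussian profiles and report
    # their chi-squared goodness of fit as well
    wgt = values / values.sum()
    cy, cx = np.sum(wgt * rows), np.sum(wgt * cols)
    sy = np.sqrt(np.sum(wgt * (rows - cy) ** 2))
    sx = np.sqrt(np.sum(wgt * (cols - cx) ** 2))
    return {'seq': seq,
            'centroid': (float(cy), float(cx)),
            'peak_min': float(values.max() - values.min()),
            'size': len(values),
            'fwhm_x': float(FWHM_PER_SIGMA * sx),   # FWHM = 2.35 sigma
            'fwhm_y': float(FWHM_PER_SIGMA * sy)}
```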
Fig. 17. Result of three-class segmentation of Fig. 12. A box filter of side length 27 was used.
Fig. 18. A mixture of stones and sand.
Fig. 20. Superimposition of the three-cluster solution in Fig. 18. The near-black regions are one cluster, and they give us the image regions of interest. A lighter and barely perceptible shade of gray is used for the second cluster. The third cluster is zero-valued and cannot be distinguished from the background sand and gravel.
Our conclusions on image segmentation are the following: Segmentation is feasible, using approaches related to texture analysis. 2D projections can capture 3D object shape information, subject to the hypothesis that any such 2D projection is a priori equally feasible (cf. individual object shape analysis in, e.g., quarrying [12,13]). Rather than trying to measure individual objects, we will, in the following sections, look at alternative approaches for characterizing the information content of an entire image at once.
Fig. 19. Superimposition of the two-cluster solution on Fig. 18. These black areas give us the regions of interest. The non-background cluster is displayed in black.

5. Feature selection

5.1. Multiple scale entropy to quantify aggregate granularity
Having discussed image segmentation, which led to analysis of individual objects, we now turn to information characterization of an entire image.
For fine-grained image characterization we carry out a "texture" analysis, and the wavelet transform is, by now, a traditional way to do this [5,14,15,4,6]. Our approach avoids any system parameter related to window size, and the undecimated wavelet transform used helps to avoid object aliasing. We used multiple scale image entropy to quantify aggregate granularity. Using five wavelet scales, from the B3 spline à trous redundant wavelet transform, an entropy per scale was determined, thus providing a five-valued feature vector for each image. Background on this is provided in [16], where it was concluded that this approach to feature definition performed well for discrimination of aggregate "textures". We additionally used five wavelet scales, with the same wavelet transform method, on the edge map, i.e., the image transformed with a Canny edge detector [17,9]. In total, this provided 10 features per image.

A B3 spline à trous wavelet transform gives the following decomposition of the original signal:

$$\{x_k \mid k = 1, 2, \ldots, m\} = \Big\{\sum_{j=1}^{l} w_{j,k} \;\Big|\; k = 1, 2, \ldots, m\Big\}$$

Here l is the number of scales, and the number of samples, m, in each band (scale) is constant for this redundant transform. Scale l is the smooth or continuum scale, and all other scales consist of zero-mean (per scale) wavelet or detail coefficients. The value of l is set by the user (here, 6, implying 5 wavelet scales). The feature set is defined from the resolution-scale-related decomposition as follows:

$$H = \{H_j \mid j = 1, 2, \ldots, l-1\} = \Big\{\sum_{k=1}^{m} h(w_{j,k}) \;\Big|\; j = 1, 2, \ldots, l-1\Big\} \quad (1)$$

with $h(w_{j,k}) = -\ln p(w_{j,k})$. The probability $p(w_{j,k})$ is the probability that the wavelet coefficient $w_{j,k}$ is due to noise. The smaller this probability, the more important is the information carried by the wavelet coefficient. For Gaussian noise we have

$$h(w_{j,k}) = \frac{w_{j,k}^2}{2\sigma_j^2} + \mathrm{const.} \quad (2)$$

where $\sigma_j$ is the noise standard deviation at scale j. Further discussion can be found in [16].
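The following sketch implements Eqs. (1) and (2) with a B3 spline à trous transform written directly in Python (numpy and scipy). The per-scale noise estimate via the median absolute deviation is an assumption made to keep the sketch self-contained; the noise modelling actually used is described in [16].

```python
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3 spline scaling kernel

def atrous(image, n_scales=5):
    """Redundant B3 spline a trous transform: n_scales wavelet (detail)
    planes plus the final smooth plane, all the same size as the input."""
    smooth, planes = image.astype(float), []
    for j in range(n_scales):
        kernel = np.zeros(4 * 2 ** j + 1)
        kernel[:: 2 ** j] = B3                     # dilate the kernel by 2^j
        nxt = convolve1d(convolve1d(smooth, kernel, axis=0, mode='reflect'),
                         kernel, axis=1, mode='reflect')
        planes.append(smooth - nxt)                # wavelet coefficients, scale j
        smooth = nxt
    return planes, smooth

def multiscale_entropy(image, n_scales=5):
    """Per-scale entropy H_j of Eq. (1) under the Gaussian model of
    Eq. (2), dropping the constant: H_j = sum_k w_{j,k}^2 / (2 sigma_j^2).
    sigma_j is estimated robustly per scale (an assumption; see [16])."""
    planes, _ = atrous(image, n_scales)
    feats = []
    for w in planes:
        sigma = np.median(np.abs(w)) / 0.6745      # robust noise estimate
        feats.append(float(np.sum(w ** 2) / (2.0 * sigma ** 2)))
    return np.array(feats)                         # five-valued feature vector
```

Applied once to the image and once to its Canny edge map, this yields the 10 features per image described above.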
Table 1
Aggregates passing coarse grade (600 μm) to fine (75 μm)

Aggregate (μm)   Band 1   Band 2   Band 3   Band 4   Band 5
600               3.07    11.93    14.79    15.49    15.71
300               2.58    11.47    14.32    15.16    15.49
75                2.18     9.94    12.39    13.16    13.61

600               2.65    11.84    14.76    15.47    15.71
300               2.14    11.37    14.29    15.14    15.48
75                1.73     9.81    12.31    13.10    13.55

Top half: global entropy per band. Bottom half: entropy of signal (as opposed to what is taken as a separate noise component, based on an additive Gaussian noise model) in band. B3 spline à trous wavelet transform used.
Table 1 presents results for Figs. 13–15, obtained using the redundant B3 spline à trous wavelet transform with five wavelet levels (plus the sixth smooth scale, which is not considered for the noise modelling) and a stationary Gaussian noise model. We note the following from Table 1: the entropy values at each resolution level (band) are ranked in terms of the aggregate granularity. The signal entropies, at each resolution level, are similarly ranked. This result is not entirely unexpected, insofar as there are clear visual differences between the different granularities.

5.2. Multiple discriminant analysis

To facilitate assessment of discrimination between the classes in feature space, we used multiple discriminant analysis (also termed discriminant factor analysis, or the multi-class version of Fisher's linear discriminant analysis) [18,19]. Discriminating axes are determined in this space, in such a way that optimal separation of the predefined groups is attained. As a linear discrimination method, we expect that such problems as training set size and generalization will be less pronounced than for a nonlinear method. Consider the set of feature vectors, i ∈ I; they are characterized by a feature set, j ∈ J. A new orthogonal coordinate space is determined, such that the spread of class means in this new space is maximized, while the compactness of classes is restrained. Letting T be the total variance–covariance matrix of the n observations, and B be the between-classes covariance matrix,
we seek eigenvectors of the matrix product $T^{-1}B$ associated with non-increasing eigenvalues. It can be shown that multiple discriminant analysis is equivalent to a principal components analysis of the centred vectors, i.e., the group means, in the $T^{-1}$ or Mahalanobis metric. Having the transformed feature vectors, i.e., their projection into the discriminant factor space, allows straightforward nearest-mean assignment of vectors to the closest among the five groups used. In discriminant factor space, the (unweighted) Euclidean distance is used.
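A compact sketch of this procedure, solving the eigenproblem of $T^{-1}B$ in its generalized symmetric form, is given below; it is illustrative (numpy/scipy) and assumes T is nonsingular.

```python
import numpy as np
from scipy.linalg import eigh

def discriminant_axes(X, y):
    """Multiple discriminant analysis: eigenvectors of T^{-1} B, found
    via the generalized symmetric eigenproblem B v = lambda T v, with
    eigenvalues taken in non-increasing order."""
    mean = X.mean(axis=0)
    Xc = X - mean
    T = Xc.T @ Xc / len(X)                      # total variance-covariance matrix
    T += 1e-9 * np.eye(T.shape[0])              # tiny ridge for numerical safety
    B = np.zeros_like(T)                        # between-classes covariance
    for c in np.unique(y):
        d = X[y == c].mean(axis=0) - mean
        B += (np.sum(y == c) / len(X)) * np.outer(d, d)
    vals, vecs = eigh(B, T)                     # generalized eigenproblem
    order = np.argsort(vals)[::-1]
    return vecs[:, order], mean

def nearest_mean_classify(X_train, y_train, X_test, n_axes=2):
    """Project into the discriminant factor space, then assign each test
    vector to the class with the closest projected mean (unweighted
    Euclidean distance, as in the text)."""
    axes, mean = discriminant_axes(X_train, y_train)
    P = axes[:, :n_axes]
    Z_tr, Z_te = (X_train - mean) @ P, (X_test - mean) @ P
    classes = np.unique(y_train)
    means = np.stack([Z_tr[y_train == c].mean(axis=0) for c in classes])
    dist2 = ((Z_te[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(dist2, axis=1)]
```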
6. Results

Fig. 21 shows one example of the projected images in the principal discriminant plane, based on use of five classes. Classes 1, 2 and 5 are well separated. In the multidimensional space (inherent dimensionality 5: the minimum of the number of features less 1 due to centring, the number of observations, and the number of groups), the distinction between groups 3 and 4 is clearer. This figure used 250 images, characterized in a 10-dimensional feature space. The first five features are the multiscale entropy ones described above. The second five features are multiscale entropies based on a Canny edge transformed image (see Section 5.1 above). For the five successive classes of image, among the 250 images used, the numbers of images misclassified were: 0, 0, 3, 7, 0.
Among the many experiments carried out, a few important points are as follows:

1. We studied many different feature sets. Most were multiscale-based. In particular, we initially used multiscale entropy features of Canny edge transformed images to provide information on larger pieces of aggregate.
2. We examined denoising of the image data prior to analysis. This was not found to be of benefit.
3. Rather than the nearest-mean classifier used in the multiple discriminant analysis, we also investigated nearest neighbour discrimination approaches (1-NN, 3-NN), and a multilayer perceptron. However, the linear approach was found to give very good results, and its operation was easily controlled and managed.
4. We worked on rebinned 454 × 340 images, obtained from the originally sized 2272 × 1704 images. We exhaustively tested the processing used on a battery of 600 training set images, used additionally on the original 2272 × 1704 images. Days of compute time on a Sun Microsystems cluster gave us an approximate gain of 2% in misclassification. We saw no justification for continuing to process the original images in this way.
5. We may note that as long as the misclassification in a class is shown to be less than 50%, then a majority class assignment based on a number of images is likely to increase the success rate, as the sketch below illustrates.
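Point 5 admits a simple binomial check: if each image is classified correctly with probability p > 0.5, independently, then a majority vote over n images is correct with the probability given by the upper binomial tail, as the following sketch computes.

```python
from math import comb

def majority_success(p, n):
    """Probability that a strict majority of n independent per-image
    classifications, each correct with probability p, is correct."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_success(0.8, 9))   # ~0.980: a 9-image vote lifts 80% to 98%
```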
Fig. 21. Principal discriminant factor plane, with projections of images from groups 1–5.
Fig. 22. Principal discriminant factor plane, with projections of images from groups 1–6. The training set made use of 50 images from each group. The test set made use of 10 images from each group. A 10-dimensional feature space was used.
In order to study the required training and test set cardinalities, assessments were carried out; an example is displayed graphically in Fig. 22, and a sketch of the protocol is given at the end of this section. Five randomly chosen test sets (1, 2, . . ., with correspondingly randomly chosen training sets, A, B, . . .) yielded numbers of images misclassified of 1, 0, 1, 1, 2, out of 30 in each case. Therefore, we had an average 97% success rate on the test sets. In the case of the smaller training sets and bigger test sets exemplified in Fig. 22, an average 95% success rate was obtained on the test sets.

Specifications of mixes lie within standard bands. We have now started to investigate the necessary specification band coverage. Not surprisingly, our initial results show that good coverage is needed. In other words, the feature space of training set exemplars needs good coverage relative to the test set cases.
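The repeated random-split protocol just described can be sketched as follows, reusing nearest_mean_classify from the earlier sketch; the set sizes and seeding are illustrative.

```python
import numpy as np

def random_split_success(X, y, n_trials=5, n_test_per_class=6, seed=0):
    """Draw a random test set per class, train on the remainder, and
    record the success rate per trial (e.g. 6 test images from each of
    5 classes gives test sets of 30, as in the text)."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(n_trials):
        test_idx = np.concatenate([
            rng.choice(np.where(y == c)[0], size=n_test_per_class, replace=False)
            for c in np.unique(y)])
        mask = np.zeros(len(y), dtype=bool)
        mask[test_idx] = True
        pred = nearest_mean_classify(X[~mask], y[~mask], X[mask])
        rates.append(float(np.mean(pred == y[mask])))
    return rates
```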
7. Conclusions

We have both evaluated and validated a new image content characterization approach on images of crushed aggregate containing different object sizes and morphologies. Our algorithms are computationally inexpensive and scalable. In our experimental evaluation, we have found these algorithms to be robust and stable. We have found that segmentation, in order to determine individual aggregate pieces or regions of aggregate material, works well. We then looked at the overall description of the information content of an image. We propose this latter approach as the more suitable for the image-based grading task. From the point of view of operational use in the difficult conditions of the construction industry, we note that the algorithmic robustness and stability leave just one area where care and attention will be required in practice: viz., the operational camera and lighting environment.
Acknowledgement This work was supported by EPSRC (Engineering and Physical Sciences Research Council, UK) project GR/R38835/01, ‘‘Virtual sieving: machine vision methods for grading crushed aggregates’’, 2001–2004.
References

[1] H.M. Treasury, The Green Budget, 1998.
[2] P. Hobeda, Krossningens betydelse på stenkvalitet, särskilt med avseende på kornform, Literaturstudie No. 050001, Statens väg- och Trafikinstitut, VTI, Linköping, Sweden, 1988.
[3] V. Reinhardt, Schlagfester Splitt 8–11 mm oder stabiler Asphaltbeton 0–12 mm, Bitumen, Teere, Asphalte, Peche und verwandte Stoffe (1969), No. 11.
[4] M. Unser, Texture classification and segmentation using wavelet frames, IEEE Transactions on Image Processing 4 (1995) 1549–1560.
[5] N. Fatemi-Ghomi, Performance Measures for Wavelet-Based Segmentation Algorithms, Ph.D. thesis, Surrey University, 1997.
[6] P. Scheunders, S. Livens, G. Van de Wouwer, P. Vautrot, D. Van Dyck, Wavelet-based texture analysis, International Journal of Computer Science and Information Management 1 (2) (1998) 22–34.
[7] T. Randen, J.H. Husoy, Filtering for texture classification: a comparative study, IEEE Transactions on Pattern Analysis and Machine Intelligence 21 (1999) 289–290.
[8] H. Späth, Cluster Dissection and Analysis, Ellis Horwood, 1985.
[9] Multi Resolutions, MR—Multiresolution Analysis Environment, 2002, http://www.multiresolution.com.
[10] F. Murtagh, M. Hernández-Pajares, The Kohonen self-organizing map method: an assessment, Journal of Classification 12 (1995) 165–190.
[11] G. McLachlan, T. Krishnan, The EM Algorithm and Extensions, Wiley, 1997.
[12] W.X. Wang, O. Stephansson, Comparison between sieving and image analysis of aggregates, in: Measurement of Blast Fragmentation, Balkema, Rotterdam, 1996, pp. 141–149.
[13] W. Wang, Image analysis of aggregates, Computers and Geosciences 25 (1999) 71–81.
[14] S.G. Mallat, A theory of multiresolution signal decomposition: the wavelet representation, IEEE Transactions on Pattern Analysis and Machine Intelligence 11 (1989) 674–693.
[15] S. Livens, P. Scheunders, G. Van de Wouwer, D. Van Dyck, H. Smets, J. Winkelmans, W. Bogaerts, A texture analysis approach to corrosion image classification, Microscopy, Microanalysis, Microstructures 7 (1996) 1–10.
[16] F. Murtagh, X. Qiao, P. Walsh, P.A.M. Basheer, D. Crookes, A. Long, J.L. Starck, A machine vision approach to the grading of crushed aggregate, Machine Vision and Applications, 2005, in press.
[17] J. Canny, A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (1986) 679–698.
[18] F. Murtagh, A. Heck, Multivariate Data Analysis, Kluwer, 1987.
[19] J.M. Romeder, Méthodes et Programmes d'Analyse Discriminante, Dunod, 1973.

Fionn Murtagh holds BA and BAI degrees in mathematics and engineering science, and an MSc in computer science, all from Trinity College Dublin, Ireland; a PhD in mathematical statistics from Université P. and M. Curie, Paris VI, France; and an Habilitation from Université L. Pasteur, Strasbourg, France. He is professor of computer science in the University of London at Royal Holloway, and head of the Department of Computer Science. He is editor-in-chief of The Computer Journal, a member of the Royal Irish Academy, and a fellow of the British Computer Society.
Xiaoyu Qiao obtained BEng and Master's degrees from the Department of Acoustics and Electronic Engineering, Harbin Engineering University, China, in 1984 and 1987, respectively. After graduation, she worked at HEU as a lecturer and later as an associate professor. In 1993, she started her PhD study in the co-education programme between HEU and the Institute of Information Processing, National Research Council of Italy (I.E.I.-CNR), and was awarded a PhD degree in 1997. Since then, she has worked as a DSP engineer at Audio Processing Technologies (APT) Ltd. and, for several years, as a research fellow in the School of Computer Science, Queen's University Belfast, UK. Recently, she joined GE Inspection Technologies as a DSP engineer. Her main working areas are image analysis and processing, machine vision, digital signal processing, neural networks and intelligent systems.

Paul Walsh worked as a research associate on this Virtual Sieve project. He is now with the East Down Institute of Further and Higher Education.

P.A. Muhammed Basheer, BSc(Eng), MSc(Eng), PhD, CEng, FICE, MIE(I), MCS, is professor of structural materials at Queen's University of Belfast, Northern Ireland, and has extensive research experience in developing new test techniques for measuring transport properties of concrete, assessing the effect of new materials and methods for improving the durability of concrete, predicting the service life of reinforced concrete structures by non-destructive testing, and the use of industrial by-products and waste materials in concrete. In these areas he has published more than 150 publications. He is a member of the UK Concrete Society, RILEM and the American Concrete Institute. He is a member of Technical Committees ACI 211, 235, 236 and 365 and RILEM TC-ITZ, TC-TMC and TC-NEC. Within the European Union, he was a member of COST Actions 509 and 521 and is a member of COST Actions 530 and 534, all dealing with materials used in civil engineering. He is a member of the editorial boards of the international journals Construction and Building Materials and Cement and Concrete Composites.

Danny Crookes is professor of computer engineering at Queen's University Belfast. For many years he has been developing software tools for high performance image processing, with projects sponsored by EPSRC, Nortel, The British Library, British Telecom and others. His current research interests include the use of programmable hardware devices (especially FPGAs) for a custom computing approach to image and video processing. He has applied expertise in language design, optimising compilers and software generators, and software tools for hardware description and architecture generation, to the goal of developing high level software tools to enable rapid development of FPGA-based real time video processing systems. Professor Crookes has presented tutorials
on Parallel Image Processing at several international conferences, and has some 140 scientific papers in journals and international conferences.

Adrian E. Long, PhD, FICE, FIStructE, FREng, FIEI, FACI, CEng, is professor of civil engineering, and has over 35 years of research experience, mostly related to structural concrete, both reinforced and prestressed, and concrete technology. He
has very active contacts with the UK construction industry and is involved in many other national and international organisations, such as TRL, CIRIA, EPSRC, ACI, ICE, BRE and RILEM, and has published well over 200 technical papers. Over the past 20 years, he has had a direct involvement with a large number of research projects related to concrete durability, and he has supervised 10 PhDs in this area. In recent years he has become increasingly aware of the need to develop improved sensors which could be installed in concrete structures to allow condition monitoring to take place; in this regard he has an EPSRC grant on fibre optic sensors.