Nuclear Engineering and Design 241 (2011) 1945–1951
The “thousand words” problem: Summarizing multi-dimensional data

David M. Scott
DuPont Company, Experimental Station, Wilmington, DE 19880-0304, USA
Article history: Received 8 May 2010; received in revised form 17 August 2010; accepted 18 August 2010

Abstract

An inherent difficulty in the application of multi-dimensional sensing to process monitoring and control is the extraction and interpretation of useful information. Ultimately the measured data must be collapsed into a relatively small number of values that capture the salient characteristics of the process. Although multiple dimensions are frequently necessary to isolate a particular physical attribute (such as the distribution of a particular chemical species in a reactor), plant control systems are not equipped to use such data directly. The production of a multi-dimensional data set (often displayed as an image) is not the final step of the measurement process, because information must still be extracted from the raw data. In the metaphor of one picture being equal to a thousand words, the problem becomes one of paraphrasing a lengthy description of the image with one or two well-chosen words. Various approaches to solving this problem are discussed using examples from the fields of particle characterization, image processing, and process tomography.

© 2010 Elsevier B.V. All rights reserved.

Nomenclature

c	measure of the fraction of one phase
d	particle size
d0	median particle size
Es	specific energy
i	the imaginary number
M	sample mass
N	number of elements in an array
n	parameter used in the Rosin–Rammler function
P	absorbed ultrasonic power
qLN(d)	log-normal distribution
qN(d)	normal distribution
qRR(d)	Rosin–Rammler function
s	spread parameter used in the log-normal function
S0	specific surface area
T	a given amount of time
t	time
x, y, z	the three Cartesian spatial dimensions
xn	nth percentile of the cumulative particle distribution
{Xk}	a set of complex Fourier coefficients
{xn}	a set of data
ε0, ε1	permittivities of the two phases
εa	average permittivity
μ	size parameter used in the log-normal function
ρ	particle density
σ	standard deviation of particle size
1. Statement of the problem

1.1. The metaphor

“A picture is worth a thousand words.” This familiar metaphor is certainly a true statement, and it usually points to the value of using a diagram or image to convey complex ideas or detailed information. Common examples in technical literature include graphs of data and schematics of electrical circuits or process vessel interconnections. Images are indeed an efficient method of conveying information to human beings, because the eye and visual cortex are able to receive and process gigabits per second (Lantz, 1997). However, the converse of the metaphor’s usual meaning is also true: it may well take a thousand words to describe a given picture adequately, particularly if there are subtle but important aspects to be recorded. For example, the first two images in Fig. 1 are very well known, but try to imagine describing them in sufficient detail so that a blind painter could make a reasonable copy. The fact is that images contain a lot of information, and tomographic images are no exception. There is a subtle difference between the last two images in Fig. 1, which are taken from an electrical capacitance tomography (ECT) study of a milling process (Scott and Gutsche, 1999). This difference can barely be seen by eye if the images are displayed in pseudo-color and compared side by side. A description of each individual image, in sufficient detail to reveal the difference between them, would be quite lengthy. Of
course, images are routinely represented as pixel color values in a digital format (instead of words), and the amount of data is directly related to the size and dimensionality of the image space.

Fig. 1. The “thousand words” challenge: describe the following images. Top: Mona Lisa (Leonardo da Vinci, ca. 1519) and The Artist’s Mother (James Whistler, 1871). Bottom: Cross-sectional images of grinding media in a vertical stirred mill operating at two rotational speeds.

1.2. Multi-dimensional data sets

The simplest type of information is a scalar value (i.e., a single number), which is a data set of dimension zero. The current reading from a thermocouple (for instance) can be represented with a scalar. If, however, the thermal history is to be recorded, then a series of temperature values must be stored in a 1-dimensional (1D) array. In this example the index of the array (which points to individual values) represents time. The array can also be considered as a function of a single integer variable. Another example of a 1D data set is the particle size distribution (PSD), which is commonly used to characterize the size of particulate matter encountered in industrial operations (Allen, 1997). Leaving aside the complications of particle shape, in principle the PSD is a histogram showing the classification of particles according to their size. If the particles shown in Fig. 2a are sorted according to their diameters and counted, the result is a number-based particle size distribution (Fig. 2b). More typically the volume fraction associated with each size class is measured, yielding a volume-based distribution (Fig. 2c). In this example the PSD is a 1D array where the index of the array represents particle size and the value is the volume fraction at that size. If the PSD or any other 1D quantity is recorded as a function of time, a two-dimensional (2D) data set is obtained; in this case the two dimensions are “particle size” and “time”, each represented by a separate index. Pictures and cross-sectional tomographic images
are also 2D objects, where the two dimensions are typically orthogonal directions such as “x” and “y”. Therefore any of the images in Fig. 1 can be represented as a rectangular 2D array of gray levels or color values. Such a digital representation requires the image to be discretized into X columns and Y rows. The resolution of the stored image depends on how many pixels are available, which is given by the product XY. Process tomography images typically have hundreds to a few thousand pixels; by contrast, commercial digital cameras have many millions of pixels. Three-dimensional tomography extends imaging into the third spatial dimension, producing “volume images” that are represented as a 3D array of (x,y,z). X-ray CT accomplishes this feat with a cone beam of radiation; electrical tomography uses multiple planes of electrodes with a 3D reconstruction algorithm. Time evolution studies that use 3D imaging will produce 4D data sets, with dimensions (x,y,z,t). The size of high-resolution 4D images can exceed several gigabytes.

Fig. 2. The concept of particle size distribution (PSD): (a) a sample of particles, (b) the number-based PSD, (c) the volume-based PSD of the same sample.
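To make the dimensionality bookkeeping above concrete, the short NumPy sketch below (with illustrative array sizes only, not values taken from this paper) builds representative 0D to 3D data sets and reports their memory footprint; the final lines estimate how quickly a 4D time series of high-resolution volumes reaches gigabyte scale.

```python
import numpy as np

temperature = np.float32(293.15)                  # 0D: a single scalar reading
history  = np.zeros(3600, dtype=np.float32)       # 1D: one hour of 1 Hz readings
tomogram = np.zeros((32, 32), dtype=np.float32)   # 2D: X columns x Y rows
volume   = np.zeros((128, 128, 128), dtype=np.float32)   # 3D: (x, y, z)

for name, arr in [("history", history), ("tomogram", tomogram), ("volume", volume)]:
    print(f"{name}: {arr.ndim}D, shape {arr.shape}, {arr.nbytes / 1e6:.2f} MB")

# A 4D sequence of high-resolution volumes quickly exceeds several gigabytes
shape_4d = (100, 512, 512, 512)                   # (t, x, y, z), assumed resolution
print("4D sequence:", np.prod(shape_4d) * 4 / 1e9, "GB (float32)")
```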
1.3. Reality of plant control systems

At some point the question must be asked, “What shall we do with all of this data?” This question becomes especially critical when one proposes to use multi-dimensional sensors to monitor and control industrial processes at manufacturing plants. The goal is to improve process efficiency and product quality by adjusting pressure, flow rate, temperature, agitator speed, etc. according to on-line measurement data. However, for any given production line only a limited number of these parameters can be adjusted automatically, because actuators are expensive to install and maintain.
Process operators can usually make additional changes manually, but there are only a limited number of knobs that can be turned. Thus there is a significant disparity in scale between the amount of data generated and the number of actions that can be taken. Another issue is that existing distributed control systems (DCS) are not equipped to handle very large data sets. The interfaces with process sensors tend to be simple, for example traditional 4–20 mA signal loops (which transmit a single analog value encoded as current) or enhanced analog/digital interfaces that transmit a relatively small number of digital values. Typical process sensors output scalar information, not arrays, and in that context a few thousand inputs constitutes a large DCS; however, multi-dimensional data sets can easily exceed that capability by orders of magnitude.
1.4. The “thousand words” problem

Therefore higher-dimensional data sets must be processed locally in order to extract information that can be transmitted and used by plant control systems. Luckily there is a distinction between raw data and information content, a fact that is exploited by image compression algorithms. To use the “thousand words” metaphor, the problem becomes how to paraphrase a full description of the images in Fig. 1 with one or two words. Many approaches, including those described below, can solve this problem.

2. Summarizing 1D data

2.1. Statistical analysis

Statistical analysis is an obvious method for summarizing a 1D data set, and suitable descriptors may include the average, standard deviation, skewness, kurtosis, or coefficients from a linear regression analysis. The choice of analysis will naturally depend on the specific application.

2.2. Integration

In the case of particle size distributions (such as that shown in Fig. 2c), it is common practice in industry to summarize a PSD by percentile sizes, usually x10, x50, and x90. These quantities are characteristic particle sizes that are defined in terms of the cumulative (integrated) mass distribution; for instance, 10% by mass of the particles in the sample have a size smaller than x10. The x50 is equivalent by definition to the mass median size, and the quantity (x90 − x10) is a measure of the width of the distribution. These values provide only an approximate view of changes in the PSD, but they may be sufficient for many applications (e.g., the x90 is useful for monitoring the reduction of particle size in grinding operations). Another way to characterize a PSD is by its equivalent surface area, which is useful for instance in surface treatment applications. The specific surface area S0 (m^2/kg) is calculated as a function of particle density ρ and size d, and the volume-based PSD is used to obtain a weighted average over all n size classes:

S0(d) = 6/(ρd)    (1)

S = Σ_{i=1}^{n} S0(di) · PSD(di)    (2)

Here the PSD is already normalized so that the sum over all sizes is equal to 1, and we are reminded by the summation symbol that the PSD is an array rather than a continuous function. This calculation takes into account only the envelope of the particles and not the surface-connected porosity. The information extracted from the PSD by Eq. (2) is a scalar, which can easily be used by the plant control system.
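As an illustration of how the percentile sizes and Eqs. (1) and (2) collapse a PSD to a few scalars, the following Python sketch uses hypothetical size classes, volume fractions, and particle density; the helper name percentile_size is ours, not a standard library routine.

```python
import numpy as np

# Hypothetical volume-based PSD: size-class mid-points (micrometres) and fractions
sizes = np.array([1, 2, 5, 10, 20, 50, 100, 200], dtype=float)
psd   = np.array([0.02, 0.05, 0.13, 0.25, 0.30, 0.15, 0.07, 0.03])
psd  /= psd.sum()                       # normalize so the fractions sum to 1

def percentile_size(sizes, psd, q):
    """Size below which q percent of the (volume ~ mass) distribution lies."""
    cum = np.cumsum(psd) * 100.0        # cumulative distribution in percent
    return np.interp(q, cum, sizes)

x10, x50, x90 = (percentile_size(sizes, psd, q) for q in (10, 50, 90))

rho = 2650.0                            # assumed particle density, kg/m^3
d_m = sizes * 1e-6                      # micrometres to metres
S0  = 6.0 / (rho * d_m)                 # Eq. (1): specific surface area per size class
S   = np.sum(S0 * psd)                  # Eq. (2): PSD-weighted average, m^2/kg

print(f"x10={x10:.1f} um, x50={x50:.1f} um, x90={x90:.1f} um, S={S:.1f} m^2/kg")
```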
2.3. Functional forms

If the exact shape of the PSD is important, it is often possible to approximate the distribution by a particular functional form, in which case only the parameters of the function need to be communicated. Typical functional forms are the normal distribution qN(d), the log-normal distribution qLN(d), and the Rosin–Rammler function qRR(d), which is more commonly known as the Weibull distribution. The well-known normal distribution (3) is used extensively in statistical analysis, and its shape is completely specified by two parameters, the median d0 and the standard deviation σ:

qN(d) = exp[−(d − d0)^2/2σ^2] / (σ√(2π))    (3)

In real applications, it is often the case that the particle size is not normally distributed but instead skews towards larger particles (Allen, 1997). In many industrial processes the PSD follows the log-normal distribution (4) shown in Fig. 3; this distribution can also be described by two parameters, μ and s:

qLN(d) = exp[−(ln(d) − μ)^2/2s^2] / (sd√(2π))    (4)

These parameters are calculated from the PSD by setting μ = ln(x50) and

s = ln(x84/x16)    (5)

Fig. 3. The log-normal distribution, plotted on a logarithmic axis.

Milling and crushing operations tend to produce particles with a PSD described by the Rosin–Rammler distribution, which is defined by the parameters n and d0:

qRR(d) = d^(n−1) exp[−(d/d0)^n]    (6)

By choosing a suitable functional form and adjusting its parameters to fit the observed PSD, it is possible to summarize the entire data set (comprising 50–100 size bins) with only two or three values.
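A minimal sketch of this parameter-based summary, assuming a log-normal PSD and applying Eq. (5) as written above to a synthetic volume-based distribution (the size grid and distribution values are invented for illustration):

```python
import numpy as np

# Synthetic log-normal-shaped PSD standing in for measured data (sizes in um)
sizes = np.logspace(-1, 3, 60)                       # 0.1 um to 1000 um
psd = np.exp(-(np.log(sizes) - np.log(12.0))**2 / (2 * 0.6**2))
psd /= psd.sum()                                     # normalize the volume fractions

cum = np.cumsum(psd) * 100.0                         # cumulative distribution, percent
x16, x50, x84 = (np.interp(q, cum, sizes) for q in (16, 50, 84))

mu = np.log(x50)                                     # size parameter, mu = ln(x50)
s  = np.log(x84 / x16)                               # spread parameter per Eq. (5)
print(f"mu = {mu:.2f}, s = {s:.2f}")                 # two numbers summarize the PSD
```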
2.4. Resolution into basis functions

In some cases a single functional form is inadequate to describe the observed PSD. Fig. 4 shows the PSDs obtained by the author during an experiment where an aqueous suspension of particles was sonicated (i.e., subjected to intense ultrasonic energy) for varying amounts of time. Generally experiments of this type shift the PSD towards smaller particle sizes, where the shift in size is related to the specific energy Es:

Es = PT/M    (7)

where P is the absorbed ultrasonic power in W, T is the sonication time in s, and M is the sample mass in kg. It is evident that the PSDs shown in Fig. 4 are not described by the basic log-normal function shown in Fig. 3, and furthermore the size distribution as a whole does not appear to shift significantly. A closer inspection of the individual PSDs in Fig. 4 suggests that each one contains two distinct size populations. It is possible to separate these two distributions by considering the PSD as a sum of two separate log-normal components and fitting the resulting function to the data. This process is illustrated in Fig. 5a, which shows the measured PSD, the resulting fit, and the two component distributions which are labeled A and B (denoted as blue and red lines, respectively). The fitted function is nearly indistinguishable from the PSD data, and the fit parameters provide the median sizes of both components directly. The results of this analysis are shown in Fig. 5b.

Fig. 4. PSD data obtained from a sonication experiment where particle size was monitored as a function of the total ultrasonic energy.

Fig. 5. Analysis of the data in Fig. 4: (a) one PSD can be resolved into two component fractions; (b) the size of the particles in both components is shown as a function of total input energy.
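A sketch of this kind of two-component resolution, using SciPy's curve_fit on a synthetic bimodal PSD; the component parameters, noise level, and weights are invented for illustration and do not correspond to the data of Fig. 4.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal(d, mu, s):
    # Log-normal density of Eq. (4), evaluated on the size grid d
    return np.exp(-(np.log(d) - mu)**2 / (2 * s**2)) / (s * d * np.sqrt(2 * np.pi))

def two_component(d, a1, mu1, s1, a2, mu2, s2):
    # Weighted sum of two log-normal components (A and B)
    return a1 * lognormal(d, mu1, s1) + a2 * lognormal(d, mu2, s2)

# Synthetic bimodal PSD standing in for one of the sonication measurements
d = np.logspace(-1, 3, 120)                              # sizes, 0.1 um to 1000 um
data = two_component(d, 0.4, np.log(0.8), 0.5, 0.6, np.log(60.0), 0.7)
data += 0.01 * data.max() * np.random.default_rng(0).normal(size=d.size)

p0 = (0.5, np.log(1.0), 0.4, 0.5, np.log(50.0), 0.6)     # initial guesses
popt, _ = curve_fit(two_component, d, data, p0=p0)

median_A, median_B = np.exp(popt[1]), np.exp(popt[4])    # medians of components A and B
print(f"component A median ~ {median_A:.2f} um, component B median ~ {median_B:.1f} um")
```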
The analysis described above is essentially a deconvolution of the observed PSD into two component distributions. It is clear from Fig. 5b that the particles in component B are reduced in size by the ultrasound, as expected, and it is this information (rather than the full PSD) which is useful. An explanation of the relative stability of component A would require too lengthy a digression to be included here. The point of this example is that 1D data sets can sometimes be described by combining two or more simple functions, which might be considered as basis functions in a mathematical expansion. This approach is related to the method of principal component analysis, which seeks to determine the minimum set of orthonormal vectors (eigenvectors, or principal components) that can be used to represent measurement data. Once these basis functions are determined, the information contained in a set of data can be conveyed as the coefficients of the principal components.

2.5. Fourier analysis

The concept of expressing 1D data in terms of coefficients in an expansion naturally leads to a consideration of Fourier analysis, where an arbitrary but periodic function can be expanded in terms of trigonometric functions. For data expressed as a sequence of N numbers {xn; n = 0, 1, . . ., N − 1}, the discrete Fourier transform yields a set of complex coefficients {Xk; k = 0, 1, . . ., N − 1}:

Xk = Σ_{n=0}^{N−1} xn exp(−i2πkn/N)    (8)

This transformation is often implemented using the Fast Fourier Transform (FFT) algorithm. The resulting coefficients {Xk} can be used to reconstruct the original data set via the inverse transformation:

xn = (1/N) Σ_{k=0}^{N−1} Xk exp(i2πkn/N)    (9)

The summation over k in Eq. (9) is a summation over frequencies, and the {Xk} are frequency components with units of cycles per time or cycles per length, depending on whether the {xn} are measured as a function of time or spatial distance. Fig. 6 shows an example of Fourier transformation of an arbitrary signal (shown as a black line); the magnitudes of the complex coefficients {Xk} are depicted as triangles. For real data {xn}, only the first (N/2) values of the complex {Xk} are unique; the second half of {Xk} is a mirror image of the first half. The inverse Fourier transformation can reproduce the original data set exactly if all of the coefficients are used. However, measurement data is generally limited in bandwidth, in which case it may be approximated by ignoring the higher frequency components. For the data shown in Fig. 6, only the first 8 coefficients appear to be significant; by truncating the rest of the 64 coefficients and performing the inverse FFT, one obtains an excellent approximation (shown as red dots) of the original data. This result shows that most of the information contained in the original data set of 128 numbers is captured by the 16 numbers (counting real and imaginary components separately) of the truncated set of Fourier coefficients. In certain applications, such as ultrasonic spectroscopy (Scott et al., 1995), the relevant information is the magnitude of the FFT at a few selected frequencies. It should be emphasized, however, that the goal is not merely to compress the data but to extract and summarize useful information from it.

Fig. 6. Fourier transformation of an arbitrary signal. The original signal is shown as a line; the magnitude of the Fourier components is depicted with triangles, and the bandwidth-limited reconstruction of the signal is depicted with circles.
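The truncation idea can be sketched in a few lines of NumPy. The test signal below is synthetic, but the bookkeeping (128 samples, first 8 coefficients plus their mirrored conjugate partners retained) follows the discussion above.

```python
import numpy as np

N = 128
n = np.arange(N)
# Synthetic band-limited signal standing in for the measurement in Fig. 6
x = 2.0 * np.sin(2 * np.pi * 3 * n / N) + 0.8 * np.cos(2 * np.pi * 6 * n / N)

X = np.fft.fft(x)                      # Eq. (8): complex Fourier coefficients

K = 8                                  # keep only the first K coefficients...
X_trunc = np.zeros_like(X)
X_trunc[:K] = X[:K]
X_trunc[-(K - 1):] = X[-(K - 1):]      # ...plus their mirrored (conjugate) partners

x_approx = np.fft.ifft(X_trunc).real   # Eq. (9): bandwidth-limited reconstruction
print("max reconstruction error:", float(np.max(np.abs(x - x_approx))))
```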
3. Summarizing 2D data

Many of the approaches described above can be extended to data of higher dimensionality. Other approaches make use of the spatial relationships and correlation between neighboring pixels in 2D and 3D images.

3.1. Interpretation

Returning to the tomographic images shown in Fig. 1, note that color images are arbitrary in the sense that the look-up table (LUT), which renders the 2D array’s 8-bit or 16-bit gray levels as various pseudo colors, could be changed without changing the information content. Colors enhance the appearance of certain features that would be difficult to see in a gray-scale image, but they also influence how humans perceive the underlying information. The operations described below avoid subjectivity because they act on the 2D arrays themselves, not on the colors generated by the LUT. Tomographic images are (or should be) quantitative, so that a given pixel value corresponds to a physical observable (i.e., a measurable quantity). In the case of ECT, the tomographic system is usually calibrated against low and high levels of permittivity, and (to the extent the system is linear) the tomographic measurement records the local permittivity as a function of (x,y) position. The displayed image is actually a representation where the pixel gray values are a linear function of the measured permittivity. For the sake of simplicity, in the following discussion no distinction will be made between the measured tomographic data and its representation as an image.

3.2. Averages

Many early applications of tomography were focused on two-phase flow (Plaskowski et al., 1995), and it is common for tomography systems to report a “hold-up value” or component fraction. This value is calculated from an average of the permittivity (or resistivity in the case of ERT) across the whole image. The average permittivity εa is compared with the permittivities of the two phases, ε0 and ε1, to determine the component fraction c:

c = (ε0 − εa)/(ε0 − ε1)    (10)
The resulting scalar c, which has a value between 0 and 1, is a measure of the fraction of phase 1 in the cross-sectional image. Since this number can fluctuate quickly in flowing conditions, the value of c is usually averaged over a period of time. For some applications, the single value c is the only information that would be needed by the DCS. For other applications, a scalar does not contain enough information. The ECT images shown in Fig. 1 are taken from a study of a media mill, which contains grinding beads (Scott and Gutsche, 1999). The images are calibrated to show the local bead fraction as a function of position over the axial cross section of the mill. The purpose of the study was to determine the spatial distribution of beads within the mill, so valuable information would be lost by averaging over all pixels in the image. Due to the axial symmetry of the mill and the resulting tomographic image, it is natural to average over
the polar angle. Fig. 7 illustrates that the radial distribution of beads in the mill can be obtained by averaging over the polar angle in the cross-sectional images. The tomogram on the left yields the radial profile shown as a dashed line, and the one on the right yields the solid line.

Fig. 7. Averaging over polar angle to obtain radial profiles of bead fraction. The tomogram on the left, which yields the profile depicted as a dashed line, is more uniform than the tomogram on the right (profile depicted with the solid line).
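A sketch of both operations on a made-up calibrated permittivity frame: Eq. (10) gives the component fraction, and binning pixels by radius performs the average over polar angle. The grid size, calibration values, and permittivity map below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical calibrated ECT frame: permittivity on a 32 x 32 grid
ny = nx = 32
y, x = np.mgrid[0:ny, 0:nx]
r = np.hypot(x - (nx - 1) / 2, y - (ny - 1) / 2)        # radius of each pixel (pixels)

eps0, eps1 = 1.0, 3.0                                   # calibration: low / high phase
eps = eps0 + (eps1 - eps0) * np.exp(-(r / 12.0)**2)     # invented permittivity map

inside = r <= nx / 2                                    # circular sensing domain
eps_a = eps[inside].mean()                              # average permittivity
c = (eps0 - eps_a) / (eps0 - eps1)                      # Eq. (10): component fraction

# Radial profile: average over polar angle by binning pixels on radius
nbins = 16
edges = np.linspace(0, nx / 2, nbins + 1)
which = np.digitize(r[inside], edges) - 1
profile = np.array([eps[inside][which == b].mean() for b in range(nbins)])

print(f"component fraction c = {c:.2f}")
print("radial permittivity profile:", np.round(profile, 2))
```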
3.3. Fourier analysis

Fourier transformations can also be made in 2D and higher dimensions, just as in the case of 1D data (see Fig. 8). In most cases both dimensions are spatial, and the transform yields an inverse space where the distance from the origin signifies the spatial frequency (with units of 1/length). Therefore the fine detail in the image (which corresponds to high spatial frequencies) is contained in the pixels near the edge of the transform. The number of cycles per unit length observed in the image depends upon the polar angle of observation, and this corresponds to the polar angle in the transform. The result of the Fourier transformation is another image of the same size as the original. As in the case of 1D transforms, one can reduce the amount of data by limiting the bandwidth or selecting specific frequency components; masking certain regions in the transform enhances or suppresses fine detail in the image obtained by inverse transformation. Since electrical tomography images tend to have low spatial resolution, the important features can typically be captured with a relatively small number of frequency components.
Fig. 8. 2D Fourier transformation of an image.
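A brief NumPy sketch of the masking idea for a 2D transform, using an invented 64 × 64 image; keeping only a small disc of low spatial frequencies reproduces a smoothed version of the image while discarding most of the coefficients.

```python
import numpy as np

# Hypothetical low-resolution tomogram (64 x 64): a smooth blob plus noise
n = 64
y, x = np.mgrid[0:n, 0:n]
img = np.exp(-((x - 40)**2 + (y - 24)**2) / 200.0)
img += 0.05 * np.random.default_rng(2).normal(size=img.shape)

F = np.fft.fftshift(np.fft.fft2(img))            # 2D transform, zero frequency centered

# Keep only low spatial frequencies (a disc of radius 8 around the origin)
ky, kx = np.mgrid[0:n, 0:n]
mask = np.hypot(kx - n // 2, ky - n // 2) <= 8
F_lp = F * mask

img_lp = np.fft.ifft2(np.fft.ifftshift(F_lp)).real   # smoothed reconstruction
print("retained coefficients:", int(mask.sum()), "of", n * n)
```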
Fig. 9. Domain size: The image on the left has a domain 7 cm in diameter, whereas the image on the right has a domain 4 cm in diameter.
3.4. Domain size distribution analysis

Image features represent regions within the process that differ in some way from the surrounding medium. These features may represent a separate phase such as a large gas bubble, or a domain where the concentration of a chemical component differs from the bulk. In many applications the shapes of these domains are poorly defined, and only the size is important. An example is the formation of caverns near the impeller during mixing of complex fluids (Nienow and Elson, 1988). The presence and size of the cavern is of more concern than its exact shape. In such cases the “domain size” and “domain size distribution” (similar to a PSD) can be computed from 2D images using standard image processing tools (see Fig. 9). The domain size (e.g., the size of a cavern) can be reported directly as a scalar quantity. If the resolution of the imaging technique is high (e.g., NMRI or X-ray CT), many domains of various sizes might be observed in a single image; in that case a domain size distribution (DSD) can be calculated and transmitted to the controller as a vector quantity.
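The sketch below illustrates one way such a DSD might be computed with standard tools, here scipy.ndimage connected-component labeling applied to an invented segmented image; the pixel size is an assumption.

```python
import numpy as np
from scipy import ndimage

# Hypothetical segmented cross-section: True where a domain (e.g., gas) is present
rng = np.random.default_rng(3)
img = ndimage.gaussian_filter(rng.normal(size=(128, 128)), sigma=6) > 0.05

labels, ndomains = ndimage.label(img)               # connected-component labeling
areas = ndimage.sum(np.ones_like(img, dtype=float), labels,
                    index=range(1, ndomains + 1))   # pixel count of each domain

pixel_size = 0.1                                    # cm per pixel (assumed)
eq_diam = 2 * np.sqrt(areas * pixel_size**2 / np.pi)   # equivalent circular diameter, cm

hist, edges = np.histogram(eq_diam, bins=10)        # the domain size distribution (DSD)
print("number of domains:", ndomains)
print("DSD counts per size bin:", hist)
```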
3.5. Feature extraction

In some measurement applications, the presence or absence of a particular feature in the image may be important. One example is the detection and sizing of the vortex in large stirred tanks (Bolton et al., 2006). A vortex that draws too much air into the process can interfere with circulation pumps and other equipment. Image processing techniques exist (Zhao et al., 1995) that can identify and locate specific features of interest (such as the painting on the wall in Fig. 10). The presence or absence of specific features in the data set can easily be transmitted to the control system.

Fig. 10. Example of the presence or absence of a specific feature in an image.
3.6. State estimation

An elegant way to extract information is to model the process and to determine the process state necessary to produce the observed data, which can be projection data; this approach seeks to summarize all of the information in a few process variables (see Fig. 11). Note that a tomographic reconstruction is not strictly necessary (although it may still be useful). This technique has been used to determine the phase interfaces in sedimentation processes monitored with EIT (Tossavainen et al., 2007).

Fig. 11. Two examples of state estimation.

4. Summarizing 3D and 4D data

4.1. Domain size distribution

A 2D image recorded at several points in time is a 3D data set, as is a single volume image from a full 3D reconstruction generated by a tomography system. Likewise, a 3D image that evolves over time is a 4D data set. As pointed out in the introduction, these data sets are generally quite large. One method of summarizing such data is to consider the domains embedded within such images. For example, in the case of a fluidized bed the size of large bubbles forming and rising through the bed could be measured over a short period of time to give a domain size distribution. Such a DSD vector would be a characteristic of the distributor plate and the operating details of the fluidized bed. The fluctuation of the DSD over time would give insight into the dynamics of the fluidized bed.

4.2. Cross correlation

Cross correlation of pairs of images is well known for measurement of flow fields (Plaskowski et al., 1995). The two images can be separated by space (e.g., two ERT sensing planes) or by time, as in the case of particle image velocimetry (see Fig. 12). In PIV, tracer particles are injected into the flow and imaged at two moments in time. These images (Fig. 12a and b) define the beginning and end points of particle motion as shown in Fig. 12c, and the resulting vector field (Fig. 12d) shows the local direction and speed of the surrounding fluid flow.

Fig. 12. The concept of particle image velocimetry (PIV): (a) first image, (b) second image, (c) particle tracks, (d) resulting vector field showing the local fluid flow.
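A minimal sketch of the cross-correlation step itself, applied to two synthetic frames where the second is a known shift of the first; in a full PIV analysis this correlation would be repeated for many small interrogation windows to build up the vector field of Fig. 12d.

```python
import numpy as np

# Two hypothetical frames: the second is the first shifted by a known displacement
rng = np.random.default_rng(4)
frame1 = rng.random((64, 64))
dy, dx = 3, -5
frame2 = np.roll(np.roll(frame1, dy, axis=0), dx, axis=1)

# Cross-correlation via FFT (circular); the peak location gives the displacement
F1 = np.fft.fft2(frame1 - frame1.mean())
F2 = np.fft.fft2(frame2 - frame2.mean())
xcorr = np.fft.ifft2(F1.conj() * F2).real

peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
shift = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
print("estimated displacement (dy, dx):", tuple(shift))   # expected (3, -5)
```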
4.3. Isosurfaces
The interface between distinct 3D domains in a multiphase system forms a 2D isosurface. Fig. 13 shows examples from a mixing study by Mann et al. (2001), who injected brine into a large tank filled with tap water and observed the resulting plume over time as the tank was stirred by a Rushton turbine. In the figure the brine plume is depicted as a solid body, the surface of which is an isosurface separating the brine (even as it becomes more diluted) from the tap water. Another example would be the interface between
hydrophobic and hydrophilic phases in an emulsion; in this case the isosurface would be highly discontinuous. The total area of these surfaces indicates the homogeneity of the system, and a large surface area promotes mass transfer in reactors. In general the contacting area is a function of time, so a 4D data set can be reduced to a simple function of time (1D).

Fig. 13. Isosurfaces formed by the mixing of brine and tap water, shown at 1, 2, and 3 s after injection of the brine. From Mann et al. (2001), copyright © 2001 SPIE Society of Photo-Optical Instrumentation Engineers. Reprinted by permission.
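A sketch of how such an interfacial area might be computed from a volume image, using scikit-image's marching cubes on an invented 3D concentration field; the voxel spacing and field are assumptions, not data from the studies cited above.

```python
import numpy as np
from skimage import measure

# Hypothetical 3D concentration field (e.g., brine fraction) on a 64^3 grid
n = 64
z, y, x = np.mgrid[0:n, 0:n, 0:n]
conc = np.exp(-((x - 32)**2 + (y - 32)**2 + (z - 20)**2) / 300.0)

# Isosurface at 50% concentration; its total area summarizes the phase contact
verts, faces, _, _ = measure.marching_cubes(conc, level=0.5, spacing=(0.5, 0.5, 0.5))
area = measure.mesh_surface_area(verts, faces)        # in the units set by `spacing`
print(f"interfacial area ~ {area:.1f} cm^2 (assuming 0.5 cm voxels)")
```

Repeating this calculation for each frame of a 4D data set reduces the whole sequence to a single curve of contact area versus time.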
4.4. Other methods

Averaging, feature extraction, state estimation, Fourier analysis, and the other 2D methods described above can also be applied to 3D and higher-dimensional data sets. In general, any of the techniques described above will reduce the number of dimensions needed to capture the information embedded in the original data.
5. Conclusions

Existing plant control systems cannot handle the sheer volume of data generated by multi-dimensional sensors, so additional data processing is necessary after the raw data set is generated. In all of these considerations it is important to note that there is a difference between data and useful information. The goal is therefore to extract only the minimum information necessary to control the industrial process; the approach will of course depend on the specific application. Fortunately, the “thousand words problem” can be addressed by a number of methods, many of which are described above.

References

Allen, T., 1997. Particle Size Measurement, vol. 1, 5th ed. Chapman & Hall, London, pp. 44–70.
Bolton, G., Bennett, M., Wang, M., Qiu, C., Wright, M., Rhodes, D., 2006. On the development of an electrical tomographic system for monitoring the performance of a heavy metal precipitation step during nuclear fuel reprocessing. AIChE 2006 Spring Meeting, Orlando.
Lantz, E., 1997. Future directions in visual displays. Comput. Graph. 31, 38–44.
Mann, R., Stanley, S., Vlaev, D., Wabo, E., Primrose, K., 2001. Augmented-reality visualization of fluid mixing in stirred chemical reactors using electrical resistance tomography. J. Electron. Imaging 10, 620–629.
Nienow, A.W., Elson, T.P., 1988. Aspects of mixing in rheologically complex fluids. Chem. Eng. Res. Des. 66a, 5–15.
Plaskowski, A., Beck, M.S., Thorn, R., Dyakowski, T., 1995. Imaging Industrial Flows. IOP Publishing, Bristol.
Scott, D.M., Boxman, A., Jochen, C.E., 1995. Ultrasonic measurement of sub-micron particles. Part. Part. Syst. Charact. 12, 269–273.
Scott, D.M., Gutsche, O.W., 1999. ECT studies of bead fluidization in vertical mills. In: Proceedings of the 1st World Congress on Industrial Process Tomography, Buxton, UK, April 14–17.
Tossavainen, O.P., Vauhkonen, M., Kolehmainen, V., 2007. A three-dimensional shape estimation approach for tracking of phase interfaces in sedimentation processes using electrical impedance tomography. Meas. Sci. Technol. 18, 1413–1424.
Zhao, D., Chen, Y., Li, B., Zhao, C., Wang, X., Zhang, X., Chen, J., Tu, C., 1995. A 3-D robust range feature extraction tool for industrial automation applications. In: Proceedings 1995 International Conference on Image Processing, vol. 3, Washington, DC, pp. 45–48.