
Real-Time Imaging 7, 145–158 (2001) doi:10.1006/rtim.1999.0203, available online at http://www.idealibrary.com on

A Vision-Based Particle Tracking Velocimetry

Particle Image Velocimetry (PIV) is a non-intrusive optical technique to measure the velocity of flows. It provides the simultaneous visualization of the streamline pattern in unsteady flows and the quantification of the velocity field over the image plane. To reveal the flow motion, the flow is seeded with small scattering particles. The instantaneous fluid velocities are evaluated by recording the images of the tracers, suspended in the fluid and traversing a light sheet. A PIV system consists of seeding particles, an illumination unit, an image acquisition system, and a computer for image processing. For industrial applications, a classical PIV system is not suitable because of its cost, its size, and its need for specialised users and work areas. Moreover, classical PIV systems are unable to work in real time because of the huge amount of data involved and the expensive algorithms adopted. In this paper, the study and implementation of a new PIV system is described and compared against classical PIV solutions. The solution proposed is capable of working in real time and is a Continuous PIV (CPIV) system: with respect to classical PIV, it is composed of a continuous laser light source and a CCD camera. A new image-processing algorithm, specifically developed for velocity estimation and recognition of correct traces, is presented. It is based on the gray level distribution along the particle trace image; it indicates those particles moving with an out-of-plane velocity vector component and provides the measure with a limited error. © 2001 Academic Press

A. Baldassarre¹, M. De Lucia¹, P. Nesi² and F. Rossi¹

¹Department of Energy Engineering (DEF), University of Florence, Italy. E-mail: [email protected]
²Department of Systems and Informatics (DSI), University of Florence, Italy. E-mail: [email protected], [email protected]

Introduction

The algorithms for motion estimation are relevant for several image-based systems. The moving objects/entities in the observed scene project their 3D velocity onto the image plane (i.e. the apparent velocity). This is usually called the "velocity field" or "motion field" [1–3]. Most of the techniques adopted for evaluating velocity fields consider changes in image brightness features/areas. For this reason, the fields obtained are normally called "optical flow" or "image flow" fields. Generally, the optical flow field differs from the velocity field, which is a pure geometric concept. The latter is equal to the optical flow field only under specific physical conditions of illumination, reflectance, moving object texture, etc. [2]. The precision of the optical flow with respect to the true velocity field also depends on the estimation technique adopted [4]. On the other hand, the estimation of an approximated velocity field can be enough for many applications. Motion estimation is very useful in important areas: motion-compensated image sequence processing, dynamic scene analysis, and measuring and tracking systems [3,5–7].



Most of the above-mentioned applications need to consume optical flow fields in real time. Motion estimation must be performed in real time especially for image sequence coding and robot vision. The implementation of a parallel architecture and/or a specific VLSI chip is a technique to achieve this goal. In the literature, there exist some architectures for motion analysis in real time, e.g. [8–11].

In the literature, several main approaches for motion estimation on a regular grid of the image can be identified: spatio-temporal filtering-based, block-matching, pel-recursive, gradient-based, corner tracking, and line tracking approaches [3,11]. Only some of these approaches can be profitably used for motion estimation of fluid flow, and thus for building vision-based velocimeters instead of using intrusive pressure-based tools. Typically, images of fluid flow do not present enough pattern information to estimate the optical flow with the above-mentioned approaches. For this reason, seeding particles are typically inserted into the fluid flow. Therefore, the problem is reduced to estimating the velocity of small particles in a sequence of images, or even in single images with long or multiple exposures. For this purpose, corner tracking, point tracking and line tracking produce more suitable solutions than gradient-based algorithms. These approaches can be used in different cases, as will be discussed in the following.

Specific applications for velocity estimation of fluid flow by using seeding particles and image processing are called Particle Image Velocimetry (PIV). During the last 10 years, PIV approaches have evolved rapidly. They are typically based on non-intrusive optical measurement methods to capture flow velocity fields [12–16]. PIV provides the simultaneous visualisation of the 2D streamline pattern in unsteady flows, as well as the quantification of the velocity field over an entire plane. With the improvement of image acquisition (optics and CCD technology), light sources and processing systems, several kinds of PIV systems have been studied and proposed [12,17]. Different techniques have been defined to study fluids in different regimes. In the last few years, a particular PIV system has been widely used to study flows in many different conditions; hereafter, this is called the classical PIV system. Here, a pulsed laser is used as the light source, small seeding particles are used as tracers, and a CCD camera or a photographic film is used as the recording system [18].

A common light source is a Q-switched, frequency-doubled Nd:YAG laser, capable of producing successive pulses with a very short time lag between them. The highly collimated laser beam is expanded by cylindrical lenses, thus producing an intense light sheet down to a thickness of tens of microns. In a PIV system, the illumination pulses need to be synchronized with the image exposure. This is usually achieved by programmable pulse delay generators that can either stand alone or be hosted by the computer performing the image acquisition and analysis.

In classical PIV measurements, it is possible to record one or two successive images of the small particle tracers. With non-standard cameras, especially developed for PIV, it is possible to acquire two images, each of which is synchronized with one laser pulse. The duration of a Q-switched Nd:YAG pulse is only a few nanoseconds (typically 10 ns or less). Thus, the images obtained show one illuminated spot, due to the scattering particle, in two different positions in the two successive images. Knowing the time lag between the two light pulses and measuring the distance between the two positions allows the evaluation of the velocity component in the illuminated plane.

The detection of the sign of the particle velocity vectors is a classical problem of PIV analysis [19,20]. In fact, most of the proposed solutions are only capable of evaluating the magnitude and direction line of the velocity vector, without the sign. To solve this problem, many different solutions have been proposed for classical PIVs. One of the most widely used methods is based on the introduction of a known and fixed shift between the two images recorded in correspondence with the two light pulses. This method is typically implemented by using rotating mirrors [21], and was studied in several different conditions by many authors [22,23]. This solution can be used with the previous systems, by using the camera to grab the images coming from the mirror instead of directly from the measurement area. The disadvantages of this technique are due to the complexity of the system alignment and configuration. Other techniques that have been defined and implemented to solve the sign ambiguity problem [24] are electro-optical image shifting [25], multi-color systems [26,27], etc. Electro-optical image shifting methods employ differently polarized light as an illumination source, by way of appropriate birefringent crystals; this simplifies the identification of the starting and ending points in the same image. Multi-color systems work by using two or more light sources at different frequencies. Images coming from the same particle can be ordered in time by analysing their color and knowing the light frequency sequence.

In a PIV system, the processing methods used to extract the velocity vectors are very important. These can be characterized on the basis of the seeding density and of the image acquisition method (single frame or multiple frames). Typically, on the basis of the seeding particle density, three different, partially overlapping categories can be identified: (i) low density, (ii) medium density, and (iii) high density/smokes.

(i) Low density. In this case, techniques based on the direct analysis of the images are the most widely used. With these methods, a scanning analysis of the single frame is performed in order to find the single particle properties. Then, these properties are correlated between two subsequent particle images [28] (see Fig. 1). Therefore, the problem is reduced to matching the couples of points produced by the two laser pulses, e.g. by the particle tracking algorithms discussed above. In these algorithms, it is usually required to remove erroneous velocity vectors, which can be due to insufficient particle pairs or to the presence of so-called out-of-plane particles or motions (when the particles come into or go out of the image plane). This is due to the fact that a velocity component orthogonal to the light sheet can bring the seeding particles away from the illuminated area during one of the two flashes. This problem cannot be avoided, since the seeding particles arrive without synchronization with respect to the laser pulses. Thus, having only the intensity of the particle image, it is impossible to detect this kind of particle motion. This means that the particles in the image that are due to out-of-plane motion do not have a counterpart, and thus the velocity algorithm can erroneously couple them, producing wrong velocity vectors.
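For illustration only, a minimal nearest-neighbour pairing sketch in C++ is given below; it is not the algorithm of the trackers cited in the text, and all type, function and parameter names (Particle, matchPairs, maxDisp) are assumptions. Pairs whose distance exceeds the expected displacement are discarded, which is a crude way of limiting the erroneous couplings discussed above.

#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Particle { double x, y; };                 // particle centroid (pixels)
struct Pair     { std::size_t a, b; double d; };  // matched indices and distance

// For every particle of frame A, pick the closest particle of frame B within
// a maximum expected displacement; particles without a plausible partner
// (e.g. out-of-plane losses) are simply left unmatched.
std::vector<Pair> matchPairs(const std::vector<Particle>& A,
                             const std::vector<Particle>& B,
                             double maxDisp)
{
    std::vector<Pair> pairs;
    for (std::size_t i = 0; i < A.size(); ++i) {
        double best = std::numeric_limits<double>::max();
        std::size_t bestJ = B.size();
        for (std::size_t j = 0; j < B.size(); ++j) {
            const double d = std::hypot(B[j].x - A[i].x, B[j].y - A[i].y);
            if (d < best) { best = d; bestJ = j; }
        }
        if (bestJ < B.size() && best <= maxDisp)
            pairs.push_back({i, bestJ, best});    // displacement / time lag gives the velocity
    }
    return pairs;
}

Real trackers add consistency checks precisely because, as noted above, an out-of-plane particle can otherwise be wrongly paired with an unrelated neighbour.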

Figure 1. Two images related to the same particle taken at different time instants.


A variation of the light scattered from a single particle can be due to a different size of the particle, as Mie scattering theory demonstrates. This effect can, on the one hand, create confusion and, on the other hand, be used for matching the same particle in successive images.

(ii) Medium density. In this case, statistical methods are typically used, such as those based on the correlation of the image gray level matrices [28,29]. The images are scanned with rectangular windows, and in each window a uniform motion flow is assumed. Thus, the algorithms work on a local area of the flow image to produce a measure of the average local displacement of the particles in the region. When one image showing the same scattering particles in two different positions is acquired, autocorrelation is used. By studying the autocorrelation matrix, it is possible to locate two symmetric peaks corresponding to the shift vector in the single window. The symmetry does not allow the sign of the particle shift to be recognized. In multiple frame acquisition systems, the cross-correlation is determined for two subsequent images. In this case, the same local areas located in two successive images are taken (supposing the motion of particles is limited). Only one peak is present in the cross-correlation matrix, and it represents the whole shift (modulus, sign and direction). Correlation-based processing is a quite robust technique, but it is time consuming and requires the analysis of the image window by window, thus reducing the spatial resolution of the system.

(iii) High density. Here, two images are typically considered. Thus, algorithms for optical flow estimation can be profitably applied [1,4,9,11], as well as interference techniques [28]. These methods are based on optical systems producing interference patterns from a negative image of two equally spaced particle images. The separation between two adjacent fringes is proportional to the displacement of the single particle. With this technique, it is possible to estimate the velocity with high precision, but the need for a particular optical arrangement for visualising and analysing the interference pattern does not allow real-time analysis.

In addition to these techniques, there are the differential ones, which are useful for flow motion analysis with very high density seeding or smokes. Differential techniques are based on several constraints whose unknown quantities are the perspective projection components of the motion field on the image plane, for example the Optical Flow Constraint (OFC), whose coefficients are the partial derivatives of the image plane brightness [2,4].
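For the medium-density case described above, a direct (non-FFT) cross-correlation of a single interrogation window can be sketched as follows; the data layout, window size, search range and function name are illustrative assumptions and do not describe any specific commercial implementation.

#include <vector>

struct Shift { int dx, dy; double score; };

// Direct (unnormalized, non-FFT) cross-correlation of one N x N interrogation
// window of frame 1, located at (x0, y0), against frame 2 over a +/- range
// pixel search area; images are row-major gray levels of size width x height.
// The window is assumed to lie entirely inside frame 1.
Shift crossCorrelate(const std::vector<unsigned char>& f1,
                     const std::vector<unsigned char>& f2,
                     int width, int height, int x0, int y0, int N, int range)
{
    Shift best{0, 0, -1.0};
    for (int sy = -range; sy <= range; ++sy)
        for (int sx = -range; sx <= range; ++sx) {
            double sum = 0.0;
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    const int x1 = x0 + x, y1 = y0 + y;      // frame-1 pixel
                    const int x2 = x1 + sx, y2 = y1 + sy;    // shifted frame-2 pixel
                    if (x2 < 0 || y2 < 0 || x2 >= width || y2 >= height) continue;
                    sum += double(f1[y1 * width + x1]) * double(f2[y2 * width + x2]);
                }
            if (sum > best.score) best = {sx, sy, sum};      // keep the correlation peak
        }
    return best;   // best.dx, best.dy: average particle shift of the window (pixels)
}

The position of the single peak directly gives modulus, sign and direction of the window displacement, which is why cross-correlation of two frames, unlike autocorrelation of a double-exposed frame, has no sign ambiguity.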


The work described in this paper is mainly focused on medium and low density seeding. The goal of the described work has been to develop a low-cost, real-time, easy-to-use, safe and flexible solution for the extensive measurement of flow velocity in centrifugal compressors of Nuovo Pignone GE, where such measurements are currently performed by using Laser-Doppler Velocimetry (LDV).

The Proposed Solution

In recent years, the need for compact and low cost systems to study the gas flow velocity inside turbomachinery has grown. The most important features that these measuring systems have to provide are: high data rate, computerized acquisition procedures, two-dimensional analysis, real-time processing, small size, user-friendliness, and low cost. These characteristics have been considered in order to build a new system to study gas flow velocity inside turbomachinery. It is based on an image processing approach and has proved to be cheaper than other image- and non-image-based solutions, thanks to the adoption of a PC-based, low-cost laser solution. Other advantages of the employed method, regarding measurements at low velocity, are: (i) the availability of simple and rapid image processing algorithms; (ii) the possibility of detecting out-of-plane conditions for the particle seeding, and thus of excluding these particles from the velocity estimation; (iii) the capability of determining the velocity vector sign.

In order to satisfy the above points, the classical approach for implementing a PIV system has been revised. In fact, in a classical system there is a Q-switched Nd:YAG laser as a light source, with a high energy pulse, a non-standard CCD camera and a PC for grabbing and processing images. The main limitation of adopting these systems for industrial applications (for flow studies at velocities in the range of 100 m/s) is in the use of high-power pulsed lasers, which need specialized and qualified staff working in controlled areas and typically have large sizes and high costs.

The solution proposed is composed of (i) a cw low power laser, (ii) a standard high resolution b/w CCD camera with a high-speed electronic shutter, and (iii) a frame grabber for the acquisition and post-processing of the PIV images. A specific new algorithm for the processing of the images obtained with this acquisition system has been developed. The algorithm is very simple and computationally cheaper than the above-mentioned cross- or auto-correlation techniques. The method is based on the fact that it is possible to use a continuous wave laser as a light source instead of a pulsed laser. With this approach, the moving particles produce a continuous line on the image plane, and the trace length depends on the exposure time of the CCD camera. We have called this solution "Continuous PIV", CPIV. The solution is illustrated in Figure 2.

It should be noted that the earliest PIV systems were based on the analysis of images produced with continuous light sources (streak photography technique). In the past, these techniques did not achieve great success because of their poor accuracy in measurements based on an estimation of the length of the traces on the image [17]. Presently, image acquisition systems and image analysis algorithms have been greatly improved, as is shown in this paper. The main problems faced by this kind of solution in the past were due to the lack of (i) sufficiently fast TV cameras, (ii) specific image processing algorithms for detecting the sign (i.e. the sense of the particle velocity), and (iii) real-time image processing solutions for measuring the velocity.

The main limit of this approach continues to be the maximum value of the velocity that can be measured. It is lower than that achievable with other, more expensive systems, such as those described above. In fact, as the velocity increases, two facts limit the system performance. Firstly, at higher velocity the exposure time has to be reduced in order to keep the particle inside the measurement area: the maximum length of single traces depends on the exposure time, which is inversely proportional to the maximum measurable velocity. Moreover, at higher velocity, a lower brightness of the particle trace on the image is obtained. In fact, a particle moving at higher velocity remains in front of each CCD sensor element (i.e. essentially the pixel) for a shorter time, scattering fewer photons towards the CCD sensor. This can be balanced by using a higher power laser, bearing in mind that the adoption of very high energies is only possible with a pulsed laser.


Figure 2. (a) Experimental set-up of the CPIV solution. (b) Particle trace negative image, as recorded by a CPIV system.

With a simple CPIV system, it is not possible to determine the sign of the velocity vector. This is not due to the use of a cw light source, but to the particular image recording technique (single frame). In order to remove the directional ambiguity, two CCD cameras (with different exposure times) have to be used simultaneously to acquire the images of the tracing particles. With this solution, each particle produces a trace of a different length in each of the two grabbed images. If the exposures of the two cameras start at the same time and finish at different time instants, the corresponding traces start at the same coordinates in both images, but they stop at two different coordinates of the image plane. The position of the trace ending gives the sign of the velocity vector.

In order to analyse the continuous traces of the particles (see Fig. 3), a particular post-processing algorithm was defined and implemented. With a cw light source, it is possible to analyse the gray level distribution along the trace image, instead of studying the intensity of the single point particle image. The algorithm developed can be considered as a technique based on the direct analysis of the images (see the introduction). It works by scanning the image in order to find single pixels belonging to a trace, and then by measuring the trace length (in pixels).

Figure 3. A negative CPIV image showing some particle traces.

The most important difference between this technique and the other direct analysis techniques is that here it is not necessary to find similar particle images to reconstruct the trace, as in the case of a classical pulsed PIV. Besides this, the processing time can be reduced by scanning for traces on a grid. The algorithm developed consists of three steps: (a) the detection of traces, (b) the extraction and measurement of each trace, and (c) the classification of each trace in order to avoid erroneous measures.
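A minimal sketch of how the three steps fit together is given below, assuming a hypothetical Trace record filled by steps (a) and (b) and a validity flag set by the classification of step (c); the velocity conversion anticipates the relation given later in the "Trace extraction and measure" section.

#include <vector>

struct Trace {
    double lengthPx;   // trace length in pixels, from step (b)
    bool   valid;      // result of the classification of step (c)
};

// Convert the valid traces of one frame into velocities (m/s), using the
// relation V = L * 1e-3 / (M * t_exp) given later in the paper.
std::vector<double> toVelocities(const std::vector<Trace>& traces,
                                 double M /* pixel/mm */, double texp /* s */)
{
    std::vector<double> v;
    for (const Trace& t : traces)
        if (t.valid)                                  // reject out-of-plane traces
            v.push_back(t.lengthPx * 1e-3 / (M * texp));
    return v;
}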


Trace detection

In order to identify the traces in the image, a traditional algorithm for the exhaustive scanning of the whole image could be adopted. This solution is computationally expensive and unjustified, since the traces are typically large objects that can be detected at lower resolution. For this reason, we consider a grid over the image, generally with a different density along the X and Y axes. The grid is scanned in both directions, looking for the intersections between the grid and the traces. The grid line step is a fundamental parameter: if the grid is too dense, the elaboration process is too computationally expensive, but if the grid is too coarse, some traces may not be intercepted and can be missed. The grid line step can then be chosen knowing the range of the velocity to be measured: the particle moving at the lowest velocity has to produce a trace whose length is longer than the diagonal of the grid rectangle. If dx and dy are the grid dimensions, the following condition has to be satisfied:

vmin · t / √(dx² + dy²) ≥ Lp

where vmin is the detectable minimum velocity, t is the exposure time, and Lp is the length of the diagonal of a single pixel in the measuring plane. Moreover, if the velocity vectors are mainly in a specific direction, the grid steps along the X and Y axes can be conveniently different, thus reducing the computational complexity and processing time.

The scanning process compares the gray level of every pixel on the grid with a threshold value, Tld (threshold for line detection). Pixels having a gray level greater than Tld trigger the next step, in which the width of the trace is considered in order to reduce the number of errors: only traces having a width, W, included in a range (Wmin < W < Wmax) are considered as valid. This is performed to avoid detection errors due to acquisition noise and partially overlapped traces; both of these cases can introduce errors in the measurements. It is possible to find intersections between a grid line and a trace that are larger than the expected width. This can be due to the presence of a small angle between the trace and the grid line. In this case, the intersection is neglected, since it can easily be recognized during the scanning in the other direction. In order to simplify the algorithm and make it faster, and since the velocity is mainly parallel to the X-axis, the estimation of the width has been performed only along the X and Y axes. The changes in width due to the angle between the trace and the grid are directly accounted for in the detection parameters, Wmin and Wmax.
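A minimal sketch of the grid scan described above is given below (horizontal grid lines only; the vertical scan is symmetric). The function and parameter names are assumptions; only the thresholding against Tld and the width check against Wmin and Wmax follow the text.

#include <vector>

struct Hit { int x, y; };   // first pixel of a grid/trace intersection

// Scan the horizontal grid lines (one row every dy rows) of a row-major
// gray-level image; a run of pixels brighter than Tld whose width lies in
// (Wmin, Wmax) is accepted as a candidate trace crossing.
std::vector<Hit> scanHorizontalGridLines(const std::vector<unsigned char>& gray,
                                         int width, int height, int dy,
                                         unsigned char Tld, int Wmin, int Wmax)
{
    std::vector<Hit> hits;
    for (int y = 0; y < height; y += dy) {
        int x = 0;
        while (x < width) {
            if (gray[y * width + x] > Tld) {
                int run = 0;                  // width of the crossing along X
                while (x + run < width && gray[y * width + x + run] > Tld)
                    ++run;
                if (run > Wmin && run < Wmax) // reject noise and overlapped traces
                    hits.push_back({x, y});
                x += run;                     // resume after the crossing
            } else {
                ++x;
            }
        }
    }
    return hits;
}

Each accepted hit would then be handed to the trace extraction step described in the next subsection; intersections wider than Wmax (e.g. a trace almost parallel to the grid line) are simply skipped, as in the text.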

Trace extraction and measure

The extraction of a trace and the corresponding measurement of its length are performed during a successive processing phase. This image processing starts directly from the point at which the trace has been intercepted. The scan is performed by analysing adjacent pixels along the mean trace line, as long as the condition on the expected trace width is satisfied (thus the criterion reported above is used for deciding the start and the end of the trace). During the trace scan, the gray level is compared with a threshold value lower than the one used for the grid scan. This is done in order to correctly take into account the reduced brightness at the trace ends. Once the trace is completely analysed, the grid scan restarts from the point at which it was suspended. The extracted trace is removed from the image in order to avoid multiple detections of the same trace on different grid lines. On the basis of the extracted trace, the measure of velocity (expressed in m/s) can be performed as:

V = L · 10⁻³ / (M · texp)

where M is the magnification (pixel/mm), L is the trace length (pixel), and texp (s) is the CCD exposure time. Before making the measure, however, it is mandatory to verify whether the detected trace yields a topologically correct measure or not. This can be done on the basis of the technique reported in the next section.
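As an illustration of the above relation, a hypothetical trace of L = 143 pixels recorded with the M = 30 pixel/mm and texp = 0.240 ms used in the experiments reported later would give V = 143 · 10⁻³ / (30 · 0.240 · 10⁻³) ≈ 19.9 m/s.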

Trace classification

The algorithm is based on the analysis of the distribution of the image gray levels along the trace of the single particle. With this analysis, it is possible to discover the real position of the seeding particle with respect to the light sheet (whose spatial intensity distribution is known). The distribution is close to a Gaussian centred in the middle of the illuminated area. By analysing the distribution of the gray level along the trace, it is possible to identify those scattering particles moving with a velocity vector parallel to the illuminated plane, and those that have a component of the velocity vector orthogonal to that plane. The proposed algorithm estimates the mean gray level along a direction orthogonal to the trace, at every point belonging to the trace itself. If the distribution of the single-trace gray level is considered, the motion of the particle with respect to the light sheet can be classified. In Fig. 4, a schematization of the possible behaviours of particles traversing the light sheet (trends of the mean value of the image brightness along the trace) is illustrated. According to the possible trends, the following cases can be identified (see also Fig. 5 for a real distribution of gray level along a trace):

Case 1. The particle moves parallel to the light sheet plane during the whole exposure time. There is a sharp variation in the gray level distribution in correspondence with the exposure start and end times. The gray level is quite uniform along the whole trace, and its value mainly depends on (i) the particle dimension and (ii) the distance from the central plane of the light sheet. The correct velocity vector modulus is measured by knowing the particle trace length. The trace trend is affected by noise, as can be seen in the rest of the paper.

Case 2. (Including the case in which the trace trend is decreasing.) The particle traverses the light sheet forming a non-zero angle between its velocity vector and the light sheet plane, but it is completely included in the illuminated area during the whole exposure time. The gray level presents a sharp variation at the CCD exposure start and an increment of the image brightness until a new sharp variation occurs in correspondence with the exposure end. The increment is due to the approach towards the centre of the light sheet plane, where a higher intensity is present. When the start and end times can be detected, the particle displacement and velocity are correctly measured (e.g. as with a couple of images). In this case, it is also possible to coarsely estimate the orthogonal component of the velocity vector, because the partial derivative of the gray level distribution is proportional to it.

Case 3. The particle passes across the light sheet and is outside it at the exposure starting time; this is an out-of-plane motion. The gray level distribution is similar to a triangle. It is not possible to exactly know the opening and/or closing time; the trace length is shorter than the real displacement of the particle in the exposure time, and the velocity vector cannot be correctly measured.

Case 4. The particle passes across the light sheet and is outside it at the exposure ending time; this is an out-of-plane motion. It is not possible to know the opening and/or closing time; the trace length is shorter than the actual displacement of the particle, and the velocity vector cannot be correctly measured.

Figure 4. Schematization of a particle traversing the light sheet in different ways, and the corresponding trace mean gray-level distribution.


Case 5. The particle passes across the light sheet and is outside it at both the starting and ending times; this is an out-of-plane motion. The gray level distribution increases to a maximum value and then decreases. The distribution reported in the figure is only a schematization, since the actual distribution should be close to a deformed Gaussian, owing to the higher light intensity in the middle. Moreover, the shape of the distribution also depends on the angle between the particle direction and the light sheet. The trace is shorter than the real displacement in the chosen exposure time, and the velocity vector cannot be correctly estimated.

From the above discussion, it is clear that only traces belonging to cases 1 and 2 have to be considered as correct sources for measuring the velocity vectors in the light sheet plane. The length of traces belonging to case 2 presents an error due to the angle of the particle trajectory. For this reason, if they are too far from case 1 they have to be rejected.

In order to classify the above cases, a specific algorithm has been defined. Since one of the goals was the development of a real-time measuring system, the measuring algorithm has to be very fast, because the number of traces per image may be high. To this end, a discrete and fast solution has been defined, avoiding the study of the entire distribution of gray levels along the trace. The estimation of the gray level distribution and the subsequent trace classification are performed on the basis of the mean gray level of three segments whose length is about 10% of the total trace length (see Fig. 5).

Figure 5. Mean gray levels used by the classification index.

One of these segments is in the middle of the trace, and the other two are at the terminals (beginning and end of the trace), at a fixed distance of 5% from the extremes. Two half traces are considered: the first starts at the first pixel of the trace and ends in the middle; the second starts from the middle and ends at the last pixel of the trace. Ei is the mean of the gray level (image brightness) values of the i-th segment. The particle trace is used to estimate the following classification index:

I = (E3 − E2)² / E(2,3)² + (E2 − E1)² / E(1,2)²

where E(1,2) and E(2,3) are the mean gray-level (image brightness) values of the first and second halves of the trace, respectively. If the trace gray level presents an I index lower than a threshold value, T, it belongs to case 1 or 2 and it can be used for the velocity vector measurements. Typically, values close to 1% have been used for T. Several simulations and experimental tests have shown that these values produce the best results (above 90% of out-of-plane traces detected and only 2% of false negatives). The index has been defined as above, instead of using the simple calculation of the derivatives over the two segments, such as:

Ider = |E3 − E2| / L(2,3) + |E2 − E1| / L(1,2)

where L(2,3) and L(1,2) are the lengths of the two segments (L(2,3) = L(1,2)). This approach has some drawbacks. In fact, considering the first-order derivatives, it is possible to eliminate cases 2, 3, 4, and 5, but traces belonging to case 1 with a very low gray value are considered valid as well. In effect, traces belonging to case 1 with a low mean value of the image brightness are due to particles passing on the extreme border of the laser light. In those conditions the noise is too high with respect to the image brightness value, and this causes relevant errors in the trace length estimation. The chosen expression and algorithm allow the removal of traces having low values at the start or at the end (cases 3, 4, and 5). As previously stated, the distribution of the energy produced by the laser is similar to a Gaussian with a small variance. For these reasons, it was initially decided to normalise the derivatives by the mean value of the image brightness in each segment. Then, the lengths of the segments were removed in order to reduce the computational complexity and improve the sensitivity of the method, since they are equal for each trace (as stated above) and have a quite stable value for the traces of a same measurement (the aim of the expression is to detect traces and not to estimate their length).

The value of each normalised difference of the image brightness in I was multiplied by itself in order to increase the number of extracted traces belonging to case 2, i.e. those having a non-horizontal trend but a limited derivative. More sophisticated detection algorithms based on rules may be defined, but they would be more computationally expensive.

The great advantage of this algorithm is the possibility of detecting the out-of-plane particle motion condition. The importance of this is due to the fact that the particles traversing the light sheet plane with a velocity vector component orthogonal to that plane can give a wrong contribution to the velocity estimation. In fact, if a particle is outside the illuminated plane at the end and/or at the start of the exposure time, the trace representing its displacement is shorter than those of particles that are completely included. Thus, considering its velocity, a lower (and obviously wrong) value of the velocity vector would be obtained.
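A minimal sketch of the classification test is given below; the segment positions (5% offsets, 10% length), the index I and the 1% threshold follow the description above, while the profile representation, the minimum-length guard and all names are assumptions.

#include <cstddef>
#include <vector>

// Mean of profile[from, to).
static double meanOf(const std::vector<double>& profile,
                     std::size_t from, std::size_t to)
{
    double s = 0.0;
    for (std::size_t i = from; i < to; ++i) s += profile[i];
    return s / double(to - from);
}

// profile[i]: mean gray level across the trace width at position i along the trace.
// Returns true when the index I stays below the threshold T, i.e. the trace is
// classified as case 1 or 2 and can be used for the velocity measurement.
bool traceIsUsable(const std::vector<double>& profile, double T = 0.01)
{
    const std::size_t n = profile.size();
    if (n < 20) return false;                  // too short to place the segments

    const std::size_t seg = n / 10;            // segment length: about 10% of the trace
    const std::size_t off = n / 20;            // segments placed 5% from the extremes

    const double E1 = meanOf(profile, off, off + seg);                    // start segment
    const double E2 = meanOf(profile, n / 2 - seg / 2, n / 2 + seg / 2);  // middle segment
    const double E3 = meanOf(profile, n - off - seg, n - off);            // end segment

    const double E12 = meanOf(profile, 0, n / 2);   // mean of the first half trace
    const double E23 = meanOf(profile, n / 2, n);   // mean of the second half trace
    if (E12 <= 0.0 || E23 <= 0.0) return false;     // degenerate (dark) trace

    const double I = (E3 - E2) * (E3 - E2) / (E23 * E23)
                   + (E2 - E1) * (E2 - E1) / (E12 * E12);
    return I < T;
}

With the 1% threshold quoted above, only profiles that are flat or gently sloping (cases 1 and 2) pass the test.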

Algorithm and system implementation

The algorithm has been implemented using the C++ language. The whole software application for velocity estimation comprises three parts: (i) the user interface, for choosing the parameters used in the algorithm (grid dimensions dx and dy, maximum and minimum expected values for the trace length, threshold value for the gray level in the grid scanning, threshold value for the gray level in the trace scanning, fixed value for the trace classification index, CCD exposure time and magnification factor); (ii) the frame grabber control for image acquisition (a MATRIX Vision GmbH PCgrab-G1); (iii) the analysis and measurement algorithm, as described above. When the measurement starts, the program initializes a measurement loop; every recorded image is processed and the information of each identified and measured trace is saved in a report file. The measurement loop stops when the user wants or after a fixed number of identified and measured traces. The estimated velocity field can be visualised starting from the recording file.
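Purely as an illustration of point (i), the user-selectable parameters listed above could be grouped as follows; the field names are hypothetical and the default values are only indicative (dx, dy, M, texp and the 1% threshold come from figures quoted elsewhere in the paper, the others are guesses).

struct CpivParams {
    int dx = 20, dy = 10;            // grid steps along X and Y (pixels)
    int minTraceLen = 90;            // expected trace length range (pixels)
    int maxTraceLen = 120;
    int Wmin = 2, Wmax = 10;         // accepted trace width range (pixels)
    unsigned char Tld = 80;          // gray-level threshold for the grid scan
    unsigned char Ttrace = 40;       // lower threshold used during the trace scan
    double classifyT = 0.01;         // threshold on the classification index I
    double exposure = 0.240e-3;      // CCD exposure time (s)
    double magnification = 30.0;     // magnification factor (pixel/mm)
};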

Computational complexity

As described above, the algorithm mainly comprises two phases that can be regarded as sequentially executed: (i) the search for traces in the image, performed by inspecting the image along the ideal lines defined by the grid (this approach strongly reduces the computational complexity with respect to inspecting the whole image, which is O(DX·DY), where DX and DY are the image dimensions); (ii) the measurement of the trace length of the valid traces, on the basis of the above-reported classification, in order to avoid considering unsuitable and incorrect traces.

The computational complexity of the first phase of the algorithm mainly depends on (i) the dimensions of the image in terms of pixels (DX × DY), and (ii) the dimensions of the grid (dx, dy). The grid dimension is defined on the basis of the magnitude of the velocity to be measured, and thus on the probable length of the traces. The complexity of the searching phase is O(DX·DY/(dx·dy)). The cost of this phase is quite constant and may partially change with the number of traces, since every time a trace is detected, local processing is performed to verify the presence of a significant trace on the basis of its width. The total cost in terms of the number of basic operations performed (subtractions, comparisons, additions, etc.) is above 600 (with standard PAL images and dx = 20, dy = 10; the two steps differ since the main direction of the traces is parallel to the X-axis). This cost can be neglected with respect to the cost of the second phase, as discussed in the following.

The computational cost of the second phase depends on the number of traces present in the image, #V; on the mean value of the trace length, L; and on the trace width, W. Thus, according to the algorithm, it results in O(#V·W·L). The number of operations performed in this case is quite high: #V is typically close to 100, while W and L are on the order of 10 and 100, respectively. Thus, the number of operations is typically on the order of 100,000. This demonstrates that the second phase is computationally dominant with respect to the first phase. From the above analysis, it follows that the asymptotic computational complexity is O(#V·W·L).


Experimental Results

In order to validate the above-described algorithm and solutions, a set of experiments was performed in a free wind channel (i.e. without obstacles). Some testing conditions were assumed, i.e. that the velocity was constant in time and uniform in the measuring area. This measuring area had previously been used for the wind channel calibration. Thus, it has been possible to use the information of the calibration curve to establish the exact flow velocity in that area (called the "real" velocity of the flow), and then to estimate the velocity with the proposed system and the corresponding measuring error. The flow velocity used during the experimental tests was varied in the range from 10 m/s to 40 m/s. As a first step, a brief description of the wind channel, of the calibration method used for its characterisation, and of the other items used in the proposed CPIV system is reported.

At the entrance of the wind channel, a convergent nozzle is placed. It guides the flow entering the channel, thus avoiding problems of separation of the fluid flow. The flow then passes through a honeycomb and enters a horizontal square section of the channel (140 × 140 × 900 mm), where the measurements are performed. Three of the four channel walls are made of optically worked glass with an antireflective coating. The laser beam passes through the upper horizontal wall, so that the laser light sheet intercepts the channel in the direction of its axis and along its mean line. The light scattered from the seeding particles is collected through the front vertical glass. The fourth wall (the back one) is made of aluminum and has two small holes: one of these was used for measuring the inside static pressure, and the other was used for a Pitot tube (for obtaining reference measures). A centrifugal fan for sucking up the airflow is placed inside the channel; the fan is located at one side of the channel and is driven by a 15 kW electric motor. This was connected to a mod. FR-A 240 Mitsubishi inverter, controlling the power frequency in the range 0–400 Hz with 1/100 Hz precision. In this way, it has been possible to control the flow velocity inside the channel with sufficient accuracy.

For the channel calibration, the static and dynamic pressures were measured with a model K-FB Asea Brown Boveri probe. The signal from the probe was read by an acquisition system with a 20-channel multiplexer (57 readings/s with a precision of 5 1/2 digits). The signal was analysed with the software developed.

Figure 6 shows the calibration curve obtained with this instrumentation. Vadim and Padim are defined as follows: if p0 is the total pressure at the channel inlet, p1 is the static pressure inside the channel, v1 is the flow velocity inside the channel, and ρ0 is the air density, two adimensional variables are defined:

Padim = p1 / p0,        vadim = v1 / √(p0 / ρ0)
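As a rough consistency check (assuming a standard air density of ρ0 ≈ 1.2 kg/m³, a value not stated in the paper), the first test session of Table 1 gives v1 = vadim · √(p0/ρ0) = 0.038 · √(101600/1.2) ≈ 11.1 m/s, which matches the reported Vrif.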

Padim is the adimensional pressure ratio between the room pressure and the pressure at the measurement point inside the channel. During a PIV test measurement, it is possible to ascertain the real velocity of the flow inside the channel (called Vrif in the following) just by measuring the static pressure at the channel wall and the room pressure, and then by using the calibration curve plotted in Fig. 6.

The CPIV system was composed of an image recording system, a cw laser as a light source, and a producer of seeding particles, as briefly described in the following. The scattering particle images were acquired by a standard low cost CCD camera with the following specifications: image dimensions 756 × 580 pixels, BW standard composite video output signal, exposure time of 1/2,364,583 s, image acquisition triggered by an external signal, auto gain and white balance. A Sigma photographic lens (f 90 mm) was used. The composite video output signal is collected by the frame grabber. The light sheet was generated by using a NEC cw Argon laser, having 150 mW power output, 800 μm beam diameter and 3 mrad divergence. The laser beam is first

Figure 6. Wind channel calibration curve.

deflected by a mirror and then passed through a cylindrical beam expander.

The most important seeding types used in PIV and LDV systems are smokes and polystyrene particles. The latter have some worthy characteristics, such as monodispersion, the simultaneous availability of several particle diameters, and well-known scattering and mechanical properties. On the other hand, the polystyrene particles sold commercially are very expensive and thus unsuitable for low cost industrial PIV or LDV applications. For this reason, we have developed a system to produce polystyrene particles, thus reducing the experimental costs in PIV and LDV applications.

The injection of the seeding inside the measurement area is a very critical procedure, because the quality and repeatability of a PIV measurement depend on it. The seeding particles must not significantly influence the flow field, and the tracer distribution has to be uniform in the channel. There are several methods for injecting the seeding inside the test volume, depending on the flow under study and on the seeding itself. In the experimental conditions described in the present work, the polystyrene particles are created in a water solution. Thus, they have to be heated in order to vaporize the water content before their insertion into the channel. This is necessary because the water drops do not have a fixed size and several polystyrene particles could remain inside the drops. The temperature used to dry the seeding has to be limited to avoid damaging the particles; therefore, a convergent/divergent nozzle was used, inside which compressed air was blown (at 1.5 bar); the air was heated to 70 °C. Inside the nozzle, the accelerated air sucked the seeding up from a tin. The distance of the nozzle from the channel entrance has to be large enough to vaporize all the seeding water, and at the same time it has to be short enough that all the seeding particles enter the channel with a uniform spatial distribution.

In each experiment, before recording the CPIV images, the system has been calibrated for determining the magnification factor (pixel/mm). This operation has been performed during the experiments to validate the results. For the calibration, a glass target with a very fine, equally spaced (1 mm) grid has been used. The position of this glass has to be adjustable so that it can be placed in the light sheet; this is achieved with two micrometric screws.

When the test experiments were made, the temperature and static pressure in the measurement volume were measured, in order to know the real velocity inside the channel, Vrif.


These parameters are used to evaluate the adimensional parameters needed to extract the velocity from the wind channel characteristic curve [30]. For each experiment, Vrif has been compared with the velocity measured by the CPIV system, and the relative error (with sign) was then calculated as:

E% = 100 · (Vrif − Vpiv) / Vrif
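For example, with the values of the first test session in Table 1, E% = 100 · (11.1 − 10.60) / 11.1 ≈ 4.5%, in agreement with the 4.52% reported there (the small difference is due to rounding of the tabulated velocities).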

Six measurement sessions were then performed, with increasing velocity. The velocity range was between 10 m/s and 35 m/s. For each test session, several frames were recorded and processed, so that it was possible to collect data corresponding to 100 traces distributed over about 23–25 image frames. The velocity has been estimated as the average of the values of about 100 traces, collected from successive images. The acquisition of 100 good traces has been obtained in about 2 s, processing 25 images. The number of good traces per image has been close to 2. This obviously depends on the depth of the laser light sheet and on the density of the seeding particles in the flow. The performance can be considered real-time, since the system is capable of estimating about two good traces every 0.04 s, selecting them from about 50 inadequate traces. The number of good traces can be increased by increasing the width of the laser light sheet, which unfortunately increases its power and cost.

In Table 1, Figure 7 and Figure 8, the test conditions and the results of the CPIV system measurements are reported: Vrif is the flow velocity obtained from the wind channel calibration curve; Vpiv is the same velocity measured by the CPIV system; texp is the CCD camera exposure time; M is the magnification factor; the standard deviation and the percentage error (E% as defined earlier) of the measured Vpiv are also given.

Observing Figure 7 and Figure 8, it is possible to see that, with the proposed approach and in the velocity range considered, the error is lower than 5%. Error data represented by different symbols were recorded with different exposure times (in order to avoid trace images that are too long with respect to the recorded area). The sign of the error is always positive, which can be due to the fact that the technique used tends to underestimate the velocity. Even if Vrif itself can have a systematic bias error, the possible causes of a bias error in the proposed CPIV system have been studied. One error source can be the seeding, which cannot exactly follow the flow. This is difficult to prove, but it is common to all measurement techniques based on the use of particle tracers (such as LDV and classical PIV).


Table 1. CPIV experimental conditions and results

Test     P0       P1       Padim·100  Vadim  Vrif    Vpiv    S.D.    texp   M           E%
session  (Pa)     (Pa)                       (m/s)   (m/s)   (m/s)   (ms)   (pixel/mm)  (%)
1        101600   101477   0.121      0.038  11.1    10.60   0.07    0.384  30          4.52
2        101700   101492   0.204      0.051  14.8    14.18   0.08    0.384  30          4.18
3        101600   101238   0.356      0.071  20.8    19.89   0.13    0.240  30          4.39
4        101700   101203   0.486      0.087  25.3    24.27   0.16    0.240  30          4.08
5        101700   101042   0.647      0.102  29.8    28.57   0.21    0.167  30          4.13
6        101500   100639   0.849      0.177  34.1    33.01   0.19    0.167  30          3.20

In the above experiments, the velocity could not be changed further without altering the general testing equipment, which would produce results not comparable with the above. With a classical PIV system, the accuracy depends on several factors, such as the background noise, the out-of-plane motion components, and the errors due to the cross-correlation techniques used for image processing. It is impossible to have an a priori estimation of the error of a standard PIV system without testing it in the measuring conditions. Besides this, it has to be said that in commercial PIV systems the image data are also post-processed, and the results proposed come from a statistical analysis of several PIV image couples. In some measurements made using standard PIV systems in a convergent-divergent nozzle, with a flow velocity of about 200–400 m/s, the RMS value on the raw data was typically 30% or more. The need to base the estimation on a large number of data makes classical PIV unsuitable for real-time estimation.

Figure 7. Measured and reference velocity.

For our experiments, commercial PIV systems (such as those proposed by TSI and Dantec, based on point matching, see Figure 1) present errors close to 10%, since they also consider the points corresponding to out-of-plane traces in the measuring analysis. Apart from these problems, both systems (the proposed system and those based on point matching) theoretically present the same errors, due to the uncertainty in finding the centers of the points (for the point matching algorithms) or the terminals of the line (for the proposed CPIV).

The algorithm can contribute to the production of errors, since the particle trace intensity is less bright at the ends. Using a simple binarization algorithm with a threshold value, an error of one pixel or more in the localization of the trace end is produced. In this way, the estimation algorithm tends to give an underestimation of the trace length. The number of pixels corresponding to the absolute error in the trace length determination is independent of the trace length. If we consider an absolute error of 2 pixels in the localization of the trace ends, and a trace of expected length between 120 and 90 pixels, there is a relative error of between 2.1% and 3.3%. This fact partially justifies the systematic error made in the test measurements.

Figure 8. Percentage relative error. Data represented with different symbols were recorded with different exposure times.


The error can be partially compensated for by using statistical considerations. Besides this, the percentage error seems to decrease as the velocity increases. This is due to the fact that the absolute error in the trace length measurement is nearly always the same, while the traces differ in length and are recorded with different exposure times.

In the previous section, it was stated that the new algorithm developed can indicate those seeding particles moving with an out-of-plane velocity component. This is done by analysing the mean gray level distribution of the trace image. In Figure 9 and Figure 10, the mean gray level distributions of three particles are reported: as can be seen, one particle is moving with an out-of-plane velocity component (see Figure 9), while the other two are moving with the velocity vector parallel to the light sheet. The two latter distributions have different mean values, because one particle lies in a plane with a higher brightness of the laser light than the other (the distribution of the laser light along the direction perpendicular to the image plane is not uniform).

Figure 9. Mean gray-level distribution of a particle with an out-of-plane velocity component.

Figure 10. Mean gray-level distributions (superimposed) of particles moving parallel to the light sheet plane.

Conclusions

In this paper, the definition, development and experimental validation of a new flow measurement system based on the PIV technique have been described. The proposed solution can be regarded as an evolution of the classical PIV systems, and presents some modifications in order to adapt a PIV system to industrial applications (low cost, small size, high safety and real-time estimation, with respect to the several minutes required by classical PIV).

The experimental validation of the system was made on a test rig, after a preliminary period of study. The test measurements were made at velocities lower than 40 m/s because of difficulties due to the test rig (if greater velocities were used) and to the instrumentation. Concerning the instrumentation, the problems were mostly due to the maximum power obtainable from the light source. In fact, for a fixed light source power, the time for which a particle remains in front of the CCD sensors decreases with increasing velocity, thus reducing the brightness of the trace images. A few experimental tests were made at velocities greater than 50 m/s, but the images recorded in this condition had a very low brightness, not sufficient for good image processing. It is possible to obtain brighter images using a light source with higher power, for example just doubling the power used for the experiments described in this work. The system could nevertheless remain low cost, using a light source power of about 1 W.

The accuracy of the system is satisfactory and can be improved by studying some correction factors to be introduced into the algorithm developed. The direction ambiguity of the system can be solved by using two cameras, as previously stated. Measurement of the direction is only needed when the flow is turbulent and the sign is not known a priori.

References

1. Verri, A. & Poggio, T. (1989) Motion field and optical flow: Qualitative properties. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11: 490–498. 2. Del Bimbo, A., Nesi, P. & Sanz, J.L.C. (1995) Analysis of Optical Flow Constraints. IEEE Trans. on Image Processing, USA, 4: 460–469. 3. Nesi, P. (1996) Real-time motion analysis, in Real-Time Imaging: Theory, Techniques, and Applications (P. Laplante and A. Stoyenko, eds.), IEEE and IEEE Computer Society Press, 27–57.


4. Del Bimbo, A., Nesi, P. & Sanz, J.L.C. (1996) Optical Flow Computation Using Extended Constraints. IEEE Trans. on Image Processing, 5: 720–739.
5. Musmann, H.G., Pirsh, P. & Grallert, H.-J. (1985) Advances in picture coding. Proceedings of the IEEE, 73: 523–548.
6. Dufaux, P. & Moscheni, F. (1995) Motion estimation techniques for digital TV: A review and a new contribution. Proceedings of the IEEE, 83: 858–876.
7. Aggarwal, J.K. & Nandhakumar, N. (1988) On the computation of motion from sequences of images – a review. Proceedings of the IEEE, 6: 917–935.
8. Del Bimbo, A. & Nesi, P. (1993) Optical flow estimation on Connection Machine-2. In Proc. of the International Workshop on Computer Architecture for Machine Perception, CAMP'93 (New Orleans, Louisiana, USA), IEEE Press, 15–17.
9. Danielsson, P., Emanuelsson, P., Chen, K., Ingelhag, P. & Svensson, C. (1990) Single-chip high-speed computation of optical flow. In Proc. of the International Workshop on Machine Vision Applications, MVA'90, IAPR, Tokyo, 331–335.
10. Nesi, P. & Del Bimbo, A. (1996) A vision system for estimating people flow. In Image Technology: Advances in Image Processing, Multimedia and Machine Vision (J.L.C. Sanz, ed.), Springer-Verlag, 179–201.
11. Nesi, P., Innocenti, F. & Pezzati, P. (1998) RETIMAC: Real-Time Motion Analysis Chip. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 45: 361–375.
12. Adrian, R.J. (1996) Bibliography of Particle Velocimetry using Imaging Methods: 1917–1995. TAM Report No. 817, UIUC-ENG-96-6004, Dept. of Theoretical & Applied Mechanics, University of Illinois at Urbana-Champaign, March 1996.
13. Buchhave, P. (1992) Particle Image Velocimetry – Status and Trends. Experimental Thermal and Fluid Science, 5: 586–604.
14. Soria, J. Particle Image Velocimetry. Department of Mechanical Engineering, Monash University, Clayton, VIC 3168, Australia, [email protected].
15. Willert, C.E. & Gharib, M. (1991) Digital particle image velocimetry. Exp. Fluids, 10.
16. Shepherd, I.C., La Fountaine, R.F., Welch, L.W., Soria, J. & Pearson, I.G. (1991) Measurements of Instantaneous Flow using Particle Image Velocimetry. Second World Conference on Experimental Heat Transfer, Fluid Mechanics and Thermodynamics, Dubrovnik, Yugoslavia, 23–28.
17. Riethmuller, M.L. (ed.) (1996) Particle Image Velocimetry. Von Karman Institute for Fluid Dynamics, Lecture Series, 3.
18. Raffel, M., Willert, C. & Kompenhans, J. (1998) Particle Image Velocimetry: A Practical Guide. Berlin, Springer-Verlag.
19. Adrian, R.J. (1986) Image shifting technique to resolve directional ambiguity in double-pulsed velocimetry. Appl. Optics, 25: 3855–3858.
20. Grant, I. & Liu, A. (1989) Directional ambiguity resolution in particle image velocimetry by pulse tagging. Exp. Fluids, 10: 71–76.
21. Raffel, M. & Kompenhans, J. (1995) Theoretical and experimental aspects of image shifting by means of a rotating mirror system for particle image velocimetry. Meas. Sci. Techn., 6: 795–808.
22. Landreth, C.C., Adrian, R.J. & Yao, C.S. (1988) Double-pulsed particle image velocimeter with directional resolution for complex flow. Exp. Fluids, 6: 119–128.
23. Grant, I., Smith, G.H. & Owens, E.H. (1998) A directionally sensitive particle image velocimetry. J. Phys. Sci. Instrum., 21.
24. Grant, I. (ed.) (1994) Selected Papers on Particle Image Velocimetry. SPIE Milestone Series, SPIE Optical Engineering Press, Bellingham, Washington.
25. Reuss, D.L. (General Motors Research Labs.) (1993) Two-dimensional particle-image velocimetry with electro-optical image shifting in an internal combustion engine. Proc. SPIE 2005: 413–424, Optical Diagnostics in Fluid and Thermal Flow.
26. Goss, L.P., Post, M.E., Trump, D.D. & Sarka, B. (1989) Two-color particle velocimetry. Proc. ICALEO '89, L.I.A., 68: 101–111.
27. Nino, E., Gajdeczko, B.F. & Felton, P.G. (1993) Two-color particle image velocimetry in an engine with combustion. SAE, paper 930872.
28. Grant, I. (1997) Particle image velocimetry: a review. Proceedings Institute of Mechanical Engineers, 211, part C: 55–76.
29. Adrian, R.J. (1991) Particle-imaging techniques for experimental fluid mechanics. Annual Rev. Fluid Mech., 23: 261–304.
30. Streeter, V.L. & Wylie, E.B. (1987) Fluid Mechanics. Auckland, McGraw-Hill.