MEMS in-plane motion/vibration measurement system based on a CCD camera
D. Teyssieux*, S. Euphrasie, B. Cretin
Institute FEMTO-ST, Department MN2S, CNRS UMR 6174, University of Franche-Comté, ENSMM, 32 av. de l'observatoire, F-25044 Besançon Cedex, France
Article history: Received 11 February 2009; Received in revised form 11 May 2011; Accepted 21 June 2011; Available online 12 July 2011.
Keywords: In-plane vibration; CCD; Motion; Measurement
Abstract

We report on the development of a vibrometer for in-plane motion which is particularly suitable for Micro Electro Mechanical System (MEMS) samples. The system combines a conventional microscope, coherent electronics and a Full-Frame CCD camera. Stroboscopic lighting allows the system to freeze the motion, so that the images obtained correspond to different phases of the sample motion. Different sub-pixel motion measurement algorithms are compared in terms of precision and computation time. An algorithm that we specially designed for this application proved to be the best. It is based on the Fourier shift theorem and uses the first harmonic, where most of the signal energy is present. The system thus allows the phase and magnitude of the sample displacement to be computed. With this system we obtain a measurement resolution of 100 pm, demonstrated on the measurement of AFM cantilever vibrations. The effect of edge roughness was studied: it decreases the performance of the algorithm, but roughness is very low in most MEMS applications. This method, based on a CCD camera, is very well suited for measuring in-plane MEMS vibration since it does not require surface roughness to scatter the light, as speckle methods do for example. It can quickly provide full-field motion with high precision. Furthermore, the algorithm used is simple, fast and very insensitive to noise.
Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
1. Introduction

In the Micro Electro Mechanical System (MEMS) and Micro Opto Electro Mechanical System (MOEMS) domains, measuring microstructure motion over a wide measurement range and with nanometer-scale precision is very important. Numerous measurement systems allow out-of-plane displacements to be measured. They usually use an optical probe, which can be very sensitive [1]. However, this kind of method requires scanning the sample to obtain full-field motion and cannot be used for in-plane displacements. Several systems for in-plane measurements have been developed. Laser Doppler vibrometry (LDV) and electronic speckle pattern interferometry (ESPI) [2] have become reference techniques for the measurement of in-plane displacements with full-field
analyses. Nevertheless, they are not suited for micro-devices: the LDV method relies on back-scattered light and the speckle method relies on the roughness of the sample under study. MEMS usually have a very low roughness (smaller than the wavelength), and their surfaces do not have suitable properties for these two methods. Thus, other vibration measurement systems, based on Charge Coupled Devices (CCD) and on direct observation of the object, have been developed [3]. Many of these systems are based on stroboscopic illumination of an object in harmonic motion [4]. These systems can be very sensitive and allow an in-plane precision of 2.5 nm [5] at frequencies up to 800 kHz [6]. The main advantage of the CCD method is that it can directly measure the 2D motion field with simple image computation. We specially designed an algorithm for this system to measure in-plane MEMS vibrations; its goal was to be quick, precise and insensitive to noise. The first section of this paper describes the basis of the method. We review the displacement measurement algorithms currently used in the image processing domain
and we test them on synthetic images, comparing them in terms of speed and error. The second section describes the experimental setup and the electronic system developed for MEMS motion measurements. The third section presents the experimental study of a tuning fork and of an Atomic Force Microscope (AFM) cantilever. Finally, we study the precision of the method and the effect of edge roughness.
2. In-plane motion measurements: principle and algorithms

2.1. Principle of the method

Fig. 1. Ideal case of a displacement (linear model).

When the sensor is a CCD, the edge luminosity of an object varies with the movement of that object. Fig. 1 shows an ideal case in which the object has moved by a sub-pixel displacement. The arrays represent a CCD sensor with a fill factor equal to 100% (the principle is not valid for a CCD with a fill factor of less than 100%); the grey zones represent the object, and the curves AA and BB (Fig. 1) represent the intensities of a pixel line along the x axis. The ideal case of Fig. 1 is a linear model between the AA and BB vectors, valid when the movement of the object is rigid (which is not the case for a thermoelastic movement, for example). Because of optical diffraction (the Rayleigh limit, for instance), the transition between dark and bright areas spreads over several pixels. One way to represent this is with the one-dimensional equation:

V_1(x) = (1 - \Delta_x)\, V_2(x) + \Delta_x\, V_2(x - 1)    (1)

where Δx is the displacement along x, V1 is the vector (one-dimensional matrix) corresponding to the AA curve and V2 is the vector corresponding to the BB curve. Thus, despite a displacement smaller than one pixel, there is a relation between the grey level at the edge of the object and the displacement of the object (information zone in Fig. 1). By scanning all the lines of the images, it is possible to measure the displacement of the object along the y axis. The smallest detectable displacement depends directly on the pixel dynamic range, and therefore on the Signal to Noise Ratio (SNR); this point is presented in Section 5. All of the displacement measurement methods presented in the next section are based on this model. Section 5 will also show that this linear model can be problematic for particular samples (with edge roughness).
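As a simple numerical illustration of this linear model (our own sketch, not from the paper; the edge profile and shift values are arbitrary), the grey-level change on the transition pixels grows linearly with the sub-pixel displacement of Eq. (1):

```python
import numpy as np

def linear_subpixel_shift(v2, dx):
    """Eq. (1): V1(x) = (1 - dx) * V2(x) + dx * V2(x - 1), for 0 <= dx < 1."""
    v1 = (1.0 - dx) * v2 + dx * np.roll(v2, 1)
    v1[0] = v2[0]                     # ignore the wrap-around sample at the border
    return v1

# A diffraction-blurred edge: the dark-to-bright transition spreads over several pixels
x = np.arange(64)
edge = 1.0 / (1.0 + np.exp(-(x - 32) / 2.0))

for dx in (0.1, 0.3, 0.5):
    shifted = linear_subpixel_shift(edge, dx)
    # The grey-level change on the transition pixels grows linearly with dx
    print(dx, np.max(np.abs(shifted - edge)))
```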
2.2. Motion analysis algorithms: state of the art

In the computer vision domain, motion analysis is commonly used, for example in image registration [7], super-resolution algorithms [8] or video compression [9]. Motion analysis methods can be divided into two main groups: spatial methods and frequency methods.

Among the spatial methods we can quote the Block Matching (BM) algorithm. The idea behind BM is to divide the current frame into macro-blocks that are compared with the corresponding blocks and their adjacent neighbors in the previous frame, in order to create a vector describing the movement of a macro-block from one location in the previous frame to another. Several BM algorithms exist: exhaustive search [10], Three Step Search [11] or Diamond Search [12], for example. The problem with these algorithms is that they require many computations. Another spatial method is based on optical flow. For a pixel at location (x, y, t) with intensity I(x, y, t) which has moved by Δx, Δy and Δt between two frames, the following image constraint equation can be written:

I(x, y, t) = I(x + \Delta_x, y + \Delta_y, t + \Delta_t)    (2)

By assuming a small movement, we can expand the image constraint at I(x, y, t) in a Taylor series to get:

I(x + \Delta_x, y + \Delta_y, t + \Delta_t) = I(x, y, t) + \frac{\partial I}{\partial x}\Delta_x + \frac{\partial I}{\partial y}\Delta_y + \frac{\partial I}{\partial t}\Delta_t    (3)

From these equations we obtain:

\frac{\partial I}{\partial x}\Delta_x + \frac{\partial I}{\partial y}\Delta_y + \frac{\partial I}{\partial t}\Delta_t = 0    (4)

This equation cannot be solved as such: to find the optical flow, an additional constraint is needed. Horn and Schunk [13] proposed to combine the gradient constraint with a global smoothness term to constrain the optical flow equation. Lucas and Kanade [14] assumed that the flow (\partial x/\partial t, \partial y/\partial t) is constant in a small window, which is well adapted to our study.

Among the frequency methods we can quote the cross-correlation, used for example in Particle Image Velocimetry (PIV) [15]. The cross-correlation is a measure of the similarity of two signals and is similar in nature to the convolution of two functions. Thus, if we consider two signals I1(x, y) and I2(x + Δx, y + Δy), where Δx and Δy are the movements along x and y,

I_1(x, y) = I_2(x + \Delta_x, y + \Delta_y)    (5)

the cross-correlation Cxy is:

C_{xy} = F^{-1}\big(\hat{I}_2(w_x, w_y)\, \hat{I}_1^{*}(w_x, w_y)\big)    (6)

where F^{-1} is the inverse Discrete Fourier Transform (DFT), \hat{I}_2 is the DFT of I2 and \hat{I}_1^{*} is the conjugate of the DFT of I1. The location of the maximum value of Cxy directly gives the values of Δx and Δy. In its basic form, this algorithm does not allow a sub-pixel motion to be measured; to obtain sub-pixel resolution it is necessary to use an interpolation.

Another frequency method is the phase correlation, which is based on the Fourier Shift Theorem [16]. Considering again the two signals I1(x, y) and I2(x + Δx, y + Δy) of Eq. (5), the Fourier Shift Theorem gives:

\hat{I}_1(w_x, w_y) = \hat{I}_2(w_x, w_y)\, \exp\big(j (w_x \Delta_x + w_y \Delta_y)\big)    (7)

This is equivalent to:

\frac{\hat{I}_1(w_x, w_y)\, \hat{I}_2^{*}(w_x, w_y)}{\big|\hat{I}_1(w_x, w_y)\, \hat{I}_2^{*}(w_x, w_y)\big|} = \exp\big(j (w_x \Delta_x + w_y \Delta_y)\big) = \xi    (8)

It is now simple to determine Δx and Δy:

F^{-1}(\xi) = F^{-1}\Big(\exp\big(j (w_x \Delta_x + w_y \Delta_y)\big)\Big) = \delta(\Delta_x, \Delta_y)    (9)

The result is a Dirac function centered at (Δx, Δy). The authors of [17] propose an improvement of this algorithm which provides sub-pixel resolution. The drawback of the phase correlation is that it can give false results when the spectral signal is small: if the frequency harmonics converge rapidly to zero, the computation can lead to a division by zero and the numerical noise then produces a false result.
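As an illustration of Eqs. (7)–(9), the following sketch (ours, with arbitrary signal shapes; not the authors' implementation) recovers an integer shift from the normalized cross-power spectrum; the regularization of small spectral terms addresses the division-by-zero issue mentioned above:

```python
import numpy as np

def phase_correlation_1d(a, b):
    """Integer shift of signal a relative to b, from the normalized cross-power
    spectrum (Eqs. (7)-(9)); small spectral terms are regularized to avoid the
    division-by-zero problem mentioned above."""
    Fa, Fb = np.fft.fft(a), np.fft.fft(b)
    cross = Fa * np.conj(Fb)
    xi = cross / np.maximum(np.abs(cross), 1e-12)   # Eq. (8), regularized
    corr = np.fft.ifft(xi).real                     # Eq. (9): Dirac-like peak at the shift
    shift = int(np.argmax(corr))
    return shift - len(a) if shift > len(a) // 2 else shift

# Example: a Gaussian profile shifted by 7 pixels
x = np.arange(256)
b = np.exp(-((x - 100) / 8.0) ** 2)
a = np.roll(b, 7)
print(phase_correlation_1d(a, b))   # -> 7
```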
2.3. Development of another algorithm

In our study we consider that the movement is semi-rigid, i.e. that the displacement is constant along one axis. This is the typical case of a cantilever deflection at low motion magnitudes. Fig. 2 shows an example where a cantilever, clamped at one end, vibrates on its first flexion mode. For a small displacement the motion of point YA is equal to the motion of point YB: YA = YB and YC = YD. This holds along the vertical axis but not along the horizontal axis: YA ≠ YC and YB ≠ YD. In this case it is therefore possible to measure the deflection of the beam with a 1D algorithm, repeated for all vertical pixel lines. The study can thus be reduced to a phase measurement between two 1D vectors, before and after the displacement. Fig. 2 shows two beam images at two phases. We consider that these images are of finite size and discrete (CCD images). I1(m, l) corresponds to the first image at position 1 and I2(m, l) to the second image at position 2. The size of the images is M × L, with m = [1, ..., M], m ∈ N*, and l = [1, ..., L], l ∈ N*. Let us consider only one vertical line, corresponding to the position a, for example. The 1D phase difference between v1(m) and v2(m), where v1 and v2 are the vertical lines at horizontal index a, is:
\Delta\phi_{1/2}(m) = \hat{V}_1^{\phi}(m) - \hat{V}_2^{\phi}(m)    (10)

where \hat{V}_1^{\phi}(m) and \hat{V}_2^{\phi}(m) represent the spectral phases of v1 and v2. Based on Eq. (7), the motion Δl is then defined by:
\Delta_{(l=a)}(i) = \frac{\Delta\phi_{1/2}(i)\,(M - 1)}{2\pi\,(i - 1)}    (11)
where i represents the harmonic index. Using the fundamental harmonic (where most of the signal energy lies), we have:
Fig. 2. Example of the cantilever displacement.
\Delta_{(l=a)} = \frac{\Delta\phi_{1/2}(2)\,(M - 1)}{2\pi}    (12)
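A minimal 1D sketch of this estimator (our own Python/NumPy transcription of Eqs. (10)–(12), not the authors' code; the test profile and shift are arbitrary, and the sign follows NumPy's FFT convention) is:

```python
import numpy as np

def phase_difference_shift(v1, v2):
    """Sub-pixel shift of profile v2 relative to v1, estimated from the phase of
    the fundamental harmonic (Eqs. (10)-(12))."""
    M = len(v1)
    dphi = np.angle(np.fft.fft(v1)[1]) - np.angle(np.fft.fft(v2)[1])  # Eq. (10), harmonic i = 2
    dphi = np.angle(np.exp(1j * dphi))                                 # wrap to (-pi, pi]
    return dphi * (M - 1) / (2.0 * np.pi)                              # Eq. (12)

# Example: a Gaussian profile shifted by 0.30 pixel with the linear model of Eq. (1)
x = np.arange(200)
v1 = np.exp(-((x - 100.0) / 15.0) ** 2)
dx = 0.30
v2 = (1.0 - dx) * v1 + dx * np.roll(v1, 1)
print(phase_difference_shift(v1, v2))   # approximately 0.30
```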
2.4. Algorithm comparison

With the aim of testing the speed and error of these algorithms, we used synthetic images. The main advantage of synthetic images is that the motion field and scene properties can be controlled. We again use the cantilever example at two different phases φ1 and φ2. The profile used for the I1 image is Gaussian, repeated for each column. The displacement between I1(l, m) and I2(l, m) is obtained with the linear model:
I_2(l, m) = (1 - \Delta_x)\, I_1(l, m) + \Delta_x\, I_1(l, m - 1)    (13)
Thus, we can simulate the cantilever deflection by considering that Δx depends on m. The problem can therefore be treated as a one-dimensional problem. Fig. 3 shows the I1 and I2 images and the displacement along m. In these pictures, the displacement in the I2 image is exaggerated for illustration's sake: only sub-pixel displacements were used in the study. To obtain a simulation close to reality, it is necessary to add a source of noise to the images. Thus we consider for each image that:
I_1(l, m) = I_1^{0}(l, m) + \eta, \qquad I_2(l, m) = I_2^{0}(l, m) + \eta    (14)
where η is the noise function. We assume a normal distribution of the noise with an average value of zero and a standard deviation σ. We quantify the noise effect by the Peak Signal to Noise Ratio (PSNR) between the noisy image and the clean image. The comparison of the different motion estimation algorithms is based on the criterion:
\kappa = \frac{1}{M} \sum_{m=1}^{M} \frac{\left| \Delta_x(m) - \Delta_x^{c}(m) \right|}{\Delta_x(m)}    (15)
where Δx(m) is the synthetic displacement (see Fig. 3), Δx^c(m) is the calculated displacement and M the size of the vectors. The algorithm computation time is evaluated for images of 200 × 80 pixels. Following Section 2.2, we compared three algorithms:
– Lucas–Kanade algorithm: LK.
– Phase difference: DP.
– Cross-correlation: CC.
For the cross-correlation algorithm we use zero-padding interpolation to obtain sub-pixel resolution. The interpolation factor defines the resolution of the algorithm; for this study we used an interpolation factor of 100.
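To illustrate this test procedure, the sketch below (our own construction, not the authors' code; the image size, deflection shape and noise level are arbitrary) builds a synthetic image pair with the linear model of Eq. (13), adds Gaussian noise as in Eq. (14), and evaluates a κ-type criterion in the spirit of Eq. (15) for the phase difference estimator:

```python
import numpy as np

def phase_shift(v1, v2):
    """Sub-pixel shift of v2 relative to v1 from the fundamental-harmonic phase (Eq. (12))."""
    M = len(v1)
    dphi = np.angle(np.fft.fft(v1)[1] * np.conj(np.fft.fft(v2)[1]))
    return dphi * (M - 1) / (2 * np.pi)

def synthetic_pair(M=200, L=80, max_shift=0.4, sigma_noise=0.01, seed=0):
    """I1 and a sub-pixel-shifted, noisy I2 (Eqs. (13) and (14)); the shift varies
    from column to column to mimic a cantilever deflection."""
    rng = np.random.default_rng(seed)
    m = np.arange(M)
    profile = np.exp(-((m - M / 2) / 10.0) ** 2)        # Gaussian profile, as in the paper
    I1 = np.tile(profile[:, None], (1, L))
    dx = max_shift * (np.arange(L) / (L - 1)) ** 2      # assumed deflection shape (quadratic)
    I2 = (1 - dx) * I1 + dx * np.roll(I1, 1, axis=0)    # linear model of Eq. (13)
    return (I1 + rng.normal(0, sigma_noise, I1.shape),
            I2 + rng.normal(0, sigma_noise, I2.shape), dx)

def kappa(dx_true, dx_est):
    """Relative-error criterion of Eq. (15); columns with zero shift are excluded."""
    mask = dx_true != 0
    return np.mean(np.abs(dx_true[mask] - dx_est[mask]) / dx_true[mask])

I1, I2, dx_true = synthetic_pair()
dx_est = np.array([phase_shift(I1[:, l], I2[:, l]) for l in range(I1.shape[1])])
print(kappa(dx_true, dx_est))
```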
Fig. 3. I1 and I2 images and displacement curve according to m axis.
Fig. 4. j criterion comparison curves.
Table 1
Computation time comparison.

LK (sn = 3)    LK (sn = 5)    DP       CC (f = 100)
350 ms         230 ms         37 ms    26 s
The Lucas–Kanade method is solved using least squares and central differences. The spatial neighborhood can be equal to sn = 3, 5, . . ., 11. Image filtering is critical and can modify the results according to the properties of the image, so we decided to use no filter. The results of this comparison are presented in Fig. 4 for the κ criterion and in Table 1 for the computation time. The cross-correlation and the phase difference algorithms are the best in terms of resolution, but the cross-correlation has a very high computation time; in terms of computation time the phase difference is the best. We have thus shown that, for our application, the phase difference algorithm is the best adapted. We will therefore use this algorithm from now on.
3. Experimental setup

3.1. Experimental system

The experimental setup is shown schematically in Fig. 5. The system is based on stroboscopic illumination and a Full-Frame camera. A waveform generator is used to excite the micro-device and to deliver a synchronous clock. An electronic system, synchronized on the micro-device excitation, delivers a pulse of adjustable duration (Δt), which can be shorter than 100 ns. This pulse drives the flashes of a commercial Luxeon Star LXHL-MB1C high-power blue Light Emitting Diode (LED). This LED can be switched in less than 200 ns, a response time that allows us to image device vibrations up to 250 kHz with good image sharpness. LEDs with a switching time of 20 ns exist, so device vibrations up to 2.5 MHz could be measured (the pulse time should be less than 10% of the period). The camera is a Full-Frame camera based on a Kodak KAF series CCD, which is a low-noise sensor. To a first approximation, the main noise sources are the readout noise, the dark noise and the photon shot noise. Our sensor is cooled by a Peltier stage to 258 K; the measured readout noise is σr = 13 electrons/pixel and the measured dark noise is 0.9 electron/pixel/s. The sensor characteristics are detailed in [20]. The LED pulse is repeated once per excitation period, at a selected phase, for several periods which make up the first frame (Frame 0). After the first frame, the pulse phase is shifted by π/2 radian (corresponding to Frame 1). The process is repeated for four frames corresponding to the phases φ + π/2, φ + π, φ + 3π/2 and φ + 2π (φ is the start phase). This sequence is shown in Fig. 6. In order to increase the Signal to Noise Ratio, the sequence is repeated a chosen number of times N. The system thus provides four images corresponding to four phases of the vibration period.
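As a simple numerical illustration of this timing (our own sketch; the excitation frequency and start phase are placeholders, not values prescribed by the authors), the LED pulse delay and width for the four frames can be computed as:

```python
import math

# Our illustration of the phase-stepping of Fig. 6 (frequency and start phase are arbitrary)
f_exc = 32_757.0                     # excitation frequency (Hz), tuning-fork example of Section 4
T = 1.0 / f_exc                      # excitation period
pulse_width = 0.1 * T                # "the pulse time should be less than 10% of the period"
phi0 = 0.0                           # start phase (phi in the text)

for frame in range(4):
    phase = phi0 + (frame + 1) * math.pi / 2            # phi + pi/2, phi + pi, phi + 3*pi/2, phi + 2*pi
    delay = (phase / (2.0 * math.pi)) % 1.0 * T         # LED pulse delay inside the excitation period
    print(f"frame {frame}: delay = {delay * 1e6:.2f} us, width = {pulse_width * 1e6:.2f} us")
```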
3.2. Magnitude and phase of the motion

Assuming a harmonic excitation and a harmonic motion of the sample, the four images Ii (i = 1–4) taken at four different phases allow the magnitude and the phase of the motion to be determined. Let us consider the motion defined by:
V_i = A_0 + A \sin(\theta_i + \varphi)    (16)
where θi = (i − 1)π/2 modulo 2π. By combining the previous equations we obtain:
A = \frac{1}{2}\sqrt{(V_1 - V_3)^2 + (V_2 - V_4)^2}    (17)

\varphi = \arctan\left(\frac{V_3 - V_1}{V_2 - V_4}\right)    (18)

A_0 = \frac{1}{4}(V_1 + V_2 + V_3 + V_4)    (19)
where V1 = D11 = 0, V2 = D12, V3 = D13 and V4 = D14 are the specimen displacements between the first image and the three following ones. Thus, by using this experimental system and the algorithm detailed in Section 2, it is possible to obtain the magnitude and the phase of the sample motion.
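A direct transcription of Eqs. (17)–(19) in Python/NumPy (our sketch; the displacement values in the example are arbitrary) is:

```python
import numpy as np

def four_phase_demodulation(V1, V2, V3, V4):
    """Magnitude A, phase phi and offset A0 of a harmonic motion sampled at four
    phases spaced by pi/2 (Eqs. (17)-(19)). Works element-wise on arrays, so
    V1..V4 can be full displacement maps."""
    A = 0.5 * np.sqrt((V1 - V3) ** 2 + (V2 - V4) ** 2)   # Eq. (17)
    phi = np.arctan2(V3 - V1, V2 - V4)                   # Eq. (18), quadrant-safe form
    A0 = 0.25 * (V1 + V2 + V3 + V4)                      # Eq. (19)
    return A, phi, A0

# Example: displacements (in pixels) measured at the four stroboscopic phases
V1, V2, V3, V4 = 0.0, 0.021, 0.030, 0.009
print(four_phase_demodulation(V1, V2, V3, V4))
```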
Fig. 5. Schema of the experimental setup for MEMS in-plane vibration.
Fig. 6. Timing diagram of the system.
4. Experimental study of two different samples

4.1. Experimental study of a tuning fork

The sample studied is a tuning fork used as a time reference. An optical image of the tuning fork and its dimensions are given in Fig. 7. We consider that the flexion of the two tuning fork arms is symmetric. The microscope magnification is set to 5, so the equivalent pixel size is 1.8 μm.
A first study with an Agilent 4194A network analyzer (admittance magnitude measurement) allows us to determine the frequencies of the first and second flexion modes (Fig. 8). Using our system we can then extract the vibration magnitude versus frequency for the first and second flexion modes (Fig. 9). The frequencies obtained with our system are very close to those given by the network analyzer: a difference of 1 Hz (0.003% error) for the first vibration mode and 33 Hz (0.017% error) for the second. This validates the detection system and the measurement algorithm (in terms of detection of mechanical vibration modes). The main advantage of this system is that it provides a 2D image of the displacement, so we can directly determine the deflection of the tuning fork. For this
Fig. 7. Optical image and dimensions of the tuning fork. (a) Optical image of the tuning fork. (b) Dimensions of the tuning fork.
Fig. 8. Admittance measurement for the first and second mode. (a) Admittance magnitude of the first mode versus frequency (center frequency = 32757 Hz and span = 46 Hz). (b) Admittance magnitude of the second mode versus frequency (center frequency = 191780 Hz and span = 160 Hz).
Fig. 9. Magnitude response of the tuning fork versus frequency. (a) Magnitude response of the first mode. (b) Magnitude response of the second mode.
we use the phase difference algorithm on all lines of the images. Fig. 10 shows the result for the first mode at different excitation voltages and for the second mode with an excitation voltage of 1.5 V.

4.2. Experimental study of an AFM cantilever

The second sample studied is an AFM probe mounted on a bimorph piezoelectric transducer (PZT) actuator (Fig. 11). The probe is 135 μm long, 35 μm wide and 4 μm thick, and is used in a clamped-free configuration. A finite element analysis, carried out with the Ansys harmonic solver, was used to determine the harmonic response of the clamped-free probe. The results of this analysis show that
the first vibration mode is at 266 kHz. In our experimental measurement we studied the first flexion mode. The cantilever is positioned perpendicularly to the optical system. The edge of the cantilever does not permit the use of reflected light (since the edge is not flat), so we used our system in transmission mode. The microscope magnification is set to 20, hence an equivalent pixel size of 450 nm. In order to measure the first flexion mode we excited the bimorph with a sinusoidal signal of magnitude V = 50 mV over the frequency range [200–300 kHz]. Using the experimental setup presented in Section 3 and the phase difference algorithm, we can determine the frequency response at the end of the cantilever. With a coarse frequency increment (steps of 5 kHz) we can approximately identify the first
Fig. 10. Deflection of one tuning fork arm for the first and second mode. (a) First mode deflection for different excitation voltages. (b) Second mode deflection for an excitation voltage of 1.5 V.
Fig. 11. SEM image of the AFM cantilever.
flexion mode, which lies in the [200–300 kHz] range. A second study with smaller steps gives the amplitude and phase of the first mode (Fig. 12). Each value was determined with 40 sequences (one sequence is defined in Fig. 6). The experimental first mode is around 254 kHz, which is 12 kHz away from the finite element value; considering that a length variation of 10% can shift the first mode frequency by about 50 kHz, the difference is within the margin of error of the finite element analysis. As in the previous sub-section, we can directly determine the deflection of the beam. Fig. 13 shows this for a frequency of 256 kHz and an excitation voltage of 50 mV. Furthermore, Fig. 13 shows that the lowest measured magnitudes are below one nanometer.
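As a plausibility check of this sensitivity to the cantilever length (our own estimate, using the standard Euler–Bernoulli result for a clamped-free rectangular beam rather than the finite element model), the first resonance frequency scales as:

```latex
% First flexural mode of a clamped-free rectangular beam (standard Euler-Bernoulli result)
f_1 = \frac{\lambda_1^2}{2\pi L^2}\sqrt{\frac{EI}{\rho A}}
    = \frac{\lambda_1^2}{2\pi}\,\frac{t}{L^2}\sqrt{\frac{E}{12\rho}},
\qquad \lambda_1^2 \simeq 3.516,
\qquad\Longrightarrow\qquad
\frac{\Delta f_1}{f_1} \simeq 2\,\frac{\Delta L}{L}
```

so a 10% uncertainty on the 135 μm length indeed translates into a shift of roughly 20% of the 250–266 kHz first mode, i.e. about 50 kHz.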
Fig. 13. AFM cantilever deflection at f = 256 kHz and V = 50 mV.
Fig. 14 shows the amplitude of the displacement of point C (see Fig. 13) for different excitation voltages. The curve is linear and demonstrates the possibility of measuring low-amplitude deflections (smaller than 1 nm).

5. Precision and problems due to the roughness of the sample

5.1. Precision of the method

The precision is taken as the minimal detectable displacement, which we consider to be defined by the measurement repeatability [18].
Fig. 12. Magnitude and phase of the AFM cantilever versus frequency (around first mode). (a) Magnitude response of the AFM cantilever. (b) Phase response of the AFM cantilever.
Fig. 14. Magnitude of the point C versus excitation voltage.
Fig. 17. Henry curve for V = 800 mV (normal distribution probability).
Fig. 15. Area studied for precision determination.

In order to measure the precision we used a statistical method, which consists in a deflection measurement of the AFM cantilever for a large number of images (or sequences, as defined in Fig. 6). With the aim of obtaining a low vibration magnitude, the AFM cantilever is not excited at its resonance frequency but at 230 kHz. Moreover, we studied only a short part of the cantilever near the clamped end (see Fig. 15). First, it is necessary to verify that the noise associated with the vibration magnitude has a normal distribution (in these conditions and for a large number of measurements). Thus, 7500 displacement measurements (of the point marked by the cross in Fig. 15) were carried out with an 800 mV excitation voltage. The histogram of these measurements (Data) is plotted in Fig. 16. The number of bins of the histogram can be defined by Sturges' rule [19]:

k = \log_2 n + 1    (20)

which is equivalent to a bin width:

h = \frac{\Delta}{\log_2 n + 1}    (21)
where Δ = Max(Data) − Min(Data); for n = 7500 measurements, Eq. (20) gives about 14 bins. Although this histogram alone does not allow us to be sure that the data have a normal distribution, several tests enable us to verify it. For example, the Kolmogorov–Smirnov test or the Lilliefors test can be used to check the hypothesis that a given sample of size n is drawn from a normal random variable. A simple test commonly used is the Normal Probability Plot. Fig. 17 shows the Henry curve (normplot with Matlab) for our distribution and confirms that the distribution is normal with around 96% confidence. The fact that the data have a normal distribution is very important because it gives several pieces of information about the measurement method (in these experimental conditions):
– The noise is a sum of decorrelated noises, one of which is much more important than the others.
– The ultimate precision will depend on the number of images (or number of sequences).
Fig. 16. Distribution of measurement for V = 800 mV (standard deviation of 0.0034 pixel) and V = 400 mV (standard deviation of 0.0032 pixel). (a) V = 800 mV and (b) V = 400 mV.
– It will be difficult to isolate the different noise sources, but one of them is dominant.
Thus, by analyzing the standard deviation of the distribution it is possible to obtain the precision of the measurement as a function of the number of images (or sequences).
Fig. 18. Standard deviation according to the Nav factor (crosses) and the function σ = 0.0034/√Nav (continuous line).
Fig. 19. Synthetic standard deviation (continuous line) and measured standard deviation (crosses) versus Nav; the resolution is given in pixels (left axis) and in nm (right axis).
For example, using only one sequence (as defined in Fig. 6), a precision of 0.0034 pixel can be expected (corresponding to the standard deviation of the histogram of Fig. 16). The idea is therefore to analyze the effect of averaging on the precision of the displacement measurement. For this, using the displacement measurement vector Data, the following steps are applied:
– A new vector Data′ is obtained by averaging Data over blocks of Nav values (for example, if Nav = 10, the new vector Data′ contains 7500/Nav = 750 values).
– The standard deviation of the new vector Data′ is extracted.
– These steps are repeated for several values of Nav.
Fig. 18 shows the standard deviation as a function of the Nav factor. This curve shows the ultimate minimal displacement that the method allows us to reach (in these experimental conditions). This floor is near 100 pm, which corresponds to the precision of the system.
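The averaging procedure listed above can be summarized by the following sketch (ours; the input data are placeholders drawn from a normal distribution with the single-sequence standard deviation of 0.0034 pixel quoted in the text):

```python
import numpy as np

def precision_vs_averaging(data, nav_values):
    """Standard deviation of block-averaged displacement measurements, reproducing
    the procedure used for Fig. 18: for each Nav, 'data' is split into blocks of
    Nav values that are averaged, and the standard deviation of the resulting
    vector Data' is returned."""
    out = {}
    for nav in nav_values:
        n_blocks = len(data) // nav
        blocks = data[: n_blocks * nav].reshape(n_blocks, nav)
        out[nav] = np.std(blocks.mean(axis=1))
    return out

# Placeholder data: 7500 normally distributed measurements with a standard
# deviation of 0.0034 pixel (value reported in the text for one sequence)
rng = np.random.default_rng(1)
data = rng.normal(0.0, 0.0034, 7500)
for nav, sigma in precision_vs_averaging(data, [1, 10, 100, 750]).items():
    print(nav, sigma)   # expected to follow sigma ~ 0.0034 / sqrt(Nav)
```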
5.2. Noise effect on the system's precision

In our case we assume that the ultimate precision of the system is directly related to the noise of the camera, because we used very precise electronics, a massive marble slab on pneumatic isolators, and worked in an acoustically quiet environment. For a CCD camera the main noises can be reduced to the readout noise, the shot noise and the dark current noise [20]. Moreover, these noises can be considered as decorrelated, so that the total noise for one pixel is defined by:
Fig. 20. Moving system setup used to show the roughness effect.
\sigma_T = \sqrt{\sigma_r^2 + \sigma_{th}^2 + \sigma_p^2}    (22)
where σp is the photon shot noise, σth is the dark current shot noise and σr is the readout noise. In our case, owing to the high lighting level, the main noise is the photon shot noise, which is governed by Poisson statistics. For large numbers of collected photons, the photon noise approaches a normal distribution (which corresponds well with Fig. 18). In order to simulate this noise effect on the results we used synthetic profiles with additional photon shot noise. The camera dynamic range is 17 bits, so we used 120,000 levels. The images are averaged by ten. The synthetic profiles are Gaussian in order to simulate the optical diffraction effect. The aforementioned algorithm was employed to give the results presented in Fig. 19, along with those of Fig. 18. The synthetic and experimental values are surprisingly well matched: the synthetic curve is lower than expected but shows the same 1/√Nav shape. The hypothesis that photon shot noise is the main noise, and therefore the source of the accuracy limitation, thus seems to be confirmed.

5.3. Edge roughness effect

In order to show the effect of edge roughness, we used a simple platinum layer moved by a piezo stage (Fig. 20). The displacement stage is a Physik Instrumente (PI) P517 multi-axis piezo-nanopositioning stage, driven by an E500 double-channel digital piezo controller which gives a displacement precision of about 10 nm. A 250 nm displacement is applied along the Y axis and is repeated at a frequency of 20 Hz. The test sample, presented in Fig. 21, consists of a platinum wire deposited on glass in a clean room by an evaporation process. The edge of the wire was voluntarily damaged (roughness of several microns). The displacement of the sample was determined using the measurement method (see Fig. 22).
Fig. 22. Comparison between the displacement measurement and the image of the platinum wire.
Fig. 22 clearly shows that the error of the displacement measurement depends on the edge roughness of the wire: it increases when the edge is heavily damaged (several tens of nanometers of error for a few microns of roughness). This is due to the fact that the method is based on the linear model presented in Section 2.1. This model is therefore valid only if the edge of the studied sample is perpendicular (or parallel) to the image axes and if the displacement is much smaller than the pixel size, which is the case for most MEMS applications. This problem also affects the majority of in-plane measurement methods using a camera. Nevertheless, it can be seen as an opportunity to obtain a good estimate of the edge roughness (note that we are talking about edge roughness and not surface roughness).
6. Conclusions
Fig. 21. Platinum wire used to show the effect of edge damage on the measured displacement value.
In this study we have presented a system to measure in-plane displacements of micro-devices. The system is based on stroboscopic lighting, a low-cost CCD camera and a phase difference algorithm. The algorithm used is simple and the method does not require image filtering; it is therefore independent of the sample images. We have shown that the method provides full-field motion measurements. A precision of 100 pm was achieved on the measurement of the vibration of an AFM cantilever. Since the system is based on the study of the edge of the sample, it can be used on most micro-devices. These results show that the Rayleigh limitation is not a problem, because the method is based on the grey-level variation at the edge. Of course, a greater precision could be obtained with a shorter wavelength of the stroboscopic illumination. Future studies will concern the use of this method to measure the edge roughness. Another work will concern the combination of this method with a thermal measurement system [21] for the study of the in-plane full-field thermoelastic effect in micro-samples.
References [1] P. Vairac, B. Cretin, New structures for heterodyne interferometric probes using double-pass, Opt. Commun. 132 (1997) 19–23. [2] A. Svandro, In-plane dynamic speckle interferometry: comparison between a combined speckle interferometry/speckle correlation and an update of the reference image, Appl. Opt. 43 (2004) 4172– 4177. [3] S. Petitgrand, A. Bosseboeuf, Simultaneous mapping of out-of-plane and in-plane vibrations of MEMS with (sub)nanometer resolution, J. Micromech. Microeng. 14 (2004) 97–101. [4] A. Hafianes, S. Petitgrand, O. Gigan, S. Bouchafa, A. Bosseboeuf, Study of sub-pixel image processing algorithms for MEMS in-plane vibration measurements by stroboscopic microscopy, Proc. SPIE, Microsyst. Eng.: Metrol. Insp. III 5145 (2003) 169–180. [5] D.M. Freeman, Measuring motions of MEMS, materials research society bulletin, Microelectromech. Syst. (MEMS): Technol. Appl. 26 (2001) 305–306. [6] S. Petitgrand, R. Yahiaoui, K. Danaie, A. Bosseboeuf, J.P. Gilles, 3D measurement of micromechanical devices vibration mode shapes by stroboscopic microscopic interferometry, Opt. Lasers Eng. 36 (2001) 77–101. [7] B. Zitova, J. Flusser, Image registration methods: a survey, Image Vision Comput. 21 (2003) 977–1000. [8] V. Cheung, B.J. Frey, N. Jojic, Video epitomes, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, 2005, pp. 42–49. [9] T. Akiyama et al., MPEG2 video codec using image compression DSP, IEEE Trans. Consum. Electron. 40 (1994) 466–472. [10] B. Liu, A. Zaccarin, New fast algorithms for the estimation of block motion vectors, IEEE Trans. Circ. Syst. Vid. 3 (1993) 148–157. [11] R. Li, B. Zeng, M.L. Liou, A new three-step search algorithm for block motion estimation, IEEE Trans. Circ. Syst. Video Technol. 4 (1994) 438–442.
[12] S. Zhu, K. Ma, A new diamond search algorithm for fast blockmatching motion estimation, IEEE Trans. Image Process. 9 (2000) 287–290. [13] B.K. Horn, B.G. Schunk, Determining optical flow, Artif. Intell. 17 (1981) 185–203. [14] B.D. Lucas, T. Kanade, An iterative image registration technique with an application to stereo vision, in: Proceedings of the DARPA Image Understanding Workshop, 1981, pp. 121–130. [15] K. Jambunathan, X.Y. Ju, B.N. Dobbins, S. Ashforth-Frost, An improved cross correlation technique for particle image velocimetry, Meas. Sci. Technol. 6 (1995) 507–514. [16] C.D. Kuglin, D.C. Hines, The phase correlation image alignment method, in: Proceedings of the International Conference on Cybernetics Society, 1975, pp. 163–165. [17] H. Foroosh, J.B. Zerubia, M. Berthod, Extension of phase correlation to subpixel registration, IEEE Trans. Image Process. 11 (2002) 188–200. [18] BIPM, International Vocabulary of Metrology Basic and General Concepts and Associated Terms, JCGM (2008). Available from: http://www.bipm.org/utils/common/documents/jcgm/ JCGM_200_2008.pdf. [19] R. Brunelli, O. Mich, Histograms analysis for image retrieval, Pattern Recognit. 11 (2000) 1625–1637. [20] D. Teyssieux, L. Thiery, B. Cretin, Near-infrared thermography using a charge-coupled device camera: application to microsystems, Rev. Sci. Instrum. 78 (2007) 034902. [21] D. Teyssieux, D. Briand, J. Charnay, N.F. de Rooij, B. Cretin, Dynamic and static thermal study of micromachined heaters: the advantages of the visible and near-infrared thermography compared to classical methods, J. Micromech. Microeng. 18 (2008) 065005.