ARTICLE IN PRESS

Optics and Lasers in Engineering 41 (2004) 673–686

Speckle photography combined with speckle interferometry

N.-E. Molin, M. Sjödahl, P. Gren, A. Svanbro

Division of Experimental Mechanics, Luleå University of Technology, SE-97187 Luleå, Sweden

Accepted 21 October 2002

Abstract

By combining speckle interferometry (SI) measurements with speckle photography, the fringe visibility can be kept high despite the presence of a large bulk or rotating motion of the object. This combined technique improves the usability and measuring range of both pulsed and phase-stepped SI methods. This paper reviews the theory of fringe formation in SI and shows some recent applications of this combined technique. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Speckle photography; Speckle correlation; Speckle interferometry; TV holography; Pulsed TV holography; Shearography

1. Introduction

1.1. Background

In experimental mechanics, a bulk motion of the object is often added to the deformation field that is to be measured. Misalignments, mechanical play, rotations, deformation of fixtures, etc. may contribute to bulk motions, and such effects are hard to avoid. Fringes in speckle interferometry (SI) disappear if the speckle motion in the image or aperture planes becomes too large. For a high-magnification interferometric system, the relative influence of a bulk motion is even worse. Speckle decorrelation [1–5] is one of the main fundamental limiting factors of SI. It sets a limit to the


Corresponding author. Fax: +46-920-910-74. E-mail address: [email protected] (A. Svanbro).



measuring range; it lowers the fringe visibility and the ability to extract quantitative data from the measurements. Part of the decorrelation or lowered fringe visibility may be caused by environmental disturbances, variations in laser output energy, and thermal or other types of noise. Such influences may be avoided or diminished by vibration isolation, by the use of pulsed lasers, and by high stability in laser output energy and pointing. Cooled detectors diminish thermal noise. There are, however, other causes of decorrelation or loss of fringe visibility:

* Changes in the microstructure or scattering properties of the object surface.
* In-plane motion of the speckles at the objective lens, primarily due to object tilt and strain.
* Motion of the speckles in image space due to motion of the object.

In a tensile test of a specimen, in a corrosion test, in a hot oven, or with changes in the humidity of a paper sample, the microstructure of parts of the test sample might change considerably. If the microstructure changes, then the scattering properties of the object also change and a quite different local speckle pattern is generated. Speckle decorrelation has actually been proposed as a measure of changes in microstructure due to plasticity or corrosion; see for instance [6]. Sometimes the decorrelation can be counteracted by painting or by depositing a metal layer onto the object surface. Such procedures may be recommended for semi-transparent objects. One obvious thing to do when changes in microstructure are present is to update the reference images in SI measurements so often that speckle decorrelation is more or less avoided [7].

The next two cases, decorrelation caused by speckle motion in the pupil plane and in the plane of the detector, are covered more thoroughly in Section 2. The purpose of this paper is to review a technique that uses speckle photography (SP) to compensate for speckle motion before reconstructing the phase, and hence improves fringe visibility considerably. Methods to perform this compensation on both temporally and spatially phase-shifted images are outlined in Section 3, and several examples from quasi-static, transient, and shearography measurements are presented in Section 4.

1.2. Sandwich holography, an early attempt to compensate for rigid body motion

Already in 1974, Abramson [8,9] proposed the sandwich holographic interferometry technique. Instead of making a double holographic exposure on one photographic plate, the two exposures were made on different plates in a special plate holder.
By combining the two optically reconstructed object fields from the two holograms (the developed photographic plates), it was possible to study the change of object state that had occurred between the exposures, as in double-exposed holograms. Relative motions between the two fields could also be introduced during the reconstruction by


shearing and translating the two holograms relative to each other. Advantages of using several plates were:

* The possibility to determine the direction of the displacements by moving the plates (i.e. the reconstructed object fields) relative to each other.
* By recording a sequence of holograms on different plates, any pair of plates or object states could be compared.
* Rigid body motions could be compensated for, leaving the wanted local deformation fields.
* Carrier fringes could be introduced for an FFT-based analysis approach.

Some disadvantages were:

* Precise repositioning of the hologram plates in the process is necessary.
* The change of photographic plates introduces errors due to different inhomogeneities and thicknesses of the plates.
* The process is time-consuming and requires highly qualified personnel.

In digital SI or TV holography, the exposures are recorded on different (sometimes sequential) CCD frames. This is done electronically, without mechanical motion or exchange of any component and without any photographic developing process. This method therefore has many of the advantages of sandwich holography, while many of the disadvantages are eliminated since the physical exchange of photographic plates is no longer needed. The great advantage of adding or subtracting any phase distribution to compensate for, or introduce, a rigid body or other motion in the numerically reconstructed fields, in principle, still exists. One has to remember, however, that in sandwich holography the two amplitude fields are optically reconstructed and added. In digital holography [10], these fields are calculated, and in principle everything done in sandwich holography can be repeated digitally. Digital SI methods are also better adapted to in-plane measurements.

For the combined SP and SI technique, discussed in more detail below, another advantage is present. The speckle motion introduced by the object bulk motion can be measured using the SP technique (frequently referred to as digital speckle photography, DSP) and compensated for in the SI measurements to avoid decorrelation and thereby restore the visibility of the interference fringes. This permits a quantitative evaluation of interferograms and shearograms that is hard to achieve in sandwich holography. In the following, a background for the combined SP and SI method is given, followed by a number of experimental situations where the speckle motion caused by rigid body motions has been measured and compensated for, thus retrieving the fringe visibility in SI or measuring the shear field in shearography.


2. Causes for loss of visibility in SI

The rules of fringe formation in SI are nowadays well understood [11,12]. The fundamental parameter is the speckle correlation function [3],

\gamma(\mathbf{q}) = \left| \int_{-\infty}^{\infty} P(\mathbf{p})\, P^{*}(\mathbf{p}+\mathbf{A}_p) \exp\!\left[\frac{ik}{L_0}\,\mathbf{p}\cdot(\mathbf{q}-\mathbf{A})\right] \exp\!\left[\frac{ik}{2L_0^{2}}\,|\mathbf{p}|^{2}(a-A_Z)\right] \mathrm{d}^{2}p \right|^{2} \Big/ \left| \int_{-\infty}^{\infty} |P(\mathbf{p})|^{2}\, \mathrm{d}^{2}p \right|^{2},   (1)

which states the structural similarity between two acquired speckle patterns. In Eq. (1), k is the wavenumber, q is the three-dimensional correlation parameter, P is the pupil function, p is a pupil-plane co-ordinate, A_p and A are the speckle displacements in the pupil plane and in image space, respectively, and L_0 is the distance from the exit pupil plane to the imaging detector. Further, a and A_Z are the components of q and A, respectively, along the optical axis.

Eq. (1) is related to several interesting phenomena regarding speckles. For instance, if all speckle displacements are set to zero, Eq. (1) gives the three-dimensional average size of the speckle through the three-dimensional autocorrelation function [13]. The first exponential function in Eq. (1) gives the in-plane size and the second exponential function gives the out-of-plane size. Typically, a speckle is "cigar shaped" with its main axis pointing towards the aperture. For normal imaging apertures, the dimension of the speckle is an order of magnitude larger along the optical axis than perpendicular to it.

We next consider the case where Eq. (1) is maximised, that is, q is chosen such that both exponential functions become unity. Eq. (1) then reduces to the so-called Yamaguchi correlation factor [2], which is graphically understood as the relative overlap between two pupil functions separated by A_p, the speckle displacement in the aperture plane. For a given speckle displacement in the aperture plane, a larger pupil will result in a higher correlation value and hence a higher fringe visibility. SI systems are therefore run with as large an aperture as possible.
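As a numerical aside, the geometric reading of the Yamaguchi correlation factor (the relative overlap of two pupils separated by A_p) can be sketched for a circular pupil. This is a simplified, geometry-only illustration written for this review; the function name and parameters are assumptions, not from the paper.

```python
import numpy as np

def yamaguchi_correlation(pupil_shift, pupil_diameter):
    """Relative overlap area of two equal circular pupils whose centres
    are separated by |A_p| = pupil_shift (pure geometry; no speckle
    statistics are simulated)."""
    d = abs(pupil_shift)
    R = pupil_diameter / 2.0
    if d >= 2 * R:  # pupils no longer overlap: complete decorrelation
        return 0.0
    # lens-shaped intersection area of two equal circles of radius R
    area = 2 * R**2 * np.arccos(d / (2 * R)) - (d / 2) * np.sqrt(4 * R**2 - d**2)
    return area / (np.pi * R**2)  # normalise by the full pupil area
```

Consistent with the text, the factor falls from 1 at zero shift towards 0 as the shift approaches the pupil diameter, so a larger pupil tolerates a larger absolute speckle displacement.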
In this context, it should be noted that SI systems (with temporal phase stepping) do not need to resolve individual speckles. Each pixel element serves as an individual detector that averages the intensity of all speckles falling onto it [14]. Too many speckles per pixel, however, result in blurring and loss of fringe visibility. The Yamaguchi correlation factor is generally considered to be the maximum correlation possible to obtain in a specific measurement. Recently, however, Burke [15] has shown that the Yamaguchi correlation factor can be improved considerably if a synthetic filter is applied to the speckles in the aperture plane. The purpose of this filter is to filter out those speckles that only show up in one of the exposures. In practice, the width of this filter depends on the measured speckle displacement in the aperture plane (which can be measured with DSP), a process that is most straightforward for the spatial phase-stepping technique. Improved fringe visibility due to adaptive filtering in the Fourier plane is demonstrated in Section 4.1.


Our last concern is decorrelation effects due to speckle motions in image space. If we for a moment disregard Yamaguchi decorrelation, speckle decorrelation will be introduced if the values |q − A|_⊥, where ⊥ refers to the in-plane component, and/or a − A_Z differ from zero. The speed of decay is identical to the three-dimensional auto-correlation of a speckle pattern that defines the average speckle size (see above). The cigar used to visualise an average speckle can therefore also be used to visualise motion-induced speckle decorrelation in image space, but with the axes replaced by |q − A|_⊥ and a − A_Z, respectively. Since the loss of speckle correlation is an order of magnitude quicker in the in-plane direction than in the out-of-plane direction, it is more vital to compensate in-plane motions. In many practical situations, such as tensile tests, decorrelation caused by in-plane motion is the main limiting factor in the measurement.

To make the measurement robust to in-plane motion, we need to make the expression |q − A|_⊥ zero at all points of the object throughout the measurement, which in practice means that we need to follow the bulk motion of the speckles as the object deforms. Doing so, two main advantages are gained. By following the phase evolution in a specific speckle rather than in a specific pixel, fringe visibility is kept high even for large motions, and since each speckle relates to a specific area on the object, the calculation of mechanical quantities such as strain is simplified. Methods to evaluate bulk motion from temporally as well as spatially phase-shifted SI images are outlined in Section 3. Sections 4.2 and 4.3 show applications of these two techniques. As an additional example of the advantage of combining SP and SI, quantitative speckle shearing interferometry is discussed in Section 4.4. As is understood from the discussion above, speckle decorrelation is introduced by speckle motion.
Understanding the rules of speckle motion therefore helps in optimising a specific set-up. It can be shown [3] that, apart from a possible magnification, the components of the speckle motion are fundamentally given by

A_{\perp} = L \nabla (\mathbf{m} \cdot \mathbf{a})   (2)

for the in-plane component, and

A_{z} = -\frac{L^{2}}{2} \nabla^{2} (\mathbf{m} \cdot \mathbf{a})   (3)

for the component parallel to the optical axis. In Eqs. (2) and (3), m is the sensitivity vector known from holographic interferometry, a is the object displacement, ∇ = (∂/∂x, ∂/∂y) is the gradient operator, and L is a typical length given by the optical set-up. For the image plane, the distance L is the defocusing distance, and for the aperture plane L is the distance from the object to the entrance pupil plane. Hence, the transverse speckle motion is proportional to the gradient, and the longitudinal speckle motion to the Laplacian, of m·a. Eqs. (2) and (3) in general become somewhat complicated when written in component form [3], but in principle they consist of two parts: object deformations and object deformation gradients (tilt, rotation and strain), scaled by L. For a focused optical system, the image-plane speckle motions are dominated by object deformation, while object tilt and rotation dominate the aperture-plane speckle motions. In this context,


it is worth pointing out that the bulk motion and the phase of a speckle are not necessarily connected. The phase depends on the interferometric set-up, while the bulk motion depends more directly on the deformation of the object. In most practical situations, the phase within a speckle can also be assumed constant. It is therefore only necessary to track the speckle motion with a precision of one speckle radius to get reliable phase results.
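As a minimal numerical sketch of Eq. (2), consider a pure out-of-plane tilt for which m·a varies linearly over the object, with an assumed sensitivity vector m = (0, 0, 2) (normal illumination and observation). All names and values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Eq. (2): in-plane speckle motion A_perp = L * grad(m . a).
# For a tilt a_z(x) = theta * x and m = (0, 0, 2), m . a = 2*theta*x,
# so the speckle motion is the constant 2*L*theta everywhere.
L = 0.5                               # typical length of the set-up [m]
theta = 1e-5                          # tilt angle [rad], assumed
x = np.linspace(0.0, 0.05, 101)       # object coordinate [m]
m_dot_a = 2.0 * theta * x             # m . a for this deformation
A_perp = L * np.gradient(m_dot_a, x)  # Eq. (2), evaluated numerically
```

For this linear field the numerical gradient is exact and A_perp is uniform, matching the statement that tilt shifts all aperture-plane speckles by the same amount.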

3. Methods to evaluate the in-plane speckle motion from SI experiments

Two methods to evaluate the in-plane speckle motion from SI experiments are briefly described. The first concerns the often-used (temporal) phase-stepping method and the second is the Fourier transform method, often used in pulsed SI. It should, however, be pointed out that a third, more "direct" method is to calculate the speckle motion using the SP technique from "interferometric recordings" in which the reference beam is blocked, so that the object speckle fields before and after deformation are recorded directly. This last method is, however, not possible to use in experiments where simultaneous recordings of the speckle motion and of the interferograms are essential.

In [16], the in-plane speckle motion was calculated in a digital SI experiment using the four-frame phase-stepping technique [17,18]. Other stepping numbers or schemes are of course also possible with a similar technique. In general, a set of N phase-stepped images is recorded before and after deformation of the object, respectively. The irradiance of these recordings can be written as

I_n(\mathbf{r}_1) = I_0(\mathbf{r}_1) + I_m(\mathbf{r}_1) \cos[\phi(\mathbf{r}_1) + n\alpha],   n = 0, 1, \ldots, N-1,   (4)

I_n(\mathbf{r}_2) = I_0(\mathbf{r}_2) + I_m(\mathbf{r}_2) \cos[\phi(\mathbf{r}_2) + n\alpha + \Omega(\mathbf{r}_2)],   n = 0, 1, \ldots, N-1,   (5)

where I_0 is the slowly varying background intensity, I_m the modulation irradiance (speckles), α the (known) phase step, φ the random speckle phase, and Ω the sought phase information. r_1 and r_2 are detector positions before and after deformation, respectively. If N is at least three, it is possible to separate the individual parameters in Eqs. (4) and (5) by a combination of the acquired images [19]. In SI, it is usually assumed that the speckles remain stationary (r_2 = r_1) throughout the deformation, so that the sought phase information can be calculated directly from the acquired images. Violation of this assumption results in speckle decorrelation and loss of fringe visibility (see Section 2). In the combined technique, however, the bulk motions between I_m(r_1) and I_m(r_2) are calculated using DSP prior to the phase evaluation. This enables us to follow the phase evolution in a specific speckle and hence within a specific area on the object. An application of this technique is shown in Section 4.2.

The pulsed SI technique (also called pulsed TV holography) [20] is used for transient, dynamic and fast events. The speed of the events to be recorded does not usually allow the time-consuming phase-stepping technique used above. Instead, the Fourier transform evaluation (spatial phase shifting) method is used. A slightly


off-axis, smooth reference beam is introduced into the set-up at an angle somewhat smaller than the value imposed by the Nyquist condition. A spatial carrier of frequency f_0 now substitutes for the phase-stepping term nα in Eqs. (4) and (5). The remaining two equations are

I(\mathbf{r}_1) = I_0(\mathbf{r}_1) + I_m(\mathbf{r}_1) \cos[\phi(\mathbf{r}_1) + 2\pi f_0],   (6)

I(\mathbf{r}_2) = I_0(\mathbf{r}_2) + I_m(\mathbf{r}_2) \cos[\phi(\mathbf{r}_2) + 2\pi f_0 + \Omega(\mathbf{r}_2)].   (7)

The cosine function above is first written as the sum of two conjugate complex exponential functions. The two-dimensional Fourier transform of each of these equations can then, in principle, be written as

\tilde{I}(u, v) = \tilde{I}_0(u, v) + C(u - f_0, v) + C^{*}(u + f_0, v).   (8)

The notation Ĩ above is used for the Fourier transform of I. C is then the Fourier transform of (I_m/2) e^{iφ} e^{i2πf_0}, and C* is its conjugate. With the spatial carrier frequency f_0 correctly chosen, the spectral components will be well separated in the spectral domain and the individual spectra can be filtered out. We now have several possibilities to minimise speckle decorrelation. By calculating the magnitude of one of the C spectra, we get the speckle patterns in the aperture plane before and after deformation, respectively. The motion between these two speckle patterns can then be calculated using DSP, and a filter can be chosen such that only the speckles that show up in pairs are allowed to pass the synthetic aperture. As shown in Section 4.1, this filter, first suggested by Burke [15], is able to improve the Yamaguchi correlation factor considerably. The remaining spectrum is then back-transformed to (x, y)-space, where the magnitude (speckle pattern) and phase (the arctangent of the quotient between the imaginary and real parts) of the speckles can be calculated directly. With the amplitude and phase of the speckles separated, we can follow the same procedure as for the temporal phase stepping described above. It should be noted that the number of points in which the in-plane speckle fields are calculated depends upon the application. Remember that we only need to locate the speckle motion with a precision of approximately one pixel. For a pure translation, only a few evaluation points are necessary, but for more complicated motions a higher number is recommended.
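The temporal phase-stepping evaluation of Eqs. (4) and (5) can be sketched for the common four-frame case (N = 4, α = π/2). The combination below is the standard four-step formula; the synthetic single-point check uses assumed values.

```python
import numpy as np

def phase_from_four_frames(I0, I1, I2, I3):
    """Four-frame phase stepping with alpha = pi/2.
    With I_n = Ib + Im*cos(phi + n*pi/2):
        I3 - I1 = 2*Im*sin(phi),  I0 - I2 = 2*Im*cos(phi),
    so the random speckle phase phi follows from an arctangent."""
    return np.arctan2(I3 - I1, I0 - I2)

# synthetic check at a single detector point (values are assumptions)
phi_true = 1.2                         # speckle phase [rad]
Ib, Im = 5.0, 2.0                      # background and modulation
frames = [Ib + Im * np.cos(phi_true + n * np.pi / 2) for n in range(4)]
phi = phase_from_four_frames(*frames)  # recovers phi_true
```

Applying the same combination to the second exposure yields φ + Ω, and the sought phase Ω follows by subtraction after the DSP compensation described in the text.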
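The spatial-carrier evaluation of Eqs. (6)–(8) can likewise be sketched in one dimension: filter out the sideband around +f_0, back-transform, and take the phase. The carrier is written explicitly as 2π f_0 x here, and all values (band width, carrier frequency, test phase) are illustrative assumptions.

```python
import numpy as np

N, f0 = 256, 32                     # samples and carrier frequency [cycles]
x = np.arange(N) / N
phi = 0.8 * np.sin(2 * np.pi * x)   # slowly varying phase to recover
I = 4.0 + 2.0 * np.cos(2 * np.pi * f0 * x + phi)  # Eq. (6)-type irradiance

S = np.fft.fft(I)
H = np.zeros(N)
H[f0 - 8 : f0 + 9] = 1.0            # pass only the band around +f0 (the C term)
c = np.fft.ifft(S * H)              # complex field ~ (Im/2) e^{i(2 pi f0 x + phi)}
# remove the carrier and wrap to (-pi, pi]: this recovers phi
recovered = np.angle(c * np.exp(-2j * np.pi * f0 * x))
```

The DC term and the conjugate sideband C* fall outside the pass band, so the recovered phase equals phi up to a small filter-truncation error.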

4. Some special cases

In the following sections, a number of different applications of the combined technique are presented.

4.1. Improving fringe visibility through adaptive Fourier filtering

The purpose of this section is to show an experiment that demonstrates the improvement in fringe visibility that can be obtained through adaptive Fourier filtering as described in Sections 2 and 3. When transformed to the Fourier plane


(and appropriate filtering has taken place), two speckle patterns representing the states before and after deformation can be constructed as the magnitude squared of each frequency component. The average motion between these two speckle patterns is calculated in one central sub-image using cross-correlation (DSP). The magnitude of this motion defines the size of the filter to be used. For practical reasons, the motion is truncated to the closest integer pixel value. Once the size is determined, the filter is applied to separate sides of the two Fourier spectra, cancelling those speckles that do not appear in both spectra. Once the filter is applied, the two spectra are back-transformed and the phase difference is calculated.

Results from a numerical experiment with an SI set-up sensitive to out-of-plane deformations are shown in Fig. 1. The results show unfiltered and filtered responses due to an object tilt about the vertical axis for three different tilt magnitudes. These tilt angles resulted in calculated relative speckle displacements of 1.9%, 3.9%, and 7.8% of the aperture width, respectively. As can be seen from the wrapped phase maps in Fig. 1, the filter results in a significant improvement in fringe contrast, particularly for the larger tilt magnitudes.

4.2. Tensile test of a small specimen

To measure the deformation in the plane perpendicular to the observation direction, an in-plane interferometric set-up is used [21]. This means that two beams illuminate the object at equal angles to the observation direction, in this case 45°. The laser is a frequency-doubled Nd:YAG laser with a wavelength of 532 nm, and the images are captured with an NEC Model TI-234A camera (480 × 512 pixels). When studying small objects of mm size, it is likely that the deformation becomes larger than one speckle diameter, and the fringes will therefore be lost due to speckle decorrelation. To compensate for this and retrieve the fringes, SP is used as mentioned in Section 3 [16].

In Fig. 2, the results from an experiment on a plastic plate with a small hole (1 mm in diameter) are shown. The uncompensated and compensated phase maps are shown, as well as the DSP result used to determine the compensation. The sample is loaded in the horizontal direction in a tensile test machine, and the phase map shows the deformation field in the same direction. During the experiment, the sample deforms at the same time as there is a large translation in the same direction. The translation is 0.3 mm, which corresponds to a displacement of 9 pixels on the detector; this displacement is seen in Fig. 2b. When this translation is compensated for, the retrieved phase map shown in Fig. 2c is obtained. There are 10 fringes across the interferogram, which means that there is a deformation of 3.8 µm across the object in this experiment. This clearly shows that, with this combined technique, fringes that would be lost using ordinary interferometry can be retrieved.

There is one drawback with this technique when two beams are used to illuminate the object at equal angles: if the speckles are the slightest bit defocused

Fig. 1. Filtered and unfiltered phase maps from computer-generated speckle patterns as a result of three different tilt magnitudes. The calculated speckle motions across the aperture were (a) 1.9%, (b) 3.9% and (c) 7.8% of the aperture width, respectively.

(i.e. the object is not perfectly focused onto the detector), the speckle fields from the two beams will move in different directions in the detector plane if there is an in-plane rotation or shear of the object. This means that the SP calculations might be hard to perform.
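The DSP step itself, locating the cross-correlation peak between two sub-images and truncating it to the nearest integer pixel, can be sketched as follows. A random pattern stands in for the speckle field; the cyclic shift and the function name are assumptions for illustration.

```python
import numpy as np

def integer_shift(ref, moved):
    """Integer-pixel (dy, dx) shift that maps ref onto moved, found as
    the peak of the FFT-based cyclic cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # indices past the half-size correspond to negative shifts
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, xcorr.shape))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                  # stand-in for a speckle sub-image
moved = np.roll(ref, (3, -5), axis=(0, 1))  # known bulk motion
shift = integer_shift(ref, moved)           # (3, -5)
```

In practice one such evaluation per region of interest suffices, consistent with the remark that a pure translation needs only a few evaluation points.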



4.3. Bending wave propagation in a rotating hard disc

Rotating objects such as disc brakes and wheels are often technically important. To measure the out-of-plane deformation field of a wheel while it is rotating (in-plane) is, however, not always an easy task. Holographic interferometry techniques [22] have been developed to attack such problems. Methods, which most often make use of pulsed lasers, include:

* An optical set-up with illumination and observation directions such that the measuring sensitivity is low for in-plane motions and high for out-of-plane deformations.
* Object-related triggering of pulsed lasers, so that pulses are fired at the same angular positions but at different revolutions.
* An interferometer that rotates synchronously with the object. This is a complicated and expensive device, but it allows the recording of double exposures at all angular positions of the rotating object.
* Producing a stationary object wave field from a rotating object by using an image derotator [22,23]. This technique is also quite expensive and complicated.

Out-of-plane transient deformations, like bending wave propagation in moving objects, can be measured using the combined SP and pulsed SI (pulsed TV holography) technique described in Section 3 [24]. Here this technique is applied to the measurement of transient bending waves on a rotating hard disc. A twin-cavity injection-seeded pulsed Nd:YAG laser (Spectron) is used both as the light source and for excitation of the bending waves. The wavelength is 532 nm (frequency-doubled Nd:YAG). A CCD camera (PCO Sensicam) records two image-plane holograms of the object. The first laser pulse illuminates the disc in its "undisturbed" condition and, at the same instant, part of the light is focused onto the disc to generate bending waves. The next laser pulse is fired some microseconds later and the camera records the disc in a "disturbed" state.

Fig. 3a shows a phase map of the hard disc 20 µs after impact without any compensation. The centre of the disc is at x = 0 and y = 21 mm, and the rim (radius 47.5 mm) is seen at the right-hand side. The impact position is 3 mm above the top part of the image and 5 mm from the rim. The rotation causes the rim to move 0.36 mm between the recorded images, corresponding to a speckle translation of about 9 pixels in the image plane. This causes the interference phase to be lost except close to the centre and outside the disc. Due to the comparatively large in-plane rotation, unwanted straight fringes are also introduced, caused by the slightly different sensitivity over the field of view.

Fig. 2. Deformation field of a plastic plate with an added translation. The plate has a 1 mm hole in the middle and the field of view is 14 × 18 mm. (a) Uncompensated phase map of the deformation. (b) DSP results, where the arrows indicate the magnitude and direction of the rigid body motion in each sub-image. This result is then used to obtain the compensated phase map in (c), where the fringes describe the in-plane deformation of the plate.


Fig. 3. Bending waves in a rotating hard disc (3660 rpm, radius 47.5 mm), excited by a focused laser pulse at a point 3 mm above the image and 5 mm from the rim. (a) Phase map 20 µs after impact, without compensation for the rotation. (b) Phase map when compensation for the rotation is performed, restoring the bending waves.
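The compensation step used in both experiments, shifting the second speckle field back by the measured bulk motion before the phase difference is formed, can be sketched for a pure integer-pixel translation. The complex fields and the 9-pixel shift below are synthetic assumptions echoing the numbers in the text.

```python
import numpy as np

rng = np.random.default_rng(7)
speckle = np.exp(2j * np.pi * rng.random((64, 64)))        # state-1 field
deform = np.fromfunction(lambda y, x: 1e-2 * x, (64, 64))  # phase change
state2 = np.roll(speckle * np.exp(1j * deform), 9, axis=1)  # 9-px bulk shift

naive = np.angle(state2 * np.conj(speckle))      # decorrelated phase map
comp = np.roll(state2, -9, axis=1)               # undo the measured motion
restored = np.angle(comp * np.conj(speckle))     # clean wrapped phase map
```

After the shift is undone, `restored` reproduces the deformation phase exactly in this idealised cyclic example; with real data, sub-image-wise shifts and non-integer motion make the bookkeeping more involved.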

Compensating for this rotational motion restores the interference phase. Fig. 3b shows a phase map of the bending waves over the surface. The straight fringes in Fig. 3a are also compensated for. It is interesting to note that bending waves reflected early at the free rim have a high amplitude and seem to travel with quite high

ARTICLE IN PRESS N.-E. Molin et al. / Optics and Lasers in Engineering 41 (2004) 673–686

685

amplitude along the rim (at this time instant positioned at x = 45 mm, y = 33 mm). The fastest bending waves observed have a velocity of about 1500 m/s. This experiment shows that an out-of-plane deformation on the order of 1/1000 of the in-plane motion can be measured.

4.4. Measuring the shear in shearography

Shearography [25] is a robust interferometric method in which neighbouring (sheared) points on the object surface are brought to interfere. The method is suitable for non-destructive testing since it detects local defects and faults. Shearography usually offers only a qualitative picture, a shearogram, of the local slope change. To calculate a quantitative measure, the shear magnitude must be known at all points. In [26], the shear magnitude was measured in a shearography set-up using the speckle correlation technique. The object field was divided by a beam splitter (in a Michelson interferometer set-up) onto mirrors, one of which was tilted to introduce a variable amount of shear. The shear field was easily determined from speckle recordings in which one of the object beams was blocked while the other was open, and vice versa. With the shear field determined in this way, the slope field could be calculated, i.e. quantitative values were obtained. The simplicity of the combined speckle correlation and shearography technique improves the usability of shearography.

In [27], numerically reconstructed shearograms were produced from digital holographic interferometric recordings. Numerically reconstructed holographic interferograms were shifted or sheared a variable amount in the digital reconstruction, creating shearograms. With this method, however, the robustness of ordinary shearography is more or less lost, since holographic recordings are used as starting points. The method nevertheless has interesting properties, in that any shear can be introduced in the shearograms later in the numerical reconstruction.
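The numerical-shearing idea attributed to [27] can be sketched with synthetic complex fields: once the fields for the two object states are available, a shearogram for any shear s follows by correlating each field with a copy of itself shifted by s pixels. Names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
speckle = 2 * np.pi * rng.random((64, 64))                   # random phase
slope = np.fromfunction(lambda y, x: 5e-3 * x**2, (64, 64))  # path-length change
field1 = np.exp(1j * speckle)              # state 1
field2 = np.exp(1j * (speckle + slope))    # state 2

def shearogram(field, s):
    """Correlation of a field with a copy sheared by s pixels in x."""
    return np.roll(field, -s, axis=1) * np.conj(field)

s = 4
# phase change of the sheared correlation between the two states;
# the random speckle phase cancels, leaving slope(x+s) - slope(x),
# i.e. a finite difference proportional to the local slope
delta = np.angle(shearogram(field2, s) * np.conj(shearogram(field1, s)))
```

Away from the cyclic wrap-around in the last s columns, `delta` equals the finite difference of the introduced quadratic phase, which is exactly the slope information a shearogram encodes.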

5. Conclusions

By combining digital SP with digital SI or shearography, fringe visibility may be restored, and quantitative measures of the deformation fields in interferometry or of the shear in shearography are obtained.

Acknowledgements

This research has been supported by the Swedish Research Council for Engineering Sciences (TFR), the Knut and Alice Wallenberg Foundation, the J.C. Kempe Foundation, and the Swedish Foundation for Strategic Research via the Integral Vehicle Structure research school (IVS).


References

[1] Rastogi PK. Measurement of static surface displacements, derivatives of displacements, and three-dimensional surface shapes: examples of applications to non-destructive testing. In: Rastogi PK, editor. Digital speckle pattern interferometry and related techniques. West Sussex: Wiley, 2001.
[2] Yamaguchi I. Speckle displacement and decorrelation in the diffraction and image fields for small object deformation. Opt Acta 1981;28:1359–76.
[3] Sjödahl M. Digital speckle photography. In: Rastogi PK, editor. Digital speckle pattern interferometry and related techniques. West Sussex: Wiley, 2001.
[4] Jones R, Wykes C. Holographic and speckle interferometry. Cambridge, UK: Cambridge University Press, 1983.
[5] Erf RK. Speckle metrology. New York: Academic Press, 1978.
[6] Sjödahl M. Digital speckle photography. In: Rastogi PK, Inaudi D, editors. Trends in optical non-destructive testing and inspection. Oxford: Elsevier, 2000.
[7] Huntley J. Automated analysis of speckle interferograms. In: Rastogi PK, editor. Digital speckle pattern interferometry and related techniques. West Sussex: Wiley, 2001.
[8] Abramson N. The making and evaluation of holograms. London: Academic Press, 1981.
[9] Abramson N. Sandwich hologram interferometry: a new dimension in holographic comparison. Appl Opt 1974;13:2019.
[10] Kreis T. Holographic interferometry: principles and methods. Berlin: Akademie Verlag, 1996.
[11] Yamaguchi I. Fringe formation in deformation and vibration measurements using laser light. In: Wolf E, editor. Progress in optics, vol. XXII. Amsterdam: Elsevier, 1985.
[12] Rastogi PK, editor. Digital speckle pattern interferometry and related techniques. West Sussex: Wiley, 2001.
[13] Leushacke L, Kirchner M. Three-dimensional correlation coefficient of speckle intensity for rectangular and circular apertures. J Opt Soc Am A 1990;7:827–32.
[14] Lehmann M. Speckle statistics in the context of digital speckle interferometry. In: Rastogi PK, Inaudi D, editors. Trends in optical non-destructive testing and inspection. Oxford: Elsevier, 2000.
[15] Burke J. Application and optimisation of the spatial phase shifting technique in digital speckle interferometry. Aachen: Shaker Verlag, 2001.
[16] Andersson A, Runnemalm A, Sjödahl M. Digital speckle pattern interferometry: fringe retrieval for large in-plane deformations with digital speckle photography. Appl Opt 1999;38(25):5408–12.
[17] Stetson KA, Brohinsky WR. Electro-optic holography and its application to hologram interferometry. Appl Opt 1985;24:3631–7.
[18] Stetson KA. Theory and applications of electronic holography. In: Stetson KA, Pryputniewicz RJ, editors. Proceedings of the International Conference on Hologram Interferometry and Speckle Metrology. Bethel, CT, USA: Society for Experimental Mechanics, 1990.
[19] Surrel Y. Customised phase shift algorithms. In: Rastogi PK, Inaudi D, editors. Trends in optical non-destructive testing and inspection. Oxford: Elsevier, 2000.
[20] Gren P, Schedin S, Li X. Tomographic reconstruction of transient acoustic fields recorded by pulsed TV holography. Appl Opt 1998;37(5):834–40.
[21] Leendertz J. Interferometric displacement measurements on scattering surfaces utilizing the speckle effect. J Phys E 1970;3:214–8.
[22] Vest CM. Holographic interferometry. New York: Wiley, 1979 (Chapter 4).
[23] Stetson K. The use of an image derotator in hologram and speckle interferometry of rotating objects. Exp Mech 1978;18:67–73.
[24] Gren P. Pulsed TV holography combined with digital speckle photography restores lost interference phase. Appl Opt 2001;40(14):2304–9.
[25] Hung Y. Shearography: a new optical method of strain measurement and non-destructive testing. Opt Eng 1982;21:391–5.
[26] Andersson A, Krishna Mohan N, Sjödahl M, Molin N. TV shearography: quantitative measurement of shear magnitude fields using digital speckle photography. Appl Opt 2000;39(16):2565–8.
[27] Schnars U, Jüptner W. Digital recording and reconstruction of holograms in hologram interferometry and shearography. Appl Opt 1994;33(20):4373–7.