Fringe pattern denoising using spatial oriented Gaussian filters


Optics Communications 457 (2020) 124704


Jesús Villa a,∗, Efrén González a, Gamaliel Moreno a, Ismael de la Rosa a, Jorge Luis Flores b, Daniel Alaniz a

a Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Av. Ramón López Velarde 801, Zacatecas, C.P. 98000, Mexico
b Departamento de Electrónica, Universidad de Guadalajara, Av. Revolución 1500, C.P. 44840, Guadalajara, Jalisco, Mexico

Keywords: Fringe images; Filtering; Gaussian filters

ABSTRACT

In this paper we propose a Gaussian convolution-based fringe pattern denoising method. As will be shown, this method is robust compared with some of the most outstanding methods in the literature. Additionally, the proposed method overcomes the underperformance in low-frequency fringes that is common in most oriented filtering methods, while keeping the main advantages of convolution-based filters. The merits of the proposed denoising method are demonstrated with experiments performed on synthetic and real fringe patterns, comparing its performance with four representative methods already reported.

1. Introduction

Digital fringe patterns are commonly the result of optical measurement methods such as holographic interferometry, moiré interferometry or electronic speckle pattern interferometry (ESPI) [1–3]. To extract the physical information from fringe patterns, the well-known phase demodulation is required [4,5]. Due to the experimental conditions of these optical methods, the quality of the fringe images is usually affected by noise. For this reason, and for a proper phase demodulation process, fringe pattern denoising has become a relevant topic widely investigated in recent years. Owing to their particular characteristics, it is well known that denoising fringe patterns requires special kinds of filters, especially if the fringe image contains high-frequency regions. The strategy of using oriented filters has been widely adopted, with good results. For example, in [6–8] regularization theory was adopted to propose new kinds of oriented filters. Partial differential equations have also been used to design several types of oriented filters [9–16]. The oriented spatial filter masks [17,18], which resulted in a practical and effective method, are a good alternative. Methods that decompose the image into high-frequency fringes, low-frequency fringes, and noise have also been proposed [19,20]. The use of the Fourier transform by means of windows has also been proposed in [21] and [22]; although these two Fourier-based methods do not require the fringe orientation, they have proven to be good alternatives for this purpose. Recently, new kinds of methods based on neural networks have also been proposed [23,24]. The proposal by Jiang et al. [25] consists of an adaptive window filter that adjusts its size and shape according to the fringes' direction and curvature.

The work reported in [26] proposes nonlocal means and related adaptive kernel-based methods. From our experience in the field of fringe pattern denoising, we have observed that most oriented filtering methods perform best in high-frequency fringes; however, they have a drawback: they underperform in low-frequency regions, which causes the filtered fringes to present structures that resemble the brushstrokes of Vincent Van Gogh's "The Starry Night" painting. The reason for this drawback is that the filtering is emphasized along a given direction (anisotropic filter); in other words, the filtering is performed in one dimension. This problem gets worse when the fringe orientation is poorly estimated, which is a common situation in low-frequency fringes. Therefore, to overcome this problem, the filter must become more isotropic as the fringe frequency decreases, which can be achieved by using information about the fringe frequency to adapt the bandwidth of the filter. In this paper, a denoising technique based on Gaussian convolution filters is proposed. As will be shown, the proposal offers good performance against noise and adequately overcomes the above-mentioned problems in low-frequency fringes while keeping the advantages of convolution filters: easy implementation, short processing time and low computational effort. This will be demonstrated with simulated and real fringe patterns.

2. Materials and methods

In order to introduce the problem of denoising fringe patterns, we start by defining the mathematical model of a fringe pattern:

$$I(x, y) = a(x, y) + b(x, y)\cos[\phi(x, y)] + n(x, y). \tag{1}$$


In Eq. (1), $a(x, y)$ is the background, $b(x, y)$ is the amplitude modulation, $n(x, y)$ is additive noise, and $\phi(x, y)$ is the phase, which is related to the physical quantity to be measured. Other kinds of models can be used; for example, when ESPI fringes are processed, some kind of multiplicative noise is present. When the phase $\phi(x, y)$ is a smooth function the image presents low-frequency fringes (note that the local fringe frequency is $\nu(x, y) \approx \|\nabla\phi(x, y)\|$), and therefore noise and fringe information are well separated in the frequency domain. In this situation, denoising this kind of image may be an easy task using linear spatially invariant filters such as convolution masks or Fourier-based filters. Unfortunately, this condition often does not hold, and the use of more sophisticated methods is then required, such as the well-known oriented filtering techniques, which need a previous estimation of the fringe orientation. In the subsequent explanations it will be assumed that, in a small square region $\Gamma$ of a fringe pattern with the pixel $(x, y)$ at the center of $\Gamma$, the phase $\phi(\xi, \eta)$ is approximated by a plane, where $(\xi, \eta) \in \Gamma$. In other words, the first-order Taylor series approximation is assumed:

$$\phi(\xi, \eta) \approx \phi(x, y) + \nabla\phi(x, y) \cdot (x - \xi, y - \eta). \tag{2}$$

Consequently, the model of the fringe pattern in $\Gamma$ is

$$I(\xi, \eta) \approx a(\xi, \eta) + b(\xi, \eta)\cos\left[\phi(x, y) + \nabla\phi(x, y) \cdot (x - \xi, y - \eta)\right], \tag{3}$$

which represents straight fringes. As is known in the field of signal and image processing, the local frequency of a signal of the form $\cos\phi(x, y)$ is given by the vector $\left(\frac{\partial\phi}{\partial x}, \frac{\partial\phi}{\partial y}\right)$; therefore, in an ideal case in which $a(x, y)$ and $b(x, y)$ are constants, the fringe frequency is equal to the phase gradient. The physical meaning of the fringe frequency, or of the phase gradient, depends on the kind of optical measurement method. For example, in a Fizeau interferometer for optical tests the fringe frequency represents the wavefront gradient, whereas in shearing interferometers it represents the second-order derivatives of the wavefront under test. In our case, we do not focus the work on a specific optical test.
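To make the model concrete, Eq. (1) can be synthesized directly. The following is a minimal sketch, not taken from the paper: the background, modulation, phase and noise level are arbitrary placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary example of the model of Eq. (1) on a 256 x 256 grid.
y, x = np.mgrid[0:256, 0:256] / 256.0
a = 0.2 + 0.1 * x                      # background a(x, y)
b = 0.8 - 0.2 * y                      # modulation b(x, y)
phi = 30.0 * (x ** 2 + y ** 2)         # smooth phase phi(x, y)
n = rng.normal(0.0, 0.1, x.shape)      # additive noise n(x, y)

I = a + b * np.cos(phi) + n            # fringe pattern, Eq. (1)
```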

2.1. Fringe orientation and fringe frequency estimation

Here, a method to estimate the fringe orientation and fringe frequency is presented. The first step in this task is the normalization of the image; in our case we recommend the normalization method reported in [27]. After normalization, the fringe image takes values in the interval $[-1, 1]$, that is, $I_n(x, y) = \cos[\phi(x, y)]$. The gradient of $I_n(x, y)$, defined as

$$\nabla I_n(x, y) = \left[-\sin[\phi(x, y)]\,\frac{\partial\phi}{\partial x},\; -\sin[\phi(x, y)]\,\frac{\partial\phi}{\partial y}\right], \tag{4}$$

with the simple discrete approximation using central differences,

$$\nabla I_n(x, y) \approx \frac{1}{2}\left[I_n(x+1, y) - I_n(x-1, y),\; I_n(x, y+1) - I_n(x, y-1)\right], \tag{5}$$

may be used to compute a rough version of the fringe orientation; however, owing to the noise, the poor spatial support and the fact that in some places $\sin[\phi(x, y)] = 0$ (which occurs at peaks and valleys), this direct approach is not recommended. Instead, in this paper we propose the method described in the following paragraphs.

Consider that the fringe pattern gradient $\nabla I_n(x, y)$ has been previously computed with (5). Then, consider a square window $\Gamma$ of size $L \times L$ centered at the pixel $(x, y)$ where the orientation and frequency are being computed. The field of directional derivatives along the direction defined by the angle $\theta$ in the window is

$$D_{\mathbf{v}}(\xi, \eta) = \nabla I_n(\xi, \eta) \cdot \mathbf{v}, \quad (\xi, \eta) \in \Gamma, \tag{6}$$

where

$$\mathbf{v} = [\cos(\theta), \sin(\theta)]. \tag{7}$$

Now, defining a set of $n$ angles $\theta_k \in [0, \pi)$, for $k = 0, 1, 2, \ldots, n-1$, that is $\theta_k \in \left\{0, \frac{\pi}{n}, \frac{2\pi}{n}, \frac{3\pi}{n}, \ldots, \frac{(n-1)\pi}{n}\right\}$, the following set of variance values is computed:

$$\sigma^2(\theta_k) = \frac{1}{L^2}\sum_{(\xi, \eta) \in \Gamma}\left(D_{\mathbf{v}}(\xi, \eta, \theta_k) - \mu(\theta_k)\right)^2, \tag{8}$$

where

$$\mu(\theta_k) = \frac{1}{L^2}\sum_{(\xi, \eta) \in \Gamma} D_{\mathbf{v}}(\xi, \eta, \theta_k). \tag{9}$$

Given that $\sigma^2(\theta_k)$ is a measure of the dispersion of $D_{\mathbf{v}} \in \Gamma$ as a function of $\theta_k$, it can be deduced that, if the second-order partial derivatives of $\phi(\xi, \eta)$ are close to zero (i.e., $\phi(\xi, \eta)$ can be represented by a plane), a good approximation to the fringe orientation $\psi$ at pixel $(x, y)$ is

$$\psi(x, y) \approx \arg\max_{\theta_k}\{\sigma^2(\theta_k)\}. \tag{10}$$

Additionally, this information can be used to estimate the fringe frequency $\nu$, because

$$\nu(x, y) \propto \max\{\sigma^2(\theta_k)\}. \tag{11}$$

In Fig. 1 we can see an example of a region $\Gamma$ in a fringe pattern and its corresponding $\sigma^2(\theta_k)$ graph.

Fig. 1. (a) Window showing a region of a fringe pattern with an orientation of $3\pi/4$ radians with respect to the $x$-axis. (b) Graph of $\sigma^2(\theta_k)$. Note that the maximum occurs when $\theta_k \approx 3\pi/4$.

A practical way to establish a rule of conversion $\max\{\sigma^2(\theta_k)\} \rightarrow \nu$ is a calibration procedure using synthetic fringe windows of size $L \times L$ for a given set of $n_\nu$ known frequencies. Once the set of $\sigma^2$ values is computed from the synthetic windows, a polynomial fitting is carried out, as shown in Fig. 2. The polynomial fitting is performed for each of the $n$ orientation angles $\theta_k$; then, once the orientation has been estimated in an experiment, the frequency is computed by applying the respective polynomial coefficients according to the estimated orientation at each pixel $(x, y)$.

Although the estimation of fringe orientation and frequency is sensitive to noise, a rough estimate of these parameters is enough for the proposed method, as will be shown in the experiments.
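As a rough illustration of Eqs. (5)–(11) (a sketch, not the authors' code), the per-pixel orientation and frequency estimate could be implemented as follows. The function name, the default window size L = 9 and the border handling are our own assumptions; the default n = 50 matches the number of angles used in the experiments below.

```python
import numpy as np

def estimate_orientation(I_n, r, c, L=9, n=50):
    """Sketch of Eqs. (5)-(11): directional-derivative variance over a window.
    I_n is the normalized fringe pattern (values in [-1, 1]); (r, c) is the
    center pixel of the L x L window Gamma.  Returns the estimated orientation
    psi and max{sigma^2}, the latter being proportional to the local fringe
    frequency (Eq. (11))."""
    # Central-difference gradient of the whole image, Eq. (5).
    d_row, d_col = np.gradient(I_n)

    # L x L window Gamma centered at (r, c); image borders are ignored here.
    t = L // 2
    g1 = d_col[r - t:r + t + 1, c - t:c + t + 1]   # derivative along x
    g2 = d_row[r - t:r + t + 1, c - t:c + t + 1]   # derivative along y

    # Variance of the directional derivative for each candidate angle,
    # Eqs. (6)-(9).
    thetas = np.arange(n) * np.pi / n
    sigma2 = np.array([np.var(g1 * np.cos(th) + g2 * np.sin(th))
                       for th in thetas])

    k = int(np.argmax(sigma2))
    return thetas[k], sigma2[k]        # Eqs. (10) and (11)
```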


Fig. 2. Polynomial fitting to establish a rule of conversion $\max\{\sigma^2(\theta_k)\} \rightarrow \nu$. At the top of the figure some synthetic fringe image windows of different frequencies are shown. After computing the set of $\sigma^2$ values along the known $\theta$, we use least-squares fitting. We found that for practical purposes a set of $n_\nu = 30$ frequency values and a fifth-order polynomial gives proper results for most cases.
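The calibration of Fig. 2, which maps $\max\{\sigma^2(\theta_k)\}$ to a frequency value, could be sketched as below. This is only an illustration under assumptions: the helper make_fringe_window, the frequency range (in cycles per pixel) and the fixed orientation are our own choices; the fifth-order polynomial and the 30 calibration frequencies follow the values quoted in the caption.

```python
import numpy as np

def make_fringe_window(L, freq, theta):
    """Illustrative helper: synthetic straight-fringe window of size L x L
    with known frequency (cycles/pixel) and phase-gradient direction theta."""
    t = L // 2
    xx, yy = np.meshgrid(np.arange(-t, t + 1), np.arange(-t, t + 1))
    phase = 2.0 * np.pi * freq * (xx * np.cos(theta) + yy * np.sin(theta))
    return np.cos(phase)

def calibrate(L=9, n_freq=30, theta=0.0, order=5):
    """Least-squares polynomial that converts max{sigma^2} into a frequency
    value for a given orientation (one fit per orientation angle)."""
    freqs = np.linspace(0.02, 0.4, n_freq)       # assumed frequency range
    sigma2_max = []
    for f in freqs:
        I_w = make_fringe_window(L, f, theta)
        d_row, d_col = np.gradient(I_w)
        # For straight fringes the directional derivative along the phase
        # gradient direction gives the largest variance (Eqs. (6)-(9)).
        D_v = d_col * np.cos(theta) + d_row * np.sin(theta)
        sigma2_max.append(np.var(D_v))
    return np.polyfit(sigma2_max, freqs, order)  # sigma^2 -> frequency

coeffs = calibrate()
# Later, for a measured value s2 at a pixel with this orientation:
# nu = np.polyval(coeffs, s2)
```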

2.2. Spatial oriented Gaussian filter

The fringe pattern denoising method proposed here, based on spatial oriented Gaussian filters, requires the previous estimation of the fringe orientation and fringe frequency. Let us start by defining a convolution matrix $W = w(\xi, \eta; \psi, \nu)$ of size $N \times N$, where

$$\xi = \eta = -t, -t+1, \ldots, 0, \ldots, t-1, t. \tag{12}$$

The values $\psi$ and $\nu$ are, respectively, the local orientation and frequency at the point of the image where $W$ is applied, and $N = 2t + 1$. We define the weights of $W$ as

$$W = w(\xi, \eta; \psi, \nu) = \exp\left(-\frac{x_r^2}{2\sigma_x^2} - \frac{y_r^2}{2\sigma_y^2}\right), \tag{13}$$

where

$$x_r = \xi\sin\psi + \eta\cos\psi \quad \text{and} \quad y_r = \xi\cos\psi - \eta\sin\psi \tag{14}$$

represent a rotation of the axes. The variances $\sigma_x^2$ and $\sigma_y^2$ define the bandwidth of the filter along the $x$ and $y$ directions, respectively (keeping in mind that the $x$–$y$ axes are rotated according to $\psi$). In our case, these parameters are functions of the local frequency $\nu$ in the image. In general, $W$ represents a Gaussian filter whose orientation is defined by the local fringe orientation in the image and whose bandwidth is adjusted according to the local fringe frequency. As will be explained later, the way the parameters $\sigma_x^2$ and $\sigma_y^2$ are related to the local fringe frequency was deduced empirically from experimental results.

The working principle of the proposed method consists of increasing the cut-off frequency of the filter along the fringe orientation (i.e., decreasing the value of $\sigma_x^2$) as the fringe frequency increases, while keeping the cut-off frequency along the fringe contour small (i.e., keeping the value of $\sigma_y^2$ large); this makes the filter more anisotropic. Conversely, the cut-off frequency along the fringe orientation is decreased (i.e., $\sigma_x^2$ is increased) as the fringe frequency decreases, again keeping the cut-off frequency along the fringe contour small (i.e., keeping $\sigma_y^2$ large); this makes the filter more isotropic. See Fig. 3. Keeping in mind that the local fringe period in the image is $\nu^{-1}$, we found empirically that for most practical cases the best results were obtained using

$$\sigma_x^2 = \left(\frac{1}{6\nu}\right)^2 \quad \text{and} \quad \sigma_y^2 = t^2. \tag{15}$$

As can be seen in Eq. (15), the value of $\sigma_x^2$ grows in inverse proportion to the frequency $\nu$, whereas $\sigma_y^2$ is fixed according to the size of the window. This is because the phase in the region of the window is considered a plane (i.e., second-order derivatives are considered equal to zero). A better performance could be achieved if the value of $\sigma_y^2$ were also variable; however, this would require knowledge of the second-order partial derivatives of the phase, in other words, the local curvature of the fringes.
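A minimal sketch of the kernel defined by Eqs. (12)–(15), and of its pixel-wise application, might look as follows. This is an illustration, not the authors' implementation: the function names, the normalization of the kernel weights to unit sum, the guard against very small frequencies and the plain double loop are our own assumptions.

```python
import numpy as np

def oriented_gaussian_kernel(psi, nu, N=9):
    """Sketch of Eqs. (12)-(15): N x N Gaussian kernel oriented according to
    the local fringe orientation psi and adapted to the local frequency nu."""
    t = N // 2
    xi, eta = np.meshgrid(np.arange(-t, t + 1), np.arange(-t, t + 1))

    # Rotated coordinates, Eq. (14).
    x_r = xi * np.sin(psi) + eta * np.cos(psi)
    y_r = xi * np.cos(psi) - eta * np.sin(psi)

    # Variances, Eq. (15): sigma_x^2 shrinks as the frequency grows,
    # sigma_y^2 is fixed by the window size.
    sigma_x2 = (1.0 / (6.0 * max(nu, 1e-3))) ** 2   # guard against nu ~ 0
    sigma_y2 = float(t) ** 2

    w = np.exp(-x_r ** 2 / (2 * sigma_x2) - y_r ** 2 / (2 * sigma_y2))  # Eq. (13)
    return w / w.sum()            # unit-sum normalization is our assumption

def filter_fringe_pattern(I_n, psi_map, nu_map, N=9):
    """Pixel-wise application of the spatially varying kernel (a plain loop;
    the paper does not prescribe a particular implementation)."""
    t = N // 2
    out = I_n.copy()
    rows, cols = I_n.shape
    for r in range(t, rows - t):
        for c in range(t, cols - t):
            w = oriented_gaussian_kernel(psi_map[r, c], nu_map[r, c], N)
            out[r, c] = np.sum(w * I_n[r - t:r + t + 1, c - t:c + t + 1])
    return out
```

For reference, the OSGF window sizes reported in Tables 1–4 (9 × 9 and 13 × 13) appear to correspond to N here.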

3. Experimental results and discussion

In the following subsections we describe the experimental results obtained by applying our proposed method and comparing it with four other methods: the oriented spatial filter masks (OSFM) [17,18], the localized Fourier transform filter (LFTF) [21], the windowed Fourier transform filter (WFTF) [22], and the boundary-aware coherence enhancing diffusion (BCED) [15]. In particular, we have found that the WFTF has one of the best performances against noise among all reported methods. All the experiments were performed on a general-purpose 3.4 GHz Intel Core i5 (7th Gen) computer with 8 GB of RAM. Except in the simulations, in all the other experiments the fringe pattern normalization method reported in [27] was applied prior to the fringe orientation and frequency estimation, and again after filtering. In all the following experiments, for the orientation and frequency estimation we used a fifth-order polynomial, $n_\nu = 30$ frequencies, and $n = 50$ angles.

3.1. Simulations and error analysis

The first experiment consisted of a comparative analysis using a simulated fringe pattern. Fig. 4 shows the noiseless image, the noisy image, the estimated fringe orientation, and the estimated fringe frequency. In this case we used uniformly distributed phase noise ranging in $[-1, 1]$ radians. It is important to remark that phase noise is a good choice to evaluate the performance of denoising methods because it represents a mix of additive and multiplicative noise that can be used to simulate speckle noise, present in ESPI and digital holography. That is,

$$I_n = \cos[\phi + n] = \cos[\phi]\cos[n] - \sin[\phi]\sin[n]. \tag{16}$$

Fig. 5 and Table 1 show the results obtained using the compared denoising methods. The parameter values that gave the best results for each method were used in all experiments. The parameter values in this experiment for the LFTF method were $\sigma = 0.01$ and $\alpha = 1$.
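Referring back to Eq. (16), the phase-noise corruption used in this simulation could be reproduced with something like the following sketch; the phase function and image size are placeholders, not the authors' exact test pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder smooth phase over a 256 x 256 grid (not the authors' pattern).
y, x = np.mgrid[0:256, 0:256] / 256.0
phi = 40.0 * (x - 0.5) ** 2 + 25.0 * np.sin(4.0 * y)

# Uniform phase noise in [-1, 1] radians, Eq. (16):
# cos(phi + n) = cos(phi)cos(n) - sin(phi)sin(n),
# i.e., a mixture of multiplicative and additive corruption.
n = rng.uniform(-1.0, 1.0, phi.shape)
I_noisy = np.cos(phi + n)
I_clean = np.cos(phi)
```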


Fig. 3. Gray-level representation of spatial oriented Gaussian filters applied in zones with a fringe orientation of $5\pi/8$ radians, for three different frequencies. Filters applied in (a) high-frequency fringes, (b) middle-frequency fringes and (c) low-frequency fringes. Note that the filter becomes more anisotropic as frequency increases. Conversely, the filter becomes more isotropic as frequency decreases.

Fig. 4. (a) Simulated noiseless fringe pattern and (b) simulated noisy fringe pattern with uniformly distributed phase noise ranging in $[-1, 1]$ radians. (c) Estimated fringe orientation and (d) estimated fringe frequency, both coded in gray levels.

For the WFTF method, the parameter values were $w_{xl} = w_{xh} = w_{yl} = w_{yh} = 2$, $w_{xi} = w_{yi} = 0.1$, $thr = 2$ and $\sigma = 10$. For the OSFM method it is necessary to apply the filter several times, as suggested by its authors. Similarly, the parameters of the BCED method were selected as recommended by its authors. To evaluate the performance of the methods against noise we computed two different image quality measures: the fidelity and the so-called Universal Image Quality Index (IQI) reported in [28]. The fidelity is defined as

$$f = 1 - \frac{\|I - \tilde{I}\|^2}{\|I\|^2}, \tag{17}$$

where $I$ is the noiseless fringe pattern and $\tilde{I}$ the fringe pattern under evaluation. The IQI has the advantage that it measures distortions considering three factors: loss of correlation, luminance distortion and contrast distortion. The dynamic range of the IQI is $[-1, 1]$, where the worst value is $-1$ and the best value is $1$ (see reference [28] for details).

Fig. 5. Using the simulated interferogram shown in Fig. 4, these are the results obtained by applying (a) the OSFM, (b) our method (OSGF), (c) the LFTF, (d) the WFTF, and (f) the BCED.
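For reference, the fidelity of Eq. (17) is straightforward to compute. The IQI of [28] is shown below in a simplified form averaged over non-overlapping blocks; the block size and border handling are our assumptions, and the original index uses a sliding window.

```python
import numpy as np

def fidelity(I_ref, I_test):
    """Eq. (17): f = 1 - ||I_ref - I_test||^2 / ||I_ref||^2."""
    return 1.0 - np.sum((I_ref - I_test) ** 2) / np.sum(I_ref ** 2)

def universal_iqi(I_ref, I_test, B=8):
    """Simplified Universal Image Quality Index [28], averaged over
    non-overlapping B x B blocks (the original uses a sliding window)."""
    q = []
    for r in range(0, I_ref.shape[0] - B + 1, B):
        for c in range(0, I_ref.shape[1] - B + 1, B):
            a = I_ref[r:r + B, c:c + B].ravel()
            b = I_test[r:r + B, c:c + B].ravel()
            ma, mb = a.mean(), b.mean()
            va, vb = a.var(), b.var()
            cov = np.mean((a - ma) * (b - mb))
            denom = (va + vb) * (ma ** 2 + mb ** 2)
            if denom > 0:
                # Combines correlation loss, luminance and contrast distortion.
                q.append(4.0 * cov * ma * mb / denom)
    return float(np.mean(q))
```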

3.2. Application to real fringe patterns

The following are three experiments performed on real interferograms. The main purpose of these experiments is to demonstrate the merit of our proposal when an interferogram has both low-frequency and high-frequency fringes. Fig. 6 shows a real interferogram, its normalized version, the estimated orientation, and the estimated frequency. In this case the size of the image was 200 × 200. Fig. 7 shows the results obtained by applying the methods already mentioned. The processing parameters used for the methods in this experiment are shown in Table 2.


Table 1. Performance comparison of the oriented spatial filter masks (OSFM), our method (OSGF), the localized Fourier transform filter (LFTF), the windowed Fourier transform filter (WFTF), and the boundary-aware coherence enhancing diffusion (BCED).

Technique   Window size   Processing time (s)   Number of applications   Fidelity   IQI
OSFM        5 × 5         33.91                 30                       0.9658     0.8481
OSGF        9 × 9         0.97                  1                        0.9785     0.8644
LFTF        13 × 13       7.25                  1                        0.9751     0.8627
WFTF        21 × 21       60.33                 1                        0.9836     0.9294
BCED        –             12.99                 1                        0.9814     0.8869

Table 2. Processing parameters used for the methods applied to the interferogram shown in Fig. 6. The input parameters required for the LFTF and the WFTF methods were the same as those used for the simulated interferogram, except that now σ = 0.1 for the LFTF method.

Technique   Window size   Processing time (s)   Number of applications
OSFM        5 × 5         5.37                  30
OSGF        9 × 9         0.20                  1
LFTF        15 × 15       1.70                  1
WFTF        21 × 21       20.13                 1
BCED        –             15.81                 1

Table 3. Processing parameters used for the methods applied to the interferogram shown in Fig. 8. The input parameters required for the LFTF and the WFTF methods were the same as those used for the simulated interferogram. Note that the window size for the LFTF method was increased twofold to obtain the best result.

Technique   Window size   Processing time (s)   Number of applications
OSFM        5 × 5         57.06                 30
OSGF        13 × 13       1.80                  1
LFTF        31 × 31       30.32                 1
WFTF        21 × 21       89.25                 1
BCED        –             19.59                 1

Fig. 6. (a) Real interferogram of size 200 × 200 containing low- and high-frequency fringes. (b) Normalized interferogram. (c) Estimated fringe orientation. (d) Estimated fringe frequency. Note that the fringe orientation is not well estimated in low-frequency regions; however, a good estimation of the frequency was obtained.

Fig. 7. Using the interferogram shown in Fig. 6, these are the results obtained by applying (a) the OSFM, (b) our method (OSGF), (c) the LFTF, (d) the WFTF, and (f) the BCED. Note that, due to the poor estimation of the fringe orientation, the OSFM produces structures in low-frequency regions. On the other hand, owing to its more isotropic behavior in low-frequency regions, our method works better than the OSFM.

The second experiment was performed on a real moiré fringe pattern. In this case the size of the image was 512 × 512. Fig. 8 shows the real image, its normalized version, the estimated orientation, and the estimated frequency. Fig. 9 shows the results obtained by applying the five methods. The processing parameters used for the methods in this experiment are shown in Table 3.

The third experiment was the denoising of a real interferogram of size 400 × 500. Fig. 10 shows the original image, its normalized version, the estimated fringe orientation, and the estimated fringe frequency. Fig. 11 shows the results obtained by applying the five methods. The processing parameters used for the methods in this experiment are shown in Table 4.

Fig. 8. (a) Real interferogram of size 512 × 512. (b) Normalized interferogram. (c) Estimated fringe orientation. (d) Estimated fringe frequency.


Fig. 10. (a) Real interferogram of size 400 × 500 containing low- and high-frequency fringes. (b) Normalized interferogram. (c) Estimated fringe orientation. (d) Estimated fringe frequency.

Fig. 9. Using the interferogram shown in Fig. 8, these are the results obtained by applying (a) the OSFM, (b) our method (OSGF), (c) the LFTF, (d) the WFTF, and (f) the BCED.

Table 4. Processing parameters used for the methods applied to the interferogram shown in Fig. 10. The input parameters required for the LFTF and the WFTF methods were the same as those used for the simulated interferogram, except that thr = 1.5 for the WFTF.

Technique   Window size   Processing time (s)   Number of applications
OSFM        5 × 5         28.57                 20
OSGF        13 × 13       1.33                  1
LFTF        15 × 15       9.46                  1
WFTF        21 × 21       70.57                 1
BCED        –             2.47                  1

3.3. Discussion

In general, fringe pattern denoising methods can be classified into three main groups: (1) convolution-based methods, (2) global estimation methods (for example, those based on partial differential equations, regularization, image decomposition, etc.), and (3) transform-based methods (mainly represented by the windowed Fourier transform, the localized Fourier transform, and wavelet-transform-based methods [29,30]). Unlike groups (1) and (2), group (3) has the main advantage of not requiring the previous estimation of the fringe orientation. In particular, the WFTF [22] offers an outstanding performance; however, it has three main drawbacks: it requires a high computational effort and a long processing time and, as occurs with the LFTF [21], its input parameters are usually difficult to tune. On the other hand, although most global estimation methods (group (2)) offer good performance, as depicted in the comparative analysis in [17,18], they are usually difficult to implement (as occurs with the BCED method [14,15]) and their parameters have to be selected by trial and error. Even though the techniques in group (1) (convolution methods) can be considered the simplest and fastest (for example, one of the earliest methods [31]), most of them are not considered robust enough. Most of the fringe pattern denoising methods that require the previous estimation of the fringe orientation (mainly those in groups (1) and (2)) have been found to perform poorly in low-frequency fringes, as has already been explained in this paper. For this reason, the proposal in this paper consists of an algorithm that combines the following advantages: it is robust, has low computational effort, is easy to implement, is fast, and offers better performance in low-frequency fringes compared with other oriented denoising methods.

4. Conclusions

We have proposed a convolution-based fringe pattern denoising method. The convolution window consists of a Gaussian profile that adapts to the orientation and frequency of the fringes. As demonstrated with experiments and comparisons with other representative techniques, the proposal has an outstanding performance against noise, especially in low-frequency fringes, which is not the case with other oriented filtering methods. These merits are added to those already offered by convolution-based filtering methods.

Acknowledgments

We acknowledge the Programa para el Desarrollo Profesional Docente (PRODEP) of México for the partial support of this work and for the postdoctoral grant of Gamaliel Moreno. We also acknowledge Drs. H. Wang and Q. Kemao for providing the code for processing the fringe images with the BCED method. Finally, we acknowledge the reviewers for their helpful comments.

Fig. 11. Using the interferogram shown in Fig. 10, these are the results obtained by applying (a) the OSFM, (b) our method (OSGF), (c) the LFTF, (d) the WFTF, and (f) the BCED.

References

[1] K.J. Gasvik, Optical Metrology, third ed., John Wiley & Sons, 2002.
[2] G. Cloud, Optical Methods of Engineering Analysis, Cambridge University Press, 1995.
[3] S. Sirohi, Optical Methods of Measurement, Wholefield Techniques, second ed., CRC Press, 2009.
[4] M. Servin, J.A. Quiroga, J.M. Padilla, Fringe Pattern Analysis for Optical Metrology, Wiley-VCH, 2014.
[5] D. Malacara, M. Servín, Z. Malacara, Interferogram Analysis for Optical Testing, second ed., Taylor & Francis, 2005.
[6] Y. Li, S. Qu, X. Chen, Z. Luo, Phase pattern denoising using a regularized cost function with complex-valued Markov random fields based on a discrete model, Appl. Opt. 49 (36) (2010) 6845–6849.
[7] J. Villa, J.A. Quiroga, I. De-La-Rosa, Regularized quadratic cost-function for oriented fringe-pattern filtering, Opt. Lett. 34 (2009) 1741–1743.
[8] J. Villa, R. Rodriguez-Vera, J.A. Quiroga, I. De-la Rosa, E. Gonzalez, Anisotropic phase-map denoising using a regularized cost-function with complex-valued Markov random fields, Opt. Lasers Eng. 48 (6) (2010) 650–656.
[9] C. Tang, L. Han, H. Ren, D. Zhou, Y. Chang, X. Wang, et al., Second-order oriented partial-differential equations for denoising in electronic-speckle-pattern interferometry fringes, Opt. Lett. 33 (2008) 2179–2181.
[10] C. Tang, L. Han, H. Ren, Z. Gao, T. Wang, K. Tang, The oriented-couple partial differential equations for filtering in wrapped phase patterns, Opt. Express 17 (2009) 5606–5617.
[11] C. Tang, N. Yang, H. Yan, X. Yan, The new second-order single oriented partial differential equations for optical interferometry fringes with high density, Opt. Lasers Eng. 51 (2013) 707–715.
[12] C. Tang, Q. Mi, Y. Si, Numerous possible oriented partial differential equations and investigation of their performance for optical interferometry fringes denoising, Appl. Opt. 52 (35) (2013) 8439–8450.
[13] L. Cheng, C. Tang, S. Yan, X. Chen, L. Wang, B. Wang, New fourth-order partial differential equations for filtering in electronic speckle pattern interferometry fringes, Opt. Commun. 284 (2011) 5549–5555.
[14] H. Wang, Q. Kemao, W. Gao, F. Lin, S. Seah, Fringe pattern denoising using coherence enhancing diffusion, Opt. Lett. 34 (2009) 1141–1143.
[15] H. Wang, Q. Kemao, Local orientation coherence based segmentation and boundary-aware diffusion for discontinuous fringe patterns, Opt. Express 24 (2016) 15609–15619.
[16] Y. Wu, H. Cheng, Y. Wen, X. Chen, Y. Wang, Coherent noise reduction of phase images in digital holographic microscopy based on the adaptive anisotropic diffusion, Appl. Opt. 57 (19) (2018) 5364–5370.
[17] C. Tang, T. Gao, S. Yan, L. Wang, J. Wu, The oriented spatial filter masks for electronic speckle pattern interferometry phase patterns, Opt. Express 18 (2010) 8942–8947.
[18] C. Tang, L. Wang, H. Yan, C. Li, Comparison on performance of some representative and recent filtering methods in electronic speckle pattern interferometry, Opt. Lasers Eng. 50 (2012) 1036–1051.
[19] B. Li, C. Tang, G. Gao, M. Chen, S. Tang, Z. Lei, General filtering method for electronic speckle pattern interferometry fringe images with various densities based on variational image decomposition, Appl. Opt. 56 (16) (2017) 4843–4853.
[20] S. Fu, C. Zhang, Fringe pattern denoising via image decomposition, Opt. Lett. 37 (3) (2012) 422–424.
[21] C. Li, C. Tang, H. Yan, L. Wang, H. Zhang, Localized Fourier transform filter for noise removal in electronic speckle pattern interferometry wrapped phase patterns, Appl. Opt. 50 (2011) 4903–4911.
[22] K. Qian, Windowed Fourier transform for fringe pattern analysis, Appl. Opt. 43 (2004) 2695–2702.
[23] F. Hao, C. Tang, M. Xu, Z. Lei, Batch denoising of ESPI fringe patterns based on convolutional neural network, Appl. Opt. 58 (13) (2019) 3338–3346.
[24] K. Yan, Y. Yu, C. Huang, L. Sui, K. Qian, A. Asundi, Fringe pattern denoising based on deep learning, Opt. Commun. 437 (2019) 148–152.
[25] H. Jiang, Y. Ma, Z. Su, M. Dai, F. Yang, X. He, Speckle-interferometric phase fringe patterns de-noising by using fringes' direction and curvature, Opt. Commun. 119 (2019) 30–36.
[26] Y. Tounsi, M. Kumar, A. Nassim, F. Mendoza-Santoyo, Speckle noise reduction in digital speckle pattern interferometric fringes by nonlocal means and its related adaptive kernel-based methods, Appl. Opt. 57 (27) (2018) 7681–7690.
[27] J.A. Quiroga, J.A. Gomez-Pedrero, A. Garcia-Botella, An algorithm for fringe pattern normalization, Opt. Commun. 197 (2001) 43–51.
[28] Z. Wang, A. Bovik, A universal image quality index, IEEE Signal Process. Lett. 9 (3) (2002) 81–84.
[29] S. Zada, Y. Tounsi, M. Kumar, F. Mendoza-Santoyo, A. Nassim, Contribution study of monogenic wavelets transform to reduce speckle noise in digital speckle pattern interferometry, Opt. Eng. 58 (3) (2019) 034109.
[30] N. Escalante, J. Villa, I. De-la Rosa, E. De-la Rosa, E. Gonzalez-Ramirez, O. Gutierrez, C. Olvera, M. Araiza, 2-D continuous wavelet transform for ESPI phase-maps denoising, Opt. Lasers Eng. 51 (2013) 1060–1065.
[31] Q. Yu, X. Liu, X. Sun, Generalized spin filtering and an improved derivative-sign binary image method for the extraction of fringe skeletons, Appl. Opt. 37 (20) (1998) 4504–4509.