A novel algorithm of remote sensing image fusion based on Shearlets and PCNN


Shi Cheng, Miao Qiguang (corresponding author; Tel.: +86 13759947711; e-mail: [email protected]), Xu Pengfei
School of Computer, Xidian University, Xi'an, Shaanxi, China
http://dx.doi.org/10.1016/j.neucom.2012.10.025

Article info

Article history: received 23 December 2010; revised 8 May 2012; accepted 21 October 2012; available online 1 December 2012. Communicated by Bijaya Ketan Panigrahi.

Abstract

Shearlet was proposed in 2005. With flexible directional features and a multi-resolution structure similar to that of the wavelet, the Shearlet is finding growing use in image processing. In this paper, a new image fusion algorithm based on Shearlets and PCNN is proposed. A shear matrix is used to decompose the source images into several directions. For each direction, we extract the gradient features, perform a multi-scale decomposition with the wavelet transform, and fuse the high-frequency coefficients using PCNN. Experiments show that the proposed algorithm is effective for image fusion.

Keywords: image processing; multi-resolution; wavelet; Shearlet

1. Introduction

Recently, theories for high-dimensional data representation known as Multi-scale Geometric Analysis (MGA) have been proposed, including the Ridgelet, Curvelet, Contourlet and Shearlet [1-3]. It is well known that the wavelet handles one-dimensional data such as signals well, but performs less well on high-dimensional data such as images. These new MGA tools provide more directional features than the traditional wavelet, so they are widely used in image processing. Proposed in 2005, the Shearlet has all the properties of the other MGA tools, including multi-scale structure, localization, anisotropy and directionality. Moreover, with a rich mathematical structure similar to that of the wavelet and associated with a multi-resolution analysis, it is more amenable to mathematical analysis [4-6]. The decomposition of Shearlets consists of two parts, a multi-scale decomposition and a multidirectional decomposition, which is similar to the Contourlet. The Contourlet transform applies a Laplacian pyramid followed by directional filtering, but it is less efficient at representing directions; Shearlets replace the directional filter bank with a shear matrix, which can provide many more directions. Owing to their rigorous mathematical structure and multidirectionality, Shearlets can not only be constructed in the discrete domain but can also capture image edges better. In recent years Shearlets have mainly been applied to image compression, edge detection and denoising [7-10]; image fusion using Shearlets is still under study.


In image fusion, the fusion rules are important to the quality of the result. PCNN, developed in the 1990s, originates from Eckhorn's study of neural cells in the visual cortex. It is an artificial neural network model that simulates the activity of visual nerve cells [11,12]. Compared with the traditional error back-propagation neural network (BP neural network) [13], the PCNN model has two advantageous characteristics: the linear summation of neuron inputs and the nonlinear modulation coupling. Broussard and his collaborators used PCNN to fuse images in order to improve recognition rates; they established the relationship between neural firing frequency and image intensity and confirmed the feasibility of PCNN for image fusion [14]. In this paper, a new fusion method based on PCNN is proposed. The highlight of the new algorithm is that it exploits the global features of the source images, because PCNN has global coupling and pulse synchronization characteristics [15,16] and accords with the physiological characteristics of the human visual nervous system. Since human vision is more sensitive to edge and orientation information [17,18], we calculate the gradient information of the images after they are decomposed into several directions, and then decompose the resulting feature maps with the wavelet transform. The high-frequency coefficients are input into PCNN to obtain firing maps, from which the fused high-frequency coefficients are selected. Because the directional and gradient characteristics facilitate the firing of PCNN neurons, a precise fusion result is obtained. The paper is organized as follows: Section 2 introduces the theory of Shearlets. PCNN is introduced in Section 3. In Section 4, a novel image fusion algorithm based on Shearlets and PCNN is proposed. Simulation experiments are presented in Section 5, and concluding remarks are given in Section 6.


Fig. 1. Model of PCNN neuron.

2. Shearlets [10,19,20]

An affine family generated by $\psi \in L^2(\mathbb{R})$ is a collection of functions of the form $\{\psi_{a,t}(x) = a^{-1/2}\psi(a^{-1}x - t) : a > 0,\ t \in \mathbb{R}\}$. $\psi$ is called a continuous wavelet if, for all $f \in L^2(\mathbb{R})$,
$$ f(x) = \int_0^{\infty}\int_{-\infty}^{\infty} \langle f, \psi_{a,t}\rangle\, \psi_{a,t}(x)\, dt\, \frac{da}{a}. $$
The continuous wavelet transform of $f$ is $Wf(a,t) = \langle f, \psi_{a,t}\rangle$. The theory extends to higher dimensions through the family $\{\psi_{M,t}(x) = |\det M|^{-1/2}\psi(M^{-1}x - t) : M \in G,\ t \in \mathbb{R}^n\}$. Similar to the one-dimensional case, $\psi$ is called a continuous wavelet if, for all $f \in L^2(\mathbb{R}^n)$,
$$ f(x) = \int_G \int_{\mathbb{R}^n} \langle f, \psi_{M,t}\rangle\, \psi_{M,t}(x)\, dt\, d\lambda(M), $$
where $\lambda$ is a measure on $G$, and $Wf(M,t) = \langle f, \psi_{M,t}\rangle$ is called the continuous wavelet transform of $f$.

We consider the two-dimensional affine system
$$ \{\psi_{ast}(x) = |\det M_{as}|^{-1/2}\psi(M_{as}^{-1}x - t) : M_{as} \in \Gamma,\ t \in \mathbb{R}^2\}, $$
where $\Gamma$ is the two-parameter dilation group
$$ \Gamma = \left\{ M_{as} = \begin{pmatrix} a & \sqrt{a}\,s \\ 0 & \sqrt{a} \end{pmatrix} : (a,s) \in \mathbb{R}^{+} \times \mathbb{R} \right\}. $$
The orientation is controlled by the shear parameter $s$, and the elements become increasingly elongated at fine scales (as $a \to 0$). Choose $\psi$ such that $\hat{\psi}(\xi) = \hat{\psi}(\xi_1,\xi_2) = \hat{\psi}_1(\xi_1)\,\hat{\psi}_2(\xi_2/\xi_1)$, where $\hat{\psi}_1 \in C^{\infty}(\mathbb{R})$ is a wavelet with $\operatorname{supp}\hat{\psi}_1 \subset [-1/2,-1/16] \cup [1/16,1/2]$, and $\hat{\psi}_2 \in C^{\infty}(\mathbb{R})$ with $\operatorname{supp}\hat{\psi}_2 \subset [-1,1]$. This implies $\hat{\psi} \in C^{\infty}(\mathbb{R}^2)$ and $\operatorname{supp}\hat{\psi} \subset [-1/2,1/2]^2$. For all $a \in \mathbb{R}^{+}$, $s \in \mathbb{R}$ and $t \in \mathbb{R}^2$, $Sf(a,s,t) = \langle f, \psi_{ast}\rangle$ is called the continuous Shearlet transform of $f \in L^2(\mathbb{R}^2)$.

Fig. 2. Image decomposition framework with Shearlets.

In this affine system the matrix $M_{as}$ can be factored as
$$ M_{as} = \begin{pmatrix} a & \sqrt{a}\,s \\ 0 & \sqrt{a} \end{pmatrix} = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}\begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \qquad M_{j,l} = B^{l}A^{j}, $$
where $A_0 = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}$ and $B_0 = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ are the anisotropic dilation matrix and the shear matrix. Hence the discrete Shearlet transform function is
$$ \psi^{(0)}_{j,l,k}(x) = 2^{3j/2}\,\psi^{(0)}\!\left(B_0^{l}A_0^{j}x - k\right), $$
where $\hat{\psi}^{(0)}(\xi) = \hat{\psi}^{(0)}(\xi_1,\xi_2) = \hat{\psi}_1(\xi_1)\,\hat{\psi}_2(\xi_2/\xi_1)$, with $\psi_1,\psi_2$ defined as above. The functions $\psi^{(0)}_{j,l,k}$ are associated with the horizontal cone
$$ D_0 = \{(\xi_1,\xi_2) \in \hat{\mathbb{R}}^2 : |\xi_1| \ge 1/8,\ |\xi_2/\xi_1| \le 1\}. $$
Similarly, the vertical cone is
$$ D_1 = \{(\xi_1,\xi_2) \in \hat{\mathbb{R}}^2 : |\xi_2| \ge 1/8,\ |\xi_1/\xi_2| \le 1\}. $$
Let $A_1 = \begin{pmatrix} 2 & 0 \\ 0 & 4 \end{pmatrix}$, $B_1 = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$, and $\hat{\psi}^{(1)}(\xi) = \hat{\psi}^{(1)}(\xi_1,\xi_2) = \hat{\psi}_1(\xi_2)\,\hat{\psi}_2(\xi_1/\xi_2)$, where $\psi_1,\psi_2$ are defined as above. Then the Shearlet transform function in $D_1$ is
$$ \left\{ \psi^{(1)}_{j,l,k}(x) = 2^{3j/2}\,\psi^{(1)}\!\left(B_1^{l}A_1^{j}x - k\right) : j \ge 0,\ -2^{j} \le l \le 2^{j}-1,\ k \in \mathbb{Z}^2 \right\}.
$$
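To make the role of the dilation and shear matrices concrete, the following minimal NumPy sketch (not from the paper; function and variable names are illustrative) enumerates the matrices $B_0^{l}A_0^{j}$ for the horizontal cone and prints the approximate slope $l/2^{j}$ that each shear index assigns to a direction:

```python
import numpy as np

A0 = np.array([[4, 0],
               [0, 2]])          # anisotropic dilation matrix
B0 = np.array([[1, 1],
               [0, 1]])          # shear matrix

def horizontal_cone_matrices(j):
    """Return the matrices B0^l A0^j for l = -2^j, ..., 2^j - 1."""
    Aj = np.linalg.matrix_power(A0, j)
    mats = {}
    for l in range(-2**j, 2**j):
        Bl = np.linalg.matrix_power(B0, l)   # negative powers use the inverse shear
        mats[l] = Bl @ Aj
    return mats

if __name__ == "__main__":
    j = 1
    for l, M in horizontal_cone_matrices(j).items():
        # each shear index l corresponds (roughly) to a direction of slope l / 2**j
        print(f"l = {l:+d}, slope ~ {l / 2**j:+.2f}\n{M}")
```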


To make this discussion more rigorous, it is useful to examine the problem from the point of view of approximation theory. If $F = \{\psi_{\mu} : \mu \in I\}$ is a basis or, more generally, a tight frame for $L^2(\mathbb{R}^2)$, then an image $f$ can be approximated by the partial sums
$$ f_M = \sum_{\mu \in I_M} \langle f, \psi_{\mu}\rangle\, \psi_{\mu}, $$
where $I_M$ is the index set of the $M$ largest inner products $|\langle f, \psi_{\mu}\rangle|$. The resulting approximation error is
$$ \varepsilon_M = \|f - f_M\|^2 = \sum_{\mu \notin I_M} |\langle f, \psi_{\mu}\rangle|^2, $$
and this quantity approaches zero asymptotically as $M$ increases. The approximation error of Fourier approximations is $\varepsilon_M \le CM^{-1/2}$, that of wavelets is $\varepsilon_M \le CM^{-1}$, and that of Shearlets is $\varepsilon_M \le C(\log M)^3 M^{-2}$, which is better than both the Fourier and wavelet approximations.
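As a quick illustration of what these decay rates mean in practice, the snippet below (not from the paper; it simply evaluates the three upper bounds with $C = 1$ and a natural logarithm) compares them at a few values of $M$:

```python
import math

def bounds(M):
    fourier  = M ** -0.5
    wavelet  = M ** -1.0
    shearlet = (math.log(M) ** 3) * M ** -2.0
    return fourier, wavelet, shearlet

for M in (100, 1_000, 10_000):
    f, w, s = bounds(M)
    print(f"M={M:>6}: Fourier {f:.2e}, Wavelet {w:.2e}, Shearlet {s:.2e}")
```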

Fig. 3. Image fusion framework with Shearlets and PCNN.

Fig. 4. Optical and SAR image fusion results based on Shearlets and PCNN (panels: optical, SAR, Shearlet-PCNN, C-P, W-P, CP, LP, GP, R [22]).


3. Pulse coupled neural network (PCNN)

PCNN, often called a third-generation artificial neural network, is a feedback network formed by connecting a large number of neurons and is inspired by the biological visual cortex. Every neuron consists of three parts: the receptive field, the modulation field and the pulse generator, which can be described by discrete equations. The receptive field receives input from other neurons or from the external environment and transmits it through two channels, the F-channel and the L-channel. In the modulation field, a positive offset is added to the signal $L_j$ from the L-channel, and the result modulates (multiplies) the signal $F_j$ from the F-channel. When the neuron threshold satisfies $\theta_j \ge U_j$, the pulse generator is turned off; otherwise the pulse generator is turned on and outputs a pulse. The mathematical model of PCNN is described below:
$$
\begin{cases}
F_{ij}[n] = e^{-\alpha_F} F_{ij}[n-1] + V_F \sum_{kl} m_{ijkl} Y_{kl}[n-1] + S_{ij} \\
L_{ij}[n] = e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{kl} w_{ijkl} Y_{kl}[n-1] \\
U_{ij}[n] = F_{ij}[n]\,\bigl(1 + \beta L_{ij}[n]\bigr) \\
Y_{ij}[n] = \begin{cases} 1, & U_{ij}[n] > \theta_{ij}[n] \\ 0, & \text{otherwise} \end{cases} \\
\theta_{ij}[n] = e^{-\alpha_\theta}\, \theta_{ij}[n-1] + V_\theta\, Y_{ij}[n-1]
\end{cases}
$$
where $\alpha_F$ and $\alpha_L$ are the decay time constants, $\alpha_\theta$ is the threshold decay time constant, $V_\theta$ is the threshold amplitude coefficient, $V_F$ and $V_L$ are the link amplitude coefficients, $\beta$ is the linking strength, and $m_{ijkl}$ and $w_{ijkl}$ are the link weight matrices (Fig. 1).
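The iteration above maps directly onto array operations. The following is a minimal NumPy/SciPy sketch of a PCNN firing-count map, not the authors' code; the 3x3 linking kernel, the default parameter values, the initial threshold, and the use of the same kernel for both channels are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(S, iterations=100, alpha_F=0.1, alpha_L=0.03,
                    alpha_theta=0.1, V_F=0.5, V_L=1.0, V_theta=10.0,
                    beta=0.2, kernel=None):
    """Run the discrete PCNN model on stimulus S (e.g. normalized coefficients)
    and return the cumulative firing map."""
    if kernel is None:
        r = 1.0 / np.sqrt(2.0)
        kernel = np.array([[r, 1.0, r],
                           [1.0, 1.0, 1.0],
                           [r, 1.0, r]])
    F = np.zeros_like(S, dtype=float)
    L = np.zeros_like(F)
    Y = np.zeros_like(F)
    theta = np.ones_like(F)            # initial threshold (not specified in the paper)
    fire_count = np.zeros_like(F)

    for _ in range(iterations):
        link = convolve(Y, kernel, mode="constant")        # sum of neighbouring pulses
        F = np.exp(-alpha_F) * F + V_F * link + S          # feeding channel
        L = np.exp(-alpha_L) * L + V_L * link              # linking channel
        U = F * (1.0 + beta * L)                           # internal activity
        theta = np.exp(-alpha_theta) * theta + V_theta * Y # threshold uses previous pulses
        Y = (U > theta).astype(float)                      # pulse output
        fire_count += Y                                    # firing map used for fusion
    return fire_count
```

In step (4) of Section 4.2, a map like `fire_count` would be computed for each directional high-frequency sub-band of both source images.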

4. Shearlets and PCNN in image fusion

4.1. Decomposition of Shearlets

The decomposition of Shearlets is divided into two parts: multidirectional decomposition and multi-scale decomposition. The image can be decomposed into any number of directions using the shear filter, and each direction can then be decomposed by the wavelet transform at any scale. The framework of image decomposition with Shearlets is shown in Fig. 2 [21].
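As a rough illustration of this two-stage decomposition, the sketch below shears an image with the integer matrix $B_0^{l}$ and then applies one level of 2-D DWT to each sheared copy. It is only a conceptual approximation of the shear-filter stage (the paper's actual filter implementation, boundary handling and number of scales are not specified here), using SciPy and PyWavelets:

```python
import numpy as np
import pywt
from scipy.ndimage import affine_transform

def shear_image(img, l):
    """Resample img on a sheared coordinate grid; affine_transform maps each
    output coordinate through the matrix to an input coordinate."""
    B = np.array([[1.0, float(l)],
                  [0.0, 1.0]])
    return affine_transform(img, B, order=1, mode="constant", cval=0.0)

def directional_subbands(img, n_directions=3, wavelet="db1"):
    """Shear the image into several directions, then take one DWT level per direction."""
    shears = range(-(n_directions // 2), n_directions // 2 + 1)  # e.g. -1, 0, 1
    out = {}
    for l in shears:
        sheared = shear_image(img, l)
        cA, (cH, cV, cD) = pywt.dwt2(sheared, wavelet)   # low- and high-frequency bands
        out[l] = {"low": cA, "high": (cH, cV, cD)}
    return out
```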

4.2. Image fusion based on PCNN

When PCNN is used for image processing, it is a single-layer two-dimensional network. The number of neurons is equal to the number of pixels, and there is a one-to-one correspondence between image pixels and network neurons. In this paper, Shearlets and PCNN are combined to fuse images. The steps are described below, and the framework of the process is shown in Fig. 3.

(1) Decompose the original images A and B into several directions $f_{NA}, \hat{f}_{NA}, f_{NB}, \hat{f}_{NB}$ $(N = 1,\dots,n)$ via shear matrices (in this paper, $n = 3$).

Fig. 5. Remote sensing image fusion results based on Shearlets and PCNN (panels: remote-8, remote-4, Shearlet-PCNN, C-P, GP, LP, CP, W-P, R [22]).


(2) Calculate the gradient features in every direction to form the feature maps $\mathrm{Grad}f_{NA}, \mathrm{Grad}\hat{f}_{NA}, \mathrm{Grad}f_{NB}, \mathrm{Grad}\hat{f}_{NB}$.

(3) Decompose the feature maps of all directions using the DWT; $DGf_{NA}, DG\hat{f}_{NA}, DGf_{NB}, DG\hat{f}_{NB}$ are the high-frequency coefficients after the decomposition.

(4) Feed $DGf_{NA}, DG\hat{f}_{NA}, DGf_{NB}, DG\hat{f}_{NB}$ into PCNN to obtain the firing maps in all directions, $\mathrm{fire}f_{NA}, \mathrm{fire}\hat{f}_{NA}, \mathrm{fire}f_{NB}, \mathrm{fire}\hat{f}_{NB}$.

(5) Apply the Shearlet transform to the original images A and B; the high-frequency coefficients in all directions are $f^{h}_{NA}, \hat{f}^{h}_{NA}, f^{h}_{NB}, \hat{f}^{h}_{NB}$, and the low-frequency coefficients are $f^{l}_{NA}, \hat{f}^{l}_{NA}, f^{l}_{NB}, \hat{f}^{l}_{NB}$. The fused high-frequency coefficients in each direction are selected as follows:
$$
f^{h}_{N} = \begin{cases} f^{h}_{NA}, & \mathrm{fire}f_{NA} \ge \mathrm{fire}f_{NB} \\ f^{h}_{NB}, & \mathrm{fire}f_{NA} < \mathrm{fire}f_{NB} \end{cases}
\qquad
\hat{f}^{h}_{N} = \begin{cases} \hat{f}^{h}_{NA}, & \mathrm{fire}\hat{f}_{NA} \ge \mathrm{fire}\hat{f}_{NB} \\ \hat{f}^{h}_{NB}, & \mathrm{fire}\hat{f}_{NA} < \mathrm{fire}\hat{f}_{NB} \end{cases}
$$

The fusion rule for the low-frequency coefficients in each direction is
$$
f^{l}_{N} = \begin{cases} f^{l}_{NA}, & \mathrm{Var}f^{l}_{NA} \ge \mathrm{Var}f^{l}_{NB} \\ f^{l}_{NB}, & \mathrm{Var}f^{l}_{NA} < \mathrm{Var}f^{l}_{NB} \end{cases}
\qquad
\hat{f}^{l}_{N} = \begin{cases} \hat{f}^{l}_{NA}, & \mathrm{Var}\hat{f}^{l}_{NA} \ge \mathrm{Var}\hat{f}^{l}_{NB} \\ \hat{f}^{l}_{NB}, & \mathrm{Var}\hat{f}^{l}_{NA} < \mathrm{Var}\hat{f}^{l}_{NB} \end{cases}
$$
where $\mathrm{Var}f$ is the variance of $f$.

(6) The fused image is obtained using the inverse Shearlet transform.
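A compact sketch of the selection rules in step (5), assuming the firing maps and Shearlet coefficients have already been computed and share the same shape (all names are illustrative, not the authors' code; the high-frequency comparison is read here as a pixel-wise comparison of the firing maps, which is the usual interpretation):

```python
import numpy as np

def fuse_high(coef_a, coef_b, fire_a, fire_b):
    """High-frequency rule: keep the coefficient whose PCNN firing count is larger."""
    return np.where(fire_a >= fire_b, coef_a, coef_b)

def fuse_low(coef_a, coef_b):
    """Low-frequency rule: keep the whole sub-band with the larger variance."""
    return coef_a if np.var(coef_a) >= np.var(coef_b) else coef_b
```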

5. Experimental results

In this section, three different examples (optical and SAR images, remote sensing images, and hyperspectral images) are provided to demonstrate the effectiveness of the proposed method. Several other methods, including the Average method, Laplacian Pyramid (LP), Gradient Pyramid (GP), Contrast Pyramid (CP), Contourlet-PCNN (C-P) and Wavelet-PCNN (W-P), are compared with the proposed approach. Reference [22] presents another image fusion algorithm using PCNN and Shearlets; however, it does not consider the gradient information of the image and uses only the directional information of the Shearlet decomposition together with PCNN. The algorithm of reference [22] is therefore also compared with the one proposed in this paper. Subjective visual perception gives direct comparisons, and several objective image quality metrics are also used to evaluate the performance of the proposed approach: entropy (EN), overall cross entropy (OCE), standard deviation (STD), average gradient (Ave-grad), $Q$ and $Q^{AB/F}$. In the three experiments, the PCNN parameters are as follows.

Experiment 1: $\alpha_L = 0.03$, $\alpha_\theta = 0.1$, $V_L = 1$, $V_\theta = 10$, $\beta = 0.2$,
$$ W = \begin{pmatrix} 1/\sqrt{2} & 1 & 1/\sqrt{2} \\ 1 & 1 & 1 \\ 1/\sqrt{2} & 1 & 1/\sqrt{2} \end{pmatrix}, $$
and the number of iterations is $n = 100$.

Fig. 6. Hyperspectral image fusion results based on Shearlets and PCNN (panels: hyperspectral 1, hyperspectral 2, Shearlet-PCNN, C-P, CP, GP, W-P, LP, R [22]).
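For reference, a small sketch of how the Experiment 1 settings could be collected for the PCNN sketch given in Section 3 (the dictionary layout and names are assumptions for illustration, not the authors' code):

```python
import numpy as np

r = 1.0 / np.sqrt(2.0)
# 3x3 linking weight matrix W used in all three experiments
W = np.array([[r, 1.0, r],
              [1.0, 1.0, 1.0],
              [r, 1.0, r]])

# Experiment 1 settings reported in the paper (alpha_F and V_F are not reported
# and would have to be chosen separately); Experiments 2 and 3, listed after
# Table 1, change only alpha_L, alpha_theta, V_theta and beta.
experiment_1 = dict(alpha_L=0.03, alpha_theta=0.1, V_L=1.0, V_theta=10.0,
                    beta=0.2, kernel=W, iterations=100)
```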


Table 1. Comparison of image quality metrics.

Experiment 1
  Algorithm   Q^{AB/F}   Q        EN       STD       Ave-grad   OCE
  R [22]      0.4220     0.4941   6.9973   22.3334   0.0540     0.5349
  LP          0.3002     0.3017   6.5209   24.8906   0.0478     3.0844
  GP          0.2412     0.2953   6.3993   22.6744   0.0379     3.2336
  CP          0.2816     0.2961   6.4759   24.1864   0.0457     3.1292
  C-P         0.3562     0.4523   6.7424   31.2693   0.0665     0.5538
  W-P         0.3753     0.4976   6.6142   25.2683   0.0662     0.5689
  Proposed    0.4226     0.5010   6.9961   34.1192   0.0575     0.5410

Experiment 2
  Algorithm   Q^{AB/F}   Q        EN       STD       Ave-grad   OCE
  R [22]      0.5981     0.7741   7.4100   55.9924   0.0303     2.8047
  LP          0.5219     0.7530   6.9594   49.2283   0.0399     3.3738
  GP          0.4736     0.7599   6.9024   47.0888   0.0342     3.6190
  CP          0.5120     0.7475   6.9237   48.9839   0.0392     3.3812
  C-P         0.5658     0.7516   7.3332   54.3504   0.0390     3.0628
  W-P         0.4283     0.7547   6.8543   47.3304   0.0346     3.2436
  Proposed    0.6212     0.7775   7.1572   56.2993   0.0381     2.9046

Experiment 3
  Algorithm   Q^{AB/F}   Q        EN       STD       Ave-grad   OCE
  R [22]      0.6303     0.7873   6.9324   54.5235   0.0226     0.5134
  LP          0.6414     0.7728   6.8883   47.4990   0.0274     0.9959
  GP          0.5720     0.7898   6.5649   41.3974   0.0223     1.0249
  CP          0.5909     0.7469   6.7499   43.4631   0.0318     0.9834
  C-P         0.5838     0.7435   6.9451   46.5294   0.0262     1.1745
  W-P         0.5319     0.7788   6.5847   41.6623   0.0231     1.5318
  Proposed    0.6230     0.7502   7.0791   55.9533   0.0246     0.5246
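The table's no-reference metrics can be reproduced with standard definitions. The sketch below computes EN, STD and the average gradient for a fused image; these are common formulations and may differ in detail from the ones the authors used ($Q$, $Q^{AB/F}$ and OCE additionally need the source images and are omitted here):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (EN), assuming an 8-bit intensity range."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist.astype(float) / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def std_dev(img):
    """Standard deviation (STD) of the image."""
    return float(np.std(img))

def average_gradient(img):
    """Average gradient (Ave-grad): mean magnitude of local intensity changes."""
    gx, gy = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```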

Experiment 2: $\alpha_L = 0.02$, $\alpha_\theta = 0.05$, $V_L = 1$, $V_\theta = 15$, $\beta = 0.7$, with the same weight matrix $W$ as in Experiment 1, and the number of iterations is $n = 100$.

Experiment 3: $\alpha_L = 0.03$, $\alpha_\theta = 0.1$, $V_L = 1$, $V_\theta = 15$, $\beta = 0.5$, with the same weight matrix $W$, and the number of iterations is $n = 100$.

Optical and SAR images, remote sensing images and hyperspectral images are widely used in military applications, so studying their fusion is very important. Figs. 4-6 show the fused images obtained with Shearlet-PCNN and with the other methods. From Figs. 4-6 and Table 1 we can see that image fusion based on Shearlets and PCNN retains more information and introduces less distortion than the other methods. In experiment 1, the edge features from Fig. 4(a) and the spectral information from Fig. 4(b) are preserved in the image fused by the proposed method, shown in Fig. 4(c). In Fig. 4(d), the spectral character of the image fused with Contourlet and PCNN is distorted and, from a visual point of view, its color is too prominent. In Fig. 4(e)-(f), the spectral information of the fused images is lost and the edge features are blurred. Fig. 5 shows the fused remote sensing images; this modality can provide additional information because it penetrates clouds, rain and even vegetation, and with different imaging modes and bands its features differ from image to image. In Fig. 5(a) and (b), band 8 shows more river characteristics but less city information, while band 4 shows the opposite. Fig. 5(c) is the image fused using Shearlets and PCNN. The results in Fig. 5 and Table 1 show that the image fused with Shearlets and PCNN keeps the river information better while also preserving the city features well. In Fig. 5(d), the middle of the image fused with Contourlet and PCNN shows an obvious splicing artifact, and the detail representation in Fig. 5(i) is weaker than in Fig. 5(c). Fig. 6(c) is the fused hyperspectral image, and Fig. 6(a) and (b) are the two original images. The runway of the airport is clear in Fig. 6(a), but some aircraft information is lost; Fig. 6(b) shows complementary information. In the fused image the runway is clearer and the aircraft are more distinct, whereas the runway markings are not clear enough in the images fused by the other methods. From Table 1 we can see that most metric values obtained with the proposed method are better than those of the other methods.

6. Conclusion

The theory of Shearlets and PCNN has been introduced in this paper, and a new image fusion algorithm based on them has been proposed. As a new MGA tool, the Shearlet is equipped with a rich mathematical structure similar to that of the wavelet and can capture information in any direction. Moreover, human vision is more sensitive to edge and orientation information than to gray-level intensity, so we take full advantage of the multidirectionality of Shearlets and of gradient information to fuse images, and PCNN is used as the fusion rule to select the fusion coefficients. Because the directional and gradient characteristics facilitate the firing of PCNN neurons, more precise fusion results are obtained. The several kinds of images shown in the experiments prove that the new algorithm proposed in this paper is effective. After several years of development the theory of Shearlets is gradually maturing, but the time complexity of the Shearlet decomposition and of the PCNN implementation remains an open issue. In future work we will investigate more efficient Shearlet and PCNN algorithms for image fusion.

Acknowledgments

The authors would like to thank the anonymous reviewers for their helpful comments and advice, which contributed much to the improvement of this paper. The work was jointly supported by the National Natural Science Foundation of China under Grant nos. 61072109, 61272280, 61142011, 61272195 and 61173090, the Fundamental Research Funds for the Central Universities under Grant nos. K5051203020 and K5051203001, and the Creative Project of the Science and Technology State of Xi'an under Grant nos. CXY1133(1) and CXY1119(6).


References

[1] E.J. Candès, D.L. Donoho, Continuous curvelet transform. I. Resolution of the wavefront set, Appl. Comput. Harmonic Anal. 19 (2005) 162-197.
[2] Xiao-Hui Yang, Li-Cheng Jiao, Fusion algorithm for remote sensing images based on nonsubsampled contourlet transform, Acta Autom. Sin. 34 (2008) 274-281.
[3] K. Guo, D. Labate, Optimally sparse multidimensional representation using shearlets, SIAM J. Math. Anal. 39 (2007) 298-318.
[4] G. Kutyniok, D. Labate, Resolution of the wavefront set using continuous shearlets, Trans. Am. Math. Soc. 361 (2009) 2719-2754.
[5] G. Kutyniok, D. Labate, Construction of regular and irregular shearlet frames, J. Wavelet Theory Appl. 1 (2007) 1-10.
[6] Wang-Q. Lim, The discrete shearlet transform: a new directional transform and compactly supported shearlet frames, IEEE Trans. Image Process. 19 (2010) 1166-1180.
[7] Glenn Easley, Demetrio Labate, Wang-Q. Lim, Sparse directional image representations using the discrete shearlet transform, Appl. Comput. Harmonic Anal. 25 (2008) 25-46.
[8] K. Guo, D. Labate, W. Lim, Edge analysis and identification using the continuous shearlet transform, Appl. Comput. Harmonic Anal. 27 (2009) 24-46.
[9] Gitta Kutyniok, Wang-Q. Lim, Image separation using wavelets and shearlets, Lect. Notes Comput. Sci. 6920 (2012) 416-430.
[10] Shearlet webpage, <http://www.shearlet.org>.
[11] R. Eckhorn, H.J. Reitboeck, M. Arndt, et al., Feature linking via synchronization among distributed assemblies: simulation of results from cat cortex, Neural Comput. 2 (1990) 293-307.
[12] R. Eckhorn, H.J. Reitboeck, M. Arndt, et al., Feature linking via stimulus-evoked oscillations: experimental results from cat visual cortex and functional implications from a network model, in: Proceedings of the International JCNN, Washington, DC, vol. 1, 1989, pp. 723-730.
[13] Wen Jin, Zhao Jia Li, Luo Si Wei, Han Zhen, The improvements of BP neural network learning algorithm, in: Proceedings of the 5th International Conference on Signal Processing, WCCC-ICSP 2000, vol. 3, 2000, pp. 1647-1649.
[14] Randy P. Broussard, Steven K. Rogers, Mark E. Oxley, et al., Physiologically motivated image fusion for object detection using a pulse coupled neural network, IEEE Trans. Neural Networks 10 (1999) 554-563.
[15] Weisheng Chen, Licheng Jiao, Adaptive tracking for periodically time-varying and nonlinearly parameterized systems using multilayer neural networks, IEEE Trans. Neural Networks 21 (2010) 345-351.
[16] Weisheng Chen, Zhengqiang Zhang, Globally stable adaptive backstepping fuzzy control for output-feedback systems with unknown high-frequency gain sign, Fuzzy Sets Syst. 161 (2010) 821-836.
[17] Jingwen Yan, Xiaobo Qu, Image fusion algorithm based on features motivated multi-channel pulse coupled neural networks, Bioinf. Biomed. Eng. (2008) 2103-2106.
[18] Xiaobo Qu, Changwei Hu, Jingwen Yan, Image fusion algorithm based on orientation information motivated pulse coupled neural networks, Intell. Control Autom. (2008) 2437-2441.
[19] G.R. Easley, Demetrio Labate, Flavia Colonna, Shearlet-based total variation diffusion for denoising, IEEE Trans. Image Process. 18 (2009) 260-268.
[20] Glenn R. Easley, Demetrio Labate, Wang-Q. Lim, Optimally sparse image representations using shearlets, Signals Syst. Comput. (2006) 974-978.
[21] Miao Qiguang, Shi Cheng, A novel algorithm of image fusion using shearlets, Opt. Commun. 284 (2011) 1540-1547.
[22] Yinhua Ma, Yuting Zhai, Peng Geng, Pengzhan Yan, A novel algorithm of image fusion based on PCNN and shearlet, Int. J. Digital Content Technol. Appl. (JDCTA) 5 (2011) 347-354.

Shi Cheng received her bachelor's degree from Xi'an University of Architecture & Technology. She is currently pursuing a doctoral degree in Computer Application Technology at Xidian University, China. Her main research interests include multiscale geometric analysis, image processing and information fusion.

Miao Qiguang received the M.Eng. and doctoral degrees in computer science from Xidian University, China, in 2004 and 2005. He is currently a professor at the School of Computer, Xidian University. His research interests include intelligent information processing, intelligent image processing and multiscale geometric representations for images.

Xu Pengfei received the B.Sc. degree from the Department of Information System and Information Management. He is currently pursuing a doctoral degree in Computer Application Technology at Xidian University, China. His main research interests include multiscale geometric representations for images and intelligent image processing.