Biomedical Signal Processing and Control 46 (2018) 281–292
Ripplet domain fusion approach for CT and MR medical image information

Sneha Singh*, R.S. Anand

Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee 247667, Uttarakhand, India

* Corresponding author. E-mail address: [email protected] (S. Singh). https://doi.org/10.1016/j.bspc.2018.05.042
Article history: Received 22 February 2017; Received in revised form 4 April 2018; Accepted 28 May 2018

Keywords: Multimodal image fusion; Ripplet transform; CT; MR; Spatial frequency
Abstract

Multimodal medical image fusion (MIF) plays an important role as an aid for medical professionals by providing better visualization of the diagnostic information acquired by different imaging modalities. Image fusion helps radiologists in the precise diagnosis of several critical diseases and their treatment. In this paper, the proposed framework presents a fusion approach for multimodal medical images that utilizes features extracted by the discrete ripplet transform (DRT) together with a pulse coupled neural network. The DRT, with its competent representation of image coefficients, provides several directional high-frequency subband coefficients; its decomposition preserves more of the detailed information present in the reference images and further enhances the visualization of the fused images. Firstly, the DRT is applied to decompose the reference images into one low-frequency and several high-frequency subimage coefficients, which are fused by a pulse coupled neural model motivated by the novel sum modified Laplacian and the novel modified spatial frequency; this model also preserves redundant information. Finally, the fused images are reconstructed by applying the inverse DRT. The performance of the proposed fusion approach is validated by extensive simulation on different CT-MR image datasets. Experimental results demonstrate that the proposed method provides better fused images in terms of visual quality as well as quantitative measures compared with several existing fusion approaches.

© 2018 Elsevier Ltd. All rights reserved.
1. Introduction

In recent years, multimodal MIF technology has emerged as a potential research area because most screening programs are focused on the analysis of digital images, and detection based on these programs is a major asset in the struggle against critical diseases such as cancer, hemorrhage, Alzheimer's disease and many others. A main reason for the attraction towards the fusion of multimodal images is that there are several medical imaging modalities, each giving a different insight into the human body. However, because of the several sources of medical images used by the radiologist, the problem of information overload occurs. No single imaging modality is able to produce comprehensive and accurate information, especially for critical diseases; reviewing multiple separate images is rigorous, costly and time consuming, carries the chance of human error and, most importantly, requires many years of experience. This is the main motivation for capturing the most relevant diagnostic information from the source CT and MR images into a single image that
plays an important role in medical diagnosis. Moreover, advanced imaging modalities are very costly, which puts an extra burden on individuals. Another reason for this attraction is the possibility of fusing all complementary and contrasting information acquired from multiple images of the same organs into a single fused image. Therefore, there is a need to develop effective fusion approaches that merge all of these features into a single image that has significant clinical interpretation and is suitable for effective diagnostic analysis. Previously, many researchers have concentrated on the fusion of medical images [1–5]. Image fusion (IF) approaches can be developed at the pixel, feature or decision level. Fusion at the pixel level is further categorized into spatial and transform domain methods. The spatial domain approach is based on averaging or weighted averaging of the source images [6]. This method reduces the contrast of the fused images and can lose some structures in the fused image. An image-partitioning based method has been presented [1], in which blocks are selected on the basis of their saliency or activity; the selection of a block, its size and the saliency criteria decide the quality of the fused images. It suffers from a loss of information at each location, which also affects the diagnosis. Different authors [2–4]
have employed neural networks for pixel or region selection. Dimensionality reduction techniques based on PCA were studied in [5]. Joshitha and Selin [7] proposed a PCA based fusion of several input images as a weighted superposition of all input images to improve the resolution of an image. In [8], Liu et al. presented a fusion approach based on average gradient and mutual information to fuse high-frequency information. This approach retains more detail information; however, it still suffered from blocking artifacts. To boost the performance of IF/MIF approaches, authors have also moved towards the transform domain. In 1989, Toet [9] introduced different pyramid schemes for data fusion. In [10], the authors proposed a wavelet transform (WT) based fusion approach using the maximum selection rule; however, this approach suffers from blocking effects/artifacts. Pajares et al. also presented a fusion approach for multiple images with similar or different resolution levels [11]. In [12], the authors presented another WT based fusion approach for multispectral and panchromatic images. In [13], Yang et al. presented a WT based approach in which visibility and variance based fusion rules are selected to fuse the low and high-frequency subband coefficients, respectively. MIF methods based on the WT are capable of capturing only one-dimensional singularities; with such limited directional information, they cause artifacts along the edges and also lose important diagnostic information. To overcome the limitations of the WT, the ridgelet transform was introduced to extract edges [14], but it did not do well in capturing curve singularities. Donoho et al. therefore introduced the curvelet transform (CVT) to capture 2-D singularities along arbitrary curves [15]. However, the CVT cannot be built directly in the discrete domain [16]. Furthermore, the performance of IF/MIF methods [17] has been analyzed with other multi-scale transformation techniques such as the curvelet [18–20] and contourlet transforms [21–23]. In [24], the authors proposed a contourlet transform based fusion approach in which weighted average and maximum selection rules are utilized. In [25], the NSCT decomposition is utilized to enhance the quality of the fused images using a pulse coupled neural network motivated by the spatial frequency of the highpass subband coefficients. In [26,27], NSCT based decomposition is further utilized with the maximum selection rule for fusing the low-frequency coefficients and the modified spatial frequency for the high-frequency coefficients. In [28], the NSCT decomposition along with the novel sum modified Laplacian (NSML) is utilized for fusing the image components and capturing the local features present in the reference images. However, the NSCT based image decomposition suffers from a lack of shift invariance [29] and a limited number of directional components. To overcome the limitations of the NSCT based approaches, a shearlet based fusion approach along with a pulse coupled neural network was presented in [30]. In [31], the authors proposed a multimodal medical image fusion approach based on the shift-invariant shearlet transform in which averaging and maximum fusion rules were utilized for fusing the decomposed coefficients. In another fusion approach [32], the authors utilized the nonsubsampled shearlet transform and the NSML for fusing the subband image coefficients.
To overcome the limitations of real-valued WT approaches, an improved WT, the Daubechies complex wavelet transform, was proposed in [33], in which the maximum selection fusion rule is utilized. In [34], the authors proposed a fusion approach based on the discrete fractional wavelet transform to enhance the correlation between the subband images; all subband coefficients are fused using a weighted regional variance fusion rule. In another approach [35], local extrema are used to decompose the reference images, and energy and contrast guided fusion rules are applied. Xu et al. [36] introduced the discrete ripplet transform (DRT) type I, which generalizes the CVT
by adding two new parameters that allow it to represent singularities along arbitrarily shaped curves. The DRT is also able to overcome the limitations of other transformation approaches by providing a sparse representation of an image object. In [37], the authors proposed a hybrid approach using the WT and DRT, in which the approximation component obtained after the WT decomposition is further decomposed using the DRT; however, the results still suffered from shift-variance effects. In the last few years, a biologically inspired feedback neural network (BIFNN) [38], the PCNN, has been efficiently introduced into several image processing applications [27,39–41]. Based on the outcomes of these methods [27,38,40,41], it is observed that they produce good visual results, but they have some problems related to contrast reduction and loss of diagnostic information [2,42]. In [43], another fusion approach was proposed using an improved neural model that also enhances the quality of the fused images. The PCNN and its modified versions combined with the aforementioned transform techniques have been presented in the IF/MIF domain by various authors [44–46].

In this paper, the proposed fusion approach is built on the DRT and a pulse coupled neural model in which the feeding input for fusing the low and high-frequency image coefficients differs from that of the conventional PCNN: the novel modified spatial frequency (NMSF) and the novel sum modified Laplacian (NSML) are utilized as feeding inputs to the neural model in the DRT domain. The DRT decomposition preserves more of the details present in the reference images and further enhances the visualization of the fused images. The NMSF is utilized to express the clarity and activity level of the input images within a specified window; it also reflects the directional informative content present in the reference images. The NSML is utilized as an external input to the neural model to improve the performance of the proposed fusion approach. Furthermore, the performance of the fusion approach is analyzed visually and quantitatively through extensive experiments on source CT and MR image pairs. The salient contributions of the proposed fusion framework in the DRT domain over several previously developed fusion methods are summarized as follows:

• This paper presents a fusion approach for fusing CT and MR medical images that relies on the combination of the DRT and PCNN with improved feeding inputs, which provides more of the details present in the reference images and further enhances the visualization of the fused images.
• Different fusion rules are proposed for combining the low and high-frequency subimage coefficients.
• The biologically inspired feedback neural model is applied to the high and low-frequency DRT subimage coefficients on the basis of firing times and improved feeding inputs, which captures the relevant differences and provides resultant images with high contrast and clarity.
• For fusing the low and high-frequency DRT coefficients, the computation of the NSML and NMSF is proposed; these are used as inputs to motivate the PCNN model and capture the fine details present in the reference images.

2. Methodology

2.1. Ripplet Transform

A higher dimensional framework called the discrete ripplet transform (DRT) [36] is able to characterize an image at various scales and directions.
Unlike the curvelet transform, which uses parabolic scaling and captures 2-D singularities along $C^2$ curves, the DRT provides a new tight frame with a sparse representation for images with discontinuities along $C^d$ curves [36].
Fig. 1. (a) The tiling of the polar frequency domain (b) source MR image (c) DRT decomposition of the source image.
It represents cubic scaling and beyond by selecting d = 3, 4 and so on. The DRT can thus be regarded as a generalization of the curvelet transform through the inclusion of two new parameters, the support c and the degree d. Owing to the anisotropy introduced by these two parameters, the DRT can efficiently represent singularities along arbitrarily shaped curves.

The DRT is obtained by discretizing the parameters of the ripplets. The scale parameter $a$ is sampled at dyadic intervals, whereas the position and angle parameters $\vec{b}$ and $\theta$ are sampled at equally spaced intervals: $a_x = 2^{-x}$, $\vec{b}_y = [c \cdot 2^{-x} \cdot y_1,\; 2^{-x/d} \cdot y_2]^T$ and $\theta_z = \frac{2\pi}{c} \cdot 2^{-\lfloor x(1-1/d) \rfloor} \cdot z$, where $\vec{y} = [y_1, y_2]^T$, $(\cdot)^T$ denotes the transpose of a vector and $x, y_1, y_2, z \in \mathbb{Z}$. The frequency response of the ripplet function is given by [36]

$$\hat{p}_x(r, \omega) = \frac{1}{\sqrt{c}}\, a^{\frac{1+d}{2d}}\, w(2^{-x} \cdot r)\, v\left(\frac{2^{-x(1/d-1)}}{c} \cdot \omega - z\right) \qquad (1)$$

where $w$ and $v$ satisfy the admissibility conditions

$$\sum_{x=0}^{\infty} \left|w(2^{-x} \cdot r)\right|^2 = 1 \quad \text{and} \quad \sum_{z=-\infty}^{\infty} \left|v\left(\frac{2^{-x(1/d-1)}}{c} \cdot \omega - z\right)\right|^2 = 1 \qquad (2)$$

These two windows partition the polar frequency domain into the wedges presented in Fig. 1(a), where the shadowed wedge refers to the frequency support of the basic ripplet function. For a particular combination of the parameters, c and d together determine the total number of directions in each subband. The decomposition of a 2-D image $r(i,j)$ of size $n \times m$ by the DRT is expressed in terms of the DRT coefficients $R_{x,\vec{y},z}$:

$$R_{x,\vec{y},z} = \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} r(i,j)\, p_{x,\vec{y},z}(i,j) \qquad (3a)$$

After applying the inverse DRT, the image $\hat{r}(i,j)$ can be approximated as

$$\hat{r}(i,j) = \sum_{x} \sum_{\vec{y}} \sum_{z} R_{x,\vec{y},z}\, p_{x,\vec{y},z}(i,j) \qquad (3b)$$

Figs. 1(b) and (c) show a real MR image and the decomposition of that image by the ripplet transform, respectively.
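As a concrete illustration of the angular admissibility condition in Eq. (2), the following minimal sketch numerically checks the partition of unity for a simple squared-cosine window. This particular window is an illustrative choice of ours and is not necessarily the window used in [36].

```python
import numpy as np

# Illustrative window: v(t) = cos(pi*t/2) on [-1, 1], zero outside.
# Its integer shifts satisfy sum_z |v(t - z)|^2 = 1, the angular condition in Eq. (2).
def v(t):
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1.0, np.cos(np.pi * t / 2.0), 0.0)

t = np.linspace(0.0, 1.0, 1001)                    # one unit period suffices
total = sum(v(t - z) ** 2 for z in range(-2, 3))   # a few shifts cover the support
print(np.allclose(total, 1.0))                     # True: partition of unity holds
```

In the ripplet tiling, the argument of $v$ is the scaled angular variable $2^{-x(1/d-1)}\omega/c - z$, so the same partition of unity generates the angular wedges of Fig. 1(a).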
2.2. Biologically inspired neural network

The PCNN is an example of a biologically inspired feedback neural network (BIFNN) [47–50] that has been applied to several image processing tasks [41]. In this model, every pixel of the source image is connected to a neuron that is further connected to its neighborhood neurons. The model does not require training and is able to extract useful information from a complex background. A PCNN neuron consists of a receptive field, a linking field and a pulse generator, as illustrated in Fig. 2. In the proposed work, an improved model [27,51] is used, whose mathematical formulation is

$$\begin{aligned} F_{i,j}[n] &= S_{i,j} \\ L_{i,j}[n] &= e^{-\alpha_L} L_{i,j}[n-1] + V_L \sum_{k,l} W_{i,j,k,l}\, Y_{k,l}[n-1] \\ U_{i,j}[n] &= F_{i,j}[n]\left(1 + \beta L_{i,j}[n]\right) \\ T_{i,j}[n] &= e^{-\alpha_T} T_{i,j}[n-1] + V_T\, Y_{i,j}[n-1] \\ Y_{i,j}[n] &= \begin{cases} 1, & U_{i,j}[n] > T_{i,j}[n] \\ 0, & \text{otherwise} \end{cases} \end{aligned} \qquad (4)$$

where $i$ and $j$ refer to pixel positions and $S_{i,j}$ is an external input signal. $U_{i,j}$, $Y_{i,j}$ and $T_{i,j}$ refer to the internal activity, the output state and the threshold of the neuron, respectively. The model has a feeding input ($F_{i,j}$) and a linking input ($L_{i,j}$) with decay constants $\alpha_L$ and $\alpha_T$; $n$ is the iteration index.
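For readers who wish to experiment with Eq. (4), the following is a minimal sketch of the pulse-generation loop, assuming a 2-D stimulus array S and the linking kernel W reported later in Section 4. The zero initialization of the linking input, output and threshold is our assumption, as the paper does not specify the initial state.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_times(S, W, beta=0.2, a_L=0.3, a_T=0.1, V_L=1.0, V_T=10.0, n_iter=150):
    """Iterate the improved PCNN of Eq. (4) and accumulate firing times (Eq. (14))."""
    F = S.astype(float)               # feeding input: F[n] = S throughout
    L = np.zeros_like(F)              # linking input
    Y = np.zeros_like(F)              # pulse output Y[n-1]
    T = np.zeros_like(F)              # dynamic threshold (initial state assumed zero)
    t = np.zeros_like(F)              # accumulated firing times
    for _ in range(n_iter):
        L = np.exp(-a_L) * L + V_L * convolve(Y, W, mode='constant')  # linking from Y[n-1]
        T = np.exp(-a_T) * T + V_T * Y                                # threshold from Y[n-1]
        U = F * (1.0 + beta * L)                                      # internal activity
        Y = (U > T).astype(float)                                     # fire where U > T
        t += Y                                                        # count the fires
    return t

# Linking kernel used in Section 4: (1/sqrt(2)) * [[1, s2, 1], [s2, 0, s2], [1, s2, 1]]
s2 = np.sqrt(2.0)
W = np.array([[1.0, s2, 1.0],
              [s2, 0.0, s2],
              [1.0, s2, 1.0]]) / s2
```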
3. Proposed Fusion Approach

This section provides a detailed discussion of the implementation steps involved in the formulation of the proposed fusion approach. The subband decomposition discussed above is utilized to collect the most important diagnostic information. After the DRT decomposition of the reference CT/MR images, one low-frequency and a series of high-frequency subband coefficients are obtained. The low-frequency component conveys the bulk of the diagnostic information of the source CT/MR image, whereas the high-frequency subbands depict details at directions and scales that vary with the DRT parameters. During preprocessing, care is taken that the original source images are correctly registered. For the fusion of the low and high-frequency coefficients, the novel sum modified Laplacian and the novel modified spatial frequency are computed and applied to motivate the neural model as external inputs. The NMSF is computed to include the directional contents, reflecting both the clarity and the activity level. The NSML is utilized as an activity-level measure for the subband coefficients, reflecting the amount of informative content present in both reference images; it also represents information related to the contours and boundaries of the multiple objects present in the reference images. The salient implementation steps in the process flow of the proposed fusion approach are shown in Fig. 3.
Fig. 2. Biologically inspired neural network.
Let R = R(i, j) and S = S(i, j) be two source images acquired from different medical imaging modalities, here CT and MR, respectively. With the above aspects in place, the proposed fusion algorithm is formulated as follows.

Step 1: Decompose the reference images using the DRT (with parameters c = 1 and d = 4, chosen from coarser to finer decomposition scales on the basis of successive experiments) into low-frequency (lf) and high-frequency (hf) coefficients:

$$[lf_R^{DRT}, hf_R^{DRT}] = DRT(R_{i,j}) \quad \text{and} \quad [lf_S^{DRT}, hf_S^{DRT}] = DRT(S_{i,j}) \qquad (5)$$

Step 2: For the lf coefficient fusion, first compute the NSML as

$$NSML(i,j) = \sum_{m} \sum_{n} w(m,n)\, X(i+m, j+n) \qquad (6)$$

where

$$X(i,j) = \left|2\,lf_Z^{DRT}(i,j) - lf_Z^{DRT}(i-1,j) - lf_Z^{DRT}(i+1,j)\right| + \left|2\,lf_Z^{DRT}(i,j) - lf_Z^{DRT}(i,j-1) - lf_Z^{DRT}(i,j+1)\right| \qquad (7)$$

and

$$w(m,n) = \begin{bmatrix} 1/15 & 2/15 & 1/15 \\ 2/15 & 3/15 & 2/15 \\ 1/15 & 2/15 & 1/15 \end{bmatrix} \qquad (8)$$

Here Z denotes the original image, either R or S, and w(m, n) denotes the 3 × 3 template considered in the present work; m × n is the size of the template, usually taken as 3 × 3, 5 × 5 or 7 × 7.

Step 3: For the hf coefficient fusion, compute the NMSF, which expresses the clarity and activity level of the input source images within a template of size 3 × 3 and is also able to detect the edges present in the input reference images:

$$NMSF = \frac{1}{M(N-1)} \sum_{i=1}^{M} \sum_{j=2}^{N} \left(hf_Z^{DRT}(i,j-1) - hf_Z^{DRT}(i,j)\right)^2 + \frac{1}{(M-1)N} \sum_{i=2}^{M} \sum_{j=1}^{N} \left(hf_Z^{DRT}(i,j) - hf_Z^{DRT}(i-1,j)\right)^2 + D_f \qquad (9)$$

where M and N refer to the image size and the third term $D_f$ is an additional diagonal frequency over the neighborhood pixels, added to the expression of the spatial frequency:

$$D_f = \frac{1}{(M-1)(N-1)} \sum_{i=2}^{M} \sum_{j=2}^{N} \left(hf_Z^{DRT}(i,j) - hf_Z^{DRT}(i-1,j-1)\right)^2 + \frac{1}{(M-1)(N-1)} \sum_{i=2}^{M} \sum_{j=2}^{N} \left(hf_Z^{DRT}(i-1,j) - hf_Z^{DRT}(i,j-1)\right)^2 \qquad (10)$$

Step 4: Apply the NSML of the lf components and the NMSF of each individual hf coefficient to activate the neural model and build up the pulses using the following equations. For the lf coefficient fusion,

$$F_{i,j}^{lf}[n] = NSML_{i,j}^{Z} \qquad (11)$$

and for the hf coefficient fusion,

$$F_{i,j}^{hf}[n] = NMSF_{i,j}^{Z} \qquad (12)$$

with

$$\begin{aligned} L_{i,j}^{Z,P}[n] &= e^{-\alpha_L} L_{i,j}^{Z,P}[n-1] + V_L \sum_{k,l} W_{i,j,k,l}\, Y_{k,l}^{Z,P}[n-1] \\ U_{i,j}^{Z,P}[n] &= F_{i,j}^{Z,P}[n]\left(1 + \beta L_{i,j}^{Z,P}[n]\right) \\ T_{i,j}^{Z,P}[n] &= e^{-\alpha_T} T_{i,j}^{Z,P}[n-1] + V_T\, Y_{i,j}^{Z,P}[n-1] \\ Y_{i,j}^{Z,P}[n] &= \begin{cases} 1, & U_{i,j}^{Z,P} > T_{i,j}^{Z,P} \\ 0, & \text{otherwise} \end{cases} \end{aligned} \qquad (13)$$

where P refers to lf or hf and Z denotes the original image, either R or S.

Step 5: Evaluate the firing times (the sum of $Y_{i,j}^{Z,P} = 1$ for $U_{i,j}^{Z,P} > T_{i,j}^{Z,P}$) over the n iterations for both subband fusions:

$$t_{i,j}^{Z,P}[n] = t_{i,j}^{Z,P}[n-1] + Y_{i,j}^{Z,P}[n] \qquad (14)$$

Step 6: When $n = n_{max}$, the process stops and the lf and hf coefficients are fused according to the firing times evaluated in Step 5:

$$lf_F^{DRT} = \begin{cases} lf_R^{DRT}, & \text{if } t_{i,j}^{R,lf}[n_{max}] \geq t_{i,j}^{S,lf}[n_{max}] \\ lf_S^{DRT}, & \text{if } t_{i,j}^{R,lf}[n_{max}] < t_{i,j}^{S,lf}[n_{max}] \end{cases} \qquad (15)$$

$$hf_F^{DRT} = \begin{cases} hf_R^{DRT}, & \text{if } t_{i,j}^{R,hf}[n_{max}] \geq t_{i,j}^{S,hf}[n_{max}] \\ hf_S^{DRT}, & \text{if } t_{i,j}^{R,hf}[n_{max}] < t_{i,j}^{S,hf}[n_{max}] \end{cases} \qquad (16)$$
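The two activity measures of Steps 2 and 3 can be sketched as follows. The reflective border handling and the windowed aggregation of the NMSF terms over a 3 × 3 neighbourhood are our assumptions about details the equations leave implicit.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def nsml(lf):
    """NSML of Eqs. (6)-(8) for a low-frequency band lf."""
    p = np.pad(lf.astype(float), 1, mode='reflect')                  # border handling (assumed)
    X = (np.abs(2*p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]) +
         np.abs(2*p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]))       # Eq. (7)
    w = np.array([[1, 2, 1], [2, 3, 2], [1, 2, 1]]) / 15.0           # Eq. (8)
    return convolve(X, w, mode='reflect')                            # Eq. (6)

def nmsf(hf, win=3):
    """NMSF of Eqs. (9)-(10): row, column and both diagonal frequency terms."""
    hf = hf.astype(float)
    rf = np.zeros_like(hf); rf[:, 1:] = (hf[:, 1:] - hf[:, :-1]) ** 2     # row term
    cf = np.zeros_like(hf); cf[1:, :] = (hf[1:, :] - hf[:-1, :]) ** 2     # column term
    d1 = np.zeros_like(hf); d1[1:, 1:] = (hf[1:, 1:] - hf[:-1, :-1]) ** 2 # main diagonal
    d2 = np.zeros_like(hf); d2[1:, 1:] = (hf[:-1, 1:] - hf[1:, :-1]) ** 2 # anti-diagonal
    # Aggregate locally so the measure is defined per pixel, as Eq. (12) requires.
    return sum(uniform_filter(d, size=win, mode='reflect') for d in (rf, cf, d1, d2))
```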
Fig. 3. The process flow of the proposed fusion approach.
Step 7: Finally, the fused image is reconstructed by applying the inverse DRT to the fused lf and hf coefficients:

$$F = DRT^{-1}(lf_F^{DRT}, hf_F^{DRT}) \qquad (17)$$
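Putting Steps 1–7 together, a high-level sketch of the whole pipeline might look as follows. Since no standard library implements the ripplet transform, `drt` and `idrt` are hypothetical stand-ins for a forward/inverse DRT (c = 1, d = 4) that returns one low-frequency band and a list of directional high-frequency bands; `pcnn_firing_times`, `nsml`, `nmsf` and `W` are the sketches given earlier.

```python
import numpy as np

def fuse_ct_mr(R, S, drt, idrt, W, n_iter=150):
    """Sketch of the proposed fusion: DRT decomposition (Step 1), NSML/NMSF-fed
    PCNNs (Steps 2-5) and firing-time selection (Step 6, Eqs. (15)-(16))."""
    lfR, hfR = drt(R)                                  # Step 1, Eq. (5)
    lfS, hfS = drt(S)
    # Low-frequency band: the NSML feeds the PCNN; select by firing times.
    tR = pcnn_firing_times(nsml(lfR), W, n_iter=n_iter)
    tS = pcnn_firing_times(nsml(lfS), W, n_iter=n_iter)
    lfF = np.where(tR >= tS, lfR, lfS)                 # Eq. (15)
    # High-frequency bands: the NMSF feeds the PCNN for every directional subband.
    hfF = []
    for hR, hS in zip(hfR, hfS):
        tR = pcnn_firing_times(nmsf(hR), W, n_iter=n_iter)
        tS = pcnn_firing_times(nmsf(hS), W, n_iter=n_iter)
        hfF.append(np.where(tR >= tS, hR, hS))         # Eq. (16)
    return idrt(lfF, hfF)                              # Step 7, Eq. (17)
```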
4. Experimental results and discussions

Several experiments have been performed to analyze the effectiveness of the proposed MIF (P-MIF) approach. The fusion results computed by the P-MIF approach on the different datasets of source CT and MR images, taken from the Whole Brain Atlas (available at http://www.med.harvard.edu/), are presented below.
Table 1
Performance measures used to evaluate the fusion performance obtained by the proposed and other existing approaches.

1. Entropy (En): $EN = -\sum_{i=0}^{L-1} p(i) \log_2 p(i)$

2. Standard deviation (STD): $STD = \sqrt{\dfrac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[X(i,j) - \dfrac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} X(i,j)\right]^2}$

3. Spatial frequency (SF) [27]: $SF = \sqrt{RF^2 + CF^2}$, where $RF = \sqrt{\dfrac{1}{M(N-1)} \sum_{i=1}^{M} \sum_{j=2}^{N} \left(X(i,j-1) - X(i,j)\right)^2}$ and $CF = \sqrt{\dfrac{1}{(M-1)N} \sum_{i=2}^{M} \sum_{j=1}^{N} \left(X(i,j) - X(i-1,j)\right)^2}$

4. Mutual information (MI) [51]: $MI = I(x_r; x_F) + I(x_s; x_F)$, where $I(x_r; x_F) = \sum_{u=1}^{L} \sum_{v=1}^{L} h_{r,F}(u,v) \log_2 \dfrac{h_{r,F}(u,v)}{h_r(u)\, h_F(v)}$

5. Edge index ($Q^{rs/F}$) [52]: $Q^{rs/F} = \dfrac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left(Q^{rF}(i,j)\, w^r(i,j) + Q^{sF}(i,j)\, w^s(i,j)\right)}{\sum_{i=1}^{M} \sum_{j=1}^{N} \left(w^r(i,j) + w^s(i,j)\right)}$
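A minimal sketch of four of these measures follows (the edge index $Q^{rs/F}$ of [52] is omitted for brevity); the 256-bin, 8-bit histogram is an assumption about the grey-level range of the images.

```python
import numpy as np

def entropy(X, bins=256):
    """EN: Shannon entropy of the grey-level histogram."""
    p, _ = np.histogram(X, bins=bins, range=(0, 256))
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def std_dev(X):
    """STD: standard deviation about the image mean."""
    return np.sqrt(np.mean((X - X.mean()) ** 2))

def spatial_frequency(X):
    """SF = sqrt(RF^2 + CF^2) with row/column frequencies as in Table 1."""
    rf2 = np.mean((X[:, 1:] - X[:, :-1]) ** 2)
    cf2 = np.mean((X[1:, :] - X[:-1, :]) ** 2)
    return np.sqrt(rf2 + cf2)

def mutual_information(A, F, bins=256):
    """I(A;F) from the joint histogram; MI in Table 1 is I(r;F) + I(s;F)."""
    h, _, _ = np.histogram2d(A.ravel(), F.ravel(), bins=bins)
    pxy = h / h.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```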
Fig. 4. (a) Original CT images (b) Original MR images (c) Fused images obtained by the P-MIF approach.
Moreover, the analysis of the fused images has been carried out both qualitatively and quantitatively. The qualitative analysis involves a visual comparison between the source and the fused images, while the quantitative analysis computes several performance measures: a set of predefined indices that are often required to confirm the correctness of a visual observation. These performance measures are given in Table 1. This section illustrates the performance analysis of the fusion approaches applied to the different datasets. Several experiments were performed on nine pairs of source CT and MR images, as shown in Fig. 4. All the images are taken from the same patients, and all are considered preregistered. For all the experiments, the following parameters were selected on the basis of successive experiments: DRT parameters c = 1 and d = 4 for the decomposition of the input reference images, and PCNN parameters $\alpha_L = 0.3$, $\alpha_T = 0.1$, $\beta = 0.2$, $V_L = 1$, $V_T = 10$, $W = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & \sqrt{2} & 1 \\ \sqrt{2} & 0 & \sqrt{2} \\ 1 & \sqrt{2} & 1 \end{bmatrix}$ and iter = 150.
4.1. Experiment 1

This section presents the subjective analysis of the fusion performance obtained by the proposed MIF approach. For this experiment, nine pairs of source images have been considered, as shown in Fig. 4(a) and (b), and the corresponding fused images are presented in Fig. 4(c).
Table 2
Performance indices obtained by the P-MIF scheme for the fused images shown in Fig. 4.

Dataset   EN       MI       STD       SF       Q^{rs/F}
#1        4.9681   2.9125   85.9565   6.5978   0.6142
#2        5.1254   3.2145   82.7658   6.9045   0.5541
#3        5.4298   2.8754   84.8644   8.8431   0.5286
#4        4.8954   3.2214   87.2412   8.9176   0.5784
#5        5.2125   3.0452   91.0145   7.7455   0.5544
#6        5.5232   3.3745   89.5877   5.6754   0.4916
#7        5.0147   3.1712   85.2565   6.9454   0.5014
#8        4.9876   3.1420   93.0147   6.2787   0.5115
#9        4.9512   3.2419   96.6752   6.4785   0.6210
From the resultant images presented in Fig. 4, it can be observed that the output fused images carry more diagnostic information, extracted by the proposed MIF approach. This is also verified by the quantitative measures computed for the proposed approach and presented in Table 2. From the results in Table 2, it can be seen that all the fused results also have good quantitative indices, achieving significant values of En, MI, STD, SF and the quality index. To analyze the merit of the proposed approach further, the bar graph in Fig. 5 presents the improved En, STD and SF values computed for the fused images compared with the reference images. Higher En values indicate more informative content preserved in the fused images, and higher contrast is indicated by higher STD values.
Fig. 5. Comparative En, STD and SF values obtained for the fused images and the reference CT and MR images.
4.2. Experiment 2

For a detailed comparative analysis, all the CT and MR image datasets were processed using the proposed MIF approach and the other existing methods, some of whose results are presented in Fig. 6. The merit of the proposed MIF approach is examined through a comparative analysis against the fusion performances obtained by the following existing fusion approaches.

Method 1: Image fusion based on the WT (WT_AVG_MAX), in which averaging is used for fusing the low-frequency (lf) subimage coefficients and the maximum selection rule for fusing the high-frequency (hf) detail coefficients [10,17]. A third-level decomposition is used for each source image individually.

Method 2: Nonsubsampled contourlet (NSCT) decomposition based fusion with mean and maximum rules (NSCT_AVG_MAX) for the lf and hf subbands, as discussed in [27].

Method 3: The fusion method (NSCT_MAX_MSF_PCNN) mentioned in [26,27], with the same parameters.

Method 4: Image fusion using the PCNN in the NSCT domain (NSCT_RE_NSML_PCNN) as mentioned in [28], with the same level of NSCT decomposition.

Method 5: Image fusion in the nonsubsampled shearlet (NSST) domain with the maximum selection rule and the PCNN model (NSST_MAX_SF_PCNN) as described in [30]. In this approach, a neuron is stimulated by the SF of the high-frequency NSST coefficients, with decomposition levels [2, 3, 3] and PCNN parameters $\alpha_L = 0.3$, $\alpha_T = 0.1$, $\beta = 0.2$, $V_L = 1$, $V_T = 10$, $W = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & \sqrt{2} & 1 \\ \sqrt{2} & 0 & \sqrt{2} \\ 1 & \sqrt{2} & 1 \end{bmatrix}$ and n = 200.
Method 6: The NSST based fusion scheme with the same parameters as discussed in [32].

Method 7: The proposed MIF approach with the same parameters as in the first experiment.

To compare the fusion results obtained by the different approaches mentioned above, the fused images are shown in Fig. 6(c)–(i). From the results in Fig. 6, it can be seen that the proposed P-MIF approach succeeds
in retaining both the soft tissue content and the bony information in comparison with the other fusion approaches. Moreover, to validate the subjective results obtained by the different fusion approaches, Table 3 presents a comparative view of the computed quantitative measures. The values of the performance measures listed in Table 3 confirm the superiority of the proposed P-MIF approach over the others through its higher En and STD values. In a few cases the WT based approach showed comparatively high En values, but these were again lower than those of the P-MIF approach. Table 3 also presents larger MI and SF values, signifying the preservation of more information in the fused images obtained by the P-MIF approach as compared with the other methods. Moreover, Table 3 also shows higher $Q^{rs/F}$ values, indicating better preservation of the edge details by the P-MIF approach. To support the quantitative results presented in Table 3, the averaged performance indices evaluated for all the fusion approaches mentioned above are provided in Table 4. The following points are summarized from the results shown in Table 4.
1. The proposed P-MIF approach achieves En values higher by 67.16% and 32.03% than the reference CT and MR images, respectively. Furthermore, the P-MIF approach obtains approximately 9.5%, 8.87%, 6.55%, 5.92%, 4.85% and 2.7% higher En values than the existing methods 1–6, respectively. These results reflect the presence of more detailed information in the fused images obtained by the P-MIF approach.
2. The proposed P-MIF approach also gains approximately 29.55%, 18.33%, 4.14%, 3.5%, 3.24% and 2% higher MI values than methods 1–6, respectively. Moreover, it has approximately 35.85% and 13% higher SF values than the original CT and MR images, and approximately 5.1–19.71% higher SF values than the other existing PCNN based approaches. Higher MI and SF values signify the preservation of more diagnostic informative content and a higher clarity level.
3. The P-MIF approach also achieves STD values larger by 39.46%, 41.53%, 1.82%, 1.66%, 1.5% and 1.1% than methods 1–6,
Fig. 6. Comparative visual results obtained by the fusion methods applied to the reference (a) CT and (b) MR images. The fused images obtained by (c) Method 1 (d) Method 2 (e) Method 3 (f) Method 4 (g) Method 5 (h) Method 6 (i) Proposed method.
respectively. The increased STD values also illustrate the higher contrast attained by the proposed P-MIF approach.
4. Finally, the P-MIF approach also gains larger $Q^{rs/F}$ values, which indicate better quality of the fused images with the preservation of more edges, achieving a 3.2–15.1% increase in $Q^{rs/F}$ compared with the PCNN based approaches.
Therefore, based on both analyses of the fusion results, it is found that the P-MIF approach outperforms the other fusion methods and provides good quality fused images with better preservation of edge information.
4.3. Experiment 3

This section presents a further investigation of the proposed P-MIF approach against other state-of-the-art approaches applied to another CT-MR image pair, shown in Fig. 7(a) and (b). From the fused images obtained by the proposed and other approaches, shown in Fig. 7(c)–(ab), it can be seen that the P-MIF approach outperforms the other approaches in terms of visual quality. Apart from the visual analysis, quantitative results are also presented in Table 5. The results in Table 5 likewise show larger MI values for the P-MIF approach than for the other existing approaches, except for scheme [8], in which noise and artifacts are also suppressed before fusing the images. Moreover, the proposed P-MIF approach gives better quantitative and qualitative fusion results than the other IF approaches.
Table 3
Comparative analysis between the performance measures evaluated by the different fusion methods. Columns (left to right): Method 1, Method 2, Method 3, Method 4, Method 5, Method 6, Proposed.

Image #1
En        4.7939   4.8013   4.8604   4.8878   4.8970   4.9426   4.9681
MI        2.3288   2.5628   2.8607   2.8624   2.8813   2.8933   2.9125
STD       62.674   60.324   84.363   84.631   84.659   85.033   85.957
SF        5.8714   6.0536   6.2186   6.3458   6.4386   6.5816   6.5978
Q^{rs/F}  0.2929   0.3930   0.5772   0.5784   0.5797   0.5930   0.6142

Image #2
En        4.8940   4.8601   4.9361   4.9362   4.9914   5.0335   5.1254
MI        2.5240   2.7564   3.0489   3.0604   3.0644   3.0853   3.2145
STD       59.750   57.183   81.587   81.544   81.737   82.035   82.766
SF        6.6219   6.2705   6.6242   6.3185   6.6959   6.8577   6.9045
Q^{rs/F}  0.2586   0.3460   0.5088   0.5098   0.5134   0.5496   0.5541

Image #3
En        5.2507   5.2960   5.3184   5.3255   5.3486   5.4243   5.4298
MI        2.1770   2.4050   2.8223   2.8377   2.8432   2.8509   2.8754
STD       57.008   60.234   83.368   83.393   83.492   83.814   84.864
SF        7.1815   7.1192   7.3292   7.4228   7.6077   8.8396   8.8431
Q^{rs/F}  0.2502   0.3401   0.4866   0.4962   0.5122   0.5255   0.5286

Image #4
En        4.1145   4.2301   4.3675   4.4298   4.4379   4.6794   4.8954
MI        2.6323   2.7863   3.1358   3.1544   3.1570   3.2026   3.2214
STD       62.414   60.492   85.631   85.642   85.948   86.153   87.241
SF        5.5684   5.3852   5.5829   5.6298   5.7063   5.8769   8.9176
Q^{rs/F}  0.2669   0.3456   0.4975   0.5008   0.5112   0.5514   0.5784

Image #5
En        4.8650   4.7970   4.9108   4.9413   4.9528   4.9927   5.2125
MI        2.3399   2.5112   2.9532   2.9641   2.9722   2.9886   3.0452
STD       65.749   63.178   89.489   89.643   89.653   89.908   91.015
SF        6.5576   6.2101   6.6292   6.6472   6.8515   7.7340   7.7455
Q^{rs/F}  0.2405   0.3328   0.4729   0.4843   0.4919   0.5188   0.5544

Image #6
En        5.2127   5.1201   5.2727   5.2839   5.2989   5.3430   5.5232
MI        2.5634   3.0783   3.1384   3.1822   3.2226   3.2956   3.3745
STD       65.380   63.842   86.275   86.546   86.695   87.186   89.588
SF        5.1216   4.8754   5.4210   5.5582   5.5828   5.7153   5.6754
Q^{rs/F}  0.2769   0.3784   0.4375   0.4480   0.4494   0.4685   0.4916

Image #7
En        4.5105   4.6494   4.7240   4.7392   4.7826   4.9018   5.0147
MI        2.4310   2.6086   3.0406   3.0475   3.0502   3.0973   3.1712
STD       61.429   59.097   84.361   84.394   84.433   84.852   85.257
SF        6.0779   6.2537   6.7005   6.7025   6.8568   6.9213   6.9454
Q^{rs/F}  0.2395   0.3118   0.4203   0.4211   0.4244   0.4969   0.5014

Image #8
En        4.4542   4.5035   4.6935   4.7231   4.7247   4.8078   4.9876
MI        2.3354   2.4772   3.0389   3.0454   3.0310   3.0724   3.1420
STD       66.890   65.102   91.667   92.162   92.174   92.686   93.015
SF        5.5161   5.6107   5.8879   5.8938   6.1315   6.2712   6.2787
Q^{rs/F}  0.2592   0.3443   0.4234   0.4236   0.4261   0.5002   0.5115

Image #9
En        4.0108   4.0645   4.1897   4.2629   4.5431   4.7696   4.9512
MI        2.4357   2.5961   3.0390   3.0902   3.0913   3.1514   3.2419
STD       69.762   68.909   95.402   95.421   95.789   96.160   96.675
SF        6.3112   5.8846   6.2845   6.2919   6.3453   6.4666   6.4785
Q^{rs/F}  0.2758   0.4086   0.4825   0.5211   0.5231   0.5970   0.6210
Table 4
Averaged performance evaluation parameters computed by the P-MIF and other fusion approaches.

Methods     EN                MI                STD                SF                Q^{rs/F}
Source CT   3.0647 ± 0.3035   –                 83.9079 ± 5.4953   5.2680 ± 0.6784   –
Source MR   3.8803 ± 0.3366   –                 59.3509 ± 4.9843   6.3309 ± 0.8517   –
Method 1    4.6785 ± 0.4401   2.4186 ± 0.1404   63.4506 ± 3.8991   6.0920 ± 0.6451   0.2623 ± 0.0177
Method 2    4.7057 ± 0.3802   2.6478 ± 0.2069   62.5210 ± 3.5133   5.9763 ± 0.6386   0.3694 ± 0.0297
Method 3    4.8081 ± 0.3713   3.0086 ± 0.1102   86.9047 ± 4.4341   6.2976 ± 0.6017   0.4785 ± 0.0490
Method 4    4.8366 ± 0.3499   3.0271 ± 0.1189   87.0419 ± 4.4968   6.3123 ± 0.5819   0.4870 ± 0.0503
Method 5    4.8863 ± 0.3073   3.0348 ± 0.1218   87.1756 ± 4.5285   6.4685 ± 0.6267   0.4924 ± 0.0508
Method 6    4.9883 ± 0.2508   3.0708 ± 0.1423   87.5363 ± 4.5573   6.8071 ± 0.9693   0.5334 ± 0.0435
Proposed    5.1231 ± 0.2230   3.1332 ± 0.1616   88.4863 ± 4.4637   7.1541 ± 1.1257   0.5506 ± 0.0471
Fig. 7. Comparative visual analysis done by the different IF approaches (a) CT image (b) MR image (c) Scheme [2] (d) Scheme [2] (e) Scheme [27,53] (f) Scheme [13,27] (g) Scheme [7,8,12] (h) Scheme [8] (i) Scheme [31,35] (j) Scheme [35] (k) Scheme [24,27] (l) Scheme [33] (m) Scheme [17,33] (n) Scheme [17,33] (o) Scheme [10,17] (p) Scheme [17] (q) Scheme [16] (r) Scheme [37] (s) Scheme [25,27] (t) Scheme [27] (u) Scheme [24] (v) Scheme [32] (w) Scheme [30] (x) Scheme [28] (y) Scheme [43] (z) Scheme [34] (aa) Scheme [32] (ab) P-MIF approach.
A medical expert does not rely on predefined measures when analyzing fused images individually. Therefore, to evaluate the performance obtained by the different multimodal fusion approaches, all the fused images were visually inspected by experienced experts before and after the fusion process in terms of visibility, resolution, contrast, preservation of details and preservation of diagnostic content. Each image was assigned a score on a one-to-five scale (poor = 1, fair = 2, good = 3, very good = 4 and excellent = 5). When similar scores were given to more than one image, those images were evaluated again. Table 6 shows the averaged results (mean ± standard deviation) of the visual evaluation provided by the medical experts. From Table 6, it is observed that the expert analysis also supports the subjective and objective analyses of the previous sections.
5. Conclusion

This paper presents a framework for fusing CT and MR medical images based on DRT decomposition and an NSML- and NMSF-motivated BIFNN model. In the P-MIF approach, the DRT provides decomposition at several scales and directions through the two newly added parameters. To analyze the performance of the P-MIF approach, several experiments have been performed on different datasets, and the performance is evaluated in both a subjective and an objective manner. Moreover, the fusion results achieved by the proposed P-MIF approach are compared with other decomposition based methods, such as the WT, CVT, NSCT and NSST, and with several PCNN models. From the results presented in the paper, it is concluded that the DRT decomposition helps to extract the significant edge details from the reference images, and that the fusion rules provide higher activity and clarity levels for the diagnostic details present in the reference images. The presented approach takes more time than wavelet based fusion techniques, but this is acceptable given the much improved quality of the fused images. Therefore, in our future work, more emphasis will be given to reducing the computational time and to estimating all the parameters of the neural model adaptively instead of using fixed values. On the basis of the experimental results, it is concluded that the proposed approach provides improved performance compared with the others by preserving more diagnostic information from the source CT and MR images, along with better visual quality of the fused images.
Table 5
Performance measures obtained by the P-MIF approach and other fusion methods.

Fusion scheme     MI      EN      STD
Scheme [2]        2.368   –       –
Scheme [2]        2.410   –       –
Scheme [27,53]    2.057   4.982   33.65
Scheme [13,27]    2.714   6.729   57.97
Scheme [7,8,12]   6.263   –       54.15
Scheme [8]        6.273   –       64.70
Scheme [31,35]    3.744   –       –
Scheme [35]       3.985   –       –
Scheme [24,27]    2.529   6.387   53.82
Scheme [33]       –       5.990   32.90
Scheme [17,33]    –       6.314   32.66
Scheme [17,33]    –       5.960   32.55
Scheme [10,17]    3.073   6.096   41.56
Scheme [17]       2.748   6.199   40.56
Scheme [16]       3.318   6.065   40.22
Scheme [37]       –       6.730   60.32
Scheme [25,27]    3.734   6.771   59.83
Scheme [27]       3.452   6.767   59.85
Scheme [24]       –       6.387   53.82
Scheme [32]       3.774   6.777   62.03
Scheme [30]       3.793   6.780   60.02
Scheme [28]       4.100   6.801   60.11
Scheme [43]       3.586   –       34.85
Scheme [34]       5.889   –       20.89
Scheme [32]       4.155   6.835   62.17
P-MIF scheme      6.269   6.848   65.29
Table 6
Averaged scores of the visual comparison of the fused images.

Fusion scheme     Averaged score (mean ± standard deviation)
Scheme [2]        2.215 ± 0.699
Scheme [2]        2.643 ± 0.929
Scheme [27,53]    1.858 ± 0.663
Scheme [13,27]    1.929 ± 0.475
Scheme [7,8,12]   3.143 ± 0.865
Scheme [8]        2.934 ± 0.884
Scheme [31,35]    1.929 ± 0.731
Scheme [35]       2.929 ± 0.712
Scheme [24,27]    3.500 ± 0.759
Scheme [33]       2.308 ± 0.482
Scheme [17,33]    1.358 ± 0.498
Scheme [17,33]    1.429 ± 0.514
Scheme [10,17]    1.143 ± 0.364
Scheme [17]       1.858 ± 0.865
Scheme [16]       2.358 ± 0.634
Scheme [37]       2.715 ± 0.469
Scheme [25,27]    3.215 ± 0.698
Scheme [27]       3.215 ± 0.426
Scheme [24]       3.143 ± 0.535
Scheme [32]       3.715 ± 0.469
Scheme [30]       3.358 ± 0.492
Scheme [28]       3.929 ± 0.475
Scheme [43]       3.429 ± 0.514
Scheme [34]       4.072 ± 0.616
Scheme [32]       4.143 ± 0.663
P-MIF scheme      4.358 ± 0.634
References

[1] W. Huang, Z. Jing, Evaluation of focus measures in multi-focus image fusion, Pattern Recognit. Lett. 28 (2007) 493–500.
[2] Z. Wang, Y. Ma, Medical image fusion using m-PCNN, Inf. Fusion 9 (2008) 176–185.
[3] S. Li, J.T. Kwok, Y. Wang, Multifocus image fusion using artificial neural networks, Pattern Recognit. Lett. 23 (2002) 985–997.
[4] M. Li, W. Cai, Z. Tan, A region-based multi-sensor image fusion scheme using pulse-coupled neural network, Pattern Recognit. Lett. 27 (2006) 1948–1956.
[5] T. Wan, C. Zhu, Z. Qin, Multifocus image fusion based on robust principal component analysis, Pattern Recognit. Lett. 34 (2013) 1001–1008.
[6] W. Zhijun, D. Ziou, C. Armenakis, D. Li, L. Qingquan, A comparative analysis of image fusion methods, IEEE Trans. Geosci. Remote Sens. 43 (2005) 1391–1402.
[7] J. Nirosha Joshitha, R.M. Selin, Image fusion using PCA in multifeature based palmprint recognition, Int. J. Soft Comp. Eng. 2 (2012) 226–230.
[8] Z. Liu, H. Yin, Y. Chai, S.X. Yang, A novel approach for multimodal medical image fusion, Expert Syst. Appl. 41 (2014) 7425–7435.
[9] A. Toet, Image fusion by a ratio of low-pass pyramid, Pattern Recognit. Lett. 9 (1989) 245–253.
[10] H. Li, B.S. Manjunath, S.K. Mitra, Multisensor image fusion using the wavelet transform, Graph. Models Image Process. 57 (1995) 235–245.
[11] G. Pajares, J. Manuel de la Cruz, A wavelet-based image fusion tutorial, Pattern Recognit. 37 (2004) 1855–1872.
[12] P.S. Pradhan, R.L. King, N.H. Younan, D.W. Holcomb, Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion, IEEE Trans. Geosci. Remote Sens. 44 (2006) 3674–3686.
[13] Y. Yang, D.S. Park, S. Huang, N. Rao, Medical image fusion via an effective wavelet-based approach, EURASIP J. Adv. Signal Process. 2010 (2010) 44.
[14] M.N. Do, M. Vetterli, The finite ridgelet transform for image representation, IEEE Trans. Image Process. 12 (2003) 16–28.
[15] J.L. Starck, E.J. Candes, D.L. Donoho, The curvelet transform for image denoising, IEEE Trans. Image Process. 11 (2002) 670–684.
[16] Q.-g. Miao, C. Shi, P.-f. Xu, M. Yang, Y.-b. Shi, A novel algorithm of image fusion using shearlets, Opt. Commun. 284 (2011) 1540–1547.
[17] S. Li, B. Yang, J. Hu, Performance comparison of different multi-resolution transforms for image fusion, Inf. Fusion 12 (2011) 74–84.
[18] S. Li, B. Yang, Multifocus image fusion by combining curvelet and wavelet transform, Pattern Recognit. Lett. 29 (2008) 1295–1301.
[19] N. Kaur, J. Kaur, A novel method for pixel level image fusion based on curvelet transform, Int. J. Res. Eng. Technol. 1 (2011) 38–44.
[20] S. Richa, P. Om, K. Ashish, Local energy-based multimodal medical image fusion in curvelet domain, IET Comput. Vis. 10 (2016) 513–527.
[21] S. Yang, M. Wang, L. Jiao, R. Wu, Z. Wang, Image fusion based on a new contourlet packet, Inf. Fusion 11 (2010) 78–84.
[22] P. Ganasala, V. Kumar, CT and MR image fusion scheme in nonsubsampled contourlet transform domain, J. Digit. Imaging (2014) 1–12.
[23] V. Bhateja, H. Patel, A. Krishn, A. Sahu, A. Lay-Ekuakille, Multimodal medical image sensor fusion framework using cascade of wavelet and contourlet transform domains, IEEE Sens. J. 15 (2015) 6783–6790.
[24] L. Yang, B. Guo, W. Ni, Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform, Neurocomputing 72 (2008) 203–211.
[25] X.-B. Qu, J.-W. Yan, H.-Z. Xiao, Z.-Q. Zhu, Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain, Acta Autom. Sin. 34 (2008) 1508–1514.
[26] S. Das, M.K. Kundu, Ripplet based multimodality medical image fusion using pulse-coupled neural network and modified spatial frequency, in: IEEE International Conference on Recent Trends in Information Systems, 2011, pp. 229–234.
[27] S. Das, M. Kundu, NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency, Med. Biol. Eng. Comput. 50 (2012) 1105–1114.
[28] Y. Chai, H. Li, X. Zhang, Multifocus image fusion based on features contrast of multiscale products in nonsubsampled contourlet transform domain, Optik – Int. J. Light Electron Optics 123 (2012) 569–581.
[29] W. Kong, Y. Lei, Technique for image fusion between gray-scale visual light and infrared images based on NSST and improved RF, Optik – Int. J. Light Electron Optics 124 (2013) 6423–6431.
[30] P. Geng, Z. Wang, Z. Zhang, Z. Xiao, Image fusion by pulse couple neural network with shearlet, Opt. Eng. 51 (2012) 067005.
[31] L. Wang, B. Li, L.-F. Tian, Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients, Inf. Fusion 19 (2014) 20–28.
[32] S. Singh, D. Gupta, R.S. Anand, V. Kumar, Nonsubsampled shearlet based CT and MR medical image fusion using biologically inspired spiking neural network, Biomed. Signal Process. Control 18 (2015) 91–101.
[33] R. Singh, A. Khare, Fusion of multimodal medical images using Daubechies complex wavelet transform – a multiresolution approach, Inf. Fusion 19 (2014) 49–60.
[34] X. Xu, Y. Wang, S. Chen, Medical image fusion using discrete fractional wavelet transform, Biomed. Signal Process. Control 27 (2016) 103–111.
[35] Z. Xu, Medical image fusion using multi-level local extrema, Inf. Fusion 19 (2014) 38–48.
[36] J. Xu, L. Yang, D. Wu, Ripplet: a new transform for image processing, J. Vis. Commun. Image Represent. 21 (2010) 627–639.
[37] C. Kavitha, C. Chellamuthu, R. Rajesh, Medical image fusion using combined discrete wavelet and ripplet transforms, Proc. Eng. 38 (2012) 813–820.
[38] Z. Wang, Y. Ma, F. Cheng, L. Yang, Review of pulse-coupled neural networks, Image Vis. Comput. 28 (2010) 5–13.
[39] L. Fu, L. Yifan, L. Xin, Image fusion based on nonsubsampled contourlet transform and pulse coupled neural networks, in: IEEE International Conference on Intelligent Computation Technology and Automation (ICICTA), 2011, pp. 572–575.
[40] N. Wang, Y. Ma, W. Wang, DWT-based multisource image fusion using spatial frequency and simplified pulse coupled neural network, J. Multimedia 9 (2014) 159–165.
[41] M. Monica Subashini, S.K. Sahoo, Pulse coupled neural networks and its applications, Expert Syst. Appl. 41 (2014) 3965–3974.
[42] M.M. Deepika, V. Vaithyanathan, An efficient method to improve the spatial property of medical images, J. Theor. Appl. Inf. Technol. 35 (2012) 141–148.
[43] G. Wang, X. Xu, X. Jiang, R. Nie, A modified model of pulse coupled neural networks with adaptive parameters and its application on image fusion, ICIC Exp. Lett. 6 (2015) 2523–2530.
[44] J.-x. Xia, X.-h. Duan, S.-c. Wei, Application of adaptive PCNN based on neighborhood to medical image fusion, Appl. Res. Comp. 10 (2011) 3929–3933.
[45] F. Liu, J. Li, H. Caiyun, Image fusion algorithm based on simplified PCNN in nonsubsampled contourlet transform domain, Proc. Eng. 29 (2012) 1434–1438.
[46] P. Ganasala, V. Kumar, Feature-motivated simplified adaptive PCNN-based medical image fusion algorithm in NSST domain, J. Digit. Imaging 29 (2016) 73–85.
[47] X. Liu, W. Mei, H. Du, Multimodality medical image fusion algorithm based on gradient minimization smoothing filter and pulse coupled neural network, Biomed. Signal Process. Control 30 (2016) 140–148.
[48] X. Xu, D. Shan, G. Wang, X. Jiang, Multimodal medical image fusion using PCNN optimized by the QPSO algorithm, Appl. Soft Comput. 46 (2016) 588–595.
[49] D. Agrawal, J. Singhai, Multifocus image fusion using modified pulse coupled neural network for improved image quality, IET Image Process. 4 (2010) 443–451.
[50] G.S. El-taweel, A.K. Helmy, Image fusion scheme based on modified dual pulse coupled neural network, IET Image Process. 7 (2013) 407–414.
[51] Y. Yang, Y. Que, S.-Y. Huang, P. Lin, Technique for multi-focus image fusion based on fuzzy-adaptive pulse-coupled neural network, Signal Image Video Process. 11 (2017) 439–446.
[52] C.S. Xydeas, V. Petrovic, Objective image fusion performance measure, Electron. Lett. 36 (2000) 308–309.
[53] T. Hua, F. Ya-nan, W. Pei-Guang, Image fusion algorithm based on regional variance and multi-wavelet bases, in: 2nd International Conference on Future Computer and Communication, 2010, pp. 792–795.