Image fusion using intuitionistic fuzzy sets

P. Balasubramaniam, V.P. Ananthi
Department of Mathematics, Gandhigram Rural Institute, Deemed University, Gandhigram 624 302, Tamil Nadu, India
Corresponding author: P. Balasubramaniam. Tel.: +91 451 2452371; fax: +91 451 2453071. E-mail address: [email protected].
Article info
Article history: Received 19 June 2013; Received in revised form 9 October 2013; Accepted 20 October 2013; Available online xxxx
Keywords: Intuitionistic fuzzy set; Intuitionistic fuzzy image; Entropy; Image fusion
Abstract
Image fusion is the process of combining two or more images obtained from different environments into a single image that is more useful for further image processing tasks. Image registration and image fusion are of great importance in the defence and civilian sectors, particularly for recognizing ground/air force vehicles and in medical imaging. In this paper a new way of fusing two or more images using maximum and minimum operations on intuitionistic fuzzy sets (IFSs) is presented. IFSs are well suited to image processing, since every digital image carries a lot of uncertainty. In the processing phase, images are transformed into intuitionistic fuzzy images (IFIs). Entropy is employed to obtain the optimum value of the parameter in the membership and non-membership functions. The resulting IFIs are then decomposed into image blocks, and the corresponding blocks of the images are recombined according to the counts of blackness and whiteness of the blocks. This paper evaluates the performance of simple averaging (AVG), principal component analysis (PCA), discrete wavelet transform (DWT), stationary wavelet transform (SWT), dual tree complex wavelet transform (DTCWT), multi-resolution singular value decomposition (MSVD), nonsubsampled contourlet transform (NSCT) and the proposed IFS method in terms of various performance measures. The experimental and comparison results show that luminance and contrast are of great importance for image processing and prove that the proposed method is better than all the other methods. © 2013 Elsevier B.V. All rights reserved.
1. Introduction
Image fusion is significant in image analysis and computer vision; for more details see [1–4] and the references therein. In recent years, various types of image sensors have been used in satellite imaging technology, robotics, biometrics and fingerprint scanners, but the data obtained from these sensors are redundant and incompatible. Images acquired by magnetic resonance imaging offer soft-tissue details, while computed tomography shows the details of bone, blood vessels and so on. It is therefore desirable to obtain an image that simultaneously renders both the soft tissue and other parts such as fat and bone, to assist humans in visually detecting abnormalities. In this paper, images from various disciplines such as remote sensing and medicine are considered for fusion. Spatial fusion and transform fusion are the two main techniques in image fusion. Depending on the phase of unification, fusion schemes are classified into three levels, namely pixel, feature and decision level [5]. Pixel-level fusion directly combines the pixel values of the fusing images and renders a composite image [6]. It has attracted many researchers, and various techniques have been developed, such as the Laplacian pyramid, PCA [7], the gradient pyramid and the wavelet transform. The simplest image fusion method just takes the average of the pixel values of the source images. The
AVG fusion algorithm, however, shows degraded performance. To overcome this problem many types of multi-resolution transforms have emerged and been applied to image fusion, among them pyramid decompositions [8–10], wavelet transforms [11–23], MSVD [24–26], the curvelet transform [27], the contourlet transform (CT) [28–30] and NSCT [31–34]. Image fusion by singular value decomposition (SVD) performs almost like the wavelet approach. The SVD does not have a fixed set of basis vectors; its basis vectors depend on the data set. Fusion using the SVD works very well on a pixel-by-pixel basis and outperforms PCA. The MSVD provides an analysis tool for inquiring into the properties (isotropy, sphericity, self-similarity) of signals, and it is viewed as faster than the approximated SVD; for details see [24]. Zhang and Blum [12] demonstrated a fusion technique using multiscale decomposition to obtain a highly featured picture. Pajares and Cruz [13] gave some ideas on methods of wavelet-based image fusion and compared several wavelet classes at various resolution levels. Wavelet decompositions and multiscale transforms have two-dimensional bases that are tensor products of one-dimensional basis functions. To overcome such problems, different multiscale transforms such as the curvelet and the contourlet have been suggested in [27,28], respectively. Shift invariance is desirable for image denoising and image fusion, and the CT lacks it. In 2006, Cunha and Zhou [31] proposed the NSCT, a shift-invariant version of the contourlet transform; since then it has been successfully utilized in various phases of image processing such as image denoising, image enhancement and image fusion. NSCT outperforms wavelet-based fusion algorithms
[33]. However, in every phase of image processing there exist many uncertainties. These uncertainties can be removed using fuzzy sets, which helps to bring out the luminance and contrast of the image. Fuzzy set (FS) theory was proposed by Zadeh [35] in 1965 and has since been widely used in various fields. However, the membership function of a fuzzy set has only a single value; it cannot express the evidence of support, opposition and hesitation at the same time. Atanassov [36] introduced the concept of the IFS, a generalization of the traditional FS, in 1986. IFSs take into account three aspects of information simultaneously: the degree of membership, of non-membership and of hesitation. The non-membership function can better describe the fuzzy concept of "non-this, non-that" and more delicately portray the ambiguous nature of the objective world. Thus, IFSs have greater flexibility and practicality than traditional FSs in the treatment of fuzzy information and uncertainty. Every digital image has some fuzziness in it; for example, poor illumination makes medical images uncertain, and such fuzziness can be handled using IFSs. This paper presents a new method for fusing two or more images. Firstly, the source images are fuzzified. Secondly, the highest value of entropy is used to find the membership and non-membership degrees, which in turn are employed for generating IFIs of the source images. These IFIs are decomposed into small blocks, and the total counts of blackness and whiteness in each pair of corresponding blocks are estimated. The blocks of the fused image are then obtained by applying the max and min operations based on the totals of blackness and whiteness. The fused image is reconstructed from the fused blocks, which have high luminance and contrast, and the fused IFI is then defuzzified to obtain a crisp image without uncertainty. Image registration is an important prerequisite for any fusion technique; in this paper, it is assumed that the source images are already registered and stored in a database before the fusion process commences. The framework of this paper is as follows. Section 2 gives the preliminary details of some existing fusion algorithms along with the proposed image fusion technique. Section 3 describes various performance measures for image fusion with and without reference images. In Section 4, experimental results and comparisons are provided to show that our fusion technique surpasses other fusion algorithms. Finally, conclusions and future directions are drawn in Section 5.
2. Fusion algorithms

The details of the AVG, MSVD and proposed fusion methods and their use in image fusion are described in this section.

2.1. Image fusion by AVG
This is a basic and straightforward technique. Fusion is achieved by simply averaging the corresponding pixels of the input images:
$I_f = \dfrac{I_1 + I_2}{2}$
where $I_1$, $I_2$ are the source images and $I_f$ is the fused image.
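A minimal sketch of this averaging rule, assuming the two registered source images are given as equal-sized NumPy arrays (the function name is ours, not from the paper):

```python
import numpy as np

def avg_fusion(i1: np.ndarray, i2: np.ndarray) -> np.ndarray:
    """Pixel-wise average of two registered, equal-sized source images."""
    assert i1.shape == i2.shape, "source images must be registered and of equal size"
    return (i1.astype(np.float64) + i2.astype(np.float64)) / 2.0
```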
2.2. Image fusion by MSVD
Multi-resolution singular value decomposition (MSVD) is analogous to the wavelet transform. In MSVD, the first level of decomposition is attained by decimating the filtered output signal by a factor of two, and the second level by filtering and decimating the low-pass output by a factor of two again; this process is repeated to obtain successive levels of decomposition. The filtering process is the same as in wavelet decomposition except for the type of filter used: the idea behind MSVD is to replace the finite impulse response filters by the SVD [24]. MSVD initially decomposes the source images into levels k = 1, 2, ..., K and then fuses them by taking the highest absolute value of the detailed MSVD coefficients obtained from the source images. The fusion rule at the coarsest level (k = K) takes the average of the approximation MSVD coefficients. Similarly, at each level of decomposition, the MSVD eigen-matrices obtained from the source images are fused by averaging. The fused image is reconstructed by inverting the MSVD. More details are available in [24], which gives the MSVD of a single image; in this paper the same methodology is used to compute the MSVD of multiple images, which are then fused as described above.
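As a rough illustration only, the following one-level sketch follows the 2 × 2 block-based MSVD construction of [24] and the fusion rules described above (average the approximation channel, pick the detail coefficients with the largest absolute value, average the eigen-matrices). The helper names, the single decomposition level and the assumption of even image dimensions are ours, not the authors'.

```python
import numpy as np

def _blocks_to_matrix(img: np.ndarray) -> np.ndarray:
    # Rearrange non-overlapping 2x2 blocks into a 4 x (P*Q/4) matrix (P, Q assumed even).
    p, q = img.shape
    return img.reshape(p // 2, 2, q // 2, 2).transpose(1, 3, 0, 2).reshape(4, -1)

def _matrix_to_blocks(x: np.ndarray, shape) -> np.ndarray:
    # Inverse of _blocks_to_matrix: reassemble the 2x2 blocks into a P x Q image.
    p, q = shape
    return x.reshape(2, 2, p // 2, q // 2).transpose(2, 0, 3, 1).reshape(p, q)

def msvd_fuse_one_level(i1: np.ndarray, i2: np.ndarray) -> np.ndarray:
    """Hypothetical one-level MSVD fusion sketch: averaged approximation channel,
    max-absolute-value detail channels, averaged eigen-matrices."""
    x1, x2 = _blocks_to_matrix(i1.astype(float)), _blocks_to_matrix(i2.astype(float))
    u1, _, _ = np.linalg.svd(x1, full_matrices=False)   # 4x4 eigen-matrix of image 1
    u2, _, _ = np.linalg.svd(x2, full_matrices=False)   # 4x4 eigen-matrix of image 2
    y1, y2 = u1.T @ x1, u2.T @ x2                       # MSVD coefficient channels
    yf = np.where(np.abs(y1) >= np.abs(y2), y1, y2)     # detail rule: largest absolute value
    yf[0] = (y1[0] + y2[0]) / 2.0                       # approximation rule: average
    uf = (u1 + u2) / 2.0                                # average the eigen-matrices
    return _matrix_to_blocks(uf @ yf, i1.shape)
```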
2.3. Intuitionistic fuzzy image (IFI)
Image processing with IFSs mainly requires membership and non-membership functions; we briefly recall their construction, starting from FSs. Consider a finite set $X = \{x_1, x_2, x_3, \dots, x_n\}$. A fuzzy set [37] $F$ of $X$ may be written as
$F = \{(x, \mu_F(x)) \mid x \in X\}$
where the function $\mu_F(x) : X \to [0,1]$ represents the membership degree of an element $x$ in $X$; the non-membership degree of $x$ is therefore $1 - \mu_F(x)$. The support of a fuzzy set $F$ in $X$ is defined as $\mathrm{Supp}(F) = \{x \in X \mid \mu_F(x) > 0\}$, and a fuzzy set is called a fuzzy singleton if its support is a single point. Atanassov [36] and Atanassov and Stoeva [38] generalized fuzzy sets to IFSs. An IFS $F$ in $X$ can be written as
$F = \{(x, \mu_F(x), \nu_F(x)) \mid x \in X\}$
where the functions $\mu_F(x), \nu_F(x) : X \to [0,1]$ represent the degrees of membership and non-membership of an element $x$ in $X$, respectively, with the essential condition $0 \le \mu_F(x) + \nu_F(x) \le 1$. Clearly, FSs are a special case of IFSs. A further parameter $\pi_F(x)$, called the hesitation degree, which originates from lack of knowledge, was introduced by Szmidt and Kacprzyk [39] while computing distances between FSs. Based on the hesitation degree, an IFS is defined as
$F = \{(x, \mu_F(x), \nu_F(x), \pi_F(x)) \mid x \in X\}$
where the condition $\mu_F(x) + \nu_F(x) + \pi_F(x) = 1$ holds.
The proposed method initially fuzzifies the source images. Naturally the question arises: what is fuzzy in an image? Various image properties such as edges and brightness are fuzzy because of built-in defects of the imaging equipment, inherent image vagueness or non-uniform illumination. A gray-scale image carries uncertainty within each pixel because of the possible multi-valued levels of brightness, and this is what is treated as fuzzy throughout this paper. Two families of image-enhancement methods exist in the literature, namely spatial-domain and frequency-domain methods. Spatial-domain methods are simple and powerful techniques in which pixels are manipulated directly. Brightness is one of the significant features for enhancing contrast in gray-scale images, but owing to its many possible levels the image pixels are uncertain. Therefore the images are first transferred from the spatial domain to the fuzzy domain, which helps in stretching the membership function over the total range of gray levels. Thus, this study uses the degree of gray level as a membership function for fuzzification of a given source image, choosing quantized and normalized gray levels directly. Based on fuzzy sets, an image $I$ of dimension $P \times Q$ with $L$ levels of grayness can be regarded as a $P \times Q$ array of fuzzy singletons,
each relating to the intensity value of a pixel; for more detail, see [40]. Thus a fuzzy image $I_F$ of an image $I$ can be defined as the mapping
$I_F : P \times Q \xrightarrow{\,I(i,j)=g\,} G \xrightarrow{\,\mu_I(g)\,} [0,1]$
where $I(i,j)$ is the $(i,j)$th pixel value of the image $I$, $I(i,j) = g \in G$, $G = \{0, 1, 2, \dots, L-1\}$ contains the nonnegative integers describing the gray values of the image, and $\mu_I(g)$ denotes the belongingness of the element $g$ to the image set $I$, defined as
$\mu_I(I(i,j)) = \mu_I(g) = \dfrac{g - g_{\min}}{g_{\max} - g_{\min}} \qquad (1)$
where $g_{\min}$ and $g_{\max}$ are respectively the least and greatest gray levels of the image $I$. Hence the image $I_F$ in the fuzzy domain is defined by
$I_F = \{(I(i,j), \mu_I(I(i,j))) \mid 0 \le i \le P-1,\ 0 \le j \le Q-1,\ 0 \le I(i,j) \le L-1,\ 0 \le \mu_I \le 1\}$
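A small NumPy sketch of this fuzzification step (Eq. (1)); the function name is ours, and a constant image (where $g_{\max} = g_{\min}$) is not handled:

```python
import numpy as np

def fuzzify(img: np.ndarray) -> np.ndarray:
    """Eq. (1): linearly map the gray levels of an image onto [0, 1]."""
    g = img.astype(np.float64)
    g_min, g_max = g.min(), g.max()
    return (g - g_min) / (g_max - g_min)   # membership degree mu_I for every pixel
```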
Image processing based on FSs is designed to deal with vagueness in the qualitative properties of the image by shaping the membership function, but hesitation arises while specifying the amount of brightness of a given pixel. The main aim of the proposed method is to fuse two or more images by eliminating grayness ambiguity in the image data together with the uncertainty in assigning membership values to the uncertain image pixels. For these reasons, in the proposed method an image in the fuzzy domain is transferred to the intuitionistic fuzzy domain, which includes an additional parameter known as the hesitation degree. In images, hesitation arises from improper acquisition, which makes it unclear whether a given pixel is an edge or gray. Usually membership functions are defined intuitively by an expert; hence the membership function is chosen to remove vagueness while keeping in mind that one cannot confidently decide that the chosen membership function is the best. Motivated by these defects, the IFS is introduced to remove the uncertainty in assigning values to the gray levels of an ambiguous image and the hesitation that arises in defining the membership function through lack of knowledge or personal error. Because of this hesitation, the values of the membership function lie in a range; the proposed method reduces this inherent hesitancy.
Based on the membership degree $\mu_I$ in Eq. (1), the degree of membership of the IFI is computed as in [41]:
$\mu_{IFS}(I(i,j),\lambda) = 1 - (1 - \mu_I(I(i,j)))^{\lambda}, \quad \lambda \ge 0 \qquad (2)$
The degree of non-belongingness is defined as
$\nu_{IFS}(I(i,j),\lambda) = (1 - \mu_I(I(i,j)))^{\lambda(\lambda+1)}, \quad \lambda \ge 0 \qquad (3)$
obtained by using the negation function $N(x) = (1-x)^{(\lambda+1)}$. The hesitation degree is then computed as
$\pi_{IFS}(I(i,j),\lambda) = 1 - \mu_{IFS}(I(i,j),\lambda) - \nu_{IFS}(I(i,j),\lambda) \qquad (4)$
The parameter $\lambda$ varies with the image under consideration. Since numerous IFSs can be generated for a single image by changing the value of $\lambda$, it is necessary to find its optimum value, and this is achieved using entropy (ENT). The notion of fuzzy entropy was initially given by De Luca and Termini [42] in FS theory; the authors of [39,43–45] have suggested different entropy measures for IFS theory. The entropy given by Vlachos and Sergiadis [43] is used to develop the proposed scheme:
$ENT(IFS,\lambda) = \dfrac{1}{P\,Q}\sum_{i=0}^{P-1}\sum_{j=0}^{Q-1} \dfrac{2\,\mu_{IFS}(I(i,j),\lambda)\,\nu_{IFS}(I(i,j),\lambda) + \pi^2_{IFS}(I(i,j),\lambda)}{\mu^2_{IFS}(I(i,j),\lambda) + \nu^2_{IFS}(I(i,j),\lambda) + \pi^2_{IFS}(I(i,j),\lambda)} \qquad (5)$
The value of $\lambda$ is optimized by finding the highest entropy value, that is, $\lambda_{opt} = \max_{\lambda}\big(ENT(IFS,\lambda)\big)$. Using this $\lambda$ in Eq. (2), the degrees of membership of the pixels in the IFI are calculated and, finally, an IFI is produced as
$I_{IFS} = \{(I(i,j),\ \mu_{IFS}(I(i,j),\lambda),\ \nu_{IFS}(I(i,j),\lambda),\ \pi_{IFS}(I(i,j),\lambda)) \mid I(i,j) \in \{0,\dots,L-1\}\} \qquad (6)$
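The construction of Eqs. (2)–(5) and the entropy-based choice of λ can be sketched as follows; the search grid for λ is an assumption of ours, since the paper does not specify one:

```python
import numpy as np

def ifi_components(mu_i: np.ndarray, lam: float):
    """Eqs. (2)-(4): membership, non-membership and hesitation degrees of the IFI."""
    mu = 1.0 - (1.0 - mu_i) ** lam
    nu = (1.0 - mu_i) ** (lam * (lam + 1.0))
    pi = 1.0 - mu - nu
    return mu, nu, pi

def entropy(mu: np.ndarray, nu: np.ndarray, pi: np.ndarray) -> float:
    """Eq. (5): Vlachos-Sergiadis intuitionistic fuzzy entropy, averaged over pixels."""
    return float(np.mean((2.0 * mu * nu + pi ** 2) / (mu ** 2 + nu ** 2 + pi ** 2)))

def optimal_lambda(mu_i: np.ndarray, lambdas=np.arange(0.1, 5.1, 0.1)) -> float:
    """lambda_opt = argmax_lambda ENT(IFS, lambda) over an assumed search grid."""
    return float(max(lambdas, key=lambda lam: entropy(*ifi_components(mu_i, lam))))
```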
2.3.1. Fusion by the proposed method
The fusion algorithm based on IFSs is as follows:
1. Denote the two input images by $I_1$ and $I_2$, respectively.
2. Fuzzify the input images using Eq. (1) and calculate the optimal value of $\lambda$ for each input image separately using the entropy ENT in Eq. (5).
3. Calculate the degrees of membership, non-membership and hesitation of the IFI for both input images by employing Eqs. (2)–(4) with their optimal $\lambda$ values; denote the resulting images by $I_{F1}$ and $I_{F2}$.
4. Decompose the two images obtained in step 3 into $p \times q$ blocks, and denote the $k$th blocks of the two decomposed images by $I_{F1k}$ and $I_{F2k}$, respectively.
5. Compute the total counts of blackness and whiteness of the two corresponding blocks.
6. Reconstruct the $k$th block of the blended image $I_{fk}$ as
$I_{fk}(i,j) = \begin{cases} \min\{I_{F1k}(i,j),\, I_{F2k}(i,j)\}, & \text{if } count(blackness) > count(whiteness)\\ \max\{I_{F1k}(i,j),\, I_{F2k}(i,j)\}, & \text{if } count(blackness) < count(whiteness)\\ \dfrac{I_{F1k}(i,j) + I_{F2k}(i,j)}{2}, & \text{otherwise} \end{cases}$
where min and max denote respectively the minimum and maximum operations on IFSs.
7. Reconstruct the image from the blocks obtained in step 6 to get a fused IFI without uncertainty.
8. Defuzzify the IFI obtained in step 7 to get a crisp image using the defuzzification function $I'(i,j) = (g_{\max} - g_{\min})\,\mu_{IFS}(I(i,j)) + g_{\min}$, obtained from the inverse of Eq. (1).
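A sketch of steps 4–8, operating on the membership component of the two IFIs; the block size and the use of a 0.5 threshold on the membership degrees to count "black" and "white" pixels are our assumptions, since the paper does not fix them:

```python
import numpy as np

def fuse_blocks(f1: np.ndarray, f2: np.ndarray, block: int = 8, thr: float = 0.5) -> np.ndarray:
    """Steps 4-7: block-wise min/max/average fusion of two IFI membership images."""
    fused = np.empty_like(f1, dtype=np.float64)
    p, q = f1.shape
    for r in range(0, p, block):
        for c in range(0, q, block):
            b1 = f1[r:r + block, c:c + block]
            b2 = f2[r:r + block, c:c + block]
            pixels = np.concatenate([b1.ravel(), b2.ravel()])
            black, white = np.sum(pixels < thr), np.sum(pixels >= thr)   # assumed threshold
            if black > white:
                fused[r:r + block, c:c + block] = np.minimum(b1, b2)
            elif black < white:
                fused[r:r + block, c:c + block] = np.maximum(b1, b2)
            else:
                fused[r:r + block, c:c + block] = (b1 + b2) / 2.0
    return fused

def defuzzify(mu_ifs: np.ndarray, g_min: float, g_max: float) -> np.ndarray:
    """Step 8: inverse of Eq. (1), I'(i,j) = (g_max - g_min) * mu_IFS(i,j) + g_min."""
    return (g_max - g_min) * mu_ifs + g_min
```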
3. Evaluation criteria

The following measures, taken from [46], are used in the sequel to quantify fusion performance with and without a reference image.

3.1. With reference image
For some source images, such as multifocused images, it is possible to generate a reference image, which is used for comparing the fusion results obtained experimentally. Some typical quality metrics used for these comparisons are listed below, with the reference image denoted by $R$ and the fused image by $F$.

3.1.1. Root Mean Square Error (RMSE)
The RMSE between the reference image and the fused image is calculated as
$RMSE = \sqrt{\dfrac{1}{P\,Q}\sum_{i=1}^{P}\sum_{j=1}^{Q} (R_{ij} - F_{ij})^2}$
where $P \times Q$ denotes the size of the image (the total number of pixels) and $R_{ij}$, $F_{ij}$ denote the $(i,j)$th pixel values of the reference and fused images, respectively. The RMSE approaches zero when the reference (ideal) and fused images are similar and increases as the similarity decreases.
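For instance, the RMSE can be computed directly from this definition (a sketch, with a function name of our choosing):

```python
import numpy as np

def rmse(reference: np.ndarray, fused: np.ndarray) -> float:
    """Root mean square error between reference image R and fused image F."""
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```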
3.1.2. Percentage Fit Error (PFE)
The PFE is calculated as
$PFE = \dfrac{\|R - F\|}{\|R\|} \times 100$
where $\|\cdot\|$ denotes the norm operator. The value of PFE tends to zero when the ideal and fused images are the same and increases otherwise.

3.1.3. Mean Absolute Error (MAE)
This is the mean absolute error between the ideal and fused images,
$MAE = \dfrac{1}{P\,Q}\sum_{i=1}^{P}\sum_{j=1}^{Q}|R_{ij} - F_{ij}|$
The MAE decreases as the similarity between the ideal and fused images increases, and vice versa.

3.1.4. Correlation (CORR)
The standard value is one when the ideal and fused images are identical, and it falls below one as the dissimilarity increases:
$CORR = \dfrac{2\,C_{rf}}{C_r + C_f}$
where $C_r = \sum_{i=1}^{P}\sum_{j=1}^{Q} R_{ij}^2$, $C_f = \sum_{i=1}^{P}\sum_{j=1}^{Q} F_{ij}^2$ and $C_{rf} = \sum_{i=1}^{P}\sum_{j=1}^{Q} R_{ij} F_{ij}$.

3.1.5. Signal to Noise Ratio (SNR)
$SNR = 10\log_{10}\left[\dfrac{\sum_{i=1}^{P}\sum_{j=1}^{Q} R_{ij}^2}{\sum_{i=1}^{P}\sum_{j=1}^{Q} (R_{ij} - F_{ij})^2}\right]$
A high SNR value indicates that the ideal and fused images are similar; a higher value implies better fusion.

3.1.6. Peak Signal to Noise Ratio (PSNR)
$PSNR = 10\log_{10}\left[\dfrac{L^2}{\frac{1}{P\,Q}\sum_{i=1}^{P}\sum_{j=1}^{Q} (R_{ij} - F_{ij})^2}\right]$
Here $L$ is the number of gray levels in the image. The PSNR value is high when the ideal and fused images are the same; a higher value implies better fusion.

3.1.7. Mutual Information (MI)
Let $h_R$ and $h_F$ denote the normalized histograms of the reference and fused images, respectively, and $h_{RF}$ the joint histogram of the fused and reference images. The mutual information between the reference and fused images is
$MI = \sum_{i=1}^{P}\sum_{j=1}^{Q} h_{RF_{ij}} \log_2\dfrac{h_{RF_{ij}}}{h_{R_{ij}}\,h_{F_{ij}}}$
Larger values imply better image quality.

3.1.8. Universal Quality Index (QI)
This measures the quantity of data transferred from the ideal image into the resulting fused image,
$QI = \dfrac{4\,\sigma_{RF}\,\mu_R\,\mu_F}{(\mu_R^2 + \mu_F^2)(\sigma_R^2 + \sigma_F^2)}$
where $\mu_R = \frac{1}{PQ}\sum_{i=1}^{P}\sum_{j=1}^{Q} R_{ij}$, $\mu_F = \frac{1}{PQ}\sum_{i=1}^{P}\sum_{j=1}^{Q} F_{ij}$, $\sigma_R^2 = \frac{1}{PQ-1}\sum_{i=1}^{P}\sum_{j=1}^{Q} (R_{ij}-\mu_R)^2$, $\sigma_F^2 = \frac{1}{PQ-1}\sum_{i=1}^{P}\sum_{j=1}^{Q} (F_{ij}-\mu_F)^2$ and $\sigma_{RF} = \frac{1}{PQ-1}\sum_{i=1}^{P}\sum_{j=1}^{Q} (R_{ij}-\mu_R)(F_{ij}-\mu_F)$. The range of this metric is $[-1, 1]$. If the reference and fused images are similar, QI reaches its maximal value of one, and it attains its minimal value only if $F = 2\mu_R - R$.

3.1.9. Measure of Structural Similarity (SSIM)
Natural images are highly structured and their image elements are strongly dependent, which carries vital information about the structure of an image. SSIM works in three phases, namely luminance, contrast and structural comparison; the third phase correlates the internal patterns of intensities once the luminance and contrast of the image have been normalized:
$SSIM = \dfrac{(2\,\mu_R\,\mu_F + C_1)(2\,\sigma_{RF} + C_2)}{(\mu_R^2 + \mu_F^2 + C_1)(\sigma_R^2 + \sigma_F^2 + C_2)}$
where $C_1$ and $C_2$ are constants appended to retain stability when $\mu_R^2 + \mu_F^2$ and $\sigma_R^2 + \sigma_F^2$ approach zero.

3.2. Without reference image
It is difficult to assess the performance of fusion methods when a reference image is unavailable. Some performance evaluation measures that operate without an ideal image are as follows.

3.2.1. Spatial Frequency (SF)
The spatial frequency is computed as
$SF = \sqrt{RF^2 + CF^2}$
where $RF = \sqrt{\frac{1}{P\,Q}\sum_{i=1}^{P}\sum_{j=2}^{Q} (F_{ij} - F_{i(j-1)})^2}$ and $CF = \sqrt{\frac{1}{P\,Q}\sum_{j=1}^{Q}\sum_{i=2}^{P} (F_{ij} - F_{(i-1)j})^2}$ are the row and column frequencies of the image, respectively. The SF of the fused image is high when its activity level is high.

3.2.2. Mean (MEAN)
The luminance of an image is estimated by its mean intensity,
$MEAN = \dfrac{1}{P\,Q}\sum_{i=1}^{P}\sum_{j=1}^{Q} |F_{ij}|$
where $P \times Q$ is the number of pixels in $F = (F_{ij})$.

3.2.3. Standard Deviation (STD)
The standard deviation comprises both the original image and any noise acquired during transmission; it is most meaningful when there is no noise in the transmitted image, and it portrays the contrast of the image. It is calculated as
$STD = \sqrt{\dfrac{1}{P\,Q}\sum_{i=1}^{P}\sum_{j=1}^{Q} (F_{ij} - MEAN)^2}$
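The three no-reference measures can be sketched directly from the definitions above (function names are ours):

```python
import numpy as np

def spatial_frequency(f: np.ndarray) -> float:
    """SF = sqrt(RF^2 + CF^2) from the row and column frequencies of the fused image."""
    f = f.astype(np.float64)
    p, q = f.shape
    rf = np.sqrt(np.sum(np.diff(f, axis=1) ** 2) / (p * q))   # row frequency
    cf = np.sqrt(np.sum(np.diff(f, axis=0) ** 2) / (p * q))   # column frequency
    return float(np.hypot(rf, cf))

def mean_intensity(f: np.ndarray) -> float:
    """MEAN: average absolute intensity, an estimate of luminance."""
    return float(np.mean(np.abs(f)))

def standard_deviation(f: np.ndarray) -> float:
    """STD: spread of the intensities around MEAN, an estimate of contrast."""
    return float(np.sqrt(np.mean((f.astype(np.float64) - mean_intensity(f)) ** 2)))
```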
4. Experimental results and quantitative analysis

Experiments are carried out on several types of source images, shown in Fig. 1(a)–(j). These images consist of three types, namely medical images (Fig. 1(a) and (b)), infrared–visible images (Fig. 1(c) and (d)) and multi-focused images (pepsi images (Fig. 1(e) and (f)), clock images (Fig. 1(g) and (h)) and aircraft images (Fig. 1(i) and (j))).
Fig. 1. Images employed for fusion. (a and b) Medical images, (c and d) infrared–visible images, (e and f) multifocused pepsi images, (g and h) multifocused clock images, (i and j) multifocused aircraft images are the source images and (k and l) are the reference images for the pairs (g–j) respectively.
Reference images for the multifocused clock and aircraft images are shown in Fig. 1(k) and (l), respectively. In this paper, we assume that the images to be fused are registered before the fusion process begins. The fusion results of the proposed method are compared with simple averaging (AVG), PCA [7], DWT [16], SWT [17], DTCWT [23], MSVD and NSCT [33]. Fusion results of the medical images using AVG, PCA, DWT, SWT, DTCWT, MSVD, NSCT and IFS are shown in Fig. 2(a)–(h), respectively. It is observed that the image obtained by the proposed IFS fusion method is higher in luminance and contrast than those of the other existing methods. The performance comparison of the various fusion methods along with the proposed method (without reference image) is
Table 1
Performance comparison of the IFS fusion of medical images with different fusion rules (without reference image).

Fusion algorithm    MEAN       SF         STD
AVG                 14.4200    33.1407    41.7548
PCA                 16.1846    43.2659    50.4119
DWT                 16.1984    33.1921    41.8921
SWT                 16.5855    33.1402    42.5775
DTCWT               18.5251    33.1407    43.8864
MSVD                18.9071    33.1365    42.1180
NSCT                20.5709    55.9197    62.3947
IFS                 21.3657    58.5202    65.6562
Fig. 2. Comparison of medical image fusion by various techniques. (a) AVG, (b) PCA, (c) DWT, (d) SWT, (e) DTCWT, (f) MSVD, (g) NSCT, and (h) IFS (proposed method).
Fig. 3. Comparison of infrared–visible image fusion by various techniques. (a) AVG, (b) PCA, (c) DWT, (d) SWT, (e) DTCWT, (f) MSVD, (g) NSCT, and (h) IFS (proposed method).
Table 2
Performance comparison of the IFS fusion of infrared–visible images with different fusion algorithms (without reference image).

Fusion algorithm    SF         MEAN        STD
AVG                 6.4266     91.0903     22.6507
PCA                 10.8263    100.8487    37.0766
DWT                 8.9510     91.0909     22.9166
SWT                 9.5171     91.0904     23.1597
DTCWT               9.0277     91.0908     22.9949
MSVD                8.8997     91.0903     22.8665
NSCT                11.4892    108.3221    32.2789
IFS                 11.5367    111.5537    37.4649
shown in Table 1. The best fusion results for the medical image pair are quoted in bold font. From the table, it is clearly seen that the overall activity level of IFS is higher than that of all other existing methods; the values are represented graphically in Fig. 7(a).
Fig. 3(a)–(h) shows the fusion of the infrared–visible images by the existing methods (AVG, PCA, DWT, SWT, DTCWT, MSVD, NSCT) and the proposed IFS method, respectively. On observation, it is seen that IFS (the proposed technique) inherits more of the vital information from the two source images than the other methods. Table 2 reflects the fusion performance; the best values of the metrics are cast in bold font and are represented graphically in Fig. 7(b). The values of the metrics clearly show that the proposed fusion method is better than the other existing fusion techniques, since it eliminates fuzziness from the source images. Since every digital image has uncertainties, fusion of uncertain images leads to an uncertain fused image. Visually, it is seen from the fusion of the multifocused pepsi images shown in Fig. 4(a)–(h) that the proposed fusion eliminates the uncertainties from the source images and has better luminance and contrast than the other existing methods. Variations of the evaluation metrics with the fusion methods for the multifocused pepsi images are shown in
Fig. 4. Comparison of multifocused pepsi image fusion by various techniques. (a) AVG, (b) PCA, (c) DWT, (d) SWT, (e) DTCWT, (f) MSVD, (g) NSCT, and (h) IFS (proposed method).
Table 3
Performance comparison of the IFS fusion of multifocused pepsi images with different fusion algorithms (without reference image).

Fusion algorithm    SF         MEAN        STD
AVG                 10.2992    143.3574    65.6767
PCA                 10.3010    143.3570    65.6767
DWT                 11.8686    143.3457    65.7625
SWT                 12.0727    143.3575    65.8140
DTCWT               13.3922    143.3574    66.3792
MSVD                13.6116    143.3573    65.8757
NSCT                13.4886    144.9373    66.8421
IFS                 14.0067    146.2681    70.2272
Fig. 7(c). Table 3 illustrates that the proposed fusion is the best among the existing methods. In Figs. 5 and 6, the first and third columns show the fused images, and the second and fourth columns show the corresponding error images. The error (difference) between the fused and ideal images is computed as $E_{ij} = R_{ij} - F_{ij}$. The fusion of the multifocused clock images by the eight methods is shown in Fig. 5(a1)–(h1), that is, the first and third columns of Fig. 5, along with the corresponding error images in Fig. 5(a2)–(h2), that is, the second and fourth columns of Fig. 5. Here, Fig. 1(k) is the reference image. Tables 4 and 5 show the quality of image fusion via various
Fig. 5. Comparison of multifocused clock image fusion by various techniques. (a1) AVG, (b1) PCA, (c1) DWT, (d1) SWT, (e1) DTCWT, (f1) MSVD, (g1) NSCT, (h1) IFS (proposed method) and their corresponding error images (a2), (b2), (c2), (d2), (e2), (f2), (g2), (h2) obtained from AVG, PCA, DWT, SWT, DTCWT, MSVD, NSCT and IFS (proposed method), respectively, by using the reference image 1(k).
Fig. 6. Comparison of multifocused aircraft image fusion by various techniques. (a1) AVG, (b1) PCA, (c1) DWT, (d1) SWT, (e1) DTCWT, (f1) MSVD, (g1) NSCT, (h1) IFS (proposed method) and their corresponding error images (a2), (b2), (c2), (d2), (e2), (f2), (g2), (h2) obtained from AVG, PCA, DWT, SWT, DTCWT, MSVD, NSCT and IFS (proposed method), respectively, by using the reference image 1(l).
Fig. 7. Graphical representation of values of performance metrics based on various image fusion methods.
Table 4
Performance comparison of the IFS fusion of multifocused clock images with different fusion algorithms (without reference image Fig. 1(k)).

Fusion algorithm    SF         MEAN       STD
AVG                 6.2058     97.9637    49.8176
PCA                 6.2151     97.9746    49.8205
DWT                 7.3846     98.0043    49.8293
SWT                 7.6699     97.9639    49.9462
DTCWT               8.6418     97.9637    50.1218
MSVD                8.1791     97.9637    49.8766
NSCT                10.0338    99.7454    51.5874
IFS                 10.2478    99.7454    52.3082
Table 5
Performance comparison of the IFS fusion of multifocused clock images with different fusion algorithms (with reference image Fig. 1(k)).

Fusion algorithm    RMSE      PFE       MAE       CORR      SNR        PSNR       MI        QI        SSIM
AVG                 6.4345    5.8362    3.6908    0.9983    24.6774    40.0796    1.3108    0.7577    0.9425
PCA                 6.4126    5.8163    3.6838    0.9983    24.7070    40.0945    1.3100    0.7579    0.9427
DWT                 6.4595    5.8588    3.7503    0.9983    24.6437    40.0628    1.3016    0.7419    0.9373
SWT                 6.2220    5.6435    3.6268    0.9984    24.9691    40.2255    1.3017    0.7486    0.9460
DTCWT               5.7745    5.2376    3.4067    0.9986    25.6174    40.5497    1.3144    0.7561    0.9535
MSVD                6.8611    6.2231    3.9516    0.9980    24.1198    39.8009    1.2966    0.7252    0.9295
NSCT                5.3335    4.8376    3.7066    0.9989    26.3075    40.8947    1.3185    0.7412    0.9584
IFS                 3.4326    3.1134    2.3153    0.9989    27.1145    63.8636    1.6653    0.7462    0.9999
Fig. 8. Graphical representation of values of performance metrics of various image fusion methods for multifocused clock images.
Table 6
Performance comparison of the IFS fusion of multifocused aircraft images with different fusion algorithms (without reference image Fig. 1(l)).

Fusion algorithm    SF         MEAN        STD
AVG                 9.1266     227.6697    45.8804
PCA                 9.1679     227.6697    45.9204
DWT                 12.8783    227.6723    46.1907
SWT                 13.3948    227.6697    46.3173
DTCWT               13.1172    227.6679    46.8390
MSVD                13.8678    227.6697    46.4184
NSCT                16.9916    228.1178    49.3678
IFS                 17.0008    228.6727    50.4271
evaluation metrics without and with an ideal image, plotted as graphs in Fig. 8(a) and (b). On observation, it is seen that the NSCT fusion method has MEAN and CORR values as high as those of the IFS method, but it is lower than the IFS method on all other metrics. As with the fusion of the other images described above, the proposed method performs better on all the metrics than the existing ones, since all uncertainties are eliminated; the best values of the metrics are shown in bold font. Fig. 6 shows the fusion results and the error images of the multifocused aircraft images, just as for the multifocused clock images described above. Here, Fig. 1(l) is the reference image. Performance metrics computed without and with the reference image are
Fig. 9. Graphical representation of values of performance metrics of various image fusion methods for multifocused aircraft images.
shown in Tables 6 and 7 and are interpreted graphically in Fig. 9(a) and (b). All the metrics in the tables show that the proposed method is better than the other existing methods; only the CORR value of the NSCT method equals that of the proposed method. All the figures qualitatively show that the proposed method gives better luminance and contrast than the other existing methods, and all the tables and graphs quantitatively reveal that fusion of the various images using the proposed method is more effective, since the error measures RMSE, MAE and PFE are lower and the other metrics CORR, SNR, PSNR, MI, QI and SSIM are all higher than for the other existing methods in the literature.

5. Conclusion

This paper furnishes a new way of fusing various images by using IFSs. The new approach is simple and can be applied to real-time problems. Since IFSs absorb more of the uncertainties in digital images, this fusion technique surpasses other techniques. Compared with other methods, our fusion method is superior both quantitatively and visually. Experimental results show that the images obtained by the IFS method have higher contrast and luminance than those of all other methods. Thus the proposed fusion technique using IFSs gives better results by
Table 7
Performance comparison of the IFS fusion of multifocused aircraft images with different fusion algorithms (with reference image Fig. 1(l)).

Fusion algorithm    RMSE      PFE       MAE       CORR      SNR        PSNR       MI        QI        SSIM
AVG                 9.4172    4.0395    3.0654    0.9991    27.8733    38.4255    1.4069    0.5069    0.9689
PCA                 9.3383    4.0057    3.0333    0.9992    27.9464    38.4621    1.4489    0.5149    0.9695
DWT                 8.9098    3.8218    2.7982    0.9993    28.3544    38.6661    1.4457    0.8718    0.9727
SWT                 8.2336    3.5318    2.6765    0.9994    29.0401    39.0089    1.4713    0.8024    0.9774
DTCWT               5.7745    5.2376    3.4067    0.9986    25.6174    40.5497    1.3144    0.7561    0.9535
MSVD                8.6387    3.7056    2.8080    0.9993    28.6228    38.8003    1.4426    0.5651    0.9739
NSCT                1.1442    0.4908    0.6236    0.9999    46.1819    47.5798    1.5613    0.6445    0.9991
IFS                 0.5457    0.2346    0.0076    0.9999    80.7454    88.9270    1.9981    0.9965    0.9999
eliminating uncertainties and vagueness. Future research will aim at better fusion results by embedding non-membership fuzzy logic into neural networks.

References

[1] J.K. Aggarwal, Multisensor Fusion for Computer Vision, Springer, Berlin, 1993.
[2] A. Akerman, Pyramid techniques for multisensor fusion, Proc. SPIE 2828 (1992) 124–131.
[3] D.L. Hall, J. Llinas, An introduction to multisensor data fusion, Proc. IEEE 85 (1) (1997) 6–23.
[4] P.K. Varshney, Multisensor data fusion, Electron. Commun. Eng. J. 9 (6) (1997) 245–253.
[5] A.A. Goshtasby, S. Nikolov, Image fusion: advances in the state of the art, Inf. Fusion 8 (2) (2007) 114–118.
[6] V.S. Petrovic, C.S. Xydeas, Gradient-based multiresolution image fusion, IEEE Trans. Image Process. 13 (2) (2004) 228–237.
[7] J.F. Sun, Y.J. Jiang, S.Y. Zeng, A study of PCA image fusion techniques on remote sensing, Proc. SPIE 5985 (2005) 739–744.
[8] A. Toet, Image fusion by a ratio of low-pass pyramid, Pattern Recognit. Lett. 9 (4) (1989) 245–253.
[9] Z. Liu, K. Tsukada, K. Hanasaki, Y.K. Ho, Y.P. Dai, Image fusion by using steerable pyramid, Pattern Recognit. Lett. 22 (9) (2001) 929–939.
[10] P.J. Burt, E.H. Adelson, The Laplacian pyramid as a compact image code, IEEE Trans. Commun. 31 (4) (1983) 532–540.
[11] H. Li, B. Manjunath, S. Mitra, Multisensor image fusion using the wavelet transform, Graph. Model. Image Process. 57 (3) (1995) 235–245.
[12] Z. Zhang, R.S. Blum, A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application, Proc. IEEE 87 (8) (1999) 1315–1326.
[13] G. Pajares, J. Cruz, A wavelet-based image fusion tutorial, Pattern Recognit. 37 (9) (2004) 1855–1872.
[14] S. Mallat, A Wavelet Tour of Signal Processing, third ed., Academic Press, 2008.
[15] M. Unser, Texture classification and segmentation using wavelet frames, IEEE Trans. Image Process. 4 (11) (1995) 1549–1560.
[16] T. Pu, G. Ni, Contrast based image fusion using discrete wavelet transform, Opt. Eng. 39 (8) (2000) 2075–2082.
[17] S. Li, Multisensor remote sensing image fusion using stationary wavelet transform: effects of basis and decomposition level, Int. J. Wavelets Multiresolut. Inform. Process. 6 (1) (2008) 37–50.
[18] I.W. Selesnick, R.G. Baraniuk, N.G. Kingsbury, The dual-tree complex wavelet transform, IEEE Signal Process. Mag. 22 (6) (2005) 123–151.
[19] T. Wan, N. Canagarajah, A. Achim, Segmentation-driven image fusion based on alpha-stable modeling of wavelet coefficients, IEEE Trans. Multimedia 11 (4) (2009) 624–633.
[20] B. Forster, D.V.D. Ville, J. Berent, D. Sage, M. Unser, Complex wavelets for extended depth-of-field: a new method for the fusion of multichannel microscopy images, Microsc. Res. Tech. 65 (1–2) (2004) 33–42.
[21] L.A. Ray, R.R. Adhami, Dual tree discrete wavelet transform with application to image fusion, in: Proc. 38th Southeast. Symp. Syst. Theory, Huntsville, Alabama, 2006, pp. 430–433.
[22] N.G. Kingsbury, Complex wavelets for shift invariant analysis and filtering of signals, Appl. Comput. Harmon. Anal. 10 (3) (2001) 234–253.
[23] S. Wei, W. Ke, A multi-focus image fusion algorithm with DT-CWT, in: Int. Conf. Comput. Intell. Secur., IEEE, 2007, pp. 147–151.
[24] R. Kakarla, P.O. Ogunbona, Signal analysis using a multiresolution form of the singular value decomposition, IEEE Trans. Image Process. 10 (5) (2001) 724–735.
[25] Shung-Yung Lung, Multi-resolution form of SVD for text-independent speaker recognition, Pattern Recognit. 35 (7) (2002) 1637–1639.
[26] M. Yoshikawa, Y. Gong, R. Ashino, R. Vaillancourt, Case study on SVD multiresolution analysis, Scientific Proceedings of Riga Technical University, Bound. Field Problem. Comput. Simul. 46 (2005) 65–79.
[27] E. Candès, L. Demanet, D. Donoho, L.X. Ying, Fast discrete curvelet transforms, SIAM Multiscale Model. Simul. 5 (3) (2006) 861–899.
[28] M.N. Do, M. Vetterli, The contourlet transform: an efficient directional multiresolution image representation, IEEE Trans. Image Process. 14 (12) (2005) 2091–2106.
[29] L. Yang, B.L. Guo, W. Ni, Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform, Neurocomputing 72 (1–3) (2008) 203–211.
[30] S.Y. Yang, M. Wang, L.C. Jiao, R.X. Wu, Z.X. Wang, Image fusion based on a new contourlet packet, Inform. Fusion 11 (2) (2010) 78–84.
[31] L.D. Cunha, J.P. Zhou, The nonsubsampled contourlet transform: theory, design, and applications, IEEE Trans. Image Process. 15 (10) (2006) 3089–3101.
[32] B. Yang, S.T. Li, F.M. Sun, Image fusion using nonsubsampled contourlet transform, in: Proc. 4th Int. Conf. Image Graph., Sichuan, China, 2007, pp. 719–724.
[33] Q. Zhang, B.L. Guo, Multifocus image fusion using the nonsubsampled contourlet transform, Signal Process. 89 (7) (2009) 1334–1346.
[34] C. Ye, L. Zhang, Z. Zhang, SAR and panchromatic image fusion based on region features in nonsubsampled contourlet transform domain, in: Proc. IEEE Int. Conf. Autom. Logist., Zhengzhou, China, August 2012, pp. 358–362.
[35] L.A. Zadeh, Fuzzy sets, Inform. Control 8 (3) (1965) 338–353.
[36] K.T. Atanassov, Intuitionistic fuzzy sets, Fuzzy Sets Syst. 20 (1) (1986) 87–96.
[37] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning, I, II, Inform. Sci. 8 (1975) 199–249, 301–357; L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning, III, Inform. Sci. 9 (1975) 43–80.
[38] K.T. Atanassov, S. Stoeva, Intuitionistic fuzzy sets, in: Proc. Polish Symp. Interval Fuzzy Math., Poznan, August 1983, pp. 23–26.
[39] E. Szmidt, J. Kacprzyk, Distances between intuitionistic fuzzy sets, Fuzzy Sets Syst. 114 (3) (2000) 505–518.
[40] S.K. Pal, R.A. King, Image enhancement using smoothing with fuzzy sets, IEEE Trans. Syst. Man Cybern. 11 (7) (1981) 494–501.
[41] T. Chaira, A rank ordered filter for medical image edge enhancement and detection using intuitionistic fuzzy set, Appl. Soft Comput. 12 (4) (2012) 1259–1266.
[42] A. De Luca, S. Termini, A definition of a non-probabilistic entropy in the setting of fuzzy sets theory, Inform. Control 20 (4) (1972) 301–312.
[43] I.K. Vlachos, G.D. Sergiadis, Role of entropy in intuitionistic fuzzy contrast enhancement, in: Lecture Notes in Artificial Intelligence, vol. 4529, Springer, 2007, pp. 104–113.
[44] T. Chaira, A novel intuitionistic fuzzy c means clustering algorithm and its application to medical images, Appl. Soft Comput. 11 (2) (2011) 1711–1717.
[45] P. Burillo, H. Bustince, Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets, Fuzzy Sets Syst. 78 (3) (1996) 305–316.
[46] R.S. Blum, Z. Liu, Multi-sensor Image Fusion and Its Applications, CRC Press, Taylor & Francis Group, NW, 2006.