Multi-focus image fusion based on cartoon-texture image decomposition
Yongxin Zhang a,b, Li Chen a,∗, Zhihua Zhao a, Jian Jia c

a School of Information Science and Technology, Northwest University, Xi'an 710127, China
b School of Information Technology, Luoyang Normal University, Luoyang 471022, China
c Department of Mathematics, Northwest University, Xi'an 710127, China

∗ Corresponding author. Tel.: +86 18637930027. E-mail addresses: [email protected] (Y. Zhang), [email protected] (L. Chen), [email protected] (J. Jia).
Article history: Received 2 January 2015; Accepted 12 October 2015; Available online xxx

Keywords: Image fusion; Split Bregman algorithm; Total variation; Multi-component fusion

Abstract

In order to represent the source images effectively and completely, a multi-component fusion approach is proposed for multi-focus image fusion. The registered source images are decomposed into cartoon and texture components by using cartoon-texture image decomposition. The significant features are selected from the cartoon and texture components, respectively, to form a composite feature space. The local features that represent the salient information of the source images are integrated to construct the fused image. Experimental results demonstrate that the proposed approach works better in extracting the focused regions and improving the fusion quality than other existing fusion methods in both the spatial and transform domains.

© 2015 Published by Elsevier GmbH.

1. Introduction
Multi-focus image fusion can be defined as the process of combining substantial information from multiple images of the same scene to create a single composite image that is more suitable for human visual perception or further computer processing [1]. It has been proven to be an effective way to extend the depth of field. In general, fusion methods can be categorized into two groups: spatial domain fusion and transform domain fusion [2]. In this paper, we concentrate on the spatial domain methods, which are easy to implement and have low computational complexity. The spatial domain fusion methods can be divided into two categories: pixel based methods and region based methods. The simplest pixel based fusion method is to take the average of the source images pixel by pixel; however, this simple averaging tends to reduce the contrast of the fused image. To improve the quality of the fused image, some region based methods have been proposed to combine partitioned blocks or segmented regions based on their sharpness [3]. The sharpness is measured by using local spatial features [4], such as the energy of image gradient (EOG) and spatial frequency (SF). The focused blocks or regions are then selected from the source images by simply copying them into the fused image. However, if the size of the blocks is too small, the block selection is sensitive
to noise and is subject to incorrect selection of blocks from the corresponding source images. Conversely, if the size of the blocks is too large, in-focus and out-of-focus pixels may fall into the same block and be selected together to build the final fused image. Accordingly, blocking artifacts are produced and may compromise the quality of the final fused image. To eliminate the blocking artifacts, researchers have proposed several improved schemes. Li et al. [5,6] selected the focused blocks by using learning based methods, such as artificial neural networks (ANN) and support vector machines (SVM). Due to the difficulty of obtaining empirical training data in most multi-focus image fusion cases, the learning based methods are not widely used. Fedorov et al. [7] selected the best focus by tiling the source images with overlapping neighbourhoods and improved the visual quality of the fused image, but this method is afflicted by temporal and geometric distortions between images. Aslantas et al. [8] selected the optimal block size by using a differential evolution algorithm and enhanced the self-adaptation of the fusion method, but this method requires a long computational time. Wu et al. [9] selected the focused patches from the source images by using a belief propagation algorithm, but the algorithm is complicated and time-consuming. Goshtasby et al. [10] detected the focused blocks by computing a weighted sum of the blocks, but the iterative procedure is time-consuming. De and Chanda [11] determined the optimal block size by using a quad-tree structure and effectively solved the problem of block-size determination. These schemes all achieve better performance than the traditional methods and significantly inhibit the blocking artifacts.
In order to effectively and completely represent the source images, a novel fusion method based on cartoon-texture image decomposition is proposed. Cartoon-texture image decomposition is an important image processing tool that has been widely used in image analysis and vision applications, such as enhancement, inpainting, segmentation, and texture and shape analysis [12]. Cartoon-texture image decomposition separates a given image into a cartoon component and a texture component. The cartoon component holds the geometric structures, isophotes and smooth pieces of the source images, while the texture component contains textures, oscillating patterns, fine details and noise [13]. The cartoon and texture components represent the most meaningful information of the source images, which is important for image fusion. Cartoon-texture image decomposition has been proven to be an effective way to extract structure information and texture information from an image [14]. The objective of this paper is to investigate the potential application of cartoon-texture image decomposition in multi-focus image fusion. The main contribution of this paper is that a multi-component fusion framework is established. The framework is based on discriminative features computed from the cartoon and texture components of the source images. The focused regions are detected by computing a salient feature of the neighbourhood region of each pixel in the cartoon and texture components. The proposed method works well in inhibiting the blocking artifacts and representing the source images. The rest of the paper is organized as follows. In Section 2, the basic idea of cartoon-texture image decomposition is briefly described, followed by the new fusion method based on cartoon-texture image decomposition in Section 3. In Section 4, extensive simulations are performed to evaluate the performance of the proposed method, and several experimental results are presented and discussed. Finally, concluding remarks are drawn in Section 5.
2. Cartoon-texture image decomposition
In many problems of image analysis [15], an observed image f represents a real scene. The image f may contain texture or noise. In order to extract the most meaningful information from f, most models [16–25] try to find another image u, "close" to f, such that u is a cartoon or simplification of f. These models assume the following relation between f and u:
f = u + v    (1)
where v is noise or texture. In 1989, Mumford and Shah [16] established a model to decompose black-and-white static images by using functions of bounded variation. In 1992, Rudin et al. [17] simplified the Mumford–Shah model and proposed the total variation minimization energy functional of the Rudin–Osher–Fatemi (ROF) model:
E_{ROF}(u) = \int_R \|\nabla u\| \, dx \, dy + \lambda \int_R (u - u_0)^2 \, dx \, dy    (2)
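As a concrete reading of Eq. (2), the short sketch below evaluates the discrete ROF energy of a candidate cartoon u. It is an illustrative numpy snippet, not part of the original paper; the forward-difference discretization of the gradient and the function name are assumptions.

```python
import numpy as np

def rof_energy(u, u0, lam):
    """Discrete ROF energy: total variation of u plus lam times squared fidelity to u0."""
    # Forward differences approximate the gradient; the last row/column difference is zero.
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    tv_term = np.sum(np.sqrt(ux ** 2 + uy ** 2))     # discrete integral of ||grad u||
    fidelity_term = lam * np.sum((u - u0) ** 2)      # lam * discrete integral of (u - u0)^2
    return tv_term + fidelity_term
```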
The ROF model is very efficient for de-noising images while keeping sharp edges. However, the ROF model removes the texture when λ is small enough [18]. In 2002, Vese et al. [19] developed a partial differential equation (PDE) based iterative numerical algorithm to approximate Meyer's weaker norm ||·||_G by using L^p norms; however, this model is time-consuming. To improve the computational efficiency, many models and methods have been proposed. Osher et al. proposed the Osher–Solé–Vese (OSV) model [20] based on total variation (TV) and the H^{-1} norm. Aujol et al. [21] introduced dual norms to image decomposition. Chan et al. [22] proposed the CEP–H^{-1} model based on OSV. However, these methods are still complicated. In 2008, Goldstein and Osher [23] proposed the Split Bregman algorithm by combining the split method [24] with Bregman iteration [25]. This algorithm is easy to implement and has low computational complexity. This paper performs cartoon-texture image decomposition based on the ROF model by using the Split Bregman algorithm. Fig. 1(b and c) shows the cartoon-texture decomposition results of the source images 'Clock'. It is obvious that the salient features of the cartoon and texture components correspond to the local features of the objects in focus. Thus, the cartoon and texture components can be used to build a robust fusion scheme that accurately discriminates the focused regions from the defocused regions. In this paper, the salient features of the cartoon and texture components are used to detect the focused regions.

Fig. 1. Cartoon-texture decomposition results of the multi-focus image 'Clock': (a) source images I, (b) cartoon component U and (c) texture component V.
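To make the decomposition step concrete, a minimal sketch is given below, assuming a Python environment with scikit-image; it is not the authors' Matlab implementation. The Split Bregman TV solver `denoise_tv_bregman` plays the role of the ROF solver, the cartoon is its output, the texture is the residual, and the file name and regularization weight are hypothetical choices.

```python
import numpy as np
from skimage import img_as_float
from skimage.io import imread
from skimage.restoration import denoise_tv_bregman

def cartoon_texture_decompose(image, weight=10.0):
    """Split an image into cartoon and texture components, f = u + v.

    The cartoon u is the TV-regularized (ROF) version of f, obtained here with a
    Split Bregman solver; the texture v is the residual f - u. The weight is an
    assumed setting: larger values keep u closer to the input.
    """
    f = img_as_float(image)
    u = denoise_tv_bregman(f, weight=weight)   # cartoon: structures, smooth pieces
    v = f - u                                  # texture: fine details, oscillations
    return u, v

# Example usage on one source image of the 'Clock' pair (file name is illustrative).
if __name__ == "__main__":
    f = imread("clock_near_focus.png", as_gray=True)
    u, v = cartoon_texture_decompose(f)
    print(u.shape, v.shape)
```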
3. Fusion method based on cartoon-texture image decomposition

3.1. Fusion algorithm

In this section, a novel fusion method based on cartoon-texture image decomposition is proposed. The source images are first decomposed into cartoon and texture components. Then, the two kinds of components are integrated according to their respective fusion rules. The proposed fusion framework is depicted in Fig. 2 and the detailed design is described as follows. For simplicity, this paper assumes that there are only two source images, namely I_A and I_B; the rationale behind the proposed scheme extends to the fusion of more than two multi-focus images. The source images are assumed to be pre-registered, and image registration is not included in the framework. The fusion algorithm consists of the following three steps:

Fig. 2. Block diagram of the proposed multi-focus image fusion framework.

Step 1: Perform cartoon-texture image decomposition on the source images I_A and I_B to obtain their cartoon and texture components. For the source image I_A, let U_A and V_A denote the cartoon and texture components, respectively. For the source image I_B, U_B and V_B have roles similar to U_A and V_A.
Moreover, a color source image can be processed as a single matrix made up of three individual source images, one per color band. For a color image, the cartoon and texture components are defined as:
I_c = U_c + V_c, \quad c \in \{R\ \text{band}, G\ \text{band}, B\ \text{band}\}    (3)
where c denotes the index of the image color band.

Step 2: According to the fusion rules, U_A and U_B are integrated to obtain U, the cartoon component of the fused image. Similarly, V_A and V_B are combined to form V, the texture component of the fused image.

Step 3: U and V are superposed to form the fused image F.

3.2. Fusion rule

There are two key issues [13] for the fusion rules. The first is how to measure the activity level of the cartoon and texture components, which reflects the sharpness of the source images. Fig. 3(a–d) shows the relationship between the multi-components of the source images 'Clock' and their 3D shapes. It is obvious that the salient protruding portions of the 3D shapes of the components correspond to the salient regions of the cartoon and texture components, and the salient regions of the cartoon and texture components correspond to the focused regions of the source images. Thus, we use the EOG of the pixels within an M × N (M = 2s + 1, N = 2t + 1) window of the cartoon and texture components to measure the activity level, where s and t are positive integers. The EOG is calculated as:
EOG(i, j) = \sum_{m=-(M-1)/2}^{(M-1)/2} \sum_{n=-(N-1)/2}^{(N-1)/2} \left( I_{i+m}^{2} + I_{j+n}^{2} \right), \quad I_{i+m} = I(i+m+1, j) - I(i+m, j), \quad I_{j+n} = I(i, j+n+1) - I(i, j+n)    (4)

where I(i, j) indicates the value of the pixel at location (i, j) in the cartoon or texture component. The window size is set to 5 × 5; the exact size can be chosen based on experimental results. The second issue is how to integrate the focused pixels or regions from the cartoon and texture components into the counterparts of the fused image. In order to eliminate blocking artifacts, a sliding window technique is applied to the cartoon and texture components. Let EOG^{U_A}_{(i,j)} and EOG^{U_B}_{(i,j)} denote the EOG of all the pixels within the sliding windows that cover the neighbourhood region of the pixel location (i, j) in U_A and U_B, respectively. EOG^{V_A}_{(i,j)} and EOG^{V_B}_{(i,j)} play similar roles for the pixel location (i, j) in V_A and V_B. The EOG values of the neighbourhood region of the pixel location (i, j) in U_A, U_B, V_A and V_B are compared to determine which pixel is likely to belong to the focused regions. Two decision matrices H^U and H^V are constructed to record the comparison results according to the following selection rules:

H^{U}(i, j) = \begin{cases} 1, & EOG^{U_A}_{(i,j)} \ge EOG^{U_B}_{(i,j)} \\ 0, & \text{otherwise} \end{cases}    (5)

H^{V}(i, j) = \begin{cases} 1, & EOG^{V_A}_{(i,j)} \ge EOG^{V_B}_{(i,j)} \\ 0, & \text{otherwise} \end{cases}    (6)

where "1" in H^U indicates that the pixel at location (i, j) in U_A is in focus, while "0" in H^U indicates that the pixel at location (i, j) in U_B is in focus. Likewise, "1" in H^V indicates that the pixel at location (i, j) in V_A is in focus, while "0" in H^V indicates that the pixel at location (i, j) in V_B is in focus. However, judging by EOG alone is not sufficient to distinguish all the focused pixels: there are thin protrusions, narrow breaks, thin gulfs, small holes, etc. in H^U and H^V. To overcome these disadvantages, morphological operations [26] are performed on H^U and H^V. Opening, denoted H^U ∘ Z and H^V ∘ Z, is simply erosion of H^U and H^V by the structuring element Z, followed by dilation of the result by Z; this process can remove thin gulfs and thin protrusions. Closing, denoted H^U • Z and H^V • Z, is dilation followed by erosion; it can join narrow breaks and thin gulfs. To correctly judge the small holes, a threshold is set and holes smaller than the threshold are removed. Thus, the final fused cartoon and texture components are constructed according to the following rules:

U(i, j) = \begin{cases} U_A(i, j), & H^{U}(i, j) = 1 \\ U_B(i, j), & H^{U}(i, j) = 0 \end{cases}    (7)

V(i, j) = \begin{cases} V_A(i, j), & H^{V}(i, j) = 1 \\ V_B(i, j), & H^{V}(i, j) = 0 \end{cases}    (8)
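The fusion rule of Eqs. (4)–(8) can be sketched as follows. This is an illustrative Python approximation rather than the authors' Matlab code: the window size, structuring element and small-region threshold are assumed values, and vectorized scipy/scikit-image routines stand in for the hand-written sliding-window loop mentioned in Section 4.

```python
import numpy as np
from scipy.ndimage import uniform_filter, binary_opening, binary_closing
from skimage.morphology import remove_small_holes, remove_small_objects

def eog_map(component, window=5):
    """Windowed energy of image gradient (Eq. (4)), up to a constant scale factor."""
    gx = np.zeros_like(component)
    gy = np.zeros_like(component)
    gx[:-1, :] = component[1:, :] - component[:-1, :]   # I(i+1, j) - I(i, j)
    gy[:, :-1] = component[:, 1:] - component[:, :-1]   # I(i, j+1) - I(i, j)
    energy = gx ** 2 + gy ** 2
    # uniform_filter gives the window mean; the windowed sum differs only by window**2,
    # which does not affect the comparisons in Eqs. (5) and (6).
    return uniform_filter(energy, size=window)

def fuse_components(comp_a, comp_b, window=5, min_region=64):
    """Combine one pair of cartoon (or texture) components, Eqs. (5)-(8)."""
    struct = np.ones((5, 5), dtype=bool)                             # assumed structuring element Z
    decision = eog_map(comp_a, window) >= eog_map(comp_b, window)    # H = 1 where A is in focus
    decision = binary_opening(decision, structure=struct)            # remove thin protrusions/gulfs
    decision = binary_closing(decision, structure=struct)            # join narrow breaks
    decision = remove_small_holes(decision, area_threshold=min_region)   # drop small holes
    decision = remove_small_objects(decision, min_size=min_region)       # drop small isolated regions
    return np.where(decision, comp_a, comp_b)

def fuse_images(ua, va, ub, vb):
    """Fuse cartoon and texture components separately, then superpose (Step 3)."""
    u = fuse_components(ua, ub)
    v = fuse_components(va, vb)
    return u + v
```

Following Eq. (3), a color image would be handled by applying the same procedure to each color band separately.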
4. Experimental results and discussion

In order to evaluate the performance of the proposed method, several experiments are performed on four pairs of multi-focus source images, as shown in Fig. 4. The upper two pairs are grayscale images with a size of 640 × 380 pixels. The remaining pairs are color images with sizes of 512 × 384 pixels and 640 × 380 pixels, respectively. In general, image registration should be performed before image fusion.
Fig. 3. The relationship between the multi-components of the source images 'Clock' and their 3D shapes: (a) cartoon component of the far focused image, (b) cartoon component of the near focused image, (c) texture component of the far focused image and (d) texture component of the near focused image.
Fig. 4. Multi-focus source images: (a) near focused image ‘Disk’, (b) far focused image ‘Disk’, (c) near focused image ‘Lab’, (d) far focused image ‘Lab’, (e) far focused image ‘Rose’, (f) near focused image ‘Rose’, (g) far focused image ‘Book’ and (h) near focused image ‘Book’.
Experiments are conducted with Matlab R2011b in a Windows environment on a computer with an Intel Xeon X5570 CPU and 48 GB of memory. For comparison, besides the proposed method, several existing multi-focus image fusion methods are implemented on the same set of source images [27,28]: the Laplacian pyramid (LAP), the discrete wavelet transform (DWT), the nonsubsampled contourlet transform (NSCT), principal component analysis (PCA) and spatial frequency (SF, Li's method [3]). Due to the lack of original source code, this paper uses Eduardo Fernandez Canga's Matlab image fusion toolbox [29] as the reference implementation for LAP, DWT, PCA and SF. Specifically, the Daubechies wavelet function 'bi97' is used in the DWT, and the decomposition level of DWT and LAP is 4. The NSCT toolbox [30] is used as the reference for NSCT, with the pyramid filter '9-7', the orientation filter '7-9' and {4, 4, 3} levels of decomposition. The Split Bregman toolbox [31] is used as the reference for the proposed method. In order to quantitatively compare the performance of the proposed method with that of the other methods mentioned above, two metrics are used to evaluate the fusion performance: (i) mutual information (MI) [32], which reveals the degree of dependence between the source images and the fused image, and (ii) Q^{AB/F} [33], which reflects the amount of edge information transferred from the source images to the fused image. Larger values of both metrics indicate better fusion results.
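As a concrete reference for the first metric, the sketch below estimates MI = I(A; F) + I(B; F) from joint grey-level histograms. It is a hedged illustration of the standard definition, not the authors' evaluation script; the 256-bin histogram is an assumed choice.

```python
import numpy as np

def mutual_information(x, y, bins=256):
    """I(X;Y) in bits, estimated from the joint histogram of two images."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                      # zero-probability cells contribute nothing
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(source_a, source_b, fused):
    """MI fusion metric: information the fused image shares with both sources."""
    return mutual_information(source_a, fused) + mutual_information(source_b, fused)
```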
4.1. Comparison results on grayscale images
For visual evaluation, the fused images 'Disk' and 'Lab' obtained by the different fusion methods are shown in Figs. 5(a–f) and 6(a–f). The difference images between the fused images and their corresponding far-focused source image for 'Lab' are shown in Fig. 7(a–f). The fused images obtained by the other fusion methods demonstrate obvious blurs and artifacts, such as the blurry regions of the white book in Fig. 5(a–d) and the upper edge of the student's head in Fig. 6(a–d). Blocking artifacts appear in the fused images obtained by SF, such as the upper edge of the clock in Fig. 5(e) and the upper edge of the student's head in Fig. 6(e). The contrast of the fused image obtained by PCA is the worst, and the contrast of the fused image obtained by the proposed method is the best. There are obvious residuals in the difference images obtained by LAP, DWT, NSCT, PCA and SF: there are distortions in the difference images in Fig. 7(a–c), and a few residuals remain in the left regions of Fig. 7(d and e). Thus, the fused image of the proposed method achieves superior visual performance by containing all the focused contents from the source images without introducing artifacts.

Fig. 5. The fused images 'Disk' obtained by LAP (a), DWT (b), NSCT (c), PCA (d), SF (e) and the proposed method (f).

Fig. 6. The fused images 'Lab' obtained by LAP (a), DWT (b), NSCT (c), PCA (d), SF (e) and the proposed method (f).

Fig. 7. The difference images between the right focused source image 'Lab' and the corresponding fused images obtained by LAP (a), DWT (b), NSCT (c), PCA (d), SF (e) and the proposed method (f).
Table 1
The performance of different fusion methods for grayscale multi-focus images 'Disk' and 'Lab'.

Method     | Disk                          | Lab
           | MI     Q^AB/F   Run-time (s)  | MI     Q^AB/F   Run-time (s)
LAP        | 6.14   0.69     0.91          | 7.10   0.71     0.91
DWT        | 5.36   0.64     0.64          | 6.47   0.69     0.59
NSCT       | 5.88   0.67     463.19        | 6.95   0.71     468.51
PCA        | 6.02   0.53     0.32          | 7.12   0.59     0.08
SF         | 7.00   0.68     1.01          | 7.94   0.72     1.03
Proposed   | 7.25   0.72     21.08         | 8.20   0.75     17.09
For quantitative comparison, the results on the grayscale multi-focus images in the two quality measures, along with the running times, are shown in Table 1. The proposed method gains higher MI and Q^{AB/F} values than the other methods. One can see that the running time of the proposed method is longer than that of the other methods except NSCT. Because the sliding window technique is applied for the detection of focused regions, computing the EOG of all pixels in each sliding window requires a longer computational time.
4.2. Comparison results on color images
For visual evaluation, the fused images 'Rose' and 'Book' obtained by the different fusion methods are shown in Figs. 8(a–f) and 9(a–f). The fused images obtained by LAP, DWT and NSCT demonstrate obvious blurs and artifacts, such as the upper edge of the rose flower in Fig. 8(a–c) and the right corner of the left book in Fig. 9(a–c). Blocking artifacts appear in the fused images obtained by SF, such as the right edge of the door frame in Fig. 8(e) and the cover of the left book in Fig. 9(e); in addition, a blurry edge between the two books appears in Fig. 9(e). The contrast of the fused images obtained by PCA is the worst, as seen in the rose flower in Fig. 8(d) and the cover of the left book in Fig. 9(d), while the contrast of the fused images obtained by the proposed fusion method is better than that of the other fusion methods. Thus, the fused image of the proposed method achieves better qualitative performance by integrating all the focused regions from the source images without introducing artifacts.

Fig. 8. The fused image 'Rose' obtained by LAP (a), DWT (b), NSCT (c), PCA (d), SF (e) and the proposed method (f).

Fig. 9. The fused image 'Book' obtained by LAP (a), DWT (b), NSCT (c), PCA (d), SF (e) and the proposed method (f).

For quantitative comparison, the results on the color multi-focus images in the two quality measures are shown in Table 2. The proposed method gains higher MI and Q^{AB/F} values than the other methods. The computational times are also shown in Table 2. Again, the proposed method needs a longer running time than the other methods except NSCT. The high computational complexity stems from the sliding window searching the entire image to compute the EOG of all the pixels within the window, which is inefficiently coded using a loop structure in Matlab.

Table 2
The performance of different fusion methods for color multi-focus images 'Rose' and 'Book'.

Method     | Rose                          | Book
           | MI     Q^AB/F   Run-time (s)  | MI     Q^AB/F   Run-time (s)
LAP        | 5.75   0.69     2.32          | 6.94   0.71     3.12
DWT        | 5.07   0.66     1.01          | 6.42   0.69     1.57
NSCT       | 5.33   0.69     222.86        | 6.51   0.69     366.40
PCA        | 5.73   0.70     0.04          | 7.16   0.62     0.14
SF         | 7.15   0.72     1.90          | 7.71   0.70     3.10
Proposed   | 7.30   0.73     42.14         | 7.87   0.73     66.82

5. Conclusion and future work

This paper proposes a novel multi-focus image fusion method based on cartoon-texture image decomposition. The salient features computed from the cartoon and texture components are able to represent the salient information of the source images. Experiments were performed on various pairs of grayscale and color images. The qualitative and quantitative evaluations demonstrate that the proposed method achieves superior fusion results compared to several existing fusion methods and significantly improves the quality of the fused image. In the future, we will consider optimizing the proposed method to reduce its running time.

Acknowledgements

The work was supported by the National Key Technology Science and Technique Support Program (No. 2013BAH49F03), the National Natural Science Foundation of China (No. 61379010), the Key Technologies R&D Program of Henan Province (No. 132102210515), and the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2012JQ1012).

References
[1] Y. Zhang, L. Chen, Z. Zhao, J. Jia, Multi-focus image fusion with robust principal component analysis and pulse coupled neural network, Optik 125 (17) (2014) 5002–5006.
[2] S.T. Li, X.D. Kang, J.W. Hu, B. Yang, Image matting for fusion of multi-focus images in dynamic scenes, Inf. Fusion 14 (2) (2013) 147–162.
[3] S.T. Li, J.T. Kwok, Y. Wang, Combination of images with diverse focus using spatial frequency, Inf. Fusion 2 (3) (2001) 169–176.
[4] W. Huang, Z. Jing, Evaluation of focus measures in multi-focus image fusion, Pattern Recognit. Lett. 28 (4) (2007) 493–500.
[5] S. Li, J.T. Kwok, Y. Wang, Multifocus image fusion using artificial neural networks, Pattern Recognit. Lett. 23 (8) (2002) 985–997.
[6] S. Li, J. Kwok, I. Tsang, Y. Wang, Fusing images with different focuses using support vector machines, IEEE Trans. Neural Networks 15 (6) (2004) 1555–1561.
[7] D. Fedorov, B. Sumengen, B.S. Manjunath, Multi-focus imaging using local focus estimation and mosaicking, in: Proceedings of the IEEE International Conference on Image Processing, Atlanta, GA, USA, October 2006, pp. 2093–2096.
[8] V. Aslantas, R. Kurban, Fusion of multi-focus images using differential evolution algorithm, Expert Syst. Appl. 37 (12) (2010) 8861–8870.
[9] W. Wu, X.M. Yang, Y.P.J. Pang, G. Jeon, A multifocus image fusion method by using hidden Markov model, Opt. Commun. 287 (2013) 63–72.
[10] A. Goshtasby, Fusion of multifocus images to maximize image information, in: Defense and Security Symposium, Orlando, FL, 2006, pp. 17–21.
[11] I. De, B. Chanda, Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure, Inf. Fusion 14 (2) (2013) 136–146.
[12] Y.F. Li, X.C. Feng, Image decomposition via learning the morphological diversity, Pattern Recognit. Lett. 33 (2) (2012) 111–120.
[13] Y. Jiang, M. Wang, Image fusion with morphological component analysis, Inf. Fusion 18 (2014) 107–118.
[14] W. Casaca, A. Paiva, E.G. Nieto, P. Joia, L.G. Nonato, Spectral image segmentation using image decomposition and inner product-based metric, J. Math. Imaging Vis. 45 (3) (2013) 227–238.
[15] Z.C. Guo, J.X. Yin, Q. Liu, On a reaction-diffusion system applied to image decomposition and restoration, Math. Comput. Modell. 53 (5–6) (2011) 1336–1350.
[16] D. Mumford, J. Shah, Optimal approximations by piecewise smooth functions and associated variational problems, Commun. Pure Appl. Math. 42 (5) (1989) 577–685.
[17] L. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D: Nonlinear Phenom. 60 (1–4) (1992) 259–268.
[18] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, University Lecture Series, vol. 22, AMS, 2001.
[19] L.A. Vese, S.J. Osher, Modeling textures with total variation minimization and oscillating patterns in image processing, SIAM J. Sci. Comput. 19 (1–3) (2003) 553–572.
[20] S. Osher, A. Sole, L. Vese, Image decomposition and restoration using total variation minimization and the H−1 norm, J. Sci. Comput. 1 (3) (2003) 349–370.
[21] J.F. Aujol, A. Chambolle, Dual norms and image decomposition models, Int. J. Comput. Vision 63 (1) (2005) 85–104.
[22] T.F. Chan, S. Esedoglu, F.E. Park, Image decomposition combining staircase reduction and texture extraction, J. Visual Commun. Image Represent. 18 (6) (2007) 464–486.
[23] T. Goldstein, S. Osher, The split Bregman method for L1-regularized problems, SIAM J. Imag. Sci. 2 (2) (2009) 323–343.
[24] Y.L. Wang, J.F. Yang, W.T. Yin, Y. Zhang, A new alternating minimization algorithm for total variation image reconstruction, SIAM J. Imag. Sci. 1 (3) (2008) 248–272.
[25] S. Osher, M. Burger, D. Goldfarb, J.J. Xu, W.T. Yin, An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul. 4 (2) (2005) 460–489.
[26] X.Z. Bai, F.G. Zhou, B.D. Xue, Image fusion through local feature extraction by using multi-scale top-hat by reconstruction operators, Optik 124 (18) (2013) 3198–3203.
[27] Image sets: http://www.ece.lehigh.edu/spcrl, 2005.
[28] Image sets: http://www.imgfsr.com/sitebuilder/images, 2009.
[29] Image fusion toolbox: http://www.imagefusion.org/.
[30] NSCT toolbox: http://www.ifp.illinois.edu/minhdo/software/.
[31] Split Bregman toolbox: http://tag7.web.rice.edu/Split Bregman files/.
[32] D.J.C. MacKay, Information Theory, Inference and Learning Algorithms, Cambridge University Press, 2003.
[33] C.S. Xydeas, V. Petrovic, Objective image fusion performance measure, Electron. Lett. 36 (4) (2000) 308–309.