Collaborative block compressed sensing reconstruction with dual-domain sparse representation

Information Sciences 472 (2019) 77–93

Yu Zhou a, Hainan Guo b,*

a College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
b Research Institute of Business Analytics and Supply Chain Management, College of Management, Shenzhen University, Shenzhen 518060, China

Article info

Article history: Received 29 April 2018; Revised 23 August 2018; Accepted 26 August 2018; Available online 11 September 2018

Keywords: Block compressed sensing; Kernel regression; Perceptual non-local similarity; Dual-domain sparse representation

Abstract

In the past decade, image reconstruction based on compressed sensing (CS) has attracted great interest from researchers in signal processing. Due to the tremendous amount of information that an image contains, block compressed sensing (BCS) is often applied to divide an entire image into non-overlapping sub-blocks, treating all sub-blocks separately. However, an independent reconstruction ignores the correlation between adjacent sub-blocks and results in quality degradation, both in objective and subjective assessments. To obtain a satisfactory reconstructed image, this paper proposes a collaborative BCS (CBCS) framework with dual-domain sparse representation, where local structural information (LSI) and non-local pixel similarity are jointly considered. During reconstruction, a local data-adaptive kernel regressor is introduced to extract the local image structure, which helps build the correlation of pixels between adjacent sub-blocks and preserves the details of an image. At the same time, a perceptually non-local similarity (PNLS), based on the human visual system, is used to improve visual quality. In addition, we employ both an analysis model and a synthesis model to further enhance sparseness and to formulate a dual-domain sparse representation based BCS reconstruction problem. Finally, an efficient, iterative approach, based on the multi-criteria Nash equilibrium technique, is proposed to solve this problem. Experimental results on benchmark images demonstrate that the proposed method can achieve competitive results in both numerical and visual comparisons with some state-of-the-art BCS algorithms.

1. Introduction

In recent years, due to the paradigm of simultaneous acquisition and dimension reduction of signals, compressed sensing (CS) [15] has been widely applied in different areas, such as dynamic MRI [30,37], wireless networks [3,43], image or video coding [26,28] and signal processing [52]. CS states that a sparse or compressive signal can be exactly reconstructed from a small number of highly incomplete linear measurements, as long as the Restricted Isometry Property (RIP) [8] condition is satisfied. Currently, due to the huge amount of information that multidimensional signals (images or video) contain, the primary challenge for image reconstruction involves low computational cost for the algorithm and easy storage in memory when implemented. To address this problem, block compressed sensing (BCS), which divides the image signal into several non-overlapping image sub-blocks, was developed [20,21].

Corresponding author. Fax: +86 755 26054111. E-mail addresses: [email protected] (Y. Zhou), [email protected], [email protected] (H. Guo).

https://doi.org/10.1016/j.ins.2018.08.064 0020-0255/© 2018 Published by Elsevier Inc.


Although a block-by-block method is able to achieve competitive performance in CS reconstruction, each sub-block is recovered independently and lacks cooperation from sub-blocks in the neighborhood area during the reconstruction process. The most straightforward outcome is possible artifacts between adjacent sub-blocks, which not only decrease the Peak Signal-to-Noise Ratio (PSNR) but also degrade the visual quality. One solution is to extract overlapped sub-blocks instead, with the recovered image finally being obtained by averaging the overlapped regions. However, the average alone does not capture image details well; what is worse, this operation may blur some details.

To better characterize the different structures and to maintain smoothness between pixels in a CS reconstruction, some local priors are adopted. In [40], a piecewise autoregressive (PAR) method is introduced to build a model that estimates local structures effectively. However, the reconstruction is largely dependent on the model's accuracy, which may be affected by inaccurate estimates of the original image. In [35], a local kernel regressor is proposed to estimate local structures by introducing a data-adaptive kernel function. This local kernel regression method, which can be implemented iteratively to improve the quality of the entire image, has been proven effective in the application of image inpainting, denoising, fusion, and interpolation. In addition, inspired by the fact that natural images often contain repetitive image structures, non-local self-similarities [6,7] are also considered as constraints. In [42], a non-local similarity was introduced into the process of patch aggregation in terms of the l2 norm, which preserved edge sharpness well and improved the image quality. In [51], a perceptually non-local similarity (PNLS) constraint in BCS was proposed, which improved the perceived visual quality of the reconstructed image. However, these options either model the pixel relationship in local structures or take non-local similarity into consideration, ignoring the collaboration between them.

It is also well known that sparsity plays an important role in CS reconstruction. Once sparsity is well explored, a signal can be recovered perfectly with high probability. Because an image is not sparse in the spatial domain, this exploration should be done using the sparse representation technique. Conventional BCS methods employ a transform basis, such as the discrete cosine transform (DCT), the discrete wavelet transform (DWT), and curvelets. Thus, a sparse signal can be obtained by this forward transformation, which is regarded as the analysis model for sparse representation. With the development of machine learning techniques, the adaptive dictionary learned from samples, known as a synthesis model [1,2,17], has been widely applied in the areas of image processing and computer vision, such as image super-resolution [22,49,50,53], image fusion [54], 3D human pose recovery [25], image reranking [44] and classification [48]. The idea of incorporating dictionary learning-based sparse representation into image reconstruction was introduced in [14], where local sparsity constraints were taken into consideration and an l1 norm-based non-local regularizer was proposed to exploit pixel redundancies. Although these methods improve reconstruction quality significantly, image sparsity is still not fully explored; therefore, the quality of the reconstruction can still be improved.
In this paper, to obtain a better reconstructed image, a collaborative BCS (CBCS) reconstruction framework with dual-domain sparse representation is developed. In this framework, both local structural information (LSI) and perceptually non-local similarity (PNLS) are employed. Specifically, a block-based, locally data-adaptive kernel regressor is introduced, which provides a good approximation of the LSI across different sub-blocks; the relationship between adjacent sub-blocks is thus built intrinsically. Simultaneously, PNLS constraints based on the structural similarity (SSIM) index [38] are used to enhance the perceived visual quality of the reconstructed image. In addition, we employ both analysis and synthesis models to further enhance sparseness and to formulate a dual-domain sparse representation based BCS reconstruction problem. An efficient, iterative approach, using a multi-criteria Nash equilibrium technique, is proposed to solve this problem and to obtain the final recovered image. Experimental simulations are conducted on natural images from benchmark datasets. The results demonstrate that the proposed method achieves competitive results both numerically and visually, compared with some state-of-the-art BCS algorithms. In summary, the contributions of this paper are threefold:





• A collaborative block compressed sensing (CBCS) framework is proposed, where the LSI and PNLS better characterize structural details and maintain pixel smoothness.
• A combination of synthesis and analysis sparse coding models is applied in the formulated CBCS, which better explores signal sparsity.
• An efficient, iterative approach, based on a multi-criteria Nash equilibrium technique, is proposed to solve the reconstruction problem. This results in better performance than comparative state-of-the-art BCS methods.

The remainder of this paper is organized as follows. Related work is introduced in Section 2. In Section 3, a detailed description of our proposed method is presented. Numerical and visual results on benchmark images are shown in Section 4. Finally, Section 5 offers a conclusion and suggests future work.

2. Related work

Most sparsity-based approaches can be divided into two categories, analysis-based and synthesis-based sparse regularization [18]. Natural images are known to be approximately sparse in analytical transform domains. In [31,32] and [20], the discrete wavelet transform (DWT), discrete cosine transform (DCT), and dual-tree DWT (DDWT) are applied, respectively, in BCS reconstruction after sampling in the spatial domain. Coefficients in the transform domain are updated by soft-threshold shrinkage to constrain sparsity for the entire image. A smoothed projected Landweber recovery algorithm is proposed to reconstruct each sub-block in an iterative way, and Wiener filtering is also applied to reduce the block artifacts. A multi-scale sampling technique in an analysis-based transform domain has also been developed. In [23] and [24], based on a


Bayesian CS framework, a tree structure in wavelets is exploited, utilizing structural dependencies to reduce the degrees of freedom of the sparse coefficients, thus leading to better reconstruction quality. In [39], a CS SAR image reconstruction method based on a Bayesian framework is proposed, which decomposes the image into main and residual parts. Under the assumption that the wavelet coefficients of the SAR image follow a generalized Gaussian distribution (GGD), the residual part of the image in the wavelet domain is recovered by a Bayesian evolutionary pursuit algorithm (BEPA). These multi-scale details in the wavelet domain help capture the various structures and produce higher reconstruction accuracy.

In addition to sparsity-enhanced approaches, local and non-local priors of natural images are also exploited. Local priors aim to maintain the smoothness of the reconstructed image, such as quadratic Tikhonov regularization [36] and the total variation (TV) model [33]. To better capture local structures, some local regression models in the spatial domain have also been proposed for CS reconstruction. In [40], a piecewise autoregressive (PAR) method is introduced to build a model for estimating local structures effectively. However, reconstruction is largely dependent on the model's accuracy, which may be affected by inaccurate estimates of the original image. In [35], a local kernel regressor is proposed to estimate local structures by introducing a data-adaptive kernel function. This local kernel regression method, which can be implemented iteratively to improve the quality of the entire image, has been effective in the application of image inpainting, denoising, fusion, and interpolation. In addition, inspired by the fact that images, especially natural images, often contain repetitive image structures, non-local self-similarity [6,7] is also considered as a constraint on the CS problem. In [42], a non-local similarity, introduced into the process of patch aggregation in terms of an l2 norm, helps preserve edge sharpness and improves image quality in super-resolution reconstruction.

A non-local prior is often combined with sparsity, which yields better results. In [14], the idea of incorporating dictionary learning into image reconstruction in compressive sensing was introduced, where a local sparsity constraint is taken into consideration and an l1 norm-based non-local regularizer is proposed to exploit non-local redundancies; it performed better than the l2 norm and a non-local TV regularizer [47]. In [13], a local sparsity constraint is employed together with a weighted non-local similarity regularization, which increases the overall performance of image restoration. Similarly, in [40], where a model-assisted, adaptive CS recovery method is proposed, local structural sparsity and non-local self-similarity are also combined. In [34], a non-local recovery approach is proposed, where both a non-local patch correlation and a local piecewise prior are well explored to reduce the sampling rate required for recovery. In [12], to enhance structural sparsity in CS recovery, a non-local low-rank model is applied, and a patch-grouping strategy helps capture the self-similarity of an image. The patch-grouping idea also appears in [45], where sparsity is highly stressed in terms of local 2D sparsity and non-local 3D sparsity. In [19], a joint adaptive sparsity regularization (JASR) is proposed, which uses adaptive curvelet thresholding and 3D collaborative filtering to enhance sparseness.
Although the aforementioned approaches in the literature generally use both local and non-local regularization to enhance sparsity, they choose either an analysis-based sparsity model or a synthesis-based model. Our approach considers a combination of these two types, which are linked together, resulting in a multi-criteria game theory problem. We also consider different local and non-local priors, where local structural information based on a local data-adaptive regressor is extracted, and a perceptual non-local similarity term, instead of a conventional one, is introduced.

3. Proposed method

3.1. Formulation

A conventional BCS model based on sparse representation can be described as follows:

$$\min_{\alpha_i} \|\alpha_i\|_0 \quad \text{s.t.} \quad \|y_i - A\Psi_B\alpha_i\|_2^2 \le \epsilon_1^2 \tag{1}$$

where $A \in \mathbb{R}^{m \times n}$ denotes the sensing matrix for each sub-block, $\Psi_B$ is the block-based sparse representation basis, $\alpha_i$ represents the vectorized sparse coefficients of the $i$th sub-block, $\|\cdot\|_0$ denotes the $l_0$ norm operator that counts the number of nonzero elements, $\|\cdot\|_2$ denotes the $l_2$ norm of a vector, and $\epsilon_1$ is the tolerance of the measurement error. We note that only a traditional synthesis sparse representation model is involved. In CBCS, both the LSI and PNLS are considered as constraints, which help to accurately capture the structural information in the image. The recovered model for each image sub-block can be described as:

$$\min_{\alpha_i} \|\alpha_i\|_0 \quad \text{s.t.} \quad \|y_i - A\Psi_B\alpha_i\|_2^2 \le \epsilon_1^2, \;\; \|x_i - u_i\|_2^2 \le \epsilon_2^2, \;\; \|x_i - v_i\|_2^2 \le \epsilon_3^2 \tag{2}$$

where $u_i$ and $v_i$ denote the LSI and PNLS for $x_i$, respectively, and $\epsilon_2$ and $\epsilon_3$ denote the tolerances of the representation error. To further explore the sparseness of the image, we incorporate the analysis model of sparse representation as another constraint in (2). Thus, with dual-domain sparse representation, we derive the following optimization objective function:

$$\min_{\alpha_i} \|\alpha_i\|_0 \quad \text{s.t.} \quad \|y_i - A\Psi_B\alpha_i\|_2^2 \le \epsilon_1^2, \;\; \|x_i - u_i\|_2^2 \le \epsilon_2^2, \;\; \|x_i - v_i\|_2^2 \le \epsilon_3^2, \;\; \alpha_i = \Phi_B x_i \tag{3}$$

where $\Phi_B$ denotes the sparse transform matrix that converts the signal $x_i$ into a sparse vector.
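To make the block-based setting concrete, the following is a minimal Python sketch of how the per-block measurements $y_i = A x_i$ used in (1)–(3) can be generated; the block size, sub-rate, and the use of a Gaussian random matrix for $A$ mirror the experimental setup described later, while the function name and default values are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def block_cs_measurements(image, block_size=32, subrate=0.3, seed=0):
    """Split an image into non-overlapping blocks and take Gaussian
    random measurements y_i = A x_i from each vectorized block x_i."""
    rng = np.random.default_rng(seed)
    n = block_size * block_size                    # length of a vectorized block x_i
    m = int(round(subrate * n))                    # number of measurements per block
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # block sensing matrix A

    H, W = image.shape
    blocks, measurements = [], []
    for r in range(0, H, block_size):
        for c in range(0, W, block_size):
            x = image[r:r + block_size, c:c + block_size].reshape(-1)
            blocks.append(x)
            measurements.append(A @ x)             # y_i = A x_i
    return A, blocks, measurements
```

Reusing the same matrix $A$ for every block is the standard BCS convention [20,21].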


Fig. 1. Framework of the Proposed Iterative Approach.

We note that (3) contains multiple constraints, which makes the problem difficult to solve. According to convex optimization theory, a Lagrangian multiplier is usually applied to transform a constraint into part of the objective function. In addition, two variables exist in (3). Therefore, we apply variable separation to decompose (3) into two objectives, where the objective function in (4) denotes the analysis model and the objective function in (5) denotes the synthesis sparse coding model. The task is to find a good $\alpha_i$ as well as an $x_i$ that minimizes both objectives.

$$f_1(\alpha_i, x_i) = \frac{1}{2}\|\alpha_i - \Phi_B x_i\|_2^2 + \tau_\alpha \|\alpha_i\|_0 \tag{4}$$

$$f_2(\alpha_i, x_i) = \frac{1}{2\epsilon_1^2}\|y_i - Ax_i\|_2^2 + \frac{1}{2\gamma^2}\|x_i - \Psi_B\alpha_i\|_2^2 + \frac{1}{2\epsilon_2^2}\|x_i - u_i\|_2^2 + \frac{1}{2\epsilon_3^2}\|x_i - v_i\|_2^2 \tag{5}$$

This gives us our collaborative BCS (CBCS) reconstruction model.

3.2. Proposed iterative approach

Tackling the formulated bi-variable problem in (4) and (5) is non-trivial. The two variables exist in both objectives, and the determination of one variable affects the other. Specifically, based on the synthesis-based and analysis-based sparse coding models, the relationship between $x_i$ and $\alpha_i$ is described as $x_i = \Psi_B\alpha_i$ and $\alpha_i = \Phi_B x_i$. To deal with this scenario, inspired by ideas in [9] and [10], we turn to the multi-criteria Nash equilibrium of both objectives, which means that any deviation from the fixed point $(\alpha_i^*, x_i^*)$ in (6) or (7) would cause an increase of at least one objective.

$$\alpha_i^* = \arg\min_{\alpha_i} f_1(\alpha_i, x_i^*) \tag{6}$$

$$x_i^* = \arg\min_{x_i} f_2(\alpha_i^*, x_i) \tag{7}$$

For (6) and (7), one well-known methodology is to fix one variable and update the other alternately until convergence. Thus, with an iterative approach, we rewrite the objectives as follows:

$$\alpha_i^{(k)} = \arg\min_{\alpha_i} f_1(\alpha_i, x_i^{(k)}) \tag{8}$$

$$x_i^{(k+1)} = \arg\min_{x_i} f_2(\alpha_i^{(k)}, x_i) \tag{9}$$

Considering (4) and (6) together, the analysis-based model can be written as

$$\alpha_i^{(k)} = \arg\min_{\alpha_i} \frac{1}{2}\|\alpha_i - \Phi_B x_i^{(k)}\|_2^2 + \tau_\alpha\|\alpha_i\|_0 \tag{10}$$

Similarly, combining (5) and (7), we obtain the synthesis model as follows:

$$x_i^{(k+1)} = \arg\min_{x_i} \frac{1}{2\epsilon_1^2}\|y_i - Ax_i\|_2^2 + \frac{1}{2\gamma^2}\|x_i - \Psi_B\alpha_i^{(k)}\|_2^2 + \frac{1}{2\epsilon_2^2}\|x_i - u_i\|_2^2 + \frac{1}{2\epsilon_3^2}\|x_i - v_i\|_2^2 \tag{11}$$

According to (10) and (11), the entire framework of the proposed iterative approach is presented in Fig. 1.

In (10), given $x_i^{(k)}$ and a sparse basis $\Phi_B$, a hard-thresholding shrinkage (HTS) approach is applied to obtain the sparse representation vector $\alpha_i^{(k)}$, where each scalar element of $\alpha_i^{(k)}$ is updated by (12):



$$\alpha_i^{(k)} = \begin{cases} \alpha_i^{(k)}, & \alpha_i^{(k)} \ge TH \\ 0, & \alpha_i^{(k)} < TH \end{cases} \tag{12}$$

where $TH$ denotes the predefined threshold. The detailed procedure for the analysis-based model is presented in Fig. 2.
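A minimal sketch of the element-wise rule in (12); here the comparison is written with the coefficient magnitude, as is usual for hard-thresholding shrinkage, and the threshold value is assumed to be supplied externally (e.g., following the DDWT settings of [20]).

```python
import numpy as np

def hard_threshold(alpha, th):
    """Hard-thresholding shrinkage, cf. (12): coefficients whose
    magnitude falls below the threshold TH are set to zero."""
    out = alpha.copy()
    out[np.abs(out) < th] = 0.0
    return out
```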


Fig. 2. The analysis-based model.

Fig. 3. The Synthesis-Based Model.

We note that Eq. (11) is differentiable with respect to $x_i$. Therefore, a gradient descent approach is applied, and the iterative step is expressed as (13):

$$x_i^{(k+1)} = \hat{x}_i^{(k)} + \beta\,(A^T A)^{+}\!\left[\frac{1}{\epsilon_1^2} A^T\!\left(y_i - A\hat{x}_i^{(k)}\right) - \frac{1}{\gamma^2}\!\left(x_i^{(k)} - \hat{x}_i^{(k)}\right) - \frac{1}{\epsilon_2^2}\!\left(\hat{x}_i^{(k)} - u_i^{(k)}\right) - \frac{1}{\epsilon_3^2}\!\left(\hat{x}_i^{(k)} - v_i^{(k)}\right)\right] \tag{13}$$

where $(\cdot)^{+}$ denotes the Moore–Penrose pseudo-inverse and $\beta$ represents the step size (learning rate). The intermediate variable $\hat{x}_i^{(k)}$ is obtained by $\Psi_B\alpha_i^{(k)}$. The procedure for solving the synthesis-based model is shown in Fig. 3. In the synthesis-based model, at each iteration, $u_i^{(k)}$ and $v_i^{(k)}$ can be obtained by (14) and (15), respectively:

$$u_i^{(k)} = \mathrm{LSI}\big(\hat{x}_i^{(k)}\big) \tag{14}$$

$$v_i^{(k)} = \mathrm{PNLS}\big(\hat{x}_i^{(k)}\big) \tag{15}$$

where $\mathrm{LSI}(\cdot)$ and $\mathrm{PNLS}(\cdot)$ denote the operators that extract the LSI and PNLS of the corresponding image block, respectively.

3.3. Implementation details

3.3.1. LSI

Inspired by the fact that the local data-adaptive kernel regressor (LDAKR) [35] is good at capturing the local structure of an image, a block-based LDAKR (BLDAKR) is used in our method, the details of which are described as follows. In an image $X$, a local searching window $WB_i$, $i = 1,\ldots,T$, of size $w_L \times w_L$ covering the area of the $i$th block $x_i$ is located. Specifically, $x_i$ should lie in the center of $WB_i$. For the $j$th pixel in $x_i$ with 2D coordinate $t_{i,j}$, its LSI can be obtained by a weighted summation of its neighborhood pixels (regarded as reference pixels) within a kernel of size $w_g \times w_g$. Thus, the regression task can be described as (16):

$$\hat{F}(t_{i,j}) = \sum_{p=1}^{P_{i,j}} W_p^{i,j}\big(t_p^{i,j}, K_{H_p}^{i,j,adapt}, N_o\big)\, q_p^{i,j}, \quad i = 1,\ldots,T,\; j = 1,\ldots,B^2 \tag{16}$$

with

$$\sum_{p=1}^{P_{i,j}} W_p^{i,j} = 1 \tag{17}$$

where $\hat{F}(t_{i,j})$ denotes the LSI of the pixel located at $t_{i,j}$, $W_p^{i,j}$ is the equivalent weight, $q_p^{i,j}$, $p = 1,\ldots,P_{i,j}$, denotes the reference pixels, $P_{i,j}$ represents the number of reference pixels, and $t_p^{i,j}$ is the 2D coordinate of the $p$th reference pixel.


Fig. 4. The details of BLDAKR. Blocks A and F lie in the center of local searching windows 1 and T, respectively. In window 1, the black dots denote the reference pixels, while the white ones denote the pixels whose LSI is currently being calculated. For example, for the bottom-left pixel in block A, the reference pixels used to compute the LSI are its neighborhood pixels within a certain Gaussian kernel size (the area in the dashed circle, with the pixel itself lying in the center).

In (16), besides the location of the pixel and the kernel function, the equivalent weight $W_p^{i,j}$ is related to the order of the regression, $N_o$. In LSI, we set $N_o = 0$. Moreover, the $p$th kernel function $K_{H_p}^{i,j,adapt}$ used above is defined in (18):

$$K_{H_p}^{i,j,adapt}\big(t_p^{i,j} - t_{i,j}\big) = \frac{\sqrt{\det\big(C_p^{i,j}\big)}}{2\pi h^2 \mu_{i,j,k}^2}\exp\!\left(-\frac{\big(t_p^{i,j} - t_{i,j}\big)^T C_p^{i,j}\big(t_p^{i,j} - t_{i,j}\big)}{2h^2\mu_{i,j,k}^2}\right) \tag{18}$$

where $\mu_{i,j,k}$ denotes the local sampling density and the matrices $C_p^{i,j}$ are the corresponding covariance matrices. More details regarding the calculation of the equivalent weights can be found in [35].

The details of the LSI obtained by BLDAKR are graphically illustrated in Fig. 4. Different from the conventional, pixel-based processing in LDAKR, BLDAKR makes use of the local searching window in a more efficient way: a local window covers the reconstructed block, with the block in the center area, and each pixel in the block is then calculated one by one using kernel regression. In BCS, when considering non-overlapping blocks, the number of local windows is significantly reduced compared with that of one-step sliding windows; in our method, the sliding step of a local window is equal to B. Therefore, at each iteration, the LSI for block $x_i^{(k)}$ is obtained by aggregating all the values of $\hat{F}(t_{i,j})$ into one vector $u_i^{(k)}$. The process of calculating the LSI $u_i$ of $x_i$ is described in Algorithm 1.

Algorithm 1 LSI.
Input: The image $X$ and its corresponding blocks $x_i$, $i = 1,\ldots,T$; the local searching windows $WB_i$, $i = 1,\ldots,T$; the number of reference points for the $j$th pixel in the $i$th block, $P_{i,j}$;
Output: The LSI, $u_i$, $i = 1,\ldots,T$;
1: for all $x_i \in X$, $i = 1 : T$ do
2:   Locate $WB_i$ for $x_i$;
3:   for all $j \in x_i$, $j = 1 : B^2$ do
4:     Calculate the LSI for the $j$th pixel using (16) with kernel size $w_g = \sqrt{P_{i,j} + 1}$;
5:   end for
6:   Aggregate the LSI results for all pixels in $x_i$ to form the vector $u_i$;
7: end for
8: Store all $u_i$'s for the blocks in the image.
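The sketch below illustrates the zeroth-order kernel regression behind Algorithm 1. For brevity, it uses a fixed isotropic Gaussian kernel instead of the data-adaptive steering kernel of (18), so it should be read as a simplified stand-in for the BLDAKR operator; the window and kernel sizes follow the values given in Section 4.2.1, and all names are our own.

```python
import numpy as np

def lsi_block(image, top, left, block_size=32, kernel_size=9, h=2.0):
    """Simplified zeroth-order kernel regression (LSI) for one block:
    each pixel becomes a normalized, Gaussian-weighted average of the
    reference pixels in its kernel_size x kernel_size neighborhood."""
    rad = kernel_size // 2
    dy, dx = np.mgrid[-rad:rad + 1, -rad:rad + 1]
    w = np.exp(-(dy ** 2 + dx ** 2) / (2.0 * h ** 2))   # isotropic kernel weights

    H, W = image.shape
    u = np.zeros((block_size, block_size))
    for j in range(block_size):
        for i in range(block_size):
            r, c = top + j, left + i
            rr = np.clip(r + dy, 0, H - 1)               # clamp at image borders
            cc = np.clip(c + dx, 0, W - 1)
            q = image[rr, cc]                            # reference pixels q_p
            u[j, i] = np.sum(w * q) / np.sum(w)          # weights sum to one, cf. (17)
    return u.reshape(-1)                                 # vectorized LSI u_i
```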


Fig. 5. PNLS.

3.3.2. PNLS

Since the emergence of the non-local means filtering method, the self-similarity between pixels in non-local areas of an image has been explored widely in image processing. In this paper, therefore, we apply the PNLS model proposed in [51], which is based on the SSIM index. As shown in Fig. 5, each block $x_i$ is first divided into a certain number of $b \times b$ non-overlapping sub-blocks, $z_i^m$, $m = 1,\ldots,\left(\frac{B}{b}\right)^2$. Then, the PNLS for $z_i^m$ is obtained from a weighted average of similar sub-blocks in the non-local window $NL_i$. The perceptual similarity measure is given in (19):

$$\{z_i^n\} = \arg\min_{\{z_i^n\}} \sum_{n=1}^{N_s} \big(2 - S_1(z_i^m, z_i^n) - S_2(z_i^m, z_i^n)\big) \tag{19}$$

where $z_i^n$ denotes a similar patch in $NL_i$ and $N_s$ denotes the number of selected similar patches. In addition, $S_1$ and $S_2$ are defined in (20) and (21), respectively:

$$S_1(x, y) = \frac{2xy + c_1}{x^2 + y^2 + c_1} \tag{20}$$

where x and y are the mean values for two comparative image sub-blocks.

$$S_2(x, y) = \frac{2s_x s_y + c_2}{s_x^2 + s_y^2 + c_2} \tag{21}$$

where s2x and s2y are the variances for two comparative image sub-blocks, respectively. Then, the PNLS for zm is obtained by weighted sum described as follows: i

$$z_i^m = \sum_{n=1}^{N_s} z_i^n\,\omega_{m,n}, \quad m = 1,\ldots,N_{sub} \tag{22}$$

where $N_{sub}$ is the number of non-overlapping sub-blocks in $x_i$ and $\omega_{m,n}$ is determined by (23):



$$\omega_{m,n} = \frac{\exp\!\big(-h\,(2 - S_1(z_i^m, z_i^n) - S_2(z_i^m, z_i^n))\big)}{\sum_{z_i^n \in NL_i}\exp\!\big(-h\,(2 - S_1(z_i^m, z_i^n) - S_2(z_i^m, z_i^n))\big)} \tag{23}$$

and where $h > 0$ is a scaling parameter. Therefore, after obtaining all the updated patches $z_i^m$, $m = 1,\ldots,N_{sub}$, in block $x_i$, the PNLS of $x_i$ can be constructed by aggregating them together.
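The following is a minimal sketch of the PNLS computation in (19)–(23) for a single sub-block; the constants c1 and c2 and the scaling parameter h are illustrative defaults borrowed from common SSIM practice, not the values used in the paper's experiments, and the candidate patches are assumed to have been extracted from the non-local window beforehand.

```python
import numpy as np

def s1(mx, my, c1=6.5025):
    # luminance comparison, cf. (20); mx, my are the sub-block means
    return (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)

def s2(sx, sy, c2=58.5225):
    # contrast comparison, cf. (21); sx, sy are the sub-block standard deviations
    return (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)

def pnls_subblock(z, candidates, n_s=16, h=1.0):
    """Perceptual non-local estimate of one b x b sub-block z: keep the
    n_s candidates with the smallest perceptual distance 2 - S1 - S2,
    cf. (19), and average them with the exponential weights of (23)."""
    dists = np.array([2.0 - s1(z.mean(), c.mean()) - s2(z.std(), c.std())
                      for c in candidates])
    idx = np.argsort(dists)[:n_s]            # the N_s most similar patches
    w = np.exp(-h * dists[idx])              # unnormalized weights
    w /= w.sum()                             # normalization as in (23)
    return sum(wi * candidates[i] for wi, i in zip(w, idx))
```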

3.3.3. The complete CBCS

From (10) and (11), we observe that once the sparse representation $\alpha_i^{(k)}$ is obtained, the estimated reconstructed blocks can be updated. Because the analysis sparse representation model is essentially a transform basis, it is usually applied to the entire image. So, in our implementation, we adopt the matrix $\Phi$, which operates on the entire image. In our BCS algorithm, the sensing matrix is block-based, and all the sub-blocks are initially recovered by solving (13). Then, all the sub-blocks are used to constitute the image $X^{(k+1)}$, and the corresponding HTS approach is applied to obtain the updated sparse coefficients $\alpha^{(k+1)}$ for the entire image. Without loss of generality, the classical DDWT transform basis suggested in [20] is chosen, and we follow its settings for the threshold during the iteration. Lastly, the estimated image $\hat{X}^{(k+1)}$ is obtained and divided into blocks, which are optimized by (10) and (11) again, and the loop continues. The entire process of CBCS is described in Algorithm 2.

Algorithm 2 The proposed iterative approach.
Input: Initialized solution $x_i^{(0)}$, $i = 1,\ldots,T$; initialized reconstructed image $X^{(0)}$; the sparse bases $\Phi$, $\Psi$; the maximum number of iterations, $K$;
Output: The reconstructed image, $\hat{X}$;
1: $k = 1$;
2: while $k \le K$ or $\|X^{(k)} - X^{(k-1)}\|_2^2 - \|X^{(k-1)} - X^{(k-2)}\|_2^2 < \tau$ do
3:   Apply the transform basis $\Phi$ and HTS to obtain $\alpha^{(k)}$;
4:   Use $\alpha^{(k)}$ and $\Psi$ to find the estimated image $\hat{X}^{(k)}$;
5:   for all blocks in the image do
6:     $u_i^{(k)} = \mathrm{LSI}(\hat{x}_i^{(k)})$;
7:     $v_i^{(k)} = \mathrm{PNLS}(\hat{x}_i^{(k)})$;
8:     Solve the objective function in (13) and get $x_i^{(k+1)}$;
9:   end for
10:  Combine all the sub-blocks to constitute the reconstructed image $X^{(k+1)}$;
11:  $k = k + 1$;
12: end while
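To summarize how the pieces fit together, the sketch below performs one pass of the loop in Algorithm 2 under several simplifying assumptions: Phi and Psi are placeholder callables standing in for the whole-image analysis/synthesis transforms (DDWT in the paper), lsi and pnls stand in for the operators of (14) and (15), and the parameter values are illustrative rather than the reported settings.

```python
import numpy as np

def cbcs_iteration(X, y, A, blocks, Phi, Psi, lsi, pnls, th=0.1,
                   beta=0.08, eps1=1.0, gamma=1.0, eps2=1.0, eps3=1.0):
    """One outer iteration of CBCS (Algorithm 2): analysis step with
    hard thresholding, then a per-block gradient step following (13).
    blocks is a list of (row, col, size) tuples; y[i] holds A x_i."""
    alpha = Phi(X)                             # analysis transform of current estimate
    alpha[np.abs(alpha) < th] = 0.0            # HTS, cf. (12)
    X_hat = Psi(alpha)                         # synthesis of thresholded coefficients

    A_pinv = np.linalg.pinv(A.T @ A)           # (A^T A)^+ appearing in (13)
    X_next = np.zeros_like(X)
    for i, (r, c, b) in enumerate(blocks):
        x_k = X[r:r + b, c:c + b].reshape(-1)
        xh_k = X_hat[r:r + b, c:c + b].reshape(-1)
        u_k = lsi(X_hat, r, c, b)              # LSI of the block, cf. (14)
        v_k = pnls(X_hat, r, c, b)             # PNLS of the block, cf. (15)
        grad = (A.T @ (y[i] - A @ xh_k)) / eps1 ** 2 \
             - (x_k - xh_k) / gamma ** 2 \
             - (xh_k - u_k) / eps2 ** 2 \
             - (xh_k - v_k) / eps3 ** 2
        X_next[r:r + b, c:c + b] = (xh_k + beta * (A_pinv @ grad)).reshape(b, b)
    return X_next
```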

4. Experimental results and discussion

To validate the effectiveness of our proposed CBCS reconstruction method, natural images of size 512 × 512 from the benchmark dataset, shown in Fig. 6, are tested. Some state-of-the-art BCS methods are selected for comparison, namely BCS-SPL-DDWT [20], YALL1 [41], NESTA [4] and BCS-BLO [51]. Among these methods, BCS-SPL-DDWT achieves the best performance among the BCS-SPL based approaches; it applies the DDWT basis for the analysis sparse representation. Both YALL1 and NESTA work in an alternating, iterative manner, and BCS-BLO considers a perceptual non-local similarity in its reconstruction model. In addition, to verify the advantage of integrating PNLS into the CBCS reconstruction framework, a dual-domain sparse reconstruction model with only LSI (BCS-LSI) is also compared.

4.1. Reconstructed results

In our experiment, reconstructed results are measured by the peak signal-to-noise ratio (PSNR) and FSIM [46]. Both the visual quality of the reconstructed image and the numerical results are compared under different sampling rates (0.1–0.5) in the noiseless scenario, where the sampling rate $\eta$ is defined as $\eta = N/M$. Moreover, an SSIM index map [38] is introduced to visualize the difference between the reconstructed image and the original image. In addition, a block-based Gaussian random measurement matrix is applied. Parameters of the comparative approaches are set to the values with which their best performances are obtained.

In Figs. 7 and 10, images recovered by the different methods are presented. It can be observed that more blurry and blocky artifacts appear in the reconstructed images of YALL1, BCS-SPL-DDWT and NESTA. In contrast, BCS-BLO, BCS-LSI and our proposed CBCS achieve clearer reconstructed images and effective block-artifact reduction. Since the latter three approaches incorporate either local or non-local similarities into the reconstruction model, the correlation between the current block and pixels from adjacent blocks is well established; thus, the region between adjacent blocks becomes smoother. To further explore the differences in these reconstructed results, zoomed images of selected regions are presented in Figs. 8 and 11. Specific parts of some objects (e.g., the main body or mast of the boat and the woman's hat) are not reconstructed well by BCS-BLO and BCS-LSI. A significant jagged zigzag can be seen on edge structures, such as the edge of the hat in Lena. The reason may involve two aspects. On the one hand, although BCS-BLO employs PNLS to exploit non-local similarities to enhance pixel and structural consistency, local structures are degraded in its recovered images because they are not explored; on the other hand, despite extracting local structural information (LSI), BCS-LSI does not make use of the pixel redundancy in non-local regions, and blocky artifacts across blocks also influence pixel consistency. Compared with


Fig. 6. Representative test images in our experiments (a) Lena (b) Boats (c) Barbara (d) Pepper (e) Goldhill (f) Baboon.

Fig. 7. Reconstructed Results of Boats with Sampling Rate 0.1, (a)–(f) PSNR/dB:22.67, 25.43, 25.56, 27.93, 27.97, 28.25.


Fig. 8. Details of Reconstructed Results of Boats with Sampling Rate 0.1 (a)–(f) FSIM: 0.8635, 0.8768, 0.8772, 0.9359, 0.9353, 0.9364.

the above methods, the proposed CBCS not only generates a smoother recovered image but also maintains detailed structures in the image.

To visualize the quality of the reconstructed images in a more effective way, an SSIM map is applied, which is regarded as a reliable metric for visually evaluating image quality. In an SSIM map, the SSIM value of each pair of pixels in the reconstructed image and the original image is recorded and drawn as a grayscale image; the lighter the color of the SSIM map, the more similar the two images are. SSIM maps of the reconstructed images in Figs. 7 and 10 are presented in Figs. 9 and 12, respectively. Our proposed CBCS outperforms the other comparative methods in SSIM.

In addition, the numerical results of the images recovered by the different methods, presented in Tables 1 and 2, show that the proposed CBCS achieves very competitive results compared with the other methods, both in PSNR and in FSIM, when the sampling rate ranges from 0.1 to 0.5. Specifically, CBCS is 0.26–0.55 dB higher in PSNR than the best of the other methods for the different test images when $\eta = 0.1$. When $\eta = 0.3$, the proposed CBCS gains 0.12–0.28 dB over the best-performing competitor for Lena, Barbara, Peppers, Boats, Goldhill and Baboon. Also, as the sampling rate increases, the advantages of CBCS in PSNR and FSIM for the test images decrease; in some cases, BCS-BLO even outperforms the proposed CBCS. This happens because: 1) the sparse representation in CBCS only considers a predefined basis, while BCS-BLO uses an adaptively trained dictionary, so for images that contain abundant textural information, e.g. Baboon and Barbara, the representation does not perform well, which results in quality degradation; 2) when performing the recovery at a low sampling rate, extra information is in great demand, so adding more structural information and pixel similarity during reconstruction helps to improve the performance of CBCS; however, with more measurements, the redundancy of information in CBCS becomes somewhat high, which causes some over-smoothness in the reconstructed image and inaccurate recovery of some local structures.
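As an aside, SSIM maps of the kind shown in Figs. 9 and 12 can be generated, for instance, with scikit-image; the short sketch below is one possible way to do so and is not taken from the paper's own code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_map(original, reconstructed):
    """Mean SSIM and the per-pixel SSIM map (lighter means more similar),
    matching the visualization used in Figs. 9 and 12."""
    score, smap = structural_similarity(
        original.astype(np.float64), reconstructed.astype(np.float64),
        data_range=255.0, full=True)
    return score, smap
```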

4.2. Parameter setting and discussion

4.2.1. Parameter setting

In our experimental simulation, for LSI, the local region for the regression is 37 × 37 and the size of the kernel function is 9 × 9. In PNLS, the size of the non-local window $w \times w$ is 33 × 33, and $N_s = 16$. In our method, the maximum number of iterations is $J = 12$, and the regularization parameters can be found in the following discussion. In addition, for our iterative approach, the step-size parameter of the gradient descent algorithm is $\beta = 0.08$. All experiments are implemented over ten runs in Matlab 2017a and tested on a computer with the following specifications: Core i7 3.4 GHz with 8 GB of RAM.
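For quick reference, the settings stated above can be gathered in one place; the dictionary keys are our own shorthand names, and values not reported in this section (e.g., the block size) are deliberately omitted.

```python
# Settings reported in Section 4.2.1 (key names are our own shorthand).
CBCS_SETTINGS = {
    "lsi_search_window": 37,      # local region for kernel regression (37 x 37)
    "lsi_kernel_size": 9,         # kernel function size (9 x 9)
    "pnls_window": 33,            # non-local window w x w (33 x 33)
    "pnls_similar_patches": 16,   # N_s
    "max_iterations": 12,         # J
    "gradient_step_beta": 0.08,   # step size for the gradient descent
    "runs": 10,                   # each experiment averaged over ten runs
}
```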


Fig. 9. SSIM Map of Reconstructed Results of Boats with sampling rate 0.1 (a)–(f) SSIM: 0.7761, 0.7994, 0.8036, 0.7731, 0.8621, 0.8876.

Fig. 10. Reconstructed Results of Lena with Sampling Rate 0.3, (a)–(f) PSNR/dB:31.27, 33.85, 34.11, 36.62, 36.70, 36.83.


Fig. 11. Details of Reconstructed Results of Lena with sampling rate 0.3 (a)–(f) FSIM: 0.9741, 0.9804, 0.9837, 0.9942, 0.9948, 0.9952.

Fig. 12. SSIM Map of Reconstructed Results of Lena with Sampling Rate 0.3 (a)–(f) SSIM: 0.9560, 0.9625, 0.9599, 0.9641, 0.9707, 0.9769.


Table 1. PSNR/dB results of reconstructed images with different sampling rates.

Image      Method          η=0.1    η=0.2    η=0.3    η=0.4    η=0.5
Lena       YALL1           27.35    29.46    31.27    33.22    34.65
           BCS-SPL-DDWT    28.24    30.95    33.85    35.36    36.77
           NESTA           29.15    31.39    34.11    35.28    36.59
           BCS-BLO         31.83    34.51    36.62    37.31    37.42
           BCS-LSI         31.85    34.48    36.70    37.22    37.36
           CBCS            32.15    34.70    36.83    37.48    37.56
Barbara    YALL1           22.04    23.16    24.53    25.57    26.82
           BCS-SPL-DDWT    23.25    24.27    25.77    27.25    28.89
           NESTA           24.03    25.92    26.71    27.31    28.79
           BCS-BLO         25.61    28.10    28.93    30.23    31.84
           BCS-LSI         25.55    27.95    28.89    30.07    31.62
           CBCS            25.87    28.25    29.05    30.18    31.73
Boats      YALL1           22.67    25.91    27.73    29.94    30.75
           BCS-SPL-DDWT    25.43    27.95    30.08    31.27    32.77
           NESTA           25.56    28.06    30.51    31.82    32.56
           BCS-BLO         27.93    29.61    31.23    32.30    33.82
           BCS-LSI         27.97    29.65    31.28    32.37    33.79
           CBCS            28.25    29.85    31.36    32.46    33.90
Pepper     YALL1           25.86    27.78    30.15    31.70    34.02
           BCS-SPL-DDWT    29.73    32.03    33.79    34.78    35.82
           NESTA           29.90    31.98    33.60    34.90    35.57
           BCS-BLO         31.18    32.95    33.98    35.31    36.11
           BCS-LSI         31.34    33.02    34.12    35.40    36.21
           CBCS            31.67    33.25    34.25    35.52    36.35
Goldhill   YALL1           24.35    26.72    28.05    29.88    30.41
           BCS-SPL-DDWT    27.06    29.13    30.58    31.93    33.21
           NESTA           27.61    28.74    30.05    32.02    32.63
           BCS-BLO         28.94    29.27    31.03    32.74    33.76
           BCS-LSI         29.26    29.33    31.12    32.83    33.83
           CBCS            29.49    29.58    31.31    32.92    33.97
Baboon     YALL1           20.33    21.26    22.30    23.51    24.64
           BCS-SPL-DDWT    20.78    22.06    23.11    24.14    25.30
           NESTA           20.49    21.94    23.29    24.43    24.34
           BCS-BLO         21.23    22.48    24.38    25.39    25.43
           BCS-LSI         21.36    22.53    24.46    25.47    25.40
           CBCS            21.50    22.62    24.46    25.47    25.40

Fig. 13. The Influence of w1 and w2 on PSNR and FSIM, Respectively, for the Test Image Lena (η = 0.1).

4.2.2. Discussion

In this section, we analyze the influence of some key parameters on the reconstructed results. For the regularization parameters $\epsilon_1^2$, $\epsilon_2^2$ and $\epsilon_3^2$, we first normalize $\epsilon_1^2$ to 1, obtaining $\frac{1}{2\epsilon_1^2} = 0.5$. For the sparse representation weight, we set $\frac{1/(2\gamma^2)}{1/(2\epsilon_1^2)} = 0.65$, which means that we focus more on minimizing the reconstruction error than the sparse representation error. Since $\epsilon_2^2$ and $\epsilon_3^2$ are related to the weight of LSI, $w_1$, and the weight of PNLS, $w_2$, we constrain the summation of these two weights to be equal to 1. Therefore, we have the following expression:

$$w_1 + w_2 = 1 \tag{24}$$

where $w_1 = \frac{1}{2\epsilon_2^2} > 0$ and $w_2 = \frac{1}{2\epsilon_3^2} > 0$. So, in our experiment, we investigate the influence of $w_1$ and $w_2$ on the reconstructed image. Fig. 13 shows that for the test image Lena, the best performance (PSNR and FSIM) is obtained when $w_1 = 0.5$ and $w_2 = 0.5$; in this case, the LSI and PNLS are regarded as equally important in the reconstruction. However, in Fig. 14, for

Table 2. FSIM results of reconstructed images with different sampling rates.

Image      Method          η=0.1    η=0.2    η=0.3    η=0.4    η=0.5
Lena       YALL1           0.9384   0.9602   0.9741   0.9803   0.9901
           BCS-SPL-DDWT    0.9490   0.9641   0.9804   0.9914   0.9919
           NESTA           0.9567   0.9722   0.9837   0.9906   0.9913
           BCS-BLO         0.9755   0.9902   0.9942   0.9960   0.9964
           BCS-LSI         0.9758   0.9901   0.9948   0.9954   0.9959
           CBCS            0.9789   0.9910   0.9952   0.9967   0.9971
Barbara    YALL1           0.8735   0.9215   0.9405   0.9468   0.9545
           BCS-SPL-DDWT    0.8796   0.9344   0.9498   0.9622   0.9684
           NESTA           0.8814   0.9413   0.9559   0.9631   0.9676
           BCS-BLO         0.8997   0.9486   0.9632   0.9725   0.9801
           BCS-LSI         0.8976   0.9467   0.9626   0.9708   0.9786
           CBCS            0.9005   0.9483   0.9633   0.9720   0.9796
Boats      YALL1           0.8635   0.9157   0.9515   0.9619   0.9653
           BCS-SPL-DDWT    0.8768   0.9546   0.9638   0.9770   0.9893
           NESTA           0.8772   0.9352   0.9672   0.9824   0.9874
           BCS-BLO         0.9359   0.9612   0.9769   0.9876   0.9895
           BCS-LSI         0.9353   0.9613   0.9771   0.9880   0.9885
           CBCS            0.9364   0.9618   0.9775   0.9882   0.9901
Pepper     YALL1           0.9325   0.9615   0.9675   0.9737   0.9883
           BCS-SPL-DDWT    0.9418   0.9721   0.9910   0.9941   0.9980
           NESTA           0.9432   0.9708   0.9902   0.9945   0.9962
           BCS-BLO         0.9778   0.9885   0.9923   0.9960   0.9989
           BCS-LSI         0.9795   0.9891   0.9928   0.9963   0.9992
           CBCS            0.9805   0.9897   0.9932   0.9969   0.9994
Goldhill   YALL1           0.8891   0.9336   0.9512   0.9578   0.9752
           BCS-SPL-DDWT    0.8988   0.9513   0.9661   0.9812   0.9921
           NESTA           0.9294   0.9479   0.9625   0.9818   0.9892
           BCS-BLO         0.9541   0.9625   0.9768   0.9884   0.9932
           BCS-LSI         0.9587   0.9634   0.9772   0.9895   0.9943
           CBCS            0.9596   0.9642   0.9779   0.9901   0.9947
Baboon     YALL1           0.6915   0.7725   0.8285   0.8672   0.9236
           BCS-SPL-DDWT    0.7425   0.7943   0.8501   0.8802   0.9459
           NESTA           0.7369   0.7823   0.8521   0.8942   0.9221
           BCS-BLO         0.7685   0.8327   0.8751   0.9255   0.9467
           BCS-LSI         0.7690   0.8346   0.8746   0.9250   0.9455
           CBCS            0.7695   0.8357   0.8757   0.9260   0.9465

Fig. 14. The Influence of w1 and w2 on PSNR and FSIM, Respectively, for the Test Image Barbara (η = 0.1).

the test image Barbara, the best results appear when we set $w_1 = 0.8$ and $w_2 = 0.2$, because a lot of textural information concentrates on a specific part of the image, and local structural information contributes more to recovering the image than the non-local regions. In short, for different images, the weights should be adjusted properly, depending on the contents.

As shown in Figs. 15 and 16, the effect of the step-size parameter $\beta$ on the quality of a recovered image is also studied. Our algorithm prefers a small step-size parameter rather than a big one. However, a $\beta$ that is too small may result in low efficiency of the search procedure. Therefore, we select a medium value for $\beta$.

4.3. Convergence of CBCS

In CBCS, an iterative approach is applied to solve the formulated multi-criteria problem. To obtain the Nash equilibrium solution of the problem, two objectives are alternately optimized, where at each iteration the gradient descent method and iterative hard-thresholding shrinkage are used, respectively. The convergence property of the gradient descent method has been


Fig. 15. The Influence of β on PSNR and FSIM, Respectively, for the test Image Lena (η = 0.1).

Fig. 16. The Influence of β on PSNR and FSIM, Respectively, for the test Image Barbara (η = 0.1).

Fig. 17. Convergence of the proposed iterative approach on the test image Lena (η = 0.1, J = 12).

extensively studied and proved; more details can be found in [27] and [29]. Meanwhile, the thresholding shrinkage approach is also widely applied in compressed sensing reconstruction ($l_1$ norm minimization), and its efficiency and convergence have been well discussed both theoretically and empirically in [5,16] and [11]. Our proposed method can be approximately regarded as an alternating projection onto these two sub-problems. By setting a proper termination condition, the algorithm is able to ultimately converge to a satisfactory solution. To illustrate the convergence performance of our algorithm, the PSNR and root mean square error (RMSE) between the reconstructed image and the original image during the iterations are presented in Fig. 17. It is observed in Fig. 17(a) that the RMSE tends to converge to a small value after the tenth loop as the number of iterations increases, while the PSNR gradually approaches its peak during the iteration.

4.4. Computational complexity analysis

The computational complexity of our proposed approach mainly consists of three parts: the computation of PNLS, the computation of LSI, and the proposed iterative approach.

To calculate PNLS, the complexity for each non-overlapped sub-block in a $B \times B$ image block is $O\!\left(\frac{w^2}{b^2}\left(4b^2 + N_s b^2\right)\right)$, referring to [51], where $w \times w$ is the size of the non-local window, $b \times b$ denotes the size of a sub-block in an image block, and $N_s$ represents the number of sub-blocks used to compute the perceptually non-local similarity. There are $\frac{w^2}{b^2}$ sub-blocks in total. So, for each block, the computational complexity is $O\!\left(\frac{w^2}{b^2}\cdot\frac{w^2}{b^2}\left(4b^2 + N_s b^2\right)\right)$.

The complexity of LSI for each block is calculated as follows. Within a $w_L \times w_L$ local searching window, each pixel is estimated by $P_i$ reference points, so the complexity is $P_i \times w_L^2$; then, for each block, the result is $P_i \times B^2$. In the proposed iterative method, since the basic principle is based on gradient descent and the hard-thresholding shrinkage (HTS) algorithm, the computation for each block can be obtained as follows. The complexity of gradient descent largely comes from the matrix–vector product, $O(MB^4)$, while the complexity of HTS is $O(B^4)$; so, at each iteration, the total complexity is $O\!\left((M+1)B^4\right)$. Therefore, given the number of iterations, $J$, the cost of the proposed method is $O\!\left(J\left((M+1)B^4 + P_i B^2 + \frac{w^2}{b^2}\cdot\frac{w^2}{b^2}\left(4b^2 + N_s b^2\right)\right)\right)$. It is worthwhile noting that reconstruction time could be saved by

applying parallel implementations such as GPU acceleration of the matrix–vector multiplication. In addition, some algorithms that improve the efficiency of non-local and local similarity searches could be considered for this method.

5. Conclusions and future work

This paper mainly focused on the BCS of image reconstruction under single-scale sampling. A collaborative BCS framework is proposed. Using this framework, a local structural indicator (LSI) and a perceptual non-local similarity (PNLS) are considered simultaneously to further improve the quality of the reconstructed image. Moreover, a dual-domain sparse representation for the image, which combines the analysis and synthesis sparse coding models, is employed in the objective function. To solve the formulated problem, an iterative approach based on the multi-criteria Nash equilibrium technique is proposed. The experimental results demonstrate that the proposed method performs very competitively. Recently, multi-scale sampling in the sparse transform domain has shown better image reconstruction; such BCS methods can more optimally utilize the correlation among pixels and exploit the details of different structures. Therefore, research efforts will be devoted to this topic in the future.

Acknowledgment

This work is supported in part by the Natural Science Foundation of China under Grants 61702336 and 71572115, in part by Shenzhen Emerging Industries of the Strategic Basic Research Project JCYJ20170302154254147, and in part by the Natural Science Foundation of SZU (grants no. 2018068 and no. 2018057).

References

[1] M. Aharon, M. Elad, A. Bruckstein, K-svd: an algorithm for designing overcomplete dictionaries for sparse representation, Signal Process. IEEE Trans. 54 (11) (2006) 4311–4322. [2] R.G. Baraniuk, E. Candes, M. Elad, Y. Ma, Applications of sparse representation and compressive sensing [scanning the issue], Proc. IEEE 98 (6) (2010) 906–909. [3] J.E. Barceló-Lladó, A. Morell, G. Seco-Granados, Amplify-and-forward compressed sensing as an energy-efficient solution in wireless sensor networks, IEEE Sens. J. 14 (5) (2014) 1710–1719. [4] S. Becker, J. Bobin, E.J. Candès, Nesta: a fast and accurate first-order method for sparse recovery, SIAM J. Imaging Sci. 4 (1) (2011) 1–39. [5] J.M. Bioucas-Dias, M.A.T. Figueiredo, A new twist: two-step iterative shrinkage/thresholding algorithms for image restoration, IEEE Trans. Image Process. 16 (12) (2007) 2992–3004, doi:10.1109/TIP.2007.909319. [6] A. Buades, B. Coll, J.-M. Morel, The staircasing effect in neighborhood filters and its solution, Image Process. IEEE Trans. 15 (6) (2006) 1499–1505. [7] A. Buades, B. Coll, J.-M. Morel, Nonlocal image and movie denoising, Int. J. Comput. Vis. 76 (2) (2008) 123–139. [8] E.J. Candes, T. Tao, Near-optimal signal recovery from random projections: universal encoding strategies? Inf. Theory IEEE Trans. 52 (12) (2006) 5406–5425. [9] C. Cruz, R. Mehta, V. Katkovnik, K.O. Egiazarian, Single image super-resolution based on wiener filter in similarity domain, IEEE Trans. Image Process. 27 (3) (2018) 1376–1389, doi:10.1109/TIP.2017.2779265. [10] A. Danielyan, V. Katkovnik, K. Egiazarian, Bm3d frames and variational image deblurring, IEEE Trans. Image Process. 21 (4) (2012) 1715–1728. [11] I. Daubechies, M. Defrise, C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math. 57 (11) (2004) 1413–1457. [12] W. Dong, G. Shi, X. Li, Y. Ma, F.
Huang, Compressive sensing via nonlocal low-rank regularization, IEEE Trans. Image Process. 23 (8) (2014) 3618–3632. [13] W. Dong, G. Shi, X. Li, L. Zhang, X. Wu, Image reconstruction with locally adaptive sparsity and nonlocal robust regularization, Signal Process. Image Commun. 27 (10) (2012) 1109–1122. [14] W. Dong, G. Shi, X. Wu, L. Zhang, A learning-based method for compressive image recovery, J. Vis. Commun. Image Represent 24 (7) (2013) 1055–1063. [15] D. Donoho, Compressed sensing, Inf. Theory, IEEE Trans. 52 (4) (2006) 1289–1306. [16] D.L. Donoho, J.M. Johnstone, Ideal spatial adaptation by wavelet shrinkage, Biometrika 81 (3) (1994) 425–455. [17] M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries, Image Process. IEEE Trans. 15 (12) (2006) 3736–3745. [18] M. Elad, P. Milanfar, R. Rubinstein, Analysis versus synthesis in signal priors, Inverse Probl. 23 (3) (2007) 947. [19] N. Eslahi, A. Aghagolzadeh, Compressive sensing image restoration using adaptive curvelet thresholding and nonlocal sparse regularization, IEEE Trans. Image Process. 25 (7) (2016) 3126–3140. [20] J.E. Fowler, S. Mun, E.W. Tramel, Block-based compressed sensing of images and video, Found. Trends Signal Process. 4 (4) (2012) 297–416. [21] L. Gan, Block compressed sensing of natural images, in: Digital Signal Processing, 2007 15th International Conference on, 2007, pp. 403–406. [22] W. Gong, L. Hu, J. Li, W. Li, Combining sparse representation and local rank constraint for single image super resolution, Inf. Sci. 325 (2015) 1–19. 10.1016/j.ins.2015.07.004. [23] L. He, L. Carin, Exploiting structure in wavelet-based bayesian compressive sensing, Signal Process. IEEE Trans. 57 (9) (2009) 3488–3497.


[24] L. He, H. Chen, L. Carin, Tree-structured compressive sensing with variational bayesian analysis, Signal Process. Lett. IEEE 17 (3) (2010) 233–236. [25] C. Hong, J. Yu, D. Tao, M. Wang, Image-based three-dimensional human pose recovery by multiview locality-sensitive sparse retrieval, IEEE Trans. Ind. Electron. 62 (6) (2015) 3742–3751, doi:10.1109/TIE.2014.2378735. [26] H. Jiang, C. Li, R. Haimi-Cohen, P.A. Wilford, Y. Zhang, Scalable video coding using compressive sensing, Bell Labs Tech. J. 16 (4) (2012) 149–169. [27] R. Johnson, T. Zhang, Accelerating stochastic gradient descent using predictive variance reduction, in: Advances in neural information processing systems, 2013, pp. 315–323. [28] C. Li, H. Jiang, P. Wilford, Y. Zhang, M. Scheutzow, A new compressive video sensing framework for mobile broadcast, Broadcast. IEEE Trans. 59 (1) (2013) 197–205. [29] D.G. Luenberger, Y. Ye, et al., Linear and nonlinear programming, 2, Springer, 1984. [30] A. Majumdar, R. Ward, T. Aboulnasr, Compressed sensing based real-time dynamic mri reconstruction, Med. Imaging IEEE Trans. 31 (12) (2012) 2253–2266. [31] S. Mun, J.E. Fowler, Block compressed sensing of images using directional transforms, in: Image Processing (ICIP), 2009 16th IEEE International Conference on, IEEE, 2009, pp. 3021–3024. [32] S. Mun, J.E. Fowler, Residual reconstruction for block-based compressed sensing of video, in: Data Compression Conference (DCC), 2011, IEEE, 2011, pp. 183–192. [33] L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1–4) (1992) 259–268. [34] X. Shu, J. Yang, N. Ahuja, Non-local compressive sampling recovery, in: Computational Photography (ICCP), 2014 IEEE International Conference on, IEEE, 2014, pp. 1–8. [35] H. Takeda, S. Farsiu, P. Milanfar, Kernel regression for image processing and reconstruction, Image Process. IEEE Trans. 16 (2) (2007) 349–366. [36] A.N. Tikhonov, A. Goncharsky, V. Stepanov, A.G. Yagola, Numerical methods for the solution of ill-posed problems, 328, Springer Science & Business Media, 2013. [37] N. Vaswani, Ls-cs-residual (ls-cs): compressive sensing on least squares residual, Signal Process. IEEE Trans. 58 (8) (2010) 4108–4120. [38] Z. Wang, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli, Image quality assessment: from error visibility to structural similarity, Image Process. IEEE Trans. 13 (4) (2004) 600–612. [39] J. Wu, F. Liu, L. Jiao, X. Wang, Compressive sensing sar image reconstruction based on bayesian framework and evolutionary computation, Image Process. IEEE Trans. 20 (7) (2011) 1904–1911. [40] X. Wu, W. Dong, X. Zhang, G. Shi, Model-assisted adaptive recovery of compressed sensing with imaging applications, Image Process. IEEE Trans. 21 (2) (2012) 451–458. [41] J. Yang, Y. Zhang, Alternating direction algorithms for \ell_1-problems in compressive sensing, SIAM journal on scientific computing 33 (1) (2011) 250–278. [42] S. Yang, M. Wang, Y. Chen, Y. Sun, Single-image super-resolution reconstruction via learned geometric dictionaries and clustered sparse coding, Image Processing, IEEE Transactions on 21 (9) (2012) 4016–4028. [43] X. Yang, X. Tao, E. Dutkiewicz, X. Huang, Y. Guo, Q. Cui, Energy-efficient distributed data storage for wireless sensor networks based on compressed sensing and network coding, Wirel. Commun. IEEE Trans. 12 (10) (2013) 5087–5099, doi:10.1109/TWC.2013.090313.121804. [44] J. Yu, Y. Rui, D. Tao, Click prediction for web image reranking using multimodal sparse coding, IEEE Trans. Image Process. 
23 (5) (2014) 2019–2032, doi:10.1109/TIP.2014.2311377. [45] J. Zhang, D. Zhao, C. Zhao, R. Xiong, S. Ma, W. Gao, Image compressive sensing recovery via collaborative sparsity, IEEE J. Emerging Sel. Top. Circuits Syst. 2 (3) (2012) 380–391. [46] L. Zhang, D. Zhang, X. Mou, Fsim: a feature similarity index for image quality assessment, Image Process. IEEE Trans. 20 (8) (2011) 2378–2386. [47] X. Zhang, M. Burger, X. Bresson, S. Osher, Bregmanized nonlocal regularization for deconvolution and sparse reconstruction, SIAM J. Imaging Sci. 3 (3) (2010) 253–276. [48] P. Zheng, Z.-Q. Zhao, J. Gao, X. Wu, A set-level joint sparse representation for image set classification, Inf. Sci. 448–449 (2018) 75–90. 10.1016/j.ins. 2018.02.062 [49] Y. Zhou, S. Kwong, W. Gao, X. Wang, A phase congruency based patch evaluator for complexity reduction in multi-dictionary based single-image super-resolution, Inf. Sci. 367–368 (2016) 337–353. 10.1016/j.ins.2016.05.024 [50] Y. Zhou, S. Kwong, W. Gao, X. Zhang, X. Wang, Complexity reduction in multi-dictionary based single-image superresolution reconstruction via pahse congtuency, in: 2015 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), 2015, pp. 146–151, doi:10.1109/ICWAPR.2015. 7295941. [51] Y. Zhou, S. Kwong, H. Guo, W. Gao, X. Wang, Bilevel optimization of block compressive sensing with perceptually nonlocal similarity, Inf. Sci. 360 (2016) 1–20. 10.1016/j.ins.2016.03.027 [52] Y. Zhou, S. Kwong, H. Guo, X. Zhang, Q. Zhang, A two-phase evolutionary approach for compressive sensing reconstruction, IEEE Trans. Cybern. 47 (9) (2017) 2651–2663, doi:10.1109/TCYB.2017.2679705. [53] Y. Zhou, S. Kwong, J. Hou, Single image superresolution by multiple geometrical regressors, in: 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2017, pp. 1152–1155, doi:10.1109/APSIPA.2017.8282201. [54] Z. Zhu, H. Yin, Y. Chai, Y. Li, G. Qi, A novel multi-modality image fusion method based on image decomposition and sparse representation, Inf. Sci. 432 (2018) 516–529. 10.1016/j.ins.2017.09.010.