Clustering compressed sensing based on image block similarities

LI Wei-wei, JIANG Ting, WANG Ning
Key Laboratory of Universal Wireless Communication, Beijing University of Posts and Telecommunications, Beijing 100876, China

The Journal of China Universities of Posts and Telecommunications, August 2014, 21(4): 68–76
www.sciencedirect.com/science/journal/10058885
http://jcupt.xsw.bupt.cn

Abstract  The compressed sensing (CS) algorithm enables sampling rates significantly below the classical Nyquist rate without sacrificing reconstructed image quality. It is known that a great number of images have many similar areas composed of the same grayscale or color content. A new CS scheme, namely clustering compressed sensing (CCS), is proposed for image compression; it introduces a clustering algorithm into the CS framework based on the similarity of image blocks. Instead of processing the image as a whole, the image is first divided into small blocks, and a clustering algorithm is used to cluster the similar image blocks. Afterwards, the optimal public image block of each category is selected as the representative for transmission. The discrete wavelet transform (DWT) and a Gaussian random matrix are applied to each optimal public image block to obtain the random measurements. Different from equal measurement allocation, the proposed scheme adaptively selects the number of measurements according to the different sparsities of the image blocks. In order to further improve the performance of the CCS algorithm, an unequal-CCS algorithm based on the characteristics of the wavelet coefficients is proposed as well: the low-frequency coefficients are retained to ensure the quality of the reconstructed image, and the high-frequency coefficients are compressed by the CCS algorithm. Experiments on images demonstrate the good performance of the proposed approach.

Keywords: CS, DWT, clustering algorithm

1 Introduction

Despite extraordinary advances in computational power, the acquisition and processing of signals in application areas such as imaging, video, and medical imaging continue to pose a tremendous challenge. CS, a recently proposed sampling method introduced in Refs. [1–3], can collect compressed data at a sampling rate much lower than that required by Shannon's sampling theorem by exploiting the sparsity (compressibility) of the signal. Suppose that x ∈ ℝ^{N×1} is a length-N signal. It is said to be K-sparse (or compressible) if x can be well approximated using only K ≪ N coefficients under some linear transform

x = Ψω    (1)

where Ψ is the sparse transform basis, and ω is the sparse coefficient vector that has at most K (significant) nonzero entries.

Received date: 24-02-2014
Corresponding author: JIANG Ting, E-mail: [email protected]
DOI: 10.1016/S1005-8885(14)60318-6
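The sparsity model of Eq. (1) can be illustrated with a short numeric sketch; the orthonormal basis below is a random stand-in (an assumption for illustration), not the DWT used later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 10

# Orthonormal sparsifying basis Psi (a random stand-in for, e.g., a DWT).
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

# K-sparse coefficient vector omega: only K of the N entries are nonzero.
omega = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
omega[support] = rng.standard_normal(K)

x = Psi @ omega                    # Eq. (1): x = Psi * omega

# Since Psi is orthonormal, the coefficients are recovered as Psi^T x,
# and exactly K of them are (numerically) nonzero.
omega_hat = Psi.T @ x
print(int(np.sum(np.abs(omega_hat) > 1e-10)))   # -> 10
```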

According to CS theory, such a signal can be acquired through the random linear projection

y = Φx + n = ΦΨω + n    (2)

where y ∈ ℝ^{M×1} is the sampled vector with M ≪ N data points, Φ represents an M × N random matrix, which must satisfy the restricted isometry property (RIP) [4], and n ∈ ℝ^{M×1} is the measurement noise vector. Solving for the sparsest vector ω consistent with Eq. (2) is in general an NP-hard problem [4]. Many efficient algorithms have been proposed for recovering the sparse vector ω. Typical algorithms include basis pursuit (BP), i.e., the l1-minimization approach [5], orthogonal matching pursuit (OMP) [6], lasso [7], FOCUSS [8], iterative reweighted algorithms [9–10], and Bayesian algorithms [11–12]. Owing to its favorable compression performance, the CS framework has been extensively investigated in the image compression literature [13–18]. However, the CS method also faces several challenges, such as expensive computation and huge memory requirements. To solve this problem, block

compressed sensing (BCS) was first proposed in Ref. [19], in which Gan presented a BCS framework with smoothed projected Landweber (SPL) reconstruction for natural images; it typically provides much faster reconstruction than approaches based on full-image CS sampling. After that, in Ref. [20], Mun et al. adopted the same basic framework as Ref. [19], coupled with iterative projection-based reconstruction, to promote sparsity and smoothness in the reconstruction. Then the multiscale BCS-SPL (MS-BCS-SPL) [21] as well as the multihypothesis BCS-SPL (MH-BCS-SPL) [22] were proposed to further improve the reconstructed image quality. The BCS methods mentioned above process the image block by block with the same operation, but the differences among the blocks are not considered. Therefore, an adaptive Bayesian compressed sensing method based on block images [23] and a weight-block compressed sensing technique coupled with edge information [24] were presented to obtain high-quality reconstruction. In Ref. [25], the directional block-based compressed sensing (DBCS) scheme was proposed by incorporating directionality into the CS paradigm. In Ref. [26], a block-based adaptive compressed sensing algorithm was proposed that uses the texture information of the image. These methods have good compression performance, but they do not consider the similarity of image blocks. Different from the algorithms mentioned above, this article introduces a clustering algorithm into the BCS framework based on the similarity of image blocks to further improve the performance of the BCS algorithm. As is well known, one of the most important features of natural images is their redundancy, e.g., there is a certain correlation between adjacent pixels within an image. Here, the authors pay particular attention to spatial redundancy: any image has many similar areas that are composed of the same grayscale or color content.
In this article, a new CS method, namely CCS, is proposed. A clustering algorithm is incorporated into the CS framework to gather similar image blocks into groups, and then the so-called optimal public block of each category is selected as the representative for transmission. The sparsest image block of each category is chosen as the optimal public block. The DWT is chosen as the sparsifying basis, and a Gaussian random matrix is used as the measurement matrix. Different from equal measurement, the proposed scheme adaptively


selects the number of measurements based on the different sparsities of the image blocks. Usually, the coefficients after a sparse transform (e.g., DWT) are not zero but compressible (most of them are negligible). Therefore, the sparse Bayesian learning (SBL) algorithm is used to solve the reconstruction problem, which can recover the image correctly and effectively. In order to further improve the performance of the CCS algorithm, an unequal-CCS algorithm is proposed based on the characteristics of the wavelet coefficients: the low-frequency coefficients are retained to ensure the quality of the reconstructed image, and the high-frequency coefficients are compressed by the CCS algorithm. The main advantages of the proposed method include: 1) The measurement matrix becomes smaller and easier to store after block processing. 2) Block-based measurement is more advantageous for real-time applications, as the encoder does not need to wait until the whole image is measured before sending the sampled data. 3) Since only the optimal public measurements of each category are selected as representatives for transmission, the compression ratio is greatly improved while ensuring the quality of the reconstructed image. 4) The adaptive sampling sufficiently captures the diversity of the blocks and preserves their properties well. 5) The proposed CCS and unequal-CCS methods not only achieve a higher peak signal-to-noise ratio (PSNR) but also have better stability than the original CS method. The rest of this article is organized as follows. Sect. 2 analyzes the similarity of image blocks. Sect. 3 details the framework of CCS and presents the unequal-CCS scheme. The analysis and simulation results are shown in Sect. 4. The conclusion is given in Sect. 5.

2 Similarity analysis of image blocks

2.1 Similar image blocks

In fact, most images have many similar areas, as shown in Fig. 1. The 512 × 512 Lena image is divided into 32 × 32 small blocks of size 16 × 16, and the image blocks labeled A, B, and C are very similar. As can be seen from Fig. 1(a), the three image blocks (A, B, C) are almost exactly the same from a visual point of view. Below, the similarity between them is explained from the perspective of their histograms. The earliest method of color-based image indexing


and retrieval was proposed by Swain and Ballard, and it was based on color histograms [27]. In this article, the histogram intersection algorithm is employed to evaluate the histogram similarity between two image blocks. The histogram similarity between image blocks A and B is

S(A, B) = Σ_{g=0}^{G−1} min(h_A(g), h_B(g)) / min(Σ_{g=0}^{G−1} h_A(g), Σ_{g=0}^{G−1} h_B(g))    (3)

where h_A(g) and h_B(g) are the histograms of image blocks A and B, respectively, and G is the number of gray levels. Apparently S(A, B) ∈ (0, 1], and a larger value of S(A, B) means that the two image blocks are more similar in histogram. It can be seen from Fig. 1(b) that the histogram counts are almost completely similar, i.e., the similarities between the blocks are more than 98%. Certainly, there are many other groups of similar blocks in the image. Inspired by this property, we incorporate a clustering algorithm into the CS framework and then select the optimal public block of each category as the representative for transmission. Feature extraction for the image blocks is the pivotal basis of the clustering algorithm: a good feature extraction method can identify an image block more accurately and thus provide strong data support for effective classification. Since this article aims to illustrate the utility of the CCS algorithm, the authors simply use the histogram of an image block as its feature and gather image blocks with similar histograms into one class.

Fig. 1 The histogram similarity between the image blocks A, B and C of Lena: (a) similar blocks; (b) histogram similarity
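Eq. (3) is straightforward to implement; the following Python sketch (function and variable names are ours, not from the paper) computes the histogram intersection of two grayscale blocks:

```python
import numpy as np

def block_histogram(block, G=256):
    """Grayscale histogram h(g), g = 0..G-1, of an image block."""
    return np.bincount(block.ravel(), minlength=G)[:G]

def histogram_similarity(block_a, block_b, G=256):
    """Histogram intersection S(A, B) of Eq. (3); S lies in (0, 1]."""
    ha, hb = block_histogram(block_a, G), block_histogram(block_b, G)
    return np.minimum(ha, hb).sum() / min(ha.sum(), hb.sum())

# Two 16x16 blocks with identical histograms are maximally similar.
a = np.arange(256, dtype=np.uint8).reshape(16, 16)
b = a.T.copy()                      # same pixel values, rearranged
print(histogram_similarity(a, b))   # -> 1.0
```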

2.2 Clustering algorithm

Pseudocode for the clustering algorithm is given in Algorithm 1. The N × N image X is divided into n × n small image blocks x_i, i = 1, 2, ..., n × n, each of size m × m, with m × n = N.

Algorithm 1 Clustering algorithm
Inputs: image blocks x_i; histogram similarity threshold S_n (0 < S_n < 1); weight α; the number of image blocks n × n
Output: index set of clusters C; the number of clusters ncn
Initialize: C = {}, C{1}(1) = 1, cn = 1, k = 1
FOR i = 2 : n × n
    FOR j = 1 : cn
        Calculate S from Eq. (3)
        aa = k; bb = k
        IF S ≥ S_n
            k = k + 1; bb = k; C{j}(k) = i
            BREAK
        END
    END
    IF bb = aa
        cn = cn + 1; C{cn}(1) = i
    END
END
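Algorithm 1 can be sketched in Python as a greedy first-fit pass. Comparing each block against the first (seed) member of a cluster is our assumption, since the pseudocode leaves the reference member implicit:

```python
import numpy as np

def histogram_similarity(ha, hb):
    """Histogram intersection of Eq. (3), on precomputed histograms."""
    return np.minimum(ha, hb).sum() / min(ha.sum(), hb.sum())

def cluster_blocks(blocks, s_n=0.95, G=256):
    """Greedy first-fit clustering of image blocks (a sketch of
    Algorithm 1): each block joins the first existing cluster whose
    seed block it matches with similarity >= s_n; otherwise it
    seeds a new cluster."""
    hists = [np.bincount(b.ravel(), minlength=G)[:G] for b in blocks]
    clusters = []                      # each cluster: list of block indexes
    for i, h in enumerate(hists):
        for members in clusters:
            if histogram_similarity(hists[members[0]], h) >= s_n:
                members.append(i)
                break
        else:
            clusters.append([i])
    return clusters

blocks = [np.zeros((8, 8), np.uint8), np.zeros((8, 8), np.uint8),
          np.full((8, 8), 200, np.uint8)]
print(cluster_blocks(blocks))          # -> [[0, 1], [2]]
```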

2.3 The number of categories

As shown in Fig. 2, six common 512 × 512 grayscale images (Lena, Barbara, Peppers, Mandrill, Goldhill, and Boat) are tested. Each image is divided into 64 × 64, 32 × 32, and 16 × 16 small blocks, respectively, and the blocks are clustered into one category if their mutual histogram similarity satisfies ρ ≥ 95%.

Fig. 2 Six common test images: (a) Lena; (b) Barbara; (c) Peppers; (d) Mandrill; (e) Goldhill; (f) Boat

The number of categories ncn and the proportion r of ncn in the total number of blocks are shown in Table 1. As can be seen from Table 1, the larger the number of divided image blocks, the lower the ratio r.

Table 1 The number of categories ncn and their ratio r to the total number of blocks (histogram similarity ρ ≥ 95%)

Image      Total blocks n    ncn      r/%
Lena       64 × 64           2 220    54.2
           32 × 32             808    78.9
           16 × 16             249    97.3
Barbara    64 × 64           3 026    73.9
           32 × 32             947    92.5
           16 × 16             256    100
Peppers    64 × 64           2 316    56.5
           32 × 32             879    85.8
           16 × 16             251    98
Mandrill   64 × 64           3 867    94.4
           32 × 32           1 017    99.3
           16 × 16             256    100
Goldhill   64 × 64           3 371    82.3
           32 × 32             973    95
           16 × 16             248    96.9
Boat       64 × 64           2 412    58.9
           32 × 32             771    75.3
           16 × 16             242    94.5

Remark: r = (ncn / n) × 100%

This means that the smaller the size of the divided image blocks, the less data there is to be transmitted. Consequently, in order to compress the image data as much as possible, these images are divided into 64 × 64 small blocks of size 8 × 8 to be compressed.

3 CCS and unequal-CCS scheme

3.1 Adaptive measurement for image blocks

According to the original CS theory, to ensure that the original data can be recovered accurately, the required number of measurements M depends to some degree on the sparsity K, e.g., M = O(K lg(N/K)). In order to reduce the computational complexity and memory requirements, the image first needs to be divided into several sub-blocks of size m × m. Let x_i and y_i denote the vectorized signal and the measured output of the ith image block, respectively, related by y_i = Φ′x_i, where Φ′ is an M′ × m matrix. Suppose the number of CS measurements to be taken is M; then M′ = (m/N) × M. In fact, the sparsities of the optimal public image blocks differ from one another. Consequently, we can allot fewer measurements to the sparser optimal public blocks, and vice versa. The number of measurements M_Bi for each class is determined by

M_Bi = (K_i / Σ_{i=1}^{cn} K_i) M_t    (4)

where K_i is the sparsity of each optimal public image block, cn is the number of categories, and M_t = cn × M′ is the total number of measurements. Then, the whole measurement matrix can be expressed as the block-diagonal matrix

Φ = diag(Φ_B1, Φ_B2, ..., Φ_Bcn)    (5)

In order to save as much storage space as possible, it is assumed that the measurement matrix Φ_Bj, j ∈ {1, 2, ..., cn}, is the one used for the optimal image block with the largest number of non-zero elements K_j; every other measurement matrix Φ_Bi (i ≠ j) is then taken as a sub-matrix of Φ_Bj. Therefore, only the measurement matrix Φ_Bj needs to be stored in the transmission system.
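The allocation of Eq. (4) and the nested storage of the measurement matrices can be sketched as follows; the sparsity values and the floor rounding are illustrative assumptions (the paper does not specify how fractional allocations are rounded):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 64              # length of a vectorized block (e.g., an 8x8 block)
M_t = 96            # total measurement budget M_t = cn * M' (assumed)
K = np.array([20, 10, 5, 13])   # assumed sparsity of each optimal block

# Eq. (4): allot measurements in proportion to sparsity
# (integer division used here as a floor-rounding choice).
M_B = (K * M_t) // K.sum()
print(M_B)          # -> [40 20 10 26]

# Store only the largest matrix Phi_Bj (for the block with max K_j);
# every other Phi_Bi is taken as its top M_Bi rows.
Phi_Bj = rng.standard_normal((M_B.max(), m))
Phi = [Phi_Bj[:mb, :] for mb in M_B]   # nested sub-matrices of Phi_Bj
```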

3.2 Proposed CCS framework

In this article, the image is not measured as a whole but is divided into small blocks to be measured. The proposed CCS framework is shown in Fig. 3. The modeling steps are summarized as follows:

Step 1 Image blocking. The N × N image X is divided into n × n small image blocks x_i of size m × m, i = 1, 2, ..., n × n, m × n = N.

Step 2 Clustering. First, the blocks are clustered into one category if their mutual histogram similarity is ρ ≥ 95%, and the sparsest image block x_i (i = 1, 2, ..., ncn, where ncn is the number of categories) is selected as the representative of each category to be transmitted. Second, the position information p_ind of each category is recorded, p_ind = 1, 2, ..., n × n.

Step 3 Sparse transform. The DWT is chosen as the sparsifying basis Ψ for each image block: x_i = Ψω_i, i = 1, 2, ..., ncn.

Step 4 Random measurement. An M_Bi × m (M_Bi ≪ m) Gaussian random matrix Φ_Bi is employed to sample the sparse coefficients ω_i, yielding the sampled vector y_i = Φ_Bi ω_i, i = 1, 2, ..., ncn. The number of measurements M_Bi is obtained from Eq. (4).

Step 5 Recovery. First, the SBL algorithm and the inverse discrete wavelet transform (IDWT) are used to recover the optimal coefficient blocks x′_i, i = 1, 2, ..., ncn. Second, all image blocks x′_i are recovered according to the position information p_ind of each category, p_ind = 1, 2, ..., n × n. Finally, the reconstructed image X′ is obtained.

Fig. 3 The proposed framework of CCS

The compression ratio of the proposed CCS scheme is calculated by

α_CCS = (M_B m + n²) / N²,  M_B = Σ_{i=1}^{cn} M_Bi    (6)

where M_Bi ≪ m and M_B is the required total number of measurements. The compression ratio of the original CS scheme is calculated by

α_CS = MN / N² = M / N    (7)

where M ≪ N.

3.3 Unequal-CCS scheme

The DWT is a multi-resolution decomposition that can be used to analyze images. As shown in Fig. 4, the DWT decomposes the image into four sub-bands (LL, LH, HL and HH). The low-frequency sub-band (LL) is considered the low-resolution base layer (BL), while the high-frequency sub-bands (LH, HL, and HH) are the enhancement layers (ELs). As is well known, the low-frequency coefficients are much more important than the high-frequency coefficients.

Fig. 4 The frequency distribution after wavelet transform (sub-band layout: LL HL / LH HH)
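A quick numeric sketch of Eqs. (6) and (7); every count below is an illustrative assumption, not a value measured in the paper:

```python
# Numeric sketch of the compression ratios in Eqs. (6) and (7).
N = 512                      # image is N x N
m, n = 8, 64                 # n x n blocks of size m x m, m * n = N
M_Bi = [2] * 2220            # assumed measurements per category (cn = 2220)

M_B = sum(M_Bi)                              # M_B = sum of the M_Bi
alpha_ccs = (M_B * m + n ** 2) / N ** 2      # Eq. (6)

M = 51                                       # e.g. floor(0.1 * N)
alpha_cs = M * N / N ** 2                    # Eq. (7): M*N / N^2 = M / N
print(round(alpha_ccs, 4), round(alpha_cs, 4))   # -> 0.1511 0.0996
```

The n² term in Eq. (6) plausibly accounts for transmitting the position information p_ind of the n × n blocks; that reading is our interpretation.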

Eqs. (8)–(12) give the mathematical prototype of the image wavelet transform. The LL sub-band is the low-frequency sub-band that keeps the majority of the information in the original image; the energy of the image is concentrated in this sub-band:

C_1(m, n) = Σ_k Σ_l C_0(k, l) h_{k−2m} h_{l−2n}    (8)

The HL sub-band is a high-frequency sub-band that keeps the horizontal edge information:

D_1^1(m, n) = Σ_k Σ_l C_0(k, l) h_{k−2m} g_{l−2n}    (9)

The LH sub-band is a high-frequency sub-band that keeps the vertical edge information:

D_1^2(m, n) = Σ_k Σ_l C_0(k, l) g_{k−2m} h_{l−2n}    (10)

The HH sub-band is a high-frequency sub-band that keeps the diagonal edge information:

D_1^3(m, n) = Σ_k Σ_l C_0(k, l) g_{k−2m} g_{l−2n}    (11)

where h_k = (−1)^k g_{1−k}, and h and g are the coefficients of the low-pass and high-pass filters, respectively. C_0 is the original image data, C_1 is the low-frequency component, and D_1^1, D_1^2, and D_1^3 are the high-frequency components of the horizontal, vertical, and diagonal edges, respectively. The wavelet reconstruction algorithm is as follows:

C_0(m, n) = Σ_k Σ_l [C_1(k, l) h_{m−2k} h_{n−2l} + D_1^1(k, l) h_{m−2k} g_{n−2l} + D_1^2(k, l) g_{m−2k} h_{n−2l} + D_1^3(k, l) g_{m−2k} g_{n−2l}]    (12)
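Eqs. (8)–(12) can be made concrete with the Haar filter pair, for which the analysis and synthesis steps reduce to 2 × 2 block sums and differences. The 1/2 normalization and the HL/LH naming follow one common convention and are our choice, not taken from the paper:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar DWT, a concrete instance of Eqs. (8)-(11)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    LL = (a + b + c + d) / 2          # Eq. (8): low-pass rows and columns
    HL = (a - b + c - d) / 2          # Eq. (9): horizontal detail
    LH = (a + b - c - d) / 2          # Eq. (10): vertical detail
    HH = (a - b - c + d) / 2          # Eq. (11): diagonal detail
    return LL, HL, LH, HH

def ihaar2d(LL, HL, LH, HH):
    """Inverse transform, a concrete instance of Eq. (12)."""
    x = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    x[0::2, 0::2] = (LL + HL + LH + HH) / 2
    x[0::2, 1::2] = (LL - HL + LH - HH) / 2
    x[1::2, 0::2] = (LL + HL - LH - HH) / 2
    x[1::2, 1::2] = (LL - HL - LH + HH) / 2
    return x

img = np.random.default_rng(0).random((8, 8))
print(np.allclose(ihaar2d(*haar2d(img)), img))   # -> True
```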

Table 2 gives the achieved PSNR (in dB) when the image is restored using only the low-frequency coefficients or only the high-frequency coefficients after a four-scale DWT decomposition. The 512 × 512 grayscale images Lena, Barbara, Peppers, Mandrill, Goldhill, and Boat are chosen as the transmitted signals. As shown in Table 2, the low-frequency coefficients make a far greater contribution to the PSNR than the high-frequency coefficients. Accordingly, in order to further improve the performance of the CCS algorithm, the low-frequency coefficients are retained to ensure the quality of the restored image, and the high-frequency coefficients are processed by the CCS approach to achieve compression.

Table 2 The PSNR of four-scale DWT

Image      PSNR/dB
           Low frequency    High frequency
Lena       35.77            5.66
Barbara    30.25            5.90
Peppers    33.14            6.63
Mandrill   24.63            5.52
Goldhill   33.15            6.38
Boat       33.47            4.86

Table 3 shows the PSNR of the Lena image decomposed by a multi-scale DWT. For convenience of description, in this paper the N × N image is considered as an N² × 1 signal. As shown in Table 3, the smaller the decomposition scale s, the greater the contribution of the low-frequency coefficients to the PSNR, and the longer the length of the low-frequency coefficients (N′/2^s, N′ = 512 × 512). Therefore, a proper scale s can be chosen according to the compression ratio. The compression ratio of the proposed unequal-CCS method is

α_unequal-CCS = (m² ncn / 2^s + M_B m + n²) / N²,  M_B = Σ_{i=1}^{ncn} M_Bi    (13)

where m²/2^s is the length of the low-frequency coefficients of an image block.

Table 3 The PSNR of multi-scale DWT (Lena)

Scale s    PSNR/dB                                Length of low coefficients
           Low coefficients   High coefficients
1          35.767 6           5.66                N′/2
2          30.738 3           5.67                N′/4
3          27.116 0           5.69                N′/8
4          24.227 9           5.72                N′/16

4 Experimental results and analysis

4.1 Experimental results and analysis of the CCS algorithm

The performance of the proposed CCS module was evaluated by simulation, with the PSNR used as the measure of reconstruction quality. The 512 × 512 grayscale images (Lena, Barbara, Peppers, Mandrill, Goldhill, and Boat) are chosen as the transmitted signals.

4.1.1 Performance analysis of the CCS algorithm for different image block sizes

First, the 512 × 512 image is decomposed into n × n small image blocks of size m × m each (m × n = 512). The image blocks are clustered into one category if their mutual histogram similarity satisfies ρ ≥ 95%. The four-scale DWT is chosen as the sparsifying basis Ψ for the image blocks, and the common symmetric wavelet function sym1 is selected as the wavelet basis. When the absolute value of a wavelet coefficient is less than a very small value ε (ε > 0), it is treated as zero; the signal sparsity K is then obtained. In this article, ε = 0.1. The M_Bi × m Gaussian random matrix Φ_Bi is employed to sample the sparse coefficients, and the compression ratio α is set from 10% to 50%. The numbers of categories are summarized in Table 1, M_B is calculated by Eq. (6), and M_Bi is calculated by Eq. (4). The PSNR obtained with the CCS algorithm for different image block sizes is compared in Fig. 5. As can be seen from Fig. 5, the smaller the image block size (m = 8), the higher the PSNR for all of the test images. This is because the smaller the size of the divided image blocks, the less data there is to be transmitted (as shown in Table 1). Consequently, these 512 × 512 test images are divided into 64 × 64 small blocks of size 8 × 8 to be compressed.
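The thresholding rule used above for estimating the sparsity K can be sketched as follows (the function name is ours):

```python
import numpy as np

def sparsity(coeffs, eps=0.1):
    """Count K, the number of transform coefficients whose magnitude
    exceeds eps; smaller coefficients are treated as zero
    (eps = 0.1, as in the experiments)."""
    return int(np.sum(np.abs(coeffs) > eps))

w = np.array([0.9, -0.04, 0.0, 2.3, 0.08, -1.1])
print(sparsity(w))   # -> 3
```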


Fig. 5 The compared PSNR of the CCS algorithm for different image block sizes (m = 8, 16, 32): (a) Lena; (b) Barbara; (c) Peppers; (d) Mandrill; (e) Goldhill; (f) Boat

4.1.2 The compared PSNR of the CS and CCS algorithms

The size of the random measurement matrix of the original CS scheme is calculated by Eq. (7), in which M = α × N. When the result is not an integer, the integer ⌊M⌋ is chosen.

The compared PSNR and runtime (in seconds) of the CS and CCS algorithms are shown in Table 4. In order to obtain more objective experimental data, the average values of twenty experiments are taken as the final results. Table 4 shows that the proposed CCS method achieves a much higher PSNR than the original CS method; the gap is especially large at low compression ratios. What is more, as the compression ratio decreases, the rate of decline of the PSNR is much lower than that of the CS method, which indicates that the proposed CCS method has better stability at low compression ratios. In addition, the PSNR achieved by the proposed CCS method is never less than 20 dB. It can also be seen that the runtime of the CCS approach is longer than that of the CS method.


Table 4 The compared PSNR and runtime of the CS and CCS algorithms

                          α = 10%           α = 20%           α = 30%           α = 40%           α = 50%
Image     Algorithm       PSNR     Time/s   PSNR     Time/s   PSNR     Time/s   PSNR     Time/s   PSNR     Time/s
Lena      CCS (proposed)  28.289 9 120.82   31.237 6 136.84   33.478 7 165.32   35.664 5 172.39   37.398 4 168.23
          CS               9.779 3  46.54   25.073 2  58.03   32.276 7  59.68   34.145 9  65.14   35.635 4  69.43
Barbara   CCS (proposed)  22.735 3 223.51   26.288 7 230.00   27.797 3 242.73   28.892 2 263.92   30.200 6 278.06
          CS              10.283 8  49.15   22.617 5  71.84   26.012 3  90.06   27.883 8 106.09   29.445 7 121.67
Peppers   CCS (proposed)  26.185 7 115.15   28.573 0 131.18   30.874 3 144.10   34.161 6 148.19   36.044 4 256.81
          CS              11.323 6  46.53   24.668 9  70.29   30.522 6  77.21   33.483 8  79.90   35.277 8  84.51
Mandrill  CCS (proposed)  20.427 3 349.87   24.199 2 373.09   24.575 0 388.48   25.432 9 430.76   25.891 6 449.93
          CS               9.266 1  49.85   19.944 2  91.26   23.743 0 111.07   24.910 6 138.62   25.973 6 165.39
Goldhill  CCS (proposed)  27.421 3 278.53   30.055 0 317.45   31.235 5 326.51   32.032 1 345.84   33.226 4 357.03
          CS              11.213 5  45.98   25.822 4  67.50   29.541 0  78.43   31.198 8  84.85   32.517 5 100.71
Boat      CCS (proposed)  25.809 9 217.71   29.174 3 236.34   30.629 5 256.37   32.865 7 265.09   34.810 0 285.07
          CS               8.742 4  47.54   23.569 9  69.93   29.340 0  74.42   31.745 3  81.28   33.291 1  87.18

4.2 Experimental results and analysis of the unequal-CCS algorithm

In order to further improve the performance of the CCS algorithm and the quality of the restored image, an unequal-CCS algorithm based on the characteristics of the wavelet coefficients is proposed. Likewise, the 512 × 512 grayscale images Lena, Barbara, Peppers, Mandrill, Goldhill, and Boat are chosen as the transmitted signals, and the compression ratio α is set from 10% to 50%. First, the scale s of the wavelet transform is determined according to the compression ratio α and the number of categories cn. The simulation parameters s are summarized in Table 5.

Table 5 Simulation parameter scale s (n = 64, m = 8)

Image      cn       α = 10%   20%   30%   40%   50%
Lena       2 220    3         2     1     1     1
Barbara    3 026    4         3     2     1     1
Peppers    2 316    3         2     1     1     1
Mandrill   3 867    4         3     2     2     1
Goldhill   3 371    4         3     2     2     1
Boat       2 412    3         2     2     1     1

As mentioned earlier, the low-frequency information is much more important than the high-frequency information. Accordingly, the scale s is made as small as possible; in other words, as much low-frequency information as possible should be retained. According to Eq. (13), the scale s must satisfy

(m² ncn / 2^s + n²) / N² ≤ α    (14)

that is,

s ≥ log₂( m² ncn / (α N² − n²) )    (15)

Second, the high-frequency coefficients are compressed by the proposed CCS algorithm. M_B is

calculated by Eq. (6), and M_Bi is then calculated by Eq. (4). The PSNRs of the unequal-CCS algorithm are shown in Table 6. For comparison, results from Ref. [22] using the MH-BCS-SPL and MH-MS-BCS-SPL algorithms are also included, which have the best PSNR performance among the known BCS algorithms ([19]–[26]).

Table 6 The compared PSNR of different algorithms

Image     Algorithm        α = 10%  20%     30%    40%    50%
Lena      MH-BCS-SPL       29.85    32.85   34.73  36.34  37.82
          MH-MS-BCS-SPL    31.61    34.88   36.79  38.32  39.74
          Unequal-CCS      33.07    35.076  37.91  39.52  41.87
Barbara   MH-BCS-SPL       27.89    31.46   33.63  35.68  37.29
          MH-MS-BCS-SPL    24.28    26.42   27.98  32.95  36.21
          Unequal-CCS      29.82    33.47   35.85  37.69  38.66
Peppers   MH-BCS-SPL       30.28    32.82   34.32  35.63  36.87
          MH-MS-BCS-SPL    32.08    34.73   35.96  37.15  38.37
          Unequal-CCS      33.87    35.85   37.53  38.22  39.98
Mandrill  MH-BCS-SPL       20.47    22.36   24.03  25.36  26.67
          MH-MS-BCS-SPL    21.66    23.20   24.82  25.81  27.10
          Unequal-CCS      22.86    23.57   24.79  25.36  25.85
Goldhill  MH-BCS-SPL       27.67    30.28   31.82  33.26  34.62
          MH-MS-BCS-SPL    29.07    31.35   33.06  34.55  36.10
          Unequal-CCS      29.31    31.73   33.34  34.84  36.56
Boat      MH-BCS-SPL       26.17    29.30   31.18  32.89  34.45
          MH-MS-BCS-SPL    27.46    30.38   32.23  33.89  35.46
          Unequal-CCS      28.38    31.66   33.25  36.85  39.26

As shown in Table 6, the unequal-CCS method achieves a much higher PSNR than the approaches of Ref. [22] at every compression ratio α, except for the Mandrill image. The first reason is that the Mandrill image has fewer similar image blocks than the other images; the second is that Mandrill has more high-frequency coefficients after the DWT. For sources of this type, the proposed method is not suitable. But for most images, the proposed


algorithm demonstrates a significant performance gain in terms of PSNR.

5 Conclusions

In this article, a CCS scheme is proposed for image compression by exploiting the self-similarity of images. The experimental results show that the proposed algorithm gives a considerable gain in PSNR with respect to the CS scheme. Furthermore, the paper also shows that the proposed scheme is more stable across various compression ratios. In order to improve the performance of the CCS algorithm further, the unequal-CCS algorithm is proposed based on the characteristics of the wavelet coefficients. The proposed approaches show significant reconstruction performance gains over other BCS algorithms.

Acknowledgements

This work was supported by the National Science Foundation of China (61171176).

References
1. Donoho D L. Compressed sensing. IEEE Transactions on Information Theory, 2006, 52(4): 1289−1306
2. Tsaig Y, Donoho D L. Extensions of compressed sensing. Signal Processing, 2006, 86(3): 549−571
3. Candès E J, Romberg J, Tao T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 2006, 52(2): 489−509
4. Candès E J. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 2008, 346(9): 589−592
5. Chen S S, Donoho D L, Saunders M A. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 1998, 20(1): 33−61
6. Pati Y C, Rezaiifar R, Krishnaprasad P S. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers (ACSSC'93): Vol 1, Nov 1−3, 1993, Pacific Grove, CA, USA. Los Alamitos, CA, USA: IEEE Computer Society, 1993: 40−44
7. Tibshirani R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B: Methodological, 1996, 58(1): 267−288
8. Gorodnitsky I F, Rao B D. Sparse signal reconstructions from limited data using FOCUSS: A re-weighted minimum norm algorithm. IEEE Transactions on Signal Processing, 1997, 45(3): 600−616
9. Candès E J, Wakin M B, Boyd S P. Enhancing sparsity by reweighted l1 minimization. Journal of Fourier Analysis and Applications, 2008, 14(5/6): 877−905
10. Chartrand R, Yin W. Iteratively reweighted algorithms for compressive sensing. Proceedings of the 33rd International Conference on Acoustics, Speech, and Signal Processing (ICASSP'08), Mar 31−Apr 4, 2008, Las Vegas, NV, USA. Piscataway, NJ, USA: IEEE, 2008: 3869−3872
11. Tipping M E. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 2001, 1(3): 211−244
12. Wipf D, Rao B D. Sparse Bayesian learning for basis selection. IEEE Transactions on Signal Processing, 2004, 52(8): 2153−2164
13. Duarte-Carvajalino J M, Sapiro G. Learning to sense sparse signals: Simultaneous sensing matrix and sparsifying dictionary optimization. IEEE Transactions on Image Processing, 2009, 18(7): 1395−1408
14. Zhang J, Zhao D, Zhao C, et al. Image compressive sensing recovery via collaborative sparsity. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2012, 2(3): 380−391
15. Romberg J. Imaging via compressive sampling. IEEE Signal Processing Magazine, 2008, 25(2): 14−20
16. Haupt J, Nowak R. Compressive sampling vs. conventional imaging. Proceedings of the 2006 IEEE International Conference on Image Processing (ICIP'06), Oct 8−11, 2006, Atlanta, GA, USA. Piscataway, NJ, USA: IEEE, 2006: 1269−1272
17. Wen J, Chen Z, Han Y, et al. A compressive sensing image compression algorithm using quantized DCT and noiselet information. Proceedings of the 35th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'10), Mar 14−19, 2010, Dallas, TX, USA. Piscataway, NJ, USA: IEEE, 2010: 1294−1297
18. Deng C, Lin W, Lee B S, et al. Robust image coding based upon compressive sensing. IEEE Transactions on Multimedia, 2012, 14(2): 278−290
19. Gan L. Block compressed sensing of natural images. Proceedings of the 15th International Conference on Digital Signal Processing (DSP'07), Jul 1−4, 2007, Cardiff, UK. Piscataway, NJ, USA: IEEE, 2007: 403−406
20. Mun S, Fowler J E. Block compressed sensing of images using directional transforms. Proceedings of the 16th International Conference on Image Processing (ICIP'09), Nov 7−10, 2009, Cairo, Egypt. Piscataway, NJ, USA: IEEE, 2009: 3021−3024
21. Fowler J E, Mun S, Tramel E W. Multiscale block compressed sensing with smoothed projected Landweber reconstruction. Proceedings of the 19th European Signal Processing Conference (EUSIPCO'11), Aug 29−Sept 2, 2011, Barcelona, Spain. 2011: 564−568
22. Chen C, Tramel E W, Fowler J E. Compressed-sensing recovery of images and video using multihypothesis predictions. Conference Record of the 45th Asilomar Conference on Signals, Systems and Computers (ASILOMAR'11), Nov 6−9, 2011, Pacific Grove, CA, USA. Piscataway, NJ, USA: IEEE, 2011: 1193−1198
23. Qian Y, Lei Y, Sun H. Adaptive Bayesian compressed sensing based on sub-block image. Proceedings of the 11th International Conference on Signal Processing (ICSP'12): Vol 1, Oct 21−25, 2012, Beijing, China. Piscataway, NJ, USA: IEEE, 2012: 97−101
24. Li Y, Sha X, Wang K, et al. The weight-block compressed sensing and its application to image reconstruction. Proceedings of the 2nd International Conference on Instrumentation, Measurement, Computer, Communication and Control (IMCCC'12), Dec 8−12, 2012, Harbin, China. Piscataway, NJ, USA: IEEE, 2012: 723−727
25. Liu L, Wang A, Zhu K, et al. Directional block compressed sensing for image coding. Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS'13), May 19−23, 2013, Beijing, China. Piscataway, NJ, USA: IEEE, 2013: 1644−1647
26. Wang R F, Jiao L C, Liu F, et al. Block-based adaptive compressed sensing of image using texture information. Acta Electronica Sinica, 2013, 41(8): 1506−1514 (in Chinese)
27. Swain M, Ballard D. Color indexing. International Journal of Computer Vision, 1991, 7(1): 11−32

(Editor: ADA Lai Ti)