An improved SVD-based watermarking scheme using human visual characteristics




Optics Communications 284 (2011) 938–944



Chih-Chin Lai ⁎, Department of Electrical Engineering, National University of Kaohsiung, Kaohsiung 81148, Taiwan

Article history: Received 6 September 2010; Received in revised form 8 October 2010; Accepted 13 October 2010

Keywords: Watermarking; Discrete cosine transformation; Singular value decomposition; Human visual characteristics

Abstract

With the widespread use of the Internet, digital media can be readily manipulated, reproduced, and distributed over information networks; illegal reproduction of digital information has therefore become a real problem. Digital watermarking is regarded as an effective solution for protecting the copyright of digital media. In this paper, an improved SVD-based watermarking technique that considers human visual characteristics is presented. Experimental results demonstrate that the proposed approach is able to withstand a variety of image processing attacks.

© 2010 Elsevier B.V. All rights reserved.

1. Introduction

The rapid expansion of the Internet and digital technologies in recent years has sharply increased the ease of producing and distributing digital media. This has led to a matching ease in the illegal and unauthorized manipulation of multimedia products. Protecting the ownership of digital products while allowing full utilization of Internet resources has become an urgent issue. One technical solution that makes law enforcement and copyright protection for digital products possible and practical is digital watermarking.

Digital watermarking refers to techniques that prevent copying or protect digital data by imperceptibly hiding authorized mark information in the original data. The hidden information can be retrieved by the inverse process with the correct keys. A basic watermarking algorithm, for an image for example, consists of a cover image, a watermark structure, an embedding algorithm, and an extraction or detection algorithm. A practical watermarking scheme usually possesses the following basic characteristics: (1) perceptual transparency, (2) payload of the watermark, (3) robustness, (4) security, and (5) oblivious/non-oblivious watermarking [17].

Several techniques have been proposed for multimedia protection. Among them, much interest has focused on digital images [1,6,9,12–14,16,20]. According to the domain in which the watermark is inserted, these techniques are divided into two broad

⁎ Tel.: +886 7 5919771. E-mail address: [email protected]. doi:10.1016/j.optcom.2010.10.047

categories: spatial-domain and frequency-domain methods. Embedding the watermark into the spatial-domain components of the original image is the most straightforward method; it has the advantages of low complexity and easy implementation. However, spatial-domain watermarking algorithms are generally fragile to image processing operations and other attacks. On the other hand, representative frequency-domain techniques embed the watermark by modulating the magnitudes of coefficients in a transform domain, such as the discrete cosine transform (DCT), discrete Fourier transform (DFT), and discrete wavelet transform (DWT) [2,4,21]. Although frequency-domain methods allow more information to be embedded and offer more robustness against many common attacks, their computational cost is higher than that of spatial-domain methods.

In the last few years, singular value decomposition (SVD)-based watermarking techniques and their variations have frequently been encountered in the literature [5,7,10,11,18]. SVD is a mathematical technique used to extract algebraic features from an image. The core idea behind SVD-based approaches is to apply the SVD to the whole cover image or, alternatively, to small blocks of it, and then modify the singular values to embed the watermark. There are three advantages to employing the SVD in a digital watermarking scheme: (1) the size of the matrices in the SVD transformation is not fixed; (2) when a small perturbation is added to an image, no large variation of its singular values (SVs) occurs; and (3) the SVs represent intrinsic algebraic image properties [18].

Ganic et al. [11] presented a double watermarking scheme based on SVD that embeds the watermark twice. In the first layer, the cover image is divided into smaller blocks and a piece of the watermark is embedded in each block. In the second layer, the cover image is used as a single block to embed the whole watermark. The


purpose of the two layers is that layer one allows flexibility in data capacity, while layer two provides additional robustness to attacks.

Huang and Guan [15] presented a hybrid SVD-DCT watermarking method, in which the SVD and the DCT are performed on the watermark and the original image, respectively; only the singular values of the watermark are embedded into the DCT coefficients of the original image. Moreover, the LPSNR criterion is adopted in their method to achieve the highest possible robustness without degrading image quality. Calagna et al. [8] introduced an image watermarking scheme based on SVD compression. They divide the cover image into blocks and apply the SVD to each block; the watermark is embedded in all non-zero singular values according to the local features of the cover image, so as to balance embedding capacity with distortion. Shieh et al. [22] presented a robust watermarking approach for hiding gray-level watermarks in digital images. Unlike most conventional watermarking algorithms, the watermark image is the same as the original one in their method. Plugging the codebook concept into the SVD, their method embeds the SVs of the original image into the watermark image to attain the lossless objective. Basso et al. [3] proposed a block-based watermarking scheme based on SVD: the original image is split into non-overlapping blocks, the SVD is applied to each of them, and a watermark is subsequently embedded into the singular vectors. Each watermark value is embedded by modifying a set of singular vector angles, i.e., angles formed by the right singular vectors of each block.

A lot of work has been devoted to studying the human visual system (HVS) and applying this knowledge to image processing applications.
There is a trend that watermarking techniques should exploit the characteristics of the HVS, so that watermark imperceptibility and robustness can be improved. In this paper, we propose an improved SVD-based watermarking technique that considers HVS characteristics, where the watermarking is performed using a block-based approach. For a block-based watermarking scheme, a more complex block is favored for embedding a watermark to achieve high imperceptibility. Therefore, to achieve the desired performance level for the proposed scheme, the characteristics of the image should be taken into consideration when selecting the blocks to be embedded. The visual entropy and edge entropy are two criteria used to help select the watermark embedding regions. The DCT and SVD are then applied to the selected image blocks. The watermark is embedded into the selected blocks by modifying the entries of the orthogonal matrix U of each block, which preserves the visual quality of the watermarked image and increases the robustness of the watermark. The technique satisfies both the imperceptibility and robustness requirements of a watermarking system.

The rest of this paper is organized as follows. Section 2 briefly reviews the basic concepts of the DCT and SVD. The proposed watermarking scheme is introduced in Section 3. Simulations of our technique under attacks are presented in Section 4. Finally, conclusions are given in Section 5.

2. Background

2.1. Discrete cosine transform

The discrete cosine transform (DCT) is a signal decomposition that converts signals from the spatial domain to the frequency domain. It is the basis for many image and video compression algorithms, notably the baseline JPEG and MPEG compression standards. The two-dimensional forward DCT and inverse DCT of an image block of N × N pixels are given in Eqs. (1) and (2), where C(u, v) is the coefficient at coordinate (u, v) in the DCT-transformed block and f(x, y) is the pixel value at coordinate (x, y) in the original block:

$$C(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\,\cos\frac{(2x+1)u\pi}{2N}\,\cos\frac{(2y+1)v\pi}{2N}, \qquad (1)$$

$$f(x,y) = \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} \alpha(u)\,\alpha(v)\,C(u,v)\,\cos\frac{(2x+1)u\pi}{2N}\,\cos\frac{(2y+1)v\pi}{2N}, \qquad (2)$$

where u, v = 0, 1, …, N − 1, and

$$\alpha(u) = \begin{cases} \sqrt{1/N}, & \text{for } u = 0, \\ \sqrt{2/N}, & \text{otherwise.} \end{cases} \qquad (3)$$
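Eqs. (1)–(3) can be checked directly with a small sketch (NumPy assumed; the function names `dct2`/`idct2` are illustrative, and a practical implementation would use a fast transform instead of these naive loops):

```python
import numpy as np

def alpha(u, N):
    # Normalization factor of Eq. (3)
    return np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)

def dct2(f):
    # Forward 2-D DCT of Eq. (1), computed naively from the definition
    N = f.shape[0]
    idx = np.arange(N)
    C = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            cu = np.cos((2 * idx + 1) * u * np.pi / (2 * N))   # cosine over x
            cv = np.cos((2 * idx + 1) * v * np.pi / (2 * N))   # cosine over y
            C[u, v] = alpha(u, N) * alpha(v, N) * (cu[:, None] * cv[None, :] * f).sum()
    return C

def idct2(C):
    # Inverse 2-D DCT of Eq. (2)
    N = C.shape[0]
    idx = np.arange(N)
    A = np.array([alpha(u, N) for u in range(N)])
    f = np.zeros((N, N))
    for x in range(N):
        for y in range(N):
            cu = np.cos((2 * x + 1) * idx * np.pi / (2 * N))   # cosine over u
            cv = np.cos((2 * y + 1) * idx * np.pi / (2 * N))   # cosine over v
            f[x, y] = (np.outer(A * cu, A * cv) * C).sum()
    return f
```

Because the basis is orthonormal, `idct2(dct2(block))` recovers the block exactly up to floating-point error, and `C(0,0)` is the scaled DC (average) term.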

2.2. Singular value decomposition

From the perspective of image processing, an image can be viewed as a matrix with nonnegative scalar entries. The SVD of an image A of size m × m is given by $A = USV^T$, where U and V are orthogonal matrices and $S = \mathrm{diag}(\lambda_i)$ is a diagonal matrix of singular values $\lambda_i$, $i = 1, \ldots, m$, arranged in decreasing order. The columns of U are the left singular vectors, whereas the columns of V are the right singular vectors of the image A. This decomposition can be written as

$$A = USV^T = [u_1, u_2, \ldots, u_m] \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_m \end{pmatrix} [v_1, v_2, \ldots, v_m]^T = \sum_{i=1}^{r} \lambda_i u_i v_i^T, \qquad (4)$$

where r is the rank of A, and $u_i$ and $v_i$ are the i-th left and right singular vectors, respectively. It is important to note that the singular values specify the luminance of the image, whereas the corresponding pairs of singular vectors specify the geometry of the image.

3. The proposed watermarking scheme

In this section, the proposed watermarking scheme is described in detail. The embedding and extracting procedures are shown in Fig. 1.

[Fig. 1. The flowchart of the proposed watermarking scheme. (a) The watermark embedding procedure: Cover Image → Block Selection → DCT Transformation → SVD Transformation → Modify U component (with Watermark) → Inverse SVD Transformation → Inverse DCT Transformation → Watermarked Image. (b) The watermark extracting procedure: Watermarked Image → Block Selection → DCT Transformation → SVD Transformation → Examine U component → Extracted Watermark.]

3.1. Image information

In designing a watermarking scheme, two conflicting requirements must be satisfied at the same time: (1) the watermark should be imperceptible, and (2) the watermark should be robust and very difficult to remove. To design a more robust scheme, the watermark should be embedded in the perceptually most significant components of the host media [9]. Several well-known approaches use HVS properties to produce a more robust watermark. The HVS model can be used not only to measure the perceptibility of watermarks once they are embedded, but also to control that perceptibility during the embedding process. Therefore, we consider the visual entropy and the edge entropy, which are also used in [19], to select watermark embedding regions that satisfy the two requirements above.

An important approach to describing an image region is to quantify its texture content, and the visual entropy is a common measure of the texture of an image. According to Shannon's definition, the entropy of an n-state system is

$$H_1 = -\sum_{i=1}^{n} p_i \log p_i, \qquad (5)$$

where $p_i$ is the probability of occurrence of event $i$, with $0 \le p_i \le 1$ and $\sum_{i=1}^{n} p_i = 1$. This value depends solely on the probability distribution of the pixel intensities and does not consider the co-occurrence of pixel values, so $H_1$ is a global measure with respect to the image. The concept behind visual entropy is illustrated in Fig. 2: three binary images have identical histograms but different spatial distributions of gray levels, so their visual entropies are expected to differ.

However, pixel intensities in an image are not independent of each other, and the spatial interaction between the gray-level primitives of pixels in a region should be considered when estimating the entropy. The edge is an important image characteristic, since it can be detected by comparing the pixel values for local properties obtained in pairs of non-overlapping neighborhoods bordering the pixel. Along with the visual entropy, the edge entropy of an image block is therefore also considered when selecting the embedding regions. The edge entropy is defined as

$$H_2 = \sum_{i=1}^{n} p_i\, e^{u_i}, \qquad (6)$$

where $u_i = 1 - p_i$ denotes the ignorance or uncertainty of the pixel value.

The main steps performed to incorporate block selection for watermark embedding with the HVS model are:
(1) The cover image is partitioned into n × n non-overlapping blocks, and the visual entropy and edge entropy of each block are calculated using Eqs. (5) and (6), respectively.
(2) The two entropy measures of each block are summed, and the resulting values are sorted in ascending order. The block with the lowest value is selected for inserting the watermark, and selection continues until the number of selected blocks equals the size of the watermark.

3.2. The watermark embedding procedure

The SVD provides an elegant way of extracting algebraic features from an image. One such feature is that the relationship between the entries in the first column vector of the U component is preserved under general image processing operations, whereas for the other columns the entries change. A few researchers have worked in this field [5,7,10]. Here, we also utilize this property of the transform coefficients of the cover image to insert the watermark. The proposed watermark embedding procedure is formulated as follows.
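The block-selection steps above can be sketched as follows (a minimal NumPy illustration; the function names, the base-2 logarithm in Eq. (5), and the number of selected blocks are assumptions for the sketch):

```python
import numpy as np

def block_entropies(block):
    """Visual entropy H1 (Eq. 5) and edge entropy H2 (Eq. 6) of one block."""
    hist = np.bincount(block.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # only occurring intensities contribute
    h1 = -np.sum(p * np.log2(p))       # Shannon entropy (base 2 assumed)
    h2 = np.sum(p * np.exp(1.0 - p))   # exponential entropy with u_i = 1 - p_i
    return h1, h2

def select_blocks(image, n=8, num_blocks=1024):
    """Return (row, col) corners of the num_blocks n-by-n blocks with the
    lowest combined entropy H1 + H2, in ascending order of the sum."""
    H, W = image.shape
    scores = []
    for r in range(0, H - n + 1, n):
        for c in range(0, W - n + 1, n):
            h1, h2 = block_entropies(image[r:r + n, c:c + n])
            scores.append((h1 + h2, (r, c)))
    scores.sort(key=lambda t: t[0])
    return [pos for _, pos in scores[:num_blocks]]
```

A perfectly flat block has $H_1 = 0$ and $H_2 = 1$, so it is ranked before any textured block, matching the ascending-order selection described above.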

Step 1. The cover image is first partitioned into non-overlapping blocks of 8 × 8 pixels.
Step 2. Select appropriate blocks for watermark embedding by utilizing the HVS characteristics.
Step 3. Apply the DCT to the selected blocks to obtain DCT-domain frequency bands.
Step 4. Apply the SVD to all DCT-transformed blocks.
Step 5. On each selected block, examine the relationship between the entries in the first column of the U matrix. The watermark is embedded by changing the relation between the third ($u_{3,1}$) and fourth ($u_{4,1}$) entries of the first column. If the embedded binary watermark bit is 1, the value of $(u_{3,1} - u_{4,1})$ should be positive and its magnitude greater than a threshold T; if the bit is 0, the value should be negative and its magnitude greater than T. When these conditions are violated, $u_{3,1}$ and $u_{4,1}$ are modified to $\hat{u}_{3,1}$ and $\hat{u}_{4,1}$ according to the following rules. Let $\mu = \left(|u_{3,1}| + |u_{4,1}|\right)/2$; then

$$\text{If } w_k = 1:\quad \hat{u}_{3,1} = \mu + T/2,\quad \hat{u}_{4,1} = \mu - T/2; \qquad (7)$$

$$\text{If } w_k = 0:\quad \hat{u}_{3,1} = \mu - T/2,\quad \hat{u}_{4,1} = \mu + T/2; \qquad (8)$$

where $w_k$ denotes the embedded binary watermark bit and $|x|$ denotes the absolute value of $x$.
Step 6. Obtain the watermarked image by performing the inverse SVD and inverse DCT on all selected blocks.

3.3. The watermark extracting procedure

The watermark extraction sequence consists of the following steps:

Fig. 2. Three binary images with different visual effects.

Step 1. The watermarked image is first partitioned into non-overlapping blocks of 8 × 8 pixels.
Step 2. The visual entropy and edge entropy of each block are calculated to determine the blocks where the watermark is embedded.
Step 3. Apply the DCT to the selected blocks to obtain DCT-domain frequency bands.


Fig. 3. (a) cover image I (Lena), (b) cover image II (Peppers), and (c) watermark.

Step 4. Apply the SVD to all DCT-transformed blocks.
Step 5. The relationship between the third and fourth entries in the first column of the U matrix is examined. If a positive difference is detected, the extracted watermark bit is 1, whereas a negative difference implies that a watermark bit 0 is extracted. These extracted bits are used to construct the extracted watermark.
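Under stated assumptions (NumPy only, the DCT expressed as an orthonormal matrix product, and an explicit sign-normalization of the SVD that the text leaves implicit), the per-block embedding of Steps 3–6 and the extraction rule above can be sketched as:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II matrix D, so that the 2-D DCT of B is D @ B @ D.T
    return np.array([[np.sqrt((1 if u == 0 else 2) / n) *
                      np.cos((2 * x + 1) * u * np.pi / (2 * n))
                      for x in range(n)] for u in range(n)])

def _svd_fixed(C):
    # SVD with the global sign ambiguity resolved (force U[0,0] >= 0), an
    # implementation detail needed so embed and extract compare consistently
    U, S, Vt = np.linalg.svd(C)
    if U[0, 0] < 0:
        U, Vt = -U, -Vt
    return U, S, Vt

def embed_bit(block, bit, T=0.04):
    """Embed one watermark bit in an 8x8 spatial block (Steps 3-6, Eqs. 7-8)."""
    D = dct_matrix(block.shape[0])
    C = D @ block @ D.T                      # Step 3: DCT of the block
    U, S, Vt = _svd_fixed(C)                 # Step 4: SVD of the DCT block
    mu = (abs(U[2, 0]) + abs(U[3, 0])) / 2
    if bit == 1:                             # Step 5: force u31 - u41 = +T
        U[2, 0], U[3, 0] = mu + T / 2, mu - T / 2
    else:                                    # ... or u31 - u41 = -T
        U[2, 0], U[3, 0] = mu - T / 2, mu + T / 2
    C_marked = U @ np.diag(S) @ Vt           # Step 6: inverse SVD ...
    return D.T @ C_marked @ D                # ... then inverse DCT

def extract_bit(block):
    """Recover the bit from the sign of u31 - u41 (extraction Step 5)."""
    D = dct_matrix(block.shape[0])
    U, _, _ = _svd_fixed(D @ block @ D.T)
    return 1 if U[2, 0] - U[3, 0] > 0 else 0
```

The re-orthogonalization performed by the second SVD perturbs the modified entries slightly, but the dominant first singular value of a natural image block keeps the sign of $u_{3,1} - u_{4,1}$ intact, which is what the extraction relies on.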

4. Experimental results

In this section, experiments are carried out to evaluate the performance of the proposed watermarking scheme. Two gray-level images of 512 × 512 pixels, Lena and Peppers, were used as cover images. Rather than a watermark of pseudo-random Gaussian numbers, we used a 32 × 32 binary image. They are illustrated in

Fig. 4. Watermarked Lena images under different thresholds. (a) T = 0.002, (b) T = 0.012, (c) T = 0.02, and (d) T = 0.04.


Fig. 5. Watermarked Peppers images under different thresholds. (a) T = 0.002, (b) T = 0.012, (c) T = 0.02, and (d) T = 0.04.

Fig. 3. Extensive simulations compare the results obtained via the proposed approach with those obtained via other SVD-based watermarking methods [5,7,10]. Figs. 4 and 5 show the watermarked images at different threshold values. Notice that there is no visual difference between the original and watermarked images, which ensures the fidelity of the proposed method. In addition to visual inspection of the

watermarked images, the PSNR (peak signal-to-noise ratio) of Eq. (9) is used as a measure of the quality of a watermarked image:

$$PSNR = 10 \cdot \log_{10}\frac{255^2}{MSE}, \qquad (9)$$
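The quality and robustness measures of Eqs. (9)–(11) translate directly into code (a sketch, NumPy assumed). Note that for BCR = 1 to mean a perfect match, as in the tables below, the XOR in Eq. (11) must be complemented, so the sketch counts matching bits:

```python
import numpy as np

def psnr(original, watermarked):
    """Peak signal-to-noise ratio of Eqs. (9)-(10) for 8-bit images."""
    mse = np.mean((np.asarray(original, float) - np.asarray(watermarked, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def bcr(w, w_extracted):
    """Bit correction rate of Eq. (11): fraction of matching watermark bits
    (the complemented XOR of the two binary watermarks, averaged)."""
    return np.mean(np.asarray(w) == np.asarray(w_extracted))
```

For example, changing a single pixel of a 4 × 4 zero image to 255 gives MSE = 255²/16 and hence PSNR = 10·log₁₀(16) ≈ 12.04 dB.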

Table 1. PSNR performance for Lena image under different threshold values.

Threshold   Chang et al. [5]   Chung et al. [7]   Fan et al. [10]   Ours
0.002       48.80              50.17              48.91             61.69
0.012       48.02              47.83              48.12             49.37
0.02        46.90              45.94              46.98             44.75
0.04        43.74              42.04              43.81             38.51
Average     46.86              46.49              46.95             48.58

Table 2. PSNR performance for Peppers image under different threshold values.

Threshold   Chang et al. [5]   Chung et al. [7]   Fan et al. [10]   Ours
0.002       45.71              50.86              45.87             56.20
0.012       45.26              48.37              45.41             50.20
0.02        44.61              46.34              44.75             45.73
0.04        42.44              42.24              42.54             39.61
Average     44.51              46.95              44.64             47.94

Table 3. Bit correction rate for Lena image under different attacks with T = 0.012.

Attacks     Chang et al. [5]   Chung et al. [7]   Fan et al. [10]   Ours
CR          0.8115             0.8115             0.8115            0.8476
GN          0.6230             0.7080             0.6250            0.5928
MF          0.5283             0.4980             0.5292            0.9688
JPEG        0.6806             0.6855             0.6806            0.9941
SH          0.8847             0.9589             0.8867            0.9990
Average     0.7056             0.7323             0.7066            0.8805

Table 4. Bit correction rate for Lena image under different attacks with T = 0.04.

Attacks     Chang et al. [5]   Chung et al. [7]   Fan et al. [10]   Ours
CR          0.8125             0.8125             0.8125            0.8477
GN          0.8750             0.9179             0.8740            0.9521
MF          0.5507             0.5107             0.5468            1.0000
JPEG        0.9687             0.9794             0.9658            1.0000
SH          0.9873             0.9892             0.9873            1.0000
Average     0.8388             0.8419             0.8373            0.9599


The BCR (bit correction rate), defined in Eq. (11) below, is used to measure the correction rate of the extracted watermark.

Table 5. Bit correction rate for Peppers image under different attacks with T = 0.012.

Attacks     Chang et al. [5]   Chung et al. [7]   Fan et al. [10]   Ours
CR          0.8310             0.8330             0.8311            0.7969
GN          0.6435             0.6611             0.6464            0.5908
MF          0.4794             0.4628             0.4833            0.8369
JPEG        0.6943             0.6562             0.6933            0.9160
SH          0.8837             0.9365             0.8828            0.9316
Average     0.7064             0.7099             0.7074            0.8145

Table 6. Bit correction rate for Peppers image under different attacks with T = 0.04.

Attacks     Chang et al. [5]   Chung et al. [7]   Fan et al. [10]   Ours
CR          0.8359             0.8359             0.8359            0.8271
GN          0.8720             0.8896             0.8720            0.8544
MF          0.4921             0.4726             0.4941            0.9541
JPEG        0.9511             0.9443             0.9472            0.9609
SH          0.9667             0.9628             0.9658            0.9717
Average     0.8236             0.8211             0.8230            0.9136

$$MSE = \frac{1}{WH}\sum_{i=1}^{W}\sum_{j=1}^{H}\left(x_{ij} - x'_{ij}\right)^2, \qquad (10)$$

where W and H represent the width and height of an image, $x_{ij}$ is the pixel value at coordinate (i, j) in the original image, and $x'_{ij}$ is the pixel value after the watermark embedding procedure.

From Tables 1 and 2, we find that the threshold used in the watermark embedding procedure has a major impact on the imperceptibility of the watermarked image: a larger threshold value results in a watermarked image with lower PSNR, but makes the embedded watermark more robust. The comparison also shows that the proposed approach improves the PSNR values considerably.

To evaluate the robustness of the proposed approach, the watermarked image was tested against five kinds of attacks: (1) geometrical attack: cropping (CR); (2) noising attack: Gaussian noise (GN) addition; (3) denoising attack: median filtering (MF); (4) format-compression attack: JPEG compression; and (5) image-processing attack: sharpening (SH). For comparing the similarities between the original and extracted watermarks, the bit correction rate is computed as

$$BCR = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} W_{ij} \otimes W'_{ij}}{M \times N}, \qquad (11)$$

where M × N is the size of the watermark image, $W_{ij}$ denotes the ij-th binary value of the original watermark, $W'_{ij}$ is the ij-th binary value of the extracted watermark, and ⊗ indicates the exclusive-OR operator. The higher the BCR, the higher the robustness; in other words, a high BCR implies that the extracted watermark is very similar to the original one.

Tables 3–6 present a comparison of the BCR values of the extracted watermarks obtained by the three aforementioned SVD-based methods and the proposed one. It is clear that our scheme performs well under common image processing attacks with different threshold settings, and the proposed scheme offers better robustness than the other related algorithms.

In addition to quantitative measurement, we also examine the visual perception of the extracted watermarks. The watermarks extracted at threshold value 0.04 are shown in Figs. 6 and 7. Our scheme not only successfully resists different kinds of attacks, but also restores the watermark with high perceptual quality.

The main intrinsic difference between our approach and the compared SVD-based watermarking schemes is the selection of blocks for watermark embedding. The compared schemes choose blocks using a pseudo-random number generator based on their rank: blocks with higher ranks are selected before those with lower ranks. However, the image blocks with higher ranks would lower the distortion level of the watermarked image. In contrast to such techniques, selecting the watermark embedding regions based on spatial image features provides a good alternative, a fact supported by our experimental results.

5. Conclusions

In this paper, an image watermarking technique based on SVD and human visual characteristics is presented. The technique fully exploits the feature of SVD that singular values efficiently represent intrinsic algebraic properties of an image.
The use of HVS characteristics helps to select watermark embedding regions that achieve a good compromise between robustness and the quality of the watermarked image. Experimental results of the proposed technique show both a significant improvement in imperceptibility and robustness under

Fig. 6. Extracted watermarks at the threshold value = 0.04 with different attacks. (a) Cropping, (b) Gaussian noise, (c) Median filtering, (d) JPEG compression, and (e) Sharpening.

Fig. 7. Extracted watermarks at the threshold value = 0.04 with different attacks. (a) Cropping, (b) Gaussian noise, (c) Median filtering, (d) JPEG compression, and (e) Sharpening.


different types of image manipulation. As for future work, more perceptual features and more advanced perceptual models will be utilized.

Acknowledgements

This work was supported by the National Science Council, Taiwan, under grant NSC 99-2221-E-390-035. The author would like to thank the anonymous referees for their valuable comments and suggestions, which led to substantial improvements of this paper.

References

[1] V. Aslantas, S. Ozer, S. Ozturk, Opt. Commun. 282 (2009) 2806.
[2] M. Barni, F. Bartolini, A. De Rosa, A. Piva, IEEE Trans. Signal Process. 51 (2003) 1118.
[3] A. Basso, F. Bergadano, D. Cavagnino, V. Pomponiu, A. Vernone, Algorithms 2 (2009) 46.
[4] A. Briassouli, M.G. Strintzis, IEEE Trans. Image Process. 13 (2004) 1604.

[5] C.-C. Chang, P. Tsai, C.-C. Lin, Pattern Recogn. Lett. 26 (2005) 1577.
[6] L. Chen, D. Zhao, F. Ge, Opt. Commun. 283 (2010) 2043.
[7] K.-L. Chung, W.-N. Yang, Y.-H. Huang, S.-T. Wu, Y.-C. Hsu, Appl. Math. Comput. 188 (2007) 54.
[8] M. Calagna, H. Guo, L.V. Mancini, S. Jajodia, Proc. 2006 ACM Symposium on Applied Computing, Dijon, France, 2006, p. 1341.
[9] I.J. Cox, J. Kilian, F.T. Leighton, T. Shamoon, IEEE Trans. Image Process. 6 (1997) 1673.
[10] M.-Q. Fan, H.-X. Wang, S.-K. Li, Appl. Math. Comput. 203 (2008) 926.
[11] E. Ganic, N. Zubair, A.M. Eskicioglu, Proc. IASTED Int. Conf. on Communication, Network, and Information Security, New York, USA, 2003, p. 85.
[12] J. Guo, Z. Liu, S. Liu, Opt. Commun. 272 (2007) 344.
[13] F. Hartung, M. Kutter, Proc. IEEE 87 (1999) 1079.
[14] M.Z. He, L.Z. Cai, Q. Liu, X.C. Wang, X.F. Meng, Opt. Commun. 247 (2005) 29.
[15] F. Huang, Z.-H. Guan, Pattern Recogn. Lett. 25 (2004) 1769.
[16] D.-C. Hwang, D.-H. Shin, E.-S. Kim, Opt. Commun. 277 (2007) 40.
[17] G. Langelaar, I. Setyawan, R. Lagendijk, IEEE Signal Process. Mag. 17 (2000) 20.
[18] R. Liu, T. Tan, IEEE Trans. Multimedia 4 (2002) 121.
[19] S.P. Maity, M.K. Kundu, Int. J. Electron. Commun. 64 (2010) 243.
[20] T.V. Nguyen, J.C. Patra, Digit. Signal Process. 18 (2008) 762.
[21] A. Nikolaidis, I. Pitas, IEEE Trans. Image Process. 12 (2003) 563.
[22] J.-M. Shieh, D.-C. Lou, M.-C. Chang, Comput. Stand. Interfaces 28 (2006) 428.