Desynchronization attacks resilient image watermarking scheme based on global restoration and local embedding


Neurocomputing 106 (2013) 42–50. Contents lists available at SciVerse ScienceDirect. Journal homepage: www.elsevier.com/locate/neucom



Feng Ji a, Cheng Deng a,*, Lingling An b, Dongyu Huang c

a School of Electronic Engineering, Xidian University, Xi'an 710071, China
b School of Computer Science and Technology, Xidian University, Xi'an 710071, China
c School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China

Article info

Article history: Received 22 June 2012; received in revised form 24 September 2012; accepted 30 September 2012; available online 14 November 2012. Communicated by L. Shao.

Keywords: robust watermarking; desynchronization attacks; feature point; image restoration; segmentation

Abstract

For the existing feature-based image watermarking schemes, it remains a challenge to resist desynchronization attacks. In this paper, we propose a new feature-based image watermarking scheme that improves robustness against desynchronization attacks. First, a multi-scale Gaussian filtering model is used to extract feature points from the original image. Stable and non-overlapped local circular regions centered at the feature points are then selected by combining image segmentation with feature point refinement. Finally, the watermark is embedded in the Zernike moments of the normalized circular regions. In watermark detection, a suspected image is restored with the geometric transformation matrix calculated by feature point matching and the RANSAC iteration algorithm. Watermark detection is then conducted in the normalized circular regions obtained by the same feature selection procedure as in embedding. Preliminary experiments show that the proposed image watermarking is robust against various attacks, including common image processing attacks, desynchronization attacks, and even complicated attacks.

Crown Copyright © 2012 Published by Elsevier B.V. All rights reserved.

1. Introduction

With the rapid development of information technologies, digital media can be easily accessed and quickly transmitted. On the other hand, however, this gives rise to the problem of illegal copying and tampering of digital media; hence, copyright protection has attracted more and more attention. Digital watermarking, which embeds imperceptible information in the original data, is recognized as a favorable method for copyright protection of digital media [1]. An efficient image watermarking scheme must be resilient to a variety of possible attacks. In general, these attacks can be classified into common image processing attacks, such as noise contamination, lossy compression, and filtering, and desynchronization attacks, such as affine transformations, random bend attacks (RBAs), and projection transformations. Desynchronization attacks are more difficult to deal with than other types of attacks because they destroy the synchronization between the original image and the embedded watermarks and thereby disable the watermark detector. Until now, many watermarking methods have been developed to deal with desynchronization attacks. These

* Corresponding author. E-mail address: [email protected] (C. Deng).

methods can be roughly classified into three categories: invariant transform-based, template insertion-based, and feature-based. Invariant transform-based image watermarking schemes embed the watermark in affine-invariant domains, such as the Fourier–Mellin transformation [2] and the generalized Radon transformation [3], to achieve watermark resynchronization. However, such methods suffer from implementation issues and are vulnerable to cropping and local desynchronization attacks. In template insertion-based methods [4], desynchronization attacks are coped with by identifying the transformations that an artificially embedded reference may undergo; unfortunately, this kind of method is sensitive to malicious attacks and local desynchronization attacks. Feature-based image watermarking schemes, also called second-generation schemes, have become more and more popular in recent years because they avoid watermark synchronization errors by binding the watermark to stable image features. Bas et al. [5] extract feature points with the Harris operator and divide an image into a set of disjoint triangles using Delaunay tessellation; watermark embedding and detection are conducted in these triangular regions. In [6], Mexican hat wavelet filtering is utilized to extract feature points, and the watermark is embedded in the normalized local regions centered at these points. Seo and Yoo [7] embed and detect the watermark in local invariant regions extracted using scale-space feature points. Deng et al. [8] employ the Harris–Laplace detector to extract feature points, then embed and detect the watermark in the

0925-2312/$ - see front matter. Crown Copyright © 2012 Published by Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.neucom.2012.09.032


Tchebichef moments of the non-overlapped feature regions centered at these feature points. Gao et al. [9] adopt an affine-invariant point detector to construct affine-invariant regions (ACRs), and embed and detect the watermark in the normalized ACRs.

As mentioned above, it has been demonstrated that feature-based image watermarking schemes can significantly enhance the robustness against some desynchronization attacks. For common image processing attacks, however, some representative schemes [5,7,9] offer little robustness because they embed the watermark directly in the spatial domain of the local regions. To deal with this problem, some image watermarking schemes implement watermark embedding and detection in a transform domain of the feature regions, such as the DFT [6], histogram [10], Contourlet domain [18], and image moments [8,11]. Even so, feature-based image watermarking schemes still face two challenging problems:

1) The existing feature-based image watermarking schemes possess a certain degree of robustness against some common desynchronization attacks. However, under complicated desynchronization attacks, such as RBAs, projection transformations, and reflection transformations, the feature points extracted from the image suffer pixel deviation and their location accuracy is greatly decreased. Location accuracy is of crucial importance, as it directly determines the success or failure of a feature-based image watermarking scheme.

2) Feature point selection for constructing a set of non-overlapped feature regions plays an important role in achieving the desired performance of a robust image watermarking scheme. Although some existing methods have developed feature point selection strategies, such as graph-theoretical clustering [8,9], greatest robustness measurement [12], and image segmentation [13], a good feature point selection method must consider the trade-off between the false-positive probability and the missing probability: if redundant feature points are selected in watermark detection, the false-positive probability increases, while if useful feature points are lost, the missing probability increases.

To address these problems, we develop a new image watermarking scheme resilient to various desynchronization attacks. In the watermark embedding procedure, we first use a multi-scale Gaussian filtering model to extract geometrically invariant feature points from the original image, then select a set of stable feature points and construct non-overlapped circular feature regions using graph-theory-based image segmentation integrated with feature point refinement, and finally embed the watermark in the Zernike moments of those circular regions via dither quantization modulation. In the watermark detection procedure, we first extract feature points from a suspected image and combine them with the original feature points to calculate the geometric transformation matrix using the RANdom Sample Consensus (RANSAC) iterative optimization algorithm, then restore the suspected image with the obtained matrix and construct a set of circular regions as in the watermark embedding procedure, and finally detect the watermark from the Zernike moments of the circular regions via dither quantization modulation and a minimum-distance decoder. Experimental results show that our scheme achieves good imperceptibility and is robust against common image processing attacks as well as desynchronization attacks.

The remainder of this paper is organized as follows. Section 2 describes related work. In Section 3, we present the desynchronization-resilient image watermarking scheme. Experimental results and detailed analysis are given in Section 4. Finally, Section 5 concludes the paper.

2. Related work

Our work is mainly related to invariant feature point extraction, global image restoration based on feature points, and feature region selection. The details are described in the following subsections.

2.1. Invariant feature points extraction

As aforementioned, feature-based image watermarking schemes are robust against common image processing attacks and desynchronization attacks because feature points can be used as references for watermark embedding and detection. According to an investigation of existing detectors [14], the multi-scale Gaussian filtering model [15], which approximates the Laplacian of Gaussian (LoG) by the difference-of-Gaussian (DoG), has repeatability close to that of Hessian-based detectors; moreover, its descriptors are among the most widely used, since they are distinctive and relatively fast for on-line applications. We therefore adopt the multi-scale Gaussian filtering model to extract feature points.

Suppose I(x, y) is an input image and G(x, y, σ) is a variable-scale Gaussian filter defined as

G(x, y, σ) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²)).  (1)

The scale-space function of the image, L(x, y, σ), is produced by convolving the image with the Gaussian filter:

L(x, y, σ) = G(x, y, σ) * I(x, y),  (2)

where * is the convolution operation. The DoG function is then computed from the difference of two nearby scales separated by a constant multiplicative factor k:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ).  (3)

Fig. 1. Feature points of Lena image. (a) Original image, (b) original feature points, (c) feature points in middle-scale band, (d) final constructed feature regions.



By repeating the above process, DoG responses are computed from the successively filtered images, and candidate feature points are extracted as local extrema of the DoG function. After rejecting keypoints with low contrast and eliminating those with strong edge responses, the remaining points are selected as the feature points, as shown in Fig. 1(a) and (b). Besides its location and characteristic scale, each feature point is assigned an orientation to achieve invariance to image rotation. To do so, an orientation histogram is calculated from the gradient orientations of all pixels within a region around the feature point, and the peak of the histogram is assigned as the orientation of the feature point. The gradient of pixel (x0, y0) in the image I is computed as

∇I(x0, y0) = (∂I/∂x, ∂I/∂y)|_(x0, y0).  (4)

The magnitude of this gradient is given by √((∂I/∂x)² + (∂I/∂y)²) and its orientation by tan⁻¹((∂I/∂y)/(∂I/∂x)). Fig. 2 depicts the orientation calculation in the local region centered at a feature point and the resulting histogram of orientations.

2.2. Global restoration based on feature points

In our scheme, watermark detection is conducted on the distorted image after restoration, so the effectiveness of the image restoration algorithm has an important impact on the performance of the watermarking. Let O1 and O2 be the feature point sets of the original image I and the distorted image I', respectively. Each feature point is represented by a 128-dimensional descriptor vector, and feature points of the two images are matched according to the Euclidean distance between descriptors: if the ratio of the nearest to the second-nearest distance is less than a threshold, the match is accepted. Even so, a few mismatched feature points remain, which degrade the quality of the restored image.
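The ratio-test matching of descriptors just described can be sketched as follows. This is a minimal illustration; the function name `match_ratio_test` and the threshold value 0.8 are our assumptions, not values given in the paper.

```python
import numpy as np

def match_ratio_test(desc1, desc2, ratio=0.8):
    """Match descriptor rows of desc1 against desc2 by Euclidean distance.
    A match (i, j) is accepted only when the nearest/second-nearest
    distance ratio is below `ratio`."""
    desc1, desc2 = np.asarray(desc1, float), np.asarray(desc2, float)
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if second > 0 and nearest / second < ratio:
            matches.append((i, int(order[0])))
    return matches
```

Ambiguous points, whose two best candidates are nearly equidistant, are discarded by the ratio test; these are exactly the matches most likely to be wrong.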
In this paper, we adopt the RANSAC iterative optimization algorithm [16] to remove these mismatched feature points and then calculate the transformation matrix T more accurately. Table 1 gives the detailed iteration algorithm. Using homogeneous coordinates, the transformation can be written as

(x', y', 1)ᵀ = T (x, y, 1)ᵀ,  T = [[t11, t12, t13], [t21, t22, t23], [t31, t32, 1]].  (5)

Table 1. RANSAC iteration algorithm
Input: set of data points U; model M; maximal allowed error e_max; minimal required number of inliers n_min
Output: set of inlier data points Ũ⁺; model M̂ best fitting Ũ⁺
1:  Initialize the best set of inliers Ũ⁺
2:  for k = 1, 2, ... do
3:      Randomly select a minimal subset U_min from U to solve M
4:      M ← fit to U_min
5:      Initialize the set of inliers U⁺
6:      for all u ∈ U do
7:          if M(u) < e_max then
8:              U⁺ ← U⁺ ∪ {u}
9:          end if
10:     end for
11:     if |U⁺| ≥ n_min and |U⁺| > |Ũ⁺| then
12:         Ũ⁺ ← U⁺
13:     end if
14: end for
15: if |Ũ⁺| ≥ n_min then
16:     Refine M̂ w.r.t. all inliers Ũ⁺
17: end if

Once the transformation matrix T is solved, we can restore the distorted image. Specifically, we first initialize a template matrix I_temp with the same size as the original image I. Then, for a given pixel coordinate (x, y) in the restored image I_R, the corresponding pixel coordinate (x', y') in the distorted image I' is computed according to Eq. (5). Finally, every pixel of the restored image is obtained by bilinear interpolation:

I_temp1(x, y) = (⌈y'⌉ − y') I'(⌊x'⌋, ⌊y'⌋) + (y' − ⌊y'⌋) I'(⌊x'⌋, ⌈y'⌉)
I_temp2(x, y) = (⌈y'⌉ − y') I'(⌈x'⌉, ⌊y'⌋) + (y' − ⌊y'⌋) I'(⌈x'⌉, ⌈y'⌉)
I_R(x, y) = (⌈x'⌉ − x') I_temp1(x, y) + (x' − ⌊x'⌋) I_temp2(x, y),  (6)

where ⌊·⌋ and ⌈·⌉ are the rounding-down and rounding-up operations, respectively.

2.3. Local feature regions selection

The extracted feature points are not directly applicable for image watermarking, because the number and the distribution of the feature points are so dense that they cover nearly the entire image content. It is therefore necessary to select a set of appropriate feature points and ensure that the local feature regions centered at these selected feature points are stable and non-overlapped. Considering the advantages and disadvantages of the existing feature point selection methods, we develop a feature point selection strategy using a graph-based image segmentation technique [17].
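Returning to the restoration step of Section 2.2, the RANSAC loop of Table 1 can be sketched for the special case of an affine model fitted to matched point pairs. This is an illustrative simplification, not the authors' implementation: the paper estimates the full matrix T of Eq. (5), while here a 2×3 affine model, a least-squares fit, and all names and default parameter values are our assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine map T such that dst ~ [src | 1] @ T.T."""
    A = np.hstack([src, np.ones((len(src), 1))])
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return T.T

def ransac_affine(src, dst, n_iter=200, e_max=2.0, n_min=10, seed=0):
    """RANSAC loop of Table 1: repeatedly fit a model to a minimal random
    subset (3 pairs determine an affine map), collect the points within
    error e_max, keep the largest inlier set, then refine on all inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)           # best inlier mask so far
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        T = fit_affine(src[idx], dst[idx])
        pred = src @ T[:, :2].T + T[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < e_max
        if inliers.sum() >= n_min and inliers.sum() > best.sum():
            best = inliers
    # Final refinement w.r.t. all inliers (line 16 of Table 1).
    return fit_affine(src[best], dst[best]), best
```

Mismatched pairs end up outside the inlier set, so the refined model is computed only from consistent correspondences.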

Fig. 2. Illustrations of orientation calculation: (a) orientations in a local region and (b) histogram of the orientations.


An input image is represented as an undirected graph G = (V, E), where each vertex v_i ∈ V corresponds to a pixel in the image, and the edge set E is constructed by connecting pairs of pixels that are neighbors in an 8-connected sense. The weight of an edge is the absolute intensity difference between the pixels it connects:

w((v_i, v_j)) = |I(p_i) − I(p_j)|,  (7)

where I(p_i) is the intensity of pixel p_i. The graph-based image segmentation procedure is summarized in Table 2; it is closely related to constructing a minimum spanning tree (MST) of the graph under distance constraints. In particular, the minimum internal difference MInt is defined as

MInt(C1, C2) = min( Int(C1) + τ(C1), Int(C2) + τ(C2) ),  (8)

with

Int(C) = max_{e ∈ MST(C, E)} w(e),  (9)
τ(C) = k / |C|,  (10)

where Int(C) is the internal difference of a component C ⊆ V, defined as the largest weight in the minimum spanning tree MST(C, E) of the component; τ is a threshold function controlling the degree to which the difference between two components must exceed their internal differences; |C| denotes the size of C; and k is a constant parameter.

Table 2. Graph-based image segmentation algorithm
Input: a graph G = (V, E) with n vertices and m edges
Output: a segmentation of V into components S = (C1, ..., Cr)
1: Sort E into π = (o1, ..., om) by non-decreasing edge weight
2: Start with the initial segmentation S⁰ = (C1, ..., Cn), Ci = {vi}
3: for q = 1, ..., m do
3.1:   Let C_i^(q−1), C_j^(q−1) ∈ S^(q−1) be the components containing the endpoints of o_q
3.2:   if C_i^(q−1) ≠ C_j^(q−1) and w(o_q) ≤ MInt(C_i^(q−1), C_j^(q−1)) then
3.3:       S^q ← S^(q−1) by merging C_i^(q−1) and C_j^(q−1)
3.4:   else
3.5:       S^q = S^(q−1)
3.6:   end if
4: Return S = S^m

Fig. 3 shows the results of graph-based image segmentation. The original image Lena (Fig. 3(a)) is segmented into many homogeneous regions (Fig. 3(b)), each represented by one color.

For each segmented region, one feature point is selected, and the circular region centered at this feature point is used for watermark embedding and detection. The selection strategy is as follows: feature points in the middle-scale band are first chosen in terms of stability and non-overlap. As shown in Fig. 1(c), the middle-scale band is set to [4, 12], which means that feature points with characteristic scale below 4 or above 12 are discarded. A segmented region may contain several feature points; in that case, the feature point with the largest characteristic strength is used to form the circular region, as illustrated in Fig. 1(d). Fig. 4 illustrates restored images under different distortions: Fig. 4(a)–(d) are the distorted images, Fig. 4(e)–(h) the corresponding restored images, and Fig. 4(i)–(l) the final selected feature regions. As shown in Fig. 4, the restored images have good quality and almost preserve the relatively high repeatability of the feature points.

Fig. 3. Image segmentation based on graph theory: (a) original image and (b) segmented image. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
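The merging rule summarized in Table 2, with the criterion of Eqs. (8)–(10), can be sketched with a union-find structure over the pixel graph. This is a minimal single-channel illustration under assumed parameter values (k = 300 by default), not the authors' implementation.

```python
import numpy as np

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.int_diff = [0.0] * n   # Int(C): largest MST edge merged so far

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b, w):
        a, b = self.find(a), self.find(b)
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.int_diff[a] = w        # edges arrive in non-decreasing order

def segment(image, k=300.0):
    """Table 2: sort 8-connected edges by |I(p_i) - I(p_j)| (Eq. (7)) and
    merge components when the edge weight does not exceed
    MInt = min(Int(C1) + k/|C1|, Int(C2) + k/|C2|) (Eqs. (8)-(10))."""
    h, w = image.shape
    idx = lambda y, x: y * w + x
    edges = []
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):  # 8-connectivity
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    weight = abs(float(image[y, x]) - float(image[ny, nx]))
                    edges.append((weight, idx(y, x), idx(ny, nx)))
    uf = UnionFind(h * w)
    for weight, a, b in sorted(edges):
        ra, rb = uf.find(a), uf.find(b)
        if ra != rb:
            mint = min(uf.int_diff[ra] + k / uf.size[ra],
                       uf.int_diff[rb] + k / uf.size[rb])
            if weight <= mint:
                uf.union(ra, rb, weight)
    return np.array([uf.find(i) for i in range(h * w)]).reshape(h, w)
```

Because edges are processed in non-decreasing weight order, the last edge merged into a component is the largest edge of its MST, so `int_diff` tracks Int(C) without building the tree explicitly.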

3. Robust image watermarking scheme

In this section, we propose a desynchronization attacks resilient image watermarking scheme that overcomes the aforementioned drawbacks of existing feature-based watermarking. The proposed scheme consists of two components: watermark embedding and watermark detection.

3.1. Watermark embedding

Fig. 5 shows the framework of the proposed watermark embedding scheme. The detailed steps are as follows.

Step 1: We first extract feature points using the multi-scale Gaussian filtering model and select a set of feature points with the segmentation-based feature selection strategy, then construct stable and non-overlapped circular regions O = {o1, o2, ..., ok} centered at these feature points, i.e.,

(x − u)² + (y − v)² = (k[s])²,  (11)

where (u, v) is the feature point, s is its characteristic scale, [·] denotes rounding, and k is a factor to adjust the radii of the circular regions. Once the set of feature points is located, we assign rotation and scaling invariance to these circular regions: orientation alignment is implemented by rotation to make the circular regions invariant to rotation, and scaling normalization is employed to achieve scaling invariance. The selected circular regions are then ready for watermark embedding.

Step 2: To enhance the robustness of the watermarking system against common image processing attacks, such as noise and small displacements of the feature point positions, the watermark sequence is embedded in a transform domain instead of the spatial domain. In our scheme, image moment coefficients are computed for embedding the watermark. As compared in Ref. [20], a watermark embedded in the low-order Tchebichef moments (TMs) of an image is extracted more accurately than one embedded in the Zernike moments (ZMs) [21]; however, TMs are more sensitive to image tampering than ZMs. Based on this analysis, we embed the watermark in the ZMs. In implementation, since ZMs are defined on the unit circle, we first obtain the original square patches, and the circular regions are then formed after padding the square patches with zeros.

Step 3: Let a pseudo-random watermark sequence b = (b1, ..., bL), bi ∈ {0, 1}, be generated with the key K1. For each square


Fig. 4. The distorted images and the corresponding restored images. (a) Rotation 15°, (b) affine transformation, (c) reflective transformation, (d) aspect ratio change; (e)–(h) the corresponding restored images; (i)–(l) the constructed feature regions.

Fig. 5. The framework of watermark embedding procedure.

patch, the ZMs are calculated and carefully selected, since the inherent computation error of ZMs generates inaccurate moments that are not suitable for watermark embedding. Two aspects are considered in selecting ZMs: ZMs with order m above a certain value M_max cannot be obtained accurately, and ZMs with repetition n = 4i (i = 0, 1, ...) are not accurate. Therefore, the final selected ZMs are

S = { Z_mn | m ≤ M_max, n ≥ 0, n ≠ 4i },  (12)

where M_max is set to 30 in our scheme. Furthermore, to enhance the security of the watermarking system, we use a secret key K2 to randomly select L ZMs from S and form a ZM vector Z = (Z_{p1 q1}, ..., Z_{pL qL}) with the corresponding magnitude vector A = (A_{p1 q1}, ..., A_{pL qL}).

Step 4: For each watermark bit b_i, the magnitude of Z_{p_i q_i} is modified with dither quantization modulation to implement the watermark embedding. The embedding rule is

A'_{p_i q_i} = Δ · ⌊( A_{p_i q_i} − d_i(b_i) ) / Δ⌉ + d_i(b_i),  i = 1, ..., L,  (13)

where ⌊·⌉ is the rounding operation, Δ is the quantization step, and d_i(·) is the ith dither function, satisfying d_i(1) = Δ/2 + d_i(0). The dither vector (d_1(0), ..., d_L(0)) is generated by the secret key K3 and follows a uniform distribution over [0, Δ]. Note that, because of the conjugate symmetry of ZMs, the magnitude A_{p_i, −q_i} should also be quantized with Eq. (13). The modified ZMs are

Z'_{p_i q_i} = ( A'_{p_i q_i} / A_{p_i q_i} ) · Z_{p_i q_i},  i = 1, ..., L.  (14)
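The dither quantization of Eq. (13) can be sketched as follows. This is a minimal illustration in which a pseudo-random generator seeded by `key` stands in for the secret key K3; the function name and the interface are our own assumptions.

```python
import numpy as np

def embed_bits(mags, bits, delta=5.0, key=3):
    """Dither quantization modulation of Eq. (13): quantize each selected
    ZM magnitude onto the lattice shifted by d_i(b_i), where
    d_i(1) = delta/2 + d_i(0) and d_i(0) ~ U[0, delta)."""
    mags = np.asarray(mags, float)
    rng = np.random.default_rng(key)                # stand-in for key K3
    d0 = rng.uniform(0.0, delta, len(mags))         # dither vector d_i(0)
    d = np.where(np.asarray(bits) == 1, d0 + delta / 2, d0)
    # A' = delta * round((A - d)/delta) + d  -- Eq. (13)
    return delta * np.round((mags - d) / delta) + d, d0
```

Each modified magnitude lands exactly on the lattice {Δn + d_i(b_i)}, and the quantization moves it by at most Δ/2, which bounds the embedding distortion.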


Here, Z'_{p_i q_i} and A'_{p_i q_i} are, respectively, the ith modified ZM and its corresponding magnitude. Thus, the watermarked circular regions can be reconstructed with the modified ZMs. Concretely, a watermarked circular region is composed of two parts. The first is the circular region C_rest(x, y), reconstructed from the non-modified ZMs:

C_rest(x, y) = C_o(x, y) − C_m(x, y),  (15)

with

C_m(x, y) = Σ_{i=1}^{L} [ Z_{p_i q_i} V_{p_i q_i}(x, y) + Z_{p_i, −q_i} V_{p_i, −q_i}(x, y) ],  (16)

where C_o(x, y) is the original circular region, C_m(x, y) is the region reconstructed from the selected ZMs to be modified, and V_{p_i q_i} is the corresponding ZM basis function. The second part is the circular region C'_m(x, y), reconstructed from the modified ZMs:

C'_m(x, y) = Σ_{i=1}^{L} [ Z'_{p_i q_i} V_{p_i q_i}(x, y) + Z'_{p_i, −q_i} V_{p_i, −q_i}(x, y) ].  (17)

By combining the two parts, the watermarked circular region C'(x, y) is generated:

C'(x, y) = C_rest(x, y) + C'_m(x, y).  (18)

Step 5: Replacing the original circular regions with the watermarked ones, we obtain the watermarked image.

3.2. Watermark detection

The procedure of watermark detection is shown in Fig. 6 and mainly consists of image restoration, feature region selection, and watermark detection. In detail:

Step 1: As described in Section 2, feature points are first extracted from a suspected image via the multi-scale Gaussian filtering model. Then, the suspected image is restored according to the transformation matrix T, which is computed from the matching feature point pairs between the extracted points and the pre-stored ones. At the detection end, the pre-stored feature points extracted from the original image serve as side information, so the original image itself does not need to be stored.

Step 2: For the restored image, we re-extract feature points with the same detector and adopt the same selection method to form a set of stable and non-overlapped circular regions. The high repeatability of the feature points and the robustness of the segmentation regions guarantee the maximum reappearance of the watermarked circular regions.

Step 3: Let O' = {o'1, o'2, ..., o'n} be the obtained circular regions. For each circular region, we calculate the ZMs and choose L elements with key K2.

Step 4: With the same key K3, the two dither vectors (d_1(0), ..., d_L(0)) and (d_1(1), ..., d_L(1)) are regenerated. As in Eq. (13), the magnitude of each Z'_{p_i q_i} is quantized with each of the two dither vectors:

A^j_{p_i q_i} = Δ · ⌊( A'_{p_i q_i} − d_i(j) ) / Δ⌉ + d_i(j),  j = 0, 1,  (19)

where ⌊·⌉ is the rounding operator and i = 1, ..., L. By comparing the distances between A'_{p_i q_i} and its two quantized versions, the estimated watermark bits are computed as

b̂_i = argmin_{j ∈ {0, 1}} ( A'_{p_i q_i} − A^j_{p_i q_i} )²,  i = 1, ..., L,  (20)

with

dis(0) = ( A'_{p_i q_i} − A^0_{p_i q_i} )²,  dis(1) = ( A'_{p_i q_i} − A^1_{p_i q_i} )².  (21)

Letting t = dis(0) − dis(1), the watermark bits can be estimated from the sign of t: if t < 0, b̂_i = 0; otherwise b̂_i = 1.
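The quantize-with-both-dithers decoding of Step 4, Eqs. (19)–(20), can be sketched as follows, assuming the detector regenerates the same dither vector from the shared key (here a PRNG seed standing in for K3); the function name and values are our assumptions.

```python
import numpy as np

def decode_bits(mags, delta=5.0, key=3):
    """Minimum-distance decoding of Eqs. (19)-(20): re-quantize each
    received magnitude with both dither vectors d_i(0) and
    d_i(1) = d_i(0) + delta/2, then pick the closer reconstruction."""
    rng = np.random.default_rng(key)                # regenerates d_i(0) from K3
    d0 = rng.uniform(0.0, delta, len(mags))
    bits = []
    for a, d in zip(mags, d0):
        # A^j = delta * round((A' - d_i(j))/delta) + d_i(j), j = 0, 1 -- Eq. (19)
        cands = [delta * round((a - d - j * delta / 2) / delta) + d + j * delta / 2
                 for j in (0, 1)]
        # b_i = argmin_j (A' - A^j)^2 -- Eq. (20)
        bits.append(int((a - cands[1]) ** 2 < (a - cands[0]) ** 2))
    return bits
```

For an unattacked magnitude the correct candidate reproduces it exactly (distance 0) while the wrong one is off by Δ/2, so small perturbations below Δ/4 are still decoded correctly.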

3.3. Detection threshold analysis

For the watermark detector, there are typically two kinds of errors: the false-alarm probability (no watermark embedded but one extracted) and the missing probability (watermark embedded but none extracted). The detection threshold T should be adjusted to realize a tradeoff between the two. In practice, it is usual to determine the threshold T that minimizes the missing probability subject to a fixed false-alarm probability. A false alarm occurs when the watermark is perceived to be detected in a non-watermarked image. The false-alarm probability of a circular region is [19]

P_fa = Σ_{k=T}^{L} (0.5)^L · L! / ( k! (L − k)! ),  (22)

where L is the length of the watermark sequence and T is the threshold used to judge the presence of the watermark. Normally, an image is claimed to be watermarked if the watermark is detected successfully in at least two circular regions. According to this rule, the false-alarm probability of the suspected image is

P_fa_image = Σ_{i=2}^{m} C(m, i) · (P_fa)^i · (1 − P_fa)^(m−i),  (23)

where m is the number of circular regions in the suspected image. Given the false-alarm probability, we can determine the detection threshold T. For example, when the watermark length and the detection threshold T are respectively set to 30 and 23, the false-alarm probability satisfies P_fa ≤ 10⁻⁴.
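Eqs. (22)–(23) can be evaluated directly as binomial tail sums; a short sketch follows (the function names are our own).

```python
from math import comb

def pfa_region(L, T):
    """Per-region false-alarm probability of Eq. (22): at least T of L
    watermark bits matching by chance under H0 (each matches w.p. 0.5)."""
    return sum(comb(L, k) for k in range(T, L + 1)) * 0.5 ** L

def pfa_image(L, T, m):
    """Image-level false-alarm probability of Eq. (23): the watermark is
    (wrongly) detected in at least two of the m circular regions."""
    p = pfa_region(L, T)
    return sum(comb(m, i) * p ** i * (1 - p) ** (m - i) for i in range(2, m + 1))
```

Sweeping T with these functions gives the smallest threshold whose image-level false-alarm probability stays below a target value.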

Fig. 6. The framework of watermark detection procedure.


4. Experimental results and analysis

Fifty popular 512 × 512 images from the Internet, including Baboon, Lena, and Peppers, are used to evaluate the imperceptibility and robustness of the proposed watermarking scheme. In the experiments, we only list the results of Baboon, Lena, and Peppers, for the following reasons. On the one hand, these three images are benchmarks: Baboon represents images with complex texture, Lena has mixed characteristics, and Peppers exhibits luminosity changes [23]; on the other hand, they make it possible to compare performance with other representative methods.

4.1. Watermark imperceptibility

The peak signal-to-noise ratio (PSNR) between the original image and the watermarked version is the criterion for watermark imperceptibility. By analysis, we find that the PSNR mainly depends on three factors: the watermark length L, the quantization step Δ, and the adjustment factor k of the circular region radius. On the one hand, for fixed L and k, a large value of Δ increases the watermark strength but reduces the PSNR; on the other hand, for fixed Δ and k, the more watermark bits, the lower the PSNR. Moreover, a large watermark embedding region decreases the PSNR. In the experiments, we therefore set Δ = 5 and L = 30, while k can be used as a key to enhance the security of the watermarking system. The PSNR values of the watermarked images are above 45 dB. Fig. 7 shows the original images and the corresponding watermarked images, which manifests that the proposed watermarking scheme has good imperceptibility.

4.2. Watermark robustness

To prove the robustness of the proposed watermarking scheme, we adopt StirMark 4.0 [22] to generate various attacks, including common image processing attacks and desynchronization attacks. Tables 3 and 4 show the results, in which the numerator is the number of regions in which the watermark has been successfully detected and the denominator is the number of regions in which the watermark has actually been embedded. As shown in Table 3, the proposed scheme is robust against many common image processing attacks, such as median filtering with size 3 × 3 and 5 × 5, Gaussian filtering with size 3 × 3, uniform noise, Gaussian noise, and JPEG compression. Table 4 clearly shows that the proposed scheme can resist various desynchronization attacks, including global attacks such as cropping, scaling, rotation, and global linear transformations; our scheme is also successful against local desynchronization attacks such as RBAs. Moreover, the proposed scheme is compared with two representative methods in Tables 3 and 4. The overall detection ratio of our scheme is nearly 54% for common image processing attacks, while scheme [7] and scheme [8] achieve 36% and 63%, respectively, so our scheme is comparable to scheme [8]. Under desynchronization attacks, the overall detection ratio of our scheme is about 46%, whereas scheme [7] and scheme [8] achieve 19% and 35%, so our scheme outperforms both. For complicated desynchronization attacks such as RBAs and reflection transformations, the performance of our scheme is the best among the three schemes.
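Referring back to the imperceptibility criterion of Section 4.1, the PSNR between an original and a watermarked image can be computed with the standard formula, sketched here for reference (the function name is our own).

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    """Peak signal-to-noise ratio in dB between two grayscale images:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(original, float) - np.asarray(watermarked, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Values above roughly 40 dB are generally considered imperceptible; the paper reports watermarked images above 45 dB.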

Fig. 7. The imperceptibility of the proposed watermarking scheme. The first column is the original images, and the second column is the watermarked images.


Table 3. Robustness against various common image processing attacks of the proposed watermarking scheme (detected regions / embedded regions)

Attack type              | Baboon              | Lena                | Peppers
                         | Ours  [7]   [8]     | Ours  [7]   [8]     | Ours  [7]   [8]
Median filter 3×3        | 4/8   4/8   11/17   | 4/9   5/8   7/13    | 3/9   3/8   14/18
Median filter 5×5        | 3/8   3/8   11/17   | 6/9   5/8   5/13    | 6/9   3/8   10/18
Gaussian filter 3×3      | 4/8   1/8   8/17    | 6/9   3/8   5/13    | 6/9   3/8   9/18
Uniform noise (0.01)     | 4/8   2/8   12/17   | 6/9   4/8   9/13    | 4/9   2/8   11/18
Gaussian noise (0.001)   | 4/8   3/8   11/17   | 3/9   4/8   8/13    | 3/9   2/8   10/18
JPEG 70                  | 5/8   2/8   15/17   | 4/9   3/8   9/13    | 5/9   6/8   16/18
JPEG 50                  | 5/8   3/8   13/17   | 5/9   2/8   8/13    | 5/9   5/8   16/18
JPEG 30                  | 4/8   1/8   9/17    | 5/9   0/8   8/13    | 4/9   4/8   12/18
JPEG 15                  | 4/8   1/8   9/17    | 4/9   0/8   8/13    | 4/9   4/8   10/18

Table 4. Robustness against various desynchronization attacks of the proposed watermarking scheme (detected regions / embedded regions)

Attack type              | Baboon              | Lena                | Peppers
                         | Ours  [7]   [8]     | Ours  [7]   [8]     | Ours  [7]   [8]
Cropping 5%              | 6/8   2/8   10/17   | 4/9   3/8   6/13    | 5/9   2/8   9/18
Cropping 10%             | 6/8   2/8   9/17    | 4/9   2/8   6/13    | 5/9   2/8   7/18
Scaling 90%              | 4/8   2/8   7/17    | 3/9   3/8   8/13    | 5/9   4/8   7/18
Scaling 150%             | 6/8   1/8   9/17    | 3/9   3/8   6/13    | 4/9   3/8   9/18
Rotation 5°              | 3/8   3/8   5/17    | 3/9   3/8   9/13    | 4/9   5/8   6/18
Rotation 30°             | 3/8   0/8   7/17    | 3/9   2/8   8/13    | 5/9   1/8   4/18
Rotation 90°             | 3/8   0/8   0/17    | 4/9   0/8   0/13    | 5/9   0/8   0/18
Affine I (1.007)         | 4/8   3/8   9/17    | 3/9   2/8   6/13    | 5/9   2/8   6/18
Affine II (1.010)        | 5/8   1/8   8/17    | 3/9   3/8   5/13    | 2/9   1/8   6/18
Reflection I (X-axis)    | 3/8   0/8   0/17    | 3/9   0/8   0/13    | 3/9   0/8   0/18
Reflection II (Y-axis)   | 3/8   0/8   0/17    | 2/9   0/8   0/13    | 3/9   0/8   0/18
Aspect (1.0, 1.1)        | 5/8   0/8   7/17    | 3/9   0/8   8/13    | 5/9   0/8   11/18
Aspect (0.7, 0.9)        | 2/8   0/8   3/17    | 3/9   0/8   5/13    | 4/9   0/8   7/18
Random bending           | 3/8   2/8   6/17    | 4/9   4/8   5/13    | 4/9   2/8   8/18

and desynchronization attacks, which mainly benefits from the combination of the feature selection strategy, global restoration, and local embedding.
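The rotation invariance that the local embedding relies on can be checked numerically. The sketch below is not the authors' implementation (it maps a square patch directly onto the unit disk and ignores the region normalization and pixel-area scaling the paper performs); it only demonstrates that the magnitude of a Zernike moment A_nm is unchanged when the patch is rotated:

```python
import math
import numpy as np

def zernike_moment(img, n, m):
    """Zernike moment A_nm of a square patch mapped onto the unit disk.
    Assumes m >= 0, n >= m, and n - m even (otherwise R_nm vanishes)."""
    N = img.shape[0]
    coords = np.linspace(-1.0, 1.0, N)
    x, y = np.meshgrid(coords, coords)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    # Radial polynomial R_nm(rho)
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + abs(m)) // 2 - s)
                * math.factorial((n - abs(m)) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    kernel = R * np.exp(-1j * m * theta)   # conjugate basis function V*_nm
    return (n + 1) / np.pi * np.sum(img[inside] * kernel[inside])

rng = np.random.default_rng(0)
patch = rng.uniform(0.0, 255.0, (64, 64))
a = abs(zernike_moment(patch, 4, 2))
b = abs(zernike_moment(np.rot90(patch), 4, 2))   # exact 90-degree rotation
assert abs(a - b) < 1e-8 * max(a, 1.0)           # |A_nm| is rotation-invariant
```

A rotation of the region only multiplies A_nm by a phase factor e^{jmφ}, so quantizing |A_nm| (as quantization-based Zernike schemes do) survives rotation, while common signal-processing attacks perturb |A_nm| only slightly.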

5. Conclusion

In this paper, we propose a new image watermarking scheme resilient to common image processing attacks as well as desynchronization attacks. The benefits of this scheme are threefold: (1) an image restoration method that calculates the geometrical transform matrices by feature points matching and the RANSAC iteration algorithm, which helps resynchronize watermark embedding and detection; (2) an effective feature selection strategy that combines graph-based image segmentation with feature points refinement; and (3) watermark embedding and detection conducted in the Zernike moments of the normalized circular regions, which enhances the robustness against common image processing attacks. Our method can be further improved by developing a more accurate image restoration approach and a more robust embedding approach. Moreover, how to integrate current visual coding techniques [24–26] with robust image watermarking is also part of our future work.
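The restoration step summarized in (1) can be sketched as follows. This is an illustrative re-implementation, not the authors' code: it estimates a 2-D affine transform from point correspondences by RANSAC (minimal samples of three matches, inlier counting, final refit on the consensus set), which is the role that feature points matching and the RANSAC iteration play in the scheme; inverting the estimated transform would then restore the attacked image before detection.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform: dst ~= [src | 1] @ P, with P of shape (3, 2)."""
    X = np.hstack([src, np.ones((len(src), 1))])
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return P

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    """Estimate an affine transform that is robust to mismatched feature pairs."""
    rng = np.random.default_rng(seed)
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])
    best = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)   # minimal sample: 3 matches
        P = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(X @ P - dst, axis=1)    # reprojection error per match
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_affine(src[best], dst[best]), best    # refit on the consensus set

# Synthetic demo: a 30-degree rotation plus translation, with 15 of 50 matches wrong.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, (50, 2))
th = np.deg2rad(30.0)
A = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
t = np.array([10.0, -5.0])
dst = src @ A.T + t
dst[:15] += rng.uniform(30.0, 60.0, (15, 2))         # simulated feature mismatches
P, inliers = ransac_affine(src, dst)
# P[:2].T is the recovered linear part, P[2] the recovered translation
assert inliers.sum() == 35
assert np.allclose(P[:2].T, A, atol=1e-6) and np.allclose(P[2], t, atol=1e-6)
```

Because a minimal all-inlier sample determines the affine transform exactly, the consensus set collects precisely the correct matches, and the final least-squares refit recovers the attack transform even with 30% mismatches.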

Acknowledgments

We thank the anonymous reviewers for their helpful comments and suggestions. This research was supported partially by the National Natural Science Foundation of China (Nos. 61125204, 61172146, 61101250, and 60902082), the Natural Science Basic Research Plan in Shaanxi Province of China (Nos. 2010JQ8026 and 2011JM8008), the China Postdoctoral Science Foundation (Nos. 20100471603 and 201104660), and the Program for New Scientific and Technological Star of Shaanxi Province (No. 2012KJXX-24).

References

[1] L. An, X. Gao, Y. Yuan, et al., Content-adaptive reliable robust lossless data embedding, Neurocomputing 79 (1) (2012) 1–11.
[2] J.K. O'Ruanaidh, T. Pun, Rotation, scale and translation invariant spread spectrum digital image watermarking, Signal Process. 66 (3) (1998) 303–317.
[3] D. Simitopoulos, D.E. Koutsonanos, M.G. Strintzis, Robust image watermarking based on generalized Radon transformation, IEEE Trans. Circ. Syst. Video Tech. 13 (8) (2003) 732–745.
[4] S. Pereira, T. Pun, Robust template matching for affine resistant image watermarks, IEEE Trans. Image Process. 9 (6) (2000) 1123–1129.
[5] P. Bas, J.M. Chassery, B. Macq, Geometrically invariant watermarking using feature points, IEEE Trans. Image Process. 11 (9) (2002) 1014–1028.
[6] C.W. Tang, H.M. Hang, A feature-based robust digital image watermarking scheme, IEEE Trans. Signal Process. 51 (4) (2003) 950–959.
[7] J.S. Seo, C.D. Yoo, Local image watermarking based on feature points of scale-space representation, Pattern Recognition 37 (7) (2004) 1365–1375.
[8] C. Deng, X. Gao, X. Li, D. Tao, A local Tchebichef moments-based robust image watermarking, Signal Process. 89 (8) (2009) 1531–1539.
[9] X. Gao, C. Deng, X. Li, D. Tao, Geometric distortion insensitive image watermarking in affine covariant regions, IEEE Trans. Syst. Man Cybern. C Appl. Rev. 40 (3) (2010) 278–286.
[10] C. Deng, X. Gao, X. Li, D. Tao, Local histogram based geometric invariant image watermarking, Signal Process. 90 (12) (2010) 3256–3264.
[11] X. Wang, L. Hou, J. Wu, A feature-based robust digital image watermarking against geometric attacks, Image Vis. Comput. 26 (7) (2008) 980–989.
[12] J.-S. Tsai, W.-B. Huang, Y.-H. Kuo, M.-F. Horng, Joint robustness and security enhancement for feature-based image watermarking using invariant feature regions, Signal Process. 92 (6) (2012) 1431–1445.
[13] D. Zheng, S. Wang, J. Zhao, RST invariant watermarking algorithm with mathematical modeling and analysis of the watermarking processes, IEEE Trans. Image Process. 18 (5) (2009) 1055–1068.
[14] H. Bay, A. Ess, T. Tuytelaars, L.V. Gool, Speeded-up robust features (SURF), Comput. Vis. Image Understand. 110 (3) (2008) 346–359.
[15] D.G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110.
[16] M.A. Fischler, R.C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24 (6) (1981) 381–395.
[17] P.F. Felzenszwalb, D.P. Huttenlocher, Efficient graph-based image segmentation, Int. J. Comput. Vis. 59 (2) (2004) 167–181.
[18] L. Li, X. Yuan, Z. Lu, J.-S. Pan, Rotation invariant watermark embedding based on scale-adapted characteristic regions, Inform. Sci. 180 (15) (2010) 2875–2888.
[19] X. Wang, J. Wu, P. Liu, A new digital image watermarking algorithm resilient to desynchronization attacks, IEEE Trans. Inform. Forensics Secur. 2 (4) (2007) 655–663.
[20] S.M. Elshoura, D.B. Megherbi, Comparison of Zernike and Tchebichef moments for image tampering detection sensitivity and watermark recovery, in: Proceedings of the IEEE International Conference on Technologies for Homeland Security, 2008, pp. 615–619.
[21] Y. Xin, S. Liao, M. Pawlak, Circularly orthogonal moments for geometrically robust image watermarking, Pattern Recognition 40 (12) (2007) 3740–3752.
[22] F.A.P. Petitcolas, Watermarking schemes evaluation, IEEE Signal Process. Mag. 17 (5) (2000) 58–64.
[23] X. Gao, C. Deng, X. Li, D. Tao, Local feature based geometric-resistant image information hiding, Cogn. Comput. 2 (2) (2010) 68–77.
[24] R. Ji, L. Duan, J. Cheng, et al., Location discriminative vocabulary coding for mobile landmark search, Int. J. Comput. Vis. 96 (3) (2012) 290–314.
[25] R. Ji, H. Yao, W. Liu, et al., Task-dependent visual-codebook compression, IEEE Trans. Image Process. 21 (4) (2012) 2282–2293.
[26] R. Ji, H. Yao, X. Sun, Actor-independent action search using spatiotemporal vocabulary with appearance hashing, Pattern Recognition 44 (3) (2011) 624–638.

Cheng Deng received the B.Sc., M.Sc., and Ph.D. degrees in signal and information processing from Xidian University, Xi’an, China. Currently, he is an associate professor with the School of Electronic Engineering at Xidian University. His research interests include information hiding, multimedia retrieval, and computer vision.

Dongyu Huang is a lecturer with the School of Information and Control Engineering at Xi’an University of Architecture and Technology. Her research interests are information hiding and computer vision.

Feng Ji is a Ph.D. candidate with the School of Electronic Engineering at Xidian University. His research interests are information hiding and reversible watermarking.

Lingling An received the B.Sc. and M.Sc. degrees in computer science and technology, and Ph.D. degree in information and communication engineering from Xidian University, Xi’an, Shaanxi, China. She is an associate professor with the School of Computer Science and Technology at Xidian University. Her research interests include data hiding, visual cognition and machine learning.