Int. J. Electron. Commun. (AEÜ) 66 (2012) 275–285
A new facet in robust digital watermarking framework

Gaurav Bhatnagar
Department of Electrical and Computer Engineering, University of Windsor, ON, Canada N9B 3P4

Article history: Received 8 December 2010; Accepted 5 August 2011

Keywords: Watermarking; Redundant wavelet transform; Chaotic map; Hessenberg decomposition; Automatic thresholding
Abstract

Generally, in watermarking techniques the size of the watermark is very small compared to the host image. On the contrary, this work presents a new facet of the watermarking framework in which the size of the host image is very small compared to the watermark image. The core idea of the proposed technique is to first scale up the host image to the size of the watermark using a chaotic map and Hessenberg decomposition, followed by the redundant wavelet transform. A meaningful gray-scale watermark is embedded in the low frequency sub-band at the finest level using singular value decomposition. To prevent ambiguity and enhance security, a binary watermark is also embedded in a loss-less manner, which ensures the authenticity of the watermarked image. Finally, a reliable watermark extraction scheme is developed for extracting both watermarks. The experimental results demonstrate better visual imperceptibility and resiliency of the proposed scheme against a variety of intentional and un-intentional attacks.

© 2011 Elsevier GmbH. All rights reserved.
1. Introduction

As the Internet and the World Wide Web have gained great popularity, an increasing amount of multimedia data is stored and transmitted digitally. However, this popularity introduces a new set of challenging security problems, such as authentication, copyright protection, and the illegal copying, distribution and tampering of multimedia. To address these concerns, techniques are needed that can protect multimedia against malicious attacks and intentional alterations. Digital watermarking has recently emerged as a viable solution to the above mentioned issues, since it makes it possible to identify the author, owner, distributor or authorized consumer of the images. Digital watermarking is a technique for inserting information into multimedia content. The embedding/insertion is made in such a way that it must not cause serious degradation to the original multimedia. Here, the problem of watermarking digital image content is addressed.

In the literature, a variety of watermarking techniques have been proposed. These techniques can be broadly classified into two categories according to the embedding domain: spatial and transform domain. Spatial domain approaches [1,2] are the simplest and the earliest algorithms, based on the modification of pixel intensities. These algorithms are less robust against attacks. On the other hand, transform domain approaches insert the watermark into transform coefficients, such as discrete Fourier transform (DFT) [3], discrete cosine transform (DCT) [4,5], and wavelet transform (WT) [6–12] coefficients.
Solachidis and Pitas [3] have presented a statistical analysis of the behavior of a blind robust watermarking system based on pseudo-random signals embedded in the magnitude of the Fourier transform of the host data. Cox et al. [4] have presented one of the most popular watermarking schemes, based on spread spectrum communication. The watermark is embedded into the first k highest magnitude DFT/DCT coefficients of the image and extraction is done by comparing the DFT/DCT coefficients of the watermarked and the original image. Barni et al. [5] have proposed a watermarking algorithm which operates in the frequency domain and embeds a pseudo-random sequence of real numbers in a selected set of DCT coefficients. The watermark can be reliably extracted blindly by exploiting the statistical properties of the embedded sequence. Xia et al. [6] have added a pseudo-random sequence to the largest wavelet coefficients of the detail bands, where perceptual considerations are taken into account by setting the amount of modification proportional to the strength of the coefficient itself. Watermark detection is achieved through comparison with the original un-watermarked image. Barni et al. [7] have proposed a method based on the characteristics of the human visual system operating in the wavelet domain. Based on the texture and the luminance content of all image sub-bands, a mask is computed pixel by pixel. Hsu and Wu [8] have presented a multilevel wavelet transformation technique in which both the host and watermark images are transformed into the wavelet domain and the wavelet coefficients of the watermark are added to the corresponding wavelet coefficients of the host image. Wang et al. [9] have proposed a key-dependent wavelet transform for watermarking. They used a randomly generated
orthonormal filter bank as a major part of the key. Kundur and Hatzinakos [10] have proposed the use of a gray scale logo as the watermark. They addressed a multiresolution fusion based watermarking method for embedding gray scale logos into wavelet transformed images via a salience factor. Dawei et al. [11] have proposed a new technique in which the wavelet transform is applied locally, based on a chaotic logistic map, and the watermark is embedded. This technique shows very good robustness to geometric attacks, but it is sensitive to common attacks like filtering and sharpening. A detailed survey on wavelet based watermarking techniques can be found in [12]. Recently, singular value decomposition (SVD) based watermarking techniques [13–16] and their variants have been proposed. These approaches work on the simple concept of finding the SVD of a cover image, or the SVD of each block of the cover image, and then modifying the singular values to embed the watermark. Some researchers have also presented hybrid watermarking schemes in which SVD is combined with other existing transforms. An SVD based scheme withstands a variety of attacks, but it is not resistant to geometric attacks like rotation, cropping, etc. Hence, hybridization is needed to improve the performance. Ganic and Eskicioglu [17] have presented a hybrid watermarking scheme based on DWT and SVD. After decomposing the cover image into four bands, SVD is applied on each band, and the singular values of each band are modified with the singular values of the visual watermark. Sverdlov et al. [18] have used the same concept with DCT and SVD: the DCT coefficients are mapped into four quadrants via a zig-zag scan and the singular values of each quadrant are modified. Li et al. [19] have proposed a similar hybrid DWT–SVD domain watermarking scheme exploiting the properties of the human visual system. Bhatnagar and Raman [20] have proposed a reference watermarking scheme in which the watermark is embedded in the singular values of a reference image obtained by directive contrast and the wavelet transform. Further, Bhatnagar and Raman [21] have proposed another reference watermarking scheme in which a reference watermark is embedded in the singular values of the host image using the cosine transform and the wavelet packet transform. Chang et al. [22] have proposed a new technique in which embedding is done in both the singular values and the singular vector components.

In this paper, a new facet of the watermarking technique is introduced for images. Unlike existing watermarking techniques, the proposed technique deals with the situation where the size of the watermark is greater than the size of the host image. The core idea behind the proposed framework is to first scale up the size of the host image using a non-linear chaotic map and Hessenberg decomposition to get a randomized image. Due to the use of the non-linear chaotic map, this scaling essentially introduces randomness into the image along with the increase in size. The randomized image is then decomposed by means of the redundant wavelet transform and the gray-scale watermark is embedded in the low frequency sub-band at the finest level using singular value decomposition, followed by the inverse transform to get the watermarked randomized image. Recently, some researchers have shown that SVD based schemes fail under ambiguity attacks [23,24]. To prevent ambiguity and enhance the security, a binary watermark is embedded in the watermarked randomized image with the help of automatic thresholding.
Finally, the watermarked randomized image is scaled down to obtain the watermarked image. At the extraction end, the binary watermark is extracted first and compared with the original one. The gray-scale watermark is extracted if and only if the similarity between the original and extracted binary watermarks is greater than a prescribed threshold. One possible practical situation for the proposed technique comes from defense. Suppose a map has to be transmitted from one army station to another. One option is to segment the whole map into several parts and transmit each part using an existing method (where the size of the host image is larger than the watermark). In this case, the complexity and the cost of transmitting the map are very high. As a possible solution, the proposed algorithm transmits the whole map in only one attempt with minimum complexity and cost.

The rest of the paper is organized as follows. The description of the used terminologies, i.e. redundant wavelet transform, singular value decomposition, chaotic map and automatic thresholding, is given in Section 2. Section 3 provides the explanation of Hessenberg decomposition, followed by the proposed embedding and extraction processes in Section 4. Section 5 presents experimental results using the proposed watermarking technique and, finally, concluding remarks are given in Section 6.
2. Mathematical preliminaries

In this section, the main terminologies and mathematical concepts used in the proposed algorithm are presented. These terminologies are as follows.
2.1. Redundant wavelet transform (RWT)

The wavelet transform is one of the most versatile mathematical transforms, with numerous applications in different areas. Although the wavelet transform led to successful implementations in image compression, the results were far from optimal for other applications such as filtering, deconvolution, detection or, more generally, the analysis of signals. This is mainly due to the loss of the translation-invariance property in the wavelet transform, leading to a large number of artifacts when an image is reconstructed after modification of its wavelet coefficients. In order to rectify this drawback of the wavelet transform, the redundant wavelet transform (RWT) has been proposed. The RWT has a long history, having been independently discovered a number of times and given a number of different names, including the algorithme à trous [25], the undecimated WT (UWT) [26], the overcomplete WT (OWT) [27], the shift-invariant WT (SIWT) [28], and discrete wavelet frames (DWFs) [29]. The RWT was originally developed as a discrete approximation to the continuous wavelet transform. Subsequent formulations realized that removing the downsampling operation from the traditional critically sampled WT produces an overcomplete representation with shift invariance, since the well known shift variance of the WT arises from its use of downsampling, whereas the RWT is shift invariant because the spatial sampling rate is fixed across scales. As a result, the size of each subband in an RWT is exactly the same as that of the input signal (an illustration is depicted in Fig. 1(a)). The RWT is defined in terms of an underlying wavelet transform. Let $h \in \ell^2(\mathbb{Z})$ and $g \in \ell^2(\mathbb{Z})$ be the scaling and wavelet filters, respectively, of an orthonormal wavelet transform. In essence, the à trous implementation of the RWT removes the decimation operations from Mallat's algorithm for the critically sampled wavelet transform. To retain the proper multiresolution characteristic of the transform, the scaling and wavelet filters must be adjusted at each scale. Specifically, the upsampling operator is defined as

$$x[k] \uparrow 2 = \begin{cases} x[k/2], & k \text{ even} \\ 0, & k \text{ odd} \end{cases} \qquad (1)$$

and the scaling and wavelet filters at scale $j+1$ as

$$h_{j+1}[k] = h_j[k] \uparrow 2, \qquad g_{j+1}[k] = g_j[k] \uparrow 2 \qquad (2)$$
Fig. 1. (a) Difference between image decompositions (2-level) for DWT and RWT, (b) illustration of singular value decomposition, and (c) binarization of cameraman image using Otsu’s algorithm.
where $h_0[k] = h[k]$ and $g_0[k] = g[k]$. Finally, the RWT of any signal $x \in \ell^2(\mathbb{Z})$ is implemented recursively with the filter-bank operations

$$c_{j+1}[k] = h_j[-k] * c_j[k], \qquad d_{j+1}[k] = g_j[-k] * c_j[k] \qquad (3)$$

where $c_0 = x$ and $j = 0, 1, \ldots, J-1$. The $J$-scale RWT is the collection of subbands resulting from the recursive filtering operations, i.e., $X^J = \mathrm{RWT}_J[x] = [\, c_J \; d_J \; d_{J-1} \; \cdots \; d_1 \,]$, such that

$$\|X^J\|^2 = \|c_J\|^2 + \sum_{j=1}^{J} \|d_j\|^2 \qquad (4)$$
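For illustration, the RWT can be computed with the stationary wavelet transform of the PyWavelets library; the following minimal sketch (assuming Haar filters and an image whose sides are divisible by 2^L) confirms that every subband keeps the size of the input, unlike the critically sampled DWT:

```python
import numpy as np
import pywt

# 2-level redundant (undecimated / stationary) wavelet transform of an image.
image = np.random.rand(256, 256)
coeffs = pywt.swt2(image, wavelet='haar', level=2)   # list of (cA, (cH, cV, cD)) tuples

for cA, (cH, cV, cD) in coeffs:
    # No downsampling is performed, so each subband matches the input size (shift invariance).
    assert cA.shape == cH.shape == cV.shape == cD.shape == image.shape

# The transform is invertible: coefficients can be modified and the image rebuilt.
reconstructed = pywt.iswt2(coeffs, wavelet='haar')
assert np.allclose(reconstructed, image)
```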
2.2. Singular value decomposition

Singular value decomposition [32] is a way of factoring matrices into a series of linear approximations that expose the underlying structure of the matrix. Let $X$ be a real (complex) matrix of order $m \times n$. The singular value decomposition (SVD) of $X$ is the factorization (see Fig. 1(b))

$$X = U S V^{T} \qquad (5)$$

where $U$ and $V$ are orthogonal (unitary), and

$$S = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r) \qquad (6)$$

where $\sigma_i,\ i = 1, \ldots, r$ are the singular values of the matrix $X$ with $r = \min(m, n)$, satisfying $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r$. The first $r$ columns of $V$ are the right singular vectors and the first $r$ columns of $U$ are the left singular vectors of $X$. The use of SVD in digital image processing has some advantages. First, the size of the matrices for the SVD transformation is not fixed: they can be square or rectangular. Secondly, the singular values of a digital image are only slightly affected if general image processing is performed. Finally, singular values contain intrinsic algebraic image properties.
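As a small illustrative sketch, the factorization and the ordering of the singular values can be verified with NumPy:

```python
import numpy as np

# Factor a random 4 x 6 matrix and verify the SVD properties used later for embedding.
X = np.random.rand(4, 6)
U, S, Vt = np.linalg.svd(X, full_matrices=False)    # S holds sigma_1 >= sigma_2 >= ... >= sigma_r

assert np.allclose(U @ np.diag(S) @ Vt, X)          # X = U S V^T   (Eq. 5)
assert np.all(np.diff(S) <= 0)                      # non-increasing singular values
assert np.allclose(U.T @ U, np.eye(len(S)))         # orthonormal left singular vectors
```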
2.3. Non-linear chaotic map

A chaotic system is a deterministic non-linear system with pseudo-stochastic properties [30]. Due to its interesting properties, such as non-periodicity, unpredictability, sensitivity to the initial parameters and Gauss-like statistical characteristics, many chaotic systems nowadays serve as stochastic signal/sequence generators. In this work, a piecewise non-linear map is used in order to create a digital sequence. Mathematically, a piecewise non-linear map (PWNLM) is a map $F : I \to I$, where $I = [0, 1]$, described as [31]

$$x_{k+1} = F(x_k) = \begin{cases} \dfrac{1 + a_i}{I_{i+1} - I_i}\,(x_k - I_i) - \dfrac{a_i}{(I_{i+1} - I_i)^2}\,(x_k - I_i)^2, & \text{if } x_k \in [I_i, I_{i+1}) \\[4pt] 0, & \text{if } x_k = 0.5 \\[4pt] F(x_k - 0.5), & \text{if } x_k \in (0.5, 1] \end{cases} \qquad (7)$$

where $x_k \in [0, 1]$ and the $I_i$ are the sub-interval endpoints such that $0 = I_0 < I_1 < \cdots < I_i < \cdots < I_{n+1} = 0.5$. The parameter $a_i \in (-1, 0) \cup (0, 1)$ tunes the sequence in the $i$th interval such that

$$\sum_{i=0}^{n-1} (I_{i+1} - I_i)\, a_i = 0 \qquad (8)$$

The interesting properties of the above mentioned map are summarized as follows.
• The iteration system obtained from Eq. (7), i.e. $x_{k+1} = F(x_k)$, is chaotic for all $x_k \in [0, 1]$.
• The sequence $\{x_k\}_{k=1}^{\infty}$ is ergodic in $[0,1]$ with the uniform probability distribution function $\rho(x) = 1$, which further shows the uniformity of the map, i.e. every value in $[0,1]$ has an equal probability of being selected.
• The sequence $\{x_k\}_{k=1}^{\infty}$ has a $\delta$-like autocorrelation function given as

$$R_F(r) = \lim_{j \to \infty} \frac{1}{j} \, \frac{\sum_{k=1}^{j} x_k\, x_{k+r}}{\sum_{k=1}^{j} x_k^2}, \qquad r \ge 0$$
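The map is used purely as a key-dependent pseudo-random sequence generator. The sketch below illustrates this role using the fully chaotic logistic map as a simple, openly substituted stand-in for Eq. (7), since the interval partition and the parameters a_i are not fixed here; the seeds correspond to the keys k1 and k2 used later in the embedding step.

```python
import numpy as np

def chaotic_sequence(seed: float, length: int) -> np.ndarray:
    """Iterate a chaotic map from a key-dependent seed to obtain a pseudo-random sequence.

    The paper iterates the piecewise non-linear map of Eq. (7); the logistic map
    x_{k+1} = 4 x_k (1 - x_k) is used here only as a simple chaotic stand-in.
    """
    x, out = seed, np.empty(length)
    for k in range(length):
        x = 4.0 * x * (1.0 - x)
        out[k] = x
    return out

# Two key-dependent sequences of length M*M and N*N, later stacked into the
# arrays K1 and K2 of Step 1 of the embedding process (Section 4.1).
k1, k2 = 0.3567, 0.5965        # key values also used in the sensitivity analysis (Section 5.4)
M = N = 256
K1 = chaotic_sequence(k1, M * M).reshape(M, M)
K2 = chaotic_sequence(k2, N * N).reshape(N, N)
```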
2.4. Automatic thresholding: Otsu's method

Otsu's method [33] is used to automatically reduce a gray level image to a binary image via histogram shape-based image thresholding. The algorithm assumes that the image to be thresholded contains two classes of pixels, viz. foreground and background, followed by the calculation of the optimum threshold by maximizing a discriminant criterion, i.e. the separability of the resultant classes in gray levels. Let us assume that the gray level of the given image $I$ ranges in $[0, L-1]$, where $L$ is the total number of gray levels of the image. Otsu's method searches for an optimal threshold value $T_{opt}$ in this range such that the objective function $J_{Otsu}(T)$ achieves its maximum. Mathematically,

$$T_{opt} = \arg\max_{0 \le T \le L-1} J_{Otsu}(T) \qquad (9)$$
where the objective function is given by

$$J_{Otsu}(T) = \frac{p_1(T)\, p_2(T)\, \big(m_1(T) - m_2(T)\big)^2}{\sigma^2} \qquad (10)$$

where $p_1(T) = \sum_{l=0}^{T} h(l) \big/ \sum_{l=0}^{L-1} h(l)$ and $m_1(T) = \sum_{l=0}^{T} l\,h(l) \big/ \sum_{l=0}^{T} h(l)$ denote the prior probability and the mean of the foreground, respectively, whereas $p_2(T) = \sum_{l=T+1}^{L-1} h(l) \big/ \sum_{l=0}^{L-1} h(l)$ and $m_2(T) = \sum_{l=T+1}^{L-1} l\,h(l) \big/ \sum_{l=T+1}^{L-1} h(l)$ denote the prior probability and mean of the background, respectively. Here $\sigma^2 = \sum_{l=0}^{L-1} (l - m)^2\, h(l)$ is the variance of the gray levels of the image, $m$ is the mean of the total gray levels, and $\{h(l) : l = 0, 1, \ldots, L-1\}$ is the gray level histogram or distribution of the image. As soon as the optimal threshold is obtained, the binarization of the image $I$ is done as

$$B_I(i, j) = \begin{cases} 1, & \text{if } I(i, j) \ge T_{opt} \\ 0, & \text{otherwise} \end{cases} \qquad (11)$$

From Eq. (10), it is clear that maximizing Otsu's objective function maximizes the between-class separation of the foreground and background while, at the same time, minimizing the total variance within the foreground and background classes. It has been shown [33] that the above criterion achieves its maximum when the distributions of both classes are two-peaked, especially Gaussian with the same class variance. Again from Eq. (10), we can also see that Otsu's criterion pays almost equal attention to both the foreground and the background, which is the main benefit of Otsu's method in thresholding. A simple example is depicted in Fig. 1(c).
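A compact sketch of the threshold search follows (an exhaustive maximization of Eq. (10); since the total variance does not depend on T, it can be dropped from the argmax):

```python
import numpy as np

def otsu_threshold(image: np.ndarray, bins: int = 256) -> float:
    """Return the threshold that maximizes Otsu's criterion (Eqs. 9 and 10)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = hist.sum()
    best_t, best_j = edges[0], -np.inf
    for t in range(1, bins):
        w1, w2 = hist[:t].sum(), hist[t:].sum()
        if w1 == 0 or w2 == 0:
            continue
        m1 = (centers[:t] * hist[:t]).sum() / w1           # mean of the lower (foreground) class
        m2 = (centers[t:] * hist[t:]).sum() / w2           # mean of the upper (background) class
        j = (w1 / total) * (w2 / total) * (m1 - m2) ** 2   # sigma^2 omitted (constant in T)
        if j > best_j:
            best_t, best_j = edges[t], j
    return best_t

# Binarization as in Eq. (11).
img = (np.random.rand(64, 64) * 255).astype(np.uint8)
binary = (img >= otsu_threshold(img)).astype(np.uint8)
```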
3. Hessenberg decomposition

Hessenberg decomposition [32] is the factorization of a general matrix $A$ by orthogonal similarity transformations into the form

$$A = Q H Q^{T} \qquad (12)$$

where $Q$ is orthogonal and $H$ is an upper Hessenberg matrix, meaning that $h_{ij} = 0$ whenever $i > j + 1$. The Hessenberg decomposition is typically computed using Householder matrices. A Householder matrix $P$ is an orthogonal matrix of the form

$$P = I_n - \frac{2 u u^{T}}{u^{T} u}$$

where $u$ is a non-zero vector in $\mathbb{R}^n$ and $I_n$ is the $n \times n$ identity matrix. The main reason for using these matrices is their ability to introduce zeros. For instance, suppose $x = (x_1, x_2, \ldots, x_n)$ is a non-zero vector in $\mathbb{R}^n$ and $u$ is defined as $u = x + \mathrm{sign}(x_1)\|x\|_2 e_1$, where $\mathrm{sign}(x_1) = x_1/|x_1|$ and $e_1$ is the first column of $I_n$. It then follows that

$$P x = -\mathrm{sign}(x_1)\|x\|_2 e_1$$

i.e. a vector having zeros in all but its first component. There are $n - 2$ steps in the overall procedure when $A$ is of size $n \times n$, each exploiting this ability to introduce zeros. At the beginning of the $k$th step, orthogonal matrices $P_1, P_2, \ldots, P_{k-1}$ have been determined such that

$$A_{k-1} = (P_1 P_2 \cdots P_{k-1})^{T} A (P_1 P_2 \cdots P_{k-1}) \qquad (13)$$

having the form

$$A_{k-1} = \begin{bmatrix} H_{11}^{(k-1)} & H_{12}^{(k-1)} & H_{13}^{(k-1)} \\ 0 & H_{22}^{(k-1)} & H_{23}^{(k-1)} \\ 0 & b_{k-1} & H_{33}^{(k-1)} \end{bmatrix} \qquad (14)$$

where the diagonal blocks have orders $k-1$, $1$ and $n-k$, and $H_{11}^{(k-1)}$ is an upper Hessenberg matrix. Let

$$\tilde{P}_k = I_{n-k} - \frac{2 u_k u_k^{T}}{u_k^{T} u_k}$$

be a Householder matrix with the property that $\tilde{P}_k b_{k-1}$ has zeros in its last $n - k - 1$ components. It then follows that the matrix $P_k = \mathrm{diag}(I_k, \tilde{P}_k)$ is orthogonal and $A_k$ is given as

$$A_k = P_k^{T} A_{k-1} P_k = (P_1 P_2 \cdots P_{k-1} P_k)^{T} A (P_1 P_2 \cdots P_{k-1} P_k) = \begin{bmatrix} H_{11}^{(k-1)} & H_{12}^{(k-1)} & H_{13}^{(k-1)} \tilde{P}_k \\ 0 & H_{22}^{(k-1)} & H_{23}^{(k-1)} \tilde{P}_k \\ 0 & \tilde{P}_k b_{k-1} & \tilde{P}_k H_{33}^{(k-1)} \tilde{P}_k \end{bmatrix} \qquad (15)$$

where $A_k$ is upper Hessenberg through its first $k$ columns. Since $n - 2$ such steps are needed in the overall procedure, Eq. (15) can be rewritten, after the last step, as

$$H = Q^{T} A Q \;\Rightarrow\; A = Q H Q^{T} \qquad (16)$$

where $H = A_{n-2}$ and $Q = P_1 P_2 \cdots P_{n-3} P_{n-2}$.
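In practice the decomposition does not have to be coded by hand. The following sketch uses SciPy's implementation (an assumption of convenience, not the paper's MATLAB code) to verify the properties in Eqs. (12) and (16):

```python
import numpy as np
from scipy.linalg import hessenberg

# Hessenberg decomposition A = Q H Q^T of a matrix (random here; chaotic in the paper).
rng = np.random.default_rng(0)
A = rng.random((8, 8))
H, Q = hessenberg(A, calc_q=True)        # H upper Hessenberg, Q orthogonal

assert np.allclose(Q @ H @ Q.T, A)                   # Eq. (12) / Eq. (16)
assert np.allclose(Q.T @ Q, np.eye(8))               # Q is orthogonal
assert np.allclose(np.tril(H, k=-2), 0.0)            # h_ij = 0 whenever i > j + 1
```

Because the input matrix is built from a key-dependent chaotic sequence, the resulting orthogonal factor Q is itself key-dependent, which is what the proposed scheme exploits for randomizing the host image.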
4. Proposed watermarking technique

In this section, the motivating factors in the design of the proposed approach to watermarking are discussed. The proposed technique uses two meaningful watermarks, of which one is a gray-scale image whereas the second is a binary image/logo. The watermarks used for embedding are bigger in size than the host image. Without loss of generality, assume that $F$ represents the host image of size $M \times N$, and $W_1$ and $W_2$ represent the initial watermarks of size $M_1 \times N_1$ and $M \times N$, respectively. The host image is smaller than the watermarks by factors $2^{q_1}$ and $2^{q_2}$ along both directions, where $q_1$ and $q_2$ are integers greater than or equal to 1.

4.1. Watermark embedding

The embedding process for the gray-scale and binary image/logo watermarks is formulated as follows:

1. Formation of randomized image: This step essentially scales up the host image to the size of the watermark while introducing randomness into the image. The whole process is formulated as follows.
   (a) Adopting $k_1$ and $k_2$ as keys, iterate the non-linear chaotic map to get two random sequences of length $M \cdot M$ and $N \cdot N$, respectively.
   (b) Stack these two sequences into two arrays of size $M \times M$ and $N \times N$, denoted by $K_1$ and $K_2$, respectively.
   (c) Apply Hessenberg decomposition on $K_1$ and $K_2$ to get two orthogonal matrices $Q_1$ and $Q_2$.
   (d) Randomize the host image using $Q_1$ and $Q_2$ as

   $$\tilde{F}_r = Q_1 * F * Q_2 \qquad (17)$$
   (e) Scale up the randomized image $\tilde{F}_r$ as

   $$F_r(2i, 2j) = F_r(2i-1, 2j) = F_r(2i, 2j-1) = F_r(2i-1, 2j-1) = \tilde{F}_r(i, j) \qquad (18)$$

2. Perform the $L$-level redundant wavelet transform on $F_r$; the subbands are denoted by $f_{r,l}^{\theta}$, where $\theta \in \{A, H, V, D\}$ and $l \in [1, L]$.

3. Embedding the gray scale watermark: The gray scale logo ($W_1$) is embedded in the low frequency sub-band at the finest level, i.e. $f_{r,L}^{A}$, and the process is formulated as follows:
   (a) Perform SVD on both $f_{r,L}^{A}$ and $W_1$:

   $$f_{r,L}^{A} = U_{f_{r,L}^{A}}\, S_{f_{r,L}^{A}}\, V_{f_{r,L}^{A}}^{T}, \qquad W_1 = U_{W_1} S_{W_1} V_{W_1}^{T} \qquad (19)$$

   (b) Modify the singular values of the sub-band with the help of the watermark singular values as

   $$S^{*}_{f_{r,L}^{A}} = S_{f_{r,L}^{A}} + \alpha\, S_{W_1} \qquad (20)$$

   where $\alpha$ gives the watermark strength.
   (c) Perform the inverse SVD to construct the modified sub-band

   $$f_{r,L}^{A,new} = U_{f_{r,L}^{A}}\, S^{*}_{f_{r,L}^{A}}\, V_{f_{r,L}^{A}}^{T} \qquad (21)$$

4. Perform the $L$-level inverse redundant wavelet transform to get the watermarked randomized scaled-up image $\tilde{F}_r^{new}$.

5. Scale down the watermarked randomized scaled-up image $\tilde{F}_r^{new}$ to get the watermarked randomized image as

   $$F_r^{new}(i, j) = \frac{1}{4} \sum_{m=0}^{1} \sum_{k=0}^{1} \tilde{F}_r^{new}(2i - m, 2j - k) \qquad (22)$$
6. Embedding the binary watermark: The binary image ($W_2$) is embedded as follows:
   (a) Find a binary image $B_{F_r^{new}}$ corresponding to $F_r^{new}$ using automatic thresholding.
   (b) Produce the extraction key using the bit-wise exclusive-OR operation as

   $$K = \mathrm{XOR}(B_{F_r^{new}}, W_2) \qquad (23)$$

   The embedding process is completed after the construction of the extraction key $K$. Obviously, $F_r^{new}$ remains unchanged during this embedding. Hence, the binary watermark embedding is loss-less, i.e. the intensities of each pixel of $F_r^{new}$ before and after embedding are the same.

7. Perform the inverse randomization process to get the watermarked image $F^{new}$ as

   $$F^{new} = \mathrm{inv}(Q_1) * F_r^{new} * \mathrm{inv}(Q_2) \qquad (24)$$

   Since $Q_1$ and $Q_2$ are orthogonal matrices, $\mathrm{inv}(Q_1) = Q_1^{T}$ and $\mathrm{inv}(Q_2) = Q_2^{T}$, respectively.
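To make the flow of Section 4.1 concrete, the following sketch strings the steps together under several simplifying assumptions: the logistic map stands in for the chaotic map of Eq. (7), PyWavelets' stationary wavelet transform plays the role of the RWT with one level and Haar filters, the host is up-scaled once (q = 1), the gray-scale watermark W1 has exactly the scaled-up size 2M x 2N, and the strength alpha = 0.05 is purely illustrative. The chaotic_sequence and otsu_threshold helpers from the sketches in Section 2 are assumed to be in scope.

```python
import numpy as np
import pywt
from scipy.linalg import hessenberg

def randomizing_matrices(k1, k2, M, N):
    # Step 1(a)-(c): chaotic arrays K1, K2 -> Hessenberg decomposition -> orthogonal Q1, Q2.
    K1 = chaotic_sequence(k1, M * M).reshape(M, M)
    K2 = chaotic_sequence(k2, N * N).reshape(N, N)
    _, Q1 = hessenberg(K1, calc_q=True)
    _, Q2 = hessenberg(K2, calc_q=True)
    return Q1, Q2

def embed(F, W1, k1, k2, alpha=0.05):
    M, N = F.shape
    Q1, Q2 = randomizing_matrices(k1, k2, M, N)
    Fr_small = Q1 @ F @ Q2                          # Eq. (17): randomized host image
    Fr = np.kron(Fr_small, np.ones((2, 2)))         # Eq. (18): 2x scale-up by pixel replication

    coeffs = pywt.swt2(Fr, 'haar', level=1)         # redundant wavelet transform (1 level)
    cA, details = coeffs[0]                         # low-frequency sub-band at the finest level
    Uf, Sf, Vft = np.linalg.svd(cA, full_matrices=False)
    Uw, Sw, Vwt = np.linalg.svd(W1, full_matrices=False)
    S_star = Sf + alpha * Sw                        # Eq. (20): modified singular values
    coeffs[0] = ((Uf * S_star) @ Vft, details)      # Eq. (21): inverse SVD of the sub-band
    Fr_new = pywt.iswt2(coeffs, 'haar')             # inverse redundant wavelet transform

    Fr_new_small = Fr_new.reshape(M, 2, N, 2).mean(axis=(1, 3))   # Eq. (22): scale-down
    B = (Fr_new_small >= otsu_threshold(Fr_new_small)).astype(np.uint8)   # Step 6(a)
    F_new = Q1.T @ Fr_new_small @ Q2.T              # Eq. (24): inverse randomization
    side_info = (Uw, Vwt, Sf)                       # kept for the non-blind extraction
    return F_new, B, side_info

# Example usage: 256 x 256 host, 512 x 512 gray-scale watermark, 256 x 256 binary logo.
F = np.random.rand(256, 256)
W1 = np.random.rand(512, 512)
W2 = (np.random.rand(256, 256) > 0.5).astype(np.uint8)
F_new, B, side_info = embed(F, W1, k1=0.3567, k2=0.5965)
K_xor = np.bitwise_xor(B, W2)                       # Eq. (23): loss-less binary embedding key
```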
4.2. Watermark extraction

The objective of the watermark extraction is to obtain an estimate of the original watermark. For the binary watermark extraction, only the key $K$ is required, whereas for the gray scale logo extraction the host and watermarked images, $V_{W_1}$, $U_{W_1}$, $K_1$ and $K_2$ are required. Hence, the binary watermark extraction is blind and the gray-scale watermark extraction is non-blind. The main reason behind the non-blind extraction is the embedding of a gray-scale logo/image as the watermark. Generally, in blind watermarking schemes a binary or Gaussian noise type watermark is embedded, and one cannot predict the actual intensity at any pixel by just comparing the correlation (between the watermarked and watermark images) with a prescribed threshold, as is done in such schemes. Hence, the gray-scale watermark extraction is non-blind. The extraction process is formulated as follows:

1. Adopting the keys $k_1$, $k_2$ and Step 1 of the embedding process, the watermarked image ($F_w = F^{new}$) is first randomized as

   $$\bar{F}_{wr} = Q_1 * F_w * Q_2 \qquad (25)$$

2. Extracting the binary watermark: The binary watermark is extracted as follows:
   (a) Find a binary image $B_{\bar{F}_{wr}}$ corresponding to $\bar{F}_{wr}$ using automatic thresholding.
   (b) The binary watermark is extracted using the bit-wise exclusive-OR operation as

   $$W_2^{ext} = \mathrm{XOR}(B_{\bar{F}_{wr}}, K) \qquad (26)$$

3. Extracting the gray scale watermark: This extraction is performed if and only if the similarity between the extracted (from the previous step) and original binary watermarks is greater than the prescribed threshold. The extraction process is described as follows:
   (a) Scale up the randomized image $\bar{F}_{wr}$ as

   $$F_{wr}(2i, 2j) = F_{wr}(2i-1, 2j) = F_{wr}(2i, 2j-1) = F_{wr}(2i-1, 2j-1) = \bar{F}_{wr}(i, j) \qquad (27)$$

   (b) Perform the $L$-level redundant wavelet transform on $F_{wr}$; the subbands are denoted by $f_{wr,l}^{\theta}$, where $\theta \in \{A, H, V, D\}$ and $l \in [1, L]$.
   (c) Select the low frequency sub-band at the finest level, i.e. $f_{wr,L}^{A}$, to extract the gray scale watermark.
   (d) Perform SVD on $f_{wr,L}^{A}$:

   $$f_{wr,L}^{A} = U_{f_{wr,L}^{A}}\, S_{f_{wr,L}^{A}}\, V_{f_{wr,L}^{A}}^{T} \qquad (28)$$

   (e) Extract the singular values of the gray scale watermark as

   $$S_{W_1}^{ext} = \frac{S_{f_{wr,L}^{A}} - S_{f_{r,L}^{A}}}{\alpha} \qquad (29)$$

   (f) Perform the inverse SVD to construct the extracted gray scale logo:

   $$W_1^{ext} = U_{W_1}\, S_{W_1}^{ext}\, V_{W_1}^{T} \qquad (30)$$
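A companion sketch of Section 4.2 under the same assumptions (the randomizing_matrices and otsu_threshold helpers from the earlier sketches are assumed to be in scope, and K_xor, W2 and side_info come from the embedding sketch above):

```python
import numpy as np
import pywt

def extract(F_w, K_xor, W2, side_info, k1, k2, alpha=0.05, Ts=0.51):
    """Two-stage extraction: blind binary watermark first, gated non-blind gray-scale second."""
    Uw, Vwt, Sf = side_info                          # non-blind side information
    M, N = F_w.shape
    Q1, Q2 = randomizing_matrices(k1, k2, M, N)
    Fwr_small = Q1 @ F_w @ Q2                        # Eq. (25): re-randomize with the keys

    # Stage 1 -- blind binary watermark extraction and authentication (Eq. 26).
    B = (Fwr_small >= otsu_threshold(Fwr_small)).astype(np.uint8)
    W2_ext = np.bitwise_xor(B, K_xor)
    similarity = np.corrcoef(W2.ravel(), W2_ext.ravel())[0, 1]
    if similarity <= Ts:
        return W2_ext, None                          # authenticity check failed: stop here

    # Stage 2 -- non-blind gray-scale watermark extraction (Eqs. 27-30).
    Fwr = np.kron(Fwr_small, np.ones((2, 2)))        # Eq. (27): scale-up
    cA, _ = pywt.swt2(Fwr, 'haar', level=1)[0]
    S_fw = np.linalg.svd(cA, compute_uv=False)       # Eq. (28): singular values of the sub-band
    S_ext = (S_fw - Sf) / alpha                      # Eq. (29)
    W1_ext = (Uw * S_ext) @ Vwt                      # Eq. (30): rebuilt gray-scale logo
    return W2_ext, W1_ext

W2_ext, W1_ext = extract(F_new, K_xor, W2, side_info, k1=0.3567, k2=0.5965)
```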
5. Results and discussions

5.1. Experimental setup
The robustness of the proposed watermarking technique is demonstrated on the MATLAB platform, and different gray-scale images, namely Cameraman, Boat, Fingerprint and Lena of size 256 × 256, are used as the host images. For the gray scale watermarks, Peacock, Clock, Cup and Circle images of size 512 × 512 are used, whereas the University of Windsor logo, CVSS lab logo, Dollar Sign and E&CE Dept. images of size 256 × 256 are used as binary watermarks. The Peacock and University of Windsor logo watermarks are embedded into the Cameraman image, while the Clock and CVSS lab watermarks are embedded into the Boat image. The Cup and Dollar Sign watermarks are embedded into the Fingerprint image, whereas the Circle and E&CE Dept. watermarks are embedded into the Lena image. The watermarked image quality is measured using the PSNR (Peak Signal to Noise Ratio). The watermarked Barbara, Lena, Mandril and Parrot images have PSNR values of 37.3685, 35.2547, 34.9981 and 36.5296 (in dB), respectively.
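The PSNR between a host image F and its watermarked version can be computed as follows (a standard definition, assuming 8-bit images with peak value 255):

```python
import numpy as np

def psnr(original: np.ndarray, watermarked: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal to Noise Ratio in dB between the original and watermarked images."""
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```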
Fig. 2. Experimental images (a) host image, (b) gray-scale watermark, (c) binary watermark, (d) watermarked image, (e) extracted gray-scale watermark, and (f) extracted binary watermark.
In Fig. 2, all of the original host, watermarked, original watermark and extracted watermark images are shown. From the figure, it is clear that there is no perceptual degradation between the original and the watermarked images according to human perception. The value of the watermark strength ($\alpha$) is determined through several experiments on the host image considering the human visual system. For this purpose, watermarked images are prepared with different values of $\alpha$ and some human observers are asked to view the series of watermarked images and rate them. The value of $\alpha$ is taken to be optimal when most of the observers fail to distinguish the original and watermarked images. For further analysis, the Cameraman image is used, since it has the maximum PSNR value among all test images. To verify the presence of the watermark, the correlation coefficient between the original and the extracted singular values is given by

$$\rho(w, \bar{w}) = \frac{\sum_{i=1}^{M_1} \sum_{j=1}^{N_1} (w_{ij} - \mu_w)(\bar{w}_{ij} - \mu_{\bar{w}})}{\sqrt{\sum_{i=1}^{M_1} \sum_{j=1}^{N_1} (w_{ij} - \mu_w)^2 \; \sum_{i=1}^{M_1} \sum_{j=1}^{N_1} (\bar{w}_{ij} - \mu_{\bar{w}})^2}} \qquad (31)$$

where $w$, $\bar{w}$, $\mu_w$ and $\mu_{\bar{w}}$ are the original watermark, the extracted watermark and the means of the original and extracted watermarks, respectively. $\rho$ is a number that lies in $[-1, 1]$. If the value of $\rho$ is equal to 1 then the extracted singular values are exactly equal to the original ones; if it is $-1$ then the constructed watermark looks like a negative thin film. The gray-scale watermark extraction is performed if and only if the similarity between the extracted and original binary watermarks is greater than the prescribed threshold ($T_s$), i.e. if $\rho(W_2, W_2^{ext}) > T_s$ then the gray-scale watermark is extracted. In our experiments, the threshold for binary watermark extraction is $T_s = 0.51$.

5.2. Threshold determination

In the proposed watermarking algorithm, the threshold value plays an important role in the extraction of the gray-scale watermark, since the gray-scale extraction is performed if and only if the similarity between the extracted and original binary watermarks is greater than the prescribed threshold. In the present work, the threshold is determined experimentally. For this purpose, $n$ random binary matrices are generated and embedded in the host image using the proposed technique. In each case, the watermark is extracted and $\rho$ is calculated between the original binary watermark and the extracted binary matrix. Then the maximum value of $\rho$ is considered as the desired threshold $T_s$. In this manner, the obtained threshold value maintains a balance between binary watermark extraction and robustness. In our experiments, 25,000 binary matrices are generated randomly and embedded in all experimental images by the proposed algorithm. In each case, the watermark is extracted and $\rho$ is calculated between the original and the extracted binary matrix. Finally, a graph is plotted (Fig. 3) by taking the number of binary matrices on the x-axis and the corresponding maximum $\rho$ on the y-axis. The highest value (i.e. the highest peak in the plot) is taken as the optimal value for the threshold. In the figure, the horizontal lines show the maximum correlation coefficients obtained with the four experimental images used. According to the experiments, the threshold comes out to be 0.5098 ≈ 0.51.

Fig. 3. Threshold synchronization: the evaluation process (y-axis: maximum correlation coefficients; x-axis: number of random binary matrices).

5.3. Results

To investigate the robustness of the proposed watermarking scheme, the following attacks have been considered: Gaussian Blurring, Gaussian and Salt and Pepper noise addition, JPEG and SPIHT compression, Resizing, Rotation, Horizontal and Vertical Flipping, Row–Column Deletion, Wrapping, Pixelation, Histogram Equalization, Sharpening and Contrast Adjustment.
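A few of these attacks can be simulated as follows before re-running the extraction of Section 4.2 (illustrative parameters only; the exact noise percentages and kernel sizes follow the captions of Fig. 5):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate, zoom

F_new = np.random.rand(256, 256)                     # placeholder for a watermarked image

blurred = gaussian_filter(F_new, sigma=2.0)          # Gaussian blurring
noisy = F_new + 0.5 * F_new.std() * np.random.randn(*F_new.shape)   # additive Gaussian noise

salted = F_new.copy()                                # salt and pepper noise on 50% of the pixels
mask = np.random.rand(*F_new.shape) < 0.5
salted[mask] = np.where(np.random.rand(mask.sum()) < 0.5, F_new.min(), F_new.max())

resized = zoom(zoom(F_new, 0.25), 4.0)               # resizing 256 -> 64 -> 256
rotated = rotate(F_new, angle=30, reshape=False)     # 30 degree rotation
```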
Fig. 4. Extracted watermarks after attacks: (a) extracted gray-scale watermark and (b) extracted binary watermark.

After all these attacks on the watermarked image, the extracted watermarks are compared with the original ones. The attacked images are depicted in Fig. 5 and the corresponding extracted watermarks in Fig. 4. The most common manipulation of a digital image is blurring. It is a widely used effect in graphics software, typically applied to reduce image noise and detail. The most common variant is the Gaussian blur, which is the result of blurring an image by a Gaussian function. For this attack, the watermark is extracted after applying a 13 × 13 Gaussian blur to the watermarked image and the result is depicted in Fig. 5(i). Another measure used to verify the robustness of the watermarking scheme is noise addition. In real life, the degradation and distortion of an image often come from noise addition. In our experiments, P% additive Gaussian and salt and pepper noise is added to the watermarked image. Fig. 5(ii) and (iii) show the watermarks extracted from 50% noise attacked watermarked images. In real life applications, storage and transmission of digital data are frequently used operations. For this purpose, a lossy coding operation is often performed on the data to reduce memory and increase efficiency. Hence, the proposed scheme is also tested for JPEG compression (100:1) and SPIHT compression (0.1 bpp). The extracted watermarks after the compression attacks are shown in Fig. 5(iv) and (v). We have also tested the proposed scheme for resizing and rotation attacks. For resizing, the size of the watermarked image is reduced to 64 × 64 and then brought back to the original size 256 × 256, followed by the extraction, see Fig. 5(vi). For rotation, the watermark is extracted from the 30° rotated watermarked image. Fig. 5(viii) and (ix) show the results of horizontal and vertical flipping. Fig. 5(x)–(xii) show the results for the row–column deletion, wrapping and pixelation attacks. In row–column deletion, some of the rows and columns of the watermarked image are randomly deleted, followed by the watermark extraction. Fig. 5(x) shows the result of the attack in which 10 rows and 10 columns are randomly deleted. Wrapping is the process of giving a 3D effect to an object by distorting the image and stretching it to fit a selected curve. Robustness against wrapping is estimated by giving a 3D effect to the watermarked image around a spherical shape, and the respective result is shown in Fig. 5(xi). Pixelation is the process of displaying a digitized image in which the individual pixels are apparent to the viewer. This kind of situation occurs in real life when a low-resolution image designed for an ordinary computer display is projected on a large screen. The proposed watermarking scheme is also tested against general image processing attacks like histogram equalization, sharpening and contrast adjustment, and the concerned results are depicted in Fig. 5(xiii)–(xv), respectively. For the sharpening attack, the sharpness of the watermarked host image is increased by 90%, whereas the contrast of the watermarked image is decreased by 90% for the contrast adjustment attack. The correlation coefficients for all extracted watermarks are given in Table 1.

5.4. Sensitivity analysis
A highly key-sensitive watermarking technique protects the data against various attacks, because a slight change in the keys never gives perfect extraction, which further increases the security. Therefore, the keys play a vital role in enhancing the security. Hence, the key sensitivity of the proposed technique is validated. For this purpose, it is assumed that an intruder knows the complete embedding and extraction structure but not the used keys. In the proposed technique, two keys ($k_1$ and $k_2$) are used. These keys are used as the initial seeds of the non-linear chaotic map in order to randomize the host image. The sensitivity is checked in three cases: (1) when $k_1$ is slightly changed, (2) when $k_2$ is slightly changed, and (3) when both $k_1$ and $k_2$ are slightly changed. The slight change is made in such a way that the original and modified keys are approximately the same. In the experiments, the values of $k_1$ and $k_2$ are taken to be 0.3567 and 0.5965, respectively, whereas the modified values of $k_1$ and $k_2$ are 0.356699999 and 0.596511111, respectively. Fig. 6(a and d) shows the extracted watermarks when only $k_1$ is modified, whereas Fig. 6(b and e) shows the extracted watermarks when only $k_2$ is modified. From the figures it is clear that a slight change in either key leads to imperfect extraction. The extracted watermarks with both keys modified are depicted in Fig. 6(c and f). Fig. 6(g) shows the quality of the extracted gray-scale watermark with respect to varying keys $k_1$ and $k_2$. In this experiment, the keys are modified as $k_i \pm \delta$, $i = 1, 2$, where $\delta \in [0, 0.001]$. It is clear that there is no similarity between the extracted and original watermarks according to the human visual system. Hence, the proposed technique is highly sensitive to the keys.

5.5. Comparative analysis

As already stated in the abstract, this is the very first attempt to deal with the situation where the size of the watermark needs to be greater than the size of the host image. All of the existing watermarking methods use a host image whose size is greater than the size of the watermark. According to the authors, a fair comparison can be done if and only if both the proposed and existing methods use the same host and watermark images, which is not possible in the case of the proposed method. However, in order to demonstrate the significant performance of the proposed scheme, a more elaborate performance comparison with the existing methods [10,17] is given below. To compare the proposed method with these methods, the size of the host image is up-scaled such that the watermark (used in the experiments of the proposed method) can be embedded according to the requirements of the existing methods [10,17], respectively. In the watermarking method [10], the host and watermark images are of size 256 × 256 and 32 × 32, respectively, i.e. the host image is greater by a factor of $2^3$ in both the x- and y-directions. On the other hand, the sizes of the host and watermark images in [17] are 512 × 512 and 256 × 256, respectively, i.e. the host image is greater by a factor of $2^1$ in both the x- and y-directions.
Fig. 5. Attacked cameraman image after (a) Gaussian blur (13 × 13), (b) Gaussian noise addition (50%), (c) salt and pepper noise addition (50%), (d) JPEG compression (CR 100:1), (e) SPIHT compression (0.1 bpp), (f) resizing (256 → 64 → 256), (g) rotation (30◦ ), (h) horizontal flipping, (i) vertical flipping, (j) row–column deletion (10 rows and 10 columns), (k) histogram equalization, (l) pixilation, (m) wrapping, (n) sharpen (increased by 90%), and (o) contrast adjustment (decreased by 90%).
In order to use the same host and watermark images, the size of the host image is up-scaled by factors of $2^3$ and $2^1$ for the methods in [10] and [17], respectively, i.e. Eq. (18) is applied 3 times and 1 time, respectively. After up-scaling the host image, the watermark is embedded, followed by down-scaling of the watermarked host image using Eq. (22). Finally, the attack analysis and watermark extraction are performed. In this manner, the same host and watermark images as those used in the proposed algorithm are used for the comparison.
Table 1
Correlation coefficients for the extracted watermarks.

Attacks                                  Cameraman             Boat                  Fingerprint           Lena
                                         Binary    Gray scale  Binary    Gray scale  Binary    Gray scale  Binary    Gray scale
No attack                                0.9999    0.9996      0.9999    0.9995      0.9998    0.9996      0.9999    0.9991
Gaussian blur (13 × 13)                  0.9557    0.8940      0.9545    0.8821      0.9523    0.8963      0.9441    0.8852
Gaussian noise addition (50%)            0.8783    0.8007      0.8729    0.8071      0.8761    0.8001      0.8643    0.7974
Salt and pepper noise addition (50%)     0.8393    0.8375      0.8356    0.8364      0.8388    0.8299      0.8269    0.8266
JPEG compression (CR 100:1)              0.9529    0.7693      0.9527    0.7702      0.9517    0.7686      0.9478    0.7623
SPIHT compression (0.1 bpp)              0.9615    0.7308      0.9598    0.7336      0.9609    0.7356      0.9552    0.7267
Resizing (256 → 64 → 256)                0.9551    0.8633      0.9570    0.8636      0.9536    0.8644      0.9496    0.8591
Rotation (30°)                           0.9070    0.7976      0.9086    0.7973      0.9038    0.7958      0.9014    0.7913
Horizontal flipping                      0.7169    0.9852      0.7134    0.9851      0.7173    0.9819      0.7134    0.9788
Vertical flipping                        0.7143    0.8956      0.7144    0.8923      0.7185    0.8954      0.7135    0.8912
Row–column deletion                      0.8996    0.9296      0.8926    0.9237      0.8997    0.9297      0.8931    0.9198
Histogram equalization                   0.9281    0.9856      0.9299    0.9896      0.9291    0.9879      0.9222    0.9861
Pixelation                               0.9427    0.8841      0.8853    0.8830      0.8837    0.8884      0.8804    0.8814
Wrapping                                 0.9237    0.9158      0.9247    0.9117      0.9258    0.9141      0.9228    0.9085
Sharpen (+90%)                           0.9052    0.8933      0.9079    0.8990      0.9072    0.8949      0.9054    0.8909
Contrast adjustment (−90%)               0.9467    0.9819      0.9462    0.9811      0.9474    0.9823      0.9417    0.9796
Fig. 6. Extracted watermarks after slight change in (a and d) k1 , (b and e) k2 , (c and f) both k1 and k2 , and (g) effect of varying change in the keys on extracted gray-scale watermark.
The detailed comparative study is given in Table 2. From the table, it is clear that the proposed algorithm shows better performance than those in [10,17]. For Gaussian blurring, the existing and proposed methods extract the watermark for kernels up to 9 × 9 and 13 × 13, respectively. For noise addition, JPEG compression, SPIHT compression, rotation, resizing and contrast adjustment, the proposed method shows excellent results when compared to the existing methods. The watermark is extracted up to 50% Gaussian and salt and pepper noise addition with the proposed method, and up to 30% noise addition with the existing methods. For JPEG compression, the proposed method extracts the watermark up to a compression ratio of 100, whereas the existing methods extract the watermark up to a compression ratio of 50. On the other hand, the proposed method extracts the watermark down to 0.1 bpp SPIHT compression, compared to 1 bpp and 0.8 bpp for the methods in [10] and [17], respectively. For rotation, the proposed method extracts the watermark up to 50° rotation, whereas the methods in [10,17] extract the watermark up to 0.4° and 20° rotation, respectively. For resizing, the proposed method extracts the watermark from the watermarked image scaled down to 1/4 of its size, whereas the existing methods extract the watermark only up to a 1/2 scale-
Table 2
Comparison with existing algorithms.

                                     Existing algorithms                                         Proposed algorithm
                                     Kundur and Hatzinakos [10]    Ganic and Eskicioglu [17]
Operating domain                     DWT                           DWT + SVD                     RWT + SVD + HD
Embedding quality                    Lossy                         Lossy                         Lossy
Extraction algorithm                 Non-blind                     Non-blind                     Non-blind

Watermark extracted under:
Gaussian blur                        Up to 9 × 9                   Up to 9 × 9                   Up to 13 × 13
Gaussian noise addition              Up to 25%                     Up to 30%                     Up to 50%
Salt and pepper noise addition       Up to 25%                     Up to 25%                     Up to 50%
JPEG compression                     Up to CR 40:1                 Up to CR 50:1                 Up to CR 100:1
SPIHT compression                    Up to 1 bpp                   Up to 0.8 bpp                 Up to 0.1 bpp
Resizing                             Scale down by 1/2             Scale down by 1/2             Scale down by 1/4
Rotation                             Up to 0.4°                    Up to 20°                     Up to 30°
Horizontal flipping                  Less effective                Less effective                Less effective
Vertical flipping                    Effective                     Less effective                Less effective
Row–column deletion                  Up to 5-R and 5-C             Up to 7-R and 7-C             Up to 10-R and 10-C
Histogram equalization               Less effective                Less effective                Less effective
Pixelation                           Less effective                Less effective                Less effective
Wrapping                             Effective                     Less effective                Less effective
Sharpen                              Up to 60%                     Up to 80%                     Up to 90% increased
Contrast adjustment                  Up to 50%                     Up to 50%                     Up to 90% decreased

CR, compression ratio; AR, area remaining; R, rows; C, columns.
down of the watermarked image. Another excellent result of the proposed method is for contrast adjustment, where the proposed method extracts the watermark up to 80% decreased contrast, whereas the existing methods extract the watermarks up to only 50% reduced contrast. For the flipping, histogram equalization, pixelation, wrapping and sharpening attacks, all the methods perform almost equally.
6. Conclusions

In this paper, a new facet of watermarking is proposed which deals with situations where the size of the watermark needs to be greater than that of the host image. For this purpose, a chaotic map, Hessenberg decomposition, the redundant wavelet transform and automatic thresholding are used. The proposed scheme uses two meaningful watermarks instead of a Gaussian noise type watermark: one is a gray-scale watermark and the other is a binary watermark. The binary watermark is embedded in a loss-less manner, which ensures the authenticity of the watermarked image and hence enhances the security. For extraction, a reliable extraction scheme is constructed for the binary as well as the gray-scale watermark. The feasibility and the robustness of the proposed method are demonstrated against a variety of attacks.
References

[1] Schyndel RGV, Tirkel AZ, Osborne CF. A digital watermark. Proc IEEE Int Conf Image Process 1994;2:86–90.
[2] Hwang MS, Chang CC, Hwang KF. A watermarking technique based on one-way hash functions. IEEE Trans Consum Electron 1999;45(2):286–94.
[3] Solachidis V, Pitas I. Optimal detector for multiplicative watermarks embedded in the DFT domain of non-white signals. EURASIP J Appl Signal Process 2004;2004:2522–32.
[4] Cox IJ, Killian J, Leighton FT, Shamoon T. Secure spread spectrum watermarking for multimedia. IEEE Trans Image Process 1997;6(12):1673–87.
[5] Barni M, Bartiloni F, Cappellini V, Piva A. A DCT domain system for robust image watermarking. Signal Process 1998;66(3):357–72.
[6] Xia X, Boncelet CG, Arce GR. A multiresolution watermark for digital images. Proc IEEE Int Conf Image Process 1997;3:548–51.
[7] Barni M, Bartiloni F, Piva A. Improved wavelet based watermarking through pixel wise masking. IEEE Trans Image Process 2001;10:783–91.
[8] Hsu CT, Wu JL. Multiresolution watermarking for digital images. IEEE Trans Circuits Syst 1998;45:1097–101.
[9] Wang Y, Doherty JF, Van Dyck RE. A wavelet-based watermarking algorithm for ownership verification of digital images. IEEE Trans Image Process 2002;11:77–88.
[10] Kundur D, Hatzinakos D. Towards robust logo watermarking using multiresolution image fusion. IEEE Trans Multimedia 2004;6:185–97.
[11] Dawei Z, Guanrong C, Wenbo L. A chaos-based robust wavelet-domain watermarking algorithm. J Chaos Solitons Fractals 2004;22:47–54.
[12] Meerwald P, Uhl A. A survey of wavelet-domain watermarking algorithms. In: Proceedings of SPIE, electronic imaging, security and watermarking of multimedia contents III, vol. 4314. 2001. p. 505–16.
[13] Gorodetski VI, Popyack LJ, Samoilov V, Skormin VA. SVD-based approach to transparent embedding data into digital images. In: Proceedings of international workshop on mathematical methods, models and architectures for computer network security, vol. 2052. 2001. p. 263–74.
[14] Liu R, Tan T. An SVD-based watermarking scheme for protecting rightful ownership. IEEE Trans Multimedia 2002;4(1):121–8.
[15] Chandra DVS. Digital image watermarking using singular value decomposition. Proc IEEE Midwest Symp Circuits Syst 2002:264–7.
[16] Ganic E, Zubair N, Eskicioglu AM. An optimal watermarking scheme based on singular value decomposition. In: Proceedings of the IASTED international conference on communication, network, and information security (CNIS 2003). Uniondale, NY; 2003. p. 85–90.
[17] Ganic E, Eskicioglu AM. Robust embedding of visual watermarks using discrete wavelet transform and singular value decomposition. J Electron Imaging 2005;14(1):043004.
[18] Sverdlov A, Dexter S, Eskicioglu AM. Robust DCT-SVD domain image watermarking for copyright protection: embedding data in all frequencies. In: Proc Eur Signal Process Conf. 2005. p. Cr10231–4.
[19] Li Q, Yuan C, Zong YZ. Adaptive DWT-SVD domain image watermarking using human visual model. In: Proc Int Conf Adv Commun Technol. 2007. p. 1947–51.
[20] Bhatnagar G, Raman B. A new reference watermarking scheme based on DWT-SVD. Comput Stand Interfaces 2009;31(5):1002–13.
[21] Bhatnagar G, Raman B. Robust reference-watermarking scheme using wavelet packet transform and bidiagonal-singular value decomposition. Int J Image Graph 2009;9(3):447–9.
[22] Chang CC, Tsai P, Lin CC. SVD-based digital image watermarking scheme. Pattern Recognit Lett 2005;26:1577–86.
[23] Zhang XP, Li K. Comments on "An SVD-based watermarking scheme for protecting rightful ownership". IEEE Trans Multimedia 2005;7(2):593–4.
[24] Yavuz E, Telatar Z. Improved SVD-DWT based digital image watermarking against watermark ambiguity. Proc ACM Symp Appl Comput 2007:1051–5.
[25] Dutilleux P. An implementation of the algorithme à trous to compute the wavelet transform. In: Combes J-M, Grossman A, Tchamitchian P, editors. Wavelets: time–frequency methods and phase space. Berlin, Germany: Springer-Verlag; 1989. p. 298–304.
[26] Lang M, Guo H, Odegard JE, Burrus CS, Wells RO. Noise reduction using an undecimated discrete wavelet transform. IEEE Signal Process Lett 1996;3(1):10–2.
[27] Zaciu R, Lamba C, Burlacu C, Nicula G. Image compression using an overcomplete discrete wavelet transform. IEEE Trans Consum Electron 1996;42(3):800–7.
[28] Lang M, Guo H, Odegard JE, Burrus CS, Wells RO. Nonlinear processing of a shift invariant DWT for noise reduction. In: Szu HH, editor. Proceedings of the SPIE wavelet applications II, vol. 2491. 1995. p. 640–51.
[29] Unser M. Texture classification and segmentation using wavelet frames. IEEE Trans Image Process 1995;4(11):1549–60.
[30] Pollicott M, Yuri M. Dynamical systems and ergodic theory. Cambridge: London Mathematical Society Student Text Series; 1998.
[31] Tao S, Ruli W, Yixun Y. Generating binary Bernoulli sequences based on a class of even-symmetric chaotic maps. IEEE Trans Commun 2001;49(4):620–3.
[32] Golub GH, Loan CFV. Matrix computations. Johns Hopkins University Press; 1996.
[33] Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cyber 1979;9(1):62–6.
Gaurav Bhatnagar has been with the Department of Electrical and Computer Engineering at the University of Windsor, ON, Canada since 2009. He received his PhD and MSc degrees in Applied Mathematics from the Indian Institute of Technology Roorkee, India, in 2010 and 2005, respectively. He has coauthored more than 30 journal and conference papers and contributed to two books in his area of interest. His research interests include digital watermarking, encryption techniques, biometrics, image analysis, wavelet analysis and fractional transform theory.