Journal of Information Security and Applications ■■ (2016) ■■–■■

Wave atom transform based image hashing using distributed source coding

Yanchao Yang a, Junwei Zhou a, Feipeng Duan a, Fang Liu b,*, Lee-Ming Cheng b

a Hubei Key Laboratory of Transportation Internet of Things, School of Computer Science and Technology, Wuhan University of Technology, Wuhan, China
b Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China

ARTICLE INFO

Article history: Available online

Keywords: Perceptual hashing; Image authentication; Distributed source coding; Wave atom transform

ABSTRACT

To reduce the size of the hash code and enhance the security of the wave atom transform (WAT) based image authentication system, a low-density parity-check (LDPC) code based distributed source coding (DSC) scheme is employed to compress the hash code. With the help of a legitimately modified image, the compressed hash value can be correctly decoded, while decoding fails with the help of a maliciously attacked image. Therefore, the employed DSC provides the desired robustness for image authentication. Simulation results indicate that the proposed scheme provides better performance with a shorter hash code than the existing WAT based image hash without DSC. Moreover, the proposed scheme outperforms the random projection based approach in terms of authentication accuracy and data size. © 2016 Elsevier Ltd. All rights reserved.

1. Introduction

With the rapid development of information technologies, the broad availability of image manipulation tools has led to explosive growth in the illegal use of images, which makes image authentication increasingly important. Three kinds of image authentication techniques have been applied, namely digital forensics (Zhao and Zhao, 2013), image watermarking (Chetan and Nirmala, 2015; Ghosal and Mandal, 2014) and perceptual hashing (Zhao et al., 2013). Perceptual hashing is the transformation of an image into a usually shorter, fixed-length value that represents the original image. It can verify the originality of an image by comparing the hash codes of the original image and the target image. Swaminathan et al. (2006) developed a perceptual hashing scheme based on Fourier transform features and controlled randomization. By embedding the detected local features into shape-context-based descriptors, Lv and Wang (2012) used the most stable scale-invariant feature transform key points as the hash code. Zhao et al. (2013) employed Zernike moments to represent the luminance and chrominance of an image as global features, while taking the position and texture information of salient regions as local features to produce the hash code. Since target images are usually correlated with the original image in an image authentication system, the hash codes of the original image and the target image are correlated; it is therefore possible to compress the hash code of the original image using distributed source coding (DSC). The redundancy of the hash code can be further reduced, and the compressed hash code will be statistically independent of the target image. Motivated by the potential benefits of using DSC, such as reducing the size of the

* Corresponding author. Department of Electronic Engineering, City University of Hong Kong, Hong Kong, China. Fax: +852 34424151. Email address: [email protected] (F. Liu). http://dx.doi.org/10.1016/j.jisa.2016.09.001 2214-2126/© 2016 Elsevier Ltd. All rights reserved. Please cite this article in press as: Yanchao Yang, Junwei Zhou, Feipeng Duan, Fang Liu, Lee-Ming Cheng, Wave atom transform based image hashing using distributed source coding, journal of information security and applications (2016), doi: 10.1016/j.jisa.2016.09.001


hash code and improving the security of the authentication system, some researchers have studied compression processing for the follow-up procedures of hash construction (Lin et al., 2012; Sun et al., 2002; Tagliasacchi et al., 2009; Venkatesan et al., 2000). An error-correcting code is employed in Venkatesan et al. (2000) by projecting the hash into syndrome bits, which are used to verify authenticity directly. The parity check bits of the binary feature vectors, produced by a systematic Hamming code, are taken as the hash code in Sun et al. (2002). In addition, DSC has been combined with compressive sensing to identify sparse image tampering (Tagliasacchi et al., 2009). The hash code is produced by encoding quantized random projections using an LDPC-based DSC encoder (Varodayan et al., 2012), while a DSC decoder is employed to decode the hash value with the help of the target image. A training procedure is usually applied to find the minimum decodable rate (MDR) for all the legitimately modified images. Thus, if a legitimately modified image is taken as side information, the decoding will be successful, but it will fail with an illegitimately modified image due to the weak correlation between the original image and the illegitimately modified image.

On the other hand, the wave atom transform (WAT), introduced by Demanet and Ying (2007), is a recent addition to the repertoire of mathematical transforms in computational harmonic analysis. WAT is constructed from tensor products of adequately chosen 1-D wave packets, and 2-D orthonormal basis functions with four bumps can be formed by individually utilizing products of 1-D wave packets in the frequency plane. As a variant of 2-D wavelet packets, WAT can adapt to arbitrary local directions of a pattern and sparsely represent anisotropic patterns aligned with the axes. Oscillatory functions and oriented textures have been proven to have a dramatically sparser expansion in WAT than in other fixed standard representations, such as Gabor filters, wavelets, and curvelets. A WAT based perceptual hashing scheme was studied in Liu et al. (2012), which reported that a WAT based perceptual hash can outperform discrete cosine transform or discrete wavelet transform based schemes in terms of robustness and fragility.

In order to reduce the size of the hash code and improve the security of WAT based perceptual hashing, this work employs an LDPC-based DSC (Varodayan et al., 2012) to compress the randomized wave atom features of the original image, which is also expected to show better performance than the scheme without DSC. The contributions of this work are as follows: (1) Since the compressed hash value can be correctly decoded with the help of a legitimately modified image but decoding fails with the help of a maliciously attacked image, the employed DSC provides the desired robustness for image authentication. (2) The correlation between the hash value and the images is removed by DSC, which improves the security of the existing hash scheme. (3) The length of the hash value is shortened.

The rest of this paper is structured as follows. The proposed authentication system is described in Section 2, and the experimental analyses are presented in Section 3. The conclusions are given in Section 4.

2. Proposed authentication system

The proposed authentication system is composed of two steps. In the first step, the randomized wave atoms are extracted from the original image (Liu et al., 2012). In the second step, the randomized wave atoms are further encoded by the LDPC-based DSC, and the encoded wave atoms are taken as the hash code. The target images are modeled as the outputs of two channels: a legitimate channel and a tampered channel (Lin et al., 2012). In the legitimate channel, the target image is only processed with legitimate operations, such as lossy compression including JPEG and JPEG2000. In the tampered channel, a further malicious modification is applied. Examples of the two channels' outputs are shown in Fig. 1: the original image (a) is a fragment of "Lena", the target image (b) is the output of the legitimate channel, and the target image (c) is the output of the tampered channel. The aim of the image authentication system is to distinguish the images of the tampered channel from those of the legitimate channel.

2.1. WAT based perceptual hashing

In WAT, the image is decomposed into several scale bands. Each scale band contains a different number of sub-blocks, and each sub-block consists of various wave atom coefficients. As the scale band increases, the number of coefficients in each scale

Fig. 1 – Examples of original image and target images. The target images are modeled as outputs of two channels. In the legitimate channel, the image is processed by legitimate operations, such as JPEG2000/JPEG. In the tampered channel, the images are further tampered.

Fig. 2 – The module diagram of WAT based perceptual hashing.

band will be doubled. It is found that the coefficients in the third scale band change only slightly when the image is not altered perceptually. On the contrary, the corresponding coefficients change significantly when a malicious modification has been applied (Liu et al., 2012). Thus the coefficients in the third scale band are employed to construct the hash code, which has the ability to distinguish maliciously attacked images from legitimately modified ones. The procedure for generating the perceptual hash code is shown in Fig. 2. First, the WAT is applied to the original image x, and the summation of each sub-block in the third scale band, which serves as the most important feature in the perceptual hash, is calculated by the feature extraction processing. To make the obtained features key-dependent, random permutations are applied based on pseudo-random numbers controlled by a secret key, and the randomized summations are further quantized using a 4-bit Gray code to generate the hash vector in the hash construction processing. Finally, the intermediate hash code X is constructed by concatenating all these quantized summation bits. For details, please refer to Liu et al. (2012).
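The hash-construction step above can be sketched as follows. The WAT decomposition itself is omitted and the third-scale-band sub-block sums are assumed to be given; the uniform 16-level quantizer is an illustrative assumption (Liu et al. (2012) specify the exact design):

```python
import numpy as np

# 4-bit binary-reflected Gray codes for the 16 quantization levels.
GRAY_4BIT = [i ^ (i >> 1) for i in range(16)]

def hash_from_features(block_sums: np.ndarray, key: int) -> np.ndarray:
    """Sketch: permute the sub-block sums with a key-seeded PRNG, quantize
    each sum to a 4-bit Gray code, and concatenate the bits into the
    intermediate hash X."""
    rng = np.random.default_rng(key)        # pseudo-random numbers from the secret key
    permuted = rng.permutation(block_sums)  # key-dependent random permutation
    # Uniform 16-level quantization over the observed range (an assumption).
    lo, hi = permuted.min(), permuted.max()
    levels = np.clip(((permuted - lo) / (hi - lo + 1e-12) * 16).astype(int), 0, 15)
    bits = []
    for q in levels:
        g = GRAY_4BIT[q]
        bits.extend((g >> b) & 1 for b in (3, 2, 1, 0))  # MSB first
    return np.array(bits, dtype=np.uint8)
```

With the same secret key the same features always map to the same bit string, which is what makes the hash key-dependent yet reproducible.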

2.2. Distributed source coding

DSC deals with the compression of multiple correlated sources that do not communicate with each other. As shown in Fig. 3, X and Y are the features of the original and the target images, respectively. A DSC encoder is used to compress X by exploiting the correlation between X and Y. Slepian and Wolf (1973) proved that the individually achievable rates RX and RY are bounded by Eq. (1):

RX ≥ H(X|Y),
RY ≥ H(Y|X),
RX + RY ≥ H(X, Y).    (1)

Fig. 3 – The sources X and Y are the features of the original and the target images, respectively. The features X can be compressed at a rate H(X|Y), which is much lower than H(X). With the help of Y, X can be decoded.

After the DSC, the correlation between X and Y is reduced. Therefore, DSC not only shortens the hash code but also improves the security of the image authentication. At the decoder side, X can be decoded with the help of the side information Y. If the coding rate RX is higher than H(X|Y), X can be decoded without error (Slepian and Wolf, 1973). Since the features Yl of a legitimately processed image are more correlated with those of the original image than the features Yt of a maliciously attacked image, the resulting conditional entropy H(X|Yl) is far less than H(X|Yt). If the DSC encoder adopts a rate RX in the range (H(X|Yl), H(X|Yt)), the decoder, with the help of the side information Yl, is able to recover X without error. Since the correlation between the original image and the modified image can be modeled as a virtual channel with the source X as input and the source Y as output, channel codes such as LDPC and turbo codes are usually used to implement DSC. In this work, we employ a powerful LDPC-based DSC introduced in Varodayan et al. (2012) to compress the hash code. It is reported that the performance of this code is close to the Slepian–Wolf bound (Varodayan et al., 2012).

Here, we use an example to illustrate how a simple Slepian–Wolf code can compress X. Suppose that the features of the original image and the target image, X and Y, are both 3-bit binary sequences. Since a legitimate modification will not change the image greatly, it is reasonable to assume that X and Y differ in at most one bit. If Y is available to the decoder but not to the encoder, we only need 2 bits to represent X instead of 3 bits. If the decoder knows that either X = 111 or X = 000, it can resolve this uncertainty by checking which of them is closer in Hamming distance to Y, since the Hamming distance between X and Y is no more than one. Therefore, the encoder does not need to differentiate the pair of sequences {111, 000}. Likewise, the encoder need not differentiate the pairs {110, 001}, {101, 010} and {011, 100}, whose members are at Hamming distance 3 from each other. Finally, the encoder only needs 2 bits to index the four pairs, and at the decoder side, with the help of Y, the decoder can decode correctly. For example, let the encoder use the 2-bit sequences 00, 01, 10, 11 to represent the four pairs {111, 000}, {110, 001}, {101, 010} and {011, 100}, respectively. If X = 110 and Y = 010, the encoder uses 01 to represent X and the index 01 is sent to the decoder. The decoder knows that X is in the pair {110, 001}, whose index is 01. Since Y = 010, the decoder determines that X̂ = 110, which is equal to X and one bit away from Y.

2.3. Authentication system

An image authentication process consists of two steps: generation of authentication data and identification. In the first step, the authentication data are extracted from the original image. In the second step, the authentication decision is made by comparing the statistical correlation between the target image and the hash. Fig. 4 illustrates the detailed procedure of WAT based image authentication using DSC. The two steps are described as follows:

1. Generation of authentication data: First, an intermediate hash code X is generated from the original image using the WAT based perceptual hashing described in the previous section. Then


Fig. 4 – The procedure of WAT based image authentication using distributed source coding (DSC).

this intermediate hash code X is further encoded as En(X) by the DSC encoder. The encoded value En(X) is taken as the final authentication data. At the same time, a traditional cryptographic digest function is applied to X to generate a hash digest e.

2. Identification: Suppose that the target image y is obtained through the two-state channel with x as the input. A hash code Y is generated from y using the same WAT based perceptual hashing with the same secret key. The DSC decoder is then employed to decode the authentication data En(X) with the help of the side information Y. The decoded data are denoted as X̂, which further goes through the same cryptographic digest function to generate a digest ê. If ê differs from e, the target image is identified as tampered. Otherwise, the target image is further examined by the inside controller, which calculates the Euclidean distance d(Y, X̂) between the decoded hash X̂ and the side information Y. The distance is given by Eq. (2), where n is the size of X:

d(Y, X̂) = Σ_{i=1}^{n} (Y_i − X̂_i)².    (2)

Finally, the controller determines whether the target image is authentic or tampered using a pre-set threshold θ. If d > θ, the target image is considered a tampered version; otherwise, it is considered authentic. By collecting a large number of distances d in an independent stage, the range of θ can be found. The specific value of θ is determined by the requirements of the application: if the application requires a higher probability of false positive and a lower probability of false negative, a larger θ is selected.

Table 2 – NHDs between maliciously attacked images and original ones.

Image | NHD
(b) | 0.2571
(e) | 0.3536
(g) | 0.3214
(k) | 0.3286
(c) | 0.3929
(f) | 0.4000
(h) | 0.3393
(l) | 0.2036
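The identification step of Section 2.3 can be sketched as below. The SHA-256 digest, the threshold value and the feature vectors are illustrative assumptions standing in for the paper's unspecified digest function and the trained θ:

```python
import hashlib
import numpy as np

THETA = 0.5  # hypothetical pre-set threshold, found from training distances

def identify(e: bytes, x_hat: np.ndarray, y: np.ndarray) -> str:
    """Return 'tampered' or 'authentic' given the stored digest e, the
    decoded hash x_hat, and the side information y."""
    e_hat = hashlib.sha256(x_hat.tobytes()).digest()
    if e_hat != e:                    # digest mismatch: DSC decoding failed
        return "tampered"
    d = np.sum((y - x_hat) ** 2)      # distance of Eq. (2)
    return "tampered" if d > THETA else "authentic"
```

A target image is accepted only when the decoded hash reproduces the original digest and its features stay within θ of the side information.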

3. Simulation analysis

In this work, 100 grayscale original images of size 512 × 512 have been used for simulation. The legitimately modified images are produced by manipulations such as lossy compression, Gaussian noise, salt and pepper noise and low-pass filtering with different parameters. Following Lin et al. (2012), the reconstructed images of the lossy compression manipulation with PSNR no less than 30 dB have been used. The illegitimately modified images of the tampered state channel have been obtained by overlaying a text banner of size 179 × 39, as shown in Fig. 1, at a random location in the reconstructed images.
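The malicious attack of the tampered channel can be simulated as follows; the flat white block is a placeholder for the actual text banner used in the paper:

```python
import numpy as np

def overlay_banner(img: np.ndarray, banner_w=179, banner_h=39, seed=None):
    """Paste a 179x39 'text banner' block at a random location to simulate
    the malicious attack (banner content is a placeholder)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    r = int(rng.integers(0, img.shape[0] - banner_h + 1))
    c = int(rng.integers(0, img.shape[1] - banner_w + 1))
    out[r:r + banner_h, c:c + banner_w] = 255  # white block stands in for text
    return out
```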

3.1. Normalized Hamming distance

In order to justify our choice of using DSC to compress the WAT based hash value, we calculated the normalized Hamming distance (NHD) between the hash codes of the original images and the manipulated ones. Table 1 tabulates the NHDs for three sample images (Lena, Pepper and Mandrill) and their manipulated versions, as well as the average NHD over all original images and their manipulated versions. From Table 1, it can be observed that the NHDs between the hashes of the original images and the manipulated ones are small in most cases. Table 2 shows the NHDs between the hash values of the original images and the tampered images listed in Fig. 5. It is observed that these NHDs are normally larger than the distances between the hash values of the original images and the non-maliciously processed ones. The tampering destroys the distribution of the original waveform in WAT, which leads to large changes in the coefficients and results in an enlarged NHD. Comparing Tables 1 and 2, it can be observed that modified images, whether tampered or legitimately modified, are usually correlated with the original image.
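The NHD used throughout this section is simply the fraction of differing bits between two binary hash vectors:

```python
import numpy as np

def nhd(h1: np.ndarray, h2: np.ndarray) -> float:
    """Normalized Hamming distance between two binary hash vectors."""
    if h1.shape != h2.shape:
        raise ValueError("hash codes must have equal length")
    return float(np.count_nonzero(h1 != h2)) / h1.size
```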

Table 1 – NHDs between different legitimately manipulated images and original ones.

Manipulation | Lena | Pepper | Mandrill | Average
JPEG (Factor = 25) | 0.0179 | 0.0357 | 0.0536 | 0.0517
Gaussian noise (Variance = 5) | 0.0321 | 0.0500 | 0.0643 | 0.0679
Salt & pepper noise (Density = 0.01) | 0.0607 | 0.0393 | 0.0571 | 0.0791
Gaussian filter (Variance = 0.5, Window = 5) | 0.0071 | 0.0214 | 0.0321 | 0.0369
Median filter (Filter size = 5 × 5) | 0.0357 | 0.0571 | 0.1286 | 0.0978
Contrast change (20%) | 0.0214 | 0.0500 | 0.0250 | 0.0267
Laplacian sharpening (Operator = 0.3) | 0.0286 | 0.0214 | 0.0357 | 0.0284
Histogram equalization | 0.1143 | 0.1107 | 0.1643 | 0.1859



Fig. 5 – Tampered target images (the images in the first row are the original ones, while the other rows are the tampered images).

The NHDs are usually smaller than 0.5. Moreover, the NHDs between the original image and legitimately modified images are smaller than those for tampered images, which indicates that the hash values of legitimately modified images are more correlated with the original image than those of tampered images. This provides an opportunity to compress the hash code with DSC. After carefully choosing the code rate, the compressed hash value can be correctly decoded with the help of a legitimately modified image, but decoding fails with the help of a maliciously attacked image.

3.2. Minimum correctly decodable rates

In order to justify the choice of employing DSC, the MDRs for the legitimately and illegitimately modified images are calculated, respectively. As shown in Fig. 6, the MDRs of the DSC bitstream for legitimately and illegitimately modified images are compared. Here, 100 original images have been modified under various manipulations, including lossy compression, Gaussian noise, salt & pepper noise and Gaussian low-pass filtering, in the legitimate state channel. The horizontal axes show the corresponding parameters of the four manipulations: in (a), (b), (c) and (d) they are the PSNR of the reconstructed image, the noise density, the filter size (HSIZE) with standard deviation (SIGMA) and the noise variance, respectively. An additional malicious attack of overlaying a text banner at a random location in the processed images has been applied in the tampered state channel. Both the theoretical bound and the feasible values using DSC are plotted. The theoretical bound is the least coding rate for encoding X with lossless decoding, which is the conditional entropy H(X|Y). The feasible value is the least coding rate achieved with the practical LDPC code introduced in Varodayan et al. (2012). It should be noted that the practical coding rate is a little larger than the theoretical bound, since the LDPC code approaches the theoretical bound only for very large data blocks. It can be seen that the rate required for correct decoding with legitimately created side information is significantly lower than the rate for illegitimately created side information. Moreover, the rate for legitimate side information decreases with the magnitude of manipulation, while the rate for illegitimate side information remains very high. If the DSC encoder chooses a coding rate in this gap, the decoding will be successful with the help of legitimately modified images but will fail with the help of maliciously attacked images.
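Under a binary-symmetric model of the hash-bit correlation (an assumption; the paper does not state an explicit channel model), the theoretical bound H(X|Y) reduces to the binary entropy of the bit-flip rate between the two hashes:

```python
import math

def bsc_conditional_entropy(p: float) -> float:
    """H(X|Y) in bits/symbol when the hash bits of X and Y differ
    independently with probability p (binary symmetric channel model)."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```

For instance, a flip rate around 0.05 (typical legitimate NHD in Table 1) gives a bound near 0.29 bits/symbol, while a rate around 0.35 (typical tampered NHD in Table 2) gives roughly 0.93 bits/symbol, consistent with the large rate gap seen in Fig. 6.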

3.3. ROC curves

In this paper, the receiver operating characteristic (ROC) curve is employed to measure the performance of the proposed scheme by showing the rates of false rejection and false acceptance. The false rejection rate is the probability that a legitimately modified image is falsely considered a tampered image, while the false acceptance rate is the probability that a tampered image is falsely considered a legitimately modified image. Here, we also compare the proposed method with Lin et al. (2012) using 1-bit quantization. In order to make a fair comparison, a similar configuration is set up: in the legitimate state channel, the target image is processed with lossy compression and reconstruction using JPEG and JPEG2000 with PSNR no less than 30 dB, and in the tampered state channel an additional text banner of size 179 × 39 is overlaid at a random location in the processed images. Both the random projection based scheme in Lin et al. (2012) and the proposed scheme require that the coding rate be chosen so that decoding succeeds with the help of the legitimately modified images; thus we have to choose a coding rate that can


Fig. 6 – Minimum correctly decodable rates (MDRs) of the proposed scheme under legitimate modifications, based on 100 images: (a) lossy compression, (b) salt & pepper noise, (c) Gaussian noise, (d) low-pass filtering.

be used to decode all legitimately modified images. As shown in Fig. 7, the ROC curve of the proposed scheme with coding rate of 0.4 is not plotted in Fig. 7 since the decoding for some legitimately modified images failed. We can find that the proposed scheme achieves a much better performance in terms of ROC curve where the false acceptance and false rejection rates are both lower than the scheme introduced in Lin et al. (2012). This is because WAT could adapt to arbitrary local directions of a pattern, and sparsely represent anisotropic patterns aligned with the axes. Oscillatory functions and oriented textures in WAT have a dramatically sparser expansion compared to random projection. The setup

of coding rate with a value of 1 is equivalent to the scheme without using DSC. It could be also observed that the employed DSC can reduce the probability of false positive.
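The ROC points can be computed from the collected distances by sweeping the threshold θ; a minimal sketch over hypothetical distance samples:

```python
import numpy as np

def roc_points(d_legit, d_tampered, thresholds):
    """False-rejection and false-acceptance rate pairs for each threshold θ,
    given distances d from legitimate and tampered target images."""
    pts = []
    for theta in thresholds:
        frr = np.mean(np.asarray(d_legit) > theta)      # legit flagged tampered
        far = np.mean(np.asarray(d_tampered) <= theta)  # tampered accepted
        pts.append((float(frr), float(far)))
    return pts
```

Sweeping θ from small to large trades false rejections for false acceptances, tracing out the curve plotted in Fig. 7.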

3.4. Conditional entropy

Entropy is usually employed to evaluate the security of the hashing (Kang et al., 2009; Swaminathan et al., 2006; Venkatesan et al., 2000). However, since the modified image is correlated to the original one and their hash codes tend to be close in Hamming distance or Euclidean distance even for an image under malicious attack, the attacker could use the statistical

Please cite this article in press as: Yanchao Yang, Junwei Zhou, Feipeng Duan, Fang Liu, Lee-Ming Cheng, Wave atom transform based image hashing using distributed source coding, journal of information security and applications (2016), doi: 10.1016/j.jisa.2016.09.001

ARTICLE IN PRESS journal of information security and applications ■■ (2016) ■■–■■

correlation to forge the hash code X of the original image. The number of exhaustive searches required to forge the hash code is proportional to δ^H(X|Y) instead of δ^H(X) for some δ > 1, since H(X|Y) ≤ H(X) (Cover and Thomas, 2006), where H(X|Y) is the conditional entropy between the hash codes of the modified image and the original one and H(X) is the entropy of the original image's hash. The larger the conditional entropy between the hash codes, the higher the randomness and the larger the number of exhaustive searches required to forge the hash code X. In this subsection, we calculate the conditional entropy of X and Y with and without DSC, as shown in Table 3. A lossy compression operation is applied to the "Lena" image and it is reconstructed with PSNR 30 dB. When RX = 1, the DSC is not employed; the corresponding column presents the conditional entropy H(X|Y). We can see that X and Y are strongly correlated, since H(X|Y) is far less than 1. It can also be observed that the employed DSC raises the conditional entropy for both legitimate manipulation and malicious attack. With decreasing coding rate, the conditional entropy increases; moreover, the conditional entropy for malicious attack approaches the maximum value when RX < 0.8. Therefore, the employed DSC not only reduces the size of the hash code but also improves the security of the image authentication process.

Fig. 7 – Comparison between the random projection (RP) scheme introduced in Lin et al. (2012) and the proposed scheme in terms of receiver operating characteristic (ROC) curves at various coding rates (RP with RX = 1, 0.8, 0.4; proposed with RX = 1.0, 0.8).

Table 3 – Conditional entropy H(S(X)|Y) (bits per symbol).

Manipulations | RX = 1 | RX = 0.8 | RX = MDR
Legitimate manipulation | 0.3291 | 0.4114 | 0.7771
Malicious attack | 0.8400 | 1 | 1

4. Conclusion

An LDPC based DSC is employed in a WAT based image authentication system in this work. Maliciously modified images are distinguished by assessing the Euclidean distance between the wave atoms of the original and the target image. The employed DSC exploits the correlation between the wave atoms of the original and the target image to lower their statistical dependence. It not only reduces the size of the hash value but also improves the security of the image authentication system. The simulation results indicate that the proposed scheme provides better performance than the existing authentication system without DSC.

Acknowledgments

The work described in this paper was supported in part by the National Natural Science Foundation of China (Grant No. 61601337), by the Fundamental Research Funds for the Central Universities under Grant 2015III015-B04 and Grant 2015IVA034, by the Key Project of the Nature Science Foundation of Hubei Province (Grant No. ZRZ2015000393), by the National Key Technology Support Program of China (Grant No. 2012BAH45B01), by the Science & Technology Pillar Program of Hubei Province (Grant No. 2014BAA146) and by the Key Natural Science Foundation of Hubei Province of China (Grant No. 2015CFA069).

REFERENCES

Chetan K, Nirmala S. An efficient and secure robust watermarking scheme for document images using integer wavelets and block coding of binary watermarks. J Inf Secur Appl 2015;24–25:13–24.
Cover TM, Thomas JA. Elements of information theory. Wiley Series in Telecommunications and Signal Processing. Wiley-Interscience; 2006.
Demanet L, Ying L. Wave atoms and sparsity of oscillatory patterns. Appl Comput Harmon Anal 2007;23(3):368–87.
Ghosal S, Mandal J. Binomial transform based fragile watermarking for image authentication. J Inf Secur Appl 2014;19(4–5):272–81.
Kang LW, Lu CS, Hsu CY. Compressive sensing-based image hashing. In: IEEE International Conference on Image Processing; 2009, 1277–1280.
Lin Y-C, Varodayan D, Girod B. Image authentication using distributed source coding. IEEE Trans Image Process 2012;21(1):273–83.
Liu F, Cheng L-M, Leung H-Y, Fu Q-K. Wave atom transform generated strong image hashing scheme. Opt Commun 2012;285(24):5008–18.
Lv X, Wang ZJ. Perceptual image hashing based on shape contexts and local feature points. IEEE Trans Inf Foren Secur 2012;7(3):1081–93.
Slepian D, Wolf JK. Noiseless coding of correlated information sources. IEEE Trans Inf Theory 1973;19(4):471–80.
Sun Q, Chang S-F, Maeno K, Suto M. A new semi-fragile image authentication framework combining ECC and PKI infrastructures. IEEE Int Symp Circ Syst 2002;2:440–3.
Swaminathan A, Mao YN, Wu M. Robust and secure image hashing. IEEE Trans Inf Foren Secur 2006;1(2):215–30.


Tagliasacchi M, Valenzise G, Tubaro S. Hash-based identification of sparse image tampering. IEEE Trans Image Process 2009;18(11):2491–504.
Varodayan D, Lin YC, Girod B. Adaptive distributed source coding. IEEE Trans Image Process 2012;21(5):2630–40.
Venkatesan R, Koon SM, Jakubowski MH, Moulin P. Robust image hashing. In: International Conference on Image Processing; 2000, 664–666.

Zhao J, Zhao W. Passive forensics for region duplication image forgery based on Harris feature points and local binary patterns. Math Probl Eng 2013;619564:1–12.
Zhao Y, Wang S, Zhang X, Yao H. Robust hashing for image authentication using Zernike moments and local features. IEEE Trans Inf Foren Secur 2013;8(1):55–63.
