The Journal of Systems and Software 84 (2011) 669–678
A data hiding scheme using the varieties of pixel-value differencing in multimedia images
Cheng-Hsing Yang a, Chi-Yao Weng b, Hao-Kuan Tso c, Shiuh-Jeng Wang d,∗
a Department of Computer Science, National Pingtung University of Education, Pingtung 900, Taiwan
b Department of Computer Science, National Tsing-Hua University, Hsinchu 300, Taiwan
c Department of Computer Science and Communication Engineering, Army Academy R.O.C., Chungli, Taoyuan 320, Taiwan
d Department of Information Management, Central Police University, Taoyuan 333, Taiwan
Article info
Article history: Received 10 May 2010; Received in revised form 8 November 2010; Accepted 8 November 2010; Available online 4 December 2010
Keywords: Capacity; Pixel-value differencing; Image quality
Abstract
In this paper, a capacity-promoting technique is proposed for embedding data in an image using pixel-value differencing (PVD). The PVD scheme embeds data by changing the difference value between two adjacent pixels, so that more data is embedded into two pixels located in an edge area than in a smooth area. In order to increase the embedding capacity, a new approach is proposed that searches edge areas more flexibly. Instead of processing one pair of pixels at a time, as proposed by Wu and Tsai, two pairs of pixels in a block are processed at the same time. In addition, we propose a pixel-value shifting scheme to further increase the chances of embedding data. Our scheme exploits the edge areas more efficiently, leading to an increase in embedding capacity over Wu and Tsai's method, as shown by the experimental results. The embedding results of our scheme also pass Fridrich et al.'s detection. Furthermore, based on the distribution of difference values, more practical range partitions are suggested for improving capacity.
1. Introduction

Nowadays, the Internet has become a common communication channel. Communicating over a public system raises problems such as data security and copyright protection. Ciphering is a well-known method of security protection (Chu and Chang, 1999; Highland, 1997), but it has the disadvantage of making a message unreadable, thereby attracting the attention of eavesdroppers. This makes steganography, which hides data within data, a good choice for secret communications (Bender et al., 1996; Anderson and Petitcolas, 1998; Artz, 2001; Lee and Chen, 2000; Yu et al., 2005). One of the best-known steganographic methods is least-significant-bit (LSB) substitution (Chan and Chen, 2004; Chang et al., 2002; Li et al., 2009). Simple LSB substitution replaces a fixed number of least-significant bits of each pixel with the same number of secret bits. Although the technique is efficient, it rather easily creates distortion that is noticeable to the human eye or detectable by steganalysis programs (Lee and Chen, 2000; Fridrich et al., 2001; Ker, 2007). Therefore, several adaptive methods have been proposed for steganography in order to decrease the distortion caused by LSB substitution (Li et al., 2009; Yang and Wang, 2006; Yang, 2008).
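For reference, the simple LSB substitution just described can be sketched as follows; this is a minimal illustration with our own function name, not code taken from any of the cited works.

```python
def lsb_embed(pixel: int, bits: str) -> int:
    """Replace the k least-significant bits of an 8-bit pixel with k message bits."""
    k = len(bits)
    return (pixel >> k << k) | int(bits, 2)

# Embedding the two bits '10' into a pixel of value 77 (binary 01001101) gives 78 (01001110).
assert lsb_embed(77, "10") == 78
```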
In addition, some methods use the concept of human vision to avoid detection by such programs (Wu and Tsai, 2003; Chang and Tseng, 2004).

Recently, Wu and Tsai proposed a "pixel-value differencing" steganographic method that uses the difference value between two adjacent pixels in a block to determine the number of embeddable secret bits (Wu and Tsai, 2003). This difference value is adjusted so as to embed the secret bits, and the change between the original and new difference values is distributed between the two pixels. To check their method, the authors applied the dual statistics method (Fridrich et al., 2001), known as the RS-diagram, to the output of the embedding method. In the RS-diagram, discrimination and flipping functions are first applied to classify pixel groups as Regular (R), Singular (S), or Unusable (U). Then, the percentages of regular and singular groups under the masks m = [0 1 1 0] and −m = [0 −1 −1 0] are computed; they are denoted Rm, R−m, Sm, and S−m, respectively. Finally, the RS-diagram applies the hypotheses Rm ≅ R−m and Sm ≅ S−m to present the detection result. In 2005, Wu et al. proposed a method that combines pixel-value differencing and LSB replacement. Their approach provides higher capacity than pixel-value differencing, but it does not pass the RS-diagram detection; the reason is shown in Yang et al. (2007). Consequently, Liu and Shih proposed a generalized pixel-value differencing method in 2008. Their approach not only provides high capacity but also passes the RS-diagram detection (Fridrich et al., 2001).
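As a rough, hedged sketch of the group-classification step of the dual statistics method, the following code assumes the usual discrimination function f(G) = Σ|x_{i+1} − x_i| and the standard flipping functions F1 and F−1; the helper names are ours, and the complete RS-diagram additionally plots Rm, R−m, Sm, and S−m against the embedding rate.

```python
def f(group):
    # Discrimination function: total variation of the pixel group.
    return sum(abs(b - a) for a, b in zip(group, group[1:]))

def flip(x, direction):
    # F1 flips within the pairs (0,1), (2,3), ...; F-1 flips within (-1,0), (1,2), ..., (255,256).
    if direction == 1:
        return x ^ 1
    if direction == -1:
        return ((x + 1) ^ 1) - 1
    return x  # direction 0 leaves the pixel unchanged

def classify(group, mask):
    # Returns 'R' (regular), 'S' (singular), or 'U' (unusable) for one group under a mask.
    flipped = [flip(x, m) for x, m in zip(group, mask)]
    before, after = f(group), f(flipped)
    return 'R' if after > before else ('S' if after < before else 'U')

# Rm is the fraction of regular groups under m = [0, 1, 1, 0]; R-m uses the negated mask.
groups = [[52, 53, 55, 54], [120, 118, 119, 121]]
m, neg_m = [0, 1, 1, 0], [0, -1, -1, 0]
Rm = sum(classify(g, m) == 'R' for g in groups) / len(groups)
R_neg_m = sum(classify(g, neg_m) == 'R' for g in groups) / len(groups)
```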
Fig. 1. RS-diagrams yielded by the dual statistics method of Fridrich et al. (2001) for stego-images produced by two methods: (a) conventional 2-bit LSB substitution; (b) the pixel-value differencing method.
In this paper, we propose an efficient steganographic scheme to hide data imperceptibly in gray-level images. The scheme is based on the property of the human eye of being more sensitive to changes in smooth areas than in edge areas (Lee and Chen, 2000; Wu and Tsai, 2003; Chang and Tseng, 2004; Wang et al., 2008; Yang et al., 2008). We process a block of four neighboring pixels at a time. The number of secret bits embedded in a block depends on its degree of smoothness or sharpness. Each four-pixel block is divided into two two-pixel groups, and each group is processed using the pixel-value differencing approach. In order to extract the embedded data correctly, careful rules are designed to determine which pixels belong to which group. In Wu and Tsai's (2003) method, some conditions cause a block to be abandoned without embedding. To overcome this drawback, a new technique called pixel-value shifting is proposed. Experimental results show that our scheme not only provides higher embedding capacity than that of Wu and Tsai but also passes Fridrich et al.'s (2001) detection.

The paper is organized as follows. Wu and Tsai's method is introduced in Section 2. Our scheme is presented in Section 3. The experimental results are shown in Section 4. Some analyses and discussions are given in Section 5. Finally, the conclusions are provided in Section 6.

2. Literature reviews

Wu and Tsai's steganographic method hides secret data in gray-level images by pixel-value differencing. First, the host image is partitioned into non-overlapping, consecutive two-pixel blocks by scanning all the rows of the host image in a zigzag manner. A difference value d is calculated from the two pixels, say pi and pi+1, of each block. By symmetry, only the possible absolute values of d (0 through 255) are considered, and they are classified into a number of contiguous ranges Ri, where i = 1, 2, 3, . . ., n. The width of Ri is ui − li + 1, where ui is the upper bound and li is the lower bound of Ri. The width of each range is taken as a power of 2; this restriction facilitates the embedding of binary data. If d falls in a smooth area, less secret data is hidden in the block; if d falls in an edge area, the block has higher tolerance and more secret data can be embedded in it. Suppose that d falls into the range Rk. The number of embedded bits is determined by the width of Rk, and the embedding operation replaces d with a new difference value d′, which is the sum of the embedded value and the lower bound of Rk. Finally, an inverse calculation from d′ generates the new gray values of the two pixels in the block. Note that the new gray values of the two pixels must lie in the range [0, 255]. Therefore, if the new gray values obtained with uk, the maximum possible value of d′, fall outside the range [0, 255], the block is abandoned and carries no data.

In the extracting phase, the secret data are extracted from the blocks of the stego-image in the same order as in the embedding phase.
The number of secret bits embedded in a two-pixel block is determined by the range Rk into which the difference value of the two pixels falls. The value of the embedded data in the block is then obtained by subtracting the lower bound of Rk from the difference value of the block, so the embedded bits can be reconstructed. To verify the security of the proposed method, the authors applied the statistical steganalytic technique of Fridrich et al. (2001), the RS-diagram, to show that the method is undetected. The results are shown in Fig. 1.

3. Our approach

In this section, we introduce our steganographic scheme based on blockwise embedding. We use the idea of pixel-value differencing, but we process four pixels simultaneously instead of two at a time. In the pixel-value differencing approach of Wu and Tsai, two pixels at a time are grouped for embedding secret data. Fig. 2(a) shows the only grouping result of Wu and Tsai's method for a four-pixel block. However, as shown in Fig. 2(b), there are three possible grouping results. In order to embed data more efficiently, our approach considers these different grouping results. Moreover, new techniques are proposed to avoid the additional information that would otherwise be needed to record the selected grouping. The grouping, embedding, and extracting procedures of our approach are described in the following subsections.

3.1. Pairwise grouping of a block

The host images used in our scheme are 256-level gray images. Two difference values d1 and d2 are computed from each non-overlapping block of four neighboring pixels, say pi,j, pi,j+1, pi+1,j+1, and pi+1,j, of a given host image. The host image is partitioned into four-pixel blocks by running through all the rows in a raster scan. The four pixels pi,j, pi,j+1, pi+1,j+1, and pi+1,j are renamed p1, p2, p3, and p4 so that their corresponding gray values g1, g2, g3, and g4 satisfy g1 ≤ g2 ≤ g3 ≤ g4. The four-pixel block is partitioned into two two-pixel groups, (g1, g4) and (g2, g3). The group to which pi,j belongs is defined as group1, and the other is defined as group2. The two group differences are computed as (g4 − g1) and (g3 − g2); the group difference of group1 is denoted d1 and that of group2 is denoted d2. Each difference value di (where i = 1 or 2) may range from 0 to 255.
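The grouping just described can be sketched as follows; this is our own illustrative helper (ties between gray values, which are resolved by the rules of Figs. 3 and 4, are not handled here).

```python
def group_block(block):
    """block: gray values (p_ij, p_ij+1, p_i+1j+1, p_i+1j) in raster order.
    Returns (group1, group2, d1, d2), where group1 is the pair containing p_ij."""
    order = sorted(range(4), key=lambda i: block[i])   # indices sorted by gray value
    outer = (block[order[0]], block[order[3]])         # (g1, g4)
    inner = (block[order[1]], block[order[2]])         # (g2, g3)
    if 0 in (order[0], order[3]):                      # p_ij sits at raster index 0
        group1, group2 = outer, inner
    else:
        group1, group2 = inner, outer
    d1, d2 = group1[1] - group1[0], group2[1] - group2[0]
    return group1, group2, d1, d2

# The block of Fig. 5, (130, 136, 122, 110), yields group1 = (122, 130) with d1 = 8
# and group2 = (110, 136) with d2 = 26.
print(group_block((130, 136, 122, 110)))
```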
Fig. 2. Grouping results of a four-pixel block: (a) the only grouping result used by Wu and Tsai’s method; (b) all possible grouping results.
Fig. 3. All conditions in which the gray value of pi,j differs from the three other gray values.
The range [0, 255] is partitioned into a number of contiguous ranges Rk, where k = 1, 2, 3, . . ., r. The lower and upper bounds of Rk are denoted lk and uk, where l1 = 0 and ur = 255. The width of Rk is (uk − lk + 1). In order to embed binary data, the width of each range is taken as a power of 2. A difference value that falls in the range with index k is said to have index k.

The secret message is embedded by replacing the difference values of the two groups in a block. In other words, the value of each group difference is replaced by some other value in the same range. Suppose that d1 and d2 are replaced by d1′ and d2′, respectively. Then d1 and d1′ must lie in the same range; similarly, d2 and d2′ lie in the same range. If the pixels resulting from the inverse calculation fall outside the range [0, 255], we shift the gray values of the two pixels of a group in the same direction concurrently, so that both gray values fall into [0, 255]. This technique is called pixel-value shifting. Checking whether g1′, g2′, g3′, and g4′ all fall into the range [0, 255] is called the falling-outside-boundary checking.

3.2. Data embedding

The secret message can be seen as a long bit stream. The task is to embed this stream into the four-pixel blocks. The number of bits that can be embedded varies and is decided by the widths of the ranges to which the difference values of the block belong. Let the group difference di (where i = 1, 2) fall into the range of index ki. Then the number of bits, say ni, to be embedded in the ith group is ni = log2(uki − lki + 1). We embed n1 bits into group1 and n2 bits into group2. Let S1 and S2 be the bit streams of the secret message to be embedded, where |S1| = n1 and |S2| = n2. The two new differences d1′ and d2′ are computed by

di′ = lki + bi, where i = 1, 2.    (1)

Here, b1 and b2 are the decimal values of S1 and S2, embedded into group1 and group2, respectively.
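For concreteness, the range lookup and Eq. (1) can be sketched as follows, assuming the first set of widths used in Section 4 (8, 8, 16, 32, 64, and 128); the function names are ours.

```python
WIDTHS = [8, 8, 16, 32, 64, 128]   # ranges [0,7], [8,15], [16,31], [32,63], [64,127], [128,255]
RANGES, lower = [], 0
for w in WIDTHS:
    RANGES.append((lower, lower + w - 1))
    lower += w

def range_of(d):
    """Return (l_k, u_k) of the range R_k into which the difference value d falls."""
    return next((lk, uk) for lk, uk in RANGES if lk <= d <= uk)

def bits_for(d):
    """n_i = log2(u_k - l_k + 1): number of secret bits carried by this group."""
    lk, uk = range_of(d)
    return (uk - lk + 1).bit_length() - 1

def new_difference(d, secret_bits):
    """Eq. (1): d_i' = l_k + b_i, where b_i is the decimal value of the embedded bits."""
    lk, _ = range_of(d)
    return lk + int(secret_bits, 2)

# A difference of 8 lies in [8, 15], so it carries 3 bits; embedding '110' gives d' = 8 + 6 = 14.
assert bits_for(8) == 3 and new_difference(8, "110") == 14
```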
By replacing d1 and d2 with d1′ and d2′, respectively, an inverse calculation from d1′ and d2′ generates the new pixel values g1′, g2′, g3′, and g4′ of the pixels p1, p2, p3, and p4. For two reasons, the pixel values g1′, g2′, g3′, and g4′ may need to be modified further. First, all of g1′, g2′, g3′, and g4′ must fall into the range [0, 255]. Second, the two groups must remain distinguishable after embedding. The key to distinguishing the two groups is to maintain the two intervals [g1′, g4′] and [g2′, g3′] such that one of them contains the other. We use pixel-value shifting to modify g1′, g2′, g3′, and g4′ so that these conditions are satisfied. The embedding algorithm is described as follows:

1. Partition the host image into non-overlapping blocks of four neighboring pixels (pi,j, pi,j+1, pi+1,j+1, pi+1,j) in a raster scan manner.
2. For each block, do the following steps:
(a) Rename the pixels of the block as p1, p2, p3, and p4, with corresponding pixel values g1, g2, g3, and g4.
(b) Distinguish the two groups as group1 and group2.
(c) Compute the group differences d1 and d2 for group1 and group2, respectively, and find the index ki of the range into which di (where i = 1, 2) falls.
(d) Calculate the new difference values d1′ and d2′ by Eq. (1).
(e) Replace d1 and d2 with d1′ and d2′ and calculate the new pixel values g1′, g2′, g3′, and g4′ by the inverse calculation

    if (g1, g4) is group1:
        g4′ = g4 + ⌈m1/2⌉, g1′ = g1 − ⌊m1/2⌋
        g3′ = g3 + ⌈m2/2⌉, g2′ = g2 − ⌊m2/2⌋
    else:
        g3′ = g3 + ⌈m1/2⌉, g2′ = g2 − ⌊m1/2⌋
        g4′ = g4 + ⌈m2/2⌉, g1′ = g1 − ⌊m2/2⌋    (2)

where mi = di′ − di, i = 1, 2.
(f) Use pixel-value shifting to shift the pixel values g1′, g2′, g3′, and g4′ so that either g1′ ≤ g2′ ≤ g3′ ≤ g4′ or g2′ ≤ g1′ ≤ g4′ ≤ g3′ holds, g1′, g2′, g3′, and g4′ all fall into the range [0, 255], and the square error (g1′ − g1)² + (g2′ − g2)² + (g3′ − g3)² + (g4′ − g4)² is minimized.
(g) Use pixel-value shifting to shift one of the groups following the conditions shown in Fig. 4 (a detailed description is given later).

In Step 2(f), it is easy to see that (g1′ − g1)² + (g2′ − g2)² + (g3′ − g3)² + (g4′ − g4)² is minimal before shifting. However, in order to distinguish group1 and group2, the condition g1′ ≤ g2′ ≤ g3′ ≤ g4′ or g2′ ≤ g1′ ≤ g4′ ≤ g3′ must be satisfied. Therefore, we shift the groups (g1′, g4′) and (g2′, g3′) until the condition is satisfied. Note that we define the group to which pixel pi,j belongs as group1; the main job of distinguishing group1 and group2 is to find the exact partner of pi,j among the pixels pi,j+1, pi+1,j+1, and pi+1,j. Figs. 3 and 4 show all possible conditions after Step 2(f). All of them satisfy the condition that one interval of [g1′, g4′] and [g2′, g3′] contains the other.
Fig. 4. The conditions in which the gray value of pi,j is equal to at least one of the other gray values.
Fig. 3 shows all possible conditions in which the gray value of pi,j differs from the other three gray values. Under these conditions, pi,j can find its partner exactly. Note that if the gray value of the partner of pi,j is the same as the gray value of some other pixel, any one of them can be picked as the partner of pi,j. For example, Fig. 2(b) shows one such case, where either of the two right-side pixels can be chosen as the partner of pi,j. Fig. 4(a)–(e) shows the conditions in which the gray value of pi,j is exactly the same as one of the three other gray values. Under these conditions, pixel-value shifting must be applied again, except in the case of Fig. 4(e). For instance, pi,j cannot distinguish its partner in the case of Fig. 4(a). This can be solved by shifting: group1 shifts right by one unit or group2 shifts left by one unit. The possible results are shown on the right side of Fig. 4(a), and both of them allow the partner of pi,j to be determined. Of course, each shifting operation is available only if its result passes the falling-outside-boundary checking, and the available result with the smaller square error is chosen. Fig. 4(f)–(i) shows the conditions in which the gray value of pi,j is the same as two of the three other gray values. After shifting group1 or group2 by one unit, the right sides of Fig. 4(f)–(i) show all possible results, which allow the partner of pi,j to be determined. Note that Fig. 4(g) and (i) are two special cases, where shifting may transform one case into the other (and vice versa). In both cases, pi,j cannot find its partner exactly. However, this problem can be solved by defining the partner in one of the cases. In our approach, we define the partner of pi,j as the pixel whose gray value is the same as that of pi,j, as shown in the case of Fig. 4(i). Therefore, there is no need of shifting in the case of Fig. 4(i), and the case of Fig. 4(g) can be solved by shifting.
Fig. 5. An illustration of the data embedding process.
Fig. 6. Two of the experimental host images with size 512 × 512: (a) Peppers; (b) Baboon.
Fig. 7. Two secret images with size 256 × 256: (a) Boat; (b) Airplane.
Fig. 8. The embedding results using the set of widths 8, 8, 16, 32, 64, and 128: (a) the stego-images produced by our scheme; (b) the stego-images produced by PVD; (c) the enhanced difference images between the host images and the stego-images of our scheme; (d) the enhanced difference images between the host images and the stego-images of PVD.
Finally, Fig. 4(j) shows the condition in which all four gray values are the same. In this condition, there is no need of shifting.

An example of the embedding process is as follows. As shown in Fig. 5, the pixel values of a sample block (pi,j, pi,j+1, pi+1,j+1, pi+1,j) are (130, 136, 122, 110). The pixels (pi,j, pi,j+1, pi+1,j+1, pi+1,j) are renamed (p3, p4, p2, p1) so that their gray values satisfy g1 ≤ g2 ≤ g3 ≤ g4. Now we have (g1, g2, g3, g4) = (110, 122, 130, 136) and pi,j = p3. Because pi,j always belongs to group1, the two groups are distinguished as group1 = (g2, g3) and group2 = (g1, g4). The group difference d1 is 130 − 122 = 8 and d2 is 136 − 110 = 26, where d1 falls in the range [8, 15] and d2 falls in the range [16, 31]. The numbers of bits to be embedded into group1 and group2 are n1 = log2(15 − 8 + 1) = 3 and n2 = log2(31 − 16 + 1) = 4, respectively.
Fig. 9. The embedding results using the set of widths 16, 16, 32, 64, and 128: (a) the stego-images produced by our scheme; (b) the stego-images produced by PVD; (c) the enhanced difference images between the host images and the stego-images of our scheme; (d) the enhanced difference images between the host images and the stego-images of PVD.
Assume that the secret data are 1100100011. The bit stream embedded into group1 is 110, and the bit stream 0100 is embedded into group2. Eq. (1) is used to calculate d1′ and d2′: d1′ = 8 + (110)₂ = 14 and d2′ = 16 + (0100)₂ = 20. Then the new pixel values (g2′, g3′) = (119, 133) and (g1′, g4′) = (113, 133) are calculated by the inverse calculation of Step 2(e) of the embedding algorithm. Because the new pixel values satisfy the condition g1′ ≤ g2′ ≤ g3′ ≤ g4′, Step 2(f) is skipped. However, these new pixel values correspond to the case of Fig. 4(a), so Step 2(g) follows. The result of pixel-value shifting is (g2′, g3′) = (118, 132), because its square error (118 − 122)² + (132 − 130)² = 20 is less than that of the alternative (g2′, g3′) = (113, 127), whose square error is (113 − 122)² + (127 − 130)² = 90.
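The arithmetic of this example can be reproduced with a short standalone script; the variable names are ours, and the final line reflects our reading of the chosen shift.

```python
import math

block = (130, 136, 122, 110)                      # (p_ij, p_ij+1, p_i+1j+1, p_i+1j)
g1, g2, g3, g4 = sorted(block)                    # 110, 122, 130, 136
d1, d2 = g3 - g2, g4 - g1                         # group1 = (g2, g3): 8, group2 = (g1, g4): 26
d1_new = 8 + int("110", 2)                        # range [8, 15]:  d1' = 14
d2_new = 16 + int("0100", 2)                      # range [16, 31]: d2' = 20
m1, m2 = d1_new - d1, d2_new - d2                 # 6 and -6
g2n, g3n = g2 - math.floor(m1 / 2), g3 + math.ceil(m1 / 2)   # (119, 133)
g1n, g4n = g1 - math.floor(m2 / 2), g4 + math.ceil(m2 / 2)   # (113, 133)
# g3' equals g4', so Step 2(g) shifts group1 down by one unit to (118, 132),
# whose square error of 20 is smaller than the alternative's 90.
print(g1n, g2n - 1, g3n - 1, g4n)                 # final gray values 113 118 132 133
```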
Table 1
The capacities and PSNRs for embedding a random bit stream, image Boat, and image Airplane by our approach.

                          Widths of 8, 8, 16, 32, 64, and 128                      Widths of 16, 16, 32, 64, and 128
Cover images (512 × 512)  Random bits       Boat              Airplane             Random bits       Boat              Airplane
                          Capacity  PSNR    Capacity  PSNR    Capacity  PSNR       Capacity  PSNR    Capacity  PSNR    Capacity  PSNR
Lena                      410,854   40.54   410,854   41.24   410,854   41.21      528,966   36.75   528,966   37.38   528,966   37.23
Baboon                    482,515   34.67   482,515   35.31   482,515   35.21      559,222   34.30   559,222   34.46   559,222   34.32
Peppers                   408,281   40.47   408,281   41.19   408,281   41.05      528,791   36.83   528,791   37.38   528,791   37.27
Toys                      418,948   38.55   418,948   37.47   418,948   39.22      533,455   36.51   533,455   36.68   533,455   36.56
Sailboat                  530,888   38.11   530,888   38.73   530,888   38.56      535,984   36.51   535,984   36.43   535,984   36.28
Girl                      410,199   40.74   410,199   40.63   410,199   40.44      527,846   36.54   527,846   37.13   527,846   36.93
Gold                      418,575   40.25   418,575   40.85   418,575   40.87      529,319   37.15   529,319   37.53   529,319   37.42
Zelda                     402,163   41.18   402,163   42.34   402,163   42.18      526,145   37.78   526,145   37.77   526,145   37.63
Barb                      456,479   35.33   456,479   36.70   456,479   36.46      548,206   34.80   548,206   35.30   548,206   35.11
Tiffany                   408,486   39.24   408,486   40.80   408,486   40.66      528,678   36.35   528,678   37.05   528,678   36.93
Table 2
Comparisons of the results for embedding a random bit stream into host images by Yang et al.'s method, Wu and Tsai's method, and our approach.

                          Yang et al.'s method                 Wu and Tsai's method                  Our approach
Cover images (512 × 512)  Widths of 16, 16, 32, 64, 128, 256   Widths of 8, 8, 16, 32, 64, and 128   Widths of 8, 8, 16, 32, 64, and 128
                          Capacity   PSNR                      Capacity   PSNR                       Capacity   PSNR
Lena                      408,828    39.59                     406,632    41.71                      410,854    40.54
Baboon                    481,288    33.45                     437,806    38.90                      482,515    34.67
Peppers                   406,688    37.93                     401,980    41.07                      408,281    40.47
Toys                      418,274    37.09                     406,656    39.93                      418,948    38.55
Sailboat                  426,396    37.21                     415,554    40.67                      430,888    38.11
Girl                      406,254    38.73                     402,965    42.54                      410,199    40.74
Gold                      414,528    38.05                     405,634    42.20                      418,575    40.25
Zelda                     400,698    38.94                     398,584    42.66                      402,163    41.18
Barb                      456,800    34.39                     442,529    36.24                      456,479    35.33
Tiffany                   406,144    38.69                     403,764    41.47                      408,486    39.24
Average                   422,589    37.41                     412,210    40.74                      424,739    38.91
3.3. Data extracting

The process of extracting the embedded message is similar to the embedding process, with the same traversal order over all blocks. First, we rename the pixels of a block as (p1, p2, p3, p4) and distinguish group1 and group2. Then, the values di, ki, and ni (i = 1, 2) of this block are found. Note that the calculation of these values is the same as in the embedding process, except that the block now comes from the stego-image. The bit-stream values b1 and b2 embedded in this block are then extracted by

bi = di − lki, where i = 1, 2.    (3)

Finally, each value bi is transformed into a bit stream of ni bits.
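A sketch of the per-block extraction of Eq. (3), reusing the range-table idea from Section 3.2 (the names are ours, and the widths 8, 8, 16, 32, 64, and 128 are assumed):

```python
WIDTHS = [8, 8, 16, 32, 64, 128]
RANGES, lower = [], 0
for w in WIDTHS:
    RANGES.append((lower, lower + w - 1))
    lower += w

def extract_bits(d):
    """Eq. (3): b_i = d_i - l_k, rendered as a bit string of n_i = log2(width) bits."""
    lk, uk = next((lk, uk) for lk, uk in RANGES if lk <= d <= uk)
    n = (uk - lk + 1).bit_length() - 1
    return format(d - lk, "0{}b".format(n))

# The stego groups (118, 132) and (113, 133) of the embedding example give differences
# 14 and 20, which decode back to '110' and '0100'.
assert extract_bits(14) == "110" and extract_bits(20) == "0100"
```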
Table 3
The distributions of difference values of Wu and Tsai's method for various images.

Cover images  [0, 7]               [8, 15]              [16, 31]             [32, 63]             [64, 127]           [128, 255]
              No.      Ratio (%)   No.      Ratio (%)   No.      Ratio (%)   No.      Ratio (%)   No.     Ratio (%)   No.   Ratio (%)
Lena          105,622  80.58       15,038   11.47       7643     5.83        2534     1.93        235     0.18        0     0.00
Baboon        66,283   50.57       30,957   23.62       23,559   17.97       9788     7.47        485     0.37        0     0.00
Peppers       112,598  85.91       11,606   8.85        4819     3.68        1809     1.38        240     0.18        0     0.00
Toys          110,377  84.21       11,419   8.71        5055     3.86        3650     2.78        565     0.43        6     0.00
Sailboat      86,980   66.36       26,895   20.52       12,271   9.36        4417     3.37        473     0.36        36    0.03
Girl          103,373  78.87       19,351   14.76       7059     5.39        1177     0.90        112     0.09        0     0.00
Gold          102,379  78.11       18,693   14.26       7727     5.90        2128     1.62        145     0.11        0     0.00
Zelda         114,796  87.58       11,777   8.99        3660     2.79        809      0.62        30      0.02        0     0.00
Barb          78,228   59.68       20,420   15.58       17,796   13.58       11,954   9.12        2652    2.02        22    0.02
Tiffany       109,720  83.71       13,222   10.09       5635     4.30        2218     1.69        275     0.21        2     0.00
Average       99,036   75.56       17,938   13.68       9522     7.27        4048     3.09        521     0.4         7     0.00
Table 4
The differences between the distributions of difference values of ours and Wu and Tsai's method.

Cover images  [0, 7]     [8, 15]   [16, 31]   [32, 63]   [64, 127]   [128, 255]
Lena          −5709      3161      1214       999        330         5
Baboon        −17,813    −2620     2766       11,285     6255        127
Peppers       −10,137    6443      1952       1291       430         21
Toys          −11,303    4014      4570       1587       614         518
Sailboat      −10,253    1474      3893       3603       1227        56
Girl          −7825      2748      3533       1140       195         209
Gold          −20,834    10,506    7865       2313       150         0
Zelda         −6406      3815      1872       496        177         46
Barb          −7850      929       1629       4151       980         161
Tiffany       −5420      2670      1719       698        81          252
Average       −10,355    3314      3101       2756       1044        140
Table 5
The number of difference values which fall out of the range [0, 255].

Strategies     Lena  Baboon  Peppers  Toy  Sailboat  Girl  Gold  Zelda  Barb  Tiffany
PVD            0     0       84       112  49        0     0     0      69    59
Our approach   0     0       0        0    0         0     0     0      0     0
Table 6
The results of our approach using the widths of 8, 16, 32, 64, 128, and 8.

Our scheme (ranges 8–16–32–64–128–8)
Cover images   Lena     Baboon   Peppers  Toy      Sailboat  Girl     Gold     Zelda    Barb     Tiffany
Capacity       432,333  524,785  428,909  438,658  465,837   435,678  452,917  419,507  488,357  427,248
PSNR           38.86    33.31    39.15    37.77    36.77     38.38    38.17    40.08    35.09    38.67
4. Experimental results

In our experiments, we used ten host images of size 512 × 512; two of them are shown in Fig. 6. Two sets of widths, which partition the range [0, 255], are used in the experiments. The first experiment uses the widths 8, 8, 16, 32, 64, and 128, which partition the range [0, 255] into the ranges [0, 7], [8, 15], [16, 31], [32, 63], [64, 127], and [128, 255]. The second experiment is based on the widths 16, 16, 32, 64, and 128. We used a random bit stream and the images "Boat" and "Airplane" as separate secret messages in the experiments. The images "Boat" and "Airplane" are 256 × 256 gray images, as shown in Fig. 7. If a random bit stream is used as secret data, the reported capacity and PSNR values are the averages over 100 runs with different random bit streams. The capacities and PSNRs of the embedding results for the two sets of widths are shown in Table 1.
Fig. 10. RS-diagrams yielded by the dual statistics method of Fridrich et al. (2001) for various experiments: (a) simple 2-bit LSB substitution; (b) pixel-value differencing (Wu and Tsai, 2003); (c) our method using widths of 8, 8, 16, 32, 64, and 128; (d) our method using widths of 16, 16, 32, 64, and 128.
The capacities of the second experiment are higher than those of the first experiment because the second experiment uses larger widths; consequently, its PSNR values are worse. The results of the first experiment for the host images "Lena" and "Baboon" are shown in Fig. 8. Fig. 8(a) and (b) shows the stego-images produced by our scheme and by PVD, and Fig. 8(c) and (d) shows the corresponding enhanced difference images between the stego-images and the host images (with the differences of gray values scaled eight times and inverted) for our scheme and for PVD. The results of the second experiment, with larger widths, are shown in Fig. 9. Observing the enhanced difference images in Fig. 8(c) and (d) and Fig. 9(c) and (d), it can be seen that the major distortions occur on the edges of the images, which means that more secret data are embedded in the edge areas. It can also be seen that Fig. 8(c) carries more embedded secret data than Fig. 8(d); similarly, Fig. 9(c) embeds more than Fig. 9(d). Table 2 compares our results with those of Yang et al.'s and Wu and Tsai's methods. According to Table 2, our scheme has a higher capacity while maintaining unnoticeable distortion of the image in each experimental case.

We also applied the detection tool for pixel-value differencing schemes, the dual statistics method proposed by Fridrich et al. (2001). The detection results are shown in Fig. 10. It can be observed that simple 2-bit LSB substitution can indeed be detected, because, as the percentage of pixels embedded by simple 2-bit LSB substitution approaches 100%, the percentages of the regular and singular groups become more and more equal under mask m and more and more unequal under mask −m. However, the RS-diagrams of the other three experiments, shown in Fig. 10(b)–(d), indicate that the stego-images do not seem to contain any embedded data, because the expected value of Rm is close to that of R−m and that of Sm is close to that of S−m.
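For reference, the PSNR values reported in Tables 1, 2, and 6 follow the usual definition for 8-bit gray-level images; a minimal sketch (not taken from the paper) is:

```python
import math

def psnr(cover, stego):
    """PSNR in dB between two equally sized 8-bit gray-level images, given as flat pixel lists."""
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)
```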
5. Analyses and discussions

From Table 2, we see that our approach provides about 1.01–1.10 times the hiding capacity of Wu and Tsai's method, while the PSNR value drops by between 1.48 and 4.23 dB. Basically, our approach is based on the PVD scheme. For the same range partition, the higher hiding capacity of our approach means that more edge areas are found, that is, more difference values fall into the later ranges with larger widths. Ranges with large widths can embed more secret bits but cause the PSNR values to drop. However, the drop in PSNR is acceptable because edge areas can tolerate larger distortions for human eyes, as shown in Figs. 8(a), (b) and 9(a), (b).

In this section, we show the distributions of difference values in both Wu and Tsai's PVD method and our blockwise scheme. Also, a more practical range partition is suggested in order to increase the capacity, based on the observed distribution of difference values. Three conclusions are presented in this section.

First, our approach finds more edge areas than Wu and Tsai's method does. Table 3 shows the distributions of difference values of Wu and Tsai's method for various images, and Table 4 shows the differences between our distributions and theirs. In Table 4, a positive number is the amount by which our approach exceeds Wu and Tsai's method in the same range, and a negative number is the amount by which it falls below. Table 4 shows that larger difference values are found by our scheme; for example, for the image Baboon an extra 11,285 difference values fall into the range [32, 63] compared with PVD. In other words, more secret data can be embedded into the edge areas by our scheme.

Second, our approach can avoid the falling-outside-boundary conditions. We propose the pixel-value shifting technique to shift pixel values, which resolves the falling-outside-boundary conditions and hence increases the amount of embedded data.
Table 5 shows the number of pairwise pixels that fall outside the boundary in Wu and Tsai's method and in ours. Although the number of such cases in their method is not large, our approach guarantees that all pairwise pixels are usable.

Finally, the range partition suggested by Wu and Tsai is reasonable, but it is not practical. In their partition, the later ranges are larger because edge areas can tolerate larger distortions; in that sense, their partition is reasonable. However, as shown in Table 3, about 90% of the difference values fall into the ranges [0, 7] and [8, 15], whereas few difference values fall into the ranges [64, 127] and [128, 255]. Hence, distinguishing the widths of the ranges [0, 7] and [8, 15] is needed, while the large widths of the ranges [64, 127] and [128, 255] have almost no effect. We therefore suggest a more practical partition with the widths 8, 16, 32, 64, 128, and 8. The experimental results are shown in Table 6. Compared to the results of our approach shown in Table 2, the capacities in Table 6 increase by 17,344–42,270 bits, and the PSNR values are still acceptable to human eyes.

6. Conclusions

In this paper, we propose a new capacity-promoting technique that embeds data in an image while keeping human vision in mind. Our blockwise approach extends the pixel-value differencing approach by processing a four-pixel block at a time. If a block is located in a sharp area, more secret data are embedded by our scheme in a pixel-value differencing manner. The experimental results demonstrate that our approach finds more edge areas than the original PVD scheme. Therefore, our approach provides a better way of embedding more secret data into host images based on the PVD approach. In addition, like the original PVD scheme, our approach passes the detection of Fridrich et al. Finally, we suggest a more practical range partition to increase the capacity of the PVD approach.

Acknowledgements

This research was partially supported by the National Science Council of the Republic of China under Grants NSC 98-2221-E-015-001-MY3 and NSC 99-2918-I-015-001.

References

Anderson, R.R., Petitcolas, F.A.P., 1998. On the limits of steganography. IEEE Journal on Selected Areas in Communications 16 (4), 474–481.
Artz, D., 2001. Digital steganography: hiding data within data. IEEE Internet Computing 5, 75–80.
Bender, W., Gruhl, D., Morimoto, N., Lu, A., 1996. Techniques for data hiding. IBM Systems Journal 35, 313–316.
Chan, C.K., Chen, L.M., 2004. Hiding data in images by simple LSB substitution. Pattern Recognition 37 (3), 469–474.
Chang, C.C., Tseng, H.W., 2004. A steganographic method for digital images using side match. Pattern Recognition Letters 25 (12), 1431–1437.
Chang, C.C., Chen, T.S., Chung, L.Z., 2002. A steganographic method based upon JPEG and quantization table modification. Information Sciences 141 (1), 123–138.
Chu, Y.H., Chang, S., 1999. Dynamical cryptography based on synchronized chaotic systems. IEE Electronics Letters 35 (12), 974–975.
Fridrich, J., Goljan, M., Du, R., 2001. Reliable detection of LSB steganography in grayscale and color images. In: Proceedings of the ACM Workshop on Multimedia and Security, pp. 27–30.
Highland, H.J., 1997. Data encryption: a non-mathematical approach. Computers and Security 16, 369–386.
Ker, A.D., 2007. Steganalysis of embedding in two least-significant bits. IEEE Transactions on Information Forensics and Security 2 (1), 46–54.
Lee, Y.K., Chen, L.H., 2000. High capacity image steganography. IEE Proceedings - Vision, Image and Signal Processing 147 (3), 288–294.
Li, X., Yang, B., Cheng, D.F., Zeng, T.Y., 2009. A generalization of LSB matching. IEEE Signal Processing Letters 16 (2), 69–72.
Liu, J.C., Shih, M.H., 2008. Generalizations of pixel-value differencing steganography for data hiding in images. Fundamenta Informaticae 83 (3), 319–335.
Wang, C.M., Wu, N.I., Tsai, C.S., Hwang, M.S., 2008. A high quality steganographic method with pixel-value differencing and modulus function. Journal of Systems and Software 81 (1), 150–158.
Wu, D.C., Tsai, W.H., 2003. A steganographic method for images by pixel-value differencing. Pattern Recognition Letters 24 (9–10), 1613–1626.
Wu, H.C., Wu, N.I., Tsai, C.S., Hwang, M.S., 2005. Image steganographic scheme based on pixel-value differencing and LSB replacement methods. IEE Proceedings - Vision, Image and Signal Processing 152 (5), 611–615.
Yang, C.H., 2008. Inverted pattern approach to improve image quality of information hiding by LSB substitution. Pattern Recognition 41 (8), 2674–2683.
Yang, C.H., Wang, S.J., 2006. Weighted bipartite graph for locating optimal LSB substitution for secret embedding. Journal of Discrete Mathematical Sciences & Cryptography 9 (1), 153–164.
Yang, C.H., Wang, S.J., Weng, C.Y., 2007. Analyses of pixel-value-differencing schemes with LSB replacement in steganography. In: The Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 445–448.
Yang, C.H., Wang, S.J., Weng, C.Y., Sun, H.M., 2008. Information hiding technique based on blocked PVD. Journal of Information Management 15 (3), 29–48.
Yu, Y.H., Chang, C.C., Hu, Y.C., 2005. Hiding secret data in images via predictive coding. Pattern Recognition 38 (5), 691–705.