Reversible data hiding in halftone images based on minimizing the visual distortion of pixels flipping

Journal Pre-proof

Xiaolin Yin, Wei Lu, JunHong Zhang, Wanteng Liu

PII: S0165-1684(20)30148-1
DOI: https://doi.org/10.1016/j.sigpro.2020.107605
Reference: SIGPRO 107605

To appear in: Signal Processing

Received date: 6 September 2019
Revised date: 29 March 2020
Accepted date: 1 April 2020

Please cite this article as: Xiaolin Yin, Wei Lu, JunHong Zhang, Wanteng Liu, Reversible Data Hiding in Halftone Images Based on Minimizing the Visual Distortion of Pixels Flipping, Signal Processing (2020), doi: https://doi.org/10.1016/j.sigpro.2020.107605

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2020 Published by Elsevier B.V.

Highlights

1. A minimization model of visual distortion is designed by calculating the visual quality score rather than relying on a look-up table.
2. A visual quality score is constructed by analyzing the structural properties to determine the optimal flippable pixels.
3. The performance of embedding payload is improved while ensuring high visual quality.

Reversible Data Hiding in Halftone Images Based on Minimizing the Visual Distortion of Pixels Flipping

Xiaolin Yin, Wei Lu*, JunHong Zhang, Wanteng Liu

School of Data and Computer Science, Guangdong Key Laboratory of Information Security Technology, Ministry of Education Key Laboratory of Machine Intelligence and Advanced Computing, Sun Yat-sen University, Guangzhou 510006, China

Abstract

With the rapid development of multimedia technology, reversible data hiding (RDH) has been proposed as a technique to protect hidden information and recover the original image. However, the visual quality of existing RDH methods in halftone images is typically not satisfactory. In this paper, to reduce visual distortion, a novel RDH method in halftone images based on minimizing the visual distortion of pixels flipping is proposed. First, based on the human visual system, suitable marked blocks are selected by a designed visual quality score. Then, by analyzing the block structure of halftone images, the optimal flippable pixels in the suitable marked blocks are selected to embed data. One advantage of the proposed method is that the distortion caused by pixels flipping is minimal. The other is that, to achieve reversibility, the positions of these optimal flippable pixels can be determined directly by calculating the minimal visual distortion, without recording a location map. Experimental results demonstrate that the proposed method ensures high visual quality for marked images and achieves a high embedding payload.

Keywords: Reversible data hiding, halftone images, minimizing visual distortion, visual quality score, embedding payload



*Corresponding author.
Email addresses: [email protected] (Xiaolin Yin), [email protected] (Wei Lu), [email protected] (JunHong Zhang), [email protected] (Wanteng Liu)

Preprint submitted to Elsevier, April 1, 2020

1. Introduction

With the rapid development of multimedia technology, multimedia security issues have gradually become a research hotspot [1, 2, 3, 4, 5]. As a common information carrier, digital images are widely used in various network environments due to their large quantity of information and high fidelity. With the spread of the Internet and imaging technology, people are exposed to a large variety of digital images carrying information in daily life. For multimedia security and integrity authentication, secret information can be embedded in digital images through data hiding methods. In recent years, data hiding has received a lot of attention. For most data hiding methods, the distortion caused by embedding messages in an original image is permanent and cannot be removed from the marked image after extracting the embedded messages. In some sensitive cases, such as law enforcement, medical imagery [6], and military imagery, such distortion is intolerable and it is essential to restore the original image. Consequently, reversible data hiding (RDH) has been proposed as a technique for completely extracting the hidden messages and recovering the original image. In Fig. 1, the framework of an RDH scheme is presented. In the embedding process, the secret message is embedded in an original image. In the extraction process, the marked image is perfectly recovered into the original image, and the secret message is correctly extracted.

Binary images are used in bi-level output devices such as printers, copiers, and fax machines, and files previously stored as binary images are still spreading across the Internet. Different from gray-scale and color images, halftone images are a special kind of binary image with two tones, converted from continuous multi-tone digital images to scattered dot images [7] by the halftoning process. Halftone images are usually generated by error diffusion [8, 9] and ordered dither [10]. Since a halftone image has only two tones, it transmits fast and needs little storage. One of the important applications for halftone images is facsimile. Since halftone images are still used as storage and transportation carriers, RDH for halftone images is still worth studying, although it has received less

Figure 1: The framework of RDH scheme including the embedding process and extraction process.

attention in recent years. Some RDH methods [11, 12, 13, 14, 15, 16, 17] in gray-scale, color, or encrypted images have been released. Some RDH methods based on difference expansion (DE) are proposed in [18, 19, 20], and other RDH methods based on histogram shifting (HS) are proposed in [21, 22, 23]. In [24, 25], RDH is achieved by integer transforming. However, these RDH methods in gray-scale and color images achieve a great embedding payload but do not apply to binary images, especially halftone images. Compared to 8 bits per pixel (bpp) in a gray-scale image and 24 bpp in a color image, a halftone image has only 1 bpp, which leads to fewer embeddable pixels. On the other hand, as common binary images are continuous instead of dot-like and scattered, the RDH methods in binary images proposed in [26, 27, 28] are not appropriate for halftone images either.

In recent years, some RDH methods in halftone images were proposed in [29, 30, 31, 32, 33]. In [31], Pan et al. converted each of a set of non-overlapping 4 × 4 blocks into a binary sequence and then rearranged it into a decimal number. A look-up table (LUT) is constructed by selecting several sets of similar block pattern pairs, pairing the patterns with the greatest number of appearances with patterns that appear only once. The similar block pattern pairs are designed based on the human visual system (HVS) and the visual response filter [34] so that they have the smallest distortion. In [33], an ordered dither halftoning process for halftone images is proposed, and the maximum span pixel pairs are constructed by dividing the pixel pairs into three categories, where black-and-white pixel pairs have the minimal number of appearances under this halftoning method. Lien et al. embedded the secret messages by swapping White-Black pixel pairs into Black-White pixel pairs. Both Pan's [31] and Lien's [33] methods can achieve a good

embedding payload because they use the patterns or pairs with the greatest number of appearances in original images to embed messages. However, considering the impact on visual quality caused by changing these patterns or pairs, the selected flipped pixels will lead to large image distortion. Consequently, it is still a challenging but significant task to achieve an RDH method in halftone images that keeps great visual quality under a high embedding payload.

In this paper, a novel RDH method based on minimizing the visual distortion of pixels flipping (MVD) between original and marked images is proposed. The principle of the proposed model is that, for an original image, there are numerous possible marked images after using various RDH methods. By obtaining the minimal difference of visual quality between the original image and the marked images, the optimal RDH method which generates these marked images can be designed. The proposed RDH method guarantees the minimal flipping visual distortion in the marked images. Based on MVD, a visual quality score is designed on overlapping 4 × 4 blocks, obtained through the comparison of correlation between an original block and its marked blocks with several flipped pixels. The higher the visual quality score, the more suitable the pixels are to embed messages by being flipped. To ensure reversibility while hiding and extracting data, the optimal flippable pixel in a block is defined as the pixel which maintains the highest visual quality score both from the original block to the marked block, and from the marked block back to the recovered original block. The proposed RDH method aims at improving visual quality and increasing the embedding payload. The main contributions of the proposed method are:

1. A minimization model of flipping visual distortion is designed. After embedding messages by flipping the optimal flippable pixels, the highest visual quality can be maintained between the original image and its marked image. Based on MVD, an RDH method is achieved by calculating the visual quality score rather than relying on the LUT;
2. The visual quality score is constructed by analyzing the structural properties of the

pixels and their surrounding pixels in halftone images, and the optimal flippable pixels can be selected;
3. The embedding payload is increased while ensuring high visual quality, as the proposed method is no longer limited to a small number of pattern pairs defined in the LUT.

The remainder of this paper is organized as follows. In Section 2, we describe the proposed RDH method based on minimizing the visual distortion of pixels flipping. The principle of optimizing the visual quality is constructed, and the structural properties of halftone images are analyzed. Then the selection of the optimal flippable pixels is presented. In Section 3, the embedding and extraction processes of the proposed RDH method are described in detail. Section 4 discusses the experimental results and shows that the proposed method achieves better visual quality and a greater embedding payload. Finally, the conclusion is presented in Section 5.

2. Model of flipping visual distortion

2.1. Principle of halftone visual quality

The objective visual quality of gray-scale images is directly evaluated by the peak-signal-to-noise ratio (PSNR) as well as the structural similarity index measure (SSIM). In gray-scale images, the data embedding process increases or decreases by 1 the values of pixels whose gray-scale values lie between the minimum point and the maximum point in the histogram. In the worst case, the gray-scale values of all pixels are increased or decreased by 1. Consequently, the PSNR of the marked image versus the original image, computed from the resultant mean square error (MSE), directly evaluates visual quality. The higher the PSNR, the fewer the pixels whose gray-scale values are modified. However, halftone images simulate the brightness of gray-scale images by describing the density of white scattered dots.
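The PSNR evaluation just described can be sketched in pure Python (assuming 8-bit gray-scale images, so the peak value is 255):

```python
import math

def psnr(original, marked, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized gray-scale
    images, computed from the mean square error (MSE)."""
    h, w = len(original), len(original[0])
    mse = sum((original[i][j] - marked[i][j]) ** 2
              for i in range(h) for j in range(w)) / (h * w)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

In the worst case mentioned above, every pixel value changes by 1, so the MSE is 1 and the PSNR is 10 log10(255^2), about 48.13 dB.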
Since there are many halftoning methods, it is hard to directly evaluate the visual quality of two halftone images generated from the same gray-scale image by two different halftoning methods. As shown in Fig. 2, there are examples of 3 halftone images generated by the same original gray-scale image using the dispersed ordered


Figure 2: Examples of 3 halftone images generated by the same original gray-scale image using 3 different halftoning methods, and the recovered images using the inverse halftoning method by low-pass filtering. (a) The original gray-scale image. (b) The halftone image generated by the dispersed ordered dither matrix [33]. (c) The halftone image generated by the dither algorithm of MATLAB R2015b. (d) The halftone image generated by error diffusion [9]. (e) The recovered image of the halftone image in (b) with PSNR=23.3361 and SSIM=0.3531 compared with the original gray-scale image in (a). (f) The recovered image of the halftone image in (c) with PSNR=24.2498 and SSIM=0.4194 compared with the original gray-scale image in (a). (g) The recovered image of the halftone image in (d) with PSNR=24.0664 and SSIM=0.4216 compared with the original gray-scale image in (a).
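The low-pass recovery used for Fig. 2(e)-(g) can be sketched as follows: Gaussian low-pass filtering with a 5 × 5 kernel and σ = 0.5, as stated in the text. Zero padding at the image border is an implementation assumption the paper does not specify.

```python
import math

def gaussian_kernel(size=5, sigma=0.5):
    """Normalized 2-D Gaussian kernel of the given size."""
    c = size // 2
    k = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2 * sigma ** 2))
          for j in range(size)] for i in range(size)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

def lowpass(halftone, kernel):
    """Filter a binary (0/1) halftone image, giving a multi-tone estimate."""
    h, w = len(halftone), len(halftone[0])
    n = len(kernel)
    c = n // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(n):
                for dx in range(n):
                    yy, xx = y + dy - c, x + dx - c
                    if 0 <= yy < h and 0 <= xx < w:  # zero padding at borders
                        acc += kernel[dy][dx] * halftone[yy][xx]
            out[y][x] = acc
    return out
```

Since the kernel is normalized, a region full of white (1) pixels recovers to full brightness, matching the low-pass behavior of the HVS described below.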

dither matrix [33], the dither algorithm of MATLAB R2015b, and error diffusion [9]. These halftoning methods present the brightness of the original gray-scale image in different ways, which results in different halftone images. As shown in Fig. 2(b), based on an ordered dithering matrix [33], a halftone image is generated by simply comparing the pixels in the original gray-scale image. The halftone images generated in [33] therefore present a distinct checkerboard pattern, and the Black-White pixel pair has a small appearance probability. The halftone images generated by the dither algorithm of MATLAB R2015b are relatively smooth, as shown in Fig. 2(c). For the halftone image in Fig. 2(d) generated by error diffusion [9], the black pixels are more densely distributed at the edges of the original image. In general, these three halftone images have different expressions but all map to the same gray-scale image shown in Fig. 2(a).

The process which reconstructs the 8-bit image from its halftoned version is referred to as "inverse halftoning" [35]. The inverse halftoning images of Fig. 2(b)-2(d) are shown in Fig. 2(e)-2(g), respectively. When viewed from a certain distance, a halftone image resembles the original multi-tone image due to the low-pass nature of the HVS [36]. The inverse halftoning method utilizes Gaussian low-pass filtering with 5 × 5 block size and σ = 0.5. In Fig. 2, the larger the PSNR and SSIM values, the closer the visual quality between the inverse halftoning image and the original image. Although the three groups of visual quality results are not the same, these 3 inverse halftoning images can all be mapped to the original image since the differences are small. The mapping from an original gray-scale image I_G to its set of halftone images {I_H} and the visual quality of the set of inverse halftoning images {I_V} are defined as:

{I_H} = Halftoning(I_G)
{I_V} = InverseHalftoning({I_H})    (1)
V_g(I_{V_i}) ≈ V_g(I_{V_j})

where V_g(*) is the visual quality evaluation of gray-scale images, and I_{V_i}, I_{V_j} ∈ {I_V} with i ≠ j. Consequently, different halftoning processes produce different halftone images, but the visual quality of the inverse halftone images is closely similar to that of their original gray-scale image. Also, while conducting various RDH methods to embed the same secret messages in


Figure 3: Examples of 3 marked images generated by different RDH methods. (a) The original halftone image. (b) The marked image I_{M_1} generated by RDH method M_{R_1} with 1000 pixels flipped, with PSNR = 24.1854. (c) The marked image I_{M_2} generated by RDH method M_{R_2} with 1000 pixels flipped, with PSNR = 24.1854. (d) The marked image I_{M_3} generated by RDH method M_{R_3} with 1400 pixels flipped, with PSNR = 22.7365.


halftone images, numerous results are presented in the marked images. In these different marked images, numerous pixels are flipped, and the selections of these flipped pixels may be completely different. Consequently, when evaluating whether a marked image has high visual imperceptibility, it is necessary to compare the marked image with its original halftone image. Note that the comparison of visual quality between two images is not a count of differing pixels, since flipping fewer pixels in a halftone image does not mean less visual distortion. For example, in Fig. 3, there are 3 marked images produced by 3 different RDH methods. The marked images I_{M_1} in Fig. 3(b) and I_{M_2} in Fig. 3(c) have the same PSNR, as the same number of pixels are flipped. However, several black pixels appear on the white background at the top left of the marked image I_{M_1}. It is easier to perceive that secret messages have been embedded in marked image I_{M_1} than in marked image I_{M_2}. Even though more pixels have been flipped in the marked image I_{M_3} shown in Fig. 3(d) than in the marked image I_{M_1}, the marked image I_{M_3} is less perceptible. Consequently, visual distortion should measure the visual similarity between the original halftone image and the marked image under the same visual quality evaluation. When this visual distortion is minimized or even zero, we call the model the one which minimizes the visual distortion of pixels flipping (MVD).

2.2. Minimizing the visual distortion of pixels flipping

The proposed RDH method is constructed based on minimizing the visual distortion between the original and marked image. Let I_O be the original halftone image and {I_M} be its set of marked halftone images in which the secret messages are embedded using the RDH methods {M_R}. When |V_h(I_O) − V_h(I_{M_i})| is minimal, the RDH method M_{R_i} attains the minimum visual distortion, where V_h(*) is the visual quality evaluation of halftone images, and I_{M_i} ∈ {I_M}, M_{R_i} ∈ {M_R}. While the visual distortion between the original halftone image I_O and its marked halftone image I_{M_i} is minimized, the RDH method M_{R_i} has considerable visual quality performance. Based on MVD, the purpose of the proposed RDH method is to design the optimal flippable pixels that minimize the visual distortion between the original and marked images.

Based on MVD and the visual quality analysis of halftone images mentioned in Section

Figure 4: An image divided into overlapping 4 × 4 blocks with step length 3; only the 4 pixels b_1, b_2, b_3, b_4, which are affected by the other 12 neighbor pixels, can be considered for flipping in the proposed method.
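The block layout of Fig. 4 can be sketched as follows (the positions of b_1 to b_4 as the 2 × 2 center of each 4 × 4 block are an assumption read off the figure; with step length 3, adjacent blocks share a 1-pixel border, so the centers never overlap):

```python
def block_origins(height, width, block=4, step=3):
    """Top-left corners of the overlapping block x block blocks with the
    given step length; blocks that would cross the border are skipped."""
    return [(y, x)
            for y in range(0, height - block + 1, step)
            for x in range(0, width - block + 1, step)]

def candidate_pixels(origin):
    """Absolute coordinates of the 4 candidate pixels b1, b2, b3, b4,
    assumed to be the 2 x 2 center of the block (reading order of Fig. 5:
    b1 b2 on the upper row, b4 b3 on the lower row)."""
    y, x = origin
    return [(y + 1, x + 1), (y + 1, x + 2), (y + 2, x + 2), (y + 2, x + 1)]
```

For the 22 × 22 image used later in Fig. 7, this layout gives a 7 × 7 grid of blocks.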

2.1, we conduct an RDH method by calculating the visual quality of halftone images. For an image divided into overlapping 4 × 4 blocks as shown in Fig. 4, only the 4 pixels b_1, b_2, b_3, b_4, which are affected by the other 12 neighbor pixels, can be considered for flipping in the proposed method. For an RDH method, the primary work is to select the optimal flippable pixels as candidates to embed messages. By flipping the 4 pixels b_1, b_2, b_3, b_4 in the original block B_org respectively, the 4 marked blocks in the set {B_mark} are generated. Among {B_mark}, the suitable marked block B_{mark_x}, which flips the pixel b_x of B_org, x = 1, 2, 3, 4, satisfies:

B_{mark_x} = arg max Vs(B_org, {B_mark})    (2)

where the visual quality score Vs(*) is designed to evaluate visual quality. For data hiding methods, once the suitable block B_{mark_x} and the flipped pixel b_x have been determined, the visual distortion can be minimized. To achieve reversibility, one more flipping action is needed to select the optimal flippable pixels. After selecting the suitable marked block B_{mark_x}, the 4 pixels b_1, b_2, b_3, b_4 in B_{mark_x} are flipped again respectively, and the 4 flipped blocks in the set {B_{flip_x}} are generated. Among {B_{flip_x}}, the flipped block B_{flip_{xy}}, which flips


the pixel b_y of B_{mark_x}, y = 1, 2, 3, 4, satisfies:

B_{flip_{xy}} = arg max Vs(B_{mark_x}, {B_{flip_x}})    (3)

Based on Eq. (2) and Eq. (3), the optimal visual quality of (B_org, B_{mark_x}) and (B_{mark_x}, B_{flip_{xy}}) can be obtained, and the distortion d after these two flipping actions is denoted as:

d = Vs(B_org, B_{mark_x}) − Vs(B_{mark_x}, B_{flip_{xy}})    (4)

When the distortion d is minimized (in fact, d = 0 is required, which is discussed in Section 2.3), the optimal flippable pixel is selected. The visual quality score Vs evaluates the visual quality of two blocks. For example, Vs(B_org, B_{mark_x}) is calculated for an original block B_org and its marked block B_{mark_x}, defined as:

Vs(B_org, B_{mark_x}) = corr[(B_org ∗ f), (B_{mark_x} ∗ f)]    (5)

where corr(A, B) is the correlation coefficient which reflects the degree of similarity between the matrices A and B of size 4 × 4, defined as:

corr(A, B) = [ Σ_{u=1..4} Σ_{v=1..4} (A_uv − Ā)(B_uv − B̄) ] / sqrt[ (Σ_{u=1..4} Σ_{v=1..4} (A_uv − Ā)²) (Σ_{u=1..4} Σ_{v=1..4} (B_uv − B̄)²) ]    (6)
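Eq. (6) is the sample (Pearson) correlation coefficient over the matrix entries; a direct sketch in Python (the guard for a zero denominator, e.g. a constant block, is an added safeguard the paper does not discuss):

```python
import math

def corr(a, b):
    """Correlation coefficient of Eq. (6) between two equal-sized
    matrices, given as 2-D lists; the result lies in [-1, 1]."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0
```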

where Ā is the mean of the values in the matrix A. In Eq. (5), the 5 × 5 visual response filter f [34, 31] of the HVS is utilized, as follows:

f = (1/11.566) ×
    [ 0.1628  0.3215  0.4035  0.3215  0.1628 ]
    [ 0.3215  0.6352  0.7970  0.6352  0.3215 ]
    [ 0.4035  0.7970  1.0000  0.7970  0.4035 ]    (7)
    [ 0.3215  0.6352  0.7970  0.6352  0.3215 ]
    [ 0.1628  0.3215  0.4035  0.3215  0.1628 ]

(B_org ∗ f) and (B_{mark_x} ∗ f) are the 8 × 8 matrices obtained by convolving the block with the visual response filter f, under 1-pixel step and 4-pixel padding. The visual quality score Vs(B_org, B_{mark_x}) thus evaluates the visual correlation of two blocks after using

the visual response filter f for halftone images. For example, in Fig. 5, the marked block B_{mark_2} has the maximal Vs(B_org, B_{mark_2}) = 0.9973, and the pixel b_2 is selected according to Eq. (2). Consequently, the pixel b_2 is the most appropriate pixel for flipping in the original block B_org, and the marked block B_{mark_2} with b_2 flipped is most similar to B_org. This means that between B_org and B_{mark_2}, the image visual distortion is minimal; flipping any of the other 3 pixels b_1, b_3, b_4 would cause greater distortion for this original block. After finding the most appropriate pixel, which minimizes the visual distortion by being flipped, we need to determine whether this pixel is the optimal flippable pixel that achieves reversibility for the data receiver.

2.3. Optimal flippable pixel selection for RDH

As mentioned in Section 2.2, the second flipping action is the key step for RDH, which maximizes the visual similarity between the suitable block B_{mark_x} and its flipped block B_{flip_{xy}}. After obtaining the distortion d of the two flipping actions according to Eq. (4), d may or may not be 0. When d ≠ 0, the data receiver cannot determine whether the block has embedded data. Moreover, the flipped pixel b_x in the first flipping action is usually different from the flipped pixel b_y in the second flipping action, which confuses the data receiver when extracting data and recovering the image. Fig. 6 shows an example of unsuccessfully selecting the optimal flippable pixel. First, the suitable marked block B_{mark_1} is selected by flipping pixel b_1 based on the maximal Vs(B_org, B_{mark_1}) = 0.9941. Then, in the blue frame, the pixels b_1, b_2, b_3, b_4 in the marked block B_{mark_1} are flipped respectively, and the visual quality scores Vs(B_{mark_1}, B_{flip_{1y}}) are calculated. However, the maximal Vs(B_{mark_1}, B_{flip_{14}}) = 0.9943 means that pixel b_4 is flipped rather than pixel b_1, and the distortion is d = 0.0002. It is then ambiguous to the data receiver which pixel is the optimal flippable pixel, b_1 or b_4. Consequently, the distortion d is required to be 0. According to Eq. (4), the optimal flippable pixel is selected such that:

Vs(B_org, B_{mark_x}) = Vs(B_{mark_x}, B_{flip_{xy}})    (8)

It is obvious that when B_org = B_{flip_{xy}}, the distortion d = 0. In this way, the optimal flippable pixel b_x is unique, as b_x is the same pixel as b_y in the 2 flipping actions.
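Putting Eqs. (2)-(5), (7), and (8) together, the per-block selection can be sketched as below. The positions of b_1 to b_4 as the 2 × 2 block center are an assumption read off Fig. 4; the reversibility test checks that the best second flip undoes the first, which is equivalent to d = 0:

```python
import math

# The 5 x 5 HVS visual response filter f of Eq. (7).
F = [[v / 11.566 for v in row] for row in [
    [0.1628, 0.3215, 0.4035, 0.3215, 0.1628],
    [0.3215, 0.6352, 0.7970, 0.6352, 0.3215],
    [0.4035, 0.7970, 1.0000, 0.7970, 0.4035],
    [0.3215, 0.6352, 0.7970, 0.6352, 0.3215],
    [0.1628, 0.3215, 0.4035, 0.3215, 0.1628]]]

CENTER = [(1, 1), (1, 2), (2, 2), (2, 1)]  # assumed positions of b1..b4

def conv_full(block, kernel):
    """Full 2-D convolution: a 4 x 4 block and a 5 x 5 kernel give the
    8 x 8 result described in the text (1-pixel step, 4-pixel padding)."""
    n, m = len(block), len(kernel)
    out = [[0.0] * (n + m - 1) for _ in range(n + m - 1)]
    for i in range(n):
        for j in range(n):
            for u in range(m):
                for v in range(m):
                    out[i + u][j + v] += block[i][j] * kernel[u][v]
    return out

def corr(a, b):
    """Correlation coefficient of Eq. (6)."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def vs(block_a, block_b):
    """Visual quality score of Eq. (5): correlation of the filtered blocks."""
    return corr(conv_full(block_a, F), conv_full(block_b, F))

def flip(block, pos):
    """Return a copy of the 0/1 block with the pixel at pos flipped."""
    out = [row[:] for row in block]
    out[pos[0]][pos[1]] ^= 1
    return out

def optimal_flippable_pixel(b_org):
    """Eq. (2): pick the flip with the highest Vs; Eqs. (3), (4), (8):
    the pixel is optimal only if the best second flip undoes the first,
    so that d = 0 and the receiver can locate the pixel unambiguously."""
    x = max(CENTER, key=lambda p: vs(b_org, flip(b_org, p)))
    b_mark = flip(b_org, x)
    y = max(CENTER, key=lambda p: vs(b_mark, flip(b_mark, p)))
    return x if x == y else None
```

Because the receiver can repeat exactly the same two maximizations on the marked block, the d = 0 condition makes the embedded position recoverable without a location map.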

Figure 5: The example of selecting an optimal flippable pixel in an original block based on minimizing the visual distortion of pixels flipping. First, in the red frame, the suitable marked block B_{mark_2} is selected by flipping pixel b_2 according to Eq. (2), based on the maximal Vs(B_org, B_{mark_2}) = 0.9973. Then, in the blue frame, the pixels b_1, b_2, b_3, b_4 in the marked block B_{mark_2} are flipped respectively, and the visual quality scores Vs(B_{mark_2}, B_{flip_{2y}}) are calculated. Finally, the pixel b_2 is determined as an optimal flippable pixel in the original block B_org with the maximum Vs(B_{mark_2}, B_{flip_{22}}) = 0.9973 according to Eq. (3), and the distortion d = 0.


Figure 6: The example of unsuccessfully selecting the optimal flippable pixel. First, in the red frame, the suitable marked block B_{mark_1} is selected by flipping pixel b_1 based on the maximal Vs(B_org, B_{mark_1}) = 0.9941. Then, in the blue frame, the pixels b_1, b_2, b_3, b_4 in the marked block B_{mark_1} are flipped respectively, and the visual quality scores Vs(B_{mark_1}, B_{flip_{1y}}) are calculated. However, no pixel is determined as an optimal flippable pixel in the original block B_org, because the maximal Vs(B_{mark_1}, B_{flip_{14}}) = 0.9943 corresponds to flipping pixel b_4 rather than b_1, and the distortion d = 0.0002.


For example, in Fig. 5, the suitable marked block B_{mark_2} has the maximal Vs(B_org, B_{mark_2}) = 0.9973 according to Eq. (2). Then the pixels b_1, b_2, b_3, b_4 in the marked block B_{mark_2} are flipped respectively, and the visual quality scores Vs(B_{mark_2}, B_{flip_{2y}}) are calculated. The selection of the optimal flippable pixel in the marked image is determined and unique for the receiver according to Vs(B_org, B_{mark_2}) = Vs(B_{mark_2}, B_{flip_{22}}), and thus the pixel b_2 is the optimal flippable pixel. Based on the MVD, the selection of optimal flippable pixels is designed so that the visual distortion between the original and marked image is minimized. Also, through calculating the visual distortion, the position of the embedded pixels can be accurately and uniquely known to the data receiver in the reversible process.

3. Proposed method

3.1. Embedding sequence construction

The construction of the embedding sequence should be prepared before the data embedding and extraction processes are presented in detail. To ensure the feasibility of reconstruction, the original optimal flippable pixels that are selected to embed messages need to be recorded; this record is called the overhead information. To efficiently compress the embedding sequence, the number of "0"s in the overhead information should be much greater than the number of "1"s, or vice versa. This depends on whether the optimal flippable pixels selected in the proposed RDH method are mostly black pixels or mostly white pixels. We often pay more attention to a less probable event [37]. In [37, 38, 39, 7], the concept of "pixel density" is widely used. In halftone images, the visual impact caused by flipping a pixel differs between originally white and originally black pixels. Black pixels in halftone images usually serve as the background pixels. In contrast, the density of white pixels is used to simulate the brightness in gray-scale images. According to the human visual system, an area is bright when the white pixels are in high density; conversely, an area is dark when the white pixels are sparse. In special cases, if an area is full of white pixels, this region in its original gray-scale image is also pure white. In Section 4.1,

Figure 7: Example of the embedding process and the areas for the embedding sequence S_e in a 22 × 22 image. The secret messages M_s are embedded in the optimal flippable pixels in the green frame in the 4 × 4 red blocks. The original values of the bottom area I_change are embedded in the optimal flippable pixels in the orange frame in the 4 × 4 blue blocks. The recording overhead information I_over includes the original value Val(M_s) and the original value Val(I_change). I_over is compressed as I'_over, and I'_over is embedded in the bottom area in the purple frame.


it is shown that the probability of a large white pixel density in the 3 datasets is low, indicating that most halftone images contain fewer white pixels than black pixels. Modifying white pixels therefore attracts more attention and makes the RDH method easier to perceive. As foreground pixels, white pixels represent brightness, edge contours, and content in a halftone image. Flipping white pixels into black pixels destroys this information and degrades visual quality, so the black pixels in halftone images are the more flippable ones for embedding messages. The optimal flippable pixels selected based on Eq. (8) in the proposed method likewise contain a large proportion of black pixels; Section 4.1 shows that among the optimal flippable pixels selected in the datasets, black pixels outnumber white pixels by more than 7 to 1. Hence, a high compression ratio can be achieved by directly recording the original pixel values of the optimal flippable pixels.

The embedding sequence Se consists of the secret messages Ms, the original pixels Ichange of the bottom area, and the compressed recording overhead information I′over. The embedding process is shown in Fig. 7. I′over is obtained by compressing the original recording overhead information Iover, which records the original values of Ms and Ichange, defined as Val(Ms) and Val(Ichange). The smaller I′over is, the smaller the embedding sequence Se becomes, which increases the embedding payload. In some bad cases, having too few pixels in the green frame in which to embed the compressed recording overhead information I′over will lead to failure. Consequently, the proposed RDH method must satisfy the condition

    Length(I′over) = Length(comp(Val(Ms) + Val(Ichange)))
    Length(Val(Ichange)) ≥ Length(I′over)                              (9)
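The feasibility condition of Eq. (9) can be illustrated with a short sketch. The paper does not specify the lossless compressor comp(·), so zlib is used here purely as a stand-in, and the bit strings below are hypothetical:

```python
import zlib

def embedding_feasible(val_ms, val_ichange):
    """Check the condition of Eq. (9).

    val_ms, val_ichange: strings of '0'/'1' characters holding Val(Ms) and
    Val(Ichange). comp(.) is unspecified in the paper; zlib stands in here.
    """
    # I'over = comp(Val(Ms) + Val(Ichange)); measure its length in bits
    i_over = zlib.compress((val_ms + val_ichange).encode())
    len_i_over = 8 * len(i_over)
    # Embedding succeeds only if the bottom area can hold I'over,
    # i.e. Length(Val(Ichange)) >= Length(I'over)
    return len(val_ichange) >= len_i_over

# Highly compressible overhead (mostly '0' bits, i.e. mostly black
# pixels, as Section 4.1 reports) easily satisfies the condition.
print(embedding_feasible("0" * 1200, "0" * 4000))
```

This also shows why the black-pixel dominance matters: a run of identical recorded values compresses far better than balanced random bits, keeping Length(I′over) small.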

As the construction of the embedding sequence and the coding method have now been clarified, the embedding and extraction processes are described in detail in Section 3.2 and Section 3.3.

3.2. Data embedding

For a given original halftone image IO of size H × W, we embed the secret messages Ms of length Cpure. The embedding procedure includes the following steps:

1. Divide IO into overlapped 4 × 4 blocks with step length 3.
2. Scan the blocks and calculate the visual quality scores according to Eq. (2), Eq. (3) and Eq. (8) to determine the optimal flippable pixels.
3. Record the original pixel values Val(Ms) of the optimal flippable pixels and embed the secret messages Ms. If a to-be-embedded message bit differs from the original pixel value, that optimal flippable pixel is flipped.
4. Record the original values of the optimal flippable pixels, Val(Ichange), and combine them with Val(Ms) as the recording overhead information Iover. Compress Iover into I′over; it is particularly important that the condition in Eq. (9) is met.
5. Record the original values of the bottom Length(I′over) bits as a secret key K. Starting from the last bit, embed the compressed recording overhead information I′over in the bottom of the image from right to left and bottom to top.
6. Embed the original values Ichange of the bottom area overwritten by I′over into the optimal flippable pixels recorded as Val(Ichange). If a to-be-embedded bit differs from the original pixel value, that optimal flippable pixel is flipped.

3.3. Data extraction

For a given H × W marked image IM and the length of the compressed recording overhead information, Length(I′over), the extraction process includes the following steps:

1. Divide IM into overlapped 4 × 4 blocks with step length 3.
2. Scan the blocks and calculate the visual quality scores according to Eq. (2), Eq. (3) and Eq. (8) to determine the optimal flippable pixels.
3. Extract the secret messages Ms from the optimal flippable pixels: if the pixel value is "0", the message bit is "0", and "1" otherwise.
4. Starting from the last bit of the image, extract the compressed recording overhead information I′over from right to left and bottom to top, and decompress I′over into Iover.
5. According to Iover, restore the optimal flippable pixels where the secret messages were embedded.
6. Scan the optimal flippable pixels following the order of Step 2 to extract the original values Ichange of the bottom area overwritten by I′over.
7. According to the second part of Iover, restore the optimal flippable pixels in which Ichange was embedded.
8. According to the original values Ichange, restore the bottom area overwritten by I′over.
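As a rough illustration of the block-scanning logic shared by embedding and extraction, the sketch below mimics Steps 1-3 of the embedding procedure. The visual quality score of Eqs. (2), (3) and (8) is not reproduced here; `toy_score` is a deliberately simple placeholder, so this is only the control flow, not the paper's actual selection rule:

```python
import numpy as np

def scan_blocks(img, block=4, step=3):
    # Step 1: overlapped 4x4 blocks with step length 3
    h, w = img.shape
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            yield r, c, img[r:r + block, c:c + block]

def toy_score(blk, i, j):
    # Stand-in for the visual quality score of Eqs. (2), (3), (8):
    # the number of 4-neighbours agreeing with the pixel, so flipping
    # the lowest-scoring centre pixel breaks the fewest local ties.
    v = blk[i, j]
    nbrs = [blk[i + di, j + dj]
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= i + di < 4 and 0 <= j + dj < 4]
    return sum(int(n == v) for n in nbrs)

def embed_bits(img, bits):
    # Steps 2-3: per block, choose the centre pixel (b1..b4, cf. Fig. 4)
    # whose flip distorts least, record its original value, then write
    # the message bit (the pixel changes only when the bit differs).
    marked, positions, original_vals, k = img.copy(), [], [], 0
    for r, c, blk in scan_blocks(marked):
        if k == len(bits) or blk.min() == blk.max():  # skip uniform blocks
            continue
        i, j = min(((1, 1), (1, 2), (2, 1), (2, 2)),
                   key=lambda p: toy_score(blk, *p))
        positions.append((r + i, c + j))
        original_vals.append(int(marked[r + i, c + j]))
        marked[r + i, c + j] = bits[k]
        k += 1
    return marked, positions, original_vals
```

Extraction would rescan the marked image in the same order; note that the real method's score must select the same pixel before and after the flip for the decoder to find it, which this toy score does not guarantee.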

4. Experiments and results

The image dataset used in the experiments is BossBase-1.01 [40], which contains 10000 gray-scale images of size 512 × 512. We converted them into 3 halftone image datasets using the dispersed ordered dither matrix (DODM) [33], the dither algorithm of MATLAB R2015b (DAM), and error diffusion (ED) [9], respectively. In the following experiments, 3 kinds of measurements are employed to evaluate the objective visual quality: the weighted signal-to-noise ratio (WSNR) [41], the universal image quality index (UQI) [42], and the five scores S1 to S5 proposed in [43].

A higher WSNR means higher quality. In [41], the marked image is modeled as nonlinear distortion plus additive independent noise, weighted by a contrast sensitivity function (CSF), a linear approximation of the human visual system (HVS) response to a sine wave of a single frequency. A low-pass CSF assumes the human eye does not fixate on one point but moves freely around the image. In [42], the dynamic range of UQI is [−1, 1], and the best value 1 is achieved if and only if the marked image is identical to the original image. UQI models the HVS by combining different attributes, including brightness, contrast, texture, and orientation. In [43], the number and size of "salt-and-pepper" clusters are measured, since larger clusters are visually disturbing. In a halftone image, a black pixel in a bright region is denoted as class 1 and a white pixel in a dark region as class 4; pixels of classes 1 and 4 cause remarkable "salt-and-pepper" noise, which greatly distorts halftone images. Let L be the set of locations of the modified pixels. S1 is the total number of class-1 and class-4 elements at the locations of the flipped pixels. S2 gives the total area covered by the class-1 and class-4 clusters in L. S3 gives the average area per cluster, meaning that a pixel flipped in a region of


Figure 8: The white pixel density histograms of 3 halftone images datasets. (a) The white pixel density histogram of halftone images generated by dispersed ordered dither matrix [33]. The ratio of black pixels to white pixels is 2.1491. (b) The white pixel density histogram of halftone images generated by dither algorithm of MATLAB R2015b. The ratio of black pixels to white pixels is 2.1545. (c) The white pixel density histogram of halftone images generated by error diffusion [9]. The ratio of black pixels to white pixels is 2.1524.

opposite brightness generates, on average, a cluster of area S3. S4 is the number of elements of L associated with clusters of size 3 or more. S5 is a perceptual measurement with a linear penalty model. The smaller S1 and S2 are, the fewer class-1 and class-4 clusters there are, and the better the visual quality of the halftone image. Clusters of size 1 or 2 are considered not very visually disturbing, so the smaller S4 is, the fewer large clusters there are. Finally, isolated black or white pixels, which look visually pleasing, yield S5 = 0.

4.1. Flippable pixels analysis

It is concluded in Section 3.1 that black pixels appear more often in halftone images than white pixels. For the 3 datasets of 10000 halftone images each, the white pixel density, defined as the ratio of white pixels to all pixels in a block, is counted in non-overlapped 4 × 4 blocks. A white pixel density of 0 means there is no white pixel in a 4 × 4 block, and a density of 16 means all pixels in the block are white. Fig. 8 presents the white pixel density histograms of the 3 halftone image datasets. Since the proposed RDH method does not consider uniform blocks, in which all pixels are black or white, the smaller the white pixel density is, the larger the number

Table 1: The ratio of the optimal flippable black pixels to the optimal flippable white pixels for the 3 halftone image datasets generated by the dispersed ordered dither matrix (DODM) [33], the dither algorithm of MATLAB R2015b (DAM) and error diffusion (ED) [9] from BossBase-1.01 [40].

             DODM [33]    DAM      ED [9]
    N(B)     8009         11258    11153
    N(W)     1564         1152     1465
    R(B/W)   5.12         9.77     7.62
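Statistics like those in Fig. 8 and Table 1 start from a per-block white pixel count. A minimal sketch of the white pixel density histogram, assuming the convention white = 1 and black = 0, might look like:

```python
import numpy as np

def white_density_hist(img):
    """White pixel density (number of white pixels, 0..16) per
    non-overlapped 4x4 block, as counted for Fig. 8."""
    h, w = img.shape
    blocks = img[:h - h % 4, :w - w % 4].reshape(h // 4, 4, w // 4, 4)
    counts = blocks.sum(axis=(1, 3)).astype(int).ravel()  # whites per block
    return np.bincount(counts, minlength=17)              # histogram over 0..16

# On a synthetic image with roughly 1 white pixel for every 2.15 black
# pixels (the ratio reported for the datasets), the histogram mass sits
# at small densities, mirroring Fig. 8.
rng = np.random.default_rng(1)
img = (rng.random((512, 512)) < 1 / 3.15).astype(np.uint8)
hist = white_density_hist(img)
print(hist.argmax())
```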

of blocks it has. The probability of a large white pixel density is small, and black pixels are more numerous, as the ratio of black pixels to white pixels is approximately 2.15. Consequently, there are more black pixels than white pixels in halftone images, and a black pixel has a greater probability of being flipped. Furthermore, even without counting the uniform blocks, the optimal flippable pixels in the 3 halftone image datasets are also predominantly black.

Theoretically, for a 4 × 4 block there are 65536 block patterns, from which 52078 optimal flippable pixels can be selected according to Eq. (4). These optimal flippable pixels are almost evenly distributed over the 4 center pixels shown in Fig. 4: 13124 in b1, 13050 in b2, 12998 in b3 and 12906 in b4. For a randomly given 4 × 4 block that has an optimal flippable pixel, the probability that this pixel is black or white is equal. However, in the 3 halftone image datasets, the ratio R(B/W) of the optimal flippable black pixels to the optimal flippable white pixels is approximately 7.5, as shown in Table 1, where N(B) and N(W) denote the numbers of the optimal flippable black and white pixels, respectively. In summary, since white pixels play an important role in expressing the image content, there are many more optimal flippable black pixels, and the black pixels in halftone images are more suitable for flipping.

4.2. Visual quality comparison

To embed messages of the same length, the comparison of visual quality for marked images in the 3 datasets using Lien et al.'s method [33], Pan et al.'s method [31] and the proposed RDH

Table 2: The comparison of objective visual quality using the dispersed ordered dither matrix (DODM) [33], the dither algorithm of MATLAB R2015b (DAM) and error diffusion (ED) [9] from BossBase-1.01 [40].

    DM     EP     Lien et al. [33]            Pan et al. [31]                  Proposed
                  DODM [33]  DAM    ED [9]    DODM [33]  DAM      ED [9]       DODM [33]  DAM      ED [9]
    WSNR   440    44.54      -      -         34.54      33.92    33.44        33.86      34.84    34.39
           800    43.45      -      -         33.30      32.02    33.30        30.85      32.00    31.49
           1200   41.82      -      -         32.24      30.02    32.24        28.77      30.04    29.50
           1600   40.64      -      -         31.40      28.27    27.88        27.36      28.68    28.16
           2000   39.72      -      -         30.68      26.77    26.45        26.27      27.63    27.15
    UQI    440    0.99       -      -         0.98       0.97     0.97         0.99       0.99     0.99
           800    0.99       -      -         0.98       0.96     0.98         0.99       0.99     0.99
           1200   0.98       -      -         0.97       0.95     0.97         0.99       0.99     0.99
           1600   0.98       -      -         0.96       0.93     0.93         0.98       0.98     0.98
           2000   0.97       -      -         0.96       0.92     0.91         0.98       0.98     0.98
    S1     440    266.84     -      -         535.41     827.31   695.57       438.22     378.78   425.87
           800    446.76     -      -         738.08     1146.42  738.08       810.37     703.20   789.77
           1200   646.77     -      -         965.08     1469.60  965.08       1216.97    1054.63  1184.92
           1600   846.58     -      -         1191.00    1782.70  1726.90      1612.41    1391.16  1550.39
           2000   1046.70    -      -         1418.50    2099.20  2045.57      2011.54    1712.58  1897.97
    S2     440    1658.80    -      -         1970.40    3433.59  2297.94      1204.03    1018.24  1154.72
           800    1970.40    -      -         2729.70    4866.51  2729.70      2221.83    1893.18  2145.40
           1200   2400.60    -      -         3580.60    6406.30  3580.60      3332.86    2838.65  3216.57
           1600   3144.20    -      -         4426.90    7942.60  7645.70      4409.82    3741.22  4201.03
           2000   3888.40    -      -         5280.80    9509.40  9194.51      5489.92    4603.93  5132.75
    S3     440    6.22       -      -         3.68       4.15     3.30         2.75       2.69     2.71
           800    4.41       -      -         3.70       4.24     3.70         2.74       2.69     2.72
           1200   3.71       -      -         3.71       4.36     3.71         2.74       2.69     2.71
           1600   3.71       -      -         3.72       4.46     4.43         2.73       2.69     2.71
           2000   3.71       -      -         3.72       4.53     4.49         2.73       2.69     2.70
    S4     440    352.05     -      -         366.73     780.69   452.21       214.66     209.26   241.62
           800    366.73     -      -         505.29     1097.82  505.29       398.63     388.73   448.30
           1200   510.19     -      -         660.65     1427.50  660.65       598.91     581.83   670.44
           1600   669.62     -      -         815.02     1747.50  1695.90      791.78     765.54   873.43
           2000   829.59     -      -         970.93     2070.20  2018.95      983.88     940.50   1063.99
    S5     440    1212.00    -      -         1435.00    2606.28  1502.37      765.81     639.46   728.85
           800    1435.00    -      -         1991.60    3720.09  1991.60      1411.46    1189.97  1355.63
           1200   1753.90    -      -         2615.50    4936.70  2615.52      2115.89    1784.02  2031.66
           1600   2297.70    -      -         3235.90    6159.90  5918.80      2797.41    2350.05  2650.64
           2000   2841.80    -      -         3862.40    7410.20  7148.93      3478.39    2891.35  3234.78


Figure 9: The comparison of visual quality on halftone images from BossBase-1.01 [40] generated by the dither algorithm of MATLAB R2015b (DAM), measured by WSNR, UQI, and S1 to S5. Panels (a)-(g) plot these measurements against the embedding payload Cpure for Pan et al.'s method [31] and the proposed method.


method is presented in Table 2, where "DM" denotes the distortion measurement and "EP" the embedding payload. Marked images with greater WSNR and UQI and smaller S1 to S5 have higher visual quality.

Focusing on Lien et al.'s method [33], marked images have high visual quality under the halftoning method based on the dispersed ordered dither matrix (DODM) proposed by the same authors. Halftone images generated in [33] present checkerboard scatter structures, in which black-white pixel pairs have the least chance to occur. Lien et al. [33] embed the secret messages by swapping white-black pixel pairs into black-white pixel pairs, so that less overhead information needs to be recorded for recovery. However, the method does not consider the connectivity after swapping the pixel pairs. Moreover, in the other datasets black-white pixel pairs appear frequently, which greatly enlarges the recording overhead information and causes Lien et al.'s method [33] to fail. For marked images in the datasets generated by the dither algorithm of MATLAB R2015b (DAM) and error diffusion (ED) [9], the proposed method shows better visual quality under the same embedding payload. By minimizing the visual distortion between the original blocks and the marked blocks while flipping pixels to embed messages, the proposed method achieves strong visual quality performance.

The comparison of visual quality on the dataset generated by DAM, measured by WSNR, UQI, and S1 to S5, is shown in Fig. 9. Under the same embedding payload, the proposed method has greater WSNR and UQI and smaller S1 to S5. Moreover, compared with 2926 bits in Pan et al.'s method [31], the proposed method reaches a maximum embedding payload of 7773 bits.

To further discuss the visual quality of the proposed method, two images were randomly selected from the halftone images generated by the dispersed ordered dither matrix (DODM) [33]. For each, 3 marked images were obtained after embedding a 1200-bit pure payload with the three RDH methods. We then focus on cropped parts of these images and provide a detailed visual comparison. Examples in the edged regions are shown in Fig. 10. Comparing Fig. 10(e) and Fig. 10(f), the scatter structure of the cropped marked block generated by [33] is changed,

Figure 10: Detailed visual comparison of the edged region in a halftone image which is generated by the dispersed ordered dither matrix (DODM) [33] using 3 RDH methods under 1200-bit pure payload. (a) The original halftone image of size 512 × 512. (b) The marked image using Lien et al.’s RDH method [33]. (c) The marked image using Pan et al.’s RDH method [31]. (d) The marked image using the proposed RDH method. (e) The cropped part of the original image of size 24 × 24. (f) The cropped part of the marked image (b) with PSNR= 8.8733. (g) The cropped part of the marked image (c) with PSNR= 11.5836. (h) The cropped part of the marked image (d) with PSNR= 20.6145.
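The PSNR values quoted in the captions of Figs. 10 and 11 follow the usual definition, with peak value 1 for binary images. A small sketch:

```python
import numpy as np

def psnr_binary(orig, marked):
    """PSNR between two binary images (peak value 1), as quoted in the
    captions of Figs. 10 and 11 for the cropped 24x24 blocks."""
    mse = np.mean((orig.astype(float) - marked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(1.0 / mse)

# Flipping 6 pixels out of 24*24 = 576 gives MSE = 6/576 and
# PSNR = 10*log10(576/6) ~= 19.82 dB, a hypothetical example in the
# range of the values reported for the proposed method.
orig = np.zeros((24, 24), dtype=np.uint8)
marked = orig.copy()
marked.flat[:6] = 1
print(round(psnr_binary(orig, marked), 2))
```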


Figure 11: Detailed visual comparison of the flat region in a halftone image which is generated by the dispersed ordered dither matrix (DODM) [33] using 3 RDH methods under 1200-bit pure payload. (a) The original halftone image of size 512 × 512. (b) The marked image using Lien et al.’s RDH method [33]. (c) The marked image using Pan et al.’s RDH method [31]. (d) The marked image using the proposed RDH method. (e) The cropped part of the original image of size 24 × 24. (f) The cropped part of the marked image (b) with PSNR= 5.8145. (g) The cropped part of the marked image (c) with PSNR= 13.4545. (h) The cropped part of the marked image (d) with PSNR= 27.6042.
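The UQI scores reported in Tables 2 and 3 follow Wang and Bovik's index [42]. The sketch below computes the index globally over the whole image, whereas [42] averages it over sliding windows, so the numbers are comparable only in spirit:

```python
import numpy as np

def uqi(x, y):
    """Universal image quality index of Wang and Bovik [42]:
    4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y))*(mean(x)^2+mean(y)^2)),
    computed here over the full image rather than sliding windows."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

# Identical images reach the best value 1; anti-correlated images
# (every pixel inverted) give a negative index.
rng = np.random.default_rng(2)
a = (rng.random((64, 64)) < 0.3).astype(np.uint8)
print(uqi(a, a))
```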


Table 3: The detailed comparison of objective visual quality for the 3 edged marked blocks in Fig. 10(f)-10(h) and the 3 flat marked blocks in Fig. 11(f)-11(h).

            Lien et al. [33]   Pan et al. [31]   Proposed        Lien et al. [33]   Pan et al. [31]   Proposed
            in Fig. 10(f)      in Fig. 10(g)     in Fig. 10(h)   in Fig. 11(f)      in Fig. 11(g)     in Fig. 11(h)
    WSNR    27.23              28.13             30.32           24.12              28.74             36.57
    UQI     0.65               0.88              0.98            0.44               0.91              0.99
    S1      39                 21                5               73                 13                1
    S2      118                45                5               251                50                1
    S3      3.03               2.14              1               3.44               3.85              1
    S4      30                 1                 0               66                 8                 0
    S5      79                 24                0               178                37                0

resulting in many clusters. The reason is that Lien et al.'s RDH method [33] does not consider the connectivity between pixels after swapping the pixel pairs. Comparing Fig. 10(e) and Fig. 10(g), the cropped marked block generated by Pan et al.'s RDH method [31] has many clusters of the same structure. The reason is that Pan et al.'s RDH method [31] uses a small LUT with few substituted pattern pairs, so after replacement many blocks share the same structure. Comparing Fig. 10(e) and Fig. 10(h), the edged marked block generated by the proposed RDH method has better visual quality: it retains the scatter structure of the original block, and fewer clusters are generated by flipping pixels. Similarly, for cropped flat regions, the same visual comparisons are presented in Fig. 11. The detailed comparison across the RDH methods and regions is shown in Table 3; the proposed method performs best, especially in the flat region.

In conclusion, the proposed method fully considers visual quality when embedding data by flipping pixels. By selecting the optimal flippable pixels, fewer clusters are generated, and the marked image maintains higher visual quality while the visual distortion is minimized.

4.3. Embedding payload comparison

The comparison of the embedding payload Cpure on the 3 halftone image datasets using Lien et al.'s method [33], Pan et al.'s method [31] and the proposed RDH method is shown in

Table 4: The comparison of embedding payload using halftone images generated by the dispersed ordered dither matrix (DODM) [33], the dither algorithm of MATLAB R2015b (DAM) and error diffusion (ED) [9] from BossBase-1.01 [40].

    RDH               Generated    Embedding payload   Flipped pixels   Embedding efficiency
    methods           methods      Cpure               Cflip            EE
    Lien et al. [33]  DODM [33]    63006               63097            0.999
                      DAM          -                   -                -
                      ED [9]       -                   -                -
    Pan et al. [31]   DODM [33]    13521               14826            0.912
                      DAM          2926                4522             0.647
                      ED [9]       2959                4330             0.683
    Proposed          DODM [33]    4721                6406             0.737
                      DAM          7773                7443             1.044
                      ED [9]       7140                7895             0.904

Table 4. Lien et al.'s method [33] achieves the maximum embedding payload on the dataset generated by the DODM proposed by the same authors [33]. With 10 similar pattern pairs in Pan et al.'s method [31], the embedding payload on the dataset generated by DODM is greater than that of the proposed method, while the embedding payloads on the other 2 datasets are smaller. This is because halftone images generated by DODM contain a large number of white-black pixel pairs, which leaves fewer optimal flippable pixels for the proposed method to select. The proposed method applies uniformly to the 3 halftone image datasets and does not rely on special block or pattern structures; it achieves the maximum embedding payload on the datasets generated by DAM and ED. The other two RDH methods are based on statistical experiments, so the structures of the image greatly affect their performance. As mentioned in Section 4.1, the proposed method provides 52078 optimal flippable pixels, which brings diversity to the selection of pixels for embedding messages. The embedding payload is increased while high visual quality is ensured, as the proposed method is no longer limited to a small number of pattern pairs defined in a LUT.

The embedding efficiency EE is defined as the number of random message bits embedded per flipped pixel:

    EE = Cpure / Cflip                                                 (10)
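Eq. (10) can be checked directly against the Table 4 entries of the proposed method:

```python
# Embedding efficiency EE = Cpure / Cflip (Eq. (10)), applied to the
# proposed method's (Cpure, Cflip) pairs from Table 4.
table4 = {"DODM": (4721, 6406), "DAM": (7773, 7443), "ED": (7140, 7895)}
for dataset, (c_pure, c_flip) in table4.items():
    print(dataset, round(c_pure / c_flip, 3))
# Reproduces the EE column: 0.737, 1.044, 0.904.
```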

where Cflip is the number of flipped pixels in the marked images. The proposed method achieves high embedding efficiency on the datasets generated by DAM and ED, embedding more bits of secret messages while flipping fewer pixels in the marked images. Although the proposed method does not outperform [33] on DODM, it applies to any type of halftone image. The average embedding payload over the 3 datasets is 6545 bits for the proposed method, versus 6469 bits for [31], and the average embedding efficiency is 0.895 versus 0.747 for [31]. The embedding payload and efficiency of the proposed method are therefore satisfactory.

5. Conclusions

In this paper, we propose an RDH method for halftone images based on minimizing the visual distortion of pixel flipping. Different from existing methods for halftone images, which guarantee reversibility by determining the replacement of pattern pairs, the proposed method selects the optimal flippable pixels by calculating visual quality. The purpose of selecting the optimal flippable pixels is to ensure that the distortion caused by flipped pixels is minimized, while the receiver can extract the embedded messages and recover the modified pixels through the same visual quality calculation. By minimizing visual distortion, the embedding payload is increased through the diversity of the optimal flippable pixel selection. Experimental results show that the proposed method outperforms existing RDH methods in both the visual quality of marked images and the embedding payload. In future work, we will focus on RDH constructions that increase the embedding payload as much as possible without reducing visual quality.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (No. U1736118), the Key Areas R&D Program of Guangdong (No. 2019B010136002), the Key Scientific Research Program of Guangzhou (No. 201804020068), the Natural Science Foundation of Guangdong (No. 2016A030313350), and the Special Funds for Science and Technology Development of Guangdong (No. 2016KZ010103).

References

[1] W. Lu, L. He, Y. Yeung, Y. Xue, H. Liu, B. Feng, Secure binary image steganography based on fused distortion measurement, IEEE Transactions on Circuits and Systems for Video Technology 29 (6) (2019) 1608–1618.
[2] S.-W. Jung, Lossless embedding of depth hints in JPEG compressed color images, Signal Processing 122 (2016) 39–51.
[3] N. Wang, C. Men, Reversible fragile watermarking for locating tampered blocks in 2D vector maps, Multimedia Tools and Applications 67 (3) (2013) 709–739.
[4] G. Hua, J. Huang, Y.-Q. Shi, J. Goh, V. L. L. Thing, Twenty years of digital audio watermarking: a comprehensive review, Signal Processing 128 (2016) 222–242.
[5] Z. Hong, H. Wang, M. K. Khan, Steganalysis for palette-based images using generalized difference image and color correlogram, Signal Processing 91 (11) (2011) 2595–2605.
[6] F. Bao, R. H. Deng, B. C. Ooi, Y. Yang, Tailored reversible watermarking schemes for authentication of electronic clinical atlas, IEEE Transactions on Information Technology in Biomedicine 9 (4) (2005) 554–563.
[7] Y. Xue, W. Liu, W. Lu, Y. Yeung, X. Liu, H. Liu, Efficient halftone image steganography based on dispersion degree optimization, Journal of Real-Time Image Processing (2018) 1–9.
[8] J. F. Jarvis, C. N. Judice, W. H. Ninke, A survey of techniques for the display of continuous tone pictures on bilevel displays, Computer Graphics and Image Processing 5 (1) (1976) 13–40.
[9] R. W. Floyd, L. Steinberg, Adaptive algorithm for spatial grayscale, in: Proceedings of SID, 1976, pp. 75–77.
[10] J.-M. Guo, Watermarking in dithered halftone images with embeddable cells selection and inverse halftoning, Signal Processing 88 (6) (2008) 1496–1510.
[11] S. Yi, Y. Zhou, Parametric reversible data hiding in encrypted images using adaptive bit-level data embedding and checkerboard based prediction, Signal Processing 150 (2018) 171–182.
[12] D. Hou, H. Wang, W. Zhang, N. Yu, Reversible data hiding in JPEG image based on DCT frequency and block selection, Signal Processing 148 (2018) 41–47.
[13] S. Yi, Y. Zhou, Binary-block embedding for reversible data hiding in encrypted images, Signal Processing 133 (2017) 40–51.
[14] H. Yao, C. Qin, Z. Tang, Y. Tian, Improved dual-image reversible data hiding method using the selection strategy of shiftable pixels' coordinates with minimum distortion, Signal Processing 135 (2017) 26–35.
[15] X. Wu, W. Sun, High-capacity reversible data hiding in encrypted images by prediction error, Signal Processing 104 (6) (2014) 387–400.
[16] H. Ren, W. Lu, B. Chen, Reversible data hiding in encrypted binary images by pixel prediction, Signal Processing 165 (2019) 268–277.
[17] J.-M. Guo, J.-J. Tsai, Reversible data hiding in low complexity and high quality compression scheme, Digital Signal Processing 22 (5) (2012) 776–785.
[18] A. Arham, H. A. Nugroho, T. B. Adji, Multiple layer data hiding scheme based on difference expansion of quad, Signal Processing 137 (2017) 52–62.
[19] S. Weng, Z. Yao, J.-S. Pan, R. Ni, Reversible watermarking based on invariability and adjustment on pixel pairs, IEEE Signal Processing Letters 15 (20) (2008) 721–724.
[20] M. Xiao, X. Li, Y. Wang, Y. Zhao, R. Ni, Reversible data hiding based on pairwise embedding and optimal expansion path, Signal Processing 158 (2019) 210–218.
[21] Z. Ni, Y.-Q. Shi, N. Ansari, W. Su, Reversible data hiding, IEEE Transactions on Circuits and Systems for Video Technology 16 (3) (2006) 354–362.
[22] Y. Jia, Z. Yin, X. Zhang, Y. Luo, Reversible data hiding based on reducing invalid shifting of pixels in histogram shifting, Signal Processing 163 (2019) 238–246.
[23] J. Wang, N. Mao, X. Chen, J. Ni, C. Wang, Y.-Q. Shi, Multiple histograms based reversible data hiding by using FCM clustering, Signal Processing 159 (2019) 193–203.
[24] F. Peng, X. Li, B. Yang, Adaptive reversible data hiding scheme based on integer transform, Signal Processing 92 (1) (2012) 54–62.
[25] X. Wang, X. Li, B. Yang, Z. Guo, Efficient generalized integer transform for reversible watermarking, IEEE Signal Processing Letters 17 (6) (2010) 567–570.
[26] X. Yin, W. Lu, W. Liu, J. Zhang, Reversible data hiding in binary images by symmetrical flipping degree histogram modification, in: Security with Intelligent Computing and Big-data Services, Springer International Publishing, 2019, pp. 891–903.
[27] C. Kim, J. Baek, P. S. Fisher, Lossless data hiding for binary document images using n-pairs pattern, in: International Conference on Information Security and Cryptology, 2014, pp. 317–327.
[28] C. H. T. Yang, Y. Hsu, C. Wu, J. Chang, Reversible data hiding method based on exclusive-OR with two host images, in: International Conference on Trustworthy Systems and Their Applications, 2014, pp. 69–74.
[29] Z.-M. Lu, H. Luo, J.-S. Pan, Reversible watermarking for error diffused halftone images using statistical features, in: International Workshop on Digital Watermarking, 2006, pp. 71–81.
[30] J.-S. Pan, H. Luo, Z.-M. Lu, A lossless watermarking scheme for halftone image authentication, International Journal of Computer Science and Network Security 6 (2B) (2006) 147–151.
[31] J.-S. Pan, H. Luo, Z.-M. Lu, Look-up table based reversible data hiding for error diffused halftone images, Informatica, Lith. Acad. Sci. 18 (2007) 615–628.
[32] F. Yu, H. Luo, S. Chu, Lossless data hiding for halftone images, in: J.-S. Pan, H. C. Huang, L. C. Jain (Eds.), Information Hiding and Applications, Springer Berlin Heidelberg, Berlin, Heidelberg, 2009, pp. 181–203.
[33] B. K. Lien, Y. M. Lin, K. Y. Lee, High-capacity reversible data hiding by maximum-span pixel pairing on ordered dithered halftone images, in: International Conference on Systems, Signals and Image Processing, 2012, pp. 76–79.
[34] S. M. Cheung, Y. H. Chan, A technique for lossy compression of error-diffused halftones, in: IEEE International Conference on Multimedia and Expo, Vol. 2, 2004, pp. 1083–1086.
[35] Z. Karni, D. Freedman, D. Shaked, Fast inverse halftoning, Proceedings of the International Congress on Imaging Science 20 (1) (2010) 5–17.
[36] Y. Guo, O. C. Au, R. Wang, L. Fang, X. Cao, Halftone image watermarking by content aware double-sided embedding error diffusion, IEEE Transactions on Image Processing 27 (7) (2018) 3387–3402.
[37] N. Burrus, T. M. Bernard, Adaptive vision leveraging digital retinas: extracting meaningful segments, in: International Conference on Advanced Concepts for Intelligent Vision Systems, 2006, pp. 220–231.
[38] Y. Ren, F. Liu, D. Lin, R. Feng, W. Wang, A new construction of tagged visual cryptography scheme, in: International Workshop on Digital Watermarking, 2016, pp. 433–445.
[39] P. S. Chouhan, V. K. Govindan, Localization of license plate using characteristics of alphanumeric characters, International Journal of Computer Science and Information Technologies 5 (3) (2014) 3407–3409.
[40] P. Bas, T. Filler, T. Pevný, Break our steganographic system: the ins and outs of organizing BOSS, in: International Conference on Information Hiding, Vol. 6958, 2011, pp. 59–70.
[41] M. Valliappan, B. L. Evans, D. A. D. Tompkins, F. Kossentini, Lossy compression of stochastic halftones with JBIG2, in: International Conference on Image Processing, Vol. 1, 1999, pp. 214–218.
[42] Z. Wang, A. C. Bovik, A universal image quality index, IEEE Signal Processing Letters 9 (3) (2002) 81–84.
[43] M. S. Fu, O. C. Au, Halftone image data hiding with intensity selection and connection selection, Signal Processing: Image Communication 16 (10) (2001) 909–930.

Conflict of Interest and Authorship Conformation Form

All authors have participated in (a) conception and design, or analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version.

This manuscript has not been submitted to, nor is under review at, another journal or other publishing venue.

The authors have no affiliation with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript.

Authors and affiliations:
Xiaolin Yin, Sun Yat-sen University
Wei Lu, Sun Yat-sen University
JunHong Zhang, Sun Yat-sen University
Wanteng Liu, Sun Yat-sen University

Author contributions:
Xiaolin Yin: Methodology, Software, Validation, Writing - Original Draft
Wei Lu: Conceptualization, Methodology, Validation, Formal analysis, Writing - Original Draft
Junhong Zhang: Software, Validation, Writing - Review & Editing
Wanteng Liu: Validation, Writing - Review & Editing