A high capacity reversible data hiding scheme based on generalized prediction-error expansion and adaptive embedding


Signal Processing 98 (2014) 370–380

Contents lists available at ScienceDirect

Signal Processing journal homepage: www.elsevier.com/locate/sigpro

Xinlu Gui, Xiaolong Li, Bin Yang*
Institute of Computer Science and Technology, Peking University, Beijing 100871, China

Article info

Abstract

Article history: Received 27 May 2013 Received in revised form 31 October 2013 Accepted 6 December 2013 Available online 16 December 2013

In this paper, a high capacity reversible image data hiding scheme is proposed based on a generalization of prediction-error expansion (PEE) and an adaptive embedding strategy. For each pixel, a prediction value and a complexity measurement are first computed from its context. Then a certain number of data bits is embedded into the pixel by the proposed generalized PEE. Here, the complexity measurement is partitioned into several levels, and the embedded data size is determined by the complexity level, so that more bits are embedded into a pixel located in a smoother region. The complexity level partition and the embedded data size of each level are adaptively chosen for the best performance with an advisable parameter selection strategy. In this way, the proposed scheme can well exploit image redundancy to achieve a high capacity with rather limited distortion. Experimental results show that the proposed scheme outperforms the conventional PEE and some state-of-the-art algorithms by improving both marked image quality and maximum embedding capacity. © 2013 Elsevier B.V. All rights reserved.

Keywords: Reversible data hiding; Generalized PEE; Adaptive embedding; Complexity partition

1. Introduction

Reversible data hiding (RDH) aims to embed secret data into a host image by slightly modifying its pixels such that, importantly, the original image as well as the embedded message can be completely restored from the marked image [1–3]. The RDH technique has been widely applied in sensitive fields such as law forensics, medical image processing and military image processing. In general, the performance of an RDH algorithm is evaluated in three aspects: embedding capacity (EC), marked image quality and computational complexity. Specifically, for a given EC, one expects to minimize the

* Corresponding author. Tel.: +86 10 82529693; fax: +86 10 82529207. E-mail addresses: [email protected] (X. Gui), [email protected] (X. Li), [email protected] (B. Yang).

0165-1684/$ - see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.sigpro.2013.12.005

embedding distortion and meanwhile keep the computational complexity as low as possible.

A significant amount of research has been devoted to RDH over the past few years. Early RDH algorithms are mainly based on lossless compression [4–9], in which certain features of the host image are losslessly compressed to make room for the payload. These methods usually provide low EC and may severely degrade image quality. Later, more efficient algorithms based on histogram modification and on expansion techniques were devised. The histogram-modification-based method was first proposed by Ni et al. [10]. This method targets very high visual quality with quite limited EC: the peak point of the image histogram is utilized to embed data, each pixel value is modified by at most 1, and the marked image quality is thus well guaranteed with a PSNR larger than 48.13 dB. The expansion technique was first proposed by Tian [11], where the pixel difference is expanded to embed data. Compared with the


compression-based RDH, Tian's method provides a much higher EC while keeping the distortion low. Afterwards, the expansion technique was widely investigated and developed, mainly in the aspects of integer-to-integer transformation [12–18], location map reduction [19–23], and prediction-error expansion (PEE) [24–32], in which the difference value is replaced by the prediction-error in expansion embedding. To the best of our knowledge, among existing RDH approaches, PEE usually leads to good performance since it has the potential to exploit the spatial redundancy of natural images. In conventional PEE embedding, after the prediction-error histogram is generated, high frequency bins are expanded to embed data while other bins are shifted to ensure reversibility. Notice that each expanded prediction-error uniformly carries 1 bit. In [33,34], unlike the conventional PEE, a data embedding level is adaptively adjusted for each pixel considering the characteristics of the human visual system. To this end, just noticeable difference values are estimated for every pixel, and the estimated values as well as edge information are used to determine the embedding level. Then, based on the embedding level, each pixel is adaptively selected for embedding 1 data bit or for shifting. Recently, the conventional PEE was improved by Li et al. [35] using adaptive embedding. As noisy pixels may cause much larger distortion than smooth ones carrying the same amount of data, the adaptive embedding in [35] guarantees that more data is embedded into smoother pixels according to a local complexity measurement. Specifically, Li et al. first divide image pixels into a "flat part" and a "rough part", and then, by using the conventional PEE twice or once, adaptively embed 2 bits or 1 bit into each expandable flat or rough pixel, respectively.
Although the adaptive embedding in [35] plays an important role in improving the conventional PEE, this strategy can be further exploited to achieve better performance. Since image pixels are simply classified into two categories for embedding 2 bits or 1 bit in [35], the performance improvement is limited. Based on this consideration, by further exploiting the adaptive embedding strategy and by extending the conventional PEE to a general form, we propose in this work a new RDH scheme which significantly outperforms the conventional PEE. We first generalize the conventional PEE with an adjustable parameter k such that it can embed log2(k+1) bits into a pixel. The generalized PEE includes the conventional PEE as the special case k = 1, and it is equivalent to using the conventional PEE twice if k = 3. Then, instead of classifying image pixels into only two categories as in [35], we divide the pixels into several categories based on a partition of the local complexity measurement, and embed more bits into smoother pixels by taking a larger k in the generalized PEE. The incorporation of adaptive embedding and generalized PEE makes it possible to optimize the embedding performance: the complexity partition and the associated embedded data size of each pixel category are adaptively chosen for the best performance with an advisable parameter selection strategy. In this way, the proposed scheme can better


exploit image redundancy to achieve a higher EC with less distortion compared with the conventional PEE and the adaptive embedding method of [35]. Experimental results also verify its superiority over some other state-of-the-art RDH algorithms.

The rest of the paper is organized as follows. Related work, including conventional PEE and adaptive embedding, is briefly introduced in Section 2. Section 3 presents the proposed RDH scheme in detail. Experimental results and comparisons with prior arts are shown in Section 4. Finally, Section 5 concludes this paper.

2. Related work

2.1. Prediction-error expansion

The embedding procedure of conventional PEE contains the following three steps:

1. According to a certain scanning order and by using a predictor, determine for each pixel x its prediction value $\hat{x}$ (rounded off if it is not an integer). The prediction-error is denoted as $e = x - \hat{x}$.

2. Embed data by modifying the prediction-error histogram through expansion and shifting. Specifically, each prediction-error e is expanded or shifted as

$$e^m = \begin{cases} 2e + b & \text{if } -T \le e < T \\ e + T & \text{if } e \ge T \\ e - T & \text{if } e < -T \end{cases} \qquad (1)$$

where T is an integer-valued capacity-control parameter and $b \in \{0, 1\}$ is a to-be-embedded data bit. With (1), the bins in the inner region $[-T, T)$ are expanded to embed data, and those in the outer region $(-\infty, -T) \cup [T, +\infty)$ are shifted outwards to create vacancies that ensure reversibility.

3. Finally, each pixel x is modified to $x^m = \hat{x} + e^m$ to generate the marked image.

In the above procedure, the maximum modification to image pixels is the capacity-control parameter T, which is an important factor for the embedding performance. Therefore, to minimize the distortion in PEE, T is taken as the smallest positive integer such that the inner region provides sufficient expandable pixels for embedding the required payload.
According to (1), after PEE embedding, prediction-errors belonging to $(-\infty, -T)$, $[-T, T)$ and $[T, +\infty)$ change to those in the new intervals $(-\infty, -2T)$, $[-2T, 2T)$ and $[2T, +\infty)$, respectively. The three new intervals are mutually disjoint, which guarantees accurate extraction and restoration. The PEE extraction procedure can be summarized as follows:

1. For each marked pixel $x^m$, determine its prediction value $\hat{x}$. The marked prediction-error is thus $e^m = x^m - \hat{x}$. A key issue of PEE is that the prediction


value obtained by the decoder should be the same as that of the encoder. For example, by using a median-edge-detector (MED [26,36]) or a gradient-adjusted predictor (GAP [35,37]), both of which predict from half-enclosing causal pixels, the decoder can inversely scan the image pixels to get the same prediction value. For another example, one can utilize double-layered embedding [28,38,39] to obtain the same prediction value as well.

2. For each marked prediction-error $e^m$, the original prediction-error can be recovered as

$$e = \begin{cases} \lfloor e^m/2 \rfloor & \text{if } -2T \le e^m < 2T \\ e^m - T & \text{if } e^m \ge 2T \\ e^m + T & \text{if } e^m < -2T \end{cases} \qquad (2)$$

where $\lfloor \cdot \rfloor$ is the floor function. Meanwhile, if $e^m \in [-2T, 2T)$, the embedded data bit can be extracted as $b = e^m - 2\lfloor e^m/2 \rfloor$.

3. Finally, restore each pixel as $x = \hat{x} + e$ to get the host image.
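The embedding rule (1) and its inverse (2) act on a single prediction-error; a minimal sketch in Python (the paper gives no code, so the function names are our own):

```python
# Conventional PEE on one prediction-error, following Eqs. (1)-(2).

def pee_embed(e: int, T: int, bits: list) -> int:
    """Expand e to carry one bit if it lies in the inner region [-T, T),
    otherwise shift it outwards by T."""
    if -T <= e < T:
        return 2 * e + bits.pop(0)      # expansion, Eq. (1)
    return e + T if e >= T else e - T   # shifting

def pee_extract(em: int, T: int):
    """Invert pee_embed: return (original error, extracted bit or None)."""
    if -2 * T <= em < 2 * T:
        e = em // 2                     # Python's // is the floor of Eq. (2)
        return e, em - 2 * e            # the embedded bit
    return (em - T, None) if em >= 2 * T else (em + T, None)

# Round-trip check over both the inner and the outer region:
T = 2
for e in range(-6, 6):
    if -T <= e < T:
        for b in (0, 1):
            assert pee_extract(pee_embed(e, T, [b]), T) == (e, b)
    else:
        assert pee_extract(pee_embed(e, T, []), T) == (e, None)
```

Note that the expanded range $[-2T, 2T)$ never overlaps the shifted ranges, which is exactly what makes blind extraction possible.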

2.2. Adaptive embedding

Conventional PEE treats all image pixels equally and sequentially embeds data into pixels one by one. However, it is widely recognized that embedding in noisy pixels causes larger distortion than embedding the same amount of data in smooth ones. So the conventional uniform embedding severely degrades image quality, especially for high EC. To remedy this drawback, as an extension to the conventional PEE, an adaptive embedding strategy is proposed by Li et al. [35]. In adaptive embedding, instead of uniformly embedding 1 bit into each inner-region pixel, the host image is first divided into a "flat part" and a "rough part" according to a local complexity measurement computed from the pixel context, and 2 bits or 1 bit will be embedded into each expandable flat or rough pixel, respectively. That is, for an expandable pixel of the flat part, its prediction-error e is expanded twice to embed 2 bits $b_1, b_2 \in \{0, 1\}$, giving the marked prediction-error

$$e^m = 2(2e + b_1) + b_2 = 4e + 2b_1 + b_2. \qquad (3)$$

This is equivalent to taking $e^m$ as

$$e^m = 4e + b \qquad (4)$$

where $b \in \{0, 1, 2, 3\}$ represents 2 to-be-embedded bits. An expandable rough-region pixel is simply embedded with 1 bit using conventional PEE.

The advantage of adaptive embedding can be analyzed as follows. As mentioned in [35], for the expansion case of conventional PEE, the average distortion (in $l_2$-norm) is

$$D(e) \triangleq E\big[(x^m - x)^2\big] = E\big[(e^m - e)^2\big] = \frac{1}{2}\sum_{b=0}^{1}(e+b)^2 = e^2 + e + 0.5. \qquad (5)$$

On the other hand, for the double expanded case in (4) of embedding 2 bits, the average distortion can be formulated by

$$D^*(e) \triangleq \frac{1}{4}\sum_{b=0}^{3}(3e+b)^2 = 9e^2 + 9e + 3.5. \qquad (6)$$
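The closed forms (5) and (6) follow from averaging over equiprobable payload bits; a quick numerical check (function names are ours):

```python
# Verify Eqs. (5)-(6): average expansion distortion for 1-bit PEE and for
# the 2-bit double expansion of Eq. (4), assuming uniform payload bits.

def D1(e):   # E[(e + b)^2], b uniform over {0, 1}
    return sum((e + b) ** 2 for b in (0, 1)) / 2

def D2(e):   # E[(3e + b)^2], b uniform over {0, 1, 2, 3}
    return sum((3 * e + b) ** 2 for b in range(4)) / 4

for e in range(-8, 8):
    assert D1(e) == e * e + e + 0.5           # Eq. (5)
    assert D2(e) == 9 * e * e + 9 * e + 3.5   # Eq. (6)

# Embedding a 2nd bit in a smooth pixel (e1 = 1) costs less than a 1st bit
# in a rough one (e2 = 9), consistent with the condition e1 <= e2/3 - 1:
assert D2(1) < D1(9)
```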

Comparing (5) with (6), for two arbitrary prediction-errors $e_1$ and $e_2$, one can derive that $D^*(e_1) < D(e_2)$ holds if $0 \le e_1 \le e_2/3 - 1$, since, in this case, $D^*(e_1) - D(e_2) \le D^*(e_2/3 - 1) - D(e_2) = -4e_2 + 3 < 0$. This means that, compared with embedding 1 bit into a pixel with a large prediction-error, it is better to embed an additional bit into an already embedded pixel whose original prediction-error is sufficiently small. Li et al.'s work [35] is motivated by this observation, and the effectiveness of adaptive embedding is experimentally verified in [35].

However, as mentioned in Section 1, the adaptive embedding of [35] cannot fully exploit image redundancy since it only considers two types of pixels, flat and rough. We may, in fact, classify image pixels into several categories based on a partition of complexity: the smoother a pixel is, the more bits are embedded into it. Based on this flexible adaptive embedding strategy, we may also determine the best complexity partition and the corresponding capacity parameters to optimize the embedding performance. In this way, the image redundancy is further utilized and better performance can be expected. The details are given in the next section.

3. Proposed scheme

In this section, the pixel prediction and the complexity computation are first described as a preparation. Then we introduce the generalized PEE. Next, by incorporating generalized PEE and adaptive embedding, we present the proposed RDH scheme with detailed data embedding and extraction procedures. Finally, we show the parameter selection method for determining the best complexity partition and the corresponding capacity parameters.

3.1. Pixel prediction and complexity measurement

Like the methods of Hu et al. [26] and Li et al. [35], our scheme sequentially embeds data pixel by pixel and uses half-enclosing causal pixels for prediction. We adopt the GAP predictor and calculate the complexity based on a pixel context containing 10 pixels, as shown in Fig. 1. GAP is a major part of the context-based, adaptive, lossless image coding (CALIC) algorithm [37].

Fig. 1. Context of x. The seven pixels $\{v_1, v_2, v_3, v_4, v_5, v_7, v_8\}$ are used in GAP prediction and all ten pixels $\{v_1, v_2, \ldots, v_{10}\}$ are used to compute the complexity of x.
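The GAP prediction rule and the complexity normalization of this subsection can be sketched as follows. The function names are ours; since the extracted text does not fully specify which consecutive context pairs enter C(x), the sketch takes an already computed raw complexity as input.

```python
import math

# GAP prediction from the seven causal neighbors of Fig. 1 (Eqs. (7)-(8)),
# plus the complexity normalization of Eq. (9).

def gap_predict(v1, v2, v3, v4, v5, v7, v8):
    dh = abs(v1 - v2) + abs(v3 - v4) + abs(v4 - v5)  # horizontal gradient
    dv = abs(v1 - v5) + abs(v3 - v7) + abs(v4 - v8)  # vertical gradient
    d = dv - dh
    xn = (v1 + v4) / 2 + (v3 - v5) / 4               # smooth-context prediction
    if d > 80:
        p = v1
    elif d > 32:
        p = (v1 + xn) / 2
    elif d > 8:
        p = (v1 + 3 * xn) / 4
    elif d >= -8:
        p = xn
    elif d >= -32:
        p = (v4 + 3 * xn) / 4
    elif d >= -80:
        p = (v4 + xn) / 2
    else:
        p = v4
    return round(p)   # non-integer predictions are rounded off

def normalized_complexity(c, c_max):
    """Map a raw complexity C(x) into [0, 255] as in Eq. (9)."""
    return math.ceil(255 * c / c_max)
```

For a perfectly smooth context (all neighbors equal), the gradients vanish and the predictor returns the common value, as expected.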


This predictor is more efficient than MED, as more neighbor pixels are involved. To predict a pixel x using GAP, the horizontal and vertical gradients at x are first estimated as

$$\delta_h = |v_1 - v_2| + |v_3 - v_4| + |v_4 - v_5| \quad \text{and} \quad \delta_v = |v_1 - v_5| + |v_3 - v_7| + |v_4 - v_8|. \qquad (7)$$

Let $\delta = \delta_v - \delta_h$. Then, the prediction value $\hat{x}$ is defined by

$$\hat{x} = \begin{cases} v_1 & \text{if } \delta > 80 \\ (v_1 + x^n)/2 & \text{if } \delta \in (32, 80] \\ (v_1 + 3x^n)/4 & \text{if } \delta \in (8, 32] \\ x^n & \text{if } \delta \in [-8, 8] \\ (v_4 + 3x^n)/4 & \text{if } \delta \in [-32, -8) \\ (v_4 + x^n)/2 & \text{if } \delta \in [-80, -32) \\ v_4 & \text{if } \delta < -80 \end{cases} \qquad (8)$$

where $x^n = (v_1 + v_4)/2 + (v_3 - v_5)/4$ is the prediction for a smooth context.

The complexity of a pixel is measured as a noise level for adaptive embedding. For the sake of accuracy, it is calculated with a context of 10 pixels (see Fig. 1). Specifically, the complexity of x, denoted C(x), is defined as the sum of the vertical and horizontal absolute differences of every two consecutive pixels in the context. As our adaptive embedding is conducted based on a partition of complexity, we then normalize the complexity to the interval [0, 255] by taking

$$C^n(x) = \left\lceil 255\,\frac{C(x)}{C_{max}} \right\rceil \qquad (9)$$

where $C_{max}$ is the maximum complexity over all host pixels and $\lceil \cdot \rceil$ is the ceiling function. Notice that the pixel scanning order of our extraction procedure is the inverse of that of embedding. Thus, when processing a pixel in data extraction, its context has already been recovered and the same prediction value and complexity can be obtained. This guarantees the reversibility of the proposed RDH scheme.

3.2. Generalized PEE

The generalized PEE is a natural extension of (1) and (4) with an arbitrary positive integer k such that $\log_2(k+1)$ bits can be embedded into a pixel at a time. That is, a prediction-error e is first expanded or shifted to

$$e^m = \begin{cases} (k+1)e + b & \text{if } -T_k \le e < T_k \\ e + kT_k & \text{if } e \ge T_k \\ e - kT_k & \text{if } e < -T_k \end{cases} \qquad (10)$$

where $b \in \{0, \ldots, k\}$ stands for $\log_2(k+1)$ to-be-embedded data bits, and $T_k$ is a capacity-control parameter corresponding to k. Then, like the conventional PEE, the marked value is taken as $x^m = \hat{x} + e^m$. When k = 1, (10) is just the conventional PEE (1). Clearly, after generalized PEE embedding, prediction-errors belonging to the intervals $(-\infty, -T_k)$, $[-T_k, T_k)$ and $[T_k, +\infty)$ change to those in the new disjoint intervals $(-\infty, -(k+1)T_k)$, $[-(k+1)T_k, (k+1)T_k)$ and $[(k+1)T_k, +\infty)$, respectively. So the operation (10) is reversible, and the decoder can recover the original prediction-error from the marked pixel as

$$e = \begin{cases} \lfloor e^m/(k+1) \rfloor & \text{if } -(k+1)T_k \le e^m < (k+1)T_k \\ e^m - kT_k & \text{if } e^m \ge (k+1)T_k \\ e^m + kT_k & \text{if } e^m < -(k+1)T_k. \end{cases} \qquad (11)$$

Fig. 2. Illustrations of (10) for (a): $(k, T_k) = (1, 2)$ and (b): $(k, T_k) = (2, 1)$.
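The mappings (10) and (11) are exact inverses of each other on each region; a minimal round-trip sketch in Python (our own function names):

```python
# Generalized PEE on one prediction-error: Eq. (10) forward, Eq. (11) inverse.
# A pixel in the inner region [-T_k, T_k) carries a digit b in {0,...,k},
# i.e. log2(k+1) bits.

def gpee_embed(e, k, Tk, b=0):
    if -Tk <= e < Tk:
        assert 0 <= b <= k
        return (k + 1) * e + b                        # expansion
    return e + k * Tk if e >= Tk else e - k * Tk      # shifting

def gpee_extract(em, k, Tk):
    lim = (k + 1) * Tk
    if -lim <= em < lim:
        e = em // (k + 1)                             # floor, as in Eq. (11)
        return e, em - (k + 1) * e                    # error and embedded digit
    return (em - k * Tk, None) if em >= lim else (em + k * Tk, None)

# Round-trip check over both regions, e.g. for (k, T_k) = (3, 2):
k, Tk = 3, 2
for e in range(-10, 10):
    if -Tk <= e < Tk:
        for b in range(k + 1):
            assert gpee_extract(gpee_embed(e, k, Tk, b), k, Tk) == (e, b)
    else:
        assert gpee_extract(gpee_embed(e, k, Tk), k, Tk) == (e, None)
```

With k = 1 this reduces to the conventional PEE of Section 2.1, matching the special case noted above.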

Then the original pixel is restored as $x = \hat{x} + e$. Meanwhile, if $-(k+1)T_k \le e^m < (k+1)T_k$, the embedded data can be extracted as $b = e^m - (k+1)\lfloor e^m/(k+1) \rfloor$. To better introduce the generalized PEE, illustrations of (10) for $(k, T_k) = (1, 2)$ and $(2, 1)$ are shown in Fig. 2, where the red points represent the prediction-errors used for expansion embedding and the black points represent the ones used for shifting. In both cases, after data embedding, each marked prediction-error corresponds to only one value of the original prediction-error, and thus the reversibility of the generalized PEE is guaranteed.

It should be noticed that in generalized PEE, to avoid overflow/underflow, only the pixels satisfying the following conditions can be expanded or shifted using (10):

$$\begin{cases} 0 \le x + ke \le 255 - k & \text{if } -T_k \le e < T_k \\ x \le 255 - kT_k & \text{if } e \ge T_k \\ x \ge kT_k & \text{if } e < -T_k. \end{cases} \qquad (12)$$

Otherwise, the pixels are skipped in the embedding process and their locations are recorded in a location map. The location map is embedded into the host image as a part of the payload for blind extraction and restoration.

Finally, we remark that, as a result of generalized PEE (10), the average distortion of expansion embedding is

$$D_k(e) \triangleq E\big[(x^m - x)^2\big] = E\big[(e^m - e)^2\big] = \frac{1}{k+1}\sum_{b=0}^{k}(ke+b)^2 = k^2 e^2 + k^2 e + \frac{k(2k+1)}{6} \qquad (13)$$

which is less than $(kT_k)^2$, while the distortion of shifting is exactly $(kT_k)^2$. Since the shifting distortion has the major impact on the embedding performance, we take $kT_k \approx T$ (i.e., $T_k \approx T/k$) for a given capacity-control parameter T. Therefore, in our scheme, where the generalized PEE is utilized several times with different parameters k, the distortion to each pixel is consistently dominated by $T^2$.
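Eq. (13) and the remark that expansion distortion stays below the shifting distortion $(kT_k)^2$ on the inner region can be confirmed numerically; a small check assuming uniform digits (names ours):

```python
# Check Eq. (13): with a uniform digit b in {0,...,k}, the mean expansion
# distortion is k^2 e^2 + k^2 e + k(2k+1)/6.

def Dk(e, k):
    return sum((k * e + b) ** 2 for b in range(k + 1)) / (k + 1)

for k in (1, 2, 3, 5, 7):
    for e in range(-5, 5):
        assert abs(Dk(e, k) - (k*k*e*e + k*k*e + k*(2*k + 1)/6)) < 1e-9

# On the inner region [-T_k, T_k), expansion costs less than shifting:
k, Tk = 3, 2
assert all(Dk(e, k) < (k * Tk) ** 2 for e in range(-Tk, Tk))
```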


Table 1. Parameters of the proposed RDH scheme with L+1 complexity levels.

Level of C^n(x)     (k, T_k) for the generalized PEE (10)     Amount of embedded data bits
[0, s_L)            (t_L, ⌊T/t_L⌋)                            log2(t_L + 1)
[s_L, s_{L-1})      (t_{L-1}, ⌊T/t_{L-1}⌋)                    log2(t_{L-1} + 1)
⋯                   ⋯                                         ⋯
[s_2, s_1)          (t_1, ⌊T/t_1⌋)                            log2(t_1 + 1)
[s_1, 256)          —                                         0

3.3. Proposed RDH scheme with given complexity partition and capacity parameters

The proposed RDH scheme is based on a partition of complexity and generalized PEE. We first divide the range of normalized complexity (NC), [0, 255], into L+1 levels $[s_{L+1}, s_L), [s_L, s_{L-1}), \ldots, [s_1, s_0)$ with L+2 integer-valued parameters $(s_0, s_1, \ldots, s_{L+1})$ where $s_{L+1} = 0$, $s_0 = 256$, and $0 \le s_L \le s_{L-1} \le \cdots \le s_1 \le 256$. Then, for a pixel x, if its NC satisfies $C^n(x) \in [s_{i+1}, s_i)$ for an index $i \in \{1, \ldots, L\}$, we embed $\log_2(t_i + 1)$ bits into it by the generalized PEE (10) with $k = t_i$ and $T_k = \lfloor T/k \rfloor$. Here, T is a given capacity-control parameter, and the integer-valued parameters $(t_1, \ldots, t_L)$ satisfy $0 < t_1 < \cdots < t_L \le 7$. Notice that our adaptive embedding is realized via the condition $t_i < t_{i+1}$: the smoother a pixel is, the more data bits are embedded into it. In particular, a most noisy pixel x, i.e., one with $C^n(x) \in [s_1, s_0)$, is unmodified and skipped in data embedding. The complexity partition and the corresponding parameters are summarized in Table 1.

We give some examples for further illustration. For the specific parameter values below, the proposed scheme reduces to some previously introduced algorithms; in this light, our method includes these previous works as special cases.

• L = 1, s_1 = 256 and t_1 = 1: the generalized PEE with k = 1 is applied to all image pixels indiscriminately, and our method is just the conventional PEE introduced in Section 2.1.

• L = 1, 0 < s_1 < 256 and t_1 = 1: the generalized PEE with k = 1 is applied to image pixels with NC less than s_1 while the other pixels are kept unchanged. Our method is then the conventional PEE applied to selected smooth pixels, which is essentially equivalent to the methods of [38] and [40].

• L = 2, 0 ≤ s_2 ≤ s_1 ≤ 256, t_2 = 3 and t_1 = 1: the generalized PEE with k = 3 and k = 1 is applied to the pixels with NC belonging to [0, s_2) and [s_2, s_1), respectively, while the other pixels are kept unchanged. Our method with these parameters can be viewed as an alternative version of Li et al.'s work [35].

• L = 3, 0 ≤ s_3 ≤ s_2 ≤ s_1 ≤ 256, t_3 = 7, t_2 = 3 and t_1 = 1: in this case, for a pixel x,
  ◦ if C^n(x) ∈ [0, s_3), it is shifted or embedded with 3 bits using generalized PEE with k = 7;
  ◦ if C^n(x) ∈ [s_3, s_2), it is shifted or embedded with 2 bits using generalized PEE with k = 3;
  ◦ if C^n(x) ∈ [s_2, s_1), it is shifted or embedded with 1 bit using generalized PEE with k = 1;
  ◦ if C^n(x) ∈ [s_1, 256), it is skipped without any modification.

• L = 4, 0 ≤ s_4 ≤ s_3 ≤ s_2 ≤ s_1 ≤ 256, t_4 = 7, t_3 = 5, t_2 = 3 and t_1 = 1: in this case, for a pixel x,
  ◦ if C^n(x) ∈ [0, s_4), it is shifted or embedded with 3 bits using generalized PEE with k = 7;
  ◦ if C^n(x) ∈ [s_4, s_3), it is shifted or embedded with log2 6 bits using generalized PEE with k = 5;
  ◦ if C^n(x) ∈ [s_3, s_2), it is shifted or embedded with 2 bits using generalized PEE with k = 3;
  ◦ if C^n(x) ∈ [s_2, s_1), it is shifted or embedded with 1 bit using generalized PEE with k = 1;
  ◦ if C^n(x) ∈ [s_1, 256), it is skipped without any modification.
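The lookup of Table 1 can be sketched as a small helper mapping a pixel's normalized complexity to its (k, T_k) pair; the list layout and the function name are our assumptions:

```python
# Map a normalized complexity C^n(x) to the (k, T_k) of Table 1.
# s = [s_1, ..., s_L] (non-increasing), t = [t_1, ..., t_L] (increasing),
# so the interval [s_{i+1}, s_i) gets k = t_i and T_k = floor(T / k).

def level_params(cn, s, t, T):
    if cn >= s[0]:
        return None                    # C^n(x) in [s_1, 256): pixel skipped
    for i in range(len(s)):
        lower = s[i + 1] if i + 1 < len(s) else 0
        if lower <= cn < s[i]:
            k = t[i]
            return k, T // k
    return None

# With the L = 3 parameters reported for Lena at ER = 1.0 BPP,
# (s_1, s_2, s_3) = (43, 14, 5) and (t_1, t_2, t_3) = (1, 2, 3):
s, t = [43, 14, 5], [1, 2, 3]
assert level_params(2, s, t, 12) == (3, 4)     # smoothest level: k = 3, 2 bits
assert level_params(30, s, t, 12) == (1, 12)   # roughest embeddable level: 1 bit
assert level_params(200, s, t, 12) is None     # too noisy: skipped
```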

By the last two examples, we see that more flexible adaptive embedding can be realized by our method. We now describe the detailed data embedding and extraction procedures.

3.3.1. Data embedding

Step 1: Scan the host pixels from left to right and top to bottom, and calculate the prediction value and NC of each pixel according to (8) and (9).

Step 2: For a to-be-processed pixel, suppose its NC belongs to $[s_{i+1}, s_i)$. If i = 0, skip it and move to the next pixel. If i > 0, modify it using generalized PEE (10) according to Table 1 if there is no overflow/underflow (see (12)); otherwise, record the pixel location using ⌈log2 N⌉ bits, where N is the image size. This step stops once all message bits are embedded. The number of overflow/underflow pixels is denoted N_flow.

Step 3: Record the least significant bits (LSB) of the first 20 + 12L + (N_flow + 2)⌈log2 N⌉ pixels to obtain a binary sequence S_LSB. Embed the sequence S_LSB into the unprocessed pixels (i.e., the pixels not processed in Step 2) in the same way as in Step 2, and denote by l_end the index of the last processed pixel. Next, replace the LSBs of the first 20 + 12L + (N_flow + 2)⌈log2 N⌉ pixels by the following auxiliary information:
• the capacity-control parameter T (8 bits),
• the parameters {s_1, …, s_L} and {t_1, …, t_L} (9L + 3L = 12L bits),
• the end position l_end (⌈log2 N⌉ bits),
• the maximum complexity C_max of all host pixels (12 bits),
• the number of overflow/underflow pixels N_flow (⌈log2 N⌉ bits),
• the overflow/underflow locations (N_flow ⌈log2 N⌉ bits).
Finally, the marked image is generated.

3.3.2. Data extraction

Step 1: Read the LSBs of the first 20 + 12L + 2⌈log2 N⌉ pixels of the marked image to get the values of T, {s_1, …, s_L}, {t_1, …, t_L}, l_end, C_max and N_flow. Then, read the LSBs of the next N_flow ⌈log2 N⌉

X. Gui et al. / Signal Processing 98 (2014) 370–380

pixels of the marked image to determine all overflow/underflow locations.

Step 2: From the end location l_end, in the reverse scan order, repeat the following process until the sequence S_LSB is extracted. For each pixel, the prediction value and NC are first computed. Suppose its NC belongs to $[s_{i+1}, s_i)$. If i = 0 or the pixel index is an overflow/underflow location, the pixel value is unmodified and carries no hidden data. Otherwise, pixel restoration and data extraction are conducted according to the method described in Section 3.2 with $k = t_i$ and $T_k = \lfloor T/k \rfloor$.

Step 3: Replace the LSBs of the first 20 + 12L + (N_flow + 2)⌈log2 N⌉ pixels by the sequence S_LSB extracted in Step 2. Then use the same method as in Step 2 to extract the embedded message from the unprocessed pixels and meanwhile realize the restoration. Finally, the embedded data is extracted and the host image is recovered.

3.4. Optimal complexity partition and capacity parameters selection

For a given L, there are 2L + 1 parameters, denoted $P = (T, s_1, \ldots, s_L, t_1, \ldots, t_L)$, involved in our method. We now discuss how to determine the best parameters to achieve an optimized embedding performance. We first estimate the EC and the embedding distortion for a given parameter set P; for simplicity, overflow/underflow is not considered in this estimation. For each $s \in \{0, 1, \ldots, 255\}$ and $e \in \{-255, \ldots, 255\}$, we define

$$h(s, e) = \#\{1 \le i \le N : C^n(x_i) = s,\; e_i = e\} \qquad (14)$$

where N is the image size, $x_i$ is the i-th pixel value, $e_i$ and $C^n(x_i)$ are the prediction-error and NC of $x_i$, and # denotes the cardinality of a set. Then, for a given P, the EC and the embedding distortion in $l_2$-norm can be formulated as

$$f_{EC}(P) \triangleq \sum_{i=1}^{L} \log_2(t_i + 1) \left( \sum_{s=s_{i+1}}^{s_i - 1} \;\sum_{e=-\lfloor T/t_i \rfloor}^{\lfloor T/t_i \rfloor - 1} h(s, e) \right) \qquad (15)$$

and

$$f_{ED}(P) \triangleq \sum_{i=1}^{L} \sum_{s=s_{i+1}}^{s_i - 1} \;\sum_{e=-255}^{255} g_i(e)\, h(s, e) \qquad (16)$$

where, by using (13), $g_i$ is defined by

$$g_i(e) = \begin{cases} D_{t_i}(e) & \text{if } -\lfloor T/t_i \rfloor \le e < \lfloor T/t_i \rfloor \\ (t_i \lfloor T/t_i \rfloor)^2 & \text{otherwise.} \end{cases} \qquad (17)$$

After computing $(f_{EC}, f_{ED})$ for each possible parameter set, we define P as an optimized parameter set (OPS) if, for any other $P' \ne P$, $f_{EC}(P') \ge f_{EC}(P)$ and $f_{ED}(P') \le f_{ED}(P)$ do not hold at the same time. By this means, we determine all OPSs. Finally, for an OPS P, we implement the data embedding procedure described in Section 3.3.1, where the true EC is the estimated EC $f_{EC}(P)$ minus the auxiliary information size.

To demonstrate the advantage of the above parameter determination procedure, we first give some experimental results with and without this technique for the standard 512 × 512 gray-scale image Lena. The performance comparison of the proposed RDH scheme with randomly selected parameter sets and with OPSs for L = 3 is shown in Fig. 3. According to this figure, the OPS always achieves the best performance for every embedding rate (ER), which verifies the effectiveness of the proposed parameter determination procedure.

We then present experimental results for L ∈ {1, 2, 3, 4} for Lena. Referring to Fig. 4, our scheme with OPS performs well and can achieve a very high ER, larger than 2 bits per pixel (BPP). Moreover, as L increases, the performance improves along with the maximum ER, although the gain is very slight when changing L from 3 to 4. This observation is a general phenomenon and is validated on the other test images. For a given image, the running time of the parameter determination procedure, implemented in C++, is about 1 s for L = 1, 10 s for L = 2, 3 min for L = 3 and 1 h for L = 4. Based on the trade-off between time cost and embedding performance, we fix L = 3 in our scheme for the following experiments. More experimental results are reported in the next section.

Fig. 3. Performance comparison of the proposed RDH scheme with randomly selected parameter sets and OPSs for L = 3, for the image Lena (PSNR in dB versus embedding rate in BPP).

Fig. 4. Performance of the proposed RDH scheme with OPS for L ∈ {1, 2, 3, 4}, for the image Lena (PSNR in dB versus embedding rate in BPP).
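The OPS definition of Section 3.4 is a Pareto-front selection on the (f_EC, f_ED) scores of candidate parameter sets; a minimal sketch with made-up scores (names and numbers are illustrative only):

```python
# Keep the parameter sets not dominated in (capacity, distortion): P is an
# OPS when no other set has f_EC at least as high AND f_ED at least as low.

def pareto_front(scored):
    """scored: list of (params, f_EC, f_ED) tuples."""
    front = []
    for p, ec, ed in scored:
        dominated = any(ec2 >= ec and ed2 <= ed and (ec2, ed2) != (ec, ed)
                        for _, ec2, ed2 in scored)
        if not dominated:
            front.append((p, ec, ed))
    return front

candidates = [("A", 100000, 50.0),   # dominated by B
              ("B", 120000, 48.0),
              ("C", 90000, 60.0)]    # dominated by B
assert pareto_front(candidates) == [("B", 120000, 48.0)]
```

In the paper the front is computed over all admissible parameter sets $(T, s_1, \ldots, s_L, t_1, \ldots, t_L)$, which explains why the search time grows quickly with L.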


Fig. 5. Eight test images. (a) Lena. (b) Baboon. (c) Barbara. (d) Airplane. (e) House. (f) Lake. (g) Boat. (h) Tiffany.

4. Experimental results

In this section, several experiments are conducted to demonstrate the performance of our scheme with OPS for L = 3. Eight 512 × 512 standard gray-scale images, namely Lena, Baboon, Barbara, Airplane, House, Lake, Boat and Tiffany (shown in Fig. 5), are used in our experiments. Except for Barbara, all the images are downloaded from the USC-SIPI database.¹

¹ http://sipi.usc.edu/database/database.php?volume=misc

Fig. 6. Capacity-control parameter T used in our scheme with OPS for L = 3, for three test images (Lena, Baboon, Airplane).

To better understand the proposed method, we first show the OPS for given ER. Referring to Fig. 6, as expected, the capacity-control parameter T increases as ER increases. Moreover, for a given ER, this parameter is relatively larger for a textured image such as Baboon. The parameters (s_1, …, s_L) and (t_1, …, t_L) for L = 3 are given in Table 2. According to the table, the complexity partition parameters (s_1, s_2, s_3) generally increase as ER rises. When ER is small, the adaptive embedding strategy is not fully exploited, as the payload is mostly embedded into smooth pixels by the conventional PEE. For ER ∈ {0.2, 0.3, 0.4, 0.6, 0.8}, only two intervals are in fact adopted for adaptive embedding. However, as ER approaches the maximum, all three distinct intervals are used with different capacity parameters, making better use of the adaptive embedding strategy. As for the capacity parameters (t_1, t_2, t_3), t_1 is 1 in most cases and, for high ER, larger values of t_2 and t_3 are selected to provide enough capacity. For example, for the image Lena with ER = 2.2 BPP, (t_1, t_2, t_3) = (1, 3, 6), which

correspond to embedding 1, 2, and log2 7 bits into pixels of the three complexity levels.

The proposed RDH scheme is compared with four state-of-the-art methods, those of Hu et al. [26], Li et al. [35], Sachnev et al. [38] and Peng et al. [16]. The performance comparison is shown in Fig. 7. For all the images except Tiffany, the superiority of our scheme over the other methods is evident. According to Fig. 7, our scheme shows a significant improvement over the conventional PEE of Hu et al., owing to the adaptive embedding strategy. Compared with Li et al.'s method, our scheme performs better thanks to its more flexible adaptive embedding strategy, and the advantage is more pronounced at higher ER. Referring to Tables 4 and 5, our scheme improves on Li et al.'s by 0.89 dB and 1.28 dB on average for ERs of 1.0 and 1.5 BPP,


Table 2. Parameters (s1, …, sL) and (t1, …, tL) used in our scheme with OPS for L = 3, for the image Lena.

ER    s1   s2   s3   t1   t2   t3
0.2   32    3    0    1    2    –
0.3   19    5    0    1    3    –
0.4   57    2    0    1    3    –
0.5   31    5    1    1    3    4
0.6   33    2    0    1    5    –
0.7   41    6    2    1    2    3
0.8   33    9    0    1    2    –
0.9   47   11    4    1    2    4
1.0   43   14    5    1    2    3
1.1   47   16    7    1    2    3
1.2   57   19    7    1    2    3
1.3   59   22    9    1    2    3
1.4   72   24   12    1    2    3
1.5   76   27   15    1    2    3
1.6   78   32   18    1    2    3
1.7   83   32   15    1    2    4
1.8   94   35   18    1    2    4
1.9  106   31   13    1    3    5
2.0  118   37   15    1    3    5
2.1  120   41   19    1    3    5
2.2  118   46   18    1    3    6
2.3   94   30   16    2    4    7

Table 3. Comparison of PSNR (in dB) between our scheme for L = 3 and the four methods of Hu et al. [26], Li et al. [35], Sachnev et al. [38] and Peng et al. [16], for an ER of 0.5 BPP.

Image      Hu et al. [26]   Li et al. [35]   Sachnev et al. [38]   Peng et al. [16]   Proposed scheme
Lena       40.73            42.37            42.73                 40.98              42.41
Baboon     30.65            31.42            32.20                 30.31              32.01
Barbara    38.47            41.04            40.20                 38.81              41.78
Airplane   44.21            45.97            46.25                 44.17              46.03
House      42.43            44.92            43.74                 43.05              45.25
Lake       35.33            36.62            36.73                 31.18              37.15
Boat       36.45            37.82            37.95                 36.90              37.81
Tiffany    41.18            40.89            42.97                 41.76              41.77
Average    38.68            40.13            40.35                 38.40              40.53
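The PSNR figures reported in the tables follow the standard definition for 8-bit images, PSNR = 10 log10(255²/MSE); a minimal computation, using flat pixel sequences for simplicity, looks like:

```python
import math

def psnr(cover, marked, peak=255.0):
    """PSNR (dB) between two equal-length sequences of 8-bit pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(cover, marked)) / len(cover)
    # Identical images have zero MSE, i.e. infinite PSNR.
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```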

respectively. Besides, for each image, our scheme provides a larger maximum ER than [35]; for example, our maximum ER is as high as 2.3 BPP for Lena. Over the eight test images, our scheme increases Li et al.'s maximum ER by 19% on average. Sachnev et al.'s method provides an embedding position selection strategy as well as better pixel prediction. As Fig. 7 shows, it is comparable with our scheme for small ER, while our scheme is better for high payloads in most cases. Referring to Table 3, our scheme achieves a larger PSNR on average. Peng et al.'s method is based on an integer transform. Although it can also provide a high ER, the integer transform is not a good choice for efficient RDH, as it performs worst among the tested methods. However, for the image Tiffany, our scheme fails to provide good performance due to the large size of the location map, since this image contains a large number of saturated pixels.

As is well known, the location map may significantly affect the embedding performance of RDH. For our method, the location map size measured in BPP for three test images is shown in Fig. 8. It can be observed that the location map size is rather small at low ER and increases rapidly as ER approaches the maximum. Besides, the effect of the location map is greater for textured images than for smooth ones. For smooth images like Lena and Airplane, the location map has little influence for ER less than 2 BPP, while the textured image Baboon suffers an obvious payload loss for ER larger than 0.5 BPP. For example, when ER is 1.3 BPP, the location map size exceeds 0.2 BPP for Baboon, and thus the total amount of embedded data (payload plus location map) is actually larger than 1.5 BPP.

For images which generate a large location map, such as Tiffany, the proposed method cannot provide good marked image quality. However, for a given OPS, instead of simply recording the overflow/underflow locations, we may compress the location map, at a cost of running time, to save more space for EC. Specifically, we first establish a location map of the same size as the host image. Then, if overflow/underflow occurs at a pixel, the corresponding value in the location map is marked as 1, and otherwise as 0. Next, the location map is compressed using lossless arithmetic coding to further reduce its size. Finally, the compressed location map is embedded into the host image as part of the auxiliary information. With this treatment of the location map, the performance of our method is enhanced. A comparison between the proposed method with and without the compressed location map and other methods for Tiffany is shown in Fig. 9. Our method with this new implementation of the location map performs rather well: it is better than Li et al.'s and Peng et al.'s methods, with a significant increase in both PSNR and maximum ER. It should be mentioned that, for a given OPS, our original data embedding can be executed in less than 0.1 s, whereas with location map compression the running time is about 20 s.

Besides the above eight classical images, to further validate the efficiency of our method, we also conduct experiments on the larger database BossBase v1.00² [41], containing 10 000 gray-scale images. The images in BossBase are never-compressed cover images coming from several digital cameras. They are created from full-resolution color images in RAW format (CR2 or DNG), resized so that the smaller side is 512 pixels long, cropped to 512 × 512 pixels, and finally converted to gray-scale. We apply the proposed scheme and the method of Li et al. [35] for an ER of 1.0 BPP; the obtained average PSNR values are 38.98 dB and 37.85 dB, respectively. So the proposed method outperforms that of Li et al. with an average PSNR increase of 1.13 dB. The distribution of the PSNR difference between the proposed method and Li et al.'s is shown in Fig. 10, demonstrating the superiority of our method on a large database.

In conclusion, compared with the previous state-of-the-art work [16,26,35,38], our scheme achieves better performance with a higher maximum ER.
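The location-map construction and compression described above can be sketched as follows. The function names and the simple saturation test are illustrative assumptions, and zlib stands in for the lossless arithmetic coder mentioned in the text:

```python
import zlib

def build_location_map(pixels, low=0, high=255):
    # 1 marks a pixel where embedding could overflow/underflow; here a
    # simple saturation test on the host pixel stands in for the real check
    # against the would-be marked value.
    return bytes(1 if p <= low or p >= high else 0 for p in pixels)

def compress_map(location_map):
    # Lossless compression of the (mostly zero) binary map; the text
    # specifies arithmetic coding, zlib is used here as a stand-in.
    return zlib.compress(location_map, 9)

def decompress_map(blob):
    # Exact recovery of the map is required for reversibility.
    return zlib.decompress(blob)
```

For a smooth image the map is almost all zeros and compresses to a tiny fraction of its raw size, which is what makes this treatment pay off for images like Tiffany.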

² http://www.agents.cz/boss/BOSSFinal/


Fig. 7. Performance comparison between our method and the four methods of Hu et al. [26], Li et al. [35], Sachnev et al. [38] and Peng et al. [16]. [Eight PSNR-versus-embedding-rate plots, one per test image: Lena, Baboon, Barbara, Airplane, House, Lake, Boat and Tiffany; x-axis: Embedding Rate (BPP), y-axis: PSNR (dB).]

Table 4. Comparison of PSNR (in dB) between our scheme for L = 3 and the two methods of Li et al. [35] and Peng et al. [16], for an ER of 1.0 BPP. The result for Tiffany is not presented since neither our method nor Li et al.'s method can provide such a payload.

Image      Li et al. [35]   Peng et al. [16]   Proposed scheme
Lena       34.59            32.96              35.39
Baboon     23.81            22.35              24.32
Barbara    32.32            30.23              33.49
Airplane   37.76            35.96              38.51
House      34.89            33.45              36.90
Lake       29.78            29.14              30.25
Boat       30.68            29.32              31.20
Average    31.98            30.49              32.87

Table 5. Comparison of PSNR (in dB) between our scheme for L = 3 and the two methods of Li et al. [35] and Peng et al. [16], for an ER of 1.5 BPP. The results for Baboon, Lake and Tiffany are not presented since either our method or Li et al.'s method cannot provide such a payload.

Image      Li et al. [35]   Peng et al. [16]   Proposed scheme
Lena       29.57            27.31              30.20
Barbara    25.00            24.11              27.49
Airplane   32.08            29.96              32.90
House      28.76            26.09              30.54
Boat       25.12            23.44              25.80
Average    28.11            26.18              29.39

Fig. 8. Location map size for L = 3, for three test images (Lena, Baboon, Airplane). [Plot; x-axis: Embedding Rate (BPP), y-axis: Location map size (BPP).]

Fig. 9. Performance comparison between the proposed method with and without the compressed location map and the two methods of Sachnev et al. [38] and Peng et al. [16], for the image Tiffany. [Plot; x-axis: Embedding Rate (BPP), y-axis: PSNR (dB).]

Fig. 10. Distribution of the PSNR difference between our scheme and the method of Li et al. [35] on the database BossBase v1.00. [Histogram; x-axis: Difference of PSNR, y-axis: percentage of images.]

5. Conclusion

In this paper, based on the generalized PEE and adaptive embedding, a high capacity RDH scheme is proposed. For each pixel, we first calculate its prediction value and complexity by the GAP predictor and a normalized complexity measurement. Then a certain amount of message bits is embedded into the pixel by the generalized PEE, where the amount of embedded data is adaptively determined by the complexity level. According to our adaptive embedding strategy, more bits are embedded into pixels located in smoother regions. In this way, our scheme can utilize much more image redundancy to embed data with limited distortion. Experimental results demonstrate its superiority over some state-of-the-art RDH works.

References

[1] Y.Q. Shi, Reversible data hiding, in: Proceedings of IWDW, Lecture Notes in Computer Science, vol. 3304, Springer, 2004, pp. 1–12.
[2] Y.Q. Shi, Z. Ni, D. Zou, C. Liang, G. Xuan, Lossless data hiding: fundamentals, algorithms and applications, in: Proceedings of IEEE ISCAS, vol. 2, 2004, pp. 33–36.
[3] R. Caldelli, F. Filippini, R. Becarelli, Reversible watermarking techniques: an overview and a classification, EURASIP J. Inf. Secur. 2010 (article ID 134546).
[4] J. Fridrich, M. Goljan, R. Du, Invertible authentication, in: Security and Watermarking of Multimedia Contents III, vol. 4314, SPIE, 2001, pp. 197–208.
[5] J. Fridrich, M. Goljan, R. Du, Lossless data embedding: new paradigm in digital watermarking, EURASIP J. Appl. Signal Process. 2002 (2) (2002) 185–196.
[6] M.U. Celik, G. Sharma, A.M. Tekalp, E. Saber, Lossless generalized-LSB data embedding, IEEE Trans. Image Process. 14 (2) (2005) 253–266.
[7] M.U. Celik, G. Sharma, A.M. Tekalp, Lossless watermarking for image authentication: a new framework and an implementation, IEEE Trans. Image Process. 15 (4) (2006) 1042–1049.
[8] W. Zhang, B. Chen, N. Yu, Improving various reversible data hiding schemes via optimal codes for binary cover, IEEE Trans. Image Process. 21 (6) (2012) 2991–3003.
[9] W. Zhang, X. Hu, X. Li, N. Yu, Recursive histogram modification: establishing equivalency between reversible data hiding and lossless data compression, IEEE Trans. Image Process. 22 (7) (2013) 2775–2785.
[10] Z. Ni, Y.Q. Shi, N. Ansari, W. Su, Reversible data hiding, IEEE Trans. Circuits Syst. Video Technol. 16 (3) (2006) 354–362.
[11] J. Tian, Reversible data embedding using a difference expansion, IEEE Trans. Circuits Syst. Video Technol. 13 (8) (2003) 890–896.
[12] A.M. Alattar, Reversible watermark using the difference expansion of a generalized integer transform, IEEE Trans. Image Process. 13 (8) (2004) 1147–1156.
[13] D. Coltuc, J.M. Chassery, Very fast watermarking by reversible contrast mapping, IEEE Signal Process. Lett. 14 (4) (2007) 255–258.
[14] X. Wang, X. Li, B. Yang, Z. Guo, Efficient generalized integer transform for reversible watermarking, IEEE Signal Process. Lett. 17 (6) (2010) 567–570.
[15] C. Wang, X. Li, B. Yang, High capacity reversible image watermarking based on integer transform, in: Proceedings of IEEE ICIP, 2010, pp. 217–220.
[16] F. Peng, X. Li, B. Yang, Adaptive reversible data hiding scheme based on integer transform, Signal Process. 92 (1) (2012) 54–62.
[17] D. Coltuc, Low distortion transform for reversible watermarking, IEEE Trans. Image Process. 21 (1) (2012) 412–417.
[18] X. Gui, X. Li, B. Yang, A novel integer transform for efficient reversible watermarking, in: Proceedings of ICPR, 2012, pp. 947–950.
[19] L. Kamstra, H.J.A.M. Heijmans, Reversible data embedding into images using wavelet techniques and sorting, IEEE Trans. Image Process. 14 (12) (2005) 2082–2090.
[20] S. Weng, Y. Zhao, J.S. Pan, R. Ni, Reversible watermarking based on invariability and adjustment on pixel pairs, IEEE Signal Process. Lett. 15 (2008) 721–724.
[21] H.J. Kim, V. Sachnev, Y.Q. Shi, J. Nam, H.G. Choo, A novel difference expansion transform for reversible data embedding, IEEE Trans. Inf. Forensic Secur. 4 (3) (2008) 456–465.
[22] M. Liu, H.S. Seah, C. Zhu, W. Lin, F. Tian, Reducing location map in prediction-based difference expansion for reversible image data embedding, Signal Process. 92 (3) (2012) 819–828.
[23] C.-F. Lee, H.-L. Chen, Adjustable prediction-based reversible data hiding, Digit. Signal Process. 22 (6) (2012) 941–953.
[24] D.M. Thodi, J.J. Rodriguez, Expansion embedding techniques for reversible watermarking, IEEE Trans. Image Process. 16 (3) (2007) 721–730.
[25] M. Fallahpour, Reversible image data hiding based on gradient adjusted prediction, IEICE Electron. Express 5 (20) (2008) 870–876.
[26] Y. Hu, H.K. Lee, J. Li, DE-based reversible data hiding with improved overflow location map, IEEE Trans. Circuits Syst. Video Technol. 19 (2) (2009) 250–260.
[27] W. Hong, T.S. Chen, C.W. Shiu, Reversible data hiding for high quality images using modification of prediction errors, J. Syst. Softw. 82 (11) (2009) 1833–1842.
[28] L. Luo, Z. Chen, M. Chen, X. Zeng, Z. Xiong, Reversible image watermarking using interpolation technique, IEEE Trans. Inf. Forensic Secur. 5 (1) (2010) 187–193.
[29] D. Coltuc, Improved embedding for prediction-based reversible watermarking, IEEE Trans. Inf. Forensic Secur. 6 (3) (2011) 873–882.
[30] X. Gao, L. An, Y. Yuan, D. Tao, X. Li, Lossless data embedding using generalized statistical quantity histogram, IEEE Trans. Circuits Syst. Video Technol. 21 (8) (2011) 1061–1070.
[31] H.-T. Wu, J. Huang, Reversible image watermarking on prediction errors by efficient histogram modification, Signal Process. 92 (12) (2012) 3000–3009.
[32] G. Coatrieux, W. Pan, N. Cuppens-Boulahia, F. Cuppens, C. Roux, Reversible watermarking based on invariant image classification and dynamic histogram shifting, IEEE Trans. Inf. Forensic Secur. 8 (1) (2013) 111–120.
[33] S. Jung, L. Ha, S. Ko, A new histogram modification based reversible data hiding algorithm considering the human visual system, IEEE Signal Process. Lett. 18 (2) (2011) 95–98.
[34] W. Hong, T. Chen, M. Wu, An improved human visual system based reversible data hiding method using adaptive histogram modification, Opt. Commun. 291 (2013) 87–97.
[35] X. Li, B. Yang, T. Zeng, Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection, IEEE Trans. Image Process. 20 (12) (2011) 3524–3533.
[36] M.J. Weinberger, G. Seroussi, G. Sapiro, The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS, IEEE Trans. Image Process. 9 (8) (2000) 1309–1324.
[37] X. Wu, N. Memon, Context-based adaptive lossless image coding, IEEE Trans. Commun. 45 (4) (1997) 437–444.
[38] V. Sachnev, H.J. Kim, J. Nam, S. Suresh, Y.Q. Shi, Reversible watermarking algorithm using sorting and prediction, IEEE Trans. Circuits Syst. Video Technol. 19 (7) (2009) 989–999.
[39] W. Hong, An efficient prediction-and-shifting embedding technique for high quality reversible data hiding, EURASIP J. Adv. Signal Process. 2010 (article ID 104835).
[40] W. Hong, Adaptive reversible data hiding method based on error energy control and histogram shifting, Opt. Commun. 285 (2) (2012) 101–108.
[41] P. Bas, T. Filler, T. Pevny, Break our steganographic system: the ins and outs of organizing BOSS, in: Proceedings of the 13th International Workshop on Information Hiding, Lecture Notes in Computer Science, vol. 6958, Springer, 2011, pp. 59–70.