Expert Systems with Applications 40 (2013) 707–714
Duplication forgery detection using improved DAISY descriptor

Jing-Ming Guo, Yun-Fu Liu, Zong-Jhe Wu

Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
Keywords: Digital image forensics; Image matching; Copy-move attack; Authenticity verification; Duplication
Abstract

Copy-move is one of the simplest and most effective operations for creating digital image forgeries, owing to continually evolving image processing tools. In recent years, SIFT-based approaches have been widely applied to detect copy-move forgeries. Although these methods have proven robust in this field, when the cloned region has uniform texture they normally fail to detect the forgery, because insufficient keypoints, or even none, are located there. Thus, in this paper, an effective method based on adaptive non-maximal suppression and a rotation-invariant DAISY descriptor is proposed, which enables the detection of a cloned region even when it has undergone several geometric changes or manipulations, such as rotation, scaling, JPEG compression, and Gaussian noise. Extensive experimental results confirm that the technique is effective in identifying the altered areas.
1. Introduction

Nowadays, forgeries produced with state-of-the-art digital image technologies cannot be easily authenticated, even by an expert. This raises critical concerns about the use of digital images that carry information needing authentication, e.g., in digital crime investigation. Currently, authentication methodologies can be roughly grouped into two categories, active and passive, according to their strategy. Methods in the first category, such as digital watermarking (Fridrich, 2002; Run, Horng, Lai, Kao, & Chen, 2012; Run et al., 2011; Yuan & Zhang, 2006), embed imperceptible information into an image. Yet, these approaches are invasive in the sense that certain distortions are inevitable during the embedding process. In practice, active techniques are focused only on specific environments, such as surveillance cameras, in which the devices are equipped with appropriate electronic components either to compute digital signatures or to embed watermarks. On the other hand, image forensics aims at determining whether an image has been manipulated, without relying on additional information embedded beforehand (Fridrich et al., 2003; Popescu & Farid, 2004). These techniques usually work on the assumption that a forgery alters some underlying statistical characteristics of an image, even without leaving any perceivable trace (Popescu & Farid, 2005). Because of this difficulty, forensic proof typically relies on several methods applied simultaneously, rather than on one specific algorithm (Fridrich, Soukal, & Lukas, 2003).
Copy-move, one of the most commonly considered forgeries, involves concealing unwanted objects in an image with regions copied from another part of the same image (Fridrich et al., 2003; Langille & Gong, 2006; Mahdian & Saic, 2007; Popescu & Farid, 2004). To allow further visual inspection in specific applications such as crime investigation, the duplicated areas should first be highlighted for authentication. In addition, for higher robustness, the situation in which additional post-processing operations are applied to the cloned regions should be considered. For example, the detection of cloned areas altered by JPEG compression has been studied in Fridrich et al. (2003), Popescu and Farid (2005), and Luo et al. (2006), while the detection of duplicated regions affected by blurring was considered in Mahdian and Saic (2007) and Dybala, Jennings, and Letscher (2007). Nevertheless, these methods are still susceptible to geometric changes of the copied portion. Therefore, an excellent copy-move forgery detector should be robust to non-malicious transformations and manipulations, such as rotation, scaling, and the usual JPEG compression, as well as to common pollutions, e.g., Gaussian noise. Former methods do not deal with all kinds of manipulations simultaneously. For instance, the method of Popescu and Farid (2004) was not able to detect scaling or rotation transformations, and the methods of Bayram, Sencar, and Memon (2009) and Fridrich et al. (2003) only allowed small variations in rotation and scaling, as reported in Bayram et al. (2008). Ryu, Lee, and Lee (2010) attempted to overcome the problem by using Zernike moments to identify copy-move manipulation, but only a rotated cloned region was considered. This issue was also discussed in Lin, Wang, and Kao (2009), in which rotation transformations, JPEG compression, and Gaussian noise manipulations were analyzed to understand how they affect copy-move detection. Bravo-Solorio and Nandi (2009) also proposed an
approach to detect duplicated and transformed regions through the use of a block description that is invariant to rotation, namely the log-polar block representation summed along its angle axis.

In recent years, the Scale-Invariant Feature Transform (SIFT) (Lowe, 2004) has been widely used in many fields, including image retrieval, object recognition, and image matching, due to its robustness to geometric transformations, occlusion, and clutter. For instance, in digital forensics, SIFT features have been used for fingerprint indexing (Shuai, Zhang, & Hao, 2008), shoeprint image retrieval (Su, Bouridane, & Gueham, 2007), and also for copy-move detection (Amerini et al., 2010; Huang, Guo, & Zhang, 2008; Jing & Shao, 2012; Pan & Lyu, 2010). SIFT features are detected at different scales using a scale-space representation implemented as an image pyramid. The pyramid levels are obtained by Gaussian smoothing and subsampling of the image, while interest points are selected as local extrema (minima/maxima) in the scale-space. Thus, the keypoints extracted by SIFT-based methods are usually insufficient, or even absent, when the texture of a cloned region is nearly uniform, since no local extrema exist in such an area. Hence, SIFT-based copy-move detection methods usually fail in these cases.

In this paper, an effective forensic method is proposed to detect duplicated regions, even when these regions have undergone several geometric changes as well as manipulations such as JPEG compression and Gaussian noise addition. To solve the insufficient/absent keypoints problem mentioned above, the adaptive non-maximal suppression (ANMS) algorithm (Brown, Szeliski, & Winder, 2005) is exploited to extract evenly distributed keypoints in regions that the SIFT-based approach cannot cover. Afterward, it is necessary to generate a descriptive vector for each keypoint such that the descriptors are distinctive and robust to other variations, such as illumination. For this purpose, Tola, Lepetit, and Fua (2010) recently presented a local descriptor called DAISY, which offers desirable characteristics such as insensitivity to contrast variation and scale changes. However, it lacks rotation invariance. Therefore, in this work, an improved DAISY descriptor with the important rotation-invariance property is proposed for higher reliability. Finally, after a keypoint matching process, the matched point pairs are exploited to indicate the cloned areas for further prospective applications.

The rest of this paper is organized as follows. Section 2 presents the proposed copy-move forgery detection approach. Section 3 presents the experimental results on forgery detection and explains the advantages of the proposed method, while conclusions are drawn in Section 4.

2. Proposed method

Fig. 1 illustrates the flowchart of the proposed system, which can generally be separated into three parts. Firstly, the ANMS (Brown et al., 2005) is adopted to detect uniformly distributed keypoints K = {k_1, k_2, ..., k_M | k_m = (u_{k_m}, v_{k_m})} as the interest locations, as described later. Afterward, the keypoints in the set K are described as F = {f_1, f_2, ..., f_M} using the proposed rotation-invariant DAISY descriptor.
Finally, feature matching determines whether each pair of keypoints is similar; an effective criterion is adopted instead of directly using the nearest neighbor.

2.1. Keypoints detection

The Harris corner detector is commonly used for detecting keypoints in image matching and image stitching. Its principle can be described by the following formulas:
Fig. 1. Flowchart of the proposed forgery detection system.
E(u, v)|_(x,y) = Σ_{(x,y)} w(x, y) [I(x + u, y + v) − I(x, y)]²,   (1)
where w(x, y) denotes a Gaussian filter as defined below, and (u, v) denotes a small shift.
w(x, y) = exp(−(u² + v²) / (2σ²)),   (2)
where σ denotes the standard deviation. After applying a Taylor series expansion to E(u, v) and eliminating the higher-order terms, the formula can be represented as below,
E(u, v)|_(x,y) = [u  v] V [u  v]^T,  where V = [A  C; C  B],   (3)

A = w ⊗ I_x²,  B = w ⊗ I_y²,  C = w ⊗ (I_x I_y),

where ⊗ denotes the convolution operation, and I_x and I_y represent the derivatives in the horizontal and vertical directions, respectively. Moreover, V is a 2×2 matrix with two eigenvalues; when both eigenvalues are large enough, the point is considered an interest point. A response function is also provided to distinguish corner points from edge points, as formulated below,
Z = det(V) − α · tr²(V),   (4)
where det(·) denotes the determinant of a matrix, tr(·) denotes the trace of a matrix, and α is set at 0.06 in this work. Since the computational cost of the matching procedure introduced later is superlinear in the number of interest points, it is necessary to restrict the maximum number of interest points (M) extracted from each image. Moreover, simply keeping the strongest corners does not make the points evenly distributed throughout the image, so the detected keypoints may still be insufficient or even absent in a duplicated region of uniform texture. To avoid the same drawback as in SIFT-based algorithms, the adaptive non-maximal suppression (ANMS) (Brown et al., 2005) is employed in the proposed approach for selecting interest points k_m ∈ K. The ANMS is a strategy for selecting the keypoints generated by the Harris corner detector. Interest points are suppressed based on the response Z calculated in Eq. (4), and only those that are the maximum within a neighborhood of radius r pixels are retained, where r denotes the search radius, which decreases from infinity toward zero until the desired number of keypoints (M) is reached. Fig. 2 compares the results obtained by the SIFT feature detection (Lowe, 2004) and the ANMS keypoint selection. Note that the points of the latter are much better distributed throughout the entire image.
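As an illustration of this selection step, the sketch below combines an off-the-shelf Harris response (Eq. (4)) with a straightforward ANMS pass. It is a minimal sketch assuming OpenCV and NumPy; the block size, candidate threshold, and robustness factor are our own choices rather than values taken from the paper.

```python
import cv2
import numpy as np

def anms_keypoints(gray, max_points=2500, alpha=0.06, robust=0.9):
    """Select evenly distributed Harris corners via adaptive non-maximal
    suppression (ANMS), in the spirit of Brown et al. (2005)."""
    # Harris response Z = det(V) - alpha * tr(V)^2 (Eq. (4)); blockSize and
    # ksize are assumed values, not taken from the paper.
    response = cv2.cornerHarris(np.float32(gray), blockSize=3, ksize=3, k=alpha)

    # Keep local maxima above a small threshold as candidate corners.
    dilated = cv2.dilate(response, None)
    ys, xs = np.where((response == dilated) & (response > 0.01 * response.max()))
    strengths = response[ys, xs]

    # Sort candidates by decreasing response.
    order = np.argsort(-strengths)
    pts = np.stack([xs[order], ys[order]], axis=1).astype(np.float64)
    vals = strengths[order]

    # Suppression radius of candidate i: (squared) distance to the nearest
    # earlier corner whose response is sufficiently stronger; squared
    # distances are enough for ranking.
    radii = np.full(len(pts), np.inf)
    for i in range(1, len(pts)):
        stronger = pts[:i][vals[:i] * robust > vals[i]]
        if len(stronger):
            radii[i] = np.min(np.sum((stronger - pts[i]) ** 2, axis=1))

    # Keep the max_points corners with the largest suppression radii.
    keep = np.argsort(-radii)[:max_points]
    return pts[keep]  # array of (u, v) keypoint coordinates
```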
Fig. 2. Detected keypoints of (a) SIFT (Lowe, 2004) and (b) ANMS (Brown et al., 2005). The '+' marks denote the interest points.
Table 1
DAISY parameter definitions (Tola et al., 2010).

Symbol  Description
R       Distance from the center point to the outermost grid point.
Q       Number of rings around the center grid point.
T       Number of grid points on each ring.
H       Number of bins of each grid-point histogram.
S       Number of all grid points, S = Q×T + 1.
D       Size of a descriptor vector, D = H×S.

Fig. 3. Structure of the DAISY (Tola et al., 2010) descriptor. The '+' marks denote the points to be sampled. The radius of each circle denotes a convolved region, directly proportional to the standard deviation of the corresponding Gaussian kernel. The angle θ is the anticlockwise deflection from the 0° line, and is computed to assign the primary orientation of each keypoint.
2.2. Rotation-invariant DAISY descriptor

After keypoint detection, it is necessary to generate a descriptive vector for each keypoint such that the descriptors are distinctive and robust to other interferences, such as illumination changes. Local image descriptors should be highly discriminative, computationally efficient, and as compact as possible, yet there is always a trade-off between high distinctiveness and short computation time: one has to reduce the descriptor size while keeping adequate distinctiveness, which is crucial in the matching task. Recently, Tola et al. (2010) proposed a powerful descriptor termed DAISY, which has been shown to outperform the SIFT descriptor. The schematic structure of the DAISY descriptor and its parameter settings are shown in Fig. 3 and Table 1, respectively. DAISY uses circular grids instead of the rectangular ones used in SIFT (Lowe, 2004) and SURF (Bay, Ess, Tuytelaars, & Gool, 2008); this structure has been shown to provide better localization properties than rectangular ones (Mikolajczyk & Schmid, 2005). Moreover, DAISY is relatively robust to contrast variation, scale change, and low image quality, and is efficient to compute densely. Furthermore, DAISY resists rotational perturbations to some extent by combining Gaussian kernels with circular grids.
Taking these advantages into account, DAISY is well suited for forgery detection. However, the DAISY descriptor cannot deal with large in-plane rotations, and performs even worse under significant rotational changes (Guo, Mu, Zeng, & Wang, 2010). Therefore, an improved DAISY descriptor is proposed in this study to cope with this problem. For the sake of clarity, the traditional DAISY descriptor is briefly introduced first. At the beginning, H gradient maps of a given image I are computed with pixel differences as below,
G_o = (∂I/∂o)^+,   (5)
where o denotes the orientation of the derivative, and (·)^+ denotes the operation (x)^+ = max(x, 0). The orientation maps are then convolved with Gaussian kernels of Q different standard deviations R to obtain convolved orientation maps for regions of different sizes:
G_o^R = G_R ⊗ (∂I/∂o)^+,   (6)
where G_R denotes a Gaussian kernel with standard deviation R; different values of R are used to control the size of the region. The convolved orientation maps can be computed efficiently, since a map convolved with a larger kernel of standard deviation R_2 can be obtained by consecutive convolution with smaller kernels,
G_o^{R_2} = G_{R_2} ⊗ (∂I/∂o)^+ = G_R ⊗ G_{R_1} ⊗ (∂I/∂o)^+ = G_R ⊗ G_o^{R_1},   (7)
where R = √(R_2² − R_1²). Thus, the representation at each coordinate (u, v) can be expressed as an H-dimensional histogram that collects the values of the H convolved orientation maps at that location:
h_{R_q}(u, v) = [G_1^{R_q}(u, v), ..., G_H^{R_q}(u, v)]^T,  where q ∈ [1, Q],   (8)
where G_1^{R_q}(u, v), ..., G_H^{R_q}(u, v) denote the convolved orientation maps at scale R_q in the H different directions. Each histogram is then normalized to unit norm independently, and is represented as h̃_{R_q}(u, v).
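To make Eqs. (5)-(8) concrete, the sketch below builds the convolved orientation maps and reads out one unnormalized histogram h_{R_q}(u, v). It is an illustration under assumed parameters (H = 8 orientations, ring radii reused as Gaussian standard deviations), not the authors' implementation; the incremental smoothing of Eq. (7) is reproduced by convolving the previous level with the residual kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def convolved_orientation_maps(image, H=8, radii=(5.0, 10.0, 15.0)):
    """Build G_o^R for H orientations and the given scales (Eqs. (5)-(7))."""
    # Image gradients by pixel differences.
    gy, gx = np.gradient(image.astype(np.float64))

    # Eq. (5): rectified directional derivatives, one map per orientation o.
    angles = [2 * np.pi * h / H for h in range(H)]
    maps = [np.maximum(gx * np.cos(a) + gy * np.sin(a), 0.0) for a in angles]

    # Eqs. (6)-(7): smooth each map with increasing sigma, reusing the
    # previous level so that only the residual sigma is applied each time.
    levels, prev_sigma = [], 0.0
    current = maps
    for sigma in radii:
        residual = np.sqrt(sigma ** 2 - prev_sigma ** 2)
        current = [gaussian_filter(m, residual) for m in current]
        levels.append(current)
        prev_sigma = sigma
    return levels  # levels[q][h] is G_h^{R_q}

def histogram_at(levels, q, u, v):
    """Eq. (8): the H-bin histogram h_{R_q}(u, v) before normalization."""
    return np.array([m[v, u] for m in levels[q]])
```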
Table 2
Eight different combinations of geometric transformations applied to the cloned patch for the UCID dataset (Schaefer & Stich, 2003).

Attack   Rotation (°)   Scaling (ratio)
A        0              1
B        0              0.7
C        0              1.2
D        30             1
E        70             1
F        150            1
G        40             0.8
H        60             1.3
Fig. 4. Detection performance for different numbers of keypoints (H = 8, Q = 3, T = 4).

Finally, the whole DAISY descriptor of a certain point (u, v) is formed by concatenating the previously computed normalized vectors of the point itself and of its neighboring sample points on the outer rings:

D(u, v) = [h̃^T_{R_1}(u, v), h̃^T_{R_1}(l_1(u, v, R_1)), ..., h̃^T_{R_1}(l_T(u, v, R_1)),
           h̃^T_{R_2}(l_1(u, v, R_2)), ..., h̃^T_{R_2}(l_T(u, v, R_2)),
           ...,
           h̃^T_{R_Q}(l_1(u, v, R_Q)), ..., h̃^T_{R_Q}(l_T(u, v, R_Q))]^T,   (9)

where l_j(u, v, R) denotes the location at distance R from (u, v) in the direction of the j-th grid point, counted from the 0° direction, and T denotes the number of grid points on each ring as defined in Table 1.

As mentioned above, because DAISY is not a rotation-invariant descriptor, image pairs involving large in-plane rotations cannot be handled well by DAISY. In order to make the descriptor rotation-invariant, a primary orientation is assigned to each keypoint. More specifically, given a keypoint coordinate k_n ∈ R², the local histogram of the neighborhood centered at the keypoint is computed; the gradient magnitude m(u, v) and orientation θ(u, v) within a circular region of radius R are calculated beforehand using pixel differences:

m(u, v) = √( (I(u + 1, v) − I(u − 1, v))² + (I(u, v + 1) − I(u, v − 1))² ),   (10)

θ(u, v) = tan⁻¹( (I(u, v + 1) − I(u, v − 1)) / (I(u + 1, v) − I(u − 1, v)) ).   (11)

Afterward, the orientation of each pixel in the circular window is added to a histogram with a Gaussian weight to detect the canonical orientation. The histogram is quantized into 36 bins, so each bin covers approximately 10°. The value accumulated in each bin is formulated as below,

H_i = Σ_{(u, v) ∈ R} ( m(u, v) / (σ√(2π)) ) exp( −((u − u_0)² + (v − v_0)²) / (2σ²) ),   (12)

where (u_0, v_0) denotes the coordinate of the center, and σ denotes the standard deviation of the Gaussian weights, which is set at 1.5 in this work. The variable H_i denotes the accumulated magnitude in each quantized orientation, where i ∈ [1, 36]. The peak of this histogram corresponds to the primary orientation (the angle θ illustrated in Fig. 3),

θ_pri = 10 · arg max_{i ∈ [1, 36]} H_i.   (13)

Once the keypoints are detected and their canonical orientations are assigned, the rotation-invariant DAISY descriptors are computed at the keypoints' locations in the image and defined as F = {f_1, f_2, ..., f_M}.
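A minimal sketch of this orientation assignment (Eqs. (10)-(13)) is given below; the window radius and the square window clipped to a circle are our own simplifications, while the 36 bins and σ = 1.5 follow the text.

```python
import numpy as np

def primary_orientation(image, u0, v0, radius=15, sigma=1.5, bins=36):
    """Estimate the canonical orientation of a keypoint (Eqs. (10)-(13)).

    Gradient magnitude and orientation are computed by pixel differences
    inside a circular window, accumulated into a 36-bin histogram with a
    Gaussian weight, and the peak bin gives the orientation in degrees."""
    img = image.astype(np.float64)
    hist = np.zeros(bins)
    h, w = img.shape
    for v in range(max(1, v0 - radius), min(h - 1, v0 + radius + 1)):
        for u in range(max(1, u0 - radius), min(w - 1, u0 + radius + 1)):
            if (u - u0) ** 2 + (v - v0) ** 2 > radius ** 2:
                continue  # keep only the circular region of radius R
            du = img[v, u + 1] - img[v, u - 1]           # horizontal difference
            dv = img[v + 1, u] - img[v - 1, u]           # vertical difference
            mag = np.hypot(du, dv)                       # Eq. (10)
            ang = np.degrees(np.arctan2(dv, du)) % 360   # Eq. (11)
            weight = np.exp(-((u - u0) ** 2 + (v - v0) ** 2) / (2 * sigma ** 2)) \
                     / (sigma * np.sqrt(2 * np.pi))
            hist[int(ang // (360 / bins)) % bins] += mag * weight  # Eq. (12)
    return (np.argmax(hist) + 1) * (360.0 / bins)        # Eq. (13), 1-based bin index
```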
Fig. 5. DAISY (Tola et al., 2010) performance under various parameter combinations. (a) Different numbers of quantized orientations. (b) Different numbers of convolved orientation rings. (c) Different numbers of circles used on each ring.
Table 3
TPR, FPR, and average processing time for each method.

Method                          TPR (%)   FPR (%)   Time (s)
Huang et al.'s method (2008)    88.52     9.48      5.67
Jing et al.'s method (2012)     91.26     6.32      8.29
Proposed method                 92.79     3.31      13.46
In Tola et al. (2010), the proposed parameters are R, Q, T, and H; the total number of sample points is thus S = Q×T + 1, and the dimensionality of a DAISY descriptor is D = S×H. Hence, a feature descriptor consists of D elements, obtained from a circular pixel area of radius R around the corresponding keypoint. The four main parameters (R, Q, T, H) that determine the shape of the DAISY descriptor are analyzed in Section 3.

2.3. Feature matching

After the above procedure, a set of keypoints K = {k_1, k_2, ..., k_M} with their corresponding DAISY descriptors F = {f_1, f_2, ..., f_M} is extracted. A matching operation is then performed in the DAISY space among the vectors of the keypoints to identify similar local patches in the test image. The best candidate match for each keypoint is found by identifying its nearest neighbor among all the other keypoints of the image, i.e., the keypoint with the minimum Euclidean distance in the DAISY space; the set of matched pairs can be written as P = {p_1, p_2, ..., p_N | N ≤ ⌊M/2⌋}, where ⌊·⌋ denotes the floor (round-down) operation. However, to determine whether two keypoints are similar enough to match, simply evaluating the distance between two descriptors against a global threshold does not perform well.
Fig. 6. Detection performances against JPEG compression for each method. (a) TPR. (b) FPR.
In the high-dimensional feature space, some descriptors are much more discriminative than others. A more effective procedure is therefore adopted, as suggested in Lowe (2004): the ratio between the distance to the closest neighbor and that to the second-closest one is compared with a threshold. For example, if a keypoint has a distance vector D = {d_1, d_2, ..., d_{n−1}} representing the sorted Euclidean distances to the other descriptors, the keypoint is matched only if the following constraint is satisfied:

d_1 / d_2 < c,  where c ∈ (0, 1).   (14)
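A minimal sketch of this ratio-test matching over the descriptor set F follows; the brute-force distance computation and the default threshold of 0.6 are illustrative choices, not the authors' implementation.

```python
import numpy as np

def match_keypoints(descriptors, ratio=0.6):
    """Pair keypoints whose nearest/second-nearest distance ratio is below
    the threshold of Eq. (14). `descriptors` is an (M, D) array of DAISY
    vectors F; returns index pairs (i, j) meaning keypoint i matches j."""
    F = np.asarray(descriptors, dtype=np.float64)
    matches = []
    for i in range(len(F)):
        # Euclidean distances from descriptor i to all others (self -> inf).
        d = np.linalg.norm(F - F[i], axis=1)
        d[i] = np.inf
        nearest, second = np.partition(d, 1)[:2]
        j = int(np.argmin(d))
        if second > 0 and nearest / second < ratio:   # ratio test (Eq. (14))
            matches.append((i, j))
    return matches
```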
3. Experimental results

In this section, the performance of the proposed approach is evaluated on the Uncompressed Colour Image Database (UCID) (Schaefer & Stich, 2003), which is composed of 1338 images with a resolution of 512×384 or 384×512 pixels. To produce as many situations as possible, 800 forged images are generated by randomly copying an image area (square or rectangular) and pasting it onto the same image, after applying one of the eight kinds of attacks (scaling, rotation, or a combination of them) listed in Table 2. The forged patch covers, on average, 1.83% of the whole image.

3.1. Settings for forgery detection

The detection performance is measured in terms of both the True Positive Rate (TPR) and the False Positive Rate (FPR), defined in Eqs. (15) and (16) below.
Fig. 7. Detection performances against Gaussian noise for each method. (a) TPR. (b) FPR.
Fig. 8. Detection results of various methods. The duplicated region undergoes (a) scaling with a factor of 1.1 (enlargement) and rotation by 10°; (b) scaling with a factor of 0.8 (reduction).
TPR = (number of forged images detected as forged) / (number of forged images),   (15)

FPR = (number of original images detected as forged) / (number of original images).   (16)
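Assuming the detector returns a binary forged/original verdict per image, the two rates can be computed as in the short sketch below; the function and variable names are ours.

```python
def detection_rates(predictions, ground_truth):
    """Compute TPR and FPR of Eqs. (15)-(16).

    `predictions[i]` is True when image i is flagged as forged;
    `ground_truth[i]` is True when image i is actually forged."""
    forged = [p for p, g in zip(predictions, ground_truth) if g]
    original = [p for p, g in zip(predictions, ground_truth) if not g]
    tpr = sum(forged) / len(forged) if forged else 0.0        # Eq. (15)
    fpr = sum(original) / len(original) if original else 0.0  # Eq. (16)
    return tpr, fpr

# Example: 3 forged and 2 original images
print(detection_rates([True, True, False, False, True],
                      [True, True, True, False, False]))  # -> (0.666..., 0.5)
```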
The number of keypoints detected by the ANMS directly affects the detection performance. As expected, the more keypoints are detected, the better the performance, but at the cost of considerably more processing time. To set the best M, the performance for different numbers of keypoints is plotted in Fig. 4, in which the DAISY parameters are set as H = 8, Q = 3, T = 4. The best value of M can be set near 2500, because beyond this number the performance increases only slowly. As described in Section 2.2, there are four main parameters that affect the DAISY descriptor, R, H, Q, and T, as listed in Table 1. The influence of each parameter is evaluated experimentally on all the forged UCID images by alternately changing one parameter while fixing the others.
To keep the scales of the orientation rings consistent, R is set at 5 for one ring, 10 for two rings, and 15 for three rings. According to Fig. 5, the following conclusions can be drawn: eight orientations perform clearly better than four, while twelve show no superiority over eight, indicating that eight orientations are sufficient; the performance keeps improving as the number of rings increases, showing that more rings are better since more neighboring information is included; four, eight, and twelve circles per ring give very similar performance, implying that a large number of circles on each ring is unnecessary because adjacent regions overlap. Therefore, H = 8, Q = 3, T = 4 is a good parameter choice for DAISY and is applied in the following experiments.

3.2. Detection performance and comparison

The proposed approach has been compared with the results obtained by our implementations of the SIFT-based algorithms (Huang et al., 2008; Jing & Shao, 2012). The thresholds required by these methods are all set at 0.6, and the matching threshold c in Eq. (14) is likewise set at 0.6. The experiments have been carried out on the 800 forged images of the UCID database, and the TPR, FPR, and processing time have been evaluated.
Table 3 shows the detection performance and the average processing time per image. The results indicate that the proposed method performs better than the other methods. Furthermore, tests of the robustness of the method against usual operations such as JPEG compression and Gaussian noise addition have also been carried out. To explore these situations, the above experiments are executed again, while the cloned region is distorted by JPEG compression or Gaussian noise addition before being pasted. Fig. 6 shows the TPR and FPR for JPEG quality factors ranging from 90 down to 20 in steps of 10. As can be seen, the FPR is quite stable, while the TPR tends to decrease slightly as the image quality is reduced. In the second experiment, the UCID images are distorted by adding Gaussian noise to obtain Signal-to-Noise Ratios (SNR) decreasing from 40 to 20 dB in steps of 10, and the results are shown in Fig. 7; the TPR remains over 90% until the SNR decreases to 20 dB, while the FPR remains stable. In general, SIFT-based methods perform well in most scenarios, but they often fail to detect duplicated regions with insufficient or no keypoints. Some forged examples are shown in Fig. 8,
where in each example the first row shows the original and the tampered image, respectively; the second row shows the results detected by Huang et al. (2008) and Jing and Shao (2012), respectively; and the third row shows the result of the proposed method. The duplicated region undergoes different transformations such as scaling and rotation. As the results indicate, the SIFT-based methods fail in these cases because no keypoints exist in the cloned patch, while the proposed approach can still locate the duplicated region accurately.
4. Conclusions

An effective forensic method is proposed to detect and localize duplicated regions which have undergone rotation and scaling, even after JPEG compression and Gaussian noise addition. To localize duplicated regions that the SIFT algorithm cannot detect, the ANMS algorithm is adopted to extract the keypoints. Moreover, by assigning each keypoint a primary orientation, a rotation-invariant DAISY descriptor can be generated for feature matching. Experimental results show that the method can reliably detect whether a region
has been duplicated under diverse types of transformation, such as rotation, scaling, JPEG compression, and Gaussian noise addition, including altered regions that the SIFT algorithm cannot detect.

References

Amerini, I., Ballan, L., Caldelli, R., Del Bimbo, A., & Serra, G. (2010). Geometric tampering estimation by means of a SIFT-based forensic analysis. In Proceedings of IEEE ICASSP, Dallas, TX.
Bay, H., Ess, A., Tuytelaars, T., & Gool, L. V. (2008). SURF: Speeded Up Robust Features. Computer Vision and Image Understanding, 110(3), 346-359.
Bayram, S., Sencar, H. T., & Memon, N. (2008). A survey of copy-move forgery detection techniques. In Proceedings of the IEEE western New York image processing workshop, Rochester, NY.
Bayram, S., Sencar, H. T., & Memon, N. (2009). An efficient and robust method for detecting copy-move forgery. In Proceedings of IEEE ICASSP, Washington, DC.
Bravo-Solorio, S., & Nandi, A. K. (2009). Passive method for detecting duplicated regions affected by reflection, rotation and scaling. In Proceedings of EUSIPCO, Glasgow, Scotland.
Brown, M., Szeliski, R., & Winder, S. (2005). Multi-image matching using multi-scale oriented patches. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 510-517).
Dybala, B., Jennings, B., & Letscher, D. (2007). Detecting filtered cloning in digital images. In Proceedings of the ACM workshop on multimedia and security (MM&Sec) (pp. 43-50).
Fridrich, J. (2002). Security of fragile authentication watermarks with localization. In Proceedings of SPIE security and watermarking of multimedia contents (Vol. 4675, pp. 691-700).
Fridrich, J., Soukal, D., & Lukas, J. (2003). Detection of copy-move forgery in digital images. In Proceedings of the digital forensic research workshop (pp. 55-61).
Guo, Y., Mu, Z.-C., Zeng, H., & Wang, K. (2010). Fast rotation-invariant DAISY descriptor for image keypoint matching. In Proceedings of the IEEE international symposium on multimedia (pp. 183-190).
Huang, H., Guo, W., & Zhang, Y. (2008). Detection of copy-move forgery in digital images using SIFT algorithm. In Proceedings of the IEEE Pacific-Asia workshop on computational intelligence and industrial application (pp. 272-276).
Jing, L., & Shao, C. (2012). Image copy-move forgery detecting based on local invariant feature. Journal of Multimedia, 7(1), 90-97.
Langille, A., & Gong, M. (2006). An efficient match-based duplication detection algorithm. In Proceedings of the Canadian conference on computer and robot vision (pp. 64-71).
Lin, H.-J., Wang, C.-W., & Kao, Y.-T. (2009). Fast copy-move forgery detection. WSEAS Transactions on Signal Processing, 5(5), 188-197.
Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91-110.
Luo, W., Huang, J., & Qiu, G. (2006). Robust detection of region-duplication forgery in digital images. In Proceedings of the IAPR international conference on pattern recognition (ICPR) (Vol. 4, pp. 746-749).
Mahdian, B., & Saic, S. (2007). Detection of copy-move forgery using a method based on blur moment invariants. Journal of Forensic Science, 180-189.
Mikolajczyk, K., & Schmid, C. (2005). A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10), 1615-1630.
Pan, X., & Lyu, S. (2010). Detecting image region duplication using SIFT features. In Proceedings of IEEE ICASSP, Dallas, TX.
Popescu, A. C., & Farid, H. (2004). Exposing digital forgeries by detecting duplicated image regions. Technical Report, Department of Computer Science, Dartmouth College.
Popescu, A. C., & Farid, H. (2005). Exposing digital forgeries by detecting traces of resampling. IEEE Transactions on Signal Processing, 53(2), 758-767.
Run, R.-S., Horng, S.-J., Lai, J.-L., Kao, T.-W., & Chen, R.-J. (2012). An improved SVD-based watermarking technique for copyright protection. Expert Systems with Applications, 39(1), 673-689.
Run, R.-S., Horng, S.-J., Lin, W.-H., Kao, T.-W., Fan, P., & Khan, M. K. (2011). An efficient wavelet-tree-based watermarking method. Expert Systems with Applications, 38(12), 14357-14366.
Ryu, S.-J., Lee, M.-J., & Lee, H.-K. (2010). Detection of copy-rotate-move forgery using Zernike moments. In Proceedings of the international workshop on information hiding, Calgary, Canada.
Schaefer, G., & Stich, M. (2003). UCID: An uncompressed colour image database. In Proceedings of SPIE storage and retrieval methods and applications for multimedia (Vol. 5307, pp. 472-480), San Jose, CA.
Shuai, X., Zhang, C., & Hao, P. (2008). Fingerprint indexing based on composite set of reduced SIFT features. In Proceedings of ICPR, Tampa, FL.
Su, H., Bouridane, A., & Gueham, M. (2007). Local image features for shoeprint image retrieval. In Proceedings of BMVC, Warwick, UK.
Tola, E., Lepetit, V., & Fua, P. (2010). DAISY: An efficient dense descriptor applied to wide-baseline stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(5), 815-830.
Yuan, H., & Zhang, X.-P. (2006). Multiscale fragile watermarking based on the Gaussian mixture model. IEEE Transactions on Image Processing, 15(10), 3189-3200.