Optik 127 (2016) 8306–8316
Original research article
Task driven saliency detection for image retargeting
Xi-xi Jia ∗, Xiang-chu Feng, Wei-wei Wang
School of Mathematics and Statistics, Xidian University, Xi'an 710126, China
Article history: Received 8 December 2015; Accepted 26 May 2016
Keywords: Sparse representation; Multi-Gaussian; Wavelet transform; Seam carving
Abstract
The importance map plays a crucial role in seam carving, a well-known image retargeting method. Usually a saliency map (with central bias or a single-Gaussian prior) is chosen as the importance map. However, we find that directly using a saliency map for seam carving can cause unsatisfactory visual effects such as trivial solutions or distortion. In this paper, we analyze the reasons and construct an improved importance map by fusing a multi-Gaussian saliency map (MG-SM) and a revised slant edge saliency map (SE-SM). Specifically, for the multi-Gaussian saliency map we assume that the prior saliency distribution of an image is multi-Gaussian, centered at several object centers, rather than a single Gaussian; super pixels and sparse representation are used to measure the saliency. For the revised slant edge saliency map, we use the wavelet transform to find the slant edges and design the map so that seams are carved uniformly across them. The method has been extensively tested and yields more satisfactory results, especially regarding slant edge distortion, than the compared methods.
© 2016 Elsevier GmbH. All rights reserved.
1. Introduction

Nowadays our daily life is teeming with visual display units (e.g. cell phones, TVs, tablets and computer monitors) of different display capabilities; we therefore need to share image content among them and adapt the image size to each display, a task known as image retargeting. With the development of computer vision, image retargeting has become increasingly important. Standard image scaling works well for proportional resizing, but it fails when an image is resized disproportionately: the salient parts become twisted or out of proportion. To address this problem, the image should be resized adaptively and optimally to fit the display, which is called content-aware image retargeting. In recent years a large number of adaptive methods have been proposed and have attracted wide interest. The simplest is cropping, which cuts away the borders of an image to change its size; however, this trivial solution has an inherent flaw, as it often discards too much content of interest, especially content near the borders. For more work on adaptive image retargeting refer to [1–10,15–19].

Among these methods, seam carving stands out for its clever design and good performance, as shown in Fig. 1. It dynamically carves seams of low importance, so that the image size changes while the visual impression is preserved. Seam carving, first proposed by Avidan and Shamir [2], is a prominent and effective retargeting method and has attracted wide interest; many follow-up works have been proposed [3,8–10,27,33]. Seam carving consists of three steps (Fig. 2(a)): first, define an energy function to measure the importance of pixels and generate the importance map; second, use dynamic programming on the importance map to find a seam and carve it; third, diffuse the importance values of the carved seam to its neighboring pixels and update the importance map before searching for the next seam.
∗ Corresponding author. Tel.: +86 15191433965. E-mail addresses: [email protected] (X.-x. Jia), [email protected] (X.-c. Feng), [email protected] (W.-w. Wang).
http://dx.doi.org/10.1016/j.ijleo.2016.05.113
Fig. 1. Illustration of seam carving.
Fig. 2. Flow chart of seam-carving.
Among these steps, the importance map plays a crucial role and directly affects the seam carving result. The original seam carving uses gradient information to evaluate importance. Gradient information describes the saliency of a pixel only at a local level and is sensitive to noise and cluttered backgrounds, which makes it flawed to some extent. Considering both local and non-local information, Goferman et al. [11] used a saliency map as the importance map for seam carving; this improves the results and indicates that a saliency map is a good surrogate for the importance map.

Compared with seam carving, saliency detection is an older and better developed topic; many works on saliency detection have been proposed in recent years, achieving good results in both quantitative and qualitative evaluations. Liu et al. [12] designed a sliding window to locate the salient object, Goferman et al. [11] measured saliency according to the distance between similar image patches to detect the region that represents the context, and Hou and Zhang [14] used the spectral residual for saliency detection. Another interesting work is [20], in which the authors take the image boundary as background to sparsely represent the whole image, and the sparse representation error is used as the saliency measure. More saliency detection methods can be found in [11–14,20,23,24,28–32].

Although the above methods work well for saliency detection, we find that using a well-designed saliency map as the importance map does not always work well for image retargeting, as shown in Fig. 3. A closer analysis reveals two main reasons. (1) Many saliency maps are built with a central bias or single-Gaussian prior, which means they pay more attention to the central part of an image. For example, Lu et al. [20] use the boundary parts as dictionary atoms to sparsely represent the whole image; the boundary parts are therefore considered less salient, which is not appropriate for seam carving (in this way seam carving degenerates into boundary cropping). (2) Saliency detection generally aims to detect the salient objects rather than to retarget the image; for example, it treats horizontal lines and slant lines equally, as in Fig. 3(a) and (b).
Fig. 3. Seam-carving results using saliency map in [11].
However, seam carving treats them differently: arbitrary carving does no harm to the horizontal line in Fig. 3(a), but it damages the slant line in Fig. 3(b). In fact, structures such as slant edges are significant for seam carving, e.g. the reticulation in Fig. 3(c).

The two problems mentioned above stem from the different aims of a saliency map for saliency detection and an importance map for retargeting. The former pursues accurate saliency detection, while the latter has its own implicit requirements for retargeting. Although there is no formal definition or measure of how well a retargeted image I* represents the original image I, Shamir and Sorkine [4] give three main objectives for retargeting:

1. The important content of I should be preserved in I*.
2. The important structure of I should be preserved in I*.
3. I* should be free of visual artifacts.

According to these objectives, we now give specific rules that seam carving imposes on the seams and on the importance map.

1. For the seams:
• The seams should not cross small salient objects.
• The seams should be distributed as uniformly as possible on slant edges and on large-scale salient objects.
2. For the importance map:
• The salient objects should be well detected.
• The slant edges should be indicated in the importance map, and the importance map should direct the seams across the slant edges uniformly.

We design an importance map that obeys these rules by fusing a multi-Gaussian saliency map (MG-SM) and a revised slant edge saliency map (SE-SM), as in step 1 of Fig. 2(b), and then use the proposed importance map for seam carving. In summary, the contributions of this paper are as follows:

1. The drawbacks of using a saliency map for seam carving are discussed, and the specific rules that seam carving imposes on the seams and the importance map are given.
2. A prior multi-Gaussian saliency distribution is proposed. Based on this prior, we use the sparse representation error to detect the salient objects and obtain the multi-Gaussian saliency map (MG-SM).
3. We find that distortions on slant edges are particularly unsatisfactory. Based on this fact, the revised slant edge saliency map (SE-SM) is designed to make the seams cross the slant edges uniformly.
4. We fuse MG-SM and SE-SM to generate the improved importance map (IIM) and use the IIM as the importance map for seam carving.

The paper is organized as follows. We present our task driven saliency detection and improved seam carving in Section 2. Experimental results are given in Section 3. Finally, we conclude and discuss in Section 4.

2. Exposition of our method

In this section, we describe our method in more detail. We construct two maps, MG-SM and SE-SM, and fuse them to generate the final improved importance map (Fig. 4).
Fig. 4. The main steps of our method.
Fig. 5. (a) Original image segmented into super pixels; (b) cluster centers; (c) the four clusters with the smallest spatial variance; the white points are their cluster centers, marked 1, 3, 2, 4, and we take these centers as object center points. The image shows that objects are concentrated while backgrounds are widespread.
2.1. Multi-Gaussian and sparse representation for MG-SM

The crux of sparse representation based saliency detection is the selection of dictionary atoms. In [20], Lu et al. suppose that the prior saliency distribution is a single Gaussian centered at the geometric center of the picture and then choose the boundary parts as dictionary atoms to sparsely represent the whole image. Used for retargeting, this turns seam carving into boundary cropping. This paper improves on [20] and assumes that the prior saliency distribution is multi-Gaussian, centered at the object centers. How can the object centers be found automatically and efficiently? The objects in an image have their own features (color, texture, contrast and so on) which distinguish them from the rest of the image. These features are scale-invariant and spatially concentrated [33]; accordingly, we assume that the center of an object is also nearly scale-invariant, i.e. the center of an object is almost fixed across scales. Note that only rough center locations are needed to generate our multi-Gaussian distribution, which keeps the computational complexity low.

2.1.1. Multi-scale center point detection

Given an image, we first generate super pixels using the simple linear iterative clustering (SLIC) algorithm [22] and then describe each super pixel by the mean color features and the mean coordinates of its pixels, f = [R, G, B, L, a, b, x, y]; thus the entire image can be represented as F = [f_1, f_2, . . ., f_N] ∈ R^{d×N}, where d is the feature dimension and N is the number of super pixels. We observe that salient objects are compact and concentrated, as in Fig. 5, where all four objects are compact.

(a) From compactness. To find the object center points, we first apply low-rank representation (LRR) for subspace clustering [25] to cluster the image into K clusters (commonly used clustering methods are also practicable, but in our experiments LRR is more stable than the others across different scales and initial super pixels), and then find the center point of each cluster by averaging the coordinates of its super pixels. According to the spatial distribution of each cluster we decide which clusters contain objects and choose the center points of these clusters as the object center points, as shown in Fig. 5. To describe the spatial distribution of a cluster, the simplest way is to compute its spatial variance. The horizontal and vertical spatial variances of each cluster are:
H_v = \frac{1}{m}\sum_{i=1}^{m}\left(x_i - \bar{x}\right)^2    (1)

V_v = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \bar{y}\right)^2    (2)

where m is the number of super pixels in a cluster, (\bar{x}, \bar{y}) are the coordinates of the center point of the cluster, and (x_i, y_i) (i = 1, 2, . . ., m) are the coordinates of the super pixels in the cluster (we use the mean coordinate of the pixels in a super pixel as the coordinate of that super pixel). The spatial variance is then

V = H_v + V_v    (3)
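As an illustration, the following is a minimal Python sketch (not the authors' code) of Eqs. (1)–(3): super-pixel centers are obtained with SLIC and the spatial variance of one cluster is computed from them. The function names, the number of super pixels and the SLIC compactness value are illustrative assumptions.

import numpy as np
from skimage.segmentation import slic

def superpixel_centers(image, n_segments=300):
    # Mean (x, y) coordinate of every SLIC super pixel.
    labels = slic(image, n_segments=n_segments, compactness=10)
    centers = []
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        centers.append((xs.mean(), ys.mean()))
    return np.asarray(centers), labels

def spatial_variance(coords):
    # coords: (m, 2) array of the super-pixel centers belonging to one cluster.
    x, y = coords[:, 0], coords[:, 1]
    h_v = np.mean((x - x.mean()) ** 2)   # Eq. (1)
    v_v = np.mean((y - y.mean()) ** 2)   # Eq. (2)
    return h_v + v_v                     # Eq. (3): small value -> compact (object) cluster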
Intuitively, a cluster with small spatial variance is an object cluster, as shown in Fig. 5.

(b) From scale invariance. Besides compactness, another feature of a salient object is its scale invariance, which distinguishes it from the background, as shown in Fig. 6. In Fig. 6 we decompose an image into three scales using a Gaussian image pyramid and resize the images at different scales to the same size. We find that the center points of the objects are almost fixed, whereas the center points of the background are unstable.
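A rough sketch of this scale-stability test, under our own interpretation: the image is clustered at each pyramid scale, the centers are mapped back to the original coordinates, and the drift of each full-scale center to its nearest counterpart at the coarser scales is measured. Here cluster_fn stands in for the LRR-based clustering used in the paper; the names and the nearest-neighbour matching are assumptions.

import numpy as np
from skimage.transform import resize

def center_drift(image, cluster_fn, scales=(1.0, 0.5, 0.25)):
    # cluster_fn(image) -> (K, 2) array of cluster center (x, y) coordinates.
    h, w = image.shape[:2]
    all_centers = []
    for s in scales:
        scaled = resize(image, (int(h * s), int(w * s)), anti_aliasing=True)
        all_centers.append(cluster_fn(scaled) / s)   # map centers back to original coordinates
    base = all_centers[0]
    drift = np.zeros(len(base))
    for centers in all_centers[1:]:
        d = np.linalg.norm(base[:, None, :] - centers[None, :, :], axis=2)
        drift += d.min(axis=1)   # distance to the nearest center at this scale
    return drift   # small drift -> nearly scale-invariant center (likely an object)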
Fig. 6. Cluster centers in different scales: (a) original scale; (b) 1/2 scale of the original image (here we resize the scaled image to the original size for comparison); (c) 1/4 scale of the original image; (d) the final salient points. It can be seen that the centers of the four salient objects are almost fixed, and the centers of other parts wobble.
Fig. 7. Multi-Gaussian saliency distribution map. (a) and (c) Original image; (b) and (d) corresponding saliency distribution map.
(c) Combination of compactness and scale invariance. For an image with a smooth and clean background, the background cluster centers are also nearly fixed, just like those of the foreground objects; thus scale invariance alone cannot distinguish background from objects. We therefore combine the two factors (scale invariance and compactness). Assume an image is divided into K clusters; we first select m clusters from the K clusters according to their spatial variance, and then select l clusters from these m clusters according to the variance of their center point coordinates across scales. The resulting l clusters are the ones that contain the objects. Their cluster centers are taken as the center points and used to generate the multi-Gaussian prior saliency distribution map:

G(x, y) = \sum_{i=1}^{l} g_i(x, y)    (4)

where g_i(x, y) = \exp\!\left(-\frac{(x - \bar{x}_i)^2}{2\sigma_x^2} - \frac{(y - \bar{y}_i)^2}{2\sigma_y^2}\right), (\bar{x}_i, \bar{y}_i) is the coordinate of the i-th salient point, and we set \sigma_x = \sigma_y = 0.20. The multi-Gaussian prior saliency distribution is shown in Fig. 7, where images (a) and (c) are the original images marked with object center points, and (b) and (d) are their multi-Gaussian saliency distribution maps. In (c), the center point of the football is not detected, due to the choice of the parameters K and l; thus the football's center is not a Gaussian center of the prior saliency distribution.
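A minimal sketch of Eq. (4), assuming the l detected center points are given in normalized image coordinates; the function name and grid convention are illustrative.

import numpy as np

def multi_gaussian_prior(centers, height, width, sigma=0.20):
    # Eq. (4): prior saliency map as a sum of Gaussians placed at the object centers.
    # Coordinates and sigma are in normalized [0, 1] units (sigma_x = sigma_y = 0.20).
    y, x = np.mgrid[0:height, 0:width]
    x = x / (width - 1.0)
    y = y / (height - 1.0)
    prior = np.zeros((height, width))
    for cx, cy in centers:   # normalized center (x_i, y_i)
        prior += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    return prior

# Example: two object centers in a 240 x 320 image
# prior = multi_gaussian_prior([(0.3, 0.5), (0.7, 0.4)], 240, 320)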
2.1.2. Sparse representation for saliency detection

Based on the prior saliency distribution map G(x, y), we choose the k super pixels with the smallest prior saliency value (the sum of the saliency values of all pixels in a super pixel) as dictionary atoms D = [d_1, d_2, . . ., d_k] ⊂ F, and code image segment (super pixel) i as:

\alpha_i^* = \arg\min_{\alpha_i} \|f_i - D\alpha_i\|_2^2 + \lambda\|\alpha_i\|_1    (5)
The sparse representation error is used to evaluate the saliency:
\varepsilon_i^s = \|f_i - D\alpha_i^*\|_2^2    (6)
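To make Eqs. (5)–(6) concrete, here is a small Python sketch using an off-the-shelf l1 solver on the super-pixel feature matrix F from Section 2.1.1; the regularization weight, the solver choice and the function names are our assumptions, not values from the paper.

import numpy as np
from sklearn.linear_model import Lasso

def sparse_saliency(features, prior_per_superpixel, dict_fraction=0.25, lam=0.01):
    # features: (d, N) super-pixel descriptors; prior_per_superpixel: (N,) summed
    # prior saliency per super pixel. The lowest-prior super pixels form the
    # dictionary D; the reconstruction error (Eqs. 5-6) is returned as saliency.
    d, n = features.shape
    k = max(1, int(dict_fraction * n))
    bg_idx = np.argsort(prior_per_superpixel)[:k]   # least salient -> dictionary atoms
    D = features[:, bg_idx]                         # (d, k)
    saliency = np.zeros(n)
    solver = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    for i in range(n):
        solver.fit(D, features[:, i])               # Eq. (5)
        recon = D @ solver.coef_
        saliency[i] = np.sum((features[:, i] - recon) ** 2)   # Eq. (6)
    return saliency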
Regions with small saliency values in G(x, y) are regarded as background, and the sparse reconstruction error is used as the saliency measure to highlight the salient parts, as shown in the second row of Fig. 8. The second row of Fig. 8 shows that our method detects small salient objects well; salient objects appear white and backgrounds black. Large objects are also handled: in the first image of Fig. 8 the parrot is highlighted, and in the last image the red leaf is detected.

2.2. Slant edge saliency map (SE-SM)

The saliency detection method above detects salient objects well, but for seam carving it pays too little attention to slant edges. We find that when the carved seams are too crowded on an image, distortions occur, especially along slant edges.
Fig. 8. Saliency detection results by our method. First row is the original image, second row is saliency map by our method, and it shows that our method clearly separates the objects and background. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 9. Comparison of two carving methods on a slant edge, intensive carving versus uniform carving: (a) original image; (b) the seams of the intensive carving; (c) the result of the intensive carving; (d) the seams of the uniform carving; (e) the result of the uniform carving.
We show an example in Fig. 9 to illustrate this kind of distortion and the way to avoid it. In Fig. 9 we draw a slant edge, shown at the pixel level in Fig. 9(a) (the other parts of the image are left out). We compare two ways of carving seams on the slant edge: intensive carving (Fig. 9(b)), which is common in traditional seam carving, and uniform carving (Fig. 9(d)). As shown in Fig. 9, intensive carving causes distortion and malposition (Fig. 9(c)), whereas uniform carving avoids these artifacts and looks more natural (Fig. 9(e)).

Based on this observation and our seam carving rules, we propose the following method to find the slant edges and construct a revised slant edge saliency map (SE-SM) that makes the seams cross the slant edges uniformly. Firstly, wavelet decomposition is used to find the slant edges: the image is decomposed into sub-bands, and only the diagonal detail part is kept (the other three parts are set to zero) to reconstruct the slant edges, as shown in Figs. 10(b) and 12(b). Secondly, we use a higher order statistic (HOS) to build a local saliency map of the image:

HOS(x, y) = \frac{1}{N}\sum_{(i,j)\in \Omega(x,y)} \left(I(i, j) - \mu(x, y)\right)^2    (7)

where \Omega(x, y) is the set of neighboring pixels centered at (x, y), \mu(x, y) is the mean value over this set, and I(i, j) is the pixel value at position (i, j). We call it a local saliency map because the saliency value is computed only on the slant edges. Finally, we set every other column on the slant edges to zero in the map, generating the SE-SM as in Fig. 11(c) and the third column of Fig. 12; this scatters the seams across the slant edges.
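A hedged sketch of the SE-SM construction: a single-level wavelet decomposition keeping only the diagonal detail, a local second-order statistic as in Eq. (7), and zeroing of every other column on the detected slant edges. The wavelet choice, window size and edge threshold are our assumptions, not values from the paper.

import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def slant_edge_saliency(gray, wavelet='haar', win=5, edge_thresh=0.05):
    # gray: 2-D float image in [0, 1]. Returns the revised slant edge saliency map.
    # 1) Keep only the diagonal detail sub-band and reconstruct the slant edges.
    cA, (cH, cV, cD) = pywt.dwt2(gray, wavelet)
    zeros = np.zeros_like(cA)
    slant = pywt.idwt2((zeros, (np.zeros_like(cH), np.zeros_like(cV), cD)), wavelet)
    slant = np.abs(slant[:gray.shape[0], :gray.shape[1]])
    edge_mask = slant > edge_thresh

    # 2) Local higher-order statistic (Eq. 7): windowed variance around (x, y).
    mean = uniform_filter(gray, size=win)
    hos = np.clip(uniform_filter(gray ** 2, size=win) - mean ** 2, 0.0, None)

    # 3) Restrict HOS to the slant edges and zero every other column there,
    #    which spreads the seams uniformly across the slant structures.
    se_sm = np.where(edge_mask, hos, 0.0)
    se_sm[:, ::2] = np.where(edge_mask[:, ::2], 0.0, se_sm[:, ::2])
    return se_sm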
2.3. Improved saliency detection (fusion of MG-SM and SE-SM)

For a given image, we now have two guidance maps. We fuse them to generate an improved importance map tailored for seam carving, denoted IIM:

IIM = MG-SM + SE-SM    (8)
The fusion of the two maps is shown in Fig. 12. MG-SM protects the objects from being carved; if it is unavoidable that seams cross the slant edges, SE-SM compels the seams to be carved uniformly, which avoids distortion and damaging cuts, as shown in Fig. 13. In Fig. 13 the parrot, rather than the reticulation, is the salient object, so MG-SM protects the parrot from being carved and SE-SM forces the seams on the slant edges to be carved uniformly.
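The fusion itself is just Eq. (8); a small sketch follows, where normalizing each map to [0, 1] before summation is our assumption for comparability and is not stated in the paper.

import numpy as np

def improved_importance_map(mg_sm, se_sm):
    # Eq. (8): fuse MG-SM and SE-SM into the improved importance map (IIM).
    def norm(m):
        m = m - m.min()
        return m / (m.max() + 1e-12)
    return norm(mg_sm) + norm(se_sm)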
Fig. 10. Illustration of slant edge detection by wavelet decomposition (a) and diagonal direction reconstruction (b).
Fig. 11. Illustration of the SE-SM: (a) original image; (b) energy map; (c) SE-SM; (d) carved result.
Fig. 12. The fusion of the saliency map with the slant edge map. From left to right: original image; slant edges obtained by wavelet reconstruction; revised slant edge saliency map (SE-SM); the fusion of MG-SM and SE-SM.
Fig. 13. Illustration of our method on an image with abundant slant edges or reticulation. From left to right: original image; result by uniform scaling; result using gradient information as the importance map; our result. For images with abundant slant edges or reticulation, our method protects the target object well and introduces no distortion.
Fig. 14. Illustration of our method on an image with salient and significant slant edges. From left to right: original image; result by uniform scaling; result using gradient information as the importance map; our result. Our method protects the target object well and introduces no distortion.
3. Experimental results

Three kinds of images are specifically tested in this paper: (1) images with abundant texture or reticulations; (2) images with large structures or organized edges; (3) images with cluttered backgrounds. The test images are randomly chosen from the MSRA data set [12]. We compare our task driven saliency detection for image retargeting with several other methods, including the traditional scaling method, which resizes an image by downsampling; the original seam carving method, which uses gradient information as the importance map to guide the seams [2]; warping based image resizing [17]; and methods that directly use existing saliency detection for seam carving [11,24]. Results are shown in Figs. 13–18; distortions are marked with red ellipses in Fig. 17.

3.1. Experiment details

In our experiments, low-rank representation for subspace clustering (LRR) [25] is used to find the object center points. Color information is used as the feature, and an image is divided into 8 clusters. To measure scale invariance, we build a Gaussian image pyramid with scales 1, 1/2 and 1/4; three scales are enough to reveal the behavior. We locate the cluster center points at the different scales: first we find the clusters with small spatial variance, then discard the clusters whose center points are not stable across scales, and finally obtain the final center points, as in Fig. 6. In our experiments we set the number of object centers to three and generate the multi-Gaussian saliency distribution. To construct the dictionary, the 1/4 of the super pixels with the lowest prior saliency are chosen as dictionary atoms.
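For convenience, the parameter values reported above (and in Section 2.1.1) are gathered in the illustrative configuration below; the variable names are ours, not the authors'.

# Parameter values reported in Sections 2.1.1 and 3.1 (names are illustrative).
SETTINGS = {
    "n_clusters": 8,                  # K clusters from LRR subspace clustering
    "pyramid_scales": (1.0, 0.5, 0.25),
    "n_object_centers": 3,            # l Gaussian centers in the prior
    "dict_fraction": 0.25,            # lowest-prior super pixels used as dictionary atoms
    "sigma": 0.20,                    # sigma_x = sigma_y of each Gaussian
}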
Fig. 15. Illustration of our method on an image with a cluttered background. From left to right: original image; result by uniform scaling; result using gradient information as the importance map (original seam carving); our result. Our method protects the target object from being carved and achieves a comfortable visual effect.
Fig. 16. Examples of image enlarging. From left to right: original image; enlarged result using [2]; result of our method.
Fig. 17. Seam carving results. From left to right: original image; result using context aware saliency detection [11]; result by Wang et al. [17]; result by Achanta et al. [24]; result using gradient information as the importance map [2]; our method without the slant edge map; our method. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 18. Seam carving results. From left to right: original image; result using context aware saliency detection [11]; result by Wang et al. [17]; result by Achanta et al. [24]; result using gradient information as the importance map [2]; our method without the slant edge map; our method.
For comparison, the traditional energy function is

e(I(m, n)) = \left|\frac{\partial}{\partial x} I(m, n)\right| + \left|\frac{\partial}{\partial y} I(m, n)\right|    (9)

where I(m, n) is the pixel value at (m, n) and e(I(m, n)) is the energy value at pixel (m, n).
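For reference, here is a short sketch (not the authors' code) of the baseline gradient energy in Eq. (9), together with the standard dynamic programming search for one vertical seam that any of the compared importance maps can drive.

import numpy as np

def gradient_energy(gray):
    # Eq. (9): e(I(m, n)) = |dI/dx| + |dI/dy| on a 2-D grayscale image.
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.abs(gx) + np.abs(gy)

def find_vertical_seam(importance):
    # Standard dynamic programming seam search over an importance map.
    h, w = importance.shape
    cost = importance.astype(np.float64).copy()
    for i in range(1, h):
        left  = np.r_[np.inf, cost[i - 1, :-1]]    # upper-left neighbor
        up    = cost[i - 1, :]                     # upper neighbor
        right = np.r_[cost[i - 1, 1:], np.inf]     # upper-right neighbor
        cost[i] += np.minimum(np.minimum(left, up), right)
    seam = np.empty(h, dtype=np.int64)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):                 # backtrack from the bottom row
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam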
Compared with this energy, our task driven saliency detection method protects the salient targets well; meanwhile, thanks to the proposed SE-SM, the retargeted images show fewer distortions. The experimental results show a clear difference in favor of our method.

3.2. Experimental results analysis

In Fig. 13, for an image with abundant texture or reticulations, traditional scaling puts the objects out of proportion (second column), while ordinary seam carving either carves the body of the bird or distorts the reticulations; the retargeted result in the third column has obvious artifacts (more comparisons are shown in Fig. 17). Our method both protects the bird from being carved and avoids distortions. In Fig. 14, for an image with a large structure and organized edges, the proposed method (last column) causes no distortion on the wall and the woman is completely preserved; the traditional scaling method (second column) puts the woman out of proportion, and the method in the third column distorts the slant edge of the wall. In Fig. 15, for an image with a cluttered background, the proposed method keeps the door untouched and carves the cluttered wall; compared with the other two methods in Fig. 15, the advantage is obvious. Fig. 16 shows results for enlarging; as with carving, enlarging with our method causes no distortion. More experimental results are shown in Figs. 17 and 18: the compared methods either cause distortion or damage the content of the image and compare unfavorably with ours. The results in Fig. 18 show that our method obtains visually acceptable retargeting results on cluttered backgrounds, such as the window in the cluttered wall and the door in the wall covered by leaves, whereas the gradient based method and the context aware saliency based method are sensitive to noise and cluttered backgrounds; compared with these methods, our method gives more satisfactory results.

4. Conclusion

In this paper we propose a simple and effective method for seam carving, which improves the general importance map and achieves satisfactory retargeting results. We also give specific rules that seam carving imposes on the seams and the importance map. Our importance map fuses the MG-SM and the SE-SM and thus resolves the problems that a general saliency map causes when used for seam carving. For saliency detection, a center point detection method is designed; based on the object center points we introduce a multi-Gaussian prior saliency distribution and extend the work of [20] to better detect the salient objects and the background. To avoid distortion on slant edges, we design a down-sampling scheme (setting every other column on the slant edges to zero) that carves seams on the slant edges uniformly. Experimental results show that our method gives considerable gains over the compared seam carving methods.

Some details deserve further explanation: (1) even if the saliency detection method is not based on a single Gaussian, i.e. is not centrally biased, the SE-SM is still needed to obtain the preferred importance map; (2) the multi-Gaussian prior requires only rough object centers, which is not a hard task; (3) we assume that all Gaussians are of the same type, and we have verified that assigning different Gaussians according to object size does not change the results much.
(4) We carve seams uniformly on the slant edges for two reasons: firstly, the seams on the slant edges should not affect the trend of the seams off the slant edges; secondly, the seams should not cause visual artifacts. Uniform carving on slant edges is therefore a preferred choice, although other carving schemes also work.

Acknowledgments

The authors would like to thank the anonymous reviewers for their consideration and suggestions. We also thank the National Natural Science Foundation of China (Grants 61472303, 61271294, 61271452, 61379030) and the Fundamental Research Funds for the Central Universities (Grant NSIY21) for supporting this work.

References
[1] R. Gal, O. Sorkine, D. Cohen-Or, Feature-aware texturing, in: Proc. Eurographics Conf. Rendering Techniques, 2006, pp. 297–303.
[2] S. Avidan, A. Shamir, Seam carving for content-aware image resizing, ACM Trans. Graph. 26 (3) (2007) 267–276.
[3] M. Rubinstein, A. Shamir, S. Avidan, Improved seam carving for video retargeting, ACM Trans. Graph. 27 (3) (2008) 1–9.
[4] A. Shamir, O. Sorkine, Visual media retargeting, in: SIGGRAPH ASIA Courses, ACM, New York, NY, USA, 2009, pp. 1–13.
[5] S. Cho, H. Choi, Y. Matsushita, S. Lee, Image retargeting using importance diffusion, in: Proc. IEEE Int. Conf. Image Processing, 2009, pp. 977–980.
[6] M. Grundmann, V. Kwatra, M. Han, I. Essa, Discontinuous seam carving for video retargeting, in: Proc. IEEE CVPR, 2010, pp. 569–576.
[7] K. Utsugi, T. Shibahara, T. Koike, K. Takahashi, T. Naemura, Seam carving for stereo images, in: Proc. 3DTV-Conf., 2010, pp. 1–4.
[8] K. Mishiba, M. Ikehara, Block-based seam carving, in: 2011 1st Int. Symp. on Access Spaces (ISAS), IEEE, 2011, pp. 111–115.
[9] H. Wang, D.S. Chien, S.Y., Content-aware image resizing using perceptual seam carving with human attention model, in: IEEE Int. Conf. Multimedia and Expo, 2008, pp. 1029–1032.
[10] W. Dong, N. Zhou, J.C. Paul, X. Zhang, Optimized image resizing using seam carving and scaling, ACM Trans. Graph. 28 (2009) 1–10.
[11] S. Goferman, L. Zelnik-Manor, A. Tal, Context-aware saliency detection, in: IEEE Conf. Computer Vision and Pattern Recognition, 2010.
[12] T. Liu, J. Sun, N. Zheng, X. Tang, H. Shum, Learning to detect a salient object, in: CVPR, 2007.
[13] J. Harel, C. Koch, P. Perona, Graph-based visual saliency, Adv. Neural Inform. Process. Syst. 19 (2007) 545.
[14] X. Hou, L. Zhang, Saliency detection: a spectral residual approach, in: CVPR, 2007, pp. 1–8.
[15] G. Ciocca, C. Cusano, F. Gasparini, R. Schettini, Self-adaptive image cropping for small displays, IEEE Trans. Consumer Electron. 53 (2007) 1622–1627.
[16] S.F. Wang, S.H. Lai, Fast structure-preserving image retargeting, in: Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, 2009, pp. 1049–1052.
[17] Y.S. Wang, C.L. Tai, O. Sorkine, T.Y. Lee, Optimized scale-and-stretch for image resizing, ACM Trans. Graph. 27 (2008).
[18] C. Barnes, E. Shechtman, A. Finkelstein, D. Goldman, PatchMatch: a randomized correspondence algorithm for structural image editing, ACM Trans. Graph. 28 (2009).
[19] D. Vaquero, M. Turk, K. Pulli, M. Tico, N. Gelfand, A survey of image retargeting techniques, in: Proc. SPIE, vol. 7798, 2010, pp. 779–814.
[20] X. Li, H. Lu, L. Zhang, et al., Saliency detection via dense and sparse reconstruction, in: ICCV, 2013.
[22] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, S. Susstrunk, SLIC Superpixels, Technical Report 149300, EPFL, 2010.
[23] J. van de Weijer, T. Gevers, A.D. Bagdanov, Boosting color saliency in image feature detection, IEEE Trans. Pattern Anal. Mach. Intell. 28 (1) (2006) 150–156.
[24] R. Achanta, F. Estrada, P. Wils, et al., Salient region detection and segmentation, in: Computer Vision Systems, Springer, Berlin, Heidelberg, 2008, pp. 66–75.
[25] G. Liu, Z. Lin, Y. Yu, Robust subspace segmentation by low-rank representation, in: Proc. 27th Int. Conf. Machine Learning (ICML-10), 2010, pp. 663–670.
[27] M. Rubinstein, A. Shamir, S. Avidan, Multi-operator media retargeting, ACM Trans. Graph. 28 (3) (2009) 23.
[28] L. Itti, C. Koch, E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. Pattern Anal. Mach. Intell. 20 (1998) 1254–1259.
[29] X. Shen, Y. Wu, A unified approach to salient object detection via low rank matrix recovery, in: CVPR, 2012, pp. 853–860.
[30] A. Borji, L. Itti, Exploiting local and global patch rarities for saliency detection, in: CVPR, 2012, pp. 478–485.
[31] Y. Wei, F. Wen, W. Zhu, J. Sun, Geodesic saliency using background priors, in: ECCV, 2012, pp. 29–42.
[32] Y. Hu, D. Rajan, L.-T. Chia, Detection of visual attention regions in image using robust subspace analysis, J. Vis. Commun. Image Represent. 19 (2008) 199–216.
[33] W. Wang, D. Zhai, T. Li, X. Feng, Salient edge and region aware image retargeting, Signal Process.: Image Commun. (2014).