Technical Section

Star-effect simulation for photography

Dongwei Liu a,b, Haokun Geng b, Tieying Liu c, Reinhard Klette d

a School of Information, Zhejiang University of Finance & Economics, China
b Department of Computer Science, The University of Auckland, New Zealand
c College of Computer Science, Inner Mongolia University, China
d School of Engineering, Auckland University of Technology, New Zealand

Article history: Received 14 June 2016; received in revised form 24 August 2016; accepted 30 August 2016.

Abstract

This paper discusses the creation of star effects in captured night-time photos based on depth information made available by stereo vision. Star effects are an important design factor for night-time photos. Modern imaging technologies allow night-time photos to be taken free-hand, but star effects are not achievable with such camera settings. Self-calibration is an important feature of the presented star-effect simulation method. A photographer is assumed to provide just an uncalibrated stereo image pair (i.e. a base and a match image), for example two photos taken one after the other (e.g. with a mobile phone at about the same pose). For self-calibration we apply the following routine: extract a family of pairs of feature points, calibrate the stereo image pair by using those feature points, and calculate depth data by stereo matching. For creating the star effects, we first detect highlight regions in the base image. Second, we estimate their luminance according to the available depth information. Third, we render star patterns with a chosen input texture. Altogether we provide a complete tool which is easy to apply for generating star effects from a user-selected star texture. Minor variations can be introduced during star-pattern rendering in order to achieve more natural and vivid-looking star effects. Extensive experiments verified that our rendering results are similar to real-world star-effect photos. We demonstrate some of our results, also to illustrate that they appear more natural than results achieved by existing commercial applications. We also illustrate that our method can render more artistic star patterns that are not available in recorded photographs. In brief, this paper reports research on automatically simulating both photorealistic and non-photorealistic star effects.

© 2016 Elsevier Ltd. All rights reserved.

Keywords: Star effect; Stereo vision; Computational photography

1. Introduction

Star patterns around highlights, known as the star effect in photography (see Fig. 1), are often essential for defining the esthetic meaning of night-time photographs. The appearance of star patterns in night-time photographs depends on scene characteristics (e.g. the distribution of lighting) and camera recording (e.g. the chosen camera setting). Altogether, the creation of particular star patterns defines a challenge for any photographer.

Photos are increasingly taken with compact cameras or mobile phones. Technological progress (e.g. large-aperture lenses or high-sensitivity image sensors) also makes it possible to shoot night-time photos free-hand. To obtain a star effect, a photo normally has to be shot with a small aperture (e.g. f/22). Such an aperture setting is unfit for free-hand shooting because it requires several seconds of exposure time; the photo would be largely blurred by hand shake.

This article was recommended for publication by J. Kopf.

Star effects can be achieved with a star filter in front of the lens, but this is not really appropriate for casual shooting with compact cameras or mobile phones because special equipment is needed. In this paper we provide an answer to the question: how to obtain a star effect in the case of casual hand-held photography?

Generating star effects by post-processing is a possible way, for example by adding star effects manually using available photo-editing applications. This can be a time-consuming process for a complex scene because the size and color of each star has to be set carefully, one by one; professional skills are also needed. Dedicated applications such as the Topaz Star Effects plugins, or instructions given in some online tutorial videos [8,19], add uniform-sized star patterns to all overexposed regions of a photo. Though the result may also look beautiful, such effects typically look unnatural and different from the star effect generated by a small aperture. Our contribution aims at an automatic star-effect generator which corresponds to the actual content of a photo.

Computational photography [18] has become a very active field of research and applications in recent times. For example, researchers have explored


the simulation of various photo effects such as fog [13], bokeh [14], glare [21], or high dynamic range photos [26]. This paper presents, for the first time¹, a way of automatically simulating photorealistic or non-photorealistic star effects.

Self-calibrated stereo vision is at the core of our automatic method for adding star effects to a photo. Consider a given uncalibrated stereo pair, for example two pictures of the same scene taken one after the other with a mobile phone at approximately "near-parallel" poses. First we detect a family of feature point-pairs and perform a self-calibration. By stereo matching we estimate depth information of the scene. Next, "star-capable" highlights are detected, and luminance and color information (clipped by overexposure) is recovered according to the available depth information. Finally, we render star patterns considering the luminance and color of the highlights, using an input star model. See Fig. 2 for a brief workflow of the proposed method.

Compared to the conference version [15], we introduce in this paper an interface for generating star models and a mechanism that offers variations of stars during rendering. In this altogether more detailed paper we also add several new experimental results illustrating the flexibility in achieving star effects.

The rest of the paper is structured as follows. Section 2 briefly recalls theories and techniques related to our work. Section 3 provides details of our self-calibrated depth estimation method. Section 4 discusses our method for the detection of highlights and the estimation of their luminance and color. Section 5 then discusses our star-effect rendering method. Experimental results are shown in Section 6. Section 7 concludes.

¹ Not counting our brief report about photorealistic star effects presented at the Pacific-Rim Symposium on Image and Video Technology, November 2015, at Auckland [15].

Fig. 1. A night-time photo with star effect.

2. Basics and notation

This section lists notation and techniques used. RGB color images I are defined on a rectangular set Ω of pixel locations, with I(p) = (R(p), G(p), B(p)) for p ∈ Ω and

$0 \le R(p), G(p), B(p) \le G_{\max}$    (1)

Let N_cols and N_rows be the width and height of Ω, respectively, and let O be the center of Ω (as an approximation of the principal point). We suppose that lens distortion has been corrected in the camera; otherwise we correct it by using the lens profile provided by the manufacturer. Altogether, we consider the input image I as being generated by undistorted central projection. In the following, a star-effect photo refers to a photo taken of a night scene with a very small aperture, in which star patterns appear around highlights.

2.1. Star effects in photography

A star effect is normally caused by Fraunhofer diffraction; see [2]. Such a phenomenon is most visible when bright light from a "nearly infinite" distance passes through a narrow slit, causing the light to spread perpendicular to the slit. This spreads a point-like beam of light into a pair of arms (i.e. streaks).

Suppose a rectangular aperture A with width w₁ and height w₂, located in an x₁x₂ plane having its origin at the centroid of A; the axes x₁ and x₂ are parallel to the edges of width w₁ and w₂, respectively. Suppose an axis y perpendicular to the x₁x₂ plane. When A is illuminated by a monochromatic plane wave of wavelength λ, the intensity I(θ₁, θ₂) of the light passing through A forms a pattern described by

$I(\theta_1, \theta_2) \propto \mathrm{sinc}^2\left(\frac{\pi w_1 \sin\theta_1}{\lambda}\right) \cdot \mathrm{sinc}^2\left(\frac{\pi w_2 \sin\theta_2}{\lambda}\right)$    (2)

where sinc(x) = sin(x)/x, tan θ₁ = x₁/y, and tan θ₂ = x₂/y. Similarly, for a parallelepiped aperture with edge lengths w₁, w₂, and w₃, we have a pattern defined by

$I(\theta_1, \theta_2, \theta_3) \propto \mathrm{sinc}^2\left(\frac{\pi w_1 \sin\theta_1}{\lambda}\right) \cdot \mathrm{sinc}^2\left(\frac{\pi w_2 \sin\theta_2}{\lambda}\right) \cdot \mathrm{sinc}^2\left(\frac{\pi w_3 \sin\theta_3}{\lambda}\right)$    (3)
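For illustration only, the following minimal NumPy sketch evaluates the rectangular-aperture pattern of Eq. (2) on a grid of diffraction angles. The function names, the chosen aperture size, and the wavelength are our own assumptions and not part of the method itself.

```python
import numpy as np

def sinc(x):
    """Unnormalized sinc as used in Eq. (2): sin(x)/x, with sinc(0) = 1."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-12
    out[nz] = np.sin(x[nz]) / x[nz]
    return out

def rect_aperture_pattern(w1, w2, wavelength, theta1, theta2):
    """Relative Fraunhofer intensity (Eq. (2)) of a w1 x w2 rectangular
    aperture, evaluated on a grid of diffraction angles (in radians)."""
    t1, t2 = np.meshgrid(theta1, theta2, indexing="ij")
    intensity = (sinc(np.pi * w1 * np.sin(t1) / wavelength) ** 2 *
                 sinc(np.pi * w2 * np.sin(t2) / wavelength) ** 2)
    return intensity / intensity.max()  # Eq. (2) fixes the pattern only up to a factor

# Example: a 0.1 mm square aperture, 550 nm light, angles of +/- 1 degree.
angles = np.deg2rad(np.linspace(-1.0, 1.0, 501))
pattern = rect_aperture_pattern(1e-4, 1e-4, 550e-9, angles, angles)
```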

Fig. 3 shows some real-world star patterns shot with different aperture shapes. It can be seen that each blade of the aperture contributes a pair of arms to the star pattern. Due to overlapping, in the case of a lens with an even number a of blades, the generated star patterns have the same number a of arms.

In fact, every beam of light that reaches the image sensor through a small aperture spreads a star pattern into its neighboring region due to diffraction. In normal-contrast regions of an image, the star pattern is very faint and not visible (it only causes a reduction of local sharpness). Only in very high-contrast cases, such as highlights in a night-time photo, is the star pattern noticeable.

As described above, the shape of the star pattern depends on the shape of the aperture, which is not a user-controllable parameter. A modern lens normally uses a near-circular aperture (in order to obtain good bokeh quality [14]), which generates only weak and scattered star arms. Our method provides controllable star-pattern styles, which is convenient for photo-art creation.

A star filter can be used to simulate the star effect introduced by a small aperture (see Fig. 3, right). Such a filter embeds a very fine diffraction grating or some prisms. A star effect can be obtained even at a large aperture when using such a filter in front of the lens. The star pattern generated by a star filter is visually a little different from the one generated by a small aperture. Our method can simulate both styles by using different templates.

2.2. Stereo vision

For stereo vision, we take a stereo pair (i.e. a base image Ib and a match image Im) as input, and estimate depth information for a scene by searching for corresponding pixels in the stereo pair [10]. For reducing the complexity, the given stereo pair is normally geometrically rectified into canonical stereo geometry [10], in which Ib and Im appear as taken by identical cameras that only differ by a translation of distance b along the baseline, the X-axis of the camera coordinate system of Ib. Rectification transform matrices Rb and Rm can be calculated in a stereo calibration process. Our proposed method is not very sensitive to the accuracy of the available depth values; for convenience, we use a self-calibration method with only a few assumed camera parameters.


Fig. 2. A brief workflow of our method. Given an uncalibrated stereo pair (left), we do a self-calibration and estimate depth information of the scene by stereo matching (middle-left). Then, “star-capable” highlights are detected (middle-right). Finally, we render star patterns considering the luminance and color of the highlights using an input template (right).

Fig. 3. Relationship between aperture shape (bottom row) and star effect (top row). From left to right: an aperture with five straight blades, eight straight blades, eight curved blades, nine curved blades, and a circular aperture with a star filter.

After rectification, a stereo matching method such as belief propagation [4] or semi-global matching (SGM) [7,6] can be used for calculating a disparity map. Depth information is then obtained by standard triangulation. Self-calibrated stereo vision is easy to achieve and a convenient way of obtaining depth information for a scene (i.e. an image), compared to other depth sensors such as structured lighting as used in the Kinect [9], depth from defocus [23], a light-field camera [20], or 3D laser imaging systems [1].
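As a reminder of the triangulation step, here is a minimal sketch. It assumes a known focal length (in pixels) and a known baseline, which the self-calibrated setting of this paper does not actually provide; it is shown only to make the disparity-to-depth relation concrete.

```python
import numpy as np

def disparity_to_depth(disparity, f_px, baseline_m, eps=1e-6):
    """Standard triangulation for canonical stereo geometry: Z = f * b / d,
    where f is the focal length in pixels, b the baseline in metres, and d
    the disparity in pixels. Non-positive disparities (no match) map to 0."""
    depth = np.zeros_like(disparity, dtype=np.float64)
    valid = disparity > eps
    depth[valid] = f_px * baseline_m / disparity[valid]
    return depth
```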

3. Depth estimation

Let Ib and Im be an uncalibrated stereo pair. For performing self-calibration, we detect a family of feature pairs, approximate the fundamental matrix defining the projective relation between both images, and then rectify the given stereo pair into canonical stereo geometry. We then apply stereo matching and obtain a disparity map. The disparity map is warped back into the coordinates of the unrectified image Ib to avoid any damage to the composition.

Feature detection and matching is the key step of the self-calibration method. A number of feature detectors are evaluated in [24], which suggests that the oriented BRIEF (ORB) detector (see [22] for an implementation) is reasonably rotation invariant and noise resistant while also being time efficient. First of all, two families of features Fb and Fm are collected from Ib and Im using ORB. Ideally, the features are uniformly distributed over all the interesting regions of the input images.

The k-nearest neighbors (kNN) method is then used to detect best matches F_pairs between the two families of features (see Fig. 4, top row). The feature pairs are filtered by a distance-ratio test [17] and a cross-symmetry test:

$F_{\mathrm{filtered}} = \{\, [p_b^{(i)}, p_m^{(i)}] \in F_{\mathrm{pairs}} \;|\; \delta_1(p_b^{(i)}) < T_{\mathrm{distance}} \cdot \delta_2(p_b^{(i)}) \;\wedge\; \| x_b^{(i)} - x_m^{(i)} \|_2 < T_{\mathrm{cross},x} \;\wedge\; \| y_b^{(i)} - y_m^{(i)} \|_2 < T_{\mathrm{cross},y} \,\}$    (4)

where δ₁(p_b^{(i)}) is the similarity distance to the closest neighbor of p_b^{(i)} in kNN matching, and δ₂(p_b^{(i)}) is the similarity distance to the second-closest neighbor. We use a ratio T_distance = 0.8. According to the statistics in [17], this threshold eliminates 90% of the false matches while discarding less than 5% of the correct matches. T_cross,x and T_cross,y specify the tolerance thresholds; we use T_cross,x = 0.15 · N_cols and T_cross,y = 0.01 · N_rows. Fig. 4, middle row, shows the filtered feature pairs.

We use a random sample consensus (RANSAC) algorithm [3] to calculate a fundamental matrix F from F_filtered. Both images Ib and Im have been taken with the same camera; thus, we do not need to model two distinct cameras (as in the general stereo imaging case). Homography matrices Hb and Hm can be computed by using F_filtered and F [5]; those matrices transform Ib and Im into planar perspective views Îb and Îm (see Fig. 4, bottom row). We run an SGM stereo matcher [7] on Îb and Îm, and obtain a disparity map d̂ defined on the domain Ω̂ of Îb (see Fig. 5, left). Note that without knowing the camera matrix and the relative position of the cameras, the transforms Hb and Hm obtained from self-calibration are not standard rectification transforms.


Fig. 4. Stereo self-calibration. Top: feature pairs generated by ORB and kNN. Middle: filtered feature pairs. Bottom: a rectified image pair I^ b and I^ m .

The rectified image Îb might be "largely warped", as shown in Fig. 4, bottom row, and thus distorts the attempted composition. We transform d̂ into d (see Fig. 5, middle) according to the inverse transform $H_b^{-1}$. Absolute distance information cannot be obtained using self-calibrated stereo matching because the baseline (i.e. the distance between the two viewpoints in world coordinates) is unknown. The disparity map d contains the relative distances between objects, which is sufficient for our task. A joint bilateral filter [12] is then applied for removing noise from d. See Fig. 5, right, for a result after inverse transformation

and filtering of the disparity map. Because of lack of texture or limited availability of disparities, disparity information close to the border of the image might be incorrect. This does not affect our application because we only use disparities at highlights.
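To make this pipeline concrete, the following Python/OpenCV sketch strings the steps together: ORB features, kNN matching with the ratio and cross-symmetry tests of Eq. (4), a RANSAC fundamental matrix, uncalibrated rectification, SGM-style matching, and warping the disparity back onto Ib. It is a minimal sketch under our own parameter choices; the function name, the ORB feature budget, the SGBM settings, and the use of a plain bilateral filter in place of the joint bilateral filter of [12] are assumptions, not the authors' exact implementation.

```python
import cv2
import numpy as np

def estimate_disparity(img_b, img_m, ratio=0.8):
    """Self-calibrated depth estimation sketch for a base/match image pair."""
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    gray_m = cv2.cvtColor(img_m, cv2.COLOR_BGR2GRAY)
    h, w = gray_b.shape

    # 1. ORB features in both images.
    orb = cv2.ORB_create(nfeatures=4000)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    kp_m, des_m = orb.detectAndCompute(gray_m, None)

    # 2. kNN matching plus ratio test (T_distance = 0.8) and a simple
    #    cross-symmetry test on the coordinate displacement, cf. Eq. (4).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pts_b, pts_m = [], []
    for pair in matcher.knnMatch(des_b, des_m, k=2):
        if len(pair) < 2:
            continue
        m1, m2 = pair
        pb = np.array(kp_b[m1.queryIdx].pt)
        pm = np.array(kp_m[m1.trainIdx].pt)
        if (m1.distance < ratio * m2.distance
                and abs(pb[0] - pm[0]) < 0.15 * w
                and abs(pb[1] - pm[1]) < 0.01 * h):
            pts_b.append(pb)
            pts_m.append(pm)
    pts_b, pts_m = np.float32(pts_b), np.float32(pts_m)

    # 3. Fundamental matrix by RANSAC, then uncalibrated rectification
    #    homographies H_b, H_m.
    F, inliers = cv2.findFundamentalMat(pts_b, pts_m, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = inliers.ravel().astype(bool)
    _, H_b, H_m = cv2.stereoRectifyUncalibrated(pts_b[inliers], pts_m[inliers], F, (w, h))
    rect_b = cv2.warpPerspective(gray_b, H_b, (w, h))
    rect_m = cv2.warpPerspective(gray_m, H_m, (w, h))

    # 4. Semi-global block matching on the rectified pair.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp_hat = sgbm.compute(rect_b, rect_m).astype(np.float32) / 16.0

    # 5. Warp the disparity map back into the coordinates of the unrectified
    #    I_b (inverse of H_b) and denoise; a plain bilateral filter stands in
    #    for the joint bilateral filter of [12].
    disp = cv2.warpPerspective(disp_hat, np.linalg.inv(H_b), (w, h))
    disp = cv2.bilateralFilter(disp, 9, 25, 9)
    return disp
```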

4. Highlight registration

Star patterns appear only in very high-contrast regions, as already mentioned above. Such regions are typically at highlights in a night-time photo. Luminance and color information in such


regions normally gets clipped due to the limited dynamic range of the image sensor, or due to the image file format. We detect very high-contrast regions in the original image Ib. Then we estimate luminance and color information according to the available depth information and to the values at adjacent pixels.

4.1. Highlight detection

To locate valid highlights, we first detect a family F_overexposed of 4-connected regions S in Ib which are formed by overexposed pixels, satisfying

$R_b(p) = G_b(p) = B_b(p) = G_{\max}$    (5)

at all p ∈ S. Due to the nature of common image sensors, a very bright light source (i.e. one capable of creating a star effect) causes all three color channels to be overexposed, even if the light itself is pure red.

Some regions in F_overexposed cannot be considered as possible sources of a star pattern, as occurring in real-world star-effect photos: for example, overexposed but non-luminous regions such as some metallic patches on buildings, large overexposed regions such as the sky (e.g. with backlight) or a white wall, or very small overexposed regions which are normally just local reflections (e.g. of the eyes of a cat) or noisy pixels. To eliminate these, we extract a subfamily F_highlight ⊆ F_overexposed such that

$F_{\mathrm{highlight}} = \{\, S \in F_{\mathrm{overexposed}} \;|\; \lambda_S < 1.5 \,\wedge\, 2 < R_S < 0.03 \cdot N_{\mathrm{cols}} \,\}$    (6)

where λ_S is the aspect ratio of the bounding rectangle of S, and R_S is the circumradius of S (i.e. the radius of the smallest bounding circle of S). The centroid of S ∈ F_highlight, denoted by p_S, is given by

$p_S = \left( \frac{\mu_{10}(S)}{\mu_{00}(S)}, \frac{\mu_{01}(S)}{\mu_{00}(S)} \right) \quad \text{with} \quad \mu_{ab}(S) = \sum_{(x,y) \in S} x^a \cdot y^b$    (7)

This filter removes a large part of the distracting overexposed regions, whereas some tutorials [8,19] use all overexposed regions to create star patterns. Our experiments show that our strategy produces more natural results.

Fig. 5. Obtained depth information from stereo matching. From top to bottom: the resulting disparity map d̂, the inversely transformed disparity map d, and the joint bilaterally filtered result.

4.2. Color recovery

Highlights are normally the most saturated parts of a night-time photo, for example streetlights, traffic signals, or the head- or taillights of a car. Though the center of a highlight is, by our definition, just overexposed without any color information, the adjacent region normally provides strong hints on the color due to diffuse reflection or diffraction. Thus we estimate the color information of a highlight based on the color of adjacent pixels.

We first convert Ib into I_HSV in the HSV color space, following [25]. For I_b(p) = (R(p), G(p), B(p)), the value, chroma, hue, and saturation are given by

$V(p) = \max\{R(p), G(p), B(p)\}$

$\delta(p) = V(p) - \min\{R(p), G(p), B(p)\}$

$\hat{H}(p) = \begin{cases} 60 \cdot [G(p)-B(p)]/\delta(p) & \text{if } V(p)=R(p) \\ 120 + 60 \cdot [B(p)-R(p)]/\delta(p) & \text{if } V(p)=G(p) \\ 240 + 60 \cdot [R(p)-G(p)]/\delta(p) & \text{if } V(p)=B(p) \end{cases}$

$H(p) = \begin{cases} \hat{H}(p) + 360 & \text{if } \hat{H}(p) < 0 \\ \hat{H}(p) & \text{otherwise} \end{cases} \qquad S(p) = \begin{cases} \delta(p)/V(p) & \text{if } V(p) \ne 0 \\ 0 & \text{otherwise} \end{cases}$    (8)–(11)

For a highlight region S, we select the hue H_S and saturation S_S by detecting the most saturated pixel p in a circular neighborhood Ω_S around S:

$\forall q \,\big(q \in \Omega_S \rightarrow S(p) \ge S(q)\big)$    (12)


Here S(p) is the saturation value of p, and Ω_S is a circular region centered at the centroid p_S having radius 1.5 · R_S, where R_S is the circumradius of S.
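As an illustration of Sections 4.1 and 4.2, here is a minimal Python/OpenCV sketch assuming 8-bit input (G_max = 255). The function names and the way regions are represented are our own choices, and OpenCV's built-in HSV conversion stands in for Eqs. (8)–(11).

```python
import cv2
import numpy as np

def detect_highlights(img_bgr, max_radius_frac=0.03):
    """Highlight detection in the spirit of Eqs. (5)-(7): connected regions of
    overexposed pixels, filtered by aspect ratio and circumradius; returns a
    list of (centroid, circumradius) pairs. Thresholds follow the text."""
    n_rows, n_cols = img_bgr.shape[:2]
    overexposed = np.all(img_bgr == 255, axis=2).astype(np.uint8)  # Eq. (5)
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(
        overexposed, connectivity=4)
    highlights = []
    for i in range(1, num):                       # label 0 is the background
        bw, bh = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        aspect = max(bw, bh) / max(1, min(bw, bh))  # lambda_S of the bounding box
        pts = np.column_stack(np.nonzero(labels == i))[:, ::-1].astype(np.float32)
        _, r = cv2.minEnclosingCircle(pts)          # R_S, circumradius
        if aspect < 1.5 and 2 < r < max_radius_frac * n_cols:   # Eq. (6)
            highlights.append((tuple(centroids[i]), r))         # centroid, Eq. (7)
    return highlights

def recover_color(img_bgr, center, radius):
    """Color recovery following Section 4.2: hue/saturation of the most
    saturated pixel in a circular neighborhood of radius 1.5 * R_S."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)  # OpenCV: H in [0,180), S,V in [0,255]
    mask = np.zeros(img_bgr.shape[:2], np.uint8)
    cv2.circle(mask, (int(center[0]), int(center[1])), int(1.5 * radius), 255, -1)
    ys, xs = np.nonzero(mask)
    best = np.argmax(hsv[ys, xs, 1])                # most saturated pixel, Eq. (12)
    h, s, _ = hsv[ys[best], xs[best]]
    return 2.0 * h, s / 255.0                       # hue in degrees, saturation in [0,1]
```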

4.3. Luminance estimation

In a star-effect photo, the luminance value of a highlight determines the size of the corresponding star pattern. Due to the limited dynamic range of the image sensor, or due to the image file format, the real luminance values of highlight regions are in general lost in a taken photograph. We estimate a plausible luminance value V_S for each highlight S ∈ F_highlight. The goal of this estimation is to render natural star effects; we do not go so far as understanding the real luminance of the scene.

We make the assumption that all light sources of the same color have the same energy. For example, all the streetlights along a street have the same energy, which is stronger than the energy of a traffic light. Thus, we classify the light sources by the available color information, and assign a weight parameter C_color(S) to the light source corresponding to S. Empirically, we have

$C_{\mathrm{color}}(S) = \begin{cases} 0.3 & \text{if } \mathrm{color}(S) = \text{Red} \\ 1 & \text{if } \mathrm{color}(S) = \text{Yellow} \\ 0.4 & \text{if } \mathrm{color}(S) = \text{Green} \\ 0.2 & \text{if } \mathrm{color}(S) = \text{Others} \end{cases}$    (13)

with

$\mathrm{color}(S) = \begin{cases} \text{Red} & \text{if } H_S < 30 \,\vee\, H_S \ge 330 \\ \text{Yellow} & \text{if } 30 \le H_S < 75 \\ \text{Green} & \text{if } 75 \le H_S < 165 \\ \text{Others} & \text{otherwise} \end{cases}$    (14)

This classification covers common light sources in night-time photos. We also simply ignore the media between light source and lens, and estimate the luminance V_S of the highlight region according to the distance D(p) between camera and light source, and the color information:

$V_S = \frac{E_S}{4\pi \cdot D(p)^2} = C_{\mathrm{intensity}} \cdot C_{\mathrm{color}}(S) \cdot d(p)^2$    (15)

Here E_S is the energy of the light source corresponding to S, which we do not know. For convenience, we use a global user parameter C_intensity for controlling the "strength" of star effects. Experiments show that this assumption works well in applications; see the reported experiments below.
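A minimal sketch of Eqs. (13)–(15) follows; the function names and the default value of C_intensity are assumptions.

```python
def color_weight(hue_deg):
    """Light-source weight C_color from Eqs. (13)-(14), keyed on the hue H_S."""
    if hue_deg < 30 or hue_deg >= 330:
        return 0.3      # red
    if 30 <= hue_deg < 75:
        return 1.0      # yellow
    if 75 <= hue_deg < 165:
        return 0.4      # green
    return 0.2          # others

def estimate_luminance(hue_deg, disparity_at_centroid, c_intensity=1.0):
    """Eq. (15): luminance grows with the squared disparity (closer lights
    appear stronger), scaled by the color class and a global strength knob."""
    return c_intensity * color_weight(hue_deg) * disparity_at_centroid ** 2
```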

5. Star effect rendering

Light diffracted by an aperture can be approximately modeled by the Fraunhofer diffraction equation [16], because the light sources are normally effectively at infinity. The diffracted light passes through the (system of) optical lenses before it arrives at the image sensor; refraction introduces further complexity (Fig. 6).

Eqs. (2) and (3) show that the diffraction distribution does not depend on the strength of the incoming light. Thus, the pattern keeps the same shape under any luminance condition. The pattern is stronger towards its center and weaker towards the outside. A star of a weaker light source looks smaller because the outer ring of the pattern is too weak to be visible. Due to the limited dynamic range, the centers of the star patterns (i.e. the "peaks") all look the same. As a result, in a photo, a star of a stronger light source looks larger than that of a weaker one. See the real-world examples in Figs. 1 and 7.

5.1. Basic star rendering

We model the shape of a star pattern by loading an N × N texture α. We render a star pattern α_S for each highlight region S on Ib, and obtain the final result. Here, α_S is a scaled version of α with edge length N_S.


Fig. 6. Highlight detection. From left to right: original image Ib, a map of overexposed pixels for the scene, and detected star-capable overexposed regions (marked by blue circles). (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

Fig. 7. Illustration of the relationship between star pattern and luminance. Photos shot with f/22 and ISO 800. Shutter speeds are, from left to right: 1/2 s, 1 s, 2 s, 4 s, and 8 s. In this case, the luminance doubles each time from left to right.


From experiments (for example Fig. 7), a near-linear relationship can be observed between the square root of the luminance and the scale of the star pattern. Thus, let

$N_S = C_{\mathrm{star}} \cdot \sqrt{V_S}$    (16)

Both C_intensity in Eq. (15) and C_star in Eq. (16) can be used for controlling the "strength" of star effects; thus, we simply let C_star = 1. We define α_S on a rectangular set Ω_S, with the center of Ω_S as its origin. For each position p ∈ Ω_S, let

$I(p + p_S) = I(p + p_S) + \alpha_S(p) \cdot (R_S, G_S, B_S)$    (17)

Here R_S, G_S, and B_S are the RGB color values corresponding to the previously calculated HSV color (H_S, S_S, V_S).
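A minimal sketch of Eqs. (16) and (17), assuming a floating-point RGB image in [0,1] and a single-channel star texture in [0,1]; the function name and the border-clipping details are our own.

```python
import numpy as np
import cv2

def render_star(image, star_texture, centroid, v_s, rgb_color, c_star=1.0):
    """Scale the star texture by the square root of the estimated luminance
    (Eq. (16)) and add it, tinted with the recovered highlight color,
    centered at the highlight centroid (Eq. (17))."""
    n_s = max(3, int(round(c_star * np.sqrt(v_s))))            # Eq. (16)
    alpha = cv2.resize(star_texture, (n_s, n_s), interpolation=cv2.INTER_AREA)

    cx, cy = int(round(centroid[0])), int(round(centroid[1]))
    h, w = image.shape[:2]
    x0, y0 = cx - n_s // 2, cy - n_s // 2

    # Clip the star window against the image borders.
    ax0, ay0 = max(0, -x0), max(0, -y0)
    bx0, by0 = max(0, x0), max(0, y0)
    bw = min(w, x0 + n_s) - bx0
    bh = min(h, y0 + n_s) - by0
    if bw <= 0 or bh <= 0:
        return image

    tint = np.asarray(rgb_color, dtype=image.dtype)             # (R_S, G_S, B_S)
    patch = alpha[ay0:ay0 + bh, ax0:ax0 + bw, None] * tint      # alpha_S(p) * color
    image[by0:by0 + bh, bx0:bx0 + bw] += patch                  # Eq. (17), additive blend
    return np.clip(image, 0.0, 1.0)
```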

5.2. Star model generation

From Fig. 3 it can be seen that each lens leaves its own signature on the style of the generated stars. Such star styles lead to different esthetic impressions. In order to achieve a user's desired star effect (according to the scene or just the user's taste), we offer a convenient interface for generating the star model α. Several star styles are modeled following real-world photos taken with different lenses, for example sharp stars caused by a straight-blade aperture (see Fig. 8, top-left) or emanative stars caused by a round-blade aperture (see Fig. 8, bottom-left). We generate a star model according to a user-specified number of arms (see Fig. 8, middle and right).

Fig. 8. Star model generation, following real-world photos. From left to right: single-arm star, 8-arm star, and 12-arm star. Upper and lower rows differ by the applied style.

5.3. Variations on star patterns

Real-world star effects sometimes show very minor variations throughout a photo. In order to create a more vivid star effect, we provide a mechanism to create a variation α̂_S of a star pattern α_S by merging α_S with a copy α_{S,θ} that is slightly rotated by an angle θ. For each position p ∈ Ω_S, let

$\hat{\alpha}_S(p) = \frac{1}{1+\omega}\big(\alpha_S(p) + \omega \cdot \alpha_{S,\theta}(p)\big)$    (18)

Here, θ and ω can be random numbers. We may also merge multiple slightly rotated copies; see Fig. 9 for a demonstration.

Fig. 9. Introducing minor variations to star patterns. Left: a simple 4-point star pattern. Middle and right: two variations (clearly visible after zooming into the figure).
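A minimal sketch of Eq. (18), using OpenCV to produce the rotated copy α_{S,θ}; the ranges chosen for θ and ω are arbitrary assumptions.

```python
import numpy as np
import cv2

def vary_star(alpha, max_angle_deg=6.0, max_weight=0.5, rng=None):
    """Star-pattern variation following Eq. (18): blend the texture with a
    slightly rotated copy of itself using a random angle theta and weight omega."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(-max_angle_deg, max_angle_deg)
    omega = rng.uniform(0.0, max_weight)

    h, w = alpha.shape
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), theta, 1.0)
    alpha_rot = cv2.warpAffine(alpha, rot, (w, h))
    return (alpha + omega * alpha_rot) / (1.0 + omega)   # Eq. (18)
```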

Fig. 10. Use of different strengths for star-pattern rendering. Top-left: original image. Others: star effect of different scales.


6. Experiments and discussion

We illustrate our extensive experiments by showing some of the photorealistic, and also some of the non-photorealistic, star effects. Fig. 10 illustrates photorealistic star-pattern rendering using different strength parameters; it shows that the "degree" of the star effect can be easily controlled in our method. Fig. 11 illustrates how to control the style of the star effect using different textures and our variation mechanism. To control the shape of the star pattern with a camera, photographers need to

use different lenses. In comparison, our method is very convenient for controlling star effects, and thus convenient for artists who want to try different styles and find their preferred look.

Fig. 12 compares our method, a uniform-sized star effect, a random-sized star effect, and a real-world star-effect photo. The uniform-sized star effect is generated manually following a tutorial [19]; the random-sized star effect is generated using our rendering method but without any use of depth information. The uniform-sized star effect also looks appealing, but it does not create any feeling of space. Such an effect is similar to the style of a common star filter. It can also be noticed that all the overexposed regions

Fig. 11. Application of different star textures. Top-left: original image. Top-right: star effect with a 10-arm star texture. Bottom-left: with an 8-arm star texture. Bottom-right: with the same star texture as bottom-left while introducing some minor variations.

Fig. 12. Comparison between star effects achieved in different ways. Top row, from left to right: an image without star effect (which is the input for the three results in the bottom row) and a real-world star-effect photo, i.e. the "ground truth". Bottom row, from left to right: uniform-sized star effect, random-sized star effect, and our method.


Fig. 13. Simulation of the style of a star filter. From left to right: a photo shot with a large aperture (thus showing no star effect), our result, and a photo shot with a star filter.

Fig. 14. Rendering non-photorealistic star effects.

Fig. 15. Failed cases due to failures in depth calculations. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

are used in the manual process, including regions that should not produce significant diffraction, as, for example, some sparkles on the water. The random-sized star effect looks more flexible. However, without understanding the image content, this effect does not create any feeling of space either, and it introduces inconsistencies; for example, the street light further away should define a smaller star. In comparison, our depth-aware method performs naturally, and the result is similar to the real-world star-effect photo. Another advantage of our method over manual processing [8,19] is

that we can render colorful stars according to the content of the input photo, while the manual process only renders monochromatic stars because the color information of the overexposed regions is lost.

Our method can also simulate the style of a star filter, using a proper star texture (as demonstrated in Fig. 13). The photo on the left is shot with aperture f/4, which is a relatively large aperture, and thus does not show any significant star effect. The photo in the middle is shot with a 4-point star filter and with the same camera


parameters as used for the left photo. It can be seen that our result looks similar to the star-filter photo.

Inspired by the star filter, we also tested the rendering of some non-photorealistic patterns (as shown in Fig. 14). Such non-photorealistic patterns can create a certain atmosphere if used properly.

Variations in the robustness of the applied stereo vision method can affect our star rendering; this is a limitation of our method. Fig. 15 shows some failed cases due to failures in the depth calculations. The car in the red rectangle of Fig. 15, right, is in an occlusion region (see Fig. 15, middle), and thus its depth information is unavailable. As a result, no star effect is applied to the headlight of the car.

We implemented our approach on a 3.30 GHz PC with 8.00 GB RAM and no graphics card. The run time is about 3 s for images of 3000 × 2000 resolution.

7. Conclusions

This paper presented the underlying theoretical concepts, the algorithmic steps, and experimental results for a star-effect simulation method. Self-calibrated stereo vision is at the core of the method; it allows us to obtain depth information from pairs of images taken free-hand. After detecting high-contrast regions, we select those specifying highlights, and then estimate the luminance of the highlight regions according to the available depth information. Our subsequent star-pattern rendering is content-aware.

Experiments show that our results are similar to real-world star-effect photos when using star patterns which correspond to real-world effects. We explained and demonstrated that it is much easier to control the style of a star effect by using our method than by using a real-world camera. Experimental results also illustrate that our results appear more natural than the results of existing commercial applications when aiming at photorealistic effects. We also illustrated options for non-photorealistic star effects. To the best of our knowledge, this paper is the first to report extensively on automatically simulated photorealistic and non-photorealistic star effects.


Appendix A. Supplementary data Supplementary data associated with this article can be found in the online version at http://dx.doi.org/10.1016/j.cag.2016.08.010.

References

[1] Blais F. Review of 20 years of range sensor development. J Electron Imaging 2004;13:231–40. http://dx.doi.org/10.1117/12.473116.
[2] Born M, Wolf E. Principles of optics. Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/CBO9781139644181.
[3] Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM 1981;24:381–95. http://dx.doi.org/10.1145/358669.358692.
[4] Felzenszwalb PF, Huttenlocher DP. Efficient belief propagation for early vision. Int J Comput Vision 2006;70:41–54. http://dx.doi.org/10.1007/s11263-006-7899-4.
[5] Hartley RI. Theory and practice of projective rectification. Int J Comput Vision 1999;35:115–27. http://dx.doi.org/10.1023/A:1008115206617.
[6] Hermann S, Klette R. Iterative semi-global matching for robust driver assistance systems. In: Proceedings of the ACCV. Lecture notes in computer science, vol. 7726; 2013. p. 465–78. http://dx.doi.org/10.1007/978-3-642-37431-9_36.
[7] Hirschmüller H. Accurate and efficient stereo processing by semi-global matching and mutual information. In: Proceedings of the CVPR; 2005. p. 807–14. http://dx.doi.org/10.1109/CVPR.2005.56.
[8] Hoey G. Advanced Photoshop starburst filter effect—Week 45. Available at: www.youtube.com/watch?v=lRKp4_EkIvc; 2009.
[9] Khoshelham K, Elberink SO. Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors 2012;12:1437–54. http://dx.doi.org/10.3390/s120201437.
[10] Klette R. Concise computer vision: an introduction into theory and algorithms. London: Springer. http://dx.doi.org/10.1007/978-1-4471-6320-6.
[11] Klette R, Rosenfeld A. Digital geometry: geometric methods for digital picture analysis. San Francisco: Morgan Kaufmann; 2004.
[12] Kopf J, Cohen MF, Lischinski D, Uyttendaele M. Joint bilateral upsampling. ACM Trans Graph 2007;26(96). http://dx.doi.org/10.1145/1276377.1276497.
[13] Liu D, Klette R. Fog effect for photography using stereo vision. Vis Comput 2016;32:99–109. http://dx.doi.org/10.1007/s00371-014-1058-7.
[14] Liu D, Nicolescu R, Klette R. Bokeh effects based on stereo vision. In: Proceedings of the CAIP; 2015. p. 198–210. http://dx.doi.org/10.1007/978-3-319-23192-1_17.
[15] Liu D, Nicolescu R, Klette R. Star-effect simulation for photography using self-calibrated stereo vision. In: Proceedings of the PSIVT; 2015. p. 228–40. http://dx.doi.org/10.1007/978-3-319-29451-3_19.
[16] Lipson A, Lipson SG, Lipson H. Optical physics. Cambridge: Cambridge University Press; 2010.
[17] Lowe DG. Distinctive image features from scale-invariant keypoints. Int J Comput Vision 2004;60:91–110. http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94.
[18] Lukac R. Computational photography: methods and applications. Boca Raton: CRC Press; 2010.
[19] mahalodotcom. How to create a lens flare effect in Photoshop. Available at: www.youtube.com/watch?v=qmL0ct2Ries; 2011.
[20] Ng R, Levoy M, Brédif M, Duval G, Horowitz M, Hanrahan P. Light field photography with a hand-held plenoptic camera. Stanford University, CTSR 2005-02; 2005.
[21] Ritschel T, Ihrke M, Frisvad JR, Coppens J, Myszkowski K, Seidel HP. Temporal glare: real-time dynamic simulation of the scattering in the human eye. Comput Graph Forum 2009;28:183–92. http://dx.doi.org/10.1111/j.1467-8659.2009.01357.x.
[22] Rublee E, Rabaud V, Konolige K, Bradski G. ORB: an efficient alternative to SIFT or SURF. In: Proceedings of the ICCV; 2011. p. 2564–71. http://dx.doi.org/10.1109/ICCV.2011.6126544.
[23] Schechner YY, Kiryati N. Depth from defocus vs. stereo: how different really are they? Int J Comput Vision 2000;39:141–62. http://dx.doi.org/10.1023/A:1008175127327.
[24] Song Z, Klette R. Robustness of point feature detection. In: Proceedings of the CAIP; 2013. p. 580–8. http://dx.doi.org/10.1007/978-3-642-40246-3_12.
[25] Smith AR. Color gamut transform pairs. ACM SIGGRAPH Comput Graph 1978;12:12–9. http://dx.doi.org/10.1145/965139.807361.
[26] Wang L, Wei LY, Zhou K, Guo B, Shum HY. High dynamic range image hallucination. In: Proceedings of the Eurographics Symposium on Rendering; 2007. p. 321–6. http://dx.doi.org/10.2312/EGWR/EGSR07/321-326.
