Measurement 43 (2010) 1173–1179
Object's translational speed measurement using motion blur information

Xu Ting-Fa a,*, Zhao Peng b

a Department of Photoelectric Engineering, Beijing Institute of Technology, Beijing 100081, China
b Information and Computer Engineering Institute, Northeast Forestry University, Harbin 150040, China

* Corresponding author. Tel.: +86 010 68912560; fax: +86 010 68912560x808. E-mail address: [email protected] (X. Ting-Fa).
Article info

Article history: Received 26 November 2009; received in revised form 22 March 2010; accepted 27 May 2010; available online 1 June 2010.

Keywords: Machine vision; Speed measurement; Motion blur; Image analysis; Image matting
Abstract

In motion blur based speed measurement, a key step is the calculation of the horizontal blur extent. To perform this calculation robustly and accurately when a defocus blur occurs together with a motion blur, and when the moving object has irregular shape edges, we propose a novel scheme using image matting and the transparency map. This scheme can isolate the defocus blur from the motion blur effectively and can calculate the horizontal blur extent accurately regardless of the object's shape. Simulation and real experiments show that our scheme outperforms the current scan-line algorithm for the blur extent computation, so that the speed measurement is performed more robustly and accurately.

© 2010 Elsevier Ltd. All rights reserved.
1. Introduction

A moving object's speed measurement is of central importance in many fields such as object tracking and intelligent traffic. Two popular but expensive speed measurement schemes use a Radio Detection and Ranging (RADAR) or a Light Detection and Ranging (LIDAR) device to measure the speed of a moving object (e.g., a vehicle) [1]. A RADAR device bounces a radio signal off a moving object, and the reflected signal is picked up by a receiver. The receiver measures the frequency difference between the original and the reflected signals and converts it into the speed. A LIDAR device records how long it takes a light pulse to travel to the object and back; by making several distance measurements and comparing them, LIDAR can perform the speed measurement accurately.

In recent years, some image-based schemes have been proposed for speed measurement, owing to the availability of cheap, high-performance imaging hardware [2–4]. These schemes usually use reference information in the scene, such as the distance traveled between adjacent image frames. The object's speed is computed by dividing this
traveled distance by the inter-frame time. Two or more image frames are required in these schemes. However, due to the limited imaging frame rate (usually 30 frames per second), the video camera has to be installed far away from the moving object in order to keep the object within adjacent image frames.

Lin et al. proposed schemes for vehicle speed [5] and spherical ball speed [6] measurement based on a single motion blurred image. In their work, a single image taken by a stationary camera is used for speed measurement. Due to the relative motion between the camera and the moving object during the camera exposure time, motion blur occurs in the dynamic image region. This motion blur provides a visual cue for the object's speed measurement. An approximate object region is first segmented, and blur parameters are calculated from the motion blurred sub-image. The sub-image is then deblurred and used to derive other parameters. Finally, the object's speed is calculated from the imaging geometry, the camera pose, and the blur extent in the image.

In this paper, we propose another speed measurement scheme based on a single motion blurred image. Two improvements on the blur extent calculation are introduced in our novel scheme, which performs the speed measurement more robustly and accurately than the schemes [5,6] of Lin et al. In Section 2, we briefly outline the formulation of speed measurement using a single
motion blurred image in those schemes [5,6]. Section 3 describes two drawbacks of the blur extent calculation in those schemes. Section 4 presents our solutions to these two drawbacks. Experimental results and comparisons are given in Section 5, and conclusions are drawn in Section 6.

2. Formulation of motion blur based speed measurement

The motion blur based speed measurement is based on a pinhole camera model, as shown in Fig. 1.

Fig. 1. Camera model for object's speed measurement.

Define the angle between the object's motion direction and the image plane of the camera as θ, and let the object's displacement be d during a fixed camera exposure time T. The object's speed is then given as follows [5,6]:

$$v = d/T \approx \frac{z K s_x}{T f \cos\theta \cos\beta} \qquad (1)$$

if f ≫ s_x, where f is the focal length of the camera, s_x is the CCD pixel size in the horizontal direction, z is the distance between the object and the camera along the optical axis, K is the motion blur extent (in pixels) along the horizontal direction (i.e., the x-direction), and β is the angle between the motion blur direction and the x-direction in the image plane. More details are given in Refs. [5,6] and are omitted here. For the special case in which the object moves parallel to the image plane of the camera (i.e., θ = 0), Eq. (1) simplifies to [5,6]:

$$v = d/T \approx \frac{z K s_x}{T f \cos\beta} \qquad (2)$$
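For illustration, the speed formula transcribes directly into code. The following is a minimal Python sketch of Eqs. (1) and (2); the function name is ours, and the numeric values of z and K in the example call are placeholders, not values from the experiments.

```python
import math

def speed_from_blur(K, z, s_x, T, f, theta=0.0, beta=0.0):
    """Eq. (1): v = d/T ~ z*K*s_x / (T*f*cos(theta)*cos(beta)).

    K     -- horizontal motion blur extent (pixels)
    z     -- object-to-camera distance along the optical axis (m)
    s_x   -- CCD pixel size in the horizontal direction (m)
    T     -- exposure time (s)
    f     -- focal length (m)
    theta -- angle between motion direction and image plane (rad)
    beta  -- angle between blur direction and x-direction (rad)
    With theta = 0 this reduces to Eq. (2).
    """
    return (z * K * s_x) / (T * f * math.cos(theta) * math.cos(beta))

# Illustrative call using the camera constants quoted in Section 5.2
# (f = 10 mm, s_x = 0.011 mm, T = 1/150 s); z and K are placeholders.
v = speed_from_blur(K=17.4, z=8.0, s_x=0.011e-3, T=1.0 / 150, f=10e-3)
print(f"estimated speed: {v:.1f} m/s")
```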
3. Two drawbacks of motion blur based speed measurement

In the speed calculation using Eq. (1) or Eq. (2), a key step is the calculation of the parameter K. To calculate the motion blur extent K in the horizontal direction, Refs. [5,6] extract a sub-image enclosing the detected object from the original motion blurred image. Edge detection is then applied to the sub-image to find the left and right blur regions. Ideally, each scan-line contains two ramp edges with the same width, as illustrated in Fig. 2, so the motion blur extent K can be calculated by averaging the ramp edge widths over the scan-lines. However, the calculation of the parameter K using this scan-line algorithm [5,6] has two drawbacks.
Fig. 2. Image scan-line algorithm: (a) images with static and moving objects; (b) the corresponding intensity profile along the highlighted scan-line (the horizontal axis is the pixel position along the scan-line; the vertical axis is the gray value of each pixel).
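As a rough illustration of the scan-line measurement just described (a simplified stand-in, not the exact procedure of Refs. [5,6]), the ramp-edge widths along one image row can be estimated by counting consecutive pixels whose gradient magnitude exceeds a threshold; the threshold grad_thresh is an assumed tuning parameter.

```python
import numpy as np

def ramp_widths(gray, row, grad_thresh=2.0):
    """Return the widths (in pixels) of the ramp edges along one scan-line.

    A ramp is taken to be a maximal run of pixels where the intensity
    gradient magnitude stays above grad_thresh; ideally a horizontally
    blurred object yields two ramps of equal width (Fig. 2b).
    """
    line = gray[row, :].astype(float)
    grad = np.abs(np.diff(line))
    widths, run = [], 0
    for g in grad:
        if g > grad_thresh:
            run += 1
        elif run > 0:
            widths.append(run)
            run = 0
    if run > 0:
        widths.append(run)
    return widths

# K would then be estimated by averaging these widths over many scan-lines.
```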
First, in general, both motion blur and defocus blur occur in a CCD image in practical measurement work. In this case, a large error may arise in the scan-line algorithm, since the defocus blur is not considered in that algorithm. This drawback is illustrated in Fig. 3a, where a rectangular object moves horizontally with a horizontal motion blur extent K = 30 (in pixels) while the camera also introduces a defocus blur (defocus blur diameter D = 11). If the scan-line algorithm is applied, the computed extent is approximately K ≈ 40, which finally results in a large speed measurement error.

Second, another error may arise in the scan-line algorithm for a moving object with irregular shape edges, even if only motion blur occurs in the image. This drawback is illustrated in Fig. 3b and c, where a circular object and an irregular object move horizontally. The computed blur extents clearly vary greatly across scan-lines, which results in errors as well. In fact, the scan-line algorithm performs the blur extent calculation accurately only for vertical edges.
Fig. 3. Images with mixed motion and defocus blurs: (a) blurred rectangle image; (b) blurred circle image; and (c) blurred irregular object image.
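Test images with such mixed blurs, in the style of Fig. 3, can be synthesized by convolving a sharp image with a horizontal motion kernel followed by a pillbox defocus kernel. The Python sketch below is a plausible stand-in for the Matlab filter functions mentioned in Section 5.1; the pillbox model for defocus is a common assumption, not a detail given in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def motion_kernel(K):
    """Horizontal motion blur kernel: a normalized 1 x K box."""
    return np.ones((1, K)) / K

def defocus_kernel(D):
    """Pillbox (uniform disk) kernel of diameter D, a common defocus model."""
    r = D / 2.0
    y, x = np.mgrid[-(D // 2):D // 2 + 1, -(D // 2):D // 2 + 1]
    k = ((x ** 2 + y ** 2) <= r ** 2).astype(float)
    return k / k.sum()

def mixed_blur(img, K=30, D=11):
    """Apply a horizontal motion blur, then a defocus blur, as in Fig. 3a."""
    out = convolve2d(img, motion_kernel(K), mode="same", boundary="symm")
    return convolve2d(out, defocus_kernel(D), mode="same", boundary="symm")
```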
4. Two improvements on motion blur based speed measurement

To overcome the two drawbacks of the scan-line algorithm and to calculate the parameter K more robustly and accurately, two improvements are proposed based on image matting. With these two improvements, we can isolate the defocus blur from the motion blur, and we can also calculate the blur extent accurately for an object with irregular shape edges.

4.1. Image matting

Image matting schemes take as input an image I, which is assumed to be a composite of a foreground image F and a background image B. The color of the i-th pixel is assumed to be a linear combination of the corresponding foreground and background colors:
$$I_i = \alpha_i F_i + (1 - \alpha_i) B_i \qquad (3)$$
Fig. 4. Transparency map of Fig. 3: (a) transparency map of Fig. 3a; (b) transparency map of Fig. 3b; and (c) transparency map of Fig. 3c.
where α_i is the pixel's foreground opacity. The transparency map produced by the alpha matte contains fractional pixel values, which can be physically interpreted as the percentage of the exposure time during which the foreground is exposed at each pixel. Each pixel has a fractional value between 0 and 1. Since the foreground always occludes the background, the map also describes the partial occlusion of the background scene by the foreground object during the exposure. Essentially, therefore, the transparency map abstracts the object's motion information. Pixels in the object center have value 1, and pixels around the boundary of a blurred object have fractional values. For an unblurred object with a solid boundary, each pixel in the image corresponds to either the background or the foreground, so the transparency map can be approximated by binary values consisting of only 0 and 1. For example, the transparency maps for the motion blurred images in Fig. 3 are illustrated in Fig. 4, where the alpha matte is produced by the "Poisson matting" scheme [7].

The transparency map has been successfully applied to motion blur kernel estimation [8,9]. For example, by using the transparency map, it is possible to automatically calculate the width and the height of a 2-D blur filter [9]. Denote the rightmost pixel with alpha value α ≠ 0 and the rightmost pixel with alpha value α = 1 in the blurred transparency map as (x_r1, y_r1) and (x_r2, y_r2), respectively. Similarly, denote the topmost pixel with alpha value α ≠ 0 and the topmost pixel with alpha value α = 1 as (x_u1, y_u1) and (x_u2, y_u2), respectively. Then the width and the height of the 2-D blur filter can be computed as follows [9]:

$$W = |x_{r1} - x_{r2}| + 1, \qquad H = |y_{u1} - y_{u2}| + 1 \qquad (4)$$
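Eq. (4) transcribes almost directly into code. In the sketch below, the tolerance eps used to decide α ≠ 0 and α = 1 on a real-valued matte is our assumption.

```python
import numpy as np

def blur_filter_size(alpha, eps=1e-3):
    """Width W and height H of the 2-D blur filter from a transparency
    map, following Eq. (4)."""
    rows_nz, cols_nz = np.where(alpha > eps)        # pixels with alpha != 0
    rows_s, cols_s = np.where(alpha > 1.0 - eps)    # pixels with alpha == 1
    x_r1 = cols_nz.max()   # rightmost pixel with alpha != 0
    x_r2 = cols_s.max()    # rightmost pixel with alpha == 1
    y_u1 = rows_nz.min()   # topmost pixel with alpha != 0 (row 0 is the top)
    y_u2 = rows_s.min()    # topmost pixel with alpha == 1
    W = abs(x_r1 - x_r2) + 1
    H = abs(y_u1 - y_u2) + 1
    return W, H
```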
4.2. Improved blur extent calculation

The two drawbacks of the scan-line algorithm are solved here by using the transparency map to calculate the parameter K more robustly and accurately. If the moving object's motion blur direction is horizontal, the motion blur extent K and the defocus blur diameter D can be computed from Eq. (4) as follows:
$$D = H, \qquad K = W - H \qquad (5)$$
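Given the blur_filter_size() sketch from Section 4.1, the horizontal special case of Eq. (5) is a two-line computation (alpha_map stands for the object's transparency map):

```python
# Horizontal-motion special case (Eq. (5)); assumes blur_filter_size()
# from the Section 4.1 sketch and a precomputed alpha_map array.
W, H = blur_filter_size(alpha_map)
D = H        # defocus blur diameter
K = W - H    # horizontal motion blur extent (pixels)
```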
However, in general, the moving object's motion blur direction is not horizontal. Denote the angle between the motion blur direction and the horizontal direction as β. In this general case, the following equations hold:
$$H = D + L \sin\beta, \qquad W = D + L \cos\beta, \qquad K = L \cos\beta \qquad (6)$$
In Eq. (6) there are four unknown parameters, β, L, D, and K, which cannot be solved from three equations. Therefore, we propose a novel image rotation based scheme to isolate the motion blur length L from the defocus blur. First, the original transparency map is rotated counter-clockwise by an angle φ (φ ∈ [0°, 90°]).
Fig. 5. Calculation of the rotation acute angle φ_a: (a) the case of acute angle β and (b) the case of obtuse angle β.
Fig. 6. Circle image: (a) original circle image and (b) mixed blurred circle image.
Table 1
Computed result and error of the horizontal blur extent by our scheme.

Measurement no.        1     2     3     4     5     6     7    2σ    Average
K (in pixels)        17.2  17.5  17.8  17.6  17.5  17.3  17.2   0.4   17.4
Relative error (%)    2.8   1.1   0.6   0.6   1.1   2.2   2.8   2.3   1.7
Table 2
Computed result and error of the horizontal blur extent by the scan-line algorithm in Ref. [6].

Measurement no.        1     2     3     4     5     6     7    2σ    Average
K (in pixels)        20.3  21.5  22.8  23.6  24.0  25.3  26.8   4.4   23.5
Relative error (%)   14.8  21.6  28.9  33.5  35.7  43.1  51.6  24.9   32.7
Second, the width and height of the blur filter, W_φ and H_φ, are computed using Eq. (4). Define the difference between W_φ and H_φ as diff_φ = |W_φ − H_φ|. These two steps are repeated for different angles φ with a step length of 0.5°. After the repeated operations are completed, we continue to the third step: a least squares curve fitting algorithm is applied to the values diff_φ over the discrete angles φ = 0°, 0.5°, 1.0°, 1.5°, …, 90°. Define a polynomial as follows:

$$y(\phi) = a_0 + a_1 \phi + a_2 \phi^2 + \cdots = \sum_{j=0}^{m} a_j \phi^j \qquad (7)$$

Following the least squares algorithm, the sum of squared errors is defined as:

$$F(a_0, \ldots, a_m) = \sum_{k=1}^{n} \left[ y(\phi_k) - \mathrm{diff}_{\phi_k} \right]^2 \qquad (8)$$
By solving ∂F/∂a_j = 0, j = 0, 1, …, m, we calculate the coefficients of this polynomial. Once the polynomial is determined with the computed coefficients a_j, j = 0, 1, …, m, we proceed to the fourth step. By solving dy/dφ = 0, we compute the acute angle φ_a corresponding to the maximum of y(φ). The original transparency map is then rotated again counter-clockwise by this acute angle φ_a, and the width and height of the blur filter, W_φa and H_φa, are computed with Eq. (4). If W_φa < H_φa, then L = H_φa − W_φa, D = W_φa, β = 90° − φ_a, and K = L cos β, as illustrated in Fig. 5a. If W_φa > H_φa, then L = W_φa − H_φa, D = H_φa, β = 180° − φ_a, and K = L cos β, as illustrated in Fig. 5b. Using these four steps, we can compute the horizontal motion blur extent K accurately even when the moving object's motion blur direction is not horizontal. Therefore, we can both isolate the defocus blur from the motion blur and calculate the horizontal blur extent K robustly and accurately, even for a moving object with irregular shape edges.
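Putting the four steps together, the following Python sketch outlines the rotation search. It reuses the hypothetical blur_filter_size() from Section 4.1; the interpolation order of the rotation and the use of |cos β| (so that K is returned as a non-negative pixel extent) are our assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def estimate_blur_params(alpha, step=0.5, poly_order=6):
    """Image-rotation scheme of Section 4.2.

    Rotates the transparency map over phi in [0, 90] degrees, fits
    diff_phi = |W_phi - H_phi| with a least squares polynomial
    (Eqs. (7) and (8)), finds the acute angle phi_a maximizing the fit,
    and recovers L, D, beta, and the horizontal blur extent K.
    """
    phis = np.arange(0.0, 90.0 + step, step)
    diffs = []
    for phi in phis:
        rot = rotate(alpha, phi, reshape=True, order=1)  # counter-clockwise
        W, H = blur_filter_size(rot)                     # Section 4.1 sketch
        diffs.append(abs(W - H))
    coeffs = np.polyfit(phis, diffs, poly_order)  # least squares fit, Eq. (8)
    y = np.poly1d(coeffs)                         # fitted polynomial, Eq. (7)
    roots = y.deriv().roots                       # solve dy/dphi = 0
    roots = roots[np.isreal(roots)].real
    roots = roots[(roots >= 0.0) & (roots <= 90.0)]
    if roots.size == 0:                           # fall back to the grid max
        roots = np.array([phis[np.argmax(diffs)]])
    phi_a = roots[np.argmax(y(roots))]            # acute angle maximizing y
    W, H = blur_filter_size(rotate(alpha, phi_a, reshape=True, order=1))
    if W < H:                                     # Fig. 5a: beta is acute
        L, D, beta = H - W, W, np.deg2rad(90.0 - phi_a)
    else:                                         # Fig. 5b: beta is obtuse
        L, D, beta = W - H, H, np.deg2rad(180.0 - phi_a)
    K = L * abs(np.cos(beta))                     # K = L cos(beta), as extent
    return K, L, D, beta
```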
5. Experimental results and comparisons

5.1. Simulation experiment

To quantitatively evaluate our improved scheme for calculating the horizontal motion blur extent K, a simulation experiment is performed for comparison with the scan-line algorithm in Ref. [6]. Fig. 6a shows a circular object on a simple background, and Fig. 6b is the corresponding blurred image, in which a motion blur with parameters L = 25, β = 45° and a defocus blur with parameter D = 11 are produced using the Matlab filter functions. The blur extent K is measured repeatedly for seven blurred circle images, in which a fixed motion blur with parameters L = 25, β = 45° and a variable defocus blur with parameter D ∈ [5, 11] are produced. For this simple synthesized image (Fig. 6b), the alpha matte in our improved scheme is produced by the "Poisson matting" scheme [7]. The measurement results using our image rotation based scheme are shown in Table 1, where the order of the fitting polynomial is m = 6. The measurement results using the scan-line algorithm in Ref. [6] are shown in Table 2, where the horizontal blur extent is computed by averaging the edge widths that are larger than 75% of the largest edge width. By comparison, our improved scheme clearly outperforms the scan-line algorithm.

5.2. Real experiment

The real experiment is carried out in an outdoor environment to measure the speed of a moving car, as illustrated in Fig. 7, using Eq. (2).
Fig. 7. Moving car image: (a) original blurred car image; (b) its scribble image; and (c) final alpha matte.
Table 3
Computed result and error of the vehicle speed by our scheme and the scan-line algorithm in Ref. [5] (speed unit: m/s).

Measurement no.        1     2     3     4     5     6     7    2σ    Average
Our scheme           23.4  23.5  23.2  23.3  23.7  23.3  23.7   0.4   23.4
Ref. [5] algorithm   26.7  27.1  28.4  26.8  27.0  27.2  27.7   1.2   27.3
Fig. 8. Moving car image with a complicated background: (a) original blurred car image; (b) its scribble image; and (c) final alpha matte.
Table 4
Computed result and error of the vehicle speed by our scheme and the scan-line algorithm in Ref. [5] (speed unit: m/s).

Measurement no.        1     2     3     4     5     6     7    2σ    Average
Our scheme           27.5  27.4  27.9  27.7  27.2  27.1  27.8   0.6   27.5
Ref. [5] algorithm   30.5  31.6  31.5  29.3  29.4  31.2  30.6   1.9   30.6
Three parameters are known in advance: f = 10 mm, s_x = 0.011 mm, and T = 1/150 s. The unknown parameter z is computed using the calibration method in Ref. [5], from the true length of the vehicle and its length in pixels in the image. The vehicle speed is measured repeatedly for seven sequential image frames obtained by a video camera. The measurement results using our scheme and the scan-line algorithm in Ref. [5] are shown in Table 3.

In our speed calculation scheme, one key step is the extraction of the alpha matte image. In the simulation experiment, the "Poisson matting" scheme is used; it expects a trimap as part of its input and computes the alpha matte in the mixed region by solving a Poisson equation. However, this scheme may not produce accurate mattes for complicated natural images. The "Scribble matting" scheme proposed by Levin et al. [10] is a better choice here: it makes weaker assumptions on the behavior of the foreground image F and the background image B, which generally leads to a more accurate alpha matte. Therefore, the "Scribble matting" scheme is used in our real experiment.

Next, to test the measurement robustness of our scheme, the experiment is repeated for another moving vehicle image with a complicated background, as illustrated in Fig. 8. The vehicle speed is again measured repeatedly for seven sequential image frames obtained by a video camera. The measurement results using our scheme and the scan-line algorithm in Ref. [5] are shown in Table 4. By comparison, we see again that our improved scheme outperforms the scan-line algorithm in terms of the measurement repeatability 2σ, since the horizontal blur extent K is calculated more accurately and robustly in our improved scheme. In fact, the small measurement repeatability 2σ means that every measurement result is nearly identical to the others.
This good measurement accuracy is due to the application of our improved scheme together with the "Scribble matting" scheme.

6. Conclusions

In this paper, a robust and accurate blur extent calculation scheme is proposed based on image matting and the transparency map, given a suitable camera exposure time and appropriate illumination conditions. This novel scheme can isolate the defocus blur from the motion blur effectively and calculate the horizontal motion blur extent accurately using the blur filter size derived from the transparency map. Therefore, the scheme is well suited to practical speed measurement work, since it can deal with mixed blurs and with moving objects of irregular shape.

Acknowledgements

We thank the reviewers for their valuable comments. This research is supported by the National Fundamental Research Plan Project (973 Project) under Grant No. 2009CB72400603. It is also supported by the Northeast Forestry University Foundation under Grant No. DL09CB11 and the Heilongjiang Province Natural Science Foundation under Application No. 42400625-4-10112.

References

[1] D. Sawicki, Traffic Radar Handbook: A Comprehensive Guide to Speed Measuring Systems, AuthorHouse, 2002.
[2] T. Schoepflin, D. Dailey, Dynamic camera calibration of roadside traffic management cameras for vehicle speed estimation, IEEE Transactions on Intelligent Transportation Systems 4 (2) (2003) 90–98.
[3] J. Dailey, S. Pumrin, An algorithm to estimate mean traffic speed using uncalibrated cameras, IEEE Transactions on Intelligent Transportation Systems 1 (2) (2000) 98–107.
[4] Z. Zhu, B. Yang, G. Xu, D. Shi, A real time vision system for automatic traffic monitoring based on 2D spatiotemporal images, in: Proc. of Workshop on Computer Vision, 1996, pp. 162–167.
[5] H.Y. Lin, K.J. Li, C.H. Chang, Vehicle speed detection from a single motion blurred image, Image and Vision Computing 26 (2008) 1327–1337.
[6] H.Y. Lin, C.H. Chang, Automatic speed measurements of spherical objects using an off-the-shelf digital camera, in: Proc. of 2005 IEEE Conf. Mechatronics, 2005, pp. 66–71.
[7] J. Sun, J. Jia, C.K. Tang, H.Y. Shum, Poisson matting, ACM Transactions on Graphics 23 (3) (2004) 315–321.
[8] Q. Shan, W. Xiong, J. Jia, Rotational motion deblurring of a rigid object from a single image, in: Proc. IEEE ICCV, 2007, pp. 1–8.
[9] J. Jia, Single image motion deblurring using transparency, in: Proc. IEEE CVPR, 2007, pp. 1–8.
[10] A. Levin, D. Lischinski, Y. Weiss, A closed-form solution to natural image matting, IEEE Transactions on PAMI 30 (2) (2008) 228–242.