Vehicle speed detection from a single motion blurred image


Image and Vision Computing 26 (2008) 1327–1337

Huei-Yung Lin *, Kun-Jhih Li, Chia-Hong Chang
Department of Electrical Engineering, National Chung Cheng University, 168 University Road, Min-Hsiung, Chia-Yi 621, Taiwan, ROC
Received 7 April 2006; received in revised form 30 October 2006; accepted 2 April 2007

Abstract

An image-based method for vehicle speed detection is presented. Conventional speed measurement techniques use radar- or laser-based devices, which are usually more expensive compared to a passive camera system. In this work, a single image captured with vehicle motion is used for speed measurement. Due to the relative motion between the camera and a moving object during the camera exposure time, motion blur occurs in the dynamic region of the image. It provides a visual cue for the speed measurement of a moving object. An approximate target region is first segmented and blur parameters are estimated from the motion blurred subimage. The image is then deblurred and used to derive other parameters. Finally, the vehicle speed is calculated according to the imaging geometry, camera pose, and blur extent in the image. Experiments have shown estimated speeds within 5% of actual speeds for both local and highway traffic.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Speed measurement; Motion blur; Motion analysis

1. Introduction

The major purpose of vehicle speed detection is to provide a variety of ways that law enforcement agencies can enforce traffic speed laws. The most popular methods use RADAR (Radio Detection and Ranging) and LIDAR (Light Detection and Ranging) devices to detect the speed of a vehicle [1]. A RADAR device bounces a radio signal off a moving vehicle, and the reflected signal is picked up by a receiver. The traffic radar receiver then measures the frequency difference between the original and reflected signals, and converts it into the speed of the moving vehicle. A LIDAR device records how long it takes for a light pulse to travel from the LIDAR gun to the vehicle and back. Based on this information, LIDAR can quickly calculate the distance between the gun and the vehicle. By making several measurements and comparing the distance the vehicle traveled between

measurements, LIDAR can determine the vehicle's speed accurately. Both of the above methods use active devices, which are usually more expensive compared to a passive camera system. In addition, for the use of photo radars, it is necessary to integrate fast and high resolution imaging devices to capture images for the identification of detected vehicles. In the past few years, video-based approaches have been proposed for both vehicle tracking and speed measurement due to the availability of low cost and high performance imaging hardware [2–4]. Most of the video-based speed estimation algorithms use reference information in the scene, such as the distance traveled across image frames. The speed of the vehicle is then estimated as that distance divided by the inter-frame time. However, due to the limited imaging frame rate (commonly 30 frames per second), the video camera usually has to be installed far away from the vehicle to avoid motion blur.¹ In this work, we propose a novel approach for vehicle speed detection and identification based on a single

* Corresponding author. Tel.: +886 5 272 0411x33224; fax: +886 5 272 0862. E-mail address: [email protected] (H.-Y. Lin).
doi:10.1016/j.imavis.2007.04.004

¹ For a fixed moving speed of the vehicle, the extent of motion blur is inversely proportional to the distance between the vehicle and the camera.


image taken by a stationary camera. Due to the relative motion between the camera and the moving object during an extended camera exposure time, motion blur will occur in the region of the image corresponding to the moving object in the scene. For any fixed time interval, the displacement of the vehicle in the image is proportional to the amount of blur caused by the imaging process. Thus, if the parameters of the motion blur (e.g., the motion length and the orientation) and the relative position between the camera and the object can be identified, the speed of the vehicle can be estimated according to the imaging geometry. Furthermore, for a motion blurred image taken with the license plate, image restoration provides a way to identify the vehicle.

Depending on the imaging process, image degradation caused by motion blur can be classified as either a spatially invariant or a spatially variant distortion. A spatially invariant distortion corresponds to the case in which the image degradation model does not depend on the position in the image. This type of motion blurred image is usually a result of camera movement during the imaging process. Restoration of spatially invariant motion blurred images is a classic problem, and several approaches have been proposed in the past few decades [5–8]. The goal is to find the point spread function (PSF) of the blurring system and then use deconvolution techniques to restore the ideal image. As for spatially variant distortions, the PSF which causes the degradation is a function of position in the image. This type of motion blur usually appears in images containing fast moving objects recorded by a static camera. Image restoration of spatially variant blur is considered a more difficult problem than the spatially invariant case and has been addressed by only a few researchers [9–12].

This paper reports the feasibility of a vehicle speed detection and identification technique using a single motion blurred image. Motion blur has recently been investigated for different application areas, such as image deblurring and increasing image resolution from video sequences [13–16], depth measurement and 3D shape reconstruction [17–19], optical flow calculation [20], motion estimation [21,22], and the creation of computer generated images [23,24]. However, to the best of the authors' knowledge, it has never been used for vehicle speed detection before. We provide a link that establishes the relationship between motion blurred images and the corresponding 3D information. In this study, we consider the common case in which the optical axis of the camera is parallel to the ground. Reasonable agreement on the speeds of moving vehicles is obtained between several other speed measurement methods and the proposed approach using motion blurred images.

The paper is organized as follows. Section 2 introduces the theoretical background of the image degradation model related to this work. In Section 3, we describe

the formulation of speed detection using a motion blurred image. In Section 4, we present the methods to estimate the blur and environment parameters. Experimental results and performance evaluation are provided in Section 5. Finally, in Section 6, some conclusions are drawn.

2. Image degradation under uniform linear motion

The most commonly used image blur model assumes that the whole image is blurred [25,5]. The observed image g(x, y) is modeled as the output of a 2D linear space-invariant system, which is characterized by its point spread function (PSF) h(x, y). Under an additive noise model, the degraded image g(x, y) can be formulated as

$$g(x,y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(x-\alpha,\, y-\beta)\, f(\alpha,\beta)\, d\alpha\, d\beta + n(x,y) \qquad (1)$$

where h(x, y) is a linear shift-invariant PSF, f(x, y) is the ideal image, and n(x, y) is the random noise term. In the case of uniform linear motion, the PSF h(x, y) is given by

$$h(x,y) = \begin{cases} \dfrac{1}{R}, & |x| \le \dfrac{R}{2}\cos\phi,\ \ y = x\tan\phi \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$

where $\phi$ and R represent the motion direction and the length of the motion blur, respectively. To restore the original image, one has to estimate h(x, y), or equivalently $\phi$ and R, from the blurred image g(x, y).

The above image degradation model for linear motion [26,27], however, cannot be directly used to model the motion blur caused by an object moving in front of a still background. The motion blur in this case consists of the blur caused by the mixture of the moving object and the still background around the boundary of the object (defined as partial blur), and the blur induced by the motion inside the object region (defined as total blur). Suppose an object moves a distance R along the direction $\phi$ from the horizontal axis of the image; then the blurred image g(x, y) is given by

$$g(x,y) = \frac{1}{R}\int_0^R f(x - \rho\cos\phi,\, y - \rho\sin\phi)\, d\rho \qquad (3)$$

if (x, y) is in the totally blurred region, and

$$g(x,y) = \frac{1}{R}\left\{ (R - R')\, f_b(x,y) + \int_0^{R'} f(x - \rho\cos\phi,\, y - \rho\sin\phi)\, d\rho \right\} \qquad (4)$$

if (x, y) is in the partially blurred region, where R' < R and f_b(x, y) is the unknown background at point (x, y). If the point (x, y) is not in the motion blurred regions, then g(x, y) is identical to f(x, y). If the motion direction $\phi$ is known, then the above equations can be simplified to the one-dimensional cases


$$g(x) = \frac{1}{R}\int_0^R f(x-\rho)\, d\rho \qquad (5)$$

for the totally blurred region, and

$$g(x) = \frac{1}{R}\left\{ (R - R')\, f_b(x) + \int_0^{R'} f(x-\rho)\, d\rho \right\} \qquad (6)$$

for the partially blurred region, after rotation of the image coordinate system.

In practice, image restoration of spatially variant motion blur is considered a difficult problem due to the unknown background information. Tull and Katsaggelos [11] presented an iterative restoration approach for images blurred by fast moving objects in an image sequence. Kang et al. [12] proposed an image degradation model with a mixture of boundaries in the moving direction of the object. They suggested a spatially adaptive regularized image restoration algorithm based on the proposed degradation model. Since our application focuses on the estimation of motion blur parameters and image restoration of the totally blurred region, such as the license plate of the vehicle, techniques dealing with spatially invariant motion blur (e.g., blind deconvolution and the Wiener filter) are currently used for motion deblurring after region segmentation [28,7]. However, more sophisticated algorithms, such as iterative restoration approaches [11,9], can be used to improve the quality of the restored images.
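To make the one-dimensional degradation model concrete, the following Python sketch (our illustration for this text, not code from the paper) simulates Eqs. (5) and (6) on a single scanline: every blurred pixel averages R samples along the motion path, and any sample falling outside the object support contributes the static background term f_b instead. The object mask and intensity values are hypothetical.

```python
import numpy as np

def blur_scanline(f, fb, mask, R):
    """Discrete sketch of Eqs. (5)-(6) for one scanline.

    f    : object intensities along the scanline
    fb   : static background intensities
    mask : True where the (unblurred) object is present
    R    : blur length in pixels
    """
    N = len(f)
    g = np.empty(N)
    for x in range(N):
        samples = []
        for rho in range(R):
            xi = x - rho
            if 0 <= xi < N and mask[xi]:
                samples.append(f[xi])    # object term of Eq. (5)
            else:
                samples.append(fb[x])    # background term (R - R') f_b of Eq. (6)
        g[x] = np.mean(samples)          # the 1/R normalization
    return g

# Hypothetical example: a bright 40-pixel object over a dark background.
f = np.full(200, 200.0)
fb = np.full(200, 50.0)
mask = np.zeros(200, dtype=bool)
mask[80:120] = True
g = blur_scanline(np.where(mask, f, fb), fb, mask, R=10)
```

Pixels whose entire R-sample window stays inside the object reproduce Eq. (5); pixels near the object boundary mix object and background exactly as in Eq. (6).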

3. Formulation of speed estimation

The proposed method for vehicle speed detection is based on a pinhole camera model. As shown in Fig. 1, suppose the angle between the motion direction of the object and the image plane of the camera is θ, and the displacement of the object is d for a fixed time interval; then we have

$$\frac{b}{p+k} = \frac{d\sin\theta}{f} \qquad (7)$$

and

$$\frac{d\cos\theta - b}{z} = \frac{k}{f} \qquad (8)$$

where p and k are the starting position and displacement of the object on the image plane, respectively. Substituting Eq. (7) into Eq. (8) and removing b, we have

$$d = \frac{zk}{f\cos\theta - (p+k)\sin\theta} \qquad (9)$$

where z is the distance between the object and the camera in the direction parallel to the optical axis and f is the focal length of the camera. If the time interval (camera exposure time) for the displacement is T and the CCD pixel size in the horizontal direction is s_x, then the speed v of the moving object is given by

$$v = \frac{d}{T} = \frac{zKs_x}{T\left[f\cos\theta - s_x(P+K)\sin\theta\right]} \qquad (10)$$

where P and K are the starting position and the length of movement in the image (in pixels), respectively. Thus, if the exposure time, focal length, CCD pixel size, starting position and length of motion blur, and motion direction are available, then the speed of the moving object can be derived from Eq. (10). If we rewrite Eq. (10) as

$$v = \frac{zKs_x}{Tf\cos\theta\left[1 - \frac{s_x}{f}(P+K)\tan\theta\right]} \qquad (11)$$

then the speed v can be approximated by

$$v = \frac{zKs_x}{Tf\cos\theta} \qquad (12)$$

if f ≫ s_x and the angle θ is less than 45° (P is always bounded by the width of the image). In Eq. (10), the focal length f and the CCD pixel size s_x are internal parameters of the camera: f can be obtained from camera calibration, and s_x is given by the manufacturer's data sheet. The exposure time (or shutter speed) T is assigned by the camera settings during image acquisition. Thus, for the speed measurement of a moving object using a motion blurred image, the parameters to be identified are the blur parameters and the relative position and orientation between the object and the camera.

Fig. 1. Camera model for vehicle speed detection.

For the special case in which the object is moving along a direction parallel to the image plane of the camera, the displacement of the object can be obtained using similar triangles for a fixed camera exposure time (see Fig. 2). Thus, Eq. (10) can be further simplified as

$$v = \frac{zKs_x}{Tf} \qquad (13)$$

with the angle θ set to zero. In this case, the position of the object is not required for speed estimation; the only parameter that has to be identified from the recorded image is the length of the motion blur.

Fig. 2. Pinhole model for the special case.
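As a worked illustration of Eqs. (10) and (13) (a sketch we add here, not code from the paper), the speed computation is a one-line formula once the blur and camera parameters are known. The numeric values below are the indoor-experiment settings reported in Section 5.1; small differences from the reported 1329 mm/s come from rounding of the estimated blur length.

```python
import math

def vehicle_speed(K, P, z, f, s_x, T, theta):
    """Eq. (10): speed of the moving object.
    K, P in pixels; z, f, s_x in consistent length units; T in seconds;
    theta in radians. Result is in (length unit) per second."""
    return (z * K * s_x) / (T * (f * math.cos(theta)
                                 - s_x * (P + K) * math.sin(theta)))

# Special case of Eq. (13): theta = 0, so the starting position P drops out.
# Indoor values from Section 5.1: K = 25 px, z = 575 mm, f = 12 mm,
# s_x = 0.011 mm, T = 1/100 s  ->  about 1.32 m/s.
v = vehicle_speed(K=25, P=0, z=575, f=12, s_x=0.011, T=1 / 100, theta=0.0)
```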

4. Parameter estimation and image deblurring

4.1. Blur parameter estimation

The general purpose of blur identification is to estimate the PSF of the blur and then use it for image restoration. Commonly used techniques, such as observing the periodic zero patterns in the frequency domain [29,30], deal with motion blurred images captured under camera motion. For a motion blurred image with an object moving in front of a static scene, the blur parameters to be estimated for speed detection are the starting position and the length of the partial blur along the motion direction.

It is well known that the response of a sharp edge to an edge detector is a thin curve, whereas the response of a blurred edge to the same edge detector spreads over a wider region [31]. Fig. 3(a) shows an image of a static object with a sharp edge (left) and an image of the object in motion (right). The intensity profile along a scanline of the partially blurred region spreads over a number of pixels and can be modeled as a ramp edge, as illustrated in Fig. 3(b). Thus, the starting and end positions of the partial blur can be identified by finding specific ramp edges in the intensity profiles of the image scanlines.

Fig. 3. The intensity profile along an image scanline corresponding to static and motion blurred images.

To calculate the blur length in the horizontal direction, a subimage enclosing the detected vehicle is first extracted from the original motion blurred image. This can be done by either manual selection or segmentation with a background image, as described in the following section. Edge detection is then applied on the subimage to find the left and right blur regions. Ideally, there will be two edges with the same width in each image scanline. Thus, the blur length can be obtained by taking the average of the ramp edge widths from those image scanlines. To make the algorithm more robust under the presence of noise and an imperfect imaging model, a single gradient change is used to approximate the ramp edge and then to identify the blur length. Furthermore, the horizontal Prewitt edge detector is extended to an n × n mask (where n is an odd number) of the following form

$$P_h = \begin{bmatrix}
-1 & \cdots & -\frac{n-1}{2} & 0 & \frac{n-1}{2} & \cdots & 1 \\
-1 & \cdots & -\frac{n-1}{2} & 0 & \frac{n-1}{2} & \cdots & 1 \\
\vdots & & \vdots & \vdots & \vdots & & \vdots \\
-1 & \cdots & -\frac{n-1}{2} & 0 & \frac{n-1}{2} & \cdots & 1
\end{bmatrix} \qquad (14)$$

to emphasize the response in the ramp edge region. Typically, a 5 × 5 mask is used, and a strong response of a ramp edge spreads over a few pixels. Since the partial blur always occurs at the front and back of the moving direction, the symmetry of the blur extent helps remove false motion blurred regions. In addition, the estimation of the blur parameters is done on the image scanlines with less intensity variation near the boundary of the object. For the vehicle speed detection application, this region of the image usually corresponds to the upper part of the vehicle. In our implementation, the steps of the iterative algorithm for the estimation of motion length are as follows (a sketch in code is given after the list):

(1) Apply a 5 × 5 mask given by Eq. (14) to obtain an edge image.
(2) Find the edge widths for each scanline and build a histogram of all edge widths in the edge image. Set Edgewidth equal to the mode of the distribution. Set the number of iterations.
(3) Compare the edge widths in each scanline with Edgewidth. Record the edge widths which are larger than 80% and less than 120% of Edgewidth. (This allows ±20% error for edge width searching based on the mode of the edge width distribution.)
(4) Take the summation of the two largest recorded edge widths for each scanline and build a histogram of the summations. Find the mode of the distribution.
(5) Update Edgewidth with half of the mode value obtained in Step 4.
(6) Go back to Step 2 and repeat until Edgewidth converges or the number of iterations is reached.
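The steps above can be rendered in Python as follows. This is our simplified sketch, not the authors' implementation: the ramp-edge width is approximated by the length of a contiguous run of strong filter response on each scanline, and the response threshold `thresh` is an assumed hyperparameter, not a value from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def extended_prewitt(n):
    """n x n horizontal mask of Eq. (14): each row is
    [-1, ..., -(n-1)/2, 0, (n-1)/2, ..., 1]."""
    half = (n - 1) // 2
    left = -np.arange(1, half + 1)                  # [-1, -2, ..., -half]
    row = np.concatenate([left, [0], -left[::-1]])  # mirrored positive side
    return np.tile(row, (n, 1)).astype(float)

def edge_widths(response_row, thresh):
    """Widths of contiguous runs of strong response on one scanline,
    a crude stand-in for the ramp-edge widths of Step 2."""
    widths, run = [], 0
    for strong in np.abs(response_row) > thresh:
        if strong:
            run += 1
        elif run:
            widths.append(run)
            run = 0
    if run:
        widths.append(run)
    return widths

def estimate_blur_length(subimage, n=5, iters=10, thresh=20.0):
    """Steps 1-6: iterate on the mode of the edge width histogram."""
    resp = convolve2d(subimage.astype(float), extended_prewitt(n), mode="same")
    all_w = [w for row in resp for w in edge_widths(row, thresh)]
    if not all_w:
        return 0.0                                   # no edges found
    ew = float(np.bincount(all_w).argmax())          # Step 2: mode of widths
    for _ in range(iters):
        sums = []
        for row in resp:
            ws = sorted(w for w in edge_widths(row, thresh)
                        if 0.8 * ew <= w <= 1.2 * ew)  # Step 3: +/-20% window
            if len(ws) >= 2:
                sums.append(ws[-1] + ws[-2])           # Step 4: two largest
        if not sums:
            break
        new_ew = np.bincount(sums).argmax() / 2.0      # Step 5: half the mode
        if new_ew == ew:                               # Step 6: convergence
            break
        ew = new_ew
    return ew
```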

4.2. Image deblurring

To restore the best deblurred image and to obtain more accurate motion blur parameters, a sequence of images is created with differing amounts of blur, based upon the lengths of motion blur derived in the previous section. Typically, a sequence of seven images is created for lengths of motion blur from K − 3 to K + 3 pixels. The most focused image in the sequence, and the corresponding length of motion blur, are then selected by computing the SML (sum-modified Laplacian) focus measure [32] on the images.

Due to the ringing effect caused by most image restoration algorithms, motion deblurring is done only on the object region. That is, the (motion blurred) object region is first segmented from the background; image restoration is applied on this region, and the result is then overlaid on the background. The restored image provides not only the identification of the vehicle, but also the information used for obtaining other environment parameters for vehicle speed calculation.

One simple solution for object segmentation is to apply image subtraction with an additional background image. In this case, the images of the static scene are updated occasionally or constantly during speed measurements. The derived difference image is first converted to a binary image with a given threshold. Morphological operators (erosion and dilation) are carried out to remove the holes inside the vehicle region and noise on the background. The resulting image is then used as a template to segment the object. If there is no background image available for image subtraction, edge detection is applied on the motion blurred image, followed by region growing with manual assistance for the vehicle region selection.
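A minimal sketch of the background-subtraction segmentation described above (our illustration; the gray-level threshold and the morphology iteration counts are assumptions, not values from the paper):

```python
import numpy as np
from scipy import ndimage

def segment_vehicle(frame, background, thresh=30):
    """Difference image -> binary mask -> erosion/dilation cleanup,
    following the segmentation procedure of Section 4.2."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > thresh                                  # binarize difference
    mask = ndimage.binary_erosion(mask, iterations=2)     # drop isolated noise
    mask = ndimage.binary_dilation(mask, iterations=2)    # restore object extent
    return ndimage.binary_fill_holes(mask)                # close interior holes
```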

4.3. Environment parameters estimation

To estimate the relative position and orientation between the vehicle and the camera for speed detection, we first consider the special case illustrated in Fig. 2. Suppose the length L of the vehicle is known (it is usually available from the manufacturer); then the distance z from the camera to the vehicle is given by

$$z = \frac{fL}{s_x l} \qquad (15)$$

from similar triangles, where l (in pixels) is the length of the vehicle in the image, f is the focal length of the camera, and s_x is the CCD pixel size. Since the distance z is usually fixed for vehicle speed detection, Eq. (15) gives a good approximation even if the actual size of the vehicle is not available in some cases. If we replace the distance z in Eq. (13) with Eq. (15), then the vehicle speed detection formula for this special case becomes

$$v = \frac{KL}{Tl} \qquad (16)$$

which is independent of the focal length and the CCD pixel size of the camera.
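Eq. (16) reduces the special case to four measurable quantities. A small usage sketch (ours, not the paper's code) with the highway values later reported in Section 5.1 (K = 22 px, L = 4.75 m, T = 1/160 s, l = 570 px) reproduces the reported speed up to rounding:

```python
def speed_from_vehicle_length(K, L, T, l):
    """Eq. (16): v = K * L / (T * l). Units of L carry through to the result."""
    return K * L / (T * l)

v = speed_from_vehicle_length(K=22, L=4.75, T=1 / 160, l=570)  # m/s
print(v * 3.6)  # ~105 km/h, in line with the ~105 km/h of Section 5.1
```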

Next, we consider the general case shown in Fig. 4 (see also Fig. 1). Without loss of generality, suppose the head of the vehicle corresponds to p in the image coordinate system; then the distance z to the tail of the vehicle is given by

$$z = \left[f\cos\theta - (p+l)\sin\theta\right]\frac{L}{l} \qquad (17)$$

where l is the length of the vehicle in the image coordinate frame. Since the depth calculation using Eq. (17) is not possible if the head and tail of the vehicle cannot be identified correctly, we propose another method to estimate the distance, as well as the motion direction of the vehicle, based on the location of the license plate.

Fig. 4. Pinhole model for the general case.

As shown in [33], given a parallelogram in 3D space with known image coordinates of its four corner points, the relative depths of the 3D corner points can be determined. Suppose the four corner points of the license plate are P_i = (X_i, Y_i, Z_i), and the corresponding image points are p_i = (x_i, y_i, f), where i = 0, 1, 2, 3, and f is the focal length of the camera (see Fig. 5). Then we have (X_i, Y_i, Z_i) = (k_i s x_i, k_i s y_i, k_i f), where s is the pixel size and k_i is an unknown scale factor. Since the relative depths of the corner points can be written as l_i = k_i/k_0, for i = 1, 2, 3, we have

$$\begin{pmatrix} l_1 \\ l_2 \\ l_3 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ 1 & 1 & 1 \end{pmatrix}^{-1} \begin{pmatrix} x_0 \\ y_0 \\ 1 \end{pmatrix} \qquad (18)$$

Fig. 5. Projection of four coplanar points.

If the dimension of the license plate is known, say, the width is W, then

$$W^2 = |P_0 - P_1|^2 = s^2(k_0 x_0 - l_1 k_0 x_1)^2 + s^2(k_0 y_0 - l_1 k_0 y_1)^2 + f^2(k_0 - l_1 k_0)^2 \qquad (19)$$

or

$$k_0 = \frac{W}{\sqrt{s^2(x_0 - l_1 x_1)^2 + s^2(y_0 - l_1 y_1)^2 + f^2(1 - l_1)^2}} \qquad (20)$$

That is, k_0 is calculated using l_1 given by Eq. (18), and the k_i are then obtained from the relative depths l_i = k_i/k_0, for i = 1, 2, 3. Finally, the distance between the camera and the center of the license plate can be estimated by the average of the d_i, where $d_i = \sqrt{X_i^2 + Y_i^2 + Z_i^2}$, for i = 0, 1, 2, 3.
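Eqs. (18)–(20) translate directly into a few lines of linear algebra. The sketch below is our illustration under the paper's assumptions (square pixels of size s; corners ordered so that points 0 and 1 span the plate width W):

```python
import numpy as np

def plate_corners_3d(img_pts, W, f, s):
    """Relative depths via Eq. (18), absolute scale k0 via Eq. (20), then the
    3D corners P_i = (k_i*s*x_i, k_i*s*y_i, k_i*f) and the camera-to-plate
    distance as the mean of the corner norms d_i."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = img_pts
    A = np.array([[x1, x2, x3],
                  [y1, y2, y3],
                  [1.0, 1.0, 1.0]])
    l1, l2, l3 = np.linalg.solve(A, np.array([x0, y0, 1.0]))   # Eq. (18)
    k0 = W / np.sqrt(s**2 * (x0 - l1 * x1)**2
                     + s**2 * (y0 - l1 * y1)**2
                     + f**2 * (1.0 - l1)**2)                   # Eq. (20)
    ks = k0 * np.array([1.0, l1, l2, l3])
    corners = np.array([[k * s * x, k * s * y, k * f]
                        for k, (x, y) in zip(ks, img_pts)])
    return corners, np.linalg.norm(corners, axis=1).mean()
```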

4.4. Motion direction estimation

In the above discussion, we have assumed that the direction of motion blur is parallel to the image scanline. If the object is not moving horizontally in the image (e.g., a vehicle is moving uphill or downhill), the image has to be rotated to align the motion direction with the image scanline before further processing. Since the blurring effect mainly occurs in the motion direction, the intensity of high frequency components along this direction should decrease. Thus, the motion direction of the object is perpendicular to the direction of high power components in the Fourier spectrum of the blurred image (see Fig. 7). By detecting the orientation of the maximum response in the Fourier spectrum, the motion direction can be identified. Since the background region of the image affects the result of the spectrum computation, and consequently the motion direction, the foreground region is segmented and used to obtain the corresponding Fourier spectrum.

If we consider a spatial domain approach instead, a derivative of the image in the motion direction should suppress more of the image intensity compared to other directions. Thus, the motion direction relative to the image scanlines is the angle that minimizes the total intensity of the image derivative

$$\sum_{i=1}^{M-1}\sum_{j=1}^{N-1} \left| \Delta f(i,j)[\phi\ \text{degrees}] \right| \qquad (21)$$

where Δf(i, j)[φ degrees] is a discrete approximation of the derivative along the angle φ off the positive horizontal axis, and M and N are the number of rows and columns in the image, respectively [8]. This approach avoids the need to compute a Fourier transform.
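A direct way to evaluate Eq. (21) is to rotate the (segmented) image so that each candidate direction aligns with the scanlines and sum the absolute horizontal differences; the minimizing angle is taken as the motion direction. This sketch is our illustration; the search range and step are assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def motion_direction(image, angles=np.arange(-45.0, 45.5, 0.5)):
    """Eq. (21): the angle phi minimizing the total absolute intensity of
    the directional derivative is the estimated motion direction (degrees)."""
    best_phi, best_energy = 0.0, np.inf
    for phi in angles:
        # Rotating by -phi maps the candidate direction onto the scanlines.
        rot = ndimage.rotate(image.astype(float), -phi, reshape=False, order=1)
        energy = np.abs(np.diff(rot, axis=1)).sum()
        if energy < best_energy:
            best_phi, best_energy = phi, energy
    return best_phi
```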

To use Eq. (10) for vehicle speed detection, we need the angle between the motion direction and the image plane. For the special case in which the vehicle is moving parallel to the image plane, the angle θ is zero (see Fig. 2). For the general case, the angle is nonzero (as shown in Fig. 4) and can be obtained from the relative orientation between the image plane and the plane determined by the license plate of the vehicle, as follows. First, the plane containing the license plate can be obtained from least squares fitting of the four corner points estimated in the previous section. The angle θ is then given by

$$\cos\theta = \frac{\vec{n}\cdot\vec{c}}{|\vec{n}|\,|\vec{c}|} \qquad (22)$$

where $\vec{n} = (n_1, n_2, n_3)$ is the normal vector of the plane determined by the license plate and $\vec{c} = (0, 0, 1)$ represents the direction of the optical axis of the camera. Thus, Eq. (22) can be simplified as

$$\theta = \cos^{-1}\frac{n_3}{|\vec{n}|} \qquad (23)$$

If the 2D case is considered (e.g., the vehicle appears around the center of the image), then $\vec{n} \approx (n_1, 0, n_3)$, and Eq. (23) becomes

$$\theta = \cos^{-1}\frac{n_3}{\sqrt{n_1^2 + n_3^2}} \qquad (24)$$
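Given the 3D corner points recovered in Section 4.3, the plane normal and the angle of Eqs. (22)–(24) follow from a standard least squares fit (our sketch; the absolute value guards against the sign ambiguity of the fitted normal):

```python
import numpy as np

def motion_angle(corners):
    """Eqs. (22)-(23): angle between the license plate plane normal and the
    optical axis c = (0, 0, 1), with the plane fitted to the corner points."""
    centered = corners - corners.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)      # least squares plane fit
    n = vt[-1]                              # normal = smallest singular vector
    return np.arccos(abs(n[2]) / np.linalg.norm(n))   # Eq. (23), radians
```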

5. Experimental results

The proposed vehicle speed detection algorithms have been tested in both an indoor environment and outdoor scenes. A system flowchart is depicted in Fig. 6. Several experiments are described in detail and the results are compared to other speed detection methods. Statistical data for performance evaluation are provided for different testing environments, followed by a discussion on the correctness and limitations of the proposed method.

Fig. 6. The system flowchart for vehicle speed detection: Motion Blurred Image → Image Rotation (if the motion is not horizontal) → Blur Parameter Estimation → Image Deblurring → Environment Parameter Estimation (if no depth information is available) → Vehicle Speed Estimation. The "Horizontal Motion?", "Blur Parameter Estimation", "Image Deblurring", and "Environment Parameter Estimation" stages are discussed in Sections 4.4, 4.1, 4.2 and 4.3, respectively. Detailed information for the "Vehicle Speed Estimation" stage is given in Section 3.

5.1. Motion direction parallel to image plane

For the indoor environment, a remote-controlled toy car placed in front of a static background was used for the experiment (as shown in Fig. 7(a)). The length of motion blur, estimated by the intensity profiles of the image scanlines and refined according to the best focus measure, was found to be 25 pixels. The speed of the moving object was calculated using Eq. (13) with the following parameters: focal length f = 12 mm, CCD pixel size s_x = 0.011 mm, shutter speed T = 1/100 s, and distance to the object z = 575 mm. The estimate of the speed was found to be 1329 mm/s.

To obtain the ground truth of the speed and verify our experimental results, we used a video camera to record a video sequence during still image capture. The frame rate of the video camera was set to 30 fps, and thus the speed of the moving object was given by 30d, where d was the displacement of the object between two image frames. The displacement d was measured physically by setting reference landmarks in the environment. Since motion blur can happen at the frame rate of 30 fps, image deblurring is usually required if the object is moving very fast in the

image. In this experiment, the speed of the object obtained using the video camera was found to be 1307 mm/s, which shows less than a 2% difference compared to our speed estimation method.

As shown in Fig. 8, the second experiment was carried out in an outdoor environment. The length of motion blur was found to be 10 pixels along the image scanlines. The motion blurred region was segmented from the background, and image deblurring was done on the object region to reduce the ringing effect caused by the image restoration algorithm. In this experiment, the distance between the moving vehicle and the camera was unknown, and Eq. (15) was used to calculate the parameter z for the speed detection formula. The length of the vehicle is 3.88 m according to the manufacturer's specification; thus the distance was given by 10.79 m with a focal length f = 10 mm, CCD pixel size s_x = 0.011 mm, and vehicle length l = 325 pixels in the image. Finally, the speed of the moving vehicle was found to be 86.16 km/h using Eq. (13) with a shutter speed T = 1/200 s. Compared to the speed of 86.93 km/h derived from the video-based approach, there is less than a 1% discrepancy between the two speed estimates.


Fig. 7. Fourier spectra of static and moving objects for motion direction estimation.

The last experiment was performed for vehicle speed detection of highway traffic. Fig. 9 shows the recorded motion blurred image and its restoration. The speed of the vehicle was calculated using Eq. (16) with the following parameters: length of motion blur K = 22 pixels in the image, shutter speed T = 1/160 s, vehicle length l = 570 pixels in the image, and vehicle length L = 4750 mm. Thus, the estimated speed was 104.86 km/h according to the speed detection model. The speed of the vehicle obtained from the video camera was 106.11 km/h, and the error is less than 2%.

5.2. Image taken with license plate

To take an image with the license plate of a moving vehicle, the camera was placed approximately 45° off the traffic direction. The length of motion blur was found to be 34 pixels horizontally for the image shown in Fig. 10(a). The four corner points of the license plate in the deblurred image were used to calculate the distance between the vehicle and the camera. The angle between the motion direction and the image plane of the camera was given by Eq. (24). For the image shown in Fig. 10(b), the center of the license plate and the angle θ were found to be (2741, 106, 251) in mm and 48.25°, respectively. Thus, the speed of the vehicle estimated by Eq. (10) was 112.97 km/h with the following parameters: the center of the license plate in the image (167, 67), the width of the license plate W = 320 mm, f = 26 mm, s_x = 0.0068 mm, and T = 1/400 s. In this experiment, it was difficult to obtain an accurate speed estimate using a video camera since the vehicle was moving fast and very close to the camera. Thus, the recorded images were deblurred first to find a few reference features, which were then used for speed computation. An approximation was found to be 110.22 km/h, which is within 2% of the result given by our approach.

Fig. 10. Speed detection with license plate.

5.3. Correctness and limitation of speed detection

As shown in Eq. (10), one important factor for the correctness of the speed detection is the accuracy of the camera shutter speed. If the shutter speed is set as T_i seconds but the actual value is T_a seconds, then the measured speed of a moving vehicle will be T_a/T_i times the actual speed. In this work, it is assumed that the shutter time of a state-of-the-art digital camera is accurate enough for the speed measurement. Another important issue for the correctness of the speed detection is the error introduced during the digitization of the motion blurred image. By Eq. (13), a one pixel difference in the motion length creates a speed measurement difference of (z·s_x)/(T·f). This implies that the smaller the cell size of the CCD sensor, the smaller the speed error contributed by each pixel of blur. Consequently, this problem is mitigated by taking higher resolution images.
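As a worked example of this sensitivity (our arithmetic, using the indoor setup of Section 5.1):

```python
# Speed change per pixel of blur-length error, dv = z * s_x / (T * f), Eq. (13).
z, s_x, T, f = 575.0, 0.011, 1 / 100, 12.0   # mm, mm, s, mm (Section 5.1)
print(z * s_x / (T * f))                      # ~52.7 mm/s of speed per pixel
```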


Fig. 8. Motion blurred and restored images in the outdoor environment.

Fig. 9. Vehicle speed detection of highway traffic.

In the blur parameter estimation stage, we have used a 5 × 5 mask and an edge width range of 80–120% to detect the motion length. To evaluate the system sensitivity to different kernel sizes and edge width ranges, the algorithm was also tested with mask sizes of 5 × 5, 7 × 7 and 9 × 9, and edge width ranges of 80–120% and 90–110%. Table 1 shows the sensitivity analysis using eight images; the image shown in Fig. 9 and another seven images taken from highway traffic correspond to Image Nos. 1 and 2–8 in the table, respectively. The relative errors illustrated in Table 1 indicate that a smaller kernel size usually gives better results, but the edge width range does not show a clear relationship with the system accuracy. Although the processing times vary for different parameter settings, they are all less than 0.3 s when running on a Pentium M 1.6 GHz laptop with 512 MB RAM.

Table 1. System sensitivity analysis (relative error, %) for different kernel sizes and edge width ranges

Mask, width range    Img 1   Img 2   Img 3   Img 4   Img 5   Img 6   Img 7   Img 8
5 × 5, 80–120%         1.8     3.2     0.8     1.7     0.6     1.4     5.3     2.5
5 × 5, 90–110%         8.4     7.4     4.8     6.1     6.1     9.0     3.8     4.3
7 × 7, 80–120%        12.2     7.4     8.8    13.9    16.2     1.4    21.8    16.4
7 × 7, 90–110%        12.2    10.0     8.8     6.1    16.2     2.4     9.8    13.6
9 × 9, 80–120%         3.0    30.1    19.0     5.6     7.3    29.3     5.3     5.9
9 × 9, 90–110%         6.8    38.8    31.0    21.3     7.3    29.0     9.8     5.9

One major limitation of the proposed method is the assumption that there are detectable sharp edges in the motion deblurred image. That is, the captured image of a vehicle should be in focus if the shutter speed is fast enough. Thus, either a rough focus range of the vehicle should be given, or a small aperture should be used to increase the depth of field. The latter might cause low image contrast under poor weather conditions without sufficient illumination. Similarly, the speed detection algorithm might not perform well if the environmental lighting is insufficient (e.g., at sunset or on rainy days) or the shadow of the vehicle is significant.

6. Conclusions

Vehicle speed detection for the purpose of traffic speed law enforcement is currently achieved by radar- or laser-based methods. Compared to passive camera systems, those methods use more expensive active devices and can be discovered by radar or laser detectors (anti-detection devices). In this work, we have proposed a novel approach for vehicle speed detection and identification based on a single motion blurred image taken by a stationary camera. The motion blur parameters and the relative position between the vehicle and the camera are estimated and then used to detect the speed of the moving vehicle according to the imaging geometry. For a motion blurred image taken with the license plate, image restoration provides a way to identify the vehicle. We have established a link between the motion blur information of a 2D image and the speed information of a moving object. Experimental results have shown that speed estimates derived from blur are on average within 5% of speed estimates derived using more conventional alternatives.


Acknowledgement

The support of this work in part by the National Science Council of Taiwan, R.O.C., under Grant NSC94-2213-E-194-041 is gratefully acknowledged.

References


[1] D. Sawicki, Traffic Radar Handbook: A Comprehensive Guide to Speed Measuring Systems, AuthorHouse, 2002.
[2] T. Schoepflin, D. Dailey, Dynamic camera calibration of roadside traffic management cameras for vehicle speed estimation, IEEE Transactions on Intelligent Transportation Systems 4 (2) (2003) 90–98.
[3] D.J. Dailey, F.W. Cathey, S. Pumrin, An algorithm to estimate mean traffic speed using uncalibrated cameras, IEEE Transactions on Intelligent Transportation Systems 1 (2) (2000) 98–107.
[4] Z. Zhu, B. Yang, G. Xu, D. Shi, A real-time vision system for automatic traffic monitoring based on 2D spatio-temporal images, Workshop on Applications of Computer Vision (1996) 162–167.
[5] M. Sondhi, Image restoration: the removal of spatially invariant degradations, Proceedings of the IEEE 60 (7) (1972) 842–853.
[6] D. Slepian, Restoration of photographs blurred by image motion, Bell System Technical Journal 46 (10) (1967) 2353–2362.
[7] M. Cannon, Blind deconvolution of spatially invariant image blurs with phase, IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-24 (1) (1976) 58–63.
[8] Y. Yitzhaky, I. Mor, A. Lantzman, N. Kopeika, Direct method for restoration of motion blurred images, Journal of the Optical Society of America 15 (6) (1998) 1512–1519.
[9] H. Trussell, S. Fogel, Identification and restoration of spatially variant motion blurs in sequential images, IEEE Transactions on Image Processing 1 (1) (1992) 123–126.
[10] M. Ozkan, A. Tekalp, M. Sezan, POCS-based restoration of space-varying blurred images, IEEE Transactions on Image Processing 3 (4) (1994) 450–454.
[11] D. Tull, A. Katsaggelos, Iterative restoration of fast-moving objects in dynamic image sequences, Optical Engineering 35 (12) (1996) 3460–3469.
[12] S. Kang, Y. Choung, J. Paik, Segmentation-based image restoration for multiple moving objects with different motions, International Conference on Image Processing (1999) 376–380.
[13] A. Rav-Acha, S. Peleg, Restoration of multiple images with motion blur in different directions, Workshop on Applications of Computer Vision (2000) 22–28.
[14] Y. Yitzhaky, B.B.L.Y.N. Kopeika, Restoration of an image degraded by vibrations using only a single frame, Optical Engineering 39 (8) (2000) 2083–2091.
[15] B. Bascle, A. Blake, A. Zisserman, Motion deblurring and super-resolution from an image sequence, European Conference on Computer Vision (1996) II:573–II:582.
[16] M. Ben-Ezra, S. Nayar, Motion-based motion deblurring, IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (6) (2004) 689–698.
[17] J. Fox, Range from translational motion blurring, IEEE Computer Vision and Pattern Recognition (1988) 360–365.
[18] P. Favaro, S. Soatto, A variational approach to scene reconstruction and image segmentation from motion-blur cues, IEEE Computer Vision and Pattern Recognition (2004) I:631–I:637.
[19] Y. Wang, P. Liang, 3D shape and motion analysis from image blur and smear: a unified approach, International Conference on Computer Vision (1998) 1029–1034.
[20] I.M. Rekleitis, Optical flow recognition from the power spectrum of a single blurred image, in: International Conference on Image Processing, IEEE Signal Processing Society, Lausanne, Switzerland, 1996.
[21] W. Chen, N. Nandhakumar, W. Martin, Image motion estimation from motion smear: a new computational model, IEEE Transactions on Pattern Analysis and Machine Intelligence 18 (4) (1996) 412–425.
[22] D. Tull, A. Katsaggelos, Regularized blur-assisted displacement field estimation, International Conference on Image Processing (1996) 85–88.
[23] G. Brostow, I. Essa, Image-based motion blur for stop motion animation, in: SIGGRAPH '01 Conference Proceedings, ACM SIGGRAPH, 2001, pp. 561–566.
[24] M. Potmesil, I. Chakravarty, Modeling motion blur in computer-generated images, in: Proceedings of the 10th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press, 1983, pp. 389–399.
[25] M. Banham, A. Katsaggelos, Digital image restoration, IEEE Signal Processing Magazine 14 (2) (1997) 24–41.
[26] C. Mayntz, T. Aach, D. Kunz, Blur identification using a spectral inertia tensor and spectral zero crossings, International Conference on Image Processing (1999).
[27] Y. Yitzhaky, N. Kopeika, Identification of blur parameters from motion blurred images, Graphical Models and Image Processing 59 (5) (1997) 310–320.
[28] H. Andrews, B. Hunt, Digital Image Restoration, Prentice Hall, 1977.
[29] R. Fabian, D. Malah, Robust identification of motion and out of focus blur parameters from blurred and noisy images, Graphical Models and Image Processing 53 (5) (1991) 403–412.
[30] M. Chang, A. Tekalp, A. Erdem, Blur identification using the bispectrum, Signal Processing 39 (10) (1991) 2323–2325.
[31] R. Gonzalez, R. Woods, Digital Image Processing, second ed., Prentice Hall, 2001.
[32] S. Nayar, Y. Nakagawa, Shape from focus, IEEE Transactions on Pattern Analysis and Machine Intelligence 16 (8) (1994) 824–831.
[33] C. Chen, C. Yu, Y. Hung, New calibration-free approach for augmented reality based on parameterized cuboid structure, International Conference on Computer Vision (1999) 30–37.