3D Road Reconstruction from a Single View

COMPUTER VISION AND IMAGE UNDERSTANDING

Vol. 70, No. 2, May 1998, pp. 212–226, Article No. IV970633

Antonio Guiducci*
Systems Engineering Department, Istituto Elettrotecnico Nazionale "Galileo Ferraris," Strada delle Cacce 91, 10135 Torino, Italy

Received October 23, 1996; accepted April 3, 1997

A new algorithm is presented for the local 3D reconstruction of a road from its image plane boundaries. Given, on the same scan line, the images of two boundary points of the road and the tangents to the boundaries at the same two points, the algorithm computes the position in 3D space of the road cross segment that has one of the two points as its end point. The approximations introduced are fully discussed, and formulas are given for the determination and comparison of the various sources of error. The algorithm is very fast and has been implemented in real time on the mobile laboratory under development at the author's institution. © 1998 Academic Press

1. INTRODUCTION

The determination of the three-dimensional (3D) geometry of the road ahead of a running vehicle is of utmost practical importance in the development of autonomous navigation capabilities. Such a 3D reconstruction can be obtained either directly, by means of radar or stereoscopic sensors, or indirectly from the 2D measurement of the position of the road borders on the image plane (perhaps augmented by some knowledge of the motion parameters of the vehicle as given, for example, by an odometer or a speedometer). Moreover, the reconstruction can either be based on the data from a single frame or rely on the integration of data gathered over time, as in Dickmanns [1]. In this paper, the problem of recovering the 3D structure of the road from its projected borders in a single monocular image is addressed.

Many solutions to this problem exist in the literature. They differ in the model assumed to idealize real roads. The simplest model is the "flat-earth" geometry model [2, 3], in which the vehicle is supposed to lie on a flat and horizontal road, and the 3D location of every point on both road borders can be determined as the intersection with the ground plane of the straight line through the projection center of the camera and the image plane point. The flat-earth hypothesis can be relaxed by assuming a model in which the road is straight and has a constant width. The road is reconstructed by forcing it to move up and down from the flat-earth plane so as to retain its constant width.

The reconstruction of a constant-width, undulating, and curved road requires the identification of pairs of edge points separated by a road width (cross segment end points). DeMenthon [4] proposed a discrete numerical method based on the "zero-bank" model, in which the road is described by its centerline and by horizontal line segments of constant length, normal to the centerline at their midpoints. His method iteratively computes the road shape from a pair of starting points whose 3D position is known. Kanatani [5] proposed a method based on the same road model as DeMenthon's, consisting of the numerical integration of a set of differential equations that relate the 3D road geometry to the projected image. Both methods [4, 5] are not local; that is, the reconstruction of the 3D position of a road cross segment requires the reconstruction of the whole road. Hence, reconstruction errors in some places may affect the reconstruction far away. By introducing the further assumption that the tangents to the road edges at the end points of cross segments are parallel, DeMenthon [6] showed that pointwise reconstruction is possible. For each point on, say, the left border of the road image, the corresponding point on the right border (i.e., the other end point of the cross segment) must satisfy a constraint relation. Other right border points may satisfy the same relation, however, and dynamic programming is used to search for a globally consistent solution, if it exists. By combining the image geometry of perspective projection with the road model of DeMenthon [6], Kanatani and Watanabe [7] rewrote the equation that relates the two end points of a cross segment in terms of unit vectors from the projection center of the TV camera (N-vectors) and proposed an approximation technique to solve that equation. The resulting 3D reconstruction is indeterminate wherever the line of sight is horizontal, that is, wherever the projection of the road crosses the horizon in the image. To overcome this difficulty and to improve the smoothness of the reconstructed shape, the authors proposed a 3D curve fitting approach.

In this paper, both the projective geometry approach of Kanatani and Watanabe [7] and the modified zero-bank road model of DeMenthon [6] are used. However, the difficulties of Kanatani's approach are avoided by introducing suitable approximations (whose effects are fully discussed in the text).

* Fax: +38.11.346384. E-mail: [email protected].
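As a minimal illustration of the flat-earth model (our own sketch, not code from the paper, with made-up numbers), the following back-projects an image point onto a horizontal ground plane at a known camera height H, using the coordinate conventions of Section 2 (η downward, ζ along the optical axis):

```python
# Flat-earth back-projection: a hypothetical sketch, not the paper's code.
# Camera frame as in Section 2: eta points downward, zeta along the optical
# axis, and the flat horizontal ground lies at eta = H below the camera.

def ground_point(x, y, H):
    """Intersect the ray through image point (x, y) with the ground plane.

    The ray is t*(x, y, 1); it meets eta = H at t = H/y, so the point must
    lie below the horizon (y > 0).
    """
    if y <= 0:
        raise ValueError("point on or above the horizon: no ground intersection")
    t = H / y
    return (t * x, t * y, t)

# A border point seen at (x, y) = (0.0, 0.5) by a camera 1.5 m above the
# ground back-projects to a point 3 m ahead:
print(ground_point(0.0, 0.5, 1.5))  # (0.0, 1.5, 3.0)
```

The indeterminacy of this model at the horizon (y → 0) is the simplest instance of the line-of-sight degeneracies discussed later in the paper.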

1077-3142/98 $25.00
© 1998 by Academic Press. All rights of reproduction in any form reserved.


The proposed approach is truly local in the sense that, given, on the same scan line, the images of two boundary points of the road and the tangents to the boundaries at the same two points, the algorithm computes the position in 3D space of the road cross segment that has one of the two points as its end point. Hence, correct reconstruction is possible wherever both road boundaries are visible on a scan line (or better, on a small neighborhood of it, in order to compute tangents), even if nearby the road borders are hidden by obstacles or not detected owing to shadows or some other cause. The algorithm does not suffer from geometric failures except where the reconstruction is physically impossible, i.e., where the line of sight is tangent to the road plane and the road itself is thus vanishing from view. Moreover, the road reconstruction, as proposed, requires neither a parametrization of the road borders on the image plane nor a curve fitting of the reconstructed 3D points across the horizon. The purpose of the algorithm is the 3D reconstruction of an isolated road cross segment (with an estimation of the reconstruction error), given, on the same scan line, two border points of the road and the corresponding tangent directions. Hence, it is not a complete reconstruction system but only an input to such a system. Indeed, a complete reconstruction system requires the linking of the isolated points to form a global 3D reconstruction, discarding the outliers. Moreover, such a system could profitably track the reconstructed road over a sequence of images. The raw data output by the proposed algorithm already give a quite satisfactory reconstruction, as will be shown by its application to real road images. Moreover, the algorithm is fast.
It has been implemented on the Mobile Laboratory under development at the author's institution [8], and the complete processing chain, including the road border extraction, the "on-road" calibration, and the 3D reconstruction, runs at a frame speed of 12 images per second using two TMS320C30 DSPs. After a brief explanation of the notation used throughout (Section 2), the paper describes the extrinsic calibration of the image acquisition system (Section 3), that is, the determination of the position and orientation of the camera reference with respect to the vehicle, its height above the ground, and, frame by frame, while the vehicle is running, the road width and the position of the vehicle inside the road. Section 4 describes how the 3D position of a generic road cross segment can be determined. The algorithm uses as input the image position of one of the cross segment end points (say, the left border point), the image position of the point on the right border on the same scan line, and the tangents to the road borders at both points. The reconstruction algorithm is based on some approximations whose effect on the reconstruction accuracy is discussed in Section 5 and compared to the effect of the unavoidable "image plane noise."

Finally, the application of the algorithm to a sequence of highway images in a real-time setting is presented in Section 6, and some concluding remarks are given in Section 7.

2. NOTATION

The following notations are used throughout. The camera axes are indicated by ξ, η, and ζ, with ζ in the direction of the optical axis, ξ in the direction of the scan lines, and η downward (in the direction of increasing scan line numbers) to form a right-handed reference frame (see Fig. 1). The image plane is at a distance f = 1 from the projection center O along the optical axis ζ, and the image plane coordinates are (x, y), with the x and y axes parallel to ξ and η, respectively. The unit vectors of the axes (camera triad) are indicated by ξˆ, ηˆ, and ζˆ, where a caret is used to denote unit vectors,

aˆ ≡ N[a] ≡ a/|a|.    (1)

Three more reference frames are introduced (all with origin at O): a vehicle reference X, Y, Z, whose unit vectors (vehicle triad) are denoted by uˆv, vˆv, wˆv, with wˆv in the vehicle heading direction, uˆv horizontal (when the vehicle is standing still on a horizontal plane), and vˆv determined by the right-hand rule; a world triad uˆ, vˆ, wˆ (see Fig. 2), oriented as the road at the vehicle position (wˆ is in the direction of the road, vˆ is orthogonal to the road plane, downward, and uˆ is determined by the right-hand rule); finally, to a generic point P on the road (left) boundary, far from the vehicle, is associated a road triad uˆP, vˆP, wˆP, again with wˆP in the road direction and vˆP perpendicular to the road plane at P. In Fig. 2, as well as in the text, the symbol vˆ0 is associated with the unit vector in the vertical direction, that is, the unit vector orthogonal to the horizontal plane. Moreover, for the sake of clarity, the unit vectors of the world and road triads are not drawn at O.

FIG. 1. The camera reference frame.
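Anticipating Eqs. (2)–(5) below, the N-vector operations used throughout can be sketched in code (a minimal illustration with our own helper names, not code from the paper):

```python
import math

# A sketch of the N-vector notation of Section 2 (helper names are ours).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def N(a):
    """N[a] = a/|a|, Eq. (1)."""
    n = math.sqrt(dot(a, a))
    return (a[0]/n, a[1]/n, a[2]/n)

def m_hat(x, y):
    """Unit vector toward image point (x, y), Eq. (2); zeta component >= 0."""
    return N((x, y, 1.0))

def n_hat(a, b, c):
    """Unit normal of the plane through O containing the image line ax + by + c = 0, Eq. (3)."""
    return N((a, b, c))

def point_on(n1, n2):
    """'Image point' at the intersection of two 'image lines', Eq. (4)."""
    m = N(cross(n1, n2))
    return m if m[2] >= 0 else (-m[0], -m[1], -m[2])

def line_through(m1, m2):
    """'Image line' through two 'image points', Eq. (5) (sign left free)."""
    return N(cross(m1, m2))

# The image lines x = 0 and y = 0 intersect at the principal point (0, 0):
print(point_on(n_hat(1.0, 0.0, 0.0), n_hat(0.0, 1.0, 0.0)))  # (0.0, 0.0, 1.0)
```

Note how intersections and joins reduce to cross products of unit vectors; this is the computational core of everything that follows.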


For the reader's convenience, the reference frames defined above are summarized in Table 1.

Following Kanatani [9], we denote by mˆ the unit vector from O in the direction of the image point (x, y),

mˆ = (x, y, 1)ᵀ / √(x² + y² + 1),    (2)

where the sign of mˆ has been fixed in such a way that mˆ · ζˆ ≥ 0. Note that all 3D lines parallel to mˆ have the image point (x, y) as their vanishing point.

Likewise, nˆ denotes the unit vector perpendicular to the plane through O and the image line of equation ax + by + c = 0 (that is, to the 3D plane of equation aξ + bη + cζ = 0),

nˆ = (a, b, c)ᵀ / √(a² + b² + c²).    (3)

The sign of nˆ can be chosen arbitrarily and will be fixed when needed. Moreover, all 3D planes perpendicular to nˆ have the image line ax + by + c = 0 as their vanishing line.

In what follows, the sentence "the image point mˆ" is shorthand for the image point defined by the intersection of the straight line through O in the direction of mˆ with the image plane, and "the image line nˆ" is shorthand for the image line defined by the intersection of the plane through O perpendicular to nˆ with the image plane. It is worth noting that the "image point" mˆ determined by the intersection of the two "image lines" nˆ1 and nˆ2 is

mˆ = ±N[nˆ1 × nˆ2]    (4)

with the sign fixed by the convention mˆ · ζˆ ≥ 0, and that the "image line" nˆ through the "image points" mˆ1 and mˆ2 is

nˆ = ±N[mˆ1 × mˆ2]    (5)

with the sign fixed as needed.

The camera frame and the world frame are connected by the rotation matrix Rᵀ = [uˆ, vˆ, wˆ]; that is, if ξ ≡ (ξ, η, ζ) is a 3D point in the camera reference and Xw ≡ (Xw, Yw, Zw) is the same point in the world reference, we have Xw = Rξ. In other words, the "image point" mˆ is the image of the 3D point whose camera coordinates are ξ = d mˆ and whose world coordinates are Xw = d R mˆ = d[(uˆ · mˆ)uˆ + (vˆ · mˆ)vˆ + (wˆ · mˆ)wˆ], where d is the distance of the 3D point from the projection center O.

FIG. 2. The vehicle, world, and road reference frames.

TABLE 1
Symbols for the Reference Frames Used

Unit vector   Definition                                                        Reference frame
ξˆ            in the direction of the scan lines                                camera frame
ηˆ            in the direction of increasing scan line numbers                  camera frame
ζˆ            in the direction of the optical axis                              camera frame
uˆv           orthogonal to vˆv and wˆv: uˆv = N[vˆv × wˆv]                     vehicle frame
vˆv           orthogonal to the ground plane (when the vehicle is standing still)   vehicle frame
wˆv           in the vehicle heading direction                                  vehicle frame
uˆ            orthogonal to vˆ and wˆ: uˆ = N[vˆ × wˆ]                          world frame
vˆ            orthogonal to the road plane at the vehicle position              world frame
wˆ            in the direction of the road at the vehicle position              world frame
uˆP           orthogonal to vˆP and wˆP: uˆP = N[vˆP × wˆP]                     road frame
vˆP           orthogonal to the road plane at P                                 road frame
wˆP           in the direction of the road at P                                 road frame

3. CALIBRATION

The calibration consists of the determination of the mapping between the pixel plane of the CCD camera and the 3D world directions. The intrinsic calibration maps each pixel to a unit vector in the camera reference ξˆ, ηˆ, ζˆ (Fig. 1) and can be assumed to be known, obtained by any of the standard procedures, such as Tsai's [10]. The extrinsic calibration gives the position and orientation of the camera reference with respect to a suitably chosen world reference frame uˆ, vˆ, wˆ (Fig. 2). Since the purpose of this paper is the 3D reconstruction of the road ahead of the vehicle at a given instant of time, the most convenient choice of the world reference is a reference oriented as the road at the vehicle position, that is, with the wˆ unit vector pointing in the road direction and the vˆ unit vector orthogonal to the road plane (and downward, so that uˆ = vˆ × wˆ is in the general direction of the scan lines of the camera). We can think of uˆ, vˆ, wˆ as a triad of unit vectors sliding along the road with the vehicle velocity, and it is with respect to this reference that the 3D position of the reconstructed road segments will be given.


Then, the extrinsic calibration consists of the determination of the components of the unit vectors of the world reference uˆ, vˆ, wˆ in the camera reference (that is, of the rotation matrix between the two references) and of the instantaneous position of the camera with respect to the road (that is, the height H above the ground and the distances Wl and Wr from the left and right borders of the road).

The link between the world and the camera reference is the vehicle reference uˆv, vˆv, wˆv (Fig. 2), which is fixed with respect to the camera and coincides with the world reference when the vehicle is still, with its heading direction along the road.

Extrinsic calibration is accomplished in two steps. The first step is performed off-line, that is, once and for all, and consists of the determination of both the orientation of the vehicle triad with respect to the camera reference and the height H of the camera. The second step (the so-called on-road calibration) consists of the determination, frame by frame, while the vehicle is running, of the orientation of the vehicle triad with respect to the world triad and of the distances Wl and Wr of the camera from the borders of the road (and then, also, of the instantaneous road width W = Wl + Wr).

3.1. Determination of the Vehicle Triad

The vehicle triad uˆv, vˆv, wˆv is defined such that vˆv is normal to the ground plane (the plane under the vehicle), wˆv is in the vehicle heading direction, and uˆv is determined by the right-hand rule. Hence, vˆv is determined by Eq. (3) from the position on the image plane of the vanishing line of the ground plane, that is, of the horizon line. To determine the position of the latter, choose a straight horizontal road and frame a sequence of images while the vehicle performs a slow turn. The horizon line is then the straight line through the vanishing points (extrapolated intersections) of the road boundaries in the sequence.
Likewise, wˆv (heading direction of the vehicle) is determined by Eq. (2) from the image plane coordinates of the focus of expansion (FOE), that is, the average of the vanishing points of the road boundaries in a sequence of images taken while the vehicle is traveling along a straight horizontal road. The third unit vector uˆv of the vehicle triad is given by the right-hand rule: uˆv = vˆv × wˆv.

3.2. Determination of the Camera Height

The height H of the camera above the ground can be determined from the positions on the image plane of the left and right boundaries of a straight horizontal road of known width W0 (see Fig. 3) by

H = W0 | [uˆ, mˆR, nˆr]/[vˆ, mˆR, nˆr] − [uˆ, mˆR, nˆl]/[vˆ, mˆR, nˆl] |⁻¹.    (6)

FIG. 3. Extrinsic calibration, determination of the height of the camera above the ground.

In this equation:

• [aˆ, bˆ, cˆ] stands for the triple product aˆ · (bˆ × cˆ);
• nˆl and nˆr are the "image lines" of the left and right road borders computed using Eq. (3) with the sign conventions nˆl · ξˆ > 0 and nˆr · ξˆ > 0;
• mˆR = N[nˆr × nˆl] is the road vanishing point (intersection of the road borders);
• vˆ is the unit vector orthogonal to the road plane, and then vˆ ≡ vˆv for a horizontal road; and
• uˆ = vˆ × mˆR.

Indeed, with the sign convention stated, the unit vectors nˆl and nˆr are directed as shown in Fig. 4 and, from the same figure,

PN = −H [uˆ · (mˆR × nˆl)] / [vˆ · (mˆR × nˆl)] uˆ,    NQ = H [uˆ · (mˆR × nˆr)] / [vˆ · (mˆR × nˆr)] uˆ.    (7)

From W0 = |PN + NQ|, Eq. (6) follows.

3.3. On-Road Calibration

By "on-road calibration" we mean the determination, frame by frame, while the vehicle is running, of the world reference triad uˆ, vˆ, wˆ (that is, of the attitude of the vehicle), of the position of the vehicle inside the road (distance from the road borders), and of the instantaneous road width W. We neglect here the changes with time of the height H of the camera above the ground.

The world triad uˆ, vˆ, wˆ is defined to be oriented as the road at the vehicle position, with wˆ pointing in the road direction and vˆ orthogonal to the road plane. It differs from the vehicle triad uˆv, vˆv, wˆv because:

1. the heading wˆv of the vehicle changes with respect to the road due to the steering corrections of the driver, and
2. while running, the vehicle shakes on its suspensions, and the instantaneous horizon (vanishing line of the ground plane, orthogonal to vˆ) does not coincide with the calibrated horizon (vanishing line of the plane orthogonal to vˆv).
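A numerical sketch of Eq. (6) on synthetic data (our own example, not from the paper): a straight horizontal road of known width W0 is imaged by a camera whose axes are aligned with the road, and the height H is recovered from the two border lines. All geometry values below are made up.

```python
import math

# Recover the camera height H via Eq. (6), on a synthetic straight flat road.
# Camera frame as in Section 2; the ground plane is at eta = H, the left and
# right borders at lateral offsets -Wl and +Wr (made-up values).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def N(a):
    n = math.sqrt(dot(a, a))
    return (a[0]/n, a[1]/n, a[2]/n)

def triple(a, b, c):
    return dot(a, cross(b, c))          # [a, b, c] = a . (b x c)

H_true, W_l, W_r = 1.5, 1.7, 2.3
W0 = W_l + W_r                          # known road width

# A border at lateral offset xi0 projects to the image line H*x - xi0*y = 0
# (signs chosen so that n . xi_hat > 0, as in the text).
n_l = N((H_true,  W_l, 0.0))            # left border,  xi0 = -W_l
n_r = N((H_true, -W_r, 0.0))            # right border, xi0 = +W_r

m_R = N(cross(n_r, n_l))                # road vanishing point
v = (0.0, 1.0, 0.0)                     # horizontal road: v = v_v
u = cross(v, m_R)

# Eq. (6)
H = W0 / abs(triple(u, m_R, n_r) / triple(v, m_R, n_r)
             - triple(u, m_R, n_l) / triple(v, m_R, n_l))
print(round(H, 6))                      # recovers 1.5
```

The two triple-product ratios are exactly Wr/H and −Wl/H for this geometry, so their difference is W0/H, which is how Eq. (6) inverts to the height.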


FIG. 4. Extrinsic calibration, determination of the position of the camera with respect to the road.

The world triad can be (approximately) determined by

uˆ = vˆv × mˆR    (8)
vˆ = vˆv − (mˆR · vˆv)mˆR    (9)
wˆ = mˆR,    (10)

where mˆR is the unit vector toward the vanishing point R of the portion of visible road nearest to the camera (see Fig. 5a), that is, mˆR = +N[nˆr × nˆl]. Indeed, if the radius of curvature and the change of slope of the road are not too great, the portion of the road framed by the camera can be considered planar and straight, and R lies on the instantaneous horizon in the heading direction and then coincides with wˆ. For small swings, uˆ = vˆ × wˆ = vˆ × mˆR ≃ vˆv × mˆR, and the instantaneous horizon can be approximated by vˆ = wˆ × uˆ = mˆR × uˆ ≃ mˆR × (vˆv × mˆR) = vˆv − (mˆR · vˆv)mˆR.

Remembering (7), the position of the vehicle inside the road and the instantaneous road width W (Fig. 5b) can be written as

Wl = −H [uˆ · (mˆR × nˆl)] / [vˆ · (mˆR × nˆl)]    (11)
Wr = H [uˆ · (mˆR × nˆr)] / [vˆ · (mˆR × nˆr)]    (12)
W = Wl + Wr.    (13)

In these equations, Wl (distance from the left road border) is less than zero if the vehicle is to the left of the left border, and Wr (distance from the right border) is less than zero if the vehicle is to the right of the right border. Hence, as defined, W is always positive.

The angle θ of the heading direction wˆv with respect to the road direction mˆR ≡ wˆ is given by

tan θ = [wˆv · (mˆR × vˆ)] / (wˆv · mˆR).    (14)
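The on-road calibration equations (8)–(14) can be exercised on the same kind of synthetic straight-road geometry used above (our own made-up values, not the paper's):

```python
import math

# On-road calibration, Eqs. (8)-(14), on a synthetic straight flat road.
# Made-up geometry: camera height H = 1.5 m, vehicle 1.7 m from the left
# border and 2.3 m from the right one, heading straight down the road.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def N(a):
    n = math.sqrt(dot(a, a))
    return (a[0]/n, a[1]/n, a[2]/n)

H = 1.5
n_l = N((H,  1.7, 0.0))                 # image line of the left border
n_r = N((H, -2.3, 0.0))                 # image line of the right border
v_v = (0.0, 1.0, 0.0)                   # calibrated "down" direction
w_v = (0.0, 0.0, 1.0)                   # calibrated heading direction

m_R = N(cross(n_r, n_l))                # vanishing point of the nearest stretch
u = cross(v_v, m_R)                                           # Eq. (8)
v = tuple(v_v[i] - dot(m_R, v_v)*m_R[i] for i in range(3))    # Eq. (9)
w = m_R                                                       # Eq. (10)

W_l = -H * dot(u, cross(m_R, n_l)) / dot(v, cross(m_R, n_l))  # Eq. (11)
W_r =  H * dot(u, cross(m_R, n_r)) / dot(v, cross(m_R, n_r))  # Eq. (12)
W = W_l + W_r                                                 # Eq. (13)
theta = math.atan2(dot(w_v, cross(m_R, v)), dot(w_v, m_R))    # Eq. (14)

print(round(W_l, 6), round(W_r, 6), round(W, 6), round(theta, 6))
```

With these numbers the vehicle is found 1.7 m from the left border and 2.3 m from the right one (W = 4.0) with zero heading angle, as constructed.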


In the above formula, θ is positive when the vehicle heads toward the left border of the road.

FIG. 5. On-road calibration, (a) the instantaneous horizon and (b) the vehicle position and heading.

4. ROAD RECONSTRUCTION

The calibration procedure provides the width W of the road and the position and orientation of the vehicle with respect to its borders, that is, the components in the camera reference of the unit vectors wˆ and uˆ, respectively along and across the road, and vˆ orthogonal to its plane, the distance Wl of the vehicle from the left border of the road, and its height H above the ground.

Our goal is to determine the 3D structure of the road under the hypothesis that the width W is constant along its visible portion. To this end we adopt the following road model [6, 7]: a road is generated by the displacement of a horizontal line segment of constant width W, in such a way as to be always orthogonal to its trajectory. This model is obviously only an approximation (banked roads are not allowed), although a very good one.

4.1. Reconstruction Algorithm

If P is a generic point on the left border of the road, the purpose of the reconstruction algorithm is to determine the 3D position of P and of the point Q that corresponds to P on the right border (that is, of the two end points of the segment that describes the road in our model). The 3D position of the segment PQ can be computed, as shown by Kanatani and Watanabe [7], if the image positions of P and Q and the road width W are known. In our case, P is given and W is known from the on-road calibration. It remains to determine the image position of Q.

Let us consider the point S (Fig. 6) on the right border and on the same scan line as P (in a typical setup the scan lines are almost parallel to the horizon line). Let nˆP and nˆS be the corresponding image lines tangent to the road at P and S, respectively, and let R be their intersection. Let mˆP, mˆS, and mˆR be the unit vectors from O to P, S, and R, respectively (with the signs chosen in such a way that their ζˆ components are positive). We choose the (arbitrary) signs of nˆP and nˆS so that nˆP · ξˆ ≥ 0 and nˆS · ξˆ ≥ 0 (with this sign convention we have nˆP = +N[mˆP × mˆR] and nˆS = +N[mˆS × mˆR]).

We demonstrate now that the image position of the end point Q of the road cross segment at P can be computed by approximating the images of the left and right borders of the road in a neighborhood of PQ by the tangents nˆP and nˆS (we shall examine the validity of this and subsequent approximations in the following section). The procedure can be summarized as follows.

FIG. 6. The image of the road with the scan line PS.

1. Compute the road triad uˆP, vˆP, wˆP (Section 2), where wˆP is the unit vector in the direction of the road, vˆP is perpendicular to the road plane at P, and uˆP = vˆP × wˆP is in the direction PQ from P to Q. In our approximation, wˆP is given by the intersection of the two tangents nˆP and nˆS, while uˆP and vˆP are computed by exploiting our model of the road, which constrains the


segment PQ to be horizontal and orthogonal to the road direction at P.
2. Determine the vanishing point T of the 3D line PQ as the image point defined by uˆP (remember that the vanishing point of a 3D line is determined by the unit vector in the direction of the line, Eq. (2)).
3. Approximate the sought image of the point Q by the intersection Q′ of the image line TP with the tangent nˆS at S.
4. Having determined the (approximate) image of the point Q (that is, the unit vector mˆQ from O to Q) and the road triad uˆP, vˆP, wˆP, base the reconstruction of the road segment PQ on

OP = W [(vˆP · mˆQ) / |vˆP × (mˆQ × mˆP)|] mˆP,    OQ = W [(vˆP · mˆP) / |vˆP × (mˆQ × mˆP)|] mˆQ.    (15)

More explicitly, let us approximate the road border in the neighborhood of any point by the tangent at the same point. The intersection R of the tangents at P and S is then a first-order approximation to the vanishing point (VP) of the road at P. Hence, the unit vector wˆP of the road triad at P, that is, the unit vector in the direction of the road, can be approximated to the first order by the unit vector mˆR in the direction of R,

wˆP ≃ mˆR ≡ +N[nˆS × nˆP].    (16)

We exploit now our model of the road; that is, we impose the condition that a cross segment PQ is horizontal and perpendicular to the road direction at P. Remembering from Section 2 that the road triad at P (uˆP, vˆP, wˆP) has uˆP in the direction PQ from P to Q, vˆP perpendicular to the road plane at P, and wˆP in the road direction, the condition that PQ be horizontal can be written uˆP · vˆ0 = 0, where vˆ0 is the unit vector orthogonal to the horizontal plane (to be distinguished from both vˆ, which is the unit vector orthogonal to the ground plane under the vehicle, and vˆP, which is the unit vector orthogonal to the road plane at P). Moreover, the condition that the segment PQ be perpendicular to the tangent to the road at P is the obvious relation uˆP · wˆP = 0. In general, if the ground plane is not horizontal, the unit vector vˆ0 is different from vˆ and is not known. However, as we shall see in the following section, it is always possible, with negligible errors, to approximate vˆ0 with vˆ and, remembering that wˆP ≃ mˆR, write the model equations as

uˆP · vˆ ≃ 0    (17)
uˆP · mˆR ≃ 0.    (18)

Hence, uˆP is approximately perpendicular to the plane spanned by vˆ and mˆR; that is,

uˆP ≃ N[vˆ × mˆR] = N[vˆv × mˆR],    (19)

where the rightmost equality follows from (9). From (19) and (16) we get

vˆP = wˆP × uˆP ≃ mˆR × uˆP ≃ N[mˆR × (vˆ × mˆR)].    (20)

The straight line through the projection center O in the direction uˆP intersects the image plane in the point T (vanishing point of the segment PQ), whose unit vector is mˆT = uˆP. In our approximation vˆ0 ≃ vˆ, T lies on both the horizon (uˆP · vˆ = 0, Eq. (17)) and the road plane at P (uˆP · vˆP = 0), and then on their intersection (see Fig. 6, where, for clarity, the point T has been drawn much closer to R than occurs in actual situations). Indeed, with T drawn to scale, the line RT would be almost parallel to the vanishing line of the ground plane, and Q would almost coincide with S. The point Q lies on the intersection of the image line TP (whose unit vector, from Eq. (5), is nˆTP = +N[mˆP × mˆT] = +N[mˆP × uˆP]) with the right border of the road. Approximating the border of the road in a neighborhood of S with its tangent nˆS, we have, to the first order,

mˆQ ≃ mˆQ′ ≡ N[nˆS × nˆTP] = N[nˆS × (mˆP × uˆP)]    (21)

(the signs have been chosen so that mˆQ · ζˆ > 0, that is, so that mˆQ points from O to Q). The point Q′ so determined does not, in general, coincide with the true point Q corresponding to P. It is obviously always possible to obtain a better approximation by iterating the above procedure (substituting S with Q′, or better with Q′′, the intersection of the straight line TPQ′ with the right border of the road), but we shall show in the following sections that this is not necessary, and that the first approximation is normally quite enough.

Having determined the (approximate) point Q, the 3D reconstruction of the road at P goes as follows (see Fig. 7).

FIG. 7. Road reconstruction from the 3D position of the points P and Q.

OP = [HP / (vˆP · mˆP)] mˆP,    OQ = [HP / (vˆP · mˆQ)] mˆQ,    (22)

where HP is the orthogonal distance of O from the road plane
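The whole chain (16)–(21) plus Eq. (15) can be exercised end to end on synthetic data. The sketch below (our own, with made-up geometry) reconstructs one cross segment of a straight flat road from mˆP, the two tangents, and vˆv:

```python
import math

# Cross-segment reconstruction, Eqs. (16)-(21) and (15), on a synthetic
# straight flat road (made-up values: camera height 1.5 m, left border at
# lateral offset -1.7, right at +2.3, so W = 4.0; P is 10 m ahead).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def norm(a):
    return math.sqrt(dot(a, a))

def N(a):
    n = norm(a)
    return (a[0]/n, a[1]/n, a[2]/n)

W = 4.0                                 # road width from on-road calibration
v_v = (0.0, 1.0, 0.0)                   # unit vector orthogonal to ground plane
m_P = N((-0.17, 0.15, 1.0))             # image of P (left border, 10 m ahead)
n_P = N((1.5,  1.7, 0.0))               # tangent to the left border at P
n_S = N((1.5, -2.3, 0.0))               # tangent to the right border at S

m_R = N(cross(n_S, n_P))                # Eq. (16): road direction w_P
u_P = N(cross(v_v, m_R))                # Eq. (19): cross-segment direction
v_P = cross(m_R, u_P)                   # Eq. (20): normal to road plane at P
n_TP = N(cross(m_P, u_P))               # image line TP
m_Q = N(cross(n_S, n_TP))               # Eq. (21)
if m_Q[2] < 0:
    m_Q = tuple(-x for x in m_Q)        # sign convention m_Q . zeta >= 0

# Eq. (15)
den = norm(cross(v_P, cross(m_Q, m_P)))
OP = tuple(W * dot(v_P, m_Q) / den * x for x in m_P)
OQ = tuple(W * dot(v_P, m_P) / den * x for x in m_Q)
width = norm(tuple(q - p for p, q in zip(OP, OQ)))
print([round(x, 4) for x in OP], [round(x, 4) for x in OQ], round(width, 4))
```

The end points come out at roughly (−1.7, 1.5, 10) and (2.3, 1.5, 10) in the camera frame, 4 m apart, exactly as constructed; for this flat geometry the tangent approximation of step 3 is in fact exact.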

at P,

W = |OQ − OP| = HP |mˆQ/(vˆP · mˆQ) − mˆP/(vˆP · mˆP)|
  = HP |(vˆP · mˆP)mˆQ − (vˆP · mˆQ)mˆP| / [(vˆP · mˆP)(vˆP · mˆQ)]
  = HP |vˆP × (mˆQ × mˆP)| / [(vˆP · mˆP)(vˆP · mˆQ)],    (23)

where HP can be determined and substituted into (22) to give (15).

The reconstruction equations (15) give indeterminate results when the line of sight is tangent to the road plane (that is, when mˆQ × mˆP is parallel to the unit vector vˆP normal to the road plane at P). This situation is of no concern, however, because in this case the line of sight is tangent to a road bump and the road itself is vanishing from view. This must be contrasted with the Kanatani algorithm [9], which fails when the line of sight is horizontal. To overcome this difficulty (which happens, for example, when an uphill road crosses the "horizon"), Kanatani must introduce a 3D curve-fitting approach.

4.2. Approximations

The reconstruction of the road, as outlined above, is based on the following two approximations:

• substitution of the true (and unknown) vertical unit vector vˆ0 with the unit vector vˆ orthogonal to the ground plane;
• substitution of Q, the true image point on the right border corresponding to P on the left border, with Q′ as computed above. This latter approximation, as said, can be reduced by an iterative procedure, but this is not necessary, as we shall show.

In the following sections, we shall examine the effect of the above approximations and demonstrate that the errors they introduce are negligible (or almost negligible) with respect to those due to the unavoidable image plane noise.

5. RECONSTRUCTION ERRORS

We shall now examine the effect of the various sources of error (both image plane noise and algorithm approximations) on the determination of the 3D position of the road. The position of the road can be specified by giving the position vector c of the center C of the road segment PQ in units of W; that is, from (15),

c ≡ (OP + OQ)/(2W) = [(vˆP · mˆQ)mˆP + (vˆP · mˆP)mˆQ] / (2|vˆP × (mˆQ × mˆP)|) = d mˆC,    (24)

where d = OC/W is the distance from O to C in units of W and mˆC is the unit vector toward C (see Fig. 8). The variation Δc of c due to variations ΔmˆP, ΔmˆQ, and ΔvˆP in mˆP, mˆQ, and vˆP, respectively, can be written from (24) as

Δc = Ma ΔmˆP + Mb ΔmˆQ + Mc ΔvˆP    (25)

FIG. 8. Road geometry at P.
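A quick numerical check of Eq. (24) (our own sketch, reusing the made-up straight-road geometry from the previous examples):

```python
import math

# Position vector c of the segment center C in units of W, Eq. (24), for the
# synthetic cross segment P = (-1.7, 1.5, 10), Q = (2.3, 1.5, 10), W = 4.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def N(a):
    n = math.sqrt(dot(a, a))
    return (a[0]/n, a[1]/n, a[2]/n)

P, Q, W = (-1.7, 1.5, 10.0), (2.3, 1.5, 10.0), 4.0
m_P, m_Q = N(P), N(Q)
v_P = (0.0, 1.0, 0.0)                    # normal to the (flat) road plane

num = tuple(dot(v_P, m_Q)*m_P[i] + dot(v_P, m_P)*m_Q[i] for i in range(3))
t = cross(v_P, cross(m_Q, m_P))
den = 2.0 * math.sqrt(dot(t, t))
c = tuple(x / den for x in num)          # Eq. (24)
d = math.sqrt(dot(c, c))                 # distance O-C in units of W

# c agrees with the direct computation (P + Q)/(2*W) = (0.075, 0.375, 2.5).
print([round(x, 6) for x in c], round(d, 6))
```

That c comes out equal to (P + Q)/(2W) confirms, for this configuration, the algebraic identity behind Eqs. (15) and (24).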

220

ANTONIO GUIDUCCI

points in the neighborhood of P and S, respectively, that do not necessarily contain P or S.

with Ma 1mˆ P =

1 {(ˆv P · mˆ Q )1mˆ P 2|ˆv P × (mˆ Q × mˆ P )| + (ˆv P · 1mˆ P )mˆ Q − 2[(uˆ P · mˆ Q )(ˆv P · 1mˆ P ) − (ˆv P · mˆ Q )(uˆ P · 1mˆ P )]Ec}

Mb 1mˆ Q =

(26)

1 {(ˆv P · mˆ P )1mˆ Q 2|ˆv P × (mˆ Q × mˆ P )| + (ˆv P · 1mˆ Q )mˆ P + 2[(uˆ P · mˆ P )(ˆv P · 1mˆ Q ) − (ˆv P · mˆ P )(uˆ P · 1mˆ Q )]Ec}

Mc 1ˆv P =

(27)

1 {(mˆ Q · 1ˆv P )mˆ P 2|ˆv P × (mˆ Q × mˆ P )|

The vector equation that links 1Ec to 1mˆ P , 1ˆvv , 1nˆ P , and 1nˆ S can be computed from (25) by expressing 1mˆ Q and 1ˆv P as a function of the above independent errors, obtaining an equation of the form 1Ec = M1 1mˆ P + M2 1ˆvv + M3 1nˆ P + M4 1nˆ S . The somewhat lengthy expressions of the matrices Mi can be found in [12]. If the covariance matrices of the independent errors are V [mˆ P ], V [ˆvv ], V [nˆ P ], and V [nˆ S ], the covariance matrix of the position vector cE can then be written V [Ec] = M1 V [mˆ P ]M1> + M2 V [ˆvv ]M2> + M3 V [nˆ P ]M3> + M4 V [nˆ S ]M4> .

(29)

+ (mˆ P · 1ˆv P )mˆ Q + 2[(uˆ P · mˆ P )(mˆ Q · 1ˆv P )

5.2. Errors Due to the Algorithm

− (uˆ P · mˆ Q )(mˆ P · 1ˆv P )]Ec}.

As mentioned above (Section 4.2) the approximations on which the reconstruction algorithm is based are due (1) to the substitution, in the formulas, of vˆ for the unknown vˆ 0 , and (2) on approximating the true endpoint Q of the road cross segment at P with the point Q 0 on the tangent to the right border at S. These two sources of error are examined in the following. As we shall see, the first error source is the most important of the two.

(28)

The explicit derivation of the terms above can be found in [12]. Using Eq. (25) as a starting point, we shall now outline the derivation of the expressions for the reconstruction error due to both the image noise and the algorithm approximations. The complete expressions and the detailed derivation will not be given here, however, due to their length, but can be found in [12]. We shall content ourselves with a graphical representation of some components of the resultant error covariance matrices for a particular but important special case. 5.1. Errors Due to the Image Noise The covariance matrix V [Ec] ≡ E[1Ec1Ec> ] of the position error due to the image plane noise can be drawn from (25) that gives 1Ec as a function of 1mˆ P , 1mˆ Q , and 1ˆv P . To this end we must determine the sources of independent noise. The formulas from which cE can be computed are rewritten here for convenience mˆ R = N [nˆ S × nˆ P ]

(190 )

uˆ P = N [ˆvv × mˆ R ]

(220 )

vˆ P = mˆ R × uˆ P

(230 )

mˆ Q = N [nˆ S × (mˆ P × uˆ P )].

(240 )

From these equations it follows that the independent source of errors are: • error 1ˆvv in vˆ v (being vˆ v computed once and for all in the calibration phase); • error 1mˆ P in mˆ P (positioning error of the left border due to the road extraction algorithm); • error 1nˆ P and 1nˆ S in the tangents to the left and right road borders, respectively in P and S. These errors can be considered independent on 1mˆ P being computed from a set of boundary
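Before examining the two algorithm approximations, the reconstruction chain of Section 5.1, Eqs. (19′) and (22′)–(24′) together with Eq. (24), can be made concrete with a short numerical sketch. The fragment below is illustrative only, not the paper's implementation: the function names are mine, and v̂ is assumed to be oriented from the camera toward the ground (consistent with the downward third road axis used later for Fig. 11).

```python
import numpy as np

def N(x):
    """The paper's N[.] operator: the unit vector along x."""
    return x / np.linalg.norm(x)

def reconstruct_cross_segment(n_P, n_S, v, m_P):
    """Sketch of the reconstruction chain, Eqs. (19'), (22')-(24') and (24).

    n_P, n_S : unit normals of the planes through the projection center O
               and the border tangents at P (left) and S (right);
    v        : unit normal to the ground plane under the vehicle, here
               assumed oriented from the camera toward the ground;
    m_P      : unit line-of-sight vector toward the left end point P.
    Returns the sight vector m_Q toward the other end point Q and the
    midpoint C of the cross segment PQ in road widths (Eq. (24)).
    """
    m_R = N(np.cross(n_S, n_P))                   # (19') road direction at P
    u_P = N(np.cross(v, m_R))                     # (22') cross-segment direction
    v_P = np.cross(m_R, u_P)                      # (23') road-plane normal at P
    m_Q = N(np.cross(n_S, np.cross(m_P, u_P)))    # (24') sight line toward Q
    c = (np.dot(v_P, m_Q) * m_P + np.dot(v_P, m_P) * m_Q) \
        / (2.0 * np.linalg.norm(np.cross(v_P, np.cross(m_Q, m_P))))   # (24)
    return m_Q, c
```

On a synthetic straight horizontal road the chain recovers m̂_Q and c⃗ exactly, which is a useful sanity check when porting the formulas.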

5.2.1. Error due to the substitution of v̂₀ with v̂. The reconstruction algorithm is based on the road model introduced in Section 4, in which a road cross segment PQ is always horizontal, that is, orthogonal to v̂₀. Unfortunately, we do not know v̂₀, because we do not know the slope of the road under the vehicle (we address the problem of the 3D reconstruction of the road from a single TV frame without any other external input, such as an inclinometer reading). We are therefore forced to substitute the unknown normal to the horizontal plane (v̂₀) with the normal to the ground plane under the vehicle (v̂) computed by the algorithm. This approximation introduces an error Δû_P in the determination of the direction û_P of the cross segment PQ, and an error Δv̂_P in the determination of the unit vector orthogonal to the road plane at P (Eqs. (22′) and (23′)). These errors are given by

$$\Delta\hat u_P = \hat u_P - \hat u_{P0} = \frac{\hat v\times\hat m_R}{|\hat v\times\hat m_R|} - \frac{\hat v_0\times\hat m_R}{|\hat v_0\times\hat m_R|} \qquad (30)$$

$$\Delta\hat v_P = \hat v_P - \hat v_{P0} = \hat m_R\times\hat u_P - \hat m_R\times\hat u_{P0}, \qquad (31)$$

where û_{P0} and v̂_{P0} are the true (and unknown) values of û_P and v̂_P, and m̂_R ≡ ŵ_P ≡ ŵ_{P0} is the (known) road direction at P. From these equations one can observe that the error is zero if v̂ lies in the plane defined by m̂_R and v̂₀ (the vertical plane through m̂_R). Indeed, if this is the case, v̂ × m̂_R is parallel to v̂₀ × m̂_R and the corresponding unit vectors û_P and û_{P0} are equal. With reference to Fig. 8, note that the above condition is satisfied if the angle ϕ between the projection of the vehicle heading direction on the horizontal plane and the projection on the same plane of the road direction at P is zero. In other words, the error is

zero when the road is parallel to the vehicle (in this case PQ is perpendicular to both v̂₀ and v̂, so that û_P is horizontal and coincides with û_{P0}) and increases with ϕ. Hence the worst case for this error is a road with a constant radius of curvature ρ equal to the maximum allowed for the kind of road on which the vehicle is running. For the purpose of comparing the various sources of error we put ourselves in this situation.

Let ϕ (Fig. 8) be the angle between the projections on the horizontal plane of the road direction at P and of the vehicle heading; θ the angle between the ground plane under the vehicle and the horizontal plane; and ψ the angle between the road plane at P and the horizontal plane. Moreover, let α be the angle between the line of sight toward the center C of the road segment PQ and the road direction at P, and let β be the angle between the same line of sight and the road plane (0 < β ≪ 90°). It can be shown (see [12]) that, under the hypothesis of small angles θ, ψ, and β (cos θ ≈ cos ψ ≈ cos β ≈ 1, sin²θ ≈ sin²ψ ≈ sin²β ≈ sin θ sin ψ ≈ ··· ≈ 0),

$$\Delta\hat u_P \simeq -\sin\theta\,\sin\varphi\;\hat v_0 \qquad (32)$$

$$\Delta\hat v_P \simeq \sin\theta\,\sin\varphi\;\hat u_{P0} \qquad (33)$$

$$\Delta\hat m_Q = \hat m_Q - \hat m_{Q0} \simeq \frac{\sin\theta\,\sin\varphi\,\cos\alpha}{d\,\sin\beta}\,(\cos\alpha\;\hat m_{Q0} - \hat w_{P0}). \qquad (34)$$

In (34), d is the distance OC in units of W, and m̂_{Q0} is the unit vector obtained from the algorithm formulas with the true v̂₀ substituted for v̂. The error is therefore zero, as said, if θ = 0 (horizontal vehicle) or if ϕ = 0 (straight road), and to first approximation it does not depend on the variation of the slope of the road (that is, on ψ) but only on the slope θ under the vehicle. Recalling (25), the reconstruction error due to v̂ ≠ v̂₀ can be written

$$\Delta\vec c = M_b\,\Delta\hat m_Q + M_c\,\Delta\hat v_P \qquad (35)$$

with Δm̂_Q and Δv̂_P given by (34) and (33), respectively. Let us now consider, as said, the (worst) case of a road with constant radius of curvature ρ (in units of the road width W). Denoting by s = ρϕ the path length along the road and by h = H/W the height of the camera above the ground (both in units of W), the reconstruction error (35) simplifies to (see [12])

$$\Delta\vec c = \sin\theta\,\sin\varphi\left\{\frac{4\rho^2(1-\cos\varphi)^2}{4h^2}\,\hat u_{P0} - 2\rho\,\sin^2\frac{\varphi}{2}\,\hat v_{P0} + \frac{\rho}{2h}\,\sin\varphi\,[1 - 2\rho(1-\cos\varphi)]\,\hat w_{P0}\right\}. \qquad (36)$$

5.2.2. Error due to the substitution of Q with Q′. We shall now discuss the error introduced by approximating the true end point Q of the road cross segment at P with the point Q′ on the tangent to the right border at S, as computed by the algorithm. Since our purpose is to show that this error is negligible compared with the previous one (v̂ ≠ v̂₀), we put ourselves in the same situation as before; that is, we consider a road of constant curvature which is, for simplicity, horizontal (ρ = const, θ = ψ = 0). In this case ϕ = s/ρ can be used to parametrize the position of the point P along the road; that is, P = P(ϕ). Likewise we can write Q(ϕ), C(ϕ), R(ϕ), T(ϕ), and the corresponding unit vectors will depend on ϕ. An easy calculation [12] shows that

$$\hat m_C(\varphi) = -\sin\frac{\varphi}{2}\sqrt{1-\left(\frac{h}{d}\right)^2}\,\hat u_P + \frac{h}{d}\,\hat v_P + \cos\frac{\varphi}{2}\sqrt{1-\left(\frac{h}{d}\right)^2}\,\hat w_P \qquad (37)$$

$$\hat m_P(\varphi) = \hat m_C(\varphi) - \frac{1}{2d}\,\hat u_P, \qquad \hat m_Q(\varphi) = \hat m_C(\varphi) + \frac{1}{2d}\,\hat u_P \qquad (38)$$

$$\hat u_P(\varphi) = \cos\varphi\,\hat u - \sin\varphi\,\hat w \equiv \hat m_T \qquad (39)$$

$$\hat v_P(\varphi) = \hat v \qquad (40)$$

$$\hat w_P(\varphi) = \sin\varphi\,\hat u + \cos\varphi\,\hat w \equiv \hat m_R, \qquad (41)$$

where, as before, h = H/W denotes the height of the camera above the ground, and d = OC/W its straight-line distance to the point C. Let ϕ_P be the value of the parameter ϕ for the particular point on the left border of the road we are interested in; that is, P = P(ϕ_P), Q = Q(ϕ_P), .... The (true) unit vectors, as given by the above equations, are m̂_P = m̂_P(ϕ_P), m̂_Q = m̂_Q(ϕ_P), m̂_T = û_P(ϕ_P), m̂_R = ŵ_P(ϕ_P), and v̂_P = v̂ = v̂_P(ϕ_P). The corresponding quantities, as computed by the algorithm, will be denoted by a prime (see Fig. 9).

FIG. 9. True and computed quantities.
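The constant-curvature model (37)–(41) and the first-order error (32) lend themselves to a quick numerical sanity check. The sketch below is illustrative only: the orthonormal triad û, v̂, ŵ and the pitch-only tilt of the ground normal are my assumptions, not the paper's.

```python
import numpy as np

# Assumed illustrative orthonormal axes: u lateral, v vertical, w heading.
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.array([0.0, 0.0, 1.0])

def road_vectors(phi, h, d):
    """Eqs. (37)-(41): frame and sight vectors of a horizontal road of
    constant curvature, parametrized by the arc angle phi; h and d are the
    camera height and the distance OC, both in units of the road width."""
    r = np.sqrt(1.0 - (h / d) ** 2)
    u_P = np.cos(phi) * u - np.sin(phi) * w        # (39), equals m_T
    v_P = v                                        # (40)
    w_P = np.sin(phi) * u + np.cos(phi) * w        # (41), equals m_R
    m_C = (-np.sin(phi / 2) * r * u_P + (h / d) * v_P
           + np.cos(phi / 2) * r * w_P)            # (37)
    m_P = m_C - u_P / (2 * d)                      # (38)
    m_Q = m_C + u_P / (2 * d)                      # (38)
    return m_C, m_P, m_Q, u_P, v_P, w_P

def delta_u_P(theta, phi):
    """Exact error (30) in u_P when the ground normal, pitched by theta
    (one assumed tilt sign), replaces the true vertical v; Eq. (32)
    predicts approximately -sin(theta) sin(phi) v for small theta."""
    m_R = np.sin(phi) * u + np.cos(phi) * w
    v_tilt = np.cos(theta) * v - np.sin(theta) * w
    a, b = np.cross(v_tilt, m_R), np.cross(v, m_R)
    return a / np.linalg.norm(a) - b / np.linalg.norm(b)
```

The checks confirm that m̂_C is a unit vector, that the two end-point sight vectors differ by û_P/d as in (38), and that the exact error (30) agrees with the first-order expression (32) to well within its magnitude for a 4% slope.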


The reconstruction error due to Q′ ≠ Q is given by (25) with Δm̂_Q = m̂_{Q′} − m̂_Q(ϕ_P) and Δv̂_P = v̂_{P′} − v̂_P(ϕ_P) (Δm̂_P = 0 in this case). To compute m̂_{Q′} and v̂_{P′} we proceed as follows. Let ϕ_S be the value of the parameter ϕ for which m̂_Q(ϕ_S) = m̂_S; that is, ϕ_S is the abscissa of the point P_S to which S corresponds. The condition that P and S have the same image y coordinate (for simplicity we suppose that the horizon is parallel to the image y axis) can be written

$$\frac{\hat m_P(\varphi_P)\cdot\hat v}{\hat m_P(\varphi_P)\cdot\hat w} = \frac{\hat m_Q(\varphi_S)\cdot\hat v}{\hat m_Q(\varphi_S)\cdot\hat w}. \qquad (42)$$

From this equation we get (see [12])

$$\sin\varphi_S = \frac{\rho+\frac{1}{2}}{\rho-\frac{1}{2}}\,\sin\varphi_P \qquad (43)$$

and, with reference to Fig. 9, remembering Eqs. (4) and (5),

$$\hat n_S = N[\hat m_Q(\varphi_S)\times\hat m_R(\varphi_S)] \qquad (44)$$

$$\hat m_{R'} = N[\hat n_S\times\hat n_P] \qquad (\hat n_P = N[\hat m_P(\varphi_P)\times\hat m_R(\varphi_P)]) \qquad (45)$$

$$\hat m_{T'} = N[\hat v\times\hat m_{R'}] \qquad (46)$$

$$\hat v_{P'} = N[\hat m_{R'}\times\hat m_{T'}] \qquad (47)$$

$$\hat m_{Q'} = N[\hat n_S\times(\hat m_P(\varphi_P)\times\hat m_{T'})]. \qquad (48)$$
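The primed chain (43)–(48) can likewise be sketched numerically. This fragment is illustrative only, under the same assumed axes as before; in particular, using a single value of d for both P and S is an extra simplification of the sketch, acceptable only in the near-straight regime used to sanity-check it.

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])   # assumed illustrative axes, as before
v = np.array([0.0, 1.0, 0.0])
w = np.array([0.0, 0.0, 1.0])

def N(x):
    return x / np.linalg.norm(x)

def sight_vectors(phi, h, d):
    """Eqs. (37)-(41), repeated so this sketch is self-contained."""
    r = np.sqrt(1.0 - (h / d) ** 2)
    u_P = np.cos(phi) * u - np.sin(phi) * w
    w_P = np.sin(phi) * u + np.cos(phi) * w
    m_C = -np.sin(phi / 2) * r * u_P + (h / d) * v + np.cos(phi / 2) * r * w_P
    return m_C - u_P / (2 * d), m_C + u_P / (2 * d), u_P, w_P

def primed_quantities(phi_P, rho, h, d):
    """Eqs. (43)-(48): the quantities computed by the algorithm (primed).
    Using the same d for P and S is a simplification of this sketch,
    acceptable only for nearly straight roads."""
    phi_S = np.arcsin((rho + 0.5) / (rho - 0.5) * np.sin(phi_P))   # (43)
    m_P, _, _, m_R = sight_vectors(phi_P, h, d)
    _, m_QS, _, m_RS = sight_vectors(phi_S, h, d)
    n_P = N(np.cross(m_P, m_R))
    n_S = N(np.cross(m_QS, m_RS))                  # (44)
    m_Rp = N(np.cross(n_S, n_P))                   # (45)
    m_Tp = N(np.cross(v, m_Rp))                    # (46)
    v_Pp = N(np.cross(m_Rp, m_Tp))                 # (47)
    m_Qp = N(np.cross(n_S, np.cross(m_P, m_Tp)))   # (48)
    return m_Qp, v_Pp, m_Rp
```

For a very large ρ the primed quantities coincide (up to sign) with the true ones, as expected, since the Q′ ≈ Q approximation becomes exact for a straight road.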

From these equations, Δm̂_Q and Δv̂_P can be computed and substituted into (25) to give the reconstruction error.

5.3. Comparison of Error Sources

We shall now put everything together and compare the components of the various sources of error along the same reference axes, namely the road axes whose unit vectors are û_P, v̂_P, and ŵ_P. We also put ourselves in the case (the worst case for the error due to v̂ ≠ v̂₀) of a road with constant radius of curvature. The maximum permissible sharpness of horizontal curves and the maximum vertical gradient depend on the type of road. Typical values of the minimum horizontal radius of curvature range from about 1200 m for a primary highway in a flat region to about 200 m for a secondary rural highway in a mountainous region [11]. The maximum permissible vertical gradient, in the same situations, ranges from 2 to 4%. With a typical lane width W = 4 m, the minimum horizontal radius of curvature in units of W, ρ_min, ranges from 300 down to 50.

In what follows we suppose that the height of the camera above the ground is h = H/W = 0.25 (that is, H = 1 m for W = 4 m) and consider distances along the road between 4 and 100 m (that is, s between 1 and 25 road widths). Figure 10 shows the projection of the model lane on the image plane for a flat road with ρ between 50 and 300.

Let us start by considering the covariance matrix V[c⃗] of the position error due to the image plane noise (Section 5.1). The covariance matrices of the independent sources of noise on

FIG. 10. Model road for various values of the horizontal radius of curvature ρ (in units of the road width).

the image plane, V[m̂_P], V[v̂], V[n̂_P], and V[n̂_S], can be estimated (as an order of magnitude) by making use of the relations given by Kanatani [9]:

$$V[\hat m] \equiv E[\Delta\hat m\,\Delta\hat m^{\top}] \sim \frac{\epsilon^2}{f^2}\,\frac{1}{2}\,(I - \hat m\hat m^{\top}), \qquad (49)$$

where V[m̂] is the covariance matrix of the unit vector m̂ due to an error of ε pixels in the image plane position of the point pointed at by m̂, f is the focal length in pixels, and I is the 3 × 3 unit matrix; and

$$V[\hat n] \equiv E[\Delta\hat n\,\Delta\hat n^{\top}] \sim \frac{6\epsilon^2}{N L^2}\,(\hat n\times\hat m)(\hat n\times\hat m)^{\top}, \qquad (50)$$

where V[n̂] is the covariance matrix of the unit vector n̂ orthogonal to the plane through O and the image segment of center m̂

and length L pixels; N is the number of points used to compute the coefficients of the least-squares line through the segment points, and ε is again the image plane error of each point.

Plugging the values of V[m̂_P], V[v̂], V[n̂_P], and V[n̂_S] computed from the above equations into (29), we obtain the covariance matrix of the position error in the camera reference (obviously, the components of the matrices M_i must be computed in the same reference). Having chosen the road reference as the frame in which to compare the various results, the matrix V[c⃗] must be rotated to that frame by

$$V_r[\vec c] = R_r\,V[\vec c]\,R_r^{\top}, \qquad (51)$$

where R_rᵀ ≡ [û_P, v̂_P, ŵ_P] is the rotation matrix from the camera to the road reference. Figure 11 shows the standard deviations σ_{u_P} = √((V_r[c⃗])₁₁), σ_{v_P} = √((V_r[c⃗])₂₂), and σ_{w_P} = √((V_r[c⃗])₃₃) in the

road direction, across the road, and in the direction orthogonal to the road plane (downward), for distances from the camera up to 25 W and for radii of curvature from 50 to 1000 W. The figure has been drawn for ε = 1 pixel, f = 574 pixels, and a segment length (for the computation of V[n̂_P] and V[n̂_S]) of 20 pixels.

Let us now turn to the errors due to the algorithm approximations. Figure 12 shows the components of the reconstruction error in the road principal directions due to the v̂ = v̂₀ approximation (Section 5.2.1, Eq. (36)), for various values of the vertical gradient (from 2 to 10%) and for two values of ρ, namely ρ = 50 and ρ = 100. Figure 13 shows the reconstruction errors due to the Q′ = Q approximation (Section 5.2.2) for ρ = 50.

From these figures it follows that the error due to Q′ ≠ Q is indeed negligible, as claimed. As for the error due to v̂ ≠ v̂₀, the largest component is in the direction along the road (ŵ_P), and even in the case of ρ = 50 with a vertical gradient of 4% it is smaller than the corresponding error due to the image plane noise (see Fig. 12, where the standard deviation of the image plane noise for the appropriate value of ρ has been added as a dashed line). We conclude that in normal situations the reconstruction error is mainly due to the image plane noise. For this source of error, too, the worst component is in the road direction. In Fig. 11 we have added a straight (dashed) line corresponding to an error of 10%. This level of error is reached, in the worst case of ρ = 50, at a distance of about 14 W (56 m), and at proportionally higher ranges for higher ρ. A reconstruction error greater than 10% for ρ = 50 and s > 14 is understandable from Fig. 10: for distances s of this order of magnitude the road boundaries almost coincide, making the reconstruction of the 3D structure of the road from their projection on the image plane almost impossible.

6. EXPERIMENTAL RESULTS

FIG. 11. Standard deviations of the reconstruction error due to the image plane noise.
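The ingredients behind Fig. 11, namely the noise models (49) and (50) and the rotation (51), can be sketched as follows. The function names are mine, and the numerical values in the check mirror the ε = 1 pixel and f = 574 pixels used for the figure; this is an illustration of the formulas, not the paper's code.

```python
import numpy as np

def cov_sight_vector(m, eps, f):
    """Eq. (49): order-of-magnitude covariance of a unit sight vector m for
    an image positioning error of eps pixels (focal length f in pixels)."""
    return (eps ** 2 / f ** 2) * 0.5 * (np.eye(3) - np.outer(m, m))

def cov_tangent_normal(n, m, eps, L, n_pts):
    """Eq. (50): covariance of the unit normal n to the plane through the
    projection center and an image segment of center m and length L pixels,
    least-squares fitted to n_pts edge points with error eps pixels each."""
    a = np.cross(n, m)
    return (6.0 * eps ** 2 / (n_pts * L ** 2)) * np.outer(a, a)

def road_frame_sigmas(V_cam, u_P, v_P, w_P):
    """Eq. (51): rotate a position covariance from the camera frame to the
    road frame and return the standard deviations along u_P, v_P, w_P."""
    R = np.vstack([u_P, v_P, w_P])   # rows: road axes in camera coordinates
    V_road = R @ V_cam @ R.T
    return np.sqrt(np.diag(V_road))
```

Note that (49) has no error component along m̂ itself and that (50) is a rank-one matrix along n̂ × m̂, both of which are easy to verify numerically.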

The algorithm has been implemented on the Mobile Laboratory under development at the author's institution [8]. The complete chain, including the road border extraction, the on-road calibration, and the 3D reconstruction, runs at a frame rate of 12 images per second using two TMS 320C30 DSPs. One of the two DSPs implements the road boundary detection and the on-road calibration (determination of the lateral position of the vehicle, of the instantaneous horizon, and of the road width), while the other is reserved for the 3D road reconstruction itself.

Figures 14 and 15 show the result of the road reconstruction algorithm applied to two images of a sequence taken on a highway interchange road. The upper part of each figure shows the original image with superposed road boundaries, the instantaneous horizon, and the road cross segments, as determined by the algorithm, whose distances from the vehicle are multiples of five road widths. The lower panels show the top (left) and side (right) views of the reconstructed road. The horizontal grid in


FIG. 12. Components of the reconstruction error due to the substitution of v̂₀ with v̂ for a vertical gradient ranging from 2 to 10%. The error level due to the image plane noise for the same ρ is shown as a dashed line.

the left panel and the vertical grid in the right panel represent distances from the vehicle, again multiples of five road widths. The vertical axis in the left panel and the horizontal axis in the right panel represent the vehicle heading direction, while the lower horizontal segment in the right panel represents the ground plane under the vehicle. Note that, for the sake of clarity, heights in the bottom right panel are magnified by a factor of two. The lane width, as computed by the on-road calibration step, was 3.7 m with a standard deviation of 0.1 m. The small dot in the lower part of the left panel represents the position of the vehicle inside the lane boundaries.

As can be seen, for example in Fig. 14, no indeterminacy arises when the line of sight is horizontal, while (see for example Fig. 15) the indeterminacy that arises when the line of sight is tangent to the road plane is of no concern since, in that case, the road is vanishing from view downhill.

7. CONCLUSIONS

This paper has described an algorithm for the local 3D reconstruction of a road from its image plane boundaries. Given, on the same scan line, the images of two boundary points of the


FIG. 13. Reconstruction error due to the substitution of Q with Q′, for ρ = 50. The dashed line represents the rms error |Δc⃗|.

road, and the tangents to the boundaries at the same two points, the algorithm allows one to compute the position in 3D space of the road cross segment that has one of the two points as its end point.

FIG. 15. Reconstruction of a highway interchange road, frame number 30.

FIG. 14. Reconstruction of a highway interchange road, frame number 1.

The approximations introduced are fully discussed, the various sources of error are identified, and formulas and graphs are given for their determination and comparison.

The algorithm is intended to provide reliable and fast local measures of the 3D positions and orientations of road cross segments. Such information can be used as raw input data for a reconstruction system whose output is the 3D reconstruction of the whole road. Hence, the algorithm can find useful application as a building block of an autonomous navigation system, in which information on the road shape is of great importance both for the localization of obstacles and for planning purposes. Even though this complete system is beyond the scope of this paper, the raw data themselves are quite accurate and already give a satisfactory reconstruction of the road, as has been shown by its application to real road images.

Moreover, the algorithm is fast and has been implemented in real time on the Mobile Laboratory under development at the author's institution. The complete chain, including the road border extraction, the on-road calibration, and the 3D reconstruction, runs at a frame rate of 12 images per second on two DSPs.


REFERENCES

1. E. D. Dickmanns and B. D. Mysliwetz, Recursive 3D road and relative ego-state recognition, IEEE Trans. Pattern Anal. Machine Intell. 14(2), 1992, 199–213.
2. M. A. Turk, D. G. Morgenthaler, K. D. Gremban, and M. Marra, VITS—A vision system for autonomous land vehicle navigation, IEEE Trans. Pattern Anal. Machine Intell. 10(3), 1988, 342–361.
3. D. G. Morgenthaler, S. J. Hennessy, and D. DeMenthon, Range–video fusion and comparison of inverse perspective algorithms in static images, IEEE Trans. Systems Man Cybernet. 20(6), 1990, 1301–1312.
4. D. DeMenthon, A zero-bank algorithm for inverse perspective of a road from a single image, Proc. IEEE Int. Conf. Robotics Automation, Raleigh, NC, March–April 1987, pp. 1444–1449.
5. K. Kanatani and K. Watanabe, Reconstruction of 3D road geometry from images for autonomous land vehicles, IEEE Trans. Robotics Automation 6(1), 1990, 127–132.
6. D. DeMenthon and L. S. Davis, Reconstruction of a road by local image matches and global 3D optimization, Proc. IEEE Int. Conf. Robotics Automation, Cincinnati, OH, May 1990, pp. 1337–1342.
7. K. Kanatani and K. Watanabe, Road shape reconstruction by local flatness approximation, Adv. Robotics 6, 1992, 197–213.
8. A. Cumani, S. Denasi, P. Grattoni, A. Guiducci, G. Pettiti, and G. Quaglia, MOBLAB: A mobile laboratory for testing real-time vision-based systems in path monitoring, Proc. SPIE Int. Conf. Photonics for Industrial Applications, Mobile Robots IX, Boston, 1994, pp. 228–238.
9. K. Kanatani, Geometric Computation for Machine Vision, Clarendon Press, Oxford, U.K., 1993.
10. R. Y. Tsai, An efficient and accurate camera calibration technique for 3D machine vision, Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, Miami Beach, FL, 1986, pp. 364–374.
11. G. Colombo, Manuale dell'Ingegnere, Hoepli, Milano, 1990.
12. A. Guiducci, Determination of the 3D Geometry of a Road in a Monocular Image, IEN Technical Report No. 519, 1996.