Copyright © IFAC Motion Control for Intelligent Automation, Perugia, Italy, October 27-29, 1992
AUTOMATIC RECOGNITION OF LANES FOR HIGHWAY DRIVING

V. GRAEFE and L. TSINAS

Universität der Bundeswehr München, Institut für Messtechnik, Werner-Heisenberg-Weg 39, D-8014 Neubiberg, Germany
ABSTRACT. For an automatic co-pilot designed to assist the driver of a motor car on well-marked roads, an algorithm was developed for recognizing and tracking the driving lane used by the vehicle. A video camera serves as the sensor. The program has been implemented on one parallel processor of the multi-processor robot vision system BVV 3. The method of controlled correlation is used for quick and robust feature extraction. To further increase its robustness, the program utilizes a 2-D model representing knowledge of the appearance of roads and lane markers in the camera image. The program has been tested successfully in simulation and in real-world experiments while driving on different types of roads at high speeds. A look-ahead distance of 100 m has been achieved, adequate for driving at high speed.

Keywords: Real-time Image Processing, Mobile Robots, Robot Vision, Autonomous Road Vehicles
1. INTRODUCTION

One of the main subtasks of a robot vision system for autonomous driving on highways is to detect and to track the pathway in front of the robot. This subtask is part of the basic feedback control mechanism that keeps an autonomous vehicle on the desired path (i.e. the vehicle's own lane of a multilane highway). In addition, it assists other modules, such as obstacle recognizers, during their operation. The present road tracker makes it possible to recognize the own lane in front of the vehicle up to a distance of about 100 m.
An algorithm for automatic recognition of pathways is confronted with a multitude of scenes and environmental conditions. The possible pathways may be divided into two broad categories:
• Roads with lane markers; the markers may be solid or broken white or yellow lines, according to normal standards.
• Pathways without special markers; this includes unmarked highways (e.g. highways under construction) and unpaved lanes (e.g. country lanes, field lanes) as well as open terrain (meadows, fields, etc.).
There are two versions of the pathway recognition program. One is specialized for well-marked roads, as described in the sequel; it is based on the recognition of painted lane markers. The other one, to be described elsewhere, recognizes unmarked roads and allows cross-country driving; it uses texture-based or color-based techniques and different internal models.

Ideally, marked roads may be recognized easily in the image, because lane markers are designed to be well visible. Besides, much a priori knowledge of the appearance of such roads in the image is available. But in reality, the lane markers are often difficult to recognize in an image. They may be occluded, for instance, by vehicles in front of, or passing, the own vehicle, or by leaves on the road; or the image may be cluttered with many other confusing features caused, for instance, by irregularities of the road surface, by skid marks, by dark shadows or by rain water.

For unmarked lanes different model knowledge must be applied and other features must be evaluated. For that purpose, the homogeneity of the surface ([Moebius 86], [Liu, Wershofen 92]) and linear extended features, for example, the transition to impassable terrain or to grooves, have to be taken into consideration. Presently two separate algorithms exist for marked and unmarked pathways. On this basis a common algorithm should be developed in the near future that contains both models and recognizes autonomously the type of pathway at hand.
2. CHARACTERISTICS REQUIRED OF A PATHWAY TRACKER

The pathway tracker is a most essential component of a vision-based system for the recognition of traffic situations [Graefe 92] and for the control of an autonomous vehicle. Because of its importance for vehicle control, it must be assured that the algorithm never transmits false information to other modules. It is, however, permissible that occasionally image data are not transmitted at all, when single images of a dense image sequence cannot be evaluated. To reach this goal, extensive model knowledge must be integrated into the algorithm, and the algorithm should continuously monitor itself. The maximum attainable look-ahead distance of the pathway tracker is desirable in the interest of safe driving at high speed, and a reasonably short computing time is a prerequisite, as it is for any algorithm that is part of a real-time system for robot control.

3. DESCRIPTION OF THE ALGORITHM

In order to detect the pathway in the image, characteristic features on both sides of the road are searched for. Features may be either edges (gray-level discontinuities) or lines (pairs of gray-level discontinuities). The pathway boundaries are first searched for near the bottom of the image, where they are well visible. Once the boundaries have been detected, they are followed upwards in the image. To obtain both a reasonable cycle time and a high robustness, the extractable features must be searched for in suitably selected search areas. The criteria for the selection of feature candidates and search areas depend on the location in the image; therefore, the algorithm is divided into three phases (Figure 1), as described in the sequel.

Description of the Three Phases

Phase 1

The objective of phase 1 is the detection of the pathway boundaries close to the lower image
border. Since, during this phase, only little is known about the orientation of the lane boundaries, edge detectors of low orientation selectivity are applied. Such edge detectors are relatively sensitive to noise; however, this is acceptable at close range, as the lane boundaries are always relatively well visible here. The edge detectors used in all three phases are based on the method of controlled correlation [Kuhnert 86]. The one used in phase 1 detects primarily vertical edges, but its directional selectivity is so low that oblique edges of almost any direction are detected, too. For marked pathways the sought-after feature is a bright line on a dark background. Such a line is recognized when either a bright-to-dark or a dark-to-bright edge is detected.

Figure 1: The search areas of the road tracker

Phase 2

In phase 2 the detected line candidates are measured with regard to their position and direction. This includes the determination of their slopes and their exact positions in the image. The edge detectors employed in this phase are tuned to edges with slopes of 45° and 135°. They have a higher directional selectivity than the one applied in phase 1; therefore, they are more robust against noise and shadows. The desired feature is again a bright line. The search areas are restricted to 40 pixels in width and 14 pixels in height. They are centered around the locations of the edges found in phase 1. Straight lanes as well as curved ones are recognized correctly in this phase. False features, such as stains on the road that are sometimes detected erroneously in phase 1, are usually recognized and rejected in phase 2. The separation into phases 1 and 2 constitutes essentially a coarse-to-fine approach. Adapting the edge detectors and the search areas individually to each phase reduces the computing time.

Phase 3

Beginning with the locations and directions determined in phase 2, the lane markers are followed upwards in the image step by step in phase 3. With each step the markers are searched for within a horizontally extended search region that is centered around the expected position. The noise insensitivity of correlation-based edge detectors allows the tracking of lane markers to a far distance, where they are hardly recognizable due to their faint contrast.
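The controlled-correlation detectors themselves are not listed in the paper; the following is a minimal, hypothetical sketch of the underlying idea, correlating a small edge template along one scan line of gray values. The template weights, sizes and the threshold are illustrative assumptions, not the original BVV 3 parameters.

```python
# Minimal sketch of correlation-based edge detection on one horizontal
# scan line of gray values. The real detectors use 2-D templates with
# controlled orientation selectivity; this 1-D version only illustrates
# how a template response localizes a gray-level discontinuity.

def detect_edge(scanline, template, threshold=40):
    """Return the column of the strongest template response, or None.

    scanline : list of gray values (0..255) along one image row
    template : list of weights, e.g. [-1, -1, 0, 1, 1] for a
               dark-to-bright (left-to-right) edge
    """
    best_pos, best_score = None, threshold
    n = len(template)
    for x in range(len(scanline) - n + 1):
        score = sum(w * g for w, g in zip(template, scanline[x:x + n]))
        if score > best_score:
            best_pos, best_score = x + n // 2, score
    return best_pos

# A bright lane marker on dark asphalt yields a dark-to-bright edge
# followed by a bright-to-dark edge (a "line" in the paper's terms):
row = [20] * 10 + [200] * 5 + [20] * 10
rising = detect_edge(row, [-1, -1, 0, 1, 1])    # left edge of the marker
falling = detect_edge(row, [1, 1, 0, -1, -1])   # right edge of the marker
```

Searching for the edge pair at a plausible spacing, rather than a single edge, is what makes the line detection of phases 1 and 2 robust against isolated gray-level discontinuities such as shadow borders.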
During this phase appropriate templates for the correlation-based edge detectors are selected from a pre-defined repertoire. Three combinations of edge detectors are possible with the present repertoire. On straight roads an edge detector tuned to a slope of 45° is used on the left side, and one with a preferred orientation of 135° on the right side. In sharp turns both lane boundaries may be searched with both edge detectors tuned to identical slopes of either 45° or 135°. The selection is automatic; the selection process may also serve as a curve detector.

Phase 3 incorporates two alternative methods for detecting and following lane markers. In cooperative scenes with continuous and undistorted lane markers, single edges are searched for, instead of lines; the search area is relatively small, only 20 pixels wide. This algorithm runs fast, but it is not robust enough for difficult scenes. The second method is used for less cooperative scenes, e.g. with broken or partly occluded markers. Here lines (pairs of edges) are searched for, and the search area is larger. The selection of the method is automatic.

The algorithm used in phase 3 adapts itself to a multitude of situations and scenes. Edge detectors, feature candidates, and the width of the search areas adjust themselves dynamically. This flexibility not only increases the robustness of the algorithm, but also reduces its cycle time.

A large number of points are measured on each lane marker (indicated by white dots along the lane markers in Figure 2). Depending on the demands of the module requesting data regarding the locations of the lane markers, a smaller number of points may be selected for reporting their coordinates. Typically 4 points on each lane marker, with equal vertical distances in the image, are selected (Figure 2).
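The step-by-step upward tracking of phase 3 can be sketched as follows. This is a simplified illustration under stated assumptions: the window widths, the widening factor on a failed step, and the linear extrapolation of the expected position are hypothetical choices, except for the 20-pixel window for cooperative scenes, which is quoted in the text.

```python
# Hypothetical sketch of the phase-3 tracking loop: a lane marker is
# followed upwards row by row, each search window centered on the position
# extrapolated from the points found so far. The window widens when a step
# fails (e.g. a gap in a broken marker) and shrinks again after success.

def track_marker(find_in_window, x0, slope, rows, width=20):
    """Follow one lane marker upwards through the image.

    find_in_window(row, x_lo, x_hi) -> measured x position or None
    x0, slope : position and direction as measured in phase 2
    rows      : image rows to visit, ordered from bottom to top
    """
    points, x_exp = [], float(x0)
    for row in rows:
        x = find_in_window(row, int(x_exp - width / 2), int(x_exp + width / 2))
        if x is None:
            width = min(2 * width, 80)     # widen the search after a miss
        else:
            if points:
                slope = x - points[-1][1]  # update the direction estimate
            points.append((row, x))
            width = 20                     # back to the narrow, fast window
        x_exp += slope                     # expected position in the next row
    return points

# Usage on a synthetic straight marker advancing 2 pixels per step:
def locate(row, lo, hi):
    x = 50 + 2 * row                       # true marker position in this row
    return x if lo <= x <= hi else None

points = track_marker(locate, x0=50, slope=2, rows=range(10))
```

Centering each window on an extrapolated position keeps the search areas small, which is what allows many points per marker to be measured within the cycle-time budget.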
Figure 2: Curved lane. The measured points on the lane markers (dots) and the selected points whose coordinates are transmitted to other modules (squares) are indicated.

4. IMPLEMENTATION

The algorithm has been implemented on one parallel processor (Intel 80386, 20 MHz) within the robot vision system BVV 3 [Graefe 90]. To keep the computing time at a minimum, the method of controlled correlation [Kuhnert 86; Kuhnert 88] is applied for feature extraction. The cycle time of the pathway tracker is 40-60 ms, depending on the complexity of the image sequence. At a speed of 100 km/h this cycle time permits a localization of the own lane in the image every 1.6 m. The cycle time of the pathway tracker is thus more than sufficient for smooth road-following, even at high speed.
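The quoted localization interval follows directly from the cycle time and the driving speed; a quick check of the arithmetic:

```python
# Sanity check of the quoted figures: at 100 km/h, a 40-60 ms cycle time
# localizes the own lane every 1.1-1.7 m; the ~1.6 m quoted in the text
# corresponds to the slow end of the cycle-time range.

speed = 100 / 3.6             # 100 km/h in m/s (about 27.8 m/s)
for cycle in (0.040, 0.060):  # cycle time in seconds
    print(f"{cycle * 1000:.0f} ms -> every {speed * cycle:.2f} m")
# prints:
# 40 ms -> every 1.11 m
# 60 ms -> every 1.67 m
```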
5. EXPERIMENTAL RESULTS

In the beginning the program was developed on a PC emulating one parallel processor of the real-time image processing system BVV 3. During that stage of the experiments the basic functioning of the program was tested
and optimized using static images. Thus it was possible to select, according to our present state of the art, an ideal combination of the edge detectors, the search areas, the feature candidates, and of other parameters.

During the next step the algorithm was tested on the robot vision system BVV 3 with real scenes that had been recorded on video tape while driving on multilane highways at a speed of 80-90 km/h. The recorded scenes were replayed and processed by the vision system in the laboratory in real time. This development phase permitted experiments under dynamic conditions and further refinement of the individual programming stages.

Finally, the algorithm was tested during fully autonomous driving with the test vehicle VITA [Ulmer 92], where the system proved its robustness in the real world. Lane recognition was possible even in difficult and uncooperative situations, for example, during rain, on roads covered with puddles, and during sunshine causing heavy shadows and reflections on the partly wet road. The algorithm was also applied as part of a system for recognizing vehicles approaching the autonomous vehicle from the rear [Efenberger et al. 92].

Some discrepancies were observed during the step-by-step development and especially in the real-time experiments. The cause is that the applied model knowledge relates only partially to some of the situations occurring in real environments. For example, branches of the lane, such as entries, exits, or branches at road construction sites, which are sometimes encountered, are not modeled and are, therefore, not correctly recognized. The present version solves this problem only partially; the possible existence of a lane branch is considered, and an additional lane
marker is searched for when a significant increase of the lane width is encountered. This process requires only a small amount of computing time (approximately 2% of the cycle time). The advantages of a permanent search for lane branches are still being investigated.

Figure 3: A scene from the rear-view perspective. The lane markers on the right side have large gaps between the line sections.

Whenever wide-angle lenses are used, the gaps between sections of lane markers in the image can be very large (see Figure 3, the lane marker on the right side). In such scenes the direction of the short piece of the lane marker near the lower border of the image cannot be determined accurately in phase 2. This leads to an incorrect evaluation of the positions in the image at which the next line section (above the gap) should be searched for. To avoid detecting false features near such gaps (for example, adjoining lines), the search range of phase 3 is in such cases limited to the region of the image below the gap. The search range will be extended in the future; the search areas could then be positioned according to a priori knowledge gained by processing preceding images.
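The lane-branch heuristic described above, triggering an additional marker search on a significant width increase, can be sketched in a few lines. The 20% threshold is an assumption for illustration; the paper does not state the actual value used.

```python
# Hypothetical sketch of the lane-branch trigger: the measured lane width
# is monitored from row to row (bottom to top), and a search for an
# additional lane marker is started when the width grows significantly.
# The factor 1.2 (a 20 % increase) is an assumed threshold.

def branch_suspected(widths, factor=1.2):
    """Return True if the lane width increases significantly upwards.

    widths : measured lane widths in pixels, ordered bottom to top
    """
    return any(w2 > factor * w1 for w1, w2 in zip(widths, widths[1:]))

# In the image, a normal lane narrows towards the horizon; an exit or
# entry makes the measured width grow instead:
normal_lane = [100, 96, 92, 88]
exit_lane = [100, 96, 120, 140]
```

Because the check only compares a handful of already-measured widths, it is consistent with the small computing cost (about 2% of the cycle time) reported in the text.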
Vehicles changing lanes within the search area sometimes interfere with the pathway tracker. This problem should be solved in the near future by coupling modules for recognizing such objects ([Solder, Graefe 90], [Regensburger, Graefe 90]) with the pathway tracker.
6. CONCLUSIONS

An algorithm for detecting and tracking a lane on a well-marked highway has been developed. In combination with other recognition modules it forms the basis of a vision-based system for real-time recognition of traffic situations for autonomous road vehicles [Graefe 92]. An implementation of the algorithm on the real-time vision system BVV 3 was tested first in simulation with static scenes and thereafter in real time using video tapes of different road scenes, recorded while driving on highways at speeds up to 80 km/h. Furthermore, it was tested and used in real-world experiments during autonomous driving. The tests proved its great robustness. Certain problems remaining to be solved were discovered during the experiments.
ACKNOWLEDGMENT Part of the work reported has been performed with support from the Ministry of Research and Technology (BMFT) and the German automobile industry within the EUREKA project PROMETHEUS.
REFERENCES

Efenberger, W.; Ta, Q.; Tsinas, L.; Graefe, V. (1992): Automatic Recognition of Vehicles Approaching from Behind. Intelligent Vehicles '92. Detroit, pp 57-62.

Graefe, V. (1990): The BVV-Family of Robot Vision Systems. In O. Kaynak (ed.): Proceedings of the IEEE Workshop on Intelligent Motion Control. Istanbul, pp 55-65.

Graefe, V. (1992): Visual Recognition of Traffic Situations by a Robot Car Driver. Proceedings, 25th ISATA, Conference on Mechatronics. Florence, May, pp 439-446.

Kuhnert, K.-D. (1986): Comparison of Intelligent Real-Time Algorithms for Guiding an Autonomous Vehicle. In L. O. Hertzberger (ed.): Proceedings: Intelligent Autonomous Systems. Amsterdam.

Kuhnert, K.-D. (1988): Zur Echtzeit-Bildfolgenanalyse mit Vorwissen. Dissertation, Fakultät für Luft- und Raumfahrttechnik der Universität der Bundeswehr München.

Liu, F. Y.; Wershofen, K. P. (1992): An Approach to the Robust Classification of Pathway Images for Autonomous Mobile Robots. Proceedings, IEEE International Symposium on Industrial Electronics. Xian, pp 390-394.

Moebius, J. (1986): Ein Verfahren zur Erkennung des Straßenverlaufs im Fahrerdisplay durch Texturanalyse. Informatik Fachberichte, 8. DAGM 1986, pp 176-180.

Regensburger, U.; Graefe, V. (1990): Object Classification for Obstacle Avoidance. Proceedings of the SPIE Symposium on Advances in Intelligent Systems. Boston, pp 112-119.

Solder, U.; Graefe, V. (1990): Object Detection in Real Time. Proceedings of the SPIE Symposium on Advances in Intelligent Systems. Boston, pp 104-111.

Ulmer, B. (1992): VITA - An Autonomous Road Vehicle (ARV) for Collision Avoidance in Traffic. Intelligent Vehicles '92. Detroit, pp 36-41.