
Copyright © IFAC Intelligent Autonomous Vehicles, Espoo, Finland, 1995

MORE INTELLIGENCE BY KNOWLEDGE-BASED COLOUR-EVALUATION; SIGNAL LIGHT RECOGNITION

Lampros Tsinas

Universität der Bundeswehr München, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany, CompuServe 100340,3147

Abstract: In the past, colour-image processing was mostly applied to static images or to video sequences with a low evaluation rate. Since the beginning of the 1990s, newer and faster equipment has allowed lower cycle times, in accordance with real-time demands. Based on the method of dynamic machine vision, a new application including the extraction of colour information from video sequences in real time is introduced. The paper presents the structure of an algorithm for signal light recognition (SLR) in video images and discusses the results.

Keywords: Autonomous Mobile Robots; Image Processing; Machine Vision; Object Recognition; Vehicles

1. INTRODUCTION

An optical sensor system (at present two pairs of miniature CCD cameras with different focal lengths, mounted on two platforms, one pair viewing to the front and the other pair to the rear of the autonomous car) constitutes the basis for a successful recognition of important features in the vehicle environment and for the navigation of the autonomous mobile robot VaMoRs-P (Dickmanns et al., 1994). Such an intelligent vehicle has been equipped with the capability to recognize important parts of its "world section". This includes (see Fig. 1):

• road recognition (Behringer, 1994; Tsinas and Graefe, 1992; Wershofen, 1992) and
• obstacle detection and tracking (Regensburger, 1993; Thomanek et al., 1994).

As a representative of the next generation of more intelligent robots, VaMoRs-P should also be able to process multi-sensorial input data (video data, odometer data, etc.) at a higher "intelligence" level. The term "intelligence" is used here in the common-sense meaning of being able to handle and control not only normal but also more complex situations and systems.

It is therefore desirable to extend the recognition abilities in order to obtain more global information. This additional information is important for tasks which act on the autonomous decision level, i.e. autonomous lane change, velocity control, etc., as described in (Kujawski, 1995). Some examples of the additional abilities are:

• traffic sign recognition (Estable et al., 1994; Priese et al., 1994),
• extended pathway recognition (differentiation between white and yellow lines) (Aubert and Thorpe, 1990),
• vehicle colour recognition,
• signal light detection and
• recognition of other objects or conditions (e.g. bridges, tunnels, weather conditions, etc.).

Fig. 1 Bird's-eye view of an "extended" traffic situation with signal lights; adapted from (Graefe, 1992).

Fig. 2 An example of a highway scene with a gleaming signal light.

Analogous to human abilities, the autonomous car should also be able to learn some (or all) of the information processing steps (feature extraction, object recognition, navigation, situation perception, etc.).

In this paper the expansion of intelligence properties will be limited to colour evaluation. The main topic of the paper is signal light detection (indicated in the right part of Fig. 1). Fig. 2 (the red excerpt of an original colour image) shows a car with a gleaming signal light. The figure is an example of the scene complexity which has to be interpreted. The area covered by signal lights is less than 1 % (in Fig. 2 ca. 0.03 %) of the image size.

2. SIGNAL LIGHTS IN HIGHWAY TRAFFIC

At first, the important types of signal lights and the order of precedence of the cars which must be taken into account for signal light recognition are discussed. Then, before the presentation of the concept for signal light detection, some measurements of signal light features are given.

2.1 Important types of signal lights

Four different types of signal lights usually exist on cars: flashing lights (turn indicators), stop lights, fog head lights and the warning flasher device. Warning flashers are very seldom used on highways, but they have a great potential for information. Fog head lights indicate foggy weather; under that weather condition autonomous driving is still critical. Nevertheless, these last two types of signal lights are treated with lower priority within the SLR algorithm (the progress in this part of the work will be reported elsewhere). At this stage of development only stop and flashing lights are of higher priority. Moreover, signal lights of motorcycles are not handled, because motorcycles have other arrangements of signal lights and cannot be detected easily in the image: no robust algorithm for motorcycle recognition in video images exists.

2.2 Selection of the car to be searched for signal lights

For a complete registration of signal lights a number of other traffic participants must be taken into account, as shown in Fig. 3. There are cars in front of (cars 1, 2 and 3), beside (4 and 5) and behind (6, 7 and 8) the own vehicle (i.e. the autonomous vehicle).

The limits of the available image-processing capacity (cost factor) and the restrictions given by the hardware (processor performance) lead to a concentration of the computation time for image interpretation on only one car. The most important car for signal light recognition is the detected car in front in the own lane. In this decision it has been considered that both stop lights and flashing lights are of interest for the own vehicle. If the car in front (car no. 1) is braking, then the own car has (after stop light detection or detection of a reduced distance between both cars) also to brake, and possibly to alter its velocity. On the other hand, if the car in front indicates a lane change to the left (respectively to the right, e.g. in the U.K.) when the intention of the autonomous car was also to change the lane, the behaviour decision algorithm (Kujawski, 1995) must change to the lane-keeping mode.
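To make this coupling of SLR output to the behaviour decision level concrete, the following minimal sketch illustrates the two reactions just described. It is not the authors' implementation; all names and the SLRResult structure are hypothetical, and the real decision logic is described in (Kujawski, 1995).

```python
# Minimal sketch (not the authors' implementation) of how SLR results for the
# car in front could feed the behaviour decision level; all names are
# hypothetical, cf. (Kujawski, 1995) for the actual decision logic.
from dataclasses import dataclass

@dataclass
class SLRResult:
    stop_lights: bool      # stop lights detected on the car in front
    flashing_left: bool    # left flashing light (turn indicator) detected
    flashing_right: bool   # right flashing light detected

def decide_behaviour(slr: SLRResult, distance_reduced: bool,
                     own_lane_change: str | None) -> str:
    """Return a (hypothetical) behaviour mode for the own vehicle."""
    # Braking car in front (stop lights or shrinking gap) -> brake as well.
    if slr.stop_lights or distance_reduced:
        return "brake_and_adapt_velocity"
    # Car in front indicates a change into the lane we intended to take
    # -> fall back to lane keeping.
    if (own_lane_change == "left" and slr.flashing_left) or \
       (own_lane_change == "right" and slr.flashing_right):
        return "lane_keeping"
    return "keep_current_plan"
```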

2.3 Measurements of signal light features (colour, frequency)

Colour. By consideration of a large number of signal lights, as seen with the camera and tele lens (focal length of 24 mm with a 1/2" CCD sensor) at various distances, their colours have been measured. The colour is given in ISH colour components (intensity, saturation and hue), which are computed from RGB colour values. ISH is a derivative of the HSI colour space (Tennenbaum, 1974). In ISH the original HSI hue-transformation formula is not applied, but the one proposed by Smith (1978). The modification is an advantage over the original transformation because it is 5 times faster. Fast hue transformation is critical when hue has to be computed for large image parts (such as for signal light recognition).

Because the utilized Teli camera (Teli, 1993) has an automatic shutter, it tends to expose the CCD sensor for each image more than is good for the recognition of highly (partly self-) illuminated objects in the image. The consequence is that especially the signal light colours overlap (Table 1). Therefore signal lights often cannot be differentiated based on colour attributes alone, and additional features are needed.

Table 1. Measured ISH values for flashing and stop lights

                    H            S           I
flashing light   230 ... 50   15 ... 50   120 ... 170
stop light       220 ... 10   25 ... 85    90 ... 155
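As an illustration of the conversion described above, the following sketch computes ISH values with Smith's fast hexcone hue formula (Smith, 1978). It is not the original code; the 8-bit 0...255 scaling of all three channels and the choice of max(R,G,B) as the intensity channel are assumptions (the paper does not give the exact I and S formulas).

```python
# Sketch of the RGB -> ISH conversion (illustrative, not the original code).
# Assumptions: 8-bit R, G, B inputs; I, S, H scaled to 0...255; hue computed
# by Smith's piecewise-linear hexcone formula instead of the trigonometric
# HSI formula, which avoids costly operations per pixel.

def rgb_to_ish(r: int, g: int, b: int) -> tuple[int, int, int]:
    v = max(r, g, b)                          # intensity channel (assumed)
    delta = v - min(r, g, b)
    s = 0 if v == 0 else 255 * delta // v     # saturation
    if delta == 0:                            # achromatic: hue undefined
        return v, 0, 0
    # Smith's hexcone hue: one division, no trigonometry.
    if v == r:
        h6 = (g - b) / delta                  # between yellow and magenta
    elif v == g:
        h6 = 2.0 + (b - r) / delta            # between cyan and yellow
    else:
        h6 = 4.0 + (r - g) / delta            # between magenta and cyan
    h = int((h6 % 6.0) * 256 / 6) % 256       # map 0..6 onto 0...255
    return v, s, h

# Example: a saturated red pixel yields a hue near 0 (red), consistent with
# the wrap-around hue ranges of Table 1.
print(rgb_to_ish(200, 30, 20))                # -> (200, 229, 2)
```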

Frequency. If a flashing light is blinking (periodically gleaming), then the frequency (about 1.5 Hz) is another signal light feature in addition to colour (Fig. 4).

Fig. 3 The autonomous mobile system (AMS) in the center and other traffic participants in front, beside and behind it.

Fig. 4 Intensity diagram of a blinking flashing light (with a 20 ms time interval between two image numbers).


3. CONCEPT OF SIGNAL LIGHT RECOGNITION

Multi-filter concept. The method selected to solve the task of signal light recognition consists of dynamic machine vision and feature extraction in a reasonable cycle time, utilizing a set of feature-extracting filters. Dynamic vision (Dickmanns and Graefe, 1988a, b) was applied in order to concentrate the interest on task-relevant parts of the image. The filters (described in the sequel) serve to segment the images based on their colour attributes (Table 1) and to extract or verify other typical signal light features in the image.

Search-area selection. Image sections of interest for signal light recognition are parts with cars. In VaMoRs-P (Dickmanns et al., 1994) the car finder ODT has been installed (Thomanek et al., 1994). With ODT up to 5 obstacles (cars) may be tracked in 80 ms (cycle time). From 3D knowledge about car positions, extracted by ODT from image sequences, and additionally knowledge about the lane position of VaMoRs-P (Behringer, 1994), the object in front in the own lane can be selected for signal light detection and classification. In order to reduce the processing time for the image classification, the search area is reduced to the image area representing only the selected car (Fig. 5).

Another knowledge-based reduction of the search area follows from the fact that signal lights are placed only in the lower part of the car (as seen from behind). Therefore the upper search-area border is given by a height parameter. After measurements of cars concerning the arrangement of signal lights and some signal light recognition experiments, the required height was set to 1 m (given in 3D car data and transformed into image coordinates) above the lower search-area border (Fig. 6).

Fig. 5 Search area around the detected car.

Fig. 6 Reduced (optimized) search area.

Colour filters. The most significant feature of signal lights is their colour. During the recognition process those pixels that fulfil the colour criteria (as expected from Table 1) are classified as (part of) signal light candidates. This colour classification is realized sequentially, beginning with the intensity criterion. If the intensity is within the allowed intensity values (in accordance with Table 1), the next criterion (saturation) is checked. The last check concerns hue, because the hue computation (from R, G and B values) needs much more time compared to the S or I computation. If a pixel has not passed the I- or S-filter, hue need not be computed.

A set of colour-filtered pixels in immediate neighbourhood is then combined into one image segment (region growing) and handled as one object (signal light candidate). A sketch of this early-exit classification and the region growing is given below.
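The following sketch illustrates the sequential I-then-S-then-H pixel test with lazy hue evaluation, and a simple region growing over the resulting binary mask. It is illustrative only (not the original transputer code); the thresholds are taken from the stop-light row of Table 1, and the hue comparison assumes a wrap-around at red.

```python
# Sketch of the sequential colour classification and region growing described
# above (illustrative; thresholds from Table 1, hue wraps around at red).
from collections import deque

def in_wrapped_range(h: int, lo: int, hi: int) -> bool:
    """True if hue h lies in [lo, hi]; lo > hi means the range wraps at 0."""
    return lo <= h <= hi if lo <= hi else (h >= lo or h <= hi)

def is_stop_light_pixel(i: int, s: int, hue_func) -> bool:
    # Cheap tests first: intensity, then saturation; hue only if still needed.
    if not (90 <= i <= 155):
        return False
    if not (25 <= s <= 85):
        return False
    return in_wrapped_range(hue_func(), 220, 10)   # hue computed lazily

def grow_regions(mask):
    """4-neighbour region growing over a binary mask -> list of segments."""
    rows, cols = len(mask), len(mask[0])
    seen, segments = set(), []
    for y in range(rows):
        for x in range(cols):
            if mask[y][x] and (y, x) not in seen:
                queue, seg = deque([(y, x)]), []
                seen.add((y, x))
                while queue:
                    cy, cx = queue.popleft()
                    seg.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                segments.append(seg)   # each segment = one candidate object
    return segments
```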

Other filters. A set of further filters performs the verification process for those image sections which were segmented as signal light candidates.

The "field" -filter checks if the segments found have sizes (respectively area) over a required (parameter given) minimum size. Usually for robust classification the field-size requires an area of 12 pixels (arranged as 2*6, 3*4, etc. pixels of the 512*512 pixel image-size). That leads to an ideal maximum distance for detection of signal lights at ca. 40-50 m, when a 25 mm tele lens on the camera Teli CS5130P (Teli, 1993) is used.


On the other hand, the "area" filter interrupts the verification process if a segment is bigger than a maximum size (also given as a parameter). This size is the maximally allowed relative size (i.e. 10 %) of signal lights in the search area.

The "flash" filter differentiates between simply luminous and periodically shining signal lights (flashing lights or warning flashers). Like a low-pass filter, the "flash" filter allows the passage of signals (extracted from the image sequence) with frequencies around 1.5 Hz; a sketch of such a frequency test follows below.
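As an illustration of the "flash" filter, the following sketch estimates the blink frequency of a candidate from a thresholded intensity time series. It is hypothetical: the paper gives only the 20 ms sampling interval of Fig. 4 and the nominal 1.5 Hz blinking frequency, so the threshold and tolerance values are assumptions.

```python
# Sketch of a "flash" filter test (illustrative; not the original algorithm).
# Input: mean segment intensities sampled every 20 ms (cf. Fig. 4).

def blink_frequency_hz(intensities, dt_s=0.020, threshold=128):
    """Estimate blink frequency from rising edges of the thresholded signal."""
    on = [i > threshold for i in intensities]
    rising = [k for k in range(1, len(on)) if on[k] and not on[k - 1]]
    if len(rising) < 2:
        return 0.0                        # fewer than two cycles observed
    span_s = dt_s * (rising[-1] - rising[0])
    return (len(rising) - 1) / span_s

def passes_flash_filter(intensities, nominal_hz=1.5, tol_hz=0.5):
    """Accept candidates blinking at roughly the nominal 1.5 Hz."""
    return abs(blink_frequency_hz(intensities) - nominal_hz) <= tol_hz

# Example: ~0.34 s on, ~0.34 s off (about 1.5 Hz) over three periods.
samples = ([200] * 17 + [40] * 17) * 3
print(passes_flash_filter(samples))       # -> True (estimate ca. 1.47 Hz)
```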


The "quantity/position"-filter is a complex filter, which can distinguish (based on quantity) between flashing light and stop lights, and (based on position) between left and right flashing lights.

4. SLR IMPLEMENTATION

The presented parts of the signal light recognition (SLR) algorithm have been combined and implemented on the Transputer Image Processing system TIP. The system has three components:

a) the Colour Frame Grabber card (CFG) with an Inmos T800 transputer processor,
b) a Versatile Processing Unit (VPU), also with a T800 and additionally with a T400 transputer, and
c) the device for the output of images on monitors (CGD, Colour Graphic Display).

The video bus is used to distribute the (digitized) camera images from the CFG to the other two TIP-bus subsystems (VPU and CGD).

Fig. 7 Process splitting for SLR on particular hardware components.

Fig. 8 A car with a detected left flashing light (marked with a square at the bottom left).

The search area (Fig. 6) is divided into two parts (left and right). The right/left part of the image is processed in parallel on the CFG/VPU processors with the colour filters. If some parts of the image fulfil (and pass through) the colour filters, the attributes of these parts (upper-left and lower-right coordinates) form a set of signal light attributes, and all candidates are combined in an object structure on the CFG processor, where the other (faster) filters are implemented. If signal lights (a flashing light or stop lights) have passed all the filters, the result is marked on the image with small square(s) at the bottom (one square at the left/right indicates a left/right flashing light, two centered squares indicate stop lights). A sketch of this left/right split is given below.
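As an illustration of the process splitting (Fig. 7), the following sketch filters the two halves of the search area concurrently and merges the candidates. It is hypothetical: Python threads merely stand in for the CFG/VPU transputers of the real system, and the pixel test is passed in as a callable.

```python
# Sketch of the left/right process splitting (illustrative; in the real
# system the two halves run on the CFG and VPU transputers, not threads).
from concurrent.futures import ThreadPoolExecutor

def colour_filter_half(image, x0, x1, pixel_test):
    """Return candidate pixel coordinates within columns x0..x1-1."""
    return [(y, x) for y, row in enumerate(image)
            for x in range(x0, x1) if pixel_test(row[x])]

def filter_search_area(image, pixel_test):
    width = len(image[0])
    mid = width // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(colour_filter_half, image, 0, mid, pixel_test)
        right = pool.submit(colour_filter_half, image, mid, width, pixel_test)
        # Merge candidates into one structure (on the CFG in the real system).
        return left.result() + right.result()
```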

5. EXPERIMENTS

The presented algorithm has been tested with several sequences of video scenes (recorded on video tape while driving on the Autobahn at a speed of about 100 km/h) or using the equipment of the test vehicle VaMoRs-P in the test area at the Universität der Bundeswehr München. The recorded scenes were replayed and processed by the vision system in the laboratory in real time. This development phase permitted experiments under dynamic conditions and further refinement of individual programming steps. For both experimental environments the hardware used was the same. One difference is that the lab version has no support from the car finder (ODT); the car finder (search-area filter) is simulated using the keyboard, with the position of the search area selected on-line during the test run (size adaptation is also possible, but mostly not used).

In both experimental environments signal light recognition with SLR was feasible. Fig. 8 and Fig. 9 show examples of detected signal lights (a left flashing light and stop lights). The cycle time for the classification of an image area of 250*50 pixels is about 80-100 ms (4-5 video cycles) and depends on the size of the search area and the complexity of the scene. After the signal lights have been detected, they are tracked; the search area is then reduced and placed around the signal light position(s) found. That leads to a faster (re-)recognition time (ca. 20 ms).

The remaining problem is the poor differentiation of signal lights due to camera restrictions. The camera-registered colours of flashing lights and stop lights can be very similar, or even identical. A camera with task-dependent shutter selection would be useful for the classification of images with highly illuminated/gleaming parts.



Fig. 9 A car and the recognized stop lights (indicated by squares centered at the bottom).

Besides new camera technology, faster hardware could allow the classification of more than one image section (car) and thus a more comprehensive situation recognition.

6. CONCLUSION

A new image processing application for signal light recognition (flashing lights and stop lights), namely SLR, has been presented. The algorithm has been implemented on standard transputer hardware. Future work may concentrate on higher system robustness using new cameras and image processing systems.

7. ACKNOWLEDGEMENTS

This work has been supported by the 'Bundesministerium für Forschung und Technologie' of the Federal Republic of Germany and by Daimler-Benz AG under the Eureka grant 'Prometheus III'.

REFERENCES

Behringer, R. (1994). Road Recognition from Multifocal Vision. In: Proceedings of the IEEE Symposium on Intelligent Vehicles '94, pp. 302-307. Paris, October 1994.

Dickmanns, E.D. and V. Graefe (1988a). Dynamic Monocular Machine Vision. Machine Vision and Applications 1, pp. 223-240. Springer.

Dickmanns, E.D. and V. Graefe (1988b). Applications of Dynamic Monocular Machine Vision. Machine Vision and Applications 1, pp. 241-261. Springer.

Dickmanns, E.D., R. Behringer, D. Dickmanns, T. Hildebrandt, M. Maurer, F. Thomanek and J. Schiehlen (1994). The Seeing Passenger Car 'VaMoRs-P'. In: Proceedings of the IEEE Symposium on Intelligent Vehicles '94, pp. 68-73. Paris, October 1994.

Estable, S., J. Schick, F. Stein, R. Janssen, R. Ott, W. Ritter and Y.-J. Zheng (1994). A Real-Time Traffic Sign Recognition System. In: Proceedings of the IEEE Symposium on Intelligent Vehicles '94, pp. 213-218. Paris, October 1994.

Graefe, V. (1992). Visual Recognition of Traffic Situations by a Robot Car Driver. In: Proceedings of the 25th ISATA Conference on Mechatronics, pp. 439-446. Florence.

Kujawski, C. (1995). Deciding the Behaviour of an Autonomous Mobile Road Vehicle in Complex Traffic Situations. In: Proceedings of the 2nd IFAC Conference on Intelligent Autonomous Vehicles '95. Helsinki, June 1995.

Priese, L., J. Klieber, R. Lakmann, V. Rehrmann and R. Schian (1994). New Results on Traffic Sign Recognition. In: Proceedings of the IEEE Symposium on Intelligent Vehicles '94, pp. 249-254. Paris, October 1994.

Regensburger, U. (1993). Zur Erkennung von Hindernissen in der Bahn eines Straßenfahrzeuges durch maschinelles Echtzeitsehen (On the recognition of obstacles in the path of a road vehicle by real-time machine vision). Dissertation, Fakultät für Luft- und Raumfahrttechnik, Universität der Bundeswehr München.

Smith, A.R. (1978). Color Gamut Transform Pairs. Computer Graphics, Vol. 12, pp. 12-19.

Teli (1993). Service Manual for Teli CS5130P.

Tennenbaum, J.M. (1974). An Interactive Facility for Scene Analysis Research. Technical Note 87, SRI Project 1187, Artificial Intelligence Center, Stanford Research Institute, Menlo Park, California. January 1974.

Thomanek, F., E.D. Dickmanns and D. Dickmanns (1994). Multiple Object Recognition and Scene Interpretation for Autonomous Road Vehicle Guidance. In: Proceedings of the IEEE Symposium on Intelligent Vehicles '94, pp. 231-236. Paris, October 1994.

Tsinas, L. and V. Graefe (1992). Automatic Recognition of Lanes for Highway Driving. In: Proceedings of the IFAC Conference on Motion Control for Intelligent Automation, pp. 295-300. Perugia, October 1992.

Wershofen, K.P. (1992). Real-Time Road Scene Classification Based on a Multiple-Lane Tracker. In: Proceedings of the IECON Conference. San Diego, November 1992.