Hybrid approach to road detection in front of the vehicle


10th IFAC Symposium on Intelligent Autonomous Vehicles, Gdansk, Poland, July 3-5, 2019


IFAC PapersOnLine 52-8 (2019) 245–250


Marcin Ochman
Faculty of Electronics, Wroclaw University of Science and Technology, ul. Janiszewskiego 11/17, 50-372 Wrocław, Poland, [email protected], https://orcid.org/0000-0002-8075-0033

Abstract: Recently, autonomous cars have become the main aim of the automotive industry. Building them involves many image processing problems; one of them is road detection, which is the main topic of this article. The purpose of this paper is to compare two different approaches to road detection using images from a camera located in front of the vehicle: a Multilayer Perceptron and a method based on illuminant invariance. Further, a new approach is proposed which combines the two mentioned methods to reach better accuracy and higher robustness than either single method. The comparison of the algorithms was based on images from the KITTI dataset. The algorithms were examined in terms of execution time and classification metrics.

© 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

Keywords: Advanced Driver Assistance Systems; Recognition and Learning; Robot Navigation, Programming and Vision

1. INTRODUCTION

Fast technological progress can be seen in many areas of human life, and the automotive industry is no exception. The improvement is mainly noticeable in traffic safety: emergency braking systems, lane keep assist or collision prevention while changing lanes are examples of systems which support drivers. The increasing number of vehicles on public roads implies more traffic accidents, which is the main reason for developing new technology that makes cars safer and more intelligent Japan Automobile Manufacturers Association (2009).

Recently, the automotive industry has aimed to provide cars with autonomous features. One of the key engineering problems is vehicle localization. Advanced, modern cars are equipped with a collection of diverse sensors including cameras, radars, LIDAR and ultrasonic sensors. Cameras track the environment of the automobile, and the one located in front of the vehicle can be used for road boundaries detection.

The structure of this document is as follows. The next part of the article gives an overview of the existing work. In section 4 road detection is introduced and the difficulties related to the problem are described. The used algorithms are presented in section 5. Finally, section 6 demonstrates the results of the tests.

2. RELATED WORK

There are many available methods trying to solve the road detection problem. In Hillel et al. (2014) a description of recent progress on road and lane detection was prepared. Recent developments can be grouped into two categories depending on the occurrence of road lines.

The first group uses road markings to detect boundaries, and many examples of such algorithms can be found. In Wang et al. (2004) the B-Snake algorithm was proposed. Another model-based approach is Liu et al. (2003), where a square model was used to approximate the road shape and a genetic algorithm allowed to find the optimal parameters of the model. The lack of generality, caused by assumptions about the existence of road markings, leads to very limited practical usage of such algorithms.

The second group provides generic solutions, meaning that these algorithms are based only on road properties. Basic approaches used information about color He et al. (2004); the requirement of robustness to shadows and lighting conditions disqualified such methods for practical usage. Many solutions use machine learning, including SVMs and neural networks. Generally, color and texture information is used to classify a segment or a pixel of the image Alvarez et al. (2010); Shinzato et al. (2012); Su et al. (2017). The increasing popularity of deep learning led to the usage of deep neural networks for road segmentation Oliveira et al. (2016); Laddha et al. (2016).

Numerous studies have attempted to adopt more than one camera. Stereo-vision based approaches use a disparity map obtained by stereo matching. In Wang et al. (2016) the road is detected based on color, texture and normals which were calculated from the disparity map. In Gu et al. (2018); Hu et al. (2014) the fusion of 3D-LiDAR and camera data allowed to determine road boundaries.

3. MOTIVATION

As shown in section 2, there are many approaches to solve the road detection problem. Previous work has focused on developing only one technique. Each algorithm has its advantages and disadvantages. Despite the high requirements of the automotive industry for robustness and high accuracy in different conditions, a major issue of such methods is that they are expected to fail under some circumstances.


Table 1. Maximum traveled distance driving 140 km/h between sequential frames.

N_fps    t_fr [ms]    s_max [m]
10       100.00       3.89
30        33.33       1.30
60        16.67       0.65

Fig. 1. Example of camera setup on the car.

To avoid this uncertainty a new, hybrid solution is proposed which uses two combined schemes. As a result, better overall accuracy and a smaller spread of failures are expected.

4. PROBLEM DESCRIPTION

Road detection is a two-step process. At the beginning the road boundaries should be found. Afterwards, the car localization in relation to the determined road is calculated Cheng (2011); Alvarez and Lopez (2011). This article is focused on the first part, and in the next sections road detection is assumed to mean road boundaries detection only.

Fig. 2. Intense rain decreases visibility.

Road detection can be formulated as a binary classification problem of image pixels from a camera located in front of the vehicle (presented in figure 1). A pixel can be classified as road or non-road. Let us assume that A is the set of all pixels of the image which present road. The problem of the road detection algorithm may then be formulated as finding a function f of pixel x defined as:

f(x) = \begin{cases} 1, & x \in A \\ 0, & x \notin A \end{cases}    (1)
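As a minimal illustration of equation (1), the sketch below treats the road set A as a binary ground-truth mask (road pixels white/1, non-road black/0), matching the modified KITTI annotations described later in section 6.1. The function and mask names are illustrative only.

```python
import numpy as np

def f(pixel_xy, road_mask):
    """Equation (1): return 1 if pixel (row, col) belongs to the road set A, else 0."""
    row, col = pixel_xy
    return 1 if road_mask[row, col] else 0

# Example: a 4x4 image whose bottom two rows are road.
road_mask = np.zeros((4, 4), dtype=bool)
road_mask[2:, :] = True
print(f((3, 1), road_mask))  # -> 1 (road)
print(f((0, 0), road_mask))  # -> 0 (non-road)
```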

4.1 Difficulties

Fig. 3. Cobblestone road.

According to Cheng (2011); Alvarez and Lopez (2011), reliability is one of the most significant features of a road detection algorithm. There are many difficulties and obstacles which the algorithm should be aware of. In case of a road detection failure, the steering algorithm may make a wrong decision and, as a consequence, cause a traffic accident. The next sections describe problems which every road detection algorithm encounters.

Real-time limitations are crucial for algorithms used in autonomous vehicles. In most countries speed limits on highways are not higher than v_max = 140 km/h. It is straightforward to calculate the distance s_max traveled in the period between two frames, t_fr = 1/N_fps, where N_fps is the number of frames per second, using the following formula:

s_{max} = v_{max} t_{fr} = \frac{v_{max}}{N_{fps}}    (2)

Equation 2 shows that decreasing N_fps results in a longer distance traveled by the vehicle between frames. The higher the performance of the algorithm (expressed by the achievable N_fps), the more reliable and safe the steering algorithm might be. Table 1 shows the maximum distance traveled by the vehicle for typical N_fps values, based on equation 2.
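The following short sketch reproduces Table 1 directly from equation (2), for a vehicle travelling at v_max = 140 km/h.

```python
# Distance covered between two consecutive frames at v_max = 140 km/h (equation 2).
V_MAX_KMH = 140.0
V_MAX_MS = V_MAX_KMH / 3.6          # ~38.9 m/s

for n_fps in (10, 30, 60):
    t_fr = 1.0 / n_fps               # frame period [s]
    s_max = V_MAX_MS * t_fr          # equation (2), in metres
    print(f"N_fps={n_fps:2d}  t_fr={t_fr*1000:6.2f} ms  s_max={s_max:.2f} m")
# Output matches Table 1: 3.89 m, 1.30 m and 0.65 m respectively.
```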

The occurrence of seasons implies different weather conditions during the year: in winter snow might fall, while summer might be very warm but violent storms may take place. Apart from recurrent long-term weather changes, the weather very often changes over a shorter period of time. Rain (figure 2), fog or wind is a real challenge for such an algorithm. The time of day or cloudiness causes changes in image brightness. Another phenomenon which might be a difficulty is shadow. Unevenly illuminated roads or different colors, especially coming from banners and neon signs, should also be taken into account.

Roads are made of various materials such as asphalt, concrete or cobblestone (presented in figure 3). This results in different structures and colors, which means that a pure color or pattern detection approach will fail. Gaps in the road, which are common on less popular roads, are also a challenge; they are seen as discontinuities in the image.

5. ALGORITHMS

Two basic algorithms have been tested. Each of them used a different machine learning approach to obtain high accuracy. These were: illuminant invariance (II) and the Multilayer Perceptron (MLP). Finally, a hybrid solution consisting of the two mentioned algorithms combined with a linear formula was proposed. They are described in detail in the following subsections.


Fig. 4. Original image.

5.1 Illuminant invariance

In Finlayson et al. (2006) a new space called the illuminant invariant (II) space was proposed. Its main feature is shadow removal, so using such a space allows a road detection algorithm to achieve robustness to shadows. This idea was introduced in Alvarez and Lopez (2011). According to Alvarez and Lopez (2011), transforming the original image to the II space is based on the projection of the point (ln g_r, ln g_b) onto the l_θ line, where r, g, b are the corresponding pixel values of the red, green and blue channels in RGB space. The formula for the projection is presented in equation 3, while the process is visualized in figure 6. An image from the KITTI dataset converted to the II space is presented in figure 5.

Fig. 5. Figure 4 converted to II space.

Fig. 6. RGB to II conversion.

I_\theta = \frac{\ln g_b + \tan\theta \, \ln g_r}{\sqrt{1 + \tan^2\theta}}    (3)

The algorithm is divided into three steps:
(1) Calibration – run only once. It is intended to identify the θ parameter of the camera by calculating the histogram entropy H_α = -\sum_i p_α(i) \log p_α(i) (where p_α is the normalized histogram of the image converted with angle α) for all training images and each angle α ∈ [0, π], and picking θ which satisfies θ = argmin_α H_α.
(2) Learning – the step in which the λ parameter is found. λ is a threshold for the built nonparametric model (histogram) of road pixels, i.e. the probability p that satisfies p(I|road) ≥ λ.
(3) Road detection – each pixel is classified based on the found parameter λ. The final step is a flood fill algorithm starting from specified locations which are assumed to be road.
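A minimal sketch of the conversion in equation (3) and the entropy-based calibration of step (1) is shown below. The exact chromaticity definition (here g_r = R/G and g_b = B/G), the number of tested angles and the histogram binning are assumptions; steps (2) and (3) (histogram thresholding and flood fill) are omitted.

```python
import numpy as np

def to_illuminant_invariant(rgb, theta):
    """Project each pixel of an RGB image onto the I_theta space of equation (3).
    g_r and g_b are assumed here to be the band ratios R/G and B/G; the exact
    chromaticity definition follows Finlayson et al. (2006) and may differ."""
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    ln_gr, ln_gb = np.log(r / g), np.log(b / g)
    return (ln_gb + np.tan(theta) * ln_gr) / np.sqrt(1.0 + np.tan(theta) ** 2)

def calibrate_theta(images, n_angles=180, n_bins=64):
    """Step (1): pick the angle minimising the mean histogram entropy of the
    invariant images over the training set (binning is an assumption)."""
    best_theta, best_entropy = 0.0, np.inf
    for alpha in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        entropies = []
        for img in images:
            inv = to_illuminant_invariant(img, alpha)
            hist, _ = np.histogram(inv, bins=n_bins)
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            entropies.append(-(p * np.log(p)).sum())   # H_alpha
        mean_h = float(np.mean(entropies))
        if mean_h < best_entropy:
            best_entropy, best_theta = mean_h, alpha
    return best_theta
```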

5.2 MLP

The MLP algorithm for road detection is a three-step process:
(1) The image is divided into small rectangular regions. A region is the basic classification unit, which means that all pixels of the region will be classified as road or non-road.
(2) Features are calculated for each region.
(3) Classification is performed using the trained neural network.

Fig. 7. Neural network used for region classification.

The neural network used for classification is presented in figure 7. There were 8 inputs and 3 hidden layers, each containing 10 neurons with a sigmoid activation function. Finally, there was a single output neuron, also with a sigmoid activation function. Values returned by the network above 0.5 were interpreted as road; otherwise the region was classified as non-road. The position, mean and standard deviation of the region in RGB space were used as inputs for the MLP.
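One possible realization of this region-based classifier is sketched below with scikit-learn, configured to match the described 8-10-10-10-1 sigmoid architecture. The region size, the region labelling rule and the position normalization are assumptions; the original implementation is not specified in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

REGION = 16  # region edge length in pixels -- an assumption, not stated in the paper

def region_features(img, top, left):
    """8 features per region: normalized position (2), RGB mean (3), RGB std (3)."""
    patch = img[top:top + REGION, left:left + REGION].reshape(-1, 3).astype(float)
    h, w = img.shape[:2]
    pos = [top / h, left / w]
    return np.concatenate([pos, patch.mean(axis=0), patch.std(axis=0)])

def image_to_dataset(img, mask):
    """Label a region as road when most of its ground-truth pixels are road."""
    feats, labels = [], []
    for top in range(0, img.shape[0] - REGION + 1, REGION):
        for left in range(0, img.shape[1] - REGION + 1, REGION):
            feats.append(region_features(img, top, left))
            labels.append(int(mask[top:top + REGION, left:left + REGION].mean() > 0.5))
    return np.array(feats), np.array(labels)

# Architecture from figure 7: 8 inputs, three hidden layers of 10 logistic (sigmoid)
# units, one logistic output; predictions above 0.5 are treated as road.
mlp = MLPClassifier(hidden_layer_sizes=(10, 10, 10), activation="logistic",
                    max_iter=2000, random_state=0)
# Usage with hypothetical arrays:
# X, y = image_to_dataset(train_img, train_mask)
# mlp.fit(X, y)
# is_road = mlp.predict_proba(X_test)[:, 1] > 0.5
```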

5.3 Hybrid approach

Section 6 shows that the minimum accuracy of the II algorithm is poor; under some circumstances II is not robust. Our solution to this issue is to use the two methods simultaneously. Equation 4 shows the used formula: k_MLP and k_II = 1 − k_MLP are weight coefficients for r_MLP and r_II, which are the results of the MLP and II algorithms respectively. Pixel x is classified as non-road when the combined result is lower than or equal to the threshold λ; otherwise the pixel is labeled as road. k_MLP, k_II and λ are parameters of the hybrid approach and may be found by optimization algorithms.

f_H(x) = \begin{cases} 0, & k_{MLP} r_{MLP} + k_{II} r_{II} \le \lambda \\ 1, & k_{MLP} r_{MLP} + k_{II} r_{II} > \lambda \end{cases}    (4)

Learning is a two-step process. The basic algorithms were tuned independently. Afterwards, k_MLP and λ were found using the grid search method.
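The sketch below implements equation (4) and a grid search over (k_MLP, λ), scoring by mean pixel accuracy on training images. The grid resolution and the assumption that r_MLP and r_II are per-pixel scores in [0, 1] are illustrative choices, not details given in the paper.

```python
import numpy as np

def hybrid_classify(r_mlp, r_ii, k_mlp, lam):
    """Equation (4): linear combination of the two per-pixel results with
    k_II = 1 - k_MLP, thresholded by lambda."""
    k_ii = 1.0 - k_mlp
    return (k_mlp * r_mlp + k_ii * r_ii > lam).astype(np.uint8)

def grid_search(r_mlp_list, r_ii_list, gt_list, steps=20):
    """Second learning step: exhaustive grid search over (k_MLP, lambda),
    scored by mean pixel accuracy over the training images."""
    best = (0.0, 0.0, -1.0)                      # (k_mlp, lam, accuracy)
    for k_mlp in np.linspace(0.0, 1.0, steps):
        for lam in np.linspace(0.0, 1.0, steps):
            accs = [np.mean(hybrid_classify(rm, ri, k_mlp, lam) == gt)
                    for rm, ri, gt in zip(r_mlp_list, r_ii_list, gt_list)]
            if np.mean(accs) > best[2]:
                best = (k_mlp, lam, float(np.mean(accs)))
    return best
```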


Fig. 8. Modified classified image.

Table 2. Time measurements of the tested algorithms.

            Execution time [ms]
Algorithm   min      max       mean
II          6.43     24.45     7.26
MLP         69.07    249.75    78.70
Hybrid      79.87    279.62    92.27

Table 3. Accuracy of the tested algorithms.

            Accuracy [%]
Algorithm   min      max       mean
II          13.36    99.56     89.46
MLP         77.95    98.87     92.40
Hybrid      79.06    99.349    94.11

6. EXPERIMENTAL RESULTS

Tests were performed on a computer based on an Intel Core i7 3820 with 16 GB RAM. The KITTI dataset, described in detail in section 6.1, was used to measure the accuracy and execution time of the algorithms.


6.1 KITTI dataset


To test and compare all algorithms described in this article, a modified KITTI dataset for road detection (Fritsch et al., 2013) was used. It contains 289 + 290 images divided into two groups: the first group is dedicated to the learning step, the second one is intended for performing the tests. The images have been captured both on highways and in rural areas, so the dataset contains diversified environments. An example of a training image is presented in figure 4. The road area was marked with white color, and non-road pixels were set to black. The modification of the KITTI dataset was caused by the purpose of the described algorithms: since the road is detected in front of the vehicle, the dataset was modified to contain only the current vehicle road. To visualize our changes, the original and transformed training image is shown in figure 8.

Fig. 9. Accuracy dependent on threshold and MLP coefficient.

6.2 Time

The execution time of the algorithm was measured for each test image. Then three parameters were calculated for every algorithm: the average time per image, and the maximum and minimum execution time. These three parameters allow to fully compare the algorithms regarding real-time requirements. Table 2 presents the execution times for each of the algorithms. Due to the complexity of the hybrid approach, it is the worst solution regarding time performance. However, current processors may help to improve the times through parallelisation.

6.3 Classification

To compare the described algorithms, the accuracy for every image (indexed by i) was measured separately (a_i). Accuracy is the number of correctly classified pixels divided by the total number of pixels of a single image. Then, the minimum (a_min), maximum (a_max) and average (a_average) accuracy were calculated according to equation 5, where N = 290 is the total number of test images. Table 3 shows the results of the tests. Illustrations of a well detected road and of an algorithm failure are presented in figures 14 and 13 respectively.

a_{average} = \frac{1}{N} \sum_{i=1}^{N} a_i, \quad a_{min} = \min\{a_1, a_2, ..., a_N\}, \quad a_{max} = \max\{a_1, a_2, ..., a_N\}    (5)
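A minimal sketch of the metrics in equation (5) is given below; the same aggregation can be applied to the per-image execution times of Table 2. Variable names are illustrative.

```python
import numpy as np

def pixel_accuracy(pred_mask, gt_mask):
    """Accuracy of a single image: correctly classified pixels / all pixels."""
    return np.mean(pred_mask == gt_mask)

def summarize(values):
    """Minimum, maximum and average over the N test images (equation 5)."""
    values = np.asarray(values, dtype=float)
    return values.min(), values.max(), values.mean()

# accuracies = [pixel_accuracy(p, g) for p, g in zip(predictions, ground_truths)]
# a_min, a_max, a_avg = summarize(accuracies)
```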

The differences between the new approach and the other algorithms are crucial. Linearly combining the results of the two algorithms significantly improved both the minimum and the mean accuracy. Figure 9 shows how k_MLP and λ affected the accuracy. The distribution of accuracy has also been investigated; figures 10-12 present the results. As can be seen, the best distribution in terms of accuracy range is achieved by the hybrid algorithm. The smaller the spread, the better the distribution and the stability of the algorithm. As opposed to the hybrid solution, the II algorithm has a higher spread, which leads to uncertainty of detection and random failures.



Fig. 10. Accuracy for all test images from KITTI dataset for II algorithm.

Fig. 13. Example of algorithm low accuracy.


Fig. 11. Accuracy for all test images from KITTI dataset for MLP algorithm.


Fig. 14. Example of well detected road.

Fig. 12. Accuracy for all test images from KITTI dataset for hybrid algorithm.

7. CONCLUSION

The expected results were confirmed: using two combined, independent algorithms increased robustness. The II algorithm supported by the MLP achieves a mean accuracy of about 94%. Not only has the mean accuracy been improved; the minimum accuracy for a single image was also raised, from about 13% to 79%. The combined algorithms reached higher stability and robustness, which is confirmed by the calculated distribution of single-image accuracy.


As future work, multithreading may be implemented to decrease the execution time of the algorithms. Another possible way to improve the described solution is to combine other algorithms which are characterized by better standalone accuracy.


REFERENCES

Alvarez, J.M.A. and Lopez, A.M. (2011). Road detection based on illuminant invariance. IEEE Transactions on Intelligent Transportation Systems, 12(1), 184–193.
Alvarez, J.M., Gevers, T., and Lopez, A.M. (2010). 3D scene priors for road detection. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE.
Cheng, H. (2011). Autonomous Intelligent Vehicles. Springer London.
Finlayson, G.D., Hordley, S.D., Lu, C., and Drew, M.S. (2006). On the removal of shadows from images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(1), 59–68.
Fritsch, J., Kuehnl, T., and Geiger, A. (2013). A new performance measure and evaluation benchmark for road detection algorithms. In International Conference on Intelligent Transportation Systems (ITSC).
Gu, S., Lu, T., Zhang, Y., Alvarez, J.M., Yang, J., and Kong, H. (2018). 3-D LiDAR + monocular camera: An inverse-depth-induced fusion framework for urban road detection. IEEE Transactions on Intelligent Vehicles, 3(3), 351–360.
He, Y., Wang, H., and Zhang, B. (2004). Color-based road detection in urban traffic scenes. IEEE Transactions on Intelligent Transportation Systems, 5(4), 309–318.
Hillel, A.B., Lerner, R., Levi, D., and Raz, G. (2014). Recent progress in road and lane detection: a survey. Machine Vision and Applications, 25(3), 727–745.
Hu, X., Rodriguez, F.S.A., and Gepperth, A. (2014). A multi-modal system for road detection and segmentation. In 2014 IEEE Intelligent Vehicles Symposium Proceedings. IEEE.
Japan Automobile Manufacturers Association (2009). Automotive technologies in Japan.
Laddha, A., Kocamaz, M.K., Navarro-Serment, L.E., and Hebert, M. (2016). Map-supervised road detection. In 2016 IEEE Intelligent Vehicles Symposium (IV). IEEE.
Liu, T., Zheng, N., Cheng, H., and Xing, Z. (2003). A novel approach of road recognition based on deformable template and genetic algorithm. In Proceedings of the 2003 IEEE International Conference on Intelligent Transportation Systems, volume 2, 1251–1256.
Oliveira, G.L., Burgard, W., and Brox, T. (2016). Efficient deep models for monocular road segmentation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Shinzato, P.Y., Grassi, V., Osorio, F.S., and Wolf, D.F. (2012). Fast visual road recognition and horizon detection using multiple artificial neural networks. In 2012 IEEE Intelligent Vehicles Symposium, 1090–1095.
Su, Y., Zhang, Y., Alvarez, J.M., and Kong, H. (2017). An illumination-invariant nonparametric model for urban road detection using monocular camera and single-line lidar. In 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE.
Wang, L., Wu, T., Xiao, Z., Xiao, L., Zhao, D., and Han, J. (2016). Multi-cue road boundary detection using stereo vision. In 2016 IEEE International Conference on Vehicular Electronics and Safety (ICVES). IEEE.
Wang, Y., Teoh, E.K., and Shen, D. (2004). Lane detection and tracking using B-Snake. Image and Vision Computing, 22(4), 269–280.