
10th IFAC Symposium on Intelligent Autonomous Vehicles
Gdansk, Poland, July 3-5, 2019

Available online at www.sciencedirect.com

IFAC PapersOnLine 52-8 (2019) 130–135

A Geometric Model based 2D LiDAR/Radar Sensor Fusion for Tracking Surrounding Vehicles

Hojoon Lee*, Heungseok Chae**, and Kyongsu Yi***

*Mechanical Engineering Department, Seoul National University, Seoul, Korea (e-mail: [email protected])
**Mechanical Engineering Department, Seoul National University, Seoul, Korea (e-mail: [email protected])
***Mechanical Engineering Department, Seoul National University, Seoul, Korea (e-mail: [email protected])

Abstract: This paper presents a novel and efficient sensor fusion system for tracking multiple moving vehicles with Light Detection and Ranging (LiDAR) and Radar in autonomous vehicles. What is important in sensor fusion using LiDAR and Radar is how well the characteristics of each sensor are utilized. The proposed fusion system improves the estimation accuracy and the maximum perception distance for target vehicles by utilizing LiDAR, which has high distance accuracy, and Radar, which has a wide Field of View (FOV) and observability of relative speed. In addition, multiple hypothesis tracking (MHT) based track management and extended Kalman filter (EKF) based filtering enable multitarget tracking with less computational complexity than particle filter-based methods. Three measurement models are designed according to the types of measurement assigned to each track, and the tracks are updated optimally according to each model. The proposed fusion system is evaluated through real vehicle tests with an RT-Range.

© 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2019.08.060

Keywords: Autonomous vehicles, multitarget tracking, sensor fusion, extended Kalman filters, point cloud, pose estimation.

1. INTRODUCTION

The equipment rate of Advanced Driver Assistance Systems (ADAS) such as the Lane Keeping Assistance System (LKAS), Adaptive Cruise Control (ACC), and the Emergency Braking System (AEB) has increased in order to reduce accidents caused by the growing number of elderly drivers and the deterioration of driving ability. In addition, there is an increasing demand to make AEB and LDWS mandatory for some vehicles (Qureshi & Abdullah, 2013). The performance of ADAS in mass-produced vehicles has also improved continuously; for example, systems that in the past could operate only on the expressway can nowadays also operate in a traffic jam (AutoNet2030, 2014). As driver assistance technology becomes more popular and its performance improves, research is being actively conducted on systems that are capable of autonomous driving beyond driver assistance.

LiDAR measures the environment in the form of a point cloud by measuring the intensity and the flight time of the laser reflected from an object after the laser is emitted from the sensor. Because it uses a laser with a shorter wavelength than Radar, the environment can be recognized with higher resolution and accuracy. For this reason, LiDAR has been widely installed in autonomous vehicles since the Urban Grand Challenge (UGC) in 2007. In addition, as Audi began to mass-produce vehicles capable of level 3 autonomous driving with LiDAR, perceiving the surrounding vehicles using LiDAR has become more important. However, since LiDAR uses optical equipment, it is difficult to guarantee its durability on the vehicle, and its price is also higher than that of Radar or a camera.

Perception of surrounding vehicles using LiDAR has been studied widely for a decade. Schueler et al. proposed a best-knowledge model to track LiDAR segments without ghost motion (Schueler et al., 2012). Petrovskaya et al. proposed dynamic and geometric model based vehicle detection and tracking; their approach requires neither segmentation nor a vehicle size assumption, but it needs high computation power (Petrovskaya & Thrun, 2009). In our study, the problem is divided into two parts to handle the point cloud with low computation power. The first step is to extract the shape of the target vehicle from the point cloud, and the second step is to estimate the target information by tracking an arbitrary number of targets through the extracted shapes.

Radar measures the surrounding environment by measuring the time, angle, and intensity of reflected radio waves, which have a longer wavelength than LiDAR. Because of the wavelength, its resolution and accuracy are lower than those of LiDAR, but it is robust against occlusion. Also, since it can measure a wide area as a static sensor without moving parts, its durability can be guaranteed when mounted on the vehicle. It also has a long measuring range and can directly measure the relative velocity through the Doppler effect. Therefore, LiDAR and Radar have mutually complementary characteristics because the wavelengths of



the electromagnetic waves used are different. Therefore, proper sensor fusion considering the characteristics of the two sensors can improve the perception performance for autonomous driving. For this reason, fusion of the two sensors has been studied continuously until recently.


The structure for sensor fusion is fundamentally divided into two types. The first is a method of collecting the measurement information of all sensors in one place and then making a global estimate from it (Munz et al., 2010). The centralized Kalman filter has this conventional structure. This type of fusion can fully describe the characteristics of each sensor and theoretically enables optimal state estimation (Baig et al., 2011). The second is the track-to-track fusion method, in which each sensor tracks objects independently and the tracked results from the individual sensors are collected and fused to make a final estimate (Mendes et al., 2004). Since this method can be modularized, it has the merit that it can be used with various sensor configurations in one fusion structure (Li et al., 2013).

Centralized Kalman filter based fusion is a typical approach to track moving vehicles. Göhring et al. (2011) proposed a representative study of LiDAR and Radar sensor fusion. In that study, sensor fusion was performed under the assumption of a straight highway path, and the estimation performance for position and speed was improved compared with estimation using only a single LiDAR or Radar. However, in using the relative speed information of the Radar, it is assumed that the heading of the target vehicle is the same as that of the host vehicle and does not differ largely from the measuring direction of the Radar. In other words, the scope of the study was limited to the highway environment. To tackle these issues, this study proposes a measurement model that takes into account the relative position and heading of the target vehicle as well as the measurement position of the Radar, and makes it possible to update the track appropriately regardless of the presence or absence of a LiDAR measurement.

The remainder of the paper is organized as follows. In Section 2, the hardware configuration of the autonomous driving vehicle and the fusion system are described; the shape extraction from the LiDAR point cloud is also depicted in Section 2. The details of the centralized Kalman filter for sensor fusion are described in Section 3. In Section 4, the proposed system is verified by vehicle tests. Finally, concluding remarks are given in Section 5.

2. SYSTEM OVERVIEW

2.1 Vehicle Platform

The vehicle used in this study is a K5 equipped with environmental sensors for autonomous driving. Figure 1 shows the hardware configuration of the autonomous vehicle. The front of the vehicle is equipped with a long-range Radar (LRR), a mid-range Radar (MRR), and a 2D LiDAR. One 2D LiDAR and one short-range Radar (SRR) are mounted on each side bumper. The image sensor is installed inside the windshield, and a high-performance GPS for securing the reference data is installed. The maximum sensing range and horizontal FOV of each mounted sensor are shown in Fig. 2 in bird's-eye view.

Fig. 1. Hardware configuration of the autonomous driving vehicle

Fig. 2. The field of view of the autonomous driving vehicle

2.2 Fusion System

The flow chart of the LiDAR/Radar fusion system is shown in Fig. 3. The basic structure of the fusion system is based on Multiple Hypothesis Tracking (MHT). MHT creates possible tracks based on the measured values and continuously compares these tracks against new measurements to increase the reliability of the tracks that are continuously updated. Each measurement is assigned to the globally nearest of the previously maintained tracks. If the Mahalanobis distance between a measurement and all previous tracks is larger than a predetermined value, the measurement is initialized as a new track. If a track is not assigned a measurement for more than 30% of its lifetime, the track is deleted.

The velocity and yaw rate of the host vehicle are estimated from the vehicle's wheel speed and yaw rate sensor values through the host vehicle filter, and this information is used in the process update of the target vehicles. Using the lane information from the camera, only the LiDAR and Radar measurements that lie in the area where a target vehicle can exist are tracked, and the state of each target vehicle is estimated.
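As a concrete illustration of the gating and track-management rules described in Section 2.2, the following is a minimal Python sketch under stated assumptions rather than the authors' implementation: `predict_meas` (which is assumed to return a track's predicted measurement and innovation covariance), the gate threshold of 3.0, and the position-only track initialization are illustrative placeholders.

```python
import numpy as np
from itertools import count

_track_ids = count()

class Track:
    """Minimal track record for the MHT-style management described above."""
    def __init__(self, x, P):
        self.x, self.P = x, P         # state estimate and covariance
        self.id = next(_track_ids)
        self.age = 0                  # update cycles since creation
        self.missed = 0               # cycles without an assigned measurement
        self.assigned = None          # measurement assigned in the current cycle

def mahalanobis(z, z_pred, S):
    """Mahalanobis distance between a measurement and a predicted measurement."""
    r = np.asarray(z) - np.asarray(z_pred)
    return float(np.sqrt(r @ np.linalg.solve(S, r)))

def manage_tracks(tracks, measurements, predict_meas, gate=3.0, miss_ratio=0.3):
    """One management cycle: gate each measurement to the globally nearest track,
    spawn new tracks for un-gated measurements, and drop stale tracks."""
    for t in tracks:
        t.assigned = None
    for z in measurements:
        dists = [mahalanobis(z, *predict_meas(t)) for t in tracks]
        if dists and min(dists) < gate:
            tracks[int(np.argmin(dists))].assigned = z     # nearest track within the gate
        else:
            x0 = np.zeros(7)
            x0[:2] = np.asarray(z)[:2]                     # crude position-only initialization
            new = Track(x0, np.eye(7))
            new.assigned = z                               # the spawning measurement
            tracks.append(new)
    for t in tracks:
        t.age += 1
        t.missed = 0 if t.assigned is not None else t.missed + 1
    # delete tracks not assigned a measurement for more than 30% of their lifetime
    return [t for t in tracks if t.missed <= miss_ratio * t.age]
```

In the full system, each surviving track would then be updated with one of the three measurement models of Section 3.2.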


(Fig. 3 block diagram; blocks: Point cloud, Shape Extraction, Host Vehicle Filter, Track Management, Process Update, Validation of Measurement for Each Track, LiDAR/Radar measurement decision branches, LiDAR Measurement Update, Radar Measurement Update, LiDAR/Radar Measurement Update, Update Reward Function, IMM/EKF based Filtering, Target States.)

Fig. 3. Flow chart of the model based LiDAR/Radar sensor fusion system for multitarget tracking

Since it is hard to use the point cloud from LiDAR directly for the measurement update of an EKF, candidates for the target vehicles are generated from the point cloud via shape extraction. Shape extraction clusters the point cloud based on the point-to-point distance and creates possible candidates for the position and heading angle of the target vehicles through the bounding box and the virtual ray of each segment. Because of the amount of computation required when the LiDAR point cloud is used directly as the filter measurement, methods that generate and track representative points through segmentation (Premebida & Nunes, 2005) and edge detection (Borges & Aldon, 2004) have been studied in many previous works (Mertz et al., 2013; Schueler et al., 2012). In the study of Cho et al. (2014), a set of possible edge targets is generated from the point cloud. The sets of possible edge targets in the previous study are depicted in Fig. 4 (a), and we can see that many candidates are generated from one point segment. In this study, however, only the candidates that are actually possible are extracted by using the virtual ray (Thrun, 2001), which expresses the straightness of LiDAR. The shape extraction result is depicted in Fig. 4 (b).
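To make the clustering, bounding-box candidate generation, and virtual-ray test concrete, here is a much-simplified Python sketch; it is not the paper's algorithm, and the split threshold, the two orientation hypotheses, and the centre-only ray test are illustrative simplifications.

```python
import numpy as np

def cluster_scan(points, eps=0.7):
    """Split an ordered 2D scan (N x 2 array) into clusters wherever two
    consecutive points are farther apart than `eps` metres."""
    clusters, current = [], [points[0]]
    for p, q in zip(points[:-1], points[1:]):
        if np.linalg.norm(q - p) > eps:
            clusters.append(np.array(current))
            current = []
        current.append(q)
    clusters.append(np.array(current))
    return clusters

def box_candidates(cluster):
    """Fit a bounding box along the cluster's principal direction and return
    candidate (x, y, heading) poses for two orientation hypotheses."""
    c = cluster.mean(axis=0)
    cov = (cluster - c).T @ (cluster - c)
    u, _, _ = np.linalg.svd(cov)                    # principal axes of the segment
    heading = float(np.arctan2(u[1, 0], u[0, 0]))
    ct, st = np.cos(heading), np.sin(heading)
    R = np.array([[ct, -st], [st, ct]])
    local = (cluster - c) @ R                       # points in the box-aligned frame
    centre = c + ((local.min(axis=0) + local.max(axis=0)) / 2.0) @ R.T
    return [(centre[0], centre[1], heading),
            (centre[0], centre[1], heading + np.pi / 2)]

def passes_virtual_ray(candidate, scan_ranges, scan_angles, margin=0.3):
    """Reject a candidate whose centre lies in space a LiDAR ray has proven free,
    i.e. the ray at that bearing travelled well beyond the candidate."""
    x, y, _ = candidate
    r, bearing = np.hypot(x, y), np.arctan2(y, x)
    i = int(np.argmin(np.abs(np.asarray(scan_angles) - bearing)))
    return scan_ranges[i] <= r + margin
```

A candidate would be kept only if `passes_virtual_ray` is true for it; in the actual system the ray test covers the whole candidate rectangle rather than only its centre.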

Fig. 4. Shape extraction result (a) without virtual ray (b) with virtual ray

3. LIDAR/RADAR SENSOR FUSION

This section details how the estimation actually takes place in the fusion system described above. The process update uses a constant acceleration model of a point-mass vehicle, and the measurement update consists of three types according to the type of measurement assigned to each track.

The LiDAR and Radar measurements are assigned to each track, and the state of each track is estimated through extended Kalman filtering (EKF). A centralized Kalman filtering architecture, whose optimality has been proven, is used for sensor fusion. The measurement update is conducted differently depending on the type of the assigned measurements. There are three measurement updates, ranging from the case in which all measurements are assigned to the case in which only one measurement is assigned, as described in Section 3.2.
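The choice among the three measurement updates can be written as a simple dispatch over whatever was assigned to a track in the current cycle; a sketch, where the three update functions stand in for the models of Section 3.2 and are assumptions here:

```python
def update_track(track, lidar_meas, radar_meas,
                 update_fused, update_lidar, update_radar):
    """Pick one of the three measurement updates depending on which
    measurements were assigned to this track in the current cycle."""
    if lidar_meas is not None and radar_meas is not None:
        return update_fused(track, lidar_meas, radar_meas)   # LiDAR + Radar model
    if lidar_meas is not None:
        return update_lidar(track, lidar_meas)               # LiDAR-only model
    if radar_meas is not None:
        return update_radar(track, radar_meas)               # Radar-only model
    return track                                             # nothing assigned: keep the prediction
```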

3.1 Vehicle dynamics model

In this study, the model designed to estimate the state of the target vehicles is as follows. The physical meaning of each state is shown in Fig. 5.

x_n = [p_{n,x}, p_{n,y}, \psi_n, v_{n,x}, \gamma_n, a_{n,x}, \dot{\gamma}_n]^T, \qquad u = [v_x, \gamma]^T \qquad (1)

Equation (1) defines the states and inputs of the model. The state is composed of the relative position of the target vehicle with respect to the host vehicle, the relative yaw angle, the absolute speed, the absolute yaw rate, the absolute acceleration, and the absolute yaw acceleration, and the input consists of the absolute speed and yaw rate of the host vehicle. The dynamics model is described in (2).
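For reference in the sketches below, the state layout of (1) can be pinned down with a few index constants; the names are illustrative and not from the paper.

```python
# Target state of Eq. (1): relative position and yaw w.r.t. the host vehicle,
# then the target's absolute speed, yaw rate, acceleration and yaw acceleration.
P_X, P_Y, PSI, V_X, GAMMA, A_X, GAMMA_DOT = range(7)

# Input u: the host vehicle's absolute speed and yaw rate.
U_V_HOST, U_GAMMA_HOST = range(2)
```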



1 0 0 0 0 0 0  H l , n  0 1 0 0 0 0 0  0 0 1 0 0 0 0 

Measurement model (Radar only): T

z n [k ]  h( xn [k ], u[k ])  vn[k ]  hr , n1 , hr , n 2 , hr , n 3   vn[k ] vn [k ] ~  0, Vr , n [k ]

Fig. 5. Physical meaning of each state and measurement model

h r ,n1  pn , x  sx  bn , x cos( n )  bn , y sin( n )

x&n  a  x n , u   q

h r ,n 3  (vn , x  bn , y ( n   host )) cos( n   r ) 

 a1 a 2

a3 a 4

a5 a6

h r ,n 2  pn , y  s y  bn , x sin( n )  bn , y cos( n ) bn , x ( n   host ) sin( n   r )  pn ,x host sin r 

a7   q T

( pn , y host  vhost ) cos r

a1  vn , x cosi  vx  pn , y   a 2  vn , x sin i  pn , x  

(2)
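A minimal numerical sketch of the process update implied by (2), using forward-Euler discretization and a numerically linearized Jacobian; the decay gains and the handling of the noise covariance are illustrative assumptions, not values from the paper.

```python
import numpy as np

def f_continuous(x, u, k_a=0.5, k_gd=0.5):
    """Continuous-time dynamics a(x, u) of Eq. (2).
    x = [p_x, p_y, psi, v, gamma, a, gamma_dot], u = [v_host, gamma_host]."""
    px, py, psi, v, gam, acc, gam_dot = x
    v_host, gam_host = u
    return np.array([
        v * np.cos(psi) - v_host + py * gam_host,   # a1: relative x-velocity
        v * np.sin(psi) - px * gam_host,            # a2: relative y-velocity
        gam - gam_host,                             # a3: relative yaw-angle rate
        acc,                                        # a4: speed derivative
        gam_dot,                                    # a5: yaw-rate derivative
        -k_a * acc,                                 # a6: acceleration decay (assumed gain)
        -k_gd * gam_dot,                            # a7: yaw-acceleration decay (assumed gain)
    ])

def ekf_predict(x, P, u, Q, dt):
    """EKF process update: one Euler step plus a numeric Jacobian of that step."""
    x_pred = x + dt * f_continuous(x, u)
    F = np.empty((7, 7))
    eps = 1e-6
    for j in range(7):
        dx = np.zeros(7)
        dx[j] = eps
        F[:, j] = ((x + dx) + dt * f_continuous(x + dx, u) - x_pred) / eps
    P_pred = F @ P @ F.T + Q * dt
    return x_pred, P_pred
```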

3.2 Measurement Model

With LiDAR, the position and yaw angle of the target vehicle can be measured through the shape extraction described above, and with Radar, the position and the radial relative speed of a point on the target vehicle surface can be measured. The measurement models for the possible combinations of available LiDAR and Radar measurements are as follows.

Measurement model (fusion):

z_n[k] = h(x_n[k], u[k]) + v_n[k] = [h_{l,n1}, h_{l,n2}, h_{l,n3}, h_{r,n1}, h_{r,n2}, h_{r,n3}]^T + v_n[k], \qquad v_n[k] \sim \mathcal{N}(0, V_n[k])

V_n[k] = \begin{bmatrix} V_{l,n} & 0_{3 \times 3} \\ 0_{3 \times 3} & V_{r,n} \end{bmatrix}

h_{l,n1} = p_{n,x}, \quad h_{l,n2} = p_{n,y}, \quad h_{l,n3} = \psi_n
h_{r,n1} = p_{n,x} - s_x + b_{n,x} \cos\psi_n - b_{n,y} \sin\psi_n
h_{r,n2} = p_{n,y} - s_y + b_{n,x} \sin\psi_n + b_{n,y} \cos\psi_n
h_{r,n3} = (v_{n,x} - b_{n,y}(\gamma_n - \gamma_{host})) \cos(\psi_n - \theta_r) - b_{n,x}(\gamma_n - \gamma_{host}) \sin(\psi_n - \theta_r) - p_{n,x} \gamma_{host} \sin\theta_r + (p_{n,y} \gamma_{host} - v_{host}) \cos\theta_r

Measurement model (LiDAR only):

z_n[k] = H_{l,n} x_n[k] + v_n[k], \qquad v_n[k] \sim \mathcal{N}(0, V_{l,n}[k])

H_{l,n} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}

Measurement model (Radar only):

z_n[k] = h(x_n[k], u[k]) + v_n[k] = [h_{r,n1}, h_{r,n2}, h_{r,n3}]^T + v_n[k], \qquad v_n[k] \sim \mathcal{N}(0, V_{r,n}[k])

with h_{r,n1}, h_{r,n2}, and h_{r,n3} as defined above.

The physical meanings of the variables in each equation are shown in Fig. 5. Here, (b_{n,x}, b_{n,y}) is the position of the Radar reflection point expressed in the target vehicle coordinate system. If there is no LiDAR measurement, it is unknown. If a LiDAR measurement exists, it can be calculated directly from the shape extraction result; if there is only a Radar measurement, this value must also be estimated, which is handled with the IMM approach proposed by Kim et al. (2015).
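The three measurement functions above translate directly into code; in the sketch below the sensor offset (s_x, s_y), the reflection point (b_x, b_y), the Radar azimuth theta_r, and the noise covariances are supplied per measurement and are placeholders, not values from the paper.

```python
import numpy as np

def h_lidar(x):
    """LiDAR observes relative position and yaw directly (H_l picks the first three states)."""
    return x[:3]

def h_radar(x, u, bx, by, sx, sy, theta_r):
    """Radar measurement [h_r1, h_r2, h_r3] for a reflection point (bx, by) given in the
    target frame and a sensor mounted at (sx, sy) on the host vehicle."""
    px, py, psi, v, gam, _, _ = x
    v_host, gam_host = u
    dg = gam - gam_host
    h1 = px - sx + bx * np.cos(psi) - by * np.sin(psi)
    h2 = py - sy + bx * np.sin(psi) + by * np.cos(psi)
    h3 = ((v - by * dg) * np.cos(psi - theta_r)
          - bx * dg * np.sin(psi - theta_r)
          - px * gam_host * np.sin(theta_r)
          + (py * gam_host - v_host) * np.cos(theta_r))
    return np.array([h1, h2, h3])

def h_fused(x, u, radar_args):
    """Stacked LiDAR + Radar measurement; the noise is block-diagonal diag(V_l, V_r)."""
    return np.concatenate([h_lidar(x), h_radar(x, u, *radar_args)])

def ekf_update(x, P, z, h_of_x, H, R):
    """Generic EKF measurement update shared by the three models."""
    y = z - h_of_x(x)                      # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

For the LiDAR-only case H is the constant H_{l,n}; for the other two cases the Jacobian of the nonlinear h can be obtained numerically, as in the prediction sketch.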

4. EXPERIMENTAL VALIDATION

In this section, the performance of the tracking system is verified through various vehicle tests. In Section 4.1, the tracking result and the shape extraction result are compared with the actual scene in a multi-vehicle motorway environment. In Section 4.2, the estimation performance of the tracking system is validated with an RT-Range.

4.1 Tracking result

Fig. 6 depicts a moment on the highway in three figures. Fig. 6-(a) is the actual scene from the front camera, and the numbers displayed on each vehicle identify the same vehicle in the following figures. Fig. 6-(b) shows the result of the virtual ray and shape extraction in this situation. In the case of target vehicle (1), three candidate groups are generated when only the information of the point cluster is used, as in the previous study. In this research, however, only a single candidate is generated because the free-space information is utilized through the virtual ray, which exploits the straightness of LiDAR. In the case of target vehicle (2), two candidates are generated because of the space occluded by target vehicle (1). This shape extraction method can reduce the number of hypotheses to be tracked and the misrecognition rate. Fig. 6-(c) shows the result of tracking using the shape extraction result and the Radar measurements. It can be seen that the candidate in the vertical direction generated for vehicle (2) in Fig. 6-(b) is excluded from the tracking process.



Fig. 8. Cumulative distribution of estimation errors for lane-keeping vehicles

Fig. 6. Tracking result on the motorway. In (c), the green car represents the host vehicle, and the red dots and black circles represent the LiDAR point cloud and the Radar measurements, respectively. In (b), the red cars are the shape extraction results and the black lines are the virtual rays. (a) shows the actual environment, and the numbers on each vehicle identify the same vehicle in each figure.

4.2 Tracking accuracy validation

To evaluate the performance of the fusion system, vehicle tests were conducted using an RT-Range and two vehicles. The test road consisted of curved and straight sections in equal halves, and tests were conducted at three speed profiles: 0-15 kph (TJA), 40 kph, and 80 kph. The relative positions of the host vehicle and the target vehicle were tested for the four cases shown in Fig. 7, and the lane was kept with the three speed profiles described above for each relative position. The estimation performance for a lane-changing target vehicle was also evaluated in front lane change and rear lane change scenarios.

Fig. 9. Cumulative distribution of estimation errors for lane-changing vehicles

The estimation error distributions for the above scenarios are shown in Figs. 8 and 9. In the legend, Fusion denotes the estimate of the proposed fusion system, LiDAR denotes the estimate obtained from the shape extraction result only, and Radar denotes the estimation result using Radar only through the IMM filtering proposed by Kim et al. (2015). Fig. 8 shows the estimation performance for the lane-keeping vehicle. For a reliable comparison with the estimation results using LiDAR only, only the cases within the measurement range of the LiDAR were evaluated. The position estimation is more accurate than with LiDAR alone.

Fig. 7. Test scenarios to evaluate estimation accuracy


The speed estimation results show that the performance is improved by fusion with the Radar. In particular, the estimation performance improves in the TJA scenario because the relative speed is measured directly.
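The cumulative error distributions in Figs. 8 and 9 can be reproduced from logged errors with a few lines; `errors_by_method` is an assumed dictionary of per-sample absolute errors, not data from the paper.

```python
import numpy as np

def empirical_cdf(samples):
    """Return (sorted absolute errors, cumulative fraction) for a 1-D array of error samples."""
    s = np.sort(np.abs(np.asarray(samples)))
    return s, np.arange(1, len(s) + 1) / len(s)

# errors_by_method = {"Fusion": [...], "LiDAR": [...], "Radar": [...]}  # assumed logs
# for name, err in errors_by_method.items():
#     x, F = empirical_cdf(err)   # plot x vs. F to obtain curves like Figs. 8-9
```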

Fig. 9 shows the estimation error for the lane change scenario. Since the yaw angle of a lane-changing vehicle changes more rapidly than that of a lane-keeping vehicle, the yaw angle estimation performance depends heavily on the shape extraction algorithm. However, owing to the limitation of extracting the yaw angle from a single LiDAR measurement, the shape extraction may not always be performed appropriately. In this case, erroneous yaw angle estimates can be prevented through the IMM filtering of the Radar, and thus the yaw angle estimation performance for the lane-changing vehicle is improved through fusion.

For all scenarios, the estimation results of the proposed fusion system are more accurate than those using only one sensor. This shows that the advantages of each sensor, such as LiDAR's precise position accuracy, Radar's wide FOV, and its ability to measure relative speed, have been appropriately fused.

5. CONCLUSIONS

In this study, a LiDAR/Radar fusion system is proposed that fuses LiDAR and Radar measurements with a centralized Kalman filter in a multiple hypothesis tracking based multitarget tracking system. In order to utilize the LiDAR point cloud in an extended Kalman filter, shape extraction via virtual rays is proposed, and new measurement models are introduced for sensor fusion. The performance has been verified through vehicle tests using autonomous vehicles. Through these tests, it is confirmed that the proposed fusion system improves the estimation performance by reflecting the characteristics of each sensor.

6. ACKNOWLEDGMENTS

This work was supported by the Brain Korea 21 Plus Project in F14SN02D1310, the Technology Innovation Program (10079730, Development and Evaluation of Automated Driving Systems for Motorway and City Road and driving environment) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea), the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science, ICT & Future Planning (NRF-2016R1E1A1A01943543), and the Ministry of Land, Infrastructure, and Transport (MOLIT, Korea) [Project ID: 18TLRP-B146733-01, Project Name: Connected and Automated Public Transport Innovation (National R&D Project)].

REFERENCES

AutoNet2030. (2014). Co-operative Systems in Support of Networked Automated Driving by 2030.
Baig, Q., Aycard, O., Vu, T.-D., & Fraichard, T. (2011). Fusion between laser and stereo vision data for moving objects tracking in intersection like scenario. IEEE Intelligent Vehicles Symposium (IV), 2011.
Borges, G. A., & Aldon, M.-J. (2004). Line extraction in 2D range images for mobile robotics. Journal of Intelligent & Robotic Systems, 40(3), 267-297.
Cho, H., Seo, Y.-W., Kumar, B. V., & Rajkumar, R. R. (2014). A multi-sensor fusion system for moving object detection and tracking in urban driving environments. IEEE International Conference on Robotics and Automation (ICRA), 2014.
Göhring, D., Wang, M., Schnürmacher, M., & Ganjineh, T. (2011). Radar/lidar sensor fusion for car-following on highways. 5th International Conference on Automation, Robotics and Applications (ICARA), 2011.
Kim, B., Yi, K., Yoo, H.-J., Chong, H.-J., & Ko, B. (2015). An IMM/EKF approach for enhanced multitarget state estimation for application to integrated risk management system. IEEE Transactions on Vehicular Technology, 64(3), 876-889.
Li, H., Nashashibi, F., Lefaudeux, B., & Pollard, E. (2013). Track-to-track fusion using split covariance intersection filter-information matrix filter (SCIF-IMF) for vehicle surrounding environment perception. 16th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2013.
Mendes, A., Bento, L. C., & Nunes, U. (2004). Multi-target detection and tracking with a laser scanner. IEEE Intelligent Vehicles Symposium, 2004.
Mertz, C., Navarro-Serment, L. E., MacLachlan, R., Rybski, P., Steinfeld, A., Suppe, A., ... Thorpe, C. (2013). Moving object detection with laser scanners. Journal of Field Robotics, 30(1), 17-43.
Munz, M., Mahlisch, M., & Dietmayer, K. (2010). Generic centralized multi sensor data fusion based on probabilistic sensor and environment models for driver assistance systems. IEEE Intelligent Transportation Systems Magazine, 2(1), 6-17. doi:10.1109/mits.2010.937293
Petrovskaya, A., & Thrun, S. (2009). Model based vehicle detection and tracking for autonomous urban driving. Autonomous Robots, 26(2-3), 123-139.
Premebida, C., & Nunes, U. (2005). Segmentation and geometric primitives extraction from 2d laser range data for mobile robot applications. Robotica, 2005, 17-25.
Qureshi, K. N., & Abdullah, A. H. (2013). A survey on intelligent transportation systems. Middle-East Journal of Scientific Research, 15(5), 629-642.
Schueler, K., Weiherer, T., Bouzouraa, E., & Hofmann, U. (2012). 360 degree multi sensor fusion for static and dynamic obstacles. IEEE Intelligent Vehicles Symposium (IV), 2012.
Thrun, S. (2001). A probabilistic on-line mapping algorithm for teams of mobile robots. The International Journal of Robotics Research, 20(5), 335-363.