Lane change identification and prediction with roadside LiDAR data

Optics and Laser Technology 123 (2020) 105934

Full length article

Yuepeng Cui a, Jianqing Wu b, Hao Xu b,*, Aobo Wang b

a Department of Transportation, Fujian University of Technology, Fuzhou 350118, China
b Department of Civil and Environmental Engineering, University of Nevada, Reno, NV 89557, USA

* Corresponding author. E-mail address: [email protected] (H. Xu).

https://doi.org/10.1016/j.optlastec.2019.105934
Received 7 July 2019; Received in revised form 4 September 2019; Accepted 28 October 2019
0030-3992/ © 2019 Published by Elsevier Ltd.

Highlights

• Lane change was identified with roadside LiDAR data.
• Vehicle trajectories from roadside LiDAR were used for lane change prediction.
• The method was evaluated with real-world data.

Keywords: Lane change; Roadside LiDAR; Lane boundary; Vehicle trajectory

Abstract

Lane change identification and lane change prediction are important tasks for Connected-Vehicle (CV) technologies. Since both connected and non-connected vehicles may exist on the roads for a long time, the real-time information of the unconnected traffic cannot be obtained by the current CV network, so lane change identification and prediction cannot be achieved while the traffic information of unconnected vehicles is missing. Roadside Light Detection and Ranging (LiDAR) provides a solution to fill this data gap under mixed traffic. This paper develops methods for lane change identification and prediction based on vehicle trajectories extracted from roadside LiDAR data. Lane boundaries were used to enhance the accuracy of lane change identification. The proposed method was evaluated using real-world data, and the results showed that it achieves relatively high accuracy. The lane change information can be used to develop a lane-change warning system for the CV network.

1. Introduction

Connected-Vehicle (CV) technologies have become an important component of the future Intelligent Transportation System (ITS). In an ideal CV network, all road users can communicate with each other through various wireless communication technologies [1]. CV technologies have many benefits, including reducing congestion, improving traffic safety, reducing fuel consumption, and improving mobility [2]. Lane change identification and prediction is an important function of the CV network. Lane changing is considered one of the most challenging driving maneuvers [3]. The occurrence of lane changes can be random, except for mandatory lane changes (such as those caused by a blocked lane or traffic regulations). The identification and prediction of lane changes have been proven useful for crash avoidance support [4]. To give drivers useful warning information, lane change behavior needs to be predicted in real time, with limited delay [5]. The data used for lane change identification and prediction should be High-Resolution Micro Traffic Data (HRMTD), meaning second-by-second (or higher frequency) traffic data of all road users are required. On the other hand, the false report rate should be low, since drivers may ignore the warning information if the accuracy of the system is low [6].

In theory, any lane change maneuver can be detected under the CV environment, since all vehicles share their real-time locations, speeds, and moving directions through Vehicle-to-Sensor on-board (V2S), Vehicle-to-Vehicle (V2V), Vehicle-to-Road infrastructure (V2R), and Vehicle-to-Internet (V2I) interactions. However, it takes time to equip all vehicles with connected-vehicle devices, which means connected and unconnected vehicles will coexist for the next few decades or even longer [7]. How to identify and predict lane change behavior during the transition period from traditional traffic to fully connected traffic is a challenge. The roadside infrastructure provides a bridge for communication between unconnected road users and connected vehicles on the road [8].

Since it will be difficult for all vehicles and pedestrians to broadcast their real-time status in the near future, enhancing the traffic infrastructure to actively sense and broadcast each road user's status is an intuitive solution to fill the data gap. Many traffic sensors are already installed along the road network, but the data from these conventional sensors are not the HRMTD required by the CV network. Traditional traffic sensors such as loop detectors, video detectors, Bluetooth sensors, and radar sensors mainly provide macroscopic traffic data such as flow rates, average speeds, and occupancy. For example, conventional video sensors measure vehicle speed using predefined detection zones, and only the average speed within those zones can be reported [9]. HRMTD can be collected by probe vehicles or connected vehicles with a GPS logging function; however, probe vehicles or the currently small number of connected vehicles provide only sample data of the traffic fleet, while connected-vehicle applications need the data of all road users. Even the latest crowdsourced data, such as real-time travel time information from Waze, is still macro-level traffic information generated by aggregating probe vehicle data. New sensor technologies need to be explored to enable the connected traffic infrastructure to sense the HRMTD of all traffic participants.

Research has started on enhancing traffic infrastructure by deploying light detection and ranging (LiDAR) sensors on the roadside [10-15]. LiDAR is a surveying method that measures the distance to a target by illuminating that target with laser light. The new LiDAR technology can detect surrounding objects in 360 degrees with relatively high accuracy. By deploying LiDAR sensors along roads or at intersections, the real-time information of all road users, including connected and unconnected vehicles, can be obtained and broadcast through wireless technologies [16]. Roadside LiDAR sensors thus provide a solution to fill the gap during the transition period from unconnected to fully connected vehicles. The authors have conducted a series of studies on roadside LiDAR data processing, including background filtering, object clustering, vehicle identification, vehicle tracking, and information broadcasting [9-14,17]. Detailed vehicle trajectories can be obtained through those LiDAR data processing methods, and they provide good data inputs for lane change identification and prediction.

Many studies have addressed lane change identification and prediction using LiDAR sensors [18]. Sivaraman et al. [19] developed a dynamic probabilistic drivability map (DPDM) to predict lane change behavior using a combination of a LiDAR sensor and an onboard camera. The DPDM encapsulated spatial, dynamic, and legal information and recommended the required acceleration and timing to change lanes with minimum cost. Gao et al. [20] used multisensor data fusion (video data, GPS data, wheel odometry data, laser, and a data logger) for lane change detection; a collaborative-representation optimized projection classifier was trained to distinguish lane change from non-lane-change behavior, achieving an average accuracy of 81.82% in testing. Park et al. [21] developed an adaptive hidden Markov model (HMM) for rapid lane change recognition that accounted for measurement uncertainty such as sensor noise and object detection error; the testing results showed that the method reached 88.7% accuracy in a real driving environment. Wang et al. [22] developed a lane change identification model for distant preceding vehicles using a back-propagation (BP) neural network optimized by a particle swarm optimization algorithm; its accuracy was higher than 88% within 1.0 s before the vehicle reached the crossing line. Lee et al. [23] presented a method of fusing LiDAR lane marking information with Around View Monitor camera-based lane marking data; the LiDAR and camera data were compared to generate the correct lane location, which was expected to be used for lane change prediction. Woo et al. [24] developed a lane-change detection method based on vehicle trajectories for autonomous vehicles; with lane boundary information, the method predicted lane change behavior 1.74 s before the vehicle passed the centerline, with 98.1% accuracy. Díaz-Álvarez et al. [25] modeled human lane-change behavior using Multilayer Perceptrons (MLP) and Convolutional Neural Networks (CNN) with LiDAR data collected by an autonomous vehicle; the accuracy of the MLP and CNN models was 82.91% and 89.64%, respectively. However, that research used offline LiDAR data, so real-time prediction could not be achieved. Xiong et al. [26] developed a decision-making framework for lane changes based on a Hierarchical State Machine (HSM) using onboard LiDAR data, but that paper lacked a quantitative evaluation of the proposed algorithm's performance.

The above studies provide good references for lane change identification and prediction. Nonetheless, all of the previous lane change identification and prediction algorithms were developed using onboard LiDAR data for autonomous vehicles. Those algorithms cannot be used directly with roadside LiDAR data serving the CV network, due to different working environments and data collection requirements [27]. Furthermore, many previous studies relied on a combination of LiDAR data and camera data, whereas the roadside LiDAR is expected to work independently, without help from camera data, considering its massive future deployment [10]. Therefore, there is currently no algorithm that can be used directly with roadside LiDAR to identify and predict lane change behavior for connected vehicles. This paper presents an approach for lane change identification and prediction using roadside LiDAR data serving the CV network. The lane boundary was extracted from the roadside LiDAR data, and real-time vehicle trajectories were selected as the input of a rule-based method for lane change detection. The proposed method was evaluated using real-time roadside LiDAR data. The identified lane change events can be broadcast by a Dedicated Short Range Communications (DSRC) Roadside Unit (RSU) and received by the Onboard Unit (OBU) installed in the vehicle.

2. Roadside LiDAR

A LiDAR emits a focused beam of light (a laser) and calculates the distance between an object and the sensor from the returning beam [28]. LiDAR works day and night without being affected by lighting conditions. The LiDAR sensor used in this research was the VLP-16. The VLP-16 has a rotational speed of 5 to 20 rotations per second and can generate 600,000 3D points per second. It covers a 360° horizontal field of view and a 30° vertical field of view (±15° up and down). Table 1 summarizes the major features of the VLP-16.

The roadside LiDAR differs from the onboard LiDAR of autonomous vehicles. First, the roadside LiDAR is installed at a fixed location, temporarily or permanently. Second, the roadside LiDAR has to be able to detect vehicles whose shapes are poorly captured (such as when the vehicle is far away from the LiDAR sensor) [29]. The roadside LiDAR sensor can be installed temporarily on a tripod for a pilot study or permanently on roadside structures for long-term data collection. The installation location should allow the LiDAR to detect objects on the road as far away as possible, and should also account for people's actions towards the LiDAR, such as vandalism. A suitable position for the LiDAR is at a height similar to the top of a pedestrian signal.

Table 1
Major features of the VLP-16.

Horizontal Field of View: 360 degrees
Vertical Field of View: ±15 degrees
Return Modes: Strongest, last, or dual
Package Output Format: pcap
Number of laser beams: 16
Rotation Rate: 5-20 Hz
Detection Range: 100 m (max)
Operating Temperature: -10 °C to +60 °C
Vibration: 5 Hz to 2000 Hz, 3 Grms
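Each firing of the VLP-16 is reported as a range at a known azimuth and a fixed per-channel elevation angle; converting those polar returns to Cartesian XYZ points yields the point cloud processed in the following sections. The sketch below shows this standard conversion; the function name and the simplified channel-to-elevation mapping are our assumptions (the real sensor fires its channels in an interleaved order).

```python
import numpy as np

# Fixed elevation angles (degrees) of the 16 laser channels, spanning the
# +/-15 degree vertical field of view in 2 degree steps (simplified order).
ELEVATION_DEG = np.arange(-15, 16, 2)

def returns_to_xyz(distance_m, azimuth_deg, channel):
    """Convert raw polar range returns to Cartesian coordinates.

    distance_m  : measured range in meters
    azimuth_deg : horizontal rotation angle in degrees (0-360)
    channel     : laser channel index (0-15)
    All arguments may be NumPy arrays of the same shape.
    """
    omega = np.radians(ELEVATION_DEG[np.asarray(channel)])  # elevation
    alpha = np.radians(azimuth_deg)                         # azimuth
    x = distance_m * np.cos(omega) * np.sin(alpha)
    y = distance_m * np.cos(omega) * np.cos(alpha)
    z = distance_m * np.sin(omega)
    return np.stack([x, y, z], axis=-1)

# Example: a single return 25 m away at azimuth 90 degrees on channel 8.
print(returns_to_xyz(25.0, 90.0, 8))
```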

Fig. 1. Roadside LiDAR Installation.

Fig. 1 shows an example of a LiDAR sensor installed on top of a pedestrian signal at the intersection of N Virginia St @ 15th St in Reno, Nevada. The recommended mounting height is 7-9 ft above the ground, considering the limited vertical field of view (FOV) and possible vandalism [30]. Fig. 2 shows the raw LiDAR data viewed in the visualization software; a two-minute roadside LiDAR video was uploaded to YouTube: https://youtu.be/IO1kvis5ERo. As shown in Fig. 2, the objects scanned by the roadside LiDAR include vehicles, pedestrians, trees, the ground surface, and buildings. The purpose of this research is to detect lane change behavior from the raw LiDAR data. The whole proposed lane change detection system includes three major parts: lane boundary identification, lane change identification and prediction, and system communication. The flow chart of the proposed procedure is shown in Fig. 3. The lane boundary identification includes four major parts:

• Background Filtering: to exclude background points from the raw LiDAR data.
• Point Clustering: to cluster points belonging to the same object.
• Object Classification: to distinguish vehicles from non-vehicle road users.
• Lane Boundary Generation: to extract the locations of the traffic lanes from the LiDAR data.

Fig. 3. Flow Chart of Proposed System (raw LiDAR data → background filtering → point clustering → object classification → lane boundary generation → GNN tracking → vehicle trajectory → off-line lane change identification and real-time lane change prediction → RSU → DSRC → OBU → Bluetooth → Android app for visualization; system communication under construction).

Vehicle trajectories can be generated after applying the vehicle tracking algorithm. The lane change identification, which uses offline LiDAR data, was developed for driver behavior analysis, while the lane change prediction part predicts the driver's lane change motion in advance. The system communication part is still under construction: unsafe lane change behavior is proposed to be broadcast by the RSU through DSRC, the OBU will receive the message and send it through Bluetooth to an app developed for the Android system, and drivers can visualize the events and receive warning information in the app. A beta version of the communication system was documented in [16].

3. Lane boundary identification

Fig. 2. Roadside LiDAR Data (labeled objects: trees, building, pedestrian, vehicle, ground surface).

3.1. Background filtering

The background comprises the irrelevant points other than moving objects in the space. Vehicle trajectories cannot be extracted while the background points remain, so background points belonging to buildings, trees, and the ground surface should be excluded before lane detection. The algorithm first filters the background with a point-density-based method named 3D-DSF [13]. The 3D-DSF integrates multiple frames (1500-3500 frames) into one space, divides the whole space into many small cubes, and identifies each cube as background or not based on a threshold of point density. Fig. 4 shows an example of the result of background filtering. Previous practice showed that this algorithm can exclude more than 97% of background points from the space at very limited time cost. For more details about the 3D-DSF, we refer readers to [13].
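A minimal sketch of the cube-density idea behind 3D-DSF follows: frames are aggregated, the space is voxelized, and cubes whose accumulated point counts exceed a threshold are treated as background. The function names, cube size, and density threshold below are our placeholders, not the calibrated values of [13].

```python
import numpy as np

def build_background_cubes(frames, cube_size=0.5, density_threshold=200):
    """Aggregate many frames and flag dense cubes as background (3D-DSF idea).

    frames: iterable of (N_i, 3) arrays of XYZ points, one per LiDAR frame.
    Returns the set of cube indices identified as background.
    """
    counts = {}
    for pts in frames:
        idx = np.floor(pts / cube_size).astype(int)
        for key in map(tuple, idx):
            counts[key] = counts.get(key, 0) + 1
    return {key for key, c in counts.items() if c >= density_threshold}

def filter_frame(points, background, cube_size=0.5):
    """Drop the points of one frame that fall into background cubes."""
    idx = np.floor(points / cube_size).astype(int)
    keep = np.array([tuple(i) not in background for i in idx])
    return points[keep]
```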



Fig. 4. Before-and-after Background Filtering: (a) before background filtering; (b) after background filtering.

3.2. Point clustering

After background filtering, points belonging to the same object were grouped with a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm for lane identification [27]. The conventional DBSCAN method was revised to use adaptive parameters (the point-density threshold and the search radius) that vary with distance from the LiDAR sensor, accounting for the different point densities and distributions at different ranges [11]. Though some noise may remain in the space after background filtering, its influence on the performance of DBSCAN is limited [13].
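As a sketch of the distance-adaptive idea, the points can be partitioned into range bands and each band clustered with its own DBSCAN parameters; the band edges and parameter values below are illustrative assumptions, not the calibrated settings of [11].

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical per-band parameters: points get sparser far from the sensor,
# so the search radius (eps) grows and the density requirement shrinks.
BANDS = [(0, 20, 0.6, 20), (20, 40, 1.2, 10), (40, 100, 2.0, 5)]

def adaptive_dbscan(points):
    """Cluster XYZ points with range-dependent DBSCAN parameters.

    points: (N, 3) array. Returns per-point labels; -1 marks noise.
    """
    ranges = np.linalg.norm(points[:, :2], axis=1)
    labels = np.full(len(points), -1)
    next_label = 0
    for rmin, rmax, eps, min_samples in BANDS:
        sel = (ranges >= rmin) & (ranges < rmax)
        if not sel.any():
            continue
        band = DBSCAN(eps=eps, min_samples=min_samples).fit(points[sel])
        out = band.labels_.copy()
        out[out >= 0] += next_label   # keep labels unique across bands
        labels[sel] = out
        next_label += band.labels_.max() + 1
    return labels
```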

3.3. Object classification

If there are pedestrians on the road, they may mistakenly be clustered as vehicles. A method based on an Artificial Neural Network (ANN) was therefore developed for vehicle and pedestrian classification [27]. The total number of points, the distance to the LiDAR, and the direction of each cluster group were selected as the ANN inputs. The input data were fed into the input layer; the activity of each hidden layer was then determined by the inputs and the weights connecting the input layer to the hidden layer, and a similar process took place between the hidden layer and the output layer. The transmission from one neuron in one layer to another neuron in the next layer was independent. The output layer produced the estimated outcomes, and the comparison information (error) between target and estimated outputs was fed back to adjust the weights in the next training round. Through this iterative process, the neural network gradually learned the inner relationship between input and output by adjusting the weights of each neuron in each layer to reach the best accuracy. When the minimal error was reached, or the number of iterations exceeded the predefined value, training was terminated with fixed weights. The testing results showed that the ANN can distinguish vehicles and pedestrians with an accuracy of 94%. The pedestrians were then excluded from each frame.
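The classifier can be sketched with a standard multilayer perceptron over the three cluster features named above; the feature construction, layer sizes, and training settings here are illustrative assumptions rather than the configuration used in [27].

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def cluster_features(cluster_xyz):
    """Build the three ANN inputs for one cluster: total number of points,
    distance to the LiDAR, and a direction value (placeholder heading)."""
    centroid = cluster_xyz.mean(axis=0)
    n_points = len(cluster_xyz)
    dist_to_lidar = np.linalg.norm(centroid[:2])
    direction = np.arctan2(centroid[1], centroid[0])  # placeholder
    return [n_points, dist_to_lidar, direction]

# X: (n_clusters, 3) rows of features; y: 1 = vehicle, 0 = pedestrian.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
# clf.fit(X_train, y_train); is_vehicle = clf.predict(X_test)
```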

3.4. Lane boundary generation

Most existing lane recognition methods are based on detecting lane markings, which does not work well if the road markings are not obvious [10]. This research used point density instead of lane markings for lane boundary generation: the location of each lane is identified from the distribution of historical vehicle points on the road. This paper applied the revised grid-based clustering (RGBC) developed by Cui et al. [31]. RGBC rests on two assumptions: first, the density of vehicle points on the road is much higher than in other areas; second, the proportion of lane-changing vehicles is much lower than that of go-through vehicles. With these two assumptions, the vehicle points in the same lane can be clustered together, as sketched below.
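A minimal sketch of the grid-density step follows, assuming an occupancy-count threshold selects lane squares from accumulated vehicle points; the square size and threshold are placeholders, and the published RGBC method [31] involves more steps.

```python
import numpy as np

def lane_squares(vehicle_points_xy, square=0.5, min_hits=50):
    """Mark 2D grid squares that accumulate enough historical vehicle
    points to be considered part of a lane (the RGBC density idea).

    Returns an N x 4 matrix of square bounds: [xmin, ymin, xmax, ymax].
    """
    idx = np.floor(vehicle_points_xy / square).astype(int)
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    lane_cells = cells[counts >= min_hits]
    return np.hstack([lane_cells * square, (lane_cells + 1) * square])

def match_lane(point_xy, bounds):
    """Real-time check: which lane squares contain a vehicle point?"""
    inside = ((point_xy[0] >= bounds[:, 0]) & (point_xy[0] < bounds[:, 2]) &
              (point_xy[1] >= bounds[:, 1]) & (point_xy[1] < bounds[:, 3]))
    return np.flatnonzero(inside)
```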


Instead of generating a regression line for the boundary of the lanes, this paper used squares to represent the locations of the lanes in 2D space: the space is divided into small squares, and some of the squares are identified as the areas of different lanes based on point density. This reduces the computational load of calculating high-order equations, especially for roads with curves. The squares representing the areas of the lanes are stored in an N×4 matrix. For real-time data processing, a vehicle is matched to the corresponding lane if its points (XY coordinates) are found in the corresponding matrix. Fig. 5 shows an example of the identified lane boundaries; the location of each lane is generated correctly.

Fig. 5. Lane Identification: (a) field site; (b) generated lanes (X-axis/m vs. Y-axis/m).

4. Lane change identification

The Global Nearest Neighbor (GNN) method was applied to track the same vehicle across frames [29]. The algorithm tracks the front key point of the vehicle while it approaches the sensor and the back corner key point when it moves away from the sensor. An analysis was performed to examine whether the vehicle travel distance between adjacent frames (at 10 Hz) is shorter than the distance between consecutive vehicles in a road lane. Fig. 6 presents the results. The curves for 1-second, 2-second, and 3-second headways describe the closest vehicle distance in one lane under different headways and travel speeds, while the frame-to-frame distance curve at 10 Hz presents the travel distance of the same vehicle between adjacent frames at different speeds. The comparison shows that the vehicle travel distance between adjacent frames is much shorter than the distance between vehicles in the same lane; the customarily suggested headway for traffic safety is 3 s. Although Fig. 6 compares distances between vehicles in the same lane, similar headways are needed when a vehicle changes lanes. Therefore, the GNN method can be used to track the same vehicle efficiently across frames.

Fig. 6. Comparison of Frame-to-frame Travel Distance and Distances between Different Vehicles in the Same Lane.
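The per-frame association step of GNN tracking can be sketched as a globally optimal assignment between tracked key points and newly detected ones, gated by the maximum plausible frame-to-frame travel distance; the gate value and the use of SciPy's Hungarian solver are our assumptions, not details given in [29].

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_associate(tracked_xy, detected_xy, max_move=3.0):
    """Match tracked key points to detections by minimizing total distance.

    tracked_xy, detected_xy: (N, 2) and (M, 2) arrays of key points.
    max_move: gate (m) on frame-to-frame travel; at 10 Hz even 70 mph
    moves a vehicle only about 3.1 m between frames (cf. Fig. 6).
    Returns a list of (track_idx, detection_idx) pairs.
    """
    cost = np.linalg.norm(
        tracked_xy[:, None, :] - detected_xy[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_move]
```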

Fig. 7 shows the trajectories of vehicles collected over 30 min at the site of N McCarran Blvd @ Evans Ave in Reno, Nevada; lane change trajectories are included in the results. Markers with different colors in Fig. 7 represent individual vehicle trajectories. Each tracking point in a trajectory contains detailed vehicle information such as the frame number, speed, location (x, y coordinates), and a tracking ID (the same value for the same object).

Fig. 7. Vehicle Trajectories.

The vertical distance between each tracking point and the nearest lane boundary (DNLB) can then be calculated, as shown in Fig. 8. If the y coordinate of the tracking point is higher than that of the nearest point on the lane boundary, the distance is defined as positive (+); otherwise it is defined as negative (-). The lane boundary satisfies Eq. (1):

y = kx + b    (1)

The intersection point of the boundary line and the perpendicular from the tracking point can be calculated using Eqs. (2) and (3):

X_ip = -k(b - (y_i + x_i / k)) / (1 + k²)    (2)

Y_ip = k · X_ip + b    (3)

where ip denotes the intersection point of the tangent and the perpendicular, x_i is the x coordinate of the i-th tracking point, and y_i is the y coordinate of the i-th tracking point. The distance can then be obtained using Eq. (4):

D = sqrt((x_i - X_ip)² + (y_i - Y_ip)²)    (4)

The sign of D follows Eq. (5), where the comparison is with the y coordinate of the nearest boundary point:

sign(D) = + if y_i - Y_ip > 0; - if y_i - Y_ip < 0    (5)

Fig. 8. Distance to Nearest Lane Boundary.
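The signed DNLB of Eqs. (1)-(5) translates directly into code; a short sketch follows (the function name is ours, and the formula assumes k ≠ 0, as in the paper).

```python
import math

def signed_dnlb(xi, yi, k, b):
    """Signed distance from tracking point (xi, yi) to lane boundary
    y = k*x + b, following Eqs. (1)-(5). Assumes k != 0."""
    # Foot of the perpendicular on the boundary line, Eqs. (2)-(3).
    x_ip = -k * (b - (yi + xi / k)) / (1 + k ** 2)
    y_ip = x_ip * k + b
    # Unsigned distance, Eq. (4).
    d = math.hypot(xi - x_ip, yi - y_ip)
    # Sign rule, Eq. (5): positive above the boundary, negative below.
    return d if yi - y_ip > 0 else -d

# Example: boundary y = 0.1x + 2; point (10, 5) lies above it,
# so the result is a positive DNLB of about 1.99 m.
print(signed_dnlb(10.0, 5.0, 0.1, 2.0))
```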

The nearest lane boundary was identified at the beginning of tracking and did not change during the tracking time; this means the "nearest lane boundary" may not always be the physically nearest lane, since vehicles may change lanes or turn at intersections. The tracking time differs between vehicles, subject to their different locations and speeds. To better explore the difference between vehicles with and without lane changes, the tracking time was normalized from 0 to 1, where 0 is the beginning and 1 is the end of the tracking time. Fig. 9 shows nine vehicles with different movements; the DNLB of each tracking point was calculated. Vehicles A, B, and C went straight in the same lane without lane changes, and their DNLBs were relatively fixed, with max distances of 1.06 m, 0.54 m, and 0.90 m, respectively. Vehicles D, E, and F made lane changes during the tracking time; their DNLBs were obviously higher than those without lane changes, with max distances of 3.04 m, 2.89 m, and 2.87 m, respectively. We also extracted three vehicles (G, H, I) that made a left turn at the intersection; their DNLBs were larger still (G: 9.85 m, H: 9.87 m, I: 9.85 m) compared with the lane change and go-straight movements. Fig. 9 clearly shows that the trend of DNLB is Turning Movement > Lane Change > Going Straight. Ideally, a predefined max DNLB could be used to distinguish different movements.

Fig. 9. Distance from Tracking Point to Lane Boundary for Different Movements (DNLB in meters vs. relative tracking time).

However, since there may be some outliers in the vehicle trajectories obtained from the vehicle tracking algorithm [32], the max DNLB alone may not distinguish different movements correctly. Those error points appear randomly during the tracking procedure, and their number is relatively low; in previous tests we rarely saw error points in adjacent frames. A sliding window five points in length was therefore used to scan the trajectory, moving one point at a time. The average DNLB (AD) within the window was calculated and used for movement classification, and the max value of AD (Max AD) was then obtained. The thresholds dividing the different movements are summarized in Table 2.

Table 2
Thresholds for Movement Division with Sliding Window.

Turning Movement: Max AD ≥ 4.0 m or Max AD ≤ -4.0 m
Lane Change: 2.0 m ≤ Max AD ≤ 4.0 m or -4.0 m ≤ Max AD ≤ -2.0 m
Going Straight: -2.0 m ≤ Max AD ≤ 2.0 m

Fig. 10 shows an example of lane change identification. The Max AD in this example was -2.92 m, which falls in the range {-4.0 m ≤ Max AD ≤ -2.0 m}, so the event was identified as a lane change movement. The method was coded into an automatic procedure in Matlab and evaluated using 100 vehicle trajectories. The results showed that the lane change identification can distinguish different movements with 95% accuracy; five turning movements were identified as lane changes due to the limited detection range (the vehicles went out of the detection range when they turned right, so Max AD did not reach the turning-movement threshold). The method is fairly straightforward and distinguishes different movements with high accuracy.

Fig. 10. Lane Change Identification (DNLB vs. tracking time; sliding-window interval of 1 point per step; Max AD = -2.92 m).
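A compact version of this sliding-window classification, using the Table 2 thresholds, might look as follows (a sketch; the Matlab original is not published in the paper).

```python
import numpy as np

def classify_movement(dnlb, window=5):
    """Classify a trajectory from its DNLB series using a 5-point sliding
    average (stepped one point at a time) and the Table 2 thresholds."""
    dnlb = np.asarray(dnlb, dtype=float)
    if dnlb.size < window:          # guard for very short trajectories
        window = dnlb.size
    # Average DNLB (AD) over each window position.
    ad = np.convolve(dnlb, np.ones(window) / window, mode="valid")
    max_ad = ad[np.argmax(np.abs(ad))]  # signed value of the largest |AD|
    if abs(max_ad) >= 4.0:
        return "turning"
    if abs(max_ad) >= 2.0:
        return "lane_change"
    return "going_straight"
```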

5. Lane change prediction

The lane change identification above extracts lane change movements from vehicle trajectories and can be used for offline data analysis. For a driving assistance system to provide real-time warning information, real-time lane change prediction is required. Different drivers and different types of vehicles may lead to different DNLB values: a previous study [33] found that the lateral offset while going straight has a standard deviation of 0.51 m, so if a vehicle shows a DNLB of 1.0 m at one frame, it is difficult to say whether it is going to go straight or not. The lateral speed could be useful for lane change prediction, but drivers adopt different lateral speeds when changing lanes: aggressive drivers may choose sharp lateral speeds and conservative drivers smooth ones, so a lateral-speed threshold may not work well for prediction either. A high threshold would miss some lane change events, while a low threshold would increase false detections, for example by flagging go-through vehicles with a large deviation of lateral speed as lane-changing. The change of DNLB (CDNLB) and the cumulative change of DNLB (CCDNLB) were therefore selected for lane change prediction. The CDNLB is easily calculated using Eq. (6):

CDNLB_i = DNLB_i - DNLB_{i-1}    (6)

where DNLB_i represents the DNLB at frame i. The CCDNLB is obtained using Eq. (7):

CCDNLB_{i:i+n} = Σ_{j=i}^{i+n} CDNLB_j    (7)
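Eqs. (6) and (7) are a frame-to-frame difference and its sum over a span, so they reduce to two array operations (a sketch; variable names are ours). Note that the sum of CDNLB over a span telescopes to the net DNLB change across it.

```python
import numpy as np

dnlb = np.array([1.00, 1.05, 1.20, 1.45, 1.80])  # example DNLB series (m)

cdnlb = np.diff(dnlb)       # Eq. (6): CDNLB_i = DNLB_i - DNLB_{i-1}
ccdnlb = np.cumsum(cdnlb)   # running form of Eq. (7)

# CCDNLB over the whole span telescopes to DNLB_last - DNLB_first:
assert np.isclose(cdnlb.sum(), dnlb[-1] - dnlb[0])  # 0.80 m here
```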



The threshold of CCDNLB for lane change prediction should be higher than 0.51 m to exclude the influence of the deviation of the lateral offset. A higher value increases accuracy but also increases the time needed for prediction [32]. In this research, 0.8 m was selected as the CCDNLB threshold for lane change prediction based on the analysis of a previous study [33]. The details of the prediction procedure are illustrated as follows. To eliminate outliers in tracking points, the procedure searches the nearest three frames before and after the tracking frame. For example, to predict the movement of a vehicle at frame i, the procedure calculates CDNLB_i, CDNLB_{i-1}, and CDNLB_{i-2}. If CDNLB_{i-2}, CDNLB_{i-1}, and CDNLB_i have the same sign (+ or -) and CCDNLB_{i-2:i} ≥ 0.8 m, the vehicle at frame i is reported as a lane change movement (lane change and turning movements are not distinguished in the prediction part). If CCDNLB ≥ 0.8 m but the CDNLBs at those three adjacent frames have different signs, the case is not reported as a lane change, since a high CCDNLB may be caused by outliers; the procedure then searches CDNLB_{i+1} and CDNLB_{i+2} to see whether they have the same sign as CDNLB_i, and if three adjacent frames with the same sign and CCDNLB ≥ 0.8 m are found, the vehicle movement is still identified as a lane change movement. For events in which a vehicle has a low lateral speed, the CDNLBs at three adjacent frames may have the same sign while the CCDNLB stays below 0.8 m; in that situation the procedure keeps searching further frames as long as the CDNLBs keep the same sign, and if the CCDNLB of the vehicle at any frame then exceeds 0.8 m, the vehicle movement at that frame is reported as a lane change movement. The procedure is further illustrated in Fig. 11.
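A runnable sketch of this rule set is given below, following the Fig. 11 logic as we read it (a same-sign check over at least three consecutive CDNLBs, gated by the 0.8 m CCDNLB threshold); the edge-case handling is our interpretation, not the published Matlab code.

```python
import numpy as np

THRESH = 0.8  # CCDNLB gate (m), from [33]

def predict_lane_change(dnlb):
    """Return the first frame index reported as a lane change, or None.

    Starting from the three most recent CDNLBs, extend the same-sign run
    forward until the cumulative change (CCDNLB) reaches the threshold.
    """
    cdnlb = np.diff(dnlb)
    i = 2
    while i < len(cdnlb):
        window = cdnlb[i - 2:i + 1]
        if np.all(window > 0) or np.all(window < 0):
            # Extend the run while the sign persists, accumulating CCDNLB.
            j = i
            total = window.sum()
            while True:
                if abs(total) >= THRESH:
                    return j + 1  # frame index in the DNLB series
                j += 1
                if j >= len(cdnlb) or np.sign(cdnlb[j]) != np.sign(cdnlb[i]):
                    break
                total += cdnlb[j]
        i += 1
    return None

# Example: drifting 0.12 m per frame crosses the 0.8 m gate at frame 7.
print(predict_lane_change(np.arange(0, 1.6, 0.12)))
```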


Fig. 11. Procedure of Lane Change Prediction (flow chart: start at frame i; check that CDNLB_{i-2:i} share the same sign; check CCDNLB ≥ 0.8 m; report a lane change at frame i or at a later frame j; otherwise advance to the next frame).

6. Evaluation of lane change prediction

The algorithm was implemented in Matlab and deployed on a Dell desktop equipped with an Intel Core i7-4790 CPU (3.60 GHz) and 16 GB of RAM. The procedure was evaluated using data collected at a real site, N Virginia St @ 15th St in Reno, Nevada. A total of 103 lane change movements and 324 go-straight movements were used for evaluation. Two evaluation criteria [32], prediction accuracy (PA) and prediction time (PT), were applied to evaluate the performance of the lane change prediction. PA is defined as

PA = (P_S + P_C) / (N_S + N_C)    (8)

where N_S and N_C represent the number of go-straight vehicles and the number of lane change vehicles, respectively, and P_S and P_C represent the number of correctly predicted go-straight vehicles and correctly predicted lane change vehicles, respectively. The results of the PA are shown in Table 3. In the testing dataset, 4.63% of go-straight vehicles were identified as lane change vehicles by mistake, and 5.88% of lane change vehicles were not successfully detected. The overall accuracy of lane change prediction was therefore PA = (309 + 96) / (324 + 102) = 95.07%.

Table 3
Evaluation of Lane Change Identification.

N_S: 324    P_S: 309
N_C: 102    P_C: 96
PA: 95.07%

The PT was defined as

PT = T_c - T_j    (9)

where T_c is the moment at which the vehicle crosses the centerline and T_j is the moment at which the algorithm judges the movement of the vehicle. Obviously, a higher PT gives the driver more time to react if an emergency happens. Motivated by [20], two criteria were defined using PT:

1) Success: 0 < PT < 5.0 s (predicted within the time limit).
2) Failure: PT < 0 (too late).

Among the 96 detected lane change movements, 85.42% (82) were detected with PT in [3.5 s, 4.0 s] and 95.83% (92) with PT in [2.3 s, 4.0 s]; only 4 events were considered prediction failures since PT < 0. From these results, we can confirm that the proposed method meets the real-time lane change prediction requirement of the CV network.

7. Conclusions and discussion

This paper developed a lane change identification and prediction method with roadside LiDAR data for the CV network. The method for lane change identification from historical vehicle trajectories is straightforward: its sole input is the lateral offset from the lane boundary (DNLB). The testing results showed that almost all lane change and turning movements can be distinguished from go-straight vehicles, though a few movements may be misidentified due to the quality of roadside LiDAR data. The extracted trajectories of lane change vehicles will be helpful for lane-change-related near-crash analysis. The change of DNLB and the cumulative change of DNLB were used for lane change prediction, and most lane change events could be detected within 1.0 s of the maneuver. This research did not require any inputs from driver observation, such as eye glance or head pose data; the lane change prediction was conducted purely from vehicle trajectories, which are easy to obtain with the help of roadside LiDAR. The authors are working on an Android app to receive the real-time lane change warning information through the RSU. A beta version of the app is shown in Fig. 12; unsafe lane change events will be shown in the "Collision warning message" part of the app.

It should be noted that a constant threshold (0.8 m) of CCDNLB was used for lane change prediction. The threshold was selected based on a study conducted 20 years ago [33], and driver behavior may have changed considerably since then; it is therefore necessary to examine driving behaviors using newer traffic databases, such as the Strategic Highway Research Program 2 (SHRP 2) database [34]. This will be studied in the next step. For object classification, a basic ANN was applied in this paper; the authors will test the performance of other methods, such as random forests and probabilistic neural networks, in future studies. This research did not analyze the maneuvers of different vehicle types, since the classification of vehicle types is still ongoing research for roadside LiDAR data. Though the proposed lane change detection method achieves relatively high accuracy, the false rate should be reduced further, since false alerts could distract or annoy drivers. The accuracy of the current method is mainly limited by the accuracy of the roadside LiDAR data processing algorithms [32]. One challenge is that some vehicles may not be detected, or may only partially be detected, when a single roadside LiDAR sensor is deployed on the road, because of the occlusion problem. Occlusion refers to the situation in which one object is hidden by another object passing between it and the roadside LiDAR [35]; for example, a vehicle in a far lane may be only partially visible, or invisible, if it is blocked by another vehicle in a lane close to the sensor. Our solution is to install multiple LiDAR sensors at different corners of intersections and along both sides of roads, thus providing better-quality LiDAR data of all road users. Future studies will investigate how to integrate the data obtained from different LiDAR sensors.


Fig. 12. Implementation of the real-time safety warning app on an Android device (the interface shows countdown signal heads of the next signal, the device location as the map center, vehicles and pedestrians detected by LiDAR, zoom/reset buttons, Bluetooth connection management and status, and the collision warning message, e.g. "Warning: possible collision on the left").

References

[1] E. Uhlemann, Introducing connected vehicles [connected vehicles], IEEE Veh. Technol. Mag. 10 (1) (2015) 23-31.
[2] E. Uhlemann, Connected-vehicles applications are emerging [connected vehicles], IEEE Veh. Technol. Mag. 11 (1) (2016) 25-96.
[3] R. Toledo-Moreo, M.A. Zamora-Izquierdo, IMM-based lane-change prediction in highways with low-cost GPS/INS, IEEE Trans. Intell. Transp. Syst. 10 (1) (2009) 180-185.
[4] W.H. Lin, H. Liu, H.K. Lo, Guest editorial: Big data for driver, vehicle, and system control in ITS, IEEE Trans. Intell. Transp. Syst. 17 (6) (2016) 1663-1665.
[5] J. Wu, H. Xu, Driver behavior analysis for right-turn drivers at signalized intersections using SHRP 2 naturalistic driving study data, J. Saf. Res. 63 (2017) 177-185.
[6] J. Wu, H. Xu, Y. Zheng, W. Liu, Y. Sun, R. Yue, X. Song, Driver behavior fault analysis on ramp-related crashes/near-crashes using SHRP 2 naturalistic driving study data, 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018, pp. 2134-2139.
[7] Z.Y. Zhang, J. Zheng, X. Wang, X. Fan, Background filtering and vehicle detection with roadside LiDAR based on point association, 2018 37th Chinese Control Conference (CCC), 2018, pp. 7938-7943.
[8] Q. Wang, J. Zheng, H. Xu, B. Xu, R. Chen, Roadside magnetic sensor system for vehicle detection in urban environments, IEEE Trans. Intell. Transp. Syst. 19 (5) (2018) 1365-1374.
[9] Y. Wan, Y. Huang, B. Buckles, Camera calibration and vehicle tracking: highway traffic video analytics, Transp. Res. Part C: Emerg. Technol. 44 (2014) 202-213.
[10] J. Wu, H. Xu, J. Zheng, Automatic background filtering and lane identification with roadside LiDAR data, 2017 20th International Conference on Intelligent Transportation Systems (ITSC), 2017, pp. 1-20.
[11] J. Zhao, H. Xu, H. Liu, J. Wu, Y. Zheng, D. Wu, Detection and tracking of pedestrians and vehicles using roadside LiDAR sensors, Transp. Res. Part C: Emerg. Technol. 100 (2019) 68-87.
[12] J. Wu, Y. Tian, H. Xu, R. Yue, A. Wang, X. Song, Automatic ground points filtering of roadside LiDAR data using a channel-based filtering algorithm, Opt. Laser Technol. 115 (2019) 374-383.
[13] J. Wu, H. Xu, Y. Sun, J. Zheng, R. Yue, Automatic background filtering method for roadside LiDAR data, Transp. Res. Rec. 2672 (45) (2018) 106-114.
[14] J. Wu, H. Xu, J. Zhao, Automatic lane identification using the roadside LiDAR sensors, IEEE Intell. Transp. Syst. Mag., in press, 2018, doi: 10.1109/MITS.2018.2876559.
[15] Y. Sun, H. Xu, J. Wu, J. Zheng, K.M. Dietrich, 3-D data processing to extract vehicle trajectories from roadside LiDAR data, Transp. Res. Rec. 2672 (45) (2018) 14-22.
[16] B. Lv, H. Xu, J. Wu, Y. Tian, Y. Zhang, Y. Zheng, C. Yuan, S. Tian, LiDAR-enhanced connected infrastructures sensing and broadcasting high-resolution traffic information serving smart cities, IEEE Access 7 (2019) 79895-79907.
[17] J. Zheng, B. Xu, X. Wang, X. Fan, H. Xu, G. Sun, A portable roadside vehicle detection system based on multi-sensing fusion, Int. J. Sensor Netw. 29 (1) (2019) 38-47.
[18] D. Bevly, X. Cao, M. Gordon, G. Ozbilgin, D. Kari, B. Nelson, J. Woodruff, M. Barth, C. Murray, A. Kurt, K. Redmill, Lane change and merge maneuvers for connected and automated vehicles: a survey, IEEE Trans. Intell. Veh. 1 (1) (2016) 105-120.
[19] S. Sivaraman, M.M. Trivedi, Dynamic probabilistic drivability maps for lane change and merge driver assistance, IEEE Trans. Intell. Transp. Syst. 15 (5) (2014) 2063-2073.
[20] J. Gao, Y.L. Murphey, H. Zhu, Personalized detection of lane changing behavior using multisensor data fusion, Computing (2019) 1-24.
[21] S. Park, W. Lim, M. Sunwoo, Robust lane-change recognition based on an adaptive hidden Markov model using measurement uncertainty, Int. J. Automot. Technol. 20 (2) (2019) 255-263.
[22] C. Wang, Q. Sun, Z. Li, H. Zhang, K. Ruan, Cognitive competence improvement for autonomous vehicles: a lane change identification model for distant preceding vehicles, IEEE Access 7 (2019) 83229-83242.
[23] H. Lee, S. Kim, S. Park, Y. Jeong, H. Lee, K. Yi, AVM/LiDAR sensor based lane marking detection method for automated driving on complex urban roads, 2017 IEEE Intelligent Vehicles Symposium (IV), 2017, pp. 1434-1439.
[24] H. Woo, Y. Ji, H. Kono, Y. Tamura, Y. Kuroda, T. Sugano, Y. Yamamoto, A. Yamashita, H. Asama, Lane-change detection based on vehicle-trajectory prediction, IEEE Rob. Autom. Lett. 2 (2) (2017) 1109-1116.
[25] A. Díaz-Álvarez, M. Clavijo, F. Jiménez, E. Talavera, F. Serradilla, Modelling the human lane-change execution behaviour through multilayer perceptrons and convolutional neural networks, Transp. Res. Part F: Traffic Psychol. Behav. 56 (2018) 134-148.
[26] G. Xiong, Z. Kang, H. Li, W. Song, Y. Jin, J. Gong, Decision-making of lane change behavior based on RCS for automated vehicles in the real environment, 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 1400-1405.
[27] J. Zhao, H. Xu, J. Wu, Y. Zheng, H. Liu, Trajectory tracking and prediction of pedestrian's crossing intention using roadside LiDAR, IET Intell. Transp. Syst. 13 (5) (2018) 789-795.
[28] M. Campos-Taberner, A. Romero-Soriano, C. Gatta, G. Camps-Valls, A. Lagrange, B. Le Saux, A. Beaupere, A. Boulch, A. Chan-Hon-Tong, S. Herbin, H. Randrianarivo, Processing of extremely high-resolution LiDAR and RGB data: outcome of the 2015 IEEE GRSS data fusion contest, part A: 2-D contest, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 9 (12) (2016) 5547-5559.
[29] J. Wu, An automatic procedure for vehicle tracking with a roadside LiDAR sensor, Inst. Transp. Eng. ITE J. 88 (11) (2018) 32-37.
[30] J. Chen, H. Xu, J. Wu, R. Yue, C. Yuan, L. Wang, Deer crossing road detection with roadside LiDAR sensor, IEEE Access 7 (2019) 65944-65954.
[31] Y. Cui, H. Xu, J. Wu, Y. Sun, J. Zhao, Automatic vehicle tracking with roadside LiDAR data for the connected-vehicles system, IEEE Intell. Syst. 34 (3) (2019) 44-51.
[32] J. Wu, H. Xu, Y. Zheng, Z. Tian, A novel method of vehicle-pedestrian near-crash identification with roadside LiDAR data, Accid. Anal. Prev. 121 (2018) 238-249.
[33] A. Pentland, A. Liu, Modeling and prediction of human behavior, Neural Comput. 11 (1) (1999) 229-242.
[34] J. Wu, H. Xu, The influence of road familiarity on distracted driving activities and driving operation using naturalistic driving study data, Transp. Res. Part F: Traffic Psychol. Behav. 52 (2018) 75-85.
[35] B. Lv, H. Xu, J. Wu, Y. Tian, S. Tian, S. Feng, Revolution and rotation-based method for roadside LiDAR data integration, Opt. Laser Technol. 119 (2019) 105571.
