
IATSS Research

Review article

Automated driving recognition technologies for adverse weather conditions

Keisuke Yoneda ⁎, Naoki Suganuma, Ryo Yanase, Mohammad Aldibaja
Kanazawa University, Kakuma-machi, Kanazawa, Ishikawa 920-1192, Japan

⁎ Corresponding author. E-mail address: [email protected] (K. Yoneda). Peer review under the responsibility of the International Association of Traffic and Safety Sciences.

Article info

Article history: Received 23 August 2019; Received in revised form 24 October 2019; Accepted 20 November 2019; Available online xxxx

Keywords: Automated vehicle; Self-localization; Surrounding recognition; Path planning; Adverse condition

Abstract

During automated driving in urban areas, decisions must be made while recognizing the surrounding environment using sensors such as cameras, Light Detection and Ranging (LiDAR), millimeter-wave radar (MWR), and the global navigation satellite system (GNSS). The ability to drive under various environmental conditions is an important issue for automated driving on any road. In order to introduce automated vehicles into the market, the ability to evaluate various traffic conditions and navigate safely presents serious challenges. Another important challenge is the development of a robust recognition system that can account for adverse weather conditions. Sun glare, rain, fog, and snow are adverse weather conditions that can occur in the driving environment. This paper summarizes research focused on automated driving technologies and discusses the challenges in identifying adverse weather and other situations that make driving difficult and thus complicate the introduction of automated vehicles to the market.

© 2019 International Association of Traffic and Safety Sciences. Production and hosting by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Contents

1. Introduction
2. Automated driving
   2.1. System overview
   2.2. Process flow
   2.3. Self-localization technology
   2.4. Surrounding perception technology
   2.5. Trajectory planning technology
3. Adverse weather conditions
   3.1. Sun glare condition
   3.2. Rainy condition
   3.3. Foggy condition
   3.4. Snowy condition
4. Discussion
5. Conclusion
References

1. Introduction

Automated driving technology is being researched and developed as a next-generation transportation system for purposes such as reducing the number of traffic accidents and the driver's workload.


At present, public road demonstration experiments have mainly been conducted in Europe, the United States, and Asia. Solving technical problems in various circumstances is an important issue for the market introduction of automated driving technology. An automated vehicle recognizes surrounding objects, makes decisions, and controls its steering and acceleration autonomously. It is generally equipped with various types of sensors such as light detection and ranging (LiDAR), millimeter-wave radar (MWR), cameras, and a global navigation satellite system/inertial navigation system (GNSS/INS). The following processes occur in real time:

• Self-localization: The vehicle position is estimated by introducing GNSS/INS and applying map matching.
• Environmental recognition: Static obstacles and dynamic objects are recognized using range sensors and vision sensors. Traffic lights, as dynamic road features, are recognized using vision sensors.
• Motion prediction: The future states of objects are predicted using the digital map and dynamic object information.
• Decision-making: The driving route is computed considering driving rules using the predefined digital map and recognition results.
• Trajectory generation: Smooth and safe trajectories are optimized by considering the driving lane, static objects, and dynamic objects.
• Control: Steering and acceleration/deceleration are controlled along the obtained trajectory.

In order to introduce automated vehicles into the marketplace, successful driving evaluations under various types of traffic conditions are indispensable. Developing decision-making systems that take into account different traffic conditions such as road structures, traffic density, and traffic rules is necessary when introducing automated vehicles into different applications such as urban or suburban areas, as well as different countries. In addition, if there are streetcars or parked vehicles in the driving area, additional safety decision-making must be considered for such specific situations. In order to develop a robust system and adapt it flexibly to such complicated situations, a prior-knowledge-based approach is an effective solution. Therefore, digital-map-based perception and decision-making systems are commonly developed for Society of Automotive Engineers (SAE) International level 4 and 5 systems [1]. Digital maps such as high-definition (HD) maps are maintained by various providers. A map consists of stationary road features such as lanes, curbs, stop lines, zebra crossings, traffic signs, and traffic lights, together with their precise positions, including latitude, longitude, and altitude. In addition, the map contains detailed road shapes and relational information for the road features. This makes it possible to simplify environmental recognition and abstract decision-making in the surrounding road situation. For example, by referring to the map entries for traffic light positions, it is possible to determine search regions in the captured image and recognize the traffic light states in a robust manner. By referring to the map entries for the centerlines around the vehicle, it is possible to extract the driving lane, determine the conditions of vehicles traveling in the surrounding lanes, and easily generate smooth trajectories. In this way, automated driving that considers differences in travel areas and traffic rules can be abstracted by using digital maps. Another important challenge is the implementation of a robust recognition system that considers adverse weather conditions.
Sun glare, rain, fog, and snow are adverse weather conditions that can occur in a typical driving environment. Since each sensor mounted on an automated vehicle has advantages and disadvantages under different environmental conditions, it is necessary to design recognition and decision-making systems that are robust under these conditions. From a practical point of view, depending on the specifications of the onboard sensors, there are situations in which automated driving is not possible under adverse conditions. Therefore, identifying the situations that make driving difficult is an important issue. This paper summarizes research cases that focus on automated driving technology under adverse conditions and discusses the challenging issues in introducing automated vehicles under such conditions.

Specifically, this paper describes the effects of adverse weather conditions on the observation data and recognition performance of the major sensors, namely LiDAR, MWR, and cameras, and discusses the associated technical problems. Many review papers on automated driving technology have been published, outlining the key technologies [2–14]. This paper presents automated driving technologies for adverse weather conditions, which have not been covered sufficiently in previous review papers. The rest of this paper is organized as follows. Section 2 explains a general automated vehicle system. Section 3 describes challenging issues under adverse weather conditions such as severe sunshine, rain, fog, and snow. Finally, Section 4 discusses how to deal with such challenging conditions, and Section 5 concludes the paper.

2. Automated driving

2.1. System overview

Automated vehicles are equipped with various sensors such as LiDAR, MWR, cameras, and GNSS/INS to recognize the surrounding environment. Fig. 1 shows an example of an automated vehicle owned by our research group. The vehicle is equipped with an Applanix POS/LV220 coupled GNSS and INS, which provides position (latitude, longitude, and altitude) and orientation (pitch, yaw, and roll) information at 100 Hz. A 3D LiDAR (Velodyne HDL-64E S2) with 64 separate beams is mounted on the vehicle to take measurements of the environment; it measures the 3D omnidirectional distance at a frequency of about 10 Hz. Nine MWRs are installed inside the front and rear bumpers to recognize distant objects. The MWRs measure the distance, angle, and relative velocity of objects at 20 Hz. A mono-camera is installed on the windshield inside the vehicle and provides a 1920 × 1080-pixel resolution at 10 Hz. Table 1 summarizes the characteristics of the major sensors (LiDAR, MWR, and camera). LiDAR is an active sensor that measures the distance to an object by irradiating an infrared laser and using the time of flight (ToF). In addition to the distance to the object, the infrared reflectivity can be measured.

Fig. 1. Automated vehicle: (a) experimental vehicle (GNSS antenna, 3D LiDAR, frontal camera, rotary encoder, and millimeter-wave radars); (b) layout of the nine omnidirectional millimeter-wave radars.


Table 1
Characteristics of LiDAR, MWR, and camera.

Item                                       | LiDAR          | MWR         | Camera
Sensing type                               | Active         | Active      | Passive
Physical type                              | Infrared laser | Radio wave  | Visible light / near infrared (NIR) / far infrared (FIR)
Measurement range                          | About 200 m    | About 200 m | More distant (depending on lens)
Directional resolution                     | O              | X           | O
Distance resolution                        | O              | O           | X
Sunlight resistance                        | O              | O           | X
Environmental resistance (rain, fog, snow) | X              | O           | X

O: better, X: not good.

This makes it possible to distinguish objects with different reflectance, such as the road surface and lane boundaries. Fig. 2 shows an example of 3D LiDAR observation data. As can be seen in Fig. 2, LiDAR has the advantage that the surroundings can be observed three-dimensionally. However, since the measurement is performed by irradiating a laser from the sensor, the point cloud becomes sparse for distant objects, and a blind spot occurs behind each object. MWR is also an active sensor; it measures the distance to an object using radiated radio waves. In addition to the distance to the object, the relative velocity in the irradiation direction can be measured using frequency modulated continuous wave (FMCW) radar. MWR is a sensor with excellent environmental resistance because it is attenuated less by fog, rain, and snow than LiDAR and visible-light cameras. For example, when measuring in snowfall, LiDAR returns echoes from the snow itself, whereas MWR can pass through the snow and measure guardrails and utility poles ahead. Active sensors such as LiDAR and MWR can take measurements without being affected by sunlight; therefore, they can observe the surroundings regardless of day or night. On the other hand, the camera is a passive sensor that captures the surroundings as an image. It not only measures the surroundings with high density but also acquires color information; for example, it is indispensable for recognizing traffic light colors. The distance cannot be measured directly as with LiDAR or MWR, but it can be obtained with a stereo camera. Visible-light cameras are vulnerable to adverse conditions such as fog and struggle without a light source at night. In contrast, infrared cameras can see through fog and can observe pedestrians at night because they measure temperature. However, the infrared camera has a lower resolution than the visible-light camera owing to the image sensor required to detect infrared rays; the resolution of onboard infrared cameras is about 1.3 megapixels. Therefore, visible-light cameras are mainly used, and the use of infrared cameras is currently limited.
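As a rough illustration of the ranging principles mentioned above, the following sketch computes range from a LiDAR time-of-flight measurement, and range plus radial velocity from the up- and down-chirp beat frequencies of a triangular FMCW radar. It is a minimal textbook-style example, not the processing of any particular sensor product; the chirp parameters and beat frequencies are hypothetical values chosen only for illustration.

```python
# Minimal sketch of ToF and triangular-FMCW ranging (illustrative values only).
C = 299_792_458.0  # speed of light [m/s]

def tof_range(round_trip_time_s: float) -> float:
    """LiDAR time of flight: the laser travels to the target and back."""
    return C * round_trip_time_s / 2.0

def fmcw_range_velocity(f_beat_up: float, f_beat_down: float,
                        sweep_bandwidth: float, sweep_time: float,
                        carrier_freq: float) -> tuple[float, float]:
    """Triangular FMCW: the up/down beat frequencies mix the range and Doppler
    components, so their sum gives range and their difference gives radial
    velocity (positive = target approaching)."""
    f_range = (f_beat_up + f_beat_down) / 2.0
    f_doppler = (f_beat_down - f_beat_up) / 2.0
    r = C * sweep_time * f_range / (2.0 * sweep_bandwidth)
    v = C * f_doppler / (2.0 * carrier_freq)
    return r, v

if __name__ == "__main__":
    print(tof_range(400e-9))                       # ~60 m for a 400 ns round trip
    print(fmcw_range_velocity(
        f_beat_up=9.5e4, f_beat_down=10.5e4,       # hypothetical beat frequencies [Hz]
        sweep_bandwidth=300e6, sweep_time=1e-3,    # assumed 300 MHz sweep over 1 ms
        carrier_freq=76e9))                        # 76 GHz automotive band
```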

Using these sensors, the vehicle estimates its position on the digital map, recognizes surrounding objects and road features, and then drives safely according to the traffic rules. In urban driving, the vehicle is required to recognize objects in various directions, such as oncoming vehicles at intersections, pedestrians on zebra crossings, and vehicles approaching from behind in adjacent lanes. Therefore, it is important to design the sensor layout so that objects in all directions can be observed. For particularly important areas, a robust system can be expected to cover the observation area with multiple sensors. In addition, the automated driving system needs to process a large amount of sensor information in real time to make appropriate situational decisions.

2.2. Process flow

Fig. 3 shows an example of the flow of sensor information processing. A wide variety of information must be processed by the onboard computer in order to drive autonomously, but three major technologies are required. The first technology is self-localization. Many automated driving systems that travel on various roads, including urban roads, operate using HD maps. A digital map has high accuracy and various kinds of recorded information, which makes it possible to reduce the technical level required for automated driving. On the other hand, in order to use the HD map effectively, the vehicle position on the map must always be estimated with high accuracy using onboard sensors; in general, decimeter-level accuracy is required. The second technology is environment recognition.

Fig. 2. Typical Observation Data for 3D LiDAR.


Fig. 3. Process flow for automated driving: perception (camera, millimeter-wave radar, and LiDAR feeding traffic light detection, static obstacle mapping, and dynamic object estimation), self-localization (GNSS/INS and map matching), planning (digital map, route planning, decision-making, and trajectory planning), and control (steering and acceleration).

Since the automated vehicle recognizes its surrounding environment using onboard sensors and makes situational decisions based on the recognition results, it is important to understand the environment around the vehicle and to predict how the conditions may change. In addition, in order for the vehicle to behave smoothly in various types of traffic situations, it is necessary to recognize road features such as traffic lights. The third technology is trajectory planning. The behavior of the vehicle must be planned in real time by integrating the self-localization results, the surrounding perception results, and the digital map information. These three technologies were first developed in the field of mobile robotics. Because various situations can occur in a traffic environment where human drivers and automated systems are mixed, specialized technologies must be developed for the driving situations encountered by an automated vehicle.
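To make the data flow of Fig. 3 concrete, the toy cycle below wires the three technologies together in the order localize → perceive → plan. All modules are trivial placeholders returning dummy values and the function names are hypothetical; only the hand-off of results between the stages mirrors the process flow described above.

```python
# Toy end-to-end cycle for the three technologies of Section 2.2 (placeholders only).
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    yaw: float

def localize(gnss_prior: Pose, lidar_scan) -> Pose:
    # Placeholder: a real system refines the GNSS/INS prior by map matching (Sec. 2.3).
    return gnss_prior

def perceive(lidar_scan, image, pose: Pose) -> dict:
    # Placeholder: a real system detects obstacles, objects, and traffic lights (Sec. 2.4).
    return {"obstacles": [], "objects": [], "traffic_light": "green"}

def plan(pose: Pose, perception: dict, route) -> list:
    # Placeholder: a real system optimizes a smooth, collision-free trajectory (Sec. 2.5).
    return [(pose.x + i * 1.0, pose.y) for i in range(10)]  # straight 10 m path

def drive_one_cycle(gnss_prior: Pose, lidar_scan, image, route) -> list:
    pose = localize(gnss_prior, lidar_scan)
    perception = perceive(lidar_scan, image, pose)
    return plan(pose, perception, route)

print(drive_one_cycle(Pose(0.0, 0.0, 0.0), lidar_scan=None, image=None, route=None))
```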

2.3. Self-localization technology

Simply providing an HD map does not by itself lower the technical hurdle of automated driving; the vehicle position on the map must be estimated with high accuracy in order to utilize the map. In the past, GNSS was often used to obtain the ego position. GNSS can measure a position; however, there are major issues when it is applied to automated driving. For example, in an urban environment lined with tall buildings, it is difficult to measure a precise position owing to multipath propagation of GNSS signals, and in a tunnel the position cannot be obtained at all because no satellite signals are received. To mitigate such problems, GNSS is generally coupled with an INS. However, the accuracy still decreases in environments where GNSS signals cannot be received for a long time. In addition, even if the vehicle position can be measured at the centimeter level using real-time kinematic GNSS (RTK-GNSS), a proper correspondence between the vehicle and the map cannot be obtained if the digital map itself contains a large error. Therefore, it is essential to estimate the vehicle position on the map in real time, and map matching technology is used for this purpose. For self-localization, many studies apply a map matching technique between a reference digital map and the sensor observations. The reference map contains sensor feature information around roads together with precise positions. Generally, three types of map structure are used as predefined maps: a 3D point cloud map, a 2D image map, and a vector-structured map. A 3D point cloud map contains information about the 3D objects around the road; although the maintenance cost for this type of map is not high, the data size is enormous. A 2D image map is generated by extracting road surface features from a 3D point cloud, which reduces the data size compared with the 3D map. The vector map contains polynomial curves of road and lane boundaries such as white lines and curbs; it is lightweight in terms of data size but has a high maintenance cost.
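As a simplified illustration of matching against a 2D image map, the sketch below slides a small bird's-eye-view intensity image obtained from the current LiDAR scan over the reference road-surface map around the GNSS/INS prior and keeps the offset with the highest correlation. Real systems [15–17] estimate full 2D/3D poses with far more robust cost functions; the grid resolution, search window, and scoring here are arbitrary choices for illustration, and match_to_map is a hypothetical function name.

```python
import numpy as np

def match_to_map(reference: np.ndarray, observation: np.ndarray,
                 prior_rc: tuple[int, int], search: int = 10) -> tuple[int, int, float]:
    """Find the (row, col) placement of `observation` in `reference` that best
    aligns with it, searching +/-`search` cells around the GNSS/INS prior.
    Returns the refined cell and the matching score."""
    h, w = observation.shape
    obs = observation - observation.mean()
    best = (prior_rc[0], prior_rc[1], -np.inf)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = prior_rc[0] + dr, prior_rc[1] + dc
            patch = reference[r:r + h, c:c + w]
            if patch.shape != obs.shape:
                continue  # candidate falls outside the reference map
            score = float(np.sum((patch - patch.mean()) * obs))  # cross-correlation
            if score > best[2]:
                best = (r, c, score)
    return best

# Toy example: a reference map with a bright lane marking and a shifted observation.
ref = np.zeros((100, 100)); ref[:, 48:52] = 1.0    # marking at map columns 48-51
obs = np.zeros((20, 20));   obs[:, 6:10] = 1.0     # marking at local columns 6-9
print(match_to_map(ref, obs, prior_rc=(40, 45)))   # refined column should be 42
```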

In order to obtain decimeter-level accuracy, LiDAR-based methods have been studied because of the high measurement accuracy of the sensor and its robustness to changes between day and night. In related works, map matching methods such as a histogram filter [15] and scan matching [16,17] were proposed. These methods generally estimate the location of the vehicle by matching road paint and building shape information along the road as landmarks. Additionally, for low-cost implementation, camera-based methods have been proposed that match image observations onto a 3D LiDAR map [18,19]. On the other hand, self-localization methods that utilize non-LiDAR maps are also being considered [20–22]. These methods estimate the position by using image features [20] or MWR features [21,22] as a reference map; however, the accuracy of the obtained position is currently inferior to that of the LiDAR-based methods. It is necessary to select an appropriate method according to the driving conditions, sensor specifications, and required accuracy.

2.4. Surrounding perception technology

In surrounding object recognition, road features such as traffic lights, surrounding obstacles, and traffic participants such as vehicles, pedestrians, and cyclists must be recognized in real time using onboard sensors. For an automated vehicle to drive on public roads, it is an important task to recognize the road features along the traveling lanes in order to follow traffic rules. Recognition of static road features such as speed limits and stop lines can be omitted by registering them in the digital map as prior information; however, dynamic road features such as traffic lights need to be recognized in real time. The traffic light is therefore one of the most important dynamic road features in intersection driving. Traffic lights can be recognized only with a color camera because the lighting color must be classified, and for smooth driving at intersections, the traffic light states must be recognized from a distance greater than 100 m. In related works, prior-map-based detection methods have been proposed [23–25]: introducing a digital map makes it possible to determine search regions and reduce the number of false detections. Recognition methods using circular features [26,27], which are the basic shape of traffic lights, and recognition methods using DNNs [25] have also been reported. The recognition of static obstacles around the vehicle requires accurate measurement. Generally, surrounding obstacles are detected using range sensors such as LiDAR, stereo cameras, and MWR. In order to reduce the influence of instantaneous false detections, an occupancy grid map is generated as a 2D or 3D static obstacle map with free space by applying a binary Bayes filter as time-series processing.
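The binary Bayes filter just mentioned can be written compactly in log-odds form: each grid cell accumulates evidence from successive scans and is later thresholded into occupied, free, or unknown. The increment values, clamping limits, and grid size below are arbitrary illustrative choices, not those of any particular system.

```python
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments per observation (illustrative)
L_MIN, L_MAX = -4.0, 4.0     # clamp so cells stay responsive to change

def update_grid(log_odds: np.ndarray, hit_cells, free_cells) -> np.ndarray:
    """One binary-Bayes (log-odds) update of a 2D occupancy grid.
    `hit_cells` contain a range-sensor return in this scan; `free_cells`
    are the cells traversed by the beams before the return."""
    for r, c in hit_cells:
        log_odds[r, c] = min(L_MAX, log_odds[r, c] + L_OCC)
    for r, c in free_cells:
        log_odds[r, c] = max(L_MIN, log_odds[r, c] + L_FREE)
    return log_odds

def occupancy_probability(log_odds: np.ndarray) -> np.ndarray:
    """Convert log-odds back to occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

grid = np.zeros((200, 200))                       # 0 log-odds = unknown (p = 0.5)
for _ in range(5):                                # the same obstacle seen in 5 scans
    update_grid(grid,
                hit_cells=[(100, 120)],
                free_cells=[(100, c) for c in range(100, 120)])
print(occupancy_probability(grid)[100, 118:121])  # free cells ~0.12, hit cell ~0.98
```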
In addition, in order to recognize surrounding traffic participants, classification by machine learning is performed using the object shape obtained from the range sensor and the image information as input features. In recognition using a range sensor, the observation points are clustered, and the object category of each cluster is classified by machine learning such as AdaBoost or a support vector machine (SVM) using its shape features [28–30]. Fig. 4 shows typical object data from LiDAR observation. If the distance to the object is about several tens of meters, a dense point cloud is obtained and the detailed shape of the object can be confirmed. However, it is difficult to obtain detailed shape information for distant objects because the point cloud becomes sparse. On the other hand, recognition using camera images provides denser observation information than LiDAR, and distant objects over 100 m away can be recognized by selecting an appropriate lens. In recent years, as graphics processing unit (GPU)-based acceleration has become available, deep neural network (DNN)-based recognition has been developed as a highly accurate camera-based recognition method [31,32]. However, it is difficult to obtain the distance to the object directly because these methods output the recognition result as a rectangular bounding box in the image.


Fig. 4. Typical LiDAR Point Clouds for Traffic Participants.

In order to obtain the relative position of the recognized objects, it is therefore necessary to acquire distance information by sensor fusion with a stereo camera or another range sensor. In addition to traffic participant recognition, it is necessary to predict the states of objects several seconds ahead in order to perform safe collision detection and harmonized driving such as distance keeping. Therefore, it is important to estimate not only the positions of the dynamic objects but also their velocity and acceleration. Generally, the states of surrounding objects are estimated using probabilistic algorithms such as a Kalman filter or a particle filter. Since the distance to the object needs to be measured, range sensors such as LiDAR, MWR, and stereo cameras are mainly used [4]; a method using a mono-camera has also been proposed [33]. Robust object tracking can be implemented not only with a single sensor but also by fusing multiple types of sensors. In order to predict the future trajectory of a recognized object, it is effective to use the road shape information from the digital map. In addition to the current motion state of the recognized object, the object behavior several seconds ahead can be predicted by using the road shape and its connection information. Moreover, more appropriate behavior prediction can be expected by introducing traffic rules and the interactions between surrounding traffic participants [34].

2.5. Trajectory planning technology

Decision-making during automated driving requires three types of situational judgment. The first is route planning. This is familiar from car navigation systems, but here the route must be searched at the lane level from the current position to the destination. In an environment where road information from a digital map exists, the route can be searched using dynamic programming along the connections of roads. If there is no explicit road information, such as in a parking lot or a large open space, the optimal route must be searched within the drivable area [35–37]. The second is travel planning that complies with traffic rules. It is essential to follow the traffic rules when traveling along the route. In particular, when judging entry into an intersection, a safe procedure must be secured in consideration of the recognized traffic light status, the stop line position, and the positional relationship with oncoming vehicles and pedestrians on the zebra crossing. In addition, it is important to consider the priority relationship between the current road and the destination road when driving through an intersection. It is therefore an important task to judge the situation while logically processing travel under such traffic conditions [38]. The third is trajectory optimization. For the obtained route, trajectory planning searches for the minimum-cost, collision-free trajectory [39,40]. In trajectory planning, polynomial functions are used to generate continuous trajectories and minimize jerk (the rate of change of acceleration); see the sketch below. Primitive driving patterns such as distance keeping and velocity keeping were defined in previous studies [39]. Flexible automated driving can be realized by designing the driving behavior on a public road as a decision-making model.
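As a sketch of the polynomial trajectory generation referred to above, in the spirit of the Frenet-frame method of [39], the snippet below builds a quintic polynomial connecting boundary conditions on position, velocity, and acceleration; among all curves satisfying those boundary conditions, the quintic minimizes the integral of squared jerk. The boundary values and time horizon are arbitrary examples, and the function names are ours, not those of any published implementation.

```python
import numpy as np

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    """Coefficients c[0..5] of x(t) = sum c[i] * t**i matching position,
    velocity, and acceleration at t = 0 and t = T (jerk-optimal connection)."""
    A = np.array([
        [T**3,     T**4,      T**5],
        [3 * T**2, 4 * T**3,  5 * T**4],
        [6 * T,    12 * T**2, 20 * T**3],
    ])
    b = np.array([
        xT - (x0 + v0 * T + 0.5 * a0 * T**2),
        vT - (v0 + a0 * T),
        aT - a0,
    ])
    c3, c4, c5 = np.linalg.solve(A, b)
    return np.array([x0, v0, 0.5 * a0, c3, c4, c5])

def evaluate(coeffs, t):
    return sum(ci * t**i for i, ci in enumerate(coeffs))

# Longitudinal example: accelerate smoothly from 10 m/s to 15 m/s over 5 s.
c = quintic_coeffs(x0=0.0, v0=10.0, a0=0.0, xT=65.0, vT=15.0, aT=0.0, T=5.0)
print([round(float(evaluate(c, t)), 2) for t in np.linspace(0.0, 5.0, 6)])
```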

These trajectory generation methods can theoretically guarantee the continuity and smoothness of the determined trajectory by using polynomial functions. In recent years, on the other hand, machine-learning-based approaches that generate control commands [42] or trajectories [43] with DNNs have been studied. The input contains multidimensional data such as the camera image or the positions of surrounding objects, and the neural network produces a driving behavior such as smooth acceleration. Since a DNN can acquire adaptive behavior by machine learning, it can be expected to apply to a wide range of environments. However, since the obtained behaviors, such as lane keeping, are still relatively simple and primitive, it is necessary to design a learning model that also incorporates a rule-based approach.

3. Adverse weather conditions

This section summarizes the influences of sun glare, rain, fog, and snow, the adverse weather conditions that commonly occur in public road driving, on sensor observation and recognition technologies. Table 2 describes the influence on each recognition technology under these adverse weather conditions, and the technical issues in each condition are described in the following subsections. Some sensors perform poorly under particular conditions, but for technologies that can be implemented with several sensors, the operational scope of automated driving systems can be expanded by designing them as redundant, multi-sensor systems. However, for technologies that can only be implemented with a single sensor, such as traffic light recognition by a camera, a robust implementation using vehicle-to-everything (V2X) communication is also required in addition to recognition by in-vehicle sensors when conditions make recognition difficult. Related research on such adverse weather conditions can be categorized into studies that evaluate sensor performance under each condition using actual environments, test courses, and simulations [44–47], studies that classify the types or degrees of adverse conditions from observation data [48–52], and studies that aim to achieve the same level of recognition performance as under normal conditions [53–55]. If the degree of an adverse condition can be estimated, it becomes possible to perform realistic operations such as prompting the driver to take over.

3.1. Sun glare condition

As summarized in Table 1, LiDAR and MWR, which are active sensors, can take measurements without being affected by sunlight, whereas the camera, a passive sensor, suffers from image saturation under direct sunlight. Camera-based recognition is therefore degraded by sunlight, and the possible performance degradation of functions realized only by the camera, such as traffic light recognition, must be considered. Fig. 5 shows typical driving images at an intersection. The image is blown out under strong sunshine, as shown in Fig. 5(a). Thus, it is necessary to consider not only the algorithms but also hardware improvements. In recent years, cameras with a high-dynamic-range (HDR) function have been developed [56–58] to be robust to changes in brightness, as shown in Fig. 5(b).


Table 2
Influences of adverse weather conditions on the recognition technologies of each sensor.

Sensor | Recognition technology | Sun glare | Rain | Fog | Snow
LiDAR | Self-localization; traffic participant recognition; static/dynamic object estimation | – | Reflectivity degradation; reduction in recognition distance; shape change due to splash | Reflectivity degradation; reduction in measurement distance | Change in peripheral shape; road surface occlusion; noise due to snowfall
MWR | Self-localization; static/dynamic object estimation | – | Reduction in measurement distance | Reduction in measurement distance | Mild noise due to fallen snow
Camera | Self-localization; traffic light detection; traffic participant recognition; static/dynamic object estimation | Visibility degradation | Visibility degradation; road surface occlusion | Whiteout of objects; visibility degradation | –

In particular, since the recognition of traffic lights is an indispensable task when judging entry into an intersection, it is important to address severe sunlight from both the software and hardware points of view. On the other hand, when recognizing surrounding traffic participants, the recognition processes can be implemented using LiDAR and MWR as well as cameras. As mentioned above, image recognition is expected to suffer from more false detections and missed detections under sun glare conditions; therefore, it is important to build a redundant system that combines multiple sensors rather than depending on a single sensor.

3.2. Rainy condition

Rainfall is an adverse condition that occurs frequently, and its influence on camera and LiDAR observations must be considered. In camera measurements, raindrops on the lens cause noise in the captured image, as shown in Fig. 6(a), and the recognition performance may be degraded by the resulting loss of image quality. When a camera is installed at the front of the vehicle, raindrops can be removed by placing the camera within the movable range of the wipers. However, when cameras are installed on the sides or outside of the vehicle to capture images in all directions, the effects of raindrops are inevitable; a camera installation that takes raindrops into account and a mechanism for removing them are therefore necessary. In addition, studies on recognizing raindrops in the captured image have been reported [51,52]; in order to remove false recognitions in rainy conditions, these detection results can be used to assess the reliability of the image recognition results. On the other hand, LiDAR observations become noisy owing to reflection from and attenuation by raindrops. To investigate the influence of rain on LiDAR, analyses evaluating observations in test environments with actual data and simulations have been reported [44,45].

Rainfall of about 10 mm/h has little effect on LiDAR's ability to make accurate measurements, but in heavy rain of 30 mm/h or more, the measurable distance falls to less than 50% [44]. In addition to the direct effects of rainfall, an important consideration is rain splash from puddles as oncoming and adjacent vehicles pass by, as shown in Fig. 6(b). With LiDAR observations alone, it is difficult to distinguish whether an observation corresponds to a splash of water or to an actual obstacle to be avoided; in such situations, sensor fusion, such as incorporating camera image information, is necessary. Although MWR is more environmentally resistant than LiDAR, it is affected by radio attenuation due to rainfall [46]: according to simulation results [46], the detectable range drops to 45% of the normal case in very heavy rain of 150 mm/h. Moreover, since the infrared reflectance characteristics become unstable compared with the non-rainfall situation, the influence on self-localization performance must also be considered. In the map matching process, false estimations may occur owing to differences between the reflectance characteristics recorded in the map and those obtained in real time. To overcome such problems, approaches such as matching that is robust to noise and the reconstruction of observations have been reported [59]. Stable localization can be expected by reducing the difference between the prior map and the observed information.

3.3. Foggy condition

Although fog is a rarer situation, its influence on sensing is also an issue to be considered. As in rainy conditions, the influence on LiDAR and camera measurements is the main concern in a foggy environment. Reports evaluating the measurement distance [44,45] confirmed that the LiDAR measurement distance decreases as the fog becomes denser and the visibility distance becomes shorter.

Fig. 5. Severe Sunshine Image for Traffic Lights using Non-HDR and HDR cameras.


Fig. 6. Splash by Adjacent Vehicle during Rain.

The measurement range of the camera likewise becomes an issue as the visibility distance shortens: image recognition with visible light cannot be expected in situations where it is difficult even for humans to see. Fig. 7 shows images captured in an artificial foggy environment; the image quality is clearly degraded, and the visibility of lighting objects such as traffic lights also deteriorates. Fog is a rare case, but using an infrared camera is one solution for keeping the surroundings visible in such a situation. Both LiDAR and cameras are thus affected by fog, but studies have reported suppressing the deterioration of recognition performance by using deep neural networks that exploit the information from these sensors [53,54]. On the other hand, it has also been reported that there is no significant influence on the ranging distance of MWR at short range (about 20 m) [45]. Although millimeter waves are attenuated by absorption in water vapor, the reported results indicate that the effect on the detection range is not serious. Under rain and fog conditions, it is important to recognize surrounding objects in consideration of the characteristics of each sensor.

3.4. Snowy condition

The implementation of automated driving during snowfall is an important issue in expanding the application range and operating area of automated driving. In addition, it can be expected to support operations that require experience in conventional human driving, such as operating snowplows.

However, it is important to resolve the issues for each elemental technology because the surrounding road conditions during snowfall differ from those in normal weather. One issue is that the vehicle cannot self-localize because the map information cannot be matched properly when there is snow on the road surface. Specifically, the shapes of objects around the road change owing to snowfall, and the road paint cannot be observed owing to occlusion, so the gathered information becomes inconsistent with the map. In such a case, it becomes difficult to recognize which lane the vehicle is traveling in, making appropriate automated driving difficult. There are three approaches to achieving stable self-localization during snowfall. The first approach uses high-precision RTK-GNSS. However, as mentioned in Section 2.3, it is difficult to determine the exact position on the map if the absolute accuracy obtained in real time differs from that of the map. The second approach reconstructs the observation information, in the same way as the countermeasures against rainfall; a good effect can be expected if the amount of snow is small and the road surface and surrounding conditions can be partially observed [60]. The third approach uses a sensor that is robust to snowfall. A system has been proposed that uses MWRs, which are robust against adverse weather, for self-localization [61]. The obtained lateral accuracy is about 0.2 m when using MWR; although this does not match the 0.1 m accuracy obtained with LiDAR under normal weather conditions, it was reported to be unaffected by the presence or absence of snowfall (see Fig. 8). The next technical issue for automated vehicles during snowfall is object recognition.

Fig. 7. Captured Images in Artificial Foggy Condition.


For example, in LiDAR observations, snow in the air may be perceived as noise, as shown in Fig. 9. Falling snow is perceived this way more than rain because it falls more slowly and has larger particles. To resolve this issue, methods have been developed that remove the noise by reconstructing the observation information [55]; robust recognition can be expected by removing the noise contained in the sensor observations using machine learning. For MWR, it has been reported that the observable distance is reduced by about 25% due to snowfall [47]; although the detectable distance is shorter, the influence of snowfall is smaller than on LiDAR. Another technical issue is path planning on snow-covered roads. The drivable area may change owing to snow piled at the side of the road: for example, a road that is normally a straight two-lane road may offer only about 1.5 lanes of space after heavy snowfall. Under such circumstances, human drivers cope empirically, adjusting the number of usable lanes in a self-organizing manner. Since it is not easy to implement such situational judgment with simple rules, an adaptive method is necessary. The same problem arises when traveling through an intersection. In automated driving using an HD map, it is difficult to maneuver smoothly during snowfall by referring only to the map information, because the centerlines are recorded assuming there is no snow. During automated driving under snowfall, it is therefore important to plan the drivable area adaptively according to the surrounding conditions rather than relying completely on the map information.

4. Discussion

Section 2 described the role of each elemental technology of an automated driving system, and Section 3 described the technical issues under each weather condition. In general, for LiDAR-based systems, it is important to design the system with each of these technical issues under adverse weather in mind; when using a camera, the performance degradation caused by sunlight must also be considered. LiDAR plays a major role in today's automated vehicles, and its robustness to adverse weather conditions is an essential issue for existing systems. In developing an automated driving system for such adverse conditions, important issues include improving the recognition ability by sensor fusion, estimating the reliability of each recognition system, and utilizing infrastructure-installed sensors. As mentioned above, there are weather conditions under which cameras and LiDAR each struggle. Therefore, sensors with excellent environmental resistance, such as MWR, are important when designing the system configuration. In fact, MWR-based self-localization has been reported to be useful in snowfall conditions, and future improvements are expected. However, in a system configuration based on MWR alone, sufficient ability cannot be obtained when recognizing surrounding objects.

Fig. 8. Automated Driving on Snowy Road using MWR-based Localization.

Fig. 9. Typical LiDAR Observation in Snowy Condition.

Although the motion of surrounding objects can be estimated with MWR-based object tracking, it is difficult to recognize detailed shapes and object categories. MWR hardware is being developed to improve the resolution of the observations by moving from 76 GHz to 79 GHz. In addition, research cases using 3D MWR, developed as a next-generation MWR, are being reported, which is expected to contribute to future recognition ability [62]. At present, a system design that effectively utilizes the rich observation information obtained with a camera or LiDAR is effective for implementing robust road recognition and self-localization. Thus, recognition technology based on low-level fusion, which mutually integrates the observation information of the camera, LiDAR, and MWR, can be developed. For example, a robust system design can be expected when object recognition is performed using camera information, which offers highly accurate recognition ability, while the distance to an object is acquired from LiDAR or MWR range measurements. In addition, many methods that model and recognize such sensor fusion with deep neural networks have been proposed [63]. In these approaches, robust recognition can be expected by performing sensor fusion so as to compensate for the advantages and disadvantages of each sensor. However, under adverse conditions, it must also be assumed that the observations of a specific sensor degrade significantly. Therefore, both low-level fusion of observation information and a high-level fusion approach that integrates the results recognized by each sensor are important. Although improvements in recognition ability can be expected from environmentally resistant sensors and from the robustness provided by sensor fusion, the reliability of each recognition result must be estimated for practical operation. When designing a system with the above-mentioned high-level fusion, appropriate integration can be implemented by estimating the reliability of each subsystem. For example, for self-localization, high-level fusion can be implemented by running multiple estimators in parallel, such as a LiDAR-based method and an MWR-based method. In this case, the stability of the obtained position fluctuates for each method depending on the surrounding buildings, road surfaces, and weather conditions. Therefore, a reliability measure should be introduced to objectively estimate the stability of each method and the degree of positioning error; for example, a measure for estimating the error state of matching in real-time self-localization has been reported [64]. If the reliability of each system is obtained, the final estimated position can be computed by weighting the individual estimates with their reliability values, which can contribute to robust automated driving.
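A minimal sketch of the high-level fusion just described: two localization results (e.g., LiDAR-based and MWR-based) are combined by weighting each with its estimated reliability, here instantiated as inverse-variance weighting so that the more trustworthy estimate dominates. How the reliability itself is obtained (e.g., matching-error measures as in [64]) is the hard part and is not modeled here; the numbers are hypothetical.

```python
import numpy as np

def fuse_positions(estimates, variances):
    """Inverse-variance weighting of several (x, y) position estimates.
    A larger variance (lower reliability) contributes a smaller weight."""
    estimates = np.asarray(estimates, dtype=float)      # shape (n, 2)
    weights = 1.0 / np.asarray(variances, dtype=float)  # shape (n,)
    weights /= weights.sum()
    return weights @ estimates, weights

# LiDAR map matching degraded by snow (assumed variance 1.0 m^2) versus
# MWR-based localization that is unaffected (assumed variance 0.04 m^2).
fused, w = fuse_positions(estimates=[(10.0, 5.0), (10.6, 5.4)], variances=[1.0, 0.04])
print(fused, w)   # fused point lies close to the MWR estimate; weights ~[0.04, 0.96]
```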


As a practical implementation method other than the above approaches based on in-vehicle sensor recognition, the use of infrastructure-installed sensors can be considered. With the sensors installed on an automated vehicle, situations may occur in which blind spots are essentially impossible to observe, or in which the recognition ability declines under various adverse conditions. For example, for traffic light recognition under severe sunshine, acquiring the traffic light states by communication with the infrastructure in situations where they cannot be seen from the on-vehicle camera can be considered a practical solution. In addition, the load on the on-vehicle sensors can be reduced by acquiring recognition information about pedestrians and surrounding vehicles from infrastructure sensors installed at the intersection. On the other hand, with regard to installation cost, the benefits of an infrastructure sensor are significant if many automated vehicles use it, but it is expensive if the number of automated vehicles using it is low. Thus, expanding the social introduction of automated vehicles requires weighing the limits of the recognition ability of the onboard sensors against the introduction cost of infrastructure sensors.

5. Conclusion

This paper summarized the problems and research on automated driving under the adverse weather conditions of sun glare, rain, fog, and snow, and the relevant recognition technologies for urban roads, with the aim of clarifying the limits of a practical operational system under such conditions. By considering the advantages and disadvantages of each sensor and recognition method, a robust recognition system can be designed through sensor fusion and the use of infrastructure sensors. For effective sensor fusion, it is important to implement both low-level and high-level fusion and to estimate sensor reliability. In addition, when using infrastructure sensors, a system design that balances the installation cost against the merits of introduction is required for introducing the automated driving system into society. Using today's learning technologies, automated driving can be designed by defining the operational area and application range of the driving system so that it is introduced to society based on realistic factors and the conditions of everyday driving.

References

[1] SAE International, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, J3016_201401, 2014 (https://www.sae.org/standards/content/j3016_201401/).
[2] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid, J.J. Leonard, Past, present, and future of simultaneous localization and mapping: toward the robust-perception age, IEEE Transactions on Robotics 32 (6) (2016) 1309–1332.
[3] G. Bresson, Z. Alsayed, L. Yu, S. Glaser, Simultaneous localization and mapping: a survey of current trends in autonomous driving, IEEE Transactions on Intelligent Vehicles 2 (3) (2017) 194–220.
[4] K. Granstrom, M. Baum, S. Reuter, Extended object tracking: introduction, overview and applications, Advances in Information Fusion 12 (2) (2016) 139–174.
[5] U. Franke, D. Pfeiffer, C. Rabe, C. Knoeppel, M. Enzweiler, et al., Making Bertha see, Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, 2013.
[6] J. Ziegler, T. Dang, U. Franke, H. Lategahn, P. Bender, et al., Making Bertha drive: an autonomous journey on a historic route, IEEE Intell. Transp. Syst. Mag. 6 (2) (2014) 8–20.
[7] G.V. Zitzewitz, Survey of Neural Networks in Autonomous Driving, 2017.
[8] E. Arnold, O.Y. Al-Jarrah, M. Dianati, S. Fallah, D. Oxtoby, A. Mouzakitis, A survey on 3D object detection methods for autonomous driving applications, IEEE Transactions on Intelligent Transportation Systems, 2019, https://doi.org/10.1109/TITS.2019.2892405.
[9] C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, et al., Autonomous driving in urban environments: Boss and the Urban Challenge, J. Field Robot. 25 (8) (2008) 425–466.
[10] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, et al., Towards fully autonomous driving: systems and algorithms, Proceedings of the 2011 IEEE Intelligent Vehicles Symposium, 2011.
[11] D. Gonzalez, J. Perez, V. Milanes, F. Nashashibi, A review of motion planning techniques for automated vehicles, IEEE Trans. Intell. Transp. Syst. 17 (4) (2016) 1135–1145.
[12] B. Paden, M. Cap, S.Z. Yong, D. Yershov, E. Frazzoli, A survey of motion planning and control techniques for self-driving urban vehicles, IEEE Transactions on Intelligent Vehicles 1 (1) (2016) 33–55.


[13] K. Abboud, H.A. Omar, W. Zhuang, Interworking of DSRC and cellular network technologies for V2X communications: a survey, IEEE Trans. Veh. Technol. 65 (12) (2016) 9457–9470.
[14] C. Badue, R. Guidolini, R.V. Carneiro, P. Azevedo, V.B. Cardoso, et al., Self-Driving Cars: A Survey, arXiv preprint arXiv:1901.04407, 2019.
[15] J. Levinson, S. Thrun, Robust vehicle localization in urban environments using probabilistic maps, Proceedings of the 2010 IEEE International Conference on Robotics and Automation, 2010, pp. 4372–4378.
[16] K. Yoneda, C.X. Yang, S. Mita, T. Okuya, K. Muto, Urban road localization by using multiple layer map matching and line segment matching, Proceedings of the 2015 IEEE Intelligent Vehicles Symposium, 2015, pp. 525–530.
[17] N. Akai, L.Y. Morales, E. Takeuchi, Y. Yoshihara, Y. Ninomiya, Robust localization using 3D NDT scan matching with experimentally determined uncertainty and road marker matching, Proceedings of the 2017 IEEE Intelligent Vehicles Symposium, 2017, pp. 1357–1364.
[18] R.W. Wolcott, R.M. Eustice, Visual localization within LIDAR maps for automated urban driving, Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014, pp. 176–183.
[19] Y. Xu, V. John, S. Mita, H. Tehrani, K. Ishimaru, S. Nishino, 3D point cloud map based vehicle localization using stereo camera, Proceedings of the 2017 IEEE Intelligent Vehicles Symposium, 2017, pp. 487–492.
[20] J. Ziegler, H. Lategahn, M. Schreiber, C.G. Keller, C. Knoppel, J. Hipp, M. Haueis, C. Stiller, Video based localization for BERTHA, Proceedings of the 2014 IEEE Intelligent Vehicles Symposium, 2014, pp. 1231–1238.
[21] F. Schuster, M. Worner, C.G. Keller, M. Haueis, C. Curio, Robust localization based on radar signal clustering, Proceedings of the 2016 IEEE Intelligent Vehicles Symposium, 2016, pp. 839–844.
[22] S. Park, D. Kim, K. Yi, Vehicle localization using an AVM camera for an automated urban driving, Proceedings of the 2016 IEEE Intelligent Vehicles Symposium, 2016, pp. 871–876.
[23] N. Fairfield, C. Urmson, Traffic light mapping and detection, Proceedings of the 2011 IEEE International Conference on Robotics and Automation, 2011, pp. 5421–5426.
[24] J. Levinson, J. Askeland, J. Dolson, S. Thrun, Traffic light mapping, localization and state detection for autonomous vehicles, Proceedings of the 2011 IEEE International Conference on Robotics and Automation, 2011, pp. 5784–5791.
[25] V. John, K. Yoneda, Z. Liu, S. Mita, Saliency map generation by the convolutional neural network for real-time traffic light detection using template matching, IEEE Transactions on Computational Imaging 1 (3) (2015) 159–173.
[26] M. Omachi, S. Omachi, Traffic light detection with color and edge information, Proceedings of ICCSIT 2009 (2009) 284–287.
[27] K. Yoneda, N. Suganuma, M. Aldibaja, Simultaneous state recognition for multiple traffic signals on urban road, Proceedings of MECATRONICS-REM (2016) 135–140.
[28] L. Spinello, et al., A layered approach to people detection in 3D range data, Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010, pp. 1625–1630.
[29] A. Teichman, J. Levinson, S. Thrun, Towards 3D object recognition via classification of arbitrary object tracks, Proceedings of ICRA (2011) 4034–4041.
[30] N. Suganuma, M. Yoshioka, K. Yoneda, M. Aldibaja, LIDAR-based object classification for autonomous driving on urban roads, J. Adv. Control Autom. Robot. 3 (2) (2017) 92–95.
[31] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.Y. Fu, A.C. Berg, SSD: single shot MultiBox detector, Proceedings of the European Conference on Computer Vision (2016) 21–37.
[32] J. Redmon, A. Farhadi, YOLOv3: An Incremental Improvement, arXiv preprint arXiv:1804.02767, 2018.
[33] A. Kuramoto, M. Aldibaja, R. Yanase, J. Kameyama, K. Yoneda, N. Suganuma, Mono-camera based 3D object tracking strategy for autonomous vehicles, Proceedings of the 2018 IEEE Intelligent Vehicles Symposium, 2018, pp. 459–464.
[34] J. Schulz, C. Hubmann, J. Lochner, D. Burschka, Interaction-aware probabilistic behavior prediction in urban environments, Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 3999–4006.
[35] D. Dolgov, S. Thrun, M. Montemerlo, J. Diebel, Practical search techniques in path planning for autonomous driving, Proceedings of the First International Symposium on Search Techniques in Artificial Intelligence and Robotics (STAIR-08), 2008.
[36] M. Likhachev, D. Ferguson, G. Gordon, S. Thrun, A. Stentz, Anytime dynamic A*: an anytime, replanning algorithm, Proceedings of the International Conference on Automated Planning and Scheduling, 2005.
[37] Q. Huy Do, H. Tehrani, K. Yoneda, S. Ryohei, S. Mita, Vehicle path planning with maximizing safe margin for driving using Lagrange multipliers, Proceedings of the 2013 IEEE Intelligent Vehicles Symposium, 2013, pp. 171–176.
[38] C.R. Baker, J.M. Dolan, Traffic interaction in the urban challenge: putting Boss on its best behavior, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008, pp. 1752–1758.
[39] M. Werling, J. Ziegler, S. Kammel, S. Thrun, Optimal trajectory generation for dynamic street scenarios in a Frenet frame, Proceedings of the International Conference on Robotics and Automation, 2010, pp. 987–993.
[40] H. Tehrani, M. Shimizu, T. Ogawa, Adaptive lane change and lane keeping for safe and comfortable driving, Proceedings of the Second International Symposium on Future Active Safety Technology Toward Zero-Traffic Accident, 2013.
[42] M. Bojarski, D.D. Testa, D. Dworakowski, B. Firner, B. Flepp, et al., End to End Learning for Self-Driving Cars, arXiv preprint arXiv:1604.07316, 2016.
[43] M. Bansal, A. Krizhevsky, A. Ogale, ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst, arXiv preprint arXiv:1812.03079, 2018.
[44] M. Hadj-Bachir, P. de Souza, LIDAR Sensor Simulation in Adverse Weather Condition for Driving Assistance Development, hal-01998668, 2019.


[45] M. Kutila, P. Pyykonen, H. Holzhuter, M. Colomb, P. Duthon, Automotive LiDAR performance verification in fog and rain, Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems, 2018.
[46] S. Zang, M. Ding, D. Smith, P. Tyler, T. Rakotoarivelo, M. Kaafar, The impact of adverse weather conditions on autonomous vehicles: how rain, snow, fog, and hail affect the performance of a self-driving car, IEEE Vehicular Technology Magazine 14 (2) (2019) 103–111.
[47] T. Yamawaki, S. Yamano, 60 GHz millimeter-wave automotive radar, Fujitsu Ten Tech. J. 11 (1998) 3–14.
[48] R. Heinzler, P. Schindler, J. Seekircher, W. Ritter, W. Stork, Weather Influence and Classification with Automotive Lidar Sensors, arXiv preprint arXiv:1906.07675, 2019.
[49] M. Roser, F. Moosmann, Classification of weather situations on single color images, Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, 2008, pp. 798–803.
[50] J. Guerra, Z. Khanam, S. Ehsan, R. Stolkin, K. Maier, Weather classification: a new multi-class dataset, data augmentation approach and comprehensive evaluations of convolutional neural networks, Proceedings of the 2018 NASA/ESA Conference on Adaptive Hardware and Systems, 2018.
[51] H. Kurihata, T. Takahashi, I. Ide, Y. Mekada, H. Murase, Y. Yamatsu, T. Miyahara, Rainy weather recognition from in-vehicle camera images for driver assistance, Proceedings of the 2005 IEEE Intelligent Vehicles Symposium, 2005.
[52] D. Webster, T. Breckon, Improved raindrop detection using combined shape and saliency descriptors with scene context isolation, Proceedings of the 2015 IEEE International Conference on Image Processing, 2015.
[53] B. Cai, X. Xu, K. Jia, C. Qing, D. Tao, DehazeNet: an end-to-end system for single image haze removal, IEEE Trans. Image Process. 25 (11) (2016) 5187–5198.

[54] M. Bijelic, F. Mannan, T. Gruber, W. Ritter, K. Dietmayer, F. Heide, Seeing Through Fog Without Seeing Fog: Deep Sensor Fusion in the Absence of Labeled Training Data, arXiv preprint arXiv:1902.08913, 2019.
[55] L. Caccia, H.V. Hoof, A. Courville, J. Pineau, Deep Generative Modeling of LiDAR Data, arXiv preprint arXiv:1812.01180, 2019.
[56] ON Semiconductor, AR0233AT CMOS Image Sensor, HDR+LFM, https://www.onsemi.jp/PowerSolutions/product.do?id=AR0233AT, 2019.
[57] Sony, IMX490 CMOS Image Sensor for Automotive Cameras, https://www.sony.net/SonyInfo/News/Press/201812/18-098E/index.html, 2018.
[58] OmniVision Technologies, OAX4010 Automotive Image Signal Processor, 2019 (https://www.ovt.com/sensors/OAX4010).
[59] M. Aldibaja, N. Suganuma, K. Yoneda, Robust intensity based localization method for autonomous driving on snow-wet road surface, IEEE Transactions on Industrial Informatics 13 (5) (2017) 2369–2378.
[60] M. Aldibaja, A. Kuramoto, R. Yanase, T.H. Kim, K. Yoneda, N. Suganuma, Lateral road-mark reconstruction using neural network for safe autonomous driving in snow-wet environments, Proceedings of the 2018 IEEE International Conference on Intelligence and Safety for Robotics, 2018, pp. 486–493.
[61] K. Yoneda, N. Hashimoto, R. Yanase, M. Aldibaja, N. Suganuma, Vehicle localization using 76 GHz omnidirectional millimeter-wave radar for winter automated driving, Proceedings of the 2018 IEEE Intelligent Vehicles Symposium, 2018, pp. 971–977.
[62] M. Slutsky, D. Dobkin, Dual inverse sensor model for radar occupancy grids, Proceedings of the 2019 IEEE Intelligent Vehicles Symposium, 2019, pp. 1550–1557.
[63] C.R. Qi, W. Liu, C. Wu, H. Su, L.J. Guibas, Frustum PointNets for 3D Object Detection from RGB-D Data, arXiv preprint arXiv:1711.08488, 2017.
[64] N. Akai, L.Y. Morales, H. Murase, Reliability estimation of vehicle localization result, Proceedings of the 2018 IEEE Intelligent Vehicles Symposium, 2018, pp. 740–747.
