AUTONOMOUS EXPLORATION BEHAVIOR PLANNING FOR PLANETARY ROVER Takashi Kubota, Riho Ejiri, Ichiro Nakatani Institute of Space and Astronautical Science, JAXA
Abstract: This paper proposes an efficient scheme to make a routing plan and a sensing plan simultaneously for lunar or planetary exploration rovers. In the proposed scheme, the environment is recognized widely by using a vision system. The proposed algorithm detects obstacles based on gray-level images, and a rough route is produced by judging whether each area is safe or dangerous. Firstly, the shade average and the variance at each pixel are calculated from the obtained image, and the environment recognition is performed. According to the recognition results, a rough path and a sensing strategy are planned. The rover then senses locally based on the planned behavior and moves to the destination. The validity of the proposed method is shown by computer simulations. Keywords: Path Planning, Visual Navigation, Obstacle Avoidance, Planetary Rover, Autonomous Behavior Planning, Space Robotics, Mobile Robot.
1. INTRODUCTION Several missions to explore the moon or Mars with unmanned mobile robots are being planned for scientific observation. Recently many researchers have studied and developed planetary rovers for unmanned surface exploration (Chatila, et al., 1993; Volpe and Peters, 2003; Bornschlegl, et al., 2003; Weisbin, et al., 2003). Micro-rover missions in particular have received a lot of attention, because small, low-cost missions are typically constrained by mass, budget and schedule. In July 1997, NASA/JPL performed the Mars Pathfinder mission, and the Sojourner rover moved on the Martian surface and transmitted voluminous data back to the earth (Weisbin, et al., 2003). NASA also succeeded in the 2003 MER mission (Volpe and Peters, 2003), for which NASA/JPL developed the twin rovers Spirit and Opportunity. These rovers are capable of long traverses, reliable navigation and science instrument control; they carried several science instruments, obtained many scientific results on Mars, succeeded in long traverses, and are still in operation.
As a part of the development programs, teleoperation and autonomous navigation technologies have been studied to realize a rover able to move on unknown lunar or planetary surfaces (Matthies, et al., 1995). In recent years, many researchers have earnestly studied and developed planetary rovers for unmanned surface exploration (Miller, et al., 1990; Simmons, et al., 1993). However, there are few navigation systems that can travel safely over a long distance for many days in unknown terrain, and only a few practical path-planning methods based on sensory data have been proposed (Gat, et al., 1994; Lee and Shih, 1990; Simmons, et al., 1995; Morlanss and Liegeois, 1992). In Japan, the Institute of Space and Astronautical Science (ISAS) has actively studied surface exploration technologies required for Mars and other planetary exploration missions. The authors have studied a planetary rover that can travel safely over a long distance on rough terrain. The rover R&D group of ISAS, Meiji University and Chuo University has developed a small, lightweight micro rover with a new mobility system, called "Micro5"
(Kubota, et al., 1999). The developed novel suspension system is a simple and light mechanism like that of a four-wheeled rover, yet provides a high degree of mobility like a six-wheeled rover. As an efficient navigation scheme, an autonomous behavior planning system has been studied. This paper proposes an efficient scheme to make a routing plan and a sensing plan simultaneously for lunar or planetary exploration rovers. In the proposed scheme, the environment is recognized widely, and obstacles are detected from the image. The validity of the proposed method is shown by computer simulations. 2. PLANETARY EXPLORATION IN JAPAN Since the launch of the first Japanese artificial satellite Ohsumi in 1970, ISAS, now the space science division of the Japan Aerospace Exploration Agency (JAXA), has been responsible for Japan's space science missions, from conceptualizing through developing, launching and operating spacecraft. In the course of the cooperative and concerted efforts among engineers, scientists and industry in developing and operating missions, ISAS has made progress in the technologies and accumulated the experience required to achieve recent and near-future space science missions. Planetary missions, including lunar missions and those by planetary landers and rovers, require a high degree of autonomy in every aspect of spacecraft performance. For such missions, navigation, guidance and control are no exception, and technologies that adapt intelligently to unknown or ambiguous environments have been required. With this in mind, and to cope with launch weight limitations, a variety of sensors, actuators and control systems have been investigated to develop small explorers with light weight and low power consumption. Figure 1 shows the planned and studied future exploration missions in Japan. The MUSES-C spacecraft, renamed HAYABUSA (meaning falcon in English) in orbit, was launched on May 9, 2003.
The primary goal of the mission is to develop and verify the technologies necessary to retrieve samples from a small body in the solar system, with the additional science return of obtaining fragments from the asteroid 1998SF36. The major technologies to be verified include 1) use of a solar-powered electric propulsion system for an interplanetary mission, 2) autonomous navigation, guidance and control to a small target in space, 3) rendezvous with an asteroid and sampling of the surface material, and 4) reentry at high speed (more than 12 km/s) from an interplanetary earth-return trajectory. The scientific objective of the LUNAR-A mission is to explore the lunar interior using seismometry and heat-flow measurements to better understand the
origin and evolution of the Moon. A lunar orbiter carries two penetrators to be shot 1 to 3 meters below the surface, and collects and relays data from the penetrators to the ground station. ISAS/JAXA is also promoting the SELENE mission, in which a lunar circular orbiter and an elliptical orbiter will be launched in 2007. SELENE will perform highly accurate, high-resolution observation of the elemental and mineral composition of the surface, the topography, the subsurface structure, the magnetic anomaly and the gravitational field. ISAS is planning a Venus orbiter mission, Planet-C, whose launch is scheduled for FY2010. Planet-C will elucidate the mechanism of the superrotation, the circulation of the Venusian atmosphere at a speed 60 times higher than the rotation speed of the planet; it will investigate and identify the movement of the atmosphere by infrared and other observations, and will investigate the distribution of lightning discharges. BepiColombo, a large Mercury exploration mission, is planned to be implemented as a joint project between ESA and JAXA, which will send two explorers to Mercury. The Mercury Magnetospheric Orbiter (MMO) developed by JAXA will be carried together with the Mercury Planetary Orbiter (MPO) of ESA. MMO mainly conducts magnetic field and magnetosphere observations, while MPO mainly conducts surface and internal observations. For the time being, ISAS/JAXA will proceed with near-future plans focusing on the terrestrial planets, including the Moon and asteroids, keeping Jupiter exploration in view as a future plan. To study the origin and evolution of the solar system, asteroid exploration missions have been studied, because asteroids preserve information from the age of the solar system's birth.
Utilizing the engineering technologies obtained from the HAYABUSA explorer, an asteroid multi-sample return mission is under study as the next small-body mission; it includes global observation of the surface layer, investigation of the internal structure, and collection of surface material by remote-sensing instruments and surface exploration robots. As large-scale lunar exploration missions, the SELENE series is under study: internal observation, geological observation by landers and rovers, a lunar observatory, sample return missions, etc. For Mars, penetrator and lander-rover missions are under study. To investigate the internal structure of Mars, the lander will land at a place where prior explorations suggest a groundwater layer is highly likely to exist, and rovers will carry an artificial earthquake source and a broadband seismometer. A Venus balloon mission is also under study. By floating a balloon in the deepest region of the Venusian atmosphere, it is possible to conduct meteorological observations, ground surface observations, measurement of the atmospheric composition and other observations.
Fig. 1. Planetary exploration in Japan

3. ROVER EXPLORATION STRATEGY 3.1 Requirements Lunar or planetary exploration rovers face the following restrictions: (1) limited power consumption, (2) limited electronics able to work in the vacuum and space environment, (3) limited weight because of launch cost. Rovers are expected to explore as widely as possible in the limited exploration period, so efficient exploration is required for a lunar rover. Most of the surface of the moon is flat ground; if a rover knows that there are no obstacles in front of it, it does not have to make detailed maps from an LRF (Laser Range Finder) or stereo vision. Exploration rovers therefore need the ability to process sensing data in as short a time as possible and to navigate themselves efficiently.

3.2 Functions Let us first consider how people go to their destination, in order to investigate the navigation abilities rovers need. People usually recognize their environment with their vision. After recognition, (a) if there is an obstacle in front of them, they consider a path on which they can avoid the obstacle and proceed with attention on it; (b) if there are no obstacles in front of them, they proceed as fast as they like. People thus change the information processed in their brain according to their environment. Conventional rovers do not have such an ability. Hence it is concluded that rovers need the following abilities: (1) the ability to grasp far and near environment information at the same time, (2) the ability to change the environment recognition method depending on the situation.

3.3 Navigation Strategy A visual sensor is used for navigation and scientific observation. The rover can get global information on the environment from the vision system. The image obtained from a single camera contains a lot of environment information, from the area near the rover to the skyline. However, it is difficult to navigate a rover to the destination with only a gray-level image, because information about farther areas is more ambiguous. Accordingly, this paper proposes a method to assess the situation and plan the behavior based on gray-level images. Firstly, the rover recognizes its environment widely from a gray-level image and then plans a rough route path and the sensing methods to use along the way. In safe areas, the rover moves based on the planning results, sensing and confirming its environment locally with gray-level images. When the rover is close to a dangerous area, it changes its navigation strategy and avoids the area exactly, with special attention, using stereo vision. The rover also updates its environment data, including ambiguous information, by local sensing to obtain certain information. Figure 2 shows the flowchart of the proposed strategy.
Fig. 2. Flowchart of navigation strategy 4. ENVIRONMENT RECOGNITION 4.1 Dangerous Area When gray-level images of the surface of the moon are examined, the following recognition holds: a white area is a "safe area", while a black-and-white area whose shades change a lot is a "rough area". Hence the safe areas for the rover are the whiter areas and the areas whose shades do not change much. In the gray-level image, the shades indicate the degree of whiteness and the variances indicate the changes of the shades.
For the extraction of dangerous areas, the gray-level image is divided into a mesh; each element of the mesh is called a "window". Dangerous areas are extracted from the average of the shades and the variance calculated in each window created on the image. Figure 3 shows the coordinate system, defined on the X [pixel] × Y [pixel] obtained image.
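As a concrete sketch, the per-window shade average and variance can be computed as below. The explicit (y0, y1, x0, x1) window bounds are a hypothetical simplification: the paper derives the actual window widths from the camera geometry in Section 4.2.

```python
from statistics import mean, pvariance

def window_stats(image, windows):
    """Shade average and variance per mesh window.

    image   : list of rows, each a list of gray levels (Y rows x X cols)
    windows : list of (y0, y1, x0, x1) pixel bounds, one per window
              (hypothetical representation, for illustration only)
    Returns a list of (average, variance) pairs, one per window.
    """
    stats = []
    for y0, y1, x0, x1 in windows:
        pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
        stats.append((mean(pixels), pvariance(pixels)))
    return stats

# Example: a 4x4 image split into four 2x2 windows.  The lower-right
# window mixes bright and dark pixels, so its variance is large.
img = [[10, 10, 200, 200],
       [10, 10, 200, 200],
       [10, 10,  10, 200],
       [10, 10,  10, 200]]
wins = [(0, 2, 0, 2), (0, 2, 2, 4), (2, 4, 0, 2), (2, 4, 2, 4)]
print(window_stats(img, wins))
```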
The presuppositions for the lunar exploration rover are as follows: the lunar surface is flat ground, and the lunar rover has an inclinometer.
The inclinometer is a sensor that indicates the inclination of the rover's body. When the rover runs over a rock or similar obstacle, the error that appears in the image can be corrected using the inclinometer data.
Fig. 3. Image coordinate system

Fig. 4. Window on image
The degree of danger of each pixel is calculated based on the image. The extraction sequence (Ejiri, et al., 2001) is described below.
4.2 Window Width Things near the rover appear larger and things far from the rover appear smaller, so the obtained image is not meshed evenly: the larger y is, the finer the image is meshed. The width of each window, w_n, is determined as shown in Figure 4, where α [deg] is the mounting angle of the equipped camera, β [deg] is the camera's angle of view, H [m] is the camera height, T [m] is the diameter of the rover's wheels, R [m] is the length of the rover's diagonal, and d [m] is the base length used to decide the windows (d = R when calculating the shade averages, d = T when calculating the variances). With these parameters, the tangent of each θ_k is given by

\tan\theta_k = \frac{(k+1)d}{2H + \{(k+1)d + 2H\tan(\alpha - \frac{\beta}{2})\}\tan(\alpha - \frac{\beta}{2})}   (1)

and the width of each window, w_n, is decided from

w_n = \frac{Y(1+\tan^2\frac{\beta}{2})\tan\theta_k}{2\tan\frac{\beta}{2}\,(1+\tan\frac{\beta}{2}\tan\theta_k)}   (k = 0, 1)

w_n = \frac{Y\{\tan(\frac{\beta}{2}-\theta_{k-2}) - \tan(\frac{\beta}{2}-\theta_k)\}}{2\tan\frac{\beta}{2}}   (k ≥ 2)   (2)

The extraction then proceeds as follows:

1) The average of the shades or the variance is calculated for each window. Each pixel (x, y) accumulates the sum of the window averages or variances and the number of contributions.

2) From the accumulated averages at each pixel (x, y), the gray-level value G[x,y] (x, y = 0, 1, 2, …) is decided; from the accumulated variances, the variance value V[x,y] is decided likewise.

3) The threshold of the variance, B, is determined empirically. The degree of danger D[x,y] of each pixel is calculated from

D[x,y] = 1 - f(V[x,y], B) \cdot \frac{G[x,y]}{N}   (3)

4) N is the maximum brightness, and f(V[x,y], B) is calculated from

f(V[x,y], B) = \begin{cases} B / V[x,y] & (y > v \text{ and } V[x,y] > B) \\ 1 & (\text{otherwise}) \end{cases}   (4)

where v is a threshold on the image row y.
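A minimal numerical sketch of the window geometry of Eqs. (1)-(2), as read here from the text; the parameter values in the comment are illustrative, not from the paper:

```python
import math

def window_widths(H, d, alpha, beta, Y, k_max):
    """Window widths w_n following Eqs. (1)-(2) as read here.

    H     : camera height [m]
    d     : base length [m] (d = R for shade averages, d = T for variances)
    alpha : camera mounting angle [deg]
    beta  : camera angle of view [deg]
    Y     : image height [pixel]
    """
    b2 = math.radians(beta) / 2.0
    t = math.tan(math.radians(alpha) - b2)

    def tan_theta(k):
        # Eq. (1)
        return (k + 1) * d / (2 * H + ((k + 1) * d + 2 * H * t) * t)

    widths = []
    for k in range(k_max + 1):
        tk = tan_theta(k)
        if k <= 1:
            # Eq. (2), case k = 0, 1
            w = (Y * (1 + math.tan(b2) ** 2) * tk
                 / (2 * math.tan(b2) * (1 + math.tan(b2) * tk)))
        else:
            # Eq. (2), case k >= 2
            w = (Y * (math.tan(b2 - math.atan(tan_theta(k - 2)))
                      - math.tan(b2 - math.atan(tk)))
                 / (2 * math.tan(b2)))
        widths.append(w)
    return widths

# Illustrative parameters only (not from the paper):
# H = 1.0 m, d = 0.5 m, alpha = 30 deg, beta = 40 deg, Y = 240 pixels.
ws = window_widths(1.0, 0.5, 30.0, 40.0, 240, 5)
print(ws)
```

For k ≥ 2 the widths shrink as k grows, reflecting that farther ground strips project onto fewer image rows.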
The degree of danger satisfies 0 ≤ D[x,y] ≤ 1, and the larger D[x,y] is, the harder the area is for the rover to traverse.
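The per-pixel danger computation of Eqs. (3)-(4) can be sketched as follows; the threshold values in the example calls are illustrative, not from the paper:

```python
def danger_degree(G, V, B, N, v, y):
    """Degree of danger D[x,y] following Eqs. (3)-(4) as read here.

    G : gray-level (shade average) value at the pixel
    V : variance value at the pixel
    B : empirical variance threshold
    N : maximum brightness (e.g. 255)
    v : threshold on the image row
    y : image row of the pixel
    """
    # Eq. (4): trust brightness alone unless, for rows y beyond the
    # threshold v, the variance exceeds B; then f shrinks as B/V.
    f = B / V if (y > v and V > B) else 1.0
    # Eq. (3): bright, low-variance pixels get a low degree of danger.
    return 1.0 - f * G / N

# A bright, smooth pixel is safe; a bright but high-variance pixel
# (rough terrain) is penalized through f.
print(danger_degree(G=255, V=0, B=100, N=255, v=10, y=5))
print(danger_degree(G=255, V=400, B=100, N=255, v=10, y=20))
```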
5. ROUTE PLANNING 5.1 Pre-assumptions
The ground operator sends the rover commands giving the direction and distance to the destination. The rover carries a sun sensor, an inclinometer and a clock to know the exact time, so it can recognize the direction of the given goal and turn toward the destination. The area that cannot be seen in the camera image is assumed to be safe in advance. (X/2, Y) is the direction to the destination on the image, because the rover first turns toward the destination. From the lunar exploration of the Apollo missions, it is known that most areas on the surface of the moon are flat and that some rocks and craters lie on the surface. Therefore, the rover needs the ability to judge whether it can go forward or not.

5.2 Route Path Planning To judge whether the rover can move in the direction of the destination, it is checked whether any dangerous areas exist on the center line of the obtained image (x = X/2). If the degree of danger D[x,y] on the line is larger than its threshold, the area is judged dangerous for the rover. If there are no dangerous areas in front of the rover, the path is planned so that the rover goes straight. If there are dangerous areas in front of the rover, a rough path must be planned to avoid them; whether the rover can pass each area is judged in the same way as described above. Figure 5 shows the start area and the destination area. The algorithm for route path planning is as follows:

1) Search how wide the dangerous area in front of the rover is, and memorize the maximum offset from the center of the image (x = X/2) needed to avoid it as a course-change area. Go to 2).

2) Consider the straight path from the course-change area to the destination, or to the next course-change area above it, and judge whether any dangerous area exists on that path. If a dangerous area exists, go to 3); if none exists, go to 4).

3) Search further course-change areas in the same way as 1), then go to 2).

4) When no more dangerous areas exist on the path, consider the straight path from the start point (X/2, 0) through the course-change areas. If any dangerous area exists, go to 3); if none exists, sensing planning is started.
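A minimal sketch of the center-line check and course-change search described above, assuming a precomputed danger map D and simplifying the paper's iterative loop to a single course change:

```python
def blocked_rows(D, column, threshold):
    """Rows on a given image column whose degree of danger exceeds
    the threshold.  D is a Y x X list-of-lists of danger degrees."""
    return [y for y, row in enumerate(D) if row[column] > threshold]

def plan_rough_path(D, threshold):
    """Return a list of columns to steer through: just the center line
    x = X/2 if it is clear, otherwise the center line followed by the
    nearest clear column (a one-step simplification of the paper's
    course-change loop).  Empty list means no clear rough path."""
    X = len(D[0])
    center = X // 2
    if not blocked_rows(D, center, threshold):
        return [center]                  # go straight to the destination
    # search outward from the center for the nearest clear column
    for offset in range(1, center + 1):
        for col in (center - offset, center + offset):
            if 0 <= col < X and not blocked_rows(D, col, threshold):
                return [center, col]     # change course, then go straight
    return []

# Example: an obstacle sits on the center line, one clear column left.
D = [[0.1, 0.9, 0.1],
     [0.1, 0.9, 0.1]]
print(plan_rough_path(D, 0.5))
```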
Fig. 5. Start area and destination area

6. SENSING STRATEGY 6.1 Sensing Planning In the environment recognition, the image is divided into three parts: the long-distance area, the middle-distance area and the short-distance area. Moreover, a five-phase Safe Rate is defined on the planned route with the determined thresholds: safe, a little safe, a little attention, attention, special attention. A sensing strategy is planned from the Safe Rate on the planned route. The list of sensing strategies is as follows:
A: move without any sensing
B: obtain a gray-level image locally while moving and confirm the safety in front
C: sense the distance between the rover and dangerous areas with an LRF or stereo vision while moving
D: make a detailed map with the LRF or stereo vision and avoid dangerous areas exactly
E: perform the environment recognition, plan the behavior based on the obtained image, and follow the planned route
F: turn to the destination, perform the environment recognition, and plan the behavior based on the obtained image
The reason a sensing strategy is planned separately for the three parts of the image is that the ambiguity of the information in the image varies with how the window size is determined: the environment information carried by one pixel, and its ambiguity, differ between the long-distance area and the short-distance area.
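One way to encode the selection of a sensing action per route segment is a lookup from the Safe Rate. The assignment of the five Safe Rate phases to actions A-E below is an assumption for illustration; the paper lists the actions and the phases but does not state the exact mapping (action F, re-orienting toward the destination, applies when a whole new plan is started):

```python
# Assumed (illustrative) mapping from the five-phase Safe Rate to the
# sensing actions A-E described in the text; not taken from the paper.
SENSING_PLAN = {
    "safe": "A",                 # move without any sensing
    "a little safe": "B",        # local gray-level check while moving
    "a little attention": "C",   # distance sensing with LRF / stereo
    "attention": "D",            # detailed map, avoid exactly
    "special attention": "E",    # re-recognize and re-plan en route
}

def sensing_actions(route_safe_rates):
    """Sensing action for each segment of the planned route."""
    return [SENSING_PLAN[rate] for rate in route_safe_rates]

print(sensing_actions(["safe", "attention", "a little safe"]))
```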
7. SIMULATION STUDY To investigate the validity of the proposed behavior planning method, computer simulations were performed using the Mars image obtained in the MPF mission [mpfwww]. Figure 6 shows an example of the simulation results. Figure 6(a) shows the Mars image. Figures 6(b) and 6(c) show the results of the recognition and of planning the rough route, respectively. Figure 6(d) shows the planned rough route together with the planned sensing strategy. As a result, a reasonable rough route and sensing strategy were planned.
Fig. 6. Simulation results [NASA/JPL]

8. CONCLUSIONS In this paper, vision-based obstacle detection from gray-level images and route planning for planetary exploration rovers have been proposed. The proposed algorithm could detect obstacles based on a gray-level image, and a rough route was produced by judging whether each area was safe or dangerous. The shade average and the variance at each pixel were calculated from an obtained image, and the environment recognition was performed. According to the recognition, the rough path and the sensing method were planned. The rover could sense locally based on the behavior planning and go to the
destination. As future work, hardware experiments are ongoing.

REFERENCES
R. Chatila, R. Alami, S. Lacroix, J. Perret, C. Proust, "Planet Exploration by Robots: From Mission Planning to Autonomous Navigation," Proc. of ICAR'93, pp. 91-96, 1993.
R. Volpe, S. Peters, "Rover Technology Development and Infusion for the 2009 Mars Science Laboratory Mission," Proc. of the 7th Int. Symposium on Artificial Intelligence, Robotics and Automation in Space, 2003.
E. Bornschlegl, G. Hirzinger, M. Maurette, R. Mugnuolo, G. Visentin, "Space Robotics in Europe, a Compendium," Proc. of the 7th Int. Symposium on Artificial Intelligence, Robotics and Automation in Space, 2003.
C. R. Weisbin, D. Lavery, G. Rodriguez, "Robotics Technology for Planetary Missions Into the 21st Century," Proc. of i-SAIRAS'97, pp. 5-10, 1997.
http://mpfwww.jpl.nasa.gov/.
http://mars.jpl.nasa.gov/mer/mission/.
L. Matthies, E. Gat, R. Harrison, B. Wilcox, R. Volpe, T. Litwin, "Mars Microrover Navigation: Performance Evaluation and Enhancement," Proc. of IROS'95, pp. 433-440, 1995.
D. P. Miller, M. G. Slack, R. J. Firby, "Path Planning and Execution Monitoring for a Planetary Rover," Proc. of IEEE Int. Conf. on Robotics and Automation, pp. 20-25, 1990.
R. Simmons, E. Krotkov, "Autonomous Planetary Exploration: From Ambler to APEX," Proc. of ICAR'93, pp. 429-434, 1993.
E. Gat, R. Desai, R. Ivlev, et al., "Behavior Control for Robotic Exploration of Planetary Surfaces," IEEE Trans. on Robotics and Automation, Vol. 10, No. 4, pp. 490-503, 1994.
T. T. Lee, C. L. Shih, "Robust Path Planning in Unknown Environments via Local Distance Function," Proc. of i-SAIRAS'90, pp. 251-254, 1990.
R. Simmons, E. Krotkov, L. Chrisman, F. Cozman, R. Goodwin, M. Hebert, L. Katragadda, S. Koenig, G. Krishnaswamy, Y. Shinoda, W. Whittaker, "Experience with Rover Navigation for Lunar-Like Terrains," Proc. of IROS'95, pp. 441-446, 1995.
R. Morlanss, A. Liegeois, "A D.T.M.-based Path Planning Method for Planetary Rovers," Missions, Technologies et Conception des Véhicules Mobiles Planétaires, pp. 499-507, 1992.
T. Kubota, Y. Kuroda, Y. Kunii, I. Nakatani, "Micro Planetary Rover 'Micro5'," Proc. of the 5th Int. Symposium on Artificial Intelligence, Robotics and Automation in Space, pp. 373-378, 1999.
R. Ejiri, T. Kubota, I. Nakatani, "Vision Based Behavior Planning for Planetary Exploration Rover," Proc. of the 10th Int. Conf. on Advanced Robotics, pp. 535-540, 2001.