A sub goal seeking approach for reactive navigation in complex unknown environments




Robotics and Autonomous Systems 57 (2009) 877–888



A sub goal seeking approach for reactive navigation in complex unknown environments Chen Ye ∗ , Phil Webb School of Mechanical, Materials and Manufacturing Engineering, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom

Article info

Article history: Received 10 April 2008; Received in revised form 17 June 2009; Accepted 22 June 2009; Available online 7 July 2009.

Keywords: Mobile robot; Navigation; Unknown environment; Reactive navigation; Local minimum.

Abstract: Reactive-based approaches are widely used in autonomous navigation. However, in complex unknown environments, pure reactive navigation still poses a few challenges, since it can easily be trapped by a local minimum and may produce extra manoeuvres. This paper presents the design of a reactive-based approach for navigation in complex and unknown environments called sub goal seeking, in which depth point maps of the environment are analysed to extract free spaces around the robot. These spaces are then evaluated and the one that is most likely to lead to the final goal is chosen as a sub goal. The robot then drives towards a series of such sub goals, instead of the final goal, until the final goal is visible. By analysing the environmental structure, dead-ends within the robot's sensory range can be detected, reducing both the chance of being trapped and the number of unnecessary manoeuvres. This paper also evaluates the performance of the sub goal seeking approach against three criteria, goal achievable ability, safety and manoeuvring, through extensive simulation and real mobile robot experiments. © 2009 Elsevier B.V. All rights reserved.

1. Introduction

The main aim of any research on reactive navigation systems is to guide a mobile robot moving freely in unknown environments. Without prior environmental knowledge, the robot has to deal with unforeseeable circumstances using a reactive mechanism; sensor noise, imprecise control and inaccurate localisation make the mission harder still. Ideally, a reactive system should be able to navigate a robot safely in the presence of any number of uncertainties while producing high speed, fluid motion. Over the last few decades, efforts to develop such a system have resulted in a number of successful approaches. The widely used artificial potential field approach [1] provides an elegant solution to the navigation problem: obstacles exert repulsive forces on the robot while the target exerts an attractive force, the strength of each force depending on the distance and orientation of the corresponding obstacle. The vector sum of these forces is used to drive the robot, avoiding obstacles while still keeping track of the target. In simple environments, the artificial potential field approach has proved successful. However, some inherent limitations have been discovered, such as the inability to pass between closely spaced obstacles, oscillation in



Corresponding author. Tel.: +44 (0) 115 951 4034; fax: +44 (0) 115 951 3800. E-mail address: [email protected] (C. Ye).

0921-8890/$ – see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.robot.2009.06.009

narrow corridors and local minima [2]. Although solutions to these limitations have been proposed [2–6], the approach's ability to control navigation in complex environments is still limited. The vector field histogram (VFH) is another popular approach, developed by Borenstein in 1991 to overcome the limitations of the artificial potential field [7]. It works by manipulating a histogram created from a local occupancy grid map of the environment around the robot. Within the histogram, the openings large enough for the vehicle to pass through are identified, a cost function is applied to every candidate opening, and the opening with the lowest cost is chosen; the travel direction is then generated from the chosen opening. The VFH approach successfully overcomes some of the limitations of the potential field. Its enhanced version, VFH+ [8], also takes into account a simplified model of the moving robot's possible trajectories based on its kinematic limitations and thus reduces the risk of collision. The dynamic window (DW) approach is a pure collision avoidance method which can generate a smooth trajectory by considering the vehicle dynamics [9,10]. Instead of choosing a travel direction, the dynamic window selects motion commands in velocity space. The robot's trajectory is assumed to consist of several curves, each uniquely determined by a velocity vector. Obstacles are mapped onto a grid map and impose restrictions on the rotational and translational velocities; only those velocity sets that allow the robot to stop before hitting an obstacle are considered. These velocities are called admissible velocities.
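The artificial potential field mechanism summarised above can be sketched in a few lines. The following is an illustrative transcription only, not code from any of the cited papers; the gains `k_att` and `k_rep` and the influence distance `d0` are assumed values chosen for the example.

```python
import math

def apf_heading(robot, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
    """Sum an attractive force toward the goal with repulsive forces from
    obstacles inside the influence distance d0; return the direction of
    the resulting force vector (radians)."""
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            # Repulsion grows sharply as the obstacle gets closer.
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx
            fy += mag * dy
    return math.atan2(fy, fx)
```

The local minimum problem discussed below arises exactly here: when the repulsive and attractive terms cancel, the resulting vector no longer points anywhere useful.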



A dynamic window restricts the admissible velocities to those that can be reached within the next time interval according to the robot's acceleration capability. The dynamic window approach greatly increases the robot's obstacle avoidance performance, but at the cost of expensive computation. Although these approaches have achieved great success in a number of applications, they are still challenged by complex and cluttered environments in which concave shaped structures can trap the robot and prevent it from achieving its goal. Some solutions have been proposed recently for this problem, such as the virtual obstacle approach [11] and the virtual target approach [12]. However, these approaches rely on an empirical evaluation of trap situations before invoking a suitable strategy and may still get trapped in unforeseen situations. The complete solution to the local minimum problem is global path planning, but a high-resolution environmental map is hard to build; a navigation system therefore needs the capability to avoid, or recover from, trap situations on a small scale, increasing the possibility of accomplishing a navigational task within unknown environments. For practical reasons, these approaches also experience difficulties when deployed in different environments: most have a few parameters that dominate system performance, and tuning these parameters is time consuming. Parameters finely tuned for one environment may fail, or become inefficient, when deployed in another [13]. An adaptive capability would make a system easy to deploy in varying environments. This paper introduces a new sub goal seeking approach by which the chance of being trapped by local minima is reduced.
It is also able to drive a robot in complex environments without oscillation, by analysing the depth map of the environment, and to adapt to various environments, from cluttered to open, without any parameter tuning. Its design is presented in Section 2; Section 3 describes the implementation and experimental results, and Section 4 provides conclusions and discussion.

2. Sub goal seeking reactive navigation

The sub goal seeking navigation system [14] described in this paper was implemented using Brooks' behaviour architecture [15]. It consists of three behaviours: move to goal, sub goal seeking and collision avoidance. The architecture is illustrated in Fig. 1. A laser range finder was used to provide a depth map of the robot's environment, as described further in Section 3. In the sub goal seeking approach, instead of heading toward the final goal, the robot heads towards a series of visible sub goals generated by the sub goal seeking behaviour. This behaviour analyses the depth point maps of the environment from the laser range sensor, identifies the gaps (free space) around the robot and evaluates them to select, as the sub goal, the gap direction most likely to lead to the final goal. By iterating this procedure, a series of sub goals is generated which leads to the final goal. The result of the move to goal behaviour is used to evaluate the cost of sub goals. The collision avoidance behaviour protects the robot from collision when an object is within a predefined safe zone.

2.1. Sub goal seeking behaviour

The sub goal seeking behaviour dynamically generates a temporary sub goal that is visible to the robot at every instant, and was implemented using the three steps shown in Fig. 2.
The process iterates until the robot reaches the final goal:

Step 1: Identify all gaps (free space) around the robot at its current position and evaluate every gap to check whether it is passable by the robot.
Step 2: Select one of the gaps as a sub goal based on a cost function.
Step 3: Calculate a safe turning angle and move toward the sub goal.
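The iterated three-step procedure can be sketched as a control loop. This is a minimal skeleton, not the authors' implementation; the four callables are stand-ins for the behaviours described in the text, and `max_steps` is an assumed safety bound.

```python
def sub_goal_seeking_loop(identify_gaps, select_sub_goal, step_toward,
                          reached_goal, max_steps=1000):
    """Iterate Steps 1-3 until the final goal is reached (or the step
    budget runs out). Returns the number of cycles executed."""
    steps = 0
    while not reached_goal() and steps < max_steps:
        gaps = identify_gaps()            # Step 1: passable gaps at current pose
        sub_goal = select_sub_goal(gaps)  # Step 2: lowest-cost gap becomes sub goal
        step_toward(sub_goal)             # Step 3: safe turn, then move
        steps += 1
    return steps
```

Each cycle re-senses the environment, so the sub goal is always derived from what the robot can currently see.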

Fig. 1. Architecture of sub goal seeking approach.

2.1.1. Step 1—Identify gap
The first step is to identify all the gaps around the robot. The basic process is to compare the width of each gap with the width of the robot; gaps narrower than the robot are discarded. The sensory information is available as depth point maps. The minimum beam number nMin required for a valid gap is given by Eq. (1):

nMin = (1/θ) × arccos((2DS² − WR²) / (2DS²))    (1)

where θ is the sensor angular resolution (1° for the laser scanner), D° is the depth point list of distances between obstacles and the robot as perceived by the sensors, DS is the specified detecting range and WR is the width of the robot (Fig. 3). Given a set of continuous distance readings {(D°i . . . D°i+nG) | D°i ∈ D°} with each D°i > DS, if the continuous beam number nG > nMin, then a gap is identified. Fig. 4 illustrates a sample with Gap II and Gap III identified.

Although the gap width has been compared with the robot's width, an identified gap may still not be passable by the robot. In Fig. 4, Gap III's width meets the requirement for a valid gap, yet the gap is not passable. It is therefore necessary to check whether each identified gap allows the robot to pass. A passable gap is evaluated by checking two safe angles between the gap edges and the robot margins. In Fig. 5, θLS and θRS are the two safe angles that keep the robot's edges clear of collision: θLS is the angle that keeps the robot's left margin clear of collision if the robot performs a θLS turn, and θRS is the angle that keeps the robot's right margin clear of collision if the robot performs a θRS turn. The values of θLS and θRS are obtained as follows. Given the ith obstacle point coordinates (xoi, yoi) and the robot boundary coordinates (xL, yL), (xR, yR), the angle θi between the obstacle point and a robot boundary is calculated by

θi = tan⁻¹((xoi − x(L,R)) / (yoi − y(L,R))).    (2)



Fig. 2. Steps of sub goal seeking behavior.

Fig. 3. The minimum beam number nMin for a valid gap.

Fig. 5. Safe angles.

Fig. 4. Sample of the identified gaps.

For the obstacle points on the left edge of the selected gap, there is an angle θLi with respect to each point on the robot's left boundary, and likewise θRi for the right boundary. Let θL be the list of angles between the left boundary and the gap edge, and θR the list of angles between the right boundary and the gap edge. Angles on the right are taken as positive [0°, 90°] and those on the left as negative [−90°, 0°]. The two safe angles for the robot boundaries are then obtained using:

θLS = max(θL)    (3)
θRS = min(θR)    (4)

Whether an identified gap is passable is determined from these two safe angle values using the rule: if θRS − θLS > 0, then the gap is passable.

In the case of the identified but not passable gap in Fig. 6, the safe angles satisfy θRS < 0 and θLS > 0, which does not meet the criterion θRS − θLS > 0, so the gap is not passable. All identified gaps are evaluated using the above process, and impassable gaps are removed from the list. At this stage, the specified detecting range DS plays a very important role, since it determines the number of identified gaps.

Fig. 6. A sample of evaluating a passable gap.

With different detecting ranges, the number of identified gaps may vary. In Fig. 7, only one gap is identified within the detecting range DS4, while three gaps are identified within the detecting range DS2. In narrow or cluttered environments, an improper detecting range may fail to identify even one gap, which will prevent the robot passing through the area. To compensate for this, a dynamic range varying mechanism was introduced to change the detecting range automatically. A maximum detecting range (MDR) is initialised, and the detecting range DS is reduced by a fixed distance every cycle until either a front gap is identified or DS reaches the minimum range. A front gap is a gap which contains the robot's heading line within its area (Fig. 7). The rationale is that if the front of the robot is clear, a passable way exists; the robot is then not in a cluttered environment and can simply pass through with very little manoeuvring. The selection of a suitable DS value depends on the sensor's range and the system's computational ability: the dynamic range varying mechanism is an iterative process, and a large DS value requires more processing time. The next stage in the algorithm is to select one passable gap as the sub goal.
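Step 1 can be sketched as follows. This is an illustrative transcription of Eq. (1) and the consecutive-beam rule, not the authors' code; the sample values in the comments (1° resolution, 42 cm robot, 240 cm range) come from the text, and shrinking DS from the MDR as described above would simply wrap `find_gaps` in a loop.

```python
import math

def min_beam_count(d_s, w_r, theta_deg=1.0):
    """Eq. (1): minimum number of consecutive beams, at angular
    resolution theta_deg, subtending a robot-width chord at range d_s."""
    return (1.0 / math.radians(theta_deg)) * math.acos(
        (2 * d_s**2 - w_r**2) / (2 * d_s**2))

def find_gaps(ranges, d_s, w_r, theta_deg=1.0):
    """Return (start_beam, end_beam) index pairs for runs of consecutive
    readings beyond d_s that are longer than nMin."""
    n_min = min_beam_count(d_s, w_r, theta_deg)
    gaps, run_start = [], None
    for i, d in enumerate(ranges):
        if d > d_s:
            if run_start is None:
                run_start = i           # a free-space run begins
        else:
            if run_start is not None and i - run_start > n_min:
                gaps.append((run_start, i - 1))
            run_start = None
    if run_start is not None and len(ranges) - run_start > n_min:
        gaps.append((run_start, len(ranges) - 1))
    return gaps
```

With DS = 2.4 m and WR = 0.42 m, Eq. (1) gives nMin of roughly ten beams, so a run of twenty clear beams qualifies as a gap while a run of five does not.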



Fig. 7. Identified gaps using different detection range DS .

In situation a, the gap is on the right side of the robot: the turning angle θS = θLS.
In situation b, the gap is on the left side of the robot: the turning angle θS = θRS.
In situation c, the gap is in front of the robot with θLS < 0 and θRS > 0: no turning angle is needed and the robot can simply move forward without collision.
In situation d, no gap is found: DS is reduced and another sub goal seeking cycle is performed. Once DS is smaller than the minimum range and still no gap is identified, then θS = Random(−180°, 180°).
In situation e, the gap is in front of the robot with θLS < 0 and θRS < 0: θS = Max(θLS, θRS).

Fig. 8. Gap selection.

2.1.2. Step 2—Select gap
Within the list of evaluated gaps, the one with the lowest cost is selected as the sub goal according to the cost function (Eq. (5)):

Gi = α1 × directionSG/(2π) + α2 × Abs(β − directionSG)/(2π)    (5)

where directionSG is the alignment of the sub goal (gap) direction with the current robot direction (Fig. 8) and β is the heading error angle obtained from the move to goal behaviour. The α values indicate the relative weight given to each term in the cost function. The objective of this cost function is to select a gap that has the minimum direction difference from the final goal while also having the minimum direction difference from the current robot heading. A lower α1 encourages the robot to head toward the gap with less heading error to the final goal, while a lower α2 encourages the robot to stay on its current heading. In this research, α1 and α2 were set to 0.7 and 0.3 respectively, values obtained after extensive simulation tests using a trial and error method. One situation that may occur is that two gaps have the same cost value; if so, one of them is picked at random.
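Eq. (5) and the random tie-break can be transcribed directly. This is an illustrative sketch, not the authors' code; taking the absolute value of `direction_sg` in the first term is an assumption, made so that left and right deviations from the current heading are penalised equally.

```python
import math
import random

def gap_cost(direction_sg, beta, a1=0.7, a2=0.3):
    """Eq. (5): direction_sg is the gap direction relative to the current
    heading and beta the heading error to the final goal (radians)."""
    return (a1 * abs(direction_sg) / (2 * math.pi)
            + a2 * abs(beta - direction_sg) / (2 * math.pi))

def select_gap(gap_dirs, beta):
    """Step 2: the lowest-cost gap wins; cost ties are broken at random."""
    best = min(gap_cost(d, beta) for d in gap_dirs)
    ties = [d for d in gap_dirs if gap_cost(d, beta) - best < 1e-12]
    return random.choice(ties)
```

With β = 0 (goal dead ahead), a gap at the current heading has zero cost and is always preferred over any turned gap.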

2.1.3. Step 3—Generate safe turning angle
Once the next robot heading direction is decided, a turning angle θS is produced to drive the robot to face that direction. θS is obtained from the safe angles θLS and θRS, calculated at the gap identification stage, according to the specific situation. Fig. 9 illustrates all the situations that may occur while the robot is operating.

In situation f, the gap is in front of the robot with θLS > 0 and θRS > 0: θS = Min(θLS, θRS).

However, due to sensor noise, the calculation of the safe angles for the robot's sides may not be precise. An extension of the robot width, proportional to the robot's translational speed, is therefore added to compensate; this keeps the robot further away from obstacles when the translational speed is higher. The equivalent robot width W′R is obtained from

W′R = WR + tv × ∂    (6)

where WR is the actual robot width, tv is the translational speed of the robot, and ∂ is an experimentally obtained parameter (0.4 in this research) whose value ensures that the robot does not miss most passable paths whilst still keeping a safe distance from obstacles.

2.2. Collision avoidance behaviour
The collision avoidance behaviour is designed to cover the side points which the sub goal seeking behaviour may ignore, because the latter is only interested in obstacles that have the potential to stop the robot. The behaviour is triggered when the closest obstacle is within the robot's hazard zone, a semicircle of 33 cm radius as shown in Fig. 10. The semicircle is divided into three equal zones: left, right and front. The output of this collision avoidance behaviour, the escaping turning angle θOA, is set using the following rules:
If the obstacle is within the left zone, then θOA = 5°.
If the obstacle is within the right zone, then θOA = −5°.
If obstacles are within both the left and right zones, then θOA = 0°.
If the obstacle is within the front zone, then θOA = 360°.
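The front-gap cases (situations c, e and f) and the Eq. (6) width extension can be sketched as follows. This is an illustrative transcription under the text's sign convention (left angles negative, right angles positive), not the authors' code.

```python
def safe_turn(theta_ls, theta_rs):
    """Turning angle for a gap ahead (situations c, e and f)."""
    if theta_ls < 0 < theta_rs:         # c: both margins already clear
        return 0.0
    if theta_ls < 0 and theta_rs < 0:   # e: turn left by the smaller amount
        return max(theta_ls, theta_rs)
    return min(theta_ls, theta_rs)      # f: turn right by the smaller amount

def equivalent_width(w_r, tv, k=0.4):
    """Eq. (6): widen the robot model with translational speed tv so that
    noisy safe-angle estimates still leave clearance (k = 0.4 per text)."""
    return w_r + tv * k
```

For example, with the 42 cm robot moving at 0.5 m/s, the gap test uses an equivalent width of 62 cm.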



Fig. 9. Different situations when generating a safe turning angle.

2.3. Move to goal behaviour
The move to goal behaviour is designed to drive the robot from its current position to the desired goal position. It rotates the robot toward the goal position by nullifying the heading error β, the angle by which the robot needs to turn to face the goal directly (Fig. 11). By continuously comparing the goal position (Xg, Yg) with the robot position and heading direction (Xr, Yr, α), the heading error β can be calculated by Eq. (7).

Fig. 10. Hazardous zone of the robot.

The first three rules constrain the turning of the robot when obstacles are very close to its left and right sides. The last rule stops the robot moving forward and drives it backwards.
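The four hazard-zone rules can be written as a single lookup. This is an illustrative sketch, not the authors' code; giving the front rule priority when several zones are occupied at once is an assumption, since the text does not order the rules for that case.

```python
def escape_angle(zones):
    """Collision avoidance output theta_OA (degrees). `zones` is the set
    of occupied thirds of the 33 cm semicircle: 'left', 'right', 'front'."""
    if 'front' in zones:
        return 360.0                 # stop and reverse out
    if {'left', 'right'} <= zones:
        return 0.0                   # squeezed on both sides: hold course
    if 'left' in zones:
        return 5.0
    if 'right' in zones:
        return -5.0
    return 0.0                       # hazard zone clear: behaviour idle
```

The special value 360° acts as a flag for the backwards manoeuvre rather than a literal turn.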

β = arctan((Xg − Xr) / (Yg − Yr)) − α.    (7)

The range of β is [−180°, 180°]. The heading error angle β is then combined with the escaping turning angle θOA generated by the collision avoidance behaviour using a situation-based weighted sum.
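Eq. (7) can be sketched directly; note that the x difference is the numerator, so the heading is measured from the +Y axis. Using `atan2` instead of a plain arctangent (an implementation choice, not from the paper) keeps the quadrant correct, and the final step wraps β into [−180°, 180°].

```python
import math

def heading_error(goal, pose):
    """Eq. (7): angle the robot must turn to face the goal, in degrees.
    goal = (Xg, Yg); pose = (Xr, Yr, alpha_deg)."""
    (xg, yg), (xr, yr, alpha) = goal, pose
    beta = math.degrees(math.atan2(xg - xr, yg - yr)) - alpha
    return (beta + 180.0) % 360.0 - 180.0   # wrap into [-180, 180]
```

A robot at the origin facing +Y with the goal straight ahead gets β = 0; a goal on its right gives β = 90°.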



Here β′ = π − β; θSGS is the safe turning angle from the sub goal seeking behaviour and θOA is the escaping turning angle from the collision avoidance behaviour.

3. Settings and experimental results

Fig. 11. Move to goal behavior.

2.4. Situation-based behaviour fusion
Finally, the sub goal seeking, move to goal and collision avoidance behaviours are fused according to the specific situation, producing the final turning angle θF. In this research, three basic situations were considered, see Fig. 12:

If θOA ≠ 0, then θF = θOA; otherwise:
Situation I: if αS ≤ β′ ≤ αE, then θF = β.
Situation II: if β′ ≤ αS < αE, then θF = αS when θSGS ≥ 0, else θF = θSGS.
Situation III: if αS < αE ≤ β′, then θF = αE when θSGS ≤ 0, else θF = θSGS.

Here αS is the start angle of a gap, αE is the end angle of a gap and β is the heading error angle; αS, αE and β are measured in the reference frame illustrated in Fig. 12.
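The fusion rules above can be transcribed literally. This is an illustrative sketch, not the authors' code; the fall-through return when none of the three situations applies is an assumption.

```python
import math

def fuse(theta_oa, alpha_s, alpha_e, beta, theta_sgs):
    """Situation-based fusion: angles in the Fig. 12 frame (radians),
    with beta_p = pi - beta."""
    if theta_oa != 0:
        return theta_oa                    # collision avoidance overrides
    beta_p = math.pi - beta
    if alpha_s <= beta_p <= alpha_e:       # Situation I: goal inside gap
        return beta
    if beta_p <= alpha_s < alpha_e:        # Situation II
        return alpha_s if theta_sgs >= 0 else theta_sgs
    if alpha_s < alpha_e <= beta_p:        # Situation III
        return alpha_e if theta_sgs <= 0 else theta_sgs
    return theta_sgs                       # fallback (assumed)
```

Situation I simply lets the move to goal behaviour steer, since the goal direction already lies inside the chosen gap; the other two clip the turn to the nearer gap edge.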

The experiments were carried out both in simulation and on a real robot. The robot experiments were performed with NAMR (Fig. 13), a four-wheeled, rectangular, holonomic vehicle driven by two differential wheels located in the middle of the robot and supported by two castor wheels. NAMR is 42 cm wide, 54 cm long and 75 cm high. In simulation, the robot is represented by a simplified rectangle of 28 × 36 pixels. NAMR's controller is an on-board industrial embedded PC with a Pentium III 1.4 GHz processor and 256 MB of memory running the Windows XP operating system. NAMR's main sensor systems are a Sick LMS200 laser scanner and two integrated encoders that detect wheel rotation. NAMR is also equipped with a wireless connection and can be remotely controlled and monitored over an intranet or the Internet. NAMR's control software uses a typical three-layer architecture, see Fig. 14. It integrates all the modules needed to drive the sensory and motor hardware, a sensor control module to interpret laser sensor information and a Navigation Engine module to implement the navigation algorithms. The control software drives NAMR by combining incoming sensor data with the predefined goal position. Speed control commands are issued to the Motion Control module to drive the robot. At the same time, the Motion Control module returns the encoder data to the Navigation Engine module to compute the position and orientation of the robot. The typical cycle

Fig. 12. Situation-based fusions for SGS approach.

Fig. 13. NAMR mobile robot.



Fig. 16. Simulation modules and their relationships.

Fig. 14. Navigation software architecture of NAMR.

time for a control loop is 80 ms; since Windows XP is not a real-time operating system, this cycle time is only an average. The sub goal seeking approach is implemented in the Navigation Engine module, which is shared by both the simulated robot and the NAMR robot. It also incorporates a user interface that displays sensory data and navigational parameters such as robot status, obstacle distance and orientation in real time (Fig. 15). The robot view displays all the identified gaps as sectors; the selected gap (sub goal) is shown darkened. Obstacles are displayed as points. The left and right driven wheel trajectories can be displayed to illustrate the heading trend of the robot given the motion commands issued for the next moment. The maximum translational velocity is set to 0.5 m/s and the maximum rotational velocity to 1.57 rad/s. Considering NAMR's computational ability, the maximum detecting range (MDR) was set to 240 cm for all experiments. The simulation software used to perform the tests was programmed in C#. It consists of three modules: (1) Environment Simulator, (2) Sensor Simulator and (3) Robot Simulator; their interconnection is shown in Fig. 16. The simulated environment maps the objects and free space using two-value grid cells. The simulated laser sensor scans the simulated environment and

Fig. 17. User interface of simulation.

returns a set of distance values. A differential drive robot kinematic model is applied to simulate the robot for the calculation of its position and orientation in the simulation environment following the instructions from the Navigation Engine module. The Navigation Engine module implements the SGS navigation approach. In simulation, the Navigation Engine module has exactly the same setting as the real one such as the control cycle time. In addition, the sensor error and robot position error are simulated by adding a normal

Fig. 15. User interface of the navigation engine module.



Fig. 18. Concave environment with 3.8 m width and 2.4 m depth.

Fig. 19. Concave environment with 3.8 m width and 1.4 m depth (left). Concave environment with 3 m width and 2.4 m depth (right).

distribution noise signal to the sensor's returned range values and to the robot's positions and orientations. The user interface (Fig. 17) allows the user to specify the positions of the simulated robot and goals simply by dragging and dropping, so experiments could be performed with varying initial robot and goal positions.

3.1. Experimental testing
Four experiments were carried out to evaluate the system against the following criteria: (1) goal achievable ability, to test whether the robot can reach its target in large and complex environments; (2) safety, to test whether the navigation system can navigate the robot safely in cluttered environments with little side clearance; (3) manoeuvres, to evaluate the motion performance. Since the avoidance of local minima is currently the main weakness of reactive-based navigation approaches that do not use a global path planner [16], the most important evaluation of a reactive navigation system's goal achievable ability is how easily it becomes trapped in, and can recover from, a local minimum. Because it is easier to repeat a test in simulation than in a real environment, and complex environments are more easily constructed there, the testing of the goal achievable ability is more comprehensive and thorough in simulation. Hence, a typical concave environment together with four complex environments was constructed in simulation to evaluate the goal achievable ability. However, the simulation is vulnerable to inaccuracies in the robot dynamic model, sensor model and environmental model; the safety of the robot can therefore only be tested with a robot operating in real environments, where the manoeuvring behaviour of the robot can also be observed more directly. The safety and manoeuvre tests were performed with the real robot NAMR running in an extremely narrow aisle. A final test was carried out in a dense and complex environment to evaluate the overall performance of the system. In these experiments, the maximum detecting range was set to 240 cm as the default initialised value.

3.2. Experiment 1: Concave environments
This experiment used a classical concave shaped environment with obstacles laid between the robot's initial position and the goal position. Firstly, three trials were run in similar environments with different sizes of concave obstacle: the first was 2.4 m in depth and 3.8 m in width, the second 1.4 m in depth and 3.8 m in width, and the third 2.4 m in depth and 3 m in width.



Fig. 20. Concave environment with 3.8 m width and 2.4 m depth with 300 cm maximum detecting range.

Fig. 21. Scenario I.

The three trials produced three different results. The robot failed to reach the goal in the first trial, while in the second and third trials it escaped from the concave obstacle and reached the goal along two completely different trajectories. In the first trial, the robot became trapped in an infinite cycle in front of the concave shaped obstacle and failed to reach the goal. At position A, two gaps were identified, see Fig. 18 (Snap A); one of them was selected as the sub goal at random because the cost values of the two gaps were identical. The safe angle navigated the robot until it reached position B or B′. At position B (B′), the robot had already moved out of the concave obstacle and no obstacle was within its view, Fig. 18 (Snap B). The move to goal behaviour therefore navigated the robot toward the goal with a left turn (when the robot was at position B) or a right turn (at position B′), which drove the robot back to position A, and another cycle began: the robot had fallen into a local minimum. In the second trial, a different heading direction was selected at position B compared with the first trial, Fig. 19 (Left). The reason for this difference was that the robot-to-goal direction (right) coincided with the direction (right) that could drive the robot out of the concave obstacle when the robot was at position B.

Fig. 22. Scenario II.

Fig. 23. Scenario III.

In the third trial, the robot had the same heading direction when it was at position B, see Fig. 19 (Right). The robot-to-goal



direction was left while the direction to drive the robot out of the concave obstacle was right, and so the robot turned left toward the concave. However, instead of heading back into the concave, at position B the robot was able to detect the obstacle edge E, due to the reduced width, which prevented it entering the concave again. The local minimum was therefore avoided. The principal cause of the local minimum problem is the lack of a global view of the environment: with sensed information, the robot has only a partial view of its surroundings at every instant. Once the obstacles are beyond the detecting range of the sensor, the robot moves directly toward the goal. As soon as the robot's heading coincides with the robot-to-goal direction after escaping from obstacles lying on that direction, the robot begins to generate a limit cycle path and inevitably falls into a local minimum. Using the sub goal seeking approach, a local minimum may occur only when the concave obstacle is too large and its edges cannot be detected. This analysis was validated by the fourth trial, in which the concave size was exactly the same as in the first trial; the only difference was that the robot's maximum detecting range was increased from 240 cm to 300 cm. The result demonstrated that the increased detecting range allowed the robot to detect the obstacle edge (Fig. 20, Snap D) and move out of the trap situation. One conclusion that can be drawn from these results is that by using the SGS approach with a properly set maximum detecting range, the chance of getting trapped by a local minimum can be reduced; moreover, simply increasing the maximum detecting range can help the robot recover from trapping situations. Other navigation methods such as APF [2] and fuzzy logic [17] have proved to have difficulty recovering from such scenarios.

3.3. Experiment 2: Complex environments
In this experiment, the goal achievable ability was further tested in more complex environments containing closely spaced obstacles, concave shaped traps, aisles, odd shaped objects and small poles (Figs. 21–23). All scenarios were accomplished successfully, and the sub goal seeking approach demonstrated its ability to drive the robot through complex environments. Special attention should be paid to Scenario IV (Fig. 24), in which the robot seemed to wander a little in front of two concave shaped structures; the sub goal seeking approach nevertheless successfully recovered the robot from this situation.

Fig. 24. Scenario IV.

3.4. Experiment 3: Safety test
The safety test was carried out in an extremely narrow aisle with a minimum width of 48 cm. The width of NAMR is 42 cm, leaving only 6 cm of free space, which makes this scenario very difficult: any sensor noise or wheel slip could cause a collision. The goal was set 6 m in front of the robot. Fig. 25 illustrates the environment within which the experiments were performed and the velocity profile of the robot during the experiment. In this trial, on approaching the aisle the robot gradually slowed down to make fine adjustments of its heading direction, and finished the trial with fluid motion, as indicated by the velocity profile in Fig. 25. The trial was completed in 52 s at an average velocity of 15 cm/s.

3.5. Experiment 4: Complex office environment
A dense and complex scenario was also performed in an office, in which the robot had to avoid a concave shaped structure placed in front of its initial position and then travel through an open door, with a few densely packed obstacles placed along the way. The goal was located outside the door, 14 m from the robot's start position. The robot finished the experiment with a very smooth trajectory in 112 s at an average wheel speed of 21.5 cm/s; see the

Fig. 25. NAMR performing the safety test.


Fig. 26. Tests in the complex office environment; labels B–H correspond to the robot positions.


At position A, the robot detected the concave shaped structure and bypassed it (Fig. 26, Snaps A, B and C). Fig. 26 (Snaps D, E, F, G and H) illustrates how NAMR travelled through the door while avoiding the closely placed obstacles on the way. No oscillation or trap situations were observed.

4. Conclusion

This paper has presented a sub goal seeking approach for reactive navigation in complex unknown environments. The approach is geometry based, which allows the environmental structure to be analysed and hence all dead-ends within the robot's detecting range to be detected. Although the approach may still become trapped when a concave obstacle is too large and its edges cannot be detected, the chance of being trapped is reduced. The method also establishes a geometrical relationship between the robot shape and the environment: the robot margin is inherently considered when the next heading direction is computed, and all directions that are not passable given the robot's geometry are ignored, which reduces both the number of unnecessary manoeuvres and the risk of collision.

Since both the simulated robot and the real robot NAMR were driven by the same navigation controller without any modification of its parameters, the simulation environment could be used to evaluate the approach's performance in a large number of environments. The results suggest that the sub goal seeking approach is robust in controlling the robot and highly adaptive across environments. The experiments demonstrated that the approach is capable of driving the robot safely in cluttered environments even in the presence of sensor noise; the trajectory was smooth and oscillation free. Trap situations were rarely observed during the tests, and the robot was able to recover from them once they occurred.
Future work on this method includes taking into account the rate of change of the safe turning angle, which is expected to further improve performance, and validating the method in dynamic environments with moving obstacles. Integration with SLAM is also expected to provide a complete solution for autonomous navigation.


Chen Ye was born in China in 1979. He received his B.Eng. degree in Mechatronics from Shanghai Jiao Tong University, China, in 2001 and his Ph.D. degree in Mechanical, Materials and Manufacturing Engineering from the University of Nottingham, UK, in 2006. He is currently a research fellow in the School of Mechanical, Materials and Manufacturing Engineering at the University of Nottingham. His research focuses on mobile robots and artificial intelligence.

Phil Webb has a B.Eng. degree in Electronic Engineering and a Ph.D. degree in Manufacturing Engineering, and has over 15 years of experience in manufacturing engineering and robotics research. He has particular experience in the control, integration and development of advanced robotic production systems and has produced over 50 publications on control, automation and robotics. He is a council member and director of the British Automation and Robotics Association and is currently the United Kingdom's representative on the General Assembly of the International Federation of Robotics.