TOWARDS A HUMAN-ROBOT COLLABORATIVE CONTROL FOR ADJUSTABLE AUTONOMY

ZIEBA Stéphane, POLET Philippe, JOUGLET David, VANDERHAEGEN Frédéric
Laboratoire d'Automatique, de Mécanique et d'Informatique industrielles et Humaines
Université de Valenciennes et du Hainaut-Cambrésis
Le Mont Houy, 59313 Valenciennes, FRANCE

Abstract: This paper presents an approach that gathers the concepts of human-robot collaborative control and adjustable autonomy as a way of designing resilient systems. The first part focuses on the definition of adjustable autonomy for a mobile robot. Then, resilience is presented as a design objective for the system, and an approach integrating both adjustable autonomy and collaborative control is described. The last part of this paper is devoted to the presentation of a micro-world developed in order to test assumptions on adjustable autonomy. Copyright © 2007 IFAC

Keywords: human-robot interaction, adjustable autonomy, collaborative control.

1. INTRODUCTION

Nowadays, the breadth of missions that an autonomous mobile robot can handle is increasing rapidly. Designing autonomous robots limits the risks to human operators by taking them away from the field of operation, but it now seems essential to design systems that allow an effective human-robot interaction. On the one hand, communication difficulties or the poor quality of the data needed for teleoperation make direct remote control ineffective in some situations. On the other hand, the current maturity of technology and algorithms does not yet allow the design of completely autonomous robots reliable enough to deal with all possible situations.

A compromise therefore has to be found between completely remote-controlled robots and fully autonomous robots. This compromise consists in developing a mode of human-machine interaction that optimises the use of the competencies of both the human operator and the robot. Such an interaction mode is collaborative control, which has to be integrated into a system allowing a dynamic adjustment of autonomy in order to maintain a sufficient level of efficiency whatever the conditions are.

Indeed, it must be taken into consideration that these unmanned ground vehicles evolve in a dynamic environment that may require an adjustable autonomy. For instance, the level of autonomy has to be raised if the human operator becomes unavailable because of another task demand. The proposed approach intends to extend collaborative control by investigating unifying links with adjustable autonomy. Such links are likely to be found if the notions of adjustable autonomy and collaborative control are gathered around the concept of resilience.

Firstly, this paper introduces a discussion about autonomy for a mobile robot. Then, adjustable autonomy and an approach to assess transitions between modes of autonomy are presented; adaptability is the first characteristic proposed to illustrate the resilience of the human-robot system. The concept of resilience is introduced in the third section of this paper, and other characteristics, which refer to collaborative control, are detailed in the fourth section. Finally, a micro-world is proposed to simulate the human-robot interaction and some first results are discussed.

2. AUTONOMY

2.1 Autonomy

A discussion about adjustable autonomy should begin with the definition of the word autonomy itself, for which two main senses can be distinguished. These senses are given by the etymology of the word, derived from the combination of two Greek words: autos, meaning "oneself", and nomos, meaning "law". The dictionary defines autonomy as the capacity of an individual or a group to take care of itself or not to depend on an external influence.

Bradshaw et al. (2004) propose two dimensions to define autonomy, namely the descriptive dimension (actions that the robot is able to perform) and the prescriptive dimension (actions that the robot is authorised to perform). Steels (1995) states that, in order to be autonomous, systems must first be automatic: a system must be able to operate in an environment in order to achieve the tasks for which it was designed. Smithers (1997) states that autonomous systems are able to build the laws and strategies according to which they regulate their behaviour.

Considering these different approaches, it is interesting to define the concept of autonomy for a mobile robot along three distinct axes which take the different senses mentioned above into consideration:
1. the skill and capacity of the robot to achieve a given task;
2. the skill and capacity of the robot to decide how to achieve a task;
3. the skill and capacity of the robot to identify and manage goal-directed constraints.

In the three axes, a distinction is made between the skill and the capacity of the robot. Skill refers to the existence of the technology and algorithms necessary to achieve a task. Capacity depends on the context of the mission and is likely to evolve. The robot may therefore be unable to achieve a given task although it has the skill to do it.
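This distinction can be sketched in a few lines of code. The class and attribute names below (Robot, Task, capacity) and the contextual checks are illustrative assumptions, not elements of the paper: a skill is a static property of the platform, whereas capacity is re-evaluated against the current mission context.

    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        required_skill: str

    @dataclass
    class Robot:
        # Skills: tasks the platform is technologically able to perform (static).
        skills: set = field(default_factory=set)

        def capacity(self, task: Task, context: dict) -> bool:
            """Capacity depends on the mission context and may evolve: the robot
            can hold a skill yet lack the capacity to use it here and now."""
            if task.required_skill not in self.skills:
                return False
            # Illustrative contextual limits: battery level and sensor state.
            return context.get("battery", 1.0) > 0.2 and not context.get("sensor_failure", False)

    robot = Robot(skills={"avoid_obstacle", "follow_waypoints"})
    task = Task("avoid_obstacle", required_skill="avoid_obstacle")
    print(robot.capacity(task, {"battery": 0.8}))          # True: skill and capacity
    print(robot.capacity(task, {"sensor_failure": True}))  # False: skill without capacity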

In the following section, adjustable autonomy is introduced as a way to react to the evolution of the capacities of both the human operator and the robot.

2.2 Adjustable autonomy

The concept of adjustable autonomy is close to adaptive automation (Inagaki, 2003). It establishes that functions are allocated dynamically between the human operator and the robot according to criteria related to the environment, the human operator's workload and the performance of the human-machine system.

Goodrich et al. (2001) describe adjustable autonomy as a system with several levels of autonomy in which only the human operator has total control over the change of level. In each mode, the robot has some authority over its behaviour according to the selected mode of autonomy, and the human operator can only influence the behaviour of the robot through the human-machine interface. This definition of adjustable autonomy seems incomplete because it takes into consideration neither the dynamic aspect of the situation nor the operator workload, both of which may require an adjustment of autonomy during the activity of the robot. Moreover, only the human operator is responsible for adjusting the autonomy level. This restriction does not lead to a really effective system: when an adjustment of the autonomy level is needed because of operator neglect, the operator may not be available or may not have the data necessary to carry it out. The robot must thus be able to adjust its level of autonomy itself.

The definition we focus on considers adjustable autonomy as the property of an autonomous system to change its level of autonomy while the system operates. The human operator, another system or the autonomous system itself can adjust the autonomy level (Dorais et al., 1999). In this definition, the autonomous system represents for us the ground vehicle, and the other system that can adjust the autonomy level can be compared to a computer managing the function allocation. Adjustable autonomy is thus a way for the system to adapt to new constraints or pressures in order to maintain a normal functioning state. This maintainability is achieved by transitions between the different modes of autonomy.
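A minimal sketch of this adjustment authority follows. The mode names and the authorised-source set are chosen for illustration (they mirror the operator / other system / robot triple of Dorais et al., 1999), not taken from an actual implementation.

    from enum import Enum

    class Mode(Enum):
        MANUAL = 0           # M0
        SEMI_AUTONOMOUS = 1  # M1
        AUTONOMOUS = 2       # M2

    # Sources allowed to adjust the level while the system operates:
    # the operator, another system (a function allocator), or the robot itself.
    AUTHORIZED_SOURCES = {"operator", "function_allocator", "robot"}

    def request_adjustment(current: Mode, requested: Mode, source: str) -> Mode:
        """Grant a mode change only to an authorised source."""
        if source not in AUTHORIZED_SOURCES:
            return current
        return requested

    # Example: the robot raises its own autonomy when the operator is busy elsewhere.
    mode = Mode.SEMI_AUTONOMOUS
    mode = request_adjustment(mode, Mode.AUTONOMOUS, source="robot")
    print(mode)  # Mode.AUTONOMOUS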

2.3 Assessment of transitions

An interesting approach to assess the transitions consists in exploiting the model defined by Parasuraman et al. (2000). The authors describe the activity of information processing through four categories: information acquisition (C0), information analysis (C1), decision selection (C2) and action implementation (C3). Considering three general modes of autonomy (a manual mode M0, a semi-autonomous mode M1 and a fully autonomous mode M2), Table 1 illustrates the allocation of the categories of activity between the human operator (H) and the robot (R).

Table 1. Roles of the robot (R) and the human operator (H)

         M0   M1    M2
    C0   H    R     R
    C1   H    H-R   R
    C2   H    H-R   R
    C3   H    R     R

From this table, it is possible to build an approach to estimate the cost of a transition between different modes of autonomy. A transition from mode M2 to mode M0, for instance, may be too difficult for the human operator to manage. A gradual reconstruction of situational awareness is necessary for successful transitions: these transitions result in different roles for the human operator, which require different types of information to maintain sufficient situational awareness (Scholtz, 2003).

These transitions can also be based on objective criteria. Huang et al. (2004) define a set of metrics organised along three axes: complexity of the mission, environmental difficulty and human-robot interaction. Adjustable autonomy can thus be seen as a way to adapt the system to modifications in the environment in order to optimise the task allocation between the robot and the human operator. Fiksel (2003) defines adaptability for a system as the flexibility to change in response to new pressures. Adaptability is a characteristic of the resilience of a system.
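To make the cost estimation concrete, here is a hedged sketch built on the allocation of Table 1. Counting the categories that change hands is our illustrative proxy for transition cost, not a metric proposed in the cited works.

    # Allocation of Parasuraman et al.'s categories, as in Table 1.
    # H = human operator, R = robot, HR = shared.
    ALLOCATION = {
        "M0": {"C0": "H", "C1": "H",  "C2": "H",  "C3": "H"},
        "M1": {"C0": "R", "C1": "HR", "C2": "HR", "C3": "R"},
        "M2": {"C0": "R", "C1": "R",  "C2": "R",  "C3": "R"},
    }

    def transition_cost(src: str, dst: str) -> int:
        """Count the categories whose responsible agent changes.

        A crude proxy: each reallocated category forces the receiving agent
        to rebuild part of its situational awareness.
        """
        return sum(ALLOCATION[src][c] != ALLOCATION[dst][c] for c in ALLOCATION[src])

    print(transition_cost("M2", "M0"))  # 4: every category is handed back to the operator
    print(transition_cost("M1", "M2"))  # 2: only analysis and decision change hands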

The following section introduces the notion of resilience and presents adjustable autonomy as a component of a resilient system.

3. RESILIENCE

Resilience is a concept borrowed from the field of ecology; it characterises natural systems that tend to maintain their integrity when subject to disturbance (Ludwig et al., 1997). In industrial systems, resilience is related to the concept of robustness, which covers error-resistant and error-tolerant systems. A system is error-resistant if it reduces the probability that a human operator interacting through the interface will attempt to perform actions that endanger the safety of the system. A system is error-tolerant if the consequences of a human operator performing erroneous actions through the interface are limited to some acceptable level (Dearden and Harrison, 1995). Hollnagel (2006) defines resilience as "the intrinsic ability of an organization to keep or recover a stable state allowing it to continue operations after a major mishap or in presence of a continuous stress".

Resilience can provide an interesting framework for building mobile robots able to recover from errors, both internal errors and those due to the human operator. Resilience and autonomy, more especially adjustable autonomy, are related concepts: an autonomous system has to be resilient in order to adapt to unplanned events. Hollnagel (2006) introduces the notion of a proactive organisation (figure 1). Such an organisation can be applied to an autonomous mobile robot. Some metrics constantly analyse the situation to determine whether the system is able to recover from any unplanned event. Resilience thus completes the concept of robustness with the notion of proactive activity, which is a way of anticipating failures. The system also has the possibility to learn from its reactions to unplanned events.

[Figure 1 shows a cycle: safety planning (preparing for unexampled threats); an alert and observant state; situation assessment and reorganisation after an unplanned event; constant self-criticism and inquisitiveness; evaluation and learning; and alternative ways of functioning.]

Fig. 1. Resilient or proactive organisation (from Hollnagel, 2006)

Table 2 summarises the characteristics of a resilient human-robot system, which Fiksel (2003) defines as follows. Diversity expresses the existence of multiple forms and behaviours in the system. Efficiency is the performance of the system with modest resource consumption. Adaptability is the flexibility of the system to change in response to new pressures. Cohesion expresses the existence of unifying forms or linkages.

Table 2. Characteristics of resilience (adapted from Fiksel, 2003)

                        Diversity                  Efficiency             Adaptability         Cohesion
    Human operator      Different strategies       Efficient decisions    Human adaptability   Respect of objectives
    Mobile robot        Material and decisional    Efficient decisions    Adjustable autonomy  Respect of the prescribed
                        redundancy                                                             plan of action
    Human-robot system  Different forms of         Efficient cooperation  Adjustable autonomy  Efficient communication
                        human-robot cooperation

In the following section, some of these characteristics of resilience are expressed within collaborative control.

4. COLLABORATIVE CONTROL

Collaborative control can be defined as the elaboration of a single command from multiple sources (Goldberg and Chen, 2001). The authors show that this aggregation of orders makes it possible to increase the performance of the system and its reliability. This first definition of collaborative control is close to shared control (Sheridan, 1992), but the difference comes from the aggregation phase: here the human operator and the robot control the same functions, whereas in shared control they perform different tasks. Moreover, in this definition there is no communication between the human operator and the system.
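A minimal sketch of such an aggregation step, under our own assumptions (a circular mean of heading commands weighted by a per-source trust factor; Goldberg and Chen do not prescribe this particular fusion rule, and the source names and weights are illustrative):

    import math

    def aggregate_headings(commands):
        """Fuse heading commands (radians) from several sources into one.

        `commands` maps a source name to (heading, weight); the circular mean
        avoids the wrap-around problem at +/-pi. Weights could, for instance,
        reflect the trust placed in each source.
        """
        x = sum(w * math.cos(h) for h, w in commands.values())
        y = sum(w * math.sin(h) for h, w in commands.values())
        return math.atan2(y, x)

    # The operator and the robot control the same function (heading) and
    # their orders are aggregated into a single command.
    single_command = aggregate_headings({
        "operator": (math.radians(30), 0.6),
        "robot":    (math.radians(10), 0.4),
    })
    print(math.degrees(single_command))  # about 22 degrees, between the two proposals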

The following definition introduces the dialogue that is essential to a collaborative activity. Fong et al. (2003) define collaborative control as a mode of human-machine interaction placing the human operator and the robot at the same decisional level. They are considered as partners working together to achieve a common goal. The human operator is now more a collaborator than a supervisor, but the robot remains subordinate to a high-level strategy developed by this human operator.

The main feature of this mode of interaction is the two-way dialogue initiated between the human operator and the robot. The operator, considered as a limited and imprecise source of information, can give commands to the robot, and the robot can address requests to the operator about its situation. This dialogue can lead the human operator and the robot to confront their points of view about a given situation, and it contributes to the cohesion of the human-robot system and to its resilience. Collaborative control thus allows the human operator and the robot not only to combine their competencies but also to use them to negotiate a situation of conflict.

Schmidt (1991) defines three forms of cooperation which can be applied to collaborative control: augmentative, integrative and debative cooperation. In the following definitions, the term agent stands for either the human operator or the automated system.

The purpose of augmentative cooperation is to compensate for the individual limitations of the agents. The agents have similar know-how and the task is divided into equivalent sub-tasks; combining the agents' capacities produces a better performance. An example is the assistance that the human operator provides to the robot by inflecting its trajectory in order to avoid an obstacle.

Integrative cooperation integrates the different and complementary know-how of the agents. In this case, the task is divided into sub-tasks, each one achievable by the most qualified agent, and the contribution of each agent can contribute to the achievement of another task. An example of integrative cooperation relates to the difference of perception between the human operator and the robot: the robot can indicate on an interactive map an obstacle which is not perceived by the operator, and this new information is then likely to be used by the human operator.

Debative cooperation applies to a task which is not broken up and to agents with similar know-how. This mode of cooperation aims at confronting the results of the various agents in order to produce a better result. An example is the dialogue initiated between the human operator and the robot, in which each agent has to justify its point of view and its strategy about a situation in order to work out the best possible solution.

These different forms of cooperation contribute to the resilience of the system by introducing diversity and cohesion through the different possible interactions.
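As an illustration of the debative form, here is a hedged sketch in which two agents with similar know-how propose routes for the same, undivided task and a shared criterion settles the debate. Arbitrating by route length is our simplification of the justification dialogue, not a mechanism described in the cited works.

    import math

    def debative_step(proposals, evaluate):
        """Debative cooperation: agents confront results on the same task;
        the proposal with the best evaluation is retained.

        `proposals` maps an agent name to a candidate plan; `evaluate` is an
        assumed shared cost function used to settle the conflict.
        """
        best_agent = min(proposals, key=lambda a: evaluate(proposals[a]))
        return best_agent, proposals[best_agent]

    # Illustrative conflict: operator and robot each propose a route; the
    # shared criterion (here, Euclidean route length) arbitrates the debate.
    routes = {"operator": [(0, 0), (4, 0), (5, 5)], "robot": [(0, 0), (5, 5)]}
    length = lambda r: sum(math.dist(p, q) for p, q in zip(r, r[1:]))
    print(debative_step(routes, length))  # ('robot', [(0, 0), (5, 5)])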

Figure 2 illustrates a perspective of an architecture for collaborative control which integrates adjustable autonomy in order to contribute to the adaptability of the system.

[Figure 2 shows this architecture: from shared objectives and constraints, the human operator and the robot jointly select the mode of autonomy according to their capacities of managing constraints, of decision and of execution; they then elaborate and execute the plan of action, and the decisions and the result of the action feed back into the selection of the mode of autonomy.]

Fig. 2. Perspective for collaborative control

Collaborative control is thus a way to implement the principles of adjustable autonomy by controlling the three axes of autonomy previously defined:
- achieve a task,
- decide how to achieve a task,
- identify goal-directed constraints.

In order to implement these principles, the following part presents a micro-world simulating the control of a ground vehicle by a human operator.

5. APPLICATION

The application simulates the control of a ground vehicle travelling on a map from a point A to a point B. The robot has various control modes representing different modes of autonomy and is supervised by a human operator. In the manual mode, which corresponds to the lowest level of autonomy, the operator controls the robot via the keyboard. In the autonomous control mode, the robot is able to avoid obstacles without referring to the operator.

Two modes of human-robot interaction were integrated into this micro-world. The exchange of control allows the operator to take manual control of the robot at any time; this actually corresponds to the deactivation of the autonomous control mode. The difficulty which then arises relates to the transition between the modes. In a transition from a manual mode to an automatic mode, the robot must have assimilated the progression of the mission during the phase of manual control. In a transition from an automatic mode to a manual mode, the operator must have information on the context to be effective in the teleoperation of the robot.

In the other interaction mode, the mixed control, the operator plans the route of the robot by specifying several waypoints. This decomposition of the mission aims at distributing the operator's workload and at optimising the use of the operator's resources. In this way, the global strategy of the human operator is associated with the local strategy of the robot.
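As an illustration, here is a minimal sketch of such a mixed-control loop in a grid micro-world. The grid representation, the function names and the greedy avoidance rule are our assumptions, not the micro-world's actual implementation.

    import math

    def navigate(start, waypoints, obstacles, max_steps=100):
        """Greedy mixed-control loop on a grid micro-world.

        The operator's global strategy is the ordered waypoint list; the
        robot's local strategy is a greedy step toward the next waypoint
        that skips obstacle cells. A greedy rule can stall on concave
        obstacles; this is a sketch, not a full planner.
        """
        moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        pos, path = start, [start]
        for goal in waypoints:
            for _ in range(max_steps):
                if pos == goal:
                    break
                # Local strategy: move to the free neighbour closest to the waypoint.
                free = [(pos[0] + dx, pos[1] + dy) for dx, dy in moves
                        if (pos[0] + dx, pos[1] + dy) not in obstacles]
                pos = min(free, key=lambda c: math.dist(c, goal))
                path.append(pos)
        return path

    # Point A to point B through one mandatory intermediate stage, as in the mission.
    path = navigate(start=(0, 0), waypoints=[(3, 3), (6, 1)], obstacles={(1, 1), (2, 2)})
    print(path[-1])  # (6, 1): the robot reaches B while avoiding the obstacle cells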

In the following section, an experimental protocol performed in the micro-world is presented and some results are discussed.

6. EVALUATION

The evaluation aims at comparing the performance of the different modes of autonomy. The experimental protocol is organised around a common mission, namely travelling from a point A to a point B while passing through a mandatory stage. The operator has six scenarios to complete, which exercise the various control modes of the robot. The first scenario consists in handling the robot in manual mode with any crossing of obstacles prohibited; each obstacle is announced to the operator by a message displayed on the screen. The second scenario is identical to the first, but the crossing of obstacles is now authorised. Scenarios 3 and 4 introduce the autonomous control mode, in which the robot is able to avoid the detected obstacles. In scenario 3, the obstacles must all be avoided; in scenario 4, their crossing is authorised. Moreover, in these two scenarios, the exchange of control is active. The last two scenarios are carried out in the same autonomous control mode combined with the mixed control. The following part discusses the results obtained from this experimental protocol, which was carried out with 12 subjects.

7. PRELIMINARY RESULTS

In this part, mode 1 denotes the manual mode, mode 2 the autonomous mode only and mode 3 the mixed control. By focusing on the mission times of the operators, scenarios 1, 3 and 5 (crossing of obstacles forbidden) can be separated from scenarios 2, 4 and 6 (crossing authorised). Table 3 presents the mean and standard deviation of the mission times for each mode of autonomy, with and without crossing of obstacles.

Table 3. Mean time and standard deviation of mission times

                                   Autonomy mode         Mean (s)   Standard deviation
    Without crossing of obstacles  Manual mode           172        53.14
                                   Autonomous mode only  167        45.94
                                   Mixed mode            109        13.22
    With crossing of obstacles     Manual mode           91         26.06
                                   Autonomous mode only  56         34.97
                                   Mixed mode            53         16.79

The paired comparisons of the mean times between the modes of autonomy were carried out using Student's t-test for paired samples, with an alpha level of 0.05 (two-tailed test); the results are detailed below.
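For reference, a minimal sketch of such a paired comparison using SciPy. The per-subject times below are illustrative placeholders whose group means merely echo Table 3; they are not the experiment's raw data.

    from scipy import stats

    # Illustrative per-subject mission times (seconds), n = 12 paired observations.
    manual = [150, 210, 175, 120, 230, 160, 180, 140, 200, 165, 155, 179]
    mixed  = [110, 120,  95, 100, 130, 105, 115,  98, 125, 108, 102, 100]

    # Two-tailed Student's t-test for paired samples, alpha = 0.05,
    # as used for the comparisons reported below (df = n - 1 = 11).
    t, p = stats.ttest_rel(manual, mixed)
    print(f"t(11) = {t:.2f}, p = {p:.4f}")  # reject H0 if p < 0.05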

Without crossing of obstacles:
- There is no significant difference in mean time between manual mode and autonomous mode only: t(11)=0.27; n.s.
- The mean time is significantly higher in autonomous mode only than in mixed control: t(11)=5.32; p<0.0003.
- The mean time is significantly higher in manual mode than in mixed control: t(11)=4.16; p=0.002.

Manual control and autonomous mode only are thus not significantly different with regard to mission time. Only the mixed control significantly improves the mission time, which is a metric related to efficiency.

With crossing of obstacles:
- The mean time is significantly higher in manual mode than in autonomous mode only: t(11)=4.62; p=0.001.
- There is no significant difference in mean time between autonomous mode only and mixed control: t(11)=0.36; n.s.
- The mean time is significantly higher in manual mode than in mixed control: t(11)=6.24; p<0.0001.

Adjusting the ability to avoid obstacles seems to lead to an optimal strategy for the human-robot system, which makes autonomous mode only and mixed control not significantly different. It can be noticed that the mixed control remains better than the manual mode. From these results, it appears that a mode of control in which both the human operator and the robot are involved significantly increases performance compared with teleoperation or fully autonomous modes.

It can also be useful to assess the human-robot interaction by focusing on the number of interventions of the operator and on the ratio between the different modes. Regarding the first results reported in Table 4, scenarios 5 and 6 reduce the number of interventions. In scenarios 3 and 4, this number could be decreased by increasing the situation awareness of the operator. Two behaviours can be identified: either the operator frequently alternates between the manual and automatic modes, or the operator switches to manual mode and then stays in this mode.

Table 4. Assessment of the human-robot interaction

                                Scenario 3          Scenario 4          Scenario 5   Scenario 6
    Number of interventions     ≈13                 ≈4                  <1           <1
    Time spent in manual mode   ≈2/3 of total time  <1/3 of total time  ≈0           ≈0

8. CONCLUSION AND PERSPECTIVES

This paper has summarised the existing links between adjustable autonomy and collaborative control, gathered around the concept of resilience, which appears to be a promising way of increasing the performance of the system.

Moreover, integrating adjustable autonomy and collaborative control in the same architecture appears to be necessary to obtain an efficient human-robot cooperation. The preliminary results obtained in the micro-world, summarised in Table 5, suggest that adjustable autonomy is a means of increasing performance and of reducing the operator workload. The mixed control, which introduces some of the specifications of collaborative control by associating the competencies of the human operator and the robot, also tends to improve the results.

Table 5. Summary of main results

                        Crossing obstacles authorised    Crossing obstacles forbidden
    Mode                1       2       3                1       2       3
    Interaction level   100%    0%      50%              100%    0%      50%
    Completion time     Low     Low     Low              High    High    Low
    Execution           Human operator: 3 strategies;    Human operator: 3 strategies;
                        Robot: 1 strategy                Robot: 1 strategy

The perspectives on adjustable autonomy consist in integrating into the robot the strategies employed by the human operators. The robot will thus be able to adjust its autonomy among an increasing number of strategies, which contributes to its resilience. The difficulty is that probably only a subset of the strategies of the human operators can be implemented, which still requires a human-robot interaction. This interaction can be deduced from the approach proposed in section 2 to define the modes of autonomy. By analysing the types of interaction, it is possible to determine the information useful for increasing performance and for reducing the cost of transitions while maintaining a correct situation awareness.

9. ACKNOWLEDGMENTS

This work was performed in the Human-Machine Systems research group of the Laboratoire d'Automatique, de Mécanique et d'Informatique industrielles et Humaines (LAMIH) of the University of Valenciennes. It is supported by the French Defence Procurement Agency (DGA) and takes place in collaboration with the THALES Company.

REFERENCES

Bradshaw, J. M., P. J. Feltovich, H. Jung, S. Kulkarni, W. Taysom and A. Uszok (2004). Dimensions of adjustable autonomy and mixed-initiative interaction. In: AUTONOMY 2003 (M. Nickles, M. Rovatsos and G. Weiss (Eds.)), LNAI 2969, pp. 17-39. Springer-Verlag, Berlin.

Dearden, A. M. and M. D. Harrison (1995). Formalising human error resistance and human error tolerance. In: Proceedings of the Fifth International Conference on Human-Machine Interaction and Artificial Intelligence in Aerospace, EURISCO.

Dorais, G. A., R. P. Bonasso, D. Kortenkamp, B. Pell and D. Schreckenghost (1999). Adjustable autonomy for human-centered autonomous systems on Mars. In: Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI), Workshop on Adjustable Autonomy Systems.

Fiksel, J. (2003). Designing resilient, sustainable systems. In: Environmental Science and Technology, Vol. 37, No. 23.

Fong, T., C. Thorpe and C. Baur (2003). Robot, asker of questions. In: Robotics and Autonomous Systems, Vol. 42, pp. 235-243.

Goldberg, K. and B. Chen (2001). Collaborative control of robot motion: robustness to error. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2001, Maui, HI.

Goodrich, M. A., D. R. Olsen Jr., J. W. Crandall and T. J. Palmer (2001). Experiments in adjustable autonomy. In: Proceedings of the IJCAI Workshop on Autonomy, Delegation and Control: Interacting with Intelligent Agents.

Hollnagel, E. (2006). Achieving system safety by resilience engineering. In: The 1st Institution of Engineering and Technology International Conference on System Safety.

Huang, H., E. Messina, R. Wade, R. English, B. Novak and J. Albus (2004). Autonomy measures for robots. In: Proceedings of IMECE: International Mechanical Engineering Congress.

Inagaki, T. (2003). Adaptive automation: sharing and trading of control. In: Handbook of Cognitive Task Design (E. Hollnagel (Ed.)). Lawrence Erlbaum Associates, Mahwah, NJ.

Ludwig, D., B. Walker and C. S. Holling (1997). Sustainability, stability and resilience. In: Conservation Ecology, Vol. 1, No. 1, Art. 7.

Parasuraman, R., T. B. Sheridan and C. D. Wickens (2000). A model for types and levels of human interaction with automation. In: IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 30, No. 3, May 2000.

Schmidt, K. (1991). Cooperative work: a conceptual framework. In: Distributed Decision Making: Cognitive Models for Cooperative Work (J. Rasmussen, B. Brehmer and J. Leplat (Eds.)), pp. 75-110. John Wiley & Sons Ltd.

Scholtz, J. (2003). Theory and evaluation of human robot interaction. In: Proceedings of the 36th Hawaii International Conference on System Sciences.

Sheridan, T. B. (1992). Telerobotics, Automation and Human Supervisory Control. MIT Press, Cambridge, MA.

Smithers, T. (1997). Autonomy in robots and other agents. In: Brain and Cognition, Vol. 34, pp. 88-106.

Steels, L. (1995). When are robots intelligent autonomous agents? In: Robotics and Autonomous Systems, Vol. 15, No. 1, pp. 3-9, July 1995.