Automation in Construction 99 (2019) 265–277
Contents lists available at ScienceDirect
Automation in Construction journal homepage: www.elsevier.com/locate/autcon
Human adaptation to latency in teleoperated multi-robot human-agent search and rescue teams
T
Amro Khasawneha, Hunter Rogersa, Jeffery Bertranda,c, Kapil Chalil Madathila,b, , Anand Gramopadhyea ⁎
a
Clemson University, Department of Industrial Engineering, 110 Freeman Hall, Clemson, SC 29634, United States of America Clemson University, Glenn Department of Civil Engineering, 116 Lowry Hall, Clemson, SC 29634, United States of America c Clemson University, School of Computing, 326 Engineering Innovation Building, Clemson, SC 29634, United States of America b
ARTICLE INFO
ABSTRACT
Keywords: Search and rescue robots Human-robot teaming Human-robot interaction Trust in automaton Latency Human performance
Teleoperation of unmanned vehicles in high stress environments has been a subject of research in many domains, which focus primarily on system and operator performance. Unmanned ground vehicles for rescue, also known as search and rescue robots, serve as extensions of responders in a disaster, providing real-time video and other relevant information about the situation. However, physically separating responder and robot introduces latency between the human input provided to the unmanned vehicle to execute an operation and the subsequent response provided by the system. This latency (lag or time delay) is determined by the distance and the bandwidth of the connection between the operator and the unmanned vehicle. Automating these systems may mitigate the effect of latency to an extent; however, this has its own consequences, such as leaving the responder out of the loop, which subsequently leads to detrimental effects on situational awareness. This research investigates the relationship between latency and the performance of the human operator of a teleoperated robot at different levels of system complexity and the effect of different levels of automation on this relationship. Eighty participants operated one or two unmanned teleoperated robots to complete two search and rescue tasks. The study utilized a 2 × 2 × 2 mixed-subjects experimental design with the automation level and latency level being the between-subjects factors and the system complexity (controlling one or two robots) being the within-subjects factor. The dependent variables were operator performance, perceived workload, and the subjective rating of trust with automation. A latency of 500 ms showed a significant decrease in performance in time to complete the task and a significant increase in the perceived physical workload. Both the automation level and latency level moderated the system complexity effect on the subjective rating of trust in the robotic system. The level of trust decreased over time in the one-robot condition as opposed to no change in the two-robot condition. The error rate decreased over time at different rates based on the number of robots or the latency level. Based on the results of the study, several design implications are suggested for improving performance including adding features to the automation that will allow the operator to use common strategies and providing necessary information using multiple sensory channels. Future research directions are also proposed.
1. Introduction Teleoperated robotics covers a wide range of technology, specifically unmanned vehicles, operating on the ground, in the air, on the sea surface, or under water. All of which have been widely used in several domains including but not limited to: exploring other planets (e.g., the National Aeronautics and Space Administration (NASA) Mars Rovers); applications in the military (e.g. surveillance/reconnaissance); search and rescue operations (e.g. at the World Trade Center (WTC) after September 11, 2001); automated inspection, construction, or ⁎
maintenance of infrastructure [1–5]; and surgery [6–8]. Several types of input devices can be used to control these teleoperated robots including handheld devices, mobile phones, or joystick controllers, all including a screen displaying the robot's field of view [9–11,71]. Generally, these display panels show the robot view transmitted from its camera and information from its sensors, the command given to the robot, and a map to support the operator's situational awareness and enhance navigation. The size of teleoperated robots can vary; depending on their capabilities or the required task, they can be as small as a few inches in dimension or as large as massive utility vehicles [12].
Corresponding author at: Departments of Civil and Industrial Engineering, Clemson University, S Palmetto Blvd, Clemson, SC 29634, United States of America. E-mail address:
[email protected] (K.C. Madathil).
https://doi.org/10.1016/j.autcon.2018.12.012 Received 15 January 2018; Received in revised form 8 December 2018; Accepted 12 December 2018 0926-5805/ © 2018 Elsevier B.V. All rights reserved.
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
These robots will be an essential part of the United States Army's Future Combat Systems (FCS), in part because of their ability to operate in a variety of difficult and stressful environments [12,72].
operators are not constantly or directly controlling them, humans maintain supervisory control to complete tasks [23]. To do so, these operators need to perform the necessary cognitive tasks of planning and monitoring to achieve the objective of the operation and to increase the safety and efficiency of the system [24,25], a situation that becomes more complex when operators supervise more than one vehicle because of the increased cognitive demand placed on them [23]. As a result, operators teamed with automation systems have been found to exhibit a decrease in performance [26], an increase in overall workload [27], and a reduction in situational awareness (SA) [28,29]. Adding further complexity, manually controlling the teleoperated robots remains a possibility in situations the automation is not designed to handle, meaning the operator must take control and continuously navigate the robot to complete the assigned task.
1.1. Search and rescue robotics Search and rescue robotics, a specific area of teleoperated robotics, employs unmanned aerial (UAV), ground (UGV), underwater (UUV), or water surface (USV) vehicles to assist in search, reconnaissance and mapping, rubble removal, extraction, and other tasks required by disaster relief efforts [13]. These robots, which are particularly effective in situations involving active threats or in mobility-challenged environments, have been used during or after such man-made or natural disasters as the World Trade Center (WTC) attack on September 11, 2001; Hurricane Katrina in 2005; and the Sago Mine incident in West Virginia in 2006 to find trapped survivors and search areas cut off by flooding or made inoperable by carbon monoxide and methane [13]. Such distance operations involve challenges that impact the development of situational awareness due to the absence of direct viewing, which results in the degradation of operator performance [14,73]. These issues need to be addressed by system designers to create appropriate telepresence, which is the feeling of operators that they are immersed in the teleenvironment and directly manipulating the telerobot as an extension of their bodies [15], to enhance the effectiveness of these telerobots. Several factors prevent achieving this telepresence in telerobotics [16], one of which is the disconnect in haptic feedback from movements in a virtual environment. More important is the presence of latency, or time delay, in the data transfer in some systems. This latency, also referred to as end-to-end latency, time delay, or lag, is defined by MacKenzie and Ware [17] as the postponement between the input/ control action from the human operator and the visual output on the display (feedback) notifying the operators that their orders have been executed. This latency can also be found in wilderness search and rescue or post disaster rescue in which communication networks are damaged or weak, specifically in areas after a man-made disaster [18]. In addition, transmitting high-bandwidth display signals through a lowbandwidth channel induces delay [17] as does the slowness of the highinertia system being manipulated or the sluggishness of the computer system in updating precise graphic imagery as the viewing platform is translated over or rotated inside the environment [15]. Studies of human behavior under latency conditions have found that people are able to detect latency as small as 10 ms and that they change their control strategy from continuous control to “move and wait” when the latency is above 1 s, meaning from open-loop to closed-loop control [19,20]. Goodrich and his colleagues suggested that the latency between an operator on Earth and a robot on Mars is approximately 45 min, and 500 ms between a computer and a robot in a laboratory [21].
1.3. Trust in automation Trust, an important variable in human-automation interaction, is a cognitive state that reflects the operator's faith in the system that it is completing the intended task, which is typically assessed through subjective ratings [30–33]. Specific to robotic systems, analyses conducted by Hancock et al. [34] suggest that this trust is influenced by various factors within the system itself as well as in the nature and characteristics of the humans working with it [34]. More recent research conducted by Hoff and Bashir [35] suggests that the level of performance of the robot is the most important indicator of human trust, reinforcing the idea that machine reliability predicts trust in automation, and that trust in turn predicts automation use. Specifically, trust is correlated with dependence, which is an objective behavior reflecting the amount of user interaction with the automated system, i.e., how frequently the user turns it on, adheres to its suggestions, and cross-checks it [36,37,67]. Trust and dependence, ideally, should be correlated with the actual system reliability, referred to as calibrated trust. However, in situations when the automation is highly reliable but not perfect, the operator tends to over-trust it and cease monitoring it, a problem referred to as complacency [38,74]. This miscalibration between trust and reliability is also seen in the opposite situation, when the operator under-trusts or distrusts the automation, abandoning it even when it's accurate as a result of poor feedback or automation complexity [39,40]. In the research presented here, we examine this construct using a dynamic model of trust developed by Lee and See [41], suggesting that trust develops cyclically through the stages of Information Assimilation and Belief Formation, Trust Evolution, Intention Formation, Automation, and Display; within these stages elements of the user, environment and system affect trust development. For example, when assimilating information and forming beliefs at the beginning of the cycle, reputation and gossip affect the beliefs a user holds about a system, and interface features influence the assimilation of information. Later, elements such as workload or self-confidence affect the user's intention formation within the system, a change which impacts trust. Next, in the reliance action stage, time constraints and configuration errors are elements of concern. Further the changes in the capabilities of the automation and its attributable abstraction and level of detail changes the user's trust [41]. In our previous study, we investigated the effect of latency on manually controlled rescue robots, finding that a 500 ms latency reduced the operator's performance and trust, and increased the perceived workload [42]. We suggest that increasing the automation level could help mitigate this effect. However, this approach also has consequences. According to Barnes and his colleagues [75] the timing of feedback from the system (latency) affects the operator's trust in it: the shorter response time of the system, the more likely the operator will trust and rely on it. The opposite is also true: the longer the system take to display its feedback, the more likely the operators will distrust it, abandoning it to rely on their own decisions [43], either by leaving the
1.2. Automation Teleoperated robots represent one type of automation, situations when a computer takes over a task usually performed by a human; the degree to which the computer takes over this work or the distribution of the task between it and the human defines the level of automation, with Parasurman, Sheridan, & Wickens (2000) defining ten levels of automation. According to them, “automation is not all or none, but can vary across a continuum of levels” with Level 1 describing a system with no computer assistance and Level 10 a system in which the computer completes all tasks without any human intervention. The eight levels between these two extremes vary, for instance, from Level 2 where the computer offers a complete set of decisions or actions for the human to make to Level 3 in which the computer narrows the selection of actions to a few alternatives to Level 4 when the computer suggests only one alternative [22]. Teleoperated robots are primarily semi-autonomous because 266
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
human operators out of the loop, reducing their situational awareness and their ability to take over in cases of uncertainty [44–46] or by opposing the purpose of automation, which is to reduce the operator's workload and increase their ability to perform other concurrent tasks [40]. Another consideration for any task involving latency is adaptation. Here, adaptation is defined as the ability of operators to compensate for discrepancies in the system feedback and control input over time, almost being able to predict them and perform at a level such that the effect of latency is reduced [47]. Research using a driving simulator has found that this adaptation can be generalized for different tasks [47]. While the primary focus of this study was to investigate the effects of latency, multi-robot control and possible mitigation through increased automation, the researchers also considered the adaptation phenomenon in how operator performance and potentially other measures such as trust may change over time. The objective of human-centered automation is to distribute tasks between the automation and the operator to achieve an optimal level of performance while keeping the human operator in the loop, ready to intervene as needed. Systematic studies are required to determine the level of automation that reduces the effect of latency on the performance of the system while keeping the human operator in the loop. This paper provides an empirical evaluation of the effect of varying the level of automation, latency, and system complexity in a teleoperated multirobot manipulation. The context is unmanned ground vehicles in a simulated search and rescue task environment. Using the levels of automation proposed by Sheridan [68] as the basis, we used two levels of automation, manual control, and semi-autonomous control. In manual control, the operator is fully responsible for manipulating the robot; in semi-autonomous control, the automation is active in a flat, straight trajectory and the operator is responsible for making directional decisions and taking control in complex environments. Two levels of latency were examined: the latency between the input of the control and the feedback on the screen was either 0 ms or 500 ms. The complexity of the system was varied by the number of robots: either one or two. Real-time and post-study quantitative performance measures and subjective measures of workload and trust in automation were used to study the human-agent teaming process. To explore these issues, the following research questions were proposed for this study: RQ1: What is the effect of latency on operator trust and workload and how does it affect performance? RQ2: How is the relationship among latency, trust, workload, and performance affected when the system complexity increases by providing the operator with an additional robot? RQ3: Will automating the robot manipulation task mitigate the effects of latency and system complexity and what are the related costs? RQ4: How will the operator adaptation phenomenon vary based on the different levels of latency, system complexity, and automation?
between latency and performance such that the decrease in performance due to the increase in latency will be less for a higher level of automation. Hypothesis 4. The level of automation will moderate the relationship between system complexity and performance such that the decrease in performance due to the increase in task complexity will be larger for a lower level of automation. Hypothesis 5. The level of automation will moderate the relationship between latency and trust. The reduction in trust due to the increase in latency will increase as the automation level increases. Hypothesis 6. System complexity will moderate the relationship between latency and trust such that the decrease in trust due to the increase in latency will be larger for more complex tasks. Hypothesis 7. System complexity will moderate the relationship between automation level and trust such that the decrease in trust due to the increase in the automation level will be larger for more complex tasks. Hypothesis 8. There will be a three-way interaction between the level of automation, latency, and system complexity on the operator's perceived workload. In the low level of automation, the system complexity will moderate the relation between latency and perceived workload such that the increase in workload due to the increase in latency will be higher for more complex tasks. However, in the high level of automation, there will be no effect of either the latency or the system complexity. Hypothesis 9. Error rate will decrease over time as the operator begins to adapt to the system, thus increasing the trust level. Hypothesis 10. There will be no effect of any of the independent variables on the system usability.
2. Method 2.1. Participants This study included 80 participants, 47 males and 33 females, ages 20–41 with a mean age of 23.4 and a standard deviation of 3.5, all reported frequent use of computers with 98.75% using them daily. In addition, 73.75% indicated they played video games at least once a week and 17.5% had used a joystick. All had normal or corrected-tonormal vision, without any hearing or motor difficulties, and none of them had any previous experience with the simulated research platform. The detailed participant demographics are reported in Table 1. The study was approved by the Institutional Review Board, and all Table 1 Participant demographics.
1.4. Hypotheses To examine the effects of automation level, latency, and system complexity on the performance, trust, and workload of operators of teleoperated robots, we formulated the hypotheses listed below. Lee and See's model of trust (2004) was used to determine hypothesized effects of conditions on trust.
Variable
Category
Number
%
Gender
Male Female High school graduate 2-year degree Some college 4-year degree Master's degree Doctoral degree Professional degree Daily Weekly Weekly None at all At least once None at all
47 33 1 1 48 15 0 9 6 79 1 59 21 14 66
58.75 41.25 1.25 1.25 60 18.75 0 11.25 7.5 98.75 1.25 73.75 26.25 17.5 82.5
Education
Hypothesis 1. The level of automation will have a positive relationship on operator performance, meaning as the level of automation increases, the operator performance will increase. Computer usage
Hypothesis 2. System complexity will moderate the relationship between latency and performance such that the decrease in performance due to the increase in latency will be larger for more complex tasks.
Video game usage Joystick usage
Hypothesis 3. The level of automation will moderate the relationship 267
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
layout of which was unknown to them before beginning the study. The map at the bottom Fig. 2(b) represents the starting point of the study, while those under Figure (b) and (c) show the maps at several time points farther into the trial. This map acted as a “fog of war” map, an incomplete map that gradually reveals the discovered areas to the participant and was provided at the bottom of the screen. The participants were instructed that an earthquake had occurred, and ten people were stuck inside the building. Their objective was to locate and rescue these ten victims as fast as possible using the robots. The robots were controlled using a joystick, and the ten victims were randomly distributed inside the rooms of the building based on a different distribution for every trial. Fig. 2(f) illustrates a victim being found. 2.5. Experimental design The study used a 2 × 2 × 2 mixed-subjects experimental design, with automation level and latency as the between-subjects variables and the number of robots as the within-subjects variable. Participants were randomly assigned to one of the automation levels (manual or semi-autonomous control) with 40 participants assigned to each. The experiment was counterbalanced such that the participants were randomly assigned to one the following conditions, ten participants to each automation level:
Fig. 1. Experimental setup.
participants read and signed an informed consent form before beginning the study. All were given a $10 Amazon gift card as compensation for their time.
1. No latency, controlling one robot followed by two robots. 2. No latency, controlling two robots followed by one robot. 3. With 500 ms of latency, controlling one robot followed by two robots. 4. With 500 ms of latency, controlling two robots followed by one robot.
2.2. Apparatus The study was conducted in a controlled lab environment using a desktop computer with a 22-inch monitor equipped with a Logitech Extreme 3D PRO joystick and a Tobii X60 mobile eye tracker (www.tobii.com). The simulation and the study conditions were developed using a Unity 3D game engine (www.unity3d.com), and the scripts were written in C#. The experimental setup is shown in Fig. 1.
2.6. Procedure The participants completed the study one at a time. At the beginning of each trial, one of the researchers greeted the participants in the lab, provided them with a short overview of the study and asked them to read and sign a consent form indicating that they understood it. Then the participants completed the demographic survey. Next, the researcher explained the task to the participants/operators and took them through a training session, which resembled the primary task of the study except the building was smaller with different and fewer rooms in the layout and the task involved finding only three victims. Step-by-step instructions on how to use the system were provided on the screen to guide the participants throughout this training session. The training session was conducted using the same latency and level of automation as the trials that the participants would complete using two robots. The researcher then calibrated the eye tracker, and the participants completed the first trial of the experimental task. They were subsequently asked to complete the overall trust questionnaire scale [48], the IBM Computer System Usability Questionnaire scale (IBM-CSUQ) [49,64,66], the NASA Task Load Index (NASA-TLX) [50], and such general debriefing questions as rating the helpfulness of the map or the second robot, if applicable, and providing any strategies used to complete the task. The same procedure then followed for the second condition. It took approximately an hour for each participant to complete the study.
2.3. Independent variables There were two between-subjects and one within-subjects independent variables. One between-subject independent variable was the level of automation at two levels, with the participants randomly assigned to one of the following conditions: (1) manual control, where the human operator had full control of the robot and continuously navigated the robot throughout the environment and (2) semi-autonomous, where the human operator made the decisions at the intersections for the robot, taking full control in more complex environments or when an advanced tactical maneuver was required. Fig. 2(c) shows the settings for the manual control, Fig. 2(d) for those for the semi-autonomous control at the intersections, and Fig. 2(e) settings for semi-autonomous control while the robot is in automatic mode. The second between-subjects variable was level of latency between the controller input and the feedback on the screen, with the participants randomly assigned to one of two conditions: (1) no latency (0 ms) and (2) with latency (500 ms since this is the typical latency between a computer and a robot in a laboratory) [21]. The within-subjects variable was the number of robots (system complexity), with the participants using one or two robots to complete the task for the different trials. Fig. 2(c) shows the settings for controlling one robot, while the remaining parts of this figure show the settings for controlling two robots.
2.7. Dependent variables The eight conditions were assessed using objective and subjective measures. The objective measure was the performance of the operator assessed based on the amount of time needed to complete the task and the error rate. The total time to complete the task was measured in seconds from when the operator began controlling the robot to the time the last victim was found. The error rate was measured as the number of
2.4. Task For the task in this study, the participants controlled the robots in a simulator which consisted of a virtual environment of a building, the 268
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
Fig. 2. Representation of the simulated system interface. (a) the real-time trust rating, (b) The starting point for the semi-autonomous control using two robot settings, (c) Half-way into the study of manual control using one robot setting, (d) the intersection view for semi-autonomous control using two robots, (e) the auto view for semi-autonomous control using two robots, (f) locating a victim for semi-autonomous control using two robots' settings.
errors per 2 min the operator made throughout the study represented by the number of times the robot stopped due to an obstacle (for example, a wall, door frame, or desk) blocking its view. Both of these measures were collected by the simulator. The subjective measures included the workload, trust in the system, and system usability, with workload measured by the NASA-TLX [50] workload assessment instrument. Trust in the system was measured using a scale developed by Jian et al. [48], for which participants were asked to rate their trust in the system on 7-point Likert scale on 12 different dimensions (items) of trust. In addition, real-time trust was collected using the question seen in Fig. 2(a) that appeared every 2 min during the trial, asking the participants to rate their overall trust in the system. The system usability was measured using the IBM-CSUQ [49].
mixed ANOVA were met before conducting the analysis. The repeated measures of error rate and trust over time were analyzed using the repeated measures Hierarchical Linear Modeling (HLM) technique [51]. 3.1. Objective measures The time to complete the task was measured by the simulator in seconds. For all conditions, the system began recording the time as soon as the participants started moving the robots until they found the last victim. The results of the subsequent analysis indicated a statistically significant main effect of latency level on the time to complete the task, F (2, 76) = 4.68, p = 0.03, η2 = 0.06 with the time to complete the task being faster for the no latency group (M = 574.92, SD = 318.82) than the with latency group (M = 711.35, SD = 351.36) as shown in Fig. 3. Two- and three-way interactions were not statistically significant.
3. Results & analysis SPSS 24.0 was used to analyze the data collected. To determine the significant differences, if any, among the study conditions, a three-way mixed ANOVA with a 95% confidence interval was used. If the null hypothesis of the interaction among the independent variables was rejected, subsequent tests to determine the locus of the significant differences were conducted. The appropriate assumptions for a three-way
3.2. Subjective measures 3.2.1. Workload The NASA-TLX workload assessment instrument was used to assess the participants' perceived workload using the scales of mental demand, physical demand, temporal demand, effort, performance, and 269
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
Table 2 Trust items. Item
Mean
SD
Deceptiveness Underhandedness Suspiciousness Wariness Harmful outcomes Confidence Security Integrity Dependability Reliability Familiarity Overall trust Average real-time trust
2.13 1.75 1.57 2.04 1.42 4.47 4.44 4.61 4.7 4.78 4.76 4.87 5.02
1.28 1.23 1.19 1.34 0.89 1.6 1.61 1.65 1.51 1.53 1.62 1.53 1.26
accepted at a Bonferroni-adjusted alpha level of 0.025, with the analysis indicating a statistically significant simple main effect of latency level when the participants used two robots, F (1, 76) = 6.95, p = 0.01, but not when they used one, F (1, 76) = 0.32, p = 0.57. All pairwise comparisons were subsequently conducted for statistically significant simple main effects, the results indicated that when using one robot, mean harmful outcomes were not significantly different between the no latency group (M = 1.5, SD = 1.08) and the with latency group (M = 1.38, SD = 0.90); however, when using two robots, the mean harmful outcomes were higher for the with latency group (M = 1.63, SD = 0.92) than the no latency group (M = 1.18, SD = 0.55), with a mean difference of 0.45, 95% CI (0.11, 0.79), p = 0.01 as seen in Fig. 5. There was a statistically significant two-way interaction between number of robots and level of latency on the reliability “the system is reliable” item of the trust in automation scale, F (2,76) = 5.59, p = 0.02, η2 = 0.07. However, the simple effects were not statistically significantly different from zero, indicating that the slopes have different signs, one positive and one negative; therefore, they are different from each other, but each is not different from zero. The results seen in Fig. 6 indicated that mean reliability was less for the with latency group (M = 4.43, SD = 1.26) compared to the no latency (M = 4.95, SD = 1.80) group when using two robots, F (1,76) = 2.33, with a mean difference of 0.53, 95% CI (−0.16, 1.21), p = 0.13. However, mean reliability showed no difference between the with latency group
Fig. 3. Average time taken to complete the task.
Fig. 4. Average physical demand.
frustration [50]. The results of the analysis showed no statistically significant difference in the total workload among the groups (M = 58.67, SD = 17.4); however, there was a statistically significant main effect of latency level on the physical demand, F (2,76) = 3.94, p = 0.05, η2 = 0.05, with the physical demand being significantly higher for the with latency group (M = 4, SD = 7.42) than the no latency group (M = 1.54, SD = 3.32) as seen in Fig. 4. 3.2.2. Trust in the system The trust items defined in the instrument developed by Jian et al. [48] were used to assess the participants' overall trust in the system in addition to the real-time trust. The items, the average value, and the standard deviation of each one are presented in Table 2. The analysis found a statistically significant two-way interaction between the number of robots and the level of latency on the harmful outcomes “the system's actions will have a harmful or injurious outcome” item of the trust in automation scale, F (2,76) = 7.215, p = 0.01, η2 = 0.09. The statistical significance of a simple main effect was
Fig. 5. Average perceived harmful outcomes. 270
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
Fig. 6. Average perceived reliability.
Fig. 7. Average perceived underhandedness.
(M = 4.88, SD = 1.34) and the no latency group (M = 4.85, SD = 1.66) when using one robot, F (1, 76) = 0.01, p = 0.94. The analysis also indicated a statistically significant two-way interaction between automation level and number of robots for the underhandedness “the system behaves in an underhanded manner” item of the trust scale, F (2,76) = 8.62, p = 0.004, η2 = 0.10. The statistical significance of a simple main effect was again accepted at a Bonferroniadjusted alpha level of 0.025, with the results indicating a statistically significant simple main effect of automation level when the participants used two robots, F (1, 76) = 7.35 p = 0.008, but not when they used one, F (1, 76) = 0.30, p = 0.59. All pairwise comparisons were subsequently conducted for the statistically significant simple main effects. When using one robot, mean underhandedness was not significantly different between the semi-autonomous group (M = 1.63, SD = 0.95) and the manual control group (M = 1.75, SD = 1.08); however, when using two robots, mean underhandedness was higher for the semi-autonomous control (M = 2.18, SD = 1.50) than the manual control (mean = 1.45, SD = 0.75), with a mean difference of 0.73, 95% CI (0.19, 1.26), p = 0.008 as seen in Fig. 7. In addition, there was a statistically significant two-way interaction between automation level and number of robots for the suspiciousness “I am suspicious of the system's intent, action, or outputs” item of the trust scale, F (2,76) = 5.23, p = 0.025, η2 = 0.06. However, the simple effects were not statistically significantly different from zero, an outcome indicating that the slopes have different signs, one being positive and the other negative. Therefore, they are different from each other, but each is not different from zero. As seen in Fig. 8, mean suspiciousness was greater for the semi-autonomous control group (M = 1.85, SD = 1.50) than for the manual control group (M = 1.38, SD = 1.00) when using two robots, F (1,76) = 2.83, with a mean difference of 0.48, 95% CI (−0.09, 1.04), p = 0.097. However, the results found no difference in mean suspiciousness between the semi-autonomous control group (M = 1.50, SD = 1.13) and the manual control group (M = 1.55, SD = 1.09) when using one robot, F (1, 76) = 0.04, p = 0.841.
Fig. 8. Average perceived suspiciousness.
the study. The mean and standard deviation of each item and the overall usability of the system were system usability (M = 5.43, SD = 1.17), information quality (M = 5.7, SD = 0.80), interface quality (M = 5.40, SD = 1.10), and the overall usability (M = 5.52, SD = 0.91). 3.3. Time dependent measures Hierarchical linear modeling was used to analyze repeated measures models with measurement occasion (every 2 min) and number of robots being level one predictors and latency and automation level being level two predictors of trust and error rate across the study [51]. This analysis allows for the examination of the influence of higher level variables (e.g., latency and automation levels) on lower level variables (e.g., measurement occasion and number of robots) either through direct effects on intercepts or through cross-level moderation of slopes [52].
3.2.3. The system usability The usability of the system was measured subjectively using the IBM-CSUQ [49]. The participants rated the usability of the system through three different 7-point Likert scales. There was no statistically significant difference in usability between the different conditions of 271
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
Table 3 Trust over time.
Effect Level 1 measurement occasion Level 1 number of robots Level 2 latency Level 2 automation Measurement occasion × number of robots Change in model R2
Model 1
Model 2
Models 3
Model 4
Model 5
Model 6
Estimate (standard error)
Slope estimate (standard error)
Slope estimate (standard error)
Slope estimate (standard error)
Slope estimate (standard error)
Slope estimate (standard error)
5.00 (0.14)
5.07 (0.14) –0.03 (0.02)
5.03 (0.14) −0.01 (0.01) 0.17 (0.10)
–
< 0.001
< 0.001
5.03 −0.02 0.19 −0.24
(0.14) (0.01) (0.06) (0.27)
< 0.001
5.03 −0.02 0.19 −0.24 −0.27
(0.14) (0.01) (0.06) (0.27) (0.27)
5.04 (0.14) −0.03(0.01) 0.04 (0.08) −0.23 (0.27) 0.06 (0.02)⁎
< 0.001
0.055
Note: N = 80. R2 is the HLM version of the reduction in variance. ⁎ Significant at 95% confidence level.
3.3.1. Trust over time Given the hierarchical nature of the study, it was important to first determine a baseline (or null) model for the individual-level trust over time [51]. The results from the Mixed Models analysis of the intercept only model, which is Model 1 in Table 3, indicate that 69% of the total over time trust variance exists between participants and 31% of the total over time trust variance exists within participants, results supporting using the mixed model approach. The level one variable, the number of robots, and the level two variables of latency and automation level were mean-centered, and the measurement occasion was set so that Time 1 was zero to provide a meaningful zero point (i.e., time zero) [65]. The predictors were entered into the model hierarchically to determine the incremental prediction of trust after each addition (see Table 3). In Model 2, to determine the linear trend of trust over time, the measurement occasion was entered as both a fixed effect and a random effect to determine the random error term, the results found that trust did not significantly differ over the measurement occasions (p = 0.136). In Model 3, the number of robots was entered as both a fixed and random effect, the results indicated no significant difference (p = 0.07). In Model 4, the level two variable, latency, was entered as a fixed effect, the results indicated that it was not significantly related to trust over time (p = 0.382). In the fifth model, the automation level, which was entered as a fixed effect, did not show any significant relation to the trust over time (p = 0.318). Model 6 included the interaction term between the level one variable of measurement occasion and the level one variable of number of robots as a fixed effect, the results found a statistically significant interaction between them (p = 0.005), and the model reduced the residual variance by 5.5%. The simple slopes analysis indicated that the trust level did not differ across time when operators controlled two robots, p > 0.05; however, the trust level decreased over time when the operators controlled one robot, p < 0.05 as seen in Fig. 9. No other interactions were significant.
Fig. 9. Trust over time by the number of robots.
fixed effect and did not show any significant relation to the error rate (p = 0.935). Model 5 included the interaction term between the level one variable, measurement occasion, and the level one variable, number of robots, as a fixed effect, the results found a statistically significant interaction between the number of robots and the measurement occasion, p < 0.01. The simple slopes analysis indicated that the effect of measurement occasion on the error rate when the participants controlled one robot (B = −0.219, p < 0.01) was higher than when they controlled two robots (B = −0.19) as seen in Fig. 11. Model 6 included the interaction term between measurement occasion and the latency level as a fixed effect, the results indicated a significant interaction between measurement occasion and latency level, p = 0.004. Simple slopes analysis showed that the effect of measurement occasion on error rate was higher in the no latency condition (B = −0.19, p < 0.01) than in the with latency condition (B = −0.11, p < 0.01) as seen in Fig. 12. In the last model, Model 7, we included the interaction between the level one variable, the number of robots, and the level two variable, automation level, as a fixed effect, and found a significant interaction between the level of automation and the latency level, p = 0.034. Pairwise comparisons showed no difference in the error rate in the manual control condition when participants used one or two robots. However, participants made fewer errors with one robot in the semi-autonomous condition as seen in Fig. 13.
3.3.2. Error rate Given the hierarchical nature of the study and that the error rate followed a Poisson distribution, we used a generalized linear mixed model with a Poisson distribution to analyze the error rate data. As seen in Table 4, the predictors were again entered hierarchically. In Model 1, to determine the linear trend of error rate, measurement occasion was entered as both a fixed effect and as a random effect to determine the random error term. Errors significantly differed over the measurement occasions, p < 0.01: as the measurement occasion increased by one unit (2 min), the error rate reduced 0.86 units as seen in Fig. 10. In Model 2, the number of robots, which was also entered as both a fixed and random effect, did not show any significant difference (p = 0.565). In Model 3, the level two variable, latency level, was entered as a fixed effect, the results indicated it was not significantly related to error rate (p = 0.372). In the fourth model, automation level was entered as a
3.4. Qualitative data Map Helpfulness and Second Robot Helpfulness scales were used in the debriefing surveys, the former being included after every trial and the latter only after those trials in which the operator controlled two 272
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
Table 4 Error rate.
Effect Intercept Level 1 measurment occasion Level 1 number of robots Level 2 latency Level 2 automation Measurment occasion number of robots Measurment occasion latency Number of robots automation
Model 1
Model 2
Model 3
Model 4
Model 5
Model 6
Model 7
Estimate (standard error)
Estimate (standard error)
Estimate (standard error)
Estimate (standard error)
Estimate (standard error)
Estimate (standard error)
Estimate (standard error)
2.96 −0.15 0.06 0.09
2.96 −0.15 0.06 0.09 0.01
2.96 (0.05) −0.15 (0.02)⁎
2.95 (0.05) −0.16 (0.02) 0.03 (0.06)
(0.05) (0.02) (0.02) (0.10)
(0.14) (0.02) (0.02) (0.10) (0.10)
2.95 (0.05) −0.17 (0.02) −0.05 (0.06) 0.05 (0.10) −0.01 (0.10) 0.04 (0.01)⁎
2.95 (0.05) −0.17 (0.02) −0.05 (0.06) 0.03 (0.10) −0.01 (0.1) 0.05 (0.01) 0.09 (0.03)⁎
2.95 −0.17 −0.05 0.03 0.01 0.04
(0.05) (0.02) (0.06) (0.10) (0.10) (0.01)
0.08 (0.03) 0.23 (0.11)⁎
Note: N = 80. ⁎ Significant at 95% confidence level.
Fig. 12. Average error rate by latency. Fig. 10. Average error rate.
Fig. 11. Average error rate by number of robots. Fig. 13. Average error rate by number of robots and latency.
robots to gain the operator's perception of having an additional robot. These scales were rated from 1 to 5, with 1 being not helpful at all and 5 being extremely helpful. The map helpfulness mean rating was 4.44 and rated as 4.46 among those who specifically stated they used the map in
their strategies. Overall, the second robot was rated 3.30 among those who stated they used it. The operators used a variety of strategies. Of the users in the one273
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
robot trials, 33 of 80 mentioned using a room-by-room or a circular search pattern. Of the users in the two-robot trials, 43 of 80 divided the map into two halves, allotting one robot to each; 8 used the second robot when they became frustrated or stuck; 3 mentioned using the robot but gave no specific details, and 11 participants stated they did not use the secondary robot. An additional two-robot strategy that was used in only 7 of the 40 trials in the semi-autonomous condition was sending one robot on a path, then adding the secondary robot, using both almost simultaneously. Overall, in 54 of the 160 trials, the participants indicated they relied on the map in their strategy.
operators were not as immersed in the environment as they were in the manual control condition, resulting in their committing more errors. In general, the analysis of error rate over time indicated some adaptation to the system. As the measurement occasion increased, the average error rate across conditions decreased, indicating that as operators completed the task, the error rate decreased, an effect that was seen more in the onerobot condition than the two-robot condition and in the no latency versus the with latency condition. In general, operators adapted to the system over time, making fewer errors, but they adapted faster when controlling only one robot or in the no latency condition. Trust in automation is an important measure to consider in a Human System Interaction, with numerous frameworks having been developed to explain the factors affecting it in an interaction. Here we used a dynamic model of trust developed by Lee and See [41] to link results seen in workload and performance to explore how trust is developed in a system. Trust in this framework is developed over time in a cycle, and segments of that cycle are affected by different factors as discussed in the introduction. There were two-way interactions between latency and system complexity on two items of trust: harmful outcomes (the rating of the belief that the system has harmful outcomes which decreases trust) and reliability (increased reliability indicates higher trust in the system). In both items, as system complexity increased (i.e. as the number of robots increased from 1 to 2), the effect of latency on trust was lower when controlling two robots with latency than without latency. This expected result, which was hypothesized in Hypothesis 6, can be understood using the model and the performance and workload results. Based on the model, as workload increased in the latency condition because of overcorrection, errors increased, the combination of these elements affected the intention formation stage of the cycle and reduced trust. In this scenario, relying on the general automation (the use of the teleoperated platform) was required, meaning reduced intention and reduced overall trust. The decrease in trust seen in the with latency condition can also be explained by the factors affecting the automation segment of the cycle. The data found here about automation including the performance affected by latency suggest that a chronic error in the automation reduces trust in it. Automation moderated the effect of system complexity on two items of trust, underhandedness and suspiciousness. In the semi-autonomous condition, trust was lower than in the manual control condition, but only in the more complex task (the two-robot condition). This result, which was expected and hypothesized in Hypothesis 7, can be explained through the automation stage of the trust model. As the level of automation (or in the element list, level of detail) increased, trust in the system changed. In the semi-autonomous condition, operators not only had to assess their trust in the teleoperated system but the ability of the autonomous control as well. The disparity seen as system complexity increases indicates a change in interface features, an element that affects the information assimilation stage. The introduction of the second robot changed the operator's initial assessment of the system. Another explanation for this disparity is Endsley's theory of situation awareness. Developing levels 1 through 3 of situation awareness requires increased effort as the system complexity increases: specific to the research reported here, having to maintain two frames (two local environments created by two robots) could affect the trust in the system [53]. While general measures of overall trust (either through questionnaires after each trial or by the average of real-time ratings) were obtained, an analysis of the real-time trust measures was conducted to determine if trust changed over time based on condition. There was an interaction between system complexity and measurement occasion such that trust decreased over time in the one-robot condition but remained relatively the same in the two-robot condition. In general, trust decreased over time in the one-robot condition possibly because controlling only one robot created a perception of a longer time period to
4. Discussion The objective of this study was to investigate the effect of latency on a teleoperated robot operator's perceived workload, performance, and trust at different levels of system complexity and to determine how this effect changes at different levels of automation. We hypothesized that latency and system complexity would negatively affect performance and workload, and that the automation level would reduce this effect at the cost of reducing the operator's trust level. Below we discuss the significant effects of the independent variables and the interactions between them on each dependent variable to better understand the factors that play a role in such teleoperated tasks, including suggesting design implications and areas for future research. The effect of latency on perceived workload was clear: as latency increased physical workload increased. Participants in this study experienced more physical demand in the with latency condition than those in the no latency condition because of the overcorrection tendencies seen in the former. Many times, in a task with latency involved, operators will overcorrect or over compensate for the lack of immediate response seen in the simulation and continue moving the controls, resulting in more movement than necessary. This overcorrection also resulted in the participants committing more errors in the with latency condition, a situation discussed in the following sub-section. Performance was assessed based on the time to complete the task and the error rate. Latency level and system complexity (number of robots) affected the operator's performance: as latency increased, performance decreased by increasing the time to complete the task. In terms of error rate, performance decreased over time as more errors were committed, especially when the operators controlled two robots or experienced some latency. This result can be explained by the time delay in visual feedback confirming that the operator's order had been carried out, which resulted in the delayed perception of the operator about the system's current state; thus, it took the operator more time to form intentions about the next move and increased the time to complete the task. In addition, the delayed perception of the system's current state meant that the operator based their intention formation on delayed information representing a previous state of the system. Thus, the operator did not act in response to the current system state, which lead to overcorrection and more errors. When the operators used more than one robot, each time they switched from one to the other, they established an understanding of its orientation and location in the overall environment and manipulated it in the environment. Thus, they performed two tasks concurrently, mentally understanding the surroundings and controlling the robot. Since understanding the location and orientation is most likely the primary goal, operators allocated more attentional resources to these tasks, reducing their performance of the secondary task of controlling the robot, thus accumulating more errors. We assumed that automating the robots would address this effect; however, we found that the average error rate also changed based on an interaction between system complexity and automation level such that the operators committed more errors when they controlled two semi-automated robots instead of one. We hypothesized that when the operators used two robots, the task of mentally understanding the surroundings when switching between them would become more difficult and take more time since the 274
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
complete the task, whereas in the two-robot condition users were more engrossed in completing the task. This decrease in trust could also be explained by a feeling of an increased time constraint, which could affect trust in the reliance action stage. There was no difference in system usability among the different conditions of the study. This result was expected in Hypothesis 10 since there was no difference in the overall system interface across all the conditions. The results of this study showed increasing the latency level increased the operators' perceived workload and reduced their performance. This increase in workload in addition to the effect of increased system complexity led to reduced operator trust in the system. We expected that automating the system would enhance the performance at the cost of the operator's trust. However, operators with a higher level of automation did not perform significantly better than the operators with manual control; that is, the automation added to the system was not enough to compensate for the reduction in performance caused by the latency level; however, it reduced the trust level. The automation level was enough to remove the operators from the loop, leaving them to make decisions at the intersection points only and remain idle while the robot was in “auto mode,” causing the operator to distrust the system without any performance improvement. This situation implies that a higher level of automation might be needed in such systems to increase performance. The designer of the task should also consider keeping the operator in the loop through other tasks such as allowing simultaneous control of more than one robot that either work together to accomplish the same mission or different missions. Performance could be improved or at least maintained at the same level as that of the manual control, but the productivity of the operators could be increased by allowing them to perform other missions in parallel. Based on our observations and the debriefing surveys, we suggest several design considerations. The system could be more efficient by providing redundant information about the places that have already been searched so that the operator does not have to inspect them again, steps in the process which decrease performance and increase the workload. Highlighting moving objects would help the operator better discriminate between targets (victims) and non-targets (obstacles, i.e. chairs). Algorithms similar to those used in motion detecting security cameras could be used [54]. This could provide a decision aid to support quick scanning of areas, a behavior observed in these studies. Better mapping between the features on the screen and the corresponding features on the map could help to divide attention between the two sources of information. Higher fidelity displays, which impact performance, workload or trust in this task could be considered [55,56]. Also, divided attention could be enhanced by relying on different sensory channels to relay the necessary information to the operator. For example, haptic feedback and sonification could be used to provide directions to the operator. In addition, enhancing the capability of the automation itself by allowing it to use the strategies that operators usually employ could improve efficiency. For example, splitting the environment into 2 sections to reduce travel distance when 2 robots are available could influence the use of the second robot such that each is assigned only half the overall area. Another strategy influencing design could use the robots simultaneously in the semi-autonomous condition as increasing the level of automation could facilitate using the robots simultaneously with higher efficiency. Increasing the automation level further and allowing the operator to handle more robots simultaneously will affect an operator's situational awareness and trust. Reduction in an operator's situational awareness hinders the development of a correct and complete mental model and degrades the operator's skills and self-confidence, which affects intentional formation and ultimately, trust, as suggested by Lee and See [41]. Future studies need to investigate methods to increase user situational awareness and achieve an appropriate level of operator trust in automation to enhance system performance and reduce the overall workload. For example, using sonification to increase the amount of information available to the operator while limiting the strain on the
display or haptic feedback, which has been seen to mitigate latency effect under 2 s [57], could be introduced. Sonification could be simple audio feedback to enhance the feeling of presence and gather information about surroundings [58] or specific sound cues to guide navigation [59], in either case sonification has been shown to be an effective tool in improving performance in teleoperated tasks [12]. Haptic feedback also enhances performance in teleoperated tasks by providing distance information of objects in the robot's environment to prevent collisions [60,61] or force feedback, providing haptic responses when a robot collides with an object or gets within a certain range, which is more commonly researched in the area of robot assisted surgery [62,63]. This study has several limitations including, that the study participants were primarily undergraduate and graduate students who were computer savvy and likely to have video game experience, meaning they may have performed better on average than the general public. However, they were not trained search and rescue operators. In addition, this study did not use a physical robot controlled remotely in real time and the latency was constant, both of which limit the generalizability of the results found here. Future work could test a higher level of automation, and then investigate not only how it affects the operator's trust in automation and performance, but how it's going to affect situational awareness as well. 5. Conclusion This research investigated the effect of latency, automation level, and system complexity on the teleoperation of unmanned vehicles. The results found that different measures of performance were affected by the latency level such that an increased level of latency reduced the operator's performance by increasing the time it took to complete the task and increasing the error rate. The findings from this study reinforce the notion that robot reliability predicts trust in automation. The results suggested increased levels of workload with increased latency. Specifically for tasks with latency, operators tend to overcorrect or over compensate for the lack of immediate response seen in the simulation and continue moving the controls, resulting in more movement than necessary. This overcorrection resulted in the participants committing more errors in the with latency condition. Increased latency also led to reduced trust level. We hypothesized increasing the automation level would improve operator performance, but with a deterioration in the trust level. However, this was not the case: the increased automation level neither improved performance nor reduced trust. This indicates that a higher level of automation could improve performance without reducing the operator's trust in the system. In fact, trust could potentially increase due to the improved performance. Considering operational environments that may include distributed, non-collocated adaptive mixed agent-teams with variable transmission latencies, subsequent studies will develop quantitative time-series trust models for real-time teleautonomous robotic operations and trust-based function allocation schemes for effective human-automation collaboration for search and rescue tasks. Acknowledgements Hunter Rogers was supported by the United States Department of Defense SMART program. References [1] S. Cebollada, L. Payá, M. Juliá, M. Holloway, O. Reinoso, Mapping and localization module in a mobile robot for insulating building crawl spaces, Autom. Constr. 87 (2018) 248–262, https://doi.org/10.1016/j.autcon.2017.11.007. [2] P. Chotiprayanakul, D.K. Liu, G. Dissanayake, Human–robot–environment interaction interface for robotic grit-blasting of complex steel bridges, Autom. Constr. 27 (2012) 11–23, https://doi.org/10.1016/j.autcon.2012.04.014. [3] D. Lee, N. Ku, T.-W. Kim, K.-Y. Lee, J. Kim, S. Kim, Self-traveling robotic system for
275
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al.
[4] [5] [6]
[7]
[8]
[9] [10] [11]
[12] [13]
[14] [15] [16] [17]
[18] [19]
[20]
[21]
[22] [23]
[24]
[25] [26] [27] [28]
autonomous abrasive blast cleaning in double-hulled structures of ships, Autom. Constr. 19 (8) (2010) 1076–1086, https://doi.org/10.1016/j.autcon.2010.07.011. E. Menendez, J.G. Victores, R. Montero, S. Martínez, C. Balaguer, Tunnel structural inspection and assessment using an autonomous robotic system, Autom. Constr. 87 (2018) 117–126, https://doi.org/10.1016/j.autcon.2017.12.001. B. Sutter, A. Lelevé, M.T. Pham, O. Gouin, N. Jupille, M. Kuhn, ... P. Rémy, A semiautonomous mobile robot for bridge inspection, Autom. Constr. 91 (2018) 111–119, https://doi.org/10.1016/S1474-6670(17)45204-9. C. Bulich, A. Klein, R. Watson, C. Kitts, Characterization of delay-induced piloting instability for the Triton undersea robot, Proceedings of the IEEE Aerospace Conference 2004, Vol. 1 IEEE, 2004, , https://doi.org/10.1109/AERO.2004. 1367625. J. Casper, R.R. Murphy, Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center, IEEE Trans. Syst. Man Cybern. B Cybern. 33 (3) (2003) 367–385, https://doi.org/10.1109/TSMCB.2003. 811794. L.A. Nguyen, M. Bualat, L.J. Edwards, L. Flueckiger, C. Neveu, K. Schwehr, M.D. Wagner, E. Zbinden, Virtual reality interfaces for visualization and control of remote vehicles, Auton. Robot. 11 (1) (2001) 59–68, https://doi.org/10.1023/ A:1011208212722. T. Fong, C. Thorpe, Vehicle teleoperation interfaces, Auton. Robot. 11 (1) (2001) 9–18, https://doi.org/10.1023/A:1011295826834. T. Fong, C. Thorpe, B. Glass, Pdadriver: a handheld system for remote driving, IEEE International Conference on Advanced Robotics (No. LSRO2-CONF-2003-004), 2003 infoscience.epfl.ch/record/29962/files/ICAR03-TF.pdf. A. Sekmen, A.B. Koku, S. Zein-Sabatto, Human robot interaction via cellular phones, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 2003, Vol. 4 IEEE, 2003, October, pp. 3937–3942, , https://doi.org/ 10.1109/ICSMC.2003.1244503. J.Y. Chen, E.C. Haas, M.J. Barnes, Human performance issues and user interface design for teleoperated robots, IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 37 (6) (2007) 1231–1245, https://doi.org/10.1109/TSMCC.2007.905819. R.R. Murphy, S. Tadokoro, D. Nardi, A. Jacoff, P. Fiorini, H. Choset, A.M. Erkmen, Search and rescue robotics, Springer Handbook of Robotics, Springer, Berlin Heidelberg, 2008, pp. 1151–1173, , https://doi.org/10.1007/978-3-540-303015_51. L.H. Frank, J.G. Casali, W.W. Wierwille, Effects of visual display and motion system delays on operator performance and uneasiness in a driving simulator, Hum. Factors 30 (2) (1988) 201–217, https://doi.org/10.1177/001872088803000207. C.D. Wickens, S.E. Gordon, Y. Liu, J. Lee, An Introduction to Human Factors Engineering, (1998) ISBN: 0-13-18373-2. H.G. Stassen, G.J.F. Smets, Telemanipulation and telepresence, International Federation of Automatic Control Proceedings, Vol. 28 (15) 1995, pp. 13–23, , https://doi.org/10.1016/S1474-6670(17)45204-9. I.S. MacKenzie, C. Ware, Lag as a determinant of human performance in interactive systems, Proceedings of the INTERACT'93 and Computer-Human Interaction'93 Conference on Human Factors in Computing Systems, ACM, 1993, May, pp. 488–493, , https://doi.org/10.1145/169059.169431. J.Y. Chen, M.J. Barnes, Human–agent teaming for multirobot control: a review of human factors issues, IEEE Trans. Hum. Mach. Syst. 44 (1) (2014) 13–29, https:// doi.org/10.1109/THMS.2013.2293535. S.R. Ellis, K. Mania, B.D. Adelstein, M.I. Hill, Generalizeability of latency detection in a variety of virtual environments, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 48, No. 23 Sage CA, Los Angeles, 2004, September, pp. 2632–2636, , https://doi.org/10.1177/154193120404802306. J.C. Lane, C.R. Carignan, B.R. Sullivan, D.L. Akin, T. Hunt, R. Cohen, Effects of time delay on telerobotic control of neutral buoyancy vehicles, Proceedings of the IEEE International Conference on Robotics and Automation, 2002, Vol. 3 IEEE, 2002, pp. 2874–2879, , https://doi.org/10.1109/ROBOT.2002.1013668. M.A. Goodrich, D.R. Olsen, J.W. Crandall, T.J. Palmer, Experiments in adjustable autonomy, Proceedings of IJCAI Workshop on Autonomy, Delegation and Control: Interacting with Intelligent Agents, American Association for Artificial Intelligence Press, Seattle, WA, 2001, August, pp. 1624–1629, , https://doi.org/10.1109/ ICSMC.2001.973517. R. Parasuraman, T.B. Sheridan, C.D. Wickens, A model for types and levels of human interaction with automation, IEEE Trans. Syst. Man Cybern. Syst. Hum. 30 (3) (2000) 286–297, https://doi.org/10.1109/3468.844354. H.A. Ruff, S. Narayanan, M.H. Draper, Human interaction with levels of automation and decision-aid fidelity in the supervisory control of multiple simulated unmanned air vehicles, Presence Teleop. Virt. 11 (4) (2002) 335–351, https://doi.org/10. 1162/105474602760204264. H. Rogers, M. Al Ghalayini, K.C. Madathil, A preliminary study to investigate the sensemaking process of UAV reports by operators after periods of disconnect for threat assessment, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 62, No. 1 SAGE Publications, Los Angeles, CA, 2018, September, pp. 1796–1800, , https://doi.org/10.1177/1541931218621407. T.B. Sheridan, Telerobotics, Automation, and Human Supervisory Control, Massachusetts Institute of Technology Press, 1992 ISBN: 0-262-19316-7. C.D. Wickens, A.S. Mavor, J. McGee, N.R. Council, et al., Panel on human factors in air traffic control automation, Flight to the Future: Human Factors in Air Traffic Control, 1997 ISBN: 0-309-52525-X. E. Wiener, D. Nagel, Human Factors in Aviation, Gulf Professional Publishing, 1988 ISBN: 978-0-12-374518-7. N.B. Sarter, D.D. Woods, Pilot interaction with cockpit automation: operational experiences with the flight management system, Int. J. Aviat. Psychol. 2 (4) (1992) 303–321, https://doi.org/10.1207/s15327108ijap0204_5.
[29] N.B. Sarter, D.D. Woods, Pilot interaction with cockpit automation II: an experimental study of pilots' model and awareness of the flight management system, Int. J. Aviat. Psychol. 4 (1) (1994) 1–28, https://doi.org/10.1207/ s15327108ijap0401_1. [30] L. Bainbridge, Ironies of automation, Analysis, Design and Evaluation of ManMachine Systems 1982, 1983, pp. 129–135, , https://doi.org/10.1016/B978-0-08029348-6.50026-9. [31] B.M. Muir, Trust in automation: part I. theoretical issues in the study of trust and human intervention in automated systems, Ergonomics 37 (11) (1994) 1905–1922, https://doi.org/10.1080/00140139408964957. [32] R. Parasuraman, R. Molloy, I.L. Singh, Performance consequences of automationinduced ‘complacency’, Int. J. Aviat. Psychol. 3 (1) (1993) 1–23, https://doi.org/ 10.1207/s15327108ijap0301_1. [33] E.L. Wiener, R.E. Curry, Flight-deck automation: promises and problems, Ergonomics 23 (10) (1980) 995–1011, https://doi.org/10.1080/ 00140138008924809. [34] P.A. Hancock, D.R. Billings, K.E. Schaefer, J.Y.C. Chen, E.J. de Visser, R. Parasuraman, A meta-analysis of factors affecting trust in human-robot interaction, Hum. Factors 53 (5) (2011) 517–527, https://doi.org/10.1177/ 0018720811417254. [35] K.A. Hoff, M. Bashir, Trust in automation: integrating empirical evidence on factors that influence trust, Hum. Factors 57 (3) (2015) 407–434, https://doi.org/10.1177/ 0018720814547570. [36] J.E. Bahner, A.-D. Hüper, D. Manzey, Misuse of automated decision aids: complacency, automation bias and the impact of training experience, Int. J. Hum. Comput. Stud. 66 (9) (2008) 688–699, https://doi.org/10.1016/j.ijhcs.2008.06.001. [37] A. Ponathil, S. Agnisarman, A. Khasawneh, S. Narasimha, K.C. Madathil, An empirical study investigating the effectiveness of decision aids in supporting the sensemaking process on anonymous social media, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 61, No. 1 SAGE Publications, Los Angeles, CA, 2017, September, pp. 798–802, , https://doi.org/10.1177/ 1541931213601693. [38] A. Khasawneh, A. Ponathil, N. Firat Ozkan, K. Chalil Madathil, How should i choose my dentist? A preliminary study investigating the effectiveness of decision aids on healthcare online review portals, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 62, No. 1 SAGE Publications, Sage CA: Los Angeles, CA, 2018, September, pp. 1694–1698, , https://doi.org/10.1177/ 1541931218621383. [39] S. Agnisarman, A. Khasawneh, A. Ponathil, S. Lopes, K.C. Madathil, A qualitative study investigating the sensemaking process of engineers performing windstorm risk surveys, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 62, No. 1 SAGE Publications, Sage CA: Los Angeles, CA, 2018, September, p. 1776, , https://doi.org/10.1177/1541931218621402. [40] R. Parasuraman, V. Riley, Humans and automation: use, misuse, disuse, abuse, Hum. Factors 39 (2) (1997) 230–253, https://doi.org/10.1518/ 001872097778543886. [41] J.D. Lee, K.A. See, Trust in automation: designing for appropriate reliance, Hum. Factors 46 (1) (2004) 50–80, https://doi.org/10.1518/hfes.46.1.50_30392. [42] H. Rogers, A. Khasawneh, J. Bertrand, K.C. Madathil, An investigation of the effect of latency on the operator's trust and performance for manual multi-robot teleoperated tasks, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 61, No. 1 SAGE Publications, Los Angeles, CA, 2017, September, pp. 390–394, , https://doi.org/10.1177/1541931213601579. [43] R. Parasuraman, M. Barnes, K. Cosenzo, S. Mulgund, Adaptive automation for human-robot teaming in future command and control systems, Army Research Lab Aberdeen Proving Ground MD Human Research and Engineering Directorate, 2007 www.dtic.mil/dtic/tr/fulltext/u2/a503770.pdf. [44] T. Ferris, N. Sarter, C.D. Wickens, Cockpit automation: still struggling to catch up, Human Factors in Aviation, Second edition, 2010, pp. 479–503, , https://doi.org/ 10.1016/B978-0-12-374518-7.00015-8. [45] E.E. Geiselman, C.M. Johnson, D.R. Buck, T. Patrick, Flight deck automation: a call for context-aware logic to improve safety, Ergon. Des. 21 (4) (2013) 13–18, https:// doi.org/10.1177/1064804613489126. [46] J.D. Lee, N. Moray, Trust, self-confidence, and operators' adaptation to automation, Int. J. Hum. Comput. Stud. 40 (1) (1994) 153–184, https://doi.org/10.1006/ijhc. 1994.1007. [47] D.W. Cunningham, A. Chatziastros, M. Von der Heyde, H.H. Bülthoff, Driving in the future: temporal visuomotor adaptation and generalization, J. Vis. 1 (2) (2001) 88–98, https://doi.org/10.1167/1.2.3. [48] J.-Y. Jian, A.M. Bisantz, C.G. Drury, Foundations for an empirically determined scale of trust in automated systems, Int. J. Cogn. Ergon. 4 (1) (2000) 53–71, https:// doi.org/10.1207/S15327566IJCE0401_04. [49] J.R. Lewis, IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use, Int. J. Hum. Comput. Interact. 7 (1) (1995) 57–78, https://doi.org/10.1080/10447319509526110. [50] S.G. Hart, L.E. Staveland, Development of NASA-TLX (Task Load Index): results of empirical and theoretical research, Advances in Psychology, Vol. 52 1988, pp. 139–183 North-Holland https://doi.org/10.1016/S0166-4115(08)62386-9. [51] S.W. Raudenbush, A.S. Bryk, Hierarchical Linear Models: Applications and Data Analysis Methods, Vol. 1 SAGE, 2002 ISBN: 0-7619-1904-X. [52] D.A. Hofmann, An overview of the logic and rationale of hierarchical linear models, J. Manag. 23 (6) (1997) 723–744, https://doi.org/10.1177/014920639702300602. [53] M.R. Endsley, Toward a theory of situation awareness in dynamic systems, Hum. Factors 37 (1) (1995) 32–64 ISBN: 9781351548564. [54] J.J. Uebbing, U.S. Patent No. 7,643,055, U.S. Patent and Trademark Office, Washington, DC, 2010patentimages.storage.googleapis.com/48/f9/4e/
276
Automation in Construction 99 (2019) 265–277
A. Khasawneh et al. 7c5bfc672f9ff9/US7643055.pdf. [55] J. Bertrand, A. Bhargava, K.C. Madathil, A. Gramopadhye, S.V. Babu, The effects of presentation method and simulation fidelity on psychomotor education in a bimanual metrology training simulation, 2017 IEEE Symposium on 3D User Interfaces (3DUI), 2017, pp. 59–68, , https://doi.org/10.1109/3DUI.2017.7893318. [56] A. Bhargava, J.W. Bertrand, A.K. Gramopadhye, K.C. Madathil, S.V. Babu, Evaluating multiple levels of an interaction fidelity continuum on performance and learning in near-field training simulations, IEEE Trans. Vis. Comput. Graph. 24 (4) (2018) 1418–1427, https://doi.org/10.1109/TVCG.2018.2794639. [57] J.-Y. Zhou, J.-P. Zhou, Z.-C. Jiang, Design and validation of novel teleoperation rendezvous and docking system, J. Aerosp. Eng. 27 (5) (2012) 04014017, , https:// doi.org/10.1061/(ASCE)AS.1943-5525.0000264. [58] P.R. Liu, M.H. Meng, Acoustic display for navigation in internet-based teleoperation, Intelligent Robots and Systems, 2005. 2005 IEEE/RSJ International Conference on, IEEE, 2005, August, pp. 4161–4165, , https://doi.org/10.1109/ IROS.2005.1545300. [59] A. Vasilijevic, K. Jambrosic, Z. Vukic, Teleoperated path following and trajectory tracking of unmanned vehicles using spatial auditory guidance system, Appl. Acoust. 129 (2018) 72–85, https://doi.org/10.1016/j.apacoust.2017.07.001. [60] N. Diolaiti, C. Melchiorri, Teleoperation of a mobile robot through haptic feedback, Haptic Virtual Environments and Their Applications, IEEE International Workshop 2002, IEEE, 2002, pp. 67–72, , https://doi.org/10.1109/HAVE.2002.1106916. [61] C.E. Lathan, M. Tracey, The effects of operator spatial perception and sensory feedback on human-robot teleoperation performance, Presence Teleop. Virt. 11 (4) (2002) 368–377, https://doi.org/10.1162/105474602760204282. [62] B.T. Bethea, A.M. Okamura, M. Kitagawa, T.P. Fitton, S.M. Cattaneo, V.L. Gott, ... D.D. Yuh, Application of haptic feedback to robotic surgery, J. Laparoendosc. Adv. Surg. Tech. A 14 (3) (2004) 191–195, https://doi.org/10.1089/ 1092642041255441. [63] R. Kokes, K. Lister, R. Gullapalli, B. Zhang, A. MacMillan, H. Richard, J.P. Desai, Towards a teleoperated needle driver robot with haptic feedback for RFA of breast tumors under continuous MRI, Med. Image Anal. 13 (3) (2009) 445–455, https:// doi.org/10.1016/j.media.2009.02.001. [64] S.O. Agnisarman, K.C. Madathil, K. Smith, A. Ashok, B. Welch, J.T. McElligott, Lessons learned from the usability assessment of home-based telemedicine systems,
Appl. Ergon. 58 (2017) 424–434, https://doi.org/10.1016/j.apergo.2016.08.003. [65] J.J. Hox, M. Moerbeek, R. Van de Schoot, Multilevel Analysis: Techniques and Applications, Routledge, 2017 ISBN: 9-7813-1730-8683. [66] K.C. Madathil, G.F. Alapatt, J.S. Greenstein, An investigation of the usability of image-based CAPTCHAs, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 54, No. 16 Sage, Los Angeles, CA, 2010, September, pp. 1249–1253, , https://doi.org/10.1177/154193121005401603. [67] B. Sadrfaridpour, H. Saeidi, J. Burke, K. Madathil, Y. Wang, Modeling and control of trust in human-robot collaborative manufacturing, in: R. Mittu, D. Sofge, A. Wagner, W.F. Lawless (Eds.), Robust Intelligence and Trust in Autonomous Systems, Springer US, Boston, MA, 2016, pp. 115–141, , https://doi.org/10.1007/ 978-1-4899-7668-0_7. [68] T.B. Sheridan, Humans and automation: system design and research issues, Human Factors and Ergonomics Society, 2002, https://doi.org/10.1108/k.2003.06732iae. 001. [69] Tobii. (2015, April 27). Retrieved May 10, 2018, from www.tobii.com/ [70] Unity, (n.d.). Retrieved May 10, 2018, from, www.unity3d.com/. [71] S. Fish, UGVs in future combat systems, Unmanned Ground Vehicle Technology VI, Vol. 5422 International Society for Optics and Photonics, 2004, September, pp. 288–292, , https://doi.org/10.1117/12.537966. [72] M.J. Barnes, J.Y.C. Chen, K.A. Cosenzo, D.K. Mitchell, Human robot teams as soldier augmentation in future battlefields: an overview, Proc. 11th Int. Conf. Hum.–Comput. Interact. Vol. 11 2005 0-8058-5807-5. [73] J.C. Lane, C.R. Carignan, B.R. Sullivan, D.L. Akin, T. Hunt, R. Cohen, Effects of time delay on telerobotic control of neutral buoyancy vehicles, Robotics and Automation, 2002. Proceedings. ICRA'02. IEEE International Conference, Vol. 3 IEEE, 2002, pp. 2874–2879, , https://doi.org/10.1109/ROBOT.2002.1013668. [74] I.L. Singh, R. Molloy, R. Parasuraman, Automation-induced “complacency”: development of the complacency-potential rating scale, Int. J. Aviat. Psychol. 3 (2) (1993) 111–122, https://doi.org/10.1207/s15327108ijap0302_2. [75] M. Barnes, R. Parasuraman, K. Cosenzo, Adaptive automation for military robotic systems, Uninhabited Military Vehicles (UMVs): Human Factors Issues in Augmenting the Force, NATO Brussels, Belgium, 9789283700609, 2006, pp. 423–443.
277