A Hybrid Approach to Training Expert Skills in Highly Automated Systems: Lessons from Air Traffic Management




13th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems, Aug. 30 - Sept. 2, 2016, Kyoto, Japan


IFAC-PapersOnLine 49-19 (2016) 207–211

Brian Hilburn

Center for Human Performance Research, CHPR BV, Voorburg, the Netherlands ([email protected])

Abstract: In air traffic management (ATM), as in many other domains, automation is increasingly capable of performing more strategic and “cognitive” aspects of system performance. This paper sets out a potential hybrid approach to automation design, which assumes qualitatively different challenges at the introductory and mature phases of automation implementation. Whereas operator acceptance seems the critical issue at the time of automation introduction, skills development and maintenance seem most significant as expertise accrues. This proposed hybrid marries the notions of strategic conformance and adaptive automation to achieve a design approach in which, over the span of the skill acquisition cycle, the automation is fitted to the novice, and the expert is fitted to the automation. Further, this approach assumes that training can (and indeed must) ultimately extend the training criterion from that of (heuristically based) expert operator performance, to that of (algorithmically based) optimized system performance.

© 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

Keywords: Training, adaptive automation, human centered automation, air traffic management.

1. INTRODUCTION

Over the next 20 years, global demand for air travel is predicted to increase roughly 5% annually, and the in-service jet fleet to grow by nearly 90% (JADC, 2015). Meanwhile, commercial and environmental concerns will mean increasingly complex routing, to minimize fuel burn and delays. Together, these factors are driving the need for improved planning and coordination functions, which in turn will likely require increasingly sophisticated automation.

In air traffic management (ATM), as in many other domains, automation has historically been used to control more mundane “housekeeping” tasks, with higher level tasks left to the human operator. This view was captured many decades ago in Paul Fitts’s Machines-are-better-at, Men-are-better-at (MABA-MABA) approach to function allocation, which assigned individual tasks to the more capable agent, either human or machine (Fitts, 1951). For instance, whereas humans are more adept at perception, judgment, improvisation and induction, machines excel at speed, power, computation and short term memory tasks. The MABA-MABA view has fallen into disfavour over the years, as technological capabilities have evolved and the line between human and machine capabilities has blurred (Bye et al., 1999; Hoffman et al., 2002). Automation is increasingly capable of assuming greater authority and autonomy, and of performing more strategic and “cognitive” aspects of system performance.

One overarching challenge currently facing various work domains is how to design advanced automation in such a way that it is both used, and used in a beneficial manner (Parasuraman & Riley, 1997), to balance the potentially competing demands of greater automation and operator skill retention. This paper sets out, based on theoretical and empirical evidence, a potential hybrid approach to automation design that supports the training and maintenance of expert skills. As laid out in the following sections, this paper assumes that future automation design faces qualitatively different challenges at the introductory and mature phases of implementation.

2. HUMAN- VS TECHNOLOGY-CENTERED APPROACHES TO AUTOMATION DESIGN

The technology-centered approach (TCA) to automation design views the human as a source of potential error, and starts from the position that tasks should therefore be automated whenever possible. In a sense, this view dates back to a paradox recognized by Fitts in the 1950s: If we understand how a human performs a task, we can construct a mathematical model of that task that should allow us to create a device, program or computer to perform the task at least as well as the human. To the extent that the human can be compared to a machine, he can be replaced, and designed out of the system. TCA argues that keeping the human in the loop is, by definition, impossible if system performance is extended through fundamentally new tasks, or tasks that cannot be overseen or performed by the human. If the system is to perform tasks that the human is incapable of performing (or performing, say, at the same rate or accuracy), then the underlying process should be a black box to the operator. According to this view, it is sufficient that input/output

2405-8963 © 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2016.10.522



relationships are clear to the operator, and design need not consider transparency of the intervening processing. After all, the argument goes, “dumbing down” automation to the level of the human will limit system performance, and risks recreating human error modes.

The notion of human-centered automation (HCA) traces its roots to the work of Billings (1997). Based on empirical evidence from the flight deck and ATM, his seminal (and lengthy) treatise argued that operators in an automated system must be kept informed, active and in-the-loop. The risks of an out-of-the-loop operator include: situation awareness problems, reversion-to-manual difficulties under off-nominal conditions, and mis-calibrated trust (either insufficient or excessive). The aim of this paper is not to wade into the debate between the technology- and human-centered automation schools, but to note that these two viewpoints draw a clear contrast between the problem solving styles typically employed by human and automation—the former tend to rely on heuristic “rules of thumb,” the latter on optimized algorithms. In the case of ATM, a mathematically optimized solution (e.g., one that minimizes total flightpath distance across a traffic pattern) might not fit with that of the human (Rantanen & Nunes, 2005; Prevot et al., 2012). Even if the human could derive the optimized solution, it might be too mentally demanding to implement and monitor. This is evident in controllers’ “set-and-forget” strategy of turning aircraft onto parallel headings to ensure separation, or their tendency to turn slower aircraft behind faster ones (Kirwan & Flynn, 2002). Although these strategies might sometimes be sub-optimal from a mathematical standpoint, they ease cognitive burden and safeguard against failures to detect a future loss of separation.

This difference in human vs machine problem solving styles, and underlying mechanisms, is potentially important as we consider how to design advanced automation systems so as to best develop and maintain expert skills. As the following section discusses, this issue might be especially critical at early stages, when such automation is first introduced.

2. INTRODUCING ADVANCED AUTOMATION: THE ACCEPTANCE PROBLEM

Acceptance has been identified as one of the greatest obstacles to introducing new ATM automation (JPDO, 2011; Hilburn, 2003). Trends in ATM suggest that automation will likely become more strategic in both timescale and control authority, less transparent to the controller, and act via resolution advisories. Picture a system that advises the controller, say 20 minutes in advance, to resolve medium-term conflicts which the controller might have difficulty evaluating. In a real sense, automation would become an agent in the ATM process, much like a human colleague—and as with a human colleague, its advice can be ignored. This is especially likely to happen if its benefits are not perceived. Herein lies a paradox: a controller might only rely on such an advisory system if its benefits are obvious, yet those benefits will not be obvious until it is used.

EUROCONTROL’s CORA project set out to build a prototype advanced advisory system for strategic deconfliction (Kirwan & Flynn, 2002). The project recognized that controller initial acceptance was critical to its introduction (Hilburn, 2000), and tried to ensure that the system would solve problems like a human would. Although the project offered some promising results, it (like similar efforts before it) was hindered in one important regard: one cannot guarantee similarity between human and machine solutions in a built system. More recently, Westin and colleagues (Westin et al., 2013) explored the impact of controller acceptance in a more fundamental way, by asking: if automation were to perform in a way that perfectly matches that of the controller, would controllers accept such automation? The concept of “strategic conformance” was defined as

…the degree to which automation’s behavior and apparent underlying operations match those of the human.

Westin et al. assessed controller acceptance of “automated” air traffic scenarios that were in fact unrecognizable replays of either a given controller’s own previous performance (by definition, this was “conformal” with his/her own strategy), or that of a colleague (who had chosen a slightly different solution strategy). They found that acceptance of automated advisories was significantly higher for conformal solutions, 76% vs 57% (F(1,15)=10.6, p<.01). Similarly, agreement was significantly higher, and response time significantly lower, for conformal vs non-conformal advisories. If, as these results suggest, strategic conformance can help foster acceptance of new automated advisory systems, the benefit would appear to lie in the initial deployment phase, when controllers are first introduced to new advisory automation. Ultimately, though, if we are to realize the benefits of such advanced automation, it is not enough that it simply matches the controller’s way of working. It must extend the controller’s capabilities, as discussed above in section 1. But how do we design automation in such a way that it helps develop and maintain expert skills? One promising approach, as outlined in the following section, draws on the rich body of evidence on adaptive automation and intelligent tutoring system concepts.
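The idea of strategic conformance implies a measurable quantity. As a toy illustration (entirely hypothetical: the feature names and similarity metric below are invented, and are not the scenario-replay method of Westin et al.), one might score an advisory against an operator's own past solutions to similar conflicts:

```python
# Toy sketch of a "strategic conformance" score: how closely an automated
# advisory matches an operator's own past solutions to similar problems.
# Feature names and the similarity metric are hypothetical illustrations.

def conformance_score(advisory, past_solutions):
    """Mean feature-wise similarity, in [0, 1], between an advisory and a
    set of historical solutions (each a dict of numeric features)."""
    if not past_solutions:
        return 0.0
    similarities = []
    for past in past_solutions:
        shared = set(advisory) & set(past)
        if not shared:
            continue
        # 1 - normalized absolute difference, averaged over shared features
        per_feature = [
            1.0 - min(abs(advisory[k] - past[k]) / (abs(past[k]) or 1.0), 1.0)
            for k in shared
        ]
        similarities.append(sum(per_feature) / len(per_feature))
    return sum(similarities) / len(similarities) if similarities else 0.0

# Hypothetical resolution features: heading change (deg), speed change (kt)
advisory = {"heading_change": 20.0, "speed_change": 0.0}
own_history = [{"heading_change": 20.0, "speed_change": 0.0}]         # conformal
colleague_history = [{"heading_change": 5.0, "speed_change": -30.0}]  # non-conformal
```

Under this toy metric, an advisory that replays the operator's own strategy scores 1.0, while a colleague's differing strategy scores lower, loosely mirroring the conformal vs non-conformal contrast observed in the study.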

3. OPTIMIZING ADVANCED AUTOMATION: THE EXPERTISE PROBLEM

The potential challenges that experts would face under this type of envisioned automation would be slightly different from the preceding. Automation would have to work hand-in-hand with the controller, but priority would now shift from matching the controller, to handling complex traffic flows. This would require high performance automation assuming control of some higher level functions. Much has been written over the years about the potential costs of static automation, in which a system operates at a fixed level of automation (LOA), and task allocation remains fixed between human and machine. Potential human performance costs include problems relating to monitoring and supervisory control, complacency, vigilance, situation awareness, and skill maintenance.

The concept of closed-loop adaptive automation (AA), in which task allocation can transfer dynamically between human and machine, has been proposed as a countermeasure to the types of potential human and system performance problems cited above (Opperman, 1994; Parasuraman, 2000). AA generally assumes that the system itself has the authority to reallocate tasks between human and machine (the related notion of adaptable automation also assumes task reallocation, albeit under the control of the operator). William Rouse, one of the theoretical pioneers in the field of AA (cf. Rouse, 1988), proposed some general guidelines for system design. He argued that, as task demands increase, the system should [a] vary its level of assistance, [b] increase human-machine interaction, and [c] assume greater authority for task allocation.

The precise form of adaptation is not specified here. System parameters (i.e., the factors that define performance), threshold values, and gain levels would all have to be set and tuned, so as to achieve closed-loop stability and learning response. Parameters might include, for instance, intervention timescale, path deviations, etc. In the end, and as proficiency increased, automated support should reach a minimal value. At this point, the system could remain in a monitoring role, and thus serve as a backstop and occasional tutor, as required. Training could be embedded naturalistically within the operational task, thereby blurring the line between the backup and tutor functions. Such an approach would embrace one important construct of HCA: that automation should be capable of monitoring and overseeing the human, and intervening when necessary.

AA concepts underlie aspects of both adaptive training and intelligent tutoring (which additionally consider pedagogical elements of training efficiency, feedback timing and duration, learning hysteresis, etc.). Essentially, the level of support is dynamic across the skill acquisition cycle. As a novice develops expertise, the system itself can adapt its level of support, as well as its training methods and criterion measure (i.e., its training goal). This raises two fundamental issues:

Trigger mechanisms—On what basis does an adaptive system infer the need for adaptation?

Training goals—What should the criterion performance measure be, and how does this vary across the skill acquisition cycle?

3.1 Trigger mechanisms

One of the fundamental design issues associated with AA is the mechanism by which the system infers the need for adaptation. Broadly speaking, three different “trigger mechanisms” have been distinguished (Rouse, 1988):

Operator measurement—relies on measures of, for instance, an operator’s actual performance, or on physiological measures (e.g. eye tracking, or brain wave derived measures) as proxies for pending performance change;

Critical events logic—adaptation is triggered so as to adhere to strict mission or organizational doctrine; and

Performance modeling—inference is based on some underlying model (for example, a simple memory-based model might predict performance degradation when short term memory store exceeds seven items).

Notice that these trigger mechanisms are not necessarily distinct and orthogonal. That is, there is generally some overlap. For instance, a measurement-based scheme in fact assumes some underlying model. Where discussion of the modeling approach is useful, in our present context, is in specifying the criterion measure(s) against which performance is judged. As discussed later, in the proposed hybrid training concept this criterion changes over the course of the skill acquisition cycle.

3.2 Training goals

Over the years, various studies into the “best practices” of air traffic controllers (Kirwan and Flynn, 2002; Loft et al., 2007) have shown that experts and novices differ in their control strategies. Controller training, historically, has relied on the concept of training the ab initio (novice) controller to mimic the performance of an expert. As noted in section 1, however, automation may fundamentally alter the role and context of the air traffic controller, and place greater emphasis not only on performing like an expert, but on performing in a way that maximizes system performance in the context of a new automated system.

Take a hypothetical (and overly simplified) example: it is widely acknowledged that controllers tend not to use speed adjustments in en route airspace as a method for fine tuning spacing. It is also acknowledged that this is largely a perceptual problem. But what if a simple “speed tuning” tool were available that allowed the result of speed adjustments to be visualized? The point is simply that the training goal would become not one of matching expert performance, but of moving toward optimized (in this case, speed-tweaked airspace) performance. This paper therefore argues that the traditional view of ATM training, which aims to mimic expert performance, while necessary, is not sufficient. Again, automation, if it is to be useful, must extend the current capabilities of the unaided human operator. Therefore, training will have to have as its goal not merely matching the performance of the expert, but ultimately optimizing the performance of the system (which, admittedly, will have the effect of redefining expertise). In effect, at the sharp end of the skill acquisition cycle, the expert will have to come to think more like the machine.
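The three trigger mechanisms distinguished above (after Rouse, 1988) might be sketched as simple predicates. All thresholds and event names below are illustrative assumptions, not values from the paper:

```python
# Sketch of the three adaptive-automation "trigger mechanisms" (Rouse, 1988),
# rendered as simple predicates. Thresholds and event names are invented.

def measurement_trigger(error_rate, threshold=0.2):
    """Operator measurement: adapt when a measured (or physiologically
    inferred) performance index degrades past a threshold."""
    return error_rate > threshold

def critical_event_trigger(event):
    """Critical events logic: adapt on doctrinally defined events."""
    critical_events = {"predicted_loss_of_separation", "sector_overload"}
    return event in critical_events

def model_trigger(items_in_memory, capacity=7):
    """Performance modeling: a simple memory-based model predicts
    degradation once short-term memory load exceeds ~7 items."""
    return items_in_memory > capacity
```

As the text notes, these mechanisms overlap in practice: the threshold inside `measurement_trigger` itself encodes a rudimentary performance model.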

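The shift in training criterion across the skill acquisition cycle, from matching expert heuristics toward algorithmically optimized performance, might be sketched as a weighted blend. This is a hypothetical illustration; the linear weighting is an assumption, not a scheme from the paper:

```python
# Hypothetical sketch of a training criterion that shifts over the skill
# acquisition cycle: novices are scored on similarity to expert "best
# practices"; as training progresses, weight shifts toward algorithmically
# optimized system performance. The linear blend is illustrative only.

def training_score(expert_similarity, optimality, progress):
    """Blend two criteria in [0, 1]; progress runs from 0 (novice) to 1."""
    w = min(max(progress, 0.0), 1.0)
    return (1.0 - w) * expert_similarity + w * optimality
```

Under such a blend, a solution that closely mimics expert heuristics but is mathematically sub-optimal would score well early in training and progressively worse later, which is the redefinition of expertise this section describes.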


In simple terms, this process can be thought of in three stages of skill acquisition (in fact, the process is continuous), as follows: In the first (Conformance) stage, the system mimics the operator’s strategies so as to foster initial acceptance and usage. In the second (Expert) stage, the system nudges the operator toward expert strategies. For example, the system might propose to a controller solutions more consistent with those of experts in a similar situation. In the third (Optimization) stage, operator performance is nudged from heuristic- toward algorithmic-based strategies. For example, the system might propose to a controller solutions that produce higher global or local airspace resilience values.

4. TOWARD A HYBRID MODEL OF AUTOMATION ASSISTANCE

This article has argued that the fundamental challenges faced by designers of advanced automation are qualitatively different at the novice and expert skill levels: how to foster acceptance in the novice, and how to develop and maintain skill in the expert. Although the challenges are different at the two levels, one can reasonably argue that the underlying factor is the same: a mismatch between human and machine strategy. As outlined in the previous section, optimizing this match might rely in the first instance on nudging the novice toward expert strategies, but also (ultimately) on nudging performance from heuristic to optimized control strategies.
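The three-stage progression described above might be sketched as a simple mapping from estimated proficiency to a nominal stage. The numeric boundaries are invented for illustration; the text stresses that the underlying process is continuous:

```python
# Sketch of the three-stage progression (Conformance -> Expert ->
# Optimization). Stage boundaries are illustrative assumptions.

def training_stage(proficiency):
    """Map a normalized proficiency estimate (0..1) to a nominal stage."""
    if proficiency < 0.33:
        return "conformance"    # mimic the operator, foster acceptance
    if proficiency < 0.66:
        return "expert"         # nudge toward expert strategies
    return "optimization"       # nudge toward algorithmic optimality
```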

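One way to picture how the level of automated support might be scheduled against the remaining gap between current and optimized performance, tapering to a small residual monitoring level, is the following hypothetical sketch. The gain and floor constants are invented for illustration:

```python
# Hypothetical sketch: automated support proportional to the remaining
# performance gap, never dropping below a small residual "monitoring"
# floor. Gain and floor constants are invented for illustration.

def support_level(performance, optimum, gain=0.5, floor=0.05):
    """Support proportional to the performance gap, with a residual
    level that preserves the backstop/tutor role."""
    gap = max(optimum - performance, 0.0)
    return max(gain * gap, floor)
```

As performance approaches the optimum, the computed support settles at the floor value, leaving the system in a monitoring role.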
This notion is captured in figure 1. As experience with advanced automation accrues, automation should rather quickly decrease its reliance on strategic conformance. That is, once an operator begins to use the system, the system should stop mimicking the operator, and rather quickly adapt its level of support so as to move the human strategy toward optimized performance. The training criterion would be, in the first instance, defined by the “best practices” or heuristics of experts (cf. Kirwan & Flynn, 2002). Later, performance would be judged against “algorithmically” optimized performance.

The level of adaptive support (the top black curve) would be variable. This “second order” adaptive automation would mean that the adaptation rate would lessen as performance asymptotes, and only minor corrections would likely be required. This diagram is not meant to suggest that the human necessarily ever equals machine performance on a given task; perceptual or memory limitations, or display shortcomings, might prevent this. The aim of adaptive training would be to shift the task performance line vertically at asymptote; that is, for an adaptive training capability embedded within automation to bring human performance closer to the optimal.

Fig. 1. A hybrid adaptive approach to training advanced automation skills.

5. CONCLUSIONS

Despite decades of effort, the drive for truly human-centered automation (HCA) has not resulted in clear prescriptive guidance. Moreover, the split between the human- and technology-centered camps seems as strong as ever. The former rightly argues that the human should be kept in the loop, if only for unforeseen events. The latter rightly argues that automation must extend system capabilities beyond those of the human, if there is to be any net benefit.

One potential way forward, to bridge this divide, might lie in marrying the concepts of strategic conformance and adaptive training, in such a way that automation and human capabilities are fitted to one another. Coupling the potential benefits of strategic conformance (for novices) and adaptive training (for experts) might enable us to realize true human-centered automation, in which the machine is fitted to the human, the human is fitted to the machine, and each oversees the other.

REFERENCES

Billings, C.E. (1997). Aviation automation: The search for a human-centered approach. Mahwah, NJ, USA: Erlbaum.

Bye, A., Hollnagel, E. & Brendeford, T.S. (1999). Human-machine function allocation: a functional modelling approach. Reliability Engineering & System Safety, 64, 291-300.

Fitts, P.M. (1951). Human engineering for an effective air navigation and traffic control system. US National Research Council: Washington, DC.

Hilburn, B. (2003). Evaluating human interaction with advanced air traffic management automation. Report RTO-MP-088. Brussels: NATO.

Hilburn, B. (2000). Controller acceptance of automation, CORA2 final report. EUROCONTROL technical report.

Hoffman, R.R. et al. (2002). A rose by any other name…would probably be given an acronym. IEEE Intelligent Systems, 17, 72-80.



JADC (2015). Worldwide market forecast 2015-2034. Tokyo: Japan Aircraft Development Corporation.

JPDO (2011). NextGen avionics roadmap. Joint Planning and Development Office: Next Generation Air Transportation System.

Kirwan, B., & Flynn, M. (2002). Investigating air traffic controller conflict resolution strategies. Report ASA.01.CORA.2.DEL04-B.RS. Brussels: EUROCONTROL.

Loft, S., Sanderson, P., Neal, A. & Mooij, M. (2007). Modeling workload in en route air traffic control: Critical review and broader implications. Human Factors, 49(3), 376-399.

Opperman, R. (1994). Adaptive user support. Hillsdale, NJ: Erlbaum.

Parasuraman, R. (2000). Designing automation for human use: Empirical studies and quantitative models. Ergonomics, 43(7), 931-951.

Parasuraman, R. & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230-253.

Prevot, T., Homola, J.R., Martin, L.H., Mercer, J.S., and Cabrall, C.D. (2012). Toward automated air traffic control: Investigating a fundamental paradigm shift in human/systems interaction. International Journal of Human-Computer Interaction, 28(2), 77-98.

Rantanen, E.M., & Nunes, A. (2005). Hierarchical conflict detection in air traffic control. The International Journal of Aviation Psychology, 15(4), 339-362.

Rouse, W. (1988). Adaptive aiding for human/computer control. Human Factors, 30(4), 431-443.

SESAR (2007). Deliverable D3: The ATM target concept. Publication DLM 0612 001 0200a. Brussels: SESAR Joint Undertaking (SJU).

Westin, C.A., Borst, C. & Hilburn, B. (2013). Mismatches between automation and human strategies: An investigation into future air traffic management decision aiding. In Proceedings of the International Symposium on Aviation Psychology (ISAP) 2013. Dayton, OH, USA: Wright State University.

