Cognitive Behavior Based Framework for Operator Learning: Knowledge and Capability Assessment through Eye Tracking




Antonio Espuña, Moisès Graells and Luis Puigjaner (Editors), Proceedings of the 27th European Symposium on Computer Aided Process Engineering – ESCAPE 27 October 1st - 5th, 2017, Barcelona, Spain © 2017 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/B978-0-444-63965-3.50498-0

Laya Das (a), Babji Srinivasan (b), Rajagopalan Srinivasan (b,c)

(a) Department of Electrical Engineering, Indian Institute of Technology Gandhinagar, Gandhinagar 382355, India
(b) Department of Chemical Engineering, Indian Institute of Technology Gandhinagar, Gandhinagar 382355, India
(c) Department of Chemical Engineering, Indian Institute of Technology Madras, Chennai 600036, India
[email protected]; [email protected]

Abstract
Safety in process plants is of paramount importance. With human error repeatedly identified as the predominant contributor to accidents in the process industries, skilled operators are necessary to prevent accidents and minimise the impact of abnormal situations. Such knowledge and skills are imparted to operators using an operator training simulator (OTS), which offers a simulated environment of the real process. However, these techniques emphasize assessing the operator's ability to follow standard guidelines; assessment of the operator's process knowledge and the imparting of an adequate mental model are not addressed. Further, understanding the cognitive behavior of operators, identified as crucial to enhancing their skills and abilities, is often neglected. In this work, we develop a systems engineering framework for operator training that explicitly accounts for the cognitive abilities of the human-in-the-loop. The framework consists of three distinct components: (1) design of suitable training tasks, (2) measurement and analysis of the operator's cognitive response while performing the tasks, and (3) inference of the operator's mental model through knowledge and capability assessment. Consider the operator as a system whose input is the information acquired from the process through the human machine interface (HMI) and whose output is the set of actions taken on the process (such as manipulating valves). We demonstrate in this paper that the available input (from eye tracking) and output (operator actions) data, when suitably analysed with respect to the process state, can aid in inferring the operator's mental model at any given time. Based on the model, the operator's current knowledge can be deduced and gaps identified. New training tasks can then be designed to address these gaps. In this article, we describe the proposed framework for operator learning and illustrate it using experimental studies.
Keywords: Process Safety, Operator Training, Mental Model, Eye Tracking, Directed Graph

1. Introduction
Safety in process plants is of paramount importance. Industries across the globe have developed and adopted several technological advancements, such as sophisticated control and automation, to prevent accidents from happening and/or for minimizing


their impact whenever they occur. However, the continued occurrence of accidents despite these advancements has led to research attention being focused on detailed analysis of such events, which has revealed that human error is one of the major causes behind such accidents (US CSB, 2011). With increasing process complexity, acquiring an adequate mental model of the process has become increasingly challenging for operators (Kluge et al., 2014), which leads to gaps between the mental model and process operation. Moreover, most of the training provided in process industries is provided on the job, where not all possible training/learning scenarios are considered. As a result, while the acquisition of a mental model is ensured, the acquisition of an adequate model is somewhat left to chance (Kluge et al., 2014). In fact, inadequate operator training on new equipment was identified as one of the causes of the Bayer CropScience explosion in 2008 (US CSB, 2011). It is therefore important to develop training programs with emphasis on the cognitive abilities of the operator, and to tailor training tasks so as to impart an adequate mental model of the process. A major module of training programs is the operator training simulator (OTS), which allows trainee operators to understand and operate the process in a simulated environment that mimics the "look and feel" of the process. Research attention has been focused on developing comprehensive OTS that can accurately mimic plant behavior in different operating conditions and abnormalities (Patle et al., 2014). Intelligent training systems based on expert behavior (Shin and Venkatasubramanian, 1996) and techniques for operator performance assessment based on the deviation of the sequence of actions taken by the operator from a predetermined sequence (Lee et al., 2000) have also been proposed.
Recent studies claim that the use of virtual reality for training operators improves the operator's cognitive readiness (Manca et al., 2013). However, most of these techniques emphasize improving the simulation capabilities of the OTS and assessing the operator's ability to follow standard guidelines; the task of explicitly imparting knowledge of the fundamental relations and operating principles of the process is not addressed. Further, there are few approaches that focus on understanding the cognitive behavior of operators, identified as crucial to enhancing operators' skills and abilities (Bullemer and Nimmo, 1994). Our previous work revealed that accounting for the cognitive processes of operators during an OTS task is crucial to evaluating their ability (Sharma et al., 2016). In this work, we attempt to address this gap in training programs by proposing a framework that explicitly accounts for the cognitive behavior of operators. In this framework, we evaluate the learning process by inferring the operator's mental model. The mental model is developed based on the input information (from eye tracking data) obtained by the operator using the Human Machine Interface (HMI) and the actions performed during the task. Using this approach we focus on the cognitive skills of trainee operators and propose techniques to identify gaps in the operators' mental model, which can then be addressed by tailoring the training tasks to suit the individual operator.

2. Learning: A Systems Engineering Framework
The typical setup of a process plant control room environment consists of an operator interacting with the plant through the human machine interface (HMI) of the distributed control system (DCS). The operator also receives information from the plant, alarm systems and operator support systems (such as fault diagnosis modules). In such an


environment, the operator continuously acquires information about the process and takes control actions to maintain the plant within operating limits; the quality of these actions depends on the accuracy of the operator's mental model of the process. In a training/learning task, the operator builds/updates their mental model based on the information acquired through the HMI in relation to the actions performed (such as slider actions to manipulate valve movement). This iterative process of information acquisition, hypothesis generation and action to confirm the diagnosis updates the mental model of the operator in a manner similar to "punctuated equilibrium" in evolution, wherein the development of the model occurs in isolated events of rapid change separated by long periods of little or no development (Denzau and North, 1994).

Figure 1 Operator learning from an HMI through information acquisition and action, and the systems engineering approach to mental model identification

Let us denote all the information acquired by the operator through the HMI, including information about the process, its control, the alarm system and the support systems, at any given instant as I(t). Based on this information, during an abnormal situation the operator develops a mental model M(t) of the plant and generates a hypothesis about the action(s) required. In order to confirm the hypothesis and bring the plant back to normal conditions, the operator performs actions A(t), such as opening or closing control valves. The operator then establishes a relation between these actions and the behaviour of the process through the information acquired at I(t+1), and develops/updates their mental model, denoted M(t+1), as depicted in Figure 1. This process of learning continues throughout task execution. In order to develop an effective training program, it is necessary to understand and evaluate the accuracy of the operator's mental model. This is traditionally done by analysing the operator's response, such as the correctness of actions and the time to completion of the task. Such a method evaluates the mental model based solely on the actions taken by (output from) the operator, while neglecting their relation to the information acquired by (input to) the operator prior to taking those actions. The input information I(t) acquired by the operator can now be identified using eye tracking data; both eye gaze and pupil measurements can be used to identify I(t). With knowledge of both I(t) and A(t), we can adopt system identification techniques to infer the operator's mental model as it evolves in time.
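In this notation, the learning loop described above can be summarised compactly (this is a paraphrase of the text, not a formula given in the paper; f and g denote the unknown maps to be identified):

```latex
A(t) = g\big(M(t), I(t)\big), \qquad
M(t+1) = f\big(M(t), A(t), I(t+1)\big)
```

System identification then amounts to estimating M(t) from the observed input-output pairs (I(t), A(t)).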


3. Experimental Studies
An experimental setup for observing and evaluating learning in a process plant control room environment has been used in this study. The process plant consists of a continuously stirred tank reactor (CSTR) that produces ethanol from water and ethene. The unreacted reactants and the product of the CSTR are fed to a distillation column, which produces distilled water and ethanol at the bottommost and topmost trays respectively. There are three valves that can be manipulated with sliders to maintain the operation of the plant: V102 (flow of water to the CSTR), V301 (flow of coolant water into the CSTR jacket) and V401 (reflux flow of the distillation column). A detailed description of the process is provided in Sharma et al. (2016). Participants with a background in control theory (who are considered novices in controlling a chemical process) are asked to understand the relationship between the different valves and variables in the process by interacting with an HMI. A brief description of the process and of the valves that can be used to control its operation is provided, along with basic information regarding operation through the HMI. The participants are then given a series of training tasks in which they are asked to manipulate a slider and observe the effect on different variables. We use eye tracking to obtain the input information available to the operator at any time and relate it to the operator's actions (output) to identify their mental model. In this work, the HMI depicting the process is divided into distinct areas of interest (AOIs) based on the spatial locations of variable tags, trends, the instruction panel, etc. The task performed by the operator is also divided into segments (or stages) based on the type of action performed, such that each segment comprises only one type of action, i.e., one slider moved in one direction.
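The segmentation into single-action stages can be sketched as follows. This is a minimal illustration, not the authors' implementation; the log format (slider, direction) tuples is an assumption for illustration only.

```python
def segment_actions(actions):
    """Split an operator action log into stages, where each stage
    contains only one type of action (one slider moved in one
    direction).  `actions` is a list of (slider, direction) tuples,
    e.g. ("V102", "+") for opening V102; this log format is
    hypothetical, chosen for illustration.
    """
    stages = []
    for act in actions:
        # Extend the current stage while the action type is unchanged,
        # otherwise start a new stage.
        if stages and stages[-1][-1] == act:
            stages[-1].append(act)
        else:
            stages.append([act])
    return stages

# Example: two openings of V102, then a closing, then an opening of V301
log = [("V102", "+"), ("V102", "+"), ("V102", "-"), ("V301", "+")]
print(len(segment_actions(log)))  # 3 stages
```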
This allows establishing the relationship between the variables the operator has gazed on and a fixed type of control action taken by the operator. In each segment/stage of a task, the gaze transition probability (GTP) between pairs of AOIs is calculated and a directed graph (digraph) is constructed, where nodes correspond to AOIs (marked as active if the operator has gazed on the AOI, else passive) and edges represent transitions of gaze between two AOIs. The width of an edge connecting two nodes is proportional to the corresponding GTP, quantifying the extent to which the relationship between the two variables is examined by the operator. The evolution of such a digraph is studied to observe and analyse the dynamic allocation and distribution of attention by the participant.
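The GTP computation can be sketched as below, assuming GTP is estimated as the empirical fraction of gaze transitions out of one AOI that land on another (the paper does not spell out the estimator, so this normalisation is an assumption). AOI labels in the example are taken from the task described later.

```python
from collections import Counter, defaultdict

def gaze_transition_probabilities(fixations):
    """Estimate gaze transition probabilities (GTPs) between AOIs.

    `fixations` is the time-ordered sequence of AOI labels fixated on
    during one task segment.  The GTP from AOI a to AOI b is taken as
    the fraction of transitions out of a that land on b (an assumed
    estimator).
    """
    # Count consecutive AOI pairs, then normalise per source AOI.
    counts = Counter(zip(fixations, fixations[1:]))
    out_totals = defaultdict(int)
    for (src, _), n in counts.items():
        out_totals[src] += n
    return {(src, dst): n / out_totals[src]
            for (src, dst), n in counts.items()}

def digraph_edges(gtp, threshold=0.0):
    """Weighted edges of the gaze digraph; AOIs appearing in any
    fixation are the active nodes."""
    return [(src, dst, p) for (src, dst), p in gtp.items() if p > threshold]

# Example fixation sequence over AOI tags from the study's HMI
seq = ["IP", "C101", "TP-C101", "C101", "T103", "T102", "T103"]
gtp = gaze_transition_probabilities(seq)
print(gtp[("C101", "TP-C101")])  # 0.5: half the transitions out of C101
```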

4. Results
In one of the learning tasks, the participant is asked to move the slider for the flow of water to the CSTR (V102) and observe its effect on other variables. The participant was not provided any hint regarding the variables that might be affected by manipulation of V102 and was therefore free (and in fact, expected) to look at all variables. The variables that are affected by the feed water to the CSTR (according to the model equations) are the flow rate of water to the CSTR (F101), the temperature inside the CSTR (T103), the temperature of cooling water leaving the CSTR jacket (T102) and the concentration of ethanol in the CSTR (C101). In every segment (obtained as discussed above) of the task, the participant makes a single type (one slider in one direction) of control action A(t), acquires the information I(t) regarding different variables of the process from the HMI and builds/updates the


relationships between the valve and the different gazed variables that constitute the mental model M(t). This process continues through the different stages of the task, resulting in the mental model being updated over time, which is captured through the gaze-derived digraph. For example, in the first stage of the task, the participant fixates on the instruction panel (IP), C101 (and its trend panel, TP-C101), T103, T101 (temperature of cooling water entering the CSTR jacket with flow rate F102) and T102. In the second stage of the task, the participant fixates on a subset of the variables fixated on in the first stage (all of them except T101). Similarly, in the sixth stage of the task, the participant gazes on T102, T103 and C101 alone. Figure 2 shows the true process digraph for the task and the digraph obtained from the eye gaze data of the participant by considering the entire task as one stage. The latter represents the product of learning, obtained by removing all variables that remain constant upon manipulation of V102, which in this case is T101 alone. This graph is obtained by modifying the gaze based association graph using the information obtained by the participant from the HMI. Comparison of the graphs reveals that the participant has gazed on and transitioned between all tags corresponding to the CSTR except F101, even though F101 is affected by manipulation of V102 (it is shown in the final digraph with no edges connected to it). This indicates that the participant has failed to acquire a piece of information from the HMI. The digraph technique therefore serves as a powerful tool for monitoring the participant's attention allocation over time through its active nodes and edges, and can be used in real time to customize training tasks based on an individual operator's gaze patterns so as to impart an adequate mental model.
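The comparison between the true process digraph and the gaze-derived digraph can be sketched as a simple set difference over edges. This is an illustrative reading of the procedure, not the authors' code; the edge sets below are hypothetical stand-ins for the digraphs of Figure 2.

```python
def model_gaps(true_edges, gaze_edges):
    """Report relationships present in the true process digraph that
    the operator never examined in the gaze-derived digraph.  Edges
    are (src, dst) pairs; direction is ignored, since gazing between
    two AOIs in either order examines the same relationship (an
    assumption of this sketch).
    """
    canon = lambda e: frozenset(e)
    examined = {canon(e) for e in gaze_edges}
    return [e for e in true_edges if canon(e) not in examined]

# Hypothetical edge sets for the V102 manipulation task
true_edges = [("V102", "F101"), ("V102", "T103"),
              ("V102", "T102"), ("V102", "C101")]
gaze_edges = [("V102", "T103"), ("T103", "V102"),
              ("V102", "T102"), ("V102", "C101")]
print(model_gaps(true_edges, gaze_edges))  # [('V102', 'F101')]
```

The returned gap corresponds to the missed F101 relationship discussed above and would drive the design of a follow-up training task.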

Figure 2 True process digraph and final (accumulated) digraph obtained from participant's gaze data

5. Conclusions
A framework for observing, understanding and analysing the learning of concepts in an operator training program is proposed. The proposed framework extracts the information provided


to the operator through eye tracking. Based on this information, a directed graph is constructed with gaze transition probabilities. The evolution of the active nodes and edges of this digraph over the span of the task is studied to understand the process of learning. Though we have demonstrated the framework on only a single participant, similar studies on several participants (not included due to space constraints) reveal that the proposed framework can help understand and assess the process of learning. Our future work will be directed towards refining the eye gaze based association graphs by incorporating process knowledge and the actions taken by the operator, which will lead to a model representative of the product of learning. The proposed learning framework can be used to identify gaps in an operator's mental model and to adaptively customize tasks so as to aid individual operators in acquiring a detailed mental model.

References
A. Kluge, S. Nazir, D. Manca, 2014. Advanced applications in process control and training needs of field and control room operators. IIE Transactions on Occupational Ergonomics and Human Factors, 2(3-4), 121-136.
A. T. Denzau, D. C. North, 1994. Shared mental models: ideologies and institutions. Kyklos, 47(1), 3-31.
C. Sharma, P. Bhavsar, B. Srinivasan, R. Srinivasan, 2016. Eye gaze movement studies of control room operators: A novel approach to improve process safety. Computers & Chemical Engineering, 85, 43-57.
D. Manca, S. Brambilla, S. Colombo, 2013. Bridging between virtual reality and accident simulation for training of process-industry operators. Advances in Engineering Software, 55, 1-9.
D. Shin, V. Venkatasubramanian, 1996. Intelligent tutoring system framework for operator training for diagnostic problem solving. Computers & Chemical Engineering, 20, S1365-S1370.
D. S. Patle, Z. Ahmad, G. P. Rangaiah, 2014. Operator training simulators in the chemical industry: review, issues, and future directions. Reviews in Chemical Engineering, 30(2), 199-216.
P. T. Bullemer, I. Nimmo, 1994. Understanding and supporting abnormal situation management in industrial process control environments: a new approach to training. In Systems, Man, and Cybernetics: IEEE International Conference on Humans, Information and Technology, 1, 391-396.
S. Lee, I. Jeong, M. Il, 2000. Development of evaluation algorithms for operator training system. Computers & Chemical Engineering, 24(2), 1517-1522.
US CSB, 2011. Investigation Report: Pesticide Chemical Runaway Reaction Pressure Vessel Explosion.