
Annual Reviews in Control 44 (2017) 316–322
https://doi.org/10.1016/j.arcontrol.2017.09.008


Review article

Towards increased systems resilience: New challenges based on dissonance control for human reliability in Cyber-Physical&Human Systems

F. Vanderhaegen
University of Valenciennes, LAMIH, CNRS UMR 8201, F-59313 Valenciennes, France

Article info

Article history: Received 15 February 2017; Revised 19 September 2017; Accepted 22 September 2017; Available online 6 October 2017

Keywords: Dissonance oriented stability analysis; Dissonance control; Human reliability; Resilience; Risk assessment; Cooperation; Learning; Reinforcement; Cyber-Physical&Human Systems (CPHS)

Abstract

This paper discusses concepts and tools for the joint analysis and control of human and cyber-physical systems with a view to increasing the resilience of the whole system. More precisely, it details new challenges for human reliability based on dissonance control of Cyber-Physical&Human Systems (CPHS) to improve system resilience. The proposed framework relates to three main topics: stability analysis in terms of dissonances, dissonance identification, and dissonance control. Dissonance oriented stability analysis consists in determining conflicting situations resulting from human behaviors when interacting with Cyber-Physical Systems (CPS). Frames of reference support the assessment of stable or unstable gaps among stability shaping factors and the identification of dissonances. Dissonance control consists in reinforcing the frames of reference by applying reinforcement modes. It then aims at accepting or rejecting the identified dissonances by using supports such as expert judgment, feedback of experience, simulation, learning or cooperation. An example in road transportation illustrates the relevance of the proposed framework by studying possible dissonances between car drivers and CPS. As automation spreads through society, generating close interactions with humans, the ideas of this paper will support the design of new analysis and control tools, jointly developed by researchers from the social and control sciences, to study the resilience of the whole CPHS in terms of dissonances.

© 2017 Elsevier Ltd. All rights reserved.

Contents

1. Introduction
2. Dissonance oriented stability analysis
3. Dissonance identification
4. Dissonance control
5. Conclusion
Acknowledgements
References

1. Introduction

Systems should be designed not to prohibit individual unsafe acts but to prevent the occurrence of human errors or to reduce their potential consequences by specifying adequate barriers or defenses (Reason, 2000). Human error can then be seen as the consequence of a failed defense rather than the cause of an unsafe event. However, more than 70% of accidents are still attributed to human error, and 100% of them are directly or indirectly linked with human factors (Amalberti, 2013).


Moreover, even if a technical system such as a Speed Control System (SCS) is designed to improve the safety or the comfort of the car driver, its use can produce unsafe situations through the reduction of the inter-distance between cars, the increase of the reaction time or the decrease of human vigilance (Dufour, 2014). This paper proposes ways to analyze such a dilemma between the design and the use of a system. It presents new challenges for assessing and controlling human reliability in Cyber-Physical&Human Systems (CPHS), in which one or several Cyber-Physical Systems (CPS) interact with human operators. It is an extension of the plenary session given by the author at the first IFAC conference on CPHS, entitled "Human reliability and Cyber-Physical&Human Systems", held in Brazil.


Based on the author's experience and on literature reviews, several challenges are discussed and motivate a new framework proposal for the study of human reliability in CPHS.

Human reliability usually faces the problem of its definition and its assessment in the course of the design, analysis or evaluation of CPHS such as human-machine systems, joint cognitive systems, systems of systems, socio-technical systems, multi-agent systems, manufacturing systems or cybernetic systems. Human reliability in CPHS may be defined by distinguishing two sets of frames of reference, or baselines: (1) the frame related to what the human operators are supposed to do, i.e. their prescriptions, and (2) the frame related to what they do outside these prescriptions. Human reliability can then be seen as the capacity of human operators to successfully realize the tasks required by their prescriptions and the additional tasks, during an interval of time or at a given time. Human error is usually considered as the negative view of human behavior: the capacity of human operators not to realize their required or additional tasks correctly. Methods for analyzing human reliability exist and are well explained and discussed in published state-of-the-art reviews (Bell & Holroyd, 2009; Hickling & Bowie, 2013; Kirwan, 1997a, 1997b; Pan, Lin, & He, 2016; Reer, 2008; Straeter, Dolezal, Arenius, & Athanassiou, 2012; Swain, 1990; Vanderhaegen, 2001, 2010). They mainly consider the first set of tasks, i.e. they study the possible human errors related to what the users are supposed to do. Human reliability assessment methods therefore remain unsuitable or insufficient, and new developments are needed to take into account new constraints such as the dynamic evolution of a system over time, the variability within or between human operators, and the creativity of human operators who are capable of modifying the use of a system or inventing new uses.

Regarding such new requirements for the study of human reliability, many contributions present the concept of resilience as an important issue for organization management and for controlling criteria such as safety, security, ethics, health or survival (Engle, Castle, & Menon, 1996; Hale & Heijer, 2006; Hollnagel, 2006; Khaitan & McCalley, 2015; Orwin & Wardle, 2004; Pillay, 2016; Ruault, Vanderhaegen, & Kolski, 2013; Seery, 2011; Wreathall, 2006). Resilience is usually linked with system stability, and it is defined as the ability or the natural mechanism of a CPHS to adjust its functioning after disturbances or aggressions in order to maintain a stable state, to come back to a stable state or to recover from an unstable state. The more stable a system, the less uncertain the human attitudes related to beliefs and intentions (Petrocelli, Clarkson, Tormala, & Hendrix, 2010). On the other hand, other studies present organizational stability as an obstacle to being resilient, and instability as an advantage for survival (Holling, 1973, 1996; Lundberg & Johansson, 2006). A CPHS subject to regular, important variations that provoke its instability may therefore survive and be resilient over a long period of time, whereas an isolated stable CPHS that does not interact with others may not be resilient when an external aggression occurs and makes it unstable.
This paper proposes new challenges for the study of human reliability in CPHS based on the above-mentioned concept of stability applied to human behaviors. The analysis of human stability is interpreted in terms of dissonances, and the successful control of these dissonances makes the CPHS resilient. The concept of dissonance is adapted from Festinger (1957) and Kervern (1994): a dissonance is a conflict of stability. Three main topics are then discussed in Sections 2–4 respectively: dissonance oriented stability analysis, dissonance identification, and dissonance control. In parallel, a case study in road transportation illustrates the application of these new ways to treat human reliability of a CPHS by taking into account the integration of different CPS into a car, i.e. a Speed Control System (SCS) and an Adaptive Cruise Control (ACC) that replaces the SCS.


Fig. 1. Dissonance oriented stability analysis challenges.

2. Dissonance oriented stability analysis

Human stability relates to the equilibrium of human behaviors, i.e. human behaviors or their consequences remain relatively constant around a threshold value or within an interval of values whatever the disturbances that occur. Outside this equilibrium, human behaviors or their consequences are unstable. The threshold value or the interval of values can be determined qualitatively or quantitatively by taking into account intrinsic and extrinsic factors such as technical, human, environmental or organizational factors. Human factors are, for instance, physical, cognitive or physiological parameters or their impact factors. Human stability is then analyzed by using these intrinsic or extrinsic factors and by comparing input factors with output ones. Fig. 1 gives a non-exhaustive list of human stability challenges that are discussed hereafter.

The input and output factors relate to human behaviors and to their consequences. Their assessments are noted s_in and s_out when they are single values, or S_IN and S_OUT when they are matrices of successive values related to different measurements. The resulting gap is a single value or a matrix, noted ε. Input and output factors are human stability shaping factors, and a given factor can have an impact on the same factor or on other factors. Measurement criteria are then required in order to assess and compare them. These measurements aim at detecting stable or unstable gaps of human behaviors, or of their impacts, by studying for instance the instantaneous values of gaps, the evolution of these values over time, and the frequency, duration or shape of this evolution (Richard, Vanderhaegen, Benard, & Caulier, 2013; Vanderhaegen, 1999a, 2016c). Their analysis can also determine the associated risks of human stability or instability by taking into account instantaneous, variable or regular gaps. The variability and the sustainability of gaps, i.e. the study of irregular or regular evolutions of gaps, can require risk analyses over different intervals of time.

A measure of risk is usually defined as the product of a measure of the occurrence of an undesirable event and a measure of its gravity. As a matter of fact, the risk of human behaviors is sometimes explained as a compromise between several criteria by taking into account good and bad practices (Vanderhaegen & Carsten, 2017). The so-called Benefit-Cost-Deficit (BCD) model takes into account the positive and negative gaps for different criteria (Sedki, Polet, & Vanderhaegen, 2013; Vanderhaegen, 2004; Vanderhaegen, Zieba, & Polet, 2009; Vanderhaegen, Zieba, Polet, & Enjalbert, 2011). The positive gaps are benefits, the negative but acceptable ones are costs, and the unacceptable negative ones are deficits or dangers. The BCD parameters can be weighted with a probability of success or of failure of human behaviors (Vanderhaegen, Cassani, & Cacciabue, 2010) or with other transformation functions (Polet, Vanderhaegen, & Zieba, 2012).
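As an illustration only, the following Python sketch shows one possible way to classify a gap into benefit, cost or deficit in the spirit of the BCD reading of gaps, and to weight it by a probability of success. The thresholds, the criterion and the weighting convention are hypothetical and are not taken from the cited publications.

```python
# Illustrative sketch only: a BCD-like classification of a gap for one criterion,
# using the gap conventions of Table 1 (s_out < s_in means improvement).
# Threshold values are hypothetical, not taken from the cited BCD publications.

def classify_gap(s_in, s_out, eps_benefit, eps_cost, eps_deficit):
    """Return 'benefit', 'cost', 'deficit' or 'unclassified' for one criterion."""
    gap = abs(s_out - s_in)
    if s_out < s_in and gap > eps_benefit:
        return "benefit"                  # positive gap (improvement)
    if s_out > s_in and gap > eps_deficit:
        return "deficit"                  # unacceptable negative gap
    if s_out > s_in and gap < eps_cost:
        return "cost"                     # acceptable negative gap
    return "unclassified"                 # outside the illustrated thresholds

# Example: driver reaction time in seconds before (s_in) and after (s_out)
# activating a speed control system, with one arbitrary weighting convention.
label = classify_gap(s_in=1.0, s_out=1.4, eps_benefit=0.1, eps_cost=0.5, eps_deficit=1.0)
p_success = 0.8                           # hypothetical probability of success
weight = p_success if label == "benefit" else 1.0 - p_success
print(label, weight)                      # -> cost, weighted by 1 - p_success
```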


Table 1. Examples of stability gap assessment.

Global stability: |s_out − s_in| < ε_max (constraints: t_in ≤ t_out, ε_max)
Global stability: |s_out − s_in| ∈ [ε_min, ε_max] (constraints: t_in ≤ t_out, [ε_min, ε_max])
Positive stability: |s_out − s_in| > ε_benefit (constraints: t_in ≤ t_out, s_out < s_in, ε_benefit)
Acceptable negative stability: |s_out − s_in| < ε_cost (constraints: t_in ≤ t_out, s_out > s_in, ε_cost)
Unacceptable negative stability: |s_out − s_in| > ε_deficit (constraints: t_in ≤ t_out, s_out > s_in, ε_deficit)
Shape stability: |S_OUT − S_IN| < ε_MAX (constraints: T_IN ≤ T_OUT, ε_MAX)
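For illustration only, the following sketch shows how the scalar and matrix gap assessments of Table 1 might be coded, assuming that S_IN and S_OUT are matrices of successive measurements and that ε_MAX contains element-wise bounds; all numerical values are invented.

```python
import numpy as np

# Minimal sketch of the stability gap assessments of Table 1 (values are invented).

def global_stability(s_in, s_out, eps_max):
    """|s_out - s_in| < eps_max, assuming t_in <= t_out."""
    return abs(s_out - s_in) < eps_max

def global_stability_interval(s_in, s_out, eps_min, eps_max):
    """|s_out - s_in| in [eps_min, eps_max]."""
    return eps_min <= abs(s_out - s_in) <= eps_max

def shape_stability(S_in, S_out, Eps_max):
    """Element-wise |S_OUT - S_IN| < eps_MAX for matrices of successive measurements."""
    return bool(np.all(np.abs(S_out - S_in) < Eps_max))

# Scalar example: car speed setpoint (s_in) versus measured speed (s_out), in km/h.
print(global_stability(s_in=90.0, s_out=93.0, eps_max=5.0))                        # True
print(global_stability_interval(s_in=90.0, s_out=93.0, eps_min=1.0, eps_max=5.0))  # True

# Matrix example: two factors (rows) measured at three successive times (columns).
S_in = np.array([[90.0, 90.0, 90.0],     # required speed
                 [2.0, 2.0, 2.0]])       # required headway (s)
S_out = np.array([[92.0, 94.0, 91.0],    # measured speed
                  [1.8, 1.6, 1.9]])      # measured headway
Eps_max = np.array([[5.0, 5.0, 5.0],
                    [0.5, 0.5, 0.5]])
print(shape_stability(S_in, S_out, Eps_max))                                       # True
```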

The so-called inverted-U curve makes other hypotheses on the relations between a low or high instantaneous level of workload, task demand or stress and a low level of performance or safety, for instance (Hancock & Ganey, 2003; Weiner, Curry, & Faustina, 1984). Such correlation studies between human factors, or their impacts, and CPHS performance or human behaviors are other interesting ways to identify the human stability shaping factors, and to study gaps between an interval of values of acceptable required or desired behaviors and the real ones. Research on these correlations can also draw inspiration from works on performance shaping factors (Kim, Park, Kim, Kim, & Seong, 2017; Rangra, Sallak, Schön, & Vanderhaegen, 2016) by adapting them to calculate and analyze gaps.

Based on these different contributions, Table 1 gives some examples of gap assessment, considering that s_in and s_out are positive values and that, if s_out < s_in (or if s_out > s_in), there is an improvement (or a degradation, respectively) of s_out regarding s_in. Global stability is assessed by taking into account a maximum gap value ε_max or an interval of minimum and maximum values [ε_min, ε_max]. The occurrence dates of s_in and s_out are noted t_in and t_out respectively. Gap calculation can then relate to the same time or to a time delay. Positive stability is linked with a positive gap (i.e. s_out < s_in) that is over a minimum value ε_benefit. Negative stability is a negative gap (i.e. s_out > s_in). An acceptable negative gap is under a maximum value ε_cost and an unacceptable one is over a maximum value ε_deficit. Inputs and outputs can also relate to successive values of gaps for different factors or criteria. They are then matrices, noted S_IN and S_OUT, used to study the evolution of gaps for each factor or criterion regarding a matrix of maximum successive values ε_MAX. The occurrence dates for S_IN and S_OUT are noted T_IN and T_OUT respectively, to compare values at the same times or with time delays.

However, such gap assessments need the definition of the frames of reference of the required or desired behaviors or results in order to identify gaps with the real behaviors or results. These gaps are analyzed to identify conflicts of stability, i.e. dissonances. Fig. 2 gives an example of analysis processes regarding different possible frames of reference. A dissonance oriented stability analysis can therefore differ from, or be complemented by, another one. Several criteria can be used for a same factor in order to compare the conclusions of the analyses. When a single frame of reference exists (Fig. 2-a), assessments such as those given in Table 1 can be applied. They have to be adapted when this frame does not exist or is unclear (Fig. 2-b). The presence of several frames of reference needs other adaptations in order to determine the interpretation of gaps between the same human behaviors or results from different points of view (Fig. 2-c, with the frames of reference of two CPHS noted CPHS1 and CPHS2).

Fig. 3 proposes an example with three CPHS related to two CPS integrated into a car: a SCS and an ACC that replaces the SCS. The SCS and the ACC are capable of controlling the car speed automatically with regard to a speed setpoint selected by the driver. The ACC is also a collision avoidance system.
Suppose that the frame of reference of the first CPHS, noted CPHS1, relates to rules noted R1 to R7 dedicated to the manual control of the car, i.e. the car speed control, the traffic flow control, the aquaplaning control and the fuel consumption control.

Fig. 2. Examples of frames of reference for dissonance oriented stability analysis.

Fig. 3. Examples of car driving frames of reference.



Table 2. Examples of dissonance with one frame of reference. Each row gives the stability shaping factor, the identified dissonance and examples of publication.

Knowledge: learning stability breakdown (Aïmeur, 1998)
Knowledge: contradiction (Vanderhaegen, 2016b)
Perception: tunneling effect (Dehais et al., 2012)
Allocation: erroneous cooperation (Vanderhaegen, 1999b; Zieba et al., 2011)
Information: organizational change (Brunel & Gallen, 2011; Telci et al., 2011)
Vigilance: drowsiness (Rachedi et al., 2013)
Consumption: non-respect of an optimal command (La Delfa et al., 2016)
Autonomy: lack of competence, availability or prescription (Vanderhaegen, 2016a; Zieba et al., 2010)

Table 3. Examples of dissonance with multiple frames of reference. Each row gives the stability shaping factor, the identified dissonance and examples of publication.

Interest: competition (Vanderhaegen et al., 2006)
Interest: barrier removal (Polet et al., 2003; Vanderhaegen, 2010)
Perception: anamorphosis (Dali, 1976; Massironi & Savardi, 1991)
Problem-solving: difficult decision (Ben Yahia et al., 2015; Chen et al., 2014)
Intention: automation surprise (Inagaki, 2008; Rushby, 2002; Vanderhaegen, 2016b)
Use: affordance (Gibson, 1986; Vanderhaegen, 2014)

The frames of reference of the CPHS are composed of simple rules that detail the required actions regarding the goals to be achieved. The frame of reference of CPHS2 includes the rules of CPHS1 and integrates the rules of the use and functioning of the SCS (i.e. additional rules noted R8 to R15). Finally, the frame of reference of the last CPHS, CPHS3, supposes that the ACC replaces the SCS, has the same rules of use and functioning as the SCS related to the car speed control process, and has one additional rule dedicated to obstacle control (i.e. rule noted R16). Stability shaping factors used to compare the human stability of the three frames of reference can be the number of rules or the inconsistency between rules. Sections 3 and 4 will reuse this example to illustrate the dissonance identification and control processes.

3. Dissonance identification

The frames of reference support the interpretation of stable or unstable gaps in terms of dissonances. The case of serendipity is an example of a dissonance related to a conflict of goal when the frame of reference is wrong: it is a way to find results fortuitously when the initial goal of the work was wrong. A lack of knowledge relates to another dissonance, one without any frame of reference. When human operators have to control unprecedented situations, they have to react rapidly in case of dangerous situations and to imagine new action plans by applying behaviors such as wait-and-see or trial-and-error (Vanderhaegen & Caulier, 2011). Tables 2 and 3 give other examples of dissonances. Table 2 interprets dissonances associated with a single frame of reference, and Table 3 with multiple frames of reference.

Contradictions and learning stability breakdowns are conflicts of individual knowledge. A breakdown of learning stability aims at verifying or consolidating knowledge (Aïmeur, 1998). The genesis of a dissonance such as a breakdown of stability, i.e. turning a stable CPHS into an unstable one, is sometimes useful for improving human capacity. A contradiction between knowledge occurs when two opposite actions are possible to control the current situation (Vanderhaegen, 2016b). The tunneling effect is a conflict of perception where, for instance, alarms run correctly but the human operator cannot perceive them (Dehais, Causse, Vachon, & Tremblay, 2012). A wrong allocation of tasks made by a given decision maker can lead to erroneous cooperation (Vanderhaegen, 1999b; Zieba, Polet, & Vanderhaegen, 2011). This can also provoke a conflict between information in case of organizational change, for instance (Brunel & Gallen, 2011; Telci, Maden, & Kantur, 2011).

A conflict of arousal is a gap between the required vigilance and the real one. It can lead to drowsiness and to dangerous situations (Rachedi, Berdjag, & Vanderhaegen, 2013). Current efforts to reduce car or train energy consumption can generate a dissonance when the optimal command or procedure is not respected (La Delfa, Enjalbert, Polet, & Vanderhaegen, 2016). Stability can also be assessed regarding the autonomy of a CPHS component in terms of variability of competence, availability and prescription (Vanderhaegen, 2016a; Zieba, Polet, Vanderhaegen, & Debernard, 2010).

Regarding the dissonances involving several frames of reference (Table 3), the conflict of interest concerns competition between components of a CPHS or barrier removals. The benefits, costs and deficits for a given human competitor can differ from those of another one (Vanderhaegen, Chalmé, Anceaux, & Millot, 2006). Barrier removals involve several viewpoints on the impact of the non-respect of a barrier (Polet, Vanderhaegen, & Amalberti, 2003; Vanderhaegen, 2010). A conflict between alternatives during the problem-solving of a human operator or of a group of people can lead to difficult or hazardous decisions (Chen, Khoo, Chong, & Yin, 2014; Ben Yahia, Vanderhaegen, Polet, & Tricot, 2015). An anamorphosis is a conflict of perception between human operators; it concerns possible multiple interpretations of an artwork (Dali, 1976) or of an image (Massironi & Savardi, 1991). A conflict of intention between a human operator and a CPS is called an automation surprise (Inagaki, 2008; Rushby, 2002). Interferences are particular automation surprises in which the human operator and the CPS may decide opposite actions at the same time (Vanderhaegen, 2016b). Tangible interactive objects are an important issue for user-centered design (Boy, 2014). However, the use of such tangible objects can lead to the creation of new system functions. These are called affordances, for which there are invariant and variant uses of the same object for a given human operator or between human operators (Gibson, 1986; Vanderhaegen, 2014).

Regarding the multiple frames of reference of Fig. 3, different dissonances can be identified. Fig. 4 gives some examples of affordances, contradictions and interferences between required rules. The affordances A1 and A2 consider the "+" and "−" buttons of the ACC as an accelerator and a braking system respectively. Contradiction C1 concerns possible opposite actions of the car driver. Interferences I1, I2, I3 and I4 are possible inconsistencies between the SCS behavior and the car driver related to the car speed control process.
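To make the identification step more concrete, the following sketch represents two frames of reference as mappings from situations to prescribed actions and flags candidate interferences when the two frames prescribe opposite actions for the same situation. The rule encodings are invented for illustration and do not reproduce the rules R1 to R16 of Fig. 3.

```python
# Illustrative sketch: flag candidate interferences between two frames of reference.
# A rule maps a situation to a prescribed action; opposite actions on the same
# situation are reported as candidate interferences (to be confirmed by analysis).
# The rule contents below are invented and do not reproduce R1-R16 of Fig. 3.

OPPOSITE = {("accelerate", "brake"), ("brake", "accelerate"),
            ("stop", "go"), ("go", "stop")}

driver_frame = {
    "green_traffic_light": "go",        # e.g. in the spirit of rule R3
    "slow_vehicle_ahead": "accelerate",
}

acc_frame = {
    "green_traffic_light": "go",
    "obstacle_ahead": "stop",           # e.g. in the spirit of rule R16
}

def candidate_interferences(frame_a, frame_b):
    """Return the situations for which the two frames prescribe opposite actions."""
    conflicts = []
    for situation in frame_a.keys() & frame_b.keys():
        if (frame_a[situation], frame_b[situation]) in OPPOSITE:
            conflicts.append(situation)
    return conflicts

# Combined situation: the driver sees a green light and keeps going while the
# automation detects an obstacle ahead, which yields a candidate interference.
combined_driver = dict(driver_frame, obstacle_ahead="go")
print(candidate_interferences(combined_driver, acc_frame))  # -> ['obstacle_ahead']
```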


Table 4. Subjective validation of affordances, contradictions and interferences by 43 drivers (certainty levels: H = high, M = medium, L = low).

Dissonance | Agree (H/M/L) | Disagree (H/M/L) | No opinion
A1 | 29 (24/5/0) | 10 (5/3/2) | 4
A2 | 29 (24/4/1) | 11 (5/3/3) | 3
C1 | 29 (22/6/1) | 5 (5/0/1) | 9
I1 | 23 (16/6/1) | 11 (4/0/2) | 9
I2 | 21 (14/6/1) | 16 (9/3/1) | 6
I3 | 19 (10/8/1) | 13 (12/4/4) | 11
I4 | 18 (10/5/3) | 14 (6/4/4) | 11

Fig. 4. Examples of dissonances between CPS and car driver.

The last challenge for human reliability in CPHS is the dissonance control process. This process can require a new dissonance oriented stability analysis or dissonance identification in order to study the impact of dissonances on a CPHS. As the dissonance control may affect factors such as workload, discomfort or stress, some dissonances can be rejected, while others can be considered as acceptable and will motivate the updating of the frames of reference.

4. Dissonance control

The dissonance control process consists in assessing the impact of dissonances, in rejecting or accepting them, and in reinforcing the frames of reference accordingly. The reinforcement of the CPHS frame of reference can use several mechanisms (a minimal sketch of these four modes is given below):

• The rejection mode, without any modification of the frame of reference. When the control of a dissonance is considered as difficult, irrelevant, useless or unacceptable, it is easier to refuse to apply any revision of the frames of reference.
• The neutral reinforcement mode of the CPHS frame of reference, without taking into account the new behaviors associated with the dissonance. Here again, it is easier to emphasize the importance of the current frame of reference by reinforcing its justification. For instance, regarding the affordance A1 of Fig. 4, an additional rule such as "Increase the car speed → Use only the gas pedal" can be integrated into the frames of reference of Fig. 3.
• The negative reinforcement mode of the CPHS frame of reference, by integrating the negative interpretation of the dissonance. This reinforcement mode is similar to the previous one but consists in adding new behaviors that describe the negative aspect of the dissonance and justify its rejection. For instance, regarding the same affordance A1, the rule "Increase the car speed with the + button of the activated SCS → Prohibit the use of the SCS" can be added to the frames of reference.
• The acceptance mode, by considering the current frame of reference as erroneous, insufficient or unsuitable, and by updating its content. For example, the affordance A1 can replace the rule R1 of the frames of reference.

Expert judgment, feedback of experience, simulation, learning and cooperation are examples of supports for this frame reinforcement.
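As a purely illustrative reading of these four modes, the following sketch applies them to a frame of reference stored as a simple list of rules. The data structure and the exact wording of rule R1 are assumptions; the text only states that affordance A1 may replace R1.

```python
# Illustrative sketch of the four reinforcement modes applied to a frame of
# reference stored as a simple list of rules. The exact wording of rule R1 is
# assumed here; the text only states that affordance A1 may replace R1.

def reinforce(frame, mode, dissonance_rule=None, added_rule=None, replaced_rule=None):
    """Return an updated copy of the frame according to the chosen mode."""
    frame = list(frame)                  # work on a copy
    if mode == "rejection":
        return frame                     # dissonance refused, frame unchanged
    if mode in ("neutral", "negative"):
        frame.append(added_rule)         # justify the frame / add the negative reading
    elif mode == "acceptance":
        frame = [r for r in frame if r != replaced_rule]
        frame.append(dissonance_rule)    # frame judged insufficient: integrate A1
    return frame

r1 = "R1 (assumed wording): increase the car speed -> use the gas pedal"
frame_cphs2 = [r1, "R8-R15: rules of use and functioning of the SCS"]
a1 = "A1: increase the car speed -> use the + button of the activated SCS"

print(reinforce(frame_cphs2, "neutral",
                added_rule="increase the car speed -> use only the gas pedal"))
print(reinforce(frame_cphs2, "negative",
                added_rule="increase the car speed with the + button of the "
                           "activated SCS -> prohibit the use of the SCS"))
print(reinforce(frame_cphs2, "acceptance", dissonance_rule=a1, replaced_rule=r1))
```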

Fig. 5. Simulated interference with the MissRail® tool.

For instance, the dissonances of Fig. 4 were assessed by expert judgment based on feedback of experience: 43 car drivers, aged from 24 to 53 years old and with at least 5 years of driving experience, answered a questionnaire in order to obtain feedback about the identified dissonances. The drivers were asked whether they agreed with the application of the affordances A1 and A2 and with the possible occurrence of the contradiction C1 and the interferences I1, I2, I3 and I4 (Table 4). They also indicated the level of certainty (high, medium or low) of their agreement or disagreement. Globally, the car drivers agree with the proposals. The advantages of applying the affordances A1 and A2 concern an increase in comfort and a decrease in workload. However, they can affect safety in case of an emergency stop, for example, because the "−" button can reduce the engine speed but not stop the car. The other dissonances are considered as impacting safety or consumption, but some of them require more explanation to justify and analyze their real impacts on the CPHS.

Simulation can support the control process of these interferences, or eventually the identification of new ones. For instance, related to the rules R3 and R16 of the CPHS3 frame of reference of Fig. 3, a new dissonance can be simulated. Indeed, as the traffic light is green, car drivers may decide not to stop and to continue their route (i.e. application of rule R3). However, the activated ACC can decide to stop before the car in front in order to avoid a collision (i.e. application of rule R16). The application of rules R3 and R16 relates to normal behaviors under normal conditions. Such a dynamic situation was simulated with the MissRail® platform of the University of Valenciennes (Fig. 5). In Picture 1, the car driver is moving next to a tramway line and is behind another car, respecting the safety distance managed by the ACC.


In Pictures 2 and 3, the driver approaches the traffic light, which is green. In Picture 4, the ACC stops the car to avoid a frontal collision with another car. However, the ACC stops the car on the rail tracks, and a potential lateral collision between the car and a train can then occur. Simulation tools such as MissRail® can thus provide human drivers with pedagogical feedback to support the dissonance control process of a CPHS.

When human operators or CPS of a CPHS are capable of controlling dissonances alone, they apply a self-learning process. If they are not capable of treating dissonances, other control modes can be applied (Vanderhaegen, 2012): the cooperation mode or the co-learning mode. The cooperation process is a way to control interferences between the goals of different decision-makers in order to facilitate the achievement of their individual activities (Millot & Hoc, 1997). Interferences between the goals of a same decision-maker can also require cooperation with another one in order to support their control. Cooperative activities can produce interferences but can also lead to solving other dissonances. The control of these interferences requires a so-called common frame of reference (Hoc & Carlier, 2002; Millot & Pacaux-Lemoine, 2013). The co-learning process can be a cooperative learning or a joint learning involving a single frame of reference or several frames of reference. Common CPHS frames of reference in a cooperation or co-learning mode can provide the CPHS components with joint or shared knowledge, or with information or explanation about the behavior of a component. However, these common frames are predefined and their content is static, without any possibility to reinforce them. Moreover, they do not guarantee the absence of dissonances. Principles of multilevel cooperation developed in Vanderhaegen (1997, 1999c) should then be adapted in order to specify multiple frames of reference, facilitating frame reinforcement by cooperation and learning between different hierarchical levels of a CPHS or of several CPHS.

This reinforcement of the frames of reference can be done dynamically by adapting algorithms from e-learning, data mining or machine learning principles (Enjalbert & Vanderhaegen, 2017; Huddlestone & Pike, 2008; Ouedraogo, Enjalbert, & Vanderhaegen, 2013; Vanderhaegen & Zieba, 2014). Computer-supported cooperative work and computer-supported learning are suitable concepts to apply to a CPS-supported dissonance control process. However, they also have to be extended to a human-supported control process in order to combine the human and technical abilities of the CPHS, to optimize the frames of reference mutually, and to increase the autonomy of the CPHS components. Human-centered design approaches (e.g., Boy, 2014; Scheuermann, Tandon, Bruegge, & Verclas, 2016) have to be adapted in order to minimize the risks of erroneous learning or cooperation and to take into account the capacity of a CPS and of a user to learn alone or to discover dissonances. New approaches are needed in order to study dissonances from a retrospective, prospective and on-line viewpoint regarding the variability or the sustainability of human stability.
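To fix ideas, the following minimal sketch illustrates one possible dynamic reinforcement scheme of the kind evoked above; it is an assumption for illustration and does not reproduce the algorithms of the cited works. Each rule of a frame of reference carries a weight that is updated from observed outcomes, and weakly supported rules become candidates for revision by cooperation or co-learning.

```python
# Minimal sketch of a dynamic frame reinforcement, not the algorithms of the
# cited works: each rule carries a weight updated from observed outcomes
# (e.g. questionnaire feedback or simulation runs), and rules whose weight drops
# below a threshold become candidates for revision by cooperation or co-learning.

def update_weights(weights, observations, step=0.1):
    """Increase the weight of confirmed rules, decrease the weight of contradicted ones."""
    for rule, confirmed in observations:
        delta = step if confirmed else -step
        weights[rule] = min(1.0, max(0.0, weights.get(rule, 0.5) + delta))
    return weights

def revision_candidates(weights, threshold=0.3):
    """Rules whose support has fallen below the threshold."""
    return [rule for rule, w in weights.items() if w < threshold]

weights = {"R1": 0.5, "R3": 0.5, "R16": 0.5}
# Hypothetical outcomes: rule R3 (go at green) repeatedly contradicted by the
# ACC stop illustrated in Fig. 5, while R1 and R16 are confirmed.
observations = [("R3", False), ("R3", False), ("R3", False), ("R16", True), ("R1", True)]
weights = update_weights(weights, observations)
print(weights)                       # R3 weight decreases towards revision
print(revision_candidates(weights))  # -> ['R3']: candidate for frame revision
```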
5. Conclusion

This paper has detailed new challenges for the study of human reliability in CPHS. It is based on an original framework for human reliability built on stability and dissonance. Resilience is defined as the capacity of a system to control stability and instability successfully. Three main topics are discussed in order to make a CPHS resilient by integrating the positive and negative contributions of human operators: dissonance oriented stability analysis, dissonance identification, and dissonance control. This framework can be applied for prospective, retrospective and on-line study of human reliability. Dissonance oriented stability analysis aims at defining the stability shaping factors and criteria, and at assessing gaps by taking into account different frames of reference. Inexistent, erroneous, single or multiple frames of reference are the baselines for assessing the variability or the sustainability of stable or unstable gaps, and for identifying dissonances by interpreting them as possible conflicts between human behaviors or results.


The dissonance control can require another dissonance oriented stability analysis to assess its impact on the CPHS. It aims at reinforcing the content of the frames of reference. A simple but realistic example from the car driving domain illustrates an application of the dissonance oriented stability analysis, dissonance identification and dissonance control processes. Extended work is required in order to develop each of them in depth and to validate new proposals for analyzing and controlling human reliability in CPHS. The proposed framework is also a practical basis for future contributions on self-learning, co-learning and cooperation dedicated to the study of the resilience or the vulnerability of CPHS by studying the causes or the consequences of stability in terms of dissonances.

Ethical, juridical or cultural aspects are other important key issues for future work on CPHS. The case of autonomous systems such as Lethal Autonomous Weapons Systems (LAWS) is a good example that generates intensive debates on killer robots between delegations and associations at the United Nations Office. The role of human operators is crucial for updating the frames of reference and controlling possible dissonances that may make LAWS vulnerable (Vanderhaegen, 2015). Joint contributions between scientists from the social and control domains will then be useful to assess gaps and their impacts considering several stability shaping factors, and to select the best compromises to share authority between human operators and CPS.

Acknowledgements

The present research work is supported by the International Research Network on Human-Machine Systems in Transportation and Industry (GDR I HAMASYTI). The author gratefully acknowledges the support of this network.

References

Aïmeur, E. (1998). Application and assessment of cognitive dissonance theory in the learning process. Journal of Universal Computer Science, 4(3), 216–247.
Amalberti, R. (2013). Human error at the centre of the debate on safety. In R. Amalberti (Ed.), Navigating safety – Necessary compromises and trade-offs – Theory and practice, SpringerBriefs in Applied Sciences and Technology (pp. 19–52).
Bell, J., & Holroyd, J. (2009). Review of human reliability assessment methods. HSE report RR679, Research Report.
Ben Yahia, W., Vanderhaegen, F., Polet, P., & Tricot, N. (2015). A2PG: alternative action plan generator. Cognition, Technology & Work, 17, 95–109.
Boy, A. G. (2014). From automation to tangible interactive objects. Annual Reviews in Control, 38, 1–11.
Brunel, O., & Gallen, C. (2011). Just like cognitive dissonance. 27th International Congress of the French Association of Marketing, 18–20 May 2011, Brussels.
Chen, C.-H., Khoo, L. P., Chong, Y. T., & Yin, X. F. (2014). Knowledge discovery using genetic algorithm for maritime situational awareness. Expert Systems with Applications, 41, 2742–2753.
Dali, S. (1976). Gala contemplating the Mediterranean Sea. Dalí Theatre-Museum, Figueres, Spain.
Dehais, F., Causse, M., Vachon, F., & Tremblay, S. (2012). Cognitive conflict in human-automation interactions: A psychophysiological study. Applied Ergonomics, 43(3), 588–595.
Dufour, A. (2014). Driving assistance technologies and vigilance: Impact of speed limiters and cruise control on drivers' vigilance. Seminar on the impact of distracted driving and sleepiness on road safety, April, Paris, La Défense.
Enjalbert, S., & Vanderhaegen, F. (2017). A hybrid reinforced learning system to estimate resilience indicators. Engineering Applications of Artificial Intelligence, 64, 295–301.
Engle, P. L., Castle, S., & Menon, P. (1996). Child development: Vulnerability and resilience. Social Science & Medicine, 43(5), 621–635.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
Gibson, J. J. (1986). The ecological approach to visual perception. Hillsdale: Lawrence Erlbaum Associates. Originally published in 1979.
Hale, A., & Heijer, T. (2006). Defining resilience. In E. Hollnagel, D. D. Woods, & N. Leveson (Eds.), Resilience engineering: Concepts and precepts (pp. 9–17). UK: Ashgate.
Hancock, P. A., & Ganey, H. C. N. (2003). From the inverted-U to the extended-U: The evolution of a law of psychology. Journal of Human Performance in Extreme Environments, 7(1), 5–14.



Hickling, E. M., & Bowie, J. E. (2013). Applicability of human reliability assessment methods to human–computer interfaces. Cognition Technology & Work, 15(1), 19–27.
Hoc, J.-M., & Carlier, X. (2002). Role of a common frame of reference in cognitive cooperation: Sharing tasks between agents in Air Traffic Control. Cognition Technology & Work, 4, 37–47.
Holling, C. S. (1973). Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4, 1–23.
Holling, C. S. (1996). Engineering resilience versus ecological resilience. In Engineering within ecological constraints (1996).
Hollnagel, E. (2006). Resilience – the challenge of the unstable. In E. Hollnagel, D. Woods, & N. Leveson (Eds.), Resilience engineering – Concepts and precepts (pp. 9–17). UK: Ashgate.
Huddlestone, J., & Pike, J. (2008). Seven key decision factors for selecting e-learning. Cognition, Technology & Work, 10(3), 237–247.
Inagaki, T. (2008). Smart collaboration between humans and machines based on mutual understanding. Annual Reviews in Control, 32, 253–261.
Kervern, G.-Y. (1994). Latest advances in cindynics. Paris: Economica Editions.
Khaitan, S. K., & McCalley, J. D. (2015). Design techniques and applications of cyberphysical systems: A survey. IEEE Systems Journal, 9(2), 350–365.
Kim, A. R., Park, J., Kim, Y., Kim, J., & Seong, P. H. (2017). Quantification of performance shaping factors (PSFs)' weightings for human reliability analysis (HRA) of low power and shutdown (LPSD) operations. Annals of Nuclear Energy, 101, 375–382.
Kirwan, B. (1997a). Validation of human reliability assessment techniques: Part 1 – Validation issues. Safety Science, 27, 25–41.
Kirwan, B. (1997b). Validation of human reliability assessment techniques: Part 2 – Validation results. Safety Science, 27, 43–75.
La Delfa, S., Enjalbert, S., Polet, P., & Vanderhaegen, F. (2016). Eco-driving command for tram-driver system. IFAC-PapersOnLine, 49(19), 444–449.
Lundberg, J., & Johansson, B. (2006). Resilience, stability and requisite interpretation in accident investigations. In Proceedings of the 2nd Resilience Engineering Symposium, Juan-les-Pins, France, November 8–10 (pp. 191–198).
Massironi, M., & Savardi, U. (1991). Why anamorphoses look as they do: An experimental study. Acta Psychologica, 76(3), 213–239.
Millot, P., & Hoc, J.-M. (1997). Human-machine cooperation: Metaphor or possible reality? In Proceedings of the 2nd European Conference on Cognitive Science, April 9–11, Manchester, UK (pp. 165–174).
Millot, P., & Pacaux-Lemoine, M.-P. (2013). A common work space for a mutual enrichment of human-machine cooperation and team-situation awareness. In IFAC Proceedings Volumes, 46 (pp. 387–394).
Orwin, K. H., & Wardle, D. A. (2004). New indices for quantifying the resistance and resilience of soil biota to exogenous disturbances. Soil Biology & Biochemistry, 36, 1907–1912.
Ouedraogo, A., Enjalbert, S., & Vanderhaegen, F. (2013). How to learn from the resilience of Human–Machine Systems? Engineering Applications of Artificial Intelligence, 26(1), 24–34.
Pan, X., Lin, Y., & He, C. (2016). A review of cognitive models in human reliability analysis. Quality and Reliability Engineering International. doi:10.1002/qre.2111.
Petrocelli, J. V., Clarkson, J. J., Tormala, Z. L., & Hendrix, K. S. (2010). Perceiving stability as a means to attitude certainty: The role of implicit theories of attitudes. Journal of Experimental Social Psychology, 46, 874–883.
Pillay, M. (2016). Resilience engineering: A state-of-the-art survey of an emerging paradigm for organisational health and safety management. In Advances in safety management and human factors (pp. 211–222). Switzerland: Springer International Publishing.
Polet, P., Vanderhaegen, F., & Amalberti, R. (2003). Modelling border-line tolerated conditions of use (BTCUs) and associated risks. Safety Science, 41, 111–136.
Polet, P., Vanderhaegen, F., & Zieba, S. (2012). Iterative learning control based tools to learn from human error. Engineering Applications of Artificial Intelligence, 25(7), 1515–1522.
Rachedi, N., Berdjag, D., & Vanderhaegen, F. (2013). Probabilistic techniques to diagnose human operator state. In Proceedings of the 10th Berliner Werkstatt Mensch-Maschine-Systeme, Berlin, Germany (pp. 153–166).
Rangra, S., Sallak, M., Schön, W., & Vanderhaegen, F. (2016). Integration of human factors in safety and risk analysis of railway operations: Issues and methods from the perspective of a recent accident. International Railway Safety Council, Paris, France, October.
Reason, J. (2000). Human error: Models and management. BMJ, 320, 768–770.
Reer, B. (2008). Review of advances in human reliability analysis of errors of commission – Part 2: EOC quantification. 93 (pp. 1105–1122).
Richard, P., Vanderhaegen, F., Benard, V., & Caulier, P. (2013). Human stability: Toward multi-level control of human behavior. 46 (pp. 513–519).
Ruault, J., Vanderhaegen, F., & Kolski, C. (2013). Sociotechnical systems resilience: A dissonance engineering point of view. IFAC-PapersOnLine, 46(15), 149–156.
Rushby, J. (2002). Using model checking to help discover mode confusions and other automation surprises. Reliability Engineering & System Safety, 75(2), 167–177.
Scheuermann, C., Tandon, R., Bruegge, B., & Verclas, S. (2016). Detection of human limits in hazardous environments: A human-centric cyber-physical system. In 14th International Conference on Embedded Systems, Cyber-Physical Systems, & Applications, July 5–28, Las Vegas, USA (pp. 3–9).

Sedki, K., Polet, P., & Vanderhaegen, F. (2013). Using the BCD model for risk analysis: An influence diagram based approach. Engineering Applications of Artificial Intelligence, 26(9), 2172–2183.
Seery, M. D. (2011). Challenge or threat? Cardiovascular indexes of resilience and vulnerability to potential stress in humans. Neuroscience and Biobehavioral Reviews, 35(7), 1603–1610.
Straeter, O., Dolezal, R., Arenius, M., & Athanassiou, G. (2012). Status and needs on human reliability assessment of complex systems. Life Cycle Reliability and Safety Engineering, 1(1), 44–52.
Swain, A. D. (1990). Human reliability analysis: Need, status, trends and limitations. Reliability Engineering and System Safety, 29, 301–311.
Telci, E. E., Maden, C., & Kantur, D. (2011). The theory of cognitive dissonance: A marketing and management perspective. Procedia Social and Behavioral Sciences, 24, 378–386.
Vanderhaegen, F. (1997). Multilevel organization design: The case of the air traffic control. Control Engineering Practice, 5(3), 391–399.
Vanderhaegen, F. (1999a). Toward a model of unreliability to study error prevention supports. Interacting with Computers, 11, 575–595.
Vanderhaegen, F. (1999b). Cooperative system organisation and task allocation: Illustration of task allocation in air traffic control. Le Travail Humain, 63(3), 197–222.
Vanderhaegen, F. (1999c). Multilevel allocation modes – Allocator control policies to share tasks between human and computer. System Analysis Modelling Simulation, 35, 191–213.
Vanderhaegen, F. (2001). A non-probabilistic prospective and retrospective human reliability analysis method – Application to railway system. Reliability Engineering and System Safety, 71, 1–13.
Vanderhaegen, F. (2004). The benefit-cost-deficit (BCD) model for human analysis and control. 9th IFAC/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems, Atlanta, GA, USA, 7–9 September 2004.
Vanderhaegen, F. (2010). Human-error-based design of barriers and analysis of their uses. Cognition Technology & Work, 12, 133–142.
Vanderhaegen, F. (2012). Cooperation and learning to increase the autonomy of ADAS. Cognition, Technology & Work, 14(1), 61–69.
Vanderhaegen, F. (2014). Dissonance engineering: A new challenge to analyse risky knowledge when using a system. International Journal of Computers Communications & Control, 9(6), 750–759.
Vanderhaegen, F. (2015). Can dissonances affect the resilience of autonomous systems? Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), United Nations Office, 13–17 April, Geneva, Switzerland.
Vanderhaegen, F. (2016a). Toward a Petri net based model to control conflicts of autonomy between Cyber-Physical&Human-Systems. IFAC-PapersOnLine, 49(32), 36–41.
Vanderhaegen, F. (2016b). A rule-based support system for dissonance discovery and control applied to car driving. Expert Systems with Applications, 65, 361–371.
Vanderhaegen, F. (2016c). Mirror effect based learning systems to predict human errors – Application to the Air Traffic Control. IFAC-PapersOnLine, 49(19), 295–300.
Vanderhaegen, F., & Carsten, O. (2017). Can dissonance engineering improve risk analysis of human–machine systems? Cognition Technology & Work, 19(1), 1–12.
Vanderhaegen, F., Cassani, M., & Cacciabue, P. (2010). Efficiency of safety barriers facing human errors. In IFAC Proceedings, 43 (pp. 1–6).
Vanderhaegen, F., Chalmé, S., Anceaux, F., & Millot, P. (2006). Principles of cooperation and competition: Application to car driver behavior analysis. Cognition Technology & Work, 8(3), 183–192.
Vanderhaegen, F., & Caulier, P. (2011). A multi-viewpoint system to support abductive reasoning. Information Sciences, 181(24), 5349–5363.
Vanderhaegen, F., & Zieba, S. (2014). Reinforced learning systems based on merged and cumulative knowledge to predict human actions. Information Sciences, 276(20), 146–159.
Vanderhaegen, F., Zieba, S., & Polet, P. (2009). A reinforced iterative formalism to learn from human errors and uncertainty. Engineering Applications of Artificial Intelligence, 22, 654–659.
Vanderhaegen, F., Zieba, S., Polet, P., & Enjalbert, S. (2011). A Benefit/Cost/Deficit (BCD) model for learning from human errors. Reliability Engineering & System Safety, 96(7), 757–766.
Weiner, E. L., Curry, R. E., & Faustina, M. L. (1984). Vigilance and task load: In search of the inverted U. Human Factors, 26(2), 215–222.
Wreathall, J. (2006). Properties of resilient organizations: An initial view. In E. Hollnagel, D. D. Woods, & N. Leveson (Eds.), Resilience engineering: Concepts and precepts (pp. 275–285). Ashgate.
Zieba, S., Polet, P., Vanderhaegen, F., & Debernard, S. (2010). Principles of adjustable autonomy: A framework for resilient human-machine cooperation. Cognition, Technology and Work, 12(3), 193–203.
Zieba, S., Polet, P., & Vanderhaegen, F. (2011). Using adjustable autonomy and human-machine cooperation for the resilience of a human-machine system – Application to a ground robotic system. Information Sciences, 181, 379–397.