Accid. Anal. & Prev. Vol. 21, No. 5, pp. 419-426, 1989. 0001-4575/89 $3.00 + .00. © 1989 Pergamon Press plc. Printed in Great Britain.

ERROR ANALYSIS AND APPLICATIONS IN TRANSPORTATION SYSTEMS

PETER F. LOURENS
Traffic Research Centre, P.O. Box 69, 9750 AB Haren, The Netherlands

(Received 16 July 1988; in revised form 14 February 1989)

Abstract-This paper presents an overview of the field of error analysis. Section 1 shows why discussions about human error are relevant for societal safety. With regard to safety research, it is important to predict abnormal events. On the machine side, reliability studies have proved their value, but predicting failures on the human side has turned out to be very difficult. Therefore, problems in how to define the notion of human error (Section 2) and how to classify different types of error (Section 3) are discussed. Some researchers have started to use systematic classifications of human error types based on a recent, hierarchical model of human task performance. The outline of this model is presented. Examples of error analysis studies from the field of transportation research (Section 4) provide some basic suggestions on how to reduce error rates. Some conclusions on error control are given in Section 5. The responsibility of managers and system designers in this respect is strongly emphasized.

1. INTRODUCTION

In man-machine studies, it is a generally recognized notion that there is a strong need for a better understanding of the failure mechanisms involved in human task performance. There are grounds to believe that a better understanding can now be achieved, because of recently increased interest in studies on human error coming from important developments in scientific approaches and societal consciousness. As Singleton (1973) said, the increased consequences and penalties of human error are now one of the fundamental problems of man in a technological society. Rasmussen, Duncan, and Leplat (1987) described the situation in the epilogue of their prominent book: "Behaviourist approaches have influenced human error and reliability studies since the earliest days. In consequence, analysis was performed only in achievement terms, the method of analysis became dependent on the context, and different approaches evolved in industrial risk analysis, work safety research, and traffic safety. The swing towards cognitive studies appears to lead to the possibility of more coordinated research in the traditionally different reliability and safety research areas."

It can be clearly shown that errors are often mentioned in the context of discussing so-called "dynamic" task performance. The picture given in the literature of the individual in such a task environment is that of a processor of information who occasionally makes errors and causes the system to deviate from an ideal or designed state, but who, as stated by Hale and Glendon (1987), is also well equipped to detect and correct errors and restore the system to normal.

As far as accident causation is concerned, human error seems to be a key concept. Kjellén (1987) referred to Heinrich (1950), who claimed that 88% of all industrial accidents are caused primarily by dangerous acts on the part of the individual worker. Approximately 70% of aircraft accidents and incidents have in the past been attributed to human error (Feggetter 1982). Sivak (1981) referred to Treat et al. (1977), who cited human factors as definite or probable causes in 93% of the 420 accidents investigated. Road user errors were a contributory factor in 95% of 2,130 accidents studied by Sabey and Taylor (1980).

There is, however, opposition to these numbers and the accompanying views. Does it follow from the impressive numbers that effective accident prevention should be concerned primarily with driver factors? "No," said Sivak (1985). The proportion of accidents assigned to driver error may be inflated by a bias to "blame the victim." There are also serious problems in the systems used to classify accident-related factors. Sivak mentioned a study by Brown (1971) in which two different systems were used to classify the same accidents.


The two systems yielded 19% and 89% driver factors, respectively! According to Singleton (1973), almost any accident involving a man-machine system can with equal validity be traced to inadequate machine design, inadequate training, inadequate instructions, inadequate attention, an unfortunate coincidence of relatively rare events, or just to human error. As far as pilot errors are concerned, Hill and Pile (1982) came to the conclusion that there are few objective signs of such errors. The "diagnosis" may be one of exclusion, no other factor having been found to account for the accident. Recently, the problem was indicated again by Sanders and McCormick (1987) in the latest edition of their book on human factors in engineering and design: "The percentage of accidents one attributes to human error depends on how one defines human error, the data source used to compile the statistics, and the alternative causes (other than human error) included in the tabulation."

It is not so amazing, then, that there is confusion, and some strange things going on, in how people and communities think about and deal with human error. The perception that human errors cannot be avoided leads to the conclusion that new technologies should be designed and implemented to soften their consequences. Thus, the realization of error-tolerant systems is a way to cope creatively with errors. Most of the time, however, one balks at the investment costs of such solutions or prefers another possibility: intensive training of the people involved. This has a strong tradition in aircraft piloting. The situation in road transportation is, on the other hand, extremely poor. Johannsen (1986) wondered about the unexplainably high tolerance threshold among people for traffic fatalities as compared with other kinds of fatalities. Winkler (1977) stated that traffic safety in the sense of avoidance of risks (defensive driving) is neither a goal in driver education nor a criterion in driver exams; in the exam, safety-enhancing attitudes and actions more often even make an unfavourable impression. Brown, Groeger, and Biehl (1987) wrote a devastating article on today's driver training. Drivers are said to fail in appreciating traffic hazards and to overestimate their ability to cope with those hazards. Both matters represent evidence of a learning failure, which points to inadequacies in driver training. Only if these contributory factors in accidents were attributable to recklessness (and there is little evidence of that) could driver training be cleared of responsibility for the behaviours in question. It has turned out that modifying the driver has been less successful than originally hoped. For McKenna (1982), this seems to stem from the fact that we lack a good theoretical understanding of human error. He also expressed the opinion that a successful modification of the environment is dependent upon our hypotheses concerning the nature of human error.

Although accident prevention is the ultimate aim of most transportation research, there are strong arguments for focusing attention on aspects of transportation systems other than accidents. According to Grayson and Hakkert (1987), in-depth accident investigation studies are very expensive: many studies have produced little worthwhile information at very great cost. Accident analyses, in general, suffer from severe reporting problems: there is a large percentage of underreporting, and the existing reports are far from perfect in accuracy and completeness.
In a similar fashion, Leplat (1987) concluded that if one considers an accident as an indicator of system malfunctioning, one is led to search for other indicators of this malfunctioning; errors and incidents may contribute to their discovery. Another reason why error analysis is valuable and, in fact, necessary was given by Kjellén (1987). He noted that the design of high-technology systems requires the coordination and integration of various kinds of expertise. As a consequence, the distribution of roles and tasks between operators and the rest of the system is rarely optimized from a safety point of view. During the operational phase, data on deviations, including human errors, are needed for corrective and preventive control measures.

2. DEFINITION OF HUMAN ERROR

To give a conclusive definition of human error is not easy. The formulations of several authors have to be considered. Rasmussen, Duncan, and Leplat (1987) defined human error as an act that is counterproductive with respect to the person's (private or subjective) intentions or goals.


A group of experts at the OECD defined human error as behavior, or its effects, which leads a system to exceed acceptable limits (Nicolet 1987). For Mashour (1974), error is the deviation of actual performance from the desired performance or criterion. Kruglanski and Ajzen (1983) described error as the type of experience a person might have after encountering an inconsistency between a given hypothesis, conclusion, or inference, and a firmly held belief.

All these definitions imply that a judge is needed to decide whether or not an error was involved in a situation. A drunken man who decides to drive his car is not in error according to Rasmussen's definition. Unless traffic laws are seen as part of the traffic system, he is not in error according to the OECD definition as long as everything goes well, but he is if he gets involved in an accident. The same ambiguity exists in Mashour's and in Kruglanski and Ajzen's definitions: whose criterion do we have to consider, that of the drunken man or that of the traffic authorities; and who is the person experiencing an inconsistency with a firmly held belief? It seems, therefore, that at the moment an objective definition of human error cannot be given: it depends upon the person(s) doing the judging.

The labelling or definition issue of human error is not so disturbing, however. Analysis can and should be concentrated on revealing the interrelations between system components (man, machine, environment, and regulations) with the objective of identifying a design, training, or organizational solution that will prevent future system failures (Patrick 1987).

A very interesting observation was made by Lewis (1981) for nonformal situations in which there are no definitive standards of correctness. In those situations, there exist alternative signals of error. They are so commonplace that people seldom see their significance. They include signals to the effect that one is being led into "ad-hoc-ery" in one's thinking, and also such privately discernible phenomena as feeling hurried or confused or depressed. The feeling of time pressure, of overload and a consequent sense of haste and impatience, is simply not recognized by many people as a sign of error that deserves to be taken seriously. "In the long run, haste is always iatrogenic," was how Lewis summarized his notion.
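The judge-dependence of the definitions discussed above can be made concrete in a small illustration. The sketch below is not from the original paper; it merely encodes, under assumed and deliberately simplified criteria, how the same act (the drunken driver of the example above) is or is not an error depending on whose definition and whose criterion is applied. All field names and criteria are hypothetical.

```python
# Hypothetical sketch: whether an act counts as an "error" depends on the judge's definition.
# The definitions paraphrase Rasmussen et al. (1987), the OECD group (Nicolet 1987), and
# Mashour (1974); the act, criteria, and flags are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Act:
    intended_outcome: str          # the actor's own goal ("got home")
    actual_outcome: str            # what actually happened ("got home" or "crashed")
    exceeded_system_limits: bool   # e.g. an accident occurred

def error_by_rasmussen(act: Act) -> bool:
    # Counterproductive with respect to the person's own (subjective) intention.
    return act.actual_outcome != act.intended_outcome

def error_by_oecd(act: Act, laws_are_part_of_system: bool, broke_law: bool) -> bool:
    # Behaviour (or its effects) that leads the system to exceed acceptable limits.
    return act.exceeded_system_limits or (laws_are_part_of_system and broke_law)

def error_by_mashour(act: Act, judges_criterion: str) -> bool:
    # Deviation of actual performance from a desired performance or criterion --
    # but whose criterion? A judge has to supply it.
    return act.actual_outcome != judges_criterion

# The drunken driver who nevertheless gets home safely:
drive = Act(intended_outcome="got home", actual_outcome="got home",
            exceeded_system_limits=False)

print(error_by_rasmussen(drive))                                             # False: goal achieved
print(error_by_oecd(drive, laws_are_part_of_system=False, broke_law=True))   # False: nothing went wrong
print(error_by_oecd(drive, laws_are_part_of_system=True, broke_law=True))    # True: limits exceeded
print(error_by_mashour(drive, judges_criterion="drove sober"))               # True for the authorities
```

Different judges, applying different criteria to the identical act, reach different verdicts; this is exactly the ambiguity the text describes.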

3. ANALYSIS AND CLASSIFICATION OF HUMAN ERROR

A first way to analyze human errors is to see whether they can be categorized. Many attempts have been made to classify the varieties of human error. One of the simplest classification schemes for individual, discrete actions is the distinction between errors of omission (failure to act at all) and errors of commission (the correct function at the wrong time). This distinction, however, very quickly runs into problems (Hale and Glendon 1987). Omission and commission are defined in terms of the task, not of the person carrying it out. The same reason can lie behind both types of errors, and many different reasons behind any one omission or commission: the classification has no psychological validity. To go beyond a purely behavioral level of description, several authors used an information processing model to classify human errors, usually in terms of input, mediation, and output errors. On this basis Nicolet (1987), for instance, came to eight main groupings of error:

(i) error of perception, not perceiving a signal [input]
(ii) error in decoding, misunderstanding a signal [input]
(iii) error in mental representation, wrong ideas [mediation]
(iv) nonrespect of a procedure or rule, wrong attitude [mediation]
(v) error in communication (man-man) [mediation]
(vi) delay in decision making [mediation]
(vii) error in the selection of an action [output]
(viii) error in the scale of effort/intensity of the action [output]
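As a reading aid, the grouping can be written down as a simple lookup structure. The sketch below only restates Nicolet's eight categories and the input/mediation/output stage each belongs to; the short category keys are shorthand introduced here, not Nicolet's own labels.

```python
# Minimal encoding of Nicolet's (1987) eight error groupings and the
# information-processing stage each belongs to. The dictionary keys are
# shorthand introduced here; descriptions and stages come from the list above.

from collections import Counter
from enum import Enum

class Stage(Enum):
    INPUT = "input"
    MEDIATION = "mediation"
    OUTPUT = "output"

NICOLET_GROUPINGS = {
    "perception":       (Stage.INPUT,     "not perceiving a signal"),
    "decoding":         (Stage.INPUT,     "misunderstanding a signal"),
    "representation":   (Stage.MEDIATION, "wrong ideas (mental representation)"),
    "procedure":        (Stage.MEDIATION, "nonrespect of a procedure or rule, wrong attitude"),
    "communication":    (Stage.MEDIATION, "man-man communication error"),
    "decision_delay":   (Stage.MEDIATION, "delay in decision making"),
    "action_selection": (Stage.OUTPUT,    "wrong selection of an action"),
    "action_intensity": (Stage.OUTPUT,    "wrong scale of effort/intensity of the action"),
}

# Example use: tally groupings per stage (2 input, 4 mediation, 2 output).
print(Counter(stage for stage, _ in NICOLET_GROUPINGS.values()))
```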

Although identifying the source of errors in terms of input-mediation-output behaviors is a first step in understanding the mechanisms of errors, it is still a descriptive categorization with little practical relevance. An important step further was made by Rasmussen (1987) and Reason (1987). They took advantage of recently developed notions on cognitive control in human task performance. The distinction between controlled and automatic processing gave rise to the conception that cognitive task performance should be conceived as hierarchically structured. In many task domains, researchers have claimed that three levels are needed to account for human performance: an automatic level, a semi-automatic/semi-controlled level, and a controlled level. Psychomotor tasks, such as traffic behavior, are also modelled in a three-level hierarchy (Van der Molen and Botticher 1987), where one speaks of the operational, the tactical, and the strategic level of task performance. From Rasmussen comes the now widely accepted distinction between:

1. Skill-based behavior, for which no conscious control is needed (the sensed information is perceived as time-space signals);
2. Rule-based behavior, which is controlled by a stored rule or procedure (information is perceived as signs);
3. Knowledge-based behavior, where performance is goal-controlled (information must be perceived as symbols).

During highly trained, skill-based routines, the decision sequence is not activated at all, and during familiar tasks based on know-how, the higher-level decision phases are bypassed by stereotype rules and habits. In new situations, or when a new problem arises in a familiar situation, the knowledge-based level of functioning is addressed: this level is only engaged when a person deliberately chooses to do so or when the rule-based level is seen to fail. Figure 1 (from Rasmussen 1987) gives a schematic presentation of these ideas.

With respect to error observability, it is a problem at the skill- and rule-based levels that the goals are not explicitly controlling the activity. This means that errors during performance may only become evident at a very late stage. Detection depends on the ability to monitor the process. Therefore, Reason (1987) emphasized the distinction between monitoring failures and decision or problem-solving failures. Monitoring failures occur in routine action, where action slips depend upon failures of attentional checking: either by inattention (omitting to make a check) or by overattention (checking at an inappropriate moment).

[Fig. 1. The hierarchical control model of human behavior: knowledge-based, rule-based, and skill-based levels (from Rasmussen 1987; reproduced with permission).]


Problem-solving failures occur in nonroutine action, where action mistakes depend upon failures of diagnostic inferences from knowledge-based mental models: people have been shown to use simplified heuristics, which bias their inferences and occasionally lead to different types of error (Kahneman, Tversky, and Slovic 1982).

To sum up the state of the art of error classification, one inherent problem is evident. Simple systems describing the behavioral category in which an error expressed itself have little explanatory power: they only answer the question "What went wrong?" Deeper classifications based on psychological conceptions address the more important question "Why did a particular wrong action occur?", but these depend upon inferences about what was going on inside the head of the error maker. Most of the latter classifications still suffer from inadequately or ambiguously defined categories and give rise to unreliable results, even with expert judges (Hale and Glendon 1987). Nevertheless, most of the necessary building blocks for a useful classification of error are known. With a careful design and appropriate training, reliable and valid judgments of psychologically relevant error types can be collected (Lourens 1988).
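The Rasmussen/Reason distinctions just summarized can be condensed into a small decision sketch. It is an illustration of the logic described above, not an implementation of Reason's GEMS: the attribute names and the coarse mapping from performance level and attentional state to failure type are assumptions made here for clarity.

```python
# Illustrative sketch of the distinctions above: skill-/rule-based routine action can
# suffer monitoring failures (slips, via inattention or overattention), while
# knowledge-based, nonroutine action can suffer problem-solving failures (mistakes,
# via biased diagnostic inference). All field names and the mapping are assumptions.

from dataclasses import dataclass

@dataclass
class ObservedError:
    level: str      # "skill", "rule", or "knowledge" (Rasmussen 1987)
    routine: bool   # was the person executing a familiar routine?
    attention: str  # "inattention", "overattention", or "adequate"

def failure_type(err: ObservedError) -> str:
    if err.level in ("skill", "rule") and err.routine:
        # Monitoring failure: an attentional check was omitted or made at the wrong moment.
        if err.attention in ("inattention", "overattention"):
            return f"monitoring failure (slip by {err.attention})"
        return "monitoring failure (slip)"
    if err.level == "knowledge":
        # Problem-solving failure: diagnostic inference from a mental model went wrong,
        # e.g. through simplified heuristics (Kahneman, Tversky, and Slovic 1982).
        return "problem-solving failure (mistake)"
    return "unclassified: requires inference about what went on in the error maker's head"

print(failure_type(ObservedError("skill", routine=True, attention="inattention")))
print(failure_type(ObservedError("knowledge", routine=False, attention="adequate")))
```

The last return branch mirrors the caveat in the text: deeper classifications depend on inferences about the error maker's cognition and are therefore harder to apply reliably.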

4. APPLICATIONS OF ERROR ANALYSIS IN TRANSPORTATION DOMAINS

The scope of this paper does not allow an exhaustive review of applied error analyses. To illustrate the way applied researchers carry out error analyses, however, and the results they offer in terms of suggestions on how to reduce human error, some examples drawn from the field of transportation research are given.

The airline industry has long been concerned with reducing pilot error. Reasonably accurate and well-defined empirical databases are prerequisites for error management. Much of the success of the airlines in safety enhancement is due to the meticulous and long-term establishment of databases on accidents and near-accidents (Doderlein 1987). Every six months, commercial pilots receive a medical examination and their skills are reassessed in full-scale simulator exercises (Senders 1980).

Errors made during aircraft operations in a simulator were investigated by Rouse and Rouse (1983). They aggregated error frequencies in two groups: (1) errors where something is omitted or clearly done incorrectly; and (2) errors where something is unnecessarily added. The introduction of computer-based display of information (compared to hardcopies) significantly reduced incorrect actions, but did not affect unnecessary actions. Rouse and Rouse therefore suspected unnecessary actions to have a very different cause than incorrect ones.

An interesting source of error is the reluctance to interfere and to tell people that they have made a mistake (Hale and Glendon 1987). This phenomenon is well documented in the analysis of aircraft accidents and has been called a "cockpit gradient": too great a difference in experience or rank creates a slope that is too steep for the junior member to climb.

The first systematic study to compare two error classification systems was done by Langan-Fox and Empson (1985). They collected air-traffic control errors and tried to classify them according to a system proposed by Reason and another by Norman. Reliability was established by discussing each error with the subject who made it and one other controller; only when the three people were in agreement on the category of error was the error classified. The results showed that, using Reason's classification, 94% of the errors were categorizable. Little success was obtained with Norman's system: over half of the collected errors could have been categorized under as many as three subcategories and across major category boundaries.
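The reliability procedure used by Langan-Fox and Empson, assigning a category only when the error maker and two other judges agree, can be sketched as a simple consensus rule. The code below is an assumed, simplified rendering of that rule, not their actual protocol; the category labels and the example data are invented for illustration.

```python
# Hedged sketch of a consensus rule in the spirit of Langan-Fox and Empson (1985):
# an error receives a category only if all three judges agree; otherwise it stays
# unclassified. The judgments and category labels below are invented examples.

from typing import Optional

def consensus_category(judgments: list[str]) -> Optional[str]:
    """Return the agreed category, or None if the judges disagree."""
    return judgments[0] if len(set(judgments)) == 1 else None

collected = [
    ["slip", "slip", "slip"],          # unanimous -> classified
    ["slip", "mistake", "slip"],       # disagreement -> left unclassified
    ["mistake", "mistake", "mistake"], # unanimous -> classified
]

classified = [c for j in collected if (c := consensus_category(j)) is not None]
rate = len(classified) / len(collected)
print(f"{len(classified)} of {len(collected)} errors classifiable ({rate:.0%})")
```

The "percentage categorizable" produced this way is the kind of figure the study reports (94% under Reason's scheme), and it is as much a property of the classification system as of the errors themselves.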


Wagenaar and Groeneweg (1987) analyzed 100 accidents at sea. To summarize their results: only four of the accidents occurred without any preceding human error; accidents were typically caused by more than one human error; usually the errors were made by one or two people; and the most frequent categories of errors were false hypotheses and dangerous habits.

The most significant cause of train accidents is failure to confirm a railway signal. Types of error that occur are (Mashour 1974): (1) detection error, misses plus false alarms; (2) perceptual error, where a signal is not perceived correctly, mostly due to inadequacies of the signal's physical characteristics; and (3) recognition error, which occurs in interpreting the meaning of a signal. Haga (1984) found that redundancy in information between signals facilitated automatic reactions without sufficient attentional control.

The two most frequent human errors in road traffic are, according to Rumar (1985), inadequate information acquisition (recognition errors) and inadequate information processing (decision errors). Berg, Knoblauch, and Hucke (1987) studied 79 vehicle-train accidents. The presence of alcohol was considered to dominate any other contributing factor. Recognition, decision, or action errors were hypothesized. The findings revealed that the credibility of the warning device is a more important problem than its conspicuity (53%-71% of the errors were decision errors). Lack of credibility occurs because of unnecessarily long warning times. Warnings of danger should be absent when the hazard is absent; otherwise people rapidly learn that it is not necessarily dangerous in that area. When drivers saw flashing warning signs on freeways, only 50% said they slowed down immediately; a further 20% did so if they could see road works at that point, while the rest drove on at undiminished speed (Hale and Glendon 1987).

These few examples already indicate a number of methods and starting points for safety managers who want to take the management of human error seriously:

1. Create a well-defined human factor database on incidents and accidents (a minimal sketch of such a record follows this list);
2. Reassess operators' performance on a regular basis;
3. Do not only reassess their skills when they are fully motivated, but also study the habits they develop in their routine activity;
4. Introduce computer-based display of information to provide unambiguous and credible signs of momentary value in the dynamic flow of activities (instead of static signs with only generalized meanings);
5. In group tasks, watch for risk-enhancing factors in social interaction patterns.
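Recommendation 1 above can be given a concrete, if minimal, shape. The record layout below is a hypothetical sketch of what one entry in a "well-defined human factor database" might capture, combining the classification dimensions this paper keeps returning to (performance level, error stage, attentional state, context); none of the field names or values come from an actual system.

```python
# Hypothetical sketch of a single entry in a human-factor incident/accident database,
# combining the dimensions discussed in this paper (performance level, error stage,
# attentional state, context). All field names and example values are illustrative only.

import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class IncidentRecord:
    incident_id: str
    occurred_on: date
    outcome: str                     # "near-accident" or "accident"
    task: str                        # e.g. "approach and landing", "level crossing passage"
    performance_level: str           # "skill", "rule", or "knowledge" (Rasmussen 1987)
    error_stage: str                 # "input", "mediation", or "output" (Nicolet 1987)
    attentional_state: str           # e.g. "inattention", "overattention", "time pressure"
    contributing_context: list[str]  # workload, alcohol, signal credibility, ...
    corrective_action: str           # the design, training, or organizational follow-up

record = IncidentRecord(
    incident_id="1989-0001",
    occurred_on=date(1989, 2, 14),
    outcome="near-accident",
    task="level crossing passage",
    performance_level="rule",
    error_stage="mediation",
    attentional_state="time pressure",
    contributing_context=["low warning-device credibility", "long warning time"],
    corrective_action="review warning timing; present warnings only when the hazard is present",
)

print(json.dumps(asdict(record), default=str, indent=2))
```

Keeping near-accidents in the same structure as accidents reflects the point made above that errors and incidents, not only accidents, are indicators of system malfunctioning.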

5. DISCUSSION

"Human fallibility has its origin in useful and adaptive processes. In particular, they arise from a natural tendency to minimize cognitive strain." In accepting this view of Reason (1988), two points of discussion remain: (1) how can we control the undesirable effects of human error; and (2) what place do we give to human error in our view of the world?

Errors in routine tasks are mostly of a special nature. Through practice and experience, cognition becomes routinized so that it proceeds outside the limits of continuous, conscious control. Control actions might, indeed, interfere with the smooth performance of, say, professional pianists. Where, however, a real danger of physical harm exists, automatism in information processing should be avoided to a considerable degree, especially when people are operating at the rule-based level (Haga 1984). An operator who is characterized by only rule-based behavior is a dangerous operator. An operator should understand the system all the time and predict system state directions (anticipation) to prevent the process from going into unsafe states (Wirstad 1988).

In summary, human error has three key elements (Rouse 1985): (1) misunderstandings (causing mistakes); (2) incompatibilities between task characteristics and human abilities and limitations (causing slips); and (3) catalysts, factors that affect the human's attentional state such that the effects of misunderstandings or incompatibilities are influenced positively or negatively (e.g. workload, motivation). Mistakes are mainly due to a lack of expertise. They are variable in form and difficult to predict in advance, but can be systematically reduced by appropriate design of systems, training programs, and selection methods. Slips can be reasonably predicted. They are a form of misapplied expertise and mainly involve inappropriate selection of well-used routines. The primary means of reducing the frequency of slips is improved equipment and job design.

Society has not resolved the distinction between two approaches regarding human error (Singleton 1972). One assumes that human beings are responsible for their own actions and, therefore, for the errors they make. The opposite view is that errors are an inherent component in all human performance, that they should be planned for, and that when they do occur, the fault should be traced to the system designer rather than the operator.

There is a reluctance or inability on the part of management to enforce rules and, consequently, norms develop that are based on (slight) violation of the rules (Edwards 1981). These norms often have the effect of enabling work to be completed more quickly and easily. However, in the event of an accident, the violation of the rules is cited as a cause. Thus, error is considered avoidable by greater effort on the part of the human operator. Accidents are assumed to be determined by localized factors, and the possibility that they might be useful as a data source for assessing system functioning is rarely considered.

Hale and Glendon (1987) clearly stated their position on the issue: "It is still common to see designers or managers blame the victims and to see no need for task redesigns, because they claim that the individual could have acted otherwise. This is an understandable, but usually false attribution. Managers and designers with such beliefs are, in effect, denying their own responsibility to take action. They are in effect saying, as decision makers, that they are not prepared to accept the costs involved in the redesign of the tasks in return for a real improvement in safety. It is the behavior and attitudes of the decision makers, and not those of the potential victims, which need to be modified."

Most research on human error is still empirically based, and much of it is not driven by strong theoretical notions (Patrick 1987). Lately, researchers such as Rasmussen and Reason have presented models that can be used to support more theoretically driven research in the area. With such tools, it seems high time, as Langan-Fox and Empson (1985) stated, to take error collection into the real world and, as Reason suggested, to "put flesh on the theoretical bones."

REFERENCES

Berg, W. D.; Knoblauch, K.; Hucke, W. Causal factors in railroad-highway grade crossing accidents. Transpn. Res. Rec. 847:47-54; 1987.

Brown, G. W. Analysis of 104 eastern Iowa motor vehicle casualty accidents. In: Proceedings of the Third Triennial Congress on Medical and Related Aspects of Motor Vehicle Accidents. Ann Arbor, MI: Highway Safety Research Institute; 1971:216-218.

Brown, I. D.; Groeger, J. A.; Biehl, B. Is driver training contributing enough towards road safety? In: Rothengatter, J. A.; de Bruin, R. A., editors. Road users and traffic safety. Assen: Van Gorcum; 1987:135-156.

Doderlein, J. M. Introduction. In: Singleton, W. T.; Hovden, J., editors. Risk and decisions. Chichester: John Wiley & Sons; 1987:1-8.

Edwards, M. The design of an accident investigation procedure. Appl. Ergon. 12(2):111-115; 1981.

Feggetter, A. J. A method for investigating human factor aspects of aircraft accidents and incidents. Ergon. 25(11):1065-1075; 1982.

Grayson, G. B.; Hakkert, A. S. Accident analysis and conflict behaviour. In: Rothengatter, J. A.; de Bruin, R. A., editors. Road users and traffic safety. Assen: Van Gorcum; 1987:27-59.

Haga, S. An experimental study of signal vigilance errors in train driving. Ergon. 27(7):755-765; 1984.

Hale, A. R.; Glendon, A. I. Individual behaviour in the control of danger. Amsterdam: Elsevier; 1987.

Heinrich, W. W. Industrial accident prevention. New York: McGraw-Hill; 1950.

Hill, I. R.; Pile, R. L. C. Some legal implications of pilot error. Aviat. Space Environ. Med. 53(7):687-693; 1982.

Johannsen, G. Vehicular guidance. In: Willumeit, H.-P., editor. Human decision making and manual control. Amsterdam: North-Holland; 1986:3-16.

Kahneman, D.; Tversky, A.; Slovic, P., editors. Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press; 1982.

Kjellén, U. A changing role of human actors in accident control: Implications for new technology systems. In: Rasmussen, J.; Duncan, K.; Leplat, J., editors. New technology and human error. Chichester: John Wiley & Sons; 1987:169-175.

Kruglanski, A. W.; Ajzen, I. Bias and error in human judgment. Euro. J. Soc. Psych. 13(1):1-44; 1983.

Langan-Fox, C. P.; Empson, J. A. C. "Actions not as planned" in military air-traffic control. Ergon. 28(11):1509-1521; 1985.

Leplat, J. Accidents and incidents production: Methods of analysis. In: Rasmussen, J.; Duncan, K.; Leplat, J., editors. New technology and human error. Chichester: John Wiley & Sons; 1987:133-142.

Lewis, B. N. An essay on error. Instruc. Sci. 10(3):237-257; 1981.

Lourens, P. F. Learner drivers: Analysis of their errors and characteristic actions. In: van Schagen, I., editor. Annual report 1987. Haren: Traffic Research Centre, University of Groningen; 1988:59-62.

Mashour, M. Human factors in signalling systems: Specific applications to railway signalling. New York: John Wiley & Sons; 1974.

McKenna, F. P. The human factor in driving accidents: An overview of approaches and problems. Ergon. 25(10):867-877; 1982.

Nicolet, J. L. Human error typology. Paper presented at the conference "La maitrise des risques technologiques," Paris, December 1987.

Patrick, J. Methodological issues. In: Rasmussen, J.; Duncan, K.; Leplat, J., editors. New technology and human error. Chichester: John Wiley & Sons; 1987:327-336.

Rasmussen, J. Reasons, causes, and human error. In: Rasmussen, J.; Duncan, K.; Leplat, J., editors. New technology and human error. Chichester: John Wiley & Sons; 1987:293-301.

Rasmussen, J.; Duncan, K.; Leplat, J., editors. New technology and human error. Chichester: John Wiley & Sons; 1987.

Reason, J. Generic Error-Modelling System (GEMS): A cognitive framework for locating common human error forms. In: Rasmussen, J.; Duncan, K.; Leplat, J., editors. New technology and human error. Chichester: John Wiley & Sons; 1987:63-83.

Reason, J. Framework models of human performance and error: A consumer guide. In: Goodstein, L.; Andersen, H.; Olsen, S., editors. Tasks, errors and mental models. London: Taylor & Francis; 1988:35-49.

Rouse, W. B. Optimal allocation of system development resources to reduce and/or tolerate human error. IEEE Trans. Syst. Man Cybern. SMC-15(5):620-630; 1985.

Rouse, W. B.; Rouse, S. H. Analysis and classification of human error. IEEE Trans. Syst. Man Cybern. SMC-13(4):539-549; 1983.

Rumar, K. The role of perceptual and cognitive filters in observed behavior. In: Evans, L.; Schwing, R. C., editors. Human behaviour and traffic safety. New York: Plenum Press; 1985:151-165.

Sabey, B. E.; Taylor, H. The known risks we run: The highway. In: Schwing, R. C.; Albers, W. A., Jr., editors. Societal risk assessment: How safe is safe enough? New York: Plenum Press; 1980:43-65.

Sanders, M. S.; McCormick, E. J. Human factors in engineering and design. Singapore: McGraw-Hill; 1987.

Senders, J. W. Is there a cure for human error? Psych. Today, April:52-62; 1980.

Singleton, W. T. Techniques for determining the causes of error. Appl. Ergon. 3(3):126-131; 1972.

Singleton, W. T. Theoretical approaches to human error. Ergon. 16(6):727-737; 1973.

Sivak, M. Human factors and highway-accident causation: Some theoretical considerations. Accid. Anal. Prev. 13:61-64; 1981.

Sivak, M. Multiple ergonomic interventions and transportation safety. Ergon. 28(8):1143-1153; 1985.

Treat, J. R.; Tumbas, N. S.; McDonald, S. T.; Shinar, D.; Hume, R. D.; Mayer, R. E.; Stansifer, R. L.; Castellan, N. J. Tri-level study of the causes of traffic accidents: Final report. Vol. I: Causal factor tabulations and assessments. Bloomington, IN: Institute for Research in Public Safety, Indiana University; March 1977.

Van der Molen, H. H.; Botticher, A. M. T. Risk models for traffic participants: A concerted effort for theoretical operationalizations. In: Rothengatter, J. A.; de Bruin, R. A., editors. Road users and traffic safety. Assen: Van Gorcum; 1987:61-81.

Wagenaar, W. A.; Groeneweg, J. Accidents at sea: Multiple causes and impossible consequences. Intern. J. Man-Mach. Stud. 27:587-598; 1987.

Winkler, W. Systembedingungen des Fahranfaengers. Unfall- und Sicherheitsforschung Strassenverkehr, Heft 8. Koeln: BAST; 1977.

Wirstad, J. On knowledge structures for process operators. In: Goodstein, L.; Andersen, H.; Olsen, S., editors. Tasks, errors and mental models. London: Taylor & Francis; 1988:50-69.