Human reliability analysis: a human point of view

Reliability Engineering and System Safety 38 (1992) 71-79

Yushi Fujita
Mitsubishi Atomic Power Industries, Inc., 4-1, Shibakouen 2-Chome, Minato-ku, Tokyo 105, Japan

Present human reliability analysis (HRA) is like a Centaur in that it is only half human. Error psychology is expected to provide a path leading us in the right direction, and AI technology can be utilized to better understand errors. It is pointed out that the next-generation HRA will require two additional components, in which analyses are made to identify two sets of error-prone situations (EPSs): one in which human operators are likely to initiate critical events, and another in which they are likely to override engineered safeguard features. Carefully designed empirical studies are believed to be useful in obtaining general knowledge about critical EPSs. It is suggested that any future effort should be made based on sound psychological concepts.

INTRODUCTION

Many domain specialists and error psychologists argue that none of the existing human reliability analysis (HRA) methods can adequately handle errors that might occur at nuclear power plants (NPPs). The existing HRA methods are all said to be stop-gap models and therefore less than adequate. The HRA community is seething with arguments.1 One researcher asks 'Where shouldst thou [HRA] turn?' and stresses the need for a second coming.2 Another deplores the way arguments are being made, stating '... to assert that nothing in the discipline is useful is not constructive.'3 This chaos within the HRA community exists, I believe, because HRA is like a Centaur in that it is only half human: HRA research has been unable to find a path that will take it closer to a more human form. This is a review paper which attempts to give an overview of the current situation in the HRA community; it also proposes a direction for HRA that will make it more fully human and useful.

THE CENTAUR

HRA is defined as any method by which human reliability is estimated.4 Here, human reliability is the success probability of human activities whose failures are likely to have a significant impact on the reliability of a human-machine system.

Historically, HRA has evolved from reliability technology for hardware systems. Hence, conventionally, the probability that humans are unable to accomplish the tasks assigned to them (i.e. human error probability: HEP) is treated as equivalent to a component failure rate. Functional event trees or fault trees are then developed in terms of HEPs to obtain a prediction of overall human reliability. Though HRA can be an independent discipline, its marriage with probabilistic safety assessment (PSA) is one of the recent central technical issues. It is generally assumed that PSA takes the lead: PSA identifies the critical tasks assigned to humans, and HRA is then used to estimate the human reliability of those tasks. Important premises here are as follows:

--How reliable humans are can be quantified in terms of probability.
--The impact that humans can have on a human-machine system is predictable by evaluating the likelihood of failures of the critical tasks assigned to humans.
--HRA is a subordinate component of PSA.

THE EVOLUTION OF HRA

The history of HRA methods has already been well documented.5-7 Here, only a very brief summary is given to facilitate the understanding of subsequent discussions. The beginning of HRA dates back to the early nineteen fifties.

One of the earliest studies was made at Sandia National Laboratory (SNL) in the USA for the study of nuclear weapon systems. For this study, conventional reliability analysis was adopted. Throughout the nineteen fifties, similar studies were made at SNL on the manufacturing and field handling of nuclear weapons. Much effort went into developing a human reliability database called the American Institute for Research (AIR) Data Store until the beginning of the nineteen sixties.8 As the sixties progressed, a group of researchers at SNL began to publish their work. In 1964, a symposium was held in the USA at which the then state-of-the-art HRA methods were presented.9 The earliest form of the Technique for Human Error Rate Prediction (THERP) was introduced there; it was later refined into the so-called 'Handbook' (i.e. NUREG/CR-1278).10 Among the other methods presented at the symposium was one based on Monte Carlo simulation.

According to Swain, THERP had reached a point by the early nineteen seventies where it could be applied to real-world industrial problems. At the same time, the Monte Carlo simulation technique emerged as another significant research method; Siegel and his colleagues made a number of studies based on it in this period.11

From the late nineteen seventies to the mid-eighties, an increasing variety of HRA methods began to emerge.12-23 No doubt the Three Mile Island accident, which occurred in 1979, gave a strong impetus to research activities. Since the technical spectrum is so diverse, it is not possible to summarize all these methods here; only a few topics can be touched on. Although THERP continued to be identified as the most generic HRA method during that period, researchers began to realize the need for a method capable of handling cognitive errors, namely errors in diagnosis. The answer that many of them reached was to use a time measure (i.e. the time available for diagnosis); the recognition of the time-error trade-off no doubt motivated this as well. Among the earliest forms of this approach proposed in the nuclear industry was the Time Reliability Correlation (TRC) developed by Wreathall and his colleagues.24 A more elaborate treatment was given by Dougherty.7 In its final version, the Handbook also introduced a similar approach.10 The S-R-K scheme developed by Rasmussen had a strong influence on human factors research in the nuclear industry,25 and HRA was not an exception. Hannaman and his colleagues attempted to combine the S-R-K scheme with the time-reliability correlation concept, and proposed an HRA method called Human Cognitive Reliability (HCR).26

Incorporation of HRA into PSA was another technical interest in the mid-eighties, leading to the emergence of framework models. The most typical of them is the Systematic Human Action Reliability Procedure (SHARP) proposed by Hannaman et al.27 SHARP adopts HCR and THERP as standard methods for evaluating cognitive errors and procedural errors, respectively.28

Most of the technical topics identified from the mid-sixties to the mid-eighties have continued to be issues from the late eighties to the present. However, the following noticeable new trends have emerged recently:

--an inclination toward error psychology;
--the utilization of artificial intelligence (AI) technology to develop cognitive simulators;
--a realization of the need for research into the impacts of organizational and group factors.

HYBRIDS

Questions on the PSA-HRA framework

One question has not been asked broadly, but it must have crossed everyone's mind at least once: is human reliability quantifiable? Hollnagel argues that human reliability is a concept and cannot be expressed by numbers.29 He suggests that human actions be analyzed in terms of a contextual framework, and proposes a function-oriented task analysis. Another fundamental question was posed by Woods about an inherent limitation of the current PSA-HRA framework.30 He points out that the current framework cannot handle an accident sequence that a human has initiated, even though most disasters are considered to have been initiated by humans. He concludes that the general characteristics of disasters and near misses could provide a basis for meaningful measures of disaster potential. These are views that many cognitive scientists and human factors engineers share. More discussion is presented in the next section.

Cognitive errors

1 Time-reliability correlation

The time-reliability correlation approach was once considered a candidate breakthrough, but it is now taken as one of the stop-gap models. The fundamental assumption was that the longer the time available for diagnosis, the higher the reliability. In 1986, the Electric Power Research Institute (EPRI) in the USA launched a series of simulator experiments aimed at validating HCR with data collected from the simulators. Several assumptions were made.

Among those assumptions, the following are particularly important:

--The response time (RT) distribution (or non-response time distribution) can be standardized by an appropriate time factor; the mean time was proposed.
--Any RT distribution can be categorized into one of three standardized distributions, characterized by the skill-, rule-, or knowledge-based behavior proposed by Rasmussen.

The first assumption is particularly attractive from the engineering viewpoint: once it is validated, HRA practitioners can obtain the distribution for an arbitrary event just by estimating the standardizing time factor. In their final project report, EPRI concluded that the first assumption appears to be valid.31 The second, however, appears to be invalid, although there still exist three classes of RT patterns; it is said that those patterns can be accounted for by differences in the cue-response structure influenced by the logic of the procedures. It was also concluded that the log-normal function appears to be the best mathematical representation, for the following reasons:

--No significant difference was observed in the level of data fitting by the log-normal, two-parameter Weibull (WE2), and three-parameter Weibull (WE3) functions.
--The log-normal is preferable from the viewpoint of statistical data handling.

Although HCR once enjoyed a high reputation, it is now receiving fierce criticism.32-34 This criticism can be summed up in two points:

--It lacks a sound psychological foundation. Consequently, a variety of problems arise:
  • it cannot differentiate cognitive processes that result in the same RT or RT pattern;
  • it cannot account for the rapid switching between skill-, rule-, and knowledge-based behaviors.
--Time has inherent limitations that cause problems such as the following:
  • it cannot treat errors of commission that could happen during the course of diagnosis;
  • it cannot account for the phenomenon whereby the chance to err increases as the time to complete increases.

It could be said that the foundation of the original HCR is very weak from the viewpoint of psychology. (See the section 'Way of modeling' later in this paper for the analysis of RT data.)
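The standardized time-reliability idea can be made concrete with a short numerical sketch. The fragment below is a minimal Python illustration, not the EPRI analysis itself: the response-time sample is hypothetical, and the log-normal fit and the non-response probability read off from it merely show how a normalized time-reliability curve would be used once the first assumption is accepted.

```python
import numpy as np
from scipy import stats

# Hypothetical crew response times (seconds) for one simulated event.
rt = np.array([112.0, 95.0, 140.0, 170.0, 88.0, 123.0, 150.0, 101.0, 132.0, 118.0])

# First assumption: standardize by an appropriate time factor (here, the mean time).
rt_norm = rt / rt.mean()

# Fit a log-normal distribution to the normalized response times
# (location fixed at zero, as is usual for response-time data).
shape, loc, scale = stats.lognorm.fit(rt_norm, floc=0.0)

# Probability of NOT having responded by a given time (here t = 2 x mean time).
# Under the TRC assumption, the longer the available time, the lower this value.
t_norm = 2.0
p_non_response = stats.lognorm.sf(t_norm, shape, loc=loc, scale=scale)
print(f"Estimated non-response probability at t = 2 x mean time: {p_non_response:.3e}")
```

The criticisms summarized above apply regardless of the fitting details: identical curves can arise from quite different cognitive processes.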


2 Error psychology

It seems that the HRA community did not know of the existence of the school of error psychology until quite recently. Illustrating this school, Reason published a book in which he details its ideas.35 In the book, Reason defines errors as follows:

Errors will be taken as a generic term to encompass all those occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome, and when these failures cannot be attributed to the intervention of some agency.

He then describes an elaborate theoretical framework for error categorization. First, error types and error forms are differentiated. The error type is a way of differentiating errors occurring at different cognitive stages. Three primary error types are considered: mistakes, lapses, and slips. Each of them happens at a corresponding cognitive stage: mistakes at the stage of planning, lapses at the stage of storage, and slips at the stage of action. The primary error types can be related to Rasmussen's performance levels: slips and lapses with the skill-based level, rule-based mistakes with the rule-based level, and knowledge-based mistakes with the knowledge-based level.

The error form refers to varieties of errors found repeatedly in all kinds of cognitive activities regardless of their type. An important concept here is that this ubiquitous nature indicates the presence of universal processes in which the error forms are rooted. Similarity bias and frequency bias are considered to be the factors that shape error forms. Reason thinks that this way of differentiating errors can provide some practically usable generalizations. He concludes that cognitive operations which are under-specified for some reason tend to default to contextually appropriate, high-frequency responses.

3 Cognitive simulator

Another forefront of research into cognitive errors is the development of what can be called 'cognitive simulators'. In such a study a computerized human model is developed and run to simulate both cognitive and action behaviors; this helps to better understand the underlying cognitive mechanisms that induce errors. Reason calls such a cognitive simulator 'a fallible machine'.35

The development of simulated human models has its own history.36 It was mentioned earlier in this paper that Siegel and his colleagues conducted a number of studies based on the Monte Carlo simulation technique. In their review paper, Bittner & Morrissey identify these as level-two models.37 They state that this line of studies has reached a point of evolving into its third level: 'integrated microprocess simulation models'.

Here, microprocess simulation models are models that treat individual sub-tasks in more detail. The main topics in this field include both performance and workload; the time required to complete a task (i.e. RT) and accuracy are used as performance measures.

A slightly different line of studies has recently emerged, mainly concerned with a better understanding of cognitive processes utilizing AI technology. Recent progress in AI has reached a point where researchers can benefit from rich computer environments, which enable them to examine models using a variety of new approaches (e.g. knowledge-based approaches, neural networks). Much of this work was motivated by an early attempt at level-three modeling: the Human Operator Simulator (HOS). Reason developed a knowledge-based cognitive model which can err;35 in that model, heuristic knowledge is used to model an error-inducing mechanism influenced by frequency gambling. Similarly, Ispra is attempting to develop an integrated cognitive simulator (COSIMO) in which the interactions between human operators and a large-scale process plant are modeled,38 and a variety of cognitive characteristics are planned to be implemented using AI techniques. Woods and Roth have specified a cognitive simulator called Cognitive Environment Simulation (CES) for the U.S. Nuclear Regulatory Commission (NRC),39 the final goal being to utilize the simulator for PSA-HRA. The Nuclear Power Engineering Center (NUPEC) of Japan has also launched a similar project (a model called CAMEO).40

Though the specific AI techniques used in those projects are diverse, there are several common factors. One such common factor is an error-inducing mechanism influenced by the limitation of attentional resources. The basic idea is that when a human is in a 'cognitively loaded' situation, he or she tends to show the following characteristics:

--loss of the ability to allocate attentional resources to topics outside his or her immediate interest;
--a tendency to use 'stronger' knowledge first.

Here, the strength of knowledge is considered to be influenced by the frequency of previous corresponding situations and by other factors that entrench it in memory (e.g. earlier learning). When a strong knowledge chunk is excited and partially matched with the observations, the human tends to conclude that the observations are resolved. This process can be implemented by a simple knowledge-based approach, as sketched below. What practical benefit can be obtained from cognitive simulation still needs to be asked; nevertheless, it will undoubtedly benefit researchers in obtaining better insights into cognitive processes.
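The attentional-resource mechanism just described can be sketched in a few lines of code. The toy model below is my own illustration under simplifying assumptions, not the COSIMO, CES, or CAMEO implementations; the knowledge chunks, cue sets, and load-dependent threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """A knowledge chunk: the cues it expects and its 'strength' (roughly, frequency of past use)."""
    name: str
    cues: set
    strength: float

# Hypothetical knowledge base: the frequently exercised chunk is the 'strongest'.
knowledge = [
    Chunk("steam generator tube rupture", {"radiation alarm", "SG level rise", "pressure drop"}, 0.9),
    Chunk("small LOCA",                   {"pressure drop", "sump level rise", "containment humidity"}, 0.4),
]

def interpret(observations: set, cognitive_load: float) -> Chunk:
    """Pick the chunk that 'explains' the observations.

    Under low load, every candidate is checked and the best overall match wins.
    Under high load, attentional resources shrink: the strongest chunk that
    matches even partially is accepted and the search stops (frequency gambling).
    """
    ranked = sorted(knowledge, key=lambda c: c.strength, reverse=True)
    threshold = 1.0 - cognitive_load              # high load => low evidence threshold
    for chunk in ranked:                          # strongest knowledge is tried first
        match = len(chunk.cues & observations) / len(chunk.cues)
        if match >= threshold:
            return chunk                          # partial match treated as 'resolved'
    return max(ranked, key=lambda c: len(c.cues & observations) / len(c.cues))

# Ambiguous observations: only 'pressure drop' is seen.
print(interpret({"pressure drop"}, cognitive_load=0.8).name)                      # strong chunk wins under load
print(interpret({"pressure drop", "sump level rise"}, cognitive_load=0.1).name)   # careful search finds better match
```

Under high load the strongest partially matching chunk is accepted and the search stops; under low load the best overall match is sought. This is the qualitative behavior the cognitive simulators aim to reproduce.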

HEP classification and data collection

The way HEPs are classified should reflect the model that specifies how errors are handled; depending on the model considered, different classification schemes can be devised. Error psychologists consider that a scheme at the conceptual level needs to be adopted.

There were two basic classification schemes adopted in early HRA methods: one looking at behavioral and situational elements, the other at task elements. Altman proposed a classification scheme in which combinations of behavioral and situational influences were described in terms of a molar or data-cell classification.41 Meister proposed a different scheme based on task units comprising the manipulated equipment and their associated actions.42 Rasmussen points out that these existing classification schemes only scratch the surface: none of them can explain the underlying cognitive error mechanisms. A classification scheme that incorporates a cognitive structure was then proposed.43

Reason says that errors can be seen from three viewpoints: behavioral, contextual, and conceptual.35 According to this framework, most of the existing classification schemes are based on the behavioral view. The contextual view looks at the relationships between error types and situational or task characteristics; it can provide a much better classification scheme than one based on the behavioral view. He argues, however, that it is of lesser utility than one based on the conceptual view, because the contextual view cannot account for the fact that the same or similar situations do not always trigger the same error forms. He stresses that the conceptual view is the most essential and useful. His categorization of error types and error forms, summarized earlier, forms the basis of his classification scheme.

The approaches proposed by Rasmussen and Reason are at the forefront of error classification. However, the question of whether or not they can be incorporated into the PSA-HRA framework remains unanswered.

Even when the current classification schemes are accepted as stop-gap models, there are many problems, and researchers consider these to be responsible for the paucity of data. Owing to the difficulty of obtaining a sufficient amount of field data, researchers have been trying to collect data by means of subjective judgments and simulator experiments. Classical psychological scaling techniques have been utilized to collect subjective data. Though such a technique was already proposed in the late nineteen sixties,44 the most noticeable one in this family is the Success Likelihood Index Methodology (SLIM) developed by Embrey and his colleagues.45
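The core arithmetic of SLIM is simple enough to show directly. The sketch below follows the commonly described form of the method, in which a Success Likelihood Index (a weighted sum of PSF ratings) is converted to an HEP through a log-linear calibration against two tasks with known HEPs; the weights, ratings, and anchor values here are entirely hypothetical.

```python
import math

def sli(weights, ratings):
    """Success Likelihood Index: weighted sum of PSF ratings (weights sum to 1)."""
    return sum(w * r for w, r in zip(weights, ratings))

# Hypothetical PSF weights (importance) and ratings (quality, 0-9) from expert judges.
weights = [0.4, 0.3, 0.2, 0.1]            # e.g. procedures, training, time, interface
task_ratings = [6, 7, 3, 5]

# Calibration: two anchor tasks with known HEPs and known SLIs.
# log10(HEP) = a * SLI + b  (the usual SLIM calibration relation)
sli_lo, hep_lo = 2.0, 1e-1                # hypothetical 'poor' anchor task
sli_hi, hep_hi = 8.0, 1e-4                # hypothetical 'good' anchor task
a = (math.log10(hep_hi) - math.log10(hep_lo)) / (sli_hi - sli_lo)
b = math.log10(hep_lo) - a * sli_lo

task_sli = sli(weights, task_ratings)
hep = 10 ** (a * task_sli + b)
print(f"SLI = {task_sli:.2f}, estimated HEP = {hep:.2e}")
```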

Although SLIM and other subjective techniques involve statistically sound processes, many HRA specialists think that the validity of the subjective judgments is dubious.

Another approach is the utilization of simulators. It is a straightforward approach in which a full-scope training simulator is utilized. A study made by General Physics reports that HEPs obtained from simulations generally support the HEPs listed in the Handbook.46 Similar activities can be identified in Japan.47,48 Though some specialists still hope to benefit from such an approach, many doubt the absolute accuracy of the data. There is much evidence that the following are de facto problems:

--simulator bias;
--the difficulty of extrapolating data collected from a limited number of experiments.
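The second of these problems is easy to quantify. The sketch below uses standard binomial statistics (not anything from the cited studies; the counts are hypothetical) to show how wide the uncertainty on an HEP estimated from a modest number of simulator trials remains.

```python
from scipy import stats

# Hypothetical simulator campaign: 2 failures observed in 40 crew trials.
failures, trials = 2, 40
hep_point = failures / trials

# Exact (Clopper-Pearson) 90% confidence interval on the underlying HEP.
alpha = 0.10
lower = stats.beta.ppf(alpha / 2, failures, trials - failures + 1)
upper = stats.beta.ppf(1 - alpha / 2, failures + 1, trials - failures)
print(f"point estimate {hep_point:.3f}, 90% CI ({lower:.4f}, {upper:.3f})")
# The interval spans more than an order of magnitude -- far too wide to
# discriminate between handbook HEPs of, say, 1e-2 and 1e-1.
```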

TOWARDS THE NEXT GENERATION

Way of modeling

Here I will allude to studies of RT data to discuss a way of modeling. Both Kantowitz and I have criticized the logic behind the HCR model.34 The heart of the critique was that HCR merely attempts to fit data, with little consideration given to the underlying processes. We argued that one must carefully distinguish between mathematical description (or curve fitting) and mathematical modeling. What needs to be done is as follows:

--develop a conceptual model with theoretically sound hypotheses;
--find an appropriate quantitative representation;
--collect the right kind of data;
--estimate the parameters.

If one starts out without developing a conceptual model, then he or she might end up with a set of merely mathematically fitted data which tells us nothing about the underlying processes. Kantowitz and his colleagues have supported this view, based on their analysis of data collected from a training simulator using a WE3 initially49 and cascaded WE3s subsequently.50 Both studies concluded that the original logic of HCR cannot be supported. A model that builds on this earlier work is presently being studied.51 Its hypotheses are as follows:

--PSFs attributable to individual crews or crew members are considered to cause systematic variations of the RT distribution, whereas PSFs attributable to external factors, or factors which represent the population, are considered to shape the distribution. (Task difficulty (TD) appeared to be a PSF of the latter group which characterizes the data at hand.)
--The mean time will be monotonically proportional to TD.
--When overloaded, the variance will be larger.
--For low TD, the distribution is expected to be positively skewed, since a large fraction of the population can finish the relevant task quickly.
--For moderately high TD, the distribution is expected to become symmetric; it then becomes negatively skewed for even higher TD, because some fraction of the population begins to respond slowly as a function of TD.
--For very difficult tasks, some fraction of the data will be lost. The loss of data is expected to start from the right end of the distribution; hence it will have the effect of skewing the distribution in the positive direction.

This conceptual model requires a mathematical representation whose parameters can be related to TD as hypothesized above. WE3 appears to be a good candidate.52 Data collected from four events (the same data as used in the previous two studies) were fitted.53 Scores on TD were obtained separately, using subjective judgments by expert raters. It then appeared that the mean time and the variance were increasing functions of TD, as expected. Figure 1 shows the relationship between the scores on TD and the skewness. Two sets of skewness are presented: one estimated from the WE3 fit, the other calculated directly from the data.
Fig. 1. Relationship between task difficulty and skewness. The events used are Anticipated Transients Without Scram (ATWS), Steam Generator Tube Rupture (SGTR), Feedwater Line Break (FLB), and Loss of Coolant Accident (LOCA) with the rupture occurring at the steam phase. Circles and dots represent the skewness estimated from the WE3 fit and the skewness calculated directly from the data, respectively. R², a measure of the goodness of fit, is 0.925 (N = 44), 0.978 (N = 49), 0.946 (N = 47), and 0.978 (N = 38) for ATWS, SGTR, FLB, and LOCA, respectively. (Nmax = 49)

As expected, the skewness starts with a positive value for the easiest event (i.e. ATWS), goes down to the negative side, and swings back in the positive direction as TD increases. This tendency supports our hypotheses. However, the skewness becomes positive for the most difficult event (i.e. LOCA). The level of data loss is 10, 0, 4, and 20% for ATWS, SGTR, FLB, and LOCA, respectively. The true fraction for ATWS is believed to be much lower, since there was a difficulty during the data reduction process. Hence, the level of data loss for LOCA is considerably higher than that of the other events, and this is believed to have caused its distribution to be positively skewed. This result implies that WE3 cannot fully account for the characteristics of the data, since its skewness mathematically converges to zero when the shape parameter approaches infinity. (Note that this problem of truncation is mathematically testable.) I will not expound further on our studies, since it is not the purpose of this paper to discuss them in detail. Nevertheless, I would like to stress that this example, though much more needs to be examined, clearly illustrates how more useful insights can be obtained from a carefully developed model.
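For readers who want to repeat this kind of check on their own data, the fragment below computes the two quantities plotted in Fig. 1: the skewness implied by a WE3 fit and the skewness calculated directly from the sample. It is a minimal sketch that uses SciPy's maximum-likelihood fit rather than the nonlinear least-squares fitting used in our study, and the response-time values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical crew response times (seconds) for one simulated event.
rt = np.array([190.0, 205.0, 221.0, 248.0, 260.0, 274.0, 290.0, 301.0,
               322.0, 350.0, 365.0, 410.0, 455.0, 520.0])

# Fit a three-parameter Weibull (shape c, location loc, scale): WE3.
c, loc, scale = stats.weibull_min.fit(rt)

# Skewness implied by the fitted WE3 (it depends only on the shape parameter) ...
skew_fit = float(stats.weibull_min.stats(c, moments='s'))
# ... versus skewness calculated directly from the data (the dots in Fig. 1).
skew_data = stats.skew(rt)

print(f"shape={c:.2f}, loc={loc:.1f}, scale={scale:.1f}")
print(f"skewness from WE3 fit: {skew_fit:.2f}; directly from data: {skew_data:.2f}")
```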

Quantify situations first?

1 Additional HRA components

Error psychology alone does not tell us whether or not HRA for PSA is possible. It does tell us, however, that the current PSA-HRA framework is based on postulates with fatal flaws. Humans are not mere components which execute only what designers assign to them. On the contrary, humans are agents which act with their own intentions, and those intentions are sometimes wrong. This can negate engineered features regardless of their excellence. I do not think that HRA can remain a subordinate component of PSA.

One constructive way of discussing the issue is to assume a position where we expand the present PSA framework such that it can incorporate human reliability more meaningfully. To that end, I surmise that the next-generation HRA will be required to add two components (HRA-0 and HRA-2, described below) to the current framework (HRA-1):

--HRA-0: an HRA for identifying initiating events that have the potential to threaten the integrity of the core.
--HRA-1: an HRA that evaluates how reliable human operators can be in carrying out assigned tasks; an improved version of the current HRA.
--HRA-2: an HRA for assessing the probability of human operators mistakenly overriding the engineered safeguard features.

Conventionally, PSA selects critical initiating events that have the potential to threaten core integrity.

This is a keystone of the technology and therefore needs to be maintained. It is the role of PSA specialists to define the critical initial plant conditions which have the potential to threaten core integrity; HRA-0 is then carried out to assess the probability of human operators leading the plant to such conditions.

Simply stated, PSA is a technology used to evaluate the level of the engineered safety features. It is therefore most meaningful to evaluate the possibility of human operators overriding them. This is not considered in current PSA; however, it can be incorporated without changing the mathematical framework. HRA-2 is carried out to this end.
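To make the proposal concrete, the sketch below shows one way the three components might enter a single accident-sequence quantification without changing the usual cut-set arithmetic. It is purely illustrative of my reading of the expanded framework, not an established procedure, and every number in it is a hypothetical placeholder.

```python
# Hypothetical inputs for one accident sequence (per reactor-year where relevant).
f_hardware_initiator = 1.0e-2   # frequency of the hardware-caused initiating event
f_human_initiated    = 5.0e-3   # HRA-0: frequency with which operators themselves create
                                #        the critical initial plant condition
p_task_failure       = 3.0e-2   # HRA-1: failure of the assigned recovery task
p_safeguard_fails    = 1.0e-3   # hardware failure of the engineered safeguard
p_operator_override  = 2.0e-3   # HRA-2: operators mistakenly override a working safeguard

# Initiating-event frequency now includes the human-initiated contribution (HRA-0).
f_initiating = f_hardware_initiator + f_human_initiated

# The safeguard is lost either by hardware failure or by an operator override (HRA-2).
p_safeguard_lost = p_safeguard_fails + p_operator_override   # rare-event approximation

# Core-damage contribution of this sequence: initiator, failed recovery task, lost safeguard.
f_sequence = f_initiating * p_task_failure * p_safeguard_lost
print(f"sequence frequency ~ {f_sequence:.2e} per reactor-year")
```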

2 Error-prone situations

In order to furnish these two new HRA components, we of course need to know more about errors. The recent trend in error psychology appears to be the right one to follow. However, we are still left wondering how it can dovetail neatly with the PSA-HRA framework. As a practitioner who has long been involved in empirical studies, I am inclined to take an ecological approach54 and study error-prone situations (EPSs) in greater depth. One-to-one mapping between contexts (i.e. EPSs) and errors is not possible, as Reason says. However, knowing the EPSs and the classes of errors that are likely to occur in them is informative, for several reasons:

--Humans' decisions are based on reasons that seem good to them but are later judged to be inappropriate and therefore errors. Many of these reasons have a sound foundation in their own world (i.e. the situations as recognized by them), especially when the humans are trained experts (e.g. NPP operators). Without knowing how situations are recognized and what affects this recognition process, it is difficult to understand errors.
--There seem to exist sets of general EPSs which cause human operators to recognize the situation inappropriately. Some EPSs almost always cause errors (i.e. critical EPSs).
--We need to know the situations in which errors occur. Without knowing where errors might occur in a given accident sequence, we cannot evaluate reliability within the framework of PSA.
--It may be easier to quantify situations than humans.

3 Empirical studies

It is now widely recognized that we cannot obtain accurate data from simulator experiments in an absolute sense. Nevertheless, this does not necessarily impair their usefulness. I believe simulators are tools that help us obtain insights, and develop and test hypotheses. There are scientific ways to conduct empirical studies.55 When experiments are carefully designed, invaluable information can be obtained.56

Systematic observations of operator behavior reveal the following general tendencies in transient situations (e.g. plant trip, safety injection):47

--operators tend to jump to the end action without verifying conditions or finishing preparatory actions;
--operators tend to postpone (and sometimes, as a consequence, neglect) tasks which are judged less important;
--similarly, the higher the importance of a task, the fewer the departures from procedures that occur.

The existence of these tendencies implies that the operators are in high-workload situations. Empirically, we can easily create critical EPSs by adding one or more small failures or interventions to such high-workload situations. The addition of spurious information, for instance, has a striking impact.40 If the status of a valve that is designed to be closed automatically by an interlock is spuriously indicated as 'closed', there is a good chance that the operators will believe that it is closed. The cause is rooted in a tendency to give component status information the highest priority in such a case: in highly loaded situations, the operators tend to verify the valve status alone and move quickly on to other higher-priority tasks. The presence of additional misleading information can increase the probability of making errors even when the information itself is correct; a disturbance in the downstream process status is one such additional misleading piece of information. The addition of intervening tasks can cause lapses.40 Generally, operators can successfully cope with intervening tasks. However, it sometimes happens that intervening tasks are left incomplete. An intervention can also make the rest of the task difficult because of the delay it causes, and this can create situations in which mistakes are likely to occur. The occurrence rate of these small failures is supposed to be moderately high. In the real world, latent errors, failures of instrumentation, and failures of support systems are believed to create similar EPSs.

The existence of this empirical evidence encourages me to think that there must be ways to systematically find EPSs that nearly automatically cause the operators to trigger critical initiating events or to override engineered safety features (i.e. critical EPSs). Given general knowledge about the types of critical EPS, we may be able to postulate specific EPSs and quantify their occurrence rates in terms of the failure rates of the relevant components or other quantifiable parameters. This could be a good measure of error-proneness since, by definition, the operators will nearly automatically err once these situations occur.


Empirical studies are believed to be a good means of obtaining the general knowledge about critical EPSs which sustains the foundation of HRA-0, HRA-1, and HRA-2.
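A worked example may help to show what 'quantifying situations' could look like. The sketch below is my own illustration with entirely hypothetical rates: the occurrence frequency of one postulated critical EPS is built up from the small failures that create it and, because operators are assumed to err nearly automatically once inside such a situation, that frequency is taken directly as the frequency of the resulting human-induced event.

```python
# Hypothetical building blocks of one postulated critical EPS (per reactor-year):
f_plant_trip            = 1.0      # high-workload transient that sets the stage
p_spurious_valve_status = 2.0e-3   # spurious 'closed' indication during the transient
p_intervening_task      = 5.0e-2   # an intervening task arrives at the critical moment

# Frequency with which this particular error-prone situation occurs.
f_eps = f_plant_trip * p_spurious_valve_status * p_intervening_task

# By definition of a *critical* EPS, operators are assumed to err almost
# automatically once inside it, so the conditional error probability is ~1.
p_error_given_eps = 1.0

f_human_induced_event = f_eps * p_error_given_eps
print(f"frequency of the postulated human-induced event ~ {f_human_induced_event:.1e} per reactor-year")
```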

CONCLUSIONS

It is unfortunate that, to the author, current HRA is only half human and manifests itself as a Centaur. The fatal flaw of the present PSA-HRA framework is the postulate that humans are mere components which execute only assigned tasks. This postulate needs to be corrected, and a new PSA-HRA framework conceived. It seems appropriate to consider two additional HRAs in which the probabilities of human operators causing critical initiating events and overriding engineered safeguard features are evaluated.

The fundamental question of whether or not HRA can take on a fully human form within the framework of PSA remains open. However, a recent trend in error psychology has begun to lead us in the right direction, and AI technology offers rich computer environments in which researchers have opportunities to better understand errors through the effort of modeling humans. In pursuing the understanding of errors, it is believed to be indispensable to see them as a consequence of interactions between internal error mechanisms and situations. Knowing how the world looks to humans, and what makes them look at it in this way, is very important; an in-depth understanding of error-prone situations must therefore be crucial. Carefully designed empirical studies are expected to contribute to that end.

Psychologists have good reason to criticize engineers for still trying to treat humans like machines. A sound psychological foundation is crucial in any future HRA effort.

REFERENCES

1. Reliability Engineering & System Safety, 29 (1990), Special Issue on Human Reliability Analysis.
2. Dougherty, E. M., Jr., Human Reliability Analysis--Where Shouldst Thou Turn? Reliability Engineering & System Safety, 29 (1990) 283-99.
3. Spurgin, A. J., Another View of the State of Human Reliability Analysis (HRA). Reliability Engineering & System Safety, 29 (1990) 365-70.
4. Swain, A. D., Human Reliability Analysis: Need, Status, Trends and Limitations. Reliability Engineering & System Safety, 29 (1990) 301-13.
5. Meister, D., Human Reliability. In Human Factors Review: 1984, ed. F. A. Muckler. Human Factors Society, 1984.
6. Swain, A. D., Human Error and Human Reliability. In Handbook of Human Factors, ed. G. Salvendy. Wiley-Interscience, John Wiley & Sons, New York, 1987.


7. Dougherty, E. M., Jr., Human Reliability Analysis. Wiley-Interscience, John Wiley & Sons, New York, 1988.
8. Munger, S. J., Smith, R. & Payne, D., An Index of Electronic Equipment Reliability: Data Store. AIRC43-1/62-RP(1), American Institute for Research, 1962.
9. The Human Factors Society 1962 Meeting.
10. Swain, A. D. & Guttmann, H. E., Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. NUREG/CR-1278, USNRC, 1983.
11. Siegel, A. I. & Wolf, J. A., Man-Machine Simulation Models. Wiley, New York, 1969.
12. Bello, G. C. & Colombari, V. E., Empirical Technique to Estimate Operator's Errors (TESEO). Reliability Engineering, 1 (1980) 3-14.
13. Potash, L. M., Stewart, M., Dietz, P. E., Lewis, D. M. & Dougherty, E. M., Jr., Experience in Integrating the Operator Contributions in PRA of Actual Operating Plants. In Proceedings of the ANS/ENS Topical Meeting on Probabilistic Risk Assessment. American Nuclear Society, Port Chester, USA, 1981, pp. 1054-63.
14. Seaver, D. A. & Stillwell, W. G., Procedures for Using Expert Judgment to Estimate Human Error Probabilities in Nuclear Power Plant Operations. NUREG/CR-2743, USNRC, 1983.
15. Comer, M. K., Seaver, D. A., Stillwell, W. G. & Graddy, C. D., Generating Human Reliability Estimates Using Expert Judgment. NUREG/CR-3688, vol. 1, 2, USNRC, 1984.
16. Kozinsky, E. J., Grey, L. H., Beare, A. N., Burks, D. B. & Gomer, F. E., Safety-Related Operator Actions: Methodology for Developing Criteria. NUREG/CR-3515, USNRC, 1984.
17. Siegel, A. J., Bartter, W. D., Wolf, J. J., Knee, H. E. & Hass, P. E., Maintenance Personnel Performance Simulation (MAPPS) Model: Summary Description. NUREG/CR-3626, USNRC, 1984.
18. Kopstein, F. F. & Wolf, J. J., Maintenance Personnel Performance Simulation (MAPPS) Model: User's Manual. NUREG/CR-3634, USNRC, 1985.
19. Apostolakis, G. E., Methodology for Time-Dependent Action Sequence Analysis Including Human Actions. UCLA-ENG-P-5547-N-84, UCLA, 1984.
20. Phillips, L. D., Humphreys, P., Embrey, D. E. & Selby, D. L., A Socio-Technical Approach To Assessing Human Reliability (STAHR). Appendix D of NUREG/CR-4022, USNRC, 1985.
21. Weston, L. M., Whitehead, D. W. & Greaves, N. L., Recovery Actions in PRA for the Risk Methods Integration and Evaluation Program (RMIEP). NUREG/CR-4834, USNRC, 1987.
22. Samanta, P. K., O'Brien, J. N. & Morrison, H. W., Multiple-Sequence Failure Model: Evaluation of and Procedures for Human Error Dependency. NUREG/CR-3837, USNRC, 1985.
23. Bley, D. C. & Stetkar, J. W., The Significance of Sequence Timing to Human Factors Modeling. In Proceedings of the 1988 IEEE Fourth Conference on Human Factors and Power Plants, Monterey, CA. IEEE, 1988, pp. 259-67.
24. Hall, R. E., Fragola, J. R. & Wreathall, J., Post Event Human Decision Errors: Operator Action Trees/Time Reliability Correlation. NUREG/CR-3010, USNRC, 1984.
25. Rasmussen, J., Outlines of a Hybrid Model of The Process Operator. In Monitoring Behaviour and Supervisory Control, ed. T. B. Sheridan & G. Johannsen. Plenum Press, New York, 1976, pp. 371-83.

26. Hannaman, G. W., Spurgin, A. J. & Lukic, Y. D., Human Cognitive Reliability Model for PRA Analysis. NUS-4531, NUS Corporation, San Diego, CA, 1984.
27. Hannaman, G. W., Spurgin, A. J. & Fragola, J. R., Systematic Human Action Reliability Procedure (SHARP), Interim Report, NP-3583. Electric Power Research Institute, 1984.
28. Recently, SHARP has been revised to SHARP1.
29. Hollnagel, E., What is Man that He can be Expressed by a Number? In Proceedings of the International Conference on Probabilistic Safety Assessment and Management (PSAM), Beverly Hills, CA, 4-7 February 1990. Elsevier Science Publishing, New York, NY, pp. 501-6.
30. Woods, D. D., Risk and Human Performance: Measuring the Potential for Disaster. Reliability Engineering & System Safety, 29 (1990) 387-405.
31. Spurgin, A. J., Moieni, P., Gaddy, C. D., Parry, G., Orvis, D. D., Spurgin, J. P., Joksimovich, V., Gaver, D. P. & Hannaman, G. W., Operator Reliability Experiments Using Power Plant Simulators, NP-6937, vol. 1, 2. Electric Power Research Institute, 1990. The report consists of three volumes, but volume 3 is available only to EPRI licensees.
32. Senders, J. W., Moray, N. & Smiley, A., Modeling Operator Cognitive Interactions in Nuclear Power Plant Safety Evaluation. Report prepared for the Atomic Energy Control Board, Ottawa, Canada, 1985. (See Ref. 35.)
33. Embrey, D., Personal communication to J. Reason, 1989. (See Ref. 35.)
34. Kantowitz, B. H. & Fujita, Y., Cognitive Theory, Identifiability and Human Reliability Analysis (HRA). Reliability Engineering & System Safety, 29 (1990) 317-28.
35. Reason, J., Human Error. Cambridge University Press, Cambridge, 1990.
36. Chubb, G. P., Laughery, K. R., Jr. & Pritsker, A. A. B., Simulating Manned Systems. In Handbook of Human Factors, ed. G. Salvendy. John Wiley, 1987, pp. 1298-327.
37. Bittner, A. C., Jr. & Morrissey, S. J., Integrated Performance and Workload Modeling for Industrial and Other System Applications. In Advances in Industrial Ergonomics and Safety II, ed. B. Das. Taylor & Francis, 1990, pp. 857-64.
38. Cacciabue, P. C., Decortis, F. & Masson, M., Cognitive Models and Complex Physical Systems: A Distributed Implementation. Paper presented at the 7th European Annual Conference on Human Decision Making and Manual Control, Paris, France, 18-20 October 1988.
39. Woods, D. D. & Roth, E. M., Cognitive Environment Simulation: An Artificial Intelligence System for Human Performance Assessment. NUREG/CR-4862, vol. 1-3, USNRC, 1987.
40. Unpublished Annual Report, Nuclear Power Engineering Center, Institute of Human Factors, Japan, 1990.
41. Altman. In Symposium on Reliability of Human Performance in Work, ed. W. B. Askren. AMRL-TR-67-88, Aerospace Medical Research Laboratories, Wright-Patterson Air Force Base, Ohio, 1967.
42. Meister, D., Use of a Human Reliability Technique to Select Desirable Design Configurations. Paper presented at the 8th Reliability and Maintainability Conference, Denver, USA, 1969.
43. Rasmussen, J., Human Errors. A Taxonomy for Describing Human Malfunction in Industrial Installations. Risø-M-2304, Risø National Laboratory, Denmark, 1981.


44. Swain, A. D., Field Calibrated Simulation. In Proceedings of the Symposium on Human Performance Qualification in Systems Effectiveness. Naval Material Command and the National Academy of Engineering, Washington, USA, 1967, pp. IV-1-IV-21.
45. Embrey, D. E., Humphreys, P., Rosa, E. K., Kirwan, B. & Rea, K., SLIM-MAUD: An Approach to Assessing Human Error Probabilities Using Structured Expert Judgment, vol. 1, 2. NUREG/CR-3518, USNRC, 1983.
46. Beare, A. N., Dorris, R. E., Bovell, C. R., Crowe, D. S. & Kozinsky, E. J., A Simulator-Based Study of Human Factor Errors in Nuclear Power Plant Control Room Tasks. NUREG/CR-3309, USNRC, 1984.
47. Tanaka, I., Kimura, T., Utsunomiya, S., Uno, K., Endo, T., Tani, M., Fujita, Y., Kurimoto, A., Mikami, A., Kishimoto, N., Narikuni, K., Kawamura, M., Kubo, S., Maeyama, K., Ishigaki, N., Tsukumoto, T., Nishimura, Y., Morita, A., Shono, M. & Morita, M., Studies of Operator Human Reliability Using Training Simulator (1)-(5). In Proceedings of the 1989 Fall Meeting of the Atomic Energy Society of Japan, Tokai, Japan, 1989. Atomic Energy Society of Japan, A18-A22, vol. 1, pp. 18-22.
48. Yoshimura, S., Ohtsuka, T., Itoh, J. & Masuda, F., An Analysis of Operator Performance in Plant Abnormal Conditions. In Proceedings of the 1988 IEEE Fourth Conference on Human Factors and Power Plants, Monterey, CA, 1988, pp. 509-12.
49. Kantowitz, B. H., Bittner, A. C., Jr. & Fujita, Y., Mathematical Description of Crew Response Times in Simulated Nuclear Power Plant Emergencies. In Proceedings of the Human Factors Society, 34 (1990) 1127-31.

50. Kantowitz, B. H., Bittner, A. C., Jr., Fujita, Y. & Schrank, E., Assessing Human Reliability in Simulated Nuclear Power Plant Emergencies Using Weibull Functions. In Advances in Industrial Ergonomics and Safety III, ed. W. Karwowski & J. W. Yates. Taylor & Francis, 1991, pp. 847-54.
51. Preliminary results are planned to be presented at the 1991 Fall Meeting of the Atomic Energy Society of Japan, Fukuoka, Japan, 15-18 October 1991. Figure 1 is translated from a manuscript prepared for the conference proceedings.
52. Lehman, E. H., Shapes, Moments and Estimators of the Weibull Distribution. IEEE Transactions on Reliability, (1963) 32-8.
53. The NLIN Procedures. In Chapter 23 of SAS/STAT User's Guide, Release 6.03 Edition. SAS Institute Inc., 1988.
54. Rasmussen, J., Cognitive Engineering, A New Profession? In Tasks, Errors and Mental Models, ed. L. P. Goodstein, H. B. Andersen & S. E. Olsen. Taylor & Francis, London, 1988, pp. 325-34.
55. Elmes, G. D., Kantowitz, B. H. & Roediger III, H. L., Research Methods in Psychology, Third Edition. West Publishing Company, St Paul, MN, 1989.
56. Fujita, Y., Toquam, J. & Wheeler, W. B., Collaborative Cross-Cultural Ergonomics Research: Problems, Promises, and Possibilities. Paper presented at the 11th Congress of the International Ergonomics Association, Paris, 1991.