Reliability Engineering 7 (1984) 61-75

Human Error in Structural Reliability Assessments

R. E. Melchers

Department of Civil Engineering, Monash University, Clayton, Victoria 3168, Australia

(Received: 10 January, 1983)

ABSTRACT

The importance of human error in structural engineering reliability assessments is reviewed and some recent research results on human error in simple design tasks are given. Design processes such as table look-up, calculation and ranking of numbers are considered. A tentative procedure to allow for human error in reliability theory is also outlined.

1 INTRODUCTION

The important role played by human error in the reliability of real structures is evident from the detailed investigations, published in recent years, of the reasons for the structural collapse or malfunctioning of a number of structures.1-8 It is now generally accepted that human error cannot be allowed for merely by increasing the factor of safety in design rules,9-12 and that larger, more general measures, such as education, control, legal sanctions, etc., are required. However, the necessary degree to which such measures must be applied remains a matter for speculation.

The present work arises from attempting to set realistic levels of control (inspection, quality control, design checking, etc.). It is evident that such measures constitute a cost against a project, yet their effectiveness is largely unknown. Moreover, to investigate their effectiveness, the linkage between human error and control on the one hand, and human error and structural failure on the other hand, must be known.

This immediately raises several problems. The first is the definition of a 'failure event'. This presents little difficulty for cases of total or near-total collapse, but there are many more cases of considerably less damage in which much of the structure survives. The extreme of this spectrum is the category of 'serviceability' failure, when design criteria, such as deflection or vibration or cracking limits, are violated. It seems very likely that, as a result of the current orientation towards design for strength seen in most structural engineering design codes, the incidence of serviceability failure is much higher than is evident from the available statistics.

The second difficulty lies in the definition of human error and its link to structural failure. Clearly not all human errors cause failure; there is probably not a single area of human endeavour in which errors do not occur. Only some human errors cause failure, and some errors cause more 'significant' failures than others. The work presented below is part of an attempt to understand this relationship. The relationship between control measures and human error (and hence structural failure) will constitute a later part of the investigation programme, and is not discussed here.

Reliability Engineering 0143-8174/84/$03.00 © Elsevier Applied Science Publishers Ltd, England, 1984. Printed in Great Britain

2 MODELS FOR RELIABILITY CONTROL MEASURES

Several attempts have been made to set up models to predict the possible effect of human error. However, not all of these are able to deal with control measures. A binary event-tree approach was adopted by Nowak,13 with binary branching depending upon whether a human error was made or not made. As will be evident, many human errors involve the selection of incorrect quantities, yet this aspect was not considered, and hypothetical error rates were employed. A refinement of this model, proposed by Lind, in which he shows that three critical errors are highly likely to lead to failure (and fewer errors less likely to lead to failure), is open to the same general criticism.

A completely different approach, using fuzzy set theory, has been extensively advocated by Blockley.14 Subjective expert opinions about various aspects of the project being considered for analysis are combined using fuzzy set theory in order to predict, in verbal terms, the chances of success of the project. The technique may be criticized on the grounds that informed independent expert opinion about many aspects of design and construction is difficult to obtain for any one particular project, and that only the most obvious cases would be selected as projects likely to be in danger. Although there is no apparent direct relationship with conventional reliability estimates,15 various efforts in this direction are being made.16

Suggestions have been made over many years that socio-economic optimization is a valid procedure for setting both safety factors and levels of control.17,25 What this means, in simple terms, is that the quantity E(C_T), the expected total cost of a project, should be minimized:

min: E(C_T) = C_I + p_F C_F    (1)

where
E(C_T) = expected total cost
C_I = initial cost = C_M+L + C_C
C_M+L = cost of material, labour, supervision, etc.
C_C = cost of control (checking)
C_F = discounted costs of failure of the structure
p_F = probability of failure

As noted above, the main difficulties with this approach are knowledge of C_F and of the relationship between p_F and C_C. It is likely that C_F will be a probabilistic, conditional quantity, since its value will depend on the time of day, level of usage and so on of the structure. The value of p_F will depend on the modes of failure considered, the thoroughness of the original design and the subsequent construction, and also on the effectiveness of control measures. Independently, CIRIA18 and Melchers19 proposed that p_F be split into two parts:

p_F = p_u + p_v    (2)

where
p_u = probability of failure due to uncertainty effects such as gross errors
p_v = probability of failure due to natural variability of materials, loads, etc.
with p_u being subject to the degree of checking effort.


One possible relation between p_u and the cost of checking, given a level of control effectiveness ε, may be given as

p_u = A(1 − ε)    (3)

where A is a 'constant' such that, for ε = 0, p_u represents observed rates of failure due to gross errors in unchecked designs. A is clearly related to the error content of designs, and this itself may vary depending on the competence of the original design. Hence some knowledge about the design is required before an appropriate level of control can be selected. This approach has been exemplified by Allen,20 who noted a possible trade-off between competency and safety level in design and control measures.

The relationship between human error and rates of failure of structures, implied by the 'constant' A, is the subject of present research. As noted, this relationship will be affected by design competence, and also by the degree of conservatism inherent in the work of designers, the type and complexity of codes of practice used for design, organizational environment and leadership, and construction competence.3,8 The final relationship between structural defects and structural failure is a problem in conventional reliability theory; the difficulty is to predict the relevant structural defect(s). Some preliminary work to throw light on this aspect, for design tasks only, is described below.
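As a minimal numerical sketch of how the socio-economic optimization of eqns (1)-(3) works in practice, the following assumes the control relation takes the linear form p_u = A(1 − ε) and uses entirely hypothetical cost figures and error rates; it simply evaluates E(C_T) for a small menu of control options and selects the cheapest:

```python
# Hypothetical illustration of eqns (1)-(3); none of these figures
# come from the paper.
C_ML = 1_000_000.0   # C_M+L: material, labour, supervision, etc.
C_F = 50_000_000.0   # discounted cost of structural failure
p_v = 1e-5           # failure probability from natural variability
A = 1e-3             # gross-error failure rate of unchecked designs

def expected_total_cost(eps, c_c):
    """E(C_T) for control effectiveness eps costing c_c (eqn (1))."""
    p_u = A * (1.0 - eps)   # eqn (3): checking reduces gross-error failures
    p_f = p_u + p_v         # eqn (2): total failure probability
    return (C_ML + c_c) + p_f * C_F

# Hypothetical menu of (effectiveness, cost of checking) options.
options = [(0.0, 0.0), (0.5, 5_000.0), (0.9, 20_000.0), (0.99, 80_000.0)]
best_eps, best_cc = min(options, key=lambda o: expected_total_cost(*o))
print(f"optimal control effectiveness: {best_eps}")
```

With these invented numbers the intermediate option (ε = 0.9) minimises E(C_T): still heavier checking costs more than the residual failure risk it removes, which is the trade-off the optimization is intended to expose.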

3 HUMAN ERROR IN DESIGN TASKS

3.1. Background
Human reliability research has become focused largely upon man-machine interaction tasks, such as gauge reading, dial setting and visual interpretation. In a broad sense, the functions of primary interest are psycho-motor: monitoring, controlling and visual inspection. Results from such work are used to improve the design of the man-machine interface, so that applications have been largely in the industrial, military and nuclear industries. Virtually no information appears to exist about tasks such as those performed in design; these are mainly cognitive in nature and almost invariably very complex.21 Design is an important function in structural engineering, since virtually all structures are unique and provision for proof loading and prototype testing seldom exists. There is some evidence to suggest that about 40% of structural-engineering failures have their origin in errors during the design phase.1 Design is therefore a phase of the structural-engineering process of considerable interest for any reliability assessment.

3.2. Task analysis and classification
An overview of the various tasks which make up a typical structural design process was obtained by performing a number of task analyses.22 It was found that even a simple design involves quite complex cognitive tasks, typically having no visible output other than the recording of the final result. Further, successful completion of a design task within a reasonable time demands both experience and structural-engineering insight. Neither appears to be capable of measurement; nor can design quality be easily assessed. Moreover, design is a process of synthesis, so that a satisfactory end result can be achieved in many different ways. All of these factors complicate analysis.
Three quite fundamental tasks were selected for initial study:
(1) Table look-up.
(2) Numerical calculations (calculator).
(3) Ranking of numbers.
These tasks correspond to structural member selection from a table of given sizes (tasks 1 and 3), simple stress calculations (task 2) and comparison to a predefined criterion (task 3). A summary of factors commonly recognized as contributing to structural failure is given in Fig. 1. The tasks selected for study were tasks D, B and E. The errors in each task or sub-task (see task B) were taken to consist of two components: a random-error component and a gross-error component. Arbitrarily, values departing from the correct value by more than 2.5% were considered to be 'gross errors'.
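The 2.5% gross-error criterion is straightforward to state in code. The sketch below (the responses and the correct value are invented for illustration) labels each response as a gross or a random error and tallies the gross-error rate for a batch:

```python
# Sketch of the gross-error criterion of Section 3.2: a response deviating
# from the correct value by more than 2.5% counts as a gross error; smaller
# deviations are treated as random error.  All data here are hypothetical.

def classify(response, correct, threshold=0.025):
    """Return 'gross' or 'random' for one response."""
    rel_dev = abs(response - correct) / abs(correct)
    return "gross" if rel_dev > threshold else "random"

# Hypothetical batch of responses to a task whose correct answer is 1595.0:
responses = [1595.0, 1594.2, 15950.0, 1601.0, 853.0]
labels = [classify(r, 1595.0) for r in responses]
gross_rate = labels.count("gross") / len(labels)
print(labels, gross_rate)
```

Note that a decimal-point slip (15950.0 above) is caught as a gross error by this rule, while small arithmetic noise (1601.0, a deviation of 0.4%) is absorbed into the random-error component.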

3.3. Experimental design
For each task selected, a question to reflect the task was formulated, using random-number tables as appropriate. The questionnaire used is given in

Fig. 1. Factors contributing to structural failure: concept error(s), design error(s), design-construction interface error(s), code interpretation error(s), etc.; R = random error, S = systematic error.

TABLE 1
Task 1: Sample Size and Gross Error Frequency

                               1st Year   2nd Year   3rd Year   4th Year   Average
Sample size n                    783        207        132        147       1269
Frequency p_E of gross error     0.0166     0.0048     0.0152     0          0.0126

the Appendix. Some 423 engineering students at various levels of the undergraduate course completed all or part of the questionnaire. Care was taken to collect completed questionnaires as soon as possible, and the students were strictly supervised. They were not informed of the purpose of the tasks, and enquiries were met with vague replies.

3.4. Task results

3.4.1. Table look-up (task 1)
The results for this task are given on a year-by-year basis in Table 1. Because of some unfamiliarity with the table supplied for this question, the results for first-year engineering students exhibit a higher error rate. If the raw data are assumed homogeneous, the error distribution shown in Fig. 2 is obtained. The shaded location represents the correct result. Only the location of errors around the correct result is shown, since the exact character of the distributions depends on the actual distribution of values in the table. To eliminate boundary effects, all values required to be located were placed away from the edges of the table.

Fig. 2. Distribution of table look-up errors around the correct (shaded) location; vertical axis: number of errors.

Fig. 3. Histogram of standardized calculation results (task 2); horizontal axis: x/x̄, with the sample mean normalized to unity.

3.4.2. Numerical calculations: calculator (task 2)
The results were divided into three categories: computation error, decimal error and round-off error (Fig. 1). The last type of error was found to be so small in the population sampled that it was ignored in further analysis. Decimal errors result mainly from errors in entering data into, and reading it from, a calculator, and from recording the result. Computation errors result from entering data and from incorrect key punching, in particular for the mathematical function key. The results, standardized to give a sample mean of unity, are plotted in the histogram of Fig. 3. Analysis of the data reveals gross errors of the frequency given in Table 2.
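The distinction between decimal and computation errors can be illustrated with a simple heuristic. The detector below is not the paper's scoring method; it is a hypothetical rule that flags a response as a decimal error when its ratio to the correct answer is close to a non-zero power of ten (i.e. the digits are right but the decimal point is misplaced), and as a computation error otherwise:

```python
import math

# Hypothetical classifier for task 2 responses.  Within 2.5% of the correct
# answer counts as correct; a ratio near a non-zero power of ten is taken to
# indicate a misplaced decimal point; any other gross deviation is treated
# as a computation error.

def classify_calc_error(response, correct, tol=0.025):
    ratio = response / correct
    if abs(ratio - 1.0) <= tol:
        return "ok"
    k = round(math.log10(abs(ratio)))
    if k != 0 and abs(ratio / 10.0 ** k - 1.0) <= tol:
        return "decimal"
    return "computation"

correct = 735 * 21.7                           # first questionnaire item
print(classify_calc_error(15949.5, correct))   # exact answer -> 'ok'
print(classify_calc_error(1594.95, correct))   # one place off -> 'decimal'
print(classify_calc_error(15214.0, correct))   # keying slip -> 'computation'
```

A rule of this kind is one way the two error categories could be separated automatically when marking a large batch of questionnaire responses.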

TABLE 2
Task 2: Calculation Error Frequency

               One-step       Two-step       Combined
               (n = 1244)     (n = 1211)     (n = 2455)
Computation    0.0072         0.0157         0.0114
Decimal        0.0056         0.0049         0.0053
Overall        0.0128         0.0206         0.0167

3.4.3. Ranking of numbers (task 3)
The relative frequency of misreading the question or selecting the incorrect result was found to be of the order of 0.014 (Table 3). Rather similar values were obtained for task 1, with which task 3 has some similarities. However, task 3 has much greater verbal content, and from comparing the degree of difficulty of the questions, it was (subjectively) estimated that the probability of miscomprehension was about 0.05. This is quite high and has important implications for tasks such as the reading of code requirements.

TABLE 3
Task 3: Results

                                         1st Year   2nd Year   3rd Year   4th Year   Total
Sample size                                783        207        129        149       1257
Frequency of gross error                   0.011      0.015      0.031      0.007     0.014
Probability of misunderstood question      0.0522     0.030      0.0775     0.020     0.048

3.5. Discussion

The above work can be criticized on a number of grounds: use of students as subjects, too few data points, artificiality of the survey, etc. Despite these deficiencies, the results show error rates of around 10⁻² and agree generally with the range of values commonly found for other error rates, such as those for psychomotor tasks.23 Despite this interesting comparison, it is evident that before human-error research can progress very far, more reliable data than those given here are needed, since such data will very likely be a function of the task itself as well as of the operator (designer) and his environment. It is also likely that the types of factors that affect the performance of skills in psychomotor tasks will also affect the performance of tasks in the cognitive domain.

4 INCORPORATING HUMAN ERROR IN STRUCTURAL SAFETY ASSESSMENT

Available evidence suggests that human error most commonly affects the resistance of a structure, for example, when errors in design or construction produce a structure of lower strength or resistance than actually anticipated. Cases of human error where the applied loading is affected do occur, however, such as in structural abuse (e.g. in industrial buildings) or overload due to incorrect plant or equipment operation (e.g. nuclear-power plant malfunction). Lack of adequate knowledge about extreme environmental loading (hurricanes, earthquakes, etc.) cannot be included here, unless available knowledge was not used in design. Even then, the error affects the design loads and not the actual loading of the structure.
In relation to plant operation, human error has been categorized24 as:
(1) Errors of omission (e.g. failure to perform a task).
(2) Errors of commission (e.g. incorrect performance of a task).
(3) Extraneous acts.
(4) Sequential errors.
(5) Time-limit errors (e.g. failure to perform within the allotted time).

Of these, the limited available evidence suggests that the first two categories are probably of most importance for structural-engineering projects, with the last item being of only minor importance. The experimental observations reported above fall into category (2), but clearly an important part of any real design is category (1).
A first attempt to improve Nowak's model13 to treat structural reliability analysis with human error is sketched in Fig. 4 for a single task in a design process. The probability of an error of omission in task i is shown as p_i and is accumulated as one descends through the design process; similarly for (1 − p_i), the probability of no error of omission.

Fig. 4. Event-tree model for a single design task: a branch for an error of omission in task i, and a branch for an error of commission (variability in task performance) leading on to task i + 1.

In the latter case, some variability due to errors of commission and normal human variability is introduced; the appropriate error distribution has been denoted f_Ei, and would correspond to the types of distributions described in Section 3. By developing an event-tree structure containing all the design (and construction) processes, considering all the branches implied by errors of omission and the variability introduced by errors of commission, the total reliability of the design and construction process with human error incorporated can be calculated. For example, in a very simple process with, say, four steps, each with an error term E_i with distribution function f_Ei, the reliability component for no errors of omission has a limit state given by:

G( ) = 0 = [Π(1 − p_i)] [(Π E_i) R − S]    (4)

where both products run over i = 1 to 4; the first Π term represents the probability of zero errors of omission (assuming independent events), and the second term represents the limit state equation with the resistance R modified by the four error terms E_i. S represents the actual applied stress resultant. In second-moment reliability theory, the probability of failure is then approximated by

p_F ≈ Φ[−μ_G' / σ_G']    (5)

where μ_G' and σ_G' are the mean and standard deviation of G', the standardized limit state function such that G' = G'(X), with X = N(0, 1). The total probability of failure is obtained by summing the probabilities calculated for descending through all possible combinations of events in the event tree. Considerable work can be saved by eliminating branches with (say) two or more errors of omission if p_i is very small.
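The event-tree bookkeeping described above can be sketched numerically. In the toy example below, every probability and distribution is hypothetical, and the assumption that an omitted task halves the resistance is an arbitrary modelling choice, not something taken from the paper; the sketch enumerates all 2⁴ omission patterns for a four-task process and estimates the conditional failure probability of each branch by crude Monte Carlo rather than by the second-moment approximation of eqn (5):

```python
import itertools
import random

random.seed(0)

# Hypothetical figures for illustration only.
p_omit = [0.01, 0.02, 0.01, 0.005]   # omission probability p_i per task
MU_E, SIGMA_E = 0.0, 0.05            # lognormal parameters of error factor E_i
R_MEAN, R_COV = 100.0, 0.10          # resistance R (mean, coeff. of variation)
S_MEAN, S_COV = 60.0, 0.20           # load effect S

def branch_probability(omitted):
    """Probability of one omission pattern (independent tasks assumed)."""
    prob = 1.0
    for p, om in zip(p_omit, omitted):
        prob *= p if om else (1.0 - p)
    return prob

def conditional_pf(omitted, n=10_000):
    """Monte Carlo estimate of P(failure | omission pattern).

    An omitted task is (crudely) assumed to halve the resistance; a
    performed task contributes a lognormal error factor E_i, as in the
    limit state G = (prod E_i) R - S of eqn (4).
    """
    fails = 0
    for _ in range(n):
        r = random.gauss(R_MEAN, R_COV * R_MEAN)
        s = random.gauss(S_MEAN, S_COV * S_MEAN)
        for om in omitted:
            r *= 0.5 if om else random.lognormvariate(MU_E, SIGMA_E)
        fails += (r - s) < 0.0
    return fails / n

# Total p_F = sum over all 2^4 branches of P(branch) * P(failure | branch).
patterns = list(itertools.product([False, True], repeat=4))
total_prob = sum(branch_probability(p) for p in patterns)
p_f = sum(branch_probability(p) * conditional_pf(p) for p in patterns)
print(f"branch probabilities sum to {total_prob:.6f}; total pF ~ {p_f:.4f}")
```

The pruning remark above shows up directly here: branches with two or more omissions carry probabilities of order 10⁻⁴ or smaller and contribute almost nothing to the total, so they could be dropped without materially changing p_F.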

5 CONCLUSION

Human error is an important component in proper reliability assessment, yet it appears that the evaluation of human-error effects in design (and construction) is a much neglected field. Recent observations of human error in structural-engineering design-related tasks are reported and shown to agree generally with human-error rates for other tasks. A scheme for incorporating human-error assessments into existing limit-state type formulations for assessing second-moment reliability is outlined.

ACKNOWLEDGEMENT

Part of the work reported here was supported by the Australian Research Grants Scheme under Grant F81 15110.

REFERENCES

1. Matousek, M. and Schneider, J. Untersuchungen zur Struktur des Sicherheitsproblems bei Bauwerken, Bericht Nr. 59, Institut für Baustatik und Konstruktion, Zürich, February 1976.
2. Wearne, S. H. Review of Reports of Failures, Proc. Inst. Mech. Engnrs., 193, 1979, pp. 125-36.
3. Melchers, R. E. Influence of Organisation on Project Implementation, J. Constn. Divn., ASCE, 103, No. CO4, December 1977, pp. 611-25.
4. Smith, D. W. Bridge Failures, Proc. Inst. Civil Engnrs., Part 1, 60, 1976, pp. 367-82.
5. Sibly, P. G. and Walker, A. C. Structural Accidents and their Causes, Proc. Inst. Civil Engnrs., Part 1, 62, May 1977, pp. 191-208.
6. Pugsley, A. G. Safety of Structures, Edward Arnold, London, 1962.
7. Pugsley, A. G. The Prediction of Proneness to Structural Accidents, The Structural Engineer, 51 (1973), p. 195.
8. Walker, A. C. Study and Analysis of the First 120 Failure Cases. In: Structural Failures in Buildings, The Institution of Structural Engineers, 1981, pp. 15-39.
9. Boe, C. Risk Management: The Realization of Safety, Introductory Report, 11th Congress, Int. Assn. Bridge and Structural Engng., Vienna, 1980, pp. 237-46.
10. Knoll, F. Safety, Building Codes and Human Reliability, Introductory Report, 11th Congress, Int. Assn. Bridge and Structural Engng., Vienna, 1980, pp. 247-58.
11. Lind, N. C. Optimizations, Cost-Benefit Analysis, Specifications, Proc. 3rd Int. Conf. on Applications of Statistics and Probability in Soil and Structural Engineering, Sydney, 1979, pp. 402-17.
12. Melchers, R. E. Societal Options for Assurance of Structural Performance, 11th Congress, IABSE, Final Report, Vienna, 1980, pp. 983-8.
13. Nowak, A. S. Effects of Human Error on Structural Safety, Journal of the American Concrete Institute, Proc. 76, No. 9, September 1979, pp. 959-72.


14. Blockley, D. I. Predicting the Likelihood of Structural Accidents, Proc. ICE, 59, Part 2, December 1975, pp. 659-68.
15. Ditlevsen, O. Formal and Real Structural Safety: Influence of Gross Errors, IABSE Proc. P36/80, November 1980, pp. 185-204.
16. Yao, J. T. P. Damage Assessment of Existing Structures, Journal of the Engineering Mechanics Division, ASCE, 106, No. EM4, August 1980.
17. Lind, N. C. and Basler, E. Safety Level Decisions, Int. Conf. on Planning and Design of Tall Buildings, ASCE-IABSE, Preprint Vol. 16-10, August 21-26, 1972, pp. 53-64.
18. CIRIA. Rationalization of Safety and Serviceability Factors in Structural Codes, Construction Industry Research and Information Association, Report 63, London, 1977.
19. Melchers, R. E. The Influence of Control Processes in Structural Engineering, Proc. Inst. Civil Engnrs., Part 2, 65, 1978, pp. 791-807.
20. Allen, D. E. Criteria for Design Safety Factors and Quality Assurance Expenditure. In: Structural Safety and Reliability, Moan, T. and Shinozuka, M. (eds), Elsevier Scientific, Amsterdam, 1981.
21. Embrey, D. E. Human Reliability in Complex Systems: An Overview, National Centre of Systems Reliability, Report NCSR.R10, UKAEA, 1976.
22. McCormick, E. J. Human Factors in Engineering, McGraw-Hill, New York, 1964.
23. Meister, D. Human Factors in Reliability. In: Reliability Handbook, Ireson, W. G. (ed.), McGraw-Hill, New York, 1966, pp. 12.29-35.
24. Swain, A. Estimating Human Error Rates and Their Effects on System Reliability, Report SAND77-1240, Sandia Laboratories, Albuquerque, New Mexico, 1978.
25. Rackwitz, R. Note on the Treatment of Errors in Structural Reliability, Berichte zur Sicherheitstheorie der Bauwerke, SFB 96, Techn. Universität München, Heft 21, 1977.

APPENDIX

MONASH UNIVERSITY
DEPARTMENT OF CIVIL ENGINEERING

Please complete the following tasks accurately and quickly. For all tasks, write your answers in the boxes provided next to the question.

Task 1

From the attached table marked 'Channels', select the value of Z about X-X for members having the nominal sizes below.


Nominal size
305 × 89 [   ]
102 × 51 [   ]
229 × 76 [   ]

Task 2

Use your calculator to provide answers to the following simple calculations:

735 × 21.7 [   ]                     959.72 × 351.38 [   ]
113 × 48.3 [   ]                     200.09 × 122.25 [   ]
92.3 × 3627 [   ]                    478.13 × 709.08 [   ]
899.1 × 17.5 × 602.2 [   ]           423.4 × 89.2 × 976.29 [   ]
672.6 × 49.4 / 1.982 [   ]           140.09 × 64.3 / 1.192 [   ]
85.9(1.124 + 7.19) [   ]             76.53(8.375 + 4.68) [   ]

Task 3

7020
5310
4550
3320
2650
2280
1900
1330
853

From the table printed above, select:

(a) the next lowest number to 2100 [   ]
(b) the number closest to 2400 [   ]
(c) the lowest number which exceeds 2660 [   ]

Thank you for your help. Please return this questionnaire to your supervisor.
