Improving an algorithm for classifying error types of front-line workers: Insights from a case study in the construction industry


Safety Science 48 (2010) 422–429


Safety Science journal homepage: www.elsevier.com/locate/ssci

Tarcisio Abreu Saurin a,*, Mara Grando Costella a, Marcelo Fabiano Costella b

a Industrial Engineering and Transportation Department, Federal University of Rio Grande do Sul, Av. Osvaldo Aranha nº 99, 5º andar, Porto Alegre, CEP 90035-190, RS, Brazil
b Regional University of Chapecó, Rua Quintino Bocaiuva, 390-D, Chapecó, CEP 89801-080, SC, Brazil

Article history: Received 6 September 2009; Received in revised form 8 December 2009; Accepted 11 December 2009.

Keywords: Accident investigation; Human error; Safety; Construction

Abstract

The objective of this study was to propose improvements in an algorithm for classifying error types of front-line workers. The improvements involved: (a) making recommendations on organizing the data needed to apply the algorithm (e.g. identifying actions and decisions that may serve as a reference for analysing the types of errors) and (b) drawing up guidelines for interpreting the questions that are part of the algorithm (e.g. how to define what counts as a procedure). The improvements were identified on the basis of testing the algorithm on construction sites, an environment in which it had not yet been implemented. Thus, 19 occupational accidents which had occurred in a small-sized construction company were investigated, and the error types of both the workers who had been injured and their crew members were classified. The accidents investigated were used as a basis both to illustrate how the improvements proposed should be put into practice and to show how practical insights for safety management might be derived from the algorithm.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

There is broad consensus, from the perspective of ergonomics, that human errors are symptoms of deeper problems in a system, rather than simply the main cause of unwanted events. Thus, identifying errors is only the starting point of an investigation, which may lead to preventive actions across a broad spectrum of issues, ranging from providing training events to re-designing products and processes (Dekker, 2002). Nevertheless, the literature points out that each type of error has certain causal patterns, which means that remedial measures should have a different emphasis for each type of error (Reason, 2008). For example, the use of error-proof devices is particularly recommended for tackling memory lapses and slips, since these types of error occur when behaviours become automated and such devices, by definition, operate independently of the operators' attention span. As for violations, characterized by deliberate deviations from safe working practices, they typically require improvements in procedures or in safety culture (Saurin et al., 2008).

Therefore, it is important to gather knowledge of the most frequent error types, especially based on data which allow long-term trends to be identified, so as to inform the design of health and safety (HS) management systems. The classifications of error types are useful as they make it viable to organize data and contribute to our understanding of the modes through which errors are caused and how they can be prevented (Sanders and McCormick, 1993). Nevertheless, there is a lack of methods for comparing taxonomies of error types, making it difficult to identify which are best for what purpose (Baker and Krokos, 2007). Moreover, the literature offers little guidance on how to classify an error systematically, whatever the taxonomy considered. This might be a major source of unreliability when human error data are analysed and tabulated (Grabowski et al., 2009).

Shappell et al. (2007) identified and classified errors involved in commercial aviation accidents, based on inferences made from accident reports by a panel of experienced pilots who had received basic training in human factors. Reason (1990) reported on laboratory studies in which errors were identified and classified based on front-line workers' descriptions of them. Saurin et al. (2008), Van der Schaaf (1992) and Rasmussen (1982) drew up algorithms for classifying error types. Also, Reason (1997) proposed an algorithm for identifying the degree of workers' culpability for unsafe acts, which provided insights into error types. It is worth noting that two of the algorithms mentioned (Rasmussen, 1982; Reason, 1997) were apparently not envisioned as tools for either practitioners or researchers, but simply as means of explaining the theoretical routes that lead to either erroneous or successful performance.

One drawback shared by all of the studies mentioned is that key concepts are underspecified, thus leaving too much room for interpretation. For example, what should count as a procedure? What should count as a routine task? Indeed, there are various possible ways of interpreting these questions, and underspecification makes it difficult for others to follow or criticize the methods (Dekker, 2007; Dekker and Hollnagel, 2004).

Given this gap, this study sets out to propose improvements in one of the previously cited methods for classifying error types. It emphasizes the need to develop both practical implementation guidelines and more precise concepts to support users of the method in interpreting the questions required to classify an error. Although all of the methods cited are suitable candidates for improvements of this nature, this article addresses the method proposed by Saurin et al. (2008), since it is the result of previous studies on human error classification conducted by the authors of this article. Thus, the data available for proposing improvements concern this method rather than the others.

Costella and Saurin (2005) originally drew up and tested a method for classifying error types in order to analyse accidents in a factory making agricultural equipment. Later, Bassols et al. (2007) enhanced the tool as a result of their analysis of accidents in a fuel distribution company. The study by Saurin et al. (2008) then compared the results of these earlier studies and proposed additional minor changes to previous versions of the method. Basically, the method consists of an algorithm with a series of yes/no questions, which allows the error types of front-line operators to be classified, based on the SRK (skill, rule and knowledge) classification put forward by Reason (1990, 1997). In this classification, errors are differentiated according to the levels of cognitive performance at which they occur, thus providing a more abstract classification than those based on observable characteristics of behaviour (e.g. omissions and repetitions), as well as than classifications that highlight local contextual factors, such as stress, interruptions and distractions (Reason, 2008).

It is against this background that this study sets out to make recommendations to facilitate the application and interpretation of the questions of the algorithm, since studies conducted hitherto have indicated difficulties of this nature. Improvements in the algorithm were identified based on applying it to the investigation of occupational accidents in the construction industry, an environment in which it had not hitherto been tested. Moreover, unlike in the previous applications, the researchers were able to interview all the workers who had been injured. Thus, the context of applying the tool was clearly different from that of the earlier studies, and this contributed to identifying opportunities for improvement. Moreover, applying the algorithm generated exploratory data, which are scarce in the literature, on the most frequent error types among construction workers.

* Corresponding author. Tel.: +55 51 3223 8009; fax: +55 51 3308 4007. E-mail addresses: [email protected] (T.A. Saurin), [email protected] (M.G. Costella), [email protected] (M.F. Costella). doi:10.1016/j.ssci.2009.12.014
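The mechanism just described – a fixed series of yes/no questions leading to one of a handful of final classifications – can be pictured as a small decision tree. The sketch below is a toy illustration only: the two questions, their wording and their branches are invented for this example and do not reproduce the actual ten-question structure of the algorithm in Fig. 1.

```python
# Toy illustration of a yes/no classification algorithm represented as a
# binary decision tree. Each node maps to (question text, branch if the
# answer is yes, branch if the answer is no). The questions and branches
# here are INVENTED for illustration, not the real Fig. 1 structure.
TREE = {
    "q_routine": ("was it a routine task?", "q_followed", "knowledge-based error"),
    "q_followed": ("was the procedure followed?", "no worker error", "violation"),
}

def classify(answers, node="q_routine"):
    """Walk the tree using a dict of question-id -> bool answers."""
    if node not in TREE:
        return node  # reached a leaf, i.e. a final classification
    _question, yes_branch, no_branch = TREE[node]
    return classify(answers, yes_branch if answers[node] else no_branch)
```

For instance, `classify({"q_routine": True, "q_followed": False})` walks yes then no and returns `"violation"` in this toy tree.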

2. Definition of human error and classification of error types adopted in this study

Although no definition of human error is widely accepted, common characteristics can be identified in the various definitions available. According to Reason (1990), human error is a generic term designating the occasions on which a planned sequence of mental or physical activities does not reach its objectives, without these failings being attributable to chance. Sanders and McCormick (1993), in turn, consider that human error is an inappropriate or unwanted decision or behaviour that reduces, or has the potential to reduce, efficiency, safety or other dimensions of the performance of a system. According to Reason (2008), the majority interpretation in the academic literature is that errors involve some kind of deviation. From this point of view, it is easier to label a failure as a human error in highly standardised systems than in less standardised activities, such as maintenance or construction (Rasmussen et al., 1994).

In this study, a human error is considered to have one or both of the following characteristics: (a) there was a deviation from the correct method of execution, assuming that those performing the task had the resources (e.g. a favourable context of supervision and a supply of appropriate materials) at their disposal to carry out the correct method; or (b) a wrong decision was taken, assuming that resources for making the correct decision were available. It is worth noting that the definition adopted does not necessarily mean that a human error leads to undesirable results, since chance can lead to good results even if there were faults in planning or implementation.

As previously mentioned, this study adopts the SRK classification proposed by Reason (1990, 1997), which divides errors into three categories:

(a) Skill-based errors (SB): at this level, the operator uses automatic and routine behaviours, but does so at a low level of awareness. The errors involve failures of execution, lapses and slips being the most common. While lapses typically involve not carrying out a step of a certain task at the right time (or completely neglecting the step), slips involve carrying out a step correctly and, often suddenly, deviating from the right course of action. Both slips and lapses are unintentional errors, and they are associated with three causal factors: the performance of a routine or habitual task in familiar circumstances; attention being devoted to a preoccupation or distraction; and a change, either in the plan of action or in the surroundings. Lapses and slips occur prior to the detection of a problem.

(b) Rule-based errors (RB): at this level, operators raise their awareness in order to apply familiar rules to deviations which are also familiar in routine situations. Three basic types of failure may occur at the RB level: application of a bad rule; application of a good rule that is inappropriate to the situation in question; and non-application of a good rule. In this study, only the latter type of RB failure is considered a type of error of front-line workers, and it is designated by the term violation. It is assumed that the application of bad rules, or the application of a good rule inappropriate to the context, are types of errors that should be attributed to those responsible for designing the rules, who are not usually the front-line workers.

(c) Knowledge-based errors (KB): at this level, the operator acts at a high level of awareness so as to solve problems for which there are no rules. Errors are very likely when the operator is required to operate at this level because, among other reasons, there are usually organizational pressures that limit the time and resources for decision making.

When the operator commits an error at the KB and RB levels, he/she is aware that a problem exists; there is intention, therefore, in his/her actions. On the other hand, errors at the SB level are not intentional, since the actions were not taken consciously. It should be emphasized that each of the three major categories of error can be further divided into subcategories. For example, memory lapses may involve failure to store information and failure to retrieve information, while violations may involve routine violations and necessary violations (Reason, 2008).

3. Algorithm for classifying error types

The algorithm proposed by Saurin et al. (2008) consists of ten questions, which can lead to five types of final answers (Fig. 1): slips, memory lapses, violations, knowledge-based errors, and there was no worker error.
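The SRK taxonomy above can be summarized in a small data model. The following Python sketch is our own illustration (the identifier names are ours, not the paper's); it encodes the three performance levels, the error types the algorithm distinguishes, and the intentionality distinction drawn in this section:

```python
from enum import Enum

class PerformanceLevel(Enum):
    SKILL = "skill-based"           # automatic, low-awareness routines
    RULE = "rule-based"             # familiar rules applied to familiar deviations
    KNOWLEDGE = "knowledge-based"   # novel problems with no applicable rules

class ErrorType(Enum):
    SLIP = "slip"                        # deviation during execution (SB)
    LAPSE = "memory lapse"               # step omitted or mistimed (SB)
    VIOLATION = "violation"              # non-application of a good rule (RB)
    KB_ERROR = "knowledge-based error"   # failed problem solving (KB)
    NO_ERROR = "no worker error"

# Level at which each error type occurs, per the classification above.
LEVEL_OF = {
    ErrorType.SLIP: PerformanceLevel.SKILL,
    ErrorType.LAPSE: PerformanceLevel.SKILL,
    ErrorType.VIOLATION: PerformanceLevel.RULE,
    ErrorType.KB_ERROR: PerformanceLevel.KNOWLEDGE,
}

def is_intentional(err: ErrorType) -> bool:
    """SB errors are unintentional; RB and KB errors involve intention."""
    return LEVEL_OF.get(err) in (PerformanceLevel.RULE, PerformanceLevel.KNOWLEDGE)
```

Such a model makes the section's key distinction mechanical: a violation or KB error implies intention, while slips and lapses do not.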


Fig. 1. Algorithm for classifying error types (Saurin et al., 2008).

After determining which type of error occurred, or concluding that no error occurred, question 10 of the algorithm should always be asked ("was there any other worker involved?"). This question was introduced to emphasize that the algorithm should be applied to everyone who formed part of the team of operators involved at the scene of the accident, rather than just the victim.

4. Research method

With a view to identifying opportunities for improving the algorithm, it was applied to analyze accidents that had occurred in a small business in Brazil which constructs residential and commercial buildings. During the period when this study was conducted, the company employed 70 workers on its construction sites (none of whom were contracted out), besides 3 civil engineers and 1 architect. The main criterion for choosing this company was the ease of access that the researchers had to data. HS management was characterized by the attempt to comply with legislation, with no use being made of any HS best practice in the construction industry, such as those identified by Hinze (2002). The main person in charge of HS management was one of the engineers, who happened to be a safety engineer. Although a mandatory safety committee, as required by regulations, had been formally set up, it had not held meetings regularly. In addition, there were no procedures to guide the investigation of accidents, nor were there documented records of accidents that had occurred.

Thus, it was necessary for a member of the research team to visit the company's three construction sites that were in progress while the research study was being conducted. She asked the 70 employees individually whether they had experienced or witnessed an accident in the company studied. Based on these questions, 26 accidents were identified. However, only 19 events were selected for inclusion in this study, given that, in the other cases, the workers who had suffered injuries, and who could therefore have added important information, were no longer employees of the company.

In the next stage of the research, interviews were conducted with the following stakeholders, in the following sequence: (a) the workers who had been injured, (b) the workers who were on the team in which the victim had worked, and (c) the company's three engineers. These interviews aimed to clarify the context in which each event occurred, to underpin the understanding of its causes and to obtain support for applying the algorithm. With regard to the workers (31 interviews in all), the interviews lasted on average about 30 min and were not tape-recorded, in order to minimize embarrassment or inhibitions. Since there was only one interviewer, she was also responsible for taking notes of the reports. Of course, the use of two interviewers can be beneficial, especially in the case of more complex events which might well result in extensive reports. In these cases, a protocol could be used similar to the one adopted in the critical decision method (CDM, a well-known method for conducting cognitive task analysis), in which one interviewer acts as the primary facilitator and the other takes notes and keeps track of the overall plan for the interview (Crandall et al., 2006).

In this study, interviews with workers were in three stages:

(a) Initially, they were asked to provide information which allowed a basic characterization of their profile and the severity of the event, such as their length of service in the company, education, age, position, and the length of time they were laid off as a result of the event.

(b) Next, the researcher asked the interviewee to give his version of the accident. Then, based on this, the researcher recounted the story back to him in order to check whether she had correctly understood what had really happened.

(c) In the last stage, questions were asked based on a script drawn up by Dekker (2002) to support the understanding of the organizational context that led to the human errors. This script gave rise to other questions during the interview and included questions such as: has a similar situation happened before? Were you trained to deal with this situation, or was it a new or unforeseen situation? What safety procedures or work performance procedures clearly apply in this situation? Were these procedures followed? Was the task carried out under pressures of time, cost or other similar pressures? Do you think another colleague would do the same thing you did, or would he do it differently?
As to the interviews with the engineers, they were held to clarify technical aspects related to each event, such as checking whether the work performance procedures used at the time of the accident had been the ones usually used in the company. In addition, photographic records were made of the environment in which each event occurred.

It should be stressed that the possibility of conducting interviews with the workers and engineers, as well as unrestricted access to the construction sites, allowed the events to be described in relative wealth of detail, especially in comparison to earlier studies that applied the algorithm and which were based on the reports at hand in the companies themselves. The accident reports written by those companies were often superficial, to the extent that even members of the safety staff in one of the companies were unable to reconstitute the facts (Saurin et al., 2008). Specific interviews, such as those conducted in this study with workers and managers so as to apply the algorithm, could be shortened or even eliminated if the accident reports prepared by the companies explicitly addressed the 10 questions of the algorithm.

After the interviews, a detailed description of each accident was written up and the algorithm was applied from the perspective of each of the workers who had been injured and from the perspective of each team member. However, in this first round of applications, the researchers noted difficulties in interpreting the questions, and sometimes the result was inconsistent with the context of the event. Thus, modifications were made to the algorithm, such that the results presented in this article reflect how the modified version of the tool was applied.

For each event, the result of the application of the algorithm was based on a consensus achieved by a team formed by three researchers (the authors of this article). While one of the team members was a business administrator who did not have previous experience with the algorithm, the other two researchers were civil engineers who had participated in the studies in which the algorithm was originally developed. One of the civil engineers was also a manager of the company investigated, which facilitated understanding of the events and their causes. While this researcher could be more subject to biased interpretation of the algorithm, the more independent perspectives of the other researchers minimized this risk. Moreover, the involvement of the company's engineer simulates real-world settings, in which the staff of any company interested in using the tool can apply the algorithm.

5. Results

5.1. Recommendations on how to apply the algorithm

This item presents recommendations for applying the algorithm which were not made explicit in the earlier studies. These recommendations are illustrated by drawing on the accidents investigated:

(a) Recommendation 1: based on the description of the accident, episodes should be identified that may serve as a reference for analysing the types of error. Such episodes can be both decisions taken by front-line workers and the actions they took. On the one hand, a candidate action to be chosen as a basis on which to apply the algorithm is the one that triggered the accident, which in turn is characterized by a sudden release of energy. The action is of primary interest and should be selected if front-line workers were directly involved in the energy release. For example, the collapse of a trench clearly involves the sudden release of energy, but this is not necessarily related to the actions of any front-line workers. In this case, it might be important to select additional episodes as a basis on which to apply the algorithm, such as the decision to build the trench in a certain specific way. On the other hand, it is more difficult to select the decisions chosen as the basis for applying the algorithm, since they are both non-observable and take place before the energy release. As a result of these assumptions, it can be concluded that the algorithm could also be applicable to analysing near misses, since these are interpreted as proposed by Cambraia et al. (2010): an instantaneous event which involved the sudden release of energy and had the potential to generate an accident.

In the field study, the need to adopt recommendation 1 became clear for the events in which, in addition to an action having occurred which immediately triggered the accident, the worker also did not use the necessary personal protective equipment (PPE). In these situations, the algorithm was applied first to analyse the action and once again to analyse the decision not to wear the PPE. As examples, two similar accidents can be cited in which labourers, not wearing gloves, had their fingers jammed between the cable of the cart carrying concrete and the door frames. Taking into account the action of pushing the cart which culminated in the impact against the door (i.e. the sudden release of energy), the application of the algorithm followed the sequence 1-2-3-4-5-10, which characterizes a slip. It is worth mentioning that, in question 3, all that was evaluated was whether the proper procedure for pushing the cart had been followed; the procedure that required the use of gloves was not assessed.

(b) Recommendation 2: based on recommendation 1, we see that it is possible, even without applying the algorithm, to conclude that there was no error by the workers involved. Such cases include situations in which there was no action or decision by the workers which might serve as a reference for the application of the algorithm. For example, there was an accident in which the shoring of a foundation trench did not support the loads and a worker was partially buried. In this case, although the application of the algorithm was unnecessary, it was nevertheless used (sequence 1-2-3-4-10) in order to validate it for such scenarios.

(c) Recommendation 3: if doubts arise about the answer to a question, a good practice is to test the different alternatives in order to check whether the end result will be the same. As an example, the case may be cited in which, while a steel bar was being cut in a saw, a spark from the cutting disc flew into the eye of the operator, who was wearing safety glasses with side shields. Whether one adopts the hypothesis that the procedure was incorrect because the glasses were not of the appropriate model (sequence 1-2-10), or the hypothesis that the glasses were adequate but perhaps faulty (sequence 1-2-3-4-10), the conclusion is that there was no error by the worker.

(d) Recommendation 4: similarly to what happened in this study (see item 4), the application of the algorithm should be undertaken by a team and include the participation of members with experience of the domain in question. The latter guidance was not followed in the previous applications of the algorithm. In the study by Costella and Saurin (2005), there was passive collaboration from the company's safety specialists, who limited themselves to providing information but did not apply the algorithm directly. In the study by Bassols et al. (2007), the researchers did not have the opportunity to discuss the use of the tool with the company's representatives.
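The question sequences quoted in the recommendations above can be tabulated directly. The dictionary below is a bookkeeping aid of our own (not part of the published method); it records only the traversals reported so far in this section, mapped to the classification the algorithm returned in each case:

```python
# Bookkeeping aid (ours, not part of the original method): question
# sequences reported in Section 5.1, mapped to the final classification
# the algorithm returned for each case.
OBSERVED_OUTCOMES = {
    (1, 2, 3, 4, 5, 10): "slip",          # fingers jammed while pushing the concrete cart
    (1, 2, 3, 4, 10): "no worker error",  # trench shoring collapse
    (1, 2, 10): "no worker error",        # inadequate procedure (safety glasses case)
}

def lookup_outcome(sequence):
    """Return the recorded outcome for a traversal, or a marker if unseen."""
    return OBSERVED_OUTCOMES.get(tuple(sequence), "sequence not observed")
```

A table of this kind makes it easy to check, for a new accident, whether a traversal matches one already analysed, and to tabulate outcome frequencies across a set of events.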

5.2. Modifications in the algorithm and recommendations for interpreting the questions

This item presents seven modifications or recommendations for interpreting the questions of the algorithm, as follows:

(a) Modification or recommendation 1: since question 1 ("was the worker aware of the content of the procedures and/or was he/she trained?") was difficult to interpret when there were no documented procedures, it was replaced with the following question: was it a routine/habitual task for the worker? This change kept the essence of the original version, since a question on whether the worker was familiar with his work was still asked. However, due to this change, the basis for assessing such familiarity became the worker's acquaintance with the task, rather than with either the procedures or the training. For this purpose, a task was defined in terms of the outcomes that workers were trying to achieve, since it is not always the literal action sequences (the procedures) that matter so much as the fact that practitioners are trying to get things done (Crandall et al., 2006). As an example of using the modified question, an accident can be cited in which a worker, without protective glasses, was splashed in the eye with concrete while it was being poured. Although he did not know the ideal procedure for this situation (wearing glasses), nor had he received formal training to do so, the work was routine – i.e. the task was the same whether or not he wore glasses. Thus, in this case, the answer to question 1 was yes. It is worth mentioning that, in the version of the algorithm shown in Fig. 1, the researcher could be induced to give a negative answer to question 1. This would imply the assumption that an inappropriate task had been assigned to that specific worker, which does not match the context of the example cited.

(b) Modification or recommendation 2: similarly to what happened with question 1, the interpretation of question 2 ("was the procedure and/or training adequate and applicable?") was difficult when there were no documented procedures specifying the steps and rules applicable to the task, as was the case in the company investigated. In this case, it is proposed that the procedure adopted as a reference should be the one described in regulations or the one that is tacitly accepted as correct by workers and managers. If it becomes evident from interviews that there is no consensus about which procedure is tacitly accepted as correct, the answer to question 2 should be no. This situation can be illustrated by an accident that occurred while the tower of a hoist was being dismounted. In that event, a worker who was inside the building receiving the elements of the tower (each 1 m × 2 m in size) broke a finger when his hand was squashed between the piece being received and the structure of the tower. The sequence in the algorithm was 1-2-10 for all team members, making it clear that the procedure was inadequate. In fact, there was no consensus about how many people should remove the pieces, about whether or not to use gloves for this task, or about what the responsibilities of the employees involved should be. Although there is an intrinsic risk of something falling from a height in this task, it is likely that the lack of consensus arises from the fact that this operation is usually performed only once during the life-cycle of a construction site.

(c) Modification or recommendation 3: once the previous proposal is taken into account, it is unnecessary to mention the word 'training' in questions 2, 3 and 6. If the word 'procedure' incorporates those that are tacit, this implies that it also covers situations where the operator knows the procedures only through training, whether these are formal occasions or informal ones based on learning from more experienced colleagues.

(d) Modification or recommendation 4: question 3 ("was the procedure and/or training followed?") should be answered from the perspective of everyone involved in the team who performed the task, rather than just in terms of the worker to whom the algorithm is being applied. This means that, should any member of the team not have followed the procedure, the answer to question 3 should be no. Two similar accidents illustrate the need for this recommendation. They involve events in which workers had their feet pierced by nails that were protruding from pieces of wood scattered on the floor. If the proposed recommendation is adopted, the sequence of applying the algorithm is 1-2-3-6-7-10 (there was no worker error), given that other team members did not follow the procedure of removing the nails from the bits of wood and storing them in piles, in an organized manner.

(e) Modification or recommendation 5: it is proposed that the expression "with the same severity" be added to question 6 ("if the procedure and/or training had been followed, would the incident happen?"). The need for this change is illustrated by analysing the decision not to wear gloves in the accidents already commented on, in which the workers had their fingers crushed against a door frame while they were pushing a wheelbarrow. If the algorithm were used in its original form, this decision would be analysed according to the sequence 1-2-3-6-10 (there was no worker error). However, although using gloves would not have avoided the occurrence of the accident, it is likely that their use would have minimized its consequences, which is sufficient justification for their use. Thus, according to this proposal, the analysis of the decision not to wear gloves leads the algorithm to the sequence 1-2-3-6-7-8-10 (violation), the option which was considered in the tabulation of the results. These accidents also indicated, as an opportunity for improvement, that the design of the carts should be re-assessed in order to facilitate their passage through the doors.

(f) Modification or recommendation 6: the case study indicated that question 7 ("would another worker behave in the same way in the same situation?"), referred to as the substitution test by Reason (1997), remains subjective even when it is possible to compare a worker's performance with that of his team-mates. In fact, when a team is involved, subjectivity still persists to the extent that the performance of different teams would have to be compared.


Thus, it is proposed that, if in doubt, a supporting question be used to answer question 7 (it is suggested that this question not be included in the graphical representation of the algorithm): were the resources (e.g. a favourable context of supervision and the supply of proper materials) for applying the procedure available, without any dependence on others? This question aims to make explicit one of the main reasons that can lead to the breach of an ostensibly proper procedure, namely the lack of the resources needed to apply it. If the answer is yes (the resources were available), this probably means that other workers would not act in the same way. If the answer is no, this probably indicates that other workers would act in the same way, thus characterizing the absence of error.

Four examples justify the contribution of the supporting question: the first two resulted in violations and the last two in the absence of error. The first example refers to an accident in which three workers were transporting steel bars. On arrival at the unloading place, the bundle of bars spilled open and a worker had his hand squashed between the bars. However, this worker was the only one of the three not wearing leather safety gloves, which, according to the investigation, were available. Therefore, given his decision not to wear gloves, the application of the algorithm for the injured worker followed the sequence 1-2-3-6-7-8-10 (violation). The second example, unlike the previous one, illustrates a situation in which all team members acted in the same way. In this case, three workers were preparing a cement mixer to be manhandled over a short distance, without having set a safety lever that would have kept its loader fixed on top of the mixer.
However, one of the team members crossed in front of the mixer (the tacit rule was always to cross behind the mixer, since the loader could only fall forwards) at the same moment as the loader fell, and it struck him on the back. In this case, the resources for the correct decision were available (the lever was in good condition and accessible) and the application of the rule was immediate, depending only on the workers’ action. In applying the algorithm, the decision not to set the safety lever was counted as three violations, one for each team member.

Nevertheless, other accidents revealed situations in which resources were not available and the procedure was not easy to apply. In one such case, a worker used a portable ladder as a support when nailing the mould for a beam and, probably due to losing his balance, hit his finger with a hammer. In another case, in order to get down from a suspended scaffold, a worker jumped from a height of 1.50 m to the ground, lost his balance and broke an arm. In both accidents, the algorithm indicated the sequence 1-2-3-6-7-10 (there was no error). In fact, the procedures set out in Brazilian regulations and good practices in the industry recommend using scaffolding to gain access to the beams and using a ladder to descend from scaffolding. However, such resources (scaffold and ladder) were not made available, which made the workers’ choices both necessary and regarded as normal.

(g) Modification or recommendation 7: the original version of question 8 (‘‘was the error intentional?”) was replaced with the following: was the action or decision intentional? The purpose of this change is to make explicit the concept of error presented in item 2 of this article. If all the proposals presented are taken into consideration, one can arrive at a new version of the algorithm (Fig. 2).
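The branches of the revised algorithm that are documented in this section can be sketched as a small decision function. This is an illustrative reconstruction, not the authors’ published flowchart: only questions 3, 6, 7 and 8 are quoted in the text, so the earlier questions and the slip/lapse branch are represented only as pre-computed answers and placeholders, and the sequence labels follow those reported here.

```python
# Illustrative sketch of the revised algorithm's documented branches.
# Only questions 3, 6, 7 and 8 are quoted in the article; answers are
# taken as pre-computed booleans, produced by the accident investigation.

def classify(procedure_followed_by_all,   # Q3, checked for every crew member
             same_outcome_same_severity,  # Q6, revised with "with the same severity"
             others_would_act_same,       # Q7, Reason's (1997) substitution test
             intentional):                # Q8, revised: "was the action or decision intentional?"
    if procedure_followed_by_all:
        # Continues through questions 4-5 (slip branch), not quoted in this excerpt.
        return "1-2-3-4-...: see questions 4-5 (slip/lapse branch)"
    if same_outcome_same_severity:
        # Breaking the procedure made no difference, even to severity.
        return "1-2-3-6-10: there was no worker error"
    if others_would_act_same:
        # Supporting question: were the resources to apply the procedure
        # available, without depending on others? If not, peers would likely
        # act the same way and no error is assigned.
        return "1-2-3-6-7-10: there was no worker error"
    if intentional:
        return "1-2-3-6-7-8-10: violation"
    return "1-2-3-6-7-8-...: unintentional error branch (not quoted in this excerpt)"
```

For instance, the decision not to set the safety lever in the cement-mixer accident maps to `classify(False, False, False, True)`, i.e. a violation.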

Fig. 2. New version of the algorithm. Shadowing highlights changes in comparison with the version presented in Fig. 1.

5.3. Error types in the accidents investigated

This section presents the results of the application of the algorithm in the case study, in order to illustrate how its use might produce practical insights for improving HS management. No claim is made that either the frequencies of error types or the corresponding remedial measures are generalizable to other companies.

The 19 accidents investigated made it possible to apply the algorithm 34 times, 22 of these applications referring to the points of view of the workers who had been injured and 12 to those of their team-mates. From the perspective of those who had been injured, the number of applications of the algorithm was greater than the number of workers involved: whereas 19 workers had been injured, the algorithm was applied 22 times to them, given that, in some events, more than one action or decision served as a reference for the application. Table 1 summarizes the types of error for all the workers, for the injured workers only and for their team-mates only.

Table 1
Error types in the accidents investigated at construction sites.

                     All workers (n = 34)   Workers injured (n = 22)   Team-mates (n = 12)
There was no error   24 (70.5%)             15 (68.2%)                 9 (75.0%)
Violations           8 (23.5%)              5 (22.7%)                  3 (25.0%)
Slips                2 (5.9%)               2 (9.1%)                   0 (0.0%)

Based on Table 1, it can be seen that, whatever the perspective, the category of ‘‘no worker error” was predominant. These results are consistent with a study by Suraji et al. (2001), which analysed the causes of 500 accidents on building sites in the UK and concluded that in 70.1% of events there was no error committed by the workers involved. They are also consistent with a study by Saurin et al. (2005), who investigated the frequency with which errors by workers contributed to failures in the planning and control of HS on five building sites: taking the average of the five sites, there was no worker error in 72.8% of the events. The studies by Suraji et al. (2001) and Saurin et al. (2005) used classifications of error types different from those used in this study, thus invalidating comparisons for each individual type of error.

Table 2 presents the frequency with which each sequence of the algorithm was used, which helps to clarify the different kinds of situations in which there was no worker error.

Table 2
Sequences in the application of the algorithm.

Sequence                              Frequency
1-2-3-6-7-10 (there was no error)     14 (41.2%)
1-2-3-6-7-8-10 (violation)            8 (23.5%)
1-2-3-4-10 (there was no error)       6 (17.6%)
1-2-10 (there was no error)           4 (11.8%)
1-2-3-4-5-10 (slip)                   2 (5.9%)

The high frequency of the sequence 1-2-3-6-7-10 means, in general, that the breach of good rules was common practice, accepted as normal by workers and managers alike, and that the workers had a passive role in the sequence of events. It is also important to note that two categories of error were not associated with any application of the algorithm. With regard to the absence of errors at the knowledge level, this is compatible with the nature of the tasks performed on the building sites of the company investigated, such tasks being relatively repetitive and predictable. On the other hand, these kinds of tasks are susceptible to memory lapses, especially in dynamic environments such as building sites, where routine activities tend to be interrupted in ways that give rise to lapses (e.g. interruptions due to both interference among crews and the constant flow of people and materials). The absence of lapses in the sample can mean either that their consequences were not strong enough to contribute to an accident or that such errors were difficult to identify, since they generally lead to the omission of some activity, which in turn makes the error hard to observe.

Considering the results of Tables 1 and 2 together, there is evidence that, in the construction company investigated, the greatest potential for advances in HS lies in tackling latent conditions rather than workers’ active failures. Indeed, this indicates that, despite being a human error investigation tool, the proposed algorithm does not adopt a person model of accident causation (Reason, 2008). It induces the investigation of the context in which errors happened (a feature of system models of accident causation), allowing the identification of contributing factors that are temporally and physically distant from the accident scenario. In particular, at the sites investigated, the causal factors were strongly associated with management problems, such as:

(a) The non-existence of formal procedures, there being excessive confidence, by both managers and workers, in the tacit knowledge of the latter. Some tacit procedures were not consensual or, when there was a consensus, they were notably faulty and in conflict with legislation and good industrial practices (e.g. the non-use of safety glasses in concreting activities was accepted as normal).

(b) Technical failures in physical barriers (for example, the collapse of the shoring of an embankment) or simply the absence of physical barriers required by the regulations (for example, the collapse of another embankment, which did not have any shoring).
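The tabulations reported in Tables 1 and 2 amount to a simple tally over the 34 applications of the algorithm. The sketch below assumes a flat list of (perspective, outcome) records reconstructed from the counts in Table 1; note that the article reports 70.5% for ‘‘no worker error” overall, which appears to be a truncation of 24/34 = 70.59%.

```python
from collections import Counter

# Each application of the algorithm is recorded as (perspective, outcome).
# The counts reconstruct Table 1: 34 applications in total, 22 for injured
# workers and 12 for their team-mates.
applications = (
    [("injured", "no error")] * 15 + [("injured", "violation")] * 5 +
    [("injured", "slip")] * 2 +
    [("team-mate", "no error")] * 9 + [("team-mate", "violation")] * 3
)

def error_type_shares(records):
    """Return {outcome: (count, percentage rounded to one decimal)}."""
    totals = Counter(outcome for _, outcome in records)
    n = len(records)
    return {k: (v, round(100 * v / n, 1)) for k, v in totals.items()}

all_workers = error_type_shares(applications)
injured = error_type_shares([r for r in applications if r[0] == "injured"])
team_mates = error_type_shares([r for r in applications if r[0] == "team-mate"])
```

The same tally applied to the recorded question sequences, rather than to the outcomes, reproduces Table 2.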

6. Conclusions

The main objective of this study was to identify opportunities for enhancing an algorithm for classifying types of human errors, based on its application to the investigation of accidents on building sites. Unlike previous applications of the algorithm, in this study the analysis of accidents was not based on investigation reports prepared by the company; thus, it fell to the researchers to reconstitute the facts. However, this was not necessarily a disadvantage in relation to previous applications, given that, quite unlike them, this time the researchers had full access to interview those involved in the accidents and to visit the settings in which they had occurred. Indeed, the greater access to the details of the accidents made the application of the algorithm easier and more robust, since the questions established by the tool were answered with less uncertainty.

Based on the case study, four recommendations were established for applying the algorithm: (a) identifying episodes that serve as references for analysis, such as the workers’ actions or decisions; (b) testing different alternatives in case of doubt, given that the end result is often the same; (c) acknowledging that the application may be unnecessary, or be conducted only for the purpose of testing the tool, in situations where there are no workers’ actions or decisions that serve as references; and (d) applying the algorithm as a team effort and with the participation of experts in the domain.

Since the users of the algorithm should share a common understanding of the questions put forward by the tool, key concepts were defined and guidelines for interpreting the questions were provided. These modifications involved: (a) providing guidelines for differentiating actions from decisions (e.g. actions involve a release of energy, while decisions are non-observable); (b) defining what counts as a procedure and what types of procedures should be considered in an order of priorities (e.g.
first check whether the company has written/formal procedures; if there are none, then consider what is required by regulations and, as a last alternative, check whether there is a tacit procedure accepted as correct by workers and managers); (c) making the assumption that training workers is equivalent to making workers aware of procedures, whether these are tacit or formal; (d) defining what counts as a routine task, based on the definition of task proposed by Crandall et al. (2006); (e) making the assumption that, to assess whether there was a worker error, it is necessary to check whether any crew member failed to follow the procedures, rather than this being the responsibility of the injured worker alone – this is in line with the interdependence among tasks in many settings, such as construction sites; and (f) providing a supporting question for conducting the substitution test suggested by Reason (1997).

It is also worth stressing that the definition of human error adopted in this study was stricter than previous definitions, since the label of human error was only assigned when resources were available for carrying out the proper action or decision. This largely explains why 70.5% of the applications in the case study indicated the absence of error. This was a strong indication that, on the building sites investigated, HS actions should be primarily targeted at the design of the HS management system, rather than focused on the behaviours of the workers.

Furthermore, it is fundamental to emphasize that the algorithm should not be used as a tool to identify the degree of culpability of those involved in the accidents. The classification of the types of error is only the starting point for further in-depth investigation that should seek the root causes of the lack of safety.
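The order of priority proposed in modification (b) for deciding what counts as a procedure reduces to a short lookup chain. The sketch below is a hypothetical illustration: the field names (`formal`, `regulation`, `tacit`) and the example task are invented for this example, not taken from the article.

```python
# Hypothetical sketch of the priority order for identifying the reference
# procedure: a formal company procedure first, then regulatory requirements,
# then a tacit procedure accepted as correct by workers and managers.

def reference_procedure(task):
    """Return (source, text) for the highest-priority procedure on record."""
    for source in ("formal", "regulation", "tacit"):
        if task.get(source):
            return source, task[source]
    # No reference procedure of any kind: question 3 cannot be answered.
    return None, None

# Invented example: no formal procedure, but a regulatory requirement exists.
task = {"formal": None, "regulation": "shoring required for embankments"}
source, text = reference_procedure(task)  # source == "regulation"
```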
The results also contribute to the formation of databases and the identification of long-term trends, with the consequent targeting of preventive measures in accordance with the most frequent types of error.

The recommendations and guidelines developed in this study resulted in a new version of the algorithm, which has made two important advances in comparison with the previous version: (a) it reduces the degrees of freedom that users have to interpret the questions and to organize the process of data collection and analysis and (b) it establishes a more precise conceptual basis for further improvements and validation. Even though it is not possible to give an assurance that this new version is generalizable to all possible occupational accidents on construction sites (let alone other settings), the evidence accumulated so far indicates that the changes in the algorithm have resulted mostly from a lack of conceptual precision, rather than from the particularities of any one domain. On the other hand, testing the algorithm in a variety of domains has been a means of subjecting it to variability, which in turn requires that the underlying assumptions of the tool be made more explicit and informative.

Arising from this research, opportunities can be identified for further studies, such as: (a) applying the algorithm in conjunction with other tools for investigating accidents (e.g. the critical decision method), in order to identify the complementarities among them; and (b) developing and applying criteria for assessing the validity of the algorithm, such as its reliability, diagnosticity and usability. The improvements to the algorithm presented in this article set the basis on which it would be worth applying a validation framework in a future study.

References

Baker, D., Krokos, K., 2007. Development and validation of aviation causal contributors for error reporting systems (ACCERS). Human Factors 43 (2), 185–199.
Bassols, F.F., Ballardin, L., Guimarães, L.B.M., 2007. Análise dos tipos de erros em uma distribuidora de produtos derivados de petróleo. In: Encontro Nacional de Engenharia de Produção, 27. Anais... Associação Brasileira de Engenharia de Produção (ABEPRO).
Cambraia, F.B., Saurin, T.A., Formoso, C.T., 2010. Identification, analysis and dissemination of information on near misses: a case study in the construction industry. Safety Science 48 (1), 99.
Costella, M., Saurin, T.A., 2005. Proposta de método para identificação de tipos de erros humanos. In: Encontro Nacional de Engenharia de Produção, 25. Anais... Associação Brasileira de Engenharia de Produção (ABEPRO).
Crandall, B., Klein, G., Hoffman, R., 2006. Working Minds: A Practitioner’s Guide to Cognitive Task Analysis. MIT Press, Cambridge.
Dekker, S., 2002. The Field Guide to Human Error Investigations. Ashgate, London.
Dekker, S., 2007. Just Culture: Balancing Safety and Accountability. Ashgate, London.
Dekker, S., Hollnagel, E., 2004. Human factors and folk models. Cognition, Technology and Work 6, 79–86.
Grabowski, M., You, Z., Zhou, Z., Song, H., Steward, M., Steward, B., 2009. Human and organizational error data challenges in complex, large scale systems. Safety Science 47 (8), 1185–1194.
Hinze, J., 2002. Making Zero Injuries A Reality. Report 160, Construction Industry Institute, Gainesville, FL, 110 p.
Rasmussen, J., 1982. Human errors: a taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents 4, 311–333.
Rasmussen, J., Pejtersen, A., Goodstein, L., 1994. Cognitive Systems Engineering. John Wiley & Sons, New York.
Reason, J., 1990. Human Error. Cambridge University Press, Cambridge.
Reason, J., 1997. Managing the Risks of Organizational Accidents. Ashgate, Burlington, 252 p.
Reason, J., 2008. The Human Contribution: Unsafe Acts, Accidents and Heroic Recoveries. Ashgate.
Sanders, M., McCormick, E., 1993. Human Factors in Engineering and Design, seventh ed. McGraw-Hill, New York.
Saurin, T.A., Formoso, C.T., Cambraia, F.B., 2005. Analysis of a safety planning and control model from the human error perspective. Engineering, Construction and Architectural Management 12 (3), 283–298.
Saurin, T.A., Guimarães, L.B.M., Costella, M.F., Ballardin, L., 2008. An algorithm for classifying error types of front-line workers based on the SRK framework. International Journal of Industrial Ergonomics 38, 1067–1077.
Shappell, S., Detwiller, C., Holcomb, K., Hackworth, C., Boquet, A., Wiegmann, D., 2007. Human error and commercial aviation accidents: an analysis using the human factors analysis and classification system. Human Factors 43 (2), 227–242.
Suraji, A., Duff, R., Peckitt, S., 2001. Development of causal model of construction accident causation. Journal of Construction Engineering and Management 127 (4), 337–344.
Van der Schaaf, T., 1992. Near Miss Reporting in the Chemical Process Industry. Ph.D. Thesis, Eindhoven University of Technology.