Building confidence in models for multiple audiences: The modelling cascade

European Journal of Operational Research 186 (2008) 1068–1083
www.elsevier.com/locate/ejor

Decision Support

Susan Howick a,*, Colin Eden a, Fran Ackermann a, Terry Williams b

a Strathclyde Business School, University of Strathclyde, 40 George Street, Glasgow G1 1QE, UK
b School of Management, Southampton University, Southampton SO17 1BJ, UK

Received 3 October 2006; accepted 2 February 2007; available online 23 March 2007

Abstract

This paper reports on a model building process developed to enable multiple audiences, particularly non-experts, to appreciate the validity of the models being built and their outcomes. The process is a four stage reversible cascade. This cascade provides a structured, auditable/transparent, formalized process from "real world" interviews generating a rich qualitative model, through two intermediate steps, before arriving at a quantitative simulation model. The cascade process has a number of advantages, including: achieving comprehensiveness, developing organizational learning, testing the veracity of multiple perspectives, modeling transparency, achieving common understanding across many audiences and promoting confidence building in the models. The paper, based on extensive work with organizations, discusses both the cascade process and its inherent benefits.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Simulation; Validity; Cognitive mapping; Cause mapping

1. Introduction

Simulation models are useful for testing possible organizational changes arising, for example, from policy making (Forrester, 1961; Sterman, 2000) or improved process design (Law and Kelton, 2000; Pidd, 2003, 2004; Robinson, 2004), and also for accounting for the impact of past changes (Ackermann et al., 1997). Indeed, a model that can adequately account for historical change is an important part of evaluating future change (Forrester and Senge, 1980; Pidd, 2004; Robinson, 2004). It can be argued that understanding the subtle dynamic working of an organization is usually crucial to considering options for changing it (Forrester, 1961; Sterman, 2000). However, simulation models, and the process of building them, are often seen as opaque by those who have to place their trust in them. These audiences need to have confidence in the outputs – by assessing the model's ability to replicate the past in a form that is recognizable to those who were a part of it, and by understanding the underlying structure. They also need to recognize the characteristics of the model in terms that they would use – their own descriptions of how things work.

* Corresponding author. Tel.: +44 141 548 3798; fax: +44 141 552 6686. E-mail address: [email protected] (S. Howick).

0377-2217/$ – see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2007.02.027


For model builders this is about 'model validity'. There is a vast literature on simulation validation. For example, in the discrete-event simulation literature, Robinson (2004) and Pidd (2004) explore the general concepts of verification and validation, Balci (1994) provides a review of verification and validation techniques, and Gass (1983, 1993), Gass and Joel (1981), Balci (2001) and Fossett et al. (1991) discuss procedures for assessing model validity. In the system dynamics simulation literature, there are many articles dealing with the philosophical aspects of model validation (for example, Forrester, 1961; Bell and Bell, 1980; Forrester and Senge, 1980; Richardson and Pugh, 1981; Barlas and Carpenter, 1990). In addition, a comprehensive list of tests to build confidence in models was set down by Forrester and Senge (1980). Other authors have since added to these tests, for example Barlas (1989, 1996), Sterman (1984, 2000) and Coyle and Exelby (2000). The majority of texts on validation agree that the purpose of model validation is to gain confidence in the model from the model's audience. Although the literature on model validation provides many important, even essential, criteria and associated methods (for example Forrester and Senge, 1980; Sterman, 1984; Barlas, 1989; Law and Kelton, 2000; Pidd, 2004; Robinson, 2004), many of these are opaque and complex from the point of view of the manager (Coyle and Exelby, 2000).

Models have to be both qualitatively and quantitatively valid. They have to create an understandable and tight description of how the 'world' works and, in many cases, depending on the purpose of the modelling, create a numerical assessment of the consequences of change. Over the last 12 years the authors have been involved in developing qualitative and quantitative models of complex engineering and construction projects that have to satisfy multiple audiences (Ackermann et al., 1997). These audiences include multiple non-scientific as well as scientific/expert audiences (Howick, 2005; Williams et al., 2003). Those contributing to the construction of the models wanted to (i) account for the consequences of change interventions made by different participants, and (ii) learn from these accounts in order to make changes to management processes. The first of these was expected to result in successful litigation where some of the blame for outcomes could be apportioned to the customer. The audiences were both internal and external: engineering and construction managers, lawyers, and modeling experts, as well as jury members and judges.


This paper reports on a model building process developed to enable multiple audiences, particularly non-experts, to appreciate the validity of the models and thus gain confidence in these models and the consulting process in which they are embedded. The process is a reversible cascade that uses a formal qualitative model to guide the development of a quantitative simulation model, whose logic and results, in turn, lead to the review and re-creation of the qualitative model. The cascade involves two intermediate steps that formalize the process of converting natural language to numerical simulation and back again. The paper first introduces the cascade process of model building before discussing its benefits. As such, it is presented as a method likely to be particularly relevant where there are multiple audiences and users of the model.

2. Background

The experiential research basis for the paper derives from the authors' post-mortem in-depth analysis of many large and complex projects as part of claims analysis, particularly "delays and disruptions" claims for projects whose total expenditures appeared at first inexplicable or surprising. The analysis involved constructing system dynamics simulation models to enable a quantitative evaluation of the consequences of disruptions (interventions), and constructing qualitative models to aid building 'the case' as well as building the confidence of all the audiences in the outcomes of the simulation model (Ackermann et al., 1997). In addition, the authors continued their involvement after litigation through the development of new organizational processes, particularly risk analysis and risk mitigation processes (Ackermann et al., 2006). The projects analyzed were in the fields of aerospace, mechanical engineering, and civil construction. Typically the projects lasted 18–30 months and were valued at $30–500 m.

3. The cascade

The cascade process provides a structured, auditable/transparent, formalized process from "real world" interviews to a qualitative model to a quantitative model. Fig. 1 shows the process and the four stages of modeling. The models produced at each of these stages reinforce the same argument to the modeling audience. However, each stage presents the material in a different format, whether that is through a representation of the semantically rich story given in the first-stage cognitive and cause map, or through the more formal structure of a simulation model.

Fig. 1. The cascade model building process. [The figure shows the four stages: Stage 1 – cognitive maps (CoM) are aggregated into a cause map (CaM); Stage 2 – the cause map is reduced to an influence diagram (ID); Stage 3 – the ID is elaborated into a system dynamics formal ID (SDID); Stage 4 – the SDID is quantified as a system dynamics simulation model.]

By repeatedly presenting the same arguments about the behaviour of the system in different formats, each of the models supports and tests the others, and therefore makes each of the representations more believable and trustworthy.

The arrows in Fig. 1 represent the progression of model creation and amendment. Initially, models are created by moving linearly down the cascade. During or after this process, changes can be made in either an upward or downward route in the cascade. If, for example, the system dynamics influence diagram (SDID) is being created, but this leads to learning about the cause map, the modeler will then move back up the cascade through the influence diagram (ID). This does not mean that cyclical learning does not take place, for example learning about the ID whilst working with the system dynamics model, or learning about the SDID whilst working with the cause map. It does, however, mean that the route taken to make any amendments must go through all the intermediary models.

3.1. Stage 1: Qualitative cognitive and cause maps

The process of initial elicitation can be achieved in two ways. One option is to interview each participant and build cognitive maps (Eden, 1988; Ackermann and Eden, 2004) of their views. The aim here is to gain a deep and rich understanding that taps the wealth of knowledge of each individual. These maps act as a preface to getting the group together to review and assess the total content, represented as a merged cause map (Eden and Ackermann, 1998). The second option is to undertake group workshops where participants can contribute directly, anonymously and simultaneously, to the construction of a cause map. The participants are able to 'piggy-back' off one another, triggering new memories, challenging views and together developing a comprehensive overview (Ackermann and Eden, 2001).

The second approach carries the greater risk of 'urban myths' being the dominant view, or even being created by the group. However, the use of processes that allow for anonymity encourages multiple perspectives to emerge. These perspectives are usually additive but can also show conflicting points of view. As contributions from one participant are captured and structured to form a causal chain, thoughts are triggered from others, and as a result a comprehensive view begins to unfold. Furthermore, by being able to see the complexity, participants may move away from a feeling of guilt – that they did not do a good enough job – to appreciating the complexity of the project. The first option provides participants with an environment where they can be more reflective and honest than they would be if placed in a typical group meeting. By having time, and surfacing a view that is totally related to their own world view, more depth than breadth is achieved; the second option provides a faster route to both surfacing and integrating the views. In each case the material is not only captured but also structured, through developing the chains of argument following detailed coding formalisms for the construction of the model (Bryson et al., 2004).
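The maps themselves are built and analyzed in Decision Explorer. Purely as an illustration of the underlying data structure, the following Python sketch (using networkx, with invented statement labels in the spirit of Fig. 2) shows a cause map as a directed graph of natural-language statements, and how two individuals' maps merge into a group-level cause map.

```python
# A minimal sketch of the Stage 1 data structure: a cause map as a directed
# graph whose nodes are natural-language statements and whose edges are
# causal links ("A leads to B"). The paper uses Decision Explorer; this
# networkx version only illustrates the idea, and the statements are
# invented examples in the spirit of Fig. 2.
import networkx as nx

def build_map(links):
    """Build a cause map from (cause, effect) statement pairs."""
    g = nx.DiGraph()
    g.add_edges_from(links)
    return g

# Two interviewees' cognitive maps, each a partial view of the project.
map_a = build_map([
    ("client-imposed design changes", "unexpected engineering rework"),
    ("unexpected engineering rework", "drop in morale in eng function"),
])
map_b = build_map([
    ("client-imposed design changes", "changes to cabling layout"),
    ("changes to cabling layout", "manufacturing forced to rework cabling"),
])

# Merging the individual maps yields the group-level cause map: shared
# statements become shared nodes, so cross-functional chains appear.
cause_map = nx.compose(map_a, map_b)
print(sorted(cause_map.successors("client-imposed design changes")))
```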

The continual development of the qualitative model, sometimes over a number of group workshops, engenders clarity of thought, predominantly through its adherence to the coding formalisms used for cause mapping. Members of the group are able to debate and consider the impact of contributions on one another. Through bringing the different views together it is also possible to check for coherency – do all the views fit together or are there inconsistencies? This is not uncommon, as different parts of the organization (including different discipline groups within a division, e.g. engineering) encounter particular effects. For example, during an engineering project, manufacturing can often find themselves bewildered by engineering processes – for example, why designs are so late. However, the first stage of the cascade process enables the views from engineering, methods, manufacturing, commissioning, etc. to be integrated. Arguments are tightened as a result, inconsistencies identified and resolved, and detailed audits (through aspects of the modeling software) undertaken to ensure consistency between both modeling team and model audience. The legal team also provide a useful check, as the model provides a detailed explanation of the case and is therefore examined to ensure further consistency. In some instances, documents generated through reports about the organizational situation can be coded into a cause map and merged with the interview and workshop material (Eden and Ackermann, 2004).

Fig. 2 shows an extract from a typical 'story' unfolding, represented as what would be a cognitive map (if from a one-to-one interview) or cause map (if from a group session). The figure shows how manufacturing are forced to manage the cabling consequences of design changes imposed by the client, not by engineering's original intent.

Fig. 2. Using a cause or cognitive map to understand the implications of changes to the design.

Computer supported analysis of the causal map can inform further discussion. For example, it can reveal those aspects of causality that are central to understanding what happens. Feedback loops can be identified and examined. Events that have multiple consequences for important outcomes can be detected. The cause map developed at this stage is usually large – containing up to 1000 nodes. The use of software facilitates the identification of sometimes complex but important feedback loops that follow from the holistic view arising from the merging of expertise and experience across many disciplines within the organization.

3.2. Stage 2: Cause map to influence diagram (ID)

As noted above, the causal model is typically very extensive – a model of 1000+ statements is not unusual, given that the qualitative model's purpose is to capture the different views of what occurred on the project with all their inherent richness and detail. However, this extensiveness requires a process of 'filtering' or 'reducing' the content – leading to the development of an influence diagram (the second step of the cascade process). Partly this is because many of the statements captured, whilst enabling a detailed and thorough understanding of the project, are not relevant when building the system dynamics model in Stage 4 (being commentary-like in nature rather than discrete variables). Another reason is that, for the most part, system dynamics (SD) models comprise fewer variables/auxiliaries, to help manage the complexity (necessary for good modeling as well as comprehension).


Thus, to arrive at an appropriate level of detail and so build the influence diagram (ID), a number of analyses built into the mapping software (Decision Explorer¹) are executed.

¹ Decision Explorer is software for the construction, presentation, and analysis of cognitive/cause maps – see www.banxia.com.

The steps involved in moving from a cause map to an ID are as follows:

Step 1: Determining the core/endogenous variables of the ID

(i) Identification of feedback loops: The first analysis comprises examination of feedback loops using the 'loop' command. It is not uncommon to find over 100 of these (many of which may contain a large percentage of common variables) when working on large projects with contributions from all phases of the project.

(ii) Analysis of feedback loops: Once the feedback loops have been detected they are scrutinized to determine (a) whether there are nested feedback 'bundles' and (b) whether they traverse more than one stage of the project. This analysis is carried out by listing all the feedback loops and then mapping those loops which have a large number of common variables (past practice suggests those loops with at least 60% common variables). Nested feedback loops comprise a number of feedback loops around a particular topic where a large number of the variables/statements are common but with variations in the formulation of the feedback loop. For example, there may be a feedback loop about 'increase in the number of engineering hours needed' with separate feedback loops concentrating on (a) "long hours" leading to "overwork" leading to "reducing productivity", (b) "new staff being brought onto the project" resulting in "existing staff being distracted from design", and (c) "overwork" leading to "mistakes being made". Once detected, those statements that appear in the greatest number of the nested feedback loops are identified (using the 'potentset loop' command), as they provide core variables in the ID model. Where feedback loops straddle different stages of the process, for example from engineering to manufacturing, note is taken. Particularly interesting is where a feedback loop appears in one of the later stages of the project, e.g. commissioning, and links back to engineering. Here care must be taken to avoid chronological inconsistencies – it is easy to link extra engineering hours into the existing engineering variable; however, by the time commissioning discovers problems in engineering, the majority if not all of the engineering effort has been completed.

Step 2: Identifying the triggers/exogenous variables for the ID

The next stage of the analysis is to look for triggers – those statements that form the exogenous variables in the ID. Two forms of analysis provide clues which can subsequently be confirmed by the group:

(i) The first analysis starts at the ends of the chains of argument (the tails) and ladders up (follows the chain of argument) until a branch point appears (two or more consequences). The 'cotail' (composite tail) command is used here. Often statements at the bottom of a chain of argument are examples which, when explored further, lead to a particular behaviour, e.g. delay in information, which provides insights into the triggers.

(ii) The initial set of triggers created by (i) can be confirmed through a second type of analysis – one which takes two different means of examining the model structure for those statements that are central or busy (the 'central' and 'domain' commands are used here). Once these are identified they can be examined in more detail by creating hierarchical sets based upon them (for example, using the 'hieset' command) and thus "tear drops" of their content. Each of these teardrops is examined as a possible trigger.

Step 3: Checking the ID

Once the triggers and the feedback loops are identified, care is taken to avoid double counting – where one trigger has multiple consequences, some care must be exercised in case the multiple consequences are simple replications of one another. The resulting ID is comparable to a 'causal loop diagram' (Lane, 2000), which is often used as a precursor to a SD model. From the ID structure it is possible to create "stories" in which a particular example triggers an endogenous variable, illustrating the dynamic behaviour experienced.
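Decision Explorer's 'loop' and 'potentset loop' commands perform these analyses; the sketch below is not that implementation but a Python equivalent of the same idea, under the assumptions stated in the comments: enumerate the directed cycles of the cause map, group them into nested bundles using the 60% common-variable rule of thumb quoted above, and then rank statements by how many loops in a bundle they appear in.

```python
# A sketch of the Step 1 analyses on a cause map held as a networkx DiGraph:
# enumerate feedback loops (directed cycles) and group them into nested
# "bundles" sharing a large fraction of variables. The 60% overlap threshold
# is the rule of thumb quoted in the text; the grouping criterion below
# (share of the smaller loop's variables) is an assumption.
import networkx as nx
from itertools import combinations

def loop_bundles(cause_map: nx.DiGraph, overlap: float = 0.6):
    """Group the feedback loops of a cause map into nested bundles."""
    loops = [set(cycle) for cycle in nx.simple_cycles(cause_map)]
    # Link two loops if they share at least `overlap` of the smaller
    # loop's variables; connected components are then the bundles.
    bundle_graph = nx.Graph()
    bundle_graph.add_nodes_from(range(len(loops)))
    for i, j in combinations(range(len(loops)), 2):
        shared = len(loops[i] & loops[j])
        if shared >= overlap * min(len(loops[i]), len(loops[j])):
            bundle_graph.add_edge(i, j)
    return [[loops[i] for i in comp]
            for comp in nx.connected_components(bundle_graph)]

def core_variables(bundle):
    """Statements appearing in most loops of a bundle: candidate core
    (endogenous) variables for the ID."""
    counts = {}
    for loop in bundle:
        for statement in loop:
            counts[statement] = counts.get(statement, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)
```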

3.3. Stage 3: Influence diagram to system dynamics influence diagram

When a SD model is constructed after producing a qualitative model such as an ID (or causal loop diagram), the modeller typically determines which of the variables in the ID should form the stocks and flows in the SD model, then uses the rest of the ID to determine the main relationships that should be included in the SD model. However, when building the SD model there will be additional variables/constants that need to be included in order to make it 'work' that were not required when capturing the main dynamic relationships in the ID. The SDID is an influence diagram that includes all stocks, flows and variables that will appear in the SD model and is, therefore, a qualitative version of the SD model. It provides a clear link between the ID and the SD model. The SDID is thus far more detailed than the ID and other qualitative models normally used as a precursor to a SD model. Methods have been proposed to automate the formulation of a SD model from a qualitative model such as a causal loop diagram (Burns, 1977; Burns and Ulgen, 1978; Burns et al., 1979) and for understanding the underlying structure of a SD model (Oliva, 2004). However, these methods do not allow for the degree of transparency required to enable the range of audiences discussed in this paper to follow the transition from one model to the next. The SDID provides an intermediary step between an ID and a SD model to enhance the transparency of the transition from one model to another for the audiences.

Moving through the modeling cascade, the step-by-step transition from qualitative to quantitative models forces the modeler to be explicit about the implications of qualitative arguments. Any quantitatively illogical, imprecise, or inconsistent statements that may occur in the arguments are brought to the surface, particularly when producing the SDID.

With any form of group modeling, there is a danger of 'urban myths' becoming apparent facts. This is particularly likely when the model is being developed by one organization to 'prove a point' against another, as in litigation. The transition from ID to SDID enables such "facts" to be explored within the context of the model, and will identify illogicalities and inconsistencies.

The approach used to construct the SDID is as follows. The SDID is initially created in parallel with the SD model. As a modeller considers how to translate an ID into a SD model, the SDID provides an intermediary step. For each variable in the ID, the modeller can do either of the following:

(i) Create one variable in the SD model and SDID: If the modeller wishes to include the variable as one variable in the SD model, then the variable is simply recorded in both the SDID and the SD model as it appears in the ID.

(ii) Create multiple variables in the SD model and SDID: To enable proper quantification of the variable, additional variables need to be created in the SD model. For example, Figs. 3–6 show extracts from a cause map, ID, SDID and SD model. In Fig. 4, the ID extract includes "actual engineering changes coming through" leading directly to "morale in engrg function". When capturing this in a SD model in Fig. 6, further variables were required: "total engineering changes required over the last 4 months", "total number changes in excess of budget over past 4 months" and "excessive changes impact on morale of engineers". These additional variables were recorded in the SDID in Fig. 5 and included and quantified in the SD model in Fig. 6. To aid transparency, the SDID records both the ID label for a variable, for example "morale in engrg function", and the SD label, for example "ENG_MORALE".

The SDID forces all qualitative ideas to be placed in a format ready for quantification. However, if the ideas are not amenable to quantification, or contradict one another, then this step is not possible. As a result of this process, a number of issues typically emerge, including the need to add links and statements, and the ability to assess the overall profile of the model through examining the impact of particular categories on the overall model structure.
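As a purely illustrative sketch of the bookkeeping this implies (the record structure below is an assumption, not a feature of any SD package), each SDID entry can be thought of as pairing the natural-language ID label with the SD variable name, with case (ii) expanding one ID variable into several SD variables. The labels are those of Figs. 4–6.

```python
# A minimal sketch of the bookkeeping behind an SDID: every SD variable
# carries both its natural-language ID label and its simulation name, and an
# ID variable may expand into several SD variables (case (ii) in the text).
# Labels are taken from Figs. 4-6; the data structure itself is illustrative.
from dataclasses import dataclass

@dataclass
class SDIDVariable:
    id_label: str      # natural-language label carried over from the ID
    sd_name: str       # variable name as it appears in the SD model

# Case (i): the ID variable maps to a single SD variable.
morale = SDIDVariable("morale in engrg function", "ENG_MORALE")

# Case (ii): quantifying the link from "actual engineering changes coming
# through" to morale required extra SD variables, recorded in the SDID so
# the audit trail from ID to simulation stays transparent.
expansion = [
    SDIDVariable("actual engineering changes coming through",
                 "ENG_CHANGES_PW"),
    SDIDVariable("total engineering changes required over the last 4 mths",
                 "TOTAL_ENG_CHANGES"),
    SDIDVariable("total number changes in excess of budget over past 4 mths",
                 "EXCESS_ENG_CHANGES"),
    SDIDVariable("excessive changes impact on morale of engineers",
                 "CHANGES_TO_ENG_MORALE"),
]
```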

Fig. 3. Extract from a cause map. [The extract links statements such as 'changes to eng work not triggered by problems arising in eng, but by client', 'engineering changes well in excess of expectations', 'eng problems in excess of reasonably expected', 'lots of work on redesigns', 'more eng rework, generally identified after methods (in production)', 'redesign work takes priority and so interrupts current work', 'stop work on drawing and then start again later', 'work in process often repeated & repeatedly subject to change', 'felt pressure from excessive rework being needed compared to normal contingency', 'forced to do unexpected eng work', 'losses in time from closing design work already started', 'hrs of level 3 work required for each drawing identified as requiring level 3 work', 'reluctance to release engineers immediately', 'less eng staff time available to progress expected work' and 'drop in morale in eng function'.]

Fig. 4. Extract from an ID. [Nodes include 'eng changes not triggered by problems in eng, but by client', 'eng problems in excess of reasonably expected', 'actual engineering changes coming through' (which leads directly to 'morale in engrg function'), 'work being done on redesigns' and 'eng rework identified after methods (in prodn)'.]

This process can also translate back into the causal model or ID model to reflect the increased understanding. In terms of adding new statements and links, this is predominantly relationship oriented. Sometimes, statements need to be included so as to provide details on the auxiliaries necessary to undertake the simulation modeling. However, a number of different relationship additions emerge. The first of these comprises links depicting delays – for example, in Fig. 7 there is a delay between '29 rate at which finished detailed design is changed' (DETAILED_LATE_CHANGES_1739) and '104 effect of design changes' (BASIC_CHANGES_DELAY). The second new form of link represents the need to link statements in order to ensure calculations can be carried out.


Fig. 5. Extract from an SDID. [Each node carries both its ID label and its SD variable name: 'morale in engrg function' (ENG_MORALE); 'excessive changes impact on morale of engineers' (CHANGES_TO_ENG_MORALE); 'total number changes in excess of budget over past 4 mths' (EXCESS_ENG_CHANGES); 'total engineering changes required over the last 4 mths' (TOTAL_ENG_CHANGES); 'engineering changes in excess of expectations per week' (EXCESS_ENG_CHANGES_PW); 'actual engineering changes coming through' (ENG_CHANGES_PW); 'work being done on redesigns' (REDESIGN_RATE); 'drawings completed by engs currently waiting to be reworked after being with methods' (REDESIGNS); 'reluctance to release engineers immediately' (ENG_RELEASE_LAG); 'eng rework identified after methods (in prodn)' (REWORK_AFTER_METHODS); 'eng changes not triggered by problems in eng, but by client' (AWR_ENG_HRS); 'eng problems in excess of reasonably expected' (PTR_AWR_ENG_HRS).]

Fig. 6. Extract from a SD model. [Variables include Eng_Morale, Changes_to_eng_Morale, Excess_eng_Changes, Total_Eng_Changes, Eng_Changes_pw, Completed_by_Engs, Completed_by_Methods, Rework_after_Methods, Redesigns, Redesign_Rate, AWR_eng_hrs and PTR_AWR_eng_hrs.]


Fig. 7. Extract from an SDID (Note: TD in this diagram refers to 'Technical Design').


An example from Fig. 7 is the link from '77 total number of TD staff' (TOTAL_TD_STAFF) to '81 switch to determine the % of available overtime that is to be used' (OVERTIME_TO_START) – itself a calculating statement. A further set of links that are added involve building balancing control loops into the model. These rarely, if ever, surface during interviews: they are part of the 'world taken for granted'. However, in order to make sure that the models replicate each other, these too must be added to the SDID and the cause map model, thereby ensuring that a consistent argument is agreed in all models. Any inconsistencies in the argumentation are therefore highlighted. At each point of inconsistency the modeler needs to understand why it has arisen and make the appropriate amendments before being able to proceed. This leads to increased validity, as each of the models forces consistency with the others.

3.4. Stage 4: The system dynamics simulation model

The process of quantifying SD model variables can be a challenge, particularly as it is difficult to justify subjective estimates of higher-level concepts such as "productivity" (Ford and Sterman, 1998). However, moving up the cascade reveals the causal structure behind such concepts and allows quantification at a level that is appropriate to the data-collection opportunities available. Fig. 8 provides an example. The quantitative model will require a variable such as "productivity" or "morale", and the analyst will require an estimate of the relationship between it and its exogenous and (particularly) endogenous causal factors. But while the higher-level concept is essential to the quantitative model, simply presenting it to the project team for estimation would not facilitate justifiable estimates of these relationships.
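To make Stage 4 concrete, the sketch below turns the SDID fragment of Figs. 5 and 6 into a minimal runnable stock-and-flow simulation. The paper does not publish its model equations, so every functional form and parameter here (the 4-month memory window, the budget allowance, the morale sensitivity and recovery rate, and the burst of client changes) is an invented assumption, used only to show the shape of the translation from SDID to simulation.

```python
# Minimal stock-and-flow sketch (Euler integration) of the morale fragment
# in Figs. 5-6. All equations and parameters are illustrative assumptions;
# the paper does not give the actual model formulations.
DT = 0.25                      # time step (weeks)
WINDOW = 17.0                  # "last 4 months" memory window, in weeks
BUDGET_OVER_WINDOW = 80.0      # assumed budgeted changes per 4-month window

eng_morale = 1.0               # stock: ENG_MORALE, 1.0 = normal morale
total_eng_changes = 0.0        # stock: TOTAL_ENG_CHANGES over the window

def eng_changes_pw(t):
    """Exogenous input ENG_CHANGES_PW: an assumed burst of client changes."""
    return 12.0 if 10 <= t <= 30 else 4.0

t = 0.0
while t < 52:
    # First-order memory: the stock forgets old changes over the window.
    total_eng_changes += (eng_changes_pw(t) - total_eng_changes / WINDOW) * DT
    # EXCESS_ENG_CHANGES: changes beyond the budgeted allowance.
    excess = max(0.0, total_eng_changes - BUDGET_OVER_WINDOW)
    # CHANGES_TO_ENG_MORALE: assumed linear erosion; morale also recovers
    # toward normal with an assumed 20-week time constant.
    eng_morale += ((1.0 - eng_morale) / 20.0 - 0.002 * excess) * DT
    eng_morale = min(1.0, max(0.0, eng_morale))
    t += DT

print(f"ENG_MORALE after 52 simulated weeks: {eng_morale:.2f}")
```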


3.5. Reversing the cascade

The process of moving back up the cascade can facilitate understanding between the parties. For example, in Fig. 9 the idea that a company was forced to use subcontractors, and thus lost productivity, might be a key part of a case for lawyers. The lawyers and the project team might have arrived at Fig. 9 as part of their construction of the case. Moving back up from the ID to the cause map (i.e. from Fig. 9 to Fig. 10) as part of a facilitated discussion not only helps the parties come to an agreed definition of the (often quite ill-defined) terms involved, it also helps the lawyers understand how the project team arrived at the estimate of the degree of the relationship. Having established the relationship, moving through the SDID (ensuring well-defined variables, etc.) to the SD model enables the analysts to test the relationships to see whether any contradictions arise, or whether model behaviours differ significantly from actuality, and it enables comparison of the variables with data that might be collected by (say) cost accountants. Where there are differences or contradictions, the ID can be re-inspected and, if necessary, the team presented with the effect of the relationship within the SD model, explained using the ID, so that the ID and the supporting cause maps can be re-examined to identify the flaws or gaps in the reasoning.

Fig. 8. Section of an ID showing the factors affecting productivity on a project. ['1 Loss of design productivity' is influenced by '2 Use of subcontractors', '3 Difficulty due to lack of basic design freeze', '4 Performing design work out of order', '5 loss of morale', '6 overtime' and '7 Management distracted by looking for subcontractors'.]

Fig. 9. Extract from an ID. ['2 Use of subcontractors' leads to '1 Loss of design productivity'.]

Fig. 10. Extract from a cause map. [Statements supporting the link from '2 Use of subcontractors' to '1 Loss of design productivity' include '8 Joe P left because unhappy with over-use of subcontractors', '9 senior staff complained too much time was spent supervising subcontractors' and '10 hourly-paid subcontractors have no interest in success of project'.]


Thus, in this example, as simulation modellers, cost accountants, lawyers and engineers approach the different levels of abstraction, the cascade process provides a unifying structure within which they can communicate, understand each other, and equate terms in each other's discourse.

As with the majority of OR modeling processes, the cascade is a cyclical process. For example, building and running the simulation model may provide additional insights into the cause map. Some of the audiences (e.g. clients or lawyers) are often highly involved in this cyclical process. Discussion of output from the simulation model may lead to further exploration of relationships within the initial cause map, and result in additional interviews or data collection and validation being undertaken. As many of the different audiences are involved when the qualitative and quantitative models are used to validate one another, the models' argumentation is not only tightened up, but the process also leads to increased ownership of each of the models and increased understanding (particularly for the lawyers). Also, as previously stated, any inconsistencies are easily highlighted. Since the client observes this happening, it helps in building client confidence in the model.

4. Advantages of the cascade

The cascade brings together two well-established, but hitherto separate, methods: cause mapping and system dynamics. When mixing methods such as these, the capabilities of the two approaches need to be brought together in a coherent and practical manner (Lane and Oliva, 1998). In addition to achieving this coherent and practical mixing of methods, the cascade results in a number of important advantages for the modeling process, presented below.

4.1. Achieving comprehensiveness and developing organizational learning

Our experience from the development of 10 model sets designed to analyze complex projects suggests that one of the principal benefits of using the cascade process derives from the added value gained through developing a rich and elaborated qualitative model that provides the structure (in a formalized manner) for the quantitative modeling. The cascade process takes a bottom-up approach to developing the model. Immersing users in the richness and subtlety that surrounds their view of the projects ensures involvement in, and ownership of, all of the qualitative and quantitative models. The comprehensiveness led, in all cases, to a better understanding of what occurred and enabled effective conversations to take place across different organizational disciplines.

As well as being integral to the model building process, the use of the cascade created significant levels of organizational learning (Williams et al., 2004) and a seamless process (Howick et al., 2006). Regardless of the mechanism used to surface the various views, the structured material provided a basis upon which to examine and consider what was believed to have occurred and to build a shared and richer understanding. The process often triggered new contributions as memories were stimulated, and both new material and new connections were revealed. The resultant models thus act as organizational memories, providing useful insights into future project management (both in relation to bids and implementation). These models provide more richness, and therefore an increased organizational memory, when compared to the traditional methods used in group model building for system dynamics models (for example Vennix, 1996). However, this outcome is not untypical of other problem structuring methods (Rosenhead and Mingers, 2001).

4.2. Testing the veracity of multiple perspectives

The cascade's bi-directionality enabled the project team's understandings to be tested both numerically and from the perspective of the coherency of the systemic portrayal of logic. By populating the initial quantitative model with data (Ackermann et al., 1997), rigorous checks of the validity of assertions were possible. When modeling is used to understand organizational change, both past and future, the situation is loaded with the possibility of blame, or resistance to change. In the cases discussed here, blame is the fear of those participating in accounting for history and often restricts contributions (Ackermann and Eden, 2005). When initiating the cascade process, the use of either interviews or group workshops increases the probability that the modeling team will uncover the rich story rather than a partial or, as is often the case with highly politicized situations, 'sanitized' one.


By starting with 'concrete' events that can be verified, and exploring their multiple consequences, the resultant model provides the means to reveal and explore the different experiences. For example, where a change is made to a particular part of the design on a project, there can be a number of different consequences experienced – with different designers seeing only part of the whole (see Fig. 2, where the change to the power requirements and switchboard led to changes in other systems that were the responsibility of other participants, e.g. HVAC, as well as implications throughout the cabling).

4.3. Modeling transparency

By concentrating the qualitative modeling efforts on the capture and structuring of multiple experiences and viewpoints, the cascade process initially uses natural language and rich description as the medium, which facilitates the generation of views and enables a more transparent record to be attained. There are often insightful moments as participants, viewing the whole picture, realize that the project is more complex than they thought. This realization results in two advantages. The first is a sense of relief that they did not act incompetently given the circumstances – which in turn instills an atmosphere more conducive to openness and comprehensiveness (see Ackermann and Eden, 2005). The second is learning – understanding the whole, the myriad interacting consequences and, in particular, the dynamic effects that occurred on the project (which often act in a counter-intuitive manner) provides lessons for future projects. In Fig. 11, contributions from engineers working on different parts of the system (in this example mechanical, ventilation and electrical board/power) can be woven together to enable a holistic view to be attained and to elucidate the different contributions to the outcome. All the ramifications of the different events experienced throughout the duration of the project are more likely to be captured, often revealing hitherto unknown aspects, or changing views about their significance.

4.4. Common understanding across many audiences

The cascade process promoted ownership of the models by the mixed audience. For example, lawyers were more convinced by the detailed qualitative argument presented in the cause map (Stage 1), found this part of greatest utility, and hence engaged with this element of the cascade. Engineers, however, got more involved in the construction of the quantitative model and in evaluating the data encompassed within it.

Fig. 11. Cause map showing how contributions from different functions can be woven together.


When modeling as a part of litigation, a model needs to be able to prove the quantum (e.g. show how the events that occurred on a project led to an actual time and cost over-run on the project) (Howick, 2003). This leads to the use of mathematical models such as simulations in order to produce the time and cost over-runs. However, for an audience such as a lay juror these can be extremely difficult to understand. In the case of the SD models, some of the reasons for this are that the models are large, involve quantitative reasoning, and jurors would have only minimal exposure to the model (Howick, 2005). However, the rich qualitative maps developed as part of the cascade method are presented in terms which are easier for people with no modeling experience to understand. In addition, by reversing the cascade, the dynamic results that are output by the simulation model are given a grounding in the key events of the project, enabling jurors to be given fuller explanations of, and reasons why, the project had a particular result. This process of using the rich qualitative model to illustrate particular aspects of the model has been particularly successful in arbitration, where the contractor views not just the behaviour but its myriad consequences, and thus the basis for the litigation argument.

The models' audience has also included experts whose job it was to destroy the validity of the modeling. Using the cascade method, any structure or parameters that are contained in the simulation model can be easily, and quickly, traced back to information gathered as a part of creating the cognitive maps or cause maps. Each contribution in these maps can then normally be traced to an individual witness who could potentially stand up in court and defend that detail in the model. This auditable trail aids the modeler in the process of refuting the attacks made on the model. In addition, the overall structure of the cascade method, with each model validating the other, makes the presentation to any expert modelers more convincing.

4.5. Clarity

The step-by-step process, particularly the penultimate step of moving to an SDID, forces the modeler to be clear about what statements mean. We have already seen, in discussing Stage 3 above, that arguments are tested and imprecise, illogical or inconsistent statements highlighted, requiring the previous stage to be revisited and meanings clarified, or inconsistencies cleared up. Another important example of the way clarity is forced into the system during Stages 1–3 concerns the free usage of links in a cognitive map, where an interviewee will identify that two concepts are linked; the move to an ID and then an SDID requires the analyst to consider whether these links represent influences, effects on rates, calculative relationships or some other type of link. While an interviewee often could not bring this level of clarity to an initial exploratory interview (and indeed, trying to do so would lose the flow of the story-telling), the cascade structure enables vagueness to be identified and clarified.

4.6. Confidence building

4.6.1. Confidence building for modeling experts

As part of gaining overall confidence in a model, any audience for the model will wish to have confidence in the structure of the model (for example Ackoff and Sasieni, 1968; Rivett, 1972; Mitchell, 1993; Pidd, 2003). When a modeler is constructing a simulation model, good modeling practice will include carrying out tests of structural validation. For SD simulation models this involves tests such as structure assessment, boundary adequacy and dimensional consistency (Forrester and Senge, 1980; Sterman, 2000). The cascade method produces a number of qualitative models which focus on the structure of the situation to be represented. Using qualitative models prior to a SD model in order to gain a clear understanding of the structure of the problem is not new, and others have discussed the advantages of this process (Coyle, 1996, 2000; Wolstenholme, 1990, 1999). When assessing confidence levels in a part of the structure of a SD model, the cascade process enables any member of the 'client' audience to trace the structure of the SD model directly to the initial natural language views and beliefs provided in individual interviews or group sessions. For example, Figs. 3–6 show extracts from a cause map, ID, SDID and SD model. The elements included in the SD model in Fig. 6 can be traced back through the SDID and ID, and finally to the cause map, enabling any of the model's audiences to understand how the structure of the SD model relates to the initial information that was gathered from interviews or documents. As previously mentioned, this means that, through the cause map, the output from the SD model has a grounding in the key events in the minds of the various audiences.


4.6.2. Confidence for the audience in quantitative assessments

In addition to the expert modeler's range of tests, other tests follow naturally from the use of the cascade. Scenarios are an important test through which the confidence of the project team in the model can be considerably strengthened. Simulation is subject to demands to reproduce scenarios that are recognizable to the managers, capturing a portfolio of meaningful circumstances that occur at the same time, including many qualitative aspects such as morale levels. In order to test this, in a way that is similar to estimating the underlying relationships at key critical memory-points of the project, the reverse process can be used at any point in the project time-line. If a particular time-point during the quantitative simulation is selected, the simulated values of all the variables, and in particular the relative contributions of factors in each relationship, can be output from the model. At this point, a similar exercise to that described around Figs. 8–10 can be carried out. Thus, the simulation might show that at a particular point in a project, loss of productivity is 26%, with the loss due to:

– 'Use of subcontractors': 5%
– 'Difficulty due to lack of basic design freeze': 9%
– 'Performing design work out of order': 3%
– 'Loss of morale': 5%
– 'Overtime': 4%

Asking the project team for their estimates of loss of productivity at this point in time, and for their estimation of the relative contribution of these five factors, will help to validate the model. In most cases this loss level is best captured by plotting the relative levels of productivity against the time of critical incidents during the life of the project. Discussion around this estimation might reveal unease with the simple model described in Fig. 8, which will enable discussion around the ID and the underlying cognitive map, either to validate the agreed model, or possibly to modify it and return up the cascade to further refine the model.

Just as in the process in which underlying relationships are estimated at key critical memory-points, so in this scenario validation the cascade process provides a unifying structure within which the various audiences can communicate and understand each other. Different audiences can work at the different levels of abstraction and, by listening to each other's reasoning, relate other participants' knowledge to their own to give each a richer knowledge-set.
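As a worked illustration of this check (the simulated percentages are the ones quoted above; the team's elicited estimates are invented for the example), a few lines of Python make the comparison and flag the factors whose gap warrants discussion:

```python
# A small sketch of the scenario check described above: at a chosen
# time-point, the simulated contributions to productivity loss are compared
# with the project team's elicited estimates. The simulated figures are
# those quoted in the text; the team's estimates are invented.
simulated = {
    "Use of subcontractors": 5.0,
    "Difficulty due to lack of basic design freeze": 9.0,
    "Performing design work out of order": 3.0,
    "Loss of morale": 5.0,
    "Overtime": 4.0,
}
assert sum(simulated.values()) == 26.0  # total simulated productivity loss

team_estimates = {  # hypothetical values elicited from the project team
    "Use of subcontractors": 6.0,
    "Difficulty due to lack of basic design freeze": 8.0,
    "Performing design work out of order": 3.0,
    "Loss of morale": 7.0,
    "Overtime": 3.0,
}

for factor, sim in simulated.items():
    est = team_estimates[factor]
    flag = "  <-- discuss, revisit ID/cause map" if abs(est - sim) >= 2.0 else ""
    print(f"{factor:>45}: model {sim:4.1f}% vs team {est:4.1f}%{flag}")
```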

5. Conclusion

The practical modeler needs to take situations in a messy, ill-defined world, where many participants have partial and sometimes conflicting views, and build models that are clear and supportable for multiple potential audiences. While this has been recognized to some extent in the past (for example, in the use of methods such as mapping as a precursor to discrete-event or system dynamics simulation modelling), there has not previously been a rigorous and well-founded methodology to take the analyst on this journey. The authors believe that the modeling cascade provides this methodology.

Bringing together two well-established modeling techniques, cause mapping and system dynamics, the cascade process exploits the benefits of both approaches. The multiple audiences therefore gain the benefits of rich, elaborated qualitative stories in addition to a quantifiable structure. The audiences are thus able to see the over-time dynamics emerge from the rich stories, and those dynamics are given a grounding in the key events in the minds of the audience. However, the methodology is much more than the combination of these two techniques. The various stages provide a mechanism for bringing clarity and understanding, and the two-way nature of the cascade enables cognitive maps, cause maps, IDs, SDIDs and quantitative models to inform each other, bringing a much greater degree of confidence in the model from a variety of audiences.

The methodology is not entirely simple to use. The key step of moving from the ID to the SDID has presented the authors with particular challenges in thinking through the relationship, the details of which are not within the scope of this paper. In addition, the way that each type of model informs the others makes demands on both quantitative and qualitative modelers. But we do consider that this methodology provides a robust and effective way of transparently producing models of messy situations, in which multiple differing types of audiences have expressed confidence.

References

Ackermann, F., Eden, C., 2001. Contrasting single user and networked group decision support systems. Group Decision and Negotiation 10 (1), 47–66.


Ackermann, F., Eden, C., 2004. Using causal mapping: Individual and group; traditional and new. In: Pidd, M. (Ed.), Systems Modelling: Theory & Practice. Wiley, Chichester, pp. 127–145.
Ackermann, F., Eden, C., 2005. Using causal mapping with group support systems to elicit an understanding of failure in complex projects: Some implications for organizational research. Group Decision and Negotiation 14 (5), 355–376.
Ackermann, F., Eden, C., Williams, T.M., 1997. Modeling for litigation: Mixing qualitative and quantitative approaches. Interfaces 27 (2), 48–65.
Ackermann, F., Eden, C., Williams, T., Howick, S., 2006. Systematic risk assessment: A case study. Journal of the Operational Research Society 58 (1), 39–51.
Ackoff, R.L., Sasieni, M.W., 1968. Fundamentals of Operations Research. Wiley, New York.
Balci, O., 1994. Validation, verification, and testing techniques throughout the life cycle of a simulation study. Annals of Operations Research 53, 121–173.
Balci, O., 2001. A methodology for certification of modeling and simulation applications. ACM Transactions on Modeling and Computer Simulation 11 (4), 352–377.
Barlas, Y., 1989. Multiple tests for validation of system dynamics type of simulation models. European Journal of Operational Research 42 (1), 59–87.
Barlas, Y., 1996. Formal aspects of model validity and validation in system dynamics. System Dynamics Review 12 (3), 183–210.
Barlas, Y., Carpenter, S., 1990. Philosophical roots of model validation: Two paradigms. System Dynamics Review 6 (2), 148–166.
Bell, J.A., Bell, M.F., 1980. System dynamics and scientific method. In: Randers, J. (Ed.), Elements of the System Dynamics Method. MIT Press, Cambridge, MA.
Bryson, J.M., Ackermann, F., Eden, C., Finn, C., 2004. Visible Thinking: Unlocking Causal Mapping for Practical Business Results. Wiley, Chichester.
Burns, J.R., 1977. Converting signed digraphs to Forrester schematics and converting Forrester schematics to differential equations. IEEE Transactions on Systems, Man, and Cybernetics SMC 7 (10), 695–707.
Burns, J.R., Ulgen, O.M., 1978. A sector approach to the formulation of system dynamics models. International Journal of Systems Science 9 (6), 649–680.
Burns, J.R., Ulgen, O.M., Beights, H.W., 1979. An algorithm for converting signed digraphs to Forrester's schematics. IEEE Transactions on Systems, Man, and Cybernetics SMC 9 (3), 115–124.
Coyle, R.G., 1996. System Dynamics Modelling: A Practical Approach. Chapman & Hall, London.
Coyle, R.G., 2000. Qualitative and quantitative modelling in system dynamics: Some research questions. System Dynamics Review 16 (3), 225–244.
Coyle, G., Exelby, D., 2000. The validation of commercial system dynamics models. System Dynamics Review 16 (1), 27–41.
Eden, C., 1988. Cognitive mapping: A review. European Journal of Operational Research 36, 1–13.
Eden, C., Ackermann, F., 1998. Analyzing and comparing idiographic causal maps. In: Eden, C., Spender, J.C. (Eds.), Managerial and Organizational Cognition. Sage, London, pp. 192–209.
Eden, C., Ackermann, F., 2004. Cognitive mapping expert views for policy analysis in the public sector. European Journal of Operational Research 152, 615–630.
Ford, D., Sterman, J., 1998. Expert knowledge elicitation to improve formal and mental models. System Dynamics Review 14 (4), 309–340.
Forrester, J.W., 1961. Industrial Dynamics. Productivity Press, Portland, Oregon.
Forrester, J.W., Senge, P.M., 1980. Tests for building confidence in system dynamics models. In: Legasto, A.A. Jr., Forrester, J.W., Lyneis, J.M. (Eds.), TIMS Studies in the Management Sciences 14, North-Holland.
Fossett, C.A., Harrison, D., Weintrob, H., Gass, S.I., 1991. An assessment procedure for simulation models: A case study. Operations Research 39 (5), 710–723.
Gass, S.I., 1983. Decision-aiding models: Validation, assessment, and related issues for policy analysis. Operations Research 31 (4), 603–631.
Gass, S.I., 1993. Model accreditation: A rationale and process for determining a numerical rating. European Journal of Operational Research 66, 250–258.
Gass, S.I., Joel, L.S., 1981. Concepts of model confidence. Computers and Operations Research 8 (4), 341–346.
Howick, S., 2003. Using system dynamics to analyse disruption and delay in complex projects for litigation: Can the modelling purposes be met? Journal of the Operational Research Society 54 (3), 222–229.
Howick, S., 2005. Using system dynamics models with litigation audiences. European Journal of Operational Research 162 (1), 239–250.
Howick, S., Ackermann, F., Andersen, D., 2006. Linking event thinking with structural thinking: Methods to improve client value in projects. System Dynamics Review 22 (2), 113–140.
Lane, D., 2000. Diagramming conventions in system dynamics. Journal of the Operational Research Society 51 (2), 241–245.
Lane, D., Oliva, R., 1998. The greater whole: Towards a synthesis of system dynamics and soft systems methodology. European Journal of Operational Research 107, 214–235.
Law, A.M., Kelton, W.D., 2000. Simulation Modeling and Analysis, third ed. McGraw-Hill, New York.
Mitchell, G., 1993. The Practice of Operational Research. Wiley, Chichester.
Oliva, R., 2004. Model structure analysis through graph theory: Partition heuristics and feedback structure decomposition. System Dynamics Review 20 (4), 313–336.
Pidd, M., 2003. Tools for Thinking: Modelling in Management Science. Wiley, Chichester.
Pidd, M., 2004. Computer Simulation in Management Science, fifth ed. Wiley, Chichester.
Richardson, G.P., Pugh III, A.L., 1981. Introduction to System Dynamics Modeling with DYNAMO. MIT Press, Cambridge, MA.
Rivett, P., 1972. Principles of Model Building. Wiley, London.
Robinson, S., 2004. Simulation: The Practice of Model Development and Use. Wiley, Chichester.
Rosenhead, J., Mingers, J., 2001. Rational Analysis for a Problematic World Revisited. Wiley, Chichester.
Sterman, J.D., 1984. Appropriate summary statistics for evaluating the historical fit of system dynamics models. Dynamica 10 (2), 51–66.
Sterman, J.D., 2000. Business Dynamics: Systems Thinking and Modeling for a Complex World. Irwin/McGraw-Hill, Chicago.
Vennix, J., 1996. Group Model Building: Facilitating Team Learning using System Dynamics. Wiley, Chichester.
Williams, T., Ackermann, F., Eden, C., 2003. Structuring a delay and disruption claim: An application of cause-mapping and system dynamics. European Journal of Operational Research 148, 192–204.
Williams, T.M., Ackermann, F., Eden, C., Howick, S., 2004. Learning from project failure. In: Love, P., Irani, Z., Fong, P. (Eds.), Management of Knowledge in Project Environments. Elsevier/Butterworth-Heinemann, Oxford, pp. 219–236.
Wolstenholme, E.F., 1990. System Enquiry: A System Dynamics Approach. Wiley, Chichester.
Wolstenholme, E.F., 1999. Qualitative vs quantitative modelling: The evolving balance. Journal of the Operational Research Society 50 (4), 422–428.