Macrosimulation of project risks — a practical way forward

J Berny and P R F Townsend*

School of Architecture and Civil Engineering, South Bank University, Wandsworth Road, London, UK
*Turner & Townsend Project Management, 10 John Adam Street, London, UK
This paper was the joint first prize winner in the Sir Monty Finniston Awards for Project Management 1993 (sponsored by British Telecom).

The paper describes a new methodology for analysing quantifiable risks, and it considerably expands present methods while embodying most of the results currently generated. It simplifies the data requirements for analysis, and does not require statistical expertise. It limits analysis to relevant risks/problems, and it calculates their relative ranking. Most importantly, it initially considers a vital question which is usually omitted in, for example, PERT, namely the chance of a risk occurring; the nature of the effect is also considered. Several dimensions are considered together, e.g. time and cost. Because of its tree-structured nature, in-depth analysis can be performed. The major defects of the frequently used beta distribution are overcome by the Berny distribution. Macrosimulation is shown to simplify and widen the use of risk analysis because it is knowledge-embedded. It allows the consideration of both external and internal project problems/risks. Finally, practical use to date has been most successful, and enables a much wider audience to make use of risk analysis.

Keywords: PERT, project management, risk prioritization, risk probability distributions, simulation

This paper is broken down into three major sections and a summary: (a) a clarification of what risk analysis is, (b) an overview of techniques, and (c) an introduction to the methodology of macrosimulation. A new technique for risk analysis, namely macrosimulation, is considered. The prefix 'macro' is used to distinguish this process from that typically required by microsimulation, which demands that each of hundreds or thousands of activities is considered, many of which may have only a marginal influence on a project. Current software technology is PERT-like, and, as such, accounts only for project variability in the single dimension of time. Other factors, such as human resources and cost variabilities, are generated pro rata. Risk analysis by macrosimulation has the advantage of user selection of the risk-critical activities prior to the simulation process, and, more importantly, it includes external influences on the project. The additional freedoms created by macrosimulation provide greater insight into the magnitude of a 'danger', as seen from the combined influences on project cost and duration.

We preview this paper with a consideration of traditional techniques, e.g. those in Pugh and Sodden¹, and the alternative approaches developed by Berny². The translation of the qualitative view into quantitative form and the use of project knowledge in the selection of the relevant risk distribution are also considered. To the extent that the approach contrasts with the prior definition of Ward and Chapman³, we consider that risk analysis is not a specific methodology, but a consideration of the expectations that we may hold as a result of including variability in our estimations. This paper describes not only a technique for analysing several dimensions in a pairwise manner, e.g. cost and time, but also addresses the question of risk prioritization and the ranking of specific risk factors. This provides a more realistic basis for understanding the nature of risk, for both project-internal and project-external factors, the degree of risk being borne by each of the parties in the project, and, in particular, the financial value of insurable risk. The technology has been developed in such a manner that the practitioner is not directly exposed to the statistical calculations, and the fact that it can be used by non-statisticians is a major advance. PERT, which ...


Finally, the paper concludes with a practitioner’s assessment of the technique, confirming its practical validity and use in reaching the desired goal of risk management.

WHAT IS RISK ANALYSIS?

Purpose of risk analysis

Risk analysis is primarily concerned with evaluating the uncertainties which are seen to affect the outcome of a plan or ongoing work. It draws attention to uncertainties which are not immediately apparent. The major consideration should be the unveiling of risks and their causes. The above attempts to define the purpose of risk analysis without a preconceived reference to a technique. This contrasts with the view taken by Ward and Chapman³, which the authors believe to be inappropriate. Risk analysis, by definition, is a process for the analysis of potential risks, with the end aim of reducing their impact and/or reducing the likelihood of their occurrence. Such evaluations require careful analyses of the many parts which constitute programmes of work, both at their inception and throughout their lifecycle. Risk analysis should allow the translation and quantification of perceived risks and uncertainties into planning and work decisions, as well as into economic terms. The outcome should reflect the possible scenarios of the future and its uncertainties.

The evaluation of risk is critically dependent on the experts in the field, who are unlikely to have much technical knowledge of the analytic tools of risk analysis. For this reason, it is necessary to minimize the need for specialist knowledge in the use of our in-depth tools. The nature of risk assessment is highly dependent on the demands of the local situation; for this reason, more than one approach is strongly recommended. The appropriate method of assessing risks can be simple or highly sophisticated. The VISIER software, which was developed to cater for macrorisk analysis, embodies five techniques, and thus it does not ignore the need for differing approaches to a problem or their interaction. Its five integrated devices span common sense to in-depth levels of analysis. Information resulting from any one can reinforce results from, and allow feedback to, other risk analyses. The outcome of all the analyses should be compatible. It is precisely because, at the start of project evaluation, the different methods yield different, and even contradictory, results that an integrated multidevice approach is so important. The cross-checking ability may show contradictory results which highlight real risks and those where our knowledge is weakest.

Most risks are problems with uncertainty, and thus they allow for some measure of analysis and control. The totally unknown, some unquantifiable issues, and self-contradicting problems are the areas of major concern. Of these, the latter two can at least be planned for to some degree, although they are unlikely to be eliminated. A related issue is the danger of generating other risks by the very solutions which resolve them. The aim of risk analysis, when considering a specific problem, must be to reduce, not escalate, risk, for example by the substitution of solutions. Only in this way can risks be managed and viable solutions achieved.

Formal definition of risk analysis

It is important to define the process which is being considered, as the definition may help to remove some of the aura of the subject, of which there is growing awareness. No formal definitions exist. We are helped by the definitions in the Shorter Oxford Dictionary of the words 'risk' and 'analysis':

'To risk' is either (a) to hazard or endanger, to expose to chance of injury or loss, or (b) to venture upon or take chances.
'A risk' is either (a) a hazard, danger or exposure to mischance or peril, or (b) the chance or hazard of commercial loss, especially in the case of insured property or goods.
'Analysis' is the resolution of anything complex into its simple elements, or the exact determination of its components.

From these definitions, an appropriate explanation of the term ‘risk analysis’ is suggested by the authors to be ‘the resolution of anything complex into its simple elements which would become exposed to chance’. This definition is not so far from the specific case of, for example, construction projects, which are combinations of complex actions. It further leads to the purpose of risk analysis, which is to minimize such losses or maximize such gains as could occur as a result of variance from the idealized solution. These gains or losses could also be to property, persons, or even aspirations, as a result of undertaking a project. The critical aspect of the analysis relates to chance mechanisms. These may or may not be quantifiable, but the chance or hazard aspect is a vital ingredient. We suggest that no technique should ever be proposed to define a general process of analysis. In the first place, new techniques are always evolving; second, the extent of what is being analysed is continually changing. Finally, the term ‘risk’ relates to a much wider field than just project management. For this reason, risk analysis has firstly been defined above on the basis of dictionary definitions, and secondly by reference to the context of construction projects.

OVERVIEW OF TECHNIQUES

Origins of risk analysis

Probably the earliest industrial use of risk methods was with PERT/RISK, which originally referred to the variation of the estimates of the activity durations, and, assuming their independence, was used to calculate the probable variations of a project duration. The most frequently stated project-assessment methods use the beta distribution, which is frequently supplemented by the rectangular, triangular and normal probability distributions. For instance, Cooper et al.⁴ use the above distributions, but acknowledge the existence of interdependence, which is lacking in PERT/RISK. The need to account for time interdependence, as exemplified by network analysis, can be overcome by simulation; a good example is shown by Pugh and Sodden¹. Their methodology replaces the beta distribution by a combined normal distribution. This allows a simpler user input without losing the essential property of skewness.

Other techniques fall into two main areas: sensitivity analysis and decision trees. Perry⁵ combines the results with decision-tree methods and simulation. The latter approach was used primarily for construction projects. For wider use, Neuburger⁶ advocates decision-tree methods as a broad-brush approach. For example, he uses a 'risk-chance' technique for company acquisition. As a final example, Baker⁷, who was concerned with BP oil projects, specifies the need for rapid answers and an easy-to-use computer methodology. Software developed by BP is stated to allow interactive modelling, as it is supported by available data to improve the accuracy of the generated probability distribution.
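For reference, the PERT variant referred to above reduces a three-point estimate (minimum, most likely, maximum) of an activity duration to a mean and standard deviation. A minimal sketch of these classical approximations, with hypothetical figures:

```python
def pert_estimate(a: float, m: float, b: float) -> tuple[float, float]:
    """Classical PERT approximations for a three-point estimate:
    minimum a, most likely (mode) m, maximum b."""
    mean = (a + 4 * m + b) / 6    # beta-based mean, weighted towards the mode
    std_dev = (b - a) / 6         # one-sixth of the range
    return mean, std_dev

# A hypothetical activity: 8 weeks at best, 10 most likely, 16 at worst.
mean, sd = pert_estimate(8, 10, 16)   # mean ~10.67 weeks, sd ~1.33 weeks
```

It is exactly these convenient formulas, applied uncritically, that the later commentary on the beta distribution takes issue with.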

Appraisal of methods used to search for risks

This is the qualitative side of risk analysis. There are many approaches to the means of discovering which risks may occur in a project. The nature of the risks ranges from the perceived but virtually unimaginable, with little or no historical precedent, to the well documented, fully analysed risk. The full spectrum should be considered at the conceptual stage of a project, and a pragmatic set should be found for use from the planning stage to completion.

Searching for risks creates a paradox. Once it has been ascertained that a risk may exist, and it has become understood to some degree, it becomes controllable. Thus it should no longer be considered a 'risk' but a problem with uncertainty. However, the understanding of a risk frequently results in the emergence of new risks, thus creating a cycle which rarely ends. For this reason, it is necessary to generate a cut-off point when considering a risk and its implications.

A typical example of a search procedure is the consideration of a list of known risks. This process has the danger that those problems which are special to a project may become overlooked, i.e. it tends towards a 'blinkered view of the world'. Another issue which tends to minimize the effectiveness of a risk search is the tendency of experts to be risk-averse. Many systems have been developed, e.g. by Dworatschek⁸, for assessing and searching for risks. Perhaps the most popular approach is to take the route of expert or knowledge-based systems. These are as yet highly undeveloped.

Principles of risk evaluation

The methods of enquiry with a user require that he/she associate events with risks. Many such events cannot be quantified; however, one may approach such a problem by asking for extremes to be considered and then reduced to more realistic values. There is, for instance, the Delphi technique or the brainstorming approach for the most difficult cases.

Another issue is the finding of all the pertinent risk factors. This may be approached on a group basis, with an optimum of about a dozen people; this typically enlarges the range of problem areas by up to five times that for an individual 'expert' assessor. Lists of known risks and their analyses should assist in this area. This will be explored in a future paper on risk management. Our concern is to analyse these evaluations, considered for time, cost and/or resource effects on the whole project. This leads to specific estimates being made.

Selected risk-assessment methods

Risk assessment is largely dependent on the expert knowledge of the planner or manager. It is their information which can be supplemented by risk-analysis devices. While risk-analysis techniques are clouded in the mysteries of statistics, and are very labour-intensive, the potential users remain 'potentials'. However, what-if analyses, forecasts, and contingency techniques are all concerned with risks, and there should be no need to cloak users in 'mystery'. Hence, our concern is to place more sophisticated and in-depth methods in the same day-to-day class, and enhance the usability of the latter tools. All the following techniques are incorporated in VISIER, which is an integrated knowledge-embedded planning and control suite of programs.

What-if analysis: This well known sensitivity-analysis device has been enhanced by the inclusion of expert-system facilities to allow the early modification and creation of alternative plans which can be compared with each other. The comparisons are expanded to allow most financial-analysis techniques, such as discounting, to be easily used.

Projection 'what-if' analyses: Forecasts are a form of risk assessment. This concept has been extended to allow, for instance, a plan, marketing strategy or projection of work in progress to be made with alternative goals, and the results checked for their statistical reliability. In this way, several cost/resource breakdowns are generated.

Early-warning system: This technique is concerned with the detailed activities which tend to become masked when a project is considered as a whole. By the use of statistical techniques, this facility identifies those specific activities which, on a period-by-period analysis, exert most influence on the different stages of a project. It thus tends to identify those activities which have not been resolved into the appropriate level of detail. This information guides the decisions governing the generation of plan alternatives.

Projection-based risk analysis: This is a non-simulation, projection-based device which allows the user to include additional estimates which relate to the whole project plan, rather than rely solely on the results of statistical analyses of the activity estimates, which can lead to what appears to be an 'unrealistic set of projections'. In fact, such projections are totally justified at the level of detail of the plan without further knowledge input. This facility allows the input of probability data for the total plan, requiring the user to provide the earliest, most likely, and latest expected completion dates, from which a probability distribution is calculated. The analysis results in a confidence-limit graph and histogram of project cost and duration. Such data is vital to the decision as to whether to undertake commitments, and it eliminates the requirement for the '10% contingency rule'. This technique has been called 'megarisk analysis', as it applies to the project as a whole.
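A minimal sketch of a megarisk-style calculation: one three-point estimate for the whole project is turned into confidence limits on completion. The paper fits a skewed normal or Berny distribution at this point; a triangular distribution is used below purely as a stand-in, and the dates are hypothetical.

```python
import random

def megarisk_confidence(earliest: float, most_likely: float, latest: float,
                        runs: int = 10_000) -> dict[int, float]:
    """Derive whole-project confidence limits from a single three-point estimate."""
    samples = sorted(random.triangular(earliest, latest, most_likely)
                     for _ in range(runs))
    # Read selected confidence limits off the empirical distribution.
    return {p: round(samples[int(p / 100 * (runs - 1))], 1)
            for p in (10, 50, 80, 90, 95)}

# Hypothetical project: completion between week 95 and week 115, most likely 100.
print(megarisk_confidence(95, 100, 115))
```

A confidence-limit graph of the kind the technique produces is simply these percentiles plotted over the full 0-100% range.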

Macrosimulation or macrorisk analysis: This is a Monte Carlo simulation. It is not activity-network-based, and only the effects of the risk factors on the whole project are quantified. Again, because of the real level of detail in the project estimates, a normally arduous procedure is simplified, and solutions are provided very rapidly. The technique is used to account for (a) major uncertainties within a project plan which are of sufficient magnitude to be considered as distinct factors which can jeopardize the validity of the plan itself, but are not included as normal variations, and (b) factors which are not normally included in the project plan, but whose potential occurrence could upset the assumptions of the plan, and intrude upon its smooth execution. The methodology can be considered as an in-depth expansion of the sensitivity-analysis tools currently available, for instance Thompson's⁹ spider's web and its extension by Yeo¹⁰. It is also much faster and simpler to use.

Commentary on recent techniques

The most recent quantitative techniques, exemplified by those developed by Yeo¹⁰, Ward and Chapman³ and Pugh and Sodden¹, consider the three major camps: sensitivity analysis, decision theory, and simulation, respectively. Norris, Perry and Simon¹¹ summarize this view, adding influence diagrams to the list of typical risk-analysis methods. The authors' technique principally combines both simulation and sensitivity analysis, although it does not ignore decision analysis, as this is incorporated in the prioritization process. The multidimensionality and the means of combining many diverse risks allow for the requirements of sensitivity analysis. We have started in a new direction, and expect that these proposals will form the basis of new growth in this area of study.

Yeo's¹⁰ proposal incorporates the PERT/beta distribution into the spider's-web contingency graph. This process has some drawbacks: it has to be limited to perhaps seven risks at a time, as well as relying on the beta distribution, which has been shown to have major defects¹². The use of the PERT approach has, inadvertently, been further exacerbated by the calculation of the standard deviation of a standard deviation, rendering some of Yeo's¹⁰ results questionable. The beta distribution has been used to estimate the means and standard deviations of time and cost using maximum, mode and minimum estimates. The resulting formulas give many variants of equations, the commonest of which are those used in PERT. These issues are resolved by the Berny¹² distribution. Simulation methods to date have suffered from excessive detail, their lack of concern for external effects, and their general limitation of one dimension being extrapolated to others, e.g. time to cost.

Pugh and Sodden¹ combine decision trees with simulation. The authors do see value in this approach, but have not found the practical evidence to support this dual approach, and have replaced it by allowing for what-if simulations. Ward and Chapman's³ paper points out their concern with the lack of use of risk analysis. They analyse, to a limited extent, many of the causes, but they do not give any indication of possible solutions. The reason may be their reliance on a 20-year-old technique. Further developments have been made in the intervening years, and the advances presented by the authors were made with these very points in mind. These have been shown to be valid in practice, and, in certain industrial situations, one in five projects is now analysed by macrosimulation for risks.

MACROSIMULATION

Validation of macro versus micro

It can be argued that the overall result of only considering the effects on the final cost and duration may be countered by the Gantt-chart influences of individual activities. This assumes that no external influences are imposed on a solution. If one considers the costs alone, the variability of each cost must, by definition, contribute to the whole. Hence, it is sufficient to estimate these variations in their own context, or as a proportion of the whole job cost. The same argument applies to time, providing the activities are on one critical path. The nature of criticality is that it influences the job duration. The question which is then posed is the extent of the influence of noncritical activities; for these, only cost variations are considered. The syndrome of risk aversion does influence results. We propose that macrosimulation is valid in the fully general sense, with the tradeoff being in terms of external factors. For the above reasons, macrosimulation has more to offer in the simulation analysis of a complete project, and it is less arduous than microsimulation. It is much less prone to errors such as the ‘tunnel’ vision imposed by microsimulation, requiring all activities to be considered while ignoring external effects. Hence, macrosimulation should produce a realistic view of the ‘world’.
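The cost argument can be illustrated numerically. Under independence, the variability of a total built from hundreds of activity costs is fully determined by the per-activity variabilities, so sampling the whole-job cost directly (the macro view) reproduces the same spread as sampling every activity (the micro view). A sketch with hypothetical figures:

```python
import random
import statistics

random.seed(7)
RUNS = 2_000
N_ACT, MEAN, SD = 200, 50.0, 5.0   # hypothetical per-activity cost estimates

# Micro view: sample every activity cost, then total them.
micro = [sum(random.gauss(MEAN, SD) for _ in range(N_ACT)) for _ in range(RUNS)]

# Macro view: sample the job total directly; independence gives the total
# a mean of N*MEAN and a standard deviation of sqrt(N)*SD.
macro = [random.gauss(N_ACT * MEAN, (N_ACT ** 0.5) * SD) for _ in range(RUNS)]

print(statistics.stdev(micro), statistics.stdev(macro))   # both ~70.7
```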

Methodology of macrosimulation

Before the simulation can begin, specific estimates are required for each risk factor. These are obtained by the simple question/answer dialogue detailed below, from which an appropriate risk distribution may be attached to the risk factor.

Specific estimates

• How likely is the risk?: What is the probability that the risk will occur?
• Time and cost effects on the project: Is there only one alternative? If there is only one alternative, what is it? (For example, the risk of late delivery will add 10 weeks to the job.) This implies a point distribution.


Table 1. Summary of project and risks

                                  Probability of    Cost, £ x 10³                          Time, weeks
Risk description                  occurrence, %     Type  Value 1    Value 2   Value 3     Type  Value 1  Value 2  Value 3
Demolition                        90.0              3N        5.0      -25.0      30.0     1         0.0      0.0      0.0
Substructure                      65.0              3B        0.0    -2500.0    2500.0     3B        0.0     -1.0      5.0
External walls                    90.0              2      -750.0     1250.0       -       1         0.0      0.0      0.0
Frame                             90.0              2      -250.0      750.0       -       1         0.0      0.0      0.0
Roof                              90.0              2      -750.0      500.0       -       1         0.0      0.0      0.0
Miscellaneous services            90.0              3B        0.0     -400.0    1000.0     1         0.0      0.0      0.0
Finishes                          95.0              2      -750.0      800.0       -       2        -1.0      1.0      -
External: gas etc.                75.0              2       -60.0       60.0       -       2         1.0      5.0      -
External works, e.g. landscape    90.0              3B      100.0     -400.0    1200.0     1         1.0      0.0      0.0
External fees                     95.0              1         1.0        -         -       1         0.0      0.0      0.0

[Basic estimates: cost £100 000 000, time 100 weeks. Risk annotation: type 1: single point (one alternative only) (data: 1 point, -, -); type 2: rectangular or even distribution (equal-chance case) (data: minimum, maximum, -); type 3: skewed normal (3N, median) or Berny (3B, mode) distribution (high chance of most likely case) (data: most likely, minimum, maximum).]

[Figure 1. Cost and time histograms: results histograms of the simulated project cost (percentage of occurrences) and project time (97-111 weeks against percentage of occurrences); each '*' on the histograms represents 1 per cent, each 'o' under 1 per cent.]


If there are several alternatives, then:

• Is there a peak value?: If there is no peak, then state a minimal and maximal effect. (For example, the return of the site to its original landscape may be easier than expected, and hence the job cost could drop by £50 000, or be more difficult and increase it by £150 000.) This implies a rectangular distribution. If there is a peak, it is necessary to estimate a minimum, most likely, and maximum value. Given a maximum, most likely (mode or peak), and minimum, a further question arises.
• Is there a 50:50 chance that the risk could fall on either side of the peak?: If there is not a 50:50 chance, the Berny distribution¹² should be used. Otherwise the skewed normal should be used.

The distributions which are used cover most of those recommended, but are found by the user adopting the knowledge-based questions above.

Summary

• What is the probability of the risk occurring?
• Distinguish between the following risk distributions:
  o point,
  o rectangular,
  o skewed normal or normal (50:50 split),
  o Berny (a skewed distribution without the defects of the beta distribution).
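A minimal sketch of how the answers to this dialogue might drive a whole-project Monte Carlo run. The field names are hypothetical, the skewed normal and Berny distributions (reference 12) are approximated here by a triangular stand-in, and the two example risks are transcribed from Table 1.

```python
import random

BASE_COST, BASE_TIME = 100_000.0, 100.0   # £ x 10^3 and weeks, as in Table 1

def sample_effect(kind, v1, v2=0.0, v3=0.0):
    """Map a dialogue answer onto a sampled effect.
    kind '1': point; '2': rectangular (minimum, maximum);
    '3N'/'3B': peaked (most likely, minimum, maximum), approximated
    here by a triangular distribution."""
    if kind == "1":
        return v1
    if kind == "2":
        return random.uniform(v1, v2)
    return random.triangular(v2, v3, v1)   # low, high, mode

def macrosimulate(risks, runs=10_000):
    """Monte Carlo over whole-project risk factors, not activity networks."""
    outcomes = []
    for _ in range(runs):
        cost, time = BASE_COST, BASE_TIME
        for r in risks:
            if random.random() < r["prob"]:          # does the risk occur?
                cost += sample_effect(r["ck"], *r["cv"])
                time += sample_effect(r["tk"], *r["tv"])
        outcomes.append((cost, time))
    return outcomes

risks = [
    {"prob": 0.65, "ck": "3B", "cv": (0.0, -2500.0, 2500.0),
     "tk": "3B", "tv": (0.0, -1.0, 5.0)},            # substructure
    {"prob": 0.75, "ck": "2", "cv": (-60.0, 60.0),
     "tk": "2", "tv": (1.0, 5.0)},                   # external: gas etc.
]
outcomes = macrosimulate(risks)
```

Histograms of the two columns of `outcomes` correspond to the cost and time histograms of Figure 1.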

Typical results

When the quantitative information has been gathered, and the data passed through the macrosimulation, one obtains several diagnostic results. The figures and tables show most of the information included within and derived from the software. Table 1 shows the summary of the job and its risks. Histograms of the cost and time probabilities are shown in Figure 1. Table 2 shows the order of risks. This information also generates a graph (see Figure 2) which applies to the 'if the risks occur' case. The greater the risk (if it occurs), the further the event is plotted from the origin. Table 2 and Figure 2 show that the third and fourth risks predominate for the dimension of cost, while the eighth risk is the greatest for time. However, the ninth risk, with effects on both cost and time, is highly significant, especially with respect to cost. The scatter diagram (see Figure 3) shows the probability of time and cost combined.


Table 2. Risk summary

Risk (simulated probability)              Measure            Average if    Average for
                                                             risk occurs   whole simulation
Demolition (91.4%)                        'Cost', £ x 10³        3.819          3.491
                                          Time, weeks            0.010          0.009
                                          Combined effect         0.4%           0.4%
Substructure (62.2%)                      'Cost', £ x 10³        3.959          2.462
                                          Time, weeks            0.613          0.381
                                          Combined effect        21.1%          16.9%
External walls (86.8%)                    'Cost', £ x 10³      250.279        217.242
                                          Time, weeks            0.010          0.009
                                          Combined effect         8.6%           9.7%
Frame (90.2%)                             'Cost', £ x 10³      241.325        223.087
                                          Time, weeks            0.010          0.009
                                          Combined effect         8.5%           9.9%
Roof (87.8%)                              'Cost', £ x 10³     -132.886       -116.674
                                          Time, weeks            0.010          0.009
                                          Combined effect         4.6%           5.2%
Miscellaneous services (89.0%)            'Cost', £ x 10³       98.373         87.552
                                          Time, weeks            0.010          0.009
                                          Combined effect         3.4%           3.9%
Finishes (93.8%)                          'Cost', £ x 10³        8.176          7.669
                                          Time, weeks           -0.008         -0.008
                                          Combined effect         0.4%           0.5%
External: gas etc. (77.4%)                'Cost', £ x 10³        1.412          1.093
                                          Time, weeks            2.908          2.251
                                          Combined effect       100.0%         100.0%
External works, e.g. landscape (89.2%)    'Cost', £ x 10³      191.930        171.202
                                          Time, weeks            1.000          0.892
                                          Combined effect        35.0%          40.4%
External fees (95.6%)                     'Cost', £ x 10³        1.000          0.956
                                          Time, weeks            0.010          0.010
                                          Combined effect         0.3%           0.4%
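The two columns of Table 2 differ only in their denominator, as the following sketch (with hypothetical inputs) makes explicit; note that the whole-simulation average is approximately the conditional average scaled by the simulated probability of occurrence (for demolition, 3.819 x 91.4% ≈ 3.491).

```python
def risk_averages(effects, occurred):
    """effects[i]: sampled effect of this risk in run i (0 if it did not occur);
    occurred[i]: whether the risk occurred in run i."""
    hits = [e for e, o in zip(effects, occurred) if o]
    avg_if_occurs = sum(hits) / len(hits) if hits else 0.0
    avg_whole_sim = sum(hits) / len(effects)   # non-occurrences count as zero
    return avg_if_occurs, avg_whole_sim
```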

[Figure 2. Priority of risks (if they occur): the VISIER ranking graph, based on the simulation processor's average effect if the risk occurs, plotting each risk's cost effect (£'000) against its time effect (weeks); only the highest-ranked risks are numbered on the graph.]

SUMMARY

Practitioner's assessment

The use of the techniques described above allows for the greatest degree of flexibility available in the most user-friendly form. The questions asked by the software are unambiguous, and are capable of being easily understood. It is therefore comparatively simple to leave staff who are untutored in the use of the software to progress the analysis unsupervised. They find little difficulty in mastering the use of the software, despite the intricacies of the programming and data-analysis methods used.

The software has been tested on a wide range of projects, and has demonstrated the magnitude of the risks which a client has needed to address at the outset of a project. In common with other risk-analysis software, the VISIER system cannot reduce risk. It does, however, identify the order of magnitude of particular risks, enabling project managers, in collaboration with their clients, to take the necessary actions to prevent or reduce risk. By this iterative procedure of identifying risk by the use of VISIER, and taking actions to reduce or remove risk, the client can be assured that the project will be analysed as far as is practicable, and that the results of the risks occurring will be limited.

Future work

The possibilities for future research are considerable. Two case studies at opposite extremes of the spectrum are suggested below.

• Statistics of simulation runs: The nature of the technology developed will allow a statistical analysis of the length of the run required for the 'critical mass' of different levels of simulation to be pinpointed. The numbers of runs recommended by commercial and research organizations vary from 100 to tens of thousands. This problem must be addressed if risk analysis is to be used with confidence. A fairly straightforward programme of work should generate recommended working rules that contrast the number of runs with the needs of the problem in hand (a sketch of such a convergence check follows this list).

[Figure 3. Screen output of the cost and time probability graph: a VISIER scatter diagram of simulated time (94-114 weeks) against simulated cost (93 000 to 109 000, £'000), with each cell showing the percentage of occurrences; '*' represents below 1 per cent.]

This could be consolidated with an appropriate statistical overview to ensure that meaningful results can be obtained from simulation.

• Correlation between perceived and simulated risk priorities: Risk and uncontrollability are concepts to which managers are particularly averse, and, as a result, there has been a reluctance to engage in this type of analysis. Measurements of risk aversion have been confined to utility theory, and no investigations appear to have been made into comparisons of perceived risks and simulated magnitudes. The advantage of having qualitative assessments which can be quantified and simulated is that it provides an excellent opportunity for risk-averse attitudes to be placed in a practical perspective.
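One possible shape for the run-length study suggested above: grow the sample in batches until the statistic of interest stabilizes. The stopping rule and the stand-in sampling function below are hypothetical.

```python
import random
import statistics

def one_run():
    """Hypothetical stand-in for a single macrosimulation project duration."""
    return 100 + random.triangular(-5, 15, 2)

def runs_needed(tol=0.05, batch=100, max_runs=50_000):
    """Add `batch` runs at a time until the running mean moves by less than `tol`."""
    samples = [one_run() for _ in range(batch)]
    while len(samples) < max_runs:
        previous = statistics.fmean(samples)
        samples += [one_run() for _ in range(batch)]
        if abs(statistics.fmean(samples) - previous) < tol:
            return len(samples)
    return max_runs

print(runs_needed())   # typically a few hundred to a few thousand runs
```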

CONCLUSIONS

A workable definition of risk analysis having been established, the proposed new techniques of macrorisk analysis have been demonstrated. This technique is only one of several available, among which are what-if analyses. From these methods must be chosen the most applicable and, hence, the most effective technique. The differences between macrorisk analysis and other techniques have been highlighted, and the benefits of macrorisk analysis have been summarized.

A new computer-based methodology has been developed to incorporate the principles of decision trees, sensitivity analysis, and simulation methods. An additional distribution, used to overcome the defects of the beta distribution, has been included within the analytical process. It further improves on current methodologies by considering both external and internal risk influences, the two-tier prioritization of quantitative risks, and bidimensionality, e.g. with respect to time and cost.

It has been the intention of the software to remove some or all of the mystique which surrounds risk analysis. This has been achieved by removing the requirement for inordinate detail and for the in-depth understanding of statistics required by other methods, and by ensuring that the simulation results are presented in a simple and easily understood format. This has widened the use of the techniques described to those whose expertise lies in other specialized areas. Their inherent knowledge is capable of application within the technique. This has led to a greater acceptance of risk analysis by all levels of internal staff and clients.

REFERENCES

1 Pugh, L A and Sodden, R G 'Use of risk analysis techniques in assessing the confidence of project cost estimates and schedules' Int. J. Project Manage. Vol 4 (1986) pp 158-162
2 Berny, J 'Forecasting and risk analysis applied to management planning and control' PhD Thesis, University of Aston, UK (1988)
3 Ward, S C and Chapman, C B 'Extending the use of risk analysis in project management' Int. J. Project Manage. Vol 9 (1991) pp 117-123
4 Cooper, D F, MacDonald, D H and Chapman, C B 'Risk analysis of a construction cost estimate' Int. J. Project Manage. Vol 3 (1985) pp 141-149
5 Perry, J G 'Risk management: an approach for project managers' Int. J. Project Manage. Vol 4 (1986) pp 211-216
6 Neuburger, K 'Assessing the risks in acquisitions - risk chance analysis' Long Range Planning Vol 11 (1986) pp 41-88
7 Baker, R W 'Handling uncertainty' Int. J. Project Manage. Vol 4 (1986) pp 205-210
8 Dworatschek, S et al. 'The state of the art in project management risk' INTERNET Int. Expert Sem. (1989)
9 Thompson, P A Organisation and Economics of Construction McGraw-Hill, UK (1981) pp 68-70
10 Yeo, K T 'Project cost sensitivity and variability analysis' Int. J. Project Manage. Vol 9 (1991) pp 111-116
11 Norris, C, Perry, J and Simon, P 'Project risk analysis and management - a guide' Project Manage. Today (Apr 1992) pp 1-6
12 Berny, J 'A new distribution function for risk analysis' J. Oper. Res. Soc. Vol 31 (1989) pp 1121-1127

Jan Berny is a dual national, fluent in both English and Czech. Although trained as a physicist, during the last 20 years he has been engaged primarily in operational research, lecturing and work as an international consultant. He has made breakthroughs in forecasting and risk analysis, on the basis of which he both gained his PhD (at Aston University, UK) and developed his software package VISIER. He has recently completed EC-sponsored research in the Czech and Slovak Republics. He is involved in the economic reconstruction of post-Communist countries.

Paul R F Townsend completed his BSc (honours) degree at Reading University in 1976, graduating with a First Class degree. Research into computer cost modelling then followed in the Martin Centre at the University of Cambridge, culminating in the award of a doctorate. He joined Turner & Townsend in 1979 and was appointed to lead the Research and Development department. Since leaving university he has served on a number of committees, including the Science and Engineering Research Council Building sub-committee and a number of RICS (QS division) committees. He is currently acting on the committee of the Association of Researchers in Construction Management (ARCOM) - with which he has been associated since its inception - and is an external examiner at the University of Greenwich.
