International Journal of Project Management Vol. 16, No. 3, pp. 145-152, 1998
© 1998 Elsevier Science Ltd and IPMA. All rights reserved. Printed in Great Britain. 0263-7863/98 $19.00 + 0.00
PII: S0263-7863(97)00045-8
Project management decision making using cross-impact analysis

Luis F Alarcón
Head, Department of Construction Engineering and Management, Universidad Católica de Chile, Vicuña Mackenna 4860, Casilla 306, Santiago, Chile

David B Ashley
Chair of Civil Engineering, University of California, Berkeley, CA, USA
This paper presents a methodology to evaluate the impact of management decisions on project performance outcomes. This decision-theory-based methodology consists of a conceptual qualitative model structure and a mathematical model structure. The conceptual component is a simplified structured model of the variables and interactions that influence the decision being analysed. Influences and interactions assessed by experts or members of the management team are stored in a knowledge base. The mathematical component of the methodology uses concepts of cross-impact analysis and probabilistic inference as the core of the analysis procedure. The paper describes how the cross-impact concepts have been adapted and extended. Among the extensions, a method to combine probabilistic evidence is applied in this model to perform probabilistic inference. The result is a powerful but easy-to-use modelling and decision-making methodology. © 1998 Elsevier Science Ltd and IPMA. All rights reserved.
Introduction

Most existing construction project performance models are limited in their ability to provide quantitative estimates of the interactions among the significant factors affecting performance. More sophisticated simulation or network models generally require a substantial modelling and data-collection effort. The methodology described in this paper provides a new, innovative approach to this problem. It was originally developed by the authors, working with a Task Force of the American Construction Industry Institute (CII), to predict the effect of project team options on project performance.[1,16] This model, called the General Performance Model (GPM), is used here as an example.

Cross-impact analysis applied to project modelling

Cross-Impact Analysis (CIA) is a technique specifically designed to study how the interactions of events, present in a mathematical model, affect the probabilities of those events. The general notion was first suggested by Gordon and Helmer[10] with the game Futures created for the Kaiser Corporation, and later expanded to a number of forecasting areas.[2] CIA is used to analyse the numerous chains of impact that can occur, to determine the overall effect of these chains on the probability that each event will occur. Figure 1 summarises the seven major steps of the cross-impact modelling process and the corresponding activities in project performance modelling.

This paper shows how the methodology can be used for modelling project decisions, combining experience captured from experts and assessments from the project team to develop a conceptual model for decision making. The GPM, the conceptual model developed with the Project Team Risk/Reward Task Force of the American CII, is used as an illustrative example. Project options such as organisational structures, incentive plans, and team building alternatives can be incorporated into a model knowledge base. The GPM allows management to test different combinations of project execution options and predict expected cost, schedule and other performance measures. The methodology is illustrated using a hypothetical example project, a $100 million petrochemical facility located on the Gulf Coast of the US. All the inputs and results shown in this paper correspond to this example project. The adaptation of the CIA technique to suit the needs of the problem, and the assumptions and simplifications required to analyse the model, are described below.
Definition of events

The process of determining the event set for the performance model corresponds directly to the process of defining the significant variables that resulted in the structure of the GPM shown in Figure 2. This model contains a fixed 'conceptual structure' (drivers, processes and performance elements) and different knowledge modules.[16] The modular knowledge structure permits the use of different experts for each type of knowledge, to obtain experts' judgement and knowledge that are accurate concerning the subject being assessed.

[Figure 1  Cross-impact analysis applied to project performance modelling. Steps in cross-impact analysis, paired with the corresponding activities in performance modelling: define the events > develop conceptual GPM, define performance elements; estimate initial probabilities of events > define probabilistic impacts scale, obtain values for performance events; estimate conditional probabilities of each event pair > define scale for strength and direction of probabilistic impacts, define patterns of impact, obtain assessments from users; perform calibration run of the cross-impact matrix > modify simulation algorithm, run initial simulation; perform sensitivity tests > design the analysis approach and interpret results; introduce constraints > introduce users' preferences; extend analysis to combine probabilistic evidence > run simulation.]
[Figure 2  GPM structure and knowledge inputs. Drivers, processes and performance outcomes; team actions (incentive plans, organisational structures, team building alternatives); options and strategies (specific knowledge) + project management general knowledge + project-specific knowledge + decision maker's preferences.]

To fit the CIA modelling format, each variable of the GPM is described using a set of five mutually exclusive and collectively exhaustive events which cover the full range of possible performance outcomes. This scale considers positive as well as negative performance outcomes and uses five symbols: NN (high negative), N (medium negative), O (normal), P (medium positive), PP (high positive). The scale values or 'variable states' can be mapped into a cumulative probability distribution, and the initial probabilities can be obtained directly from the definition of the scale. For instance, Figure 3 shows that a 'high positive' performance (PP) has a probability of occurrence of 0.20. To obtain quantitative measures of performance, the project management team can specify the numerical levels corresponding to the probabilistic scale used to describe the variable states. For instance, the user can provide assessments about the variability of performance measures using three estimates, in the same way this type of assessment is used for project planning with PERT.[7] A curve can be approximately fitted to the specified points of the cumulative distribution, and the values for NN*, N*, O*, P* and PP* can be obtained using a procedure to discretise the distribution.[8] The output of the cross-impact algorithm is a set of scenarios in which each variable takes exactly one of the performance values of the scale. The corresponding numerical values described above can be used to translate the analysis results into meaningful quantitative results for each of the selected outcome measures.
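The three-estimate discretisation can be sketched as follows. The piecewise-linear cumulative curve and the fractiles used here are illustrative assumptions; the paper fits a curve to the user-specified points and discretises it with the procedure of Ref. 8.

```python
def discretize_three_point(a, m, b, fractiles=(0.10, 0.30, 0.50, 0.70, 0.90)):
    """Map a three-point (PERT-style) estimate to five representative
    state values NN*, N*, O*, P*, PP* by inverting a cumulative curve.

    Assumptions: a piecewise-linear CDF through (a, 0.0), (m, 0.5),
    (b, 1.0), and the fractile choices above -- both are illustrative,
    not taken from the paper."""
    def inverse_cdf(q):
        if q <= 0.5:
            return a + (q / 0.5) * (m - a)
        return m + ((q - 0.5) / 0.5) * (b - m)
    return [inverse_cdf(q) for q in fractiles]

# Example: cost as % of budget -- optimistic 85, most likely 100, pessimistic 120
states = discretize_three_point(85.0, 100.0, 120.0)
print(dict(zip(["NN*", "N*", "O*", "P*", "PP*"], states)))
```

With these (assumed) fractiles the example yields state values of 88, 94, 100, 108 and 116, comparable in spirit to the discretised scale of Figure 3.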
[Figure 3  Derivation of initial probabilities from the symbolic performance scale. A cumulative distribution over the performance scale (%), with discretised values NN* = 88, N* = 93, P* = 108 and PP* = 115 in the example; the 'high positive' state PP has a probability of 0.20.]
Conditional probabilities for each event pair

Once the model variables have been identified and incorporated into the model, cross-impact analysis uses a matrix format to capture the influences and interactions among the variables of the model. One approach to estimating the conditional probability matrix is to ask directly the question 'If event m occurs, what is the new probability of event n?'.[9] Thus, if the probability of event n was originally judged to be 0.50, it might be judged that the probability of event n would be 0.75 if event m occurred. The entire cross-impact 'occurrence matrix' is completed by asking this question for each combination of occurring event and impacted event. Similarly, a 'non-occurrence matrix' can be generated for the non-occurrence of the events by following a similar procedure. However, the non-occurrence matrix is usually calculated from the information in the occurrence matrix. Another approach is to ask for an estimate of the direction and strength of the 'impact', for instance by specifying a number between -3 and +3.[4] This number is then used to calculate the impact using an analytical expression. Originally, Gordon and Hayward proposed an arbitrary quadratic equation which produced larger values of the new probabilities as the strength of an enhancing impact increased, and the reverse for inhibiting impacts.[10] New equations with more rigorous mathematical formulations have been proposed[2] and have been adopted and tested in current software implementations.[3,11]
Table 1  Strength and direction of impact on probabilities

Index value   Meaning
 3            Significantly increases the probability
 2            Moderately increases the probability
 1            Slightly increases the probability
 0            No effect on the probability
-1            Slightly decreases the probability
-2            Moderately decreases the probability
-3            Significantly decreases the probability
The approach in the cross-impact program used at Battelle[4] has been adapted to estimate the conditional probabilities. Originally in this approach, an estimate of the direction and strength of the impact is obtained from experts, using the numerical scale shown in Table 1. The respondents answer the question 'If column states were to occur, how would this affect the probability of row states?' for each individual pair of variable states until an occurrence matrix is completed. Our example (Figure 2), with 14 variables of 5 states each, would require 14 × 5 × 14 × 5 = 4900 assessments. This approach has been modified, in order to simplify the knowledge acquisition demands, by replacing 25 assessments with a single assessment for each pair of variables. The simplified questioning procedure is as follows: 'If changes were to occur in the column states, how would this affect row states?'. The respondent must indicate the strength and direction of the impact according to the scale shown in Figure 4. This simplification was possible because of the common state definitions used for all the variables of this model. Pattern assumptions were used to generate the 25 corresponding cross-impact numerical indexes based on this particular characteristic. Assuming a pattern of impact between variables has two important benefits. First, it drastically reduces the number of assessments required to fill the occurrence matrix. Second, but not less important, it frees the user from the burden of dealing with probabilistic information and the potential inconsistencies that result from overwhelming amounts of information. Details on pattern generation and testing of this assumption can be found in Ref. 1. The questioning process is repeated for all the pairs of variables in the model until the full occurrence matrix has been completed. The numerical values of the occurrence matrix are then used to calculate the impacts using an analytical expression.
First, each individual index value in the matrix is converted into a coefficient value (CV) using Equation (1) or (2). Next, the coefficient value is utilised in Equation (3).
[Figure 4  Simplified cross-impact sub-matrix. Respondents answer 'If changes were to occur in the column states, how would this affect row states?' for each pair of variables (e.g. cost, schedule, value, effectiveness) using a seven-point scale: SIG+ (significantly in the same direction), MOD+ (moderately in the same direction), SLI+ (slightly in the same direction), NO (no effect), SLI- (slightly in the opposite direction), MOD- (moderately in the opposite direction) and SIG- (significantly in the opposite direction). Each assessment expands into a 5 × 5 sub-matrix of impact indexes; the sample pattern for SIG+, with rows and columns both ordered NN, N, O, P, PP, is:

        NN    N    O    P   PP
  NN     3    2    0   -2   -3
  N      2    1    0   -1   -2
  O      0    0    0    0    0
  P     -2   -1    0    1    2
  PP    -3   -2    0    2    3]
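The pattern expansion can be sketched as follows. The SIG+ sub-matrix is the sample pattern from Figure 4 and opposite-direction patterns are its negation; treating the weaker MOD and SLI strengths as scaled-down versions is an assumption here (the actual pattern tables are documented in Ref. 1).

```python
# The SIG+ sample pattern from Figure 4: a 5x5 sub-matrix of impact
# indexes, rows and columns both ordered (NN, N, O, P, PP).
SIG_PLUS = [
    [ 3,  2, 0, -2, -3],
    [ 2,  1, 0, -1, -2],
    [ 0,  0, 0,  0,  0],
    [-2, -1, 0,  1,  2],
    [-3, -2, 0,  2,  3],
]

def expand_assessment(strength, direction):
    """Expand one pair-level assessment into a 5x5 index sub-matrix.

    strength: 3 (SIG), 2 (MOD), 1 (SLI) or 0 (no effect);
    direction: +1 (same direction) or -1 (opposite direction).
    Scaling weaker strengths by strength/3 (with rounding) is an
    assumption, not the paper's documented pattern tables."""
    return [[round(v * direction * strength / 3) for v in row]
            for row in SIG_PLUS]

occurrence_submatrix = expand_assessment(3, -1)   # expand a SIG- assessment
```

In this way a single answer per variable pair replaces the 25 state-level assessments that the Battelle-style questioning would otherwise require.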
CV = |Impact| + 1,          for Impact >= 0    (1)

CV = 1/(|Impact| + 1),      for Impact < 0     (2)

NPi = (Pi × CV)/(1 - Pi + Pi × CV)             (3)

where NPi is the adjusted probability of event i and Pi is the old probability of event i. The derivation of this formula is based on the fact that, in general, the odds of an event happening are given by ODDS = Pi/(1 - Pi), and on the principle that the new odds can be used to calculate the new probability. This is done by adjusting the current odds by the coefficient value, as shown in Equation (4). Further details on the derivation and use of this formula can be found in Ref. 4, and a mathematical justification of the principle in Ref. 2.

NEW ODDS = ODDS × CV                           (4)
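The odds adjustment of Equations (1)-(4) can be sketched directly:

```python
def coefficient_value(impact):
    """Equations (1) and (2): convert an impact index to a coefficient value."""
    if impact >= 0:
        return abs(impact) + 1
    return 1.0 / (abs(impact) + 1)

def adjusted_probability(p, impact):
    """Equations (3) and (4): adjust the odds p/(1-p) by CV and convert
    the new odds back into a probability."""
    cv = coefficient_value(impact)
    return (p * cv) / (1 - p + p * cv)

# An event judged at 0.50 that receives a moderately enhancing impact (+2)
print(adjusted_probability(0.50, 2))   # 0.75
```

Note the asymmetry built into the formulation: an enhancing index of +2 moves 0.50 up to 0.75, while an inhibiting index of -2 moves it down to 0.25, so impacts of equal strength and opposite direction cancel in the odds domain.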
Calibration run of the cross-impact matrix
The original cross-impact algorithm used to perform the simulation has been modified to take advantage of the specific characteristics of the GPM. In this particular cross-impact model, each 'variable' is represented by a set of five mutually exclusive and collectively exhaustive events: NN, N, O, P, PP. Exactly one of these events will always occur, and its occurrence matrix is equivalent to the non-occurrence matrix of the other four non-occurring events. In the modified cross-impact algorithm only occurrences are tested, and therefore only the occurrence matrix is required. The modified algorithm was tested against the original and showed a remarkable improvement in performance: computation time was reduced to approximately one third of the time originally required. The relative frequency of occurrence of each event in the simulation is used to compute the calibration probability of the event.
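The modified simulation loop can be sketched as below. The data structures, the occurrence sequence, the renormalisation of the five states after each odds adjustment, and the tiny two-variable example are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import Counter

STATES = ["NN", "N", "O", "P", "PP"]

def coefficient_value(impact):
    # Equations (1) and (2)
    return abs(impact) + 1 if impact >= 0 else 1.0 / (abs(impact) + 1)

def calibration_run(sequence, initial, impacts, runs=5000, seed=1):
    """Monte Carlo sketch of the modified cross-impact algorithm, which
    tests occurrences only.  `sequence` is the assumed order of event
    occurrence; `initial[v]` holds the five initial state probabilities
    of variable v; `impacts[(m, sm, n)]` lists the impact indexes of
    state sm of variable m on the five states of variable n."""
    rng = random.Random(seed)
    counts = {v: Counter() for v in sequence}
    for _ in range(runs):
        probs = {v: list(initial[v]) for v in sequence}
        for idx, m in enumerate(sequence):
            sm = rng.choices(STATES, weights=probs[m])[0]  # one state occurs
            counts[m][sm] += 1
            for n in sequence[idx + 1:]:          # propagate to later events
                key = (m, sm, n)
                if key not in impacts:
                    continue
                # odds adjustment (Equations (3) and (4)), then renormalise
                new = [p * coefficient_value(i) / (1 - p + p * coefficient_value(i))
                       for p, i in zip(probs[n], impacts[key])]
                total = sum(new)
                probs[n] = [p / total for p in new]
    # relative frequencies give the calibration probabilities
    return {v: {s: counts[v][s] / runs for s in STATES} for v in sequence}

initial = {"engineering": [0.1, 0.2, 0.4, 0.2, 0.1],
           "cost":        [0.1, 0.2, 0.4, 0.2, 0.1]}
impacts = {("engineering", "PP", "cost"): [-3, -2, 0, 2, 3],
           ("engineering", "NN", "cost"): [3, 2, 0, -2, -3]}
calibrated = calibration_run(["engineering", "cost"], initial, impacts, runs=2000)
```

Grouping the recorded scenarios by identical state combinations then yields the scenario frequencies used in the interpretation stage.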
Sensitivity tests, policies, or actions
Sensitivity testing can be used to validate key assumptions made in developing the model itself, such as the effect of alternative impact patterns, the effect of alternative solution algorithms, and the number of simulation runs required to obtain accurate results. It can also be used to evaluate the results of the CIA and highlight the key variables in the model. The alternatives for each project option represent policies under examination. Strategies of combined or isolated application of options such as incentive plans or organisational structures are good examples of actions under study.

Sequencing
Sequencing of events is used to introduce the assumptions and logic of the GPM into the mathematical model. There are two important reasons to impose these conditions: (1) to ensure that the model represents the way in which the events occur in reality; and (2) to direct the model to propagate the effects starting from the variables which are directly affected by the actions under evaluation. The conceptual model assumes a directionality in the propagation of effects from options to performance outcomes. Drivers' events occur first, processes' events occur second and performance outcomes' events occur last. No sequence restrictions are imposed on the internal order of the drivers' events or the performance outcomes' events. However, processes require a sequence that reflects the timing of their occurrence in a construction project. Within the processes used in this example, definition/feasibility events are the first to occur and start-up/operations events are the last to occur. Design, construction and procurement can be parallel processes, so no sequence restrictions are imposed on them. Figure 5 summarises the sequence restrictions.
Interpretation of results

The results produced by the model consist of a set of scenarios in which each variable is in one of the predefined states (NN, N, O, P, PP). Identical scenarios are grouped together, and their frequencies can be used to identify the most likely scenarios, providing insight regarding the likelihood of a variable state occurring.[4] The variable states' frequencies can be expressed as posterior probabilities for comparison with the initial probability estimates. The fact that the initial probabilities are judged in isolation implies that the posterior probabilities can be interpreted as probabilities modified for the likely impacts of other events in the base case.[9]

[Figure 5  Sequence of occurrence of GPM events]

[Figure 6  Drivers' settings for an option alternative 'k': one state (NN, N, O, P or PP) specified for each driver]
In the GPM structure, drivers are the variables directly affected by project options, and the effects of project options propagate from drivers to processes and from processes to performance elements, as shown in Figure 2. An option alternative is represented by specific settings for the drivers' states, as shown in Figure 6. Option alternative k could be a specific incentive plan which is expected to have a high positive (PP) impact on project management, a medium positive (P) impact on engineering, and no significant impact (O) on the other three drivers. These settings are specified by the experts or by the project team, and they are interpreted as necessary conditions to select a smaller set of scenarios from the results generated by the CIA. These scenarios, which share the specified drivers' settings, are considered the likely scenarios when the option alternative is applied. The total number of scenarios which belong to this reduced analysis universe is the frequency of the option alternative. This value is an indication of the likelihood that scenarios containing the specified drivers' state settings will occur. The frequency of the option alternative can be interpreted as a relative measure of the obstacles to achieving the level of impact on drivers suggested by the option experts. Option alternatives with very low relative frequency may need to be reassessed to establish more realistic sets of impacts on drivers. In this way the information from the cross-impact model can be used to improve the consistency and validity of the model itself. For example, one may be interested in a particular incentive plan (a particular option alternative) that implies specific variable state settings for the drivers. By examining the outcomes of the scenarios corresponding to this incentive plan, it is possible to study its effects on the variables of the model.
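The scenario selection step can be sketched as follows; the driver names, state settings and frequencies are hypothetical, for illustration only.

```python
def option_scenarios(scenarios, driver_settings):
    """Select the scenarios consistent with an option alternative's
    driver settings.  `scenarios` is a list of (scenario, frequency)
    pairs, where each scenario maps variable names to states."""
    return [(s, f) for s, f in scenarios
            if all(s[d] == state for d, state in driver_settings.items())]

# Option alternative k: PP on project management, P on engineering,
# no significant impact (O is simply not constrained here) -- as in Figure 6
settings = {"project_management": "PP", "engineering": "P"}

scenarios = [
    ({"project_management": "PP", "engineering": "P", "cost": "P"}, 120),
    ({"project_management": "PP", "engineering": "P", "cost": "O"}, 80),
    ({"project_management": "O",  "engineering": "N", "cost": "N"}, 300),
]
selected = option_scenarios(scenarios, settings)
frequency = sum(f for _, f in selected)   # frequency of the option alternative
```

Here the option alternative's frequency is 200 out of 500 simulated scenarios, the relative measure of how readily the specified driver impacts are achieved.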
Figure 7 shows the type of results that can be obtained for an incentive plan in the example model. These results can be used to generate posterior probabilities for the individual variable states, expected values for variable outcomes, or other statistical measures that can be useful for comparing option alternatives. For instance, a comparison of expected project cost (expected value) for a set of option alternatives (incentive plans, organisations, etc.) could help the decision makers to select the most convenient alternative. Table 2 shows the type of comparison that can be made for three alternative incentive plans: the analyst can compare the expected results of applying these incentive plans for each of the selected performance outcomes. In fact, it is possible to compare all the outcome measures (multiple attributes) for each alternative. It is also possible to examine the distribution of each outcome measure and compare the relative risk existing among the different options. Figure 8 shows a comparison of the cumulative probability distributions of project cost for the three incentive plans introduced in Table 2. These distributions were obtained from the frequency distributions in the simulation raw results (Figure 7). Figure 8 shows that 'Incentive Plan 8' also dominates the other two incentive plans. In this case the result is consistent with the analysis of expected values; however, in some cases it might be convenient to choose alternatives with a slightly higher expected cost if they offer less risk.
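The expected-value and risk comparison can be sketched as follows; the two cost distributions are hypothetical, not the paper's results.

```python
def expected_value(freqs):
    """Expected outcome from a frequency distribution over numeric
    outcome levels, e.g. project cost for one incentive plan."""
    total = sum(freqs.values())
    return sum(level * f for level, f in freqs.items()) / total

def cumulative(freqs, levels):
    """Cumulative probabilities evaluated at the given outcome levels."""
    total = sum(freqs.values())
    running, out = 0.0, []
    for lv in levels:
        running += freqs.get(lv, 0) / total
        out.append(running)
    return out

# Illustrative cost distributions (% of budget) for two incentive plans
plan_a = {85: 10, 95: 30, 100: 40, 110: 15, 120: 5}
plan_b = {85: 20, 95: 40, 100: 30, 110: 8, 120: 2}
levels = [85, 95, 100, 110, 120]

# Plan B dominates plan A for cost if its cumulative distribution is
# everywhere at least as high (probability mass shifted to lower cost)
dominates = all(cb >= ca for ca, cb in zip(cumulative(plan_a, levels),
                                           cumulative(plan_b, levels)))
```

When one alternative's cumulative curve lies above the others at every level, as for Incentive Plan 8 in Figure 8, the expected-value ranking and the risk comparison agree.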
[Figure 7  Cross-impact raw results: frequency distributions of outcomes for cost (85-120% of budget), schedule (88-115%) and value (78-125%).]

Table 2  Expected values of performance outcomes for alternative incentive plans

                     Cost           Schedule   Value          Effectiveness
                     (US$ x 10^6)   (months)   (US$ x 10^6)   (months)
Incentive Plan 3     102.07         24.38      29.99          40.78
Incentive Plan 4      94.65         22.59      31.16          35.04
Incentive Plan 8      90.57         21.67      31.80          31.43

Exploring the simultaneous effect of different options

The potential for analysing the simultaneous effects of several options using the GPM opens a host of research questions to be asked and practical issues to be evaluated. A conventional approach, following the scheme designed for the individual options, presents tremendous practical obstacles. First of all, it requires a large number of assessments to obtain the effect of all possible combinations of options. In our example the options' knowledge base contains information about 24 incentive plans, 24 organisations, and 48 team building plans. To include information about combinations of these options would require 24 × 24 × 48 × 5 = 138 240 assessments when the three options are present simultaneously. An additional 8640 assessments would be required for cases when only two options are present. A second problem is obtaining experts who are knowledgeable in all the options simultaneously. Even if such experts were available, the complexities of the assessment process would make it very difficult to obtain reliable results. For all these reasons, a conventional approach was considered infeasible.

Combining probabilistic evidence
The problem of evaluating the simultaneous effect of several options corresponds to the theoretical problem of obtaining the joint distributions from the existing pairwise conditional distributions, as shown in Figure 9. In general, to solve this influence diagram it is necessary to obtain a high-order conditional probability P(A|B, C). However, the difficulties in obtaining these assessments, and the effort required, have led researchers to investigate approximate methods. Two approximate methods were evaluated for this purpose. One of them, proposed by Kim and Pearl[12,13] as a simplified solution method for influence diagrams (ID), was adapted for the GPM. Shafer has presented an extensive discussion of a methodology for combining distinct bodies of evidence based on Dempster's rule of combination, known as the 'orthogonal sum'.[14] The method suggested by Pearl and Kim also resembles Dempster's rule of combination. This approximate method is illustrated in Equation (5):

P(A = ai | B = bj, C = ck) = αjk × P(A = ai | B = bj) × P(A = ai | C = ck)    (5)

where αjk is a normalising factor that keeps the sum of all probabilities assigned to an outcome space equal to one, and can be derived from Equation (6):

αjk = 1 / Σi [P(A = ai | B = bj) × P(A = ai | C = ck)]    (6)

and ai, bj and ck represent the ith, jth and kth outcomes of variables A, B and C, respectively. This method has been tested and applied by Perng[15] to combine assessments for project risk events with multiple causes, and to perform probabilistic inference for IDs. Applied in that context, the method involves a two-phase cycle: local computation followed by logical sampling. The first step involves computing, for some variable X, its conditional distribution given the states of all its neighbouring variables. The second phase involves sampling the distribution computed in the first step and instantiating X to the value selected by the sampling. The cycle repeats itself by sequentially scanning through all the variables in the system.[13] The probability values used in Equations (5) and (6) are iteratively generated and used in a simulation process to solve an influence diagram. In this methodology, the concepts of the simulation method were adapted to use directly the results of the modified cross-impact algorithm to obtain the joint probability distributions, assuming that probabilities can be calculated approximately from frequency distributions.
[Figure 8  Cumulative probability distribution of project cost for different incentive plans: cumulative probability (0-1) against project direct cost (US$ x 10^6, 80-120) for the three plans of Table 2.]
[Figure 9  Evidential combination]

[Figure 10  Conditional frequency distributions from cross-impact results: frequency histograms over schedule outcomes a1-a5, given a team building alternative (left: A = schedule, B = team building) and given an incentive program (right: A = schedule, C = incentive program).]

Probabilities are approximated from the frequency distributions using Equation (7):

P(A = ai | B = bj) = f(A = ai | B = bj) / Σk f(A = ak | B = bj)    (7)

The probability values required in Equations (5) and (6) can be obtained directly from the frequency distributions given in the cross-impact results, as shown in Figure 10. The histogram on the left-hand side represents the frequency distribution of project schedule (A: schedule) when a team building alternative (B = bj) is applied. The histogram on the right-hand side represents the frequency distribution of project schedule when an incentive program (C = ck) is applied. Each bar in Figure 10 represents a 'conditional frequency' of the type shown in Equation (7), f(A = ai | B = bj). Substituting Equation (7) into Equations (5) and (6), Equations (8) and (9) can be written in terms of the frequency distributions, which are available from the cross-impact results (Figure 10):

P(A = ai | B = bj, C = ck) = αjk × f(A = ai | B = bj) × f(A = ai | C = ck)    (8)

αjk = 1 / Σi [f(A = ai | B = bj) × f(A = ai | C = ck)]    (9)

These formulas can be used to obtain the joint probability distribution for the variables of interest directly from the results of the CIA. In the above example, the combined effect of an incentive program and a team building approach can be obtained as shown in Table 3. The use of this method to combine the effects of different options has an additional advantage over the conventional approach discussed earlier: the model's knowledge base does not require any modification of previous assessments when a new option is included for evaluation. Only the specific information on the new option needs to be added to the knowledge base. This feature gives flexibility for expansion and facilitates the exploration of new uses for the model.
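The normalised product of Equations (8) and (9) can be sketched as follows, using the frequencies of the worked example in Table 3:

```python
def combine_evidence(f_b, f_c):
    """Equations (8) and (9): combine two conditional frequency
    distributions over the same outcome space by normalised product."""
    products = [x * y for x, y in zip(f_b, f_c)]
    total = sum(products)                 # this is 1/alpha_jk
    return [p / total for p in products]

# Worked example from Table 3: schedule outcomes a1..a5 given a team
# building alternative (B) and given an incentive program (C)
f_b = [2, 8, 7, 4, 1]
f_c = [3, 7, 9, 2, 2]
joint = combine_evidence(f_b, f_c)
print([round(p, 3) for p in joint])   # [0.044, 0.415, 0.467, 0.059, 0.015]
```

Because the normalisation cancels any constant scale factors, the raw frequencies can be used in place of the probabilities of Equations (5) and (6) and the result is identical.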
Table 3  Calculation of combined effect on probabilities

A     f(A = ai | B = bj)   f(A = ai | C = ck)   (1) × (2)   P(A = ai | B = bj, C = ck)
             (1)                  (2)              (3)
a1            2                    3                6          0.044
a2            8                    7               56          0.415
a3            7                    9               63          0.467
a4            4                    2                8          0.059
a5            1                    2                2          0.015
                                          Sum = 135          Sum = 1.000

Summary and conclusions

The user of this methodology will be able to test the effect of individual project options/decisions, or combinations of them, and predict selected outcomes at low cost. The combination of probabilistic evidence allows the model to approximate these combinations through the normalising process. Thus, any combination can be tested whether or not the experts are able to assess the conditional probabilities directly. Applying the cross-impact formalism to the analysis of project decisions offers a number of additional benefits over traditional modelling techniques. The performance model benefits directly from the technique's ability to capture interactions and to incorporate the uncertainties present in the model. Consistency checks can be incorporated in the methodology to help users
calibrate their judgements. The intellectual exercise of systematically articulating and structuring the problem has an important value in itself. Some experts participating in this research have visualised multiple benefits of the modelling process for a construction project team. They believe the modelling has the potential to be a valuable instrument for goal setting and team building in a project team environment. To enhance this ability, the mathematical advances proposed in this model significantly reduce the data acquisition burden on the user(s), while ensuring reasonable accuracy and computational efficiency. Most importantly, the GPM provides a performance prediction when alternative, more exact methods would be unusable or unacceptable because of their input requirements. It provides a very valuable, real-time planning tool for decision making at the early stages of a major engineering project. Several other management decision areas could use the model structure directly. Currently, applications to other decision problems, such as the evaluation of environmental policy impacts and strategic planning in the construction industry, are being developed.

Acknowledgements

The authors gratefully acknowledge the Construction Industry Institute for providing funding to support this research effort, and they also extend their gratitude to the members of the CII Project Team Risk/Reward Task Force for their enthusiastic guidance, support and contributions. The Universidad Católica de Chile, the Chilean Fund for Science and Technology (FONDECYT), and the Corporación de Investigación de la Construcción are acknowledged for partially supporting the work of the first author.
References

1. Alarcón-Cárdenas, L. F. and Ashley, D. B., Project performance modeling: a methodology for evaluating project execution strategies. A report to the Construction Industry Institute, The University of Texas at Austin, Source Document 80, 1992.
2. Turoff, M., An alternative approach to cross-impact analysis. Technological Forecasting and Social Change, 1972, 3(3), 309-339.
3. Enzer, S., Interax: Interactive Analysis for Strategic Planning. Center for Futures Research, USC, 1983.
4. Honton, E. J., Stacey, G. S. and Millett, S. M., Future Scenarios: The BASICS Computational Method. Battelle, Columbus Division, Ohio, 1985.
5. Lipinski, A. J., Cross-impact models. Energy, 1990, 15(March-April), 379-386.
6. Schuller, A., Cross-impact analysis of technological innovation and development in the softwood lumber industry in Canada: a structural modeling approach. IEEE Transactions on Engineering Management, 1991, 38(3, August), 224-236.
7. Moder, J. J., Phillips, C. R. and Davis, E. W., Project Management with CPM, PERT and Precedence Diagramming, 3rd edn. Van Nostrand Reinhold, New York, 1983.
8. McNamee, P. and Celona, J., Decision Analysis for the Professional with Supertree. The Scientific Press, Redwood City, CA, 1989.
9. Stover, J. and Gordon, T. J., Cross-impact analysis. In Handbook of Futures Research, ed. J. Fowles. Greenwood Press, Westport, CT, 1978.
10. Gordon, T. and Hayward, H., Initial experiments with the cross-impact method of forecasting. Futures, 1968, 1(2), 100-116.
11. Kane, J., A primer for a new cross-impact language: KSIM. Technological Forecasting and Social Change, 1972, 4(2), 149-167.
12. Kim, J. H. and Pearl, J., A computational model for causal and diagnostic reasoning in inference systems. Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Vol. 1, August, Karlsruhe, West Germany, 1983, pp. 190-193.
13. Pearl, J., Evidential reasoning using stochastic simulation of causal models. Artificial Intelligence, 1987, 32, 245-257.
14. Shafer, G., A Mathematical Theory of Evidence. Princeton University Press, Princeton, NJ, 1976.
15. Perng, Y. H., An intelligent system approach for construction risk identification. Ph.D. dissertation, The University of Texas at Austin, Construction Engineering and Project Management, Department of Civil Engineering, September 1988.
16. Alarcón, L. F. and Ashley, D. B., Modeling project performance for decision making. Journal of Construction Engineering and Management, ASCE, 1996, 122(3, September), 265-273.

Luis F. Alarcón is currently Head of the Department of Construction Engineering and Management at the Catholic University of Chile. He received a PhD, an MEng and an MS in Civil Engineering from the University of California, Berkeley. His teaching focus is on project planning, project management, and risk management. His research and consulting activities concentrate on performance improvement in construction; project risk analysis; strategic planning for construction companies; and decision support methodologies for construction projects. He has conducted research for government and private agencies in Chile and for the Construction Industry Institute in the USA.

David Ashley is the Taisei Chair of Civil Engineering and Chair of the Civil and Environmental Engineering Department at the University of California, Berkeley. His teaching focus is on risk management, project evaluation, and project financing. Research areas currently being explored include strategic planning for construction companies and projects; development of project execution plans; privatisation of infrastructure facilities; and project management for large-scale or complex projects, including hazardous material remediation projects. His consulting activities for both public and private clients concentrate on construction planning, including risk identification, analysis and management, and the analysis of strategic project decisions. Prior to joining the Berkeley faculty in 1989, he taught at M.I.T. and The University of Texas at Austin. He has been honoured with the National Science Foundation's Presidential Young Investigator Award and the American Society of Civil Engineers' Construction Management Award in recognition of his research.