European Journal of Operational Research 127 (2000) 220–238
www.elsevier.com/locate/dsw

Invited Review

On criticality and sensitivity in activity networks

Salah E. Elmaghraby *

Graduate Programme in Operations Research, North Carolina State University, P.O. Box 7913, Raleigh, NC 27695-7906, USA

Abstract

A review of the issues related to activity criticality and the sensitivity of the mean and variance of project completion time to changes in the mean and variance of individual activities is presented. The methodologies proposed range over the analytical, Monte Carlo sampling, and statistical sampling, in particular the use of Taguchi orthogonal arrays. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Activity networks; Critical activities; Sensitivity analysis
1. Path criticality and activity criticality: The classical approach

Interest in critical paths and critical activities stems from the need to focus management's attention on the activities that determine the progress of the project and are the determinants of the achievement of the project's objectives. For instance, management may be seeking an assessment of the 'risk' (as measured by the probability of delay in the completion of the project, or the expected cost of such a delay) it may run in delaying the start of an activity; or management may need to know the 'criticality' of an activity, in whichever sense such 'criticality' may be taken, to ensure correct and proper attention to the activity in question; etc.
* Tel.: +1-919-515-7077; fax: +1-919-515-5281. E-mail address: [email protected] (S.E. Elmaghraby).
The notion of critical path was born with the development of both the CPM model [15] (deterministic activity durations) and the PERT model [18] (stochastic activity durations). The definitions of critical path (CP) and critical activity (CA) in the CPM model are straightforward and unambiguous: a 'critical path' is one along which the four types of float are zero for each activity (see [12, Chapter 1]). A critical path is the longest path from the start node to the terminal node, and it is the one which determines the project duration. Since an increase in the duration of an activity which lies on a critical path would prolong the project duration, such activities are called 'critical activities'. The number of critical paths along which an activity lies forms a measure of the relative importance of the activity; i.e., it provides a convenient way to rank the activities, at least ordinally. In other words, if there are 10 CP's in the network and activity a lies on all 10 of them while activity b lies on only 4, then activity a is
0377-2217/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0377-2217(99)00483-X
pronounced of 'higher criticality' than activity b. Further distinction among activities may be accomplished by securing the 2nd CP, the 3rd CP, etc., and ranking tied activities according to the number of paths on which they fall.

Are there equivalent measures in the PERT model, in which activities are assumed to have stochastic durations? Definitions of 'critical path' and 'critical activity' may vary depending on the concerns and objectives under consideration. Our discussion, however, will be focused on the importance of paths and activities relative to their impact on the project completion time, or its variability. The methods and approaches can be generalized to fit situations in which the focus is different from meeting the project due date (for instance, on financial matters).

Unfortunately, in the PERT model matters are not as simple as they are in the CPM model. The stochastic structure of the PERT model implies that (almost) any path, or at least any path in a subset of all paths (which may be a very large subset), may be critical with nonzero probability, especially if the activity durations are approximated by distributions which extend to infinite (or at least very large) durations. Traditionally, the criticality of a path has been defined as follows:

Definition 1. A path is critical if its duration is not shorter than that of any other path.

A possible measure of path criticality is the probability that the path is of longest duration. We refer to such a measure as the 'path criticality index' (PCI). Let $P = \{p_h\}_{h=1}^{r}$ denote the set of all paths of the network, and $Z(p_h)$ denote the duration of path $p_h$, obviously a r.v. Then $Z(p_h) = \sum_{a \in p_h} Y_a$, where $Y_a$ is the duration of activity $a \in A$, and we have

$$\mathrm{PCI}(p_h) = \Pr\left[ Z(p_h) \ge Z(p_q) \ \ \forall\, p_q \in P,\ p_q \ne p_h \right]. \tag{1}$$

Interestingly enough, while CPM analysis emphasized path criticality, it must be apparent that a more relevant concept is that of activity
criticality, for the very reason mentioned above, viz., that (almost) any path may be critical. Consequently, it seems more meaningful to inquire into activity criticality rather than into whole-path criticality. Drawing attention to 'critical activities' is also appealing from a different point of view. We recall that the main interest in evaluating the CP's in CPM/PERT analysis is to determine the 'bothersome activities', those that we wish to 'manipulate', such as delay their start times for financial or other reasons, or those that may constrain the progress of the project, with the purpose of concentrating attention on such activities and 'doing something about them'. Thus, sooner or later, we have to concern ourselves with individual activities. Therefore it is logical to focus attention from the outset on the critical activities. Emulating our approach to paths, we may define a critical activity as follows:

Definition 2. An activity is critical if it falls on a 'critical path'.

A possible measure of the criticality of an activity is the probability that it will fall on a critical path; i.e., on a path of longest duration. We refer to such a measure as the 'activity criticality index' (ACI). Normally, the determination of the criticality index associated with an activity can be achieved through a three-step procedure summarized as follows: (1) determine the criticality indices of all paths; (2) identify the paths that contain the activity of interest; (3) compute the activity criticality by summing the criticality indices of all paths that contain it. Although these steps are intuitively clear, their translation into practice is computationally demanding. This is because the complete enumeration of paths and the computation of path criticalities for all paths is a very difficult task, if at all possible, for any realistic network, as a result of the interdependence among paths, let alone among activities (due to sharing of material resources or personnel).
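The three-step procedure can be illustrated with a small Monte Carlo sketch. The five-activity network, the triangular duration distributions, and the hand-enumerated path list below are hypothetical examples of ours, not taken from the text:

```python
import random

# Steps (1)-(3) of the ACI procedure by brute force on a tiny hypothetical
# network: sample activity durations, estimate each path's probability of
# being longest (step 1), then sum the PCI's of the paths containing each
# activity (steps 2 and 3).
random.seed(1)

# activity -> (low, high, mode) of an assumed triangular duration distribution
activities = {
    "a": (2, 9, 4), "b": (3, 7, 5), "c": (1, 6, 2),
    "d": (4, 8, 6), "e": (2, 5, 3),
}
# all start-to-terminal paths, enumerated by hand for this small example
paths = [("a", "d"), ("a", "c", "e"), ("b", "e")]

N = 20000
wins = [0] * len(paths)
for _ in range(N):
    y = {act: random.triangular(*p) for act, p in activities.items()}
    lengths = [sum(y[act] for act in p) for p in paths]
    wins[lengths.index(max(lengths))] += 1       # ties credited to first path

PCI = [w / N for w in wins]                      # step (1): path criticalities
ACI = {act: sum(PCI[h] for h, p in enumerate(paths) if act in p)  # steps (2)-(3)
       for act in activities}
print(PCI, ACI)
```

Even this toy version shows why the procedure does not scale: the number of paths, and hence the enumeration behind step (1), can grow exponentially with the size of the network.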
Realization of this fact has been the driving force behind the development of alternative approaches for the estimation of
path and arc criticalities without necessarily going through the three-step procedure outlined above. These attempts may be briefly summarized as follows. There are, in the main, two different methodologies for estimating the criticality indices and other metrics which may be useful for different aspects of the planning and scheduling of the project: (i) analytical approaches and (ii) Monte Carlo sampling-based approaches.

There are four analytical approaches. The first is due to Martin [20], who based his method on the assumption that the c.d.f.'s of the activities are polynomial functions (of time). The second is due to Dodin and Elmaghraby [11], who developed the first analytical procedure to approximate the criticality indices of all the activities without enumerating the paths. The third is due to Dodin [10], who was interested in identifying the 'k most critical paths' and based his procedure on stochastic dominance relations among random variables. And the fourth is due to Kulkarni and Adlakha [17], who developed an analytical procedure for estimating the PCI in Markov activity networks; they also suggest adding up the PCI's of the paths that contain the activity of interest to compute its ACI. Bowman and Muckstadt [5] provide a recursive approach that directly computes the ACI's without enumerating paths or computing PCI's.

There are three Monte Carlo sampling (MCS) approaches. The first is due to van Slyke [24], who pioneered the approximation of ACI's using the methodology of MCS. The second is due to Sigal et al. [22], who suggested a conditional Monte Carlo procedure to estimate path criticality by utilizing the concept of the 'maximal uniformly directed cutset' (MUDC) in order to reduce the sampling task. And the third is due to Bowman [3], who combined standard MCS with exact analysis conditioned on 'node release times' 1 to estimate arc and path criticalities.
1 This is the earliest time of completion of all activities terminating at the node (in the AoA representation).
1.1. Organization of the paper

The paper continues in Section 2 with the work of Williams [25] who, critical of the standard definition of activity criticality given in Definition 2, suggested a 'significance index' (SI) and a 'cruciality index' (CRI) to measure the relative importance of an activity with respect to the project completion time. We then present in Section 3 a taxonomy of the 'sensitivity' issue, and continue to discuss three quite recent approaches that have been proposed for its estimation, all of which may be viewed as providing a partial response to Williams' critique. The sensitivity of the mean of the project duration to changes in the variance of an activity is discussed in Section 4; this section also contains a synopsis of the contribution of Gutierrez and Paul [14]. Section 5 summarizes the contribution of Cho and Yum [7], in which they apply Taguchi's tolerance design technique to measure the effect of the uncertainty in an activity duration on the uncertainty of the project completion time, thus determining the sensitivity of the variability in the project duration to the variability in the activity duration. Finally, Section 6 discusses the work of Elmaghraby et al. [13], who carried the analysis via the Taguchi approach further and investigated the effect of varying the mean activity duration on the variability of the duration of the project. As the discussion progresses you shall discover that all these contributions, while original in thought, provide only a partial resolution of the issues raised by Williams. The field is still open to further development.

2. Alternative measures of activity criticality: The SI and the CRI

2.1. Anomalies

We have always labored under the commonly accepted definition of activity criticality stated in Definition 2 above. And yet, careful thought reveals that there are some issues associated with that definition that leave one with the uneasy feeling that it falls short of reflecting the project
management's concerns. For one, it does not give an intuitively helpful metric: managers do not necessarily pay attention to an activity with such a measure of criticality, since it is based on probabilistic considerations that are far from management's view of things. For another, the index cannot be used directly when there are resource constraints. Finally, and most significantly, there are many realistic circumstances in which the measure runs counter to management's expectations. Such anomalies would drive management to be skeptical, if not downright hostile! It is not difficult to construct examples in which the ACI remains invariant under widely varying parameter values of the activities, when one would expect the criticality ranking of the activities to change. In other words, if by 'managing an activity' one implies getting the greatest impact by shortening its expected duration, for example, then different activities would be candidates for such action under the different parameter values, in contradiction to the dictates of the ACI. We therefore conclude that:

1. When we think of 'managing activities', we naturally think of significant rather than infinitesimal changes to those activities, and we wish to know what the effects of these significant changes will be on the project's duration. The more pronounced the effect, the more 'critical' is the activity.

2. The effects of changes in an activity parameter on the project's completion time have, in general, significant interactions with other activities' parameters.

Neither of these desiderata is satisfied by the ACI.

2.2. The SI and the CRI

For all these apparent contradictions Williams [25] suggests an alternative measure of the criticality of an activity, which shall be referred to as its SI. This new measure can be deduced from the float (see [12, Section 2.1]) of the activity and the duration of the project. The significance index of activity
$(i,j)$ may be determined from

$$\mathrm{SI}_{ij} = E\left[ \frac{y_{ij}}{y_{ij} + \mathrm{TF}_{ij}} \cdot \frac{T}{E(T)} \right] \tag{2}$$

with $\mathrm{TF}_{ij} = T_j^{L} - T_i^{E} - y_{ij}$, where $\mathrm{SI}_{ij}$, $y_{ij}$, $\mathrm{TF}_{ij}$, $T$, $E(T)$, $T_j^{L}$, $T_i^{E}$ denote, respectively, the activity's 'significance' measure, its duration, its total float, the project completion time, the expected project completion time, the latest start time of node $j$, and the earliest start time of node $i$; $E$ denotes expectation.

In some examples, the SI seems to provide more acceptable information on the relative importance of the activities. However, this statement cannot be generalized to all activity networks. Indeed, the new metric can yield counter-intuitive results in some cases. For example, consider a network formed by two activities in series. Let the p.m.f.'s of these two activities be as shown in Table 1. In this case $\mathrm{SI}_1 = \mathrm{SI}_2 = 1$, since both lie on (the only) path 1-2, and the two activities seem to be equally significant. However, it is clear that activity 1 has a larger impact on the project completion time, in the sense that the same proportional reduction in its expected duration will have a higher impact on the project duration. Another important observation at this point is that the SI in this case gave the same information that would have been obtained by applying the classical criticality indices. Therefore, combining the two metrics does not resolve the issue either! Finally, the SI is extremely demanding in its computing requirements, since there is no polynomial time algorithm which yields the exact value of $E(T)$, and the evaluation of the expectation of the expression between square brackets in Eq. (2) demands the enumeration of every possible realization of the network!

Table 1
The activity durations of two activities in series

Activity i   Duration Y_i   Probability
1            100            1
2            10             0.5
2            20             0.5
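Because the series network of Table 1 has only two realizations, the expectation in Eq. (2) can be evaluated exhaustively. A short sketch (variable names are ours) confirms that both significance indices equal 1:

```python
from itertools import product

# Exhaustive evaluation of Williams' SI for the two-activity series network of
# Table 1. In a series network every activity has zero total float, so
# y/(y + TF) = 1 and each SI collapses to E[T/E(T)] = 1.
act1 = [(100, 1.0)]                  # (duration, probability) pairs
act2 = [(10, 0.5), (20, 0.5)]

scenarios = [(y1, y2, p1 * p2) for (y1, p1), (y2, p2) in product(act1, act2)]
ET = sum((y1 + y2) * p for y1, y2, p in scenarios)    # E[T] = 115

def SI(which):
    total = 0.0
    for y1, y2, p in scenarios:
        T = y1 + y2
        y = y1 if which == 1 else y2
        TF = 0.0                     # series network: total float is zero
        total += (y / (y + TF)) * (T / ET) * p
    return total

print(ET, SI(1), SI(2))
```

Both indices come out as 1 even though shortening activity 1 (duration 100) clearly matters far more than shortening activity 2, which is precisely the criticism made in the text.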
For these reasons Williams [25] also reviews the so-called 'cruciality index' used by BAeSEMA, Ltd. 2 The CRI is defined as the absolute value of the correlation between the activity duration $Y_i$ and the total project duration $T$. That is,

$$\mathrm{CRI}(i) = \left| \mathrm{Corr}(Y_i, T) \right|, \tag{3}$$

where, for any two r.v.'s $X$ and $Y$,

$$\mathrm{Corr}(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y} \quad \text{and} \quad \mathrm{Cov}(X, Y) = E(XY) - E(X)E(Y).$$

Again, the immediate reaction is that, for some networks, the CRI is apparently more intuitive and reflects better the relative importance of the activities. It also has two main advantages. First, it can be used when there are resource constraints, since it suggests the degree of dependence of the total project duration on the activity duration. Second, it can handle other uncertainty aspects of the project, such as stochastic branches (see the discussion of Generalized Activity Networks in [12, Chapter 5]), which are not incorporated into the analysis of the classical PERT network.

Yet, although applicable when resource constraints are present, the CRI has a few major drawbacks of its own. First, it measures the linear correlation between the activity duration $Y_i$ and the total project duration $T$. As is well known, the relationship between these two entities may not be linear. In fact, Cho and Yum [7], whose contribution is discussed in more detail below, demonstrate that in some cases the impact of $Y_i$ on $T$ gets more significant as $Y_i$ increases, indicating a nonlinear relationship between the two variables. Perhaps this problem can be resolved by using non-linear correlation coefficients; in any event, the computation of these metrics requires MCS, which is costly. Second, the CRI is equally demanding in its computational requirements, since
2 Referenced in the paper of Williams [25].
the evaluation of the correlation between $Y_i$ and $T$ is no minor feat. Third, the measure can also produce counter-intuitive results, since it considers only the effect of the uncertainty of an activity on the project duration. In particular, if the duration of an activity is deterministic (or stochastic but with minuscule variance), then its criticality is zero (or close to it) even if the activity is always on the critical path!

Therefore, it seems that the three measures (the SI, the CRI, as well as the classical ACI) fall short individually in capturing the 'importance' of an activity. Perhaps they should be considered together (in some fashion) in order to obtain a meaningful indicator of the criticality of an activity? The issue is not settled as of this writing.

3. A taxonomy of sensitivity issues

Close scrutiny of the questions raised in studies of 'sensitivity' issues in activity networks reveals the following taxonomy of problems of 'sensitivity studies' (Table 2). The cell 'mean-mean' represents the impact of a change in the mean duration of an activity on the mean duration of the project. The arrows in the cell indicate a monotone response; to wit, an increase (decrease) in the mean duration of an activity shall always lead to a non-decreasing (non-increasing) mean project completion time. 3 This is a self-evident result in the networks of concern to us here, and we have only the following to say about it. A measure of the relationship, which, in general, is non-linear albeit monotone, can be secured via MCS. If the random variables can be approximated by continuous p.d.f.'s then it is quite easy and relatively efficient to compute derivatives of the project completion time with respect to each activity's mean and variance. We refer here to the paper by Bowman [4] for details.
3 The assertion is not true, however, in AN's with the so-called 'generalized precedence relations', in which an increase in the average duration of an activity can lead to a decrease in the mean duration of the project!
Table 2
Taxonomy of 'sensitivity' problems in probabilistic AN's

Activity \ Project   Mean            Variance
Mean                 ↑↓ (monotone)   EFT (1998)
Variance             G&P (1998)      C&Y (1997)
The cell 'variance-variance' represents the impact of a change in the variance of the duration of an activity on the variance of the duration of the project. We report on the contribution of Cho and Yum [7]. The cell 'variance-mean' represents the impact of a change in the variance of the duration of an activity on the mean duration of the project. This issue was treated recently by Gutierrez and Paul [14], to which we add some insight based on the work of Clark [8] on normally distributed activity durations. The cell 'mean-variance' represents the impact of a change in the mean duration of an activity on the variance of the duration of the project. We report on the contribution due to Elmaghraby et al. [13].

4. Sensitivity of variance-mean: An analytical discussion

How does increased variability of an activity impact the mean duration of project completion time? As early as 1981 Schonberger [21] claimed that 'as a general rule' increased variability in an activity leads to increased expected project duration. This claim was recently investigated by Gutierrez and Paul [14], who concluded that the claim is not always true!

Recall that Clark [8] has established that if $\xi$ and $\eta$ are two normally distributed r.v.'s with means $\mu_\xi$ and $\mu_\eta$, respectively, and $\zeta = \max\{\xi, \eta\}$, then

$$E(\zeta) = \mu_\xi \Phi(\alpha) + \mu_\eta \Phi(-\alpha) + a\,\varphi(\alpha),$$

where

$$a^2 = \sigma_\xi^2 + \sigma_\eta^2 - 2\sigma_\xi \sigma_\eta \rho(\xi,\eta), \qquad \alpha = (\mu_\xi - \mu_\eta)/a,$$

$\varphi(x)$ is the d.f. of the standard normal deviate, $\Phi(x)$ is the c.d.f. of the standard normal deviate, $\sigma_X^2$ is the variance of $X$, and $\rho(\xi,\eta)$ is the coefficient of linear correlation between $\xi$ and $\eta$. Now suppose that the r.v. $\eta$ is replaced with another r.v. $\eta'$ that has the same mean but larger variance (for simplicity, we write $\sigma'^2_\eta$ instead of $\sigma^2_{\eta'}$); i.e.,

$$\eta' \sim N(\mu_\eta, \sigma'^2_\eta) \quad \text{with} \quad \sigma'^2_\eta > \sigma^2_\eta.$$
What can be said about the expected value of $\zeta' = \max(\xi, \eta')$ relative to $\zeta$? In particular, we are interested in the sign of the difference

$$E(\zeta') - E(\zeta) = \mu_\xi\left[\Phi(\alpha') - \Phi(\alpha)\right] + \mu_\eta\left[\Phi(-\alpha') - \Phi(-\alpha)\right] + a'\varphi(\alpha') - a\,\varphi(\alpha) \ \gtrless\ 0. \tag{4}$$

Note that $a'^2 > a^2$ (hence $a' > a$). Assume, without loss of generality, that $\mu_\eta \ge \mu_\xi$, and that $\alpha$ is determined as $(\mu_\eta - \mu_\xi)/a$, whence $\alpha > 0$. Then $0 < \alpha' < \alpha$, which leads to the inequalities

$$\varphi(\alpha') > \varphi(\alpha), \qquad \Phi(\alpha') < \Phi(\alpha), \qquad \Phi(-\alpha') > \Phi(-\alpha).$$
Observe that

$$\mu_\xi\left[\Phi(\alpha') - \Phi(\alpha)\right] + \mu_\eta\left[\Phi(-\alpha') - \Phi(-\alpha)\right] = \mu_\xi\left[\Phi(\alpha') - \Phi(\alpha)\right] + \mu_\eta\left[\Phi(\alpha) - \Phi(\alpha')\right] = \left[\Phi(\alpha) - \Phi(\alpha')\right]\left(\mu_\eta - \mu_\xi\right) \ge 0$$

by the assumption that $\mu_\eta \ge \mu_\xi$; and we have $a'\varphi(\alpha') - a\,\varphi(\alpha) > 0$. We therefore conclude that $E(\zeta') - E(\zeta) > 0$, meaning that the expected completion of the project shall indeed increase, as suggested by Schonberger. Is this true in general? The following example helps answer this question.

Example 1. As a preliminary to the general discussion, consider the simple case of a project composed of two independent discrete r.v.'s in parallel with the same support, distributed as follows:
Duration x     1      2      3      4
$\eta$         0.20   0.20   0.50   0.10
$\eta'$        0.10   0.55   0.10   0.25
$\xi$          $p_1$  $p_2$  $p_3$  $p_4$

It is easy to verify that $\mu_\eta = 2.5 = \mu_{\eta'}$, while $\sigma'^2_\eta = 0.95 > \sigma^2_\eta = 0.85$. Furthermore, letting $q_t = \sum_{i=1}^{t} p_i$, we have

$$E(\zeta) = 4 - \left(0.20q_1 + 0.40q_2 + 0.90q_3\right), \qquad E(\zeta') = 4 - \left(0.10q_1 + 0.65q_2 + 0.75q_3\right),$$

whence

$$E(\zeta') - E(\zeta) = 0.10q_1 - 0.25q_2 + 0.15q_3.$$

The (cumulative) probabilities are constrained by $0 < q_1 < q_2 < q_3 < 1$. Suppose $q_3 = 0.95$ and $q_2 = (0.15/0.25)q_3 = 0.6q_3 \Rightarrow q_2 = 0.57$; then any $q_1$ in the open interval $(0, 0.57)$ would result in $E(\zeta') - E(\zeta) > 0$; i.e., the project duration would increase. However, suppose $q_3 = 0.93$ and $q_2 = \tfrac{5}{6}q_3 \Rightarrow q_2 = 0.775$. Then $E(\zeta') - E(\zeta) = 0.10q_1 - 0.05425 < 0$ for any $q_1 \in (0, 0.5425)$, which implies that the project duration would decrease.

Having dispelled the notion that an increase in the variability of an activity necessarily leads to an increase in the expected project duration, we now come to the promised general case. Gutierrez and Paul [14] demonstrated the following results.

1. For discrete r.v.'s, if their support contains exactly three points (i.e., all three r.v.'s have non-negative probabilities at the same three values $x_1, x_2, x_3$, say), and $\mu_\eta = \mu_{\eta'}$, then $\sigma'^2_\eta > \sigma^2_\eta \Rightarrow \mu_{\zeta'} > \mu_\zeta$. Moreover, if the support of the r.v.'s contains more than three points, then there exist r.v.'s $\xi$, $\eta$, and $\eta'$ defined on the support such that $\mu_\eta = \mu_{\eta'}$ and $\sigma'^2_\eta > \sigma^2_\eta$, but $\mu_{\zeta'} < \mu_\zeta$. Numerical Example 1 illustrates this conclusion.

2. Recall the following definition: $\gamma'$ is said to be 'convexly larger' than $\gamma$ (or larger in the convex order), written $\gamma' \succeq_{cx} \gamma$, when $E\phi(\gamma') \ge E\phi(\gamma)$ for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$, provided the expectations exist. We have the following result: if $\mu_{\eta'} = \mu_\eta$ and $\sigma'^2_\eta > \sigma^2_\eta$, then $\mu_{\zeta'} > \mu_\zeta$ if and only if $\eta' \succeq_{cx} \eta$. Now, for two normally distributed r.v.'s $\eta'$ and $\eta$ such that $\mu_{\eta'} = \mu_\eta$ and $\sigma'^2_\eta > \sigma^2_\eta$, it is known that $\eta' \succeq_{cx} \eta$. We therefore immediately conclude that $\mu_{\zeta'} > \mu_\zeta$, meaning that the project expected value increases as the variable $\eta$ is replaced by the variable $\eta'$ of the same mean but larger variance. This establishes the correctness of Schonberger's claim for the class of normally distributed r.v.'s.

3. Let $F_i(u)$ denote the c.d.f. of the r.v. $\gamma_i$, and let $\bar{F}_i(u)$ denote its complementary c.d.f.; similarly for the r.v. $\zeta_i$ with c.d.f. $G_i(u)$ and complementary c.d.f. $\bar{G}_i(u)$. It is known (see Marshall and Proschan [19]) that if

$$\int_x^\infty \bar{F}_i(u)\, du \le \int_x^\infty \bar{G}_i(u)\, du \quad \forall\, x \ge 0 \ \text{ and for } i = 1, \ldots, n,$$

then

$$\int_x^\infty \left[1 - \prod_{i=1}^{n} F_i(u)\right] du \le \int_x^\infty \left[1 - \prod_{i=1}^{n} G_i(u)\right] du.$$

Note that the square bracket on either side of the inequality is the complementary c.d.f. of the maximum of the $n$ independent r.v.'s. An immediate corollary of this relationship is the following. Let $\{\gamma_i\}$ and $\{\zeta_i\}$ be two sets of $n$ mutually independent, positive-valued and bounded r.v.'s, $i = 1, \ldots, n$. Recall that $\gamma'$ is said to be larger than $\gamma$ in the increasing convex order, written $\gamma \preceq_{icx} \gamma'$, when $E\phi(\gamma') \ge E\phi(\gamma)$ for all increasing convex functions $\phi : \mathbb{R} \to \mathbb{R}$, provided the expectations exist. Suppose $\gamma_i \preceq_{icx} \zeta_i$, $i = 1, \ldots, n$. Then it follows that $\max_i\{\gamma_i\} \preceq_{icx} \max_i\{\zeta_i\}$.
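The sign reversal of Example 1, which underlies result 1 above, can be verified directly. On the support $\{1, 2, 3, 4\}$, $E(\zeta) = 4 - \sum_{t=1}^{3} q_t F_\eta(t)$, where $F_\eta$ is the c.d.f. of $\eta$; the $q_1$ values chosen below are illustrative:

```python
# Direct check of Example 1: eta and eta' share the mean 2.5, Var(eta') = 0.95
# exceeds Var(eta) = 0.85, yet the sign of E(zeta') - E(zeta) depends on xi.
eta_pmf  = {1: 0.20, 2: 0.20, 3: 0.50, 4: 0.10}
eta2_pmf = {1: 0.10, 2: 0.55, 3: 0.10, 4: 0.25}

def mean_max(q, pmf):
    # E[max(xi, eta)] = 4 - sum_{t=1..3} q_t * F_eta(t) on support {1, 2, 3, 4}
    total, F = 4.0, 0.0
    for t in (1, 2, 3):
        F += pmf[t]
        total -= q[t] * F
    return total

# Case 1: q3 = 0.95, q2 = 0.6 * q3 = 0.57, q1 anywhere in (0, 0.57)
q = {1: 0.30, 2: 0.57, 3: 0.95}
d1 = mean_max(q, eta2_pmf) - mean_max(q, eta_pmf)   # positive: duration rises
# Case 2: q3 = 0.93, q2 = (5/6) * q3 = 0.775, small q1
q = {1: 0.30, 2: 0.775, 3: 0.93}
d2 = mean_max(q, eta2_pmf) - mean_max(q, eta_pmf)   # negative: duration falls
print(d1, d2)
```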
The import of all this background is the following significant result, which pertains to series-parallel activity networks: the greater the variability in the project activities, in the sense of the convex order, the more the expected critical path length, as secured from the classical PERT calculations, underestimates the actual expected project duration.

What if the network is not series-parallel? Or, what if interest is in other than the variance-mean sensitivity issue? The answers to these questions cannot be secured by analytical arguments as presented here; hence the reliance on sampling schemes. This approach is discussed in the following two sections.

5. Sensitivity of variance-variance: A response surface approach (the uncertainty importance measure, UIM)

Dissatisfaction with the SI and the CRI discussed in the preceding section seems to have motivated the development of the method introduced by Cho and Yum [7] to measure the impact of the variability in activity durations on the variability of the project completion time. An uncertainty importance measure (UIM) is evaluated under the assumption that the durations of activities are independent and symmetrically distributed. The uncertainty in the duration Y of an activity can be measured by its variance, and is propagated through the network to the uncertainty in the project completion time T. Of course, if there is no uncertainty in the activity, or if its uncertainty is insignificantly small, its UIM is zero, by definition. The UIM is an interesting metric which can be used to identify which activities have the more significant impact on the magnitude of the uncertainty in the project completion time T. The Taguchi tolerance design technique [23] is utilized with some modifications: the project completion time corresponds to the 'process output' or 'response' in the original Taguchi approach, while the activity duration corresponds to a 'factor'.
The UIM of an activity i, denoted by UIM(i), is defined as

$$\mathrm{UIM}(i) = \frac{\text{variability of } T \text{ due to the uncertainty in } Y_i}{\text{total variability of } T}.$$

The UIM of an activity i jointly with activity j, denoted by UIM(i&j), is defined as

$$\mathrm{UIM}(i\&j) = \frac{\text{variability of } T \text{ due to the uncertainty in } Y_i \text{ and } Y_j}{\text{total variability of } T}.$$

UIM(i) evaluates the main effect of the uncertainty in the duration $Y_i$ on the variability of T, whereas UIM(i&j) evaluates not only the main effects but also the interaction effect of the uncertainty in the durations $Y_i$ and $Y_j$ on T.

5.1. Preliminaries for evaluating UIM

There are some PERT characteristics to be considered before using the Taguchi method to evaluate UIM(i) and UIM(i&j).

5.1.1. Determination of the test levels of activities

The test levels of an activity are determined on the basis of many factors, such as path durations, number of paths, number of activities in each path, and the degree of interdependence among the paths. All of these have an impact on the project completion time, which can be summarized as follows.

· If the network has a predominantly longer path $p^*$, then activities not on $p^*$ have negligible effects on T, while the activities on $p^*$ have a linear effect on T. This is logical since T can be approximated well by $\sum_{i \in p^*} Y_i$. This type of network is called a Type A-PERT network (abbreviated 'Type-A').

· If the durations of the most critical paths are similar, so that the network has no predominantly longer path, the duration of each activity on these critical paths exhibits a nonlinear effect on T. The effect is more prominent as the number of most critical paths increases. This type of network is called a Type B-PERT network (abbreviated 'Type-B'). For Type-B networks, the effect of an activity duration on T tends to be
more linear as the number of activities on the paths increases.

5.1.2. Interactions between activity durations

In PERT networks, the impact of the variability in the duration of activity i on T may be influenced by the variability in the duration of another activity j, especially for Type-B networks. For a Type-A network, this interaction between the variabilities of activities is insignificant. Therefore, an appropriate orthogonal array (OA) and a proper assignment of the activity durations to columns are essential for Type-B networks in order to avoid the confounding effect from these durations.

5.1.3. Determination of network type

The following two-step algorithm is used to determine whether a PERT network is of Type-A (has a dominant path) or Type-B (does not have a dominant path). In the first step, we identify the most critical paths without completely enumerating all the paths (unless, of course, all the paths in the network are critical). In the second step, the durations of these critical paths are compared to see if a dominantly longer path exists.

Proc. C&Y: Algorithm for determining the network type

Step 1. Find the q most critical paths using the heuristic approach of Anklesaria and Drezner [1], as follows:
1. Set the activity durations at their mean values. Calculate the slack $s_k$ for each node: $s_k = t_k^{L} - t_k^{E}\ \forall k \in N$, where $t_k^{L}$ and $t_k^{E}$ are the latest realization time and the earliest realization time of node k, respectively.
2. Reduce the network by eliminating the nodes with slack greater than a certain threshold value $\Delta$, together with all the arcs ending in or emanating from these nodes.
3. If the number of paths in the reduced network is greater than q, then choose the first q paths with the largest expected path durations. Otherwise, add to the reduced network the nodes with the smallest slack until there are q paths in the reduced network. Usually, q = 4 or 5 is recommended.

Step 2. Compare the path durations to find out whether a dominant path exists.
4. Calculate the following values:

$$E[Z(p_h)] = \tilde{\mu}_h = \sum_{i \in p_h} \mu_i, \qquad \mathrm{Var}[Z(p_h)] = \tilde{\sigma}_h^2 = \sum_{i \in p_h} \sigma_i^2, \qquad \mathrm{Cov}[Z(p_h), Z(p_q)] = \sum_{i \in p_h \cap p_q} \sigma_i^2,$$

$$E[Z(p_h) - Z(p_q)] = \tilde{\mu}_h - \tilde{\mu}_q, \qquad \mathrm{Var}[Z(p_h) - Z(p_q)] = \tilde{\sigma}_h^2 + \tilde{\sigma}_q^2 - 2\,\mathrm{Cov}[Z(p_h), Z(p_q)].$$

5. If there exists an $h^* \in \{1, 2, \ldots, q\}$ such that $\Pr[Z(p_{h^*}) \ge Z(p_h)] \ge 0.95$ for all $h \ne h^*$, then the network is declared Type A. Otherwise, it is declared Type B.

5.1.4. Experimental strategy

For Type-A networks (containing a dominant path), UIM(i) can be estimated as $\sigma_i^2/\sigma_T^2$ for each activity i on the dominantly longest path $p^*$, and as 0 for all other activities; note that $\sigma_T^2 = \sum_{i \in p^*} \sigma_i^2$. The UIM's of Type-B networks (not containing a dominant path) cannot be estimated analytically. Therefore, these networks are analyzed by utilizing the experimental method proposed by Cho and Yum [7]. A two-level orthogonal R-IV design 4 is applied in the first stage for the purpose of screening out activities whose UIM's are small. A (resolution) R-IV design permits the estimation of all main effects under the assumption that interactions involving three or more factors are negligible. Thus, an R-IV design provides estimates of main effects which are not contaminated by the presence of two-factor interactions. Some of the activities with a negligible linear effect on T can be discarded, since they are unlikely to have a significant interaction effect with any other activity. Then, in the second stage, a three-level OA is applied to the activities retained from the first stage to detect potential non-linear effects among these activities.
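The two-step network-type determination above (items 4 and 5 in particular) can be sketched as follows. The five activities (their means and variances) and the three candidate paths are hypothetical, and the probability in item 5 is evaluated under a normal approximation using the path mean, variance, and covariance formulas of item 4:

```python
import math

# Items 4-5 of the network-type algorithm on q = 3 hypothetical candidate
# paths: path means and variances from independent activities, covariances
# from shared activities, and a normal approximation of Pr[Z(p_h*) >= Z(p_h)].
def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))   # normal c.d.f.

acts = {"a": (5, 1.0), "b": (6, 2.0), "c": (9, 1.5),
        "d": (2, 0.5), "e": (3, 0.8)}                       # (mean, variance)
paths = [("a", "c"), ("b", "c"), ("a", "d", "e")]

mu  = [sum(acts[i][0] for i in p) for p in paths]           # path means
var = [sum(acts[i][1] for i in p) for p in paths]           # path variances

def prob_not_shorter(h, k):     # Pr[Z(p_h) >= Z(p_k)], normal approximation
    cov = sum(acts[i][1] for i in set(paths[h]) & set(paths[k]))
    v = var[h] + var[k] - 2 * cov        # Var[Z(p_h) - Z(p_k)]
    return Phi((mu[h] - mu[k]) / math.sqrt(v))

h_star = mu.index(max(mu))               # candidate dominant path
dominant = all(prob_not_shorter(h_star, k) >= 0.95
               for k in range(len(paths)) if k != h_star)
print("Type A" if dominant else "Type B")
```

With these toy numbers the longest-mean path wins against its rivals with probability well below 0.95, so the network is declared Type B and would be handled by the two-stage experimental strategy.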
4
For a brief discussion of these designs, see Section 6.1.
S.E. Elmaghraby / European Journal of Operational Research 127 (2000) 220±238
5.1.5. Construction methods of orthogonal designs
A two-level (hi- and lo-levels) orthogonal R-IV design can be obtained by different methods. For example, Bullington et al. [6] suggested a construction of this R-IV design by selecting odd-numbered columns of the Taguchi two-level OA's. Another method is the 'foldover technique' [2,9]. Consider a two-level orthogonal R-IV design D constructed by the foldover technique,
D = [ B   J
     −B  −J ],

where B is a matrix of order m × (m − 1) obtained from a Hadamard matrix H of order m. B is constructed by deleting the first column of H, and J is a unit vector with m elements. Therefore, D is a resolution IV design for a 2^m factorial in 2m runs. The rows of D form the combinations of factor levels of the experiment.

5.1.6. Algorithm for Taguchi tolerance design
The following notation is needed for the analysis of variance (ANOVA) technique:

q_i : contribution ratio of activity i
q_{i&j} : contribution ratio of the interaction between activities i and j
SST : total sum of squares
SS_i : sum of squares due to the main effect of activity i
SS_{i&j} : sum of squares due to the interaction effect of activities i and j
SSE : error sum of squares (residual)
DF : degrees of freedom associated with a sum of squares
DF_i : degrees of freedom associated with activity i
DF_{i&j} : degrees of freedom associated with the interaction of activities i and j
DF_e : degrees of freedom associated with the error (residual)
CF : correction factor
MSE : mean square error
z̄ : mean of the system characteristic z_h, h = 1, …, r
z_{ik} : sum of the z's obtained at the kth level of activity i
z_{i&j,uv} : sum of the z's obtained at the uth and vth levels of activities i and j, respectively
m : number of z's obtained at the kth level of activity i; m is the same for all i and k due to the balance of Taguchi's OA's
M : number of z's obtained at the uth and vth levels of activities i and j, respectively; M is the same for all i, j, u, v in the OA's
l : number of levels of activity i; l is the same for all i
r : number of rows (runs) in the OA
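The foldover construction of Section 5.1.5 can be sketched as follows. This is a pure-Python sketch assuming the Sylvester construction of the Hadamard matrix H, which requires m to be a power of 2.

```python
def hadamard(m):
    """Sylvester construction of a Hadamard matrix H of order m (m a power of 2)."""
    H = [[1]]
    while len(H) < m:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def foldover_riv(m):
    """Foldover design D = [B J; -B -J]: B is H with its first column
    deleted, J a column of ones. D has 2m runs and m two-level columns."""
    H = hadamard(m)
    B = [row[1:] for row in H]
    top = [b_row + [1] for b_row in B]
    bottom = [[-x for x in b_row] + [-1] for b_row in B]
    return top + bottom

D = foldover_riv(8)   # 16 runs for an 8-factor two-level experiment
```

Each column of D is a Hadamard column stacked over its negation, so the columns remain mutually orthogonal and balanced, which is what makes the array usable as a screening design.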
Step 1. Determine the test levels of each activity (factor). Let μ_i and σ²_i be the mean and variance of factor i, respectively. If a factor is assumed to have a linear effect on the system characteristic z, it may have two test levels, specified at μ_i − σ_i and μ_i + σ_i for the low and high levels, respectively. If a factor is assumed to have a non-linear effect on z, then three levels are used, specified at μ_i − σ_i√(3/2), μ_i, and μ_i + σ_i√(3/2).
Step 2. Select an appropriate OA by considering the number of factors and their levels.
Step 3. Assign the factors (and the interactions among factors, if they need to be estimated) to the columns of the selected OA.
Step 4. Calculate the system characteristic z at each run.
Step 5. Analyze the values of z by the ANOVA technique. Calculate the contribution ratio q_i of factor i and, if necessary, the contribution ratio q_{i&j} of the interaction between factors i and j, as follows:

q_i = [(SS_i − DF_i · MSE)/SST] × 100 (%),   (5)

q_{i&j} = [(SS_{i&j} − DF_{i&j} · MSE)/SST] × 100 (%),   (6)

SST = ∑_{h=1}^{r} (z_h − z̄)² = ∑_{h=1}^{r} z²_h − CF,   DF_T = r − 1,   (7)

SS_i = (1/m) ∑_{k=1}^{l} z²_{ik} − CF,   DF_i = l − 1,   for i = 1, …, |A|,   (8)

SS_{i&j} = (1/M) ∑_{u=1}^{l} ∑_{v=1}^{l} z²_{i&j,uv} − SS_i − SS_j − CF,   DF_{i&j} = DF_i · DF_j,   i, j = 1, …, |A|,   (9)

CF = (∑_{h=1}^{r} z_h)²/r.   (10)

Due to the orthogonality of the OA, SST in Eq. (7) can be partitioned as

SST = ∑_{i=1}^{|A|} SS_i + ∑_{i&j} SS_{i&j} + SSE.   (11)

In Eq. (11), the pair i&j ranges over the pairs of factors whose interaction effect is of interest. The second term on the right side of Eq. (11) will disappear if most of the variability in z can be explained by main effects only. SSE represents the variation of z unexplained by the main and/or interaction effects, and

MSE = SSE/DF_e.   (12)
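For a main-effects-only analysis, Eqs. (5), (7), (8) and (10)-(12) specialize to the following sketch; the design matrix and responses below are hypothetical, and the interaction terms of Eqs. (6) and (9) are omitted for brevity.

```python
def contribution_ratios(design, z):
    """Main-effects ANOVA on an orthogonal array: design[h][i] is the level
    of factor i in run h, z[h] the observed response. Sums of squares not
    attributed to main effects are pooled into SSE. Returns the list of
    contribution ratios q_i in percent, per Eqs. (5), (7), (8), (10)-(12)."""
    r, nfac = len(z), len(design[0])
    CF = sum(z) ** 2 / r                         # Eq. (10)
    SST = sum(zh ** 2 for zh in z) - CF          # Eq. (7), DF_T = r - 1
    SS, DF = [], []
    for i in range(nfac):
        levels = sorted(set(row[i] for row in design))
        m = r // len(levels)                     # balanced OA: m equal for all i, k
        z_ik = {k: sum(z[h] for h in range(r) if design[h][i] == k)
                for k in levels}
        SS.append(sum(s ** 2 for s in z_ik.values()) / m - CF)   # Eq. (8)
        DF.append(len(levels) - 1)
    SSE = SST - sum(SS)                          # Eq. (11) with no interactions
    DFe = (r - 1) - sum(DF)
    MSE = SSE / DFe if DFe > 0 else 0.0          # Eq. (12)
    return [100 * (SS[i] - DF[i] * MSE) / SST for i in range(nfac)]  # Eq. (5)
```

On a 2² design where the response depends only on the first factor, the procedure attributes the entire variation to that factor.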
MSE represents the extra variation in z arising not from the changes in factor levels; it is used in calculating the contribution ratio so that the contribution ratio represents the net variation due to the main effect or the interaction effect only.

5.2. Procedure for evaluating UIM
The following two-stage procedure is used to estimate UIM_i or UIM_{i&j} for Type-B networks. As always, A denotes the set of all activities in the network.
Stage 1.
Step 1. Determine the two test levels of each activity in A, namely, μ_i − σ_i and μ_i + σ_i for the lo- and hi-levels, respectively.
Step 2. Select or construct an appropriate two-level orthogonal R-IV design by considering the number of activities in A.
Step 3. Assign the activities in A to the columns of the design obtained from Step 2.
Step 4. Calculate the project completion time T using the longest path algorithm at each run of the selected design.
Step 5. Perform ANOVA on the project completion times obtained from Step 4 using Eqs. (7) and (8).
Step 6. Calculate the contribution ratio q_i using Eq. (5), which is taken as an estimate of UIM_i.
Stage 2.
Step 7. Let A* be the set of activities remaining after screening out those activities whose UIM_i obtained from Stage 1 is negligible in magnitude (<1%, say).
Step 8. Determine three test levels of each activity in A*, namely, μ_i − σ_i√(3/2), μ_i, and μ_i + σ_i√(3/2) for the lo-, center, and hi-levels, respectively. Fix the duration of each activity in A − A* at its mean value.
Step 9. Partition A* into A1 and A2 such that A1 consists of activities with only main effects of interest, and A2 consists of activities whose main and pairwise interaction effects need to be estimated. Activities whose contribution ratios obtained from Stage 1 are greater than a threshold value (5%, say) are included in A2.
Step 10. Select or construct an appropriate three-level OA, and assign the activities to its columns such that the main effects of the activities in A1 and the main and interaction effects of the activities in A2 can be estimated without confounding.
Step 11. Calculate the project completion time T using the longest path algorithm at each run.
Step 12. Perform ANOVA on the project completion times obtained from Step 11 using Eqs. (7)-(9).
Step 13. Calculate q_i and q_{i&j} using Eqs. (5) and (6), respectively. Estimates of UIM_i and UIM_{i&j} are given by q_i and q_i + q_j + q_{i&j}, respectively.
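Stage 1 of the procedure can be sketched end-to-end as follows. The five-activity network is hypothetical, and for brevity a full 2^5 factorial stands in for the R-IV design of Step 2 (the full factorial is also orthogonal, merely larger than necessary).

```python
from itertools import product

# Hypothetical 5-activity AoA network: arc (i, j) -> activity name.
arcs = {(1, 2): "a", (1, 3): "b", (2, 3): "c", (2, 4): "d", (3, 4): "e"}
mu    = {"a": 6.0, "b": 4.0, "c": 1.0, "d": 3.0, "e": 5.0}
sigma = {"a": 2.0, "b": 0.2, "c": 0.2, "d": 0.2, "e": 2.0}
acts = sorted(mu)

def project_duration(dur):
    """Longest-path project completion time T (Step 4)."""
    t = {1: 0.0}
    for (i, j), a in sorted(arcs.items()):   # node numbers are topological here
        t[j] = max(t.get(j, float("-inf")), t[i] + dur[a])
    return t[max(t)]

# Steps 1-4: two test levels mu_i - sigma_i and mu_i + sigma_i per activity.
runs, T = [], []
for lv in product((-1, 1), repeat=len(acts)):
    runs.append(lv)
    T.append(project_duration({a: mu[a] + s * sigma[a]
                               for a, s in zip(acts, lv)}))

# Steps 5-6: main-effect ANOVA, Eqs. (5), (7), (8), (10)-(12).
r = len(T)
CF = sum(T) ** 2 / r
SST = sum(x * x for x in T) - CF
SS = {}
for i, a in enumerate(acts):
    hi = sum(T[h] for h in range(r) if runs[h][i] == 1)
    lo = sum(T[h] for h in range(r) if runs[h][i] == -1)
    SS[a] = (hi * hi + lo * lo) / (r // 2) - CF
MSE = (SST - sum(SS.values())) / ((r - 1) - len(acts))
q = {a: 100 * (SS[a] - MSE) / SST for a in acts}   # UIM_i estimates (%)

# Step 7: screen out activities whose estimated UIM_i is below 1%.
screened = [a for a in acts if q[a] >= 1.0]
```

In this instance the dominant path is a-c-e, and only the two high-variance activities on it survive the 1% screen.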
6. Sensitivity of mean-variance: A response surface approach (the MDSM)

We now focus on measuring the impact of an activity's mean duration on the variability of the project duration. It should be intuitively clear that as the activity's mean duration is varied, the variability of the project duration will be impacted, whether the activity's own variability has changed or not. To see this, consider the simple project composed of three activities shown in Fig. 1, and suppose that the durations of the activities are as shown in Table 3. The table also shows the mean and variance of the time of realization of node 3, denoted by E[3] and Var[3], respectively. It is easily seen that if the mean of activity 3 is decreased from 12 to 10 or less while maintaining the same distribution about the mean (and hence the same variance), then path 1-2-3 shall dominate and

Var[3] = Var(Y_1) + Var(Y_2) = 6.40 ⇒ variance increased.

Now increase the mean of activity 3 from 12 (case 1) to 18 (case 2), while maintaining the same distribution about the mean (and hence the same variance); the variance of the project completion time decreases from 5.8804 to 3.2224. You can easily verify that as the mean is further increased to 23 or beyond while maintaining the same distribution about it, activity 3 shall dominate the path 1-2-3, 5 and the variance of the project completion time shall increase to equal the variance of activity 3, which is 4, and remain at that value thereafter. These calculations are summarized in Table 4.

Fig. 1.

5 The path is identified by its activities.

Table 3
Data and analysis of the project of Fig. 1

Case 1:
Activity   Duration y   Probability p(y)
1          8            0.2
           12           0.8
2          5            0.6
           9            0.4
3          10           0.5
           14           0.5
E[3] = 17.86, Var[3] = 5.8804

Case 2:
Activity   Duration y   Probability p(y)
1          8            0.2
           12           0.8
2          5            0.6
           9            0.4
3          16           0.5
           20           0.5
E[3] = 19.24, Var[3] = 3.2224

Table 4
μ_3     Var[3]
6       6.40
10      6.40
12      5.88
18      3.22
≥ 23    4.00

This realization forces the question: how can one estimate the impact of changes in the mean of an activity (or of several activities) on the project variability in an economical fashion? In a recent contribution, Elmaghraby et al. [13] (EFT) address this question in some detail. The remainder of this section is devoted to a summary of their findings.

6.1. On estimating the parameters of the project duration

We continue to view the project duration as the 'response' and the activity durations as the 'factors' that affect the response, following the approach of Cho and Yum [7] discussed above. It is well known that if the effects of several factors are of interest, factorial experimental designs (the so-called full factorial designs with replication) need to be used, permitting the assessment of the main effects as well as all levels of interaction among the factors. However, as the number of factors (activities) of interest increases, the number of requisite factor combinations also increases, and at an exponential rate: 2^k runs are required to analyze all the main and interaction effects of k factors, aside from replication. Activity networks with 60 or 70 activities are commonplace, and an approach that demands full factorial designs is clearly out of the question. Fortunately, the process is usually dominated by the main effects and (perhaps) the lower order interactions (two-factor or at most three-factor interactions) of a small fraction of the activities. Hence the higher order interactions can be ignored and pooled to estimate the error, thus affording considerable economy in experimentation. Further, if we assume that higher order interactions are negligible, a fractional factorial design involving fewer than the complete set of 2^k runs of the full factorial design can be used to obtain information on the main and low order interaction effects. This technique results in 'confounding' of the information about some main or interaction effects with other main or interaction effects, but may be acceptable for the immediate objectives. An effective way to classify fractional factorial designs according to the alias patterns they produce is to use 'design resolutions'. The commonly encountered design resolutions are: Resolution III designs (R-III), in which no main factor effect is aliased with any other main factor effect, but main effects are aliased with two-factor or higher interactions; Resolution IV designs (R-IV), in which no main factor effect is aliased with any other main factor effect or with any two-factor interaction, though two-factor and higher interactions are aliased with each other; and Resolution V designs (R-V), in which no main factor effect is aliased with any other main factor effect or two-factor interaction, and no two-factor interaction is aliased with another two-factor interaction. As noted in Section 5, an OA is such that, for each pair of columns, all combinations of factor levels occur, and they occur an equal number of times.

As a preliminary to the central issue of concern, EFT address two issues [13]:
1. The first issue relates to the degradation in accuracy and precision encountered when one moves away from the full factorial design to the partial factorial design, and then again to the Taguchi design. To answer this question, EFT used the results from MCS as the datum, and experimented with three project sizes: small (5 activities and 4 nodes), medium (17 activities and 10 nodes), and large (38 activities and 20 nodes; see Table 5) 6. All activities are assumed to be normally distributed with the given means and variances. EFT conclude that the degradation is minimal and is quite well within the accuracy of the initial data itself. Therefore, one has ample justification in adopting the Taguchi approach in such analysis, which affords a drastic reduction in the computations.
2. The second issue resolved by EFT relates to the fact that in large networks the computational requirement of Taguchi's method
6
In AoA representation.
Table 5
The mean and variance of the duration of each of the 38 activities of the large network.
may still be excessive, since the corresponding experimental design matrices would be quite large. They propose a 'preliminary investigation' that would improve the computational efficiency of the method under these circumstances, without causing any significant loss of accuracy. The intent of this procedure is to facilitate the use of a smaller design matrix to estimate the parameters of the network. The suggested approach (following Anklesaria and Drezner [1]) is to compute the total float associated with each activity when all durations are put equal to their expected values, and to screen out any activity with a total float greater than a certain threshold s. 7 Then the analysis is conducted by applying the Taguchi method to only those activities which remain after screening. The durations of the other activities are set at their respective mean levels. In this way the mean and standard deviation of the project duration can be estimated by using a (much) smaller design matrix which has a sufficient number of columns for the activities that remain after screening. EFT assessed the validity of this approximation by applying it to the large network of Fig. 2, again using MCS as the datum. The screening process reveals that 22 of the 38 activities in the network do not have a significant effect on the mean and variance of the project duration in a range about each activity's mean duration. 8 Taguchi's L32(2^31) orthogonal array 9 is then applied to the remaining 16 activities, resulting in estimates of the mean and standard deviation of the project duration of 57.58 and 5.36, respectively. The accuracy of these results is comparable with that obtained from the discretization method applied to all activities with a 2^7 design. Considering the difference in the number of treatments between the two designs, the level of accuracy achieved with the 16 activities that remain after screening gives sufficient evidence that the screening method works effectively, at least in this particular network. With these preliminaries out of the way, we are now reasonably assured that reliable conclusions can be drawn from small Taguchi designs. EFT then addressed the issue of main concern; viz., how does the variation in the mean of the activities affect the mean and variance of the project completion time? They conducted a
7 The value of s is determined so that the procedure does not identify more than five paths at any stage of analysis. Typically, s = 3σ.
8 The 16 remaining activities are: 1, 2, 4, 9, 10, 12, 15, 23, 25, 27, 28, 30, 35, 36, 37, 38.
9 L32(2^31) signifies 32 rows and 31 activities.
Fig. 2.
'global' experiment to gain some insight into the impact of changing the mean of each activity duration on the mean and variance of the project duration, and concluded that, as expected, increasing the mean duration of each activity has a non-decreasing effect on the mean of the project duration. However, increasing the mean duration of an activity could either increase or decrease the variance of the project duration. (We have witnessed such anomalous behavior in the example of Fig. 1 above.) Then they conducted two sets of 'specific' experiments: the first assumes that the mean is varied but the variance is maintained constant, and the second assumes that the mean is varied but the coefficient of variation (c.v.) is maintained constant. At any time, the mean duration of only one activity was varied. For the sake of brevity we shall report only on the first experiment, in which the variance was maintained constant.
6.2. Changing the activity's mean while maintaining its variance constant
EFT experimented with the large network of Fig. 2 (38 activities and 20 nodes) 10. The range of values was [1, 50], which includes the current value of the mean of each activity duration, and is large enough to give a fairly good idea about the response variability. They used Taguchi's method with a 2^7 design as a method of approximation of the mean and variance of the project duration. For each one of the 16 residual activities, a plot was made of the variance of the project duration over the respective range of change of the mean duration of that activity. As a sample of the results
10 This network was originally proposed by Kleindorfer [16] and subsequently used by Cho and Yum [7].
Fig. 3.
Fig. 4.
Fig. 5.
obtained, the graphs corresponding to activities 1, 10 and 34 are presented in Figs. 3-5. These graphs indicate that the variance of the project duration is not affected by increasing the mean of the activity duration until the mean reaches a threshold level, at which point it starts to change as the mean of the activity duration is increased further and, after some point, it stabilizes. Over the range of variation of the mean of an activity duration in which the variance of the project duration responds to the changes in the mean, the relationship does not seem to be linear, but displays a more complex pattern: increasing the mean of some activity durations may reduce the variance of the project duration (see activity 1), while increasing the mean of some other activity durations may increase the variance of the project duration (see activity 10), and yet still another activity may cause the variance of the project duration to fluctuate before settling to its steady-state value (see activity 34).

6.2.1. Application
A possible application of these findings in the context of a project planning network may be as follows. Suppose that the project of this network has the requirement that it should be completed on time with very small variability. In order to meet this requirement we need to reduce the variance of the project duration as much as possible without changing its current mean value (57.40) to any appreciable extent. Further, suppose that in order to achieve this objective, we are only allowed to change the mean durations of the 16 residual activities 1, 2, 4, 9, 10, 12, 15, 23, 25, 27, 28, 30, 35, 36, 37, 38 in the range of one half through twofold of each activity's current mean value. Assume that
as we change the mean duration of an activity, its variance remains constant. Finally, suppose, for simplicity, that we observe the mean and variance of the project duration at only nine different mean durations for each one of the 16 activities. These mean levels are determined for each activity by taking nine equidistant points in the interval in which we can change the mean of that activity's duration. The variance of the project duration is plotted over these ranges of the mean duration of each activity. The corresponding graphs reveal that activities 1, 4, 10, 23, 25, 35, 37 and 38 are the ones which have the largest impact on the variance of the project duration. 11 (Activity 12 has no effect, and the other seven activities have only slight effects on the variance of the project duration in the respective ranges of their mean durations.) The mean duration of the project is also plotted over the same ranges of each activity duration. The corresponding graphs reveal that the residual activities 1, 4, 9, 10, 15, 23, 25, 35, 36, 37, 38 have significant effects on the mean of the project duration. (Activity 12 has no effect on the mean of the project duration, and activities 2, 27, 28, 30 have only slight effects in the small neighborhood about their current mean durations.) Paying a little closer attention to those activities with significant effects on the project duration, it was observed that:
1. Increasing the mean duration of activities 1, 4, 23 and 37 reduces the variance of the project duration.
2. Increasing the mean duration of activities 10, 25, 35 and 38 increases the variance of the project duration.
3. Reducing the mean duration of activities 1, 4, 23 and 37 increases the variance of the project duration.
4. Reducing the mean duration of activities 10, 25, 35 and 38 decreases the variance of the project duration.
11 The impact is measured by the ratio of maximum to minimum variance. A ratio ≥ 2 was deemed significant.
5. Increasing the mean duration of any one of these activities results in the same or a greater mean duration for the overall project.
6. Reducing the mean duration of any one of these activities results in the same or a smaller mean duration for the overall project.
With all this information in hand, one may suggest 12, for example, increasing the mean duration of activity 37 from its current value of 13.50 to 18.90, and reducing the mean durations of activities 10 and 38 by one half of their respective current values of 13.00 and 10.00. At the proposed values of the activity durations, EFT determined the mean and standard deviation of the project duration, using straightforward Monte Carlo simulation with 10 000 iterations. They obtained a project duration mean and standard deviation of 57.40 and 2.69, respectively. The respective 95% confidence intervals for the mean and standard deviation are computed as (57.35, 57.45) and (2.65, 2.73). Recall that the corresponding estimates before these adjustments were 57.40 and 6.06, respectively. Also, the 95% confidence intervals with the initial estimates of the mean and standard deviation were computed as (57.28, 57.52) and (5.98, 6.14). Thus the proposed adjustments would reduce the standard deviation of the project duration by more than one half without changing the estimated mean of 57.40.
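Monte Carlo estimates and confidence intervals of this kind can be produced along the following lines. This is a sketch only: the three-activity project, its parameters, and the normal-theory approximation for the standard deviation interval are illustrative assumptions, not EFT's network or results.

```python
import random
from math import sqrt

random.seed(1)

# Hypothetical mini-project: T = max(Y1 + Y2, Y3), normal activity durations.
mu, sd = [10.0, 6.0, 12.0], [1.0, 2.0, 2.0]

def one_replication():
    y = [random.gauss(m, s) for m, s in zip(mu, sd)]
    return max(y[0] + y[1], y[2])

n = 10_000
T = [one_replication() for _ in range(n)]
mean = sum(T) / n
s = sqrt(sum((t - mean) ** 2 for t in T) / (n - 1))   # sample std deviation

# 95% confidence intervals: standard for the mean; for the standard
# deviation, a rough normal-theory approximation s * (1 +/- 1.96/sqrt(2(n-1))).
ci_mean = (mean - 1.96 * s / sqrt(n), mean + 1.96 * s / sqrt(n))
ci_sd = (s * (1 - 1.96 / sqrt(2 * (n - 1))),
         s * (1 + 1.96 / sqrt(2 * (n - 1))))
```

With 10 000 replications, the half-widths of both intervals shrink roughly with 1/√n, which is why the reported intervals above are so tight.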
12 These suggestions are made assuming that the costs involved are equal for all activities. Of course, if that were not the case, the recommendation would be different.

7. So, where do we stand?
There is little controversy surrounding the PCI. But the picture is quite different concerning the classical ACI. By whichever means it is evaluated, it may not give sufficient information, or even the correct information, to management. The SI and the CRI suggested by Williams [25] and discussed in Section 2 are unproven alternatives. They are not only difficult to evaluate, but they also suffer from the same malaise of not giving sufficient information, or even the correct information, to management. Interpreting 'criticality' of an activity as 'sensitivity' of the parameters of the project completion time (in particular, its mean and variance) to the activity's parameter of interest (in particular, the mean or variance of the activity), we are led to: (i) the MCS approach and dependence on simulation-based derivatives when interest is in the mean-mean issue; (ii) the analytical approach when the AN is series-parallel and interest is in the mean-variance issue; (iii) the response surface methodology when the AN is not series-parallel, or when other issues are of concern. Then we must resort to sampling according to some well-designed experiment, and we are led either to the UIM suggested by Cho and Yum (Section 5) if interest is focused on variance-variance, or to the MDSM measure suggested by EFT (Section 6) when interest is focused on mean-variance. Either measure can be used together with the classical criticality index ACI to analyze the project so that effort may be directed to the right activities. Unfortunately, either measure is difficult to evaluate, and the need still remains for an easier approach. Finally, there is need for an effective approach to evaluate the interaction between changes in the parameters of two or more activities, taking into account the economics of implementing such changes.
References

[1] K.P. Anklesaria, Z. Drezner, A multivariate approach to estimating the completion time for PERT networks, Journal of the Operational Research Society 37 (1986) 811-815.
[2] G.E.P. Box, K.B. Wilson, On the experimental attainment of optimum conditions, Journal of the Royal Statistical Society B 13 (1951) 1-45.
[3] R.A. Bowman, Efficient estimation of arc criticalities in stochastic activity networks, Management Science 41 (1995) 58-67.
[4] R.A. Bowman, Stochastic gradient-based time-cost tradeoffs in PERT networks using simulation, Annals of Operations Research 53 (1994) 533-551.
[5] R.A. Bowman, J.A. Muckstadt, Stochastic analysis of cyclic schedules, Operations Research 41 (1993) 947-958.
[6] K.E. Bullington, J.N. Hool, S. Maghsoodlo, A simple method for obtaining resolution IV designs for use with Taguchi's orthogonal arrays, Journal of Quality Technology 22 (1990) 260-264.
[7] J.G. Cho, B.J. Yum, An uncertainty importance measure of activities in PERT networks, International Journal of Production Research 35 (1997) 2737-2770.
[8] C.E. Clark, The greatest of a finite set of random variables, Operations Research 9 (1961) 145-162.
[9] A. Dey, Orthogonal Fractional Factorial Designs, Wiley, New York, 1985.
[10] B.M. Dodin, Approximating the probability distribution function of the project completion time in PERT networks, OR Report No. 153 (Revised), OR Program, North Carolina State University at Raleigh, June 1980.
[11] B.M. Dodin, S.E. Elmaghraby, Approximating the criticality indices of the activities in PERT networks, Management Science 31 (1985) 207-223.
[12] S.E. Elmaghraby, Activity Networks: Project Planning and Control by Network Models, Wiley, New York, 1977.
[13] S.E. Elmaghraby, Y. Fathi, M.R. Taner, On the sensitivity of project variability to activity mean duration, Research Report, North Carolina State University, Raleigh, NC 27695-7906, 1998.
[14] G. Gutierrez, A. Paul, Analysis of the effects of uncertainty, risk-pooling and subcontracting mechanisms on project performance, Technical Report, The University of Texas at Austin, 1998.
[15] J.E. Kelley Jr., M.R. Walker, Critical path planning and scheduling, in: Proceedings of the Eastern Joint Computer Conference, vol. 16, 1959, pp. 160-172.
[16] G.B. Kleindorfer, Bounding distributions for a stochastic acyclic network, Operations Research 19 (1971) 1586-1601.
[17] V.G. Kulkarni, V.G. Adlakha, Markov and Markov-regenerative PERT networks, Operations Research 34 (1986) 769-781.
[18] D.G. Malcolm, J.H. Roseboom, C.E. Clark, W. Fazar, Applications of a technique for research and development program evaluation, Operations Research 7 (1959) 646-669.
[19] A.W. Marshall, F. Proschan, Mean life of series and parallel systems, Journal of Applied Probability 7 (1970) 165-174.
[20] J.J. Martin, Distribution of time through a directed acyclic network, Operations Research 13 (1965) 46-66.
[21] R.J. Schonberger, Why projects are 'always' late, Interfaces 11 (5) (1981) 66-67.
[22] C.E. Sigal, A.B. Pritsker, J.J. Solberg, The use of cutsets in Monte Carlo analysis of stochastic networks, Mathematics and Computers in Simulation 21 (1979) 376-384.
[23] G. Taguchi, System of Experimental Design, vols. 1 and 2, UNIPUB/Kraus International Publications, New York, 1987.
[24] R.M. van Slyke, Monte Carlo methods and the PERT problem, Operations Research 11 (1963) 839-860.
[25] T.M. Williams, Criticality in stochastic networks, Journal of the Operational Research Society 43 (1992) 353-357.