Reliability Engineering and System Safety 27 (1990) 241-255
Phased Mission Effectiveness using a Nonhomogeneous Markov Reward Model

Mark K. Smotherman & Robert M. Geist

Department of Computer Science, Clemson University, Clemson, South Carolina 29634-1906, USA

(Received 23 August 1988; accepted 29 May 1989)
ABSTRACT

The requirements for industrial systems often include the dependable performance of a sequentially-dependent set of tasks, each of which may require different component loadings and configurations. The performance evaluation of such systems is termed phased mission analysis. A new approach to this analysis is presented that is based on a nonhomogeneous Markov reward model in which the concept of a state transition is generalized to include phase changes as well as failures and repairs. This model allows time-dependent failure rates for those phases without repair, and incorporates cumulative reward measures to provide figures of merit for work performed.
1 INTRODUCTION

Many industrial systems require an ordered set of tasks to be performed, where the tasks represent different phases or stages of a mission, such as initialization, loading, duty and shutdown, or transit, placement, operation, and return. These tasks may require different system configurations and may impose different component stresses. Moreover, critical time windows may exist during which the primary objective of the system must be accomplished. Such systems are called phased-mission systems, and the different tasks and time windows constitute the different phases of the total mission.

The design of phased-mission systems requires a method to evaluate the expected performance from competing designs. This evaluation involves the prediction of the overall chance for success of the mission and the relative
impact on that success of various design decisions. Mission effectiveness should not be defined as mere system survivability (i.e. reliability); it should include assessment of the work performed and objectives accomplished. Tillman et al.1 provide a review and bibliography of various definitions of system effectiveness.

Previous evaluations of phased-mission systems have been based on the modeling of system reliability by time-homogeneous Markov models,2,3 combinatorial models,4 fault trees,2,5,6 coherent systems,7,8 Monte Carlo simulation,9-11 and Bayesian analysis.12 Although combinatorial models and fault trees are typically solved in an efficient manner, they cannot easily represent sequence dependencies and repairs.2,13 Simulation offers the greatest flexibility in representation, but it and the analysis of coherent systems are often time consuming. Approximations are useful in both approaches. The Bayesian approach differs from all the others in that it requires the estimation of distributions of priors. Among these methods, Markov modeling incorporates a desirable combination of flexibility in representation and ease of solution; therefore, it is often a preferred method of evaluation for complex systems.

Any evaluation of success for a phased-mission system must take into account changes in configuration and stresses. A time-homogeneous Markov model represents this dynamic behavior by providing a separate state space and set of failure and repair transitions for each different phase. These phases are then combined sequentially; the successful initiation of a next phase depends upon the system reaching a state in the current phase that represents an operational configuration for both.2,3 In this traditional Markov approach to phased missions, each phase must be solved separately to obtain a state probability vector at the time of the phase change. Each probability vector is then linearly transformed into the appropriate initial condition vector for the next phase until the vector of the last phase produces the predicted reliability of the total mission.

This traditional Markov approach is limited in its representation of dynamic behavior, since it assumes that phase changes occur at specified discrete points in time and are instantaneous and state-independent. Coupling between phases is restricted to the transformed state probability vectors. Although recent extensions have incorporated random phase durations,3 the individual phase models are limited in coupling by the use of expected values for the components of the transformed probability vectors. Such models thus continue to make the assumptions that phase changes are state-independent and instantaneous. Moreover, any time-homogeneous Markov model is limited to the implicit assumption that state holding times, e.g. component failure times and repair times, are exponentially distributed. Such assumptions lead to
analytically tractable models but have been criticized for their restrictive nature.9

A time-homogeneous Markov model, by itself, cannot represent the amount of work performed or the relative values of task accomplishment. In dealing with this problem, Nathan used a (single-phase) Markov model and defined effectiveness as the product of system readiness (i.e. availability), reliability, and accuracy.14 Nevertheless, this definition has not been widely used for Markov models of phased missions, and their solution has traditionally been reported as the probability that the system remains in a set of operational states until mission end. Other researchers have proposed measures of effectiveness for non-Markov models. Pedar and Sarma examined a multistate coherent model for an avionics system in which three overlapping objectives provided five levels of accomplishment.7 Tillman and his colleagues used a semi-Markov model and solved for the probability of successful completion of missions within fixed durations; they included the effects of availability at mission start, environment, and operator performance.9,15

In Section 2 we describe a new approach to phased mission analysis based on nonhomogeneous Markov reward models. This approach removes the limitations cited for traditional Markov models of phased missions and also provides for measures of effectiveness based on reward rates. In Section 3 we provide several examples of this approach in modeling industrial systems. Section 4 contains conclusions and current directions.
2 A NEW APPROACH TO EFFECTIVENESS EVALUATION

The modeling of a phased-mission system by a single nonhomogeneous Markov model removes the major limitations of the traditional phased-mission approach and greatly increases modeling flexibility, and thus the scope of practical application.16 By its nature, the model provides for nonexponential component failure behavior, and this can be used whenever a constant failure rate is not representative.

The fundamental structure of the model is easily specified. If {X(t) | t ≥ 0} is a finite state stochastic process with state probabilities p_i(t) = Pr[X(t) = i], then from the Markov assumption we can derive the following differential equations17
p_j'(t) = Σ_i p_i(t) a_ij(t)        (1)

where a_ij(t) is the transition rate from state i into state j.
The system of equations can be rewritten as P'(t) = P(t)A(t), where P(t) = (p_0(t), p_1(t), ..., p_n-1(t)) is the row vector of state probabilities and A(t) = [a_ij(t)]_nxn is the transition rate matrix. The traditional time-homogeneous Markov process is the special case in which all transition rates are independent of time, i.e. A(t) = A. Since the rate functions, a_ij(t), may exhibit discontinuities, the general nonhomogeneous model is often difficult to solve analytically. However, its numerical solution remains relatively straightforward.

Our approach to phased mission analysis is based on two important modifications of the nonhomogeneous Markov model:

(1) The concept of a state transition is generalized to include phase changes, as well as failures and repairs.16
(2) Reward measures are incorporated into the model to provide more information for system effectiveness evaluation.

2.1 Generalization of transitions

In this approach, different phases are represented as different subsets of states in the single model, and phase changes are represented by time-varying transitions among these subsets. This formulation includes the impulse function as a holding-time density so that the traditional, discrete-time approach becomes a special case of the new approach. Because of the single model framework, phase change transitions out of the different states in a given phase subset can have different rates or impulse functions. Thus, phase changes are state dependent. Non-instantaneous phase changes are modeled by the inclusion of intermediate states. The holding time within an intermediate state may be state dependent in order to represent different phase change durations (i.e. those required for degraded configurations). Multiple objective or rendezvous missions, in which the system must respond within a definite time window, can be modeled by including different phase change transitions from the same state.

Repairs are not generally modeled by nonhomogeneous Markov systems. This restriction is necessary since a repair is assumed to return the failure process of a component to time t = 0. For time-homogeneous Markov models (or semi-Markov models), this assumption does not present a difficulty since each transition erases all influence of the past. However, with general time-varying failure rates, the global time dependence cannot be set back to zero. The time at repair, τ, is itself a random variable, and no simple offset calculation will suffice (unless τ is deterministic, such as a scheduled maintenance action).
However, since the phase subsets subdivide the single model into several submodels, repairs can be introduced into those submodels in which there are no time-varying rates other than the phase changes. In this case, the failure processes for components can be logically set back to time t = 0.
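To make this formulation concrete, the following sketch builds a small time-varying rate matrix containing one Weibull failure transition and one state-dependent phase-change transition, and integrates P'(t) = P(t)A(t) while splitting the integration interval at the point where the phase-change rate is switched on, so that no solver step straddles the discontinuity. The sketch uses Python with SciPy rather than the PUMA tool described later, and every state label, rate value, and helper name in it is an illustrative assumption rather than a model taken from this paper. The same piecewise-integration idea extends to any number of queued phase-change events, which is essentially the event-bounded stepping described in Section 2.3.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 4-state model (an assumption, not from the paper):
#   0: phase 1, component up      1: phase 1, component failed
#   2: phase 2, component up      3: phase 2, component failed
T_PHASE = 10.0   # phase-change transition becomes active at t = 10 (assumed)
H_RATE = 4.0     # assumed phase-change rate once active (per hour)

def weibull_hazard(t, alpha=100.0, beta=1.5):
    """Hazard of a two-parameter Weibull distribution (assumed parameters)."""
    return (beta / alpha) * (t / alpha) ** (beta - 1.0)

def rate_matrix(t):
    """Time-varying generator A(t); each diagonal is the negative row sum."""
    lam1 = weibull_hazard(t)               # phase 1 failure rate (time dependent)
    lam2 = 1.0e-3                          # phase 2 failure rate (constant, assumed)
    h = H_RATE if t >= T_PHASE else 0.0    # phase change only out of the fully-up state
    A = np.zeros((4, 4))
    A[0, 1] = lam1
    A[0, 2] = h
    A[2, 3] = lam2
    for i in range(4):
        A[i, i] = -A[i].sum()
    return A

def deriv(t, p):
    return p @ rate_matrix(t)              # P'(t) = P(t) A(t)

def solve(p0, t_end):
    """Integrate piecewise so no step straddles the rate discontinuity at T_PHASE."""
    breakpoints = [0.0, T_PHASE, t_end]
    p = np.asarray(p0, dtype=float)
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        sol = solve_ivp(deriv, (a, b), p, rtol=1e-8, atol=1e-12)
        p = sol.y[:, -1]
    return p

print(solve([1.0, 0.0, 0.0, 0.0], 20.0))   # state probabilities at t = 20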
2.2 Accumulated rewards

Reward models provide instantaneous and cumulative measures of weighted state occupancy. Each state has an associated weight, called a reward rate or yield, which represents the relative value of being in the state. Examples are productivity rates, such as jobs/hour or transactions/second, and economic rates, such as profit/day. Negative rates are allowed and are called costs. Rates may also be time dependent. Howard has explored the use of reward rates with semi-Markov processes and allows weights to be associated with transitions as well as states;18 these weights are called bonuses (or penalties when negative).

Let R(t) = (r_0(t), r_1(t), ..., r_n-1(t))^T be the column vector of reward rates at time t and P(t) be defined as above. Then P(t)R(t) is the instantaneous reward of the system at time t. If Y(t) is the accumulated reward until time t, then the expected value of Y(t) is defined by
E[Y(t)] = ∫_0^t P(u)R(u) du = Σ_i ∫_0^t p_i(u) r_i(u) du        (2)
With the proper rates, this measure can give information on expected work performed or value received, or on expected time spent in a certain subset of states (i.e. a reward rate of 1). In the latter case, the accumulated reward measures provide life cycle measures, such as expected duty time and expected time under repair.
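One convenient way to evaluate eqn (2) numerically is to carry the accumulated reward as an extra component of the differential system, so that dY/dt = P(t)R(t) is integrated by the same solver and step-size control as the state probabilities; with a reward vector containing 1 for a chosen subset of states and 0 elsewhere, the same accumulator yields the expected time spent in that subset, as noted above. The sketch below continues the illustrative SciPy example given earlier; the reward-rate values are assumptions, not taken from the paper.

import numpy as np
from scipy.integrate import solve_ivp

R = np.array([2.0, 0.0, 1.0, 0.0])   # assumed reward rates r_i for the 4-state sketch

def deriv_with_reward(t, z, rate_matrix):
    """z = (p_0, ..., p_3, Y); dp/dt = p A(t) and dY/dt = p . R, as in eqn (2)."""
    p = z[:-1]
    return np.append(p @ rate_matrix(t), p @ R)

def expected_reward(rate_matrix, p0, t_end, breakpoints):
    """Return final state probabilities and E[Y(t_end)], integrating piecewise."""
    z = np.append(np.asarray(p0, dtype=float), 0.0)   # start with zero accumulated reward
    pts = sorted(set([0.0, t_end] + list(breakpoints)))
    for a, b in zip(pts[:-1], pts[1:]):
        sol = solve_ivp(deriv_with_reward, (a, b), z, args=(rate_matrix,),
                        rtol=1e-8, atol=1e-12)
        z = sol.y[:, -1]
    return z[:-1], z[-1]

# Example call, with rate_matrix and T_PHASE as defined in the earlier sketch:
# probs, ey = expected_reward(rate_matrix, [1.0, 0.0, 0.0, 0.0], 20.0, [T_PHASE])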
2.3 Solution technique Since the nonhomogeneous Markov model can be represented as a system of differential equations, (1), a standard initial-value solution algorithm can be used to find the state probabilities. However, because of the discontinuities of the phase change transitions, the steps of the solution algorithm must be determined by an event-queue technique similar to those found in many simulation programs. The state probabilities at each step can then be collected by a numerical integration algorithm, yielding the accumulated reward measures as implied by (2). All phase changes should be placed on the event queue, from which a step size control adjusts the next step in the solution so as to not overstep the next
event. Impulse functions, representing discrete-time phase changes, are single events that cause the instantaneous transfer of probability values between specified states. Other phase change transitions typically require two events, one to cause insertion of the rate into the transition rate matrix and one to cause deletion. Since time-varying rates exist in the transition rate matrix, those entries in the matrix must be re-evaluated at each step.

To bound the local error of each step in the solution, a standard approach for adaptive step size control can be used along with the event step size control mentioned above. An additional concern arises for transition rates that approach infinity. These require a fixed-time phase change event to be performed near the point of discontinuity as the value of the rate grows large. This event transfers the residual probability of the exiting state into the entry states and is invoked by the adaptive step size control whenever the step size required to meet the local error tolerance is smaller than a minimum specified step size. Using the error tolerance and minimum step size parameters, the accuracy of the solution can be increased at the expense of efficiency.

Since the single model framework requires the representation of all states of all phases, models of complex systems have potentially large state spaces. Sparse matrix techniques can reduce the computational burden somewhat, but behavioral decomposition13 and cyclic models (one of which is discussed in the example section below) are necessary to represent the more complex systems. Since the nonhomogeneous Markov model requires that time-varying transition rates be re-evaluated at each step of the solution and that increasingly smaller step sizes be taken as one of these rates approaches a discontinuity, the solution time is typically longer than that required for a series of traditional homogeneous Markov models. This additional computational effort is the price paid for the flexibility of the model.

3 EXAMPLES

This section contains three examples that illustrate the use of nonhomogeneous Markov reward models. The models were solved using PUMA, a software tool being developed at Clemson University and implementing the solution techniques of Section 2.3.
3.1 Two-component model

Consider a system with two components that is initialized and loaded (or transported and deployed) and then remains on duty until the end of a 100 h period. A model of this system is given in Fig. 1. The system starts at time t = 0 in state 0.
Fig. 1. Two-component model (initialization, loading, on-duty, and inactive submodels).
During the initialization phase, state 0 represents a system with both components operational; state 1 represents a system with one operational component and one failed component; state 2 represents failure of both components; and state 3 represents initialization failure. The components each fail at rate λ1, and the initialization system fails at rate σ. The failure of the first component is recoverable with probability c1. However, the initialization system failure is unrecoverable, as is the failure of the second component.

After the first phase change, represented by the rate h1(t), the initialization is complete and loading begins if at least one component is operational. During this phase, states 4 and 5 represent the availability of two or one components, respectively. The component failure rate in this phase is λ2, and any component failure leads to an unrecoverable system failure (state 6). If two components are operational, loading is completed according to h2(t). However, if only one component is operational, loading requires a different, possibly longer, interval. Therefore a different transition rate, h3(t), is necessary.

After loading, the components are on duty, and states 7 and 8 represent the availability of two or one components, respectively. Each component
fails at a possibly time-dependent rate λ3(t), and the first component failure is recoverable with probability c2. The work done while on duty is proportional to the number of active components. That is, the reward rate for two components is 2 per unit time, and the rate for one component is 1 per unit time. The accumulated reward is thus the measure of work performed and is accumulated for states 7 and 8 (i.e. the box in the diagram). The mission ends by deactivating the components according to h4(t). This model demonstrates the use of state-dependent phase changes, since the phase change time out of state 5 (one component) can differ from the phase change time out of state 4 (two components).

In Tables 1 and 2, sample model parameters and resulting performance estimates are given. All phase change times in this solution have impulse distributions; these transition rates are represented by δ(t), an impulse function at time t. The first column of values in the tables represents a constant failure rate while on duty, and the second column of values represents an increasing failure rate based on the Weibull distribution. For the set of model parameters, the maximum possible accumulated reward in states 7 and 8 is 197.5. Because of the probability of failures, however, this mission value is not attained in either solution.
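The impulse phase changes δ(t) used in Tables 1 and 2 are realized, as described in Section 2.3, by an instantaneous transfer of probability mass between specified states at the event time. A minimal sketch of such a transfer is given below; the state numbering follows the description of Fig. 1, while the probability values and the transfer map are illustrative assumptions.

def apply_impulse(p, transfer_map):
    """Impulse (delta) phase change: instantaneously move the probability mass of
    each source state to its destination state(s).  transfer_map maps a source
    state to a list of (destination, fraction) pairs whose fractions sum to 1."""
    p = list(p)
    for src, targets in transfer_map.items():
        mass, p[src] = p[src], 0.0
        for dst, frac in targets:
            p[dst] += frac * mass
    return p

# Initialization-completion impulse h1(t) = delta(1.0) for the model of Fig. 1:
# state 0 (two components up) moves to state 4, state 1 (one up) moves to state 5.
p = [0.95, 0.03, 0.01, 0.01, 0, 0, 0, 0, 0, 0, 0]   # illustrative probabilities at t = 1
print(apply_impulse(p, {0: [(4, 1.0)], 1: [(5, 1.0)]}))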
TABLE 1
Two-component Model Parameters

Symbol    Interpretation                                   Constant failure rate    Increasing failure rate
σ         initialization system failure rate               10^-5                    (same)
λ1        processor failure rate during initialization     10^-4                    (same)
λ2        processor failure rate during loading            10^-3                    (same)
λ3(t)     processor failure rate on station                10^-3                    α^-1 β (t - γ)^(β-1)
α         scale parameter of Weibull distribution                                   10^3
β         shape parameter of Weibull distribution                                   1.2
γ         location parameter of Weibull distribution                                1.25
c1        coverage probability during initialization       0.9                      (same)
c2        coverage probability on duty                     0.5                      (same)
h1(t)     initialization completion (impulse)              δ(1.0)                   (same)
h2(t)     loading completion (impulse), 2 processors       δ(1.25)                  (same)
h3(t)     loading completion (impulse), 1 processor        δ(1.5)                   (same)
h4(t)     mission completion (impulse)                     δ(100.0)                 (same)
r7        reward rate, 2 processors                        2.0                      (same)
r8        reward rate, 1 processor                         1.0                      (same)
TABLE 2
Two-component Model Predictions at 100 h

Model result       Interpretation                                 Constant failure rate    Increasing failure rate
P2                 Pr[processor failures during initialization]   2.0007 x 10^-5           2.0007 x 10^-5
P3                 Pr[initialization system failure]              9.9999 x 10^-6           9.9999 x 10^-6
P6                 Pr[system failure during loading]              4.9986 x 10^-4           4.9986 x 10^-4
P9                 Pr[system failure on duty]                     9.3981 x 10^-2           2.1907 x 10^-1
P10                Pr[mission completion]                         9.0549 x 10^-1           7.8040 x 10^-1
acc_reward(7,8)    E[total work on duty]                          183.53                   168.08
acc_reward(8,8)    E[time on duty in degraded mode]               4.4346                   8.7910
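The increasing on-duty failure rate λ3(t) of Table 1 is a three-parameter Weibull hazard whose wear-out clock starts at the location parameter γ, i.e. at roughly the beginning of the duty phase. A brief sketch of evaluating such a hazard is given below; the specific form h(t) = α^-1 β (t - γ)^(β-1) and the scale value are assumptions based on Table 1, and the evaluation times are arbitrary.

def weibull_hazard(t, alpha, beta, gamma):
    """Three-parameter Weibull hazard (assumed form):
    h(t) = (beta / alpha) * (t - gamma)**(beta - 1) for t >= gamma, else 0,
    so the cumulative hazard is H(t) = (t - gamma)**beta / alpha."""
    if t < gamma:
        return 0.0
    return (beta / alpha) * (t - gamma) ** (beta - 1)

# Shape and location as in Table 1; the scale value is assumed:
for t in (2.0, 10.0, 50.0, 100.0):
    print(t, weibull_hazard(t, alpha=1.0e3, beta=1.2, gamma=1.25))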
3.2 Multiple objective model

A second example of a nonhomogeneous Markov reward model is given in Fig. 2. This model represents a two-component system with multiple second phases, so that one of two mission objectives may be accomplished. This situation might occur when a system fails to complete an initial task by a given time and must switch from a primary objective to a secondary objective.

Fig. 2. Multiple-objective model (initial phase, primary objective, and secondary objective).

In this model the system begins at time t = 0 in state 0. State 1 represents the failure of one component, and state 2 represents system failure by the loss of both components during the first phase. Repair is allowed in the first phase only and only when one component is still operational (state 1). Two phase changes occur at different times out of state 0. If the system is fully operational at the phase change time described by h1(t), a phase change from state 0 to state 3 is made and the primary objective begins. The components each fail during the primary objective at rate λ2, and state 4 represents system failure during the primary objective. Since no failures are tolerated while performing the primary objective, the mission reliability for the primary objective is p3(t). If the system misses the first phase change time by being partially degraded, a second phase change time described by h2(t) is possible. One failure is tolerated during the secondary objective, and the mission reliability for the secondary objective is p5(t) + p6(t). State 5 represents a system with both components operational, state 6 represents the loss of one component, and state 7 represents the loss of both components. Each component fails at rate λ3 during the secondary objective.

The value of accomplishing the objectives is represented by different reward rates. The primary objective is given a rate of 5. The secondary objective is given a rate of 2 if the system is fully operational and a rate of 1 if it is partially degraded.

Tables 3 and 4 contain the model parameters and resulting performance predictions. The first column of values in Table 4 reflects a discrete phase change time, and the second column of values reflects a model in which the phase change time is uniformly distributed over an hour's interval but has the same mean as in the first model. This is denoted by uhr(a, b), which is the hazard rate derived from the uniform distribution over the interval a to b. Even though the individual instantaneous probabilities vary, the total reliability and overall value of the mission remain high.
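The uhr(a, b) notation has a simple closed form: the hazard rate of a phase-change time uniformly distributed over [a, b] is h(t) = f(t)/(1 - F(t)) = 1/(b - t) for a ≤ t < b, and 0 for t < a. This rate grows without bound as t approaches b, which is exactly the situation handled by the fixed-time residual-transfer event of Section 2.3. A minimal sketch, using the uhr(3, 4) entry of Table 3:

def uhr(a, b):
    """Hazard rate of a phase-change time uniform on [a, b]:
    h(t) = 1 / (b - t) for a <= t < b, and 0 before a.
    The rate grows without bound as t approaches b."""
    def hazard(t):
        if t < a:
            return 0.0
        if t >= b:
            return float("inf")   # all probability mass has left the state by t = b
        return 1.0 / (b - t)
    return hazard

h1 = uhr(3.0, 4.0)                 # endpoints from the uhr(3, 4) entry of Table 3
print(h1(3.0), h1(3.5), h1(3.9))   # 1.0, 2.0, 10.0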
TABLE 3
Multiple Objective Model Parameters

Symbol    Interpretation                       Values
λ1        first phase failure rate             0.04       (same)       (same)
μ         first phase repair rate              5.0        5.0          0.0
λ2        primary objective failure rate       0.02       (same)       (same)
λ3        secondary objective failure rate     0.01       (same)       (same)
h1(t)     primary objective initiation         δ(3.5)     uhr(3, 4)    uhr(3, 4)
h2(t)     secondary objective initiation       δ(6.5)     uhr(6, 7)    uhr(6, 7)
r1        degraded mode reward rate            1.0        (same)       (same)
r3        primary objective reward rate        5.0        (same)       (same)
r5        reward rate, 2 processors            2.0        (same)       (same)
r6        reward rate, 1 processor             1.0        (same)       (same)
TABLE 4
Multiple Objective Model Predictions at 10 h

Model result         Interpretation                 Values
P3                   Pr[primary objective]          7.5744 x 10^-1    7.6714 x 10^-1    5.8279 x 10^-1
P5 + P6              Pr[secondary objective]        1.5423 x 10^-2    3.0938 x 10^-3    1.9446 x 10^-1
P3 + P5 + P6         Pr[operational mode]           7.7286 x 10^-1    7.7023 x 10^-1    7.7725 x 10^-1
acc_reward(1)        E[time in degraded mode]       0.05531           0.05483           1.066
acc_reward(3)        E[primary objective value]     28.11             28.45             21.65
acc_reward(5,6)      E[secondary objective value]   0.1054            0.0212            0.6932
acc_reward(3,5,6)    E[total system value]          28.22             28.47             22.34
The third column of values reflects the impact of an unrepairable configuration during the first phase. Here the system reliability (total probability of remaining operational) is higher than in the first or second columns, but the overall value of the mission has decreased significantly. Thus, different design decisions can result according to whether this system is evaluated merely on the basis of total operational probability or on another mission value.
3.3 Pipe leakage model

Figure 3 depicts a model in which a piping system is periodically inspected and repaired. As are those above, this is a simplified model used to demonstrate phased mission models. It does not illustrate the redundant components typically found in an actual industrial plant, nor does it reflect actual operating values.

Fig. 3. Pipe leakage model.

The system begins in state 0 at time t = 0; this represents a system operating within tolerances. State 1 signifies wall thinning, whereas state 3 represents a leak and state 5 loss of system. Wall thinning occurs with rate λ1, leakage occurs with rate λ2 when wall thinning is present, and loss of system occurs with rate λ3 when leaks are present. Phase changes occur according to h1(t) and h3(t), when periodic inspections are made. There is a probability c1 that wall thinning is detected, when present, during an inspection and probability c2 that a leak is detected, when present, during an inspection. Thus, state 2 represents an undetected wall thinning after inspection; it is entered after an inspection with probability 1 - c1. In like manner, state 4 represents an undetected leak. Upon detection of a wall thinning or leak, repair actions are begun (state 6). According to phase change rates h2(t) and h4(t), repair actions are resolved in one of three ways: the repair is successful with probability c3, and the system returns to state 0; the system is safely shut down with probability c4, and the system enters state 7; or, the system can be neither repaired nor shut down, resulting in loss of system. Moreover, there is also the possibility of loss of system if the pipe is inadvertently damaged during the repair. This latter possibility is represented by a transition with rate λ4.

This model includes a cycle so that repetitive phases are modeled without requiring additional states. Moreover, probabilities are associated with phase changes to indicate the different results stemming from a phase change. Here the probabilities of detection are state dependent but are the same for each inspection. An easily-obtained generalization of this model provides phase change dependence for these probabilities.

The model parameters are given in Table 5 and the model results in Table 6 for a 72 h period. The first model solution assumes instantaneous inspections and detections every 24 h, while the second solution assumes 2 h inspection periods with detection uniformly distributed over these intervals. The first solution yields an accumulated reward that is slightly above the second and a slightly lower loss-of-system probability. The first solution might thus be characterized as making an optimistic assumption when it simplifies the representation of the effects of inspection periods.

An additional effectiveness measure can be obtained from this model by associating a penalty with transition into the loss-of-system state.
TABLE 5
Leakage Model Parameters

Symbol    Interpretation                               Values
λ1        wall thinning failure rate                   10^-3         (same)
λ2        leakage failure rate                         10^-2         (same)
λ3        loss-of-system failure rate with leak        10^-1         (same)
λ4        loss-of-system failure rate during repair    2 x 10^-1     (same)
c1        detection probability of wall thinning       0.6           (same)
c2        detection probability of leak                0.9           (same)
c3        repair probability                           0.6           (same)
c4        shutdown probability                         0.37          (same)
h1(t)     inspection                                   δ(24.0)       uhr(23.0, 25.0)
h2(t)     repair action completion                     δ(30.0)       (same)
h3(t)     inspection                                   δ(48.0)       uhr(47.0, 49.0)
h4(t)     repair action completion                     δ(54.0)       (same)
r0        operational reward rate                      1.0           (same)
r1        wall-thinning reward rate                    1.0           (same)
r2        wall-thinning reward rate                    1.0           (same)
r3        leakage reward rate                          0.8           (same)
r4        leakage reward rate                          0.8           (same)
r5        loss-of-system reward rate                   -20.0         (same)
r6        repair reward rate                           -2.0          (same)
r7        shutdown reward rate                         -1.0          (same)
The final system value would then be the total accumulated reward plus the product of the probability of loss-of-system and the loss-of-system penalty. This could also be done for the case of shutdown. Yet another cost that can be taken into account is the fixed cost of inspections; this can be added once for each pair of phase change transitions h_2n(t), h_2n+1(t) (n > 0).
TABLE 6
Leakage Model Predictions at 72 h

Model result        Interpretation             Values
P5                  Pr[loss of system]         2.7097 x 10^-2     2.7229 x 10^-2
P7                  Pr[shutdown]               3.1201 x 10^-3     3.2654 x 10^-3
acc_reward(0-7)     E[total system value]      53.738             53.474
acc_reward(6)       E[repair cost]             -0.19565           -0.20024
acc_reward(7)       E[shutdown cost]           -0.093223          -0.098198
acc_reward(5)       E[loss-of-system cost]     -16.921            -17.158
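As a numerical illustration of the penalty adjustment described above, suppose a one-time loss-of-system penalty of -100 is chosen; this value is hypothetical, since no penalty value is specified here. Using the first column of Table 6, the adjusted final system value would be computed as follows, and a shutdown penalty could be applied to P7 in the same way.

acc_reward_total = 53.738     # E[total system value], Table 6, first solution
p_loss = 2.7097e-2            # Pr[loss of system], Table 6, first solution
loss_penalty = -100.0         # hypothetical one-time penalty for entering the loss state

final_value = acc_reward_total + p_loss * loss_penalty
print(final_value)            # 53.738 - 2.710 = 51.03 (approximately)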
4 SIGNIFICANCE OF THE NEW APPROACH

Traditional Markov modeling is limited in its representation of phase change behavior and in its definition of system effectiveness as system reliability. The nonhomogeneous Markov reward approach removes these limitations while retaining the flexibility and efficiency of Markov modeling. This approach is characterized by the generalization of the concept of a transition to include phase changes as well as failures and repairs and by the inclusion of accumulated reward measures in a phased mission context. The nonhomogeneous Markov reward approach takes into account the factors that must be addressed in the total mission environment: different tasks with different configurations, environments, and stresses; sequence-, state-, and time-dependent behavior; random phase durations; and dependability measures which include but are not limited to reliability. Future evaluation tools must support these factors in order to adequately assess the effectiveness of complex systems. The PUMA analysis tool at Clemson University is undergoing further development in an effort to meet all such requirements.

REFERENCES

1. Tillman, F. A., Hwang, C. L. & Kuo, W., System effectiveness models: an annotated bibliography. IEEE Trans. Reliability, 29 (1980) 295-304.
2. Clarotti, C. A., Contini, S. & Somma, R., Repairable multiphase systems: Markov and fault-tree approaches for reliability evaluation. In Synthesis and Analysis Methods for Safety and Reliability Studies, ed. G. Apostolakis, S. Garribba & G. Volta. Plenum Press, New York, 1980, pp. 45-58.
3. Alam, M. & Al-Saggaf, U. M., Quantitative reliability evaluation of repairable phased-mission systems using Markov approach. IEEE Trans. Reliability, 35 (1986) 498-503.
4. Vujosevic, M. & Meade, D., Reliability evaluation and optimization of redundant dynamic systems. IEEE Trans. Reliability, 34 (1985) 171-4.
5. Essary, J. D. & Ziehms, H., Reliability analysis of phased missions. In Reliability and Fault Tree Analysis, ed. R. E. Barlow, J. B. Fussell & N. D. Singpurwalla. Society of Industrial and Applied Mathematics, Philadelphia, 1975, pp. 213-36.
6. Burdick, G. R., Fussell, J. B., Rasmuson, D. M. & Wilson, J. R., Phased mission analysis: A review of new developments and an application. IEEE Trans. Reliability, 26 (1977) 43-9.
7. Pedar, A. & Sarma, V. V. S., Phased-mission analysis for evaluating the effectiveness of aerospace computing-systems. IEEE Trans. Reliability, 30 (1981) 429-37.
8. Veatch, M. H., Reliability of periodic, coherent, binary systems. IEEE Trans. Reliability, 35 (1986) 504-7.
9. Tillman, F. A., Lie, C. H. & Hwang, C. L., Simulation model of mission effectiveness for military systems. IEEE Trans. Reliability, 27 (1978) 191-4.
10. Altschul, R. E. & Nagel, P. M., The efficient simulation of phased fault trees. In Proc. 1987 ARMS, Inst. Electrical & Electronics Engineers, New York, January 1987, pp. 292-6.
11. Kolarik, W., Davenport, J., Fant, E. & McCoun, K., Early design phase life cycle reliability modeling. In Proc. 1987 ARMS, Inst. Electrical & Electronics Engineers, New York, January 1987, pp. 335-40.
12. Singh, N., Recursive estimation of phased mission reliability. IEEE Trans. Reliability, 34 (1985) 545-9.
13. Dugan, J. B., Trivedi, K. S., Smotherman, M. K. & Geist, R. M., The hybrid automated reliability predictor. AIAA J. Guidance, Control, and Dynamics, 9 (1986) 319-31.
14. Nathan, I., Mission effectiveness model for manned space flight. IEEE Trans. Reliability, 14 (1965) 84-93.
15. Lie, C. H., Kuo, W., Tillman, F. A. & Hwang, C. L., Mission effectiveness model for a system with several mission types. IEEE Trans. Reliability, 33 (1985) 346-52.
16. Smotherman, M. K. & Zemoudeh, K., A nonhomogeneous Markov model for phased mission reliability analysis. IEEE Trans. Reliability, 38(5) (1989).
17. Trivedi, K. S., Probability and Statistics with Reliability, Queueing, and Computer Science Applications. Prentice-Hall, Englewood Cliffs, 1982.
18. Howard, R. A., Dynamic Probabilistic Systems, Vol. II: Semi-Markov and Decision Processes. Wiley, New York, 1971.