An inhomogenous state graph model and application for a phased mission and tolerable downtime problem




Reliability Engineering and System Safety 49 (1995) 51-57



© 1995 Elsevier Science Limited Printed in Northern Ireland. All rights reserved 0951-8320/95/$9.50

G. Becker
Technische Universität Berlin, Institut für Prozeß- und Anlagentechnik, Marchstr. 18, 10587 Berlin, Germany

L. Camarinopoulos & G. Zioutas
Aristotelian University of Thessaloniki, Department of Mathematics, Physics and Computer Science, Thessaloniki, Greece

(Received 28 October 1994; accepted 27 January 1995)

This paper provides a modelling framework for the use of inhomogenous state graph techniques for components and systems, where the duration for which any given group of states is occupied may be limited by a given value. The behavior of this type of process, including discontinuities in the transition rates, is elaborated in detail. This approach is useful, e.g., to build component models with constant repair times and to model systems with tolerable down times. An example for a problem involving tolerable down times and a phased mission is provided. A systematic state graph approach applied to systems composed of several components allows the dependencies between the components to be treated precisely; such dependencies may cause large conservatisms if a Boolean modelling technique is applied.

1 INTRODUCTION

Markov processes, as well as state graph processes in general, are useful for the modelling of (small) technical systems composed of components exhibiting dependencies in their failure (or repair) behavior.1 Examples of such dependencies are:

- system behavior depending on the sequence of component failures in time;
- failure rate of some component depending on the state of some other component(s);
- limited repair capacities.

Such problems are beyond the scope of ordinary Boolean models like fault trees and reliability block diagrams with independent basic events. An inhomogenous state graph model results if, e.g., inspections occur at fixed times,2 or if the structure of the system or its success criteria vary with calendar time (phased mission problems).3,4

A restriction for the use of Markov models is the fact that, generally, the durations for which the process is in any state have to follow an exponential distribution. This may be relieved to some extent if semi-Markov models are applied. However, semi-Markov models impose numerical problems even in the homogenous case. Also, though durations may in fact be distributed in any way, this is in most cases not true for the life times and repair times of the components. The approach discussed in this paper does not allow for arbitrary distributions, but for fixed (maximum) durations in a given set of states, i.e., a subgraph of the state graph of the process. If maximum durations are defined not for subgraphs, but for single states only, the process will belong to the class of semi-Markov models. It is simpler than a general semi-Markov model, as it requires the solution of differential equations rather than integral equations. It covers practical needs, because most applications of semi-Markov models for reliability purposes are restricted to a mixture of exponentially distributed durations and constant durations. It is readily applicable for processes which are inhomogenous with respect to calendar time. This last property is important because



in practice, component failures can often be detected only by inspections, which usually occur regularly at fixed calendar times. If such components exist, they tend to dominate the reliability behavior of the system. Only in special cases can this be modelled with a homogenous semi-Markov model.

A typical application for this type of process is the tolerable down time reliability problem, which is defined in the following way: there are systems for which failure is only hazardous if the duration of system failure exceeds some value $T_{tol}$, which is given by the physical properties. As an example, consider a vessel in a chemical plant where, due to some exothermic reaction, heat is generated which has to be removed by a cooling device. It will take some time $T_{tol}$ (due to the thermal heat capacities involved) until the temperature reaches a critical value. Should repair of the cooling device succeed before $T_{tol}$ elapses, the consequences of the accident may be insignificant in comparison with the case when the critical temperature is exceeded.

Sometimes tolerable down times occur which are time dependent themselves. Consider the residual heat removal in a nuclear power plant. A nuclear reactor cannot be switched off instantly. After shut down by insertion of the control rods, residual heat is produced which (starting from some 15% of the thermal power of the reactor) slowly decreases with time, which leads to an increasing tolerable down time.

In the field of Boolean modelling, treatment of tolerable down times is well known.5,6 As for a state graph approach, the modelling of this type of behavior is straightforward if a state (or a set of states) can be introduced in a Markovian model which is left after a fixed duration $T_{tol}$, provided it has not been left before for other reasons, such as a successful repair. It should be noted, though, that the resulting process will not, in general, be Markovian.

Certainly, the effort required is still large compared with the treatment of independent components. In practical applications, a reasonably small group of components exhibiting some interdependencies will be part of a large system of independent components. There is no need (and no feasibility) to model the whole system in a large state graph model. Rather, the small group should be modelled with the appropriate state graph method and the results should be propagated to a Boolean model based on fault trees or reliability block diagrams.
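To give a feeling for the effect of a tolerable down time (a simple illustration, not taken from the paper): if repair of the cooling device is exponentially distributed with rate $\mu$, the probability that a given failure becomes hazardous, i.e., that the repair is not completed within $T_{tol}$, is

$$\Pr\{\text{repair time} > T_{tol}\} = e^{-\mu T_{tol}}.$$

With a repair rate of $0.05/\mathrm{h}$, as used in the example of Section 3, this probability is $e^{-1} \approx 0.37$ for $T_{tol} = 20\,\mathrm{h}$, but $e^{-0.1} \approx 0.90$ for $T_{tol} = 2\,\mathrm{h}$; the tolerable down time can thus change the frequency of hazardous system failures considerably.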

2 DIFFERENTIAL EQUATIONS DESCRIBING PROCESSES WITH FIXED DURATIONS

In the following sections, a system of differential equations shall be derived which is suitable to describe a state graph process where some states are left after a fixed maximum duration. After a formal definition of the process as a state graph, in a first step a useful equation is derived relating the state probability to the frequency densities with which a state is reached and left. This equation does not depend on the Markovian property. In a second step, this equation is used to find the target system of differential equations mentioned. These results also apply to the case of inhomogenous (i.e., time dependent) maximum durations.

To formally define the problem, consider a finite state graph $G = (v, e_1, e_2)$. The set of vertices $v$ shall represent the states of the process. There are two sets of directed edges, $e_1$ and $e_2$, defined on $v \times v$. Start and end vertices are assumed to be different, i.e., the graph contains no loops. The elements of $e_1$ represent ordinary transitions and are labeled with the transition rates $\lambda_{ij}(t)$, which are functions of calendar time if the process is inhomogenous. Calendar time may be used to model scheduled inspections or changes in environmental conditions; it cannot, in general, be used to model transition rates which depend on the life lengths of the components. Elements of $e_2$ represent transitions occurring if the maximum (uninterrupted) duration in a given subgraph $G_{ij}^*$ is exceeded. All edges $e_{ij} \in e_2$ are labeled with a corresponding subgraph $G_{ij}^*$ with the same indices $i$ and $j$. The stochastic process associated with this subgraph will subsequently be called the 'inner process'. Obviously, $i \in G_{ij}^*$ and $j \notin G_{ij}^*$ must hold. In addition, elements of $e_2$ are labeled with corresponding maximum durations $\tau_{ij}$, which in general may also be time dependent. To avoid contradictions, assume that any two of these subgraphs are either disjoint or one is a subgraph of the other. Transitions are assumed to occur s-independently, which implies that in a time interval $(t, t+dt)$ there will be, at most, one transition. Subsequently, members of $e_1$ shall be referred to as $\lambda$-transitions, and members of $e_2$ as $\tau$-transitions.
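For readers who want to experiment with such models, one possible in-memory representation of the graph $G = (v, e_1, e_2)$ is sketched below. This is only an illustrative data structure; the class and field names are assumptions and are not part of the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, FrozenSet, Tuple

Rate = Callable[[float], float]          # lambda_ij(t), a function of calendar time


@dataclass
class TauEdge:
    subgraph: FrozenSet[int]             # states of G*_ij (must contain i, must not contain j)
    tau: Callable[[float], float]        # maximum duration tau_ij, possibly time dependent


@dataclass
class StateGraph:
    states: FrozenSet[int]
    lam: Dict[Tuple[int, int], Rate] = field(default_factory=dict)      # e1: lambda-transitions
    tau: Dict[Tuple[int, int], TauEdge] = field(default_factory=dict)   # e2: tau-transitions
```

A $\tau$-edge thus carries both the subgraph $G_{ij}^*$ it refers to and its (possibly time dependent) maximum duration, mirroring the labelling described above.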

2.1 State probabilities and frequency densities

Let $H_{rj}(t)$ be the expected value of the number of times that state $j$ of the given process is reached in the interval $(0, t)$, and $H_{lj}(t)$, accordingly, the expected value of the number of times it is left. Then, if $H_{rj}(t)$ and $H_{lj}(t)$ are differentiable, $dH_{rj}(t) = h_{rj}(t)\,dt$ and $dH_{lj}(t) = h_{lj}(t)\,dt$ will define corresponding density functions. As the process has the property that, at most, one transition can occur in some interval $(t, t+dt)$, the frequency densities may be interpreted as the probabilities that a transition occurs into (resp. from) state $j$ in $(t, t+dt)$. Formally, with

$$l_{jt} = \{\text{the event that state } j \text{ is left in } (t, t+dt)\} \qquad (1)$$

$$r_{jt} = \{\text{the event that state } j \text{ is reached in } (t, t+dt)\} \qquad (2)$$

the frequency densities may be written as

$$h_{lj}(t)\,dt = \Pr\{l_{jt}\} \qquad (3)$$

$$h_{rj}(t)\,dt = \Pr\{r_{jt}\}. \qquad (4)$$

Note that these are expressed as unconditional probabilities. They will depend on the initial state of the process at the time the process starts. This is omitted here and will be reflected subsequently by the initial conditions of the resulting differential equations. Furthermore, with

$$Z_{jt} = \{\text{the event that the process is in state } j \text{ at time } t\} \qquad (5)$$

the state probability can be expressed as

$$p_j(t) = \Pr\{Z_{jt}\}. \qquad (6)$$

Any finite stochastic process with independent transitions and without loops, where the transition frequencies are differentiable w.r.t. time, obeys for all of its states (indexed with $j$) the equation

$$\frac{dp_j(t)}{dt} = h_{rj}(t) - h_{lj}(t). \qquad (7)$$

Consider $p_j(t+dt) = \Pr\{Z_{j,t+dt}\}$. The process will be in state $j$ at $t+dt$ if either state $j$ is reached in the interval $(t, t+dt)$, or if state $j$ is assumed already at time $t$ and it is not left during $(t, t+dt)$, i.e.,

$$Z_{j,t+dt} = r_{jt} \cup (Z_{jt} \cap \neg l_{jt}). \qquad (8)$$

This may be rewritten using de Morgan's laws:

$$Z_{j,t+dt} = r_{jt} \cup \neg(\neg Z_{jt} \cup l_{jt}). \qquad (9)$$

All events in (9) can be treated as exclusive events, as [concerning $(\neg Z_{jt} \cup l_{jt})$] a state may not be left unless it has been assumed before, and [concerning $r_{jt} \cup \neg(\neg Z_{jt} \cup l_{jt})$] a state may not be reached starting from itself by a single transition, as it has been assumed that the state graph of the process has no loops. Thus, the resulting probability is

$$\Pr\{Z_{j,t+dt}\} = \Pr\{r_{jt}\} + \bigl(1 - [(1 - \Pr\{Z_{jt}\}) + \Pr\{l_{jt}\}]\bigr) = \Pr\{r_{jt}\} + \Pr\{Z_{jt}\} - \Pr\{l_{jt}\} \qquad (10)$$

which may be expressed as

$$p_j(t+dt) = h_{rj}(t)\,dt + p_j(t) - h_{lj}(t)\,dt. \qquad (11)$$

Simple calculus allows to transform this to eqn (7).

Now consider the case that the frequencies $H_{rj}(t)$ or $H_{lj}(t)$ have a sudden change at some $t_k$, i.e., they are not differentiable at this point in time. In this case the probabilities in (3) and (4) may be rewritten as

$$h_{lj}^*(t_k) = \Pr\{l_{jt_k}\} = H_{lj}(t_k + dt) - H_{lj}(t_k) \qquad (12)$$

$$h_{rj}^*(t_k) = \Pr\{r_{jt_k}\} = H_{rj}(t_k + dt) - H_{rj}(t_k) \qquad (13)$$

i.e., they are no longer infinitesimally small values, but finite probabilities. For such $t_k$, infinitesimal contributions are to be neglected if at least one of $h_{lj}^*(t_k)$ and $h_{rj}^*(t_k)$ is non-zero, and the state probability can be found by a corresponding version of (7) as

$$p_j(t_k + dt) - p_j(t_k) = h_{rj}^*(t_k) - h_{lj}^*(t_k). \qquad (14)$$

In order to obviate the necessity to distinguish between finite and infinitesimal values, Dirac notation may be used to express the two in a single function. This means, if there is a discontinuity at $t_k$,

$$h_{rj}(t) = h_{rj}^{\circ}(t) + h_{rj}^*(t_k)\,\delta(t - t_k) \qquad (15)$$

where $h_{rj}^{\circ}$ is the contribution of (4), where $H_{rj}$ is differentiable. Subsequently, frequency densities will be used in terms of (15), keeping in mind that for the solution, (14) must be used if there is a discontinuity.

To summarize, note that (7) and (14) will allow determination of state probabilities for any stochastic multi-state process if the frequencies of reaching and leaving are differentiable apart from some given $t_k$, and if they can be given. To find the state probabilities, (7) is integrated between consecutive points $t_k$, and (14) is used to account for discontinuities.

Though (7) and (14) have been derived without using the Markovian property, they hold, of course, for Markov processes. For an ordinary inhomogenous Markov process with finite transition rates, the definition of the transition rates leads immediately to

$$h_{rj}(t) = \sum_{\forall i \neq j} p_i(t)\,\lambda_{ij}(t) \qquad (16)$$

$$h_{lj}(t) = p_j(t) \sum_{\forall k \neq j} \lambda_{jk}(t) \qquad (17)$$

which corresponds [with (7)] to the well known system of differential equations for ordinary Markov processes.7

Recently,3,4,8 Markov processes have been used to model phased mission problems, where the change in the mission occurs at given points in calendar time. The change of states at some point in time $t_k$, where the mission changes, may be interpreted as a transition rate, which will consist of a Dirac pulse9 $\delta(t - t_k)$. As the change of mission is certain, the weight of this pulse will be unity. In general, it is useful to also have discontinuities which will occur with a probability different from unity, e.g., if (imperfect) inspections at fixed time points are to be modelled,2,10 so let

$$\lambda_{ij}(t) = q_{ij}\,\delta(t - t_k). \qquad (18)$$



For a $t_k$ where at least one $q_{ij}$ is non-zero, this leads to

$$h_{rj}^*(t_k) = \sum_{\forall i \neq j} p_i(t_k)\,q_{ij} \qquad (19)$$

$$h_{lj}^*(t_k) = p_j(t_k) \sum_{\forall k \neq j} q_{jk}. \qquad (20)$$

If the process is solved up to $t_k$ using (7), (16) and (17), $p_j(t_k)$ is known; thus, (19) and (20) can be solved to give input for (14), which will render $p_j(t_k + dt)$ to reflect the appropriate change in the state probabilities.
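As an illustration of how eqns (7), (14), (16), (17), (19) and (20) can be evaluated numerically, the following sketch integrates an inhomogenous Markov process with an explicit Euler scheme and applies Dirac-pulse transitions at given calendar times. This is only a minimal sketch under stated assumptions (explicit Euler stepping, a user-supplied rate matrix with zero diagonal); the function name and interface are illustrative and this is not the authors' MARK code.

```python
import numpy as np

def integrate_markov(p0, rate_matrix, jumps, t_end, dt=0.01):
    """Explicit Euler integration of dp_j/dt = h_rj - h_lj, eqns (7), (16), (17),
    with discontinuities at fixed calendar times handled via eqns (14), (19), (20).

    p0          -- initial state probabilities, shape (n,)
    rate_matrix -- function t -> (n, n) array of lambda_ij(t), zero diagonal
    jumps       -- list of (t_k, Q), where Q[i, j] = q_ij is the probability of an
                   instantaneous i -> j transition at calendar time t_k
    """
    p = np.asarray(p0, dtype=float).copy()
    pending = sorted(jumps, key=lambda jk: jk[0])
    t = 0.0
    while t < t_end:
        # Dirac-pulse transitions scheduled in (t, t + dt]: eqns (14), (19), (20)
        while pending and t < pending[0][0] <= t + dt:
            _, Q = pending.pop(0)
            stay = 1.0 - Q.sum(axis=1)   # probability of not being moved at t_k
            p = p * stay + p @ Q
        lam = rate_matrix(t)
        h_r = p @ lam                    # eqn (16): sum_i p_i(t) lambda_ij(t)
        h_l = p * lam.sum(axis=1)        # eqn (17): p_j(t) sum_k lambda_jk(t)
        p = p + (h_r - h_l) * dt         # eqn (7)
        t += dt
    return p
```

For a phased mission with a certain change of mission at some $t_k$, the corresponding rows of $Q$ sum to unity, reproducing the Dirac pulse of weight one mentioned above.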

2.2 Frequency densities for a process with fixed durations

To be able to use the result of the last section [eqns (7) and (14)], it is necessary to determine the frequency density functions $h_{rj}(t)$ and $h_{lj}(t)$ for the process with maximum durations, as defined in Section 2. The equation for $h_{rj}(t)$ will be derived in detail, whereas for $h_{lj}(t)$, only the final result, which may be obtained in much the same way, shall be given for the sake of brevity. Also, discontinuities will not be discussed here, as the procedure has been outlined sufficiently in Section 2.1.

To determine the frequency density with which a state $j$ is reached, the event $r_{jt}$ will be split into the events

$$r_{jt}^{\lambda} = \{\text{the event that a } \lambda\text{-transition occurs into } j \text{ in } (t, t+dt)\}$$

$$r_{jt}^{\tau} = \{\text{the event that a } \tau\text{-transition occurs into } j \text{ in } (t, t+dt)\}$$

(notation is as in Sections 2 and 2.1). As two independent transitions within the same infinitesimal interval $(t, t+dt)$ are negligible, for $r_{jt}^{\lambda}$ and $r_{jt}^{\tau}$ the following holds:

$$r_{jt} = r_{jt}^{\lambda} \cup r_{jt}^{\tau} \qquad (21)$$

$$\Pr\{r_{jt}\} = \Pr\{r_{jt}^{\lambda}\} + \Pr\{r_{jt}^{\tau}\}. \qquad (22)$$

From the definition of the transition rates,

$$\Pr\{r_{jt}^{\lambda}\} = \sum_{\forall i \neq j} p_i(t)\,\lambda_{ij}(t)\,dt \qquad (23)$$

where $\lambda_{ij}(t) = 0$ if there is no $\lambda$-transition from $i$ to $j$.

To determine $r_{jt}^{\tau}$, it must be noted that there are (at least) two possibilities for the understanding of the time dependent $\tau_{ij}(t)$. One possibility is that $t$ means the time when $G_{ij}^*$ has been entered. The value is taken at this time and remains constant until $G_{ij}^*$ is left, or until it elapses, whichever happens first. The other possibility is that $t$ means the time when $G_{ij}^*$ is to be left, i.e., if at some time $t$ the uninterrupted duration for which the process has been in $G_{ij}^*$ exceeds $\tau_{ij}(t)$, there will be a $\tau$-transition at this time $t$. With the definition

$$Z_{ki,t_1,t_2}^* = \{\text{the event that the process is in state } i \text{ at time } t_2 \text{ and } G_{ij}^* \text{ has not been left since it has been reached via state } k \text{ in } (t_1, t_1+dt)\} \qquad (24)$$

the following holds for a $\tau$-transition $r_{jt}^{\tau}$ of the first type indicated above:

$$r_{jt}^{\tau} = \bigcup_{\forall i \mid \exists \tau_{ij}} \; \bigcup_{\forall k \in G_{ij}^*} \; \bigcup_{\forall t_x} r_{kt_x} \cap Z_{ki,t_x,t}^* \qquad (25)$$

where the times $t_x$ are all solutions of the equation

$$t_x + \tau_{ij}(t_x) = t. \qquad (26)$$

This means that there will be a $\tau$-transition into state $j$ in the interval $(t, t+dt)$ if, for one of those states which are linked to $j$ via a $\tau$-transition, the according subgraph $G_{ij}^*$ has been reached in the interval $(t_x, t_x+dt)$ via some state $k$, and, without having left $G_{ij}^*$, the process is in state $i$ at time $t$. It should be noted here that

$$r_{jt} = \emptyset \quad \text{if } t < 0 \qquad (27)$$

as the process is started at $t = 0$, and there are no transitions before. As at most one transition may occur in a given time interval, the terms of the union are mutually exclusive, which yields

$$\Pr\{r_{jt}^{\tau}\} = \sum_{\forall i \mid \exists \tau_{ij}} \; \sum_{\forall k \in G_{ij}^*} \; \sum_{\forall t_x} \Pr\{r_{kt_x}\}\,\Pr\{Z_{ki,t_x,t}^* \mid r_{kt_x}\}. \qquad (28)$$

Now, the term $\Pr\{r_{kt_x}\}$ in (28) just corresponds to the frequency density of reaching state $k$ in $(t_x, t_x+dt)$, which nicely shows that the resulting process, as opposed to a strict Markov process, has memory. To determine the conditional probability in (28), another process, which is associated with $G_{ij}^*$, has to be evaluated. More precisely, this 'inner' process is $G_{ij}^*$ with an additional trapping state, to which all edges are connected which emerge from $G_{ij}^*$ without terminating there, apart from the $\tau$-edge from $i$ to $j$. Solving this process for its state probabilities, starting at $t_x$ with the state $k$, however, means applying the same concepts as for the original state graph $G$. In an algorithmic sense, this is a recursive problem as long as $G_{ij}^*$ has $\tau$-edges. With the definition

$$\Pr\{Z_{ki,t_x,t}^* \mid r_{kt_x}\} = p_{ki}^*(t_x, t) \qquad (29)$$

applied to (22), this gives

$$h_{rj}(t) = \sum_{\forall i \neq j} p_i(t)\,\lambda_{ij}(t) + \sum_{\forall i \mid \exists \tau_{ij}} \; \sum_{\forall k \in G_{ij}^*} \; \sum_{\forall t_x} h_{rk}(t_x)\,p_{ki}^*(t_x, t). \qquad (30)$$

It should be mentioned here that (26), which is used to determine the value for $t_x$, may for some functions $\tau_{ij}(t)$ have an infinite number of solutions. This is the case when there is an interval $(t_1, t_2)$ where $\tau_{ij}(t)$ has slope $-1$. Then the sum over all $t_x$ in (30) will degenerate into an integral, $h_{rj}(t)$ will have a discontinuity, and (14) will apply.

In a similar way, the frequency density $h_{lj}(t)$, with which the state $j$ is left, can be found as

$$h_{lj}(t) = \sum_{\forall i \neq j} p_j(t)\,\lambda_{ji}(t) + \sum_{\forall i \mid \exists \tau_{ji}} \; \sum_{\forall k \in G_{ji}^*} \; \sum_{\forall t_x} h_{rk}(t_x)\,p_{kj}^*(t_x, t). \qquad (31)$$

Finally, the second case of a time dependent maximum duration $\tau_{ij}(t)$ is to be considered, where the value of $\tau_{ij}$ is not selected at the time when the subgraph $G_{ij}^*$ is entered, but rather at the calendar time when it is to be decided whether the $\tau$-transition is to occur. In this case, the time $t_x$, when $G_{ij}^*$ must have been entered, will not be given by (26), but rather by

$$t_x + \tau_{ij}(t) = t. \qquad (32)$$

Equation (32) has only one solution for $t_x$; hence the first consequence of this interpretation of $\tau_{ij}(t)$ will be that the triple sums in (30) and (31) will reduce to double sums. There is, however, a less obvious second consequence: should $\tau_{ij}(t)$ be decreasing with time, a $\tau$-transition will not only occur if $G_{ij}^*$ has been reached in $(t_x, t_x+dt)$, but also if $G_{ij}^*$ has been reached in a small time interval before $t_x$, more precisely, in $(t_x + d\tau(t), t_x)$. (Note that $d\tau$ is negative in this case.) If there has been a transition into $G_{ij}^*$ in this interval, and the tolerable down time decreases by $d\tau$ in $(t, t+dt)$, this will also contribute to the $\tau$-transition. To model this, let

$$r_{jt}^{\tau} = \bigcup_{\forall i \mid \exists \tau_{ij}} \; \bigcup_{\forall k \in G_{ij}^*} r_{kt_x}^* \cap Z_{ki,t_x,t}^* \qquad (33)$$

where

$$r_{kt_x}^* = \{\text{the event that state } k \text{ is reached in } (t_x + d\tau, t_x + dt)\}. \qquad (34)$$

If $d\tau$ is infinitesimally small, bearing in mind that this effect only exists if $d\tau$ is negative, the resulting probability will be

$$\Pr\{r_{kt_x}^*\} = h_{rk}(t_x)\,(dt - \mathrm{Min}(d\tau, 0)) = h_{rk}(t_x)\left[1 - \mathrm{Min}\!\left(\frac{d\tau}{dt}, 0\right)\right] dt. \qquad (35)$$

For the frequency densities, this leads to

$$h_{rj}(t) = \sum_{\forall i \neq j} p_i(t)\,\lambda_{ij}(t) + \sum_{\forall i \mid \exists \tau_{ij}} \; \sum_{\forall k \in G_{ij}^*} h_{rk}(t_x)\left(1 - \mathrm{Min}\!\left(\frac{d\tau_{ij}}{dt}, 0\right)\right) p_{ki}^*(t_x, t) \qquad (36)$$

$$h_{lj}(t) = \sum_{\forall i \neq j} p_j(t)\,\lambda_{ji}(t) + \sum_{\forall i \mid \exists \tau_{ji}} \; \sum_{\forall k \in G_{ji}^*} h_{rk}(t_x)\left(1 - \mathrm{Min}\!\left(\frac{d\tau_{ji}}{dt}, 0\right)\right) p_{kj}^*(t_x, t) \qquad (37)$$

where $t_x = t - \tau_{ij}(t)$. If $d\tau$ is finite at time $t$, i.e., $\tau_{ij}(t)$ has a discontinuity, this will result in a discontinuity of the frequency densities.

As a sub-summary, note that eqns (7), (30), (31), (36) and (37) together form a system of differential equations which describes an inhomogenous process with fixed maximum durations for some subgraphs, as it has been formally introduced in Section 2. Equation (14) provides a general way to treat discontinuities. These latter durations occur as 'dead times', showing that the Markovian property is not given in the context of this type of process. Solutions have to be found numerically in most realistic cases. It may be interesting to note that, if all subgraphs $G_{ij}^*$ consist of exactly one state $i$, the resulting process is a semi-Markov process, i.e., the Markovian property is given at the transition times, or, precisely, a possibly inhomogenous semi-Markov process.

The process described is useful for a variety of reliability problems which are not covered by conventional Boolean or Markovian techniques. The maximum durations provide a way to implement constant repair times or tolerable down times in the context of a state graph model, keeping all the flexibility of state graph models.
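To make the special case of a constant repair time concrete, the following sketch (an illustration under stated assumptions, not the authors' implementation) evaluates eqns (7), (30) and (31) for a single component with constant failure rate whose down state forms a one-state subgraph with no competing exits, so that the inner-process probability $p^*$ is identically one and the $\tau$-transition fires exactly $\tau$ after the down state has been entered.

```python
import numpy as np

def unavailability_fixed_repair(lam, tau, t_end, dt=0.01):
    """Single component, constant failure rate `lam`, fixed repair duration `tau`.
    The down state is a one-state subgraph without competing exits, so p* = 1 and
    h_l,down(t) = h_r,down(t - tau); eqn (7) is integrated with explicit Euler steps."""
    n = int(round(t_end / dt))
    k_tau = int(round(tau / dt))
    p_down = 0.0
    h_r = np.zeros(n)            # stored history of h_r,down -- the process memory
    unav = np.zeros(n)
    for k in range(n):
        h_r[k] = (1.0 - p_down) * lam                   # entry density into the down state
        h_l = h_r[k - k_tau] if k >= k_tau else 0.0     # eqns (30)/(31) with p* = 1
        p_down += (h_r[k] - h_l) * dt                   # eqn (7)
        unav[k] = p_down
    return unav

# rough check: the long-run unavailability should approach lam*tau / (1 + lam*tau)
u = unavailability_fixed_repair(lam=1e-4, tau=8.0, t_end=5.0e4, dt=0.5)
print(u[-1], 1e-4 * 8.0 / (1.0 + 1e-4 * 8.0))
```

The stored history of entry densities is what makes the scheme non-Markovian; for subgraphs with more than one state, the factor 1 would be replaced by the inner-process probability $p_{ki}^*(t_x, t)$ obtained from a recursive solution of the trapped subgraph.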

3 APPLICATION

The modelling framework introduced in Section 2 has not been implemented completely in a computer code up to now (this would be a nice exercise, though). In order to calculate some examples, however, parts of the theory have been implemented in the MARK code.2

3.1 Description of an example

One such example is given by the state graph shown in Fig. 1. It has been used to describe the behavior of a district heating net with two sources over the period of half a year. A district heat network has a tolerable down time, because if repair occurs fast, the users will not even notice a service disruption, but if repair lasts



a long time, houses will cool down, and the users will feel uncomfortable (at least). The tolerable down time will depend on the environmental temperature; this has been implemented as a time dependent tolerable down time by using average environment temperatures for different phases of a heating period. A detailed description of the net and the simulation to determine tolerable down times as a function of environmental temperature has been given elsewhere.11

Fig. 1. State graph for district heating net example.

Table 1. Mission phases for district heating net example

Phase  States          Duration  Description
1      1, 2, 3         2 weeks   Start of heating period; unit 1 under maintenance
2      4, 5, 6         2 weeks   As above, but unit 2 under maintenance
3      7, 8, 9, 10     8 weeks   Both units available unless failed, one required
4      11, 12, 13, 14  2 weeks   Both units available unless failed, both required
5      7, 8, 9, 10     8 weeks   As phase 3

Table 2. List of states for district heating net example

State  Phases  Description
1      1       Unit 1 operating
2      1       Unit 1 failed, tolerable down time not exceeded
3      1       Unit 1 failed, tolerable down time exceeded
4      2       Unit 2 operating
5      2       Unit 2 failed, tolerable down time not exceeded
6      2       Unit 2 failed, tolerable down time exceeded
7      3, 5    Both units operating
8      3, 5    One unit operating, the other one failed
9      3, 5    Both units failed, tolerable down time not exceeded
10     3, 5    Both units failed, tolerable down time exceeded
11     4       Both units operating
12     4       One unit failed, tolerable down time not exceeded
13     4       One unit failed, tolerable down time exceeded
14     4       Both units failed, tolerable down time exceeded

In the state graph, a heating period consisting of the five mission phases given in Table 1, with 14 states as defined in Table 2, has been modelled. Clearly, this is a phased mission problem because in the beginning of the heating period, first one source (states 1, 2, 3), then the other source (states 4, 5, 6), will be out of service for

scheduled maintenance. Afterwards both sources are available (unless failed), but only one is required (states 7, 8, 9, 10). Then the environmental temperature decreases to a point where both sources are required (states 11, 12, 13, 14). Finally the climate becomes milder, and the sources act as redundancies again (states 7, 8, 9, 10).

Failure and repair rates are assumed to be constant; the according transitions are represented by solid lines. Transitions between the mission phases are assumed to occur at fixed points of time. These are indicated by thin lines. States 2, 5, 9, 12 represent system failure where the tolerable down time has not yet been exceeded. These states define a subgraph $G^*$ (the only one in this example); tolerable down times will count from the time $G^*$ has last been entered. The $\tau$-transitions are indicated by wavy lines. Note that for internal reasons (the present version of the enhanced MARK code allows only for one maximum duration at any given calendar time), it has been assumed that in mission phase 4, where both sources are required (states 11, 12, 13, 14), failure of the second source will lead to system failure immediately.

3.2 Data for the example and results

Evaluation of the example has been performed using the following data:

- the failure rate $\lambda = 10^{-4}$/h
- the repair rate $\mu = 0.05$/h
- the duration of the first and the second mission phase = 2 weeks
- the duration of the third mission phase = 8 weeks


- the duration of the fourth mission phase = 2 weeks
- the duration of the fifth mission phase = 8 weeks.

The tolerable down time has been set to values between 20 hours and 2 hours, changing at two week intervals. These correspond to environmental temperatures between +10°C and -15°C and have been found by a thermodynamic simulation of the net.11 A service disruption of the net has been considered tolerable if the temperatures in the buildings attached to this net do not fall below 18°C.

An interesting measure for this system is the unavailability, because during the time the system is unavailable, inhabitants will use auxiliary heating devices, e.g. electric heaters with a constant consumption of electric power. Hence, the cost will be roughly proportional to the time the system is unavailable and the tolerable down time is exceeded. The system is unavailable if the process is in states 3, 6, 10 or 14. The resulting unavailability is given in Fig. 2, and, to a different scale, in Fig. 3.

Fig. 2. Resulting unavailability for district heat network example.

Fig. 3. Unavailability plot scaled to 2 × 10⁻⁶.
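The published results were obtained with the enhanced MARK code. As a rough, simplified cross-check (not the authors' calculation), the following Monte Carlo sketch estimates the fraction of time during which two redundant units (failure rate 10⁻⁴/h, repair rate 0.05/h) are simultaneously failed for longer than a constant tolerable down time, i.e., an analogue of states 10/14 for a single phase in which one unit is required. The constant tolerable down time of 10 h and the 8-week phase length used as defaults are illustrative assumptions, and the estimate is statistically noisy.

```python
import random

def mc_unavailability(lam=1e-4, mu=0.05, t_tol=10.0, t_end=8 * 7 * 24.0,
                      n_runs=200_000, seed=1):
    """Monte Carlo estimate of the time fraction during which both of two
    redundant units are failed and the tolerable down time has been exceeded
    (single phase, one unit required, both units as-good-as-new at t = 0)."""
    rng = random.Random(seed)
    total_bad = 0.0
    for _ in range(n_runs):
        t, up = 0.0, [True, True]
        next_event = [rng.expovariate(lam), rng.expovariate(lam)]
        down_since = None                       # start of the current both-down period
        while t < t_end:
            i = 0 if next_event[0] <= next_event[1] else 1
            t_next = min(next_event[i], t_end)
            # time in (t, t_next) with both units down for longer than t_tol
            if down_since is not None and t_next - down_since > t_tol:
                total_bad += t_next - max(down_since + t_tol, t)
            t = t_next
            if t >= t_end:
                break
            up[i] = not up[i]                   # failure or repair of unit i
            next_event[i] = t + rng.expovariate(lam if up[i] else mu)
            if not up[0] and not up[1]:
                down_since = t if down_since is None else down_since
            else:
                down_since = None
    return total_bad / (n_runs * t_end)

print(mc_unavailability())
```

Such a simulation cannot reproduce the phased mission structure or the time dependent tolerable down time of the example, but it gives an order-of-magnitude feeling for the quantities plotted in Figs 2 and 3.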

4 DISCUSSION AND FINAL REMARKS

State graph modelling techniques have been extended to processes which do not have the Markovian property. The approach is considered interesting, as the numerical effort required is, in order of magnitude, comparable to Markovian modelling in the sense that it is based on differential equations, rather than the integral equations which have to be solved if a semi-Markov process is considered. Certainly there are still restrictions: in most practical cases, the process will be restricted to components with constant failure rates and maximum durations, which might be used to model constant repair times, some inspection strategies, or tolerable down times, as in the example presented. However, such restrictions also hold for semi-Markov processes, if applied to reliability models involving more than one redundancy. Though the set of semi-Markov processes is not completely included in the modelling approach developed here, this is true for most practical applications of semi-Markovian modelling known, including the example given by Barlow & Proschan.1 It has been shown by an example that the approach is helpful and practicable for realistic problems.

REFERENCES

1. Barlow, R. E. & Proschan, F., Mathematical Theory of Reliability. John Wiley & Sons, New York, 1965.
2. Becker, G. & Camarinopoulos, L., Mixed discrete and continuous Markovian models for components and systems with complex maintenance and test strategies. In Proc. European Safety and Reliability Conference, Copenhagen, Denmark, 10-12 June 1992. Elsevier, Oxford.
3. Dugan, J. B., Automated analysis of phased-mission reliability. IEEE Trans. Reliability, R-40 (1991) 45-52.
4. Smotherman, M. K. & Zemoudeh, K., A non-homogeneous Markov model for phased-mission reliability analysis. IEEE Trans. Reliability, R-38 (1989) 585-590.
5. Camarinopoulos, L. & Obrowski, W., Berücksichtigung tolerierbarer Ausfallzeiten bei der Zuverlässigkeitsanalyse technischer Systeme. Atomkernenergie, 37 (1981) 000-000 (in German).
6. Becker, G. & Camarinopoulos, L., Time dependent tolerable downtimes. In Proc. 3rd TÜV Workshop on Living PSA, TÜV Norddeutschland, Hamburg, 11-13 May 1992.
7. Howard, R. A., Dynamic Probabilistic Systems, Vols 1 & 2. John Wiley & Sons, New York, 1971.
8. Smotherman, M. & Geist, R., Phased mission effectiveness using a nonhomogeneous Markov reward model. Reliability Engng and System Safety, 27 (1990) 241-255.
9. Smotherman, M., Transient solution of time-inhomogeneous Markov reward models with discontinuous rates. In Stewart, W. (ed.), Numerical Solution of Markov Chains. Marcel Dekker, New York, 1990.
10. Becker, G. & Camarinopoulos, L., Modelling human error during scheduled inspections with Markovian techniques. In Proc. 12th European Annual Conference on Human Decision Making and Manual Control, Kassel, 22-24 June 1993.
11. Bartsch, G., Becker, G., Behr, A. & Lü-Köhler, C., A state graph approach to modelling tolerable down times for a district heating network. In Tsatsaronis, G. (ed.), Proc. 1994 Engineering Systems and Analysis Conference, The American Society of Mechanical Engineers, 3 (1994), pp. 35-44.