A new efficient algorithm for computing the imprecise reliability of monotone systems




Reliability Engineering and System Safety 86 (2004) 179–190 www.elsevier.com/locate/ress

A new efficient algorithm for computing the imprecise reliability of monotone systems

Lev V. Utkin*

Institute of Statistics, Munich University, Ludwigstr. 33, 80539 Munich, Germany

Received 15 November 2002; accepted 18 December 2003

Abstract

Reliability analysis of complex systems under partial information about the reliability of components, and under different conditions of component independence, may be carried out by means of imprecise probability theory, which provides a unified framework (natural extension, lower and upper previsions) for computing the system reliability. However, the application of imprecise probabilities to reliability analysis runs into the complexity of the optimization problems that have to be solved to obtain the system reliability measures. Therefore, an efficient simplified algorithm to solve and decompose these optimization problems is proposed in the paper. This algorithm makes it practical to carry out reliability analysis of monotone systems under partial and heterogeneous information about the reliability of components, both when the components are independent and when there is no information about their independence. A numerical example illustrates the algorithm. © 2004 Elsevier Ltd. All rights reserved.

Keywords: Reliability; Imprecise probability theory; Lower and upper previsions; Linear programming; Monotone system; Independence

1. Introduction

Many methods and models of classical reliability theory assume that all probabilities are precise, that is, that every probability involved is perfectly determinable. If the information we have about the functioning of components and systems is based on a statistical analysis, then a probabilistic uncertainty model should be used in order to mathematically represent and manipulate that information. However, the reliability assessments that are combined to describe systems and components may come from various sources. Some may be objective measures based on relative frequencies or on well established statistical models. A part of the reliability assessments may be supplied by experts. As a result, only partial information about the reliability of some system components may be available. Moreover, it is difficult to expect that the components of many systems are statistically independent. In this case, the most powerful and promising tool for reliability analysis is imprecise probability theory (also called the theory of lower previsions [1,2], the theory of interval statistical models [3], the theory of interval probabilities [4,5]), whose general framework is

provided by upper and lower previsions. In order to compute the system reliability within the framework of imprecise probabilities, taking into account the available information, a general procedure called natural extension is used. It produces a coherent overall model [1] from a collection of imprecise probability judgements and may be seen as the basic constructive step in interval-valued statistical reasoning. The natural extension can be viewed as an optimization problem. By using imprecise probability theory, it is not necessary to make assumptions about the probability distributions of the random variables characterizing the component reliability behavior (times to failure, numbers of failures per unit of time, etc.). At the same time, imprecise probability theory is completely based on classical probability theory and can be regarded as its generalization. Therefore, imprecise reliability models can be interpreted in terms of probability theory, and conventional reliability models can be viewed as a special case of imprecise models. Moreover, imprecise probability theory provides a unified tool (the natural extension) for computing the system reliability under partial information about the component reliability behavior. Various examples of the successful application of imprecise probabilities to reliability analysis can be found in the literature. In particular, some statistical aspects of



imprecise reliability were studied in Refs. [6-8]. The reliability analysis of various typical two-state systems (series, parallel, cold standby, bridge systems, etc.) under special types of partial information about the component reliability behavior was investigated in Refs. [9-13]. Models of imprecise reliability of multi-state and continuum-state systems were proposed in Refs. [15-17]. New models of structural reliability taking into account the imprecision of the initial information can be found in Refs. [18-22]. Despite the vital importance of the proposed models, most of them consider only special cases of systems and of the initial information about their reliability. Reliability analysis of systems under arbitrary incomplete initial information runs into the complexity of the optimization problems that have to be solved to obtain the system reliability measures. As a result, the advantages of imprecise probability theory are often lost because the computational complexity does not allow us to get accurate results and reliability assessments. One approach that partially copes with this problem has been proposed in Ref. [23]. This approach covers a wide class of systems to be analyzed. However, it requires explicit expressions for the reliability of systems under different types of initial data to be known. Moreover, it is rather complex from the computational point of view because it does not take into account some features of the analyzed systems. Therefore, a new simplified algorithm is proposed in this paper. This algorithm makes it practical to carry out reliability analysis of complex monotone systems under partial information about the reliability of components, both when the components are independent and when there is no information about their independence. A numerical example illustrates this.

The paper is organized as follows. In Section 2, a general approach to the reliability analysis of arbitrary systems under various types of initial probabilistic information is considered. The main definitions of imprecise probability theory and an imprecise reliability model are introduced in this section. Section 3 considers the theoretical basis of the proposed algorithm for computing the system reliability. In Section 4, the proposed algorithm is extended in order to simplify one of its parts. A possible numerical algorithm is described in Section 5. In Section 6, the proposed algorithm is illustrated by a numerical example. Additional virtues and shortcomings are discussed in Section 7. The Appendix contains proofs of the main statements of the paper.

2. Problem statement

Consider a system consisting of n components. Suppose that partial information about the reliability of components is represented as a set of lower and upper expectations $\underline{a}_{ij} = \underline{E}f_{ij}$ and $\overline{a}_{ij} = \overline{E}f_{ij}$, $i = 1,\dots,n$, $j = 1,\dots,m_i$, of functions $f_{ij}$. Here $m_i$ is the number of judgments related to the i-th component; $f_{ij}(X_i)$ is a function of the random time to failure $X_i$ of the i-th component (or of some other random variable describing the i-th component reliability) corresponding to the j-th judgment about this component. For example, an interval-valued probability that a failure occurs in the interval $[a,b]$ can be represented by expectations of the indicator function $I_{[a,b]}(X_i)$ such that $I_{[a,b]}(X_i) = 1$ if $X_i \in [a,b]$ and $I_{[a,b]}(X_i) = 0$ if $X_i \notin [a,b]$. The lower and upper mean times to failure (MTTFs) are expectations of the function $f(X_i) = X_i$.

According to Ref. [24], the system time to failure is uniquely determined by the component times to failure. Denote $X = (x_1,\dots,x_n)$ and $\tilde{X} = (X_1,\dots,X_n)$. Here $x_1,\dots,x_n$ are values of the random variables $X_1,\dots,X_n$, respectively. It is assumed that the random variable $X_i$ is defined on a sample space $\Omega$ and the random vector $\tilde{X}$ is defined on the sample space $\Omega^n = \Omega \times \cdots \times \Omega$. If $X_i$ is the time to failure, then $\Omega = \mathbb{R}_+$. If $X_i$ is a random state of a multi-state system [25], then $\Omega = \{1,\dots,L\}$, where L is the number of states of the multi-state system. In the case of a discrete time to failure, $\Omega = \{1,2,\dots\}$, i.e., $\Omega = \mathbb{Z}_+$. Then there exists a function $g(\tilde{X})$ of the component lifetimes characterizing the system reliability behavior.

Generally, for a monotone system, the minimal path and cut set representation technique can be employed to calculate the system reliability. A minimal path of a system is a minimal set of components such that if these components work, the system works. A minimal cut is a minimal set of components such that if these components fail, the system fails. Suppose that a monotone system has p minimal paths $P_1,\dots,P_p$, containing $l_1,\dots,l_p$ components, respectively, and k minimal cut sets $K_1,\dots,K_k$, containing $k_1,\dots,k_k$ components. Then the system lifetime $g(\tilde{X})$ is given by [22]

$$g(\tilde{X}) = \max_{1 \le j \le p} \min_{i \in P_j} X_i = \min_{1 \le j \le k} \max_{i \in K_j} X_i. \quad (1)$$
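As a small illustration of Eq. (1), the following sketch evaluates the system lifetime from either representation. It is only an illustration: the component lifetimes are hypothetical, and the structure is the four-component series-parallel system used later in Section 6.

```python
def lifetime_from_paths(x, paths):
    """Eq. (1), path form: max over minimal paths of the min component lifetime."""
    return max(min(x[i] for i in path) for path in paths)

def lifetime_from_cuts(x, cuts):
    """Eq. (1), cut form: min over minimal cuts of the max component lifetime."""
    return min(max(x[i] for i in cut) for cut in cuts)

# g(X) = min{max(X1, X2), max(X3, X4)}, components indexed from 0:
paths = [[0, 2], [0, 3], [1, 2], [1, 3]]   # minimal paths
cuts = [[0, 1], [2, 3]]                    # minimal cut sets
x = [3.0, 12.0, 9.0, 5.0]                  # hypothetical lifetimes
assert lifetime_from_paths(x, paths) == lifetime_from_cuts(x, cuts) == 9.0
```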

In terms of imprecise probability theory, the lower and upper expectations can be regarded as lower and upper previsions. The functions $f_{ij}$ and g can be regarded as gambles (the case of unbounded gambles is studied in Refs. [26,27]). The lower and upper previsions $\underline{E}f_{ij}$ and $\overline{E}f_{ij}$ can also be viewed as bounds for an unknown precise prevision $Ef_{ij}$, which will be called a linear prevision. Since the function g is the system time to failure, then, for computing the reliability measures (probability of failure, MTTF, k-th moment of time to failure), it is necessary to find lower and upper previsions of a gamble $h(g)$, where the function h is defined by the system reliability measure which has to be found. For example, if this measure is the probability of failure before time t, then $h(g) = I_{[0,t]}(g)$. In this case, the optimization problems (natural extension) for computing the lower $\underline{H}$ and upper $\overline{H}$ previsions (expectations) of $h(g)$ are [12,14]

$$\underline{H} = \underline{E}h(g) = \min_{P} \int_{\Omega^n} h(g(X))\,\rho(X)\,dX, \quad (2)$$

$$\overline{H} = \overline{E}h(g) = \max_{P} \int_{\Omega^n} h(g(X))\,\rho(X)\,dX,$$

subject to

$$\rho(X) \ge 0, \quad \int_{\Omega^n} \rho(X)\,dX = 1, \quad \underline{a}_{ij} \le \int_{\Omega^n} f_{ij}(x_i)\,\rho(X)\,dX \le \overline{a}_{ij}, \quad i \le n, \; j \le m_i. \quad (3)$$

Here the minimum and maximum are taken over the set P of all possible n-dimensional density functions $\{\rho(X)\}$ satisfying conditions (3); i.e., solutions to problems (2) and (3) are defined on the set P of densities that are consistent with the partial information expressed in the form of the constraints (3). This implies that the number of optimization variables is infinite, and this fact restricts the use of the natural extension in real applications. Denote

$$F_i = (f_{i1}(x_i),\dots,f_{im_i}(x_i)), \quad \underline{A}_i = (\underline{a}_{i1},\dots,\underline{a}_{im_i}), \quad \overline{A}_i = (\overline{a}_{i1},\dots,\overline{a}_{im_i}),$$

$$C_i = (c_{i1},\dots,c_{im_i})^T, \quad D_i = (d_{i1},\dots,d_{im_i})^T.$$

The inequalities $C_i \le a$ and $C_i \le D_i$ mean that $c_{ij} \le a$ and $c_{ij} \le d_{ij}$ for all possible j. It should be noted that optimization problems (2) and (3) are linear, so dual optimization problems can be written. The dual optimization problem for computing the lower prevision $\underline{H} = \underline{E}h(g)$ of the system function $h(g)$ is [3,15,28]

$$\underline{H} = \max\left\{c + \sum_{i=1}^{n}(\underline{A}_i C_i - \overline{A}_i D_i)\right\}, \quad (4)$$

subject to $C_i, D_i \in \mathbb{R}_+$, $i = 1,\dots,n$, $c \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$c + \sum_{i=1}^{n} F_i(C_i - D_i) \le h(g(X)). \quad (5)$$

The dual optimization problem for computing the upper prevision $\overline{H} = \overline{E}h(g)$ of the system function $h(g)$ is

$$\overline{H} = \min\left\{c + \sum_{i=1}^{n}(\overline{A}_i C_i - \underline{A}_i D_i)\right\}, \quad (6)$$

subject to $C_i, D_i \in \mathbb{R}_+$, $i = 1,\dots,n$, $c \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$c + \sum_{i=1}^{n} F_i(C_i - D_i) \ge h(g(X)). \quad (7)$$

Here $c$, $c_{ij}$, $d_{ij}$ are optimization variables such that c corresponds to the constraint $\int_{\Omega^n} \rho(X)\,dX = 1$, $c_{ij}$ corresponds to the constraint $\int_{\Omega^n} f_{ij}(x_i)\,\rho(X)\,dX \le \overline{E}f_{ij}$, and $d_{ij}$ corresponds to the constraint $\underline{E}f_{ij} \le \int_{\Omega^n} f_{ij}(x_i)\,\rho(X)\,dX$. It turns out that the dual optimization problems are simpler than problems (2)-(3) in many applications, because this representation avoids the situation in which the number of optimization variables is infinite.

It should be noted that only joint densities are used in optimization problems (2)-(3) because, in the general case, we may not know whether the variables $x_1,\dots,x_n$ are dependent or not. It is worth noticing that solutions to problems (2)-(3) with $h(g) = I_{[t,\infty)}(\min(x_1,x_2))$ and $f_{ij}(x_i) = I_{[t,\infty)}(x_i)$ coincide with the well-known Fréchet bounds [29]. If it is known that the components are independent, then $\rho(X) = \rho_1(x_1)\cdots\rho_n(x_n)$. In this case, the set P is reduced and consists only of the densities that can be represented as such a product. This results in more precise reliability assessments. However, it is difficult to forecast how the condition of independence influences the precision of the assessments; for most kinds of initial information, imprecision is reduced when independence is available, and it cannot be increased.

If the set P in (2)-(3) is empty, this means that the set of available evidence is conflicting and it is impossible to get any solution to problems (2)-(3). For example, if two experts provide the bounds [10, 12] and [14, 15], respectively, for the MTTF of a component, this information is conflicting because these bounds produce non-intersecting sets of probability distributions, and the set P of common distributions is empty. There are three ways to cope with conflicting evidence. The first is to localize the conflicting evidence and discard it. The second is to 'correct' the conflicting evidence, making it non-conflicting [30]. The third is to attach degrees of belief to every judgement and to deal with second-order hierarchical models [31,32].

Most reliability measures (probabilities of failure, MTTFs, failure rates, moments of time to failure, etc.) can be represented in the form of lower and upper previsions or expectations. Each measure is defined by the gamble $f_{ij}$. Precise reliability information is a special case of imprecise information in which the lower and upper previsions of the gamble $f_{ij}$ coincide, i.e., $\underline{E}f_{ij} = \overline{E}f_{ij}$. For example, let us consider a series system consisting of two components. Suppose that the following information about the reliability of the components is available: the probability of the first component failing before 10 h is 0.01; the MTTF of the second component is between 50 and 60 h. It can be seen from this example that the available information is heterogeneous, and it is impossible to find the system reliability measures on the basis of conventional reliability models without additional assumptions about the probability distributions. At the same time, this information can be formalized as follows:

$$0.01 \le \int_{\mathbb{R}_+^2} I_{[0,10]}(x_1)\,\rho(x_1,x_2)\,dx_1\,dx_2 \le 0.01, \quad 50 \le \int_{\mathbb{R}_+^2} x_2\,\rho(x_1,x_2)\,dx_1\,dx_2 \le 60.$$

If it is known that the components are statistically independent, then the constraint $\rho(x_1,x_2) = \rho_1(x_1)\rho_2(x_2)$ is added. The above constraints form a set of possible joint densities $\rho$. Suppose that we want to find the probability of the system failure after time 100 h. This measure can be regarded as the prevision of the gamble $I_{[100,\infty)}(\min(X_1,X_2))$, i.e., $g(\tilde{X}) = \min(X_1,X_2)$ and $h(g) = I_{[100,\infty)}(g)$. Then the objective function is of the form

$$\underline{H}\,(\overline{H}) = \min_{P}\,(\max_{P}) \int_{\mathbb{R}_+^2} I_{[100,\infty)}(\min(x_1,x_2))\,\rho(x_1,x_2)\,dx_1\,dx_2.$$

The above bounds for the probability of the system failure after time 100 h are the best possible under the given information. If the random variables considered are discrete and the sample space $\Omega^n$ is finite, then the integrals and densities in problems (2)-(3) are replaced by sums and probability distribution functions, respectively.

Let us introduce the notion of the imprecise reliability model $M_i = \langle \underline{E}_{ij}, \overline{E}_{ij}, f_{ij}(X_i), j = 1,\dots,m_i \rangle$ of the i-th component as a set of $m_i$ available lower and upper previsions and the corresponding gambles. Our aim is to get the imprecise reliability model $M = \langle \underline{E}, \overline{E}, h(g(\tilde{X})) \rangle$ of the system. This can be done by using the natural extension, which will be considered as a transformation of the component imprecise models to the system model and denoted $\wedge_{i=1}^{n} M_i \to M$. The models in the example considered above are $M_1 = \langle 0.01, 0.01, I_{[0,10]}(X_1) \rangle$, $M_2 = \langle 50, 60, X_2 \rangle$, $M = \langle \underline{E}, \overline{E}, I_{[100,\infty)}(\min(X_1,X_2)) \rangle$.

If the number of judgments about the component reliability behavior, $\sum_{i=1}^{n} m_i$, and the number of components, n, are rather large, problems (2)-(7) cannot be practically solved due to their extremely large dimensionality. This fact essentially restricts the application of imprecise calculations to reliability analysis. Therefore, a simplified algorithm for solving optimization problems (2)-(7) is proposed. The main idea underlying this algorithm is to decompose the difficult (non-linear in the case of independent components) optimization problems into several simple linear programming problems whose solution presents no difficulty. In terms of the introduced imprecise reliability models, we will try to replace the complex transformation $\wedge_{i=1}^{n} M_i \to M$ by a set of $n + 1$ simple transformations

$$M_i \to M_i^0 = \langle \underline{E}, \overline{E}, h(X_i) \rangle, \quad i = 1,\dots,n, \qquad L_{i=1}^{n} M_i^0 \to M.$$

Here the symbol $L_{i=1}^{n}$ means that all models $M_i^0$ are simultaneously used to obtain M. In order to convey the essence of the subject analyzed and make all the formulas more readable, the obvious constraints on the densities $\rho$ in the optimization problems, namely $\rho(X) \ge 0$ and $\int_{\Omega^n} \rho(X)\,dX = 1$, will not be written out. Furthermore, integrals of the form $\int_{\Omega^n} f(x_i)\,\rho(X)\,dX$ will be denoted $E_{\rho(X)} f(X_i)$ for short.
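To make the natural extension concrete, the following sketch solves a discretized version of problems (2)-(3) as a linear program over joint probability masses (no independence assumed). It is only an illustration: scipy's linprog stands in for any LP solver, and the two-point grid and the survival-probability bounds 0.9 and 0.8 in the usage example are hypothetical; the computed bounds reproduce the Fréchet bounds mentioned above.

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

def natural_extension(grid, h_of_g, judgements, maximize=False):
    """Discretized problems (2)-(3): optimize E[h(g)] over joint mass functions.

    grid:       one 1-D array of support points per component.
    h_of_g:     callable X -> h(g(X)) at a joint grid point X.
    judgements: tuples (i, f, a_low, a_up) meaning a_low <= E f(X_i) <= a_up.
    """
    points = list(product(*grid))
    c = np.array([h_of_g(X) for X in points])
    A_ub, b_ub = [], []
    for i, f, a_low, a_up in judgements:
        row = np.array([f(X[i]) for X in points])
        A_ub.extend([row, -row])
        b_ub.extend([a_up, -a_low])
    res = linprog(-c if maximize else c,
                  A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, len(points))), b_eq=[1.0])  # masses sum to 1
    return -res.fun if maximize else res.fun

# Frechet-bound check: h(g) = I_[t,inf)(min(x1, x2)), f_ij = I_[t,inf)(x_i), t = 100.
grid = [np.array([0.0, 200.0])] * 2
surv = lambda x: float(x >= 100.0)
J = [(0, surv, 0.9, 0.9), (1, surv, 0.8, 0.8)]
h_g = lambda X: float(min(X) >= 100.0)
print(natural_extension(grid, h_g, J))                 # 0.7 = max(0.9 + 0.8 - 1, 0)
print(natural_extension(grid, h_g, J, maximize=True))  # 0.8 = min(0.9, 0.8)
```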

3. Decomposition of the system imprecise model

First of all, let us determine the main properties of the functions h and g used in reliability analysis of monotone systems. The typical functions $h(g)$ are $g$, $g^k$, $I_A(g)$, whose lower and upper previsions are MTTFs, k-th moments of time to failure, and probabilities of failure in an interval A, respectively. These functions are non-negative, i.e., $h(g) \ge 0$ for all $g \ge 0$. Since the component times to failure and the random states of components of a multi-state system are non-negative random variables, then, according to Eq. (1), the system time to failure and the system states are also non-negative, i.e., $g \ge 0$ if $X \ge 0$. It is necessary to point out that a system is called monotone if it does not become better by a failure of a component. Therefore, the function g is non-decreasing due to the system monotonicity. Finally, it follows from Eq. (1) that g is always equal to the value of one of its arguments. These properties or conditions can be formally written as

C1. $h(x) \ge 0$ for all $x \ge 0$;
C2. $g(X) \ge 0$, $\forall X \in \Omega^n \subseteq \mathbb{R}_+^n$;
C3. $g(X)$ is a non-decreasing continuous function;
C4. for each X, there exists a number $i_0$ such that $g(X) = x_{i_0}$, where $x_{i_0}$ is the $i_0$-th component of the vector X.

3.1. The lack of information about independence of components

Introduce the following notation:

$$R = (r_1,\dots,r_n)^T, \quad S = (s_1,\dots,s_n)^T, \quad H = (h(x_1),\dots,h(x_n)),$$

$$\underline{H} = (\underline{h}_1,\dots,\underline{h}_n), \quad \overline{H} = (\overline{h}_1,\dots,\overline{h}_n).$$

Theorem 1. If conditions C1-C4 are fulfilled, then the following optimization problem:

$$H_0 = \max\{r_0 + \underline{H}R - \overline{H}S\}, \quad (8)$$

subject to $R, S \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$r_0 + HR - HS \le h(g(X)), \quad (9)$$

is equivalent to the optimization problem

$$H_0 = \max\{r_0 + \underline{H}R\}, \quad (10)$$

subject to $R \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$r_0 + HR \le h(g(X)). \quad (11)$$

The optimization problem

$$H^0 = \min\{r_0 + \overline{H}R - \underline{H}S\}, \quad (12)$$

subject to $R, S \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$r_0 + HR - HS \ge h(g(X)), \quad (13)$$

is equivalent to the optimization problem

$$H^0 = \min\{r_0 + \overline{H}R\}, \quad (14)$$

subject to $R \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$r_0 + HR \ge h(g(X)). \quad (15)$$


Theorem 1 states that the lower prevision $H_0$ (upper prevision $H^0$) depends only on the lower (upper) previsions $\underline{h}_i$ ($\overline{h}_i$). In spite of the fact that Theorem 1 allows us to simplify optimization problems (8)-(13), it is mainly auxiliary and is needed to prove the following theorem.

Theorem 2. Consider n optimization problems ($i = 1,\dots,n$)

$$\underline{h}_i = \max\{c_i' + \underline{A}_i C_i' - \overline{A}_i D_i'\}, \quad (16)$$

subject to $C_i', D_i' \in \mathbb{R}_+$, $c_i' \in \mathbb{R}$, and $\forall x_i \in \Omega$,

$$c_i' + F_i(C_i' - D_i') \le h(x_i), \quad (17)$$

and the problem

$$H_0 = \max\{r_0 + \underline{H}R - \overline{H}S\}, \quad (18)$$

subject to $R, S \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$r_0 + HR - HS \le h(g(X)). \quad (19)$$

Let $\underline{H}$ be the solution to problem (4)-(5). If conditions C1-C4 are fulfilled, then there holds $\underline{H} = H_0$.

Consider n optimization problems ($i = 1,\dots,n$)

$$\overline{h}_i = \min\{c_i'' + \overline{A}_i C_i'' - \underline{A}_i D_i''\}, \quad (20)$$

subject to $C_i'', D_i'' \in \mathbb{R}_+$, $c_i'' \in \mathbb{R}$, and $\forall x_i \in \Omega$,

$$c_i'' + F_i(C_i'' - D_i'') \ge h(x_i), \quad (21)$$

and the problem

$$H^0 = \min\{r_0 + \overline{H}R - \underline{H}S\}, \quad (22)$$

subject to $R, S \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$r_0 + HR - HS \ge h(g(X)). \quad (23)$$

Let $\overline{H}$ be the solution to problem (6)-(7). If conditions C1-C4 are fulfilled, then there holds $\overline{H} = H^0$.

Theorem 2 states, in terms of imprecise models, that

$$L_{i=1}^{n} \langle \underline{E}_{ij}, \overline{E}_{ij}, f_{ij}(X_i), j \le m_i \rangle \to \langle \underline{E}, h(g(\tilde{X})) \rangle$$

is equivalent to

$$\langle \underline{E}_{ij}, \overline{E}_{ij}, f_{ij}(X_i), j \le m_i \rangle \to \langle \underline{E}, h(X_i) \rangle, \; i \le n, \qquad L_{i=1}^{n} \langle \underline{E}, h(X_i) \rangle \to \langle \underline{E}, h(g(\tilde{X})) \rangle,$$

and

$$L_{i=1}^{n} \langle \underline{E}_{ij}, \overline{E}_{ij}, f_{ij}(X_i), j \le m_i \rangle \to \langle \overline{E}, h(g(\tilde{X})) \rangle$$

is equivalent to

$$\langle \underline{E}_{ij}, \overline{E}_{ij}, f_{ij}(X_i), j \le m_i \rangle \to \langle \overline{E}, h(X_i) \rangle, \; i \le n, \qquad L_{i=1}^{n} \langle \overline{E}, h(X_i) \rangle \to \langle \overline{E}, h(g(\tilde{X})) \rangle,$$

respectively. This is a vitally important fact because the complex transformation (optimization problem) $L_{i=1}^{n} M_i \to M$ can be replaced by a set of $n + 1$ relatively simple transformations (optimization problems) $M_i \to M_i^0$, $i = 1,\dots,n$, $L_{i=1}^{n} M_i^0 \to M$.

What are the virtues of this decomposition? First of all, all previsions in the model $\langle \underline{E}_{ij}, \overline{E}_{ij}, f_{ij}(X_i), j \le m_i \rangle$ are related to the same random variable $X_i$. This makes the corresponding optimization problems for determining the lower $\underline{E}h(X_i)$ and upper $\overline{E}h(X_i)$ previsions simple from the computational point of view. Second, the conditions of independence of components, or of the lack of information about their independence, are not used in the problem $M_i \to M_i^0$. Therefore, we are able to exploit the simplest form (2)-(7) of the natural extension for a specific system and initial information. Third, the models $L_{i=1}^{n} \langle \underline{E}, h(X_i) \rangle$ and $\langle \underline{E}, h(g(\tilde{X})) \rangle$ contain previsions of identical gambles h. This allows us to significantly simplify the corresponding optimization problems (see Section 4). Moreover, there exist many explicit expressions for computing the reliability of typical systems when the initial previsions concern the same gambles. Fourth, the decomposition simplifies the procedure for localizing conflicting judgments.

Corollary 1. If the initial information $L_{i=1}^{n} \langle \underline{E}_{ij}, \overline{E}_{ij}, f_{ij}(X_i), j \le m_i \rangle$ about the system reliability is conflicting, then at least one of the problems (16)-(17) does not have a solution.

Corollary 1 implies that conflicting judgments can be localized by obtaining the models $M_i^0$. Moreover, the decomposition allows us to determine which previsions (lower or upper) are inconsistent.

3.2. Independent components

Theorem 3. If conditions C1-C4 are fulfilled, then the optimization problems

$$H_0 = \min_{\rho_1,\dots,\rho_n} E_{\rho_1(x_1)\cdots\rho_n(x_n)} h(g(\tilde{X})), \qquad H^0 = \max_{\rho_1,\dots,\rho_n} E_{\rho_1(x_1)\cdots\rho_n(x_n)} h(g(\tilde{X})),$$

subject to

$$\underline{h}_i \le E_{\rho_i(x_i)} h(X_i) \le \overline{h}_i, \quad i = 1,\dots,n, \quad (24)$$

are equivalent to the optimization problems with the same objective functions and the constraints

$$\underline{h}_i \le E_{\rho_i(x_i)} h(X_i), \quad i = 1,\dots,n,$$

and

$$E_{\rho_i(x_i)} h(X_i) \le \overline{h}_i, \quad i = 1,\dots,n,$$

respectively.

Theorem 3 is similar to Theorem 1. According to Theorem 3, the lower prevision $H_0$ (upper prevision $H^0$) is defined only by the lower $\underline{h}_i$ (upper $\overline{h}_i$) previsions, respectively.



Theorem 4. Consider 2n optimization problems ($i = 1,\dots,n$)

$$\underline{h}_i = \min_{\rho_i} E_{\rho_i(x_i)} h(X_i), \qquad \overline{h}_i = \max_{\rho_i} E_{\rho_i(x_i)} h(X_i), \quad (25)$$

subject to

$$\underline{a}_{ij} \le E_{\rho_i(x_i)} f_{ij}(X_i) \le \overline{a}_{ij}, \quad j \le m_i, \quad (26)$$

and the problems

$$H_0 = \min_{\rho_1,\dots,\rho_n} E_{\rho_1(x_1)\cdots\rho_n(x_n)} h(g(\tilde{X})), \quad (27)$$

$$H^0 = \max_{\rho_1,\dots,\rho_n} E_{\rho_1(x_1)\cdots\rho_n(x_n)} h(g(\tilde{X})), \quad (28)$$

subject to

$$\underline{h}_i \le E_{\rho_i(x_i)} h(X_i) \le \overline{h}_i, \quad i = 1,\dots,n. \quad (29)$$

Let $\underline{H}$ and $\overline{H}$ be solutions to problems (2)-(3). If conditions C1-C4 are fulfilled, then there hold $\underline{H} = H_0$ and $\overline{H} = H^0$.

Theorem 4 shows that the decomposition of the model $L_{i=1}^{n} M_i \to M$ can also be used in the case of independent components. Therefore, all virtues and features of the decomposition considered for the case of the lack of information about independence remain valid for independent components. It is obvious that Theorems 1-4 are also valid if the $X_i$, $i = 1,\dots,n$, are discrete random variables. In this case, integrals and densities are replaced by sums and probability distribution functions, respectively.

4. Extending the algorithm

The optimization problems considered above for computing $\underline{h}_i$ and $\overline{h}_i$, $i = 1,\dots,n$, are rather simple and do not require taking into account the independence condition or the lack of information about independence. But problems (18)-(19), (22)-(23), and (27)-(29) remain difficult to solve for complex systems. Despite the availability of explicit expressions providing the solution of these optimization problems in various special cases [10,12,15,16], it is necessary, for the sake of generality, to develop unified simple algorithms for the numerical solution of the problems. Below we will use the following theorem, proved in Refs. [15, Theorem 3.1] and [16, Theorem 1].

Theorem 5. Denote $C = (c_1,\dots,c_n)^T$, $D = \prod_{i=1}^{n} [0,T] \subset \mathbb{R}^n$,

$$D^* = \{(T^{(i_1)},\dots,T^{(i_n)}) \mid i_j = 0, 1; \; j = 1,\dots,n; \; T^{(0)} = 0, \; T^{(1)} = T\}.$$

Consider the following optimization problems: $\max\{c_0 + \underline{A}C\}$, subject to $C \in \mathbb{R}_+$, $c_0 \in \mathbb{R}$, $\forall X \in D$, $c_0 + XC \le g(X)$; and $\min\{c_0 + \overline{A}C\}$, subject to $C \in \mathbb{R}_+$, $c_0 \in \mathbb{R}$, $\forall X \in D$, $c_0 + XC \ge g(X)$. Here $\underline{A}$ and $\overline{A}$ are vectors of lower and upper previsions of $X_i$, $i = 1,\dots,n$. Suppose that $g(X)$ satisfies conditions C2-C4. Then the system of inequalities $c_0 + XC \ge g(X)$, $C \ge 0$, is valid for $\forall X \in D$ if it is valid for $\forall X \in D^*$; the system of inequalities $c_0 + XC \le g(X)$, $C \ge 0$, is valid for $\forall X \in D$ if it is valid for $\forall X \in D^*$.

It follows from Theorem 5 that, under certain conditions, an infinite number of constraints in the optimization problems can be reduced to a finite number, namely, to at most $2^n$ constraints, where each constraint is defined only by the values 0 and T of $x_i$, $i = 1,\dots,n$. This fact makes the solution of the optimization problems simple from the computational point of view.

Let us try to adapt Theorem 5 to optimization problems (18)-(19) and (22)-(23). Introduce the following notation: $Z = (z_1,\dots,z_n)$, $z_i = h(x_i)$, and $\underline{G}(Z) = \overline{G}(Z) = g(Z)$ if the function h is non-decreasing; $\underline{G}(Z) = \overline{G}(Z) = -g(-Z)$ if the function h is non-increasing; $\underline{G}(Z) = \min\{g(Z), -g(-Z)\}$ and $\overline{G}(Z) = \max\{g(Z), -g(-Z)\}$ if the function h is non-monotone. It is worth noticing that if $g(Z)$ is the time to failure of a system determined by expression (1), then $-g(-Z)$ is the random time to failure of the corresponding dual system [24].

4.1. The lack of information about independence of components

According to Theorem 1, optimization problems (18)-(19) and (22)-(23) can be written in the form of (10)-(11) and (14)-(15), respectively, i.e., with $S = 0$.

Theorem 6. Suppose that all values of the function h lie within the bounds $\min_x h(x) = 0$ and $\max_x h(x) = T$. Then problems (18)-(19) and (22)-(23) are equivalent to the problems

$$H_0 = \max\{r_0 + \underline{H}R\}, \text{ subject to } R \in \mathbb{R}_+, \; r_0 \in \mathbb{R}, \text{ and } \forall z_i \in \{0,T\}, \; i = 1,\dots,n, \; r_0 + ZR \le \underline{G}(Z),$$

and

$$H^0 = \min\{r_0 + \overline{H}R\}, \text{ subject to } R \in \mathbb{R}_+, \; r_0 \in \mathbb{R}, \text{ and } \forall z_i \in \{0,T\}, \; i = 1,\dots,n, \; r_0 + ZR \ge \overline{G}(Z),$$

respectively.

The most important part of Theorem 6 is the condition $z_i \in \{0,T\}$; i.e., Theorem 6 states that it is not necessary to write constraints corresponding to all values $z_i \in [0,T]$ in optimization problems (18)-(19) and (22)-(23). We can write constraints only for $z_i \in \{0,T\}$. From this point of view, Theorem 6 is similar to Theorem 5 to some extent, and an infinite number of constraints is reduced to at most $2^n$ constraints. So the solution of problems (18)-(19) and (22)-(23) presents no difficulty. If the function h is unbounded, then the value of T for the numerical calculation of $H_0$ should be taken large enough in comparison with the values of the available previsions $\underline{H}$.

4.2. Independent components

Theorem 7. Suppose that all values of the function h lie within the bounds $\min_x h(x) = 0$ and $\max_x h(x) = T$, and that the function h is continuous. Let us represent the function $\underline{G}(Z)$ in the form of Eq. (1), i.e., in the form of p minimal paths $P_1,\dots,P_p$, containing $l_1,\dots,l_p$ components, and k minimal cut sets $K_1,\dots,K_k$, containing $k_1,\dots,k_k$ components. Then problems (27)-(29) have the following solution:

$$H_0 = \max_{j=1,\dots,p} \frac{1}{T^{l_j - 1}} \prod_{i \in P_j} \underline{h}_i, \quad (30)$$

$$H^0 = \min_{j=1,\dots,k} \left( T - T \prod_{i \in K_j} \left(1 - \frac{\overline{h}_i}{T}\right) \right). \quad (31)$$

If the function h takes only the two values 0 and 1, i.e., the corresponding lower and upper previsions are lower and upper probabilities of failure in an interval, then

$$H_0 = \sum_{z_i \in \{0,1\}, \, i \le n} \underline{G}(Z)\,p_1(z_1)\cdots p_n(z_n), \quad (32)$$

where $p_i(0) = 1 - \underline{h}_i$, $p_i(1) = \underline{h}_i$, and

$$H^0 = \sum_{z_i \in \{0,1\}, \, i \le n} \overline{G}(Z)\,p_1(z_1)\cdots p_n(z_n), \quad (33)$$

where $p_i(0) = 1 - \overline{h}_i$, $p_i(1) = \overline{h}_i$.

Thus, extremely simple expressions have been obtained for computing $L_{i=1}^{n} \langle \underline{E}, h(X_i) \rangle \to \langle \underline{E}, h(g(\tilde{X})) \rangle$, or $L_{i=1}^{n} M_i^0 \to M$.

Corollary 2. If the function h is unbounded, i.e., $T \to \infty$, then the lower and upper bounds for the system reliability coincide with those for the case of the lack of information about independence of components.

5. Numerical algorithm for obtaining M

By using the results of the previous sections, we can write one possible numerical algorithm for obtaining the imprecise model M of a system.

Step 1. Restrict the set of all values of $\Omega$ by 0 and T and divide the obtained interval into $N - 1$ subintervals. As a result, we obtain N points $0 = x_1,\dots,x_N = T$. If $\Omega$ is discrete, then this step is omitted.

Step 2. The optimization problem for approximately computing the lower bound $\underline{h}_i$ is

$$\underline{h}_i = \min_{p_k} \sum_{k=1}^{N} h(x_k) p_k,$$

subject to

$$\underline{a}_{ij} \le \sum_{k=1}^{N} f_{ij}(x_k) p_k \le \overline{a}_{ij}, \quad j \le m_i, \qquad \sum_{k=1}^{N} p_k = 1.$$

Here there are $2m_i + 1$ constraints and N optimization variables. Another (dual) form of the optimization problem is

$$\underline{h}_i = \max\left( c + \sum_{j=1}^{m_i} (\underline{a}_{ij} c_j - \overline{a}_{ij} d_j) \right),$$

subject to $c_j, d_j \in \mathbb{R}_+$, $c \in \mathbb{R}$, $j \le m_i$, and

$$c + \sum_{j=1}^{m_i} (c_j - d_j) f_{ij}(x_k) \le h(x_k), \quad k \le N.$$

Here there are N constraints and $2m_i + 1$ optimization variables.

Step 3. Step 2 is repeated for all $i = 1,\dots,n$. As a result, we obtain $\underline{h}_1,\dots,\underline{h}_n$.

Step 4. If there is no information about the independence of components, then

$$\underline{H} = \max\left\{ r_0 + \sum_{i=1}^{n} \underline{h}_i r_i \right\},$$

subject to $r_i \in \mathbb{R}_+$, $i \le n$, $r_0 \in \mathbb{R}$, and $\forall z_i \in \{0,T\}$,

$$r_0 + \sum_{i=1}^{n} z_i r_i \le \underline{G}(z_1,\dots,z_n).$$

If the components are independent, then expressions (30) and (32) are used for computing $\underline{H}$.

The upper bound $\overline{H}$ is computed similarly. The solution accuracy at Step 2 is determined by the value N.
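As a sketch of Step 4 (and of Theorem 6), the following code solves the vertex-constraint linear program and, for comparison, evaluates the independent-component formulas (32)-(33). It assumes scipy is available, normalizes T to 1, and uses a two-component series lifetime $\underline{G}(Z) = \min(z_1, z_2)$ with hypothetical bounds $\underline{h} = (0.9, 0.8)$ and $\overline{h} = (0.95, 0.85)$; the no-independence results reproduce the Fréchet bounds mentioned in Section 2.

```python
from itertools import product
import math
import numpy as np
from scipy.optimize import linprog

def G(z):
    """Example structure function on h-values: series lifetime, G(Z) = min(z1, z2)."""
    return min(z)

def step4_bound(h, G, lower=True):
    """Step 4 LP with constraints only at the 2^n vertices (Theorem 6, T = 1)."""
    n = len(h)
    verts = list(product([0.0, 1.0], repeat=n))
    A = np.array([[1.0, *z] for z in verts])   # rows encode r0 + z . r
    g = np.array([G(z) for z in verts])
    bounds = [(None, None)] + [(0, None)] * n  # r0 free, r_i >= 0
    if lower:  # maximize r0 + h . r  subject to  r0 + z . r <= G(z)
        res = linprog(-np.concatenate(([1.0], h)), A_ub=A, b_ub=g, bounds=bounds)
        return -res.fun
    # minimize r0 + h . r  subject to  r0 + z . r >= G(z)
    res = linprog(np.concatenate(([1.0], h)), A_ub=-A, b_ub=-g, bounds=bounds)
    return res.fun

def independent_bound(h, G):
    """Eqs. (32)-(33): h takes only the values 0 and 1, components independent."""
    n = len(h)
    return sum(G(z) * math.prod(h[i] if z[i] else 1.0 - h[i] for i in range(n))
               for z in product([0, 1], repeat=n))

h_lo, h_up = np.array([0.9, 0.8]), np.array([0.95, 0.85])
print(step4_bound(h_lo, G, lower=True))   # 0.70 = max(0.9 + 0.8 - 1, 0), Frechet
print(step4_bound(h_up, G, lower=False))  # 0.85 = min(0.95, 0.85)
print(independent_bound(h_lo, G))         # 0.72 = 0.9 * 0.8
print(independent_bound(h_up, G))         # 0.8075 = 0.95 * 0.85
```

Note how the independence assumption tightens both bounds relative to the Fréchet case, in line with the remark in Section 2.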

6. Numerical example

Let us consider a series-parallel system consisting of four components (see Fig. 1). The following information is available about the reliability of the components.

Component 1:
1. The probability of failure before 10 h is between 0.001 and 0.01;
2. The probability of failure before 7 h is 0.0005.

Component 2:
1. The MTTF is greater than 50 h;
2. The second moment of the time to failure is between 2000 and 2600.

Component 3:
1. The probability of failure after 20 h is 0.999.

Component 4:
1. The MTTF is 40 h;
2. The probability of failure after 12 h is between 0.95 and 0.99.

Fig. 1. The series-parallel system.

The formal representation of the available information is given in Table 1.

Table 1
Initial information about reliability of components (C = component, J = judgement)

C  J  $f_{ij}(X_i)$            $\underline{E}f_{ij}$  $\overline{E}f_{ij}$
1  1  $I_{[0,10]}(X_1)$        0.001                  0.01
1  2  $I_{[0,7]}(X_1)$         0.0005                 0.0005
2  1  $X_2$                    50                     $\infty$
2  2  $X_2^2$                  2000                   2600
3  1  $I_{[20,\infty)}(X_3)$   0.999                  0.999
4  1  $X_4$                    40                     40
4  2  $I_{[12,\infty)}(X_4)$   0.95                   0.99

Table 2
Results of computing $\underline{h}_i$ and $\overline{h}_i$

Component  $\underline{h}_i$  $\overline{h}_i$
1          0.0005             0.01
2          0                  0.052
3          0                  0.001
4          0                  0.05

Suppose that we have to find the lower and upper probabilities of the system failure before time 8 h, i.e., the previsions $\underline{E}I_{[0,8]}(g)$ and $\overline{E}I_{[0,8]}(g)$. Here $g(\tilde{X}) = \min\{\max(X_1,X_2), \max(X_3,X_4)\}$. The optimization problems for computing the previsions $\underline{E}I_{[0,8]}(g)$ and $\overline{E}I_{[0,8]}(g)$ are of the form

$$\underline{H} = \underline{E}h(g) = \min_{\rho(x_1,\dots,x_4)} E_{\rho(x_1,\dots,x_4)} I_{[0,8]}(g), \qquad \overline{H} = \overline{E}h(g) = \max_{\rho(x_1,\dots,x_4)} E_{\rho(x_1,\dots,x_4)} I_{[0,8]}(g),$$

subject to

$$0.001 \le E_{\rho_1(x_1)} I_{[0,10]}(X_1) \le 0.01, \quad 0.0005 \le E_{\rho_1(x_1)} I_{[0,7]}(X_1) \le 0.0005,$$

$$50 \le E_{\rho_2(x_2)} X_2, \quad 2000 \le E_{\rho_2(x_2)} X_2^2 \le 2600, \quad 0.999 \le E_{\rho_3(x_3)} I_{[20,\infty)}(X_3) \le 0.999,$$

$$40 \le E_{\rho_4(x_4)} X_4 \le 40, \quad 0.95 \le E_{\rho_4(x_4)} I_{[12,\infty)}(X_4) \le 0.99.$$

It is obvious that this problem is very complex from the computational point of view, and there is no possibility of solving it directly by means of known numerical and analytical methods. Therefore, we use the proposed algorithm.

Let us find $\underline{h}_i$ and $\overline{h}_i$, $i = 1, 2, 3, 4$. For example, the optimization problem for computing $\underline{h}_1$ is

$$\underline{h}_1 = \min_{\rho_1} E_{\rho_1(x_1)} I_{[0,8]}(X_1),$$

subject to

$$0.001 \le E_{\rho_1(x_1)} I_{[0,10]}(X_1) \le 0.01, \quad 0.0005 \le E_{\rho_1(x_1)} I_{[0,7]}(X_1) \le 0.0005.$$
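Before passing to the dual problem, this component-level problem can be checked numerically along the lines of Steps 1 and 2 of Section 5. The sketch below assumes scipy is available and uses an arbitrary horizon T = 100 with N = 201 grid points in place of the true (unknown) support.

```python
import numpy as np
from scipy.optimize import linprog

T, N = 100.0, 201
x = np.linspace(0.0, T, N)         # Step 1: grid 0 = x_1, ..., x_N = T

h   = (x <= 8.0).astype(float)     # gamble h(x) = I_[0,8](x)
f11 = (x <= 10.0).astype(float)    # judgement 1: P(X1 <= 10) in [0.001, 0.01]
f12 = (x <= 7.0).astype(float)     # judgement 2: P(X1 <= 7) = 0.0005

A_ub = np.vstack([f11, -f11])      # 0.001 <= f11 . p <= 0.01
b_ub = np.array([0.01, -0.001])
A_eq = np.vstack([f12, np.ones(N)])  # f12 . p = 0.0005 and sum(p) = 1
b_eq = np.array([0.0005, 1.0])

lo = linprog(h, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq).fun
up = -linprog(-h, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq).fun
print(lo, up)                      # 0.0005 and 0.01, matching Table 2
```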

The corresponding dual problem is

$$\underline{h}_1 = \max\{c + 0.001 c_{11} - 0.01 d_{11} + 0.0005 c_{12} - 0.0005 d_{12}\},$$

subject to $c_{11}, d_{11}, c_{12}, d_{12} \in \mathbb{R}_+$, $c \in \mathbb{R}$, and $\forall x_1 \in \mathbb{R}_+$,

$$c + I_{[0,10]}(x_1)(c_{11} - d_{11}) + I_{[0,7]}(x_1)(c_{12} - d_{12}) \le I_{[0,8]}(x_1).$$

These problems can be easily solved by the well-known simplex method [33]. The solution is $\underline{h}_1 = 0.0005$. The results of computing $\underline{h}_i$ and $\overline{h}_i$ are given in Table 2. By using the results of Ref. [16], we can write explicit expressions for computing the lower and upper probabilities of failure before 8 h. If we do not know about the independence of the components, then

$$\underline{H} = \max\{\max(\underline{h}_1 + \underline{h}_2 - 1, 0), \max(\underline{h}_3 + \underline{h}_4 - 1, 0)\} = 0,$$

$$\overline{H} = \min\{\min(\overline{h}_1, \overline{h}_2) + \min(\overline{h}_3, \overline{h}_4), 1\} = 0.011.$$

If the components are independent, then

$$\underline{H} = 1 - (1 - \underline{h}_1 \underline{h}_2)(1 - \underline{h}_3 \underline{h}_4) = 0, \qquad \overline{H} = 1 - (1 - \overline{h}_1 \overline{h}_2)(1 - \overline{h}_3 \overline{h}_4) = 5.6997 \times 10^{-4}.$$

The same results can be obtained by using the numerical algorithm. The function h is non-increasing and takes the values 0 and 1. Therefore, Theorems 6 and 7 can be used under the condition

$$\underline{G}(Z) = \overline{G}(Z) = -g(-x_1,\dots,-x_4) = \max\{\min(x_1,x_2), \min(x_3,x_4)\}.$$

7. Conclusion remarks

This paper can be regarded as an attempt to find a way out when it is impossible to calculate the system reliability


by using only the available information, without any additional, sometimes erroneous, assumptions about the component reliability behavior. This impossibility is due to the extreme complexity and large dimensionality of the optimization problems that have to be solved to obtain the correct system reliability measures. An algorithm for decomposing the difficult optimization problems for computing system reliability under partial initial information about the component reliability has been proposed in the paper. The algorithm is vitally important because it allows us to analyze the reliability of many systems. The numerical example shows that this algorithm is very efficient and does not require enormous computing power. At the same time, it should not be considered a preferred alternative to the algorithm described in Ref. [23]. On the one hand, it can be regarded as an extension of that algorithm. On the other hand, it may be more efficient for computing the reliability measures of monotone systems.

The algorithm has a very important additional virtue. Suppose that a precise probability distribution $F_i(t)$ of the time to failure of one of the components is available. In this case, problems (2) and (3) have an infinite number of constraints of the type $E_\rho I_{[0,t]}(X_i) = F_i(t)$, where $t \in \Omega$. Obviously, Eqs. (2) and (3) cannot be solved directly. The algorithm proposed in Ref. [21] cannot cope with this situation either. At the same time, having the precise probability distribution of a random variable, we may analytically or numerically compute the expectation $\underline{E}h(X_i) = \overline{E}h(X_i)$ of an arbitrary function $h(X_i)$ and construct the imprecise reliability model $M_i^0 = \langle \underline{E}, \overline{E}, h(X_i) \rangle$, which is needed for computing $\underline{E}h(g)$ and $\overline{E}h(g)$. This feature of the algorithm allows us to partially answer the following question: why should imprecise probabilities be applied to reliability analysis? Suppose that we analyze a system whose components are described by precise probability distributions $F_i(t)$, $i = 1,\dots,n-1$, of time to failure with precisely known parameters, but the information about one of the components is partial, for example, only the probability of failure before time $t_n$ is known. If we have to find the probability of the system failure before time $t_0$, then, according to the introduced algorithm, the precision of the information about the $n - 1$ components does not influence the precision of the desired solution, which is determined mainly by the information about the n-th, imprecise, component. Hence, the precise distributions are useless in this case. This is an example of how the imprecision of information about one component may cancel complete information about the other components. Imprecise probability theory allows us to explain this example and to avoid possible errors in reliability analysis.

The application of the proposed simplified algorithm for computing system reliability is not restricted to monotone systems. It can be said that arbitrary monotone systems with arbitrary initial information in the form of Eq. (3) can be analyzed by means of this algorithm. At the same time, conditions C1-C4 can be extended so that the conclusions of Theorems 1-4 remain valid. This extension is that the function $h(g)$ must be representable as a non-decreasing function $g(h(X_1),\dots,h(X_n))$. This implies that some special cases of a wider class of systems may be considered. For example, if $h(g) = g$, i.e., we want to find the system MTTF, and $g(\tilde{X}) = X_1 + \cdots + X_n$, i.e., a cold standby system [34] is analyzed, then

$$h\left(\sum_{i=1}^{n} X_i\right) = \sum_{i=1}^{n} X_i = \sum_{i=1}^{n} h(X_i).$$

Therefore, Theorems 1-4 are valid in this case and the proposed algorithm can be used.

Another limitation of the algorithm should be noted. The algorithm allows us to compute the system reliability if there are comparative judgements concerning one component, for example, that the probability of the i-th component failing before time $t_1$ is less than the probability of the same component failing after time $t_2$. However, it does not work if there are comparative judgements about the reliability of different components [11], for example, that the first component MTTF is less than the second component MTTF. Therefore, further work is needed to develop efficient methods for calculating the reliability of systems under various assumptions and for extending the proposed algorithm.

Appendix A

Proof of Theorem 1. Denote $Q = R - S$. If $X \ge 0$ and $h(g(X)) \ge 0$, then problem (8)-(9) can be rewritten as follows:

$$\max(r_0 + \underline{H}R - \overline{H}R + \overline{H}Q),$$

subject to $R \in \mathbb{R}_+$, $Q \in \mathbb{R}$, $r_0 \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$r_0 + HQ \le h(g(X)).$$

Since $S \ge 0$, then $R \ge Q$. Since $\overline{H} - \underline{H} \ge 0$, the objective function is maximal when $R = \max\{0, Q\}$ (componentwise). Hence

$$\max(r_0 + \min(\underline{H}Q, \overline{H}Q)),$$

subject to $Q \in \mathbb{R}$, $r_0 \in \mathbb{R}$, and $\forall X \in \Omega^n$,

$$r_0 + HQ \le h(g(X)).$$

Note that $(r_0, 0,\dots,0)$ is a feasible point of problem (8)-(9); for this point, the value of the objective function is equal to $r_0$. Suppose that h is non-decreasing for all $X \in \Omega_1^n \subseteq \Omega^n$ and is non-increasing for all $X \in \Omega_2^n \subseteq \Omega^n$. Then

$$h(g(X)) = g(H), \; \forall X \in \Omega_1^n; \qquad h(g(X)) = -g(-H), \; \forall X \in \Omega_2^n.$$

Denote $N = \{1, 2,\dots,n\}$, $J \subseteq N$, and

$$\tilde{q}_k = \begin{cases} q_k, & k \notin J \\ 0, & k \in J \end{cases}, \qquad \tilde{h}(x_k) = \begin{cases} h(x_k), & k \notin J \\ 0, & k \in J \end{cases}.$$

If $(r_0, q_1,\dots,q_n)$ is a feasible solution and $q_k < 0$ holds for all $k \in J$, then $(r_0, \tilde{q}_1,\dots,\tilde{q}_n)$ is a feasible solution corresponding to a greater value of the objective function. Indeed, there holds

$$r_0 + \sum_{i \in N} \tilde{q}_i h(x_i) = r_0 + \sum_{i \in N \setminus J} q_i h(x_i) \le \begin{cases} g(\tilde{H}), & X \in \Omega_1^n \\ -g(-\tilde{H}), & X \in \Omega_2^n \end{cases} \le h(g(X)).$$

This implies that $q_k \ge 0$ for all $k \in J$. Since the set J can be arbitrary, there holds $Q \ge 0$. Hence

$$\min(\underline{H}Q, \overline{H}Q) = \underline{H}Q = \underline{H}R,$$

as was to be proved. The proof for the upper bound is similar (see also [16, Theorem 3]). □

Proof of Theorem 2. According to Theorem 1, $S = 0$ in Eqs. (18) and (19). Let $(c_i', C_i', D_i')$, $i = 1,\dots,n$, be optimal solutions to problems (16) and (17), i.e.,

$$\underline{h}_i = c_i' + \underline{A}_i C_i' - \overline{A}_i D_i'.$$

After substituting the values $\underline{h}_i$, $i = 1,\dots,n$, into objective function (18) under the condition $S = 0$, we get

$$H_0 = \max\left\{ r_0 + \sum_{i=1}^{n} (c_i' + \underline{A}_i C_i' - \overline{A}_i D_i') r_i \right\} = \max\left\{ c + \sum_{i=1}^{n} (\underline{A}_i C_i - \overline{A}_i D_i) \right\},$$

where $c = r_0 + \sum_{i=1}^{n} c_i' r_i$, $C_i = C_i' r_i$, $D_i = D_i' r_i$. After substituting constraints (17) into Eq. (19), we get

$$r_0 + \sum_{i=1}^{n} (c_i' + F_i(C_i' - D_i')) r_i \le h(g(X)).$$

By using the above notation for $c$, $C_i$, and $D_i$, we can rewrite the above constraints as

$$c + \sum_{i=1}^{n} F_i(C_i - D_i) \le h(g(X)).$$

Since $C_i \ge 0$ and $D_i \ge 0$, we have obtained optimization problem (4)-(5), as was to be proved. The equality $\overline{H} = H^0$ is similarly proved. □

Proof of Corollary 1. The proof follows from Theorem 2. Optimization problem (18)-(19) always has a solution with $\underline{H} \le \overline{H}$, because every pair of previsions $(\underline{h}_i, \overline{h}_i)$ concerns a separate variable $x_i$. This implies that conflicting judgements can be related only to one of the components. Then one of the problems (16)-(17) does not have a solution. □

Proof of Theorem 3. Let us consider the case $n = 2$ for simplicity. The objective function can be rewritten as

$$H_0 = \min_{\rho_1, \rho_2} E_{\rho_2(x_2)} (E_{\rho_1(x_1)} h(g(X_1, X_2))).$$

Denote

$$\underline{H}(x_2) = \min_{\rho_1} E_{\rho_1(x_1)} h(g(X_1, x_2)), \text{ subject to } \underline{h}_1 \le E_{\rho_1(x_1)} h(X_1) \le \overline{h}_1.$$

Then $H_0 = \min_{\rho_2} E_{\rho_2(x_2)} \underline{H}(x_2)$. Let us represent the problem for computing $\underline{H}(x_2)$ in the form of the dual optimization problem

$$\underline{H}(x_2) = \max\{r_0 + \underline{h}_1 r_1 - \overline{h}_1 s_1\},$$

subject to $r_1, s_1 \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall x_1 \in \Omega$,

$$r_0 + h(x_1) r_1 - h(x_1) s_1 \le h(g(x_1, x_2)).$$

According to Theorem 1, the above problem can be rewritten as $H_0(x_2) = \max\{r_0 + \underline{h}_1 r_1\}$, subject to $r_1 \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall x_1 \in \Omega$, $r_0 + h(x_1) r_1 \le h(g(x_1, x_2))$. This implies that $\underline{H}(x_2)$ is defined only by the lower prevision $\underline{h}_1$. Then $H_0$ does not depend on $\overline{h}_1$. We can do the same by representing the objective function in the problem for computing $H_0$ as

$$H_0 = \min_{\rho_1} E_{\rho_1(x_1)} \underline{H}(x_1), \text{ where } \underline{H}(x_1) = \min_{\rho_2} E_{\rho_2(x_2)} h(g(x_1, x_2)).$$

In this case, $H_0$ does not depend on $\overline{h}_2$, as was to be proved. The proof can easily be extended to the case of arbitrary n. The equivalence of the problems for computing the upper bound is similarly proved. □

Proof of Theorem 4. Suppose that $n = 2$ for simplicity, and consider the proof for the lower bound. Let us rewrite the objective function for computing $\underline{H}$ in Eq. (2) as follows:

$$\underline{H} = \min_{\rho_1, \rho_2} E_{\rho_2(x_2)} (E_{\rho_1(x_1)} h(g(X_1, X_2))).$$

Denote

$$\underline{H}(x_2) = \min_{\rho_1} E_{\rho_1(x_1)} h(g(X_1, X_2)), \text{ subject to } \underline{a}_{1j} \le E_{\rho_1(x_1)} f_{1j}(X_1) \le \overline{a}_{1j}, \; j \le m_1.$$

Then $\underline{H} = \min_{\rho_2} E_{\rho_2(x_2)} \underline{H}(x_2)$. Let us represent the problem for computing $\underline{H}(x_2)$ in the form of the dual optimization problem

$$\underline{H}(x_2) = \max\{c + \underline{A}_1 C_1 - \overline{A}_1 D_1\}, \quad (A1)$$

subject to $C_1, D_1 \in \mathbb{R}_+$, $c \in \mathbb{R}$, and $\forall x_1 \in \Omega$,

$$c + F_1(C_1 - D_1) \le h(g(x_1, x_2)). \quad (A2)$$

Taking into account Theorem 3, the same can be done for problem (27)-(29), i.e.,

$$H_0(x_2) = \max\{r_0 + \underline{h}_1 r_1\}, \quad (A3)$$

subject to $r_1 \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall x_1 \in \Omega$,

$$r_0 + h(x_1) r_1 \le h(g(x_1, x_2)). \quad (A4)$$

Since the i-th problem in Eqs. (25) and (26) deals with one random variable, the corresponding dual optimization problem is of the form (16)-(17). Let $(c_1', C_1', D_1')$ be the optimal solution to the first problem in Eqs. (16) and (17), i.e.,

$$\underline{h}_1 = c_1' + \underline{A}_1 C_1' - \overline{A}_1 D_1'.$$

After substituting the value $\underline{h}_1$ into objective function (A3), we get

$$H_0(x_2) = \max\{r_0 + r_1 (c_1' + \underline{A}_1 C_1' - \overline{A}_1 D_1')\} = \max\{c + \underline{A}_1 C_1 - \overline{A}_1 D_1\},$$

where $c = r_0 + c_1' r_1$, $C_1 = C_1' r_1$, $D_1 = D_1' r_1$. After substituting constraints (17) into Eq. (A4), we get

$$r_0 + (c_1' + F_1(C_1' - D_1')) r_1 \le h(g(x_1, x_2)).$$

By using the above notation for $c$, $C_1$, and $D_1$, we can rewrite the above constraint as

$$c + F_1(C_1 - D_1) \le h(g(x_1, x_2)).$$

Since $C_1 \ge 0$ and $D_1 \ge 0$, we have obtained optimization problem (A1)-(A2); in other words, $\underline{H}(x_2) = H_0(x_2)$. Now we have the following problems: $\underline{H} = \min_{\rho_2} E_{\rho_2(x_2)} \underline{H}(x_2)$, subject to $\underline{a}_{2j} \le E_{\rho_2(x_2)} f_{2j}(X_2) \le \overline{a}_{2j}$, $j \le m_2$; and $H_0 = \min_{\rho_2} E_{\rho_2(x_2)} H_0(x_2)$, subject to $\underline{h}_2 \le E_{\rho_2(x_2)} h(X_2)$. The corresponding dual problems are

$$\underline{H} = \max\{c + \underline{A}_2 C_2 - \overline{A}_2 D_2\},$$

subject to $C_2, D_2 \in \mathbb{R}_+$, $c \in \mathbb{R}$, and $\forall x \in \Omega$, $c + F_2(C_2 - D_2) \le \underline{H}(x)$; and $H_0 = \max\{r_0 + \underline{h}_2 r_2\}$, subject to $r_2 \in \mathbb{R}_+$, $r_0 \in \mathbb{R}$, and $\forall x \in \Omega$, $r_0 + h(x) r_2 \le H_0(x) = \underline{H}(x)$. By using the same reasoning, we get $\underline{H} = H_0$, as was to be proved. The proof is obviously generalized to the case of arbitrary n and is similarly carried out for the upper bound. □

Proof of Theorem 6. Consider the problem for computing the lower prevision. Suppose that h is non-decreasing for all $X \in \Omega_1^n \subseteq \Omega^n$ and is non-increasing for all $X \in \Omega_2^n \subseteq \Omega^n$. Then (see the proof of Theorem 1)

$$h(g(X)) = g(H), \; \forall X \in \Omega_1^n; \qquad h(g(X)) = -g(-H), \; \forall X \in \Omega_2^n.$$

If the function h is non-monotone, then it takes the same value at some two points $X_1 \in \Omega_1^n$ and $X_2 \in \Omega_2^n$. Then we have the two constraints

$$r_0 + HR \le h(g(X_1)), \qquad r_0 + HR \le h(g(X_2)).$$

Since the left-hand sides of these constraints coincide, both constraints follow from the single constraint $r_0 + HR \le \min(h(g(X_1)), h(g(X_2)))$. This implies that we can retain only the last constraint instead of the two initial constraints. In particular, if the function h is non-decreasing, then $\Omega_2^n = \emptyset$ and $\min(h(g(X_1)), h(g(X_2))) = h(g(X_1))$. The same can be said for the case of a non-increasing function h. Then constraints (11) can be rewritten as $r_0 + ZR \le \underline{G}(Z)$. Note that the function $\underline{G}$ satisfies all the conditions imposed on the function g in Theorem 5, and $z_i$ takes values from the interval $[0,T]$. Consequently, according to Theorem 5, the constraints $r_0 + ZR \le \underline{G}(Z)$ may be defined only for $z_i \in \{0,T\}$, $i = 1,\dots,n$, as was to be proved. The case of the upper prevision is similarly proved. □

Proof of Theorem 7. The case $h(x_i) \in \{0,1\}$ is obvious. Therefore, we consider the more difficult case when $h(x_i) \in [0,T]$. Let $z_i = h(x_i)$. Then optimization problems (27)-(29) coincide with the problem for computing the lower and upper MTTFs of a system. According to Ref. [12], for a series system ($G_s(Z) = \min_{i=1,\dots,n} z_i$) there hold

$$\underline{E}G_s = \frac{1}{T^{n-1}} \prod_{i=1}^{n} \underline{h}_i, \qquad \overline{E}G_s = \min_{i=1,\dots,n} \overline{h}_i,$$

and for a parallel system ($G_p(Z) = \max_{i=1,\dots,n} z_i$)

$$\underline{E}G_p = \max_{i=1,\dots,n} \underline{h}_i, \qquad \overline{E}G_p = T - T \prod_{i=1}^{n} \left(1 - \frac{\overline{h}_i}{T}\right).$$

Let us consider the lower bound. Note that the function $\underline{G}(Z)$ can be represented in the form of Eq. (1), i.e., the system can be represented as a parallel system whose components are series systems (minimal paths). The lower prevision (lower MTTF) of the j-th minimal path is defined by the expression $T^{1-l_j} \prod_{i \in P_j} \underline{h}_i$. It is worth noticing that the lower prevision of a parallel system does not depend on the condition of independence of components. Therefore, identical components in different minimal paths do not influence the lower bound. Hence, we get Eq. (30). According to Eq. (1), the system can also be represented as a series system whose components are parallel systems (minimal cuts). The upper prevision (upper MTTF) of the j-th minimal cut is defined by the expression $T - T \prod_{i \in K_j} (1 - \overline{h}_i / T)$. The upper prevision of a series system does not depend on the condition of independence of components. Therefore, identical components in different minimal cuts do not influence the upper bound. Hence, we get Eq. (31). □



Proof of Corollary 2. If $T \to \infty$, then the expressions for the lower and upper previsions of series and parallel systems (see the proof of Theorem 7) become

$$\underline{E}G_s = 0, \qquad \overline{E}G_s = \min_{i=1,\dots,n} \overline{h}_i, \qquad \underline{E}G_p = \max_{i=1,\dots,n} \underline{h}_i, \qquad \overline{E}G_p = \sum_{i=1}^{n} \overline{h}_i.$$

These expressions coincide with the corresponding expressions obtained for the case of the lack of information about independence [12], as was to be proved. □
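For completeness, the parallel-system limit can be checked by expanding the product (a one-line verification, not part of the original proof):

$$T - T \prod_{i=1}^{n} \left(1 - \frac{\overline{h}_i}{T}\right) = T - T \left(1 - \frac{1}{T} \sum_{i=1}^{n} \overline{h}_i + O(T^{-2})\right) = \sum_{i=1}^{n} \overline{h}_i + O(T^{-1}) \xrightarrow{T \to \infty} \sum_{i=1}^{n} \overline{h}_i,$$

while $\underline{E}G_s = T^{-(n-1)} \prod_{i=1}^{n} \underline{h}_i \to 0$ for $n \ge 2$ and bounded $\underline{h}_i$.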

References

[1] Walley P. Statistical reasoning with imprecise probabilities. London: Chapman & Hall; 1991.
[2] Walley P. Measures of uncertainty in expert systems. Artificial Intell 1996;83:1-58.
[3] Kuznetsov VP. Interval statistical models. Moscow: Radio and Communication; 1991. In Russian.
[4] Weichselberger K. The theory of interval-probability as a unifying concept for uncertainty. Int J Approx Reason 2000;24:149-70.
[5] Weichselberger K. Elementare Grundbegriffe einer allgemeineren Wahrscheinlichkeitsrechnung. Intervallwahrscheinlichkeit als umfassendes Konzept, vol. 1. Heidelberg: Physica; 2001.
[6] Coolen FPA. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations. Reliab Engng Syst Safety 1997;56:61-8.
[7] Coolen FPA, Newby MJ. Bayesian reliability analysis with imprecise prior probabilities. Reliab Engng Syst Safety 1994;43:75-85.
[8] Coolen FPA, Yan KJ. The use of right-censored data in nonparametric predictive inference. In: Langseth H, Lindqvist B, editors. Proceedings of the Third International Conference on Mathematical Methods in Reliability (Methodology and Practice), Trondheim, Norway. NTNU; 2002. p. 155-8.
[9] Kozine I, Filimonov Y. Imprecise reliabilities: experiences and advances. Reliab Engng Syst Safety 2000;67:75-83.
[10] Utkin LV. General reliability theory on the basis of upper and lower previsions. In: Ruan D, Abderrahim HA, D'hondt P, Kerre EE, editors. Fuzzy logic and intelligent technologies for nuclear science and industry. Proceedings of the Third International FLINS Workshop, Antwerp, Belgium; 1998. p. 36-43.
[11] Utkin LV. Imprecise reliability analysis by comparative judgements. In: Proceedings of the Second International Conference on Mathematical Methods in Reliability, Bordeaux, France, vol. 2; 2000. p. 1005-8.
[12] Utkin LV, Gurov SV. New reliability models based on imprecise probabilities. In: Hsu C, editor. Advanced signal processing technology. Singapore: World Scientific; 2001. chapter 6, p. 110-39.
[13] Utkin LV, Kozine IO. Conditional previsions in imprecise reliability. In: Ruan D, Abderrahim HA, D'Hondt P, editors. Intelligent techniques and soft computing in nuclear science and engineering, Bruges, Belgium. Singapore: World Scientific; 2000. p. 72-9.
[14] Utkin LV. Imprecise reliability of cold standby systems. Int J Qual Reliab Mgmt 2003;20(6):722-39.
[15] Gurov SV, Utkin LV. Reliability of systems under incomplete information. Saint Petersburg: Lubavich Publishers; 1999. In Russian.
[16] Utkin LV, Gurov SV. Imprecise reliability of general structures. Knowledge Inform Syst 1999;1(4):459-80.
[17] Utkin LV, Kozine IO. A reliability model of multi-state units under partial information. In: Langseth H, Lindqvist B, editors. Proceedings of the Third International Conference on Mathematical Methods in Reliability (Methodology and Practice), Trondheim, Norway. NTNU; 2002. p. 643-6.
[18] Hall J, Lawry J. Imprecise probabilities of engineering system failure from random and fuzzy set reliability analysis. In: de Cooman G, Fine TL, Seidenfeld T, editors. Imprecise probabilities and their applications. Proceedings of the Second International Symposium ISIPTA'01, Ithaca, USA. Shaker Publishing; 2001. p. 195-204.
[19] Tonon F, Bernardini A, Mammino A. Determination of parameters range in rock engineering by means of random set theory. Reliab Engng Syst Safety 2000;70(3):241-61.
[20] Tonon F, Bernardini A, Mammino A. Reliability analysis of rock mass response by means of random set theory. Reliab Engng Syst Safety 2000;70(3):263-82.
[21] Utkin LV, Kozine IO. Structural reliability modelling under partial source information. In: Langseth H, Lindqvist B, editors. Proceedings of the Third International Conference on Mathematical Methods in Reliability (Methodology and Practice), Trondheim, Norway. NTNU; 2002. p. 647-50.
[22] Utkin LV, Kozine IO. Stress-strength reliability models under incomplete information. Int J General Syst 2002;31(6):549-68.
[23] Utkin LV, Kozine IO. Computing the reliability of complex systems. In: de Cooman G, Fine TL, Seidenfeld T, editors. Imprecise probabilities and their applications. Proceedings of the Second International Symposium ISIPTA'01, Ithaca, USA. Shaker Publishing; 2001. p. 324-31.
[24] Barlow RE, Proschan F. Statistical theory of reliability and life testing: probability models. New York: Holt, Rinehart and Winston; 1975.
[25] Barlow RE, Wu AS. Coherent systems with multistate components. Math Ops Res 1978;3:275-81.
[26] Troffaes MCM, de Cooman G. Extension of coherent lower previsions to unbounded random variables. In: Proceedings of the Ninth International Conference IPMU 2002 (Information Processing and Management of Uncertainty), Annecy, France. ESIA-University of Savoie; 2002. p. 735-42.
[27] Troffaes MCM, de Cooman G. Lower previsions for unbounded random variables. In: Grzegorzewski P, Hryniewicz O, Gil MA, editors. Soft methods in probability, statistics and data analysis. Heidelberg, New York: Physica; 2002. p. 146-55.
[28] Utkin LV, Kozine IO. Different faces of the natural extension. In: de Cooman G, Fine TL, Seidenfeld T, editors. Imprecise probabilities and their applications. Proceedings of the Second International Symposium ISIPTA'01, Ithaca, USA. Shaker Publishing; 2001. p. 316-23.
[29] Fréchet M. Généralisations du théorème des probabilités totales. Fundamenta Math 1935;25:379-87.
[30] Utkin LV. Avoiding the conflicting risk assessments. In: Proceedings of the International Scientific School 'Modelling and Analysis of Safety, Risk and Quality in Complex Systems', Saint Petersburg, Russia; 2002. p. 58-62.
[31] Utkin LV. Imprecise second-order hierarchical uncertainty model. Int J Uncertainty, Fuzziness Knowledge-Based Syst 2003;11(3):301-17.
[32] Utkin LV. A second-order uncertainty model for the calculation of the interval system reliability. Reliab Engng Syst Safety 2003;79(3):341-51.
[33] Dantzig GB. Linear programming and extensions. Princeton, NJ: Princeton University Press; 1963.
[34] Kumar A, Agarwal M. A review of standby redundant systems. IEEE Trans Reliab 1980;27(4):290-4.

[17] Utkin LV, Kozine IO. A reliability model of multi-state units under partial information. In: Langseth H, Lindqvist B, editors. Proceedings of the Third International Conference on Mathematical Methods in Reliability (Methodology and Practice), Trondheim, Norway, NTNU; 2002. p. 643– 6. [18] Hall J, Lawry J. Imprecise probabilities of engineering system failure from random and fuzzy set reliability analysis. In: de Cooman G, Fine TL, Seidenfeld T, editors. Imprecise Probabilities and Their Applications. Proceedings of the First Inter Symposium ISIPTA’01, Ithaca, USA, Shaker Publishing; 2001. p. 195–204. [19] Tonon F, Bernardini A, Mammino A. Determination of parameters range in rock engineering by means of random set theory. Reliab Engng Syst Safety 2000;70(3):241 –61. [20] Tonon F, Bernardini A, Mammino A. Reliability analysis of rock mass response by means of random set theory. Reliab Engng Syst Safety 2000;70(3):263–82. [21] Utkin LV, Kozine IO. Structural reliability modelling under partial source information. In: Langseth H, Lindqvist B, editors. Proceedings of the Third International Conference on Mathematical Methods in Reliability (Methodology and Practice), Trondheim, Norway, NTNU; 2002. p. 647– 50. [22] Utkin LV, Kozine IO. Stress-strength reliability models under incomplete information. Int J General Syst 2002;31(6):549–68. [23] Utkin LV, Kozine IO. Computing the reliability of complex systems. In: de Cooman G, Fine TL, Seidenfeld T, editors. Imprecise probabilities and their applications. Proceedings of the Second International Symposium ISIPTA’01, Ithaca, USA, Shaker Publishing; 2001. p. 324 –31. [24] Barlow RE, Proschan F. Statistical theory of reliability and life testing: probability models. New York: Rinehart and Winston; 1975. [25] Barlow RE, Wu AS. Coherent systems with multistate components. Math Ops Res 1978;3:275–81. [26] Troffaes MCM, de Cooman G. Extension of coherent lower previsions to unbounded random variables. Proceedings of the Ninth International Conference IPMU 2002 (Information Processing and Management),Annecy, France, ESIA—University of Savoie; 2002. pp. 735–42. [27] Troffaes MCM, de Cooman G. Lower previsions for unbounded random variables. In: Grzegorzewski P, Hryniewicz O, Gil MA, editors. Soft methods in probability, statistics and data analysis. Heidelberg, New York: Phisica; 2002. p. 146–55. [28] Utkin LV, Kozine IO. Different faces of the natural extension. In: de Cooman G, Fine TL, Seidenfeld T, editors. Imprecise probabilities and their applications. Proceedings of the Second International Symposium ISIPTA’01, Ithaca, USA, Shaker Publishing; 2001. p. 316 –23. [29] Frechet M. Generalizations du theoreme des probabilities totales. Fundamenta Math 1935;25:379–87. [30] Utkin LV. Avoiding the conflicting risk assessments. Proceedings of International Scientific School Modelling and Analysis of Safety, Risk and Quality in Complex Systems, Saint Petersburg, Russia; 2002. pp. 58– 62. [31] Utkin LV. Imprecise second-order hierarchical uncertainty model. Int J Uncertainty, Fuzziness Knowledge-Based Syst 2003;11(3): 301 –17. [32] Utkin LV. A second-order uncertainty model for the calculation of the interval system reliability. Reliab Engng Syst Safety 2003;79(3): 341 –51. [33] Dantzig C. Linear programming and extensions. Princeton, NJ: Princeton University Press; 1963. [34] Kumar A, Agarwal M. A review of standby redundant systems. IEEE Trans Reliab 1980;27(4):290 –4.