Reliability Engineering and System Safety 79 (2003) 341–351 www.elsevier.com/locate/ress
A second-order uncertainty model for calculation of the interval system reliability

Lev V. Utkin*
Institute of Statistics, Munich University, Ludwigstr. 33, Munich 80539, Germany

Received 2 May 2002; revised 18 September 2002; accepted 23 October 2002

*Tel.: +49-89-21803198. E-mail address: [email protected] (L.V. Utkin).
Abstract

A second-order uncertainty model of the system reliability is studied in the paper. This model takes into account the fact that the reliability description of the system components is itself unreliable and carries some degree of belief. It is assumed that there is no information about probability distributions on either the first or the second level of the proposed second-order uncertainty model, that the available information about the reliability of the system components is heterogeneous, and that there is no information about the independence of the system components. Algorithms are proposed for computing beliefs in the system reliability measures and for reducing the second-order model to a first-order one; these algorithms are based on the solution of a set of linear programming problems. Numerical examples illustrate the model. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Imprecise probabilities; Second-order uncertainty model; Prevision; Gamble; Reliability; Linear programming
1. Introduction

Many methods and models of the classical reliability theory assume that all probabilities are precise, that is, that every probability involved is perfectly determinable. If the information we have about the functioning of components and systems is based on a statistical analysis, then a probabilistic uncertainty model should be used in order to mathematically represent and manipulate that information. However, the reliability assessments that are combined to describe systems and components may come from various sources. Some may be objective measures based on relative frequencies or on well-established statistical models; others may be supplied by experts. As a result, only partial information about the reliability of the system components may be available. Moreover, it is difficult to expect that the components of many systems are statistically independent. In this case, the most powerful and promising tool for reliability analysis is the imprecise probability theory (also called the theory of lower previsions [1], the theory of interval statistical models [2], the theory of interval probabilities [3,4]), whose general framework is provided by upper and lower previsions. Some examples of
the successful application of imprecise probabilities to reliability analysis can be found in Refs. [5,6]. However, the expert judgements and statistical information about the reliability of a system or its components may themselves be unreliable. This leads to the study of second-order uncertainty models (hierarchical uncertainty models), on which much attention has been focused due to their generality. These models describe the uncertainty of a random quantity by means of two levels. For example, suppose that an expert provides a judgement about the mean level of component performance [7]. If this expert sometimes provides incorrect judgements, we have to take into account some degree of belief in this judgement. In this case, the information about the mean level of component performance is considered on the first level of the hierarchical model (first-order information), and the degree of belief in the expert judgement is considered on the second level (second-order information). Many papers are devoted to the theoretical [8-11] and practical [12-14] aspects of second-order uncertainty models. Second-order uncertainty models have also been studied in reliability, for example, in Refs. [15,16]. Lindqvist and Langseth [16] investigated monotone multi-state systems under the assumption that the probabilities of the component states (first-order probabilities) can be regarded as random
variables governed by the Dirichlet probability distribution (second-order probabilities). A comprehensive review of hierarchical models is given in Ref. [17], where it is argued that the most common hierarchical model is the Bayesian one [18-22]. At the same time, the Bayesian hierarchical model is unrealistic in problems where only partial information about the system behaviour is available. Most proposed second-order uncertainty models assume that there is a precise second-order probability distribution (or possibility distribution); moreover, most models use probabilities as the first-level uncertainty description. Unfortunately, such information is usually absent in many applications, and additional assumptions may introduce inaccuracy into the results. A study of some tasks related to homogeneous second-order models without any assumptions about probability distributions was presented by Kozine and Utkin [15]. However, these models are of limited use due to the homogeneity of the events considered on the first level. Therefore, new hierarchical uncertainty models of the system reliability have to be developed, taking into account the lack of information about the probability distributions on the first and second levels and the possible heterogeneity of the initial data.

In this paper, we study a second-order uncertainty model of the system reliability under rather general assumptions. It is supposed that:

1. there is no information about probability distributions on either the first or the second level of the proposed second-order uncertainty model;
2. the available information about the reliability of the system components is heterogeneous, i.e. it may be different in kind; for example, we may know the mean level of performance of one multi-state component and the probabilities of some states of a second component;
3. there is no information about the independence of the system components.
2. Preliminary definitions

Consider a system consisting of $m$ components. Suppose that partial information about the reliability of the components is represented as a set of lower and upper expectations $\underline{E}f_{ij}$ and $\overline{E}f_{ij}$, $i=1,\dots,m$, $j=1,\dots,m_i$, of functions $f_{ij}$. Here $m_i$ is the number of judgements related to the $i$th component; $f_{ij}(x_i)$ is a function of the random time to failure $x_i$ of the $i$th component (or of some other random variable describing the $i$th component reliability), corresponding to the $j$th judgement about this component. For example, an interval-valued probability that a failure occurs in the interval $[a,b]$ can be represented by expectations of the indicator function $I_{[a,b]}(x)$ such that $I_{[a,b]}(x)=1$ if $x\in[a,b]$ and $I_{[a,b]}(x)=0$ if $x\notin[a,b]$. The lower and upper mean times to failure (MTTFs) are expectations of the function $f(x)=x$. According to Ref. [23], the system lifetime can be uniquely determined by the component lifetimes. Denote $X=(x_1,\dots,x_m)$. It is assumed that the random vector $X$ is defined on a sample space $Q^m$. Then there exists a function $g(X)$ of the component lifetimes characterizing the system reliability behaviour. In terms of the imprecise probability theory, the lower and upper expectations can be regarded as previsions, and the functions $f_{ij}$ and $g$ can be regarded as gambles. In this case, the optimization problems (natural extension) for computing the lower and upper expectations of the system function $g$ are [5,6]

$$\underline{E}g = \min_{P} \int_{Q^m} g(X)\rho(X)\,dX, \qquad \overline{E}g = \max_{P} \int_{Q^m} g(X)\rho(X)\,dX, \tag{1}$$

subject to

$$\rho(X)\ge 0, \qquad \int_{Q^m}\rho(X)\,dX = 1, \qquad \underline{E}f_{ij} \le \int_{Q^m} f_{ij}(x_i)\rho(X)\,dX \le \overline{E}f_{ij}, \quad i\le m,\ j\le m_i. \tag{2}$$
Here the minimum and maximum are taken over the set $P$ of all possible $m$-dimensional density functions $\{\rho(X)\}$ satisfying conditions (2); i.e., solutions to problems (1) and (2) are defined on the set $P$ of densities consistent with the partial information expressed by constraints (2). This implies that the number of optimization variables is infinite, which restricts the use of natural extension in real applications. Problems (1) and (2) are linear, and the dual optimization problems can be written as follows [2,5,24]:

$$\overline{E}g = \min_{c_0,c_{ij},d_{ij}} \left( c_0 + \sum_{i=1}^{m}\sum_{j=1}^{m_i} \bigl(c_{ij}\overline{E}f_{ij} - d_{ij}\underline{E}f_{ij}\bigr) \right), \tag{3}$$

$$\underline{E}g = -\overline{E}(-g), \tag{4}$$

subject to $c_{ij}, d_{ij}\in\mathbb{R}_+$, $c_0\in\mathbb{R}$, $i=1,\dots,m$, $j=1,\dots,m_i$, and $\forall X\in Q^m$

$$c_0 + \sum_{i=1}^{m}\sum_{j=1}^{m_i}(c_{ij}-d_{ij})f_{ij}(x_i) \ge g(X). \tag{5}$$

Here $c_0$, $c_{ij}$, $d_{ij}$ are the optimization variables: $c_0$ corresponds to the constraint $\int_{Q^m}\rho(X)\,dX=1$, $c_{ij}$ corresponds to the constraint $\int_{Q^m}f_{ij}(x_i)\rho(X)\,dX\le\overline{E}f_{ij}$, and $d_{ij}$ corresponds to the constraint $\underline{E}f_{ij}\le\int_{Q^m}f_{ij}(x_i)\rho(X)\,dX$. It turns out that in many applications the dual optimization problems are simpler than problems (1) and (2), because this representation avoids an infinite number of optimization variables. Of course, the dual problems generally have an infinite number of constraints, each defined by a value of $X$; however, as will be shown below, the number of constraints can be reduced to a finite one. Moreover, the dual problems have a clear interpretation.
The lower and upper system reliability measures are computed as a kind of linear approximation defined by the set of available component reliability measures (see the objective function in Eq. (3)). The more information we have, the more precise the reliability assessments that can be obtained, provided the initial information is not contradictory. It should be noted that only joint densities are used in optimization problems (1) and (2) because, in the general case, we may not know whether the variables $x_1,\dots,x_m$ are dependent. If the components are known to be independent, then $\rho(X)=\rho_1(x_1)\cdots\rho_m(x_m)$. In this case the set $P$ is reduced to the densities representable as such a product, which results in more precise reliability assessments. However, it is difficult to predict how the independence condition influences the precision of the assessments. For example, it has been shown in Ref. [6] that if the initial information about component reliability is restricted to MTTFs, then the system reliability does not depend on the condition of component independence. In any case, for most kinds of initial information, adding the independence condition can only reduce imprecision, never increase it. Most reliability measures (probabilities of failure, MTTFs, failure rates, moments of time to failure, etc.) can be represented in the form of lower and upper previsions or expectations; each measure is defined by its gamble $f_{ij}$. Precise reliability information is a special case of imprecise information in which the lower and upper previsions of the gamble $f_{ij}$ coincide, i.e. $\underline{E}f_{ij}=\overline{E}f_{ij}$. For example, let us consider a series system consisting of two components, and suppose that the following information about the reliability of the components is available: the probability of the first component failing before 10 h is 0.01, and the MTTF of the second component is between 50 and 60 h.
It can be seen from this example that the available information is heterogeneous, and it is impossible to find the system reliability measures on the basis of conventional reliability models without additional assumptions about probability distributions. At the same time, this information can be formalized as follows:

$$0.01 \le \int_{\mathbb{R}_+^2} I_{[0,10]}(x_1)\,\rho(x_1,x_2)\,dx_1\,dx_2 \le 0.01,$$

$$50 \le \int_{\mathbb{R}_+^2} x_2\,\rho(x_1,x_2)\,dx_1\,dx_2 \le 60.$$

The above constraints determine a set of possible joint densities $\rho$. Suppose we want to find the probability of the system failing after 100 h. Then the objective functional is of the form

$$\underline{E}g\ (\overline{E}g) = \min_{P}\ (\max_{P}) \int_{\mathbb{R}_+^2} I_{[100,\infty)}(\min(x_1,x_2))\,\rho(x_1,x_2)\,dx_1\,dx_2.$$

The resulting bounds for the probability of system failure after 100 h are the best possible given this information. If the random variables considered are discrete and the sample space $Q^m$ is finite, then the integrals and densities in problems (1) and (2) are replaced by sums and probability mass functions, respectively. The lower and upper previsions $\underline{E}f_{ij}$ and $\overline{E}f_{ij}$ can also be regarded as bounds for an unknown precise prevision $Ef_{ij}$, which will be called a linear prevision. Natural extension is a general mathematical procedure for calculating new previsions from initial judgements: it produces a coherent overall model from a collection of imprecise probability judgements and may be seen as the basic constructive step in interval-valued statistical reasoning. The following advantages of the imprecise probability theory can be pointed out:

1. It is not necessary to make assumptions about the probability distributions of the random variables characterizing component reliability behaviour (times to failure, numbers of failures per unit time, etc.).
2. The imprecise probability theory is based on the classical probability theory and can be regarded as its generalization. Therefore, imprecise reliability models can be interpreted in terms of the probability theory, and conventional probability models can be regarded as a special case of imprecise models.
3. The imprecise probability theory provides a unified tool (natural extension) for computing the system reliability under partial information about the component reliability behaviour.
4. The imprecise probability theory yields the best possible bounds for the system reliability given the information about the component reliability.
The imprecise reliability model also allows us to take comparative judgements into account [25]. In particular, if we know that the first component MTTF is less than the second component MTTF, then this judgement can be represented by the constraint

$$\int_{\mathbb{R}_+^2} (x_1 - x_2)\,\rho(x_1,x_2)\,dx_1\,dx_2 \le 0.$$
If it is known that the components are statistically independent, then the constraint $\rho(x_1,x_2)=\rho_1(x_1)\rho_2(x_2)$ is added.
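The two heterogeneous judgements above can be worked through numerically. A minimal sketch, with two assumptions of mine that are not in the paper: the time axis is restricted to a coarse hypothetical grid (5, 55, 105 h), which turns the natural extension (1) and (2) into a finite linear program over joint probability mass functions, and that tiny LP is solved exactly by enumerating its basic feasible points rather than by an LP library.

```python
# Natural extension on a discretized grid: bound P(min(x1, x2) >= 100)
# given P(x1 <= 10) = 0.01 and 50 <= E x2 <= 60 (the judgements above).
from fractions import Fraction as F
from itertools import combinations, product

GRID = [5, 55, 105]                     # hypothetical grid points (hours)
CELLS = list(product(GRID, GRID))       # support of the joint pmf p(t1, t2)
n = len(CELLS)

def gauss_solve(A, b):
    """Exact solve of a square linear system; None if singular."""
    m = len(A)
    M = [list(A[i]) + [b[i]] for i in range(m)]
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [v / d for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][m] for i in range(m)]

def lp_range(c, eqs, ges):
    """Min/max of c.p over {p >= 0, eqs hold with =, ges hold with >=}.
    Enumerates vertices; valid here because the feasible set is a slice
    of the pmf simplex and hence bounded."""
    cand = list(ges) + [([F(int(j == i)) for j in range(n)], F(0))
                        for i in range(n)]
    vals = []
    for extra in combinations(cand, n - len(eqs)):
        A = [e[0] for e in eqs] + [e[0] for e in extra]
        b = [e[1] for e in eqs] + [e[1] for e in extra]
        p = gauss_solve(A, b)
        if p is None or any(v < 0 for v in p):
            continue
        if any(sum(a * v for a, v in zip(row, p)) < rhs for row, rhs in ges):
            continue
        vals.append(sum(ci * v for ci, v in zip(c, p)))
    return min(vals), max(vals)

one = [F(1)] * n
fail_before_10 = [F(int(t1 <= 10)) for t1, _ in CELLS]       # I_[0,10](x1)
mean_x2 = [F(t2) for _, t2 in CELLS]                         # x2
survive_100 = [F(int(min(t1, t2) >= 100)) for t1, t2 in CELLS]

eqs = [(one, F(1)), (fail_before_10, F(1, 100))]             # sum p = 1, P = 0.01
ges = [(mean_x2, F(50)), ([-v for v in mean_x2], F(-60))]    # 50 <= E x2 <= 60

lo, hi = lp_range(survive_100, eqs, ges)
print(lo, hi)   # bounds for P(min(x1, x2) >= 100) on this grid
```

On this particular grid the bounds come out as [0, 11/20]; the numbers depend on the chosen grid, so this is an illustration of the mechanics of problems (1) and (2), not the exact continuous-time answer.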
3. The problem statement

Natural extension is a powerful tool for analyzing the system reliability on the basis of available partial information about the component reliability. However, it has a disadvantage. Imagine that two experts provide the following judgements about the MTTF of a component: (1) the MTTF is not greater than 10 h; (2) the MTTF is not less than 10 h. Natural extension produces the resulting MTTF $[0,10]\cap[10,\infty)=\{10\}$. In other words, an absolutely precise MTTF is obtained from very imprecise initial data.
This is unrealistic in the practice of reliability analysis. The reason for such results is that the probabilities of the judgements are assumed to be 1. If we assign different probabilities to the judgements, we obtain more realistic assessments. For example, if the belief in each judgement is 0.5, then, according to Ref. [15], the resulting MTTF is greater than 5 h. Therefore, in order to obtain accurate and realistic system reliability assessments, it is necessary to take into account the vagueness of the information about the component reliability measures.

Consider a system consisting of $m$ components whose reliability behaviour is described by random variables $x_1,\dots,x_m$, which may have different meanings, for example, times to failure, numbers of failures per unit time, etc. Suppose that we have a set of weighted expert judgements related to reliability measures $Ef_{ij}(x_i)$, $i=1,\dots,m$, $j=1,\dots,l_i$; i.e., there are $\sum_{i=1}^m l_i$ lower and upper previsions $\underline{a}_{ij}=\underline{E}f_{ij}$, $\bar{a}_{ij}=\overline{E}f_{ij}$. Suppose that each expert is characterized by a subjective probability $\gamma_{ij}$ or by an interval of probabilities $[\underline{\gamma}_{ij},\bar{\gamma}_{ij}]$, called second-order probabilities. The second-order probabilities are interpreted as a model for uncertainty about the 'correct' value of a partially known reliability measure. If the second-order probabilities are viewed as degrees of belief in the experts, they can be calculated as the proportion of true or correct judgements elicited from these experts. They can also be obtained as confidence probabilities of confidence intervals for parameters of an unknown lifetime distribution, for example, the MTTF and moments of time to failure; in this case, a procedure of statistical reasoning can be regarded as an expert. We assume that the random variables $x_i$ are discrete and defined on the finite sample spaces $\Omega_i=\{x_{i1},\dots,x_{in_i}\}$. It should be noted that the finite sample spaces are used for simplicity.
The obtained results can easily be extended to the case of infinite sample spaces. The judgements can formally be written as follows:

$$\Pr\{\underline{a}_{ij} \le Ef_{ij} \le \bar{a}_{ij}\} \in [\underline{\gamma}_{ij}, \bar{\gamma}_{ij}], \quad i\le m,\ j\le l_i. \tag{6}$$

Here the set $\{\underline{a}_{ij}, \bar{a}_{ij}\}$ contains the first-order previsions, the set $\{\underline{\gamma}_{ij}, \bar{\gamma}_{ij}\}$ contains the second-order probabilities, and

$$Ef_{ij} = \sum_{x\in\Omega_i} f_{ij}(x)\,p_i(x), \tag{7}$$
where $p_i(x)$ is some unknown probability distribution of the discrete random variable $x_i$.

Our aim is to produce a new judgement which can be regarded as a combination of the available judgements. Uncertainty of the judgements about the reliability of components leads to uncertainty of a system reliability measure. Therefore, if the component reliability measures have the interval-valued probabilities $[\underline{\gamma}_{ij},\bar{\gamma}_{ij}]$, then there exists some interval-valued probability $[\underline{\gamma},\bar{\gamma}]$ that the system reliability measure $Eg$ belongs to the interval $[\underline{a},\bar{a}]$, i.e. that $\underline{a}\le Eg\le\bar{a}$. The probabilities $\underline{\gamma}$ and $\bar{\gamma}$ serve as degrees of belief in the possible values of the system reliability measure of interest. At the same time, to evaluate the system reliability it is also useful to know the expected values of the system reliability measure under the given information about component reliability. These expected values can be regarded as the most credible to some extent. Roughly speaking, if we have probabilities $\underline{\gamma}$ and $\bar{\gamma}$ defined for different intervals of the reliability measure $Eg$, then there exists some interval of $Eg$ obtained by means of the expectation operator. We will call this interval 'average' in order to distinguish the expectations (previsions) on the first and second levels of the considered second-order uncertainty model. In fact, the average interval allows us to get rid of the more complex second-order model and to deal with the usual first-order model. In other words, the following tasks should be solved:

1. Computing the probability bounds $[\underline{\gamma},\bar{\gamma}]$ for some interval $A=[\underline{a},\bar{a}]$ of new linear previsions $Eg(X)$ characterizing the system reliability.
2. Computing an average interval $[a_*, a^*]$ of new previsions $Eg(X)$, i.e. reducing the second-order model to the first-order one.

It should be noted that

$$Eg = \sum_{x_1\in\Omega_1}\cdots\sum_{x_m\in\Omega_m} g(X)p(X) = \sum_{X\in\Omega^m} g(X)p(X), \tag{8}$$
where $p(X)$ is some unknown joint probability distribution of the vector $X$ and $\Omega^m = \Omega_1\times\cdots\times\Omega_m$. It is assumed that information about the independence of the random variables is absent; this implies that only joint distributions have to be considered here. In order to convey the essence of the subject and keep the formulas readable, we will mainly consider the natural extension only for the upper bound. Furthermore, throughout the paper the obvious constraints $p_i(x)\ge 0$, $\sum_{x\in\Omega_i} p_i(x)=1$ on the distributions $p_i$ will not be written explicitly in the optimization problems; the same concerns the density functions. We will also assume for simplicity that only one judgement is available for each random variable, i.e. $l_i=1$; in this case, we omit the index $j$ below.
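Eq. (8) is an ordinary expectation over the joint distribution on the finite space $\Omega^m$. A minimal sketch for $m=2$ with $g=\min$ (the series multi-state structure used in the numerical examples below); the joint pmf here is an arbitrary illustrative choice, not data from the paper.

```python
# E g = sum over X in Omega^m of g(X) p(X), cf. Eq. (8), for m = 2.
from itertools import product

omega = range(9)                      # states 0..8 for each component

def expectation(g, p):
    """Expectation of g(x1, x2) under a joint pmf p on omega x omega."""
    return sum(g(x1, x2) * p[(x1, x2)] for x1, x2 in product(omega, omega))

# Independence is the special case p(X) = p1(x1) p2(x2); here both
# marginals are taken uniform on 0..8 purely for illustration.
p1 = {x: 1 / 9 for x in omega}
p2 = {x: 1 / 9 for x in omega}
p = {(x1, x2): p1[x1] * p2[x2] for x1, x2 in product(omega, omega)}

eg = expectation(min, p)
print(eg)   # E min(x1, x2) for two independent uniform components
```

For two independent components uniform on $\{0,\dots,8\}$ this evaluates $E\min(x_1,x_2)=\sum_{k=1}^{8}((9-k)/9)^2=204/81\approx 2.52$.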
4. Computing the probability bounds

First we find the upper probability $\bar{\gamma}$.

Theorem 1. The upper probability $\bar{\gamma}$ is determined from the following optimization problem:

$$\bar{\gamma} = \min_{c_0,c_i,d_i} \left\{ c_0 + \sum_{i=1}^m \bigl(c_i\bar{\gamma}_i - d_i\underline{\gamma}_i\bigr) \right\}, \tag{9}$$
subject to $c_i, d_i\in\mathbb{R}_+$, $c_0\in\mathbb{R}$, $i=1,\dots,m$, and

$$c_0 + \sum_{i=1}^m (c_i-d_i)\,I_{A_i}\!\left(\sum_{X\in\Omega^m} f_i(x_i)p(X)\right) \ge I_A\!\left(\sum_{X\in\Omega^m} g(X)p(X)\right), \quad \forall p\in P. \tag{10}$$

Here $A_i^c = \bigl(\inf_x f_i(x) \le Ef_i \le \underline{a}_i\bigr) \cup \bigl(\bar{a}_i \le Ef_i \le \sup_x f_i(x)\bigr)$.
Here $P$ is the set of all probability distributions $\{p(X)\}$. Let us consider constraints (10) in detail. In order to compute the indicator functions, it is necessary to substitute the different functions $p$ from $P$ and to calculate the corresponding sums. Obviously, this task cannot be solved in practice because the number of probability distributions $p$ is infinite. Therefore, we propose another way to solve it. Note that the indicator functions $I_{A_i}(\cdot)$ and $I_A(\cdot)$ in Eq. (10) take only the values 0 and 1. This implies that, by substituting different distributions $p$ into constraints (10), we obtain $2^{m+1}$ constraints of the form

$$c_0 + \sum_{i=1}^m (c_i-d_i)\,y_i \ge y_0, \tag{11}$$

where $Y=(y_0,y_1,\dots,y_m)$ is a binary vector with $y_i\in\{0,1\}$ for all $i=0,\dots,m$. Here $y_i=1$ if, after substituting a distribution $p$ from $P$ into the expression $\sum_{X\in\Omega^m} f_i(x_i)p(X)$, we obtain $\sum_{X\in\Omega^m} f_i(x_i)p(X)\in A_i$, and $y_i=0$ if $\sum_{X\in\Omega^m} f_i(x_i)p(X)\notin A_i$. However, we cannot take an arbitrary vector $Y$ and form the corresponding constraint, because the indicator functions are connected through the distribution $p(X)$. It is possible that for some realizations of the vector $Y$ there is no distribution $p(X)$ which, substituted into the arguments of the indicator functions, yields the values of $I_{A_i}(\cdot)$ and $I_A(\cdot)$ corresponding to this vector $Y$. In this case, we say that the corresponding set of arguments of the indicator functions is inconsistent; the corresponding constraint (11) then does not exist and has to be removed from the list of $2^{m+1}$ constraints. Let $J$ be a set of indices, $J\subseteq N=\{1,2,\dots,m\}$. Denote the following sets of events:
$$A_J = \{A_i,\ i\in J\} = \{\underline{a}_i \le Ef_i \le \bar{a}_i,\ i\in J\},$$
$$A_J^c = \{A_i^c,\ i\in J\},$$
$$A_0 = \{A\} = \{\underline{a} \le Eg \le \bar{a}\}.$$

Let $P_i$ be the set of distributions $p$ satisfying $\underline{a}_i\le Ef_i\le\bar{a}_i$, and let $P_0$ be the set of distributions $p$ satisfying $\underline{a}\le Eg\le\bar{a}$. We call the set $A_J$ consistent if there is at least one distribution $p$ satisfying all constraints whose indices belong to $J$, i.e.

$$\bigcap_{i\in J} P_i \ne \emptyset. \tag{12}$$

Let $C$ be the set of all consistent sets. Now we can see that if $A_J\cup A^c_{N\setminus J}\in C$ (consistent), then

$$I_{A_i}(Ef_i) = y_i = \begin{cases} 1, & i\in J,\\ 0, & i\notin J; \end{cases}$$

if $A_J\cup A^c_{N\setminus J}\cup A_0\in C$, then

$$c_0 + \sum_{i\in J}(c_i-d_i) \ge 1; \tag{13}$$

if $A_J\cup A^c_{N\setminus J}\cup A_0\notin C$, then

$$c_0 + \sum_{i\in J}(c_i-d_i) \ge 0. \tag{14}$$
In other words, if the set $A_J\cup A^c_{N\setminus J}$ is consistent, then there exists at least one distribution $p$ such that all linear previsions $Ef_i$, $i\in J$, lie in the intervals $[\underline{a}_i,\bar{a}_i]$ and their indicator functions equal 1, while all linear previsions $Ef_i$, $i\in N\setminus J$, do not belong to the intervals $[\underline{a}_i,\bar{a}_i]$ and their indicator functions equal 0. So, to simplify constraints (10), it is necessary to look over all consistent sets $A_J\cup A^c_{N\setminus J}\cup A_0$. Then Eq. (10) can be rewritten for any $J$ as follows:

$$c_0 + \sum_{i\in J}(c_i-d_i) \ge \begin{cases} 1, & A_J\cup A^c_{N\setminus J}\cup A_0\in C,\\ 0, & A_J\cup A^c_{N\setminus J}\cup A_0\notin C. \end{cases} \tag{15}$$

Now it is necessary to find a simple way of determining the consistency of the sets $A_J\cup A^c_{N\setminus J}$. The following theorems simplify this determination.

Theorem 2. For any $J\subseteq N$, the set $A_J\cup A^c_{N\setminus J}$ is consistent.

Theorem 2 implies that we do not need to determine the consistency of the sets $A_J\cup A^c_{N\setminus J}$; it is necessary to determine only the consistency of the sets $A_J\cup A^c_{N\setminus J}\cup A_0$.

Theorem 3. Let $A_J\cup A^c_{N\setminus J}$ be a set of consistent constraints and $A_0=\{\underline{a}\le Eg\le\bar{a}\}$. Let $B=\{\underline{b}\le Eg\le\bar{b}\}$, where

$$\underline{b} = \min_{P_J} Eg, \qquad \bar{b} = \max_{P_J} Eg,$$

subject to $A_J\cup A^c_{N\setminus J}$. Here the minimum and maximum are taken over the set $P_J$ of all possible joint probability distributions $p(X)$ satisfying the condition $A_J\cup A^c_{N\setminus J}$. Then the set of constraints $A_J\cup A^c_{N\setminus J}\cup A_0$ is consistent if $A_0\cap B\ne\emptyset$, i.e. if $[\underline{a},\bar{a}]\cap[\underline{b},\bar{b}]\ne\emptyset$.
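Theorem 3 reduces the consistency check to an interval intersection. A minimal sketch using the interval data of Example 1 (Section 6.1); following that example, only the non-degenerate piece of each complement interval is kept, and the analytical bounds for the two-component series multi-state system are those quoted there from Ref. [7].

```python
# Consistency via Theorem 3: A_J u A^c_{N\J} u A_0 is consistent iff
# the intervals [a, a-bar] and [b, b-bar] overlap.

def overlaps(lo1, hi1, lo2, hi2):
    """Non-empty intersection of [lo1, hi1] and [lo2, hi2]."""
    return max(lo1, lo2) <= min(hi1, hi2)

def series_bounds(iv1, iv2, L=8):
    """[b, b-bar] for E min(x1, x2) of a two-component series multi-state
    system with maximal state L, given intervals for Ex1 and Ex2."""
    (l1, u1), (l2, u2) = iv1, iv2
    return max(0, l1 + l2 - L), min(u1, u2)

A1, A1c = (0, 4), (4, 8)   # A_1 and its complement piece within [0, 8]
A2, A2c = (3, 8), (0, 3)
A0, A0c = (5, 8), (0, 5)

for name, pair in [("A{1,2}", (A1, A2)), ("A{1} u Ac{2}", (A1, A2c)),
                   ("A{2} u Ac{1}", (A1c, A2)), ("Ac{1,2}", (A1c, A2c))]:
    b, b_bar = series_bounds(*pair)
    print(name, "with A0:", overlaps(*A0, b, b_bar),
          "; with A0c:", overlaps(*A0c, b, b_bar))
```

The printed booleans reproduce the Yes/No column of Table 1 in Section 6.1.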
Theorem 3 implies that the consistency of the set $A_J\cup A^c_{N\setminus J}\cup A_0$ can be determined by applying the natural extension of the set $A_J\cup A^c_{N\setminus J}$ to obtain $B$. If we know an analytical expression for the previsions of the system (for example, there exist many known assessments of the system reliability expressed through the component reliabilities [5-7,26]), then Theorem 3 simplifies the consistency check, because only the two intervals $[\underline{a},\bar{a}]$ and $[\underline{b},\bar{b}]$ have to be compared; in that case, the interval $[\underline{b},\bar{b}]$ is known from the available analytical expressions. Now we can write

$$\underline{b} = \min_{P_J} \sum_{X\in\Omega^m} g(X)p(X), \qquad \bar{b} = \max_{P_J} \sum_{X\in\Omega^m} g(X)p(X),$$

subject to

$$\sum_{x\in\Omega_i} f_i(x)p_i(x)\in A_i,\ i\in J; \qquad \sum_{x\in\Omega_j} f_j(x)p_j(x)\in A_j^c,\ j\in N\setminus J.$$

Here the minimum and maximum are taken over the set $P_J$ of all possible distributions $\{p(x)\}$ satisfying the set of consistent constraints $A^c_{N\setminus J}\cup A_J$. It can be seen that the above problems can be regarded as the natural extension of the first-order previsions $\{\underline{a}_i,\bar{a}_i\}$ to the previsions $\underline{E}g$ and $\overline{E}g$.

Similar reasoning applies to the computation of the lower probability $\underline{\gamma}$. In this case, the following optimization problem can be written:

$$\underline{\gamma} = \max_{c_0,c_i,d_i} \left( c_0 + \sum_{i=1}^m \bigl(c_i\underline{\gamma}_i - d_i\bar{\gamma}_i\bigr) \right), \tag{16}$$

subject to $c_i,d_i\in\mathbb{R}_+$, $c_0\in\mathbb{R}$, $i\in J$, $\forall J\subseteq N$:

$$c_0 + \sum_{i\in J}(c_i-d_i) \le \begin{cases} 1, & A^c_{N\setminus J}\cup A_J\cup A_0^c\notin C,\\ 0, & A^c_{N\setminus J}\cup A_J\cup A_0^c\in C. \end{cases} \tag{17}$$

So, we can write a general algorithm for computing $\underline{\gamma}$ and $\bar{\gamma}$:

Step 1. By considering all possible binary vectors $(y_0,y_1,\dots,y_m)$, $y_i\in\{0,1\}$, the sets $A_J\cup A^c_{N\setminus J}\cup A_0^c$ and $A_J\cup A^c_{N\setminus J}\cup A_0$ are formed, where $i\in J$ if $y_i=1$ and $i\in N\setminus J$ if $y_i=0$.
Step 2. The consistent sets $A^c_{N\setminus J}\cup A_J\cup A_0^c$ and $A_J\cup A^c_{N\setminus J}\cup A_0$ are selected from the list obtained at Step 1.
Step 3. Constraints (15) are used for computing $\bar{\gamma}$, and constraints (17) are used for computing $\underline{\gamma}$.
Step 4. From the systems of constraints obtained at Step 3 and the objective functions (9) and (16), the probabilities $\underline{\gamma}$ and $\bar{\gamma}$ are computed as solutions of the corresponding optimization problems.

5. Computing an 'average' interval

The second task, computing the 'average' bounds $a_* = \underline{E}(Eg)$ and $a^* = \overline{E}(Eg)$ for the linear prevision $Eg$, can be solved as follows. Let us rewrite Eqs. (9) and (10) as

$$a^* = \min_{c_0,c_i,d_i} \left\{ c_0 + \sum_{i=1}^m \bigl(c_i\bar{\gamma}_i - d_i\underline{\gamma}_i\bigr) \right\}, \tag{18}$$

subject to $c_i,d_i\in\mathbb{R}_+$, $c_0\in\mathbb{R}$, $i=1,\dots,m$, and $\forall p\in P$

$$c_0 + \sum_{i=1}^m (c_i-d_i)\,I_{A_i}(Ef_i) \ge Eg. \tag{19}$$

Theorem 4. The set of constraints (19) can be represented as $2^m$ constraints

$$c_0 + \sum_{i\in J}(c_i-d_i)\,I_{A_i}(Ef_i) \ge \max_{P_J} Eg.$$

Here the maximum is taken over the set $P_J$ of all possible distributions $\{p(x)\}$ satisfying the set of consistent constraints $A^c_{N\setminus J}\cup A_J$.

Theorem 4 implies that the maximum of the linear prevision is replaced by the upper prevision $\overline{E}_J g$ of the gamble $g$ under the consistent constraints $A^c_{N\setminus J}\cup A_J$. As a result, we obtain the constraint

$$c_0 + \sum_{i\in J}(c_i-d_i) \ge \overline{E}_J g. \tag{20}$$

Here

$$\overline{E}_J g = \max_{P_J} \sum_{X\in\Omega^m} g(X)p(X), \tag{21}$$

subject to $A^c_{N\setminus J}\cup A_J$. The same way can be used for computing the lower value $a_*$: the minimum of the linear prevision is replaced by the lower prevision $\underline{E}_J g$ of the gamble $g$ under the consistent constraints $A^c_{N\setminus J}\cup A_J$. As a result, we obtain the constraint

$$c_0 + \sum_{i\in J}(c_i-d_i) \le \underline{E}_J g. \tag{22}$$

Here

$$\underline{E}_J g = \min_{P_J} \sum_{X\in\Omega^m} g(X)p(X), \tag{23}$$

subject to $A^c_{N\setminus J}\cup A_J$. If we know an analytical expression for the previsions of the system, then $\underline{E}_J g$ and $\overline{E}_J g$ can easily be obtained (see Section 6). So, we write the following algorithm for computing $a_*$ and $a^*$:
Step 1. By considering all possible binary vectors $(y_1,\dots,y_m)$, $y_i\in\{0,1\}$, the sets of constraints $A^c_{N\setminus J}\cup A_J$ are formed, where $i\in J$ if $y_i=1$ and $i\in N\setminus J$ if $y_i=0$.
Step 2. A set $A^c_{N\setminus J}\cup A_J$ is chosen from the list obtained at Step 1.
Step 3. The linear programming problem (21) is used for computing $a^*$; the linear programming problem (23) is used for computing $a_*$.
Step 4. From the systems of constraints obtained at Step 3 and objective function (18) (and its counterpart for $a_*$), the previsions $a_*$ and $a^*$ are computed as solutions of the corresponding optimization problems.
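The two algorithms can be sketched in code for the data of Example 1 below (Section 6.1: series multi-state system, precise beliefs $\gamma_1=0.7$, $\gamma_2=0.6$). One simplification of mine: because the beliefs there are precise, $c_i$ and $d_i$ enter the dual only through $u_i=c_i-d_i$, so each LP has three free variables $(c_0,u_1,u_2)$; such tiny bounded LPs can be solved exactly by enumerating vertices, i.e. intersections of three active constraints.

```python
# Steps 1-4 for Example 1: compute gamma-bar via (15)/(9) and a* via (20)/(18).
from fractions import Fraction as F
from itertools import combinations

def solve3(A, b):
    """Exact solve of a 3x3 linear system; None if singular."""
    M = [list(A[i]) + [b[i]] for i in range(3)]
    for col in range(3):
        piv = next((r for r in range(col, 3) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [v / d for v in M[col]]
        for r in range(3):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][3] for i in range(3)]

def lp_min(obj, rows, rhs):
    """min obj.x subject to rows[k].x >= rhs[k], x free (LP assumed bounded)."""
    best = None
    for idx in combinations(range(len(rows)), 3):
        x = solve3([rows[i] for i in idx], [rhs[i] for i in idx])
        if x is None:
            continue
        if all(sum(a * v for a, v in zip(rows[k], x)) >= rhs[k]
               for k in range(len(rows))):
            val = sum(o * v for o, v in zip(obj, x))
            if best is None or val < best:
                best = val
    return best

obj = [F(1), F(7, 10), F(3, 5)]              # c0 + 0.7 u1 + 0.6 u2
# Step 1: constraint rows c0 + sum_{i in J} u_i for J = {1,2}, {1}, {2}, {}
rows = [[F(1), F(1), F(1)], [F(1), F(1), F(0)],
        [F(1), F(0), F(1)], [F(1), F(0), F(0)]]
# gamma-bar via (15): rhs = 1 only for the set consistent with A0
# (J = {2}, see Table 1 below), else 0
gamma_bar = lp_min(obj, rows, [F(0), F(0), F(1), F(0)])
# a* via (20): rhs = upper previsions E_J g taken from Table 2 below
a_star = lp_min(obj, rows, [F(4), F(3), F(8), F(3)])
print(gamma_bar, a_star)   # -> 3/10 and 24/5, i.e. 0.3 and 4.8
```

Both values reproduce the hand computations of Example 1 ($\bar{\gamma}=0.3$, $a^*=4.8$).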
6. Numerical examples

6.1. Example 1

Let us consider a series multi-state system [27] consisting of two components ($g(X)=\min(x_1,x_2)$). The system and its components may be in nine states $0,1,\dots,8$; thus $x_i\in\Omega=\{0,1,\dots,8\}$. Denote the maximal state number by $L=8$. Suppose that the lower and upper mean levels of component performance $\underline{a}_1=0$, $\bar{a}_1=4$, $\underline{a}_2=3$, $\bar{a}_2=8$ are known, with corresponding beliefs $\gamma_1=0.7$ and $\gamma_2=0.6$ in the assessments. Let us find the interval probabilities $\underline{\gamma}$ and $\bar{\gamma}$ that the mean level of system performance is in the interval $[\underline{a},\bar{a}]=[5,8]$. Formally, the available information can be represented as

$$\Pr\{0\le Ex_1\le 4\}=0.7, \qquad \Pr\{3\le Ex_2\le 8\}=0.6.$$

It is necessary to find

$$\Pr\{5\le E\min(x_1,x_2)\le 8\}\in[\underline{\gamma},\bar{\gamma}].$$

According to Ref. [7], for the series multi-state system with $n$ components there holds

$$\underline{b} = \max\bigl(0,\ \underline{a}_1+\underline{a}_2-(n-1)L\bigr), \qquad \bar{b} = \min(\bar{a}_1,\bar{a}_2).$$

Denote

$$A_{\{1,2\}} = \{0\le Ex_1\le 4,\ 3\le Ex_2\le 8\},$$
$$A_{\{1\}}\cup A^c_{\{2\}} = \{0\le Ex_1\le 4,\ 0\le Ex_2\le 3\},$$
$$A_{\{2\}}\cup A^c_{\{1\}} = \{4\le Ex_1\le 8,\ 3\le Ex_2\le 8\},$$
$$A^c_{\{1,2\}} = \{4\le Ex_1\le 8,\ 0\le Ex_2\le 3\},$$
$$A_0 = \{5\le E\min(x_1,x_2)\le 8\},$$
$$B = \{\underline{b}\le E\min(x_1,x_2)\le\bar{b}\}.$$

Theorem 2 implies that the sets $A_{\{1,2\}}$, $A_{\{1\}}\cup A^c_{\{2\}}$, $A_{\{2\}}\cup A^c_{\{1\}}$, $A^c_{\{1,2\}}$ are consistent. Theorem 3 implies that the set $A_J\cup A^c_{N\setminus J}\cup A_0$ is consistent if $A_0\cap B\ne\emptyset$. The consistency of the possible sets is shown in Table 1.

Table 1
Consistency of the constraint sets

  Set                                        Consistent
  $A_{\{1,2\}}\cup A_0$                      No
  $A_{\{1\}}\cup A^c_{\{2\}}\cup A_0$        No
  $A_{\{2\}}\cup A^c_{\{1\}}\cup A_0$        Yes
  $A^c_{\{1,2\}}\cup A_0$                    No
  $A_{\{1,2\}}\cup A_0^c$                    Yes
  $A_{\{1\}}\cup A^c_{\{2\}}\cup A_0^c$      Yes
  $A_{\{2\}}\cup A^c_{\{1\}}\cup A_0^c$      Yes
  $A^c_{\{1,2\}}\cup A_0^c$                  Yes

For instance, consider the row corresponding to the set $A_{\{1\}}\cup A^c_{\{2\}}\cup A_0$. The following holds:

$$\underline{b} = \max(0,\ 0+0-1\cdot 8) = 0, \qquad \bar{b} = \min(4,3) = 3.$$

Hence $A_0\cap B=\emptyset$ because $[5,8]\cap[0,3]=\emptyset$; this implies that the set $A_{\{1\}}\cup A^c_{\{2\}}\cup A_0$ is inconsistent. At the same time, $A_0^c\cap B\ne\emptyset$ because $[0,5]\cap[0,3]\ne\emptyset$; this implies that the set $A_{\{1\}}\cup A^c_{\{2\}}\cup A_0^c$ is consistent. So we have the following linear programming problem for computing $\bar{\gamma}$:

$$\bar{\gamma} = \min_{c_0,c_i,d_i} \bigl(c_0 + 0.7(c_1-d_1) + 0.6(c_2-d_2)\bigr),$$

subject to $c_i,d_i\in\mathbb{R}_+$, $c_0\in\mathbb{R}$, $i=1,2$, and

$$c_0+(c_1-d_1)+(c_2-d_2)\ge 0,\quad c_0+(c_1-d_1)\ge 0,\quad c_0+(c_2-d_2)\ge 1,\quad c_0\ge 0.$$

The solution is $c_0=d_1=1$, $c_1=c_2=d_2=0$, and $\bar{\gamma}=0.3$. The lower probability $\underline{\gamma}$ can be computed similarly:

$$\underline{\gamma} = \max_{c_0,c_i,d_i} \bigl(c_0 + 0.7(c_1-d_1) + 0.6(c_2-d_2)\bigr),$$

subject to $c_i,d_i\in\mathbb{R}_+$, $c_0\in\mathbb{R}$, $i=1,2$, and

$$c_0+(c_1-d_1)+(c_2-d_2)\le 0,\quad c_0+(c_1-d_1)\le 0,\quad c_0+(c_2-d_2)\le 0,\quad c_0\le 0.$$

The solution is $c_0=c_1=c_2=d_1=d_2=0$ and $\underline{\gamma}=0$.

Let us now find the average mean levels of system performance $a_*$ and $a^*$. The results of computing $\underline{E}_J\min(x_1,x_2)$ and $\overline{E}_J\min(x_1,x_2)$ are given in Table 2. For instance, consider the row corresponding to the set $A_{\{1\}}\cup A^c_{\{2\}}$. The following holds:

$$\underline{E}_J\min(x_1,x_2) = \max(0,\ 0+0-1\cdot 8) = 0, \qquad \overline{E}_J\min(x_1,x_2) = \min(4,3) = 3.$$

Hence we have the following linear programming problem for computing $a^*$:

$$a^* = \min_{c_0,c_i,d_i} \bigl(c_0 + 0.7(c_1-d_1) + 0.6(c_2-d_2)\bigr),$$
Table 2
Values of $\underline{E}_J\min(x_1,x_2)$ and $\overline{E}_J\min(x_1,x_2)$ for different $J$

  Set                             $\underline{E}_J\min(x_1,x_2)$   $\overline{E}_J\min(x_1,x_2)$
  $A_{\{1,2\}}$                   0                                4
  $A_{\{1\}}\cup A^c_{\{2\}}$     0                                3
  $A_{\{2\}}\cup A^c_{\{1\}}$     0                                8
  $A^c_{\{1,2\}}$                 0                                3
subject to $c_i,d_i\in\mathbb{R}_+$, $c_0\in\mathbb{R}$, $i=1,2$, and

$$c_0+(c_1-d_1)+(c_2-d_2)\ge 4,\quad c_0+(c_1-d_1)\ge 3,\quad c_0+(c_2-d_2)\ge 8,\quad c_0\ge 3.$$

The solution is $c_0=7$, $c_2=1$, $d_1=4$, $c_1=d_2=0$, and $a^*=4.8$. The lower bound $a_*$ can be computed similarly:

$$a_* = \max_{c_0,c_i,d_i} \bigl(c_0 + 0.7(c_1-d_1) + 0.6(c_2-d_2)\bigr),$$

subject to $c_i,d_i\in\mathbb{R}_+$, $c_0\in\mathbb{R}$, $i=1,2$, and

$$c_0+(c_1-d_1)+(c_2-d_2)\le 0,\quad c_0+(c_1-d_1)\le 0,\quad c_0+(c_2-d_2)\le 0,\quad c_0\le 0.$$

The solution is $c_0=c_1=c_2=d_1=d_2=0$ and $a_*=0$. Thus, the average mean level of system performance is in the interval [0, 4.8], and the mean level of system performance is in the interval [5, 8] with lower and upper probabilities 0 and 0.3, respectively. This means that, under the given information about the reliability of the components, it is difficult to expect the mean level of system performance to be greater than 5, i.e. the system to be in states $5,\dots,8$, because the probability of this event is rather small and belongs to the interval [0, 0.3]. This conclusion is confirmed by the bounds for the average mean level of system performance.

6.2. Example 2

Let us consider a parallel system consisting of two components ($g(X)=\max(x_1,x_2)$). Suppose that two experts provide the following information about the MTTFs of the two components, respectively:

1. the first expert: the MTTF of the first component is greater than 8 h;
2. the second expert: the MTTF of the second component is less than 5 h.

The belief in the first expert is 0.9; this means that the expert provides 90% true judgements. The belief in the second expert is between 0.3 and 1; this means that the expert provides more than 30% true judgements. Formally, the available information can be represented as

$$\Pr\{8\le Ex_1\}=0.9, \qquad \Pr\{0\le Ex_2\le 5\}\in[0.3,1].$$
subject to ci ; di [ Rþ ; c0 [ R; i ¼ 1; 2; and c0 þ ðc1 2 d1 Þ þ ðc2 2 d2 Þ # 0; c0 þ ðc1 2 d1 Þ # 0; c0 þ ðc2 2 d2 Þ # 0; c0 # 0: The solution is c0 ¼ c1 ¼ c2 ¼ d1 ¼ d2 ¼ 0 and ap ¼ 0: Thus, the average mean level of system performance is in the interval [0,4.8]. The mean level of system performance is in the interval [5,8] with the lower and upper probabilities 0 and 0.3, respectively. This means that it is difficult to expect that the mean level of system performance under given information about reliabilities of components will be greater than 5 and the system will be in states 5; …; 8 because the probability of this event is rather small and belongs to the interval [0,0.3]. This conclusion is confirmed by bounds for the average mean level of system performance. 6.2. Example 2 Let us consider a parallel system consisting of two components ðgðXÞ ¼ maxðx1 ; x2 ÞÞ: Suppose that two experts provide the following information about MTTFs of two components, respectively: 1. the first expert: MTTF of the first component is greater than 8 h; 2. the second expert: MTTF of the second component is less than 5 h. The belief to the first expert is 0.9. This means that the expert provides 90% of true judgements. The belief to the second expert is between 0.3 and 1. This means that the expert provides greater than 30% of true judgements. Formally, the available information can be represented as Pr{8 # Ex1 } ¼ 0:9; Pr{0 # Ex2 # 5} [ ½0:3; 1:
Here $\underline{a}_1 = 8$, $\overline{a}_1 \to \infty$, $\underline{a}_2 = 0$, $\overline{a}_2 = 5$, $\underline{\gamma}_1 = \overline{\gamma}_1 = 0.9$, $\underline{\gamma}_2 = 0.3$, $\overline{\gamma}_2 = 1$. Let us find the lower and upper probabilities that the system MTTF is greater than 6 h, i.e. $\Pr\{6 \le E\max(x_1, x_2)\} \in [\underline{\gamma}, \overline{\gamma}]$. Here the times to failure $x_1, x_2$ are continuous random variables; therefore, sums and probability distributions in all equations are replaced by integrals and density functions. According to Refs. [6,28], for the parallel system there hold
$$\underline{b} = \max(\underline{a}_1, \underline{a}_2), \quad \overline{b} = \overline{a}_1 + \overline{a}_2.$$
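For the numbers above these structural bounds are immediate arithmetic; a minimal sketch (Python used purely for illustration, with $\overline{a}_1 \to \infty$ represented by `math.inf`):

```python
import math

# Parallel-system bounds from Refs. [6,28]:
#   lower: max of the lower MTTF bounds; upper: sum of the upper MTTF bounds.
a1_low, a1_up = 8.0, math.inf   # expert 1: MTTF of component 1 > 8 h
a2_low, a2_up = 0.0, 5.0        # expert 2: MTTF of component 2 in [0, 5] h
b_low = max(a1_low, a2_low)
b_up = a1_up + a2_up
print(b_low, b_up)  # 8.0 inf
```

So the set $B$ used below is $\{8 \le E\max(x_1, x_2) < \infty\}$.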
Denote
$$A_{\{1,2\}} = \{8 \le Ex_1 < \infty,\ 0 \le Ex_2 \le 5\}, \quad A_{\{1\}} \cup A^c_{\{2\}} = \{8 \le Ex_1 < \infty,\ 5 \le Ex_2 < \infty\},$$
$$A_{\{2\}} \cup A^c_{\{1\}} = \{0 \le Ex_1 \le 8,\ 0 \le Ex_2 \le 5\}, \quad A^c_{\{1,2\}} = \{0 \le Ex_1 \le 8,\ 5 \le Ex_2 < \infty\},$$
$$A_0 = \{6 \le E\max(x_1, x_2) < \infty\}, \quad B = \{\underline{b} \le E\max(x_1, x_2) \le \overline{b}\}.$$
All sets $A_J \cup A^c_{N \setminus J} \cup A_0$ and $A_J \cup A^c_{N \setminus J} \cup A^c_0$ are consistent, except the two sets $A_{\{1,2\}} \cup A^c_0$ and $A_{\{1\}} \cup A^c_{\{2\}} \cup A^c_0$. This implies that the optimization problem for computing $\underline{\gamma}$ is of the form
$$\underline{\gamma} = \max_{c_0, c_i, d_i} \left( c_0 + 0.9(c_1 - d_1) + 0.3 c_2 - d_2 \right),$$
subject to $c_i, d_i \in \mathbb{R}_+$, $c_0 \in \mathbb{R}$, $i = 1, 2$, and
$$c_0 + (c_1 - d_1) + (c_2 - d_2) \le 1, \quad c_0 + (c_1 - d_1) \le 1, \quad c_0 + (c_2 - d_2) \le 0, \quad c_0 \le 0.$$
Hence $\underline{\gamma} = 0.9$. The upper probability can be computed similarly, and $\overline{\gamma} = 1$. Let us now find the average MTTF bounds $\underline{a}^*$ and $\overline{a}^*$ of the system. By using the proposed algorithm, we obtain the following linear programming problem for computing $\underline{a}^*$:
$$\underline{a}^* = \max_{c_0, c_i, d_i} \left( c_0 + 0.9(c_1 - d_1) + 0.3 c_2 - d_2 \right),$$
subject to $c_i, d_i \in \mathbb{R}_+$, $c_0 \in \mathbb{R}$, $i = 1, 2$, and
$$c_0 + (c_1 - d_1) + (c_2 - d_2) \le 8, \quad c_0 + (c_1 - d_1) \le 8, \quad c_0 + (c_2 - d_2) \le 0, \quad c_0 \le 5.$$
Hence $\underline{a}^* = 7.2$. The upper bound $\overline{a}^*$ can be computed similarly, and $\overline{a}^* \to \infty$. So we have obtained that the average MTTF of the system is greater than 7.2 h. The large lower and upper probabilities that the system MTTF is greater than 6 h are confirmed by the values of the average MTTF.

6.3. Example 3

Let us consider a series–parallel system consisting of three components. The second and third components
constitute a parallel subsystem. The first component and the parallel subsystem are connected in series. The system time to failure is defined as $\min(x_1, \max(x_2, x_3))$, where $x_1, x_2, x_3$ are the component times to failure. Suppose that an expert provides the following data about the reliability of the components at time 10 h:

1. the first component reliability is greater than 0.9;
2. the second component reliability is greater than 0.6;
3. the third component reliability is greater than 0.2.

The belief in the expert is 0.8. The available information can be formally represented as
$$\Pr\{0.9 \le E I_{[10,\infty)}(x_1) \le 1\} = 0.8, \quad \Pr\{0.6 \le E I_{[10,\infty)}(x_2) \le 1\} = 0.8, \quad \Pr\{0.2 \le E I_{[10,\infty)}(x_3) \le 1\} = 0.8.$$
Here $\underline{a}_1 = 0.9$, $\underline{a}_2 = 0.6$, $\underline{a}_3 = 0.2$, $\overline{a}_1 = \overline{a}_2 = \overline{a}_3 = 1$, $\gamma_1 = \gamma_2 = \gamma_3 = 0.8$. Let us find the lower and upper probabilities that the system reliability at time 10 h is greater than 0.95, i.e. $\Pr\{0.95 \le E g(x_1, x_2, x_3) \le 1\} \in [\underline{\gamma}, \overline{\gamma}]$, where $g(x_1, x_2, x_3) = I_{[10,\infty)}(\min(x_1, \max(x_2, x_3)))$, and average bounds for the system reliability at time 10 h, i.e. average bounds for $E g(x_1, x_2, x_3)$. According to Refs. [7,28], for this system there hold
$$\underline{b} = \max(0, \underline{a}_1 + \max(\underline{a}_2, \underline{a}_3) - 1), \quad \overline{b} = \min(\overline{a}_1, \min(\overline{a}_2 + \overline{a}_3, 1)).$$
By considering different sets $A_J \cup A^c_{N \setminus J}$, $J \subseteq N = \{1, 2, 3\}$, we can similarly compute (see Sections 6.1 and 6.2) the system characteristics
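The structural bounds just stated evaluate by direct arithmetic for the expert's numbers; a minimal sketch (Python purely for illustration, values computed here rather than quoted from the paper):

```python
# Series-parallel structural bounds from Refs. [7,28]:
#   lower: max(0, a1 + max(a2, a3) - 1)
#   upper: min(A1, min(A2 + A3, 1))
a1, a2, a3 = 0.9, 0.6, 0.2   # lower component reliabilities at 10 h
A1, A2, A3 = 1.0, 1.0, 1.0   # upper component reliabilities at 10 h
b_low = max(0.0, a1 + max(a2, a3) - 1.0)
b_up = min(A1, min(A2 + A3, 1.0))
print(round(b_low, 6), b_up)  # 0.5 1.0
```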
$$\underline{\gamma} = 0, \quad \overline{\gamma} = 0.8, \quad \underline{a}^* = 0.3, \quad \overline{a}^* = 0.98.$$
It can be seen from the example that the intervals $[\underline{\gamma}, \overline{\gamma}] = [0, 0.8]$ and $[\underline{a}^*, \overline{a}^*] = [0.3, 0.98]$ are very wide, and the obtained numerical results are too imprecise to support a useful decision concerning the system reliability. The values 0.3 and 0.98 can be interpreted as pessimistic and optimistic assessments of the system reliability, respectively. It is obvious that the large imprecision of the third component reliability contributes to the imprecision of the results. If we suppose, for instance, that the third component reliability is greater than 0.8, then the bounds for the average system reliability at time 10 h become 0.52 and 0.98.

7. Conclusion

A second-order model of the system reliability under extremely limited information about the component reliability behavior has been considered in this paper. The model is more realistic than Bayesian hierarchical models for the reliability analysis of systems consisting of components whose reliability cannot be described by a specific probability distribution of time to failure and whose independence is in question. A complex software system is an obvious example of such a system, because it is difficult to describe the software reliability behavior precisely owing to the large number of factors that contribute to the software reliability [29]. Efficient algorithms for computing beliefs in the system reliability measures and for computing expected or average bounds for the reliability measures have been proposed. These algorithms are represented as a number of linear programming problems whose solutions can be found by well-known methods, for example, simplex methods. Moreover, if a system reliability measure can be explicitly expressed in terms of the available component reliability measures, then the number of linear programming problems to be solved is considerably reduced: in this case it suffices to solve two optimization problems for obtaining the new beliefs and two problems for computing the average bounds. The three numerical examples given in this paper have shown that non-trivial results can be obtained even from partial information about the component reliability. The imprecision of the results reflects the incompleteness of the available information; at the same time, in some cases this imprecision does not allow a useful decision concerning the system reliability to be made.

Acknowledgements
The work was supported by the Alexander von Humboldt Foundation (Germany). I am very grateful to Prof. Dr Kurt Weichselberger, Dr Anton Wallner, Dr Thomas Augustin (Munich University, Germany), and Dr Igor Kozine (Risoe National Laboratory, Denmark) for their very valuable remarks and comments. I also thank the referees for useful and detailed suggestions that improved the paper.
Appendix A

Proof of Theorem 1. Suppose that the set of $m + 1$ linear previsions $Ef_i$, $i = 1, \ldots, m$, $Eg$ is an outcome set. Then we have the set of lower $\underline{\gamma}_i$ and upper $\overline{\gamma}_i$ probabilities of events $A_i = \{\underline{a}_i \le Ef_i \le \overline{a}_i\}$. Let $Q_i = [\inf_x Ef_i(x), \sup_x Ef_i(x)]$, $i = 1, \ldots, m$, be the sample spaces. In this case, the linear previsions $Ef_i$, $Eg$ can be regarded as continuous random variables, denoted $z_i$, $z$, respectively. Here the variable $z$ is some function of the variables $z_1, \ldots, z_m$ whose explicit form is unknown. By regarding the linear previsions as random variables, we cannot define a functional relationship between them except in some simple special cases. At the same time, we cannot regard the corresponding variables as independent, because there is no information about their independence and they are linked through the joint probability distribution $p(X)$. Therefore, the following approach is proposed. Note that the probabilities of the events $A_i$ can be represented as previsions of gambles that are the indicator functions $I_{A_i}(z_i) = I_{A_i}(Ef_i)$. Denote $Z = (z_1, \ldots, z_m)$. Suppose that $C(Z)$ is a joint density of the vector of random variables $Z$. Then the upper probability $\overline{\gamma} = \overline{E} I_A(z)$ can be obtained from the following optimization problem (see Section 2):
$$\overline{\gamma} = \max_{R} \int_{Q^m} I_A(z) C(Z)\,\mathrm{d}Z,$$
subject to
$$\underline{\gamma}_i \le \int_{Q^m} I_{A_i}(z_i) C(Z)\,\mathrm{d}Z \le \overline{\gamma}_i, \quad i \le m.$$
Here $R$ is the set of all possible joint densities $\{C(Z)\}$ and $Q^m = Q_1 \times \cdots \times Q_m$. The corresponding dual optimization problem is of the form
$$\overline{\gamma} = \min_{c_0, c_i, d_i} \left\{ c_0 + \sum_{i=1}^m (c_i \overline{\gamma}_i - d_i \underline{\gamma}_i) \right\},$$
subject to $c_i, d_i \in \mathbb{R}_+$, $c_0 \in \mathbb{R}$, $i = 1, \ldots, m$, and, for all $z_i \in Q_i$,
$$c_0 + \sum_{i=1}^m (c_i - d_i) I_{A_i}(z_i) \ge I_A(z).$$
It follows from Eq. (8) and from the equality
$$Ef_i = \sum_{x \in \Omega_i} f_i(x) p_i(x) = \sum_{X \in \Omega^m} f_i(x_i) p(X)$$
that the above constraints can be rewritten in the form of Eq. (10), as was to be proved. $\square$

Proof of Theorem 2. This is obvious, because the set $A_J \cup A^c_{N \setminus J}$ contains previsions of gambles depending on different $x_i$. $\square$

Proof of Theorem 3. Let
$$\underline{c} = \min_{R_J} Eg, \quad \overline{c} = \max_{R_J} Eg,$$
subject to $A_J \cup A^c_{N \setminus J} \cup A_0$. Here the minimum and maximum are taken over the set $R_J$ of all possible probability distribution functions $p(X)$ satisfying the conditions $A_J \cup A^c_{N \setminus J} \cup A_0$. Since $A_J \cup A^c_{N \setminus J} \subseteq A_J \cup A^c_{N \setminus J} \cup A_0$ and $A_J \cup A^c_{N \setminus J}$ is consistent, there holds $[\underline{c}, \overline{c}] \subseteq [\underline{b}, \overline{b}]$. Due to the consistency of $A_J \cup A^c_{N \setminus J}$, there holds $[\underline{c}, \overline{c}] \ne \emptyset$. This implies that $[\underline{c}, \overline{c}] \cap [\underline{b}, \overline{b}] = [\underline{c}, \overline{c}]$. At the same time, it follows from the optimization problems for computing $\underline{c}$ and $\overline{c}$ that $[\underline{c}, \overline{c}] \subseteq [\underline{a}, \overline{a}]$ (since $A_0 \subset A_J \cup A^c_{N \setminus J} \cup A_0$). This implies
$$[\underline{a}, \overline{a}] \cap [\underline{b}, \overline{b}] \supseteq [\underline{c}, \overline{c}] \cap [\underline{b}, \overline{b}] = [\underline{c}, \overline{c}] \ne \emptyset,$$
as was to be proved. $\square$

Proof of Theorem 4. Let $p^{(1)}$ and $p^{(2)}$ be some distributions satisfying the constraints $A^c_{N \setminus J} \cup A_J$ and
$$\sum_{X \in \Omega^m} g(X) p^{(2)}(X) \ge \sum_{X \in \Omega^m} g(X) p^{(1)}(X).$$
Hence there holds $I_{A_i}(E^{(1)} f_i) = I_{A_i}(E^{(2)} f_i)$, where
$$E^{(k)} f_i = \sum_{X \in \Omega^m} f_i(x_i) p^{(k)}(X), \quad k = 1, 2.$$
Then the constraint
$$c_0 + \sum_{i \in J} (c_i - d_i) I_{A_i}(E^{(1)} f_i) \ge E^{(1)} g$$
follows from the constraint
$$c_0 + \sum_{i \in J} (c_i - d_i) I_{A_i}(E^{(2)} f_i) \ge E^{(2)} g,$$
and can be removed. This implies that Eq. (19) is equivalent to
$$c_0 + \sum_{i \in J} (c_i - d_i) I_{A_i}(Ef_i) \ge \max_{R_J} Eg. \qquad \square$$
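The dual program in the proof of Theorem 1 is exactly the finite linear program solved in the numerical examples of Section 6. As a hedged sanity check (not part of the original paper; `scipy` is an assumed tool), the analogous maximization for the lower probability $\underline{\gamma}$ in Section 6.2 can be solved directly:

```python
from scipy.optimize import linprog

# Lower-probability program of Section 6.2:
#   maximize   c0 + 0.9*(c1 - d1) + 0.3*c2 - d2
#   subject to c0 + (c1-d1) + (c2-d2) <= 1,  c0 + (c1-d1) <= 1,
#              c0 + (c2-d2) <= 0,            c0 <= 0,
#   with c1, d1, c2, d2 >= 0 and c0 free.
# Variables: x = (c0, c1, d1, c2, d2); linprog minimizes, so negate.
obj = [-1.0, -0.9, 0.9, -0.3, 1.0]
A_ub = [[1, 1, -1, 1, -1],
        [1, 1, -1, 0,  0],
        [1, 0,  0, 1, -1],
        [1, 0,  0, 0,  0]]
b_ub = [1, 1, 0, 0]
bounds = [(None, None)] + [(0, None)] * 4
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(round(-res.fun, 6))  # expected: 0.9
```

The solver returns $\underline{\gamma} = 0.9$, matching the value obtained in Section 6.2 (attained, e.g., at $c_0 = 0$, $c_1 = 1$, $d_1 = c_2 = d_2 = 0$).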
References

[1] Walley P. Statistical reasoning with imprecise probabilities. London: Chapman & Hall; 1991.
[2] Kuznetsov VP. Interval statistical models. Moscow: Radio and Communication; 1991. In Russian.
[3] Weichselberger K. The theory of interval-probability as a unifying concept for uncertainty. Int J Approx Reason 2000;24:149–70.
[4] Weichselberger K. Elementare Grundbegriffe einer allgemeineren Wahrscheinlichkeitsrechnung. Intervallwahrscheinlichkeit als umfassendes Konzept, vol. 1. Heidelberg: Physica; 2001.
[5] Gurov S, Utkin L. Reliability of systems under incomplete information. St Petersburg: Lubavich Publications; 1999. In Russian.
[6] Utkin L, Gurov S. New reliability models based on imprecise probabilities. In: Hsu C, editor. Advanced signal processing technology. Singapore: World Scientific; 2001. p. 110–39, chapter 6.
[7] Utkin L, Gurov S. Imprecise reliability of general structures. Knowledge Inform Syst 1999;1(4):459–80.
[8] de Cooman G. Possibilistic previsions. In: EDK, editor. Proceedings of IPMU'98, vol. 1, Paris; 1998. p. 2–9.
[9] Goodman IR, Nguyen HT. Probability updating using second order probabilities and conditional event algebra. Inform Sci 1999;121(3/4):295–347.
[10] Nau RF. Indeterminate probabilities on finite sets. Ann Stat 1992;20:1737–67.
[11] Walley P. Statistical inferences based on a second-order possibility distribution. Int J Gen Syst 1997;9:337–83.
[12] Ekenberg L, Thorbiörnson J. Second-order decision analysis. Int J Uncertain, Fuzz Knowledge-Based Syst 2001;9:13–38.
[13] Gilbert L, de Cooman G, Kerre E. Practical implementation of possibilistic probability mass functions. Proceedings of Fifth Workshop on Uncertainty Processing (WUPES 2000), Jindřichův Hradec, Czech Republic; 2000. p. 90–101.
[14] Nguyen H, Kreinovich V, Longpre L. Second-order uncertainty as a bridge between probabilistic and fuzzy approaches. Proceedings of the Second Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'01, England; 2001. p. 410–3.
[15] Kozine I, Utkin L. Constructing coherent interval statistical models from unreliable judgements. In: Zio E, Demichela M, Piccini N, editors. Proceedings of the European Conference on Safety and Reliability ESREL2001, vol. 1, Torino, Italy; 2001. p. 173–80.
[16] Lindqvist B, Langseth H. Uncertainty bounds for a monotone multistate system. Probab Engng Inform Sci 1998;12:239–60.
[17] de Cooman G. Precision–imprecision equivalence in a broad class of imprecise hierarchical uncertainty models. J Stat Plan Infer 2002;105(1):175–98.
[18] Berger J. Statistical decision theory and Bayesian analysis. New York: Springer; 1985.
[19] Goldstein M. The prevision of a prevision. J Am Stat Soc 1983;87:817–9.
[20] Good I. Some history of the hierarchical Bayesian methodology. In: Bernardo J, DeGroot M, Lindley D, Smith A, editors. Bayesian statistics. Valencia: Valencia University Press; 1980. p. 489–519.
[21] Robert C. The Bayesian choice. New York: Springer; 1994.
[22] Zellner A. An introduction to Bayesian inference in econometrics. New York: Wiley; 1971.
[23] Barlow R, Proschan F. Statistical theory of reliability and life testing: probability models. New York: Holt, Rinehart and Winston; 1975.
[24] Utkin L, Kozine I. Different faces of the natural extension. In: de Cooman G, Fine T, Seidenfeld T, editors. Imprecise probabilities and their applications. Proceedings of the Second International Symposium ISIPTA'01, Ithaca, USA: Shaker Publishing; 2001. p. 316–23.
[25] Utkin L. Imprecise reliability analysis by comparative judgements. Proceedings of the Second International Conference on Mathematical Methods in Reliability, vol. 2, Bordeaux, France; 2000. p. 1005–8.
[26] Kozine I, Filimonov Y. Imprecise reliabilities: experiences and advances. Reliab Engng Syst Safety 2000;67:75–83.
[27] Barlow R, Wu A. Coherent systems with multistate components. Math Oper Res 1978;3:275–81.
[28] Utkin L, Gurov S. New reliability models on the basis of the theory of imprecise probabilities. IIZUKA'98—The Fifth International Conference on Soft Computing and Information/Intelligent Systems, vol. 2, Iizuka, Japan; 1998. p. 656–9.
[29] Cai K. Software defect and operational profile modeling. Dordrecht: Kluwer Academic Publishers; 1998.