Journal of Statistical Planning and Inference 132 (2005) 21–31
www.elsevier.com/locate/jspi
Monte-Carlo approximation for probability distribution of monotone Boolean function

Alexander Andronov*
Department of Mathematical Support of Transport System Management, Riga Technical University, 1 Kalku St., LV-1658 Riga, Latvia

Available online 12 August 2004
Abstract

A new method is proposed for evaluating the probability that a monotone Boolean function takes the value 1 (for example, the reliability of a system). The method is based on representing the function by its canonical disjunctive normal form. The Monte-Carlo procedure chooses terms of this form at random; their arithmetical mean gives an approximation of the probability of the value 1. The estimator is unbiased, and its variance is calculated. It is shown that the proposed method has smaller variance than the crude Monte-Carlo method.
© 2004 Elsevier B.V. All rights reserved.

MSC: 65C05; 90B25

Keywords: Logical functions; Reliability evaluation; Monte-Carlo method
1. Introduction

Monotone Boolean functions are widely applied in reliability theory, logical control systems, etc. The Boolean function f(x) = f(x_1, x_2, ..., x_m) of m Boolean variables x_1, x_2, ..., x_m ∈ B = {0, 1} is defined by the formula

$$f(x) = \begin{cases} 1 & \text{if } x \in D, \\ 0 & \text{otherwise}, \end{cases}$$

where D ⊂ Ω and Ω = B^m is the set of all m-dimensional Boolean vectors; M = {1, 2, ..., m}.

* Corresponding author.
E-mail address: [email protected] (A. Andronov).

0378-3758/$ - see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.jspi.2004.06.013
The monotone property of the Boolean function f(x_1, x_2, ..., x_m) means that the conditions x_1 ≤ x'_1, x_2 ≤ x'_2, ..., x_m ≤ x'_m for Boolean variables x_i, x'_i ∈ B = {0, 1} imply f(x_1, x_2, ..., x_m) ≤ f(x'_1, x'_2, ..., x'_m). To avoid triviality we suppose f(0, 0, ..., 0) = 0 and f(1, 1, ..., 1) = 1.

The function f can be determined by its canonical disjunctive normal form (CDNF)

$$f(x_1, x_2, \dots, x_m) = \bigvee_{\alpha \in D} \bigwedge_{i=1}^{m} x_i^{\alpha_i}, \tag{1}$$

where α = (α_1, α_2, ..., α_m) ∈ Ω is an m-tuple of zeros and units, ∨ and ∧ are the Boolean operations of disjunction and conjunction, x_i^1 = x_i, and x_i^0 = x̄_i is the negation of x_i; see Schneeweiss (1989).

The actual value of x_i is a random Boolean variable X_i with known probability distribution P{X_i = 1} = P_i, P{X_i = 0} = 1 − P_i, 0 < P_i < 1. All variables X_1, ..., X_m are mutually independent. Let X = (X_1, ..., X_m). Obviously,

$$P\{X = \alpha\} = \prod_{i=1}^{m} P_i^{*\alpha_i} = \prod_{i \in I(\alpha)} P_i \prod_{i \notin I(\alpha)} (1 - P_i), \tag{2}$$

where P_i^{*1} = P_i, P_i^{*0} = 1 − P_i, and I(α) = {i ∈ M : α_i = 1}. A well-known result (Andronov, 1996; Beichelt and Franken, 1983; Schneeweiss, 1989) says that the probability R that f = 1 can be expressed via the CDNF (1) in the following way:

$$R = P\{f(X) = 1\} = \sum_{\alpha \in D} \prod_{i=1}^{m} P_i^{*\alpha_i}. \tag{3}$$
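For small m, formula (3) can be checked by direct enumeration of Ω. The following sketch (not from the paper) does this for a hypothetical 2-out-of-3 system with made-up reliabilities P_i.

```python
from itertools import product

# Hypothetical example: a 2-out-of-3 system, f(x) = 1 iff at least
# two of the three elements are active; P holds the reliabilities P_i.
def f(x):
    return 1 if sum(x) >= 2 else 0

P = [0.9, 0.8, 0.7]
m = len(P)

# Formula (3): R = sum over alpha in D of prod_i P_i^{*alpha_i},
# where P_i^{*1} = P_i and P_i^{*0} = 1 - P_i.
R = 0.0
for alpha in product((0, 1), repeat=m):   # alpha ranges over Omega
    if f(alpha) == 1:                     # i.e. alpha in D
        term = 1.0
        for Pi, ai in zip(P, alpha):
            term *= Pi if ai == 1 else 1 - Pi
        R += term

print(round(R, 4))   # -> 0.902
```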
We will illustrate our exposition using reliability theory. Consider a reliability system that includes m elements. The state of the ith element is described by a Boolean variable x_i, which has value 1 if the ith element is active and 0 otherwise. The system state is then described by the m-dimensional Boolean vector x = (x_1, ..., x_m). The set D is the set of active system states. The Boolean function f is called the structure function of the system. In fact, the actual state of the ith element is random; it is described by a random Boolean variable X_i, P{X_i = 1} = P_i, P{X_i = 0} = 1 − P_i. We suppose that the system elements fail mutually independently. In this case probability (3) is called the system reliability R. We will use the terminology of reliability theory for a visual exposition.

Unfortunately, it is impossible to use formula (3) directly, for two reasons: (a) often the set D is given implicitly, by a numerical algorithm, a graphical or network representation of the system states, etc.; (b) the number of elements of D is too large. Thus formulas (1) and (3) have a theoretical importance only.

In this case, the crude Monte-Carlo method works as follows (Gertsbakh, 2000). Independent random experiments are realized. In the lth experiment we generate Boolean random variables X(l) = (X_1(l), X_2(l), ..., X_m(l)):

$$X_i(l) = \begin{cases} 1 & \text{with probability } P_i, \\ 0 & \text{with probability } 1 - P_i. \end{cases} \tag{4}$$

Then we calculate the value Y(l) = f(X(l)). Usually this is done not by means of expression (1) but by a search for an operational path in a corresponding network of function calculation or system states, etc. (Barlow, 1998; Gertsbakh, 2000). Repeating r times (l = 1, ..., r) gives a Monte-Carlo estimator of the reliability:

$$\hat{R} = \frac{1}{r} \sum_{l=1}^{r} Y(l). \tag{5}$$
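A minimal crude Monte-Carlo sketch of (4) and (5), again for a hypothetical 2-out-of-3 system (the structure function, reliabilities and seed are illustrative):

```python
import random

# Crude Monte-Carlo: simulate element states X(l) by (4), average Y(l) = f(X(l)).
def f(x):
    return 1 if sum(x) >= 2 else 0   # hypothetical 2-out-of-3 system

P = [0.9, 0.8, 0.7]
r = 200_000
random.seed(1)

hits = 0
for _ in range(r):
    X = [1 if random.random() <= Pi else 0 for Pi in P]   # formula (4)
    hits += f(X)                                          # Y(l) = f(X(l))

R_hat = hits / r                                          # estimator (5)
print(R_hat)   # close to the exact R = 0.902
```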
It is clear that EY(l) = R, DY(l) = R(1 − R), ER̂ = R, DR̂ = R(1 − R)/r.

Now we describe an alternative approach to evaluating the reliability R. The proposed approach was prompted by a discussion with Professor I. Gertsbakh (November 1996; Beer-Sheva, Israel).

2. Suggested approach

Note that in the usual Monte-Carlo approach the actual state of each element is generated in accordance with distribution (4): 1 means the active state, 0 the failed state. On the contrary, our approach does not generate the states; instead it generates an addend of (3). More precisely, we generate Boolean random variables A_1, A_2, ..., A_m independently and equiprobably: A_i is equal to 1 and to 0 with probability 0.5. As a result we get an m-dimensional Boolean vector A = (A_1, A_2, ..., A_m) from Ω in accordance with the uniform distribution on Ω:

$$P\{A_1 = \alpha_1, \dots, A_m = \alpha_m\} = 2^{-m}, \qquad \alpha = (\alpha_1, \dots, \alpha_m) \in \Omega. \tag{6}$$

Let the value A(l) = (A_1(l), ..., A_m(l)) correspond to the lth experiment. Then we set

$$W(l) = \begin{cases} \prod_{i=1}^{m} P_i^{*A_i(l)} & \text{if } A(l) \in D, \\ 0 & \text{otherwise}. \end{cases} \tag{7}$$

After r experiments we have the following estimator of the reliability:

$$\tilde{R} = 2^m \, \frac{1}{r} \sum_{l=1}^{r} W(l). \tag{8}$$
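A sketch of estimator (8) for the same hypothetical 2-out-of-3 system. Note that for this highly reliable system R̃ is actually noisier than R̂, in line with the comparison of Section 3; the point here is only the mechanics of (6)-(8).

```python
import random

# Monte-Carlo approximation (8): sample A uniformly on Omega, accumulate W(l).
def f(x):
    return 1 if sum(x) >= 2 else 0   # hypothetical 2-out-of-3 system

P = [0.9, 0.8, 0.7]
m = len(P)
r = 200_000
random.seed(2)

total = 0.0
for _ in range(r):
    A = [random.randint(0, 1) for _ in range(m)]   # uniform on Omega, (6)
    if f(A) == 1:                                  # A in D
        w = 1.0
        for Pi, ai in zip(P, A):
            w *= Pi if ai == 1 else 1 - Pi         # W(l), formula (7)
        total += w

R_tilde = 2**m * total / r                         # estimator (8)
print(R_tilde)   # close to the exact R = 0.902
```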
In fact, we need only verify the condition A ∈ D. This is the same problem that we had for the usual Monte-Carlo approach. We will call estimator (8) the Monte-Carlo approximation of the reliability R. Let us consider its properties.

Statement 1. The Monte-Carlo approximation (8) is an unbiased estimator of the reliability: ER̃ = R.
Proof. Let us calculate the mathematical expectation EW(l) by the use of formulas (3), (6) and (7):

$$EW(l) = \sum_{\alpha \in D} \prod_{i=1}^{m} P_i^{*\alpha_i} \, P\{A_1 = \alpha_1, \dots, A_m = \alpha_m\} = 2^{-m} \sum_{\alpha \in D} \prod_{i=1}^{m} P_i^{*\alpha_i} = 2^{-m} R. \tag{9}$$

Therefore, in accordance with (8),

$$E\tilde{R} = 2^m \, \frac{1}{r} \sum_{l=1}^{r} EW(l) = R.$$
Statement 2. The variance of the Monte-Carlo approximation is calculated by the formula

$$D\tilde{R} = \frac{1}{r} \left( 2^m \sum_{\alpha \in D} \prod_{i=1}^{m} (P_i^{*\alpha_i})^2 - R^2 \right). \tag{10}$$
Proof. Note that

$$E(W(l))^2 = \sum_{\alpha \in D} P\{A_1 = \alpha_1, \dots, A_m = \alpha_m\} \prod_{i=1}^{m} (P_i^{*\alpha_i})^2 = 2^{-m} \sum_{\alpha \in D} \prod_{i=1}^{m} (P_i^{*\alpha_i})^2,$$

$$DW(l) = E(W(l))^2 - (EW(l))^2 = 2^{-m} \sum_{\alpha \in D} \prod_{i=1}^{m} (P_i^{*\alpha_i})^2 - (2^{-m} R)^2.$$

Therefore, in accordance with (8),

$$D\tilde{R} = 2^{2m} \, \frac{1}{r^2} \sum_{l=1}^{r} DW(l) = \frac{1}{r} \, 2^{2m} \left( 2^{-m} \sum_{\alpha \in D} \prod_{i=1}^{m} (P_i^{*\alpha_i})^2 - (2^{-m} R)^2 \right) = \frac{1}{r} \left( 2^m \sum_{\alpha \in D} \prod_{i=1}^{m} (P_i^{*\alpha_i})^2 - R^2 \right).$$
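Formula (10) can be verified against a direct computation of Var(W), since W takes the value ∏ P_i^{*α_i} with probability 2^{−m} for each α ∈ D and 0 otherwise. The toy system below is hypothetical.

```python
from itertools import product

# Numerical check of formula (10) on a hypothetical 2-out-of-3 system.
def f(x):
    return 1 if sum(x) >= 2 else 0

P = [0.9, 0.8, 0.7]
m, r = len(P), 1000

def weight(alpha):
    w = 1.0
    for Pi, ai in zip(P, alpha):
        w *= Pi if ai == 1 else 1 - Pi
    return w

R  = sum(weight(a)    for a in product((0, 1), repeat=m) if f(a))
S2 = sum(weight(a)**2 for a in product((0, 1), repeat=m) if f(a))

var_formula = (2**m * S2 - R**2) / r           # formula (10)

# Direct route: Var(R_tilde) = 2^{2m} Var(W) / r, with EW and E(W^2)
# obtained by enumerating the distribution of W.
EW  = 2**-m * R
EW2 = 2**-m * S2
var_direct = 2**(2 * m) * (EW2 - EW**2) / r

print(abs(var_formula - var_direct) < 1e-12)   # -> True
```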
3. Comparison of the methods

As both estimators R̂ and R̃ are unbiased, we will use their variances for the comparison. Let us remember that DR̂ = R(1 − R)/r = (R − R²)/r. At first, we consider two classical cases of systems (Barlow, 1998; Beichelt and Franken, 1983; Gertsbakh, 2000).

A series system: In this case D = {(1, 1, ..., 1)},

$$R = \prod_{i=1}^{m} P_i, \qquad D\tilde{R} = \frac{1}{r} \left( 2^m \prod_{i=1}^{m} P_i^2 - R^2 \right) = \frac{1}{r} \, R^2 (2^m - 1). \tag{11}$$

Therefore DR̃ < DR̂ if 2^m R < 1, that is, if R < 2^{−m}.
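A quick numerical check of the series-system formula (11) and of the comparison condition R < 2^{−m}; the element reliabilities below are hypothetical, chosen small so that R is tiny.

```python
# Series system: D = {(1,...,1)}, so (10) reduces to (11).
P = [0.01, 0.02, 0.03]       # hypothetical low-reliability elements
m, r = len(P), 1000

R = 1.0
for Pi in P:
    R *= Pi                  # series system: R = prod P_i

prod_sq = 1.0
for Pi in P:
    prod_sq *= Pi**2
var_tilde = (2**m * prod_sq - R**2) / r          # formula (10) specialised

# ... which equals R^2 (2^m - 1) / r, formula (11).
assert abs(var_tilde - R**2 * (2**m - 1) / r) < 1e-24

var_hat = R * (1 - R) / r    # crude Monte-Carlo variance

print(R < 2**-m, var_tilde < var_hat)   # -> True True
```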
A parallel system: In this case D = Ω − {(0, 0, ..., 0)},

$$R = 1 - \prod_{i=1}^{m} (1 - P_i),$$

$$\sum_{\alpha \in D} \prod_{i=1}^{m} (P_i^{*\alpha_i})^2 = \sum_{\alpha \in \Omega} \prod_{i=1}^{m} (P_i^{*\alpha_i})^2 - \prod_{i=1}^{m} (1 - P_i)^2 = \prod_{i=1}^{m} \left[ P_i^2 + (1 - P_i)^2 \right] - \prod_{i=1}^{m} (1 - P_i)^2 = \prod_{i=1}^{m} (2P_i^2 - 2P_i + 1) - \prod_{i=1}^{m} (1 - P_i)^2,$$

$$D\tilde{R} = \frac{1}{r} \left( 2^m \left[ \prod_{i=1}^{m} (2P_i^2 - 2P_i + 1) - \prod_{i=1}^{m} (1 - P_i)^2 \right] - R^2 \right). \tag{12}$$
It is not difficult to see that the variance DR̃ is greater than DR̂ if the probability R is not small. To investigate the alternative case we use the method of a small parameter ε > 0, ε → 0, setting P_i = ελ_i for all i. It allows us to investigate the general case of an arbitrary system.

A general case: Now we consider an asymptotical approach for the case of a general system whose elements have low reliability: P_i = ελ_i, ε → 0. Let |α| = α_1 + α_2 + ... + α_m for α ∈ Ω, D_k = {α ∈ D : |α| = k}, and k* = min{k ∈ {1, 2, ...} : D_k ≠ ∅}. If α ∈ D_{k*} then

$$\prod_{i=1}^{m} P_i^{*\alpha_i} = \varepsilon^{k^*} \prod_{i \in I(\alpha)} \lambda_i + o(\varepsilon^{k^*}). \tag{13}$$

Therefore,

$$R = \sum_{\alpha \in D} \prod_{i=1}^{m} P_i^{*\alpha_i} = \varepsilon^{k^*} \sum_{\alpha \in D_{k^*}} \prod_{i \in I(\alpha)} \lambda_i + o(\varepsilon^{k^*}), \tag{14}$$

and we have the following result.

Statement 3. The asymptotical variance of the Monte-Carlo approximation is calculated by the formula

$$D\tilde{R} = \frac{1}{r} \left( 2^m \varepsilon^{2k^*} \sum_{\alpha \in D_{k^*}} \prod_{i \in I(\alpha)} \lambda_i^2 - R^2 \right) + o(\varepsilon^{2k^*}). \tag{15}$$
Statement 4. If 2^m < ε^{−k*}, or m < −k* log₂ ε, then DR̃ < DR̂:

$$D\tilde{R} < \frac{1}{r} \left( \varepsilon^{k^*} \sum_{\alpha \in D_{k^*}} \prod_{i \in I(\alpha)} \lambda_i^2 - R^2 \right) + o(\varepsilon^{k^*}) < \frac{1}{r} \, (R - R^2) + o(\varepsilon^{k^*}) = D\hat{R} + o(\varepsilon^{k^*}).$$

Note that the parallel (k* = 1) and series (k* = m) systems give the extreme values of k* for the considered asymptotical case.
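The quantity k* (the minimal number of active elements over all states in D) is easy to compute for small systems by enumeration. The sketch below (illustrative; m = 4 is an arbitrary choice) confirms the extreme cases just named: k* = m for a series system and k* = 1 for a parallel one.

```python
from itertools import product

m = 4   # arbitrary illustrative element count

def k_star(f):
    # k* = min{|alpha| : alpha in D}, enumerating Omega = {0,1}^m.
    weights = [sum(a) for a in product((0, 1), repeat=m) if f(a)]
    return min(weights)

series   = lambda x: 1 if all(x) else 0   # D = {(1,...,1)}
parallel = lambda x: 1 if any(x) else 0   # D = Omega - {(0,...,0)}

print(k_star(series), k_star(parallel))   # -> 4 1
```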
4. Mixed estimators

Let us remember that in the crude Monte-Carlo approach we generate a random Boolean variable X_i that determines the actual state of the ith element (see formula (4)). In the suggested approach we generate a random Boolean variable A_i that determines the number of the addend of (3) to be used. The main idea of the new approach is to generate both Boolean random variables on the basis of the same random variable T_i, which has the uniform distribution over the interval (0, 1). Then we get two correlated estimators R̂ and R̃ of the system reliability. Their linear combination

$$R^* = \beta \tilde{R} + (1 - \beta) \hat{R} \tag{16}$$

is called a mixed estimator. What variance does it have? At first, we consider the covariance of R̂ and R̃. Let

$$Y = \begin{cases} 1 & \text{if } X = (X_1, X_2, \dots, X_m) \in D, \\ 0 & \text{otherwise}, \end{cases} \tag{17}$$

$$W = \begin{cases} \prod_{i=1}^{m} P_i^{*A_i} & \text{if } A = (A_1, A_2, \dots, A_m) \in D, \\ 0 & \text{otherwise}. \end{cases} \tag{18}$$
As before, we will use the integer variable l to mark the simulation run number. Now the covariance of R̂ and R̃ (see formulas (5) and (8)) can be calculated by the formula

$$\mathrm{Cov}(\hat{R}, \tilde{R}) = \mathrm{Cov}\!\left( \frac{1}{r} \sum_{l=1}^{r} Y(l), \; 2^m \frac{1}{r} \sum_{l=1}^{r} W(l) \right) = r^{-2} 2^m \sum_{l=1}^{r} \mathrm{Cov}(Y(l), W(l)) = r^{-1} 2^m \mathrm{Cov}(Y, W). \tag{19}$$

Therefore we have to calculate the covariance of Y and W, or E(YW), since EY = R, EW = 2^{−m} R and

$$\mathrm{Cov}(Y, W) = E(YW) - 2^{-m} R^2. \tag{20}$$
Fig. 1. Arrangement of events {Xi = 1} and {Ai = 1}.
We consider the case when the reliabilities P_i of all elements do not exceed 0.5: P_i ≤ 0.5 for all i ∈ M. Two modes of X_i and A_i generation are possible. For the first mode (see Fig. 1a) we have the relations

$$X_i = \begin{cases} 1 & \text{if } T_i \le P_i, \\ 0 & \text{otherwise}, \end{cases} \qquad A_i = \begin{cases} 1 & \text{if } T_i \le 0.5, \\ 0 & \text{otherwise}. \end{cases}$$

Note that the random event {X_i = 1} implies the random event {A_i = 1}. Therefore, according to (17) and (18), the random event {X ∈ D} = {Y = 1} implies the random event {W > 0}. Furthermore, we see from Fig. 1a that P{A_i = 1 | X_i = 0} = (0.5 − P_i)/(1 − P_i) and P{A_i = 0 | X_i = 0} = 0.5/(1 − P_i). Therefore,

$$E(YW) = \sum_{\alpha \in D} P\{X = \alpha\} E(YW \mid X = \alpha) = \sum_{\alpha \in D} P\{X = \alpha\} E(W \mid X = \alpha)$$

$$= \sum_{\alpha \in D} P\{X = \alpha\} \prod_{i \in I(\alpha)} P_i \prod_{i \notin I(\alpha)} E(P_i^{*A_i} \mid X_i = 0)$$

$$= \sum_{\alpha \in D} P\{X = \alpha\} \prod_{i \in I(\alpha)} P_i \prod_{i \notin I(\alpha)} \frac{1}{1 - P_i} \big( (0.5 - P_i) P_i + 0.5 (1 - P_i) \big).$$

Finally, according to formula (2),

$$E(YW) = \sum_{\alpha \in D} \prod_{i \in I(\alpha)} P_i^2 \prod_{i \notin I(\alpha)} (0.5 - P_i^2). \tag{21}$$
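Formula (21) can be checked exhaustively: under the first mode, T_i falls into one of the three intervals [0, P_i], (P_i, 0.5], (0.5, 1], which determines (X_i, A_i) = (1, 1), (0, 1), (0, 0) with probabilities P_i, 0.5 − P_i, 0.5 respectively. The sketch below enumerates these 3^m joint outcomes for a hypothetical monotone 2-out-of-3 system with P_i ≤ 0.5.

```python
from itertools import product

# Exact check of formula (21), first generation mode, hypothetical system.
def f(x):
    return 1 if sum(x) >= 2 else 0   # monotone 2-out-of-3

P = [0.2, 0.3, 0.4]                  # all P_i <= 0.5, as the text requires
m = len(P)

# Per-element outcomes ((X_i, A_i), probability) induced by T_i.
cases = lambda Pi: [((1, 1), Pi), ((0, 1), 0.5 - Pi), ((0, 0), 0.5)]

EYW = 0.0
for combo in product(*[cases(Pi) for Pi in P]):
    X = [c[0][0] for c in combo]
    A = [c[0][1] for c in combo]
    prob = 1.0
    for c in combo:
        prob *= c[1]
    W = 0.0
    if f(A) == 1:                    # W from (18)
        W = 1.0
        for Pi, ai in zip(P, A):
            W *= Pi if ai == 1 else 1 - Pi
    EYW += prob * f(X) * W           # Y from (17)

# Right-hand side of (21).
rhs = 0.0
for alpha in product((0, 1), repeat=m):
    if f(alpha) == 1:
        t = 1.0
        for Pi, ai in zip(P, alpha):
            t *= Pi**2 if ai == 1 else 0.5 - Pi**2
        rhs += t

print(abs(EYW - rhs) < 1e-12)   # -> True
```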
The second mode of random variable X_i and A_i generation is presented in Fig. 1b. Here

$$X_i = \begin{cases} 1 & \text{if } T_i \le P_i, \\ 0 & \text{otherwise}, \end{cases} \qquad A_i = \begin{cases} 1 & \text{if } T_i > 0.5, \\ 0 & \text{otherwise}. \end{cases}$$

Note that {X_i = 1} and {A_i = 1} are disjoint events. To get an expression for E(YW) we need the following notation: for J ⊂ M let

$$D(J) = \{\beta \in D : I(\beta) \cap J = \emptyset\} = \{\beta \in D : \beta_i = 0 \ \forall i \in J\}.$$

We have

$$E(YW) = \sum_{\alpha \in D} P\{X = \alpha\} E(YW \mid X = \alpha) = \sum_{\alpha \in D} P\{X = \alpha\} E(W \mid X = \alpha) = \sum_{\alpha \in D} \sum_{\beta \in D(I(\alpha))} P\{X = \alpha, \; A = \beta\} \, E(W \mid A = \beta).$$

Evaluating these probabilities and collecting the factors gives

$$E(YW) = \sum_{\alpha \in D} \prod_{i \in I(\alpha)} P_i (1 - P_i) \sum_{\beta \in D(I(\alpha))} \prod_{j \in I(\beta)} 0.5 P_j \prod_{i \notin I(\alpha \vee \beta)} (0.5 - P_i)(1 - P_i), \tag{22}$$

where I(α ∨ β) = I(α) ∪ I(β).
Now we are able to calculate covariance (19) for the two estimators by the use of formulas (19)-(22). It allows us to calculate the variance of the mixed estimator (16):

$$DR^* = \beta^2 D\tilde{R} + (1 - \beta)^2 D\hat{R} + 2\beta(1 - \beta) \, \mathrm{Cov}(\tilde{R}, \hat{R}). \tag{23}$$
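The common-random-numbers coupling described above can be sketched as follows; the function name, reliabilities and seed are illustrative, and P_i ≤ 0.5 is assumed as in the text.

```python
import random

# One uniform T_i drives both X_i (crude Monte-Carlo) and A_i (the
# approximation), in either of the two modes of Fig. 1.
def generate(P, mode, rng):
    T = [rng.random() for _ in P]
    X = [1 if t <= Pi else 0 for t, Pi in zip(T, P)]
    if mode == 1:                                  # Fig. 1a
        A = [1 if t <= 0.5 else 0 for t in T]
    else:                                          # Fig. 1b
        A = [1 if t > 0.5 else 0 for t in T]
    return X, A

rng = random.Random(3)
P = [0.2, 0.3, 0.4]
for _ in range(1000):
    X, A = generate(P, 1, rng)
    # In mode 1 the event {X_i = 1} implies {A_i = 1}.
    assert all(a >= x for x, a in zip(X, A))
    X, A = generate(P, 2, rng)
    # In mode 2 the events {X_i = 1} and {A_i = 1} are disjoint.
    assert all(not (x == 1 and a == 1) for x, a in zip(X, A))
print("ok")
```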
5. Optimal estimator

Naturally, the choice of the value β for the mixed estimator (16) has to ensure the minimization of the estimator's variance. For this purpose we use the following lemma.

Lemma. Let R̃ and R̂ be two unbiased estimators of R with variances DR̃, DR̂ and covariance C = Cov(R̃, R̂). Then the minimal variance (23) of the mixed estimator (16) is attained at the value

$$\beta_{\mathrm{opt}} = \frac{D\hat{R} - C}{D\tilde{R} + D\hat{R} - 2C}. \tag{24}$$
Of course, the values DR̃, DR̂ and C are unknown, but we are able to estimate them simultaneously with the evaluation of R, using the same estimators R̂ and R̃. Let us consider our main cases.

A series system: For the first mode of X_i and A_i generation (Fig. 1a) we have, in accordance with formulas (19)-(21) and (11),

$$E(YW) = R^2, \qquad C = r^{-1}(2^m R^2 - R^2) = D\tilde{R}.$$

Therefore β_opt = 1 and R̃ is the optimal estimator. For the second mode of X_i and A_i generation (Fig. 1b) we have E(YW) = 0. Therefore C = −R²/r and the optimal value of β is equal to 1/(1 + 2^m R).

A parallel system: For the first mode of X_i and A_i generation, formula (21) gives

$$E(YW) = \sum_{\alpha \in D} \prod_{i \in I(\alpha)} P_i^2 \prod_{i \notin I(\alpha)} (0.5 - P_i^2) = \prod_{i=1}^{m} \big( P_i^2 + (0.5 - P_i^2) \big) - \prod_{i=1}^{m} (0.5 - P_i^2) = 2^{-m} - \prod_{i=1}^{m} (0.5 - P_i^2).$$

Therefore, in accordance with (19) and (20),

$$C = \frac{1}{r} \left( 1 - R^2 - \prod_{i=1}^{m} (1 - 2P_i^2) \right)$$

and we are able to calculate the optimal value (24). For the second mode of X_i and A_i generation, analogous algebra shows that

$$C = \frac{1}{r} \left( R(1 - R) - \prod_{i=1}^{m} (1 - 2P_i + 2P_i^2) + \prod_{i=1}^{m} (1 - 3P_i + 2P_i^2) \right).$$
6. Numerical example

Let us consider a series-parallel system. The system includes n identical blocks in series, and each block includes n identical parallel elements. The system is in an active state if each block has at least one active element. In this case the structure function of the system has the following form:

$$f = \prod_{i=1}^{n} \left( 1 - \prod_{j=1}^{n} (1 - x_{i,j}) \right),$$

where x_{i,j} is the state (1 or 0) of the jth element of the ith block. Let us suppose that the failures of the elements are mutually independent random events and that all elements have the same reliability p = P{X_{i,j} = 1} for all i, j. Therefore the system reliability is R = (1 − (1 − p)^n)^n.
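The exact reliability of this example, and a run of estimator (8) for it, can be sketched as follows (the seed and replica count are illustrative; the values n = 4, p = 0.4 are those of the numerical example):

```python
import random

# Series-parallel example: n = 4 blocks of n = 4 parallel elements, p = 0.4.
n, p = 4, 0.4
m = n * n
R = (1 - (1 - p)**n)**n
print(round(R, 4))            # -> 0.574, the value R = 0.5740 of Table 1

def f(x):                     # structure function of the series-parallel system
    return 1 if all(any(x[i*n:(i+1)*n]) for i in range(n)) else 0

random.seed(4)
r = 5000
total = 0.0
for _ in range(r):
    A = [random.randint(0, 1) for _ in range(m)]   # uniform on Omega
    if f(A):
        w = 1.0
        for a in A:
            w *= p if a else 1 - p                 # W(l), formula (7)
        total += w
R_tilde = 2**m * total / r    # estimator (8); close to R
```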
We considered the following numerical data: n = 4 (therefore m = n² = 16), p = 0.4, and numbers of replicas r = 5000 and r = 50 000 in formulas (5) and (8). The corresponding results are presented in Table 1.

Table 1
Comparison of the estimators

r       R       R̂       D R̂        D* R̂       R̃_1     D R̃_1      D* R̃_1     R̃_2     D R̃_2      D* R̃_2
5×10³   0.5740  0.5686  4.89×10⁻⁵  4.91×10⁻⁵  0.5765  6.60×10⁻⁵  6.54×10⁻⁵  0.5726  6.60×10⁻⁵  6.63×10⁻⁵
5×10⁴   0.5740  0.5775  4.89×10⁻⁶  4.88×10⁻⁶  0.5741  6.60×10⁻⁶  6.60×10⁻⁶  0.5735  6.60×10⁻⁶  6.61×10⁻⁶

r       β_1opt  R*(β_1opt)  D R*(β_1opt)  β*_1opt  R*(β*_1opt)  D* R*(β*_1opt)
5×10³   0.4090  0.5718      3.32×10⁻⁵     0.4063   0.5718       3.28×10⁻⁵
5×10⁴   0.4090  0.5761      3.32×10⁻⁶     0.4117   0.5761       3.32×10⁻⁵

r       β_2opt  R*(β_2opt)  D R*(β_2opt)  β*_2opt  R*(β*_2opt)  D* R*(β*_2opt)
5×10³   0.3994  0.5702      3.54×10⁻⁵     0.3940   0.5702       3.56×10⁻⁵
5×10⁴   0.3994  0.5759      3.54×10⁻⁶     0.4021   0.5759       3.54×10⁻⁶

Table 1 contains the values of the reliability R and its estimators R̂ and R̃_i, and the variances DR̂ and DR̃_i of these estimators, calculated by formulas (5), (8) and (10). The variance estimators D*R̂ and D*R̃_i have been obtained by using the estimators R̂ and R̃_i instead of R in the corresponding formulas. Formula (24) allows the calculation of the optimal value β_iopt for the first (i = 1) and the second (i = 2) mode of X, A generation. The value of R*(β_iopt) is calculated by formula (16) and its variance DR*(β_iopt) by formula (23). In fact, the true values of C = Cov(R̃_i, R̂), DR̂ and DR̃_i in formula (24) are unknown. If we use the estimators R̃ and R̂ instead of R in formulas (10), (19), (20) and (24) then we get the value β*_iopt. Analogously, we get the reliability estimator R*(β*_iopt) and the estimator of its variance D*R*(β*_iopt).

The presented table allows us to draw the following conclusions:
1. The use of the optimal combination of the reliability estimators R̂ and R̃ ensures a perceptible reduction of the estimator variance. For example, for r = 5×10³ the variance of the initial crude Monte-Carlo estimator DR̂ = 4.89×10⁻⁵ has decreased to DR*(β_opt) = 3.32×10⁻⁵.
2. Both modes of X and A generation give approximately the same variance of the reliability estimators.
3. When we determine the optimal weight β_opt, the substitution of the estimators R̂ or R̃ for the unknown reliability R hardly changes the optimal value. For example, for r = 5×10³ we have the following results: (a) for the first mode of X and A generation the optimal value is β_1opt = 0.4090, while the estimated value is β*_1opt = 0.4063; (b) for the second mode: β_2opt = 0.3994, β*_2opt = 0.3940.
The following conclusion may be drawn on the basis of these results: if we use estimators instead of the real value of the reliability R, then the variances of the optimal estimators do not actually increase. The presented numerical results corroborate our theoretical inferences and the efficiency of the proposed approach.

The variance of the obtained estimators can be decreased in the following way. Suppose we know a set D̃ ⊂ Ω such that D ⊂ D̃. Then we can sample A from D̃ instead of Ω. This gives the multiplier |D̃| instead of 2^m in (8). For example, this is the case for the k-out-of-n system.

7. Conclusions

A new method for the estimation of the probability of the values of a monotone Boolean function has been proposed. Its combination with the crude Monte-Carlo approach allows decreasing the variance of the estimators.

References

Andronov, A., 1996. Unbiased estimation of the probability of the value 1 of a Boolean function of random variables. Autom. Control Comput. Sci. 1, 21-30.
Barlow, R.E., 1998. Engineering Reliability. ASA-SIAM Series on Statistics and Applied Probability.
Beichelt, F., Franken, P., 1983. Zuverlässigkeit und Instandhaltung. VEB Verlag Technik, Berlin.
Gertsbakh, I., 2000. Reliability Theory: with Applications to Preventive Maintenance. Springer, Berlin.
Schneeweiss, W.G., 1989. Boolean Functions with Engineering Applications and Computer Programs. Springer, Berlin.