A method dealing with correlations in uncertainty propagation by using traditional correlation coefficients


Reliability Engineering and System Safety 41 (1993) 107-114

Qin Zhang
Institute of Nuclear Energy Technology, Tsinghua University, Beijing 100084, People's Republic of China
(Received 16 November 1992; accepted 4 January 1993)

It is important to include correlations among input variables in uncertainty propagation in probabilistic safety assessment (PSA), because otherwise the output variable (e.g. the system failure probability) may be underestimated significantly. As an improvement of the method presented previously by the author (Qin Zhang, A general method dealing with correlations in uncertainty propagation in fault trees. Reliability Engineering and System Safety, 26 (1989) 231-47), this paper provides a further solution to the problem, one that uses the traditional correlation coefficients (TCCs) instead of the correlation fraction coefficients (CFCs) newly defined in the author's previous paper. An example is provided to illustrate the method and shows that the results of using CFCs and TCCs are the same.

NOTATION

CFC             Correlation fraction coefficient, newly defined in Ref. 6
C_{ij}          A constant associated with X_i and X_j (see eqns (16) and (21))
Cov{X_i, X_j}   = E{(X_i - q_i)(X_j - q_j)}, the covariance between X_i and X_j
CUS k           kth common uncertainty source, k = 1, ..., n
exp(.)          The exponential function
E(.)            Expected value of '.'
m_i             Median value of X_i
ρ_{ij}          The TCC between X_i and X_j (see eqn (12))
q_i             Mean value of X_i
TCC             Traditional correlation coefficient
V_i, V_j        Variance of X_i and X_j, respectively
X               Output variable, a random variable
X_i             Input variable i, a normally or log-normally distributed random variable, i = 1, ..., m
X_{ik}          A normally or log-normally distributed random variable associated with only X_i and CUS k
X_{.k}          One of the X_{ik} selected as a reference variable for CUS k
α_{i.k}         = 1 when X_{ik} and X_{.k} vary in the same direction; = -1 when X_{ik} and X_{.k} vary in opposite directions
μ_i, μ_{ik}     Distribution parameter of X_i and X_{ik}, respectively, associated with their mean values
ρ_{ik}          The CFC associated with X_i and CUS k (see eqn (5))
σ_i, σ_{ik}     Distribution parameter of X_i and X_{ik}, respectively, associated with their variances

1 INTRODUCTION

In probabilistic safety assessment (PSA) and other fields, how to evaluate uncertainty is an important issue, in which datum uncertainty propagation, i.e. evaluating the uncertainty of a top-event parameter from the uncertainties of the basic event data, plays an important role. A lot of work has been done on datum uncertainty propagation.1-5 However, it rests on one assumption: that the datum uncertainties, in other words the input variables, are s-(statistically) independent of each other, which is unfortunately not


realistic in many practical cases.6 It has been noted that ignoring the correlations among input variables leads to a considerable underestimation of the output variable, which may be a system failure probability or frequency.6 Thus much effort has been put into the development of methodologies that can include the correlations among input variables in the propagation. It has been pointed out that the perfect (or complete) correlations resulting from identical components or basic events can be included by treating the correlated variables as the same variable.7,8 Further work6,9-11 was done to include both perfect and imperfect (incomplete) correlations. The method presented in Ref. 6 introduced a new concept, the common uncertainty source (CUS): the correlations among input variables are caused by n s-independent CUSs, each of which is shared by more than one input variable, to different degrees, and thus causes the correlations among the input variables. By using this concept, Ref. 6 presented a method that modifies the relation between the normally or log-normally distributed input variables and the output variable by breaking the input variables into new input variables, so that in the modified relation all input variables are s-independent of each other; the existing uncertainty propagation methods, such as the Monte Carlo method, the method of moments, etc., then become applicable. However, this method relies on the identification of the CUSs and of the correlation fraction coefficients (CFCs), newly defined in Ref. 6, which quantify the degree of correlation. Although CFCs are intuitive, so that they can be obtained from engineering judgement, they are not easy to obtain from industrial data (see Section 5 of Ref. 6). Also, CUSs are not always easily identified. After a brief description of the method presented in Ref. 6, this paper presents a method that uses the traditional correlation coefficients (TCCs), which are easily obtained from industrial data, to generate a set of CUSs and associated CFCs. These CUSs have only mathematical significance, but they enable one to use the methodology presented in Ref. 6 to solve the problem when CFCs are not available. An example is provided to illustrate this method, and based on this example a comparison between using CFCs and TCCs is made, which shows that the two results are the same.
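The CUS idea can be illustrated with a small numerical sketch (an illustration added here, not part of the original paper; the medians and σ values below are arbitrary assumptions): two log-normal input variables that share a single common uncertainty source become correlated even though their own uncertainty sources are s-independent.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# One shared common uncertainty source (CUS) and two variable-specific sources,
# all log-normal factors with median 1 (mu = 0); the sigmas are assumed example values.
sigma_cus = 0.8
sigma_own_1, sigma_own_2 = 0.6, 0.9
z_cus = rng.standard_normal(n)

x1 = 2.0e-3 * np.exp(sigma_own_1 * rng.standard_normal(n) + sigma_cus * z_cus)
x2 = 5.0e-2 * np.exp(sigma_own_2 * rng.standard_normal(n) + sigma_cus * z_cus)

# Sharing the CUS induces a positive traditional correlation coefficient (TCC).
print("sample TCC:", np.corrcoef(x1, x2)[0, 1])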

2 METHODOLOGY

Suppose the relation between the output variable X and the m input variables X_1, X_2, ..., X_m is

X = g(X_1, X_2, ..., X_m)    (1)

If there are n CUSs, in Ref. 6 we obtained the following results. When X_i is log-normally distributed, X_i can be decomposed as

X_i = m_i X_{i0} ∏_{k=1}^{n} X_{ik} = m_i X_{i0} ∏_{k=1}^{n} X_{.k}^{σ_{ik}/σ_{.k}}    (2)

where X_{ik} is defined as a log-normally distributed random variable with

μ_{ik} = 0    (3)

σ_i^2 = ∑_{k=0}^{n} σ_{ik}^2    (4)

ρ_{ik} = σ_{ik}^2 / σ_i^2    (5)

In the above equations, m_i is the median of X_i; μ_{ik} reflects the mean value of X_{ik}; σ_i and σ_{ik} reflect the variances of X_i and X_{ik}, respectively; ρ_{ik} is the CFC of X_i associated with CUS k; X_{i0} represents only the effect of X_i's own s-independent uncertainty source; X_{ik} (k ≠ 0) represents only the effect of the kth CUS on X_i; and X_{.k} is any one of the X_{ik} (k ≠ 0) selected as a reference for CUS k. In this paper, the dot symbol '.' in a subscript stands for the index of the input variable chosen as the reference variable for the given CUS k. Obviously, the total number of reference variables is n.

When X_i is normally distributed, X_i can be decomposed as shown in eqn (6), where X_{ik} is defined as a normally distributed random variable with parameters satisfying eqns (3), (4) and (5), and q_i is the mean value of X_i. The meanings of X_{i0}, X_{ik} and X_{.k} are the same as described above. In eqns (2) and (6), X_{i0} and X_{.k}, i = 1, ..., m, k = 1, ..., n, are s-independent of each other because they do not share any CUS.

X_i = q_i + X_{i0} + ∑_{k=1}^{n} X_{ik} = q_i + X_{i0} + ∑_{k=1}^{n} (σ_{ik}/σ_{.k}) X_{.k}    (6)

Reference 6 considered only the cases where the deviation directions of X_{ik} and X_{.k} (any one of the X_{ik}) are the same. When the deviation directions can be opposite, eqns (2) and (6) should be modified as (see Appendix 1):

X_i = m_i X_{i0} ∏_{k=1}^{n} X_{.k}^{α_{i.k} σ_{ik}/σ_{.k}}    (7)

X_i = q_i + X_{i0} + ∑_{k=1}^{n} α_{i.k} (σ_{ik}/σ_{.k}) X_{.k}    (8)

where α_{i.k} = 1 when the deviation directions of X_{ik} and the reference variable X_{.k} are the same, or = -1 when the directions are opposite. Obviously, α_{i.k} = 1 when X_{ik} is itself the reference variable X_{.k}. If we redefine ρ_{ik} as

ρ_{ik} ≡ α_{ik} σ_{ik}^2 / σ_i^2,    α_{ik} ∈ {1, -1}    (9)

where α_{ik} indicates the influence direction of CUS k on X_i, then α_{i.k} can be calculated from

α_{i.k} = α_{ik} α_{.k}    (10)

Substituting eqns (7) and (8) into eqn (1), we get eqn (11), in which all input variables are s-independent of each other and the existing uncertainty propagation methods are applicable to get the distribution of X:

X = g*(X_{1,0}, X_{2,0}, ..., X_{m,0}, X_{.1}, X_{.2}, ..., X_{.n})    (11)

Now the only problem left is how to know σ_{ik} and α_{i.k}, i = 1, ..., m, k = 0, 1, ..., n. When the CFCs, i.e. the ρ_{ik}, are available, σ_{ik} and α_{i.k} can be calculated from eqns (9) and (10). However, the ρ_{ik} are not easy to get. In contrast, the traditional correlation coefficients (TCCs), i.e. the ρ_{ij}, are easy to obtain from industrial data. The purpose of what follows is to find a set of CUSs (i.e. σ_{ik} and α_{i.k}, k = 1, ..., n) that has only mathematical significance but produces among the input variables the same correlations, to the same degree, as described by the TCCs.

When the ρ_{ij} are available, by the definition of ρ_{ij} (see any book of statistics),

ρ_{ij} = Cov(X_i, X_j) / √(V_i V_j) = (E[X_i X_j] - q_i q_j) / √(V_i V_j)    (12)

Cov(X_i, X_j) = E[(X_i - q_i)(X_j - q_j)]

where q_i and q_j are the mean values of X_i and X_j, respectively; V_i and V_j are the variances of X_i and X_j, respectively; and E(X_i q_j) = E(q_i X_j) = q_i q_j. Here readers should keep in mind that i and j are the indices of input variables, while the subscript k is the index of a CUS. For the log-normal distribution (see Ref. 12),

V_i = exp(2μ_i + σ_i^2)[exp(σ_i^2) - 1]
V_j = exp(2μ_j + σ_j^2)[exp(σ_j^2) - 1]    (13)

For the normal distribution,

V_i = σ_i^2,    V_j = σ_j^2    (14)

If we imagine that the correlation between X_i and X_j is caused by the n CUSs indexed by k, we have eqn (15) (see Appendix 2 for the details):

E(X_i X_j) = E[ m_i X_{i0} ∏_{k=1}^{n} X_{.k}^{α_{i.k}σ_{ik}/σ_{.k}} · m_j X_{j0} ∏_{k=1}^{n} X_{.k}^{α_{j.k}σ_{jk}/σ_{.k}} ]
           = m_i m_j exp[ (1/2)( σ_i^2 + σ_j^2 + 2 ∑_{k=1}^{n} α_{i.k} α_{j.k} σ_{ik} σ_{jk} ) ]    (15)

Define

C_{ij} = ln[ (ρ_{ij} √(V_i V_j) + q_i q_j) / (m_i m_j) ] - (1/2)(σ_i^2 + σ_j^2)    (16)

From eqns (12), (15) and (16), we get

∑_{k=1}^{n} α_{i.k} α_{j.k} σ_{ik} σ_{jk} = C_{ij}    (17)

with the constraints shown in eqns (18)-(20):

σ_i^2 = ∑_{k=0}^{n} σ_{ik}^2    (18)

σ_{ik} ≥ 0    (19)

α_{i.k} ∈ {1, -1}    (20)

The same results are applicable to the case of the normal distribution (see Appendix 3), except that eqn (16) should be modified as eqn (21):

C_{ij} = Cov(X_i, X_j) = ρ_{ij} √(V_i V_j)    (21)

In the above equations, ρ_{ij}, q_i, m_i, σ_i and C_{ij} are known, i and j ∈ {1, ..., m}. What we need to do is to find a set of σ_{ik} and α_{i.k} that satisfy eqns (17)-(20), where k ∈ {1, ..., n} and n is to be determined. Once we find the σ_{ik} and α_{i.k}, we can use eqn (7) or (8) to modify eqn (1) and get eqn (11), in which all input variables are s-independent.
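The mapping from a known TCC to the constant C_{ij} is a one-line calculation once the moments of the inputs are written out. The sketch below evaluates eqns (13), (16) and (21); the function names are ours, and the numerical check at the end uses the data of Tables 1, 4 and 5 from the example in Section 3.

import math

def lognormal_moments(median, sigma):
    # Mean and variance of a log-normal variable (eqn (13), with mu = ln median).
    mu = math.log(median)
    mean = math.exp(mu + 0.5 * sigma ** 2)
    var = math.exp(2.0 * mu + sigma ** 2) * (math.exp(sigma ** 2) - 1.0)
    return mean, var

def c_ij_lognormal(rho_ij, median_i, sigma_i, median_j, sigma_j):
    # C_ij of eqn (16) for two log-normal inputs with TCC rho_ij.
    q_i, v_i = lognormal_moments(median_i, sigma_i)
    q_j, v_j = lognormal_moments(median_j, sigma_j)
    cov = rho_ij * math.sqrt(v_i * v_j)
    return math.log((cov + q_i * q_j) / (median_i * median_j)) - 0.5 * (sigma_i ** 2 + sigma_j ** 2)

def c_ij_normal(rho_ij, v_i, v_j):
    # C_ij of eqn (21): for normal inputs it is simply the covariance.
    return rho_ij * math.sqrt(v_i * v_j)

# X_2 and X_4 of the example: rho_24 = 0.3097 (Table 4) gives C_24 of about 1.061 (Table 5).
print(c_ij_lognormal(0.3097, 7.51e-3, 1.4, 3.75e-3, 1.4))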

One way to get the σ_{ik} and α_{i.k} is as follows.

Step 1
Let σ_{i,0} = 0 (i = 1, ..., m-1), σ_{1,1} = σ_1, σ_{1,k} = 0 (k = 2, ..., n), and select X_{1,1} as the reference variable: X_{.1} = X_{1,1}, thus α_{1.1} = 1. For i = 1 and j = 2, ..., m, eqn (17) becomes

α_{j.1} α_{1.1} σ_{1,1} σ_{j,1} = C_{1j}    (22)

so that

σ_{j,1} = |C_{1j}| / σ_{1,1},    α_{j.1} = C_{1j} / |C_{1j}|

The results α_{i.k}σ_{ik} (or α_{j.k}σ_{jk}) of this step are illustrated in Fig. 1, in which '*' means calculated from eqn (22) and the blanks mean 'unknown'.

Step 2
Let σ_{2,k} = 0 (k = 3, ..., n). From eqn (18), with i = 2,

σ_{2,2} = √(σ_2^2 - σ_{2,1}^2)    (23)

Select X_{.2} = X_{2,2}, thus α_{2.2} = 1. From eqn (17), we have

α_{2.1} α_{j.1} σ_{2,1} σ_{j,1} + α_{2.2} α_{j.2} σ_{2,2} σ_{j,2} = C_{2j}

Define

α_{j.2} σ_{j,2} = (C_{2j} - α_{2.1} α_{j.1} σ_{2,1} σ_{j,1}) / σ_{2,2}

σ_{j,2} = |α_{j.2} σ_{j,2}|,    α_{j.2} = α_{j.2} σ_{j,2} / |α_{j.2} σ_{j,2}|    (24)

where j = 3, ..., m; σ_{j,2} ≥ 0 and α_{j.2} takes the sign. The results of this step are illustrated in Fig. 2.

[Fig. 1. α_{i.k}σ_{ik} after Step 1 (n is still to be determined): column k = 0 is all zero; column k = 1 holds σ_{1,1} for i = 1 and '*' (calculated from eqn (22)) for i = 2, ..., m; the remaining entries are unknown.]

[Fig. 2. α_{i.k}σ_{ik} after Step 2 (n is still to be determined): column k = 2 now holds σ_{2,2} for i = 2 and '*' for i = 3, ..., m.]

Step i, i = 3, ..., m-1
Let σ_{i,k} = 0 (k = i+1, ..., n). From eqn (18), we have

σ_{i,i} = √(σ_i^2 - σ_{i,1}^2 - σ_{i,2}^2 - ... - σ_{i,i-1}^2)    (25)

Select X_{.i} = X_{i,i}, thus α_{i.i} = 1. From eqn (17),

α_{j.i} σ_{j,i} = ( C_{ij} - α_{i.1} α_{j.1} σ_{i,1} σ_{j,1} - α_{i.2} α_{j.2} σ_{i,2} σ_{j,2} - ... - α_{i.i-1} α_{j.i-1} σ_{i,i-1} σ_{j,i-1} ) / σ_{i,i}

σ_{j,i} = |α_{j.i} σ_{j,i}|,    α_{j.i} = α_{j.i} σ_{j,i} / |α_{j.i} σ_{j,i}|    (26)

where j = i+1, ..., m; σ_{j,i} ≥ 0 and α_{j.i} takes the sign.

Step m
From eqn (18),

σ_{m,0} = √(σ_m^2 - σ_{m,1}^2 - σ_{m,2}^2 - ... - σ_{m,m-1}^2)    (27)

The final results are illustrated in Fig. 3.

[Fig. 3. α_{i.k}σ_{ik} after Step m, where n = m - 1: the diagonal holds σ_{1,1}, σ_{2,2}, ..., σ_{m-1,m-1}, the entries below the diagonal are calculated ('*'), and only σ_{m,0} in column k = 0 is non-zero.]

In this way, we get m - 1 CUSs and m new input variables: X_{.1} = X_{1,1}, X_{.2} = X_{2,2}, ..., X_{.m-1} = X_{m-1,m-1}, and X_{m,0}. They are s-independent of each other. That is to say, this method generates the same number (m) of input variables, in which the first m - 1 variables correspond to the m - 1 CUSs and the last one corresponds to an independent uncertainty source. Of course, these CUSs and the independent uncertainty source have only mathematical significance.

In summary, when the ρ_{ij} are available, we calculate C_{ij} from eqn (16) or (21), then calculate σ_{ik} and α_{i.k} through Steps 1 to m. By substituting eqn (7) or (8) into eqn (1), we get eqn (11). Finally, by using the existing uncertainty propagation methods, we can obtain the distribution of X.
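Steps 1 to m can be read as a triangular (Cholesky-type) factorization of the symmetric matrix whose diagonal holds σ_i^2 and whose off-diagonal entries hold C_{ij}: |L_{ik}| plays the role of σ_{ik}, the sign of L_{ik} gives α_{i.k}, and the last diagonal element gives σ_{m,0}. The sketch below follows that reading; it is our own restatement of the procedure, not the paper's program, and it assumes the C_{ij} are mutually consistent (the matrix is positive definite).

import numpy as np

def decompose_cus(sigma, C):
    # sigma: length-m array of sigma_i; C: m x m symmetric array of C_ij (diagonal ignored).
    # Returns sigma_ik >= 0 and alpha_ik in {+1, -1} for CUSs k = 1..m-1, plus sigma_m0.
    m = len(sigma)
    A = np.array(C, dtype=float)
    np.fill_diagonal(A, np.asarray(sigma, dtype=float) ** 2)
    L = np.linalg.cholesky(A)           # lower triangular, A = L @ L.T
    sigma_ik = np.abs(L[:, : m - 1])    # n = m - 1 common uncertainty sources
    alpha_ik = np.where(L[:, : m - 1] >= 0.0, 1.0, -1.0)
    sigma_m0 = L[m - 1, m - 1]          # independent residual source of the last variable
    return sigma_ik, alpha_ik, sigma_m0

# Consistency check of eqn (17) on a small case built from Table 5 (variables 1, 2, 3).
sigma = np.array([1.4, 1.4, 1.4])
C = np.array([[0.0, 0.554, 0.554],
              [0.554, 0.0, 1.149],
              [0.554, 1.149, 0.0]])
s_ik, a_ik, s_m0 = decompose_cus(sigma, C)
signed = a_ik * s_ik
B = signed @ signed.T
print(np.allclose(B[np.triu_indices(3, k=1)], C[np.triu_indices(3, k=1)]))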

3 EXAMPLE

For the sake of comparison, consider example 2 in Ref. 6. The original relation between the output variable and the log-normally distributed input variables is

X = X_1 (X_2 + X_3 + X_4 + X_5 + X_6 (X_7 + X_8 + X_9 + X_{10}))


in which X_1, ..., X_5, X_7, ..., X_{10} are correlated due to seven CUSs. The original data are shown in Tables 1-3. For the sake of comparison, the traditional correlation coefficients ρ_{ij} for the correlated variables are assumed to be as shown in Table 4; they are actually calculated (or transferred) from the data in Table 3 by using eqns (15)-(17) with all α_{i.k} = α_{j.k} = 1. The corresponding C_{ij} are shown in Table 5. By using the method proposed in Section 2, the new σ_{ik} and σ_{ik}/σ_{.k} are calculated as shown in Tables 6 and 7, and all α_{i.k} are calculated to be 1.

Table 1. q_i, σ_i and m_i of X_i in the example

 i    q_i (10^-2)    σ_i    m_i (10^-3)
 1    0.00022        1.4    0.000826
 2    2              1.4    7.51
 3    1              1.4    3.75
 4    1              1.4    3.75
 5    0.5            1.4    1.88
 6    0.1            1.4    0.375
 7    10             1.4    37.5
 8    10             1.4    37.5
 9    5              1.4    18.8
10    20             1.4    75.1

Table 2. ρ_{ik} in the example

 i    k=0    1      2      3      4      5      6      7
 1    0.6    0.4    -      -      -      -      -      -
 2    0.1    0.2    0.2    0.2    0.3    -      -      -
 3    0.1    0.2    0.3    0.1    -      0.3    -      -
 4    0.1    0.2    0.2    0.1    -      -      0.4    -
 5    0.2    0.2    -      0.2    -      -      -      0.4
 6    1.0    -      -      -      -      -      -      -
 7    0.1    0.2    0.2    0.2    0.3    -      -      -
 8    0.1    0.2    0.3    0.1    -      0.3    -      -
 9    0.1    0.2    0.2    0.1    -      -      0.4    -
10    0.2    0.2    -      0.2    -      -      -      0.4

Table 3. The original σ_{ik} in the example

 i    k=0     1       2       3       4       5       6       7
 1    1.084   0.885   -       -       -       -       -       -
 2    0.443   0.626   0.626   0.626   0.767   -       -       -
 3    0.443   0.626   0.767   0.443   -       0.767   -       -
 4    0.443   0.626   0.626   0.443   -       -       0.885   -
 5    0.626   0.626   -       0.626   -       -       -       0.885
 6    1.4     -       -       -       -       -       -       -
 7    0.443   0.626   0.626   0.626   0.767   -       -       -
 8    0.443   0.626   0.767   0.443   -       0.767   -       -
 9    0.443   0.626   0.626   0.443   -       -       0.885   -
10    0.626   0.626   -       0.626   -       -       -       0.885

The newly modified relation between X and the new input variables X_{.1}, ..., X_{.8}, X_6 and X_{10,0} is:

X = 8.26 × 10^-7 X_{.1} ( 7.51 × 10^-3 X_{.1}^{0.2826} X_{.2}
    + 3.75 × 10^-3 X_{.1}^{0.2826} X_{.2}^{0.5527} X_{.3}
    + 3.75 × 10^-3 X_{.1}^{0.2826} X_{.2}^{0.5037} X_{.3}^{0.3281} X_{.4}
    + 1.88 × 10^-3 X_{.1}^{0.2826} X_{.2}^{0.3493} X_{.3}^{0.1323} X_{.4}^{0.1173} X_{.5}
    + X_6 ( 3.75 × 10^-2 X_{.1}^{0.2826} X_{.2}^{0.8955} X_{.3}^{0.08267} X_{.4}^{0.04992} X_{.5}^{0.02944} X_{.6}
    + 3.75 × 10^-2 X_{.1}^{0.2826} X_{.2}^{0.5527} X_{.3}^{0.845} X_{.4}^{0.05262} X_{.5}^{0.012} X_{.6}^{0.03525} X_{.7}
    + 1.88 × 10^-2 X_{.1}^{0.2826} X_{.2}^{0.5037} X_{.3}^{0.3281} X_{.4}^{0.8381} X_{.5}^{0.01517} X_{.6}^{0.02603} X_{.7}^{0.02738} X_{.8}
    + 7.51 × 10^-2 X_{10,0} X_{.1}^{0.2826} X_{.2}^{0.3493} X_{.3}^{0.1323} X_{.4}^{0.1173} X_{.5}^{0.7594} X_{.6}^{0.03066} X_{.7}^{0.01125} X_{.8}^{0.01428} ) )    (28)

It is noted that in the original problem there are nine correlated input variables and seven CUSs; in Ref. 6, 17 final input variables were obtained in the modified relation (see eqn (29)). As a comparison, the original modified relation between X and X_1, ..., X_{10} is shown in eqn (29) (see also Ref. 6):

X = 8.26 × 10^-7 X_{1,0} X_{.1}^{1.414} ( 7.51 × 10^-3 X_{2,0} X_{.1} X_{.2} X_{.3} X_{.4}
    + 3.75 × 10^-3 X_{3,0} X_{.1} X_{.2}^{1.225} X_{.3}^{0.707} X_{.5}
    + 3.75 × 10^-3 X_{4,0} X_{.1} X_{.2} X_{.3}^{0.707} X_{.6}
    + 1.88 × 10^-3 X_{5,0} X_{.1} X_{.3} X_{.7}
    + X_6 ( 3.75 × 10^-2 X_{7,0} X_{.1} X_{.2} X_{.3} X_{.4}
    + 3.75 × 10^-2 X_{8,0} X_{.1} X_{.2}^{1.225} X_{.3}^{0.707} X_{.5}
    + 1.88 × 10^-2 X_{9,0} X_{.1} X_{.2} X_{.3}^{0.707} X_{.6}
    + 7.51 × 10^-2 X_{10,0} X_{.1} X_{.3} X_{.7} ) )    (29)
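For readers without the THPSA package, the propagation itself is straightforward to sketch in a few lines: sample every X_{i0} and every reference variable X_{.k} as an s-independent log-normal factor with median 1, rebuild each input through eqn (7), and push the samples through the original relation. The function below is a generic illustration written by us (it is not the UPIM program); its arguments correspond to the quantities of Tables 6 and 7, and all α_{i.k} are taken as +1, as in this example.

import numpy as np

def propagate(medians, exponents, sigma_ref, sigma_own, g, n_trials=4800, seed=1):
    # medians:   m_i of each input variable, shape (m,)
    # exponents: sigma_ik / sigma_.k for the reference variables, shape (m, n)  (cf. Table 7)
    # sigma_ref: sigma_.k of each reference variable X_.k, shape (n,)           (diagonal of Table 6)
    # sigma_own: sigma_i0 of each variable's own independent factor, shape (m,)
    # g:         the original relation X = g(X_1, ..., X_m)
    rng = np.random.default_rng(seed)
    x_ref = np.exp(np.asarray(sigma_ref) * rng.standard_normal((n_trials, len(sigma_ref))))
    x_own = np.exp(np.asarray(sigma_own) * rng.standard_normal((n_trials, len(medians))))
    # eqn (7) with all alpha_{i.k} = +1:
    x = np.asarray(medians) * x_own * np.prod(x_ref[:, None, :] ** np.asarray(exponents), axis=2)
    out = g(*(x[:, i] for i in range(x.shape[1])))
    return np.percentile(out, [5, 50, 95]), out.mean()

# Original relation of the example; X_6 enters as an ordinary independent input
# (an all-zero exponent row and sigma_own = 1.4 for it).
g = lambda x1, x2, x3, x4, x5, x6, x7, x8, x9, x10: \
    x1 * (x2 + x3 + x4 + x5 + x6 * (x7 + x8 + x9 + x10))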


Table 4. ρ_{ij} in the example

 i \ j    2        3        4        5        7        8        9        10
 1        0.1214   0.1214   0.1214   0.1214   0.1214   0.1214   0.1214   0.1214
 2        -        0.3533   0.3097   0.1591   0.7928   0.3533   0.4737   0.1952
 3        -        -        0.3128   0.1564   0.3531   0.7935   0.3128   0.1561
 4        -        -        -        0.1564   0.3095   0.3128   0.7916   0.1561
 5        -        -        -        -        0.1952   0.1564   0.1564   0.6221
 7        -        -        -        -        -        0.3531   0.3095   0.195
 8        -        -        -        -        -        -        0.3128   0.1561
 9        -        -        -        -        -        -        -        0.1561

Table 5. C_{ij} in the example

 i \ j    2        3        4        5        7        8        9        10
 1        0.554    0.554    0.554    0.554    0.554    0.554    0.554    0.554
 2        -        1.149    1.061    0.7838   1.764    1.149    1.061    0.7838
 3        -        -        1.068    0.6692   1.149    1.765    1.068    0.6692
 4        -        -        -        0.6692   1.061    1.068    1.763    0.6692
 5        -        -        -        -        0.7838   0.6692   0.6692   1.567
 7        -        -        -        -        -        1.149    1.061    0.7838
 8        -        -        -        -        -        -        1.068    0.6692
 9        -        -        -        -        -        -        -        0.6692

Table 6. σ_{ik} in the newly modified relation

 i \ k    0        1        2        3         4         5         6         7          8
 1        -        1.4      -        -         -         -         -         -          -
 2        -        0.3957   1.34     -         -         -         -         -          -
 3        -        0.3957   0.7406   1.12      -         -         -         -          -
 4        -        0.3957   0.6749   0.3675    1.101     -         -         -          -
 5        -        0.3957   0.4681   0.1482    0.1292    1.232     -         -          -
 7        -        0.3957   1.2      0.09259   0.05496   0.03627   0.592     -          -
 8        -        0.3957   0.7406   0.9464    0.05794   0.01478   0.02087   0.596      -
 9        -        0.3957   0.6749   0.3675    0.9227    0.01869   0.01541   0.01632    0.6005
10        0.8184   0.3957   0.4681   0.1482    0.1292    0.9356    0.01815   0.006707   0.008574

Table 7. σ_{ik}/σ_{.k} in the newly modified relation (σ_{.k} = σ_{kk})

 i \ k    1        2        3         4         5         6         7         8
 1        1        -        -         -         -         -         -         -
 2        0.2826   1        -         -         -         -         -         -
 3        0.2826   0.5527   1         -         -         -         -         -
 4        0.2826   0.5037   0.3281    1         -         -         -         -
 5        0.2826   0.3493   0.1323    0.1173    1         -         -         -
 7        0.2826   0.8955   0.08267   0.04992   0.02944   1         -         -
 8        0.2826   0.5527   0.845     0.05262   0.012     0.03525   1         -
 9        0.2826   0.5037   0.3281    0.8381    0.01517   0.02603   0.02738   1
10        0.2826   0.3493   0.1323    0.1173    0.7594    0.03066   0.01125   0.01428
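As a cross-check of Tables 5 and 6 (a sketch, using the Cholesky reading of Steps 1 to m mentioned earlier; the variable names are ours): factorizing the matrix that has σ_i^2 = 1.96 on the diagonal and the C_{ij} of Table 5 off it reproduces the σ_{ik} of Table 6, e.g. σ_{2,1} of about 0.396 and σ_{2,2} of about 1.34, with the last diagonal entry of about 0.818 playing the role of σ_{10,0}.

import numpy as np

# Upper-triangular C_ij of Table 5; variable order: X1..X5, X7..X10 (X6 is s-independent).
upper = [
    [0.554, 0.554, 0.554, 0.554, 0.554, 0.554, 0.554, 0.554],
    [1.149, 1.061, 0.7838, 1.764, 1.149, 1.061, 0.7838],
    [1.068, 0.6692, 1.149, 1.765, 1.068, 0.6692],
    [0.6692, 1.061, 1.068, 1.763, 0.6692],
    [0.7838, 0.6692, 0.6692, 1.567],
    [1.149, 1.061, 0.7838],
    [1.068, 0.6692],
    [0.6692],
]
m = 9
A = np.zeros((m, m))
for i, row in enumerate(upper):
    A[i, i + 1:] = row
A = A + A.T
np.fill_diagonal(A, 1.4 ** 2)      # sigma_i = 1.4 for every correlated variable (Table 1)

L = np.linalg.cholesky(A)          # Steps 1..m in matrix form
print(np.round(L, 4))              # rows reproduce Table 6, e.g. second row: 0.3957, 1.343, 0, ...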


Table 8. Results of considering and not considering correlations among the basic event data of the example (4800 trials)

 Item              Assuming s-independence (10^-8)    Considering correlations by CFC (10^-8)    Considering correlations by TCC (10^-8)
 5% percentile     0.164                               0.0657                                      0.0627
 Median value      0.244                               1.94                                        1.92
 Mean value        10.3                                16.0                                        16.2
 95% percentile    37.7                                58.8                                        60.1

Here, after the transformation, the total number of final input variables in the newly modified relation is still ten and the number of mathematical CUSs is eight (see eqn (28)). Although the two equations look quite different, they are expected to be mathematically equivalent. By using the computer program UPIM (Monte Carlo method) in the PC package THPSA,13 we get the results shown in Table 8. Compared with those of Ref. 6, the largest error (4.8%) is at the 5% percentile point; the reason is that at this point the probability density of X increases rapidly, so the computation error is larger. The other errors are less than 2.2%. Considering the computation error of the Monte Carlo method, it is reasonable to conclude that the two results, using CFC and using TCC, are, as expected, the same: when only TCCs are available, one can transfer the TCCs to CFCs by using the method presented in this paper, and then the method presented in Ref. 6 can be used to modify the relation between the output variable and the input variables.
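The error figures quoted above can be recovered directly from Table 8 (a quick arithmetic check, taking the TCC run as the base):

cfc = [0.0657, 1.94, 16.0, 58.8]   # Table 8, CFC column (units of 1e-8)
tcc = [0.0627, 1.92, 16.2, 60.1]   # Table 8, TCC column (units of 1e-8)
labels = ["5% percentile", "median", "mean", "95% percentile"]
for name, a, b in zip(labels, cfc, tcc):
    print(f"{name}: {abs(a - b) / b * 100.0:.1f}%")
# Prints roughly 4.8%, 1.0%, 1.2%, 2.2%: the largest deviation is at the 5% percentile.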

REFERENCES

1. Levy, L. L. & Moore, A. H., A Monte Carlo technique for obtaining system reliability confidence limits from component test data. IEEE Trans. Reliability, R-16 (1967) 69-72.
2. Jackson, P. S., A second-order moments method for uncertainty analysis. IEEE Trans. Reliability, R-31 (1982) 382-4.
3. Kaplan, S., On the method of discrete probability distributions in risk and reliability calculations - application to seismic risk assessment. Risk Anal., 1 (1981) 189-96.
4. Mazumdar, M., An approximate method for computation of probability intervals for the top-event probability of a fault tree. Nucl. Eng. Des., 71 (1982) 45-50.
5. Filshtein, E. L., Goldstein, R. & Koxmin, A. P., Extreme quantile sampling for safety analyses. Statistical Margin Development, Nuclear Power Systems, Combustion Engineering, Inc., Windsor, Connecticut, TIS-6962, 1981.
6. Qin Zhang, A general method dealing with correlations in uncertainty propagation in fault trees. Reliability Engineering and System Safety, 26 (1989) 231-47.
7. Apostolakis, G. & Kaplan, S., Pitfalls in risk calculations. Reliability Engineering, 2 (1981) 135-45.
8. Kafka, P. & Polke, H., Treatment of uncertainties in reliability models. Nucl. Eng. Des., 93 (1986) 203-14.
9. Iman, R. L. & Conover, W. J., A distribution-free approach to inducing rank correlation among input variables. Commun. Statist. Simula., 11(3) (1982) 311-34.
10. Der Kiureghian, A., Multivariate distribution models for structural reliability. In Structural Mechanics in Reactor Technology, Vol. M, A. A. Balkema, Rotterdam/Boston, 1987.
11. Der Kiureghian, A. & Liu, P.-L., Structural reliability under incomplete probability information. J. Engineering Mechanics, ASCE, 112(1) (1986) 85-104.
12. Kapur, K. C. & Lamberson, L. R., Reliability in Engineering Design. John Wiley & Sons, New York, 1977.
13. Jianghui Zhang, THPSA: development of an integrated PC package for PSA level 1. Reliability Engineering and System Safety, 30 (1990) 301-11.

APPENDIX 1

For the log-normal distribution, similar to eqn (12) in Ref. 6, when the deviation directions of X_{ik} and X_{.k} are opposite,

∫_0^{x_{ik}} (1/(√(2π) σ_{ik} t)) exp[ -(1/2) ((ln t - μ_{ik})/σ_{ik})^2 ] dt = ∫_{x_{.k}}^{∞} (1/(√(2π) σ_{.k} t)) exp[ -(1/2) ((ln t - μ_{.k})/σ_{.k})^2 ] dt    (A1.1)

Let

y_{ik} = (ln x_{ik} - μ_{ik}) / σ_{ik}    (A1.2)

y_{.k} = (ln x_{.k} - μ_{.k}) / σ_{.k}    (A1.3)

we have

∫_{-∞}^{y_{ik}} (1/√(2π)) exp(-y^2/2) dy = ∫_{y_{.k}}^{∞} (1/√(2π)) exp(-y^2/2) dy    (A1.4)

where

Y_{ik} = (ln X_{ik} - μ_{ik}) / σ_{ik},    Y_{.k} = (ln X_{.k} - μ_{.k}) / σ_{.k}    (A1.5)

Obviously, Y_{ik} = -Y_{.k}. Since μ_{ik} = μ_{.k} = 0, we have

X_{ik} = X_{.k}^{-σ_{ik}/σ_{.k}}    (A1.6)

In general,

X_{ik} = X_{.k}^{α_{i.k} σ_{ik}/σ_{.k}}    (A1.7)

where α_{i.k} = 1 when the directions are the same, or = -1 when the directions are opposite. For the normal distribution, we can similarly prove

X_{ik} = α_{i.k} (σ_{ik}/σ_{.k}) X_{.k}    (A1.8)

where α_{i.k} is defined as above.


Thus from eqns (A1.7) and (A1.8), eqns (2) and (6) are modified as eqns (7) and (8).

APPENDIX 2

For the log-normal distribution,

E(X_i X_j) = E[ m_i X_{i0} ∏_{k=1}^{n} X_{.k}^{α_{i.k}σ_{ik}/σ_{.k}} · m_j X_{j0} ∏_{k=1}^{n} X_{.k}^{α_{j.k}σ_{jk}/σ_{.k}} ]
           = E( m_i m_j X_{i0} X_{j0} ∏_{k=1}^{n} X_{.k}^{(α_{i.k}σ_{ik} + α_{j.k}σ_{jk})/σ_{.k}} )
           = m_i m_j E(X_{i0}) E(X_{j0}) ∏_{k=1}^{n} E( X_{.k}^{(α_{i.k}σ_{ik} + α_{j.k}σ_{jk})/σ_{.k}} )    (A2.1)

We have proved (see Appendix 5) that for a log-normally distributed variable X with μ = 0,

E(X^v) = exp( (1/2)(vσ)^2 )    (A2.2)

Thus

E(X_i X_j) = m_i m_j exp( σ_{i0}^2/2 ) exp( σ_{j0}^2/2 ) ∏_{k=1}^{n} exp[ (1/2)(α_{i.k}σ_{ik} + α_{j.k}σ_{jk})^2 ]
           = m_i m_j exp[ (1/2)( σ_{i0}^2 + σ_{j0}^2 + ∑_{k=1}^{n} (σ_{ik}^2 + σ_{jk}^2) + 2 ∑_{k=1}^{n} α_{i.k}α_{j.k}σ_{ik}σ_{jk} ) ]    (since α_{i.k}^2 = α_{j.k}^2 = 1)
           = m_i m_j exp[ (1/2)( σ_i^2 + σ_j^2 + 2 ∑_{k=1}^{n} α_{i.k}α_{j.k}σ_{ik}σ_{jk} ) ]    (A2.3)

APPENDIX 3

For the normal distribution,

E(X_i X_j) = E[ ( q_i + ∑_{k=0}^{n} X_{ik} )( q_j + ∑_{k=0}^{n} X_{jk} ) ]
           = E[ q_i q_j + q_i ∑_{k=0}^{n} X_{jk} + q_j ∑_{k=0}^{n} X_{ik} + ∑_{k=0}^{n} ∑_{k'=0}^{n} X_{ik} X_{jk'} ]
           = q_i q_j + q_i ∑_{k=0}^{n} E(X_{jk}) + q_j ∑_{k=0}^{n} E(X_{ik}) + E(X_{i0})E(X_{j0}) + ∑_{k=1}^{n} E(X_{ik} X_{jk})
           = q_i q_j + ∑_{k=1}^{n} E(X_{ik} X_{jk})    (A3.1)

where E(X_{ik}) = μ_{ik} = 0, k = 0, 1, ..., n. Considering eqn (A1.8), the above equation becomes

E(X_i X_j) = q_i q_j + ∑_{k=1}^{n} E( (α_{i.k}α_{j.k}σ_{ik}σ_{jk}/σ_{.k}^2) X_{.k}^2 )
           = q_i q_j + ∑_{k=1}^{n} (α_{i.k}α_{j.k}σ_{ik}σ_{jk}/σ_{.k}^2) E(X_{.k}^2)
           = q_i q_j + ∑_{k=1}^{n} (α_{i.k}α_{j.k}σ_{ik}σ_{jk}/σ_{.k}^2) σ_{.k}^2
           = q_i q_j + ∑_{k=1}^{n} α_{i.k}α_{j.k}σ_{ik}σ_{jk}    (A3.2)

From eqn (12), we have

∑_{k=1}^{n} α_{i.k}α_{j.k}σ_{ik}σ_{jk} = ρ_{ij} √(V_i V_j)    (A3.3)

To satisfy eqn (17), C_{ij} should be defined as

C_{ij} = ρ_{ij} √(V_i V_j)    (A3.4)

APPENDIX 5

Suppose Y = X^v. Since μ = 0, we have

E(X^v) = ∫_0^{∞} x^v (1/(√(2π) σ x)) exp[ -(1/2)(ln x / σ)^2 ] dx
       = ∫_0^{∞} y (1/(√(2π) vσ y)) exp[ -(1/2)(ln y / (vσ))^2 ] dy    (A5.1)
       = ∫_0^{∞} y (1/(√(2π) σ_y y)) exp[ -(1/2)(ln y / σ_y)^2 ] dy = ∫_0^{∞} y f(y) dy    (A5.2)

where σ_y = vσ. Thus

E(X^v) = E(Y) = exp( σ_y^2/2 ) = exp[ (1/2)(vσ)^2 ]    (A5.3)
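A numerical confirmation of the moment identity (A2.2)/(A5.3) and, through it, of eqn (15) is easy to run (a sketch; the σ values and medians below are arbitrary test inputs, and the two variables share a single CUS with all α = +1):

import numpy as np

rng = np.random.default_rng(2)
N = 2_000_000

# (A2.2)/(A5.3): E(X^v) = exp(0.5 * (v * sigma)^2) for a log-normal X with mu = 0.
sigma, v = 0.7, 1.3
x = np.exp(sigma * rng.standard_normal(N))
print(np.mean(x ** v), np.exp(0.5 * (v * sigma) ** 2))

# eqn (15) for two variables sharing one CUS (all alpha = +1).
m_i, m_j = 2.0, 3.0
s_i0, s_j0, s_ik, s_jk, s_ref = 0.4, 0.5, 0.6, 0.3, 0.8
x_ref = np.exp(s_ref * rng.standard_normal(N))
x_i = m_i * np.exp(s_i0 * rng.standard_normal(N)) * x_ref ** (s_ik / s_ref)
x_j = m_j * np.exp(s_j0 * rng.standard_normal(N)) * x_ref ** (s_jk / s_ref)
analytic = m_i * m_j * np.exp(0.5 * (s_i0**2 + s_ik**2 + s_j0**2 + s_jk**2 + 2.0 * s_ik * s_jk))
print(np.mean(x_i * x_j), analytic)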