Probabilistic Engineering Mechanics 21 (2006) 338–351 www.elsevier.com/locate/probengmech
Reliability models based on bivariate exponential distributions

Saralees Nadarajah a,∗, Samuel Kotz b

a Department of Statistics, University of Nebraska, Lincoln, NE 68583, United States
b Department of Engineering Management and Systems Engineering, The George Washington University, Washington, DC 20052, United States
Received 3 October 2005; received in revised form 13 November 2005; accepted 18 November 2005 Available online 9 January 2006
Abstract

Motivated by reliability applications, we derive the exact distributions of R = X + Y and W = X/(X + Y) and the corresponding moment properties when X and Y follow five flexible bivariate exponential distributions. The expressions turn out to involve several special functions.
© 2005 Elsevier Ltd. All rights reserved.
Keywords: Bivariate exponential distribution; Ratio of random variables; Reliability; Sum of random variables
1. Introduction

Bivariate exponential distributions are undoubtedly the most applied distributions in the area of reliability. Apart from being of interest in the statistical issues of reliability, these distributions have attracted many practical applications in reliability problems. Some recent references of applications from reliability journals are: [6,13,17,25,14,26,18,15,16,19–24,44–46].

In these applications, one often encounters the problem of determining the sum and the ratio of the components of bivariate exponential distributions. For instance, consider a two-unit (original and duplicate) cold standby system model with preventive maintenance and replacement of the failed duplicate unit. Suppose that the failed duplicate unit is non-repairable and is replaced with an identical duplicate unit available instantaneously. Under this set-up, bivariate exponential distributions are the most commonly used models for the joint distribution of failure and repair/replacement times (see, for example, [13,17,25,14,26,18,15,16,19–24,46]). If X and Y represent the failure time and the repair/replacement time, respectively, then the sum R = X + Y and the ratio W = X/(X + Y) will be quantities of immediate interest. In particular, R will represent the inter-arrival time between failures and W will represent the efficiency of the repair/replacement scheme.
The aim of this paper is to derive the exact distributions of R = X + Y and W = X/(X + Y) when X and Y are correlated exponential random variables arising from five of the most popular bivariate exponential distributions. Explicit expressions for the pdfs and moments of R and W for the five distributions are derived in Sections 2–6. The calculations involve several special functions, including the exponential integral function defined by
\[ \mathrm{Ei}(x) = \int_{-\infty}^{x} \frac{\exp(t)}{t}\,dt, \]
the complementary error function defined by
\[ \mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_{x}^{\infty} \exp(-t^2)\,dt, \]
the complementary incomplete gamma function defined by
\[ \Gamma(a, x) = \int_{x}^{\infty} t^{a-1} \exp(-t)\,dt, \]
the degenerate confluent hypergeometric function defined by
\[ \Phi(a, b; x) = \frac{1}{\Gamma(a)} \int_{0}^{\infty} t^{a-1} (1 + t)^{b-a-1} \exp(-x t)\,dt, \]
the Gauss hypergeometric function defined by
\[ {}_2F_1(a, b; c; x) = \sum_{k=0}^{\infty} \frac{(a)_k (b)_k}{(c)_k} \frac{x^k}{k!}, \]
the Jacobi polynomial defined by
\[ P_n^{(\alpha,\beta)}(x) = \frac{(-1)^n}{2^n n!} (1 - x)^{-\alpha} (1 + x)^{-\beta} \frac{d^n}{dx^n} \left\{ (1 - x)^{\alpha+n} (1 + x)^{\beta+n} \right\} \]
and the modified Laguerre polynomial defined by
\[ L_n^{\nu}(x) = \frac{x^{-\nu} \exp(x)}{n!} \frac{d^n}{dx^n} \left\{ x^{n+\nu} \exp(-x) \right\}, \]
where $(e)_k = e(e+1)\cdots(e+k-1)$ denotes the ascending factorial. The properties of the above special functions can be found in [40,27].
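For numerical work, all of these special functions are available in standard libraries. The following sketch (ours, not part of the paper) shows one way to evaluate them with SciPy; note that the integral defining Φ(a, b; x) above coincides with Tricomi's confluent hypergeometric function, which SciPy exposes as hyperu. The specific argument values are arbitrary illustrations.

```python
# Sketch: evaluating the special functions of Section 1 with SciPy (illustrative only).
import numpy as np
from scipy import special

x, a, b, c, nu, n, alpha, beta = 0.7, 1.5, 2.5, 3.0, 2.0, 3, 0.0, 1.0

Ei    = special.expi(-0.3)                         # exponential integral Ei, here at -0.3
erfc_ = special.erfc(x)                            # complementary error function
Gamma = special.gamma(a) * special.gammaincc(a, x) # upper incomplete gamma Gamma(a, x)
Phi   = special.hyperu(a, b, x)                    # the integral defining Phi(a,b;x) equals Tricomi's U(a,b,x)
F21   = special.hyp2f1(a, b, c, x)                 # Gauss hypergeometric 2F1(a,b;c;x), |x| < 1
Pjac  = special.eval_jacobi(n, alpha, beta, 1.8)   # Jacobi polynomial P_n^(alpha,beta)
Lag   = special.eval_genlaguerre(n, nu, x)         # generalized (modified) Laguerre polynomial L_n^nu

print(Ei, erfc_, Gamma, Phi, F21, Pjac, Lag)
```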
2. Downton's bivariate exponential

Downton's bivariate exponential distribution [7] is given by the joint pdf
\[ f(x, y) = \frac{1}{1-\rho} \exp\left( -\frac{x+y}{1-\rho} \right) I_0\left( \frac{2\sqrt{x y \rho}}{1-\rho} \right) \tag{1} \]
for $x > 0$, $y > 0$ and $0 \le \rho < 1$, where $I_\nu(\cdot)$ denotes the modified Bessel function of the first kind of order $\nu$ defined by
\[ I_\nu(x) = \frac{x^\nu}{2^\nu \Gamma(\nu+1)} \sum_{k=0}^{\infty} \frac{1}{(\nu+1)_k\, k!} \left( \frac{x^2}{4} \right)^k. \]
The marginal distributions of X and Y are unit exponentials; so, in particular, E(X) = 1 and E(Y) = 1. The parameter ρ is the correlation coefficient between X and Y, with independence corresponding to ρ = 0.

In the context of reliability studies, Downton [7] used a successive damage model to derive this distribution. Consider a system of two components, each being subjected to shocks, the intervals between successive shocks having exponential distributions. Suppose the numbers of shocks $N_1$ and $N_2$ required for the components to fail follow a bivariate geometric distribution with the joint probability generating function $z_1 z_2/(1 + \alpha + \beta + \gamma - \alpha z_1 - \beta z_2 - \gamma z_1 z_2)$. Write $X = \sum_{i=1}^{N_1} X_i$ and $Y = \sum_{i=1}^{N_2} Y_i$, where the $X_i$ and $Y_i$ are the inter-shock intervals, mutually independent exponential variates. Then the component lifetime (X, Y) has the pdf (1).

Gaver [10] provided a slightly different motivation for this distribution. He supposes that two types of shocks are occurring to an item of equipment, fatal and non-fatal. Repairs are only made after a fatal shock has occurred; repairs of all the non-fatal defects are also made then. If it is assumed that the two types of shock both follow Poisson processes and that the time for repair is the sum of a random number of exponential variates, then the time to failure and the time to repair have Downton's bivariate exponential distribution. Expressed concisely, the positive correlation arises because the longer the time to failure, the greater the accumulated non-fatal damage. For reliability applications of this distribution see [36,32,5,30].
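A quick way to experiment with this distribution numerically is to simulate the shock model directly: a geometric number of shocks and exponential inter-shock times, common to the two components. The sketch below (ours, not from the paper) uses this well-known mixture representation; it reproduces unit exponential marginals and correlation ρ.

```python
# Sketch: simulating Downton's bivariate exponential with unit marginals and correlation rho
# via the shock-model (geometric mixture) representation. Illustrative only.
import numpy as np

def downton_sample(rho, size, rng=np.random.default_rng(0)):
    # Number of shocks: P(N = n) = (1 - rho) * rho**(n - 1), n = 1, 2, ...
    N = rng.geometric(1.0 - rho, size=size)
    # Given N = n, each lifetime is (1 - rho) times a Gamma(n, 1) sum of inter-shock times,
    # which yields unit exponential marginals and Corr(X, Y) = rho.
    X = (1.0 - rho) * rng.gamma(shape=N, scale=1.0)
    Y = (1.0 - rho) * rng.gamma(shape=N, scale=1.0)
    return X, Y

X, Y = downton_sample(rho=0.6, size=200_000)
print(X.mean(), Y.mean())            # both close to 1
print(np.corrcoef(X, Y)[0, 1])       # close to 0.6
```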
Theorems 1 and 2 derive the pdfs of R = X + Y and W = X/(X + Y) when X and Y are distributed according to (1).

Theorem 1. If X and Y are jointly distributed according to (1) then
\[ f_R(r) = \frac{\pi r}{2(1-\rho)} \exp\left( -\frac{r}{1-\rho} \right) I_{1/2}\left( \frac{r\sqrt{\rho}}{2(1-\rho)} \right) I_{-1/2}\left( \frac{r\sqrt{\rho}}{2(1-\rho)} \right) \tag{2} \]
for $0 < r < \infty$.

Proof. From (1), the joint pdf of (R, W) = (X + Y, X/R) becomes
\[ f(r, w) = \frac{r}{1-\rho} \exp\left( -\frac{r}{1-\rho} \right) I_0\left( \frac{2 r \sqrt{\rho} \sqrt{w(1-w)}}{1-\rho} \right). \tag{3} \]
Thus, the pdf of R can be written as
\[ f_R(r) = \frac{r}{1-\rho} \exp\left( -\frac{r}{1-\rho} \right) J(r), \tag{4} \]
where
\[ J(r) = \int_0^1 I_0\left( \frac{2 r \sqrt{\rho} \sqrt{w(1-w)}}{1-\rho} \right) dw. \]
Substituting $u = \sqrt{w(1-w)}$, the integral J(r) can be rewritten as
\[ J(r) = 2 \int_0^{1/2} \frac{u}{\sqrt{1/4 - u^2}}\, I_0\left( \frac{2 r \sqrt{\rho}\, u}{1-\rho} \right) du. \tag{5} \]
Direct application of Eq. (2.15.2.11) in [40, volume 2] shows that (5) can be calculated as
\[ J(r) = \frac{\pi}{2}\, I_{1/2}\left( \frac{r\sqrt{\rho}}{2(1-\rho)} \right) I_{-1/2}\left( \frac{r\sqrt{\rho}}{2(1-\rho)} \right). \tag{6} \]
The result of the theorem follows by combining (4) and (6).

Theorem 2. If X and Y are jointly distributed according to (1) then
\[ f_W(w) = \frac{1-\rho}{\{1 - 4\rho w(1-w)\}^{3/2}} \tag{7} \]
for $0 < w < 1$.

Proof. Using (3), one can write
\[ f_W(w) = \frac{1}{1-\rho}\, J(w), \tag{8} \]
where
\[ J(w) = \int_0^\infty r \exp\left( -\frac{r}{1-\rho} \right) I_0\left( \frac{2 r \sqrt{\rho} \sqrt{w(1-w)}}{1-\rho} \right) dr. \]
Direct application of Eq. (2.15.3.2) in [40, volume 2] shows that one can calculate J(w) as
\[ J(w) = (1-\rho)^2\, {}_2F_1\left( 1, \frac{3}{2}; 1; 4\rho w(1-w) \right). \tag{9} \]
Upon using the property that ${}_2F_1(a, b; a; x) = (1 - x)^{-b}$,
Fig. 1. Contours of the joint pdf (1) for E(X) = 1, E(Y) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
Fig. 2. Plots of the pdf (2) for E(X) = 1, E(Y) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
one can reduce (9) to
\[ J(w) = (1-\rho)^2 \{1 - 4\rho w(1-w)\}^{-3/2}. \tag{10} \]
The result of the theorem follows by combining (8) and (10).
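As a quick numerical sanity check of Theorems 1 and 2 (our own check, not from the paper), the densities (2) and (7) can be integrated by quadrature; exponentially scaled Bessel functions are used to keep the product in (2) finite for large arguments.

```python
# Sketch: quadrature checks of the densities (2) and (7) for Downton's distribution. Illustrative only.
import numpy as np
from scipy import integrate, special

rho = 0.6

def f_R(r):
    z = r * np.sqrt(rho) / (2.0 * (1.0 - rho))
    # ive(v, z) = iv(v, z) * exp(-z), so the exponent below absorbs the Bessel growth.
    return (np.pi * r / (2.0 * (1.0 - rho))
            * np.exp(-r * (1.0 - np.sqrt(rho)) / (1.0 - rho))
            * special.ive(0.5, z) * special.ive(-0.5, z))

def f_W(w):
    return (1.0 - rho) * (1.0 - 4.0 * rho * w * (1.0 - w)) ** (-1.5)

eps = 1e-12  # start just above zero to avoid the removable 0*inf form at r = 0
print(integrate.quad(f_R, eps, np.inf)[0])                       # ~1
print(integrate.quad(lambda r: r * f_R(r), eps, np.inf)[0])      # ~2 = E(X) + E(Y)
print(integrate.quad(f_W, 0, 1)[0])                              # ~1
print(integrate.quad(lambda w: w * f_W(w), 0, 1)[0])             # ~0.5 = E(W)
```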
Using special properties of the Bessel function of the first kind, one can derive elementary forms for the pdf in (2). This is illustrated in the corollary below.

Corollary 1. If X and Y are jointly distributed according to (1) then
\[ f_R(r) = \frac{2}{\sqrt{\rho}} \exp\left( -\frac{r}{1-\rho} \right) \sinh\left( \frac{r\sqrt{\rho}}{2(1-\rho)} \right) \cosh\left( \frac{r\sqrt{\rho}}{2(1-\rho)} \right). \]

Figs. 1–3 illustrate the shape of (1), (2) and (7) for the selected values of the correlation coefficient ρ = 0.2, 0.4, 0.6, 0.8.

Now, we derive the moments of R = X + Y and W = X/(X + Y) when X and Y are distributed according to (1). We need the following lemma.

Lemma 1. If X and Y are jointly distributed according to (1) then
\[ E(X^m Y^n) = m!\, n!\, (1-\rho)^n P_n^{(0,\, m-n)}\left( \frac{1+\rho}{1-\rho} \right) \]
for $m \ge 1$ and $n \ge 1$.

Proof. One can express
\[ E(X^m Y^n) = \frac{1}{1-\rho} \int_0^\infty x^m \exp\left( -\frac{x}{1-\rho} \right) \int_0^\infty y^n \exp\left( -\frac{y}{1-\rho} \right) I_0\left( \frac{2\sqrt{x y \rho}}{1-\rho} \right) dy\, dx \]
Fig. 3. Plots of the pdf (7) for E(X ) = 1, E(Y ) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
\[ = \frac{2}{1-\rho} \int_0^\infty x^m \exp\left( -\frac{x}{1-\rho} \right) \int_0^\infty w^{2n+1} \exp\left( -\frac{w^2}{1-\rho} \right) I_0\left( \frac{2\sqrt{x\rho}\, w}{1-\rho} \right) dw\, dx \]
\[ = n!\, (1-\rho)^n \int_0^\infty x^m \exp(-x)\, L_n^0\left( -\frac{x\rho}{1-\rho} \right) dx, \tag{11} \]
which follows after setting $w = \sqrt{y}$ and applying Eq. (2.15.5.4) in [40, volume 2]. The integral in (11) can be calculated by direct application of Eq. (2.19.3.2) in [40, volume 2] to yield
\[ \int_0^\infty x^m \exp(-x)\, L_n^0\left( -\frac{x\rho}{1-\rho} \right) dx = \Gamma(m+1)\, P_n^{(0,\, m-n)}\left( \frac{1+\rho}{1-\rho} \right). \tag{12} \]
The result of the lemma follows by combining (11) and (12).
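Lemma 1 is easy to confirm numerically. The sketch below (ours, not from the paper) compares a Monte Carlo estimate of E(X^m Y^n), obtained with the geometric-mixture sampler used earlier, against the Jacobi-polynomial expression.

```python
# Sketch: Monte Carlo check of Lemma 1, E(X^m Y^n) = m! n! (1-rho)^n P_n^{(0, m-n)}((1+rho)/(1-rho)).
# Illustrative only.
import numpy as np
from math import factorial
from scipy.special import eval_jacobi

rho, m, n = 0.5, 3, 2
rng = np.random.default_rng(1)
N = rng.geometric(1.0 - rho, size=500_000)
X = (1.0 - rho) * rng.gamma(N, 1.0)
Y = (1.0 - rho) * rng.gamma(N, 1.0)

mc = np.mean(X**m * Y**n)
exact = (factorial(m) * factorial(n) * (1.0 - rho)**n
         * eval_jacobi(n, 0, m - n, (1.0 + rho) / (1.0 - rho)))
print(mc, exact)   # the two values should agree up to Monte Carlo error
```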
The moments of R = X + Y are now simple consequences of this lemma as illustrated in Theorem 3. The moments of W = X/(X + Y ) require a separate treatment as shown by Theorem 4.
Theorem 3. If X and Y are jointly distributed according to (1) then
\[ E(R^n) = n! \sum_{k=0}^{n} (1-\rho)^k P_k^{(0,\, n-2k)}\left( \frac{1+\rho}{1-\rho} \right) \tag{13} \]
for $n \ge 1$.

Proof. The result in (13) follows by writing
\[ E\{(X+Y)^n\} = \sum_{k=0}^{n} \binom{n}{k} E(X^{n-k} Y^k) \]
and applying Lemma 1 to each expectation in the sum.

Theorem 4. If X and Y are jointly distributed according to (1) then the moments of W satisfy the recurrence relation
\[ E(W^n) = \frac{1-\rho}{4\rho(n-2)} + \frac{2n-3}{2(n-2)}\, E(W^{n-1}) - \frac{n-1}{4\rho(n-2)}\, E(W^{n-2}) \tag{14} \]
for $n \ge 3$, with the initial values
\[ E(W^2) = \begin{cases} \dfrac{2\rho-1}{4\rho} + \dfrac{1-\rho}{4\rho^{3/2}} \operatorname{arcsinh}\sqrt{\dfrac{\rho}{1-\rho}}, & \text{if } \rho > 0, \\[2ex] \dfrac{2\rho-1}{4\rho} - \dfrac{1-\rho}{4(-\rho)^{3/2}} \arcsin\sqrt{\dfrac{-\rho}{1-\rho}}, & \text{if } \rho < 0, \end{cases} \tag{15} \]
and E(W) = 1/2.

Proof. It follows from (7) that one can write
\[ E(W^n) = (1-\rho) \int_0^1 \frac{w^n}{(4\rho w^2 - 4\rho w + 1)^{3/2}}\, dw. \]
The recurrence relation in (14) follows by applying Eq. (2.263.1) in [27] to the above integral. The initial value in (15) follows by applying Eq. (2.263.2) in [27] to write
\[ E(W^2) = -\frac{1}{4\rho} + (1-\rho) \int_0^1 \frac{w}{(4\rho w^2 - 4\rho w + 1)^{3/2}}\, dw + \frac{1-\rho}{4\rho} \int_0^1 \frac{dw}{\sqrt{4\rho w^2 - 4\rho w + 1}} \tag{16} \]
and then applying Eqs. (2.261) and (2.264.6) in [27] to calculate the two integrals on the right hand side of (16). Finally, since (7) is symmetric around w = 1/2, one has E(W) = 1/2.

3. Arnold and Strauss's bivariate exponential

Arnold and Strauss's [2] bivariate exponential distribution is given by the joint pdf
\[ f(x, y) = K \exp\{-(a x + b y + c x y)\} \tag{17} \]
for $x > 0$, $y > 0$, $a > 0$, $b > 0$ and $c > 0$, where $K = K(a, b, c)$ is the normalizing constant given by
\[ \frac{1}{K} = -\frac{1}{c} \exp\left( \frac{a b}{c} \right) \mathrm{Ei}\left( -\frac{a b}{c} \right). \]
Note that the marginal pdfs of X and Y are not exponential and are given by
\[ f_X(x) = \frac{K \exp(-a x)}{b + c x} \]
and
\[ f_Y(y) = \frac{K \exp(-b y)}{a + c y}, \]
respectively, with the expectations
\[ E(X) = \frac{b}{c} \left( \frac{K}{a b} - 1 \right) \]
and
\[ E(Y) = \frac{a}{c} \left( \frac{K}{a b} - 1 \right). \]
However, the conditional distributions of Y | X = x and X | Y = y are exponential with parameters b + cx and a + cy, respectively. Arnold and Strauss [2] motivated the use of (17) by observing that it often happens that a researcher has better insight into the forms of conditional distributions rather than the joint distribution. The distribution has been used to model such variables as blood counts and survival times of patients (see [29]). The correlation coefficient ρ = Corr(X, Y) is given by
\[ \rho = \frac{a b c + a b K - K^2}{K(a b + c - K)}. \tag{18} \]
This is zero when c = 0, the case of independence, becomes increasingly negative with increasing c until it reaches approximately −0.32, and then gets less negative, tending slowly to zero as c → ∞.

Theorem 5 derives the pdf of W = X/(X + Y) when X and Y are distributed according to (17).

Theorem 5. If X and Y are jointly distributed according to (17) then
\[ f_W(w) = \frac{K}{4 c^2 w^2 (1-w)^2} \left[ 2 c w(1-w) - \sqrt{\pi c w(1-w)}\, \{a w + b(1-w)\} \exp\left( \frac{\{a w + b(1-w)\}^2}{4 c w(1-w)} \right) \mathrm{erfc}\left( \frac{a w + b(1-w)}{2\sqrt{c w(1-w)}} \right) \right] \tag{19} \]
for $0 < w < 1$.

Proof. From (17), the joint pdf of (S, W) = (X + Y, X/S) becomes
\[ f(s, w) = K s \exp\left[ -s\{a w + b(1-w)\} - c s^2 w(1-w) \right]. \]
Thus, the pdf of W can be written as
\[ f_W(w) = K \int_0^\infty s \exp\left[ -s\{a w + b(1-w)\} - c s^2 w(1-w) \right] ds. \]
The result of the theorem follows by using Eq. (2.3.15.7) in [40, volume 1] to calculate the above integral.
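The following sketch (ours, not from the paper) checks (19) numerically: it computes K from the Ei representation above, verifies that (19) integrates to one, and compares its mean with a Gibbs sample built from the exponential conditionals of (17). The scaled function erfcx(z) = exp(z^2) erfc(z) is used in place of the exp–erfc product in (19) to avoid overflow near the endpoints.

```python
# Sketch: numerical checks for the Arnold-Strauss distribution (17) and the density (19). Illustrative only.
import numpy as np
from scipy import integrate, special

a, b, c = 1.0, 2.0, 3.0
K = 1.0 / (-(1.0 / c) * np.exp(a * b / c) * special.expi(-a * b / c))   # normalizing constant

def f_W(w):
    p = a * w + b * (1.0 - w)
    q = c * w * (1.0 - w)
    # erfcx(p/(2 sqrt(q))) = exp(p^2/(4q)) * erfc(p/(2 sqrt(q))), the product appearing in (19)
    return K / (4.0 * q * q) * (2.0 * q - np.sqrt(np.pi * q) * p * special.erfcx(p / (2.0 * np.sqrt(q))))

print(integrate.quad(f_W, 0.0, 1.0)[0])                          # ~1
mean_W = integrate.quad(lambda w: w * f_W(w), 0.0, 1.0)[0]

rng = np.random.default_rng(2)
x, draws = 1.0, []
for i in range(200_000):                 # Gibbs sampler: Y|X=x ~ Exp(b+cx), X|Y=y ~ Exp(a+cy)
    y = rng.exponential(1.0 / (b + c * x))
    x = rng.exponential(1.0 / (a + c * y))
    if i > 1000:                         # discard a short burn-in
        draws.append(x / (x + y))
print(mean_W, np.mean(draws))            # the two values should be close
```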
Fig. 4. Contours of the joint pdf (17) for a = 1, b = 1 and c = 0.1, 1, 3, 5 (ρ = −0.073, −0.249, −0.310, −0.322).
Fig. 5. Plots of the pdf (19) for a = 1, b = 1 and c = 0.1, 1, 3, 5 (ρ = −0.073, −0.249, −0.310, −0.322).
Figs. 4 and 5 illustrate the shape of (17) and (19) for a = 1, b = 1 and c = 0.1, 1, 3, 5. Note from (18) that the corresponding values of the correlation coefficient are ρ = −0.073, −0.249, −0.310, −0.322.

4. Marshall and Olkin's bivariate exponential

Marshall and Olkin's [34,35] bivariate exponential distribution is given by the joint pdf
\[ f(x, y) = \begin{cases} \theta_1 (\theta_2 + \theta_3) \exp\{-\theta_1 x - (\theta_2 + \theta_3) y\}, & \text{if } 0 < x < y, \\ \theta_2 (\theta_1 + \theta_3) \exp\{-\theta_2 y - (\theta_1 + \theta_3) x\}, & \text{if } 0 < y < x, \\ \theta_3 \exp\{-(\theta_1 + \theta_2 + \theta_3) y\}, & \text{if } 0 < x = y \end{cases} \tag{20} \]
for $x > 0$, $y > 0$, $\theta_1 > 0$, $\theta_2 > 0$ and $\theta_3 > 0$. This distribution arises in the following context: consider a system consisting of two components, A and B. Let there be three independent Poisson processes, $P_1$, $P_2$ and $P_3$, with parameters $\theta_1$, $\theta_2$ and $\theta_3$, respectively. If an event occurs according to the first process, A fails. If an event occurs in the second process, B fails; and if an event occurs in the third process, both A and B fail simultaneously. Let U, V and W be the occurrence times of the first event in $P_1$, $P_2$ and $P_3$, respectively, and let X be the failure time of component A and Y be the failure time of component B. Then the joint pdf of X = min(U, W) and Y = min(V, W) is given by (20). One of the attractive properties of this distribution is the lack of memory property given by
\[ \Pr(X > s_1 + t,\, Y > s_2 + t)/\Pr(X > s_1,\, Y > s_2) = \Pr(X > t,\, Y > t). \]
It has received wide ranging applications in reliability, see [43,38,12,8,39,28,42].

The marginal pdfs of X and Y are exponential with parameters $\theta_1 + \theta_3$ and $\theta_2 + \theta_3$, respectively; so, in particular,
\[ E(X) = \frac{1}{\theta_1 + \theta_3} \]
and
\[ E(Y) = \frac{1}{\theta_2 + \theta_3}. \]
The correlation coefficient ρ = Corr(X, Y) is given by
\[ \rho = \frac{\theta_3}{\theta_1 + \theta_2 + \theta_3} \]
and so $\theta_3 = 0$ corresponds to the case where X and Y are independent. Solving the three preceding equations, one can express $\theta_1$, $\theta_2$ and $\theta_3$ as
\[ \theta_1 = \frac{1}{E(X)} - \theta_3, \tag{21} \]
\[ \theta_2 = \frac{1}{E(Y)} - \theta_3 \tag{22} \]
and
\[ \theta_3 = \frac{\{E(X) + E(Y)\}\rho}{E(X)\, E(Y)\, (1 + \rho)}. \tag{23} \]
Theorems 6 and 7 derive the pdfs of R = X + Y and W = X/(X + Y) when X and Y are distributed according to (20).
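The construction just described is straightforward to simulate, which gives a direct check of (21)–(23). The sketch below is ours (not from the paper); the parameter choices are arbitrary.

```python
# Sketch: simulating the Marshall-Olkin construction of Section 4 and checking (21)-(23). Illustrative only.
import numpy as np

rho = 0.4
EX = EY = 1.0
theta3 = (EX + EY) * rho / (EX * EY * (1.0 + rho))   # eq. (23)
theta1 = 1.0 / EX - theta3                           # eq. (21)
theta2 = 1.0 / EY - theta3                           # eq. (22)

rng = np.random.default_rng(3)
size = 500_000
U = rng.exponential(1.0 / theta1, size)              # first event times of the three Poisson processes
V = rng.exponential(1.0 / theta2, size)
W = rng.exponential(1.0 / theta3, size)
X, Y = np.minimum(U, W), np.minimum(V, W)

print(X.mean(), Y.mean())                            # ~E(X), E(Y)
print(np.corrcoef(X, Y)[0, 1])                       # ~theta3/(theta1+theta2+theta3) = rho
print(np.mean(X == Y))                               # simultaneous failures: ~theta3/(theta1+theta2+theta3)
```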
Theorem 6. If X and Y are jointly distributed according to (20) then
\[ f_R(r) = \frac{\theta_1 (\theta_2 + \theta_3)}{\theta_2 + \theta_3 - \theta_1} \exp\{-(\theta_2 + \theta_3) r\} \left[ \exp\{(1/2)(\theta_2 + \theta_3 - \theta_1) r\} - 1 \right] + \frac{\theta_2 (\theta_1 + \theta_3)}{\theta_2 - \theta_1 - \theta_3} \exp\{-(\theta_1 + \theta_3) r\} \left[ 1 - \exp\{-(1/2)(\theta_2 - \theta_1 - \theta_3) r\} \right] + \theta_3 r \exp\{-(1/2)(\theta_1 + \theta_2 + \theta_3) r\} \tag{24} \]
for $0 < r < \infty$.

Proof. From (20), the joint pdf of (R, W) = (X + Y, X/R) becomes
\[ f(r, w) = \begin{cases} \theta_1 (\theta_2 + \theta_3)\, r \exp\{-\theta_1 r w - (\theta_2 + \theta_3) r (1-w)\}, & \text{if } w < 1/2, \\ \theta_2 (\theta_1 + \theta_3)\, r \exp\{-\theta_2 r (1-w) - (\theta_1 + \theta_3) r w\}, & \text{if } w > 1/2, \\ \theta_3\, r \exp\{-(\theta_1 + \theta_2 + \theta_3) r (1-w)\}, & \text{if } w = 1/2. \end{cases} \tag{25} \]
Fig. 6. Contours of the joint pdf (20) for E(X ) = 1, E(Y ) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
Thus, the pdf of R can be written as
\[ f_R(r) = \theta_1 (\theta_2 + \theta_3)\, r \exp\{-(\theta_2 + \theta_3) r\} \int_0^{1/2} \exp\{(\theta_2 + \theta_3 - \theta_1) r w\}\, dw + \theta_2 (\theta_1 + \theta_3)\, r \exp(-\theta_2 r) \int_{1/2}^1 \exp\{(\theta_2 - \theta_1 - \theta_3) r w\}\, dw + \theta_3\, r \exp\{-(1/2)(\theta_1 + \theta_2 + \theta_3) r\}. \]
The result of the theorem follows by elementary integration of the two integrals above.

Theorem 7. If X and Y are jointly distributed according to (20) then
\[ f_W(w) = \begin{cases} \dfrac{\theta_1 (\theta_2 + \theta_3)}{\{\theta_1 w + (\theta_2 + \theta_3)(1-w)\}^2}, & \text{if } w < 1/2, \\[2ex] \dfrac{\theta_2 (\theta_1 + \theta_3)}{\{\theta_2 (1-w) + (\theta_1 + \theta_3) w\}^2}, & \text{if } w > 1/2, \\[2ex] \dfrac{\theta_3}{(\theta_1 + \theta_2 + \theta_3)^2 (1-w)^2}, & \text{if } w = 1/2 \end{cases} \tag{26} \]
for $0 < w < 1$.

Proof. Using (25), one can write
\[ f_W(w) = \begin{cases} \theta_1 (\theta_2 + \theta_3) \displaystyle\int_0^\infty r \exp\{-\theta_1 r w - (\theta_2 + \theta_3) r (1-w)\}\, dr, & \text{if } w < 1/2, \\ \theta_2 (\theta_1 + \theta_3) \displaystyle\int_0^\infty r \exp\{-\theta_2 r (1-w) - (\theta_1 + \theta_3) r w\}\, dr, & \text{if } w > 1/2, \\ \theta_3 \displaystyle\int_0^\infty r \exp\{-(\theta_1 + \theta_2 + \theta_3) r (1-w)\}\, dr, & \text{if } w = 1/2. \end{cases} \]
The result of the theorem follows by elementary integration of the above integrals.
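One way to check the two absolutely continuous branches of (26) numerically (our own check, not from the paper) is to note that, for this construction, each branch should carry the probability of the corresponding ordering, P(X < Y) = θ1/(θ1 + θ2 + θ3) and P(X > Y) = θ2/(θ1 + θ2 + θ3), which can also be estimated by simulation.

```python
# Sketch: checking the continuous branches of (26) against the ordering probabilities and simulation.
# Illustrative only.
import numpy as np
from scipy import integrate

rho = 0.4
t3 = 2.0 * rho / (1.0 + rho)
t1 = t2 = (1.0 - rho) / (1.0 + rho)

branch_lo = lambda w: t1 * (t2 + t3) / (t1 * w + (t2 + t3) * (1.0 - w)) ** 2
branch_hi = lambda w: t2 * (t1 + t3) / (t2 * (1.0 - w) + (t1 + t3) * w) ** 2
print(integrate.quad(branch_lo, 0.0, 0.5)[0], t1 / (t1 + t2 + t3))
print(integrate.quad(branch_hi, 0.5, 1.0)[0], t2 / (t1 + t2 + t3))

rng = np.random.default_rng(4)
U, V, W = (rng.exponential(1.0 / t, 500_000) for t in (t1, t2, t3))
X, Y = np.minimum(U, W), np.minimum(V, W)
Wr = X / (X + Y)
print(np.mean(Wr < 0.5), np.mean(Wr > 0.5))   # should match the two integrals above
```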
Fig. 7. Plots of the pdf (24) for E(X ) = 1, E(Y ) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
Figs. 6–8 illustrate the shape of (20), (24) and (26) for E(X) = 1, E(Y) = 1 and ρ = 0.2, 0.4, 0.6, 0.8. Note from (21)–(23) that the corresponding parameter values are $\theta_1 = (1-\rho)/(1+\rho)$, $\theta_2 = (1-\rho)/(1+\rho)$ and $\theta_3 = 2\rho/(1+\rho)$.

Now, we derive the moments of R = X + Y and W = X/(X + Y) when X and Y are distributed according to (20). We need the following lemma.

Lemma 2. If X and Y are jointly distributed according to (20) then
\[ E(X^m Y^n) = \frac{\theta_1\, n!}{(\theta_2 + \theta_3)^n} \sum_{k=0}^{n} \frac{(\theta_2 + \theta_3)^k (m+k)!}{(\theta_1 + \theta_2 + \theta_3)^{m+k+1}\, k!} + \frac{\theta_2\, m!}{(\theta_1 + \theta_3)^m} \sum_{k=0}^{m} \frac{(\theta_1 + \theta_3)^k (n+k)!}{(\theta_1 + \theta_2 + \theta_3)^{n+k+1}\, k!} + \frac{\theta_3 (m+n)!}{(\theta_1 + \theta_2 + \theta_3)^{m+n+1}} \]
for $m \ge 1$ and $n \ge 1$.
Proof. One can express
\[ E(X^m Y^n) = \theta_1 (\theta_2 + \theta_3) \int_0^\infty \int_x^\infty x^m y^n \exp\{-\theta_1 x - (\theta_2 + \theta_3) y\}\, dy\, dx + \theta_2 (\theta_1 + \theta_3) \int_0^\infty \int_y^\infty x^m y^n \exp\{-\theta_2 y - (\theta_1 + \theta_3) x\}\, dx\, dy + \theta_3 \int_0^\infty y^{m+n} \exp\{-(\theta_1 + \theta_2 + \theta_3) y\}\, dy \]
\[ = \frac{\theta_1}{(\theta_2 + \theta_3)^n} \int_0^\infty x^m \exp(-\theta_1 x)\, \Gamma(n+1, (\theta_2 + \theta_3) x)\, dx + \frac{\theta_2}{(\theta_1 + \theta_3)^m} \int_0^\infty y^n \exp(-\theta_2 y)\, \Gamma(m+1, (\theta_1 + \theta_3) y)\, dy + \frac{\theta_3 (m+n)!}{(\theta_1 + \theta_2 + \theta_3)^{m+n+1}} \]
\[ = \frac{\theta_1\, n!}{(\theta_2 + \theta_3)^n} \sum_{k=0}^{n} \frac{(\theta_2 + \theta_3)^k}{k!} \int_0^\infty x^{m+k} \exp\{-(\theta_1 + \theta_2 + \theta_3) x\}\, dx + \frac{\theta_2\, m!}{(\theta_1 + \theta_3)^m} \sum_{k=0}^{m} \frac{(\theta_1 + \theta_3)^k}{k!} \int_0^\infty y^{n+k} \exp\{-(\theta_1 + \theta_2 + \theta_3) y\}\, dy + \frac{\theta_3 (m+n)!}{(\theta_1 + \theta_2 + \theta_3)^{m+n+1}}, \tag{27} \]
where we have used the definition of the complementary incomplete gamma function and the identity that
\[ \Gamma(n+1, x) = n! \exp(-x) \sum_{k=0}^{n} \frac{x^k}{k!}. \tag{28} \]
The result of the lemma follows by elementary integration of the two integrals in (27).
Fig. 8. Plots of the pdf (26) for E(X ) = 1, E(Y ) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
The moments of R = X + Y are now simple consequences of this lemma as illustrated in Theorem 8. The moments of W = X/(X + Y) require a separate treatment as shown by Theorem 9.

Theorem 8. If X and Y are jointly distributed according to (20) then
\[ E(R^n) = \sum_{k=0}^{n} \binom{n}{k} \left[ \frac{\theta_1\, k!}{(\theta_2 + \theta_3)^k} \sum_{l=0}^{k} \frac{(\theta_2 + \theta_3)^l (n-k+l)!}{(\theta_1 + \theta_2 + \theta_3)^{n-k+l+1}\, l!} + \frac{\theta_2\, (n-k)!}{(\theta_1 + \theta_3)^{n-k}} \sum_{l=0}^{n-k} \frac{(\theta_1 + \theta_3)^l (k+l)!}{(\theta_1 + \theta_2 + \theta_3)^{k+l+1}\, l!} + \frac{\theta_3\, n!}{(\theta_1 + \theta_2 + \theta_3)^{n+1}} \right] \tag{29} \]
for $n \ge 1$.

Proof. The result in (29) follows by writing
\[ E\{(X+Y)^n\} = \sum_{k=0}^{n} \binom{n}{k} E(X^{n-k} Y^k) \]
and applying Lemma 2 to each expectation in the sum.

Theorem 9. If X and Y are jointly distributed according to (20) then
\[ E(W^n) = \frac{\theta_1 (\theta_2 + \theta_3)}{(\theta_1 - \theta_2 - \theta_3)^{n+1}} \sum_{l=0}^{n} \binom{n}{l} (-1)^{n-l} (\theta_2 + \theta_3)^{n-l} \delta_1(l-2) + \frac{\theta_2 (\theta_1 + \theta_3)}{(\theta_1 + \theta_3 - \theta_2)^{n+1}} \sum_{l=0}^{n} \binom{n}{l} (-1)^{n-l} \theta_2^{n-l} \delta_2(l-2) + \frac{2^{2-n} \theta_3}{(\theta_1 + \theta_2 + \theta_3)^2} \tag{30} \]
for $n \ne 1$, where
\[ \delta_1(k) = \begin{cases} \dfrac{1}{k+1} \left[ \left( \dfrac{\theta_1 + \theta_2 + \theta_3}{2} \right)^{k+1} - (\theta_2 + \theta_3)^{k+1} \right], & \text{if } k \ne -1, \\[2ex] \log \dfrac{\theta_1 + \theta_2 + \theta_3}{2(\theta_2 + \theta_3)}, & \text{if } k = -1 \end{cases} \]
and
\[ \delta_2(k) = \begin{cases} \dfrac{1}{k+1} \left[ (\theta_1 + \theta_3)^{k+1} - \left( \dfrac{\theta_1 + \theta_2 + \theta_3}{2} \right)^{k+1} \right], & \text{if } k \ne -1, \\[2ex] \log \dfrac{2(\theta_1 + \theta_3)}{\theta_1 + \theta_2 + \theta_3}, & \text{if } k = -1. \end{cases} \]
Proof. Using (26), one can write
\[ E(W^n) = \theta_1 (\theta_2 + \theta_3) \int_0^{1/2} \frac{w^n}{\{\theta_1 w + (\theta_2 + \theta_3)(1-w)\}^2}\, dw + \theta_2 (\theta_1 + \theta_3) \int_{1/2}^{1} \frac{w^n}{\{\theta_2 (1-w) + (\theta_1 + \theta_3) w\}^2}\, dw + \frac{2^{2-n} \theta_3}{(\theta_1 + \theta_2 + \theta_3)^2} \]
\[ = \frac{\theta_1 (\theta_2 + \theta_3)}{(\theta_1 - \theta_2 - \theta_3)^{n+1}} \int_{\theta_2 + \theta_3}^{(\theta_1 + \theta_2 + \theta_3)/2} u^{-2} (u - \theta_2 - \theta_3)^n\, du + \frac{\theta_2 (\theta_1 + \theta_3)}{(\theta_1 + \theta_3 - \theta_2)^{n+1}} \int_{(\theta_1 + \theta_2 + \theta_3)/2}^{\theta_1 + \theta_3} v^{-2} (v - \theta_2)^n\, dv + \frac{2^{2-n} \theta_3}{(\theta_1 + \theta_2 + \theta_3)^2} \]
\[ = \frac{\theta_1 (\theta_2 + \theta_3)}{(\theta_1 - \theta_2 - \theta_3)^{n+1}} \sum_{k=0}^{n} \binom{n}{k} (-1)^{n-k} (\theta_2 + \theta_3)^{n-k} \int_{\theta_2 + \theta_3}^{(\theta_1 + \theta_2 + \theta_3)/2} u^{k-2}\, du + \frac{\theta_2 (\theta_1 + \theta_3)}{(\theta_1 + \theta_3 - \theta_2)^{n+1}} \sum_{k=0}^{n} \binom{n}{k} (-1)^{n-k} \theta_2^{n-k} \int_{(\theta_1 + \theta_2 + \theta_3)/2}^{\theta_1 + \theta_3} v^{k-2}\, dv + \frac{2^{2-n} \theta_3}{(\theta_1 + \theta_2 + \theta_3)^2}, \tag{31} \]
where we have set $u = \theta_1 w + (\theta_2 + \theta_3)(1-w)$ and $v = \theta_2 (1-w) + (\theta_1 + \theta_3) w$. The result of the theorem follows by noting that the two integrals in (31) reduce to $\delta_1(k-2)$ and $\delta_2(k-2)$ after elementary integration.

5. Freund's bivariate exponential

Freund's [9] bivariate exponential distribution is given by the joint pdf
\[ f(x, y) = \begin{cases} \alpha \beta' \exp\{-\beta' y - (\alpha + \beta - \beta') x\}, & \text{if } 0 < x < y, \\ \alpha' \beta \exp\{-\alpha' x - (\alpha + \beta - \alpha') y\}, & \text{if } 0 < y \le x \end{cases} \tag{32} \]
for $x > 0$, $y > 0$, $\alpha > 0$, $\beta > 0$, $\alpha' > 0$ and $\beta' > 0$. This distribution arises in the following setting: X and Y are the lifetimes of two components assumed to be independent exponentials with parameters α and β, respectively; but a dependence between X and Y is introduced by taking a failure of either component to change the parameter of the life distribution of the other component: if component 1 fails the parameter for Y is changed to β′, and if component 2 fails the parameter for X is changed to α′. For reliability applications of this distribution see [3,4,1,11,31,37].

The marginal pdfs of X and Y are not exponential. They take the form of exponential mixtures given by
\[ f_X(x) = \frac{1}{\alpha + \beta - \alpha'} \left[ (\alpha - \alpha')(\alpha + \beta) \exp\{-(\alpha + \beta) x\} + \alpha' \beta \exp(-\alpha' x) \right] \]
and
\[ f_Y(y) = \frac{1}{\alpha + \beta - \beta'} \left[ (\beta - \beta')(\alpha + \beta) \exp\{-(\alpha + \beta) y\} + \beta' \alpha \exp(-\beta' y) \right], \]
respectively, with the expectations
\[ E(X) = \frac{\alpha' + \beta}{\alpha' (\alpha + \beta)} \tag{33} \]
and
\[ E(Y) = \frac{\beta' + \alpha}{\beta' (\alpha + \beta)}. \tag{34} \]
The correlation coefficient ρ = Corr(X, Y) is given by
\[ \rho = \frac{\alpha' \beta' - \alpha \beta}{\sqrt{(\alpha')^2 + 2\alpha\beta + \beta^2}\, \sqrt{(\beta')^2 + 2\alpha\beta + \alpha^2}}. \tag{35} \]
One can show that $-1/3 < \rho < 1$. Note X and Y are independent if and only if $\alpha = \alpha'$ and $\beta = \beta'$. If $\beta > \beta'$ (or if $\alpha > \alpha'$), the expected life of component 2 (1) improves when component 1 (2) fails. In this case, $\alpha + \beta - \beta' > 0$ and the marginal densities are mixtures of exponential densities.
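The failure-rate-switching mechanism just described lends itself to direct simulation, which gives an easy check of (33)–(35). The sketch below is ours (not from the paper); parameter values are arbitrary.

```python
# Sketch: simulating Freund's model by its rate-switching mechanism and checking (33)-(35). Illustrative only.
import numpy as np

alpha, beta, alpha_p, beta_p = 1.0, 1.0, 3.0, 3.0    # alpha_p, beta_p stand for alpha', beta'
rng = np.random.default_rng(5)
size = 500_000

T = rng.exponential(1.0 / (alpha + beta), size)            # time of the first failure, Exp(alpha + beta)
first_is_1 = rng.random(size) < alpha / (alpha + beta)     # component 1 fails first with prob alpha/(alpha+beta)
X = np.where(first_is_1, T, T + rng.exponential(1.0 / alpha_p, size))
Y = np.where(first_is_1, T + rng.exponential(1.0 / beta_p, size), T)

EX = (alpha_p + beta) / (alpha_p * (alpha + beta))         # eq. (33)
EY = (beta_p + alpha) / (beta_p * (alpha + beta))          # eq. (34)
rho = (alpha_p * beta_p - alpha * beta) / np.sqrt(
    (alpha_p**2 + 2*alpha*beta + beta**2) * (beta_p**2 + 2*alpha*beta + alpha**2))  # eq. (35)
print(X.mean(), EX, Y.mean(), EY)
print(np.corrcoef(X, Y)[0, 1], rho)
```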
On the other hand, if $\beta < \beta'$ ($\alpha < \alpha'$), the expected lifetime of component 2 (1) decreases after the failure of component 1 (2). In this case, $\alpha + \beta - \beta' < 0$ ($\alpha + \beta - \alpha' < 0$) and the marginal densities are weighted averages of exponential densities.

Theorems 10 and 11 derive the pdfs of R = X + Y and W = X/(X + Y) when X and Y are distributed according to (32).

Theorem 10. If X and Y are jointly distributed according to (32) then
\[ f_R(r) = \frac{\alpha \beta'}{\alpha + \beta - 2\beta'} \exp(-\beta' r) \left[ 1 - \exp\{-(1/2)(\alpha + \beta - 2\beta') r\} \right] + \frac{\alpha' \beta}{\alpha + \beta - 2\alpha'} \exp(-\alpha' r) \left[ 1 - \exp\{-(1/2)(\alpha + \beta - 2\alpha') r\} \right] \tag{36} \]
for $0 < r < \infty$.

Proof. From (32), the joint pdf of (R, W) = (X + Y, X/R) becomes
\[ f(r, w) = \begin{cases} \alpha \beta'\, r \exp\{-\beta' r (1-w) - (\alpha + \beta - \beta') r w\}, & \text{if } w < 1/2, \\ \alpha' \beta\, r \exp\{-\alpha' r w - (\alpha + \beta - \alpha') r (1-w)\}, & \text{if } w \ge 1/2. \end{cases} \tag{37} \]
Fig. 9. Contours of the joint pdf (32) for E(X ) = 1, E(Y ) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
Thus, the pdf of R can be written as
\[ f_R(r) = \alpha \beta'\, r \exp(-\beta' r) \int_0^{1/2} \exp\{-(\alpha + \beta - 2\beta') r w\}\, dw + \alpha' \beta\, r \exp\{-(\alpha + \beta - \alpha') r\} \int_{1/2}^{1} \exp\{(\alpha + \beta - 2\alpha') r w\}\, dw. \]
The result of the theorem follows by elementary integration of the two integrals above.

Theorem 11. If X and Y are jointly distributed according to (32) then
\[ f_W(w) = \begin{cases} \dfrac{\alpha \beta'}{\{\beta'(1-w) + (\alpha + \beta - \beta') w\}^2}, & \text{if } w < 1/2, \\[2ex] \dfrac{\alpha' \beta}{\{\alpha' w + (\alpha + \beta - \alpha')(1-w)\}^2}, & \text{if } w \ge 1/2 \end{cases} \tag{38} \]
for $0 < w < 1$.

Proof. Using (37), one can write
\[ f_W(w) = \begin{cases} \alpha \beta' \displaystyle\int_0^\infty r \exp\{-\beta' r (1-w) - (\alpha + \beta - \beta') r w\}\, dr, & \text{if } w < 1/2, \\ \alpha' \beta \displaystyle\int_0^\infty r \exp\{-\alpha' r w - (\alpha + \beta - \alpha') r (1-w)\}\, dr, & \text{if } w \ge 1/2. \end{cases} \]
The result of the theorem follows by elementary integration of the above integrals.

Figs. 9–11 illustrate the shape of (32), (36) and (38) for E(X) = 1, E(Y) = 1 and ρ = 0.2, 0.4, 0.6, 0.8 with the restriction α = β.
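Since the Freund distribution has no singular component, (36) and (38) are proper densities and can be checked directly by quadrature; the sketch below is ours (not from the paper), with arbitrary parameters chosen so that α + β − 2β′ and α + β − 2α′ are positive.

```python
# Sketch: quadrature checks of the Freund densities (36) and (38). Illustrative only.
import numpy as np
from scipy import integrate

al, be, alp, bep = 2.0, 2.0, 1.5, 1.5    # alpha, beta, alpha', beta'

def f_R(r):
    t1 = al * bep / (al + be - 2*bep) * np.exp(-bep * r) * (1.0 - np.exp(-0.5*(al + be - 2*bep)*r))
    t2 = alp * be / (al + be - 2*alp) * np.exp(-alp * r) * (1.0 - np.exp(-0.5*(al + be - 2*alp)*r))
    return t1 + t2

def f_W(w):
    return np.where(w < 0.5,
                    al * bep / (bep*(1.0 - w) + (al + be - bep)*w)**2,
                    alp * be / (alp*w + (al + be - alp)*(1.0 - w))**2)

print(integrate.quad(f_R, 0, np.inf)[0])                          # ~1
print(integrate.quad(lambda r: r * f_R(r), 0, np.inf)[0])         # ~E(X) + E(Y)
print((alp + be)/(alp*(al + be)) + (bep + al)/(bep*(al + be)))    # (33) + (34)
print(integrate.quad(f_W, 0, 1)[0])                               # ~1
```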
Fig. 10. Plots of the pdf (36) for E(X ) = 1, E(Y ) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
Note from (33)–(35) that the corresponding parameter values are $\alpha = \beta = (2 - 3\rho + \sqrt{9\rho^2 - 4\rho + 4})/4$ and $\alpha' = \beta' = \alpha/(2\alpha - 1)$.

Now, we derive the moments of R = X + Y and W = X/(X + Y) when X and Y are distributed according to (32). We need the following lemma.

Lemma 3. If X and Y are jointly distributed according to (32) then
\[ E(X^m Y^n) = \frac{\alpha\, n!}{(\beta')^n} \sum_{k=0}^{n} \frac{(m+k)!\, (\beta')^k}{k!\, (\alpha + \beta)^{m+k+1}} + \frac{\beta\, m!}{(\alpha')^m} \sum_{k=0}^{m} \frac{(n+k)!\, (\alpha')^k}{k!\, (\alpha + \beta)^{n+k+1}} \]
for $m \ge 1$ and $n \ge 1$.
Proof. One can express
\[ E(X^m Y^n) = \alpha \beta' \int_0^\infty \int_x^\infty x^m y^n \exp\{-\beta' y - (\alpha + \beta - \beta') x\}\, dy\, dx + \alpha' \beta \int_0^\infty \int_y^\infty x^m y^n \exp\{-\alpha' x - (\alpha + \beta - \alpha') y\}\, dx\, dy \]
\[ = \frac{\alpha}{(\beta')^n} \int_0^\infty x^m \exp\{-(\alpha + \beta - \beta') x\}\, \Gamma(n+1, \beta' x)\, dx + \frac{\beta}{(\alpha')^m} \int_0^\infty y^n \exp\{-(\alpha + \beta - \alpha') y\}\, \Gamma(m+1, \alpha' y)\, dy \]
\[ = \frac{n!\, \alpha}{(\beta')^n} \int_0^\infty x^m \exp\{-(\alpha + \beta) x\} \sum_{k=0}^{n} \frac{(\beta' x)^k}{k!}\, dx + \frac{m!\, \beta}{(\alpha')^m} \int_0^\infty y^n \exp\{-(\alpha + \beta) y\} \sum_{k=0}^{m} \frac{(\alpha' y)^k}{k!}\, dy, \tag{39} \]
where we have used the definition of the complementary incomplete gamma function and the identity (28). The result of the lemma follows by elementary integration of the two integrals in (39).

The moments of R = X + Y are now simple consequences of this lemma as illustrated in Theorem 12. The moments of W = X/(X + Y) require a separate treatment as shown by Theorem 13.

Theorem 12. If X and Y are jointly distributed according to (32) then
\[ E(R^n) = \sum_{k=0}^{n} \binom{n}{k} \left[ \frac{k!\, \alpha}{(\beta')^k} \sum_{l=0}^{k} \frac{(n-k+l)!\, (\beta')^l}{l!\, (\alpha + \beta)^{n-k+l+1}} + \frac{(n-k)!\, \beta}{(\alpha')^{n-k}} \sum_{l=0}^{n-k} \frac{(k+l)!\, (\alpha')^l}{l!\, (\alpha + \beta)^{k+l+1}} \right] \tag{40} \]
for $n \ge 1$.

Proof. The result in (40) follows by writing
\[ E\{(X+Y)^n\} = \sum_{k=0}^{n} \binom{n}{k} E(X^{n-k} Y^k) \]
and applying Lemma 3 to each expectation in the sum.

Fig. 11. Plots of the pdf (38) for E(X) = 1, E(Y) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.

Theorem 13. If X and Y are jointly distributed according to (32) then
\[ E(W^n) = \frac{\alpha \beta'}{(\alpha + \beta - 2\beta')^{n+1}} \sum_{m=0}^{n} \binom{n}{m} (-\beta')^{n-m} \delta_1(m-2) + \frac{\alpha' \beta}{(2\alpha' - \alpha - \beta)^{n+1}} \sum_{m=0}^{n} \binom{n}{m} (\alpha' - \alpha - \beta)^{n-m} \delta_2(m-2) \tag{41} \]
for $n \ne 1$, where
\[ \delta_1(k) = \begin{cases} \dfrac{1}{k+1} \left[ \left( \dfrac{\alpha + \beta}{2} \right)^{k+1} - (\beta')^{k+1} \right], & \text{if } k \ne -1, \\[2ex] \log \dfrac{\alpha + \beta}{2\beta'}, & \text{if } k = -1 \end{cases} \]
and
\[ \delta_2(k) = \begin{cases} \dfrac{1}{k+1} \left[ (\alpha')^{k+1} - \left( \dfrac{\alpha + \beta}{2} \right)^{k+1} \right], & \text{if } k \ne -1, \\[2ex] \log \dfrac{2\alpha'}{\alpha + \beta}, & \text{if } k = -1. \end{cases} \]

Proof. Using (38), one can write
\[ E(W^n) = \alpha \beta' \int_0^{1/2} \frac{w^n}{\{\beta'(1-w) + (\alpha + \beta - \beta') w\}^2}\, dw + \alpha' \beta \int_{1/2}^{1} \frac{w^n}{\{\alpha' w + (\alpha + \beta - \alpha')(1-w)\}^2}\, dw \]
\[ = \frac{\alpha \beta'}{(\alpha + \beta - 2\beta')^{n+1}} \int_{\beta'}^{(\alpha + \beta)/2} u^{-2} (u - \beta')^n\, du + \frac{\alpha' \beta}{(2\alpha' - \alpha - \beta)^{n+1}} \int_{(\alpha + \beta)/2}^{\alpha'} v^{-2} (v + \alpha' - \alpha - \beta)^n\, dv \]
\[ = \frac{\alpha \beta'}{(\alpha + \beta - 2\beta')^{n+1}} \sum_{m=0}^{n} \binom{n}{m} (-\beta')^{n-m} \int_{\beta'}^{(\alpha + \beta)/2} u^{m-2}\, du + \frac{\alpha' \beta}{(2\alpha' - \alpha - \beta)^{n+1}} \sum_{m=0}^{n} \binom{n}{m} (\alpha' - \alpha - \beta)^{n-m} \int_{(\alpha + \beta)/2}^{\alpha'} v^{m-2}\, dv, \tag{42} \]
where we have set $u = \beta'(1-w) + (\alpha + \beta - \beta') w$ and $v = \alpha' w + (\alpha + \beta - \alpha')(1-w)$. The result of the theorem follows by noting that the two integrals in (42) reduce to $\delta_1(m-2)$ and $\delta_2(m-2)$ after elementary integration.

6. Lawrance and Lewis's bivariate exponential

Lawrance and Lewis's [33] bivariate exponential distribution is given by the joint pdf
\[ f(x, y) = \begin{cases} \dfrac{1-a}{b-a} \exp(-x), & \text{if } x = b y, \\[2ex] \dfrac{b(b-1)(1-a)}{(b-a)^2} \exp\left( -\dfrac{(b-1) x + b(1-a) y}{b-a} \right), & \text{if } a y < x < b y, \\[2ex] \dfrac{b-1}{b-a} \exp\left( -\dfrac{x}{a} \right), & \text{if } x = a y \end{cases} \tag{43} \]
for $x > 0$, $y > 0$ and $0 < a < 1 < b$. Note that the density of this distribution is confined to the cone-shaped region $a y < x < b y$. The marginals are unit exponential and the correlation coefficient ρ(X, Y) is given by
\[ \rho = a + \frac{1-a}{b}. \tag{44} \]
It is easily seen that $0 < \rho < 1$. For reliability applications of the distribution see [41]. Theorems 14 and 15 derive the pdfs of R = X + Y and W = X/(X + Y) when X and Y are distributed according to (43).

Theorem 14. If X and Y are jointly distributed according to (43) then
\[ f_R(r) = \frac{(1-a) r}{b-a} \exp\left( -\frac{b r}{b+1} \right) + \frac{(b-1) r}{b-a} \exp\left( -\frac{r}{a+1} \right) - \frac{b(b-1)(1-a)}{(b-a)(a b - 1)} \left[ \exp\left( -\frac{b r}{b+1} \right) - \exp\left( -\frac{r}{a+1} \right) \right] \tag{45} \]
for $0 < r < \infty$.

Proof. From (43), the joint pdf of (R, W) = (X + Y, X/R) becomes
\[ f(r, w) = \begin{cases} \dfrac{(1-a) r}{b-a} \exp(-r w), & \text{if } w = \dfrac{b}{b+1}, \\[2ex] \dfrac{b(b-1)(1-a) r}{(b-a)^2} \exp\left( -\dfrac{(b-1) r w + b(1-a) r (1-w)}{b-a} \right), & \text{if } \dfrac{a}{a+1} < w < \dfrac{b}{b+1}, \\[2ex] \dfrac{(b-1) r}{b-a} \exp\left( -\dfrac{r w}{a} \right), & \text{if } w = \dfrac{a}{a+1}. \end{cases} \tag{46} \]
Thus, the pdf of R can be written as
\[ f_R(r) = \frac{(1-a) r}{b-a} \exp\left( -\frac{b r}{b+1} \right) + \frac{(b-1) r}{b-a} \exp\left( -\frac{r}{a+1} \right) + \frac{b(b-1)(1-a) r}{(b-a)^2} \int_{a/(a+1)}^{b/(b+1)} \exp\left( -\frac{(b-1) r w + b(1-a) r (1-w)}{b-a} \right) dw. \]
The result of the theorem follows by elementary integration of the integral above.

Theorem 15. If X and Y are jointly distributed according to (43) then
\[ f_W(w) = \begin{cases} \dfrac{1-a}{(b-a) w^2}, & \text{if } w = \dfrac{b}{b+1}, \\[2ex] \dfrac{b(b-1)(1-a)}{\{(b-1) w + b(1-a)(1-w)\}^2}, & \text{if } \dfrac{a}{a+1} < w < \dfrac{b}{b+1}, \\[2ex] \dfrac{(b-1) a^2}{(b-a) w^2}, & \text{if } w = \dfrac{a}{a+1} \end{cases} \tag{47} \]
for $0 < w < 1$.

Proof. Using (46), one can write
\[ f_W(w) = \begin{cases} \dfrac{1-a}{b-a} \displaystyle\int_0^\infty r \exp(-r w)\, dr, & \text{if } w = \dfrac{b}{b+1}, \\[2ex] \dfrac{b(b-1)(1-a)}{(b-a)^2} \displaystyle\int_0^\infty r \exp\left( -\dfrac{(b-1) r w + b(1-a) r (1-w)}{b-a} \right) dr, & \text{if } \dfrac{a}{a+1} < w < \dfrac{b}{b+1}, \\[2ex] \dfrac{b-1}{b-a} \displaystyle\int_0^\infty r \exp\left( -\dfrac{r w}{a} \right) dr, & \text{if } w = \dfrac{a}{a+1}. \end{cases} \]
The result of the theorem follows by elementary integration of the above integrals.

Figs. 12–14 illustrate the shape of (43), (45) and (47) for ρ = 0.2, 0.4, 0.6, 0.8 with the restriction b = 100. Note from (44) that the corresponding parameter is a = (100ρ − 1)/99.

Fig. 12. Contours of the joint pdf (43) for E(X) = 1, E(Y) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
Fig. 13. Plots of the pdf (45) for E(X) = 1, E(Y) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.
Fig. 14. Plots of the pdf (47) for E(X) = 1, E(Y) = 1 and ρ = 0.2, 0.4, 0.6, 0.8.

Note too that x/y is plotted versus x in Fig. 12 because of the way (43) is defined.

Now, we derive the moments of R = X + Y and W = X/(X + Y) when X and Y are distributed according to (43). We need the following lemma.

Lemma 4. If X and Y are jointly distributed according to (43) then
\[ E(X^m Y^n) = \frac{(1-a)(m+n)!}{(b-a)\, b^n} + \frac{(b-1)\, a^{m+1} (m+n)!}{b-a} + \frac{b(1-a)(b-a)^{m-1} m!}{(b-1)^m} \sum_{k=0}^{m} \frac{(n+k)!}{k!} \left( \frac{b-1}{b-a} \right)^k \left\{ a^k - b^{-(n+1)} \right\} \]
for $m \ge 1$ and $n \ge 1$.

Proof. One can express
\[ E(X^m Y^n) = \frac{1-a}{(b-a)\, b^n} \int_0^\infty x^{m+n} \exp(-x)\, dx + \frac{b-1}{(b-a)\, a^n} \int_0^\infty x^{m+n} \exp\left( -\frac{x}{a} \right) dx + \frac{b(b-1)(1-a)}{(b-a)^2} \int_0^\infty \int_{a y}^{b y} x^m y^n \exp\left( -\frac{(b-1) x + b(1-a) y}{b-a} \right) dx\, dy \]
\[ = \frac{(1-a)(m+n)!}{(b-a)\, b^n} + \frac{(b-1)\, a^{m+1} (m+n)!}{b-a} + \frac{b(b-1)(1-a)}{(b-a)^2} \left( \frac{b-a}{b-1} \right)^{m+1} \left[ \int_0^\infty y^n \exp\left( -\frac{b(1-a) y}{b-a} \right) \Gamma\left( m+1, \frac{(b-1) a y}{b-a} \right) dy - \int_0^\infty y^n \exp\left( -\frac{b(1-a) y}{b-a} \right) \Gamma\left( m+1, \frac{(b-1) b y}{b-a} \right) dy \right] \]
\[ = \frac{(1-a)(m+n)!}{(b-a)\, b^n} + \frac{(b-1)\, a^{m+1} (m+n)!}{b-a} + \frac{b(b-1)(1-a)\, m!}{(b-a)^2} \left( \frac{b-a}{b-1} \right)^{m+1} \left[ \sum_{k=0}^{m} \frac{1}{k!} \left( \frac{(b-1) a}{b-a} \right)^k \int_0^\infty y^{n+k} \exp\left( -\frac{b(1-a) y + (b-1) a y}{b-a} \right) dy - \sum_{k=0}^{m} \frac{1}{k!} \left( \frac{(b-1) b}{b-a} \right)^k \int_0^\infty y^{n+k} \exp\left( -\frac{b(1-a) y + (b-1) b y}{b-a} \right) dy \right], \tag{48} \]
where we have used the definition of the complementary incomplete gamma function and the identity (28). The result of the lemma follows by elementary integration of the two integrals in (48).

The moments of R = X + Y are now simple consequences of this lemma as illustrated in Theorem 16. The moments of W = X/(X + Y) require a separate treatment as shown by Theorem 17.

Theorem 16. If X and Y are jointly distributed according to (43) then
\[ E(R^n) = \frac{(1-a)(b+1)^n n!}{(b-a)\, b^n} + \frac{(b-1)\, a (1+a)^n n!}{b-a} + \sum_{k=0}^{n} \frac{n!\, b(1-a)(b-a)^{n-k-1}}{(b-1)^{n-k}\, k!} \sum_{l=0}^{n-k} \frac{(k+l)!}{l!} \left( \frac{b-1}{b-a} \right)^l \left\{ a^l - b^{-(k+1)} \right\} \tag{49} \]
for $n \ge 1$.

Proof. The result in (49) follows by writing
\[ E\{(X+Y)^n\} = \sum_{k=0}^{n} \binom{n}{k} E(X^{n-k} Y^k) \]
and applying Lemma 4 to each expectation in the sum.

Theorem 17. If X and Y are jointly distributed according to (43) then
\[ E(W^n) = \frac{(1-a)\, b^{n-2}}{(b-a)(b+1)^{n-2}} + \frac{(b-1)\, a^n}{(b-a)(a+1)^{n-2}} + \frac{b(b-1)(1-a)}{(a b - 1)^{n+1}} \sum_{m=0}^{n} \binom{n}{m} (-1)^{n-m} \{b(1-a)\}^{n-m} \delta(m-2) \tag{50} \]
for $n \ne 1$, where
\[ \delta(k) = \begin{cases} \dfrac{1}{k+1} \left[ \left( \dfrac{b(b-a)}{b+1} \right)^{k+1} - \left( \dfrac{b-a}{a+1} \right)^{k+1} \right], & \text{if } k \ne -1, \\[2ex] \log \dfrac{b(a+1)}{b+1}, & \text{if } k = -1. \end{cases} \]

Proof. Using (47), one can write
\[ E(W^n) = \frac{(1-a)\, b^{n-2}}{(b-a)(b+1)^{n-2}} + \frac{(b-1)\, a^n}{(b-a)(a+1)^{n-2}} + b(b-1)(1-a) \int_{a/(a+1)}^{b/(b+1)} \frac{w^n}{\{(b-1) w + b(1-a)(1-w)\}^2}\, dw \]
\[ = \frac{(1-a)\, b^{n-2}}{(b-a)(b+1)^{n-2}} + \frac{(b-1)\, a^n}{(b-a)(a+1)^{n-2}} + \frac{b(b-1)(1-a)}{(a b - 1)^{n+1}} \int_{(b-a)/(a+1)}^{b(b-a)/(b+1)} u^{-2} \{u - b(1-a)\}^n\, du \]
\[ = \frac{(1-a)\, b^{n-2}}{(b-a)(b+1)^{n-2}} + \frac{(b-1)\, a^n}{(b-a)(a+1)^{n-2}} + \frac{b(b-1)(1-a)}{(a b - 1)^{n+1}} \sum_{m=0}^{n} \binom{n}{m} (-1)^{n-m} \{b(1-a)\}^{n-m} \int_{(b-a)/(a+1)}^{b(b-a)/(b+1)} u^{m-2}\, du, \tag{51} \]
where we have set $u = (b-1) w + b(1-a)(1-w)$. The result of the theorem follows by noting that the integral in (51) reduces to $\delta(m-2)$ after elementary integration.
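A simple numerical check of the absolutely continuous part of this model (ours, not from the paper): the continuous branch of (47), the continuous part of (45) and a direct double integral of the middle branch of (43) over the cone should all carry the same probability mass, (b − 1)(1 − a)/(b − a).

```python
# Sketch: consistency checks for the continuous part of the Lawrance-Lewis model (43). Illustrative only.
import numpy as np
from scipy import integrate

a, b = 0.4, 3.0
mass = (b - 1.0) * (1.0 - a) / (b - a)   # probability of the cone a y < x < b y

# continuous branch of f_W in (47), supported on (a/(a+1), b/(b+1))
f_W_cont = lambda w: b*(b-1)*(1-a) / ((b-1)*w + b*(1-a)*(1-w))**2
print(integrate.quad(f_W_cont, a/(a+1), b/(b+1))[0], mass)

# continuous part of f_R in (45)
f_R_cont = lambda r: -b*(b-1)*(1-a) / ((b-a)*(a*b-1)) * (np.exp(-b*r/(b+1)) - np.exp(-r/(a+1)))
print(integrate.quad(f_R_cont, 0, np.inf)[0], mass)

# double integral of the middle branch of (43) over the cone (inner variable x, outer variable y)
joint = lambda x, y: b*(b-1)*(1-a)/(b-a)**2 * np.exp(-((b-1)*x + b*(1-a)*y)/(b-a))
print(integrate.dblquad(joint, 0, np.inf, lambda y: a*y, lambda y: b*y)[0], mass)
```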
Acknowledgments

The authors would like to thank the editor and the referee for carefully reading the paper and for their great help in improving the paper.

References

[1] Adachi K, Kodama M. Availability analysis of two-unit warm standby system with inspection time. Microelectronics and Reliability 1980;20:449–55.
[2] Arnold BC, Strauss DJ. Bivariate distributions with exponential conditionals. Journal of the American Statistical Association 1988;83:522–7.
[3] Barlow RE, Proschan F. Techniques for analyzing multivariate failure data. In: Tsokos CP, Shimi IN, editors. The theory and applications of reliability, vol. 1. New York: Academic Press; 1977. p. 373–96.
[4] Biswas S, Nair G. A generalization of Freund's model for a repairable paired component based on a bivariate Geiger Muller (G.M.) counter. Microelectronics and Reliability 1984;24:671–5.
[5] Cordova JR, Rodriguez-Iturbe I. On the probabilistic structure of storm surface runoff. Water Resources Research 1985;21:755–63.
[6] Dargahinoubary GR, Razzaghi M. Earthquake hazard assessment based on bivariate exponential-distributions. Reliability Engineering and System Safety 1994;44:153–66.
[7] Downton F. Bivariate exponential distributions in reliability theory. Journal of the Royal Statistical Society, B 1970;32:408–17.
[8] Ebrahimi N. Analysis of bivariate accelerated life test data for the bivariate exponential distribution. American Journal of Mathematical and Management Sciences 1987;7:175–90.
[9] Freund JE. A bivariate extension of the exponential distribution. Journal of the American Statistical Association 1961;56:971–7.
[10] Gaver DP. Point process problems in reliability. In: Lewis PAW, editor. Stochastic point processes: Statistical analysis. New York: John Wiley and Sons; 1972. p. 774–800.
[11] Goel LR, Gupta R, Singh SK. A two-unit parallel redundant system with three modes and bivariate exponential lifetimes. Microelectronics and Reliability 1984;24:25–8.
[12] Goel LR, Gupta R, Singh SK. Availability analysis of a two-unit (dissimilar) parallel system with inspection and bivariate exponential lifetimes. Microelectronics and Reliability 1985;25:77–80.
[13] Goel LR, Gupta R, Tyagi PK. Cost-benefit-analysis of a complex system with correlated failures and repairs. Microelectronics and Reliability 1993;33:2281–4.
[14] Goel LR, Gupta R, Tyagi PK. Analysis of a 2-unit standby system with preparation time and correlated failures and repairs. Microelectronics and Reliability 1995;35:1163–5.
[15] Goel LR, Mumtaz SZ. Analysis of a 2-server 2-unit cold standby system with correlated failures and repairs. Microelectronics and Reliability 1994;34:731–4.
[16] Goel LR, Mumtaz SZ. An inspection policy for unrevealed failures in a 2-unit cold standby system subject to correlated failures and repairs. Microelectronics and Reliability 1994;34:1279–82.
[17] Goel LR, Mumtaz SZ, Agrawal S. Analysis of an on-surface transit system with correlated failures and repairs. Microelectronics and Reliability 1994;34:165–9.
[18] Goel LR, Mumtaz SZ, Gupta R. A two-unit duplicating standby system with correlated failure-repair replacement times. Microelectronics and Reliability 1996;36:517–23.
[19] Goel LR, Shrivastava P. A 2-unit cold standby system with 3 modes and correlated failures and repairs. Microelectronics and Reliability 1991;31:835–40.
[20] Goel LR, Shrivastava P. Profit analysis of a 2-unit redundant system with provision for rest and correlated failures and repairs. Microelectronics and Reliability 1991;31:827–33.
[21] Goel LR, Shrivastava P. A 2-unit standby system with imperfect switch, preventive maintenance and correlated failures and repairs. Microelectronics and Reliability 1992;32:1687–91.
[22] Goel LR, Shrivastava P. A warm standby redundant system with correlated failures and repairs. Microelectronics and Reliability 1992;32:793–7.
[23] Goel LR, Tyagi PK. A 2-unit cold standby system with allowed down-time and correlated failures and repairs. Microelectronics and Reliability 1992;32:503–7.
[24] Goel LR, Tyagi PK. A 2 unit series system with correlated failures and repairs. Microelectronics and Reliability 1993;33:2165–9.
[25] Goel LR, Tyagi PK, Gupta R. Analysis of a standby system with dependent repair time and slow switching device. Microelectronics and Reliability 1994;34:383–6.
[26] Goel LR, Tyagi PK, Gupta R. A cold standby system with arrival time of server and correlated failures and repairs. Microelectronics and Reliability 1995;35:739–42.
[27] Gradshteyn IS, Ryzhik IM. Table of integrals, series, and products. 6th ed. San Diego: Academic Press; 2000.
[28] Hagen EW. Common-mode/common-cause failure: a review. Annals of Nuclear Energy 1980;7:509–17.
[29] Inaba T, Shirahata S. Measures of dependence in normal models and exponential models by information gain. Biometrika 1986;73:345–52.
[30] Kimura A, Seyama A. Statistical properties of short-term overtopping. In: Edge BL, editor. Proceedings of the nineteenth coastal engineering conference, vol. I. New York: American Society of Civil Engineers; 1985. p. 532–46.
[31] Klein JP, Moeshberger ML. Bounds on net survival probabilities for dependent competing risks. Biometrics 1986;44:529–38.
[32] Lai TL. Control charts based on weighted sums. Annals of Statistics 1974;2:134–47.
[33] Lawrance AJ, Lewis PAW. Simple dependent pairs of exponential and uniform random variables. Operations Research 1983;31:1179–97.
[34] Marshall AW, Olkin I. A multivariate exponential distribution. Journal of the American Statistical Association 1967;62:30–44.
[35] Marshall AW, Olkin I. A generalized bivariate exponential distribution. Journal of Applied Probability 1967;4:291–302.
[36] Nagao M, Kadoya M. Two-variate exponential distribution and its numerical table for engineering application. Bulletin of the Disaster Prevention Research Institute, Kyoto University 1971;20:183–215.
[37] O'Neill TJ. Testing for symmetry and independence in a bivariate exponential distribution. Statistics and Probability Letters 1985;3:269–74.
[38] Osaki S. A two-unit parallel redundant system with bivariate exponential lifetimes. Microelectronics and Reliability 1980;20:521–3.
[39] Osaki S, Yamada S, Hishitani J. Availability theory for two-unit nonindependent series systems subject to shut-off rules. Reliability Engineering and System Safety 1989;25:33–42.
[40] Prudnikov AP, Brychkov YA, Marichev OI. Integrals and series, vol. 1–3. Amsterdam: Gordon and Breach Science Publishers; 1986.
[41] Raftery AE. A continuous multivariate exponential distribution. Communications in Statistics—Theory and Methods 1984;13:947–65.
[42] Rai K, Van Ryzin J. Multihit models for bivariate quantal responses. Statistics and Decisions 1984;2:111–29.
[43] Ramanarayanan R, Subramanian A. A 2-unit cold standby system with Marshall–Olkin bivariate exponential life and repair times. IEEE Transactions on Reliability 1981;30:489–90.
[44] Singpurwalla ND, Kong CW. Specifying interdependence in networked systems. IEEE Transactions on Reliability 2004;53:401–5.
[45] Sun YQ, Tiwari RC. Comparing cumulative incidence functions of a competing-risks model. IEEE Transactions on Reliability 1997;46:247–53.
[46] Tyagi VK. Profit analysis of 2-unit standby system with rest period for the operator and correlated failures and repairs. Microelectronics and Reliability 1995;35:753–7.