Estimation and testing of availability of a parallel system with exponential failure and repair times




Journal of Statistical Planning and Inference 77 (1999) 237–246

Malwane M.A. Ananda
Department of Mathematical Sciences, University of Nevada, Las Vegas, NV 89154, USA
Received 14 October 1997; accepted 10 September 1998

Abstract

In this paper we consider the long-run availability of a parallel system having several independent renewable components with exponentially distributed failure and repair times. We are interested in testing availability of the system, or in constructing a lower confidence bound for the availability, by using component test data. For this problem there is no exact test or confidence bound available, and only approximate methods appear in the literature. Using the generalized p-value approach, an exact test and a generalized confidence interval are given. An example illustrates the proposed procedures, and a simulation study demonstrates their advantages over the available approximate procedures. Based on type I and type II error rates, the simulation study shows that the generalized procedures outperform the other available methods. © 1999 Elsevier Science B.V. All rights reserved.

AMS classifications: 62

Keywords: Availability; Exponential distribution; Lower confidence limits; Generalized p-values; Generalized confidence intervals

1. Introduction

To determine the long-term performance of a system that alternates between two capability states, up and down, according to some random process, one is often primarily concerned with the long-run availability of the system. It is therefore of interest to construct confidence intervals, in particular the lower confidence limit (LCL), and to perform hypothesis testing on the long-run availability of the system. Using exponential failure and repair times, Thompson (1966) gave lower confidence limits and tests of hypotheses for the system availability. Gray and Schucany (1969) considered the problem using exponential failure times and lognormal repair times. Thompson and Palicio (1975) and Martz and Waller (1982) considered the problem by a Bayesian approach; using component failure and repair data, they gave confidence limits and testing procedures for such models. With component pass/fail data, Martz


and Duran (1985) proposed several methods for constructing lower confidence limits for the long-run availability of a parallel system. For a review of availability and related work, see Lie et al. (1977).

In this paper, we consider the long-run availability of a parallel system consisting of several independent renewable components with exponentially distributed failure and repair times. We are interested in testing availability of the system, or in constructing an LCL for the availability, by using component failure and repair times. For this problem, there is no exact test or confidence bound available in the literature. Elperin and Gertsbakh (1988) considered this problem and gave a conservative LCL for the long-run availability using a method (referred to as the M-method) proposed by Gnedenko et al. (1969). Using limited simulation studies, Elperin and Gertsbakh (1988) showed that the M-method produces slightly better results than a large-sample procedure reported in the same paper. Applications of the M-method to other systems were given in Gertsbakh (1982). Using the generalized p-value approach introduced by Tsui and Weerahandi (1989), we construct an exact test for testing long-run availability. Using the generalized confidence interval concept introduced in Weerahandi (1993), we construct a generalized LCL for the long-run availability of the parallel system. A limited simulation study was carried out to compare the performance of these generalized procedures with the other approximate procedures. In terms of type I and type II error performance, this study shows that the generalized procedures outperform the other approximate procedures.

In many statistical problems involving nuisance parameters, conventional statistical methods do not provide exact solutions. As a result, even with small sample sizes, practitioners often resort to asymptotic methods, which are known to perform very poorly with small sample sizes.
The generalized p-value approach is a recently developed method based on exact probability statements rather than on asymptotic approximations. As a result, the generalized methods perform better than the approximate procedures. In a number of simulation studies (cf. Ananda (1997), Ananda and Weerahandi (1996), Ananda and Weerahandi (1997), Thursby (1992), Weerahandi and Johnson (1992), and many others), tests and confidence intervals obtained by the generalized approach have been found to outperform the approximate procedures both in size and in power. For complete coverage and applications of these generalized tests and confidence intervals, the reader is referred to Weerahandi (1995).

2. Estimation and testing methods

Consider a parallel system having $n$ independent renewable components with exponentially distributed up and down times. For the $i$th ($i = 1, 2, \ldots, n$) component in the system, let $X_{1i}$ and $X_{2i}$ be the independent up and down times, respectively, distributed exponentially with respective means $\theta_{1i}$ and $\theta_{2i}$. Then the


availability of the system in equilibrium is given by

$$A = 1 - P(\text{all components are down}) = 1 - \prod_{i=1}^{n} P(\text{the } i\text{th component is down}) = 1 - \prod_{i=1}^{n} \frac{\theta_{2i}}{\theta_{1i} + \theta_{2i}}. \qquad (1)$$
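Formula (1) is straightforward to evaluate numerically. A minimal sketch in Python (the function name and data layout are ours):

```python
def availability(theta1, theta2):
    """Long-run availability of a parallel system, Eq. (1):
    A = 1 - prod_i [theta2_i / (theta1_i + theta2_i)],
    where theta1_i and theta2_i are the mean up and down
    times of the ith component."""
    p_all_down = 1.0
    for t1, t2 in zip(theta1, theta2):
        p_all_down *= t2 / (t1 + t2)  # P(component i is down) in equilibrium
    return 1.0 - p_all_down

# Two components with mean up time 60 and mean down time 4 each
# (the configuration underlying Example 1 in Section 3):
print(round(availability([60, 60], [4, 4]), 4))  # 0.9961
```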

We are interested in a lower confidence limit for $A$, or in testing the hypothesis

$$H_0: A \le A_0 \quad \text{vs.} \quad H_a: A > A_0, \qquad (2)$$

where $A_0$ is a known quantity. Suppose that for each component, $m_{1i}$ up times and $m_{2i}$ down times are available, and let $x_{1i}^{(1)}, x_{1i}^{(2)}, \ldots, x_{1i}^{(m_{1i})}$ and $x_{2i}^{(1)}, x_{2i}^{(2)}, \ldots, x_{2i}^{(m_{2i})}$ be the observed up and down times, respectively. Let

$$S_{1i} = \sum_{r=1}^{m_{1i}} X_{1i}^{(r)}, \qquad S_{2i} = \sum_{r=1}^{m_{2i}} X_{2i}^{(r)} \qquad \text{for } i = 1, 2, \ldots, n. \qquad (3)$$

Then $(S_{ki},\ k = 1, 2,\ i = 1, 2, \ldots, n)$ is a joint sufficient statistic for the unknown parameters. Furthermore, the $S_{ki}$'s are independent, and $2S_{ki}/\theta_{ki}$ $(k = 1, 2,\ i = 1, 2, \ldots, n)$ has a chi-square distribution with $2m_{ki}$ degrees of freedom.

Elperin and Gertsbakh (1988) proposed a conservative $100(1-\alpha)\%$ lower confidence limit on $A$, given by

$$\mathrm{LCL}_{eg} = 1 - \prod_{i=1}^{n} \bigl[\beta_i^* / (1 + \beta_i^*)\bigr], \qquad (4)$$

where $\beta_i^* = -0.5 + [0.25 + 1/(\lambda^* V_i)]^{0.5}$ and $\lambda^*$ is the unique positive root of the equation

$$\sum_{i=1}^{n} V_i \bigl[0.25 + 1/(\lambda V_i)\bigr]^{0.5} = c_{1-\alpha} + 0.5 \sum_{i=1}^{n} V_i. \qquad (5)$$

Here $V_i = S_{1i} m_{2i}/(S_{2i} m_{1i})$ for $i = 1, 2, \ldots, n$, and $c_{1-\alpha}$ is the $(1-\alpha)$th quantile of the distribution of the random variable $U = \sum_{i=1}^{n} F_i$, where $F_i$ has an F-distribution with $(2m_{1i}, 2m_{2i})$ degrees of freedom. For given $\alpha$, the percentage point $c_{1-\alpha}$ must be evaluated by simulation, and Eq. (5) must be solved for $\lambda^*$ numerically.

Elperin and Gertsbakh (1988) also reported the following LCL based on a large-sample ($m_{1i}, m_{2i}$ large) normal approximation. An asymptotic $100(1-\alpha)\%$ lower confidence limit for the availability is given by

$$\mathrm{LCL}_l = 1 - \exp\left\{ \left[ \sum_{i=1}^{n} \ln\bigl(d_i/(1+d_i)\bigr) + z_{1-\alpha} \left( \sum_{i=1}^{n} \frac{m_{1i}^2 (m_{1i} + m_{2i} - 1)}{m_{2i} (m_{1i} - 1)^2 (m_{1i} - 2)(1 + d_i)^2} \right)^{1/2} \right]^{-} \right\},$$


where $d_i = (s_{2i}/m_{2i})/(s_{1i}/m_{1i})$, $z_{1-\alpha}$ is the $(1-\alpha)$th quantile of the standard normal distribution, and $[a]^- = \min\{0, a\}$.

Martz and Waller (1982) considered the problem in a Bayesian framework. When a parallel system consists of $n$ identical components, assuming a squared-error loss function and independent gamma priors, $\text{gamma}(a, b)$ and $\text{gamma}(c, d)$ say, on the failure and repair parameters, a $100(1-\alpha)\%$ lower confidence limit (for details, see Martz and Waller (1982, p. 572)) for the system availability is given by $(A_\alpha, 1)$, where $A_\alpha = 1 - (H_\alpha)^n$. Here

$$H_\alpha = \frac{(k+a)(y^+ + d)\, F_\alpha\bigl[2(k+a),\, 2(k+c)\bigr]}{(k+c)(x^+ + b) + (k+a)(y^+ + d)\, F_\alpha\bigl[2(k+a),\, 2(k+c)\bigr]},$$

$k$ is the observed number of failure/repair cycles, $x^+$ is the total observed operating time, $y^+$ is the total observed repair time, and $F_\alpha(n_1, n_2)$ is the $100(1-\alpha)$th percentile of the F-distribution with degrees of freedom $n_1$ and $n_2$.

Next we derive the generalized lower confidence limit for $A$ using the generalized confidence interval concept introduced in Weerahandi (1993). Even though this is an exact interval, it does not possess the repeated sampling property under the Neyman–Pearson framework. Nevertheless, even under the Neyman–Pearson framework, we show that its actual probability coverage is almost the same as the desired nominal level.

Generalized LCL for $A$. Define the random variable

$$R = \prod_{i=1}^{n} \left[ \frac{s_{2i}}{W_{2i}} \left( \frac{s_{1i}}{W_{1i}} + \frac{s_{2i}}{W_{2i}} \right)^{-1} \right], \qquad (6)$$

where the $W_{ki}$ $(k = 1, 2,\ i = 1, 2, \ldots, n)$ are independent chi-squared random variables with $2m_{ki}$ degrees of freedom, respectively. Then a $100(1-\alpha)\%$ generalized lower confidence limit for $A$ is given by $\mathrm{LCL}_g = 1 - c_0$, where $c_0$ is the $(1-\alpha)$th quantile of the random variable $R$, i.e. $\Pr(R \le c_0) = 1 - \alpha$. The value of $c_0$ can be evaluated by Monte Carlo simulation: generate a very large number of draws of the $W_{ki}$ from their chi-squared distributions, evaluate $R$ for each, and read $c_0$ off the empirical distribution of $R$. The proof of the generalized LCL is given in Appendix A.

Now consider the problem of testing the hypothesis given in Eq. (2). Using the first two lower confidence limits (M-method and L-method), this can be done at level of significance $\alpha$ if $A_0 \notin (\mathrm{LCL}, 1)$. Since these intervals are approximate confidence intervals, these procedures yield approximate tests.

Generalized p-value for testing $H_0$. The hypothesis $H_0$ given in Eq. (2) can be tested using the generalized p-value

$$p = \Pr\left( \prod_{i=1}^{n} \left[ \frac{s_{2i}}{W_{2i}} \left( \frac{s_{1i}}{W_{1i}} + \frac{s_{2i}}{W_{2i}} \right)^{-1} \right] - 1 + A_0 \ge 0 \right), \qquad (7)$$


where the $W_{ki}$ $(k = 1, 2,\ i = 1, 2, \ldots, n)$ are independent chi-squared random variables with $2m_{ki}$ degrees of freedom. This p-value can easily be computed by Monte Carlo simulation. The derivation of this testing procedure is given in Appendix A. The p-value is an exact probability of a well-defined extreme region of the sample space and measures the evidence in favor of the null hypothesis; it yields an exact test in significance testing. In fixed-level testing, one can use this p-value by rejecting the null hypothesis if the generalized p-value is less than the desired nominal level $\alpha$. In the next section, using simulation studies, we will show that the actual type I error rate of this procedure is very close to the desired nominal level (unlike the other two methods) and that it outperforms the other two methods.

3. Examples

In this section, we give one example and two simulation studies to illustrate the proposed procedures and to demonstrate their advantages over the other procedures.

Example 1. Consider the data set analyzed in Elperin and Gertsbakh (1988). This data set was originally reported in Martz and Waller (1982) and analyzed using Bayesian techniques. The parallel system has two renewable components with exponentially distributed failure and repair times. The failure time distributions are $X_{11} \sim \mathrm{Exp}(60)$ and $X_{12} \sim \mathrm{Exp}(60)$; the corresponding repair time distributions are $X_{21} \sim \mathrm{Exp}(4)$ and $X_{22} \sim \mathrm{Exp}(4)$. Each component has four failure and repair times, as follows:

Component 1: failure times 74.3, 19.0, 26.7, 88.5; repair times 0.5, 10.1, 5.8, 1.2.
Component 2: failure times 128.3, 17.8, 47.8, 5.2; repair times 11.8, 4.8, 3.6, 5.0.

The corresponding values of the sufficient statistic given in Eq. (3) are $S_{11} = 208.4$, $S_{12} = 199.1$, $S_{21} = 17.6$, and $S_{22} = 25.2$.
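As a sketch, both the generalized lower limit based on (6) and the generalized p-value (7) can be approximated for this data set by straight Monte Carlo; chi-squared draws are obtained from the gamma distribution (a chi-squared variable with $k$ degrees of freedom is Gamma($k/2$, scale 2)). The seed, replication count, and the choice $A_0 = 0.95$ are ours, and the quantile estimate varies slightly from run to run:

```python
import random

random.seed(1)

def chi2(df):
    # chi-squared(df) draw via Gamma(shape=df/2, scale=2)
    return random.gammavariate(df / 2.0, 2.0)

# Example 1 data: total up/down times and sample sizes per component
s1 = [208.4, 199.1]   # S_1i: total observed up times
s2 = [17.6, 25.2]     # S_2i: total observed down times
m1 = [4, 4]           # m_1i: number of observed up times
m2 = [4, 4]           # m_2i: number of observed down times

def draw_R():
    """One realization of R in Eq. (6); W_ki is chi-squared
    with 2*m_ki degrees of freedom."""
    r = 1.0
    for i in range(len(s1)):
        u = s1[i] / chi2(2 * m1[i])   # s_1i / W_1i
        d = s2[i] / chi2(2 * m2[i])   # s_2i / W_2i
        r *= d / (u + d)
    return r

draws = sorted(draw_R() for _ in range(100_000))

# 95% generalized LCL: 1 - c0, with c0 the 0.95 quantile of R
c0 = draws[int(0.95 * len(draws))]
lcl_g = 1.0 - c0
print("LCL_g approx.", round(lcl_g, 3))

# Generalized p-value (7) for H0: A <= A0, with A0 = 0.95 (our choice)
A0 = 0.95
p = sum(r >= 1.0 - A0 for r in draws) / len(draws)
print("p approx.", round(p, 3))
```

The limit printed here should be close to the 0.962 reported below, and the small p-value rejects $H_0: A \le 0.95$ at the 5% level, consistent with 0.95 lying below the generalized LCL.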
Using the M-method and the L-method, the 95% lower confidence limits for the long-run availability reported in Elperin and Gertsbakh (1988) are 0.948 and 0.879, respectively. The 95% generalized lower confidence limit is 0.962. The true value of the long-run availability is 0.9961. Using noninformative prior distributions on both failure and repair times, the 95% Bayesian lower confidence limit reported in Martz and Waller (1982) is 0.964. The actual probability coverages of these procedures (M-method, L-method, and G-method) are 0.988, 0.999 (from Elperin and Gertsbakh's paper), and 0.947, respectively. According to these numerical values, the generalized LCL is in close agreement with the noninformative Bayes LCL. However, the Bayesian interval under the noninformative prior is not exactly numerically equal to the generalized interval. As in


Table 1
Empirical confidence levels when nominal confidence $1 - \alpha = 0.9$ (based on 50 000 replications)

Parameters: n; θ1 = (θ11, …, θ1n); θ2 = (θ21, …, θ2n)        G-Meth.  M-Meth.  L-Meth.
n = 2; θ1 = (1, 1); θ2 = (0.05, 0.06)                        0.894    0.939    0.983
n = 2; θ1 = (1, 1); θ2 = (0.25, 0.25)                        0.878    0.934    0.985
n = 2; θ1 = (10, 15); θ2 = (0.1, 0.2)                        0.899    0.942    0.983
n = 4; θ1 = (1, 1, 1, 1); θ2 = (0.12, 0.14, 0.15, 0.15)      0.871    0.967    0.983
n = 4; θ1 = (1, 1, 3, 3); θ2 = (0.1, 0.2, 0.2, 0.3)          0.876    0.970    0.983
n = 4; θ1 = (2, 3, 4, 5); θ2 = (0.1, 0.1, 0.2, 0.2)          0.891    0.972    0.983

Table 2
Type I error rates for testing $H_0$ when nominal level $\alpha = 0.1$ (based on 50 000 replications)

Parameters: n; θ1 = (θ11, …, θ1n); θ2 = (θ21, …, θ2n)        G-Meth.  M-Meth.  L-Meth.
n = 2; θ1 = (1, 1); θ2 = (0.05, 0.06)                        0.106    0.061    0.017
n = 2; θ1 = (1, 1); θ2 = (0.25, 0.25)                        0.119    0.063    0.014
n = 2; θ1 = (10, 15); θ2 = (0.1, 0.2)                        0.101    0.058    0.018
n = 4; θ1 = (1, 1, 1, 1); θ2 = (0.12, 0.14, 0.15, 0.15)      0.129    0.032    0.018
n = 4; θ1 = (1, 1, 3, 3); θ2 = (0.1, 0.2, 0.2, 0.3)          0.125    0.031    0.017
n = 4; θ1 = (2, 3, 4, 5); θ2 = (0.1, 0.1, 0.2, 0.2)          0.109    0.028    0.017

some statistical problems (see, e.g., Ananda (1997)), this generalized procedure may be related (numerically, not philosophically) to some generalized Bayes procedure. It is quite possible that there is a diffuse prior which makes this connection.

Example 2. The following simulation example shows the empirical coverage of each procedure when the intended nominal confidence level is $1 - \alpha = 0.9$. The simulation is based on 50 000 replications. Sample sizes (the numbers of failure times and of repair times, i.e., $m_{1i}$ and $m_{2i}$) for all components were kept at 4. The simulation results are given in Table 1. According to these results, the actual probability coverage of the G-method is very close to the intended coverage.

Example 3. In the fixed-level hypothesis testing setup, the following simulation example compares the performance of the three procedures. As in the previous example, sample sizes for failure and repair times for each component were kept at 4 throughout. All results are based on 50 000 replications. For testing $H_0: A \le A_0$ vs. $H_a: A > A_0$, Table 2 shows the empirical type I error rates when the nominal (intended) type I error rate is 0.1. Table 3 shows a power comparison for testing $H_0: A \le 0.96$ vs. $H_a: A > 0.96$ for a two-component system; the values in this table are powers without adjusting the size. Table 4 shows the power comparison (for the same parameter configurations as in Table 3) after adjusting the size to $\alpha = 0.1$ (i.e., after adjusting the actual type I error rate to 0.1). Without adjusting the size, the powers of the G-method clearly dominate the other two methods. Even after adjusting the size, the G-method still maintains a slight advantage


Table 3
Comparison of power for a two-component system (without adjusting the size) for testing $H_0: A \le 0.96$ vs. $H_a: A > 0.96$ (based on 50 000 replications)

Parameters: θ1 = (θ11, θ12); θ2 = (θ21, θ22); and A          G-Meth.  M-Meth.  L-Meth.
θ1 = (1, 1); θ2 = (0.25, 0.25); A = 0.9600                   0.119    0.063    0.014
θ1 = (1, 1); θ2 = (0.25, 0.15); A = 0.9739                   0.246    0.148    0.043
θ1 = (1, 1); θ2 = (0.15, 0.15); A = 0.9829                   0.416    0.284    0.104
θ1 = (1, 1); θ2 = (0.15, 0.10); A = 0.9881                   0.571    0.425    0.192
θ1 = (1, 1); θ2 = (0.10, 0.10); A = 0.9917                   0.716    0.580    0.310
θ1 = (2, 4); θ2 = (0.20, 0.30); A = 0.9936                   0.801    0.684    0.412
θ1 = (1, 1); θ2 = (0.15, 0.05); A = 0.9938                   0.815    0.701    0.429
θ1 = (1, 1); θ2 = (0.05, 0.05); A = 0.9977                   0.967    0.934    0.794

Table 4
Comparison of power for a two-component system (after adjusting the size) for testing $H_0: A \le 0.96$ vs. $H_a: A > 0.96$ (power based on 50 000 replications; the nominal sizes are based on 75 000 replications)

Parameters: θ1 = (θ11, θ12); θ2 = (θ21, θ22); and A          G-Meth.  M-Meth.  L-Meth.
θ1 = (1, 1); θ2 = (0.25, 0.25); A = 0.9600                   0.100    0.100    0.100
θ1 = (1, 1); θ2 = (0.25, 0.15); A = 0.9739                   0.215    0.208    0.205
θ1 = (1, 1); θ2 = (0.15, 0.15); A = 0.9829                   0.376    0.367    0.362
θ1 = (1, 1); θ2 = (0.15, 0.10); A = 0.9881                   0.534    0.524    0.519
θ1 = (1, 1); θ2 = (0.10, 0.10); A = 0.9917                   0.671    0.663    0.659
θ1 = (2, 4); θ2 = (0.20, 0.30); A = 0.9936                   0.766    0.760    0.756
θ1 = (1, 1); θ2 = (0.15, 0.05); A = 0.9938                   0.784    0.776    0.773
θ1 = (1, 1); θ2 = (0.05, 0.05); A = 0.9977                   0.959    0.958    0.957

over the other two methods. It should be noted that when comparing powers of approximate tests, one needs to adjust the sizes in order to get a meaningful comparison. In practice, however, this is of less concern, since a practitioner is not going to adjust the nominal size in order to attain the desired level. For instance, judging by the adjusted power comparison alone, the performance of the L-method is almost as good as that of the other two procedures, but the L-method is clearly an extremely conservative procedure: its actual type I error rate is almost one tenth of the intended level (in the power comparison example above, the actual type I error rate of the L-method is 0.014 while the intended level is 0.1). Overall, according to these simulation studies, not only are the actual type I error rates of the generalized procedure closer to the intended size than those of the other two procedures, it also outperforms them in terms of power. Moreover, unlike the other two procedures, the generalized test is an exact test in significance testing, and the generalized confidence interval is an exact confidence interval based on exact probability statements.
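The coverage experiment behind Table 1 can be sketched as follows. This toy version (all names are ours) uses far fewer replications and Monte Carlo draws than the 50 000 used above, so its estimate is rough; it draws m = 4 exponential up and down times per component, computes the G-method LCL of Eq. (6), and counts how often the LCL falls below the true availability:

```python
import random

random.seed(2)

def chi2(df):
    # chi-squared(df) draw via Gamma(shape=df/2, scale=2)
    return random.gammavariate(df / 2.0, 2.0)

def lcl_g(s1, s2, m1, m2, alpha=0.1, reps=1000):
    """Generalized LCL of Eq. (6) by Monte Carlo."""
    def draw_R():
        r = 1.0
        for i in range(len(s1)):
            u = s1[i] / chi2(2 * m1[i])
            d = s2[i] / chi2(2 * m2[i])
            r *= d / (u + d)
        return r
    draws = sorted(draw_R() for _ in range(reps))
    return 1.0 - draws[int((1 - alpha) * reps)]

def coverage(theta1, theta2, m=4, trials=200):
    """Fraction of simulated data sets whose G-method LCL
    lies below the true availability."""
    n = len(theta1)
    p_down = 1.0
    for t1, t2 in zip(theta1, theta2):
        p_down *= t2 / (t1 + t2)
    a_true = 1.0 - p_down
    hits = 0
    for _ in range(trials):
        # total up/down times: sums of m exponentials per component
        s1 = [sum(random.expovariate(1.0 / t) for _ in range(m)) for t in theta1]
        s2 = [sum(random.expovariate(1.0 / t) for _ in range(m)) for t in theta2]
        if lcl_g(s1, s2, [m] * n, [m] * n) <= a_true:
            hits += 1
    return hits / trials

# First row of Table 1: n = 2, theta1 = (1, 1), theta2 = (0.05, 0.06);
# the nominal coverage is 1 - alpha = 0.9
cov = coverage([1, 1], [0.05, 0.06])
print("empirical coverage approx.", cov)
```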


Appendix A

In this appendix we provide proofs for the test and the confidence bound given in Section 2. The proofs are based on the generalized tests and generalized confidence intervals introduced in Tsui and Weerahandi (1989) and Weerahandi (1993). For more details of these generalized procedures, readers are referred to Weerahandi (1995).

A.1. Derivation of the generalized LCL given in Section 2

In order to construct a lower confidence limit for $A$, first construct an upper confidence limit for $\psi = \prod_{i=1}^{n} [\theta_{2i}(\theta_{1i} + \theta_{2i})^{-1}]$. By sufficiency, we can restrict attention to $S_{ki}$, $k = 1, 2$, $i = 1, 2, \ldots, n$. Notice that the underlying family of distributions parametrized by $\theta$, the vector of unknown parameters, is invariant under the common scale transformations $(S_{11}, \ldots, S_{1n}, S_{21}, \ldots, S_{2n}) \to (kS_{11}, \ldots, kS_{1n}, kS_{21}, \ldots, kS_{2n})$ and $(\theta_{11}, \ldots, \theta_{1n}, \theta_{21}, \ldots, \theta_{2n}) \to (k\theta_{11}, \ldots, k\theta_{2n})$, where $k$ is a positive constant. The parameter of interest is unaffected by any change of scale, and therefore the statistical problem is invariant under the scale transformation. Now consider the potential generalized pivotal

$$R(X; x; \theta) = \prod_{i=1}^{n} \left[ \frac{s_{2i}\theta_{2i}}{S_{2i}} \left( \frac{s_{1i}\theta_{1i}}{S_{1i}} + \frac{s_{2i}\theta_{2i}}{S_{2i}} \right)^{-1} \right], \qquad (\text{A.1})$$

where $s_{ki}$, $k = 1, 2$, $i = 1, 2, \ldots, n$, are the observed values of $S_{ki}$, $k = 1, 2$, $i = 1, 2, \ldots, n$. Since $W_{ki} = 2S_{ki}/\theta_{ki}$ has a chi-squared distribution with $2m_{ki}$ degrees of freedom, the probability distribution of $R$ is free of unknown parameters. The observed value of $R(X; x; \theta)$ is $R(x; x; \theta) = \psi$, which does not depend on nuisance parameters; hence $R$ is a generalized pivotal quantity. In order to construct a $100(1-\alpha)\%$ upper confidence limit for $\psi$, find $c_0$ such that

$$1 - \alpha = \Pr(R \le c_0) = \Pr\left( \prod_{i=1}^{n} \left[ \frac{s_{2i}}{W_{2i}} \left( \frac{s_{1i}}{W_{1i}} + \frac{s_{2i}}{W_{2i}} \right)^{-1} \right] \le c_0 \right),$$

where $c_0$ is the $(1-\alpha)$th quantile of the random variable $R$. Then a $100(1-\alpha)\%$ upper confidence limit for $\psi$ is given by $R(x; x; \theta) \le c_0$, i.e., $\psi \le c_0$. Hence, a $100(1-\alpha)\%$ one-sided confidence interval for $A$ is given by $(1 - c_0, 1]$.

This confidence interval is unique within the class of scale-invariant interval estimates, and this can be established as follows. It is easy to see that $(S_{12}S_{11}^{-1}, S_{13}S_{11}^{-1}, \ldots, S_{1n}S_{11}^{-1}, S_{21}S_{11}^{-1}, S_{22}S_{11}^{-1}, \ldots, S_{2n}S_{11}^{-1})$ is a maximal invariant for this problem. Since $R$ can be written in the form of Eq. (6), the distribution of $R$ depends on the data only through the set of observed maximal invariants $(s_{12}s_{11}^{-1}, s_{13}s_{11}^{-1}, \ldots, s_{1n}s_{11}^{-1}, s_{21}s_{11}^{-1}, s_{22}s_{11}^{-1}, \ldots, s_{2n}s_{11}^{-1})$. Furthermore, the observed value of $R$ does not depend on the data, which implies that it too depends on the data only through the observed maximal invariant. Therefore, any invariant generalized confidence region for $\psi$ based on the data can be constructed using $R$


alone (for details, see Theorem 6.1, p. 152, in Weerahandi (1995)), and the confidence interval is unique within the class of scale-invariant interval estimates based on the sufficient statistic $S_{ki}$, $k = 1, 2$, $i = 1, 2, \ldots, n$.

A.2. Derivation of the generalized test given in Eq. (7)

The problem of testing the hypothesis in Eq. (2) is equivalent to testing

$$H_0: \psi = \prod_{i=1}^{n} [\theta_{2i}(\theta_{1i} + \theta_{2i})^{-1}] \ge \psi_0 = 1 - A_0 \quad \text{vs.} \quad H_a: \psi < 1 - A_0.$$

Consider the potential generalized test variable defined by

$$T(X; x; \theta) = \prod_{i=1}^{n} \left[ \frac{s_{2i}\theta_{2i}}{S_{2i}} \left( \frac{s_{1i}\theta_{1i}}{S_{1i}} + \frac{s_{2i}\theta_{2i}}{S_{2i}} \right)^{-1} \right] - \psi. \qquad (\text{A.2})$$

The observed value of $T$ is $T_{\mathrm{obs}} = T(x; x; \theta) = 0$. Since $T$ can be written as

$$T(X; x; \theta) = \prod_{i=1}^{n} \left[ \frac{s_{2i}}{W_{2i}} \left( \frac{s_{1i}}{W_{1i}} + \frac{s_{2i}}{W_{2i}} \right)^{-1} \right] - \psi,$$

it is clear that when $\psi$ is specified, $T$ has a probability distribution that is free of nuisance parameters. Furthermore, when $x$ and the nuisance parameters are fixed, the cdf of $T$, $\Pr(T \le t; \psi)$, is a monotonically increasing function of $\psi$ for any given $t$; i.e., $T$ is stochastically decreasing in $\psi$. Therefore, $T$ is a test variable that can be used to test the hypothesis in (2), and the generalized p-value is given by

$$p = \Pr(T \ge T_{\mathrm{obs}} \mid \psi = \psi_0),$$

which can be written in the form of Eq. (7). Moreover, this p-value is unique within the class of scale-invariant tests, and this can be established as follows. As described in the previous proof, the family of distributions parametrized by $\theta$ and the testing problem are invariant under the scale transformation. Since $T$ can be expressed as

$$T(X; x; \theta) = \prod_{i=1}^{n} \left[ \frac{s_{2i}}{s_{1i}W_{2i}} \left( \frac{1}{W_{1i}} + \frac{s_{2i}}{s_{1i}W_{2i}} \right)^{-1} \right] - \psi,$$

the distribution of $T$ depends on the data $x$ only through the observed maximal invariant. Since the observed value of $T$ is equal to zero, it too depends on the data only through the observed maximal invariant. Therefore, any scale-invariant test can be obtained using $T$ alone (for details, see Chapter 5 of Weerahandi (1995)).

References

Ananda, M.M.A., 1997. Bayesian and non-Bayesian solutions to analysis of covariance models under heteroscedasticity. J. Econometrics 86, 177–192.


Ananda, M.M.A., Weerahandi, S., 1996. Testing the difference of two exponential means using generalized p-values. Commun. Statist. Simul. Comput. 25 (2), 521–532.
Ananda, M.M.A., Weerahandi, S., 1997. Two-way ANOVA with unequal cell frequencies and unequal variances. Statist. Sinica 7, 631–646.
Elperin, T., Gertsbakh, I., 1988. Lower confidence limit for the availability of a parallel system with renewable components. Commun. Statist. Theory Meth. 17 (2), 311–323.
Gertsbakh, I.B., 1982. Confidence limits for highly reliable coherent systems with exponentially distributed component life. J. Amer. Statist. Assoc. 77, 673–678.
Gnedenko, B.V., Belyaev, Y.K., Solovyev, A.D., 1969. Mathematical Methods of Reliability Theory. Academic Press, New York.
Gray, H.L., Schucany, W.R., 1969. Lower confidence limits for availability assuming lognormally distributed repair times. IEEE Trans. Reliab. R-18, 157–162.
Lie, C.H., Hwang, C.L., Tillman, F.A., 1977. Availability of maintained systems: a state-of-the-art survey. AIIE Trans. 9, 247–259.
Martz, H.F., Duran, B.S., 1985. A comparison of three methods for calculating lower confidence limits on system reliability using binomial data. IEEE Trans. Reliab. R-34, 113–120.
Martz, H.F., Waller, R.A., 1982. Bayesian Reliability Analysis. Wiley, New York.
Thompson, M., 1966. Lower confidence limits and a test of hypotheses for system availability. IEEE Trans. Reliab. R-15, 32–36.
Thompson, W.E., Palicio, P.A., 1975. Bayesian confidence limits for the availability of systems. IEEE Trans. Reliab. R-24, 118–120.
Thursby, J.G., 1992. A comparison of several exact and approximate tests for structural shift under heteroskedasticity. J. Econometrics 53, 363–386.
Tsui, K., Weerahandi, S., 1989. Generalized p-values in significance testing of hypotheses in the presence of nuisance parameters. J. Amer. Statist. Assoc. 84, 602–607.
Weerahandi, S., 1993. Generalized confidence intervals. J. Amer. Statist. Assoc. 88, 899–905.
Weerahandi, S., 1995. Exact Statistical Methods for Data Analysis. Springer, New York.
Weerahandi, S., Johnson, R.A., 1992. Testing reliability in a stress-strength model when X and Y are normally distributed. Technometrics 34, 83–91.
Exact Statistical Methods for Data Analysis. Springer, New York. Weerahandi, S., Johnson, R.A., 1992. Testing reliability in a stress-strength model when X and Y are normally distributed. Technometrics 34, 83–91.