
Exponential progressive step-stress life-testing with link function based on Box–Cox transformation

Tsai-Hung Fan^a,∗, Wan-Lun Wang^a, N. Balakrishnan^b

^a Graduate Institute of Statistics, National Central University, Jhongli 32001, Taiwan
^b Department of Mathematics and Statistics, McMaster University, Hamilton, Ont., Canada

Received 16 May 2007; received in revised form 11 September 2007; accepted 19 October 2007 Available online 20 November 2007

Abstract

In order to quickly extract information on the life of a product, accelerated life-tests are usually employed. In this article, we discuss a k-stage step-stress accelerated life-test with M stress variables when the underlying data are progressively Type-I group censored. The life-testing model assumed is an exponential distribution with a link function that relates the failure rate and the stress variables in a linear way under the Box–Cox transformation, and a cumulative exposure model for modelling the effect of stress changes. The classical maximum likelihood method as well as a fully Bayesian method based on the Markov chain Monte Carlo (MCMC) technique is developed for inference on all the parameters of this model. Numerical examples are presented to illustrate all the methods of inference developed here, and a comparison of the ML and Bayesian methods is also carried out.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Step-stress test; Accelerated life-testing; Maximum likelihood estimates; Bayesian inference; Fisher-scoring algorithm; Markov chain Monte Carlo; Cumulative exposure model; Box–Cox transformation; Progressive censoring; Link function

1. Introduction

Reliability demonstration or verification testing for highly reliable products, such as industrial commodities or high-risk public facilities, is often time-consuming as well as expensive under normal operating conditions, since it would take a long period of time to produce a reasonable number of failures for the required analysis. In order to reduce the experimental cost and also to avoid observing some extreme lifetimes, accelerated life-testing (ALT) is used frequently in these situations. The design and analysis of such ALTs are, therefore, very important from a practical viewpoint. The aim of an ALT is to increase the rate of failures of highly reliable items in an efficient and meaningful manner. Typically, the ALT design involves the selection of stress variables (or independent variables), such as temperature, humidity, voltage, pressure, etc., which define the stress environment, and the specification of optimal test levels for these stress variables to produce an environment which accelerates the failure of items under test in a meaningful way. One of the simplest and most useful ALTs is the so-called "progressive step-stress" test, in which stress is increased step-by-step over time until the completion of the experiment. On the basis of data obtained from such a step-stress test, we intend to obtain information on the life-distribution of products.

∗ Corresponding author. Tel.: +886 3 4267228; fax: +886 3 4258602.

E-mail address: [email protected] (T.-H. Fan). 0378-3758/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.jspi.2007.10.002

T.-H. Fan et al. / Journal of Statistical Planning and Inference 138 (2008) 2340 – 2354


Usually, there are constraints on the length of a life-test, and for this reason some form of censoring is commonly adopted. If, for example, unfailed items are removed from the life-test at pre-specified times, we have "Type-I censoring". Instead, if we terminate the life-test at the failure time of a particular item and remove all remaining unfailed items from the life-test, we have "Type-II censoring". Balakrishnan et al. (2007, 2008) have discussed some inferential procedures for the exponential step-stress model under these two forms of censoring. Here, we assume that the failure of items under test can only be inspected at the ends of time-intervals rather than through continuous monitoring of failures. In other words, we only record whether an item failed in a certain time-interval or not rather than observing its exact failure time. Data of this type are usually referred to as "grouped data", and in many industrial applications it is quite common to observe data in this form. We will first describe the ALT model for the general case of k stages and M stress variables, and present the corresponding matrix expressions. We will then discuss inferential methods for the model parameters based on progressively Type-I censored grouped data obtained from a step-stress life-test. For details on progressive Type-I censoring, one may refer to the monograph by Balakrishnan and Aggarwala (2000) and the recent discussion paper by Balakrishnan (2007), which provides a comprehensive review of various developments on progressive censoring. There exists an extensive literature on step-stress ALTs. The simple step-stress test, where only one change of stress occurs, was discussed by Nelson (1980). Optimal simple step-stress plans were proposed by Miller and Nelson (1983), while Bai et al. (1989) discussed the same problem under censoring. Multiple step-stress tests, where the stress changes more than once, were handled by Khamis (1997) and Khamis and Higgins (1998).
Books by Nelson (1990), Meeker and Escobar (1998) and Bagdonavicius and Nikulin (2002) all provide an elaborate treatment of statistical methods for accelerated tests. However, most of the inferential work on ALTs is based on maximum likelihood (ML) estimation, as seen, for example, in Gouno et al. (2004), Zhao and Elsayed (2005), Wu et al. (2006), and Balakrishnan and Xie (2007a, b), which may often require large sample sizes for efficient statistical inference. A Bayesian approach, on the other hand, is better suited when dealing with smaller sample sizes. Some work on Bayesian inference in the context of ALTs has been conducted by van Dorp et al. (1996) and van Dorp and Mazzuchi (2004). Most of the papers dealing with exponential step-stress tests in the literature have assumed the failure rate to be a logarithmic function of the stress; see Gouno et al. (2004), Wu et al. (2006), Yu et al. (2002), and Yeo and Tang (1999), among others. In this paper, we provide a generalization of these step-stress ALT models by considering the relationship between the failure rate and the stress variables to be linear under the Box–Cox transformation (Box and Cox, 1964). We then discuss the ML and Bayesian methods of inference for the model parameters. For the ML approach, since the estimators of the parameters cannot be expressed analytically, the popular Fisher-scoring algorithm is applied for the numerical determination of the MLEs. For the Bayesian approach, the posterior distribution obtained by updating the prior information with ALT data cannot be procured in a closed form, and a Markov chain Monte Carlo (MCMC) procedure is used for drawing posterior inference. The rest of this paper is organized as follows. In Section 2, we first introduce the basic model and notation. In Section 3, we present the computational issues for the ML approach. In Section 4, we discuss the posterior analysis with the use of the MCMC procedure, and in particular the Metropolis–Hastings (M–H) algorithm (cf.
Hastings, 1970; Gilks and Wild, 1992; Chib and Greenberg, 1995). In Section 5, some numerical comparisons are presented based on several simulation studies and also some illustrative examples. Finally, some concluding remarks are made in Section 6.

2. Model description and notation

We first describe a multi-stage step-stress ALT with Type-I progressively group-censored data involving k stress environments and pre-specified levels of M stress variables. The stress environments used in the experiment become more and more severe over time, and it is presumed that the ordering of the testing environments in terms of severity induces the same ordering in the associated failure rates, i.e., λ_0 ≡ 0 < λ_1 < · · · < λ_k < λ_{k+1} ≡ ∞. Suppose N test items are available for the step-stress life-test. Let z_i denote the stress environment during the ith time duration. The experiment is then carried out in the following manner. All N items are placed simultaneously on test at an initial stress environment z_1 = (1, z_{11}, . . . , z_{1M})^T, and run until time τ_1, at which point the number of failed items n_1 is observed and r_1 ≤ N − n_1 surviving items are removed from the test; starting at time τ_1, the remaining N − n_1 − r_1 surviving items are tested at the second stress environment, say z_2 = (1, z_{21}, . . . , z_{2M})^T, and run until time τ_2, at which point the number of failures n_2 is observed and r_2 surviving items are removed from the test, and


so on; finally, at the terminating time of the life-test τ_k, the number of failed items n_k is observed and all remaining r_k = N − Σ_{i=1}^{k} n_i − Σ_{i=1}^{k−1} r_i surviving items are censored.

Under the aforementioned step-stress ALT scheme, the following three assumptions are made:

1. At any stress level, the lifetime of a test item follows an exponential distribution. We suppose that at stress environment z_i = (1, z_{i1}, . . . , z_{iM})^T and in the time-interval [τ_{i−1}, τ_i), i = 1, 2, . . . , k (with τ_0 ≡ 0), the lifetime distribution of a test item is exponential with probability density function

   f(t|λ_i) = λ_i exp(−λ_i t),  t > 0,  (1)

   where λ_i > 0, i = 1, 2, . . . , k, is the scale parameter of the distribution, which can be viewed as the hazard function or failure rate of a test item in the ith interval.

2. The occurrences of failures follow a cumulative exposure model. The model for the failure mode assumes that the remaining life of an item depends only on the current cumulative "exposure", regardless of how the "exposure" was accumulated. Also, the change in stress has no effect on life; only the level of stress does (a Markovian property); see Nelson (1990). Based on this assumption, therefore, the cumulative distribution function (cdf) of the lifetime T over the time range [0, ∞) can be expressed as

   F(t|λ) = Σ_{i=1}^{k} [1 − exp{−λ_i (t − τ_{i−1}) − Σ_{j=1}^{i−1} λ_j Δ_j}] I_[τ_{i−1}, τ_i)(t),  (2)

   where λ = (λ_1, . . . , λ_k), i = 1, . . . , k. For simplicity, we shall denote the duration of the jth time-interval by Δ_j = τ_j − τ_{j−1}, the indicator function I_[τ_{i−1}, τ_i)(t) = 1 if t ∈ [τ_{i−1}, τ_i), and use the convention Σ_{j=1}^{0} {·} = 0.

3. The failure rate of a test item and the stress environment z_i are linked by a linear relationship under the Box–Cox transformation. Specifically, we assume the relationship between the failure rate λ_i and the stress environment z_i to be

   λ_i^(α) = z_i^T β,  (3)

   where

   λ_i^(α) = (λ_i^α − 1)/α   if α ≠ 0,
   λ_i^(α) = log(λ_i)        if α = 0,

which is known as the Box–Cox transformation, with β = (β_0, β_1, . . . , β_M)^T and α being unknown parameters. The power-transformation parameter α allows the failure rate to be linear in the stress variables on the transformed scale under a wide variety of situations. For example, it includes the commonly used square root, reciprocal and log transformations as special cases. We will refer to the model specified by (1)–(3) as the Box–Cox step-stress ALT (BC-SSALT) model. If α is set as 0, then it corresponds to the usual step-stress ALT with a log-linear link function, namely, the log-SSALT model. Observe that, in general, we have the relationship λ_i = (α z_i^T β + 1)^{1/α} for α ≠ 0. For the rest of this paper, we will be primarily concerned with the development of appropriate inferential methods for this BC-SSALT model.

3. ML estimation

In this section, we derive the maximum likelihood estimates (MLEs) of the parameters underlying the generalized SSALT model described in Section 2.

3.1. Estimation of parameters

Given stress environments Z = (z_1, . . . , z_k) and the time durations Δ_j > 0 for j = 1, . . . , k, combining the cdf of the lifetime expressed by (2) and the model assumption specified by (3) with α ≠ 0, we can directly obtain the cdf of the


lifetime of the BC-SSALT model. Then, given the numbers of failures n = (n_1, . . . , n_k) and censored items r = (r_1, . . . , r_k) corresponding to each time-interval, the log-likelihood function of the parameters (β, α) is obtained as

ℓ(β, α|Z, n, r) ∝ Σ_{i=1}^{k} {n_i log[1 − exp(−(α z_i^T β + 1)^{1/α} Δ_i)] − (m_i − n_i)(α z_i^T β + 1)^{1/α} Δ_i},  α ≠ 0,  (4)

where m_i = N − Σ_{j=1}^{i−1} n_j − Σ_{j=1}^{i−1} r_j is the number of surviving items available at the beginning of the ith stage. For determining the MLEs of (β, α), we first have the partial derivatives of (4) with respect to β and α as

∂ℓ(β, α)/∂β = Σ_{i=1}^{k} z_i (α z_i^T β + 1)^{1/α − 1} Δ_i {n_i [1 − exp(−(α z_i^T β + 1)^{1/α} Δ_i)]^{−1} − m_i},  α ≠ 0,  (5)

and

∂ℓ(β, α)/∂α = Σ_{i=1}^{k} (α z_i^T β + 1)^{1/α} Δ_i [ z_i^T β / (α(α z_i^T β + 1)) − (1/α²) log(α z_i^T β + 1) ] × [n_i [1 − exp(−(α z_i^T β + 1)^{1/α} Δ_i)]^{−1} − m_i],  α ≠ 0,  (6)

respectively. Solving the estimating equations ∂ℓ(β, α)/∂β = 0 and ∂ℓ(β, α)/∂α = 0 will yield the MLE of the underlying model parameter vector θ = (β^T, α)^T. Since the MLE of θ cannot be expressed explicitly, we apply the Fisher-scoring algorithm (Thisted, 1988) for the numerical determination of the MLE. It is an iterative procedure based on the score vector s = ∂ℓ/∂θ presented in (5) and (6) and the Fisher information matrix I found in (A.4) of the Appendix, for which we compute the expectations of the second derivatives derived in (A.1)–(A.3) of the Appendix by replacing m_i and n_i, for i = 1, . . . , k, by

E(m_i) = N G_{i−1}(τ_{i−1}) [1 − Σ_{j=1}^{i−1} π_j / G_j(τ_j)],

where G_i(τ_i) = Π_{j=1}^{i} [1 − F_j(Δ_j)] with τ_i = (τ_1, . . . , τ_i), G_0(τ_0) = 1, and π_j = r_j/N is the proportion of items censored at the end of the jth stage, j = 1, . . . , k − 1, and

E(n_i) = E(m_i) F_i(Δ_i),

where F_i(Δ_i) = 1 − exp(−(α z_i^T β + 1)^{1/α} Δ_i), α ≠ 0, respectively.

With θ̂^(l) denoting the estimate of θ at the lth step of the iteration, the iterative equation of the Fisher-scoring procedure is

θ̂^(l+1) = θ̂^(l) + Î^{(l)−1} ŝ^(l),  l = 0, 1, 2, . . . ,  (7)

where ŝ^(l) and Î^(l) are the values of s and I evaluated at θ̂^(l), respectively. Given the initial value θ̂^(0), the MLE θ̂ = (β̂^T, α̂)^T is obtained upon convergence, and the corresponding Î^{−1} then provides an estimate of the asymptotic variance–covariance matrix of the MLE, i.e., cov(θ̂). For ensuring that the Fisher information matrix, I(β, α), is positive definite, we need to have E(m_i) > 0, for i = 1, 2, . . . , k, meaning that, on the average, the number of unfailed items at the beginning of any stage must exceed the number of items to be progressively censored at that stage. Specifically, the assumptions of a large sample size and the step-stress ALT terminating at the designed last kth stress level are necessary for the SSALT model. For a large sample size, therefore, approximate individual confidence intervals for the parameters β and α, or a simultaneous confidence region for θ, can be obtained by using the asymptotic normality of the MLE under some general regularity conditions (Casella and Berger, 2002).
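As an illustration, the grouped-data log-likelihood (4) can be evaluated in a few lines. The sketch below is a minimal pure-Python version; the data layout (lists of stress vectors, counts, and interval lengths) is our own convention, not from the paper.

```python
import math

def bc_lambda(z, beta, alpha):
    # Box-Cox link: lambda_i = (alpha * z_i^T beta + 1)^(1/alpha), alpha != 0
    eta = sum(zj * bj for zj, bj in zip(z, beta))
    return (alpha * eta + 1.0) ** (1.0 / alpha)

def loglik_bc(beta, alpha, Z, n, r, N, delta):
    # Log-likelihood (4) up to an additive constant; m_i is updated along
    # the stages as m_{i+1} = m_i - n_i - r_i.
    ll, m = 0.0, N
    for z_i, n_i, r_i, d_i in zip(Z, n, r, delta):
        lam = bc_lambda(z_i, beta, alpha)
        p = 1.0 - math.exp(-lam * d_i)   # P(fail in stage i | at risk)
        ll += n_i * math.log(p) - (m - n_i) * lam * d_i
        m -= n_i + r_i                   # survivors entering the next stage
    return ll
```

The Fisher-scoring iteration (7), or any generic numerical optimizer, can then be applied to this function; starting values would be the analyst's own assumption.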


For the sake of completeness, the expressions for the special case α = 0, in which

λ_i = e^{z_i^T β}  for i = 1, . . . , k,

are presented here. Given all of the stress variables Z and the observed data n and r, the log-likelihood function of β is

ℓ(β|Z, n, r) ∝ Σ_{i=1}^{k} {n_i log[1 − exp(−Δ_i e^{z_i^T β})] − (m_i − n_i) Δ_i e^{z_i^T β}}.  (8)

To find the MLE of β, denoted by β̂, we solve ∂ℓ(β)/∂β = 0 for β, where

∂ℓ(β)/∂β = Σ_{i=1}^{k} z_i Δ_i e^{z_i^T β} {n_i [1 − exp(−Δ_i e^{z_i^T β})]^{−1} − m_i}.  (9)

Unfortunately, β̂ cannot be written in a closed form either. The Fisher-scoring algorithm (7), with θ replaced by β, together with the score vector (9) and the Fisher information matrix in the log-SSALT model

I(β) = Σ_{i=1}^{k} z_i z_i^T B_i(τ_i),

where

B_i(τ_i) = N G_{i−1}(τ_{i−1}) [1 − Σ_{j=1}^{i−1} π_j / G_j(τ_j)] f_i(Δ_i)² / {[1 − F_i(Δ_i)] F_i(Δ_i)},

with F_i(Δ_i) = 1 − exp(−Δ_i e^{z_i^T β}) and f_i(Δ_i) = Δ_i e^{z_i^T β} exp(−Δ_i e^{z_i^T β}), is employed to iteratively provide the approximate MLE of β. Again, the condition B_i(τ_i) > 0, for i = 1, 2, . . . , k, is needed for satisfying the assumption that the step-stress test terminates at the kth stage. Details on the computation of the Fisher information matrices are presented in the Appendix.
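For the log-SSALT case, the score vector (9) is simple enough to code directly. The sketch below again uses our own data-layout conventions (lists of stress vectors, counts, and interval lengths).

```python
import math

def score_logssalt(beta, Z, n, r, N, delta):
    # Score vector (9): sum_i z_i * d_i * e^{z_i^T beta} * (n_i / F_i - m_i),
    # where F_i = 1 - exp(-d_i e^{z_i^T beta}) and m_i tracks survivors.
    s = [0.0] * len(beta)
    m = N
    for z_i, n_i, r_i, d_i in zip(Z, n, r, delta):
        lam = math.exp(sum(zj * bj for zj, bj in zip(z_i, beta)))
        F = 1.0 - math.exp(-lam * d_i)
        w = d_i * lam * (n_i / F - m)
        for j, zj in enumerate(z_i):
            s[j] += zj * w
        m -= n_i + r_i
    return s
```

Setting this score to zero (e.g. inside the Fisher-scoring loop) gives the approximate MLE of β.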

3.2. Inference on functions of parameters

It is important to realize that the purpose of the step-stress ALT is not merely to estimate the unknown parameters of the model. Quite often, interest may also lie in studying the effect of the stress environments on the failure-time distribution and in the prediction of the failure rate, the expected life length, and the reliability of an item under the use or normal operating conditions. Let ∗ be the index of the use or normal environment, and let all the variables under this environment be denoted by z_∗ = (1, z_{∗1}, . . . , z_{∗M})^T. Then, all the above-mentioned quantities of interest are indeed functions of β and α. For example, the failure rate is given by

λ_∗ = λ_∗(θ) = (α z_∗^T β + 1)^{1/α},  (10)

the expected life length can be expressed by

T_∗ = T_∗(θ) = λ_∗^{−1} = (α z_∗^T β + 1)^{−1/α},  (11)

and the reliability at mission time t_i is

R_∗(t_i) = R_∗(t_i|θ) = exp{−λ_∗ t_i} = exp{−(α z_∗^T β + 1)^{1/α} t_i}.  (12)

By the invariance property of the MLE, we can substitute θ̂ = (β̂^T, α̂)^T into Eqs. (10)–(12) to obtain the MLEs of the corresponding quantities of interest. Furthermore, under general regularity conditions, for large samples, λ̂_∗ possesses an


asymptotic normal distribution, i.e., (λ̂_∗ − λ_∗(θ))/√var(λ̂_∗) converges in distribution to the standard normal distribution, where the asymptotic variance of λ̂_∗, var(λ̂_∗), can be derived by using the delta method. Specifically, we have

var(λ̂_∗) = λ̇_∗(θ)^T cov(θ̂) λ̇_∗(θ),

where

λ̇_∗(θ) = (∂λ_∗/∂β, ∂λ_∗/∂α)^T
        = ( z_∗ (α z_∗^T β + 1)^{1/α − 1},
            (α z_∗^T β + 1)^{1/α} [ z_∗^T β / (α(α z_∗^T β + 1)) − (1/α²) log(α z_∗^T β + 1) ] )^T.

Similarly, the asymptotic variance of T̂_∗ is given by

var(T̂_∗) = Ṫ_∗(θ)^T cov(θ̂) Ṫ_∗(θ),

where

Ṫ_∗(θ) = (∂T_∗/∂β, ∂T_∗/∂α)^T
        = ( −z_∗ (α z_∗^T β + 1)^{−(1+α)/α},
            (α z_∗^T β + 1)^{−1/α} [ (1/α²) log(α z_∗^T β + 1) − z_∗^T β / (α(α z_∗^T β + 1)) ] )^T.

Finally, we have the asymptotic variance of R̂_∗ to be

var(R̂_∗) = Ṙ_∗(t_i|θ)^T cov(θ̂) Ṙ_∗(t_i|θ),

where

Ṙ_∗(t_i|θ) = (∂R_∗(t_i)/∂β, ∂R_∗(t_i)/∂α)^T = ( −t_i R_∗(t_i) ∂λ_∗/∂β, −t_i R_∗(t_i) ∂λ_∗/∂α )^T,

with R_∗(t_i) as given in (12). As a result, the approximate 100(1 − γ)% confidence intervals for the failure rate, the expected life length, and the reliability are constructed as λ̂_∗ ± z_{1−γ/2} √var̂(λ̂_∗), T̂_∗ ± z_{1−γ/2} √var̂(T̂_∗), and R̂_∗(t_i) ± z_{1−γ/2} √var̂(R̂_∗(t_i)), respectively, where var̂(·) represents the corresponding variance evaluated at θ = θ̂, and z_{1−γ/2} denotes the 100(1 − γ/2) percentage point of the standard normal distribution.

The necessary partial derivatives evaluated at β = β̂ for the log-SSALT model are also presented as follows:

λ̇_∗^0(β̂) = ∂λ_∗^0/∂β |_{β=β̂} = z_∗ e^{z_∗^T β̂},

Ṫ_∗^0(β̂) = ∂T_∗^0/∂β |_{β=β̂} = −z_∗ e^{−z_∗^T β̂},

and

Ṙ_∗^0(t_i|β̂) = ∂R_∗^0(t_i)/∂β |_{β=β̂} = −t_i exp{−e^{z_∗^T β̂} t_i} z_∗ e^{z_∗^T β̂}.
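The delta-method variance above is just a quadratic form in the gradient. A minimal sketch, with the gradient of λ_∗ under the Box–Cox link (α ≠ 0) assembled as one list:

```python
import math

def grad_lambda_star(z, beta, alpha):
    # Gradient of lambda* = (alpha z^T beta + 1)^(1/alpha)
    # with respect to (beta_0, ..., beta_M, alpha).
    eta = sum(zj * bj for zj, bj in zip(z, beta))
    u = alpha * eta + 1.0
    lam = u ** (1.0 / alpha)
    d_beta = [zj * u ** (1.0 / alpha - 1.0) for zj in z]
    d_alpha = lam * (eta / (alpha * u) - math.log(u) / alpha ** 2)
    return d_beta + [d_alpha]

def delta_var(grad, cov):
    # Delta-method variance: grad^T cov grad,
    # with cov the estimated covariance matrix of the MLE.
    k = len(grad)
    return sum(grad[i] * cov[i][j] * grad[j] for i in range(k) for j in range(k))
```

The same `delta_var` applies to T_∗ and R_∗(t_i) once their gradients are supplied.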

4. A fully Bayesian approach

In order to reduce the cost of the experiment, small sample sizes are often used in reliability studies, and many test items are also censored from the life-test. For this reason, the large-sample-based ML approach described in the preceding section may not be accurate. So, we develop in this section an alternative approach from the Bayesian perspective, which typically displays better performance in the case of small samples.


4.1. Posterior inference

In a Bayesian analysis, one must choose a suitable prior distribution, denoted by π(θ) = π(β, α), for the model parameter θ. We assume β and α to be independent a priori, which is quite a reasonable assumption to make, resulting in π(β, α) = π_1(β) π_2(α), where π_1(β) and π_2(α) are the prior densities of β and α, respectively. Furthermore, we consider the prior distribution of β to be a multivariate normal distribution with mean vector μ_0 = 0 and variance–covariance matrix Σ_0 = σ² I_{M+1}, with σ² chosen arbitrarily large, say 10³, for robustness considerations (Berger, 1985). For the power-transformation parameter α, we can consider a vague prior distribution such as a uniform distribution over (−h, h), where h is a bounded positive real value. Note that the choice Σ_0 = σ² I_{M+1}, with σ² large, indicates large uncertainty about the prior mean vector μ_0, ensuring robustness in the sense that the resulting inference will not be influenced too much by the choice of prior parameters. In practice, the range of α is not too big, and a subjective prior for it is therefore not easy to specify accurately; for this reason, we adopt a uniform prior for α, which also simplifies the posterior density. Thus, combining the prior π(β, α) and the likelihood function specified by taking the antilog of (4), we obtain the joint posterior density of (β, α) as

π(β, α|N, Z, n, r) ∝ Π_{i=1}^{k} {1 − exp[−(α z_i^T β + 1)^{1/α} Δ_i]}^{n_i} × {exp[−(α z_i^T β + 1)^{1/α} Δ_i]}^{m_i − n_i} × |Σ_0|^{−1/2} exp[−(1/2)(β − μ_0)^T Σ_0^{−1} (β − μ_0)].  (13)

Evidently, the posterior distribution as well as the Bayesian inference of θ cannot be obtained analytically. So, we shall apply the MCMC method to simulate an approximate posterior sample of θ, and then draw Bayesian inference approximately based on such a simulated sample. In view of (13), neither π(β|α, N, Z, n, r) nor π(α|β, N, Z, n, r) is a recognizable distribution, and for this reason the Metropolis–Hastings (M–H) algorithm (Hastings, 1970) described in the following subsection is employed. Prior specification with a completely noninformative prior, such as π(θ) ∝ 1 or the Jeffreys prior (Jeffreys, 1946; Box and Tiao, 1973), can also be considered as long as the resulting posterior is not improper (Berger, 1985). The priors we use above not only guarantee that the posterior is integrable (as they are proper densities) but are also fairly objective (as they carry only vague information). Alternatively, if the prior knowledge of β about μ_0 is quite informative, one may replace the prior variance–covariance matrix Σ_0 by some suitably structured matrix (or simply use another distribution such as a multivariate t distribution), and/or choose another subjective prior for α to reflect the prior belief about α. The MCMC procedure must then be modified accordingly.

4.2. The MCMC algorithm

With regard to the parameter α, we transform α to ξ = log((α + h)/(h − α)) (for example, with h = 4) so that ξ ∈ (−∞, ∞). Then, the conditional posterior density of ξ given the data and β is proportional to

g(ξ|β, N, Z, n, r) = f(α|β, N, Z, n, r) J_ξ,  (14)

where

f(α|β, N, Z, n, r) = π_2(α) Π_{i=1}^{k} {1 − exp(−(α z_i^T β + 1)^{1/α} Δ_i)}^{n_i} × {exp(−(α z_i^T β + 1)^{1/α} Δ_i)}^{m_i − n_i}

and J_ξ = 2h e^ξ/(1 + e^ξ)² is the Jacobian of the transformation from α to ξ. Note that such a transformation yields unbounded support for ξ, resulting in the corresponding MCMC sampling being relatively stable compared to sampling α directly; see Lee et al. (2005).
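The reparameterization α ↔ ξ and its Jacobian can be sketched as follows (h = 4 mirrors the example above):

```python
import math

H = 4.0  # bound of the uniform prior support (-h, h) for alpha

def to_xi(alpha, h=H):
    # xi = log((alpha + h)/(h - alpha)), mapping (-h, h) onto the real line
    return math.log((alpha + h) / (h - alpha))

def to_alpha(xi, h=H):
    # inverse map: alpha = h(e^xi - 1)/(1 + e^xi)
    return h * (math.exp(xi) - 1.0) / (1.0 + math.exp(xi))

def jacobian(xi, h=H):
    # J_xi = 2 h e^xi / (1 + e^xi)^2 = d(alpha)/d(xi)
    return 2.0 * h * math.exp(xi) / (1.0 + math.exp(xi)) ** 2
```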


The following sampling scheme is then used to obtain the approximate posterior distribution of (β, α). In this case, we choose the MLEs, β̂ and α̂, as the starting values of β and α in the Monte Carlo simulation chain. Given the state θ^(l) = (β^(l)T, α^(l))^T at the lth iteration, the steps of the MCMC procedure for simulating θ^(l+1) from the corresponding conditional posterior distributions iteratively are as follows:

Step 1: Generate β^(l+1) using the M–H algorithm from

f(β|α^(l), N, Z, n, r) = π_1(β) Π_{i=1}^{k} {1 − exp(−(α^(l) z_i^T β + 1)^{1/α^(l)} Δ_i)}^{n_i} × {exp(−(α^(l) z_i^T β + 1)^{1/α^(l)} Δ_i)}^{m_i − n_i}.  (15)

To implement the M–H algorithm at the (l + 1)th iteration for β, we choose a multivariate normal distribution with mean β^(l) and covariance matrix Σ_β^(l) = d² (I_β^(l) + Σ_0^{−1})^{−1} as the proposal distribution q_1(β^(l+1)|β^(l)), with the scale d ≈ 2.4/√(M + 1) as suggested by Gelman et al. (1996). So, the resulting acceptance probability is

ρ_1(β^(l), β^(l+1)) = min{ f(β^(l+1)|α^(l), Z, n, r) / f(β^(l)|α^(l), Z, n, r), 1 }.

Step 2: Generate ξ^(l+1) via the M–H algorithm from g(ξ|β^(l+1), N, Z, n, r) in (14). A normal distribution with mean ξ^(l) and variance σ_ξ^{2(l)} is chosen as the proposal distribution q_2(ξ^(l+1)|ξ^(l)). Using the delta method, the variance σ_ξ^{2(l)} can be estimated by 4h²/[(h² − (α^(l))²)² I_α], and the acceptance probability is

ρ_2(ξ^(l), ξ^(l+1)) = min{ g(ξ^(l+1)|β^(l+1), Z, n, r) / g(ξ^(l)|β^(l+1), Z, n, r), 1 }.

Having obtained ξ^(l+1) from the M–H algorithm, we transform it back to α as

α^(l+1) = h(e^{ξ^(l+1)} − 1)/(1 + e^{ξ^(l+1)}).
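The two M–H updates share the same accept/reject core. A generic random-walk sketch follows; it is a simplification of the scheme above, using an independent normal step per coordinate rather than the full proposal covariance matrix.

```python
import math
import random

def rw_metropolis(logpost, init, scale, iters, seed=1):
    # Random-walk Metropolis: the proposal is symmetric, so the
    # acceptance probability reduces to min(1, posterior ratio).
    rng = random.Random(seed)
    x, lp = list(init), logpost(init)
    chain = []
    for _ in range(iters):
        prop = [xi + rng.gauss(0.0, s) for xi, s in zip(x, scale)]
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(list(x))
    return chain
```

In the scheme above, `logpost` would be the logarithm of f in (15) for the β-step and of g in (14) for the ξ-step; here it is left abstract.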

The above two steps are then repeated iteratively. After sufficiently long burn-in iterations, one can use the remaining samples to make posterior inference on the parameters of interest and/or functions of these parameters. For example, an approximate posterior mean vector of β is β̄ = (1/L) Σ_{l=1}^{L} β^(l), with the associated posterior variance–covariance matrix approximated by Σ_{l=1}^{L} (β^(l) − β̄)(β^(l) − β̄)^T/(L − 1). Similarly, ᾱ = (1/L) Σ_{l=1}^{L} α^(l) and Σ_{l=1}^{L} (α^(l) − ᾱ)²/(L − 1) are the approximate posterior mean and posterior variance of α, respectively. Moreover, based on a posterior sample (or an approximate sample) of θ, say θ^(l), l = 1, . . . , L,

λ_∗^(l) = λ_∗(θ^(l)) = (α^(l) z_∗^T β^(l) + 1)^{1/α^(l)},  l = 1, . . . , L,

form an approximate posterior sample of size L for λ_∗, which in turn can be used to develop Bayesian inference about λ_∗. For example, the sample mean and sample variance of λ_∗^(l), l = 1, . . . , L, give estimates of the posterior mean and variance of λ_∗, respectively, and (λ_∗^[γ/2], λ_∗^[1−γ/2]) yields an approximate 100(1 − γ)% credible interval for λ_∗, with λ_∗^[q] denoting the q-quantile of the λ_∗^(l). Similar results hold for T_∗ and R_∗(t_i) as well.

In the special case when α = 0, so that θ = β only, the posterior of β still does not have a closed form. One can incorporate the MCMC procedure again to simulate the posterior distribution of β approximately. The algorithm follows a procedure similar to that of Step 1 described earlier, with the change that the function f in (15) used to determine the acceptance probability is now replaced by

f(β|N, Z, n, r) = Π_{i=1}^{k} {1 − exp(−Δ_i e^{z_i^T β})}^{n_i} {exp(−Δ_i e^{z_i^T β})}^{m_i − n_i} × |Σ_0|^{−1/2} exp[−(1/2)(β − μ_0)^T Σ_0^{−1} (β − μ_0)].

All the Bayesian inference presented earlier can be mimicked once the posterior sample of β is simulated.
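The posterior summaries described above amount to simple sample statistics of the MCMC draws. A sketch for a scalar quantity such as λ_∗:

```python
def posterior_summary(draws, level=0.95):
    # Posterior mean and equal-tailed credible interval from MCMC draws.
    L = len(draws)
    mean = sum(draws) / L
    s = sorted(draws)
    lo = s[int((1.0 - level) / 2.0 * (L - 1))]
    hi = s[int((1.0 + level) / 2.0 * (L - 1))]
    return mean, (lo, hi)
```

The simple order-statistic indexing used here is one common convention; interpolation between order statistics would be an equally reasonable choice.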


5. Numerical illustrations

In this section, we carry out two simulation studies to illustrate the methods of inference developed in the preceding sections. The step-stress test design we used is presented in Table 1. There are 10 stages in this experiment, and each time-interval is of length Δ_i = 50 for all i. For computational ease, we take the natural logarithm of all stress variables, so that the first stress level is z_{11} = log(330) = 5.799, for example. We then carry out the following simulations under this progressive step-stress design.

5.1. Simulation study I: model selection

We generate two synthetic data sets, both with sample size N = 200. The first one, denoted by "Case 1", is generated from the BC-SSALT model with true parameters (β_0, β_1, α) = (−392, 61, −0.325). The other, generated from the log-SSALT model with true parameters (β_0, β_1) = (−61, 9), is denoted by "Case 2". These two simulated data sets are presented in Table 2, and they are both fitted by the log-SSALT and BC-SSALT models using the ML and Bayesian approaches. The summary estimation results are presented in Table 3. It is worth noting that for Case 1 the point estimates as well as the interval estimates of the parameters based on the two fitted models display very different results, while for Case 2 the two fitted models provide similar estimates of β_0 and β_1, with the estimate of α not significantly different from zero under both the ML and Bayesian methods.

We now explore the model selection ability of the ML and Bayesian approaches; the corresponding computed results are presented in Table 4. For the ML method, the likelihood ratio test (LRT) statistic, which compares the likelihoods of the two competing models, is used to judge which of the two models is more appropriate for these two data sets. The LRT statistic turns out to be 88.154 for the data in Case 1, while for the data in Case 2 the LRT statistic is close to zero.
Comparison of these results with the 95th percentile of the chi-squared distribution with one degree of freedom reveals that the BC-SSALT model is significantly favorable for the data in Case 1, while for the data in Case 2 there is no significant evidence to reject the hypothesis that α = 0, implying that the log-SSALT model is adequate. Moreover, fits of the models are compared using two popular information criteria (in the smaller-is-better sense), viz., AIC = −2(ℓ_max − p) and BIC = −2ℓ_max + p log(N), where ℓ_max is the maximum log-likelihood, and p and N are the number of parameters and the sample size, respectively. With these two criteria, we obtain exactly the same conclusions as those based on the LRT, as presented in Table 4. It is interesting to mention here that for Case 2, both models provide quite close results. For Case 1, however, based on these model selection criteria, there is a strong indication that the BC-SSALT model is significantly better than the log-SSALT model.

From a Bayesian viewpoint, it is of great interest to compare the appropriateness of these two models based on the Bayes factor, which involves the calculation of the marginal distributions of the data. Assuming that the two models

Table 1
Step-stress pattern design

Stage      1    2    3    4    5    6    7    8    9    10
Variable   330  360  390  420  450  480  510  540  570  600
Table 2
Progressive step-stress Type-I group-censored data simulated from two kinds of models

Case           Data   Stage 1   2   3   4   5   6   7   8   9   10
1: BC-SSALT    n_i          6   4   4   8  17  16  42  37  28   3
               r_i          7   4   8   4   1   6   4   0   1   0
2: log-SSALT   n_i          3   4   8   9  15  35  31  16  15   2
               r_i         14  10   1  14  11   4   5   1   1   1

n_i: number of failures; r_i: number of right censored observations.
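Data of the kind shown in Table 2 can be generated stage by stage from the model assumptions. The sketch below uses our own conventions (a fixed per-stage withdrawal fraction π_i of N, capped by the survivors, and a seeded generator), which are illustrative and not the paper's exact simulation design.

```python
import math
import random

def simulate_grouped(beta, alpha, Z, delta, N, pi, seed=7):
    # Stage i: each item at risk fails with prob 1 - exp(-lambda_i * delta_i);
    # then round(pi_i * N) survivors are withdrawn (all remaining at stage k).
    rng = random.Random(seed)
    m, k = N, len(Z)
    n_obs, r_obs = [], []
    for i in range(k):
        eta = sum(zj * bj for zj, bj in zip(Z[i], beta))
        lam = math.exp(eta) if alpha == 0 else (alpha * eta + 1.0) ** (1.0 / alpha)
        p = 1.0 - math.exp(-lam * delta[i])
        n_i = sum(1 for _ in range(m) if rng.random() < p)   # Binomial(m, p)
        r_i = m - n_i if i == k - 1 else min(m - n_i, round(pi[i] * N))
        n_obs.append(n_i)
        r_obs.append(r_i)
        m -= n_i + r_i
    return n_obs, r_obs
```

Since every item is eventually counted as failed or censored, sum(n) + sum(r) = N by construction.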


Table 3
Comparisons of parameter estimates

                              MLE                                           Bayesian
Case    Fitted model          Est        Sd       95% CI                    Mean       Sd       95% CI*
                                                  Lower       Upper                             2.5%       97.5%
1: BC   BC            β_0    −383.084   180.192   −736.260    −29.908      −383.203    9.986    −403.682   −364.646
                      β_1      59.524    28.312      4.032    115.016        59.534    1.574      56.656     62.785
                      α        −0.318     0.091     −0.496     −0.139        −0.319    0.014      −0.347     −0.294
        log           β_0     −67.420     4.390    −76.024    −58.815       −67.391    4.221     −75.752    −59.367
                      β_1      10.051     0.707      8.666     11.437        10.047    0.681       8.748     11.397
2: log  BC            β_0     −61.418    35.043   −130.103      7.266       −60.757    8.150     −76.035    −46.257
                      β_1       9.101     5.433     −1.548     19.750         8.995    1.262       6.739     11.410
                      α        −0.003     0.106     −0.211      0.204        −0.002    0.029      −0.054      0.052
        log           β_0     −60.209     4.286    −68.609    −51.808       −59.853    3.824     −67.109    −52.582
                      β_1       8.914     0.694      7.553     10.274         8.854    0.620       7.672     10.014

CI*: the credible interval consists of the 2.5% and 97.5% quantiles of the posterior samples.

Table 4
Summary of model selection criteria

Case    Fitted model   log L       AIC       BIC       p̂(data|Model)
1: BC   BC-SSALT       −340.257    686.51    696.41    3.2361 × 10^−149
        log-SSALT      −384.334    772.668   779.26    7.9641 × 10^−153
2: log  BC-SSALT       −308.857    623.714   633.610   1.0124 × 10^−135
        log-SSALT      −308.857    621.714   628.310   7.4910 × 10^−136

are a priori equally probable, a better approach would be to use samples from the posterior distribution, say {θ^(l)}_{l=1}^{L}, and calculate the marginal density by the harmonic mean of the sampled likelihoods (Newton and Raftery, 1994) as

p̂(data) ≈ [ (1/L) Σ_{l=1}^{L} f(data|θ^(l))^{−1} ]^{−1}.  (16)
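The information criteria and the harmonic-mean estimator (16) are straightforward to compute. The sketch below evaluates the log of (16) by factoring out the largest term, a numerical-stability detail of our own, not from the paper.

```python
import math

def aic(loglik_max, p):
    # AIC = -2 (l_max - p)
    return -2.0 * (loglik_max - p)

def bic(loglik_max, p, N):
    # BIC = -2 l_max + p log N
    return -2.0 * loglik_max + p * math.log(N)

def log_harmonic_mean(logliks):
    # log of (16): p_hat = [ (1/L) sum_l f(data|theta_l)^{-1} ]^{-1},
    # computed in log space to avoid overflow of the reciprocals.
    L = len(logliks)
    m = max(-ll for ll in logliks)
    s = sum(math.exp(-ll - m) for ll in logliks)
    return -(m + math.log(s / L))
```

With the Case 1 values in Table 4, aic(-340.257, 3) and bic(-340.257, 3, 200) reproduce 686.51 and 696.41 after rounding.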

The estimated marginal densities obtained by (16) are presented in the last column of Table 4. The Bayes factors of the BC-SSALT model relative to the log-SSALT model for Cases 1 and 2 are 4063.446 and 1.352, respectively. These values imply that the data in Case 1 strongly support the BC-SSALT model, while the log-SSALT model is adequate for the data in Case 2. Thus, these results are consistent with those obtained from the classical model selection criteria based on the LRT.

We now examine the effect of the stress environments on the failure-time distribution and the prediction of the failure rate, the expected life length, and the reliability at mission times t_i = (50, 200, 500, 1000, 1500, 3000) of the item under the use condition. The first stage of the step-stress pattern design, viz., z_∗ = (1, 5.799)^T, is assumed as the normal operating environment for this purpose. The corresponding estimates based on the ML and Bayesian approaches are presented in Table 5. It is clear from this table that the BC-SSALT model performs much better than the log-SSALT model for the data in Case 1. We also observe that the log-SSALT model underestimates the failure rate quite considerably and overestimates the expected life length and the reliability. On the other hand, for the data in Case 2, all prediction results under the two models are quite close, and this is to be expected since the data in this case are fitted well by both models. This leads us to the important observation that if the BC-SSALT model is the true model, then fitting the data by the log-SSALT model may lead to significantly misleading results, while if the log-SSALT model is the true model, inferential results based on the BC-SSALT model will be quite close to those based on the true log-SSALT model. This demonstrates the practical usefulness of the BC-SSALT model proposed in this paper.


T.-H. Fan et al. / Journal of Statistical Planning and Inference 138 (2008) 2340 – 2354

Table 5
Comparison of estimates of the failure rate, the expected life length, and the reliability

Case         Fitted model   Method   Failure rate   Life length   Reliability at mission time
                                     (×10⁻⁴)                      50      200     500     1000    1500    3000
True 1: BC   —              True     3.203          3122.496      0.984   0.938   0.852   0.726   0.619   0.383
             BC             MLE      3.102          3224.035      0.985   0.940   0.856   0.733   0.628   0.394
                            MCMC     3.161          3267.691      0.984   0.939   0.854   0.730   0.625   0.393
             log            MLE      1.080          9258.707      0.995   0.979   0.947   0.898   0.850   0.723
                            MCMC     1.127          9584.798      0.994   0.978   0.945   0.894   0.845   0.716
True 2: log  —              True     1.890          5292.145      0.991   0.963   0.910   0.828   0.753   0.567
             BC             MLE      1.974          5066.319      0.990   0.961   0.906   0.821   0.744   0.553
                            MCMC     2.143          5237.711      0.989   0.958   0.899   0.809   0.729   0.537
             log            MLE      2.003          4993.479      0.990   0.961   0.905   0.819   0.741   0.548
                            MCMC     2.083          5099.688      0.990   0.959   0.901   0.813   0.734   0.541

Table 6
Progressively Type-I group-censored data simulated from the BC-SSALT model with true parameters β₀ = −392, β₁ = 61, λ = −0.325

Sample size          Stage
                     1     2     3     4     5     6     7     8     9     10
25           n_i     1     1     2     0     0     3     2     6     4     3
             r_i     1     1     1     0     0     0     0     0     0     0
50           n_i     1     0     2     1     5     4     6     13    11    1
             r_i     0     1     2     1     0     1     0     1     0     0
200          n_i     6     5     5     12    14    22    36    39    33    4
             r_i     7     5     3     0     1     0     6     1     1     0

Table 7
Summary statistics for the BC-SSALT model based on the MLE and Bayesian approaches

N                MLE                                               Bayesian
                 Est        Sd        95% CI                       Mean       Sd       95% CI
                                      Lower       Upper                               2.5%       97.5%
25     β₀      −392.485   519.445   −1052.857     267.887        −392.547   10.399   −414.239   −372.336
       β₁        60.565    81.080     −42.652     163.783          60.435    1.661     57.185     63.909
       λ         −0.343     0.256      −0.656      −0.029          −0.353    0.043     −0.445     −0.279
50     β₀      −392.413   328.481   −1036.236     251.410        −393.576   10.071   −412.468   −373.984
       β₁        61.020    51.612     −40.138     162.180          61.148    1.584     58.109     64.141
       λ         −0.292     0.160      −0.606       0.021          −0.299    0.027     −0.354     −0.246
200    β₀      −392.431   191.893    −768.542     −16.321        −393.315    9.328   −412.258   −376.024
       β₁        60.904    30.120       1.869     119.940          61.031    1.472     58.240     64.017
       λ         −0.333     0.094      −0.517      −0.148          −0.334    0.014     −0.364     −0.309

5.2. Simulation study II: comparison of inferential methods

In this subsection, we present the results of a simulation study involving three different sample sizes, namely N = 25, 50 and 200. The simulated progressively Type-I group-censored data generated from the BC-SSALT model with true parameters β₀ = −392, β₁ = 61, and λ = −0.325 are presented in Table 6.


[Figure 1: three panels (N = 25, 50, 200) of estimated reliability versus time, each showing the true-model, MLE, and MCMC reliability curves with their interval bands.]
Fig. 1. Reliability plots for the BC-SSALT model based on different sample sizes. LCB (UCB): 95% lower (upper) confidence band; LPB (UPB): 95% lower (upper) posterior band.

We use the Fisher-scoring algorithm in (7) to determine the MLEs; approximate 95% confidence intervals for the parameters are then obtained by using the normal approximation, as described earlier in Section 3. For the Bayesian analysis, we perform the MCMC algorithm described in Section 4.2 to generate a sequence of 10,000 random samples


iteratively. By discarding the first 1000 iterations to eliminate the effect of the initial values and then retaining one sample point per 10 iterations to reduce the autocorrelation within the chain, a sample of size 900 is obtained to approximate the joint posterior distribution of θ = (β₀, β₁, λ). The estimates of the parameters and their standard errors from the ML approach, together with the summary statistics computed from the converged MCMC samples (posterior means, standard deviations, and 95% credible intervals), are presented in Table 7. It can be observed from this table that the parameter estimates derived from the MLEs and from the MCMC samples are quite similar, but the former have significantly larger standard errors than the latter, especially for small samples. Consequently, if the sample size is small, the confidence intervals for the model parameters based on the MLEs will be much wider, leading to imprecise statistical inference. The Bayesian approach, on the other hand, provides more precise estimates, as it yields smaller standard errors as well as narrower credible intervals.

With regard to the prediction of reliability, Fig. 1 presents the estimated reliability over time based on the MLEs and the MCMC estimates, in three panels corresponding to the three sample sizes. In each panel, the estimated reliability is plotted as a thick dashed line and its 95% intervals as a pair of thin dashed lines over the time interval [0, 50,000]. From this figure, we observe that both the confidence intervals based on the MLEs and the Bayesian posterior intervals based on the MCMC samples enclose the true reliability. However, when the sample size is small, say N = 25 or N = 50, the confidence intervals based on the MLEs may go outside the interval [0, 1]. We therefore note that the Bayesian approach described here offers more accurate inference than the ML approach in the case of small samples.
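The burn-in and thinning step described above (drop the first 1000 of 10,000 draws, then keep every 10th) reduces a chain to 900 approximately independent posterior samples, and is a one-line slice in practice. The sketch below uses a dummy chain of Gaussian noise purely to show the bookkeeping; the three columns stand in for θ = (β₀, β₁, λ).

```python
import numpy as np

def thin_chain(chain, burn_in=1000, thin=10):
    """Discard the burn-in iterations, then keep every `thin`-th draw."""
    return chain[burn_in::thin]

rng = np.random.default_rng(0)
chain = rng.standard_normal((10_000, 3))  # dummy draws of (b0, b1, lam)
posterior = thin_chain(chain)
print(posterior.shape)                    # (900, 3)
```

Posterior means, standard deviations, and 2.5%/97.5% quantiles, as in Table 7, are then computed column-wise from `posterior`.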
Based on our simulation findings, both approaches yield very similar results when the sample size is sufficiently large.

6. Concluding remarks

This article proposes the BC-SSALT model to handle progressively Type-I group-censored data, and presents inferential methods using the ML and Bayesian approaches. The simulation studies in Section 5 show that the Bayesian methodology can yield more accurate inferences than the ML method, especially in the case of small sample sizes. It is worth mentioning that the priors used are relatively flat for robustness considerations, and the results so obtained are quite close to those obtained by using MLEs in the case of large samples. One could instead use other prior distributions, such as the heavy-tailed t distribution or a truncated normal distribution. Of course, if some prior knowledge can be obtained from previously conducted step-stress experiments, a more informative prior distribution can be proposed and used.

Under the assumption of the cumulative exposure model, the change of the stress environment is instantaneous. If the environment changes gradually rather than instantaneously, the amount of time taken to move from one environment to the next is referred to as the "ramp-time"; see van Dorp and Mazzuchi (2004). It would be of interest to develop a Bayesian analysis for such a model. In addition, the assumption that the step-stress test terminates at the last stage can be removed under the modification of censoring by proportion; see Balakrishnan and Han (2008). Furthermore, one can also introduce the Box–Cox transformation in the case of other lifetime distributions used in step-stress ALTs, such as the Weibull, lognormal, and inverse Gaussian. Work in these directions is currently in progress, and we hope to report these findings in a future paper.
Acknowledgments

We gratefully acknowledge the referees for their valuable comments, which substantially improved the quality of the paper. This research work was supported by the National Science Council of Taiwan under Grant number NSC 95-2118-M008-001.

Appendix A.

Here, we give the detailed computations of the Fisher information matrix in the general SSALT model.


A.1. The Fisher information matrix

The corresponding second derivatives of (4) are given by
$$
\frac{\partial^2 \ell(\boldsymbol\beta,\lambda)}{\partial\boldsymbol\beta\,\partial\boldsymbol\beta^{\mathrm T}}
= \sum_{i=1}^{k} \mathbf z_i \mathbf z_i^{\mathrm T}\,\Delta_i\,(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda-1}
\left\{\left[\frac{n_i}{1-\exp\{-\Delta_i(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\}}-m_i\right]\frac{1-\lambda}{\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1}
-\frac{n_i\,\Delta_i\,(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda-1}\exp\{-(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\Delta_i\}}{[1-\exp\{-(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\Delta_i\}]^2}\right\},\quad \lambda\neq 0, \tag{A.1}
$$
$$
\frac{\partial^2 \ell(\boldsymbol\beta,\lambda)}{\partial\boldsymbol\beta\,\partial\lambda}
= \sum_{i=1}^{k} \mathbf z_i\,\Delta_i\,(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda-1}
\left\{\left[\frac{n_i}{1-\exp\{-(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\Delta_i\}}-m_i\right]
\left[\left(\frac1\lambda-1\right)\frac{\mathbf z_i^{\mathrm T}\boldsymbol\beta}{\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1}-\frac{1}{\lambda^2}\log(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)\right]
-\frac{n_i\,\Delta_i\,(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\exp\{-(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\Delta_i\}}{[1-\exp\{-(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\Delta_i\}]^2}
\left[\frac{\mathbf z_i^{\mathrm T}\boldsymbol\beta}{\lambda(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)}-\frac{1}{\lambda^2}\log(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)\right]\right\},\quad\lambda\neq 0 \tag{A.2}
$$
and
$$
\frac{\partial^2 \ell(\boldsymbol\beta,\lambda)}{\partial\lambda^2}
= \sum_{i=1}^{k} (\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\Delta_i
\left\{\left[\frac{n_i}{1-\exp\{-(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\Delta_i\}}-m_i\right]
\left[\left(\frac{\mathbf z_i^{\mathrm T}\boldsymbol\beta}{\lambda(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)}-\frac{1}{\lambda^2}\log(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)\right)^2
-\frac{2\mathbf z_i^{\mathrm T}\boldsymbol\beta}{\lambda^2(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)}
-\frac{(\mathbf z_i^{\mathrm T}\boldsymbol\beta)^2}{\lambda(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^2}
+\frac{2}{\lambda^3}\log(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)\right]
-\frac{n_i\,\Delta_i\,(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\exp\{-(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\Delta_i\}}{[1-\exp\{-(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)^{1/\lambda}\Delta_i\}]^2}
\left[\frac{\mathbf z_i^{\mathrm T}\boldsymbol\beta}{\lambda(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)}-\frac{1}{\lambda^2}\log(\lambda \mathbf z_i^{\mathrm T}\boldsymbol\beta+1)\right]^2\right\},\quad\lambda\neq 0. \tag{A.3}
$$

Then, the following properties are helpful in finding the Fisher information matrix in the general SSALT model.

Properties:
1. By induction with $E(m_1)=N$ and $m_{i+1}=m_i-n_i-r_i$, for $i=1,\dots,k$, we get
$$
E(m_i)=N\,G_{i-1}(\boldsymbol\tau_{i-1})\left[1-\sum_{j=1}^{i-1}\frac{\pi_j}{G_j(\boldsymbol\tau_j)}\right],
$$
where $G_i(\boldsymbol\tau_i)=\prod_{j=1}^{i}[1-F_j(\tau_j)]$ with $\boldsymbol\tau_i=(\tau_1,\dots,\tau_i)$, $G_0(\boldsymbol\tau_0)=1$ and $\pi_j=r_j/N$, $j=1,\dots,k-1$.
2. Given $n_{i-1},\dots,n_1$, the random variable $n_i$ has a binomial distribution with parameters $(m_i, F_i(\tau_i))$. Using this fact and the smoothing property of conditional expectation with respect to $n_{i-1},\dots,n_1$, we have $E(n_i)=E(m_i)F_i(\tau_i)$, where $F_i(\tau_i)=[F(\tau_i)-F(\tau_{i-1})]/[1-F(\tau_{i-1})]$.
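Property 1 can be sanity-checked numerically: starting from E(m_1) = N, the recursion E(m_{i+1}) = E(m_i)[1 − F_i(τ_i)] − Nπ_i should reproduce the closed form E(m_i) = N·G_{i−1}·[1 − Σ_{j<i} π_j/G_j]. The sketch below does this with hypothetical conditional failure probabilities F_i and removal fractions π_i (illustrative values, not from the paper).

```python
import numpy as np

N = 100.0
F = [0.10, 0.15, 0.20, 0.25]    # hypothetical F_i(tau_i), conditional on survival
pi = [0.05, 0.05, 0.00, 0.00]   # hypothetical removal fractions pi_i = r_i / N

# Recursion: E(m_{i+1}) = E(m_i) * (1 - F_i) - N * pi_i
Em_rec = [N]
for Fi, pii in zip(F, pi):
    Em_rec.append(Em_rec[-1] * (1 - Fi) - N * pii)

# Closed form: E(m_i) = N * G_{i-1} * (1 - sum_{j<i} pi_j / G_j)
G = np.cumprod([1 - Fi for Fi in F])          # G_i = prod_{j<=i} (1 - F_j)
for i in range(1, len(F) + 1):
    Gprev = 1.0 if i == 1 else G[i - 2]
    closed = N * Gprev * (1 - sum(pi[j] / G[j] for j in range(i - 1)))
    assert abs(closed - Em_rec[i - 1]) < 1e-9  # the two expressions agree
print("closed form matches recursion")
```

The base case i = 1 gives E(m_1) = N, and each subsequent stage subtracts the expected failures and the planned removals, exactly as in the induction.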


Therefore, we obtain the Fisher information matrix in the BC-SSALT model, denoted by
$$
\mathbf I(\boldsymbol\beta,\lambda)=-E
\begin{bmatrix}
\dfrac{\partial^2\ell(\boldsymbol\beta,\lambda)}{\partial\boldsymbol\beta\,\partial\boldsymbol\beta^{\mathrm T}} & \dfrac{\partial^2\ell(\boldsymbol\beta,\lambda)}{\partial\boldsymbol\beta\,\partial\lambda}\\[6pt]
\dfrac{\partial^2\ell(\boldsymbol\beta,\lambda)}{\partial\lambda\,\partial\boldsymbol\beta^{\mathrm T}} & \dfrac{\partial^2\ell(\boldsymbol\beta,\lambda)}{\partial\lambda^2}
\end{bmatrix}
=\begin{bmatrix}\mathbf I_{\beta\beta} & \mathbf I_{\beta\lambda}\\ \mathbf I_{\lambda\beta} & \mathbf I_{\lambda\lambda}\end{bmatrix}. \tag{A.4}
$$
In a similar way, the second derivative of (8) with respect to $\boldsymbol\beta$ is
$$
\frac{\partial^2\ell(\boldsymbol\beta)}{\partial\boldsymbol\beta\,\partial\boldsymbol\beta^{\mathrm T}}
=\sum_{i=1}^{k}\mathbf z_i\mathbf z_i^{\mathrm T}\,\Delta_i\,e^{\mathbf z_i^{\mathrm T}\boldsymbol\beta}
\left\{\frac{n_i}{1-\exp(-\Delta_i e^{\mathbf z_i^{\mathrm T}\boldsymbol\beta})}\left[1-\frac{\Delta_i e^{\mathbf z_i^{\mathrm T}\boldsymbol\beta}\exp(-\Delta_i e^{\mathbf z_i^{\mathrm T}\boldsymbol\beta})}{1-\exp(-\Delta_i e^{\mathbf z_i^{\mathrm T}\boldsymbol\beta})}\right]-m_i\right\}. \tag{A.5}
$$
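As a numerical illustration of combining (A.5) with Properties 1 and 2, the sketch below computes the expected Fisher information for the log-SSALT model by replacing m_i and n_i by their expectations. The stress vectors, stage durations, removal fractions, and β used here are hypothetical illustration values, not taken from the paper.

```python
import numpy as np

# Hypothetical settings for a 3-stage log-SSALT design
beta = np.array([-3.0, 0.5])
z = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # stress vectors z_i
delta = np.array([10.0, 10.0, 10.0])                # stage durations Delta_i
pi = np.array([0.05, 0.02, 0.0])                    # removal fractions r_i / N
N = 100

theta = np.exp(z @ beta)                 # stage failure rates e^{z_i' beta}
Fi = 1.0 - np.exp(-theta * delta)        # conditional failure probabilities
G = np.cumprod(1.0 - Fi)                 # G_i = prod_{j<=i} [1 - F_j]

Em = np.empty(len(Fi))                   # E(m_i) from Property 1
for i in range(len(Fi)):
    Gprev = 1.0 if i == 0 else G[i - 1]
    Em[i] = N * Gprev * (1.0 - sum(pi[j] / G[j] for j in range(i)))
En = Em * Fi                             # E(n_i) from Property 2

# I(beta) = -E[ d^2 l / d beta d beta^T ], plugging expectations into (A.5)
info = np.zeros((2, 2))
for i in range(len(Fi)):
    e = np.exp(-theta[i] * delta[i])
    term = (En[i] / Fi[i]) * (1.0 - delta[i] * theta[i] * e / Fi[i]) - Em[i]
    info -= np.outer(z[i], z[i]) * delta[i] * theta[i] * term
print(np.round(info, 3))
```

Since each summand contributes a positive multiple of z_i z_iᵀ after taking expectations, the resulting matrix is positive definite whenever the stress vectors span the parameter space, which is what guarantees the asymptotic normal approximation used for the confidence intervals in Section 3.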

Using the above two properties again and computing the expectation of the negative of the expression in (A.5), the Fisher information matrix in the log-SSALT model is found.

References

Bagdonavicius, V., Nikulin, M.S., 2002. Accelerated Life Models: Modeling and Statistical Analysis. Chapman & Hall, Boca Raton, FL.
Bai, D.S., Kim, M.S., Lee, S.H., 1989. Optimum simple step-stress accelerated life tests with censoring. IEEE Trans. Reliab. 38, 528–532.
Balakrishnan, N., 2007. Progressive censoring methodology: an appraisal (with discussion). TEST 16, 211–296.
Balakrishnan, N., Aggarwala, R., 2000. Progressive Censoring: Theory, Methods, and Applications. Birkhäuser, Boston.
Balakrishnan, N., Han, D., 2008. Optimal step-stress testing for progressively Type-I censored data from exponential distribution. J. Statist. Plann. Inference, to appear.
Balakrishnan, N., Kundu, D., Ng, H.K.T., Kannan, N., 2007. Point and interval estimation for a simple step-stress model with Type-II censoring. J. Qual. Technol. 39, 35–47.
Balakrishnan, N., Xie, Q., 2007a. Exact inference for a simple step-stress model with Type-I hybrid censored data from the exponential distribution. J. Statist. Plann. Inference 137, 3268–3290.
Balakrishnan, N., Xie, Q., 2007b. Exact inference for a simple step-stress model with Type-II hybrid censored data from the exponential distribution. J. Statist. Plann. Inference 137, 2543–2563.
Balakrishnan, N., Xie, Q., Kundu, D., 2008. Exact inference for a simple step-stress model from the exponential distribution under time constraint. Ann. Inst. Statist. Math., to appear.
Berger, J.O., 1985. Statistical Decision Theory and Bayesian Analysis. Springer, New York.
Box, G.E.P., Cox, D.R., 1964. An analysis of transformations (with discussion). J. Roy. Statist. Soc. Ser. B 26, 211–252.
Box, G.E.P., Tiao, G.C., 1973. Bayesian Inference in Statistical Analysis. Addison-Wesley, Reading, MA.
Casella, G., Berger, R.L., 2002. Statistical Inference. Duxbury, Pacific Grove, CA.
Chib, S., Greenberg, E., 1995. Understanding the Metropolis–Hastings algorithm. Amer. Statist. 49, 327–335.
Gelman, A., Roberts, G., Gilks, W., 1996. Efficient Metropolis jumping rules. In: Bernardo, J.M., Berger, J.O., Dawid, A.P., Smith, A.F.M. (Eds.), Bayesian Statistics, vol. 5. Oxford University Press, New York, pp. 1–5.
Gilks, W.R., Wild, P., 1992. Adaptive rejection sampling for Gibbs sampling. J. Roy. Statist. Soc. Ser. C 41, 337–348.
Gouno, E., Sen, A., Balakrishnan, N., 2004. Optimal step-stress test under progressive Type-I censoring. IEEE Trans. Reliab. 53, 388–393.
Hastings, W.K., 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 97–109.
Jeffreys, H., 1946. An invariant form for the prior probability in estimation problems. Proc. Roy. Soc. London Ser. A 196, 453–461.
Khamis, I.H., 1997. Optimum M-step, step-stress test with k stress variables. Comm. Statist. Comput. Simul. 26, 1301–1313.
Khamis, I.H., Higgins, J.J., 1998. A new model for step-stress testing. IEEE Trans. Reliab. 47, 131–134.
Lee, J.C., Lin, T.I., Lee, K.J., Hsu, Y.L., 2005. Bayesian analysis of Box–Cox transformed linear mixed models with ARMA(p,q) dependence. J. Statist. Plann. Inference 133, 435–451.
Meeker, W.Q., Escobar, L.A., 1998. Statistical Methods for Reliability Data. Wiley, New York.
Miller, R., Nelson, W., 1983. Optimum simple step stress plans for accelerated life testing. IEEE Trans. Reliab. 32, 59–65.
Nelson, W., 1980. Accelerated life testing: step-stress model and data analysis. IEEE Trans. Reliab. 29, 103–108.
Nelson, W., 1990. Accelerated Testing: Statistical Models, Test Plans and Data Analysis. Wiley, New York.
Newton, M.A., Raftery, A.E., 1994. Approximate Bayesian inference by the weighted likelihood bootstrap (with discussion). J. Roy. Statist. Soc. Ser. B 56, 3–48.
Thisted, R.A., 1988. Elements of Statistical Computing: Numerical Computation. Chapman & Hall, New York.
van Dorp, J.R., Mazzuchi, T.A., 2004. A general Bayes exponential inference model for accelerated life testing. J. Statist. Plann. Inference 119, 55–74.
van Dorp, J.R., Mazzuchi, T.A., Fornell, G.E., Pollock, L.R., 1996. A Bayes approach to step-stress accelerated life testing. IEEE Trans. Reliab. 45, 491–498.
Wu, S.J., Lin, Y.P., Chen, Y.J., 2006. Planning step-stress life test with progressively type I group-censored exponential data. Statist. Neerlandica 60, 46–56.
Yeo, K.P., Tang, L.C., 1999. Planning step-stress life-test with a target accelerated-factor. IEEE Trans. Reliab. 48, 61–67.
Yu, I.T., Wang, Z.H., Cheng, S.W., Fuh, C.D., 2002. Reliability analysis for pyrotechnics products. J. Chinese Statist. Assoc. 40, 333–360.
Zhao, W., Elsayed, E.A., 2005. A general accelerated life model for step-stress testing. IIE Trans. 37, 1059–1069.
