Bayesian C-optimal life testing plans under progressive type-I interval censoring scheme


Applied Mathematical Modelling 70 (2019) 299–314


Soumya Roy a,∗, Biswabrata Pradhan b

a Indian Institute of Management Kozhikode, Kozhikode 673570, India
b SQC & OR Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata 700108, India


Article history: Received 4 June 2018; Revised 4 January 2019; Accepted 17 January 2019; Available online 23 January 2019.
Keywords: Bayesian C-optimality criterion; C-efficiency; Log-normal distribution; Metropolis-Hastings algorithm; Weibull distribution.

Abstract: This work considers optimal planning of progressive type-I interval censoring schemes for the log-location-scale family of distributions. Optimum schemes are obtained by using a Bayesian C-optimality design criterion. The C-optimality criterion is formed to attain precision in estimating a particular lifetime quantile. An algorithm is proposed to obtain the optimal censoring schemes. Optimal schemes are obtained under two different scenarios for the Weibull and log-normal models, which are two popular special cases of the log-location-scale family of distributions. A sensitivity analysis is conducted to study the effect of various prior inputs on the optimal censoring schemes. Furthermore, a simulation study is undertaken to illustrate the sampling variations resulting from the optimal censoring schemes.

1. Introduction

Testing components or systems for failures can be a time-consuming and costly affair. Thus, reliability engineers often prefer censored life tests, which produce only partial information about the failure times of the test units [1]. There are many variants even among the censoring mechanisms. Type-I and type-II are possibly the two most widely used censoring schemes in practical applications. Hybrid censoring, which is essentially a combination of conventional type-I and type-II censoring schemes, has also become extremely popular in recent times [2]. These three schemes involve a single stage of censoring and, as a result, they do not permit any intermediate withdrawal of the test units from life test experiments. However, intermediate withdrawals of the test units are essential in many applications. For instance, consider Example 2.12 presented in Chapter 2 of [1]. This example provides details of a life test experiment on 68 battery cells. The experiment was conducted at the normal usage condition through automatic cycling. However, some battery cells were removed from the ongoing life test experiment to extract information on the aging process and degradation. Sometimes removals may also happen for various other reasons (see [3] for a detailed discussion in this context). In order to facilitate intermediate withdrawals of the test units from the life test experiment, progressive type-I and type-II censoring schemes have been proposed in the literature [4]. These two progressive censoring schemes are essentially generalizations of the type-I and type-II censoring schemes, which can handle intermediate withdrawals of the test units from the life test experiment. The censoring schemes we have discussed so far assume that a continuous inspection facility is available during the life test experiment. As a result, the exact failure times are available for all the test units that have failed during the life test

∗ Corresponding author. E-mail address: [email protected] (S. Roy).

https://doi.org/10.1016/j.apm.2019.01.023


experiment. However, in practice, continuous inspection is often not possible due to resource limitations. In view of this, interval censoring (henceforth, IC) schemes have been proposed [1], which assume that failures are observed in groups only at certain inspection times. However, the traditional IC schemes do not allow intermediate withdrawals of the test units from the experiment. As a remedy, the progressive type-I interval censoring (henceforth, PIC-I) scheme is introduced in the literature [4,5]. As in conventional IC schemes, we observe the number of failures only between two successive pre-fixed inspection times in a PIC-I scheme. However, a PIC-I scheme allows intermediate withdrawals of test units at each of the pre-fixed inspection times. There have been a number of works on inference for various lifetime models based on PIC-I data. For example, classical statistical inference is first considered for the exponential lifetime distribution in [6]. Subsequently, this work is extended to the Weibull and generalized exponential models in [7] and [8], respectively. Bayesian inference for the generalized exponential and Weibull distributions is provided in [9]. Furthermore, classical and Bayesian inference are also provided for the log-normal model in [10]. Recently, a proportional hazards family of distributions is considered under the competing risks setup in [11]. However, a major issue in the context of PIC-I schemes is the design of such censoring schemes. Optimal design of these schemes has received considerable attention lately. For example, the A- and D-optimal schemes are available in [12] and [13] for the log-normal and Weibull models, respectively. A cost-function-based approach is adopted in [14] and [15]. Such censoring schemes are also obtained under the competing risks setup in [16]. Though the optimal schemes presented in these articles are quite different in nature, they have one thing in common. The design criteria used in all these articles involve the unknown model parameters, which must be appropriately specified in order to obtain the optimal schemes. For this reason, these design criteria are often referred to as local optimality criteria [17]. In practice, the values of the lifetime model parameters are unknown to the experimenter. In this context, a common practice is to employ proper prior distributions over the entire parameter space. The resulting design criteria are commonly known as Bayesian design criteria, which can formally be justified using Bayesian decision-theoretic arguments [18]. In this context, it should be mentioned that useful prior information can be extracted from past experience and expert knowledge about the failure mechanism of the test units before conducting any life test experiment [1]. The literature on optimal Bayesian PIC-I schemes is rather limited. Bayesian D-optimal schemes are obtained for the log-normal lifetime model in [19]. Note that the optimal schemes presented in [19] are mainly suited for inference on the model parameters. However, in reliability studies, a major quantity of interest is often a quantile belonging to the lower tail of a lifetime distribution [20]. Thus, it is important to consider a Bayesian design criterion that fits this purpose. In this article, we obtain the optimal PIC-I schemes using a Bayesian design criterion based on a quantile of the lifetime distribution, which is commonly known as a Bayesian C-optimality criterion [18]. We develop the method of obtaining the optimal schemes for the log-location-scale family of distributions.
A generic algorithm is proposed to obtain the optimal schemes. We consider the Weibull and log-normal distributions, which are two of the most popular special cases of the log-location-scale family of distributions, for illustration. Note that these two lifetime models possess distinctly different tail properties. Since the main goal of this article is to infer about a lifetime quantile from the lower tail, the resulting optimal schemes may be markedly different under these two lifetime models. Towards this goal, we present a comparative study of the optimal schemes under these two lifetime models in two different scenarios. Next, we perform a sensitivity analysis to assess the influence of prior information on the resulting optimal schemes. We present a comparative study to evaluate the performance of the proposed algorithm with respect to the existing algorithm. Furthermore, a detailed simulation study is undertaken to study the frequentist properties of the resulting optimal schemes.

The rest of this article is arranged as follows. We cover the preliminaries of this article in Section 2. An algorithm for obtaining the optimal PIC-I schemes is provided in Section 3. A detailed numerical illustration is presented in Section 4 with the help of a real-life example. We then carry out a sensitivity analysis in Section 5. A comparative study to evaluate the performance of the proposed algorithm is considered in Section 6. A simulation study is undertaken in Section 7 to observe the sampling variations resulting from the optimal schemes. Finally, we conclude this article in Section 8.

2. Preliminaries

In this section, we first present the lifetime model in Sub-section 2.1. Next, the Fisher information matrix is derived in Sub-section 2.2. The Bayesian method, as opposed to the frequentist method, allows formal incorporation of existing subjective knowledge through a prior distribution. Thus, Sub-section 2.3 provides the details of the prior distributions used in this article. Next, Sub-section 2.4 presents the Bayesian C-optimality design criterion which is used to obtain the optimal schemes.

2.1. Model

A PIC-I scheme can be described as follows. Suppose the life test experiment begins with n_0 identical test units at time x_0 = 0. The units on test are inspected at pre-fixed inspection times x_1 < ... < x_m. Note that x_m is the pre-specified termination time of the experiment. For j = 1, 2, ..., m, let N_j be the number of units at risk at the beginning of the j-th interval (x_{j−1}, x_j]. At x_j, we observe the number of failures D_j that have occurred in the interval (x_{j−1}, x_j]. Let S_j be the number of surviving units at x_j and R_j be the number of withdrawals from the experiment at x_j. Then we have N_1 = n_0 and N_j = N_{j−1} − D_{j−1} − R_{j−1}, for j = 2, ..., m. Also, we have S_j = N_j − D_j, for j = 1, ..., m, and R_m = S_m. The intermediate withdrawals are generally decided based on pre-fixed withdrawal proportions at the intermediate inspection times. To be precise, let p_j denote the pre-fixed withdrawal proportion at x_j. Then, R_j = ⌊S_j p_j⌋, where the notation ⌊x⌋ is used to


denote the greatest integer less than or equal to x. It is easy to see that 0 ≤ p_j < 1, for j = 1, ..., m − 1, and p_m = 1. In the remainder of this article, such a censoring scheme is also referred to as an m-point PIC-I scheme and is denoted by η_m = {x_1, ..., x_m, p_1, ..., p_{m−1}, p_m = 1}. Now, let X denote the lifetime of a unit subjected to a life test under a PIC-I scheme η_m. We further assume that the lifetime distribution of Y = log X belongs to a location-scale family, having the pdf

$$ f(y \mid \theta) = \frac{1}{\sigma}\, g\!\left(\frac{y-\mu}{\sigma}\right), \qquad (1) $$

where g(·) is a probability density function (pdf) and θ = (μ, σ), with μ and σ being the location and scale parameters, respectively. Let F(·|θ) be the distribution function of Y. It is well known that various standard lifetime models can be obtained as special cases of the above log-location-scale family. For example, we get the Weibull model by choosing g(z) = exp(z − exp(z)) and the log-normal model by choosing g(z) = (1/√(2π)) exp(−z²/2). Let G(·) and Ḡ(·) be the distribution function and survival function of the standardized random variable Z = (Y − μ)/σ, respectively. Now, let q_j denote the conditional probability that an item, which is at risk at time x_{j−1}, will fail by time x_j, for j = 1, 2, ..., m. Then

$$ q_j = \frac{G(\xi_j) - G(\xi_{j-1})}{1 - G(\xi_{j-1})} = 1 - \frac{\bar{G}(\xi_j)}{\bar{G}(\xi_{j-1})}, $$





where ξ_j = (log(x_j) − μ)/σ and ξ_{j−1} = (log(x_{j−1}) − μ)/σ. We further assume that the lifetime model (1) satisfies the regularity conditions given by [21]. For ease of reference, we provide below these regularity conditions after making necessary adjustments as per our model (1) and the corresponding notations.

Regularity conditions:
(I) The unobserved log-lifetimes Y_1, ..., Y_{n_0} are i.i.d. with common distribution function F(·|θ) and density function f(·|θ).
(II) The support of f(·|θ) is independent of θ.
(III) The parameter space Θ contains an open set Θ_0 of which the true parameter θ_0 is an interior point.
(IV) For almost all y, the distribution function F(y|θ) admits all the third-order derivatives $\partial^3 F(y\mid\theta)/\partial\theta_u\,\partial\theta_v\,\partial\theta_w$ for all θ ∈ Θ_0 and u, v, w = 1, 2. Also, all the first-, second- and third-order derivatives of F(y|θ) with respect to the parameters are bounded for all θ ∈ Θ_0.
(V) The x_j's are chosen in such a way that
  (a) 0 < q_j < 1 for j = 1, 2, ..., m;
  (b) ∇_θ q is a matrix of rank 2, where $\nabla_\theta q = \left(\left(\partial q_j / \partial \theta_u\right)\right)_{2 \times m}$ for j = 1, 2, ..., m and u = 1, 2.

2.2. Fisher information matrix

Suppose H_j is the history up to time x_j, for j = 1, ..., m. Then the data obtained under a PIC-I scheme η_m can be represented as H_m = {D_1, R_1, ..., D_m, R_m}. Now it is easy to see that

$$ D_j \mid H_{j-1} \sim \mathrm{Binom}(N_j, q_j), \quad \text{for } j = 1, 2, \ldots, m, $$



where H_{j−1} = {D_1, R_1, ..., D_{j−1}, R_{j−1}} and H_0 is empty. Then E[N_1] = n_0, E[D_1] = n_0 q_1 and E[R_1] = n_0 (1 − q_1) p_1. Furthermore, for j = 2, ..., m,

$$ E[N_j] = n_0 \prod_{l=1}^{j-1} (1 - p_l)(1 - q_l), $$

$$ E[D_j] = E[N_j]\, q_j = n_0 \prod_{l=1}^{j-1} (1 - p_l)(1 - q_l)\, q_j, $$

and

$$ E[R_j] = E[N_j]\,(1 - q_j)\, p_j = n_0 \prod_{l=1}^{j-1} (1 - p_l)(1 - q_l)\,(1 - q_j)\, p_j. $$

See [21] for further details. Let n_j and d_j be the realizations of N_j and D_j, respectively. Then, the likelihood function of θ can be written as

$$ L(\theta) \propto \prod_{j=1}^{m} q_j^{d_j} (1 - q_j)^{n_j - d_j}. \qquad (2) $$


After ignoring the constant of proportionality, the log-likelihood function is given by

$$ \ell(\theta) = \sum_{j=1}^{m} \left[ d_j \log q_j + (n_j - d_j) \log(1 - q_j) \right]. $$
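For concreteness, the conditional probabilities q_j and the log-likelihood above can be evaluated directly from the interval counts. The following is a minimal R sketch (R is the language used for the computations reported later in the paper); the function names are ours, the standardized cdfs follow the Weibull/log-normal choices of Section 2.1, and the example parameter values are only illustrative guesses, not the paper's estimates.

```r
## Minimal R sketch: conditional failure probabilities q_j and the
## PIC-I log-likelihood (2) for a log-location-scale model.
G_sev <- function(z) 1 - exp(-exp(z))   # standardized cdf, Weibull case
G_nor <- pnorm                          # standardized cdf, log-normal case

## q_j = 1 - Gbar(xi_j)/Gbar(xi_{j-1}), with xi_j = (log(x_j) - mu)/sigma
## and x_0 = 0, so that xi_0 = -Inf and Gbar(xi_0) = 1.
cond_prob <- function(x, mu, sigma, G) {
  xi <- (log(x) - mu) / sigma
  Gb <- 1 - G(c(-Inf, xi))              # survival at xi_0, xi_1, ..., xi_m
  1 - Gb[-1] / Gb[-length(Gb)]
}

## Log-likelihood: sum_j [ d_j log q_j + (n_j - d_j) log(1 - q_j) ].
loglik_pic1 <- function(theta, x, d, n_at_risk, G) {
  q <- cond_prob(x, theta[1], theta[2], G)
  sum(d * log(q) + (n_at_risk - d) * log(1 - q))
}

## Illustration with the V7 vacuum-tube layout of Table 1 (the numbers at
## risk follow from the deaths/withdrawals there); theta is a rough guess.
x <- c(25, 50, 75, 100); d <- c(109, 42, 17, 7); n_at_risk <- c(188, 79, 37, 20)
loglik_pic1(c(3.0, 1.07), x, d, n_at_risk, G = G_nor)
```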

Now let I(θ ; ηm ) denote the Fisher information matrix of θ under a PIC-I scheme ηm . Then, by definition,

$$ I(\theta; \eta_m) = \begin{bmatrix} E\!\left[-\dfrac{\partial^2 \ell(\theta)}{\partial \mu^2}\right] & E\!\left[-\dfrac{\partial^2 \ell(\theta)}{\partial \mu\, \partial \sigma}\right] \\[2ex] E\!\left[-\dfrac{\partial^2 \ell(\theta)}{\partial \mu\, \partial \sigma}\right] & E\!\left[-\dfrac{\partial^2 \ell(\theta)}{\partial \sigma^2}\right] \end{bmatrix} = \begin{bmatrix} I_{11} & I_{12} \\ I_{12} & I_{22} \end{bmatrix}, $$

where

$$ I_{11} = E\!\left[-\frac{\partial^2 \ell(\theta)}{\partial \mu^2}\right] = \sum_{j=1}^{m} E[N_j]\, \frac{1}{q_j (1 - q_j)} \left(\frac{\partial q_j}{\partial \mu}\right)^{2}, $$

$$ I_{12} = E\!\left[-\frac{\partial^2 \ell(\theta)}{\partial \mu\, \partial \sigma}\right] = \sum_{j=1}^{m} E[N_j]\, \frac{1}{q_j (1 - q_j)} \left(\frac{\partial q_j}{\partial \mu}\right)\left(\frac{\partial q_j}{\partial \sigma}\right), $$

$$ I_{22} = E\!\left[-\frac{\partial^2 \ell(\theta)}{\partial \sigma^2}\right] = \sum_{j=1}^{m} E[N_j]\, \frac{1}{q_j (1 - q_j)} \left(\frac{\partial q_j}{\partial \sigma}\right)^{2}, $$

with

$$ \frac{\partial q_j}{\partial \mu} = \frac{\bar{G}(\xi_j)\, g(\xi_{j-1}) - \bar{G}(\xi_{j-1})\, g(\xi_j)}{\sigma \left[\bar{G}(\xi_{j-1})\right]^{2}} \quad \text{and} \quad \frac{\partial q_j}{\partial \sigma} = \frac{\xi_{j-1}\, \bar{G}(\xi_j)\, g(\xi_{j-1}) - \xi_j\, \bar{G}(\xi_{j-1})\, g(\xi_j)}{\sigma \left[\bar{G}(\xi_{j-1})\right]^{2}}. $$
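A direct numerical translation of these expressions is sketched below in R. It assembles E[N_j], the derivatives of q_j and the 2×2 matrix I(θ; η_m); the defaults use the Weibull (smallest-extreme-value) standardization, and the helper name as well as the example parameter values (roughly the Weibull values implied later by Table 2) are ours.

```r
## Minimal R sketch: expected Fisher information I(theta; eta_m) under a
## PIC-I scheme eta_m = (x_1 < ... < x_m; p_1, ..., p_{m-1}, p_m = 1).
## Defaults are the SEV (Weibull) standardization; pass G = pnorm and
## gpdf = dnorm for the log-normal model.
fisher_info_pic1 <- function(mu, sigma, x, p, n0,
                             G    = function(z) 1 - exp(-exp(z)),
                             gpdf = function(z) exp(z - exp(z))) {
  m    <- length(x)
  xi   <- (log(x) - mu) / sigma
  xim1 <- c(-Inf, xi[-m])                       # xi_0, ..., xi_{m-1}
  Gb1  <- 1 - G(xim1); Gb2 <- 1 - G(xi)         # survival at xi_{j-1}, xi_j
  g1   <- gpdf(xim1);  g2  <- gpdf(xi)
  g1[1] <- 0                                    # g(-Inf) = 0
  q    <- 1 - Gb2 / Gb1
  ## Derivatives of q_j w.r.t. mu and sigma (expressions above); the j = 1
  ## term xi_0 * g(xi_0) vanishes, hence the replace() guard.
  dq_dmu <- (Gb2 * g1 - Gb1 * g2) / (sigma * Gb1^2)
  dq_dsg <- (replace(xim1, 1, 0) * Gb2 * g1 - xi * Gb1 * g2) / (sigma * Gb1^2)
  ## Expected numbers at risk: E[N_1] = n0, E[N_j] = n0 prod_{l<j}(1-p_l)(1-q_l).
  EN <- n0 * c(1, cumprod((1 - p[-m]) * (1 - q[-m])))
  w  <- EN / (q * (1 - q))
  I11 <- sum(w * dq_dmu^2); I12 <- sum(w * dq_dmu * dq_dsg); I22 <- sum(w * dq_dsg^2)
  matrix(c(I11, I12, I12, I22), nrow = 2)
}

## Example: a 4-point scheme at 25, 50, 75, 100 with no intermediate
## withdrawals; (mu, sigma) are rough Weibull values implied by Table 2.
fisher_info_pic1(mu = 3.38, sigma = 1.21, x = c(25, 50, 75, 100),
                 p = c(0, 0, 0, 1), n0 = 188)
```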

Following Lemma 4 from Section 2 in [21], we now note that the information matrix I(θ; η_m) is a positive-definite (pd) matrix for m ≥ 2.

2.3. Prior distribution

Let π(θ) be the joint prior density of θ = (μ, σ). It is generally very difficult to directly specify π(θ). In order to circumvent this issue, engineers typically express their subjective knowledge through various other lifetime parameters, which can be independently specified in real-life applications. For example, [1] suggested the use of a small system lifetime quantile and the scale parameter σ in the context of Bayesian inference of the log-location-scale family of lifetime distributions. Following their suggestion, we assume that the prior information can be independently elicited for y_{p^(0)} and σ, where y_{p^(0)} = μ + σ G^{-1}(p^(0)) is the p^(0)-th quantile of the log-lifetime distribution. Typically, p^(0) is fixed in such a way that y_{p^(0)} represents some small quantile of the lifetime distribution [1]. Note that y_{p^(0)} can take any value on R. Thus, any density with R as its support may be selected as a prior distribution for y_{p^(0)}. For subsequent numerical computations, we mainly work with a normal prior distribution for y_{p^(0)}. Also, the scale parameter σ is a non-negative real number. Thus, one may work with any density having its support over (0, ∞) as the prior for σ. In this article, a gamma prior distribution is used for σ. Now, it is straightforward to obtain the joint prior distribution of μ and σ by using the standard change of variables technique.

2.4. Bayesian design criterion

A number of Bayesian design criteria have been presented in the literature. Each of these design criteria is obtained keeping in mind the basic objective behind experimentation. See [18] for an extensive review of various Bayesian design criteria and their decision-theoretic justifications. In reliability studies, a major quantity of interest is often a quantile lying in the lower tail of the lifetime distribution. Thus, a major reason for experimentation may be to estimate a lifetime quantile x_p, where p is such that x_p belongs to the lower tail of the lifetime distribution. Since we are mainly dealing here with the log-location-scale family of distributions, it is reasonable to work with log(x_p) [20], which is given by

$$ \log(x_p) = \mu + G^{-1}(p)\,\sigma = y_p(\theta). $$

The Bayesian design criterion considered in this article is based on the pre-posterior expectation of the posterior variance of y_p(θ) [20] and is given by

$$ \phi_c(\eta) = \int \left[ c^{T} I^{-1}(\theta, \eta)\, c \right] \pi(\theta)\, d\theta, $$

where η ∈ {η_2, ..., η_m} and $c = \left(\partial y_p(\theta)/\partial \mu,\; \partial y_p(\theta)/\partial \sigma\right) = \left(1,\; G^{-1}(p)\right)$. This design criterion is in fact a well-known Bayesian C-optimality design criterion. See [18] for a detailed discussion on the background theory for this design criterion.


Furthermore, this design criterion may also be viewed as a natural extension of the local C-optimality criterion [18]. Now, in order to obtain the optimal PIC-I scheme η*, we must minimize φ_c(η) over all possible η ∈ {η_2, η_3, ...}. The resulting optimal scheme η* is then expected to improve the estimation precision of the p-th quantile of the lifetime distribution.

3. Optimal PIC-I schemes

In this section, the Bayesian method for designing optimal PIC-I schemes is presented in detail. Note that the main focus of this article is to obtain the optimal PIC-I schemes for a fixed sample size n_0. Thus, in order to compute the optimal schemes, the sample size n_0 is first fixed. Next, for a fixed n_0, the design criterion φ_c(η) must be minimized over all possible η ∈ {η_2, η_3, ...}. However, as the following result shows, it is not possible to obtain the globally optimal scheme η*.

Result 1: Let η_m = {x_1, ..., x_m, p_1, ..., p_{m−1}, p_m = 1} and η_{m+1} = {x_1, ..., x_m, x_{m+1}, p_1, ..., p_m, p_{m+1} = 1}. Then

$$ \phi_c(\eta_{m+1}) \le \phi_c(\eta_m). $$

See Appendix A for a proof. The above result shows that as m, the number of inspections, goes up, the value of the design criterion decreases. Thus, it is not possible to find the globally optimal PIC-I scheme irrespective of the choice of m. We therefore start with m = 2 and find the 2-point optimal scheme η_2*. Next, we obtain the 3-point optimal scheme η_3* and check the relative efficiency of η_2* with respect to η_3*. If the relative efficiency of η_2* achieves the desired level, we stop and accept η_2* as the optimal scheme. Otherwise, we find the 4-point optimal scheme η_4* and examine the relative efficiency of η_3* with respect to η_4*. If η_3* attains the desired efficiency, then it is accepted as the optimal scheme. Otherwise, this process is continued till we find an optimal PIC-I scheme with the desired level of efficiency. Note that the relative efficiency of η_m* with respect to η_{m+1}* is given by [22]

$$ E_c = \frac{\phi_c\!\left(\eta_{m+1}^{*}\right)}{\phi_c\!\left(\eta_{m}^{*}\right)}. \qquad (3) $$

In order to obtain the m-point optimal PIC-I scheme η_m* = {x_1*, ..., x_m*, p_1*, ..., p_{m−1}*, p_m = 1}, we need to solve the following optimization problem:

$$ \underset{\eta_m}{\text{minimize}} \quad \phi_c(\eta_m) \quad \text{subject to} \quad 0 < x_1 < \cdots < x_m < \infty, \quad 0 \le p_j < 1, \; j = 1, \ldots, m-1, \quad p_m = 1, \qquad (4) $$

where

$$ \phi_c(\eta_m) = \int \left[ c^{T} I^{-1}(\theta, \eta_m)\, c \right] \pi(\theta)\, d\theta. \qquad (5) $$

It is easy to understand that the above optimization problem involves 2m − 1 decision variables. However, as the following result shows, it is possible to reduce the dimension of the above optimization problem from 2m − 1 to m.

Result 2: Let η_m = {x_1, ..., x_m, p_1, ..., p_{m−1}, p_m = 1} and η_m^0 = {x_1, ..., x_m, p_1 = 0, ..., p_{m−1} = 0, p_m = 1}. Then

$$ \phi_c(\eta_m^0) \le \phi_c(\eta_m). $$

See Appendix B for a proof. The above result implies that the optimal solution obtained from (4) will not allow any intermediate withdrawals, so the resulting optimal PIC-I schemes will essentially be IC schemes. Thus, in order to find the optimal PIC-I scheme, the intermediate withdrawal proportions must be fixed at the outset. The optimal inspection points are then determined by solving the optimization problem provided in (4), which now involves only m decision variables x_1, ..., x_m. Furthermore, note that the design criterion φ_c(η_m) given in (5) involves two-dimensional integrals. Towards this end, we use the following approximation [23]:

$$ \phi_c(\eta_m) \approx \phi_c^{\text{Approx}}(\eta_m) = \frac{1}{n} \sum_{i=1}^{n} c^{T} I^{-1}(\theta_i, \eta_m)\, c, \qquad (6) $$

where {θ_1, ..., θ_n} is a sample of size n from the joint prior distribution π(θ), with n being sufficiently large. Now, for convenience, Algorithm 1 summarizes the entire process discussed above.

4. Numerical illustration

In this section, we provide numerical illustrations of constructing Bayesian C-optimal PIC-I schemes for the Weibull and log-normal lifetime models. As mentioned in Section 2.1, these two lifetime models are special cases of the log-location-scale family of distributions given by (1). Furthermore, it is easy to see that both these models satisfy the regularity conditions stated in Section 2.1.


Algorithm 1: Optimal PIC-I scheme.
1: Draw a sample {θ_1, ..., θ_n} from the joint prior distribution π(θ).   ▷ n should be sufficiently large
2: m ← 2.
3: Fix p_1.   ▷ p_1 is pre-fixed due to Result 2
4: Approximate φ_c(η_2) by φ_c^Approx(η_2).   ▷ φ_c^Approx(η_2) is given by (6) with m = 2
5: Minimize φ_c^Approx(η_2), subject to the constraints 0 < x_1 < x_2 < ∞, to obtain η_2*.
6: φ_c^New ← φ_c^Approx(η_2*).
7: η_2^New ← η_2*.
8: E_c ← 0.   ▷ Initialization
9: while E_c < E_0 do   ▷ E_0 is pre-fixed, typically a fraction close to 1
10:   φ_c^Old ← φ_c^New.
11:   η_m^Old ← η_m^New.
12:   m ← m + 1.
13:   Fix p_1, ..., p_{m−1}.   ▷ p_1, ..., p_{m−1} are pre-fixed due to Result 2
14:   Approximate φ_c(η_m) by φ_c^Approx(η_m).
15:   Minimize φ_c^Approx(η_m), subject to the constraints 0 < x_1 < x_2 < ... < x_m < ∞, to obtain η_m*.
16:   φ_c^New ← φ_c^Approx(η_m*).
17:   η_m^New ← η_m*.
18:   E_c ← φ_c^New / φ_c^Old.   ▷ E_c is given by (3)
19: Return η_m^Old as the optimal PIC-I scheme.
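To make the algorithm concrete, the sketch below implements the Monte Carlo approximation (6) and a one-dimensional search over the common spacing x_0 of an equispaced 4-point scheme for the Weibull model. It is only a sketch under stated assumptions: it reuses a routine such as fisher_info_pic1() from the Section 2.2 sketch, the gamma shape/rate values are obtained by matching the prior moments reported in Table 2 (the paper does not state the hyper-parameters in this form), and base R's optimize() stands in for the nloptr-based optimization actually used in the paper.

```r
## Minimal R sketch: Bayesian C-criterion (6) and the ES spacing search.
## Assumes fisher_info_pic1() from the Section 2.2 sketch is in scope.
set.seed(1)
n_prior <- 1000                                   # prior Monte Carlo size (illustrative)
p_elic  <- 0.01                                   # quantile used for prior elicitation
z_elic  <- log(-log(1 - p_elic))                  # G^{-1}(p^(0)), SEV (Weibull) case

## Prior draws: y_{p^(0)} ~ Normal, sigma ~ Gamma (moments matched to Table 2),
## then mu = y_{p^(0)} - sigma * G^{-1}(p^(0)) by the change of variables.
sig_pr <- rgamma(n_prior, shape = 98, rate = 81)  # mean ~1.21, SD ~0.12
yp_pr  <- rnorm(n_prior, mean = -2.182, sd = 0.622)
mu_pr  <- yp_pr - sig_pr * z_elic

cvec <- c(1, log(-log(1 - 0.10)))                 # c = (1, G^{-1}(0.10)) for y_{0.10}

phi_c <- function(x, p, n0 = 188) {               # phi_c^Approx(eta_m), Eq. (6)
  vals <- vapply(seq_len(n_prior), function(i) {
    I <- fisher_info_pic1(mu_pr[i], sig_pr[i], x, p, n0)
    drop(crossprod(cvec, solve(I, cvec)))         # c' I^{-1}(theta_i, eta_m) c
  }, numeric(1))
  mean(vals)
}

## ES scheme with m = 4 and no intermediate withdrawals: x_i = i * x0.
phi_es4 <- function(x0) phi_c(x = (1:4) * x0, p = c(0, 0, 0, 1))
optimize(phi_es4, interval = c(0.5, 8))$minimum   # optimal common spacing x0
```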

Table 1
Data set for V7 transmitter vacuum tubes.

Inspection time   Number of deaths   Number of withdrawals
25                109                0
50                42                 0
75                17                 0
100               7                  13

Table 2
Prior means and SDs of y_{0.01} and σ.

            Weibull               Log-normal
Parameter   Mean      SD          Mean     SD
y_{0.01}    −2.182    0.622       0.508    0.332
σ           1.208     0.122       1.072    0.111

This implies that the corresponding Fisher information matrix is pd in each case. Thus, both Results 1 and 2 provided in Section 3 hold true for these two models. Now suppose the main goal behind the experimentation is to estimate the 0.10-th quantile of the lifetime distribution. For the purpose of illustration, we use an example provided by [1], which gives an interval-censored data set for “V7” transmitter vacuum tubes. This data set is reproduced in Table 1 for quick reference. This example is primarily used here to obtain a sensible joint prior distribution for the model parameters θ. Furthermore, as in the vacuum tube data set, we assume that n_0 = 188 systems are available for the PIC-I experiment. As discussed in Section 2.3, we employ normal and gamma distributions as priors for y_{p^(0)} and σ, respectively, where

p^(0) = 0.01 as suggested by Meeker and Escobar [1]. Next, the prior hyper-parameters are fixed as follows. In the absence of any other reliable subjective information, the vacuum tube data set is used as a source of subjective information on y_{p^(0)} and σ. The hyper-parameters for the normal and gamma priors are then chosen in such a way that the prior means and standard deviations (SDs) tally with the respective maximum likelihood estimates (MLEs) and estimated standard errors (SEs), which are computed using the vacuum tube data set for each lifetime model. The MLE and estimated SE of y_{p^(0)} are computed here using the delta method. The prior means and SDs for y_{p^(0)} and σ are provided in Table 2 for both the lifetime models. The optimal schemes are obtained here under two different scenarios. In the first case, the inspection times are assumed to be equispaced, i.e., for i = 1, ..., m, x_i = i x_0, where x_0 is the common time interval between two consecutive inspection times. Such schemes are commonly known as equi-spaced (henceforth, ES) PIC-I schemes. These schemes are very popular in real-life applications, mainly for their practical convenience. In the second case, however, the assumption of equal inspection time intervals is relaxed. The resulting schemes are referred to as optimally spaced (henceforth, OS) PIC-I schemes.

Fig. 1. Plots of φ_c for 2-point ES schemes with p_0 = 0.0: (a) Weibull model; (b) log-normal model.

Table 3
Optimal ES schemes with p_0 = 0.0.

     Weibull                              Log-normal
m    x_0     φ_c(η_m*)×100   E_c          x_0     φ_c(η_m*)×100   E_c
2    1.732   9.015           0.859        3.772   1.759           0.868
3    1.408   7.745           0.934        2.942   1.527           0.941
4    1.211   7.237           0.960        2.437   1.437           0.967
5    1.078   6.951           0.973        2.094   1.389           0.979
6    0.985   6.762           0.980        1.863   1.360           0.985
7    0.907   6.626           0.984        1.677   1.340           0.989
8    0.851   6.524           0.988        1.546   1.325           0.992
9    0.819   6.442           0.990        1.423   1.315           0.994
10   0.791   6.375           -            1.337   1.306           -

For convenience, we assume here that the intermediate withdrawal proportions are all equal, i.e., p_1 = ... = p_{m−1} = p_0, where p_0 is pre-fixed. Furthermore, for the purpose of illustration, we present here optimal PIC-I schemes with p_0 = 0.0 and p_0 = 0.2. The resulting schemes are essentially IC schemes for p_0 = 0.0. Note that we must solve the optimization problem in (4) to obtain the optimal schemes in both the cases. Towards this end, the R library “nloptr” is used. The “nloptr” library is basically an R interface to the well-known open-source library “nlopt”, which can handle non-linear optimization problems involving various types of constraints.

We now present the optimal ES schemes for each fixed m. Note that the ES schemes involve only one decision variable x_0. Fig. 1 shows the plot of the design criterion φ_c(η_2) against x_0 for both the lifetime models with p_0 = 0.0. It is obvious that φ_c(η_2) is convex with respect to x_0. This holds true even for p_0 = 0.2. Furthermore, the same pattern is also observed for m > 2. This suggests that we can indeed obtain globally optimal solutions for both the models. Tables 3 and 4 present the optimal ES schemes for both the models, assuming p_0 = 0.0 and p_0 = 0.2, respectively. Furthermore, Tables 3 and 4 also provide the optimal values of the design criterion as well as the relative efficiencies of the resulting optimal schemes. For p_0 = 0.0, the optimal values of x_0 decrease with increase in m for both the models. Thus, one must carry out the inspections more frequently as the number of inspections goes up. However, for p_0 = 0.2, the optimal value of x_0 first decreases with the increase in m and then changes its direction as the number of inspections is further increased. For the Weibull model, the optimal value of x_0 increases as the number of inspections is increased from 6 to 7, whereas for the log-normal model, the optimal value of x_0 increases as the number of inspections is increased from 5 to 6. Also, the optimum inspection interval under the log-normal model is relatively wider than that under the Weibull model for both p_0 = 0.0 and p_0 = 0.2. As expected, the optimal value of the design criterion goes down with increase in m for both the models. For a fixed m, however, the optimal values of the design criterion are relatively larger with p_0 = 0.2 for both the models. Furthermore, the relative efficiency of the optimal scheme η_m* with respect to η_{m+1}* improves with the increase in m for both the models. However, the rate of improvement diminishes as the number of inspections goes up.

Table 4
Optimal ES schemes with p_0 = 0.2.

     Weibull                              Log-normal
m    x_0     φ_c(η_m*)×100   E_c          x_0     φ_c(η_m*)×100   E_c
2    1.846   9.430           0.869        4.031   1.849           0.897
3    1.620   8.197           0.946        3.424   1.659           0.967
4    1.497   7.756           0.971        3.151   1.603           0.986
5    1.452   7.533           0.982        3.051   1.581           0.993
6    1.444   7.399           0.988        3.058   1.570           0.996
7    1.459   7.309           0.991        3.071   1.564           0.997
8    1.466   7.246           0.994        3.085   1.559           0.998
9    1.491   7.199           0.995        3.095   1.556           0.998
10   1.520   7.163           -            3.109   1.553           -

Fig. 2. Plots of φ_c for 2-point OS schemes with p_0 = 0.0: (a) Weibull model; (b) log-normal model.

Table 5
OS schemes with p_0 = 0.0.

Weibull
m    Inspection times                                 φ_c(η_m*)×100   E_c
2    (0.687, 5.049)                                   7.237           0.800
3    (1.590, 12.537, 112.751)                         5.788           0.900
4    (0.799, 5.104, 23.509, 152.793)                  5.207           0.950
5    (0.535, 3.336, 12.427, 49.151, 206.605)          4.947           -

Log-normal
m    Inspection times                                 φ_c(η_m*)×100   E_c
2    (2.697, 8.654)                                   1.612           0.888
3    (1.876, 4.511, 10.718)                           1.432           0.919
4    (2.227, 5.614, 14.056, 138.044)                  1.315           0.957
5    (1.706, 3.837, 7.779, 17.072, 147.559)           1.259           -

Following the algorithm presented in Section 3, it seems that the optimal schemes with three inspections (the schemes with m = 3 in Tables 3 and 4) will be sufficient to achieve the desired objective for both the Weibull and log-normal models, irrespective of the choice of p_0. Next, the OS schemes are obtained for both the models. As discussed in Section 3, the optimization problem for the OS schemes involves m decision variables x_1, ..., x_m. For m = 2, Fig. 2 shows the plot of the design criterion φ_c(η_2) against x_1 and x_2 for both the lifetime models with p_0 = 0.0. It is easy to see that φ_c(η_2) is convex for both the models with p_0 = 0.0. This is true even for p_0 = 0.2. However, for m > 2, such graphical checks are not feasible. As is standard in such cases, a number of starting values are tried for m > 3 just to confirm the optimality of the resulting schemes.


Table 6
OS schemes with p_0 = 0.2.

Weibull
m    Inspection times                                 φ_c(η_m*)×100   E_c
2    (0.965, 6.583)                                   7.526           0.827
3    (1.940, 14.037, 117.998)                         6.227           0.949
4    (1.618, 8.359, 36.074, 175.713)                  5.907           0.980
5    (1.654, 7.938, 29.251, 109.539, 282.247)         5.791           -

Log-normal
m    Inspection times                                 φ_c(η_m*)×100   E_c
2    (3.139, 9.885)                                   1.704           0.934
3    (2.932, 6.756, 16.061)                           1.591           0.945
4    (3.289, 7.795, 17.525, 126.832)                  1.503           0.991
5    (3.349, 7.864, 17.028, 80.998, 230.684)          1.489           -

Table 7
Relative efficiency of the existing life test plan for V7 transmitter vacuum tubes with respect to 4-point optimal schemes.

              p_0 = 0.0          p_0 = 0.2
Model         ES       OS        ES       OS
Weibull       0.477    0.343     0.511    0.389
Log-normal    0.263    0.241     0.293    0.275

Tables 5 and 6 present the OS schemes with p_0 = 0.0 and p_0 = 0.2, respectively. The optimal values of the design criterion φ_c and the relative efficiencies of the resulting OS schemes are also included in Tables 5 and 6. For a fixed m, the resulting scheme under the Weibull model is significantly different from the one obtained under the log-normal model. This is true for both p_0 = 0.0 and p_0 = 0.2. Furthermore, the optimal values of the design criterion go down as the number of inspections is increased for both the lifetime models. However, the optimal values of the design criterion are slightly higher for p_0 = 0.2 as compared to those for p_0 = 0.0. Also, for each fixed m, the optimal values of the design criterion in this case are significantly lower than the optimal values in the first case under both the models. This clearly suggests that the OS schemes are more efficient as compared to their ES counterparts. Furthermore, as per the algorithm, the optimal scheme with three inspections should be sufficient for the Weibull model. However, for the log-normal model, one may go ahead with a scheme with only two inspections. These optimal schemes correspond to the m = 3 (Weibull) and m = 2 (log-normal) rows of Tables 5 and 6.

Now we demonstrate the utility of the optimal ES and OS schemes over traditional non-optimal life test plans. Towards this goal, we compute the relative efficiencies of the existing life test plan with respect to the 4-point optimal ES and OS schemes presented above. Note that the existing life test plan for transmitter vacuum tubes comprises four inspections at 25, 50, 75 and 100 days without any intermediate withdrawal. Table 7 reports the relative efficiencies of the existing life test plan with respect to the 4-point optimal ES and OS schemes. It is obvious that the existing life test plan is highly inefficient for both lifetime models, irrespective of the choice of p_0.

5. Sensitivity analysis

The optimal schemes presented above are obtained using certain planning inputs such as the sample size and prior information on the lifetime parameters y_{p^(0)} and σ. Thus, it is important to assess the effect of these planning inputs on the resulting optimal schemes. This section carries out a sensitivity analysis only with respect to the prior information on y_{p^(0)} and σ. Furthermore, for brevity, the sensitivity analysis is restricted here only to the case of ES schemes. In order to carry out the sensitivity analysis with respect to the prior information, normal and gamma distributions are again used as priors for y_{p^(0)} and σ. However, the prior means and SDs are varied in the following manner. For both y_{p^(0)} and σ, we consider the following three levels of prior means: Case 1: MLE − SE, Case 2: MLE, and Case 3: MLE + SE. Similarly, we work with three different levels of prior SDs for both y_{p^(0)} and σ, which are as follows: Case 1: 0.75 × SE, Case 2: SE, and Case 3: 1.25 × SE. The MLEs and SEs of y_{p^(0)} and σ are already provided in Table 2. Note that the three different levels of prior means and prior SDs essentially provide us nine different combinations, resulting in nine different prior distributions. Optimal schemes are obtained using the above-mentioned prior distributions under both Weibull and log-normal lifetime models with p_0 = 0.0 and p_0 = 0.2. Tables 8 and 9 provide the details of the optimal 4-point schemes for p_0 = 0.0 and p_0 = 0.2, respectively.

Table 8
Optimal ES schemes under different prior hyper-parameters for m = 4 and p_0 = 0.0.

        Weibull                               Log-normal
Prior   x_0     φ_c(η_m*)×100   E_c           x_0     φ_c(η_m*)×100   E_c
(1,1)   0.424   5.569           0.967         1.452   1.118           0.973
(1,2)   0.506   5.777           0.958         1.560   1.149           0.967
(1,3)   0.608   6.054           0.950         1.694   1.195           0.960
(2,1)   1.027   6.962           0.968         2.279   1.397           0.972
(2,2)   1.211   7.237           0.960         2.437   1.437           0.967
(2,3)   1.416   7.532           0.952         2.633   1.489           0.960
(3,1)   2.523   8.509           0.969         3.588   1.712           0.972
(3,2)   2.893   8.859           0.962         3.805   1.761           0.967
(3,3)   3.398   9.220           0.954         4.083   1.818           0.960

Table 9
Optimal ES schemes under different prior hyper-parameters for m = 4 and p_0 = 0.2.

        Weibull                               Log-normal
Prior   x_0     φ_c(η_m*)×100   E_c           x_0     φ_c(η_m*)×100   E_c
(1,1)   0.541   5.960           0.978         1.986   1.256           0.992
(1,2)   0.628   6.199           0.969         2.044   1.289           0.987
(1,3)   0.720   6.553           0.962         2.101   1.342           0.981
(2,1)   1.306   7.439           0.980         3.040   1.561           0.991
(2,2)   1.497   7.756           0.971         3.151   1.603           0.986
(2,3)   1.716   8.132           0.963         3.277   1.663           0.980
(3,1)   3.183   9.076           0.980         4.720   1.903           0.991
(3,2)   3.624   9.483           0.973         4.889   1.956           0.986
(3,3)   4.155   9.932           0.965         5.095   2.021           0.980

Fig. 3. Optimal time intervals for ES schemes for m = 4 and p0 = 0.0.

The prior combination (i, j) refers to prior distributions for y_{p^(0)} and σ having means and SDs as in Case i and Case j, respectively, for i = 1, 2, 3 and j = 1, 2, 3. Figs. 3 and 4 further show the optimal solutions for convenience. It is easy to see that the optimal 4-point schemes will be sufficient to achieve the desired objectives under both the models. Furthermore, for fixed prior SDs, the optimal time interval x_0 increases as the prior means are increased under both the models, irrespective of the choice of p_0. A similar pattern is observed as the prior SDs are increased, keeping the prior means fixed. For each prior combination, the equispaced inspections are closer under the Weibull model as compared to the log-normal model. Moreover, for each prior combination, the 4-point optimal scheme is more efficient under the log-normal model than under the Weibull model. Also, the optimal inspection intervals are relatively wider with p_0 = 0.2 for both the models under each prior combination.

Fig. 4. Optimal time intervals for ES schemes for m = 4 and p_0 = 0.2.

Table 10
Optimal ES schemes under new prior distributions for m = 4 and p_0 = 0.0.

        Weibull                               Log-normal
Prior   x_0     φ_c(η_m*)×100   E_c           x_0     φ_c(η_m*)×100   E_c
(1,1)   0.438   5.514           0.964         1.464   1.102           0.972
(1,2)   0.560   5.683           0.955         1.593   1.127           0.966
(1,3)   0.681   5.849           0.945         1.733   1.158           0.958
(2,1)   1.046   6.910           0.966         2.300   1.380           0.972
(2,2)   1.284   7.158           0.957         2.480   1.414           0.965
(2,3)   1.589   7.389           0.948         2.676   1.454           0.958
(3,1)   2.513   8.460           0.967         3.607   1.693           0.971
(3,2)   2.976   8.793           0.959         3.858   1.736           0.965
(3,3)   3.690   9.110           0.951         4.134   1.787           0.958

Table 11
Optimal ES schemes under new prior distributions for m = 4 and p_0 = 0.2.

        Weibull                               Log-normal
Prior   x_0     φ_c(η_m*)×100   E_c           x_0     φ_c(η_m*)×100   E_c
(1,1)   0.558   5.900           0.976         1.977   1.238           0.992
(1,2)   0.676   6.100           0.966         2.052   1.265           0.986
(1,3)   0.762   6.353           0.958         2.123   1.302           0.980
(2,1)   1.315   7.385           0.977         3.046   1.542           0.991
(2,2)   1.583   7.676           0.968         3.169   1.577           0.985
(2,3)   1.829   7.993           0.959         3.300   1.625           0.979
(3,1)   3.171   9.029           0.979         4.723   1.882           0.990
(3,2)   3.672   9.422           0.970         4.909   1.929           0.984
(3,3)   4.374   9.833           0.961         5.122   1.989           0.978

Though the level of prior information is varied in the above sensitivity analysis through different choices of prior means and SDs, it still assumes normal and gamma distributions as priors for y_{p^(0)} and σ, respectively. The effect of assuming specific parametric forms for the priors of y_{p^(0)} and σ is now studied in detail. The optimal schemes are now obtained using a uniform prior for y_{p^(0)} and a log-normal prior for σ. The prior means and SDs are again varied exactly in the same manner as discussed above. Tables 10 and 11 report the details of the optimal 4-point schemes for both Weibull and log-normal models with p_0 = 0.0 and p_0 = 0.2, respectively. Furthermore, the optimal solutions are plotted in Figs. 5 and 6. It is easy to see that there is hardly any significant change in the optimal values of x_0 for both the models. Furthermore, there is hardly any change in any of the findings presented earlier due to the change of prior distributions for y_{p^(0)} and σ.


Fig. 5. Optimal time intervals for ES schemes with new prior distributions for m = 4 and p_0 = 0.0.

Fig. 6. Optimal time intervals for ES schemes with new prior distributions for m = 4 and p_0 = 0.2.

6. Comparison with existing algorithm

In this section, we present a comparative study to evaluate the performance of Algorithm 1, provided in Section 3. Note that there is hardly any work available on the optimal Bayesian design of PIC-I schemes, except for [19]. Thus, we restrict our study to the comparison of Algorithm 1 with the algorithm presented in [19]. For ease of reference, we will subsequently use the name “Algorithm 0” for the algorithm provided in [19]. Note that Algorithm 0 is employed to obtain the Bayesian D-optimal PIC-I scheme. Though Algorithm 1 is structurally similar to Algorithm 0, it differs in one important aspect. Algorithm 0 does not pre-specify the values of the p_i's, for i = 1, ..., m − 1. As a result, Algorithm 0 involves an additional m − 1 decision variables as compared to Algorithm 1. Even if we assume p_1 = ... = p_{m−1}, Algorithm 0 will still have an extra decision variable for both ES and OS schemes. Thus, Algorithm 1 is expected to be more efficient in practical implementations.
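As noted below, the elapsed times in Tables 12 and 13 are measured with R's proc.time(); a typical measurement wrapper is sketched here, with the actual optimization call left as a placeholder.

```r
## Minimal R sketch: timing one optimization run (cf. Tables 12 and 13).
t0 <- proc.time()
## ... run the m-point scheme optimization here ...
elapsed_sec <- (proc.time() - t0)["elapsed"]   # wall-clock seconds
elapsed_sec
```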

Table 12
Elapsed time (in seconds) for ES schemes under different algorithms.

     Weibull                      Log-normal
m    Algorithm 0   Algorithm 1    Algorithm 0   Algorithm 1
2    8.076         5.380          11.039        6.890
3    8.624         6.084          12.936        5.930
4    10.145        6.011          14.033        7.996
5    10.892        5.375          11.434        7.693
6    11.343        6.287          14.837        10.927
7    11.446        7.698          19.126        12.872
8    17.059        8.750          23.101        14.165
9    23.208        10.559         33.571        20.305
10   22.555        14.841         28.970        17.261

Table 13
Elapsed time (in seconds) for OS schemes under different algorithms.

     Weibull                      Log-normal
m    Algorithm 0   Algorithm 1    Algorithm 0   Algorithm 1
2    24.192        15.214         21.978        12.509
3    99.880        86.139         43.966        39.968
4    454.806       283.926        85.498        65.340
5    301.562       238.264        171.686       153.891

Towards this end, we compare the elapsed times for obtaining the optimal schemes under both the algorithms. We assume equal withdrawal proportions in both cases. Furthermore, we prefix p_0 = 0.0 for Algorithm 1, i.e., there is no intermediate withdrawal. Tables 12 and 13 report the elapsed time (in seconds) for obtaining the ES and OS schemes under these two algorithms, respectively. We have done all computations on a 64 bit Desktop PC with Intel Core i7 processor (3.40 GHz × 8) and 16 GB RAM. The elapsed times are measured using the “proc.time” function in R. As expected, Algorithm 1 outperforms Algorithm 0 for all values of m.

7. Simulation study

In this section, we assess the sampling variations associated with the optimal PIC-I schemes through a Monte Carlo simulation study. Such simulation studies are also carried out in [20] in the context of Bayesian ALT plans. Here we also evaluate the usefulness of the optimal schemes over non-optimal life test plans. Towards this end, we restrict the simulation study to the 4-point optimal ES and OS schemes, presented in Section 4. However, other optimal schemes can also be examined in a similar fashion without much difficulty. We generate 1000 data sets under 4-point optimal ES and OS schemes with p_1 = p_2 = p_3 = 0.0 and p_1 = p_2 = p_3 = 0.2. These optimal schemes are already provided in Section 4. Furthermore, we also simulate 1000 data sets under the existing life test plan for transmitter vacuum tubes. Note that this existing life test plan involves inspections at 25, 50, 75 and 100 days with no intermediate withdrawal. As in Section 4, each simulated data set consists of n_0 = 188 observations. We follow the algorithm presented by Chen and Lio [8] to generate these data sets, treating the MLEs as the true values of the model parameters. Next, we perform a Bayesian analysis for each simulated data set using the non-informative prior distribution of θ = (μ, σ). For the location-scale family of distributions, it is common to assume that the priors for μ and σ are independent, resulting in π(θ) = π(μ)π(σ). Furthermore, a typical non-informative prior for μ is given by π(μ) ∝ 1, whereas, for σ, a popular choice is π(σ) ∝ 1/σ. See [24, pp. 87–89] for a detailed discussion in this context. Then, the joint prior distribution for θ is given by

$$ \pi(\theta) \propto \frac{1}{\sigma}. $$

Now, by Bayes’ theorem, the joint posterior distribution of μ and σ is given by

    π θ|Hm ∝ L θ × π (θ )

where L(θ) is the likelihood function given by (2). Now the expression of L(θ) suggests that it is not possible to study the characteristics of π(θ|H_m) analytically. We therefore study the posterior distribution by drawing a sample from π(θ|H_m) using an MCMC technique. Here, we employ the Metropolis–Hastings (MH) algorithm to generate observations from π(θ|H_m). We have used the MH algorithm as it is reasonably efficient and possibly the most generic algorithm, in the sense that it can be used to generate observations from any posterior distribution. See [25] for a comprehensive discussion of the MH algorithm. We obtain a sample of size 5000 from π(θ|H_m). Let {(μ^(1), σ^(1)), ..., (μ^(B), σ^(B))} denote the posterior sample, with B = 5000.
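A compact sketch of one replication of this study is given below for the log-normal model: a PIC-I data set is generated via the binomial representation of Section 2.2 (this mirrors, rather than reproduces verbatim, the generation algorithm of Chen and Lio [8]), a random-walk MH chain is run on (μ, log σ) under the prior π(θ) ∝ 1/σ, and the posterior draws of y_{0.1}(θ) are summarized. The function names, the true parameter values (roughly those implied by Table 2) and all tuning constants are illustrative assumptions.

```r
## Minimal R sketch: one simulation replication for the log-normal model.
set.seed(123)
G <- pnorm                                        # standardized cdf (log-normal case)
cond_q <- function(x, mu, sigma) {                # q_j as in Section 2.1
  xi <- (log(x) - mu) / sigma
  Gb <- 1 - G(c(-Inf, xi))
  1 - Gb[-1] / Gb[-length(Gb)]
}

## 1. Generate one PIC-I data set under scheme (x, p) with true theta0.
gen_pic1 <- function(x, p, n0, theta0) {
  q <- cond_q(x, theta0[1], theta0[2]); m <- length(x)
  n <- d <- integer(m); atrisk <- n0
  for (j in seq_len(m)) {
    n[j] <- atrisk
    d[j] <- rbinom(1, atrisk, q[j])               # D_j | N_j ~ Binom(N_j, q_j)
    r    <- if (j < m) floor((atrisk - d[j]) * p[j]) else atrisk - d[j]
    atrisk <- atrisk - d[j] - r
  }
  list(n = n, d = d)
}

## 2. Log-posterior on (mu, log sigma); flat in mu, 1/sigma prior for sigma
##    (the Jacobian of the log-sigma transform cancels the 1/sigma factor).
log_post <- function(par, x, dat) {
  q <- cond_q(x, par[1], exp(par[2]))
  sum(dat$d * log(q) + (dat$n - dat$d) * log(1 - q))
}

## 3. Random-walk Metropolis-Hastings, B = 5000 draws (no burn-in shown).
x_insp <- c(25, 50, 75, 100); p_wd <- c(0, 0, 0, 1)
dat <- gen_pic1(x_insp, p_wd, n0 = 188, theta0 = c(3.0, 1.07))
B <- 5000; draws <- matrix(NA_real_, B, 2)
cur <- c(3, 0); lp <- log_post(cur, x_insp, dat)
for (b in seq_len(B)) {
  prop <- cur + rnorm(2, sd = 0.1)
  lp_p <- log_post(prop, x_insp, dat)
  if (!is.finite(lp_p)) lp_p <- -Inf
  if (log(runif(1)) < lp_p - lp) { cur <- prop; lp <- lp_p }
  draws[b, ] <- cur
}

## 4. Posterior sample of y_{0.1}(theta) = mu + G^{-1}(0.1) * sigma.
y01 <- draws[, 1] + qnorm(0.10) * exp(draws[, 2])
c(mean = mean(y01), median = median(y01), sd = sd(y01))
```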

Table 14
Average (Avg) posterior estimates and SDs of y_{0.1}(θ) for ES and OS schemes with m = 4.

                                       Mean            Median          Mode            95% HPDCS
Model        Scheme     p_0    TV      Avg     SD      Avg     SD      Avg     SD      AL
Weibull      Existing   0.0    0.657   0.586   0.371   0.613   0.365   0.612   0.349   1.394
             ES         0.0    0.657   0.604   0.285   0.633   0.276   0.601   0.262   1.145
             ES         0.2    0.657   0.584   0.317   0.616   0.304   0.593   0.285   1.239
             OS         0.0    0.657   0.641   0.228   0.650   0.227   0.623   0.224   0.861
             OS         0.2    0.657   0.631   0.237   0.641   0.236   0.617   0.233   0.935
Log-normal   Existing   0.0    1.628   1.565   0.236   1.584   0.232   1.534   0.217   0.913
             ES         0.0    1.628   1.615   0.125   1.622   0.123   1.545   0.134   0.487
             ES         0.2    1.628   1.612   0.131   1.619   0.129   1.545   0.138   0.516
             OS         0.0    1.628   1.626   0.113   1.629   0.113   1.582   0.120   0.441
             OS         0.2    1.628   1.625   0.125   1.630   0.125   1.575   0.130   0.472

(TV: true value of y_{0.1}(θ); AL: average length of the 95% highest posterior density credible sets.)

Given the sample {(μ^(ℓ), σ^(ℓ)), ℓ = 1, ..., B} from the joint posterior π(θ|H_m), we can easily obtain a sample from the posterior of y_{0.1}(θ). This is accomplished as follows. For ℓ = 1, ..., B, we compute y_{0.1}(θ^(ℓ)) = μ^(ℓ) + G^{-1}(0.1) σ^(ℓ). Note that {y_{0.1}(θ^(1)), ..., y_{0.1}(θ^(B))} can be considered as a sample from the posterior of y_{0.1}(θ). Now, based on the sample {y_{0.1}(θ^(ℓ)), ℓ = 1, ..., B}, we can easily evaluate the posterior features of y_{0.1}(θ). Table 14 provides the average posterior mean, median and mode of y_{0.1}(θ), along with their SDs, for both Weibull and log-normal lifetime models under the existing life test plan and the 4-point optimal ES and OS schemes. Table 14 further reports the average length (AL) of the 95% highest posterior density credible sets (HPDCS) in each case. It is evident that the performance of the resulting Bayesian point and interval estimates improves significantly under the ES and OS schemes as compared to the existing scheme. Now, for the ES schemes, the bias and SDs of the posterior point estimates increase as p_0 goes up. Furthermore, the ALs of the 95% HPDCS also go up substantially with the increase in p_0. The same pattern is also visible for the OS schemes. However, as expected, the OS schemes perform better as compared to the ES schemes under both the models.

8. Discussion and conclusion

In this article, we present Bayesian methods for designing C-optimal PIC-I schemes. The C-optimal design criterion focuses on the estimation precision of a particular lifetime quantile belonging to the lower tail of the lifetime distribution. It is assumed that the system lifetime belongs to a log-location-scale family of distributions. Next, we present a generic algorithm, which is employed to obtain the optimal schemes under the Weibull and log-normal lifetime distributions. It is observed that the choice of the underlying lifetime distribution is indeed very important for constructing PIC-I schemes. A sensitivity analysis is conducted to study the effect of prior information on the resulting C-optimal ES schemes. It is observed that the resulting schemes may change significantly with the change in prior means and SDs. A simulation study is undertaken to illustrate the sampling variations associated with the optimal ES and OS schemes. It is observed that the OS schemes perform better than their ES counterparts under both the models. Furthermore, as expected, the performance of the posterior point and interval estimates deteriorates as the intermediate withdrawal proportions are increased. The proposed method can be easily generalized to other lifetime distributions. However, the optimal schemes presented in this article are obtained using a Bayesian C-optimality design criterion. Further work is necessary for obtaining the optimal schemes under other Bayesian design criteria.

Acknowledgment The authors would like to thank an Associate Editor and two anonymous reviewers for their constructive comments and suggestions, which greatly improved the quality of the article.

Appendix A. Proof of Result 1

Proof: First, note that I(θ; η_m) and I(θ; η_{m+1}) are both pd matrices for m ≥ 2. Furthermore,

$$ I(\theta; \eta_{m+1}) - I(\theta; \eta_m) = \begin{bmatrix} I_{11}^{m+1} & I_{12}^{m+1} \\ I_{12}^{m+1} & I_{22}^{m+1} \end{bmatrix}, $$

where

$$ I_{11}^{m+1} = E[N_{m+1}]\, \frac{1}{q_{m+1}(1 - q_{m+1})} \left(\frac{\partial q_{m+1}}{\partial \mu}\right)^{2}, $$

$$ I_{12}^{m+1} = E[N_{m+1}]\, \frac{1}{q_{m+1}(1 - q_{m+1})} \left(\frac{\partial q_{m+1}}{\partial \mu}\right)\left(\frac{\partial q_{m+1}}{\partial \sigma}\right), $$

$$ I_{22}^{m+1} = E[N_{m+1}]\, \frac{1}{q_{m+1}(1 - q_{m+1})} \left(\frac{\partial q_{m+1}}{\partial \sigma}\right)^{2}. $$

Now I(θ; η_{m+1}) − I(θ; η_m) is a non-negative definite matrix. This implies that I^{-1}(θ; η_m) − I^{-1}(θ; η_{m+1}) is also a non-negative definite matrix [26, p. 70]. Then, for all c = (c_1, c_2)^T,

$$ c^{T}\left[ I^{-1}(\theta; \eta_m) - I^{-1}(\theta; \eta_{m+1}) \right] c \ge 0 $$
$$ \Rightarrow \int \left[ c^{T} I^{-1}(\theta; \eta_{m+1})\, c \right] \pi(\theta)\, d\theta \le \int \left[ c^{T} I^{-1}(\theta; \eta_m)\, c \right] \pi(\theta)\, d\theta $$
$$ \Rightarrow \phi_c(\eta_{m+1}) \le \phi_c(\eta_m). $$

Hence the result.

Appendix B. Proof of Result 2

Proof: Let I(θ; η_m^0) and I(θ; η_m) be the Fisher information matrices under the PIC-I schemes η_m^0 and η_m, respectively. Note that

$$ I(\theta; \eta_m^0) = \begin{bmatrix} I_{11}^{0} & I_{12}^{0} \\ I_{12}^{0} & I_{22}^{0} \end{bmatrix}, $$

where

$$ I_{11}^{0} = \sum_{j=1}^{m} E[N_j^0]\, \frac{1}{q_j(1 - q_j)} \left(\frac{\partial q_j}{\partial \mu}\right)^{2}, \quad I_{12}^{0} = \sum_{j=1}^{m} E[N_j^0]\, \frac{1}{q_j(1 - q_j)} \left(\frac{\partial q_j}{\partial \mu}\right)\left(\frac{\partial q_j}{\partial \sigma}\right), \quad I_{22}^{0} = \sum_{j=1}^{m} E[N_j^0]\, \frac{1}{q_j(1 - q_j)} \left(\frac{\partial q_j}{\partial \sigma}\right)^{2}, $$

and E[N_j^0] is the expected number of items at risk at the beginning of the j-th interval under the PIC-I scheme η_m^0, given by

$$ E[N_j^0] = n_0 \prod_{l=1}^{j-1} (1 - q_l), \quad \text{for } j = 1, \ldots, m. $$

Note that I(θ; η_m) and I(θ; η_m^0) are pd matrices for m ≥ 2. Furthermore, for all c = (c_1, c_2)^T,

$$
\begin{aligned}
c^{T}\left[ I(\theta; \eta_m^0) - I(\theta; \eta_m) \right] c
  &= c^{T} I(\theta; \eta_m^0)\, c - c^{T} I(\theta; \eta_m)\, c \\
  &= \left(I_{11}^{0} - I_{11}\right) c_1^{2} + 2 \left(I_{12}^{0} - I_{12}\right) c_1 c_2 + \left(I_{22}^{0} - I_{22}\right) c_2^{2} \\
  &= \sum_{j=1}^{m} \left( E[N_j^0] - E[N_j] \right) \frac{1}{q_j(1 - q_j)}
     \left[ c_1^{2} \left(\frac{\partial q_j}{\partial \mu}\right)^{2}
          + 2 c_1 c_2 \left(\frac{\partial q_j}{\partial \mu}\right)\left(\frac{\partial q_j}{\partial \sigma}\right)
          + c_2^{2} \left(\frac{\partial q_j}{\partial \sigma}\right)^{2} \right] \\
  &= \sum_{j=1}^{m} \left( E[N_j^0] - E[N_j] \right) \frac{1}{q_j(1 - q_j)}
     \left( c_1 \frac{\partial q_j}{\partial \mu} + c_2 \frac{\partial q_j}{\partial \sigma} \right)^{2} \ \ge\ 0,
\end{aligned}
$$

since

$$ E[N_j^0] - E[N_j] = n_0 \prod_{l=1}^{j-1} (1 - q_l) - n_0 \prod_{l=1}^{j-1} (1 - p_l)(1 - q_l) = n_0 \left[ \prod_{l=1}^{j-1} (1 - q_l) \right] \left[ 1 - \prod_{l=1}^{j-1} (1 - p_l) \right] \ge 0. $$

Then I(θ; η_m^0) − I(θ; η_m) is a non-negative definite matrix. This implies that I^{-1}(θ; η_m) − I^{-1}(θ; η_m^0) is also a non-negative definite matrix [26, p. 70], which implies

$$ c^{T}\left[ I^{-1}(\theta; \eta_m) - I^{-1}(\theta; \eta_m^0) \right] c \ge 0, \quad \forall\, c $$
$$ \Rightarrow \int \left[ c^{T} I^{-1}(\theta; \eta_m^0)\, c \right] \pi(\theta)\, d\theta \le \int \left[ c^{T} I^{-1}(\theta; \eta_m)\, c \right] \pi(\theta)\, d\theta, \quad \forall\, c. $$

Hence the result.

References

[1] W.Q. Meeker, L.A. Escobar, Statistical Methods for Reliability Data, Wiley, New York, 1998.
[2] N. Balakrishnan, D. Kundu, Hybrid censoring: models, inferential results and applications, Comput. Stat. Data Anal. 57 (2013) 166–209.
[3] N. Balakrishnan, R. Aggarwala, Progressive Censoring: Theory, Methods and Applications, Birkhauser, Boston, 2000.
[4] N. Balakrishnan, Progressive censoring methodology: an appraisal, TEST 16 (2007) 211–259.
[5] M.M.A. Sobhi, A.A. Soliman, Estimation for the exponentiated Weibull model with adaptive type-II progressive censored schemes, Appl. Math. Model. 40 (2) (2016) 1180–1192.
[6] R. Aggarwala, Progressive interval censoring: some mathematical results with application to inference, Commun. Stat. Theory Methods 30 (2001) 1921–1935.
[7] H.K.T. Ng, Z. Wang, Statistical estimation for the parameters of Weibull distribution based on progressively type-I interval censored sample, J. Stat. Comput. Simul. 79 (2009) 145–159.
[8] D.G. Chen, Y.L. Lio, Parameter estimations for generalized exponential distribution under progressive type-I interval censoring, Comput. Stat. Data Anal. 54 (2010) 1581–1591.
[9] Y. Lin, Y.L. Lio, Bayesian inference under progressive type-I interval censoring, J. Appl. Stat. 39 (2012) 1811–1824.
[10] S. Roy, E.V. Gijo, B. Pradhan, Inference based on progressive type-I interval censored data from log-normal distribution, Commun. Stat. Simul. Comput. 46 (2017) 6495–6512.
[11] K. Ahmadi, F. Yousefzadeh, M. Rezaei, Progressively type-I interval censored competing risks data for the proportional hazards family, Commun. Stat. Simul. Comput. 46 (2017) 5924–5950.
[12] C. Lin, S.J.S. Wu, N. Balakrishnan, Planning life tests with progressively type-I interval censored data from the log-normal distribution, J. Stat. Plan. Inference 139 (2009) 54–61.
[13] C. Lin, N. Balakrishnan, S.J.S. Wu, Planning life tests based on progressively type-I grouped censored data from the Weibull distribution, Commun. Stat. Simul. Comput. 40 (2011) 574–595.
[14] S.-R. Huang, S.-J. Wu, Reliability sampling plans under progressive type-I interval censoring using cost functions, IEEE Trans. Reliab. 57 (2008) 445–451.
[15] S. Budhiraja, B. Pradhan, Computing optimum design parameters of a progressive type I interval censored life test from a cost model, Appl. Stoch. Models Bus. Ind. 33 (2017) 494–506.
[16] S.J. Wu, S.R. Huang, Planning progressive type-I interval censoring life tests with competing risks, IEEE Trans. Reliab. 63 (2014) 511–522.
[17] K. Chaloner, K. Larntz, Bayesian design for accelerated life testing, J. Stat. Plan. Inference 33 (1992) 245–259.
[18] K. Chaloner, I. Verdinelli, Bayesian experimental design: a review, Stat. Sci. 10 (1995) 273–304.
[19] S. Roy, B. Pradhan, Bayesian optimum life testing plans under progressive type-I interval censoring scheme, Qual. Reliab. Eng. Int. 33 (2017) 2727–2737.
[20] Y. Zhang, W.Q. Meeker, Bayesian methods for planning accelerated life tests, Technometrics 48 (2006) 49–60.
[21] S. Budhiraja, B. Pradhan, D. Sengupta, Maximum likelihood estimators under progressive type-I interval censoring, Stat. Probab. Lett. 123 (2017) 202–209.
[22] M. Clyde, K. Chaloner, The equivalence of constrained and weighted designs in multiple objective design problems, J. Am. Stat. Assoc. 91 (1996) 1236–1244.
[23] A. Atkinson, A. Donev, R. Tobias, Optimum Experimental Designs, with SAS (Oxford Statistical Science Series), Oxford University Press, USA, 2007.
[24] J.O. Berger, Statistical Decision Theory and Bayesian Analysis, 2nd ed., Springer-Verlag, New York, 1985.
[25] S. Chib, E. Greenberg, Understanding the Metropolis–Hastings algorithm, Am. Stat. 49 (1995) 327–335.
[26] C.R. Rao, Linear Statistical Inference and Its Applications, Wiley, Singapore, 1973.