Accepted manuscript: S. Gunasekera, "Inference for the Burr XII reliability under progressive censoring with random removals," Mathematics and Computers in Simulation (MATCOM 4486), http://dx.doi.org/10.1016/j.matcom.2017.07.011. Received 21 December 2015; revised 13 December 2016; accepted 31 July 2017.
Inference for the Burr XII Reliability under Progressive Censoring with Random Removals

Sumith Gunasekera
Department of Mathematics, The University of Tennessee at Chattanooga, Chattanooga, TN 37403, USA

December 13, 2016

Abstract: Inference for the reliability function of the Burr XII distribution is developed using the generalized variable method, based on progressively type II censored samples with random removals in which the number of units removed at each failure time follows a discrete uniform distribution. As assessed by simulation, the coverage probabilities of the proposed approach are very close to the nominal level even for small samples. The proposed approach is computationally simple and easy to use. The method is illustrated using two examples.
Key words: Burr XII distribution, Generalized variable method, Progressively type II censored sample, Uniformly distributed random removals, Reliability function
1 Introduction
In this study, we consider the progressively type II right censoring scheme with uniformly distributed random removals. In the presence of such progressively type II censored data, reliability analysis is used to assess lifetime data under the Burr XII distribution (or simply the Burr distribution; it is also known as the Singh–Maddala distribution [15]). The Burr XII distribution, first introduced in the literature by Irving Burr [5], is the functionally simplest member of the Burr system of distributions, which comprises twelve types of cumulative distribution functions. Many standard theoretical distributions, including the Weibull, exponential, logistic, generalized logistic, Gompertz, normal, extreme value, and uniform distributions, are special or limiting cases of the Burr system. The shape parameter plays a vital role for the Burr XII distribution, and its capacity to assume various shapes often permits a good fit when the distribution is used to describe biological, clinical, or other experimental data. The Burr XII distribution has been recognized as a useful model for the analysis of lifetime data. It has also been applied in the areas of quality control, reliability studies, duration and failure time modeling, simulation studies, quantal bioassay, non-normal control charts, the approximation of theoretical distributions, and the fitting of smooth curves to histograms. Its utility in survivorship applications is of special interest in this effort. The distribution has gained special attention in the last two decades due to its potential for use in practical situations.

Let the lifetime X of a particular component (unit) have a Weibull distribution with shape parameter c and rate parameter α; for brevity, we write X ∼ W(c, α); with probability density function

f_X(x; c, α) = c α x^(c−1) exp(−α x^c);  x > 0, c > 0, α > 0.  (1.1)
Suppose that, in a population of units, there could be ubiquitous variation in α values because of small fluctuations in manufacturing tolerances. Let α have a gamma distribution with scale parameter 1 and shape parameter θ; for brevity, we write α ∼ G(θ, 1); with probability density function

f_α(α; θ) = (1/Γ(θ)) α^(θ−1) exp(−α);  α > 0, θ > 0.  (1.2)
Rodriguez [16] indicated that the Burr type XII distribution can be obtained as a smooth mixture of Weibull distributions compounded with respect to α, where α has a standard gamma distribution with parameter θ. That is,

f_X(x; c, θ) = ∫_0^∞ c α x^(c−1) exp(−α x^c) · (1/Γ(θ)) α^(θ−1) exp(−α) dα
            = c θ x^(c−1) (1 + x^c)^(−(θ+1)),  x > 0, c > 0, θ > 0.  (1.3)
Equation (1.3) is the probability density function of the Burr type XII distribution; for brevity, we write X ∼ B(c, θ); the corresponding reliability function is

S_X(x; c, θ) = (1 + x^c)^(−θ);  x > 0, c > 0, θ > 0.  (1.4)
When fitting these distributions to data, a location parameter ξ and a scale parameter β are incorporated by replacing X with (X − ξ)/β. In cases where the location parameter may be assumed to be known, ξ may be excluded by substituting X for X − ξ. When X has a Burr XII distribution, under the data transformation Y = ln(1 + X^c), Y is exponentially distributed with rate parameter θ; for brevity, we write Y ∼ E(θ). Hence, the density and reliability functions of Y are, respectively, given by

f_Y(y; θ) = θ exp(−θy);  y > 0, θ > 0,  (1.5)

and

S_Y(y; θ) = exp(−θy);  y > 0, θ > 0.  (1.6)
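As a quick numerical sanity check of (1.3)–(1.6), the following Python sketch verifies that the reliability of Y = ln(1 + X^c) under (1.6) agrees pointwise with the Burr XII reliability (1.4); the parameter values echo those fitted in Section 4 and are merely illustrative, and the function names are ours.

```python
import math

def burr_pdf(x, c, theta):
    # f_X(x; c, theta) = c*theta*x^(c-1)*(1 + x^c)^(-(theta+1))   -- Eq. (1.3)
    return c * theta * x ** (c - 1) * (1 + x ** c) ** (-(theta + 1))

def burr_reliability(x, c, theta):
    # S_X(x; c, theta) = (1 + x^c)^(-theta)   -- Eq. (1.4)
    return (1 + x ** c) ** (-theta)

# Under Y = ln(1 + X^c), Y is exponential with rate theta, so the
# exponential reliability (1.6) at y = ln(1 + x^c) must equal the
# Burr XII reliability (1.4) at x.
c, theta = 1.4, 0.22  # illustrative values
for x in (0.5, 1.0, 2.0, 4.0):
    y = math.log(1 + x ** c)
    assert abs(burr_reliability(x, c, theta) - math.exp(-theta * y)) < 1e-12
```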
Inferences for the Burr XII distribution have been discussed by many authors. Mousa [2] obtained empirical Bayes estimates of the parameter θ and the reliability function based on accelerated type II censored data. Mousa and Jaheen [3] obtained approximate Bayes estimates of the two parameters and the reliability function of the Burr(c, θ) distribution based on progressively type II censored samples. Based on the same progressive samples, Soliman [16] obtained Bayes estimates using both the symmetric (squared error) loss function and asymmetric (LINEX, general entropy) loss functions. Abdel-Ghaly et al. [1] applied the Burr XII distribution to measure software reliability. Li et al. [12] proposed empirical estimators of reliability performances for the Burr XII distribution under LINEX error loss. Moore and Papadopoulos [13] derived Bayesian estimators of the parameter k and the reliability function for the Burr XII distribution under three different loss functions. Zimmer et al. [23] also presented statistical and probabilistic properties of the Burr XII distribution and described its relationship to other distributions used in reliability analyses. Wang and Keats [18] gave the maximum likelihood method for obtaining point and interval estimators of the parameters of the Burr XII distribution. Wu et al. [19] used the method of maximum likelihood to derive point estimators and to construct the exact confidence interval and region for the parameters of the Burr XII distribution.
The purpose of this paper is to develop a simple approach that can be used to obtain a confidence interval for, and to perform hypothesis testing of, the reliability function of the Burr XII distribution based on progressively type II censored data with random removals, where the number of units removed at each failure time has a discrete uniform distribution. Toward this end, we develop a method based on the concept of the generalized variable. The generalized p-value was introduced by Tsui and Weerahandi [17] and the generalized confidence interval (CI) by Weerahandi [20], who presented them as extensions of, rather than alternatives to, classical methods of statistical evaluation. The concepts of the generalized CI and the generalized p-value have been widely applied in a variety of practical settings, such as regression, analysis of variance (ANOVA), analysis of reciprocals (ANORE), analysis of covariance (ANCOVA), analysis of frequency (ANOFRE), multivariate analysis of variance (MANOVA), mixed models, and growth curves, where standard methods fail to produce satisfactory results, obliging practitioners to settle for asymptotic results and approximate solutions. For example, see Weerahandi [21, 22], Gunasekera [7, 8, 9], and Gunasekera and Ananda [10]. For a recipe for constructing generalized pivotal quantities (GPQs), see Iyer and Patterson [11].
2 Classical inference
Let X denote the lifetime of a product, where X has the Burr XII distribution with density as in (1.3). With progressively type II right censoring, n units are placed on test. Consider that X_{1:m:n} ≤ X_{2:m:n} ≤ ··· ≤ X_{m:m:n} is the corresponding progressively type II right censored sample, with censoring scheme R = r = (r_1, r_2, ..., r_m), where m denotes the number of failures observed before termination from the n items on test, and r_1, r_2, ..., r_m denote the corresponding numbers of units randomly removed (withdrawn) from the test. Furthermore, let x_{1:m:n} ≤ x_{2:m:n} ≤ ··· ≤ x_{m:m:n} be the observed ordered lifetimes. Let r_i denote the number of units removed at the time of the ith failure, with 0 ≤ r_i ≤ n − m − Σ_{j=1}^{i−1} r_j for i = 2, 3, ..., m − 1, 0 ≤ r_1 ≤ n − m, and r_m = n − m − Σ_{j=1}^{m−1} r_j, where the r_i's are non-pre-specified integers and m is a pre-specified integer. Note that if r_1 = r_2 = ··· = r_{m−1} = 0, so that r_m = n − m, this scheme reduces to the conventional type II right censoring scheme. Also note that if r_1 = r_2 = ··· = r_m = 0, so that m = n, the progressively type II right censoring scheme reduces to the case of no censoring (the complete sample case).
The joint density of X_{1:m:n}, X_{2:m:n}, ..., X_{m:m:n} is given by [4]

∏_{i=1}^m A_i f_X(x_{i:m:n}; c, θ) [S_X(x_{i:m:n}; c, θ)]^{r_i},  where A_i = n − Σ_{j=1}^{i−1} (r_j + 1).  (2.1)

So, the conditional likelihood function is given by

L(θ | R = r) = (cθ)^m ∏_{i=1}^m A_i [x_{i:m:n}^(c−1) / (1 + x_{i:m:n}^c)] exp[−θ (1 + r_i) ln(1 + x_{i:m:n}^c)],  (2.2)
where A_i is as defined in (2.1).

Now, suppose that the number of units removed at each failure time, R_i (i = 1, 2, ..., m − 1), follows a discrete uniform distribution; for brevity, we write R_i ∼ U_D(0, n − m − Σ_{j=1}^{i−1} r_j); with probability mass function

P(R_i = r_i | R_{i−1} = r_{i−1}, R_{i−2} = r_{i−2}, ..., R_1 = r_1) = 1 / (n − m − Σ_{j=1}^{i−1} r_j + 1),  i = 2, 3, ..., m − 1,

and

P(R_1 = r_1) = 1 / (n − m + 1).
Suppose further that R_i (i = 1, 2, ..., m − 1) is independent of x_{i:m:n}. Then the unconditional likelihood function can be expressed as L(θ) = L(θ | R = r) P(R = r), where P(R = r) = ∏_{i=1}^{m−1} P(R_i = r_i | R_{i−1} = r_{i−1}, ..., R_1 = r_1), with the i = 1 factor read as P(R_1 = r_1).

It is evident that P(R = r) does not depend on the parameters c and θ, and hence the MLE of θ can be obtained directly from the conditional likelihood function given in (2.2). Assuming that c is given (or known), the maximum likelihood estimate of θ can be derived by solving the equation

d/dθ ln L(θ | R = r) = m/θ − Σ_{i=1}^m (1 + r_i) ln(1 + x_{i:m:n}^c) = 0.
Hence, the MLE θ̂ of θ is given by

θ̂ = m / Σ_{i=1}^m (1 + r_i) ln(1 + X_{i:m:n}^c),  (2.3)

where c is given (or known).
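As a concrete instance of (2.3), the following sketch computes θ̂ from the progressively censored failure times and removals reported later in Table 4.1 (with c = 1.4 as fitted there); the helper name is ours.

```python
import math

def theta_mle(x, r, c):
    # Eq. (2.3): theta_hat = m / sum_{i=1}^m (1 + r_i) * ln(1 + x_i^c),
    # valid when the shape parameter c is given (or known).
    m = len(x)
    w = sum((1 + ri) * math.log(1 + xi ** c) for xi, ri in zip(x, r))
    return m / w

# Observed failure times and removals from Table 4.1 (Nelson's data, c = 1.4)
x = [0.19, 0.78, 0.96, 1.31, 2.78, 4.85, 6.50, 7.35]
r = [0, 0, 3, 0, 3, 0, 0, 5]
theta_hat = theta_mle(x, r, c=1.4)
print(theta_hat)  # approximately 0.2433, as reported in Section 4
```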
In reliability analysis, we are interested not only in inference about the parameters but also in inference about the survival function and the percentiles of the failure time distribution. Let x_p be the 100pth percentile of the failure time distribution. One can obtain x_p by solving P(X ≤ x_p) = F(x_p) = p, where F(·) is the cdf of the Burr XII distribution, given by

F_X(x; c, θ) = 1 − (1 + x^c)^(−θ);  x > 0, c > 0, θ > 0.

Then, it is easy to see that

x_p = [(1 − p)^(−1/θ) − 1]^(1/c).

Consequently, by the invariance of the MLE, the MLEs of S and x_p are, respectively, given by

Ŝ = (1 + X^c)^(−θ̂);  X > 0, c > 0,  (2.4)

and

x̂_p = [(1 − p)^(−1/θ̂) − 1]^(1/c).
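The percentile formula can be checked by substituting back into the cdf; a small sketch with illustrative parameter values:

```python
import math

def burr_cdf(x, c, theta):
    # F_X(x; c, theta) = 1 - (1 + x^c)^(-theta)
    return 1 - (1 + x ** c) ** (-theta)

def burr_percentile(p, c, theta):
    # x_p = [ (1 - p)^(-1/theta) - 1 ]^(1/c), obtained by inverting F
    return ((1 - p) ** (-1 / theta) - 1) ** (1 / c)

c, theta = 1.4, 0.22  # illustrative values
for p in (0.1, 0.5, 0.9):
    xp = burr_percentile(p, c, theta)
    assert abs(burr_cdf(xp, c, theta) - p) < 1e-12  # F(x_p) = p
```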
Now, let Y_{i:m:n} = ln(1 + X_{i:m:n}^c), i = 1, 2, ..., m. It is easy to show that Y_{1:m:n} ≤ Y_{2:m:n} ≤ ··· ≤ Y_{m:m:n} is a progressively type II censored sample from the exponential distribution with mean 1/θ. For a fixed set R = r = (r_1, r_2, ..., r_m), consider the following scaled (generalized) spacings:

W_1 = n Y_{1:m:n},
W_2 = (n − r_1 − 1)(Y_{2:m:n} − Y_{1:m:n}),
...
W_i = (n − Σ_{j=1}^{i−1} r_j − (i − 1))(Y_{i:m:n} − Y_{i−1:m:n}),
...
W_m = (n − Σ_{j=1}^{m−1} r_j − (m − 1))(Y_{m:m:n} − Y_{m−1:m:n}).

Balakrishnan and Aggarwala [4] proved that the progressively type II right censored spacings W_1, W_2, ..., W_m are all independent and identically distributed as exponential with mean 1/θ; that is, W_i ∼ Exponential(1/θ) = Gamma(1, 1/θ). Then W = Σ_{i=1}^m W_i = Σ_{i=1}^m (1 + r_i) ln(1 + X_{i:m:n}^c) ∼ Gamma(m, 1/θ). Since the denominator of θ̂ can be written as the sum of these m independent generalized spacings, it follows that, conditionally on a fixed set R = r = (r_1, r_2, ..., r_m), U = 2θW = 2mθ/θ̂ has a chi-square distribution with 2m degrees of freedom. In addition, because the conditional distribution of U does not depend on R = r, the marginal distribution of U is also chi-square with 2m degrees of freedom.

Testing procedure for the reliability function

We construct a statistical testing procedure to assess whether the reliability function adheres to the required level. The one-sided hypothesis test and one-sided confidence interval for S are obtained using the pivotal quantity U = 2θW. Assuming that the required reliability is larger than s*, where s* denotes the target value, the null hypothesis H_0 : S ≤ s* and the alternative
hypothesis H_a : S > s* are constructed. By using Ŝ, the MLE of S, as the test statistic, the rejection region can be expressed as {Ŝ | Ŝ > S_0*}, where S_0* is the critical value.

Given the specified significance level γ, the critical value S_0* can be calculated as follows:

sup_{S ≤ s*} P(Ŝ > S_0*) = γ
⟹ P(Ŝ > S_0* | S = s*) = γ
⟹ P((1 + x^c)^(−m/W) > S_0* | (1 + x^c)^(−θ) = s*) = γ
⟹ P(−(m/W) ln(1 + x^c) > ln S_0* | −θ ln(1 + x^c) = ln s*) = γ
⟹ P(W > −m ln(1 + x^c)/ln S_0* | θ = −ln s*/ln(1 + x^c)) = γ
⟹ P(2θW > 2m ln s*/ln S_0*) = γ
⟹ P(U > 2m ln s*/ln S_0*) = γ,  (2.5)

where U ∼ χ²_{2m}. From (2.5), since χ²_{2m,1−γ}, the lower 100(1 − γ)th percentile of χ²_{2m}, satisfies P(U > χ²_{2m,1−γ}) = γ, we obtain

2m ln s*/ln S_0* = χ²_{2m,1−γ}.

Thus, the following critical value can be derived:

S_0* = exp(2m ln s*/χ²_{2m,1−γ}),  (2.6)
where s*, γ, and m denote the target value, the specified significance level, and the number of observed failures before termination, respectively. Moreover, S_0* does not depend on n or R_i, i = 1, 2, ..., m.

Confidence interval for the reliability function

Given the specified significance level γ, the level (1 − γ) one-sided confidence interval for S can be derived as follows. With the pivotal quantity U, where U ∼ χ²_{2m} and χ²_{2m,1−γ} is the lower 100(1 − γ)th percentile of χ²_{2m},

P(2θW ≤ χ²_{2m,1−γ}) = 1 − γ, where S = (1 + x^c)^(−θ) and Ŝ = (1 + x^c)^(−m/W)
⟹ P( (2m ln S)/ln Ŝ ≤ χ²_{2m,1−γ} ) = 1 − γ
⟹ P( S ≥ exp(χ²_{2m,1−γ} ln Ŝ / 2m) ) = 1 − γ.  (2.7)

From Eq. (2.7),

S ≥ exp(χ²_{2m,1−γ} ln Ŝ / 2m)

is the level 100(1 − γ)% one-sided confidence interval for S. Thus, the level 100(1 − γ)% classical lower confidence bound for S can be written as

LB^C(x) = exp(χ²_{2m,1−γ} ln Ŝ / 2m),  (2.8)
where Ŝ, γ, and m denote the MLE of S, the specified significance level, and the number of observed failures before termination, respectively.
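Both (2.6) and (2.8) require only the lower 100(1 − γ)th percentile of χ²_{2m}; since 2m is even, the chi-square cdf has a closed form, so the following self-contained sketch needs no external libraries. The numerical inputs are illustrative and the function names are ours.

```python
import math

def chi2_ppf_even(p, df):
    # Lower 100p-th percentile of chi-square with EVEN df, via the closed-form
    # cdf P(U <= u) = 1 - exp(-u/2) * sum_{k=0}^{df/2 - 1} (u/2)^k / k!  and bisection.
    half = df // 2
    def cdf(u):
        term, s = 1.0, 1.0
        for k in range(1, half):
            term *= (u / 2) / k
            s += term
        return 1.0 - math.exp(-u / 2) * s
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
    return (lo + hi) / 2

def critical_value(s_star, gamma, m):
    # Eq. (2.6): S0* = exp( 2m ln s* / chi2_{2m,1-gamma} )
    return math.exp(2 * m * math.log(s_star) / chi2_ppf_even(1 - gamma, 2 * m))

def classical_lower_bound(s_hat, gamma, m):
    # Eq. (2.8): LB^C = exp( chi2_{2m,1-gamma} * ln(S_hat) / 2m )
    return math.exp(chi2_ppf_even(1 - gamma, 2 * m) * math.log(s_hat) / (2 * m))

m, gamma = 8, 0.05
s0 = critical_value(0.80, gamma, m)          # illustrative target s* = 0.80
lb = classical_lower_bound(0.90, gamma, m)   # illustrative MLE S_hat = 0.90
assert 0.80 < s0 < 1.0   # rejection threshold lies above the target
assert lb < 0.90         # the lower bound sits below the point estimate
```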
The power function for the testing of the reliability function

The power of a statistical test is the probability of correctly rejecting a false null hypothesis. The null hypothesis H_0 : S ≤ s* and the alternative hypothesis H_a : S > s* are as constructed above. The power of the test is derived as follows. Under the progressively type II censoring scheme, we obtain a size-γ test with rejection region {Ŝ | Ŝ > S_0* = exp(2m ln s*/χ²_{2m,1−γ})} for the number of observed failures before termination m and sample size n (m ≤ n). The power P(s_1*) of the test at the point S = s_1* (> s*) is

P(s_1*) = P(Ŝ > S_0* | S = s_1*)
       = P((1 + x^c)^(−m/W) > S_0* | (1 + x^c)^(−θ) = s_1*)
       = P(−(m/W) ln(1 + x^c) > ln S_0* | −θ ln(1 + x^c) = ln s_1*)
       = P(W > −m ln(1 + x^c)/ln S_0* | θ = −ln s_1*/ln(1 + x^c))
       = P(2θW > 2m ln s_1*/ln S_0*)
       = P(U > (ln s_1*/ln s*) χ²_{2m,1−γ}),  (2.9)

where U ∼ χ²_{2m}.
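The power (2.9) can be evaluated directly from the chi-square cdf; a sketch reusing the even-df closed form, with illustrative inputs:

```python
import math

def chi2_cdf_even(u, df):
    # P(U <= u) for chi-square with even df (closed form)
    half = df // 2
    term, s = 1.0, 1.0
    for k in range(1, half):
        term *= (u / 2) / k
        s += term
    return 1.0 - math.exp(-u / 2) * s

def chi2_ppf_even(p, df):
    # inverse of the cdf above, by bisection
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if chi2_cdf_even(mid, df) < p else (lo, mid)
    return (lo + hi) / 2

def power(s1, s_star, gamma, m):
    # Eq. (2.9): P(s1*) = P( U > (ln s1*/ln s*) * chi2_{2m,1-gamma} ), U ~ chi2_{2m}
    thresh = (math.log(s1) / math.log(s_star)) * chi2_ppf_even(1 - gamma, 2 * m)
    return 1.0 - chi2_cdf_even(thresh, 2 * m)

m, gamma, s_star = 8, 0.05, 0.80
assert abs(power(s_star, s_star, gamma, m) - gamma) < 1e-6  # size at the boundary
p1, p2 = power(0.85, s_star, gamma, m), power(0.95, s_star, gamma, m)
assert gamma < p1 < p2 < 1.0  # power increases with s1*
```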
The Monte Carlo simulation algorithm for the power P(s_1*) is given in the following steps:

Step 1: (a) Given x, s*, s_1*, γ, m, n, and R = (R_1, R_2, ..., R_m), where s* < s_1* < 1 and m ≤ n.
  (b) The value of θ is calculated from the equation θ = −ln s_1*/ln(1 + x^c).
  (c) Generate (X_{1:m:n}, X_{2:m:n}, ..., X_{m:m:n}), a progressively type II right censored sample from the two-parameter Burr XII distribution with density (1.3).
  (d) The value of Ŝ is calculated as Ŝ = (1 + x^c)^(−θ̂), where θ̂ = m/[Σ_{i=1}^m (1 + R_i) ln(1 + X_{i:m:n}^c)].
  (e) If Ŝ > S_0*, then Count = 1; else Count = 0, where S_0* = exp[2m ln s*/χ²_{2m,1−γ}].

Step 2: (a) Step 1 is repeated 1000 times.
  (b) The estimate of the power P(s_1*) is P̂(s_1*) = Total Count/1000.

Step 3: (a) Step 2 is repeated 100 times; 100 estimates of the power P(s_1*) are thereby obtained: P̂_1(s_1*), P̂_2(s_1*), ..., P̂_100(s_1*).
  (b) The mean of these estimates is P̂(s_1*) = Σ_{i=1}^{100} P̂_i(s_1*)/100.
  (c) The sample mean square error (SMSE) of P̂_1(s_1*), ..., P̂_100(s_1*) is SMSE = Σ_{i=1}^{100} [P̂_i(s_1*) − P(s_1*)]²/100, where P(s_1*) can be calculated from (2.9).
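The simulation steps above can be sketched as follows. For compactness, the progressively censored Burr XII sample is generated through the exponential spacings of Balakrishnan and Aggarwala [4] combined with the transformation Y = ln(1 + X^c), an equivalent route to direct generation; the removal scheme is borrowed from Example 1 and all numerical inputs are illustrative.

```python
import math, random

def chi2_ppf_even(p, df):
    # lower 100p-th chi-square percentile, even df (closed-form cdf + bisection)
    half = df // 2
    def cdf(u):
        term, s = 1.0, 1.0
        for k in range(1, half):
            term *= (u / 2) / k
            s += term
        return 1.0 - math.exp(-u / 2) * s
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
    return (lo + hi) / 2

def progressive_burr_sample(n, r, c, theta, rng):
    # Y_i = ln(1 + X_i^c) is a progressively censored Exp(theta) sample, built
    # from the iid spacings W_i ~ Exp(mean 1/theta) of [4]; X_i = (e^{Y_i} - 1)^{1/c}.
    xs, y, removed = [], 0.0, 0
    for i, ri in enumerate(r):
        a = n - removed - i                      # units still on test
        y += rng.expovariate(1.0) / (theta * a)  # Y_i - Y_{i-1} = W_i / a_i
        xs.append((math.exp(y) - 1.0) ** (1.0 / c))
        removed += ri
    return xs

def estimate_power(x, s_star, s1, gamma, n, r, c, reps, rng):
    m = len(r)
    theta = -math.log(s1) / math.log(1 + x ** c)   # Step 1(b)
    s0 = math.exp(2 * m * math.log(s_star) / chi2_ppf_even(1 - gamma, 2 * m))
    count = 0
    for _ in range(reps):
        xs = progressive_burr_sample(n, r, c, theta, rng)
        w = sum((1 + ri) * math.log(1 + xi ** c) for xi, ri in zip(xs, r))
        s_hat = (1 + x ** c) ** (-(m / w))         # Step 1(d)
        count += s_hat > s0                        # Step 1(e)
    return count / reps

rng = random.Random(2017)
r = [0, 0, 3, 0, 3, 0, 0, 5]   # removal scheme of Example 1 (n = 19, m = 8)
est = estimate_power(x=1.0, s_star=0.8, s1=0.9, gamma=0.05,
                     n=19, r=r, c=1.4, reps=3000, rng=rng)
print(round(est, 3))  # near the exact value (about 0.715) given by Eq. (2.9)
```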
3 Generalized inference
3.1 Generalized variable method: A review
Suppose that Y = (Y_1, Y_2, ..., Y_n) form a random sample from a distribution which depends on the parameters (θ, β), where θ is the parameter of interest and β is a vector of nuisance parameters. Consider testing H_0 : θ ≤ θ_0 vs. H_a : θ > θ_0, where θ_0 is a specified quantity. A generalized test variable (Tsui and Weerahandi [17]) of the form T(Y; y, θ, β), where y is an observed value of Y, is chosen to satisfy the following three conditions.

1. For fixed y, the distribution of T(Y; y, θ, β) is free of the vector of nuisance parameters β.
2. The value of T(Y; y, θ, β) at Y = y is free of any unknown parameters.
3. For fixed y and β, and for all t, Pr[T(Y; y, θ, β) ≥ t] is either an increasing or a decreasing function of θ.

If T(Y; y, θ, β) is stochastically increasing in θ, the generalized p-value for testing H_0 : θ ≤ θ_0 vs. H_a : θ > θ_0 is given by sup_{H_0} Pr[T(Y; y, θ, β) ≥ t], and if T(Y; y, θ, β) is stochastically decreasing in θ, it is given by sup_{H_0} Pr[T(Y; y, θ, β) ≤ t], where t is the observed value of T.
Furthermore, a 100(1 − γ)% two-sided generalized confidence interval for θ is given by (R(y; γ/2), R(y; 1 − γ/2)), where R(y; p) is the pth quantile of R(Y; y, θ, β), the generalized pivotal quantity, which is related to the generalized test variable by T(Y; y, θ, β) = R(Y; y, θ, β) − θ and satisfies the following two conditions.

1. For fixed y, the distribution of R(Y; y, θ, β) is free of the vector of nuisance parameters β.
2. The value of R(Y; y, θ, β) at Y = y is free of the nuisance parameters β and equals the parameter of interest θ.

For further details on the concepts of generalized p-values, we refer readers to the books by Weerahandi [21, 22].

Confidence interval for the reliability function

Given the specified significance level γ, the level (1 − γ) one-sided generalized confidence interval
for S can be derived as follows.

Let x_{1:m:n} ≤ x_{2:m:n} ≤ ··· ≤ x_{m:m:n} denote the m observed ordered failure times before termination, and let R_1, R_2, ..., R_m denote the corresponding numbers of units removed (withdrawn) from the test, with 0 ≤ R_i ≤ n − m − Σ_{j=1}^{i−1} R_j for i = 2, 3, ..., m − 1, 0 ≤ R_1 ≤ n − m, and R_m = n − m − Σ_{j=1}^{m−1} R_j.

The basic idea is to exhibit a generalized pivotal statistic for θ, say Q^θ(X; x; δ), or simply Q^θ, satisfying the properties of a generalized pivotal statistic: the observed value of Q^θ, say q^θ(x; x; δ), or simply q^θ, is θ, and the distribution of Q^θ is free of any unknown parameters. Once this is done, a generalized pivotal statistic Q^S for S in (1.4) is given by Q^S = (1 + x^c)^(−Q^θ) for given c.

In order to derive Q^θ, let θ̂_obs = m/[Σ_{i=1}^m (1 + R_i) ln(1 + x_{i:m:n}^c)] denote the observed value of θ̂ = m/[Σ_{i=1}^m (1 + R_i) ln(1 + X_{i:m:n}^c)] defined in (2.3). Using U = 2θW ∼ χ²_{2m}, it is easily seen that a choice of Q^θ is given by

Q^θ = (U/2m) θ̂_obs = U/(2W),  where W = Σ_{i=1}^m (1 + R_i) ln(1 + x_{i:m:n}^c).  (3.1)

Hence, a generalized pivotal statistic for S in (1.4) is given by

Q^S = exp[ (U θ̂_obs/2m) ln(1/(1 + x^c)) ].  (3.2)
Let Q^S_{1−γ}(x_{m,n}; θ̂_obs), where x_{m,n} = (x_{1:m:n}, x_{2:m:n}, ..., x_{m:m:n}), satisfy

P[Q^S ≥ Q^S_{1−γ}(x_{m,n}; θ̂_obs)] = 1 − γ.

Then Q^S_{1−γ}(x_{m,n}; θ̂_obs) is a 100(1 − γ)% lower confidence limit for S. That is, the generalized lower confidence bound is

LB^G(x) = Q^S_{1−γ}(x_{m,n}; θ̂_obs).  (3.3)
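Because Q^S is monotone in U, the bound (3.3) can be evaluated in closed form from a chi-square percentile, and for this model it coincides with the classical bound (2.8); a sketch using the Example 1 quantities (m = 8, θ̂_obs = 0.2433, c = 1.4), with function names of our own:

```python
import math

def chi2_ppf_even(p, df):
    # lower 100p-th percentile of chi-square with even df
    half = df // 2
    def cdf(u):
        term, s = 1.0, 1.0
        for k in range(1, half):
            term *= (u / 2) / k
            s += term
        return 1.0 - math.exp(-u / 2) * s
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
    return (lo + hi) / 2

def generalized_lower_bound(x, c, theta_obs, m, gamma):
    # Q^S = (1 + x^c)^(-U*theta_obs/(2m)) is decreasing in U ~ chi2_{2m}, so the
    # value exceeded by Q^S with probability 1 - gamma is obtained by plugging in
    # the lower 100(1-gamma)th percentile of chi2_{2m}.
    u = chi2_ppf_even(1 - gamma, 2 * m)
    return (1 + x ** c) ** (-(u * theta_obs / (2 * m)))

def classical_lower_bound(x, c, theta_obs, m, gamma):
    # Eq. (2.8) with S_hat = (1 + x^c)^(-theta_obs)
    s_hat = (1 + x ** c) ** (-theta_obs)
    return math.exp(chi2_ppf_even(1 - gamma, 2 * m) * math.log(s_hat) / (2 * m))

for x in (1.0, 2.0, 4.0):
    g = generalized_lower_bound(x, 1.4, 0.2433, 8, 0.05)
    cl = classical_lower_bound(x, 1.4, 0.2433, 8, 0.05)
    assert abs(g - cl) < 1e-12  # the two bounds agree for this model
```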
Testing procedure for the reliability function

We construct a statistical testing procedure to assess whether the reliability function adheres to the required level. The one-sided hypothesis test for S is obtained using the generalized test variable T^S(X; x; δ) = Q^S(X; x; δ) − S, or simply T^S = Q^S − S. Assuming that the required reliability is larger than s*, where s* denotes the target value, the null hypothesis H_0 : S ≤ s* and the alternative hypothesis H_a : S > s* are constructed. Then the generalized p-value, denoted p_g, is derived as

p_g = Pr{ exp[ (U θ̂_obs/2m) ln(1/(1 + x^c)) ] ≤ s* }.  (3.4)

This p-value can either be computed by numerical integration, exact up to a desired level of accuracy, or be well approximated by a Monte Carlo method. When a large number of random numbers from various random variables is involved, the latter method is more desirable and computationally more efficient. The generalized p-value is an exact probability of a well-defined extreme region of the sample space and measures the evidence in favor of the null hypothesis; this is an exact test in significance testing. In fixed-level testing, one can use this p-value by rejecting the null hypothesis if p_g < γ, where γ is the desired nominal level. The following algorithm is useful in computing p_g.

Algorithm 1

Step 1: Given c, θ, γ, m, n, s*, and R = (R_1, R_2, ..., R_m),
  (a) Generate data U_1, U_2, ..., U_m from the uniform distribution U(0, 1).
  (b) By the transformation Z_i = [(1 − U_i)^(−1/θ) − 1]^(1/c), i = 1, 2, ..., m, (Z_1, Z_2, ..., Z_m) is a random sample from the Burr XII distribution with density (1.3).
  (c) Set X_{i:m:n} = Z_1/n + Z_2/(n − R_1 − 1) + ··· + Z_i/[n − Σ_{j=1}^{i−1} R_j − i + 1], for i = 1, 2, ..., m. Then (X_{1:m:n}, X_{2:m:n}, ..., X_{m:m:n}) is taken as the progressively type II right censored sample from the two-parameter Burr XII distribution with density (1.3).

Step 2: Compute the maximum likelihood estimate of θ,
  θ̂_obs = m / Σ_{i=1}^m (1 + R_i) ln(1 + x_{i:m:n}^c).

Step 3: For j = 1 : M
  (a) Generate U ∼ χ²_{2m}.
  (b) Compute the quantity Q^θ_j = U θ̂_obs/(2m).
  (c) Compute Q^S_j = exp[ (U θ̂_obs/2m) ln(1/(1 + x^c)) ].
(end j loop)

The generalized p-value is estimated by the proportion of the Q^S_j's that are less than s*. The value exceeded by the Q^S_j's with relative frequency 1 − γ, Q^S_{1−γ}(x_{m,n}; θ̂_obs), is the lower bound of the one-sided 1 − γ confidence interval. That is, LB^G = Q^S_{1−γ}(x_{m,n}; θ̂_obs).
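Steps 2–3 of Algorithm 1, applied to observed data, can be sketched as follows; the χ²_{2m} draws use the stdlib gamma generator (χ²_{2m} = Gamma(shape m, scale 2)), the data are those of Table 4.1, and the target s* is illustrative.

```python
import math, random

def generalized_p_and_lb(x, c, x_obs, r, s_star, gamma, M, rng):
    m = len(x_obs)
    # Step 2 of Algorithm 1: observed MLE of theta (Eq. 2.3)
    w = sum((1 + ri) * math.log(1 + xi ** c) for xi, ri in zip(x_obs, r))
    theta_obs = m / w
    # Step 3: Monte Carlo draws of Q^S = (1 + x^c)^(-U*theta_obs/(2m)),
    # with U ~ chi2_{2m} drawn as Gamma(shape=m, scale=2)
    qs = sorted((1 + x ** c) ** (-(rng.gammavariate(m, 2.0) * theta_obs / (2 * m)))
                for _ in range(M))
    p_g = sum(q <= s_star for q in qs) / M   # proportion of Q^S below s*
    lb = qs[int(gamma * M)]                  # exceeded with relative frequency 1 - gamma
    return p_g, lb

rng = random.Random(42)
x_obs = [0.19, 0.78, 0.96, 1.31, 2.78, 4.85, 6.50, 7.35]   # Table 4.1
r = [0, 0, 3, 0, 3, 0, 0, 5]
p_g, lb = generalized_p_and_lb(x=1.0, c=1.4, x_obs=x_obs, r=r,
                               s_star=0.5, gamma=0.05, M=20000, rng=rng)
print(p_g, round(lb, 3))  # tiny p-value (reject S <= 0.5); LB near 0.758
```

Since Q^S is monotone in U, this Monte Carlo lower bound converges to the closed-form value exp(χ²_{2m,1−γ} ln Ŝ/2m) of (2.8).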
Generalized coverage probabilities, size, and power of the test for the reliability function

Coverage probabilities of the generalized confidence intervals, powers of the generalized tests, and expected lengths of the generalized confidence limits are computed using the Monte Carlo method given in the following algorithm.

Algorithm 2

Given c, θ, γ, m, n, s*, and R = (R_1, R_2, ..., R_m),
For j = 1 : q
  1. Generate a progressively type II right censored sample from the Burr XII distribution with the given θ, and compute θ̂_obs, as in Steps 1 and 2 of Algorithm 1.
  2. Use Algorithm 1 to construct a (1 − γ) confidence interval C_j; set δ_j = 1 if C_j contains S, and δ_j = 0 otherwise.
  3. Use Algorithm 1 to compute the generalized p-value; set η_j = 1 if the p-value < γ, and η_j = 0 otherwise.
(end j loop)

The proportion (1/q) Σ_{j=1}^q δ_j is the estimated coverage probability of the generalized confidence interval. It is evident that the coverage of the generalized confidence interval may not always equal the nominal level; however, when the generalized confidence interval reduces to a traditional classical confidence interval, theoretical results are available on its coverage properties. The proportion (1/q) Σ_{j=1}^q η_j is the estimated power of the generalized test. The expected length of the generalized limits is estimated by the average length of the C_j's.
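A compact version of Algorithm 2's coverage estimate can be sketched as follows; to keep it short, LB^G is evaluated in its closed form rather than by an inner Monte Carlo loop, the sample is generated via the exponential spacings of [4], and the settings (Example 1's censoring scheme, an illustrative θ) are ours.

```python
import math, random

def chi2_ppf_even(p, df):
    # lower 100p-th percentile of chi-square with even df
    half = df // 2
    def cdf(u):
        term, s = 1.0, 1.0
        for k in range(1, half):
            term *= (u / 2) / k
            s += term
        return 1.0 - math.exp(-u / 2) * s
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cdf(mid) < p else (lo, mid)
    return (lo + hi) / 2

def coverage(n, r, c, theta, x, gamma, q, rng):
    m = len(r)
    u = chi2_ppf_even(1 - gamma, 2 * m)
    s_true = (1 + x ** c) ** (-theta)
    hits = 0
    for _ in range(q):
        # progressively censored Exp(theta) sample Y_1 <= ... <= Y_m via the
        # iid spacings of [4]; Y_i corresponds to ln(1 + X_{i:m:n}^c)
        ys, y, removed = [], 0.0, 0
        for i, ri in enumerate(r):
            a = n - removed - i                 # units still on test
            y += rng.expovariate(theta) / a
            ys.append(y)
            removed += ri
        theta_hat = m / sum((1 + ri) * yi for yi, ri in zip(ys, r))  # Eq. (2.3)
        lb = (1 + x ** c) ** (-(u * theta_hat / (2 * m)))  # LB^G, closed form
        hits += s_true >= lb
    return hits / q

rng = random.Random(7)
r = [0, 0, 3, 0, 3, 0, 0, 5]   # Example 1 scheme (n = 19, m = 8)
cov = coverage(n=19, r=r, c=1.4, theta=0.25, x=1.0, gamma=0.05, q=4000, rng=rng)
print(round(cov, 3))  # close to the nominal 0.95
```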
4 Numerical examples
In this section, we apply the proposed hypothesis testing procedure to a real-world data set and a simulated data set. Example 1 considers the failure data of n = 19 electrical insulating fluids from Nelson [14], with m = 8 observed failures. In Example 2, we discuss a simulated data set generated from the Burr XII distribution with c = 3, θ = 0.5, n = 20, m = 10. These numerical examples illustrate the use of the testing procedure, which can handle not only non-normal lifetime data but also progressively type II right censored samples.

Example 1: Real-world data

In the case of real-world data, we use the least squares estimation method, based on minimizing the error sum of squares (SSE) over various values of c, together with the "shape-first" approach (that is, fitting the shape parameter c before fitting the parameter θ), to find the optimal value of c and the estimate of θ such that the SSE is minimized for the progressively type II right censored data. Then c is treated as known. The procedure is as follows:

Step 1. Let X have the Burr XII distribution whose pdf is given by

f_X(x; c, θ) = c θ x^(c−1)(1 + x^c)^(−(θ+1));  x > 0, c > 0, θ > 0.

Then the cdf of X is given by

F_X(x; c, θ) = 1 − (1 + x^c)^(−θ);  x > 0, c > 0, θ > 0,

and F_X(x; c, θ) satisfies

ln(1 − F_X(x; c, θ)) = −θ ln(1 + x^c);  x > 0, c > 0, θ > 0.
Consider that X_{1:m:n} ≤ X_{2:m:n} ≤ ··· ≤ X_{m:m:n} is the corresponding progressively type II right censored sample, with censoring scheme R = (R_1, R_2, ..., R_m). The expectation of F_X(x_{i:m:n}; c, θ) is 1 − ∏_{j=m−i+1}^m (a_j/(a_j + 1)), i = 1, ..., m, where a_j = j + Σ_{i=m−j+1}^m R_i (see [4]). By using the approximate equation

ln(1 − [1 − ∏_{j=m−i+1}^m (a_j/(a_j + 1))]) ≈ −θ ln(1 + x_{i:m:n}^c),  i = 1, ..., m,

we get

θ_i ≈ −ln(1 − [1 − ∏_{j=m−i+1}^m (a_j/(a_j + 1))]) / ln(1 + x_{i:m:n}^c),  for i = 1, ..., m.

Then, using the least squares estimation method for various values of c and the "shape-first" approach to fit the optimal value of c, calculate the SSE for each given value of c; that is,

SSE_c = Σ_{i=1}^m (θ_i − θ̂)²,  where θ̂ = m / Σ_{i=1}^m (1 + R_i) ln(1 + X_{i:m:n}^c).
Now, find the optimal value of c (say c_fit) and the estimate of θ such that the SSE is minimized. The density of the fitted Burr XII distribution is then

f_X(x; c_fit, θ) = c_fit θ x^(c_fit−1)(1 + x^(c_fit))^(−(θ+1));  x > 0, θ > 0.

Step 2. Use the scale-free goodness-of-fit test for the Burr XII distribution based on the Gini statistic, due to Gail and Gastwirth [6], for the progressively type II right censored data X_{1:m:n}, X_{2:m:n}, ..., X_{m:m:n}. The procedure is as follows. The null hypothesis is H_0 : X ∼ the Burr XII distribution with pdf f_X(x; c_fit, θ) = c_fit θ x^(c_fit−1)(1 + x^(c_fit))^(−(θ+1)). The Gini statistic is given by

G_m = Σ_{i=1}^{m−1} i W_{i+1} / [(m − 1) Σ_{i=1}^m W_i],

where W_i = (m − i + 1)(Z_(i) − Z_(i−1)), Z_(0) = 0, i = 1, ..., m, with Z_1 = nY_1, Z_i = [n − Σ_{j=1}^{i−1}(R_j + 1)](Y_i − Y_{i−1}), i = 2, ..., m, and the data transformation Y_i = ln(1 + x_{i:m:n}^(c_fit)). For m = 3, ..., 20, the rejection region is given by {G_m > ξ_{1−γ/2} or G_m < ξ_{γ/2}}, where the critical value ξ_{γ/2} is the 100(γ/2)th percentile of the G_m statistic, available on p. 352 of Gail and Gastwirth [6].

Nelson [14] reported data on times to breakdown of an insulating fluid between electrodes at a voltage of 34 kV. The 19 times to breakdown are: 0.19, 0.78, 0.96, 1.31, 2.78, 3.16, 4.15, 4.67, 4.85, 6.50, 7.35, 8.01, 8.27, 12.06, 31.75, 32.52, 33.91, 36.71, and 72.89. Zimmer et al. [23] indicated that the Burr type XII distribution is acceptable for these data. For the purpose of illustrating the generalized method discussed in this article, a progressively type II censored sample with random removals was generated from the n = 19 observations recorded at 34 kV in Nelson's table [14]. The number of observed failures is m = 8, and the number of removals at each failure time was obtained from a discrete uniform distribution as described in Section 2. The observed failure times and removal numbers are reported in Table 4.1.
Table 4.1 Progressively type II censored sample with random removals generated from the times-to-breakdown data

i          1      2      3      4      5      6      7      8
x_{i:m:n}  0.19   0.78   0.96   1.31   2.78   4.85   6.50   7.35
r_i        0      0      3      0      3      0      0      5
Once the procedure for handling real-world data described above is carried out, the value of c (out of the various c values considered) that minimizes the SSE is found to be c = 1.4, which is very close to the optimum (minimum) of the graph of SSE versus c (this graph has been omitted to save space and can be produced upon request). Further, the θ̂ value corresponding to c = 1.4 is 0.22. Then c is treated as known; that is,

f_X(x; 1.4, θ) = 1.4 θ x^0.4 (1 + x^1.4)^(−(θ+1));  x > 0, θ > 0.

The goodness-of-fit test of H_0 : X ∼ the Burr XII distribution with pdf f_X(x; 1.4, θ) = 1.4 θ x^0.4 (1 + x^1.4)^(−(θ+1)) at level γ = 0.05 is based on the Gini statistic for the progressively type II right censored observed failure times

{x_{i:m:n}, i = 1, 2, ..., 8} = {0.19, 0.78, 0.96, 1.31, 2.78, 4.85, 6.50, 7.35}.

This procedure was explained above, and the Gini statistic is found to be

G_8 = Σ_{i=1}^7 i W_{i+1} / [(8 − 1) Σ_{i=1}^8 W_i] = 0.41920,

where W_i = (m − i + 1)(Z_(i) − Z_(i−1)), Z_(0) = 0, i = 1, ..., m, with Z_1 = nY_1, Z_i = [n − Σ_{j=1}^{i−1}(R_j + 1)](Y_i − Y_{i−1}), i = 2, ..., m, and Y_i = ln(1 + x_{i:m:n}^1.4). Since ξ_0.025 = 0.28748 < G_8 = 0.41920 < ξ_0.975 = 0.71252, we cannot reject H_0 at the 0.05 level of significance, and we conclude that the observed failure times come from the Burr XII distribution with pdf f_X(x; 1.4, θ) = 1.4 θ x^0.4 (1 + x^1.4)^(−(θ+1)), x > 0, θ > 0. Then

θ̂ = m / Σ_{i=1}^m (1 + R_i) ln(1 + X_{i:m:n}^c) = 8/32.8879 = 0.2433.
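The Gini statistic and the MLE reported above can be reproduced from Table 4.1; a Python sketch:

```python
import math

x = [0.19, 0.78, 0.96, 1.31, 2.78, 4.85, 6.50, 7.35]   # Table 4.1
r = [0, 0, 3, 0, 3, 0, 0, 5]
n, m, c = 19, 8, 1.4

# Transform to (approximately) exponential data: Y_i = ln(1 + x_i^c)
y = [math.log(1 + xi ** c) for xi in x]

# Normalized spacings Z_1 = n*Y_1, Z_i = [n - sum_{j<i}(R_j + 1)](Y_i - Y_{i-1})
z, removed = [], 0
for i in range(m):
    a = n - removed - i
    z.append(a * (y[i] - (y[i - 1] if i else 0.0)))
    removed += r[i]

# Gini statistic computed on the ordered Z's
zs = sorted(z)
w = [(m - i) * (zs[i] - (zs[i - 1] if i else 0.0)) for i in range(m)]
g = sum((i + 1) * w[i + 1] for i in range(m - 1)) / ((m - 1) * sum(w))

theta_hat = m / sum((1 + ri) * yi for yi, ri in zip(y, r))   # Eq. (2.3)
print(g, theta_hat)  # approximately 0.41920 and 0.2433, the reported values
```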
To fully explore the advantage of the newly introduced generalized variable method, classical and generalized 95% interval estimates are compared for the reliability function of the Burr distribution. In addition, p-values for testing the reliability function are also compared.
Table 4.2 Comparison of 95% interval estimates of S

x   Point estimate   Classical        Generalized
1   1.18             (0.95 − 1.56)    (1.10 − 1.28)
2   1.22             (1.01 − 1.55)    (1.15 − 1.35)
4   1.27             (1.10 − 1.65)    (1.18 − 1.48)
Table 4.3 Comparison of p-values for testing H_0 : S ≤ s* vs. H_a : S > s* at γ = 0.05

x   s*     Classical   Generalized
1   1.10   0.0550      0.4993
2   1.10   0.7601      0.0495
4   1.10   0.5478      0.0448
When the hypothesis S > 1.10 is tested for x = 1, the generalized p-value is 0.4993, which is less than the nominal size of the test, γ = 0.05. This provides evidence in favor of the claim, which is true since the estimate of S is 1.18. However, the classical p-value is 0.0550, a value greater than γ = 0.05, which suggests that the claim is unacceptable; this is false since the estimate of S is 1.18. Hypothesis tests of S > 1.10 when x = 2 and 4 can be discussed in a similar fashion. All these arguments show that the generalized variable method (GV method) provides accurate, reliable, and non-misleading results, while the classical large sample method (LS method) fails to do so. Hence, the GV method outperforms the large sample method for this particular case.

Example 2: Simulated data

In the case of simulated data, the following procedure must be followed to analyze the data.

Step 1: (a) Given c, θ, γ, m, n, and R = (R_1, R_2, ..., R_m),
  (b) Generate data U_1, U_2, ..., U_m from the uniform distribution U(0, 1).
  (c) By the transformation Z_i = [(1 − U_i)^(−1/θ) − 1]^(1/c), i = 1, 2, ..., m,
(Z1 , Z2 , ..., Zm ) is a random sample from the Burr XII distribution with density
as (1.1). (d) Set Xi,n =
Z1 n
+
Z2 (n−R1 −1)
+ ... +
Zi P i−1 , j=1 Rj −i+1]
[n− 17
for i = 1, 2, ..., m.
(X1,n , X2,n , ..., Xm,n ) is the progressively type II right censored sample from a twoparameter Burr XII distribution with density as (1.1). Step 2: repeat Step 1 and Step 2 in the case of real data. One can then employ the one-sided hypothesis testing to determine whether the reliability adheres to the required level. The proposed testing procedure about S can be organized as follows: Step 1. Let the data transformation Yi = ln(1+xci:m:n ), c > 0, i = 1, 2, ..., m for the progressively type II right censored data X1:m:n , X2:m:n ,···,Xm:m:n , where Xi:m:n ∼ Burr (c, θ), Yi ∼ Exponential (Rate = θ).This transformation is necessary if and only if because critical values of the Gini Statistics have been derived only for the exponential distribution by Gail and Gastwirth [6]. Step 2. Determine the lower reliability function s∗ , then the testing null hypothesis H0 : S ≤ s∗ and the alternative hypothesis Ha : S > s∗ are constructed.
Step 3. Specify a significance level γ.

Step 4. Calculate the value of the test statistic Ŝ = (1 + x^c)^(−m/W), where W = Σ_{i=1}^{m} (1 + Ri) ln(1 + x^c_{i:m:n}).
Step 5. Obtain the critical value S0* from Eq. (2.6), according to the target value s*, the significance level γ, and the number of observed failures before termination, m.

Step 6. The decision rule of the statistical test is as follows: if Ŝ > S0*, it is concluded that the reliability function meets the required level.
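The simulated-data procedure above (sample generation via the inverse CDF and the cumulative-spacings construction, followed by the test statistic Ŝ = (1 + x^c)^(−m/W)) can be sketched in Python. The removal scheme and parameter values below are hypothetical, and the critical value S0* of Eq. (2.6) is taken as given; this is an illustrative sketch, not the paper's implementation.

```python
import math
import random

def burr_inverse(u, c, theta):
    """Inverse CDF of Burr XII: F(x) = 1 - (1 + x^c)^(-theta),
    so F^{-1}(u) = [(1 - u)^(-1/theta) - 1]^(1/c)."""
    return ((1.0 - u) ** (-1.0 / theta) - 1.0) ** (1.0 / c)

def progressive_sample(n, removals, c, theta, rng):
    """Steps 1(b)-(d): draw Z_i via the inverse CDF, then form
    X_i = sum_{k<=i} Z_k / (n - R_1 - ... - R_{k-1} - k + 1)."""
    x, total, removed = [], 0.0, 0
    for k, r in enumerate(removals):        # k = 0, ..., m-1
        z = burr_inverse(rng.random(), c, theta)
        total += z / (n - removed - k)      # denominator shrinks with removals
        x.append(total)
        removed += r
    return x

def s_hat(x, removals, c, x0):
    """Step 4: S_hat = (1 + x0^c)^(-m/W), W = sum (1 + R_i) ln(1 + x_i^c)."""
    w = sum((1 + r) * math.log(1.0 + xi ** c) for xi, r in zip(x, removals))
    return (1.0 + x0 ** c) ** (-len(x) / w)

rng = random.Random(2016)
R = [1, 0, 2, 0, 2]                         # hypothetical removal scheme, m = 5
sample = progressive_sample(10, R, c=1.5, theta=1.1, rng=rng)
# Step 6 would compare s_hat(...) with the critical value S0* from Eq. (2.6).
print(s_hat(sample, R, c=1.5, x0=1.0))
```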
In addition, the proposed testing procedure can also be formulated through the one-sided confidence interval. The decision rule is then: "If the reliability level s* ∉ [LB_C, ∞), it is concluded that the reliability function meets the required level." Here we give some numerical results on the coverages of the two confidence intervals for S(x), based on the approximate lower confidence limits LB_C(x) and LB_G(x) given in (2.8) and (3.6), respectively, for progressively type II censored samples with m failures observed before termination from n items on test and a removal scheme R = (R1, R2, ..., R10) of numbers of units removed (withdrawn) from the test. For this, we choose n = 20, m = 10, and (R1, R2, ..., R10) = (0, 3, 3, 0, 3, 0, 0, 5, 6, 2). Furthermore, the combinations of (c, θ) were chosen to be (c = 0.5, θ = 0.1, 1.1, 2.1), where, for c < 1, the hazard decreases monotonically with time, and (c = 1.5, θ = 0.1, 1.1, 2.1) and (c = 2.5, θ = 0.1, 1.1, 2.1), where, for c > 1, the hazard increases, reaches a maximum, and then decreases (hump shape). We also chose the values x = 0.5, 1.0, 1.5, 2.0, 2.5, 3.0. The coverages of the confidence intervals were obtained using one million pseudo-random variates from the Burr XII distributions. For 1 − γ = 0.95, the coverages are reported in Table 4.4. It is clear that LB_G(x) is an extremely satisfactory confidence limit. On the other hand, for the confidence interval based on LB_C(x), the coverage can be unsatisfactory in some cases; the interval can be conservative when x is large compared to c. We also carried out a similar comparison for n = 30, 50, 100 with their corresponding numbers of failures and removal schemes. For n = 30, the pattern is very similar to that for n = 20, reported in Table 4.4. However, for n = 100, the coverages of both confidence intervals turned out to be quite satisfactory. These numerical results are not reported here but are available upon request.
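The coverage experiment can be mimicked with a small Monte Carlo sketch. Since the paper's limits LB_C (2.8) and LB_G (3.6) are not reproduced in this excerpt, the sketch below instead uses an illustrative exact lower limit derived from the pivotal quantity 2θW ∼ χ²_{2m} for the exponentially transformed data, together with the Wilson-Hilferty approximation to the chi-square quantile; the removal scheme in the demonstration call is hypothetical.

```python
import math
import random

Z95 = 1.6449  # standard normal 0.95 quantile

def chi2_quantile_95(k):
    """Wilson-Hilferty approximation to the chi-square 0.95 quantile, k df."""
    a = 2.0 / (9.0 * k)
    return k * (1.0 - a + Z95 * math.sqrt(a)) ** 3

def coverage(n, removals, c, theta, x0, reps=2000, seed=7):
    """Estimated coverage of a 95% lower limit for S(x0) built from the
    exact pivot 2*theta*W ~ chi-square(2m) (illustrative, not the paper's LB_G).
    Requires m + sum(removals) == n for the spacings construction."""
    rng = random.Random(seed)
    m = len(removals)
    s_true = (1.0 + x0 ** c) ** (-theta)
    q = chi2_quantile_95(2 * m)
    hits = 0
    for _ in range(reps):
        # progressively censored Exponential(theta) sample via spacings
        y, total, removed = [], 0.0, 0
        for k, r in enumerate(removals):
            total += rng.expovariate(theta) / (n - removed - k)
            y.append(total)
            removed += r
        w = sum((1 + r) * yi for yi, r in zip(y, removals))
        theta_upper = q / (2.0 * w)                 # 95% upper bound for theta
        lb = (1.0 + x0 ** c) ** (-theta_upper)      # 95% lower bound for S(x0)
        hits += lb <= s_true
    return hits / reps

print(coverage(10, [1, 0, 2, 0, 2], c=1.5, theta=1.1, x0=1.0))
```

With an exact pivot the coverage should sit close to the nominal 95%, up to Monte Carlo noise and the quantile approximation.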
Table 4.4  Coverage (%) of the confidence intervals based on LB_C and LB_G for n = 20, m = 10, and 1 − γ = 0.95

                                x = 0.5   1.0     1.5     2.0     2.5     3.0
 c = 0.5   θ = 0.1   LB_C(x)    94.95    96.88   97.28   97.00   96.95   96.98
                     LB_G(x)    94.95    94.69   94.69   94.75   94.80   94.35
           θ = 1.1   LB_C(x)    95.05    94.50   96.61   96.78   97.17   97.88
                     LB_G(x)    95.05    95.15   95.05   94.75   95.05   95.01
           θ = 2.1   LB_C(x)    95.24    94.80   94.12   96.50   96.88   96.88
                     LB_G(x)    95.24    94.80   94.68   95.56   94.96   96.97
 c = 1.5   θ = 0.1   LB_C(x)    95.16    96.92   97.57   97.02
                     LB_G(x)    95.16    94.96   95.28   94.76
           θ = 1.1   LB_C(x)    94.78    93.37   96.66   97.04
                     LB_G(x)    94.78    94.30   95.44   95.12
           θ = 2.1   LB_C(x)    95.10    94.25   94.46   95.66
                     LB_G(x)    95.10    94.25   95.04   94.72
 c = 2.5   θ = 0.1   LB_C(x)    97.53    96.45
                     LB_G(x)    95.04    95.12
           θ = 1.1   LB_C(x)    96.58    97.88
                     LB_G(x)    95.00    95.13
           θ = 2.1   LB_C(x)    96.18    96.88
                     LB_G(x)    95.25    95.33
Table 4.5 shows the classical and the generalized empirical (actual) type I error rates (the rejection rate of the null hypothesis: the fraction of times the p-value is less than the nominal level) for the test H0: S ≤ s* vs. Ha: S > s* when the nominal (intended) type I error rate is γ = 0.10. The pseudo-random variates X were generated from X ∼ Burr(c, θ) for various parameter configurations β = (c, θ) and different sample sizes (n, m). All results are based on 1 000 000 replications.
Table 4.5  Empirical (true) type I error rates for testing H0: S ≤ s* vs. Ha: S > s* when the nominal (intended) level is γ = 0.10

 (n, m)     (c, θ)     s*     Generalized   Classical
 (10, 10)   (0.5, 4)   0.52   0.118         0.052
            (0.5, 5)   0.74   0.112         0.067
            (1.0, 4)   0.27   0.135         0.060
            (1.0, 5)   0.59   0.117         0.079
            (2.5, 5)   0.71   0.111         0.083
 (15, 15)   (0.5, 4)   0.55   0.114         0.093
            (0.5, 5)   0.70   0.117         0.0979
            (1.5, 4)   0.29   0.099         0.095
            (1.5, 5)   0.58   0.105         0.090
            (2.5, 5)   0.67   0.109         0.057
Note: Actual size (AS): for instance, the actual size of the test for testing H0: S ≤ 0.005 vs. Ha: S > 0.005 is the proportion of p-values that are less than the nominal level γ. That is,

AS = [Number of p-values for testing H0: S ≤ 0.005 vs. Ha: S > 0.005 that are less than γ] / [Total number of simulations].
According to this simulation study, the actual type I error rates (actual sizes) of the GV-Method are much closer to the intended (nominal) size γ = 0.10 than those of the LS-Method. Therefore, the GV-Method outperforms the LS-Method for this particular problem. Table 4.6 shows the power comparison for testing S ≤ 0.50 vs. S > 0.50, without and after adjusting the actual type I error rate at γ = 0.10, based on 1 000 000 replications.
Table 4.6  Comparison of powers for testing S ≤ 0.43 vs. S > 0.43, without and after adjusting the size at γ = 0.10

                              Without adjusting the size    After adjusting the size
 (n, m)    (c, θ)     S       Generalized   Classical       Generalized   Classical
 (10, 5)   (0.5, 4)   0.45    0.117         0.055           0.010         0.010
 (10, 3)   (0.5, 5)   0.46    0.249         0.129           0.215         0.108
 (10, 2)   (1.5, 4)   0.47    0.566         0.421           0.543         0.408
 (10, 1)   (1.5, 5)   0.68    0.777         0.638           0.763         0.599
 (10, 3)   (2.5, 5)   0.79    0.897         0.769           0.856         0.715
Note: Unadjusted power (UP): for instance, when S = 0.005, to calculate the unadjusted power of the test, consider any s* less than 0.005, with H0^{s*}: S ≤ s* vs. Ha^{s*}: S > s*. The unadjusted (actual) power of the test is then the fraction of p-values for testing H0^{0.001}: S ≤ 0.001 vs. Ha^{0.001}: S > 0.001 that are less than the nominal level of significance γ, where s* = 0.001. That is,

UP = [Number of p-values for testing H0^{0.001}: S ≤ 0.001 vs. Ha^{0.001}: S > 0.001 that are less than γ] / [Total number of simulations].

Adjusted power (AP): for instance, when S = 0.005, to calculate the adjusted power of the test, consider any s* less than 0.005, with H0^{s*}: S ≤ s* vs. Ha^{s*}: S > s*. The adjusted (actual) power of the test is then the fraction of p-values for testing H0^{0.001}: S ≤ 0.001 vs. Ha^{0.001}: S > 0.001 that are less than the γth percentile of p-values (p_γ) for testing H0^{0.005}: S ≤ 0.005 vs. Ha^{0.005}: S > 0.005, the test used to find the actual size when s* = 0.005. That is,

AP = [Number of p-values for testing H0^{0.001}: S ≤ 0.001 vs. Ha^{0.001}: S > 0.001 that are less than p_γ] / [Total number of simulations].
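The unadjusted and adjusted power computations described in the notes above amount to comparing simulated p-values against either the nominal γ or the empirical γth percentile p_γ of p-values simulated at the null boundary. A minimal sketch (the p-value arrays fed in would come from such simulations):

```python
def empirical_percentile(values, q):
    """The q-th empirical percentile (0 < q < 1) of a list of p-values."""
    s = sorted(values)
    return s[min(len(s) - 1, int(q * len(s)))]

def unadjusted_power(p_alt, gamma):
    """UP: fraction of alternative-case p-values below the nominal gamma."""
    return sum(p < gamma for p in p_alt) / len(p_alt)

def adjusted_power(p_alt, p_null, gamma):
    """AP: fraction of alternative-case p-values below p_gamma, the
    gamma-th percentile of p-values simulated at the null boundary."""
    p_gamma = empirical_percentile(p_null, gamma)
    return sum(p < p_gamma for p in p_alt) / len(p_alt)
```

Adjusting by p_γ rather than γ equalizes the actual sizes of the two methods, which is what makes the subsequent power comparison meaningful.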
Without adjusting the size, the generalized powers for testing S ≤ 0.45 vs. S > 0.45 clearly suggest that the GV-Method outperforms the LS-Method. Even after adjusting the size, the GV-Method still maintains a slight advantage over the LS-Method. The size of the test has to be adjusted to obtain a meaningful comparison of power. In practice, however, practitioners, being less concerned about the size, are not interested in adjusting the nominal size to attain the desired level γ. Our overall recommendation is to use the confidence limit LB_G(x); it has an explicit analytic expression and is easily computed. Furthermore, its coverage, unadjusted and adjusted power, actual type I error rate, and confidence interval for the reliability parameter of the Burr XII distribution are quite satisfactory.
5 Conclusion
In this article, we have shown yet another complex problem for which the GV-Method yields efficient inferential procedures. Unlike the LS-Method, it yields a generalized pivotal quantity that is simple to use for hypothesis testing of, and interval estimation for, the reliability parameter of the Burr XII distribution under progressive type II censoring with random removals, where the number of units removed at each failure time has a discrete uniform distribution. Even though this approach, which is used to tackle the complicated situations that arise in practical applications, is computationally involved, it is as easy as the other procedures once it is programmed using Algorithms 1 and 2. To obtain consistent results irrespective of the seed used for the random number generator, we recommend a Monte Carlo simulation of 1 000 000 runs (or replications). The results for the p-values, confidence intervals, size of the test, adjusted and unadjusted power of the test, and coverage probabilities obtained from the simulated data suggest that the existing LS-Method is not accurate even for very large samples, and that the GV-Method is conservative for some parameter configurations when sample sizes are moderate. Therefore, the GV-Method satisfactorily outperforms the existing LS-Method for this particular problem. The proposed model is very flexible, and the method is very easy to implement. Moreover, it is possible to obtain small-sample results, which is a clear advantage over large-sample frequentist inference.
References

[1] A.A. Abdel-Ghaly, G.R. Al-Dayian, F.H. Al-Kashkari, The use of Burr type XII distribution on software reliability growth modeling. Microelectron. Reliab. 37 (1997) 305–313.
[2] M.A.M.A. Mousa, Empirical Bayes estimators for the Burr type XII accelerated life testing model based on type-2 censored data. J. Statist. Comput. Simul. 52 (1995) 95–103.
[3] M.A.M.A. Mousa, Z.F. Jaheen, Statistical inference for the Burr model based on progressively censored data. Comput. Math. Appl. 43 (2002) 1441–1449.
[4] N. Balakrishnan, R. Aggarwala, Progressive Censoring: Theory, Methods and Applications. Birkhäuser, Boston, 2000.
[5] I.W. Burr, Cumulative frequency functions. Ann. Math. Statist. 13 (1942) 215–232.
[6] M.H. Gail, J.L. Gastwirth, A scale-free goodness-of-fit test for the exponential distribution based on the Gini statistic. J. R. Statist. Soc. Ser. B 40(3) (1978) 350–357.
[7] S. Gunasekera, Generalized inference of R = Pr(X > Y) for Pareto distribution. Statistical Papers (née Statistische Hefte) 56(2) (2015) 333–351. doi: 10.1007/s00362-014-0584-8.
[8] S. Gunasekera, Inferences on functions of Pareto parameters with applications to income inequality. Accepted for publication in Comm. Statist. Simulation Comput. (2015). doi: 10.1080/03610918.2014.983653.
[9] S. Gunasekera, Generalized inferences for the expected winning bids. Accepted for publication in Amer. J. Math. Management Sci. (2015).
[10] S. Gunasekera, M.M.A. Ananda, Generalized variable method inference for the location parameter of the general half-normal distribution. J. Statist. Comput. Simul. 85(10) (2015) 2115–2132. doi: 10.1080/00949655.2014.923424.
[11] H.K. Iyer, C.M. Wang, T. Mathew, Models and confidence intervals for true values in interlaboratory trials. J. Amer. Statist. Assoc. 99(34) (2004) 1060–1071.
[12] X. Li, Y. Shi, J. Wei, J. Chai, Empirical Bayes estimators of reliability performances using LINEX loss under progressively type-II censored samples. Math. Comput. Simulation 73 (2007) 320–326.
[13] D. Moore, A.S. Papadopoulos, The Burr type XII distribution as a failure model under various loss functions. Microelectron. Reliab. 40 (2000) 2117–2122.
[14] W. Nelson, Applied Life Data Analysis. Wiley, New York, 1982.
[15] S. Singh, G. Maddala, A function for the size distribution of incomes. Econometrica 44(5) (1976) 963–970.
[16] A.A. Soliman, Estimation of parameters of life from progressively censored data using Burr XII model. IEEE Trans. Rel. 54(1) (2005) 34–42.
[17] K. Tsui, S. Weerahandi, Generalized p-values in significance testing of hypotheses in the presence of nuisance parameters. J. Amer. Statist. Assoc. 84 (1989) 602–607.
[18] F.K. Wang, J.B. Keats, Maximum likelihood estimation of the Burr XII parameters with censored and uncensored data. Microelectron. Reliab. 36 (1996) 359–362.
[19] S.J. Wu, Y.J. Chen, C.T. Chang, Statistical inference based on progressively censored samples with random removals from the Burr type XII distribution. J. Stat. Comput. Simul. 77 (2007) 19–27.
[20] S. Weerahandi, Generalized confidence intervals. J. Amer. Statist. Assoc. 88(423) (1993) 899–905.
[21] S. Weerahandi, Exact Statistical Methods for Data Analysis. Springer-Verlag, New York, 1995.
[22] S. Weerahandi, Generalized Inference in Repeated Measures: Exact Methods in MANOVA and Mixed Models. John Wiley, New Jersey, 2004.
[23] W.J. Zimmer, J.B. Keats, F.K. Wang, The Burr XII distribution in reliability analysis. J. Qual. Technol. 30 (1998) 386–394.