
Microelectron. Reliab., Vol. 33, No. 9, pp. 1251-1257, 1993. Printed in Great Britain.

0026-2714/93 $6.00 + .00 © 1993 Pergamon Press Ltd

MAXIMUM LIKELIHOOD ESTIMATION OF BURR XII DISTRIBUTION PARAMETERS UNDER TYPE II CENSORING

DALLAS R. WINGO

MathStat Associates, Inc., P.O.B. 8385, Red Bank, NJ 07701, U.S.A.

(Received for publication 12 May 1992)

ABSTRACT

The mathematical theory for the point estimation of the parameters of the Burr Type XII distribution by maximum likelihood (ML) is developed for Type II censored samples. Also derived are necessary and sufficient conditions on the sample data that guarantee the existence, uniqueness and finiteness of the ML parameter estimates for all possible permissible parameter combinations. The asymptotic theory of ML is invoked to obtain approximate confidence intervals for the ML parameter estimates. An application to reliability data arising in a life test experiment is discussed.

1. Introduction

The Burr Type XII distribution (BD) was first introduced by Burr (1942). The cumulative distribution and probability density functions of the BD are, respectively,

F(x) = 1 - 1/(x^c + 1)^k    (1.1)

and

f(x) = c k x^{c-1} / (x^c + 1)^{k+1},    (1.2)

where x > 0 and where c > 0 and k > 0 are shape parameters. The BD is a unimodal distribution that, as shown by Burr and Cislak (1968), Rodriguez (1977) and Tadikamalla (1980), covers the curve shape characteristics of the normal, logistic and exponential (Pearson Type X) distributions, as well as a significant portion of the curve shape characteristics of the Pearson Types I (beta), II, III (gamma), V, VII, IX and XII. It is therefore apparent that the versatility and flexibility of the BD render it quite attractive as a tentative model for data whose underlying distribution is unknown. Wingo (1983) has described methods for fitting the BD to life test or other (complete sample) data by maximum likelihood (ML) and has also provided an extensive list of references to earlier published work on this distribution. Other researchers who have studied the BD include Evans and Simons (1975), Evans and Ragab (1983) and Papadopoulos (1978). The present paper extends and augments the work of these authors by developing the theory for the ML point estimation of the parameters of the BD when Type II singly censored sample data are at hand. Necessary and sufficient conditions on the sample data are developed that guarantee the existence, uniqueness and finiteness of the ML parameter estimates (MLEs) for all possible permissible parameter combinations.

2. Statistical Properties of the BD

The probability density (1.2) of the BD is unimodal with mode

x_{mode} = [(c-1)/(ck+1)]^{1/c}

if c > 1, and L-shaped if c \le 1. The p-th moment about the origin of x is

E\{x^p\} = k B(p/c + 1, k - p/c),    (2.1)

where B(y,z) \equiv \Gamma(y)\Gamma(z)/\Gamma(y+z). Obviously, E\{x^p\} exists if and only if ck > p. If we define \mu \equiv E\{x\} and \sigma^2 \equiv Var(x) = E\{x^2\} - \mu^2, then it follows from (2.1) that

\mu = k B(1/c + 1, k - 1/c)    (2.2)

and

\sigma^2 = k B(2/c + 1, k - 2/c) - \mu^2.    (2.3)

The quantile x_q of order q is

x_q = [(1-q)^{-1/k} - 1]^{1/c}.    (2.4)

The median of the BD is

x_{med} = (2^{1/k} - 1)^{1/c}.    (2.5)
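The quantities above are straightforward to evaluate numerically. The following Python sketch (not part of the original paper) implements (1.1)-(1.2) and (2.1)-(2.5); it assumes NumPy and SciPy are available, and the function names are illustrative only.

```python
# Minimal sketch (not from the paper): Burr XII quantities from Eqs. (1.1)-(1.2) and (2.1)-(2.5).
import numpy as np
from scipy.special import beta as B  # B(y, z) = Gamma(y)Gamma(z)/Gamma(y+z)

def burr_cdf(x, c, k):
    """F(x) = 1 - 1/(x^c + 1)^k, Eq. (1.1)."""
    return 1.0 - (x**c + 1.0) ** (-k)

def burr_pdf(x, c, k):
    """f(x) = c k x^(c-1) / (x^c + 1)^(k+1), Eq. (1.2)."""
    return c * k * x ** (c - 1.0) / (x**c + 1.0) ** (k + 1.0)

def burr_moment(p, c, k):
    """E{x^p} = k B(p/c + 1, k - p/c), Eq. (2.1); exists only when c*k > p."""
    if c * k <= p:
        raise ValueError("E{x^p} exists only when c*k > p")
    return k * B(p / c + 1.0, k - p / c)

def burr_quantile(q, c, k):
    """x_q = [(1-q)^(-1/k) - 1]^(1/c), Eq. (2.4); q = 0.5 gives the median (2.5)."""
    return ((1.0 - q) ** (-1.0 / k) - 1.0) ** (1.0 / c)

if __name__ == "__main__":
    c, k = 2.0, 3.0                         # arbitrary shape parameters for illustration
    mu = burr_moment(1, c, k)               # Eq. (2.2)
    var = burr_moment(2, c, k) - mu**2      # Eq. (2.3)
    print(mu, var, burr_quantile(0.5, c, k))
```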

3. Maximum Likelihood Estimation

3.1 Likelihood Function and Equations

For a Type II censored sample, the data consist of the r smallest observations (order statistics) x_{(1)} \le x_{(2)} \le \cdots \le x_{(r)} in a random sample of n (r \le n) observations having a common density function f(x,\theta) and distribution function F(x,\theta), where \theta is a vector-valued parameter. The log-likelihood function (LLF) of \{x_{(1)}, \ldots, x_{(r)}\} is

L(\theta) = \log[n!/(n-r)!] + \sum_{i=1}^{r} \log f(x_i,\theta) + (n-r) \log[1 - F(x_r,\theta)],    (3.1)

where the parenthesized subscripts on the order statistics have been suppressed for notational simplicity and convenience. For the BD (1.1)-(1.2) the identity (3.1) is, up to a positive constant, explicitly

L(c,k) = r \log(c) + r \log(k) - (n-r) k \log(x_r^c + 1) + (c-1) \sum_{i=1}^{r} \log(x_i) - (k+1) \sum_{i=1}^{r} \log(x_i^c + 1).    (3.2)

When r = n, L(c,k) reduces to the LLF for complete samples, as it should. As noted earlier, the ML point estimation problem for complete samples has been treated in detail by Wingo (1983).
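For reference, the censored LLF (3.2) translates directly into code. The sketch below is ours, not the paper's; it evaluates L(c,k) up to the constant log[n!/(n-r)!], assuming x holds the r smallest observations sorted in ascending order.

```python
# Sketch (not from the paper): Type II censored log-likelihood L(c, k) of Eq. (3.2),
# omitting the additive constant log[n!/(n-r)!].
import numpy as np

def loglik(c, k, x, n):
    """x: the r smallest observations, sorted ascending (x[-1] = x_r); n: sample size."""
    x = np.asarray(x, dtype=float)
    r = len(x)
    return (r * np.log(c) + r * np.log(k)
            - (n - r) * k * np.log(x[-1] ** c + 1.0)
            + (c - 1.0) * np.sum(np.log(x))
            - (k + 1.0) * np.sum(np.log(x**c + 1.0)))
```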


The likelihood equations (LEs) are \partial L/\partial c = 0 and \partial L/\partial k = 0, where

\partial L/\partial c = r/c - (n-r) k x_r^c \log(x_r)/(x_r^c + 1) + \sum_{i=1}^{r} \log(x_i) - (k+1) \sum_{i=1}^{r} x_i^c \log(x_i)/(x_i^c + 1),    (3.3)

\partial L/\partial k = r/k - (n-r) \log(x_r^c + 1) - \sum_{i=1}^{r} \log(x_i^c + 1).    (3.4)

Solving \partial L/\partial k = 0 for k gives, from (3.4),

\hat{k} = r / \left[ (n-r) \log(x_r^c + 1) + \sum_{i=1}^{r} \log(x_i^c + 1) \right].    (3.5)

Substitution of (3.5) into (3.2) produces the conditional LLF

L^*(c \mid \hat{k}) = \phi(c) - r \log\left\{ (n-r) \log(x_r^c + 1) + \sum_{i=1}^{r} \log(x_i^c + 1) \right\},    (3.6)

where

\phi(c) = r(\log(r) - 1) + r \log(c) + (c-1) \sum_{i=1}^{r} \log(x_i) - \sum_{i=1}^{r} \log(x_i^c + 1).    (3.7)

It is of interest to note that \phi(c) is strictly concave in c for c > 0.
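In practice, \hat{c} can be obtained by maximizing L^*(c) numerically over c > 0, after which \hat{k} follows from (3.5). The paper does not prescribe a particular algorithm; the Python sketch below is one possible implementation (a bounded scalar search via SciPy, with an assumed search interval), not the author's original procedure.

```python
# Sketch (assumed implementation, not the paper's code): maximize the conditional
# log-likelihood L*(c) of Eq. (3.6) in c, then recover k-hat from Eq. (3.5).
import numpy as np
from scipy.optimize import minimize_scalar

def k_hat(c, x, n):
    """Eq. (3.5): k-hat for a given c; x holds the r smallest observations, x[-1] = x_r."""
    r = len(x)
    denom = (n - r) * np.log(x[-1] ** c + 1.0) + np.sum(np.log(x**c + 1.0))
    return r / denom

def profile_loglik(c, x, n):
    """Conditional LLF L*(c) of Eq. (3.6), using phi(c) of Eq. (3.7)."""
    r = len(x)
    phi = (r * (np.log(r) - 1.0) + r * np.log(c)
           + (c - 1.0) * np.sum(np.log(x)) - np.sum(np.log(x**c + 1.0)))
    denom = (n - r) * np.log(x[-1] ** c + 1.0) + np.sum(np.log(x**c + 1.0))
    return phi - r * np.log(denom)

def fit_burr_typeII(x, n):
    """Return (c-hat, k-hat) from the r smallest of n observations (Type II censoring)."""
    x = np.sort(np.asarray(x, dtype=float))
    # Assumed search interval for c; widen it if the optimum lands on a boundary.
    res = minimize_scalar(lambda c: -profile_loglik(c, x, n),
                          bounds=(1e-6, 50.0), method="bounded")
    c_hat = res.x
    return c_hat, k_hat(c_hat, x, n)
```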

3.2 Asymptotic Variances and Covariances of MLEs

The asymptotic variances and covariances of the MLEs are given by the elements of the inverse of the Fisher information matrix I_{ij} = E\{-\partial^2 L/\partial\theta_i \partial\theta_j\}, where i,j = 1,2 and \theta = (c,k)^T; I is symmetric. Unfortunately, the exact mathematical expressions for the above expectations are very difficult to obtain. Therefore, we give below the observed Fisher information matrix \hat{I}_{ij} = \{-\partial^2 L/\partial\theta_i \partial\theta_j\}, which is obtained by dropping the expectation operator E. The approximate (i.e., observed) asymptotic variance-covariance matrix \hat{I}^{ij} for the MLEs is obtained by inverting the observed information matrix, viz.

[\hat{I}^{ij}] = [\hat{I}_{ij}]^{-1}.    (3.8)

Differentiating (3.3)-(3.4) with respect to c and k gives

L_{cc} = -r/c^2 - (k+1) \sum_{i=1}^{r} x_i^c (\log x_i)^2/(x_i^c + 1)^2 - k(n-r) x_r^c (\log x_r)^2/(x_r^c + 1)^2,    (3.9)

L_{kk} = -r/k^2,    (3.10)

L_{ck} = L_{kc} = -(n-r) x_r^c \log(x_r)/(x_r^c + 1) - \sum_{i=1}^{r} x_i^c \log(x_i)/(x_i^c + 1),    (3.11)

where L_{\alpha\beta} \equiv \partial^2 L/\partial\alpha \partial\beta for the generic parameters \alpha and \beta.
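A direct translation of (3.9)-(3.11) gives the observed information matrix and, by (3.8), the approximate covariance matrix of the MLEs. The helper below is an assumed implementation, not code from the paper.

```python
# Sketch (assumed helper): observed Fisher information from Eqs. (3.9)-(3.11),
# inverted as in Eq. (3.8) to approximate the variances/covariance of the MLEs.
import numpy as np

def observed_information(c, k, x, n):
    """x: the r smallest observations, sorted ascending (x[-1] = x_r)."""
    r = len(x)
    xr = x[-1]
    lx = np.log(x)
    w = x**c / (x**c + 1.0)                      # x_i^c / (x_i^c + 1)
    wr = xr**c / (xr**c + 1.0)
    Lcc = (-r / c**2
           - (k + 1.0) * np.sum(w * lx**2 / (x**c + 1.0))        # x^c (log x)^2 / (x^c+1)^2
           - k * (n - r) * wr * np.log(xr) ** 2 / (xr**c + 1.0))  # x_r term of Eq. (3.9)
    Lkk = -r / k**2                                               # Eq. (3.10)
    Lck = -(n - r) * wr * np.log(xr) - np.sum(w * lx)             # Eq. (3.11)
    return -np.array([[Lcc, Lck], [Lck, Lkk]])   # I-hat_ij = -d^2 L / d(theta_i) d(theta_j)

def mle_covariance(c, k, x, n):
    """Approximate asymptotic covariance matrix of (c-hat, k-hat), Eq. (3.8)."""
    return np.linalg.inv(observed_information(c, k, x, n))
```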

3.3 Confidence Intervals

Because of the non-orthogonality of c and k, an exact inferential analysis of the BD will be very complicated. However, approximate confidence intervals for c and k can be developed by invoking the asymptotic normality of the MLEs. Separate (1-\alpha)100 percent confidence intervals for c and k are, respectively,

\hat{c} \pm Z_{\alpha/2} \sqrt{\widehat{Var}(\hat{c})}    (3.12)

and

\hat{k} \pm Z_{\alpha/2} \sqrt{\widehat{Var}(\hat{k})},    (3.13)

where Z_{\alpha/2} is a standard normal variate. For a 95% confidence interval, \alpha = 0.05 and Z_{0.025} = 1.96. Evans and Ragab (1983) have observed that when c is known and r = n, then \prod_{i=1}^{n} (1 + x_i^c) is a sufficient statistic for k. Under these same conditions on r and c, Patel, Kapadia and Owen (1976) give the following confidence interval for k (when c is known):

P[\chi^2(\alpha/2; 2n) \le 2kY \le \chi^2(1-\alpha/2; 2n)] = 1 - \alpha,    (3.14)

where

Y = \sum_{i=1}^{n} \log(1 + x_i^c)    (3.15)

and where \chi^2(\gamma; m) is the 100\gamma percentage point of the chi-square distribution with m degrees of freedom.
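The intervals (3.12)-(3.13), and the known-c interval (3.14), are simple to compute once the MLEs and their approximate covariance matrix are available. The sketch below is ours (SciPy assumed) and illustrates both; the function names are illustrative only.

```python
# Sketch (assumed helpers): Wald-type intervals (3.12)-(3.13) from the MLEs and their
# approximate covariance matrix, plus the known-c interval (3.14) for complete samples.
import numpy as np
from scipy.stats import norm, chi2

def wald_intervals(c_hat, k_hat, cov, alpha=0.05):
    """Return the (1-alpha) intervals for c and k; cov is ordered as (c, k)."""
    z = norm.ppf(1.0 - alpha / 2.0)              # e.g. 1.96 for alpha = 0.05
    se_c, se_k = np.sqrt(np.diag(cov))
    return (c_hat - z * se_c, c_hat + z * se_c), (k_hat - z * se_k, k_hat + z * se_k)

def k_interval_known_c(x, c, alpha=0.05):
    """Eq. (3.14): interval for k when c is known and r = n (complete sample)."""
    n = len(x)
    Y = np.sum(np.log(1.0 + np.asarray(x, float) ** c))   # Eq. (3.15)
    return (chi2.ppf(alpha / 2.0, 2 * n) / (2.0 * Y),
            chi2.ppf(1.0 - alpha / 2.0, 2 * n) / (2.0 * Y))
```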

4. Existence and Uniqueness of MLEs

As noted earlier, the BD is often used as a tentative model to fit data whose underlying statistical distribution is unknown. Therefore, there naturally arises the question as to whether, for arbitrary sample data, MLEs of the parameters of the BD exist and are unique and finite. When c is known, fixed and positive, \hat{k} always exists, is unique and finite, and is given by (3.5). When k is known, fixed and positive, the LLF (3.2) is strictly concave in c, inasmuch as L_{cc} < 0. Therefore, when k > 0 and fixed, it follows from the strict concavity of (3.2) that \hat{c}, when it exists, is unique. The following proposition gives necessary and sufficient conditions on the sample data to ensure that \hat{c} always exists and is unique and finite.

Proposition 1. The MLE \hat{c} always exists, is unique and finite if and only if the sample data contain at least one observation that differs from unity.

Proof. Define

\lambda(c) \equiv r/c + \sum_{i=1}^{r} \log(x_i) - k(n-r) x_r^c \log(x_r)/(x_r^c + 1) - (k+1) \sum_{i=1}^{r} x_i^c \log(x_i)/(x_i^c + 1),    (4.1)

where k > 0 is fixed. Also, define the index sets

M \equiv \{i (i = 1, \ldots, r) \mid x_i < 1\}, \qquad P \equiv \{i (i = 1, \ldots, r) \mid x_i > 1\}.

Straightforward algebraic manipulations show that, as c \to \infty,

\sum_{i=1}^{r} x_i^c (x_i^c + 1)^{-1} \log(x_i) \to \sum_{i \in P} \log(x_i)

and

x_r^c (x_r^c + 1)^{-1} \log(x_r) \to \log(x_r) if r \in P, and \to 0 otherwise.

Now, \lambda(0) = \infty and

\lambda(\infty) = \sum_{i \in M} \log(x_i) - k \sum_{i \in P} \log(x_i) - k(n-r) \log(x_r) \mathbf{1}\{r \in P\}.    (4.2)

It follows from (4.2) that \lambda(\infty) < 0 if and only if x_i \ne 1 for some i (1 \le i \le r). Since \lambda(c) = \partial L/\partial c is continuous and strictly decreasing in c (L_{cc} < 0), the equation \lambda(c) = 0 therefore has a unique finite positive root if and only if at least one observation differs from unity. \blacksquare



When both c and k are unknown, the MLE \hat{c} is the global maximizer of the conditional LLF (3.6) for all c > 0. Given \hat{c}, the MLE \hat{k} is computed from (3.5). For the present case, we investigate the existence, uniqueness and finiteness of the MLEs by examining the behaviour of the equation \psi(c) = 0 on the positive real line, where \psi(c) = dL^*/dc. When c and k are to be estimated jointly, it is necessary to require that r \ge 2 and that x_i \ne x_j for some i \ne j (i.e., there must be at least two distinct observations in the data sample). The following proposition shows that, if these requirements are met, then \hat{c} will exist if and only if the sample data contain at least one observation that is less than unity.

Proposition 2. Suppose that r \ge 2 and x_i \ne x_j for some i \ne j. Then the MLE \hat{c} (and hence \hat{k}) exists, is unique and finite if and only if x_i < 1 for some i (1 \le i \le r).

Proof. Differentiating (3.6) gives

\psi(c) = r/c + \sum_{i=1}^{r} \log(x_i) - \sum_{i=1}^{r} \frac{x_i^c \log(x_i)}{x_i^c + 1} - \frac{ r \left[ (n-r) \dfrac{x_r^c \log(x_r)}{x_r^c + 1} + \sum_{i=1}^{r} \dfrac{x_i^c \log(x_i)}{x_i^c + 1} \right] }{ (n-r) \log(x_r^c + 1) + \sum_{i=1}^{r} \log(x_i^c + 1) }.    (4.3)

Now, straightforward algebraic manipulations show that

\psi(0) = \infty, \qquad \psi(\infty) = \sum_{i \in M} \log(x_i).

Clearly, \psi(\infty) < 0 if and only if x_i < 1 for some 1 \le i \le r. Therefore, there exists at least one finite, positive root of the equation \psi(c) = 0 if and only if the set M is nonempty. Differentiation shows that \psi(c) is monotone decreasing in c for c > 0. Hence, it follows that \hat{c} will be unique. Thus, \hat{c} will exist and will be unique and finite if and only if M is nonempty. \blacksquare


TABLE 1. Time-to-Failure (in months) of 20 Electronic Components on Test

0.1   0.1   0.2   0.3   0.4   0.5   0.5   0.6   0.7   0.8
0.9   0.9   1.2   1.6   1.8   2.3   2.5   2.6   2.9   3.1

It is instructive to compare the conclusions of Proposition 2 with those of Theorem 1 of Wingo (1983). Theorem 1 of Wingo (1983) contains a serious misprint that renders the theorem false. All is not lost, however. If every occurrence of the expression "x_i \ge 1" is replaced by "x_i \le 1", then Theorem 1 remains valid.
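Before attempting a joint fit, the data conditions of Proposition 2 are easy to verify programmatically. The small helper below is an illustrative sketch, not part of the paper.

```python
# Sketch (assumed helper): check the data conditions of Proposition 2 before
# attempting a joint fit of c and k.
import numpy as np

def mle_exists_typeII(x):
    """True iff r >= 2, at least two distinct observations, and some x_i < 1 (M nonempty)."""
    x = np.asarray(x, dtype=float)
    return len(x) >= 2 and len(np.unique(x)) >= 2 and bool(np.any(x < 1.0))
```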

5. Example

A life test experiment was conducted to assess the reliability of a certain electronic component. A total of 30 components was involved in the life test. For reasons of cost, the trial was terminated after the 20th component failed. The time-to-failure for these 20 components is given (in months) in Table 1. Because of its flexibility to model a wide variety of reliability data, the BD was chosen as a tentative model for the distribution of time-to-failure of the electronic component. Using n = 30, r = 20, x_{(1)} = 0.1 and x_{(r)} = x_{(20)} = 3.1, the data of Table 1 were fitted to the BD by ML with the following results: \hat{c} = 1.291, \hat{k} = 0.637, Var(\hat{c}) = 0.074, Var(\hat{k}) = 0.027 and Cov(\hat{c},\hat{k}) = -0.022. Separate 95% confidence intervals for c and k are (0.7578, 1.8242) and (0.3149, 0.9590), respectively.
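For readers who wish to reproduce the fit, the sketch below applies the helper functions sketched in Section 3 (fit_burr_typeII, mle_covariance and wald_intervals, which are our assumed names) to the Table 1 data; the printed estimates should be close to the reported values, although this has not been verified here.

```python
# Sketch: refitting the Table 1 data with the helpers sketched in Section 3
# (assumed to be defined in the same module).
import numpy as np

x = np.array([0.1, 0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8,
              0.9, 0.9, 1.2, 1.6, 1.8, 2.3, 2.5, 2.6, 2.9, 3.1])  # the r = 20 observed failures
n = 30                                                            # components placed on test

c_ml, k_ml = fit_burr_typeII(x, n)            # paper reports c-hat = 1.291, k-hat = 0.637
cov = mle_covariance(c_ml, k_ml, np.sort(x), n)
ci_c, ci_k = wald_intervals(c_ml, k_ml, cov)  # paper reports (0.7578, 1.8242), (0.3149, 0.9590)
print(c_ml, k_ml, ci_c, ci_k)
```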

REFERENCES

Burr, I. W. (1942). Cumulative frequency functions. Ann. Math. Statist. 13, 215-232.

Burr, I. W. and Cislak, P. J. (1968). On a general system of distributions: I. Its curve-shape characteristics, II. The sample median. J. Amer. Statist. Assoc. 63, 627-635.

Evans, I. G. and Ragab, A. S. (1983). Bayesian inferences given a type-2 censored sample from a Burr distribution. Commun. Statist.-Theor. Meth. 12, 1569-1580.

Evans, R. A. and Simons, G. (1975). Research on statistical procedures in reliability engineering. ARL TR 75-0154, Wright-Patterson AFB, Ohio.

Papadopoulos, A. S. (1978). The Burr distribution as a failure model from a Bayesian approach. IEEE Transactions on Reliability R-27, 369-371.

Patel, J. K., Kapadia, C. H. and Owen, D. B. (1976). Handbook of Statistical Distributions. New York: Marcel Dekker.

Rodriguez, R. N. (1977). A guide to the Burr type XII distributions. Biometrika 64, 129-134.

Tadikamalla, P. R. (1980). A look at the Burr and related distributions. Internat. Statist. Rev. 48, 337-344.

Wingo, D. R. (1983). Maximum likelihood methods for fitting the Burr Type XII distribution to life test data. Biometrical J. 25, 77-84.