Reliability estimation of the selected exponential populations

Statistics and Probability Letters 79 (2009) 1372–1377. doi:10.1016/j.spl.2009.02.012

Somesh Kumar a,∗, Ajaya Kumar Mahapatra a, P. Vellaisamy b,∗∗

a Department of Mathematics, Indian Institute of Technology, Kharagpur-721302, India
b Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai-400076, India

∗ Corresponding author. Tel.: +91 3222283662; fax: +91 3222255303.
∗∗ Corresponding author.
E-mail addresses: [email protected] (S. Kumar), [email protected] (A.K. Mahapatra), [email protected] (P. Vellaisamy).

Article history: Received 6 January 2009; received in revised form 22 February 2009; accepted 22 February 2009; available online 6 March 2009.

MSC: 62F10; 62C20

Abstract

Let Π1, Π2, …, Πk be k populations with Πi being exponential with an unknown location parameter µi and a common but known scale parameter σ, i = 1, …, k. Suppose independent random samples are drawn from the populations Π1, Π2, …, Πk. Let {Xi1, Xi2, …, Xin} denote the sample drawn from the ith population, i = 1, …, k. A subset of the populations with high reliabilities is selected according to Gupta's [Gupta, S.S., 1965. On some multiple decision (Selection and Ranking) rules. Technometrics 7, 225–245] subset selection procedure. We consider the problem of estimating simultaneously the reliability functions of the populations in the selected subset. The uniformly minimum variance unbiased estimator (UMVUE) is derived and its inadmissibility is established. An estimator improving the natural estimator is also obtained by using the differential inequality approach of Vellaisamy and Punnen [Vellaisamy, P., Punnen, A.P., 2002. Improved estimators for the selected location parameters. Statist. Papers 43, 291–299].

© 2009 Elsevier B.V. All rights reserved.

1. Introduction

The problem of estimating parameters after selection has been widely discussed in the literature. Many researchers have studied the estimation of the location and scale parameters of exponential populations. Recent references include Sackrowitz and Samuel-Cahn (1984), Cohen and Sackrowitz (1989), Vellaisamy (1992, 1996), Kumar and Kar (2001a,b), Vellaisamy (2003), and Vellaisamy and Jain (2008). The problem of estimation after selection seems to have been initially formulated and investigated by Rubinstein (1961, 1965) in the context of reliability estimation. Rubinstein used a sequential scheme for selecting the components in a manufacturing process. He derived unbiased estimators for the failure rates of the selected components. His methods also yield unbiased estimators of the selected Poisson parameters for a wide class of selection procedures. Vellaisamy and Punnen (2002) considered the simultaneous estimation of location parameters after subset selection from exponential populations with a known common scale parameter σ. It was shown that the natural estimator dominates the uniformly minimum variance unbiased estimator (UMVUE) in terms of mean squared error (MSE). Further, they obtained an improvement over the natural estimator using a differential inequality approach due to Berger (1980) and Dasgupta (1986). In this paper, we consider simultaneous estimation of reliability functions for the model considered by Vellaisamy and Punnen (2002). To the best of our knowledge, the problem of estimating the reliability functions of the selected populations has not been addressed in the literature so far.

Let Π1, Π2, …, Πk be k independent populations, where Πi follows a two-parameter exponential distribution with density

fi(x | µi, σ) = (1/σ) e^{−(x−µi)/σ},  x > µi, µi ∈ R, i = 1, …, k.   (1)


We assume throughout that the scale parameters are known and equal, and that the location parameters µi are unknown and possibly unequal. In reliability and life-testing situations, the location parameter µi is referred to as the minimum guarantee time. So, without loss of generality, we can take µi > 0, i = 1, …, k, and the common scale parameter σ = 1. Let {Xi1, Xi2, …, Xin} be a random sample of size n drawn from the ith population, i = 1, …, k. The reliability function θi(t) of the ith population at time t > 0 is given by

θi(t) = P(Xij > t) =
  1,            if t < µi,
  e^{−(t−µi)},  if t ≥ µi,

for j = 1, …, n and i = 1, …, k. Since t is a positive constant, the problem of estimating θi(t) is equivalent to that of estimating θi = e^{µi}. First we aim to select a subset of the populations with high reliabilities. We call the population associated with the maximum µi the best population. Let Xi = min{Xi1, Xi2, …, Xin}, i = 1, …, k. Then clearly Xi follows an exponential distribution with density

f_{Xi}(x | µi) = n e^{−n(x−µi)},  x > µi > 0.   (2)
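As a quick sanity check of (2): the sample minimum is again exponential, shifted by µi and with scale 1/n, so its mean is µi + 1/n. A minimal simulation sketch (Python with NumPy; the parameter values are our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, n, reps = 1.5, 10, 200_000
# X = min of a sample of size n from the shifted exponential with location mu, sigma = 1
x = mu + rng.exponential(1.0, size=(reps, n)).min(axis=1)
print(x.mean(), mu + 1 / n)   # both are approximately 1.6, as density (2) predicts
```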

Let X(1) > X(2) > ··· > X(k) be the ordered values of {X1, …, Xk}. Suppose a subset of the given k populations is selected according to Gupta's subset selection procedure (Gupta, 1965); that is, select Πi if and only if Xi ≥ X(1) − d, for some d such that the probability of correct selection (CS) is at least P∗, a specified quantity (Gupta and Panchapakesan, 1979). That is,

Prob(CS) ≥ P∗,  1/k < P∗ < 1,

and d satisfies the relation

∫_0^∞ F^{k−1}(z + d) f(z) dz = P∗,

where f(u) and F(u) are the density function and the cumulative distribution function of Xi, respectively. We are interested in estimating θ = {θ1 I1, θ2 I2, …, θk Ik}, where θi = e^{µi},

Ii = 1,  if Xi > X(1)i − d,
   = 0,  otherwise, i = 1, …, k,

and X(1)i = max{X1, …, Xi−1, Xi+1, …, Xk}. Note that the dimension of θ is random.
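To make the selection step concrete, the following sketch (Python with NumPy/SciPy; the helper names solve_d and gupta_select and all numerical values are ours, not the paper's) solves the displayed integral equation for d and applies the rule Xi ≥ X(1) − d:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def solve_d(k, n, p_star):
    """Solve  int_0^inf F^{k-1}(z + d) f(z) dz = P*  for d, where f and F are
    the density and CDF of Exp(n) with location 0 (the law of X_i - mu_i)."""
    def prob_cs(d):
        integrand = lambda z: (1 - np.exp(-n * (z + d))) ** (k - 1) * n * np.exp(-n * z)
        return quad(integrand, 0, np.inf)[0]
    # prob_cs increases from 1/k at d = 0 towards 1, so a root exists for P* in (1/k, 1)
    return brentq(lambda d: prob_cs(d) - p_star, 0.0, 50.0)

def gupta_select(samples, d):
    """Gupta's rule: select population i iff X_i >= X_(1) - d, X_i the sample minima."""
    x = samples.min(axis=1)
    return x, x >= x.max() - d

rng = np.random.default_rng(0)
k, n, p_star = 5, 10, 0.95
mu = rng.uniform(0.5, 2.0, size=k)                        # minimum guarantee times
data = mu[:, None] + rng.exponential(1.0, size=(k, n))    # sigma = 1
d = solve_d(k, n, p_star)
x, selected = gupta_select(data, d)
print(d, x.round(3), selected)
```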

We consider the squared error loss defined by

L(θ̂, θ) = ‖θ̂ − θ‖² = Σ_{i=1}^k (θ̂i − θi)² Ii,   (3)

where θ̂ is any estimator of θ. Note that X = (X1, …, Xk) is a complete and sufficient statistic. This paper is organized as follows. In Section 2, a natural estimator of θ is proposed and the UMVUE is derived using the UV method of Robbins (1988). In Section 3, the UMVUE and the natural estimator are shown to be inadmissible. Further, in Section 4, we derive some improved estimators by solving a differential inequality in the light of Vellaisamy (1992) and Vellaisamy and Punnen (2002).

2. A natural estimator and the UMVUE of θ

It is straightforward to check that θ̂i = ((n−1)/n) e^{Xi} is the UMVUE of θi for the component problem (that is, based on the ith sample alone). So a natural estimator of θ is given by

δN = {δN1 I1, …, δNk Ik},  where δNi = ((n−1)/n) e^{Xi} for i = 1, …, k.   (4)

To derive the UMVUE of θi Ii, we follow the UV method of Robbins. The following lemma is useful for the derivation of the UMVUE.

Lemma 1. Let X1, …, Xk be independent random variables, where the density of Xi is as given in (2), and let µ = {µ1, …, µk}. Further, suppose U : R^k → R is a real-valued function such that for all µ with µi > 0, i = 1, …, k,
(i) Eµ[U(X)] < ∞,
(ii) Eµ[e^{Xi} U(X)] < ∞.
Then

V(X) = e^{Xi} U(X) − e^{(n+1)Xi} ∫_{Xi}^∞ U(X1, …, Xi−1, z, Xi+1, …, Xk) e^{−nz} dz

satisfies Eµ[V(X)] = θi Eµ[U(X)] for all µ.

Proof. The proof follows directly by using an integration by parts in the second expectation above. □

An application of Lemma 1 yields the unbiased estimator of θ as δU = {U1, …, Uk}, where

Ui(X) = e^{Xi} Ii − (1/n) e^{(n+1)Xi − nZi},  and Zi = max{Xi, X(1)i − d}.   (5)

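For illustration, the natural estimator (4) and the unbiased estimator (5) are easy to compute from the sample minima; a minimal sketch (ours, with hypothetical function names):

```python
import numpy as np

def natural_estimator(x, selected, n):
    """Coordinates of delta_N in (4): ((n-1)/n) e^{X_i} on the selected populations."""
    x = np.asarray(x, dtype=float)
    return np.where(selected, (n - 1) / n * np.exp(x), 0.0)

def umvue(x, selected, d, n):
    """Coordinates U_i(X) of delta_U in (5):
    U_i = e^{X_i} I_i - (1/n) e^{(n+1) X_i - n Z_i},  Z_i = max(X_i, X_(1)i - d)."""
    x = np.asarray(x, dtype=float)
    u = np.empty(len(x))
    for i in range(len(x)):
        x1i = np.delete(x, i).max()   # X_(1)i, the maximum over the other populations
        z = max(x[i], x1i - d)        # Z_i
        u[i] = np.exp(x[i]) * selected[i] - np.exp((n + 1) * x[i] - n * z) / n
    return u
```

Note that on a selected coordinate Zi = Xi, so Ui reduces to ((n−1)/n) e^{Xi}, the natural estimator.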

Remark 2. Let d = 0. In this case only the population corresponding to X(1) is selected, and θM = Σ_{i=1}^k θi Ii denotes the reliability function of the selected (best) population. Then the UMVUE of θM is given by

δU = e^{X(1)} − (1/n) Σ_{i=1}^k e^{(n+1)Xi − nX(1)}.   (6)
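In code, the d = 0 case is a one-liner (our sketch):

```python
import numpy as np

def umvue_best(x, n):
    """UMVUE (6) of theta_M, the reliability parameter of the selected best
    population when d = 0: e^{X_(1)} - (1/n) sum_i e^{(n+1) X_i - n X_(1)}."""
    x = np.asarray(x, dtype=float)
    x1 = x.max()
    return np.exp(x1) - np.exp((n + 1) * x - n * x1).sum() / n
```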

3. Inadmissibility of the UMVUE and the natural estimator

First, we show that the natural estimator δN dominates the UMVUE δU.

Theorem 3. For estimating θ with respect to the squared error loss given in (3), the natural estimator δN dominates the UMVUE δU.

Proof. Let Ii^c = 1 − Ii. Note that Zi = Xi on Ii, Zi = X(1)i − d on Ii^c, and Ii Ii^c = 0. The risk of the unbiased estimator δU = {U1, …, Uk}, where Ui is as in (5), is therefore

R(δU, θ) = Σ_{i=1}^k Eµ[ ( e^{Xi} Ii − (1/n) e^{(n+1)Xi − nZi} − θi Ii )² ]
         = Σ_{i=1}^k Eµ[ ( ( ((n−1)/n) e^{Xi} − θi ) Ii − (1/n) e^{Xi} e^{−n(X(1)i − d − Xi)} Ii^c )² ]
         = Σ_{i=1}^k Eµ[ ( ((n−1)/n) e^{Xi} − θi )² Ii + (1/n²) e^{2Xi} e^{−2n(X(1)i − d − Xi)} Ii^c ]
         ≥ Σ_{i=1}^k Eµ[ ( ((n−1)/n) e^{Xi} − θi )² Ii ]
         = R(δN, θ),   (7)

which is the risk of the natural estimator δN; the inequality holds since the dropped Ii^c term is nonnegative. Thus, under the mean squared error criterion, the natural estimator δN dominates the UMVUE δU of θ. □
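The domination can be checked by simulation. In the following sketch (ours; all parameter values are illustrative) the loss is accumulated exactly as in the risk computation above, so the unselected coordinates of δU contribute their Ii^c term:

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, d, reps = 4, 8, 0.3, 100_000
mu = np.array([0.2, 0.4, 0.6, 0.8])
theta = np.exp(mu)

risk_U = risk_N = 0.0
for _ in range(reps):
    x = mu + rng.exponential(1 / n, size=k)                  # X_i ~ mu_i + Exp(n)
    sel = x > x.max() - d                                    # indicators I_i
    x1i = np.array([np.delete(x, i).max() for i in range(k)])
    z = np.maximum(x, x1i - d)                               # Z_i
    u = np.exp(x) * sel - np.exp((n + 1) * x - n * z) / n    # UMVUE (5)
    nat = (n - 1) / n * np.exp(x) * sel                      # natural estimator (4)
    risk_U += np.sum((u - theta * sel) ** 2)
    risk_N += np.sum((nat - theta * sel) ** 2)
print(risk_U / reps, risk_N / reps)   # the second (natural) risk is the smaller one
```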

We further try to find an alternative estimator for θ. For this, we consider d = 0. In this case the estimation problem reduces to that of the reliability function of the selected best population, that is, θM. Consider the non-informative prior τ(µ) given by

τ(µ) = 1,  if µi ∈ R, i = 1, …, k,
     = 0,  otherwise.

The generalized Bayes estimator of θM with respect to the quadratic loss

L(θ, θ̂) = ( (θ̂ − θM) / θM )²   (8)

is given by

δGB = [ ∫_µ (1/θM) f(x|µ) τ(µ) dµ ] / [ ∫_µ (1/θM²) f(x|µ) τ(µ) dµ ]
    = [ ∫_{−∞}^{xk} ⋯ ∫_{−∞}^{x1} e^{−µ1} e^{−n(x1−µ1)} ⋯ e^{−n(xk−µk)} dµ1 ⋯ dµk ] / [ ∫_{−∞}^{xk} ⋯ ∫_{−∞}^{x1} e^{−2µ1} e^{−n(x1−µ1)} ⋯ e^{−n(xk−µk)} dµ1 ⋯ dµk ]
    = ((n−2)/(n−1)) e^{X1},  if X1 ≥ Xj, j ≠ 1.   (9)
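The ratio in (9) is easy to verify numerically: the coordinates j ≠ 1 cancel, and only the µ1 integrals remain. A quick check (ours, with illustrative values):

```python
import numpy as np
from scipy.integrate import quad

n, x1 = 10, 1.3
num = quad(lambda m: np.exp(-m) * np.exp(-n * (x1 - m)), -np.inf, x1)[0]
den = quad(lambda m: np.exp(-2 * m) * np.exp(-n * (x1 - m)), -np.inf, x1)[0]
print(num / den, (n - 2) / (n - 1) * np.exp(x1))   # both approximately 3.2616
```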

The following remark immediately follows.

Remark 4. The estimator ((n−2)/(n−1)) e^{X(1)} is generalized Bayes for the estimation of θM with respect to the non-informative prior τ(µ) given by

τ(µ) = 1,  if µi ∈ R, i = 1, …, k,
     = 0,  otherwise.

The loss function is the quadratic loss given in (8). The above remark motivates us to consider a competing estimator δ∗N of θ given by

δ∗N = {δ∗N1 I1, …, δ∗Nk Ik},  where δ∗Ni = ((n−2)/(n−1)) e^{Xi} for i = 1, …, k.

We now prove the following theorem.

Theorem 5. For estimating θ with respect to the squared error loss (3), the estimator δ∗N dominates δN.

Proof. Consider the risk difference

R(θ, δ∗N) − R(θ, δN) = Σ_{i=1}^k Eµ[ { ((δ∗Ni)² − (δNi)²) − 2 (δ∗Ni − δNi) θi } Ii ].

Using Lemma 1, the above expectation can be written as

R(θ, δ∗N) − R(θ, δN)
  = Σ_{i=1}^k Eµ[ { ((δ∗Ni)² − (δNi)²) − 2 e^{Xi} (δ∗Ni − δNi) } Ii + 2 e^{(n+1)Xi} ∫_{Zi}^∞ (δ∗Ni − δNi) e^{−nz} dz ]
  = Σ_{i=1}^k Eµ[ { (δ∗Ni − δNi)² + 2 (δNi − e^{Xi}) (δ∗Ni − δNi) } Ii + 2 e^{(n+1)Xi} ∫_{Zi}^∞ (δ∗Ni − δNi) e^{−nz} dz ],

where, inside the integrals, δ∗Ni − δNi is evaluated at z, i.e. equals −(1/(n(n−1))) e^{z}. Carrying out the integration and using Zi = Xi on Ii and Zi = X(1)i − d on Ii^c, we get

R(θ, δ∗N) − R(θ, δN)
  = Σ_{i=1}^k Eµ[ ((2n−1)/(n²(n−1)²)) e^{2Xi} Ii − (2/(n(n−1)²)) e^{(n+1)Xi} e^{−(n−1)Zi} ]
  = − Σ_{i=1}^k Eµ[ (1/(n²(n−1)²)) e^{2Xi} Ii + (2/(n(n−1)²)) e^{(n+1)Xi} e^{−(n−1)(X(1)i − d)} Ii^c ]
  < 0.

Hence the theorem. □
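As with Theorem 3, the domination is easy to see in a Monte Carlo experiment (our sketch, with illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
k, n, d, reps = 4, 8, 0.3, 100_000
mu = np.array([0.2, 0.4, 0.6, 0.8])
theta = np.exp(mu)

risk_star = risk_nat = 0.0
for _ in range(reps):
    x = mu + rng.exponential(1 / n, size=k)   # sample minima
    sel = x > x.max() - d                     # Gupta's selection indicators
    risk_star += np.sum(((n - 2) / (n - 1) * np.exp(x) - theta) ** 2 * sel)
    risk_nat += np.sum(((n - 1) / n * np.exp(x) - theta) ** 2 * sel)
print(risk_star / reps, risk_nat / reps)      # risk_star < risk_nat, as Theorem 5 asserts
```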


4. Some improved estimators

In this section, we obtain a class of improved estimators which dominate estimators of the form δc = {c e^{X1} I1, …, c e^{Xk} Ik}, where 0 < c < 1. Note that for c = (n−2)/(n−1) and c = (n−1)/n, δc corresponds to the estimators δ∗N and δN, respectively.

Theorem 6. Let Y = e^{−(n−1) Σ_{i=1}^k Xi} and, for y ∈ (0, 1], let s(y) > 0 be a real-valued function such that (i) y s(y) is decreasing and (ii) −2(1−c)/(n−1) ≤ y s′(y) + s(y) ≤ 0. Then, under the squared error loss (3), the estimator T2 = {T21 I1, …, T2k Ik}, where

T2i = [ c − (n−1) ( Y s′(Y) + s(Y) ) ] e^{Xi},

dominates the estimator δc as an estimator of θ.


Proof. Let Tr = {Tr1 I1, …, Trk Ik}, r = 1, 2, be any two estimators of θ of the form Tri = Tri(Xi, X(1)i, …, X(k−1)i), for i = 1, …, k. Then the risk difference of T2 and T1 is as follows:

R(T2, θ) − R(T1, θ) = Σ_{i=1}^k Eµ[ { (T2i² − T1i²) − 2 (T2i − T1i) θi } Ii ].

Using Lemma 1, we get the unbiased estimator of E(θi T2i Ii) as

E[θi T2i Ii] = E[ e^{Xi} T2i Ii − e^{(n+1)Xi} η2i(Zi, X(1)i, …, X(k−1)i) ],   (10)

where η2i(x1, x2, …, xk) = ∫_{x1}^∞ T2i(z, x2, …, xk) e^{−nz} dz. Hereafter we write η2i(x1, x2, …, xk) = η2i for notational simplicity. Hence the unbiased estimator of the ith term of the risk difference is given by

Di(X) = (T2i² − T1i²) Ii − 2 [ e^{Xi} T2i Ii − e^{(n+1)Xi} η2i − e^{Xi} T1i Ii + e^{(n+1)Xi} η1i ]
      = { (T2i² − T1i²) − 2 e^{Xi} (T2i − T1i) } Ii + 2 e^{(n+1)Xi} (η2i − η1i)
      = { (T2i − T1i)² + 2 (T1i − e^{Xi}) (T2i − T1i) } Ii − 2 e^{(n+1)Xi} (η1i − η2i).   (11)

Next, let Fi(x1, …, xk) = η1i(x1, …, xk) − η2i(x1, …, xk). Then (11) can be written as

Di(X) = { (Fi^{1(i)})² e^{2nXi} + 2 (T1i − e^{Xi}) Fi^{1(i)} e^{nXi} } Ii − 2 e^{(n+1)Xi} Fi(Zi, X(1)i, …, X(k−1)i),   (12)

where Fi^{1(i)}(x1, …, xk) = ∂Fi(x1, …, xk)/∂xi. Now let T1i(Xi, X(1)i, …, X(k−1)i) = c e^{Xi}, for 0 < c < 1. Substituting this value in (12), we have

Di(X) = { (Fi^{1(i)})² e^{2nXi} − 2 (1−c) Fi^{1(i)} e^{(n+1)Xi} } Ii − 2 e^{(n+1)Xi} Fi(Zi, X(1)i, …, X(k−1)i).

So our problem reduces to solving the differential inequality

{ (Fi^{1(i)})² e^{(n−1)Xi} − 2 (1−c) Fi^{1(i)} } Ii − 2 Fi(Zi, X(1)i, …, X(k−1)i) ≤ 0.   (13)

In order to solve the above differential inequality, we proceed as follows. Let Fi(Zi, X(1)i, …, X(k−1)i) = e^{−(n−1)Zi} s(y), so that

Fi^{1(i)} = −(n−1) [ e^{−(n−1)Zi} y s′(y) + e^{−(n−1)Xi} s(y) ].

Note that for s(y) > 0 we have e^{−(n−1)Zi} s(y) > 0. Substituting these values in (13), we get

Di(X) = [ (n−1)² ( y s′(y) + s(y) )² + 2 (n−1)(1−c) ( y s′(y) + s(y) ) ] e^{−(n−1)Xi} Ii − 2 e^{−(n−1)Zi} s(y)
      ≤ [ (n−1)² ( y s′(y) + s(y) )² + 2 (n−1)(1−c) ( y s′(y) + s(y) ) ] e^{−(n−1)Xi} Ii
      ≤ 0,

provided −2(1−c)/(n−1) ≤ y s′(y) + s(y) < 0. Now we have

Fi(Xi, X(1)i, …, X(k−1)i) = η1i(Xi, X(1)i, …, X(k−1)i) − η2i(Xi, X(1)i, …, X(k−1)i)  ⇒  T2i = T1i + Fi^{1(i)} e^{nXi}.

Therefore the ith coordinate of the improved estimator is given by

T2i Ii = [ c − (n−1) Y s′(Y) − (n−1) s(Y) ] e^{Xi} Ii.   (14)

This proves the theorem. □

We now give some examples of s(y) which satisfy the conditions of Theorem 6.

Remark 7. (i) Define s(y) = α/(y(1+y)), for some α > 0 and y ∈ (0, 1]. For this choice, s(y) satisfies all the conditions of Theorem 6. Also, −α < y s′(y) + s(y) ≤ −α/4 < 0.

(ii) Define s(y) = α e^{−y}/y, for some α > 0 and y ∈ (0, 1]. Then s(y) also satisfies all the conditions of Theorem 6. Also, −α < y s′(y) + s(y) ≤ −α e^{−1} < 0.
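As an illustration, the improved estimator of Theorem 6 with choice (i), and with the value α = 2(1−c)/(n−1) suggested below, can be implemented as follows (our sketch, not the authors' code):

```python
import numpy as np

def t2(x, selected, n, c):
    """Improved estimator of Theorem 6 with s(y) = alpha / (y (1 + y)),
    for which y s'(y) + s(y) = -alpha / (1 + y)^2 (choice (i) of Remark 7)."""
    x = np.asarray(x, dtype=float)
    alpha = 2 * (1 - c) / (n - 1)                  # the choice discussed below
    y = np.exp(-(n - 1) * x.sum())                 # Y = e^{-(n-1) sum_i X_i}
    factor = c + (n - 1) * alpha / (1 + y) ** 2    # c - (n-1)(Y s'(Y) + s(Y))
    return np.where(selected, factor * np.exp(x), 0.0)
```

With c = (n−2)/(n−1) or c = (n−1)/n this gives the improvements over δ∗N and δN, respectively.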


In both cases we can take α = 2(1−c)/(n−1). Note that since Xi > µi > 0, we have 0 < y ≤ 1. Therefore the choice of s(y) is reasonable.

Corollary 8. Consider the estimation of θM. Let Y and s(y) be as defined in Theorem 6. Then the estimator

T2 = [ c − (n−1) ( Y s′(Y) + s(Y) ) ] e^{X(1)}

dominates the estimator T1 = c e^{X(1)} for 0 < c < 1 under the squared error loss.

The following theorem is similar to Theorem 3.2 of Vellaisamy and Punnen (2002). Assume v = k/n and c = (n−2)/(n−1).

Theorem 9. Let T2, defined in Theorem 6, be an estimator of θ. Then the risk of T2 is O(k/n²).

Proof. The risk of T2 under quadratic loss satisfies

R(T2, θ) ≤ Σ_{i=1}^k E[ ( ((n−2)/(n−1)) e^{Xi−µi} − 1 )² + (n−1)² e^{2Xi−2µi} ( Y s′(Y) + s(Y) )² − 2 (n−1) e^{Xi−µi} ( ((n−2)/(n−1)) e^{Xi−µi} − 1 ) ( Y s′(Y) + s(Y) ) ]
         ≤ Σ_{i=1}^k [ 4v/(n−1)² + 4/((n−1)²(n−2)) + (4/(n−1)) E[ e^{Xi−µi} ( ((n−2)/(n−1)) e^{Xi−µi} − 1 ) J(Xi) ] ],   (15)

where

J(Xi) = 1,  if Xi > µi + log((n−1)/(n−2)),
      = 0,  otherwise.

A straightforward calculation proves the theorem. □


As a consequence, the improved estimators T2 for δN and δ∗N are consistent.

5. Conclusion

We have dealt with an interesting problem, namely the simultaneous estimation of M reliability functions, where M is random. Surprisingly, we are able to obtain shrinkage estimators improving upon the natural estimator, the UMVUE and a generalized Bayes estimator using a differential inequality approach. This result resembles Stein's phenomenon for the simultaneous estimation of location and scale parameters, where a differential inequality leads to shrinkage estimators improving upon the usual estimators (see Berger (1980) and Dasgupta (1986)).

References

Berger, J.O., 1980. Improving on inadmissible estimators in continuous exponential families with application to simultaneous estimation of gamma scale parameters. Ann. Statist. 8, 545–575.
Cohen, A., Sackrowitz, H., 1989. Two-stage conditionally unbiased estimators of the selected mean. Statist. Probab. Lett. 8, 273–278.
Dasgupta, A., 1986. Simultaneous estimation of multiparameter gamma distribution under quadratic losses. Ann. Statist. 14, 206–219.
Gupta, S.S., 1965. On some multiple decision (Selection and Ranking) rules. Technometrics 7, 225–245.
Gupta, S.S., Panchapakesan, S., 1979. Multiple Decision Procedures: Theory and Methodology of Selecting and Ranking Populations. John Wiley, New York.
Kumar, S., Kar, A., 2001a. Estimating quantiles of a selected exponential population. Statist. Probab. Lett. 52, 9–19.
Kumar, S., Kar, A., 2001b. Minimum variance unbiased estimation of quantile of a selected exponential population. Amer. J. Math. Management Sci. 21 (1–2), 183–191.
Robbins, H., 1988. The UV method of estimation. In: Gupta, S.S., Berger, J.O. (Eds.), Statistical Decision Theory and Related Topics IV, Vol. 1. Springer-Verlag, New York, pp. 265–270.
Rubinstein, D., 1961. Estimation of the reliability of a system in development. Rep. R-61-ELC-1, Tech. Info. Service, GE Co.
Rubinstein, D., 1965. Estimation of failure rates in a dynamic reliability program. Rep. 65-RGO-7, Tech. Info. Service, GE Co.
Sackrowitz, H., Samuel-Cahn, E., 1984. Estimation of the mean of a selected negative exponential population. J. R. Statist. Soc., Ser. B 46, 242–249.
Vellaisamy, P., 1992. Inadmissibility results for the selected scale parameters. Ann. Statist. 20, 2183–2191.
Vellaisamy, P., 1996. A note on the estimation of the selected scale parameters. J. Statist. Plann. Inference 55, 39–46.
Vellaisamy, P., Punnen, A.P., 2002. Improved estimators for the selected location parameters. Statist. Papers 43, 291–299.
Vellaisamy, P., 2003. Quantile estimation of a selected exponential population. J. Statist. Plann. Inference 115, 461–470.
Vellaisamy, P., Jain, S., 2008. Estimating the parameter of the population selected from discrete exponential family. Statist. Probab. Lett. 78, 1076–1087.