Journal of Statistical Planning and Inference 141 (2011) 1224–1239
Exact likelihood inference for Laplace distribution based on Type-II censored samples

G. Iliopoulos^a,*, N. Balakrishnan^b

^a Department of Statistics and Insurance Science, University of Piraeus, 80 Karaoli and Dimitriou Str., 18534 Piraeus, Greece
^b Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario, Canada L8S 4K1
* Corresponding author.
Article history: Received 16 November 2009; received in revised form 20 September 2010; accepted 24 September 2010; available online 1 October 2010.

Abstract

We develop exact inference for the location and scale parameters of the Laplace (double exponential) distribution based on their maximum likelihood estimators from a Type-II censored sample. Based on some pivotal quantities, exact confidence intervals and tests of hypotheses are constructed. Upon conditioning first on the number of observations that are below the population median, exact distributions of the pivotal quantities are expressed as mixtures of linear combinations and of ratios of linear combinations of standard exponential random variables, which facilitates the computation of quantiles of these pivotal quantities. Tables of quantiles are presented for the complete sample case. © 2010 Elsevier B.V. All rights reserved.

Keywords: Laplace (double exponential) distribution; Exact inference; Maximum likelihood estimators; Type-II censoring; Mixtures; Pivotal quantities; Linear combinations of exponential order statistics
1. Introduction

Let $X_1,\ldots,X_n$, $n \ge 2$, be a random sample from the Laplace (or double exponential) distribution $\mathcal{L}(\mu,\sigma)$, $\mu \in \mathbb{R}$, $\sigma > 0$, with probability density function (pdf)
$$ f(x;\mu,\sigma) = \frac{1}{2\sigma}\, e^{-|x-\mu|/\sigma}, \qquad x \in \mathbb{R}. $$
It is well-known (see, for example, Johnson et al., 1995; Kotz et al., 2001) that the maximum likelihood estimators (MLEs) $\hat\mu$ and $\hat\sigma$ of $\mu$ and $\sigma$ are the sample median and the sample mean deviation from the sample median, respectively. Note that in the case of an even sample size, $\hat\mu$ is not unique, since then any point between the two middle observations maximizes the likelihood function with respect to $\mu$. However, it is customary to define the sample median as the average of the two middle observations and take that as the MLE of $\mu$, and that is what we will do hereafter. On the other hand, $\hat\sigma$ is always well-defined, since it turns out to be the difference between the sum of all sample points above $\hat\mu$ (whatever we take it to be) and the sum of all sample points below $\hat\mu$, divided by the sample size. Balakrishnan and Cutler (1995) showed that the MLEs can be explicitly derived even in the presence of general Type-II censoring in the sample. Their result was further generalized by Childs and Balakrishnan (1997). To be more specific, let $X_{1:n} < \cdots < X_{n:n}$ denote the ordered sample and assume that $r$ observations have been censored from the left and $s$
E-mail addresses: [email protected] (G. Iliopoulos), [email protected] (N. Balakrishnan).
doi:10.1016/j.jspi.2010.09.024
observations have been censored from the right, i.e., the observed data consist of the order statistics $X_{r+1:n} < \cdots < X_{n-s:n}$, which corresponds to a doubly Type-II censored sample. Such a censored sample may arise either naturally (for example, when some extreme values cannot be recorded due to experimental constraints or are simply missing) or intentionally (when the researcher decides to ignore some extreme observations based on robustness considerations). In order to be able to estimate both parameters, at least two observations are needed and so we will assume that $n-r-s \ge 2$. Clearly, when $r = s = 0$, the complete data are observed. In what follows, $m = m(n) \ge 1$ is defined to be equal to $(n+1)/2$ when $n$ is odd and $n/2$ when $n$ is even. In other words, we are taking $n = 2m-1$ and $n = 2m$ for the odd and even sample cases, respectively. Then, from the above mentioned works, the following expressions for the MLEs are known:
If $\max(r,s) < m$, then
$$ \hat\mu = \begin{cases} X_{m:n}, & n = 2m-1, \\ \tfrac12 (X_{m:n}+X_{m+1:n}), & n = 2m, \end{cases} $$
i.e., the sample median, and
$$ \hat\sigma = \frac{1}{n-r-s}\left\{ \sum_{i=m+1}^{n-s} X_{i:n} + sX_{n-s:n} - rX_{r+1:n} - \sum_{i=r+1}^{[n/2]} X_{i:n} \right\}, $$
where $[x]$ denotes the integer part of $x$, and by convention $\sum_{i=k}^{\ell} = 0$ when $k > \ell$. If $s \ge m$, then
$$ \hat\sigma = \frac{1}{n-r-s}\left\{ \sum_{i=r+1}^{n-s} (X_{n-s:n}-X_{i:n}) + r(X_{n-s:n}-X_{r+1:n}) \right\} \quad\text{and}\quad \hat\mu = X_{n-s:n} + \log\!\left(\frac{n/2}{n-s}\right)\hat\sigma; $$
if $r \ge m$, then
$$ \hat\sigma = \frac{1}{n-r-s}\left\{ \sum_{i=r+1}^{n-s} (X_{i:n}-X_{r+1:n}) + s(X_{n-s:n}-X_{r+1:n}) \right\} \quad\text{and}\quad \hat\mu = X_{r+1:n} + \log\!\left(\frac{n-r}{n/2}\right)\hat\sigma. $$
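The closed-form MLEs above are easy to implement. The sketch below is ours, not the authors' code: the helper name `laplace_mle_censored`, the 0-based storage of the observed order statistics, and the Python setting are all assumptions; only the formulas themselves come from the text.

```python
import math

def laplace_mle_censored(obs, n, r, s):
    """MLEs of (mu, sigma) from a doubly Type-II censored Laplace sample.

    obs holds the observed order statistics X_{r+1:n} <= ... <= X_{n-s:n}.
    Hypothetical helper that directly transcribes the displayed formulas.
    """
    assert len(obs) == n - r - s >= 2
    m = (n + 1) // 2 if n % 2 else n // 2
    X = lambda i: obs[i - r - 1]          # X_{i:n} for r+1 <= i <= n-s
    k = n - r - s
    if max(r, s) < m:
        mu = X(m) if n % 2 else 0.5 * (X(m) + X(m + 1))
        sigma = (sum(X(i) for i in range(m + 1, n - s + 1)) + s * X(n - s)
                 - r * X(r + 1) - sum(X(i) for i in range(r + 1, n // 2 + 1))) / k
    elif s >= m:
        sigma = (sum(X(n - s) - X(i) for i in range(r + 1, n - s + 1))
                 + r * (X(n - s) - X(r + 1))) / k
        mu = X(n - s) + sigma * math.log((n / 2) / (n - s))
    else:                                  # r >= m
        sigma = (sum(X(i) - X(r + 1) for i in range(r + 1, n - s + 1))
                 + s * (X(n - s) - X(r + 1))) / k
        mu = X(r + 1) + sigma * math.log((n - r) / (n / 2))
    return mu, sigma
```

With $r = s = 0$ the first branch reduces to the sample median and the mean deviation about it; for instance, `laplace_mle_censored([1, 2, 4, 10], 4, 0, 0)` returns `(3.0, 2.75)`.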
Since $\mu$ and $\sigma$ are the location and scale parameters, respectively, it is evident that the random variables $T = (\hat\mu-\mu)/\hat\sigma$ and $S = \hat\sigma/\sigma$ have distributions which are free of both parameters and can therefore serve as pivotal quantities. Hence, inference about $\mu$ and $\sigma$ can be carried out based on $T$ and $S$, respectively. Indeed, Bain and Engelhardt (1973) considered approximate inference based on the above pivotal quantities. Kappenman (1975) subsequently developed conditional inference for $\mu$ and $\sigma$ by conditioning on appropriate ancillary statistics. Grice et al. (1978) compared numerically the two approaches with respect to inference about $\mu$ and found that the conditional one gives slightly narrower confidence intervals. Childs and Balakrishnan (1996) extended Kappenman's (1975) conditional approach to the case of Type-II right censored data. A completely different procedure, based on the distribution of the standard t-statistic in the Laplace case, was considered by Sansing (1976). In this paper, we develop exact inference for $\mu$ and $\sigma$ based on $T$ and $S$, either when the sample is complete or when it is general Type-II censored. The importance of the results established here is twofold. First, we provide the necessary tools for constructing exact confidence intervals and tests of hypotheses under the important Laplace model that often serves as an alternative to the normal distribution; see Kotz et al. (2001). Moreover, by tabulating the most commonly used quantiles of $T$ and $S$ (see Tables 4 and 5), we make these inferential processes as straightforward as in the case of normal samples. Second, the exact inferential methods under censoring developed here for the Laplace model make a substantial addition, as there are very few models, such as the exponential, Pareto and uniform, for which this development is possible. The rest of this paper is organized as follows.
In Section 2, we describe some known preliminary results on the Laplace distribution and also on the conditional distributions of order statistics which are essential for the ensuing developments. In Sections 3 and 4, we derive the exact distributions of S and T, respectively. We show that S is distributed as a mixture of linear combinations of independent standard exponential random variables while T is distributed as a mixture of ratios of (dependent) linear combinations of independent standard exponential random variables. Using these distributional results, in Section 5, we develop exact confidence intervals and tests of hypotheses about the parameters m and s. We also evaluate the performance of confidence intervals obtained from an asymptotic approach given by Bain and Engelhardt
(1973) and from the parametric bootstrap approach. In Section 6, we use a real dataset as an example and illustrate all the inferential procedures developed here. In Section 7, we make some concluding remarks. Finally, an Appendix presents the derivations of all the above mentioned distributions of the pivotal quantities.

2. Preliminaries

Let $X \sim \mathcal{L}(\mu,\sigma)$. It is well-known that the parameter $\mu$ is the median of the distribution and thus $P(X \le \mu) = P(X \ge \mu) = \tfrac12$. Moreover, the conditional distribution of $X-\mu$, given $X > \mu$, is exponential with mean $\sigma$, denoted by $\mathcal{E}(\sigma)$. This is also the conditional distribution of $\mu-X$, given $X \le \mu$.

Let $Z_1,\ldots,Z_n \stackrel{iid}{\sim} \mathcal{E}(\sigma)$ and denote by $Z_{1:n} < \cdots < Z_{n:n}$ the corresponding order statistics. For $i = 1,\ldots,n$, let us denote the $i$-th spacing $Z_{i:n}-Z_{i-1:n}$ by $\tilde Z_{i:n}$, where $Z_{0:n} \equiv 0$. Then, the normalized spacings $n\tilde Z_{1:n}, (n-1)\tilde Z_{2:n}, \ldots, (n-i+1)\tilde Z_{i:n}, \ldots, \tilde Z_{n:n}$ form a random sample from $\mathcal{E}(\sigma)$ as well (see Arnold et al., 2008). Iliopoulos and Balakrishnan (2009) established the following independence result concerning order statistics. If $X_1,\ldots,X_n$ is a random sample from any (either discrete or continuous) parent distribution and $D$ denotes the number of $X$'s being at most equal to a pre-fixed constant $C$, then, conditional on $D = d$, the blocks of order statistics $(X_{1:n},\ldots,X_{d:n})$ and $(X_{d+1:n},\ldots,X_{n:n})$ are independent. Moreover, conditional on $D = d$, $(X_{1:n},\ldots,X_{d:n}) \stackrel{d}{=} (L_{1:d},\ldots,L_{d:d})$ and $(X_{d+1:n},\ldots,X_{n:n}) \stackrel{d}{=} (R_{1:n-d},\ldots,R_{n-d:n-d})$, where $L_1,\ldots,L_d$ is a random sample from the parent distribution right truncated at $C$, and $R_1,\ldots,R_{n-d}$ is a random sample from the parent distribution left truncated at $C$.

Now, we shall explain how we can use the above results for the inferential problem at hand. Consider the complete sample $X_1,\ldots,X_n$ from $\mathcal{L}(\mu,\sigma)$ and set $D = \#\{X\text{'s} \le \mu\}$. Clearly, $D$ follows a binomial $\mathcal{B}(n,\tfrac12)$ distribution. Conditional on $D = d$, $(\mu-X_{d:n},\ldots,\mu-X_{1:n}) \stackrel{d}{=} (L_{1:d},\ldots,L_{d:d})$ and $(X_{d+1:n}-\mu,\ldots,X_{n:n}-\mu) \stackrel{d}{=} (R_{1:n-d},\ldots,R_{n-d:n-d})$, where $L_1,\ldots,L_d,R_1,\ldots,R_{n-d} \stackrel{iid}{\sim} \mathcal{E}(\sigma)$. Moreover, since any linear combination of the $L_{i:d}$'s and $R_{j:n-d}$'s can be expressed as a linear combination of the spacings $\tilde L_{i:d}$ and $\tilde R_{j:n-d}$, which are independent exponentially distributed random variables, we will first condition on $D = d$ and express $S$ and $T$ through linear combinations of the above spacings. Then, we will derive the conditional distributions of $S$ and $T$ for all $d = 0,1,\ldots,n$, and finally we will uncondition with respect to $D$ in order to express the unconditional distributions of $S$ and $T$ as suitable mixtures. It should be mentioned that this mixture representation of the exact distributions of the MLEs, in the special case of complete samples of odd sizes, may be deduced from Proposition 2.6.6 of Kotz et al. (2001).
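The normalized-spacings fact is easy to probe by simulation. The sketch below is ours (the sample size, seed, replication count, and tolerance are arbitrary choices): it draws repeated $\mathcal{E}(\sigma)$ samples and checks empirically that each normalized spacing $(n-i+1)\tilde Z_{i:n}$ again has mean $\sigma$.

```python
import random

random.seed(1)
n, sigma, N = 5, 2.0, 100_000
sums = [0.0] * n
for _ in range(N):
    z = sorted(random.expovariate(1 / sigma) for _ in range(n))
    prev = 0.0
    for i, zi in enumerate(z):              # i = 0, ..., n-1 (0-based)
        sums[i] += (n - i) * (zi - prev)    # normalized spacing (n-i+1)*Z~_{i:n}, 1-based
        prev = zi
means = [s / N for s in sums]               # each should be close to sigma = 2.0
```

A full check would compare entire empirical distributions, but matching means for every spacing already distinguishes the normalized spacings from the raw ones, whose means differ.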
3. The distribution of $S = \hat\sigma/\sigma$

By invariance, the distribution of $S$ does not depend on $\sigma$ and so we may take $\sigma = 1$ without loss of any generality. In what follows, we set $D = \#\{X\text{'s} \le \mu\}$. Moreover, let $L_1,\ldots,L_n,R_1,\ldots,R_n$ be iid $\mathcal{E}(1)$ random variables and $V$ an independent gamma $\mathcal{G}(a,1)$ random variable with scale parameter 1 and shape parameter $a$ which will be suitably determined later.

Case $\max(r,s) < m$: Consider first the case $\max(r,s) < m$. Depending on the value $d$ of $D$ we condition on, it is convenient to write $(n-r-s)S$ in different forms as follows:
$d \le r$: In this case, $(n-r-s)S$ can be expressed as
$$ r(\mu-X_{r+1:n}) + \sum_{i=r+1}^{[n/2]} (\mu-X_{i:n}) + \sum_{i=m+1}^{n-s} (X_{i:n}-\mu) + s(X_{n-s:n}-\mu) \tag{1} $$
$$ \stackrel{d}{=} -rR_{r-d+1:n-d} - \sum_{i=r-d+1}^{[n/2]-d} R_{i:n-d} + \sum_{i=m-d+1}^{n-d-s} R_{i:n-d} + sR_{n-d-s:n-d} $$
$$ = \sum_{i=r-d+2}^{m-d} (d+i-1)\tilde R_{i:n-d} + \sum_{i=m-d+1}^{n-d-s} (n-d-i+1)\tilde R_{i:n-d} \tag{2} $$
$$ \stackrel{d}{=} \sum_{i=r-d+2}^{m-d} \frac{d+i-1}{n-d-i+1}\, R_i + \sum_{i=m-d+1}^{n-d-s} R_i = \sum_{i=r-d+2}^{m-d} \frac{d+i-1}{n-d-i+1}\, R_i + V, $$
where $V \sim \mathcal{G}(n-m-s,1)$. It can be readily verified that in (1) the parameter $\mu$ is added as many times as it is subtracted, and so it simply drops out. The same thing happens in all the cases that follow. Hence, the conditional pdf of $(n-r-s)S$, given $D = d \in \{0,\ldots,r\}$, is $f_S^{(m-r-1)}(x;\theta,a)$ presented in Theorem 1, where $\theta = ((n-r-1)/(r+1), \ldots, (n-m+1)/(m-1))$ and $a = n-m-s$.
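Representation (2) is a purely algebraic statement about spacing coefficients, so it can be verified mechanically. The check below is our sketch (function names are ours): it expands the conditional version of (1) in the order statistics $R_{j:n-d}$, converts to spacing coefficients by tail-summing, and compares against the claimed piecewise pattern for a few choices of $(n,r,s,d)$ with $d \le r$ and $\max(r,s) < m$.

```python
def spacing_coeffs(n, r, s, d):
    """Spacing coefficients of expression (1), given D = d <= r (our helper)."""
    c = [0.0] * (n - d + 2)                 # c[j]: coefficient of R_{j:n-d}
    c[r - d + 1] -= r                       # -r * R_{r-d+1:n-d}
    for j in range(r - d + 1, n // 2 - d + 1):
        c[j] -= 1                           # -sum_{i=r-d+1}^{[n/2]-d} R_{i:n-d}
    m = (n + 1) // 2 if n % 2 else n // 2
    for j in range(m - d + 1, n - s - d + 1):
        c[j] += 1                           # +sum_{i=m-d+1}^{n-d-s} R_{i:n-d}
    c[n - s - d] += s                       # +s * R_{n-d-s:n-d}
    # R_{j:n-d} = sum_{i<=j} R~_{i:n-d}, so the spacing coefficient is a tail sum
    return [sum(c[i:]) for i in range(1, n - d + 1)]

def claimed_coeffs(n, r, s, d):
    """The pattern asserted by (2): 0, then d+i-1, then n-d-i+1, then 0."""
    m = (n + 1) // 2 if n % 2 else n // 2
    out = []
    for i in range(1, n - d + 1):
        if i <= r - d + 1:
            out.append(0)
        elif i <= m - d:
            out.append(d + i - 1)
        elif i <= n - d - s:
            out.append(n - d - i + 1)
        else:
            out.append(0)
    return out

checks = [(9, 2, 3, 1), (10, 2, 2, 2), (7, 0, 0, 0), (11, 3, 2, 3)]
ok = all(spacing_coeffs(*c) == [float(x) for x in claimed_coeffs(*c)]
         for c in checks)
```

In particular, the first $r-d+1$ spacing coefficients cancel exactly, which is the "$\mu$ drops out" remark above in coefficient form.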
G. Iliopoulos, N. Balakrishnan / Journal of Statistical Planning and Inference 141 (2011) 1224–1239
1227
$r < d \le m-1$: In this case, we can express $(n-r-s)S$ as
$$ r(\mu-X_{r+1:n}) + \sum_{i=r+1}^{d} (\mu-X_{i:n}) - \sum_{i=d+1}^{[n/2]} (X_{i:n}-\mu) + \sum_{i=m+1}^{n-s} (X_{i:n}-\mu) + s(X_{n-s:n}-\mu) $$
$$ \stackrel{d}{=} \left\{ \sum_{i=1}^{d-r} L_{i:d} + rL_{d-r:d} \right\} + \left\{ -\sum_{i=1}^{[n/2]-d} R_{i:n-d} + \sum_{i=m-d+1}^{n-d-s} R_{i:n-d} + sR_{n-d-s:n-d} \right\} $$
(with the convention $\sum_{i=1}^{0} = 0$, which occurs for $d = m-1$ when $n = 2m-1$)
$$ = \left\{ \sum_{i=1}^{d-r} (d-i+1)\tilde L_{i:d} \right\} + \left\{ \sum_{i=1}^{m-d} (d+i-1)\tilde R_{i:n-d} + \sum_{i=m-d+1}^{n-d-s} (n-d-i+1)\tilde R_{i:n-d} \right\} $$
$$ \stackrel{d}{=} \sum_{i=1}^{d-r} L_i + \sum_{i=1}^{m-d} \frac{d+i-1}{n-d-i+1}\, R_i + \sum_{i=m-d+1}^{n-d-s} R_i = \sum_{i=1}^{m-d} \frac{d+i-1}{n-d-i+1}\, R_i + V, $$
where $V \sim \mathcal{G}(n-m-s-r+d,1)$. Hence, $(n-r-s)S \,|\, (D=d) \sim f_S^{(m-d)}(x;\theta,a)$, where $\theta = ((n-d)/d, \ldots, (n-m+1)/(m-1))$ and $a = n-m-s-r+d$.

$d = m \ne n-s$: In this case, we can express $(n-r-s)S$ as
$$ r(\mu-X_{r+1:n}) + \sum_{i=r+1}^{[n/2]} (\mu-X_{i:n}) + \sum_{i=m+1}^{n-s} (X_{i:n}-\mu) + s(X_{n-s:n}-\mu) \stackrel{d}{=} \left\{ \sum_{i=m-[n/2]+1}^{m-r} L_{i:m} + rL_{m-r:m} \right\} + \left\{ \sum_{i=1}^{n-m-s} R_{i:n-m} + sR_{n-m-s:n-m} \right\} $$
$$ = [n/2]\tilde L_{1:m} + \sum_{i=2}^{m-r} (m-i+1)\tilde L_{i:m} + \sum_{i=1}^{n-m-s} (n-m-i+1)\tilde R_{i:n-m} \stackrel{d}{=} \frac{[n/2]}{m}\, L_1 + \sum_{i=2}^{m-r} L_i + \sum_{i=1}^{n-m-s} R_i = \frac{[n/2]}{m}\, L_1 + V, $$
where $V \sim \mathcal{G}(n-s-r-1,1)$. Note that when $n = 2m$, the above conditional distribution becomes $\mathcal{G}(n-s-r,1)$. Thus, $(n-r-s)S \,|\, (D=m) \sim f_S^{(1)}(x;\theta,a)$, where $\theta = (m/(m-1))$ and $a = n-s-r-1$ when $n = 2m-1$, or $\mathcal{G}(n-s-r,1)$ when $n = 2m$.

$m+1 \le d < n-s$: Due to symmetry, $(n-r-s)S$ has the same conditional distribution either when we condition on $D = d$ or on $D = n-d$. So, by interchanging $d$ and $n-d$ in the conditional distribution of the case $r < d \le m-1$, we have $(n-r-s)S \,|\, (D=d) \sim f_S^{(m-n+d)}(x;\theta,a)$, where $\theta = (d/(n-d), \ldots, (n-m+1)/(m-1))$ and $a = 2n-m-r-s-d$.

$n-s \le d \le n$: Yet again, due to symmetry, the conditional distribution of $(n-r-s)S$, given $D = d$, is $f_S^{(m-s-1)}(x;\theta,a)$, where $\theta = ((n-s-1)/(s+1), \ldots, (n-m+1)/(m-1))$ and $a = n-m-r$.

Case $s \ge m$:
$d \le r$: In this case, $(n-r-s)S$ may be expressed as
$$ -r(X_{r+1:n}-\mu) - \sum_{i=r+1}^{n-s} (X_{i:n}-\mu) + (n-s)(X_{n-s:n}-\mu) \stackrel{d}{=} -rR_{r-d+1:n-d} - \sum_{i=r-d+1}^{n-d-s} R_{i:n-d} + (n-s)R_{n-d-s:n-d} $$
$$ = \sum_{i=r-d+2}^{n-d-s} (d+i-1)\tilde R_{i:n-d} \stackrel{d}{=} \sum_{i=r-d+2}^{n-d-s} \frac{d+i-1}{n-d-i+1}\, R_i. $$
Hence, $(n-r-s)S \,|\, (D=d) \sim f_S^{(n-r-s-1)}(x;\theta,a)$, where $\theta = ((n-r-1)/(r+1), \ldots, (s+1)/(n-s-1))$ and $a = 0$.

$r+1 \le d < n-s$: In this case, we can express $(n-r-s)S$ as
$$ r(\mu-X_{r+1:n}) + \sum_{i=r+1}^{d} (\mu-X_{i:n}) - \sum_{i=d+1}^{n-s} (X_{i:n}-\mu) + (n-s)(X_{n-s:n}-\mu) $$
$$ \stackrel{d}{=} \left\{ \sum_{i=1}^{d-r} L_{i:d} + rL_{d-r:d} \right\} + \left\{ -\sum_{i=1}^{n-d-s} R_{i:n-d} + (n-s)R_{n-d-s:n-d} \right\} = \left\{ \sum_{i=1}^{d-r} (d-i+1)\tilde L_{i:d} \right\} + \left\{ \sum_{i=1}^{n-d-s} (d+i-1)\tilde R_{i:n-d} \right\} $$
$$ \stackrel{d}{=} \sum_{i=1}^{d-r} L_i + \sum_{i=1}^{n-d-s} \frac{d+i-1}{n-d-i+1}\, R_i = \sum_{i=1}^{n-d-s} \frac{d+i-1}{n-d-i+1}\, R_i + V, $$
where $V \sim \mathcal{G}(d-r,1)$. Hence, $(n-r-s)S \,|\, (D=d) \sim f_S^{(n-d-s)}(x;\theta,a)$, where $\theta = ((n-d)/d, \ldots, (s+1)/(n-s-1))$ and $a = d-r$.

$d \ge n-s$: Finally, in this case, $(n-r-s)S$ can be expressed as
$$ r(\mu-X_{r+1:n}) + \sum_{i=r+1}^{n-s} (\mu-X_{i:n}) - (n-s)(\mu-X_{n-s:n}) \stackrel{d}{=} rL_{d-r:d} + \sum_{i=d-n+s+1}^{d-r} L_{i:d} - (n-s)L_{d-n+s+1:d} $$
$$ = \sum_{i=d-n+s+2}^{d-r} (d-i+1)\tilde L_{i:d} \stackrel{d}{=} \sum_{i=1}^{n-r-s-1} L_i = V, $$
where $V \sim \mathcal{G}(n-r-s-1,1)$. Hence, $(n-r-s)S \,|\, (D=d) \sim \mathcal{G}(n-r-s-1,1)$.
Case $r \ge m$: Using again a symmetry argument, we conclude that the conditional distributions of $(n-r-s)S$, given $D = d$, are as in the previous case upon interchanging $r$ and $s$ and replacing $d$ by $n-d$.

Let $f^*_{S|D}(x|d)$ denote in general the conditional pdf of $(n-r-s)S$, given $D = d$. Then, by standard arguments, the conditional pdf of $S$, given $D = d$, is $f_{S|D}(x|d) = (n-r-s)\, f^*_{S|D}((n-r-s)x \,|\, d)$. Using now all the conditional pdfs of $S$, given $D = d$, presented above, we can express the exact pdf of $S$ as
$$ f_S(x) = \sum_{d=0}^{n} P(D=d)\, f_{S|D}(x|d) = \frac{1}{2^n} \sum_{d=0}^{n} \binom{n}{d} f_{S|D}(x|d), \qquad x > 0. $$
Note also that the distribution of $S$ remains the same when the values of $r$ and $s$ are interchanged.

4. The distribution of $T = (\hat\mu-\mu)/\hat\sigma$

Let $\hat\mu$ be the sample median, that is,
$$ \hat\mu = \begin{cases} X_{m:n}, & n = 2m-1 \\ \tfrac12 (X_{m:n}+X_{m+1:n}), & n = 2m \end{cases} \;=\; \begin{cases} \displaystyle\sum_{i=1}^{m} \tilde X_{i:n}, & n = 2m-1, \\ \displaystyle\sum_{i=1}^{m} \tilde X_{i:n} + \tfrac12 \tilde X_{m+1:n}, & n = 2m. \end{cases} $$
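The rewriting of $\hat\mu$ in terms of the spacings $\tilde X_{i:n}$ is just the telescoping identity $\sum_{i=1}^{k}\tilde X_{i:n} = X_{k:n}$ (with the convention $X_{0:n} \equiv 0$). A short numerical check, with an arbitrary sample of our choosing:

```python
xs = sorted([3.2, 0.7, 5.1, 2.4, 8.9, 1.6])          # n = 6, so m = 3
# spacings X~_{i:n} = X_{i:n} - X_{i-1:n}, with X_{0:n} taken as 0
spacings = [xs[0]] + [xs[i] - xs[i - 1] for i in range(1, len(xs))]
m = len(xs) // 2
mu_hat = sum(spacings[:m]) + 0.5 * spacings[m]       # telescopes to (X_{3:6}+X_{4:6})/2
```

Here `mu_hat` coincides with the usual even-sample median, the average of the two middle order statistics.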
Once again, by invariance, we may take $\sigma = 1$ without loss of any generality. In what follows, the $U$'s, $Z$'s, and $W$ denote independent random variables, where the $U$'s and $Z$'s are $\mathcal{E}(1)$ while $W$ is a gamma random variable with scale parameter 1 and shape parameter that will be suitably determined. Moreover, the expressions for the conditional pdfs of $T$, given $D = d$, are presented in Theorems 2–6.

Case $\max(r,s) < m$:
$d \le r$: By conditioning on $D = d$, we may write
$$ \hat\mu - \mu \stackrel{d}{=} \begin{cases} R_{m-d:n-d}, & n = 2m-1, \\ \tfrac12 (R_{m-d:n-d}+R_{m+1-d:n-d}), & n = 2m, \end{cases} \;=\; \begin{cases} \displaystyle\sum_{i=1}^{m-d} \tilde R_{i:n-d}, & n = 2m-1, \\ \displaystyle\sum_{i=1}^{m-d} \tilde R_{i:n-d} + \tfrac12 \tilde R_{m+1-d:n-d}, & n = 2m. \end{cases} \tag{3} $$
Upon using (2), we can see that when $n = 2m-1$, conditional on $D = d$, $T/(n-r-s) = (\hat\mu-\mu)/\{(n-r-s)\hat\sigma\}$ has the same distribution as
$$ \frac{\sum_{i=1}^{m-d} \tilde R_{i:n-d}}{\sum_{i=r-d+2}^{m-d} (d+i-1)\tilde R_{i:n-d} + \sum_{i=m-d+1}^{n-d-s} (n-d-i+1)\tilde R_{i:n-d}} \stackrel{d}{=} \frac{\sum_{i=1}^{r-d+1} \frac{1}{n-d-i+1} U_i + \sum_{i=1}^{m-r-1} \frac{1}{m+i-1} Z_i}{\sum_{i=1}^{m-r-1} \frac{m-i}{m+i-1} Z_i + W}, $$
where $W \sim \mathcal{G}(n-m-s,1) \equiv \mathcal{G}(m-s-1,1)$. Hence, $T/(n-r-s) \,|\, (D=d) \sim f_T^{(r-d+1,\,m-r-1)}(x;\theta,\kappa,\lambda,a)$, where $\theta = (n-r,\ldots,n-d)$, $\kappa = (1/m,\ldots,1/(n-r-1))$, $\lambda = ((m-1)/m,\ldots,(r+1)/(n-r-1))$ and $a = m-s-1$. On the other hand, when $n = 2m$, conditional on $D = d$, $T/(n-r-s)$ has the same distribution as
$$ \frac{\sum_{i=1}^{m-d} \tilde R_{i:n-d} + \tfrac12 \tilde R_{m-d+1:n-d}}{\sum_{i=r-d+2}^{m-d} (d+i-1)\tilde R_{i:n-d} + \sum_{i=m-d+1}^{n-d-s} (n-d-i+1)\tilde R_{i:n-d}} \stackrel{d}{=} \frac{\sum_{i=1}^{r-d+1} \frac{1}{n-d-i+1} U_i + \frac{1}{2m} Z_1 + \sum_{i=2}^{m-r} \frac{1}{m+i-1} Z_i}{\sum_{i=1}^{m-r} \frac{m-i+1}{m+i-1} Z_i + W}, $$
where $W \sim \mathcal{G}(n-m-s-1,1) \equiv \mathcal{G}(m-s-1,1)$. Hence, $T/(n-r-s) \,|\, (D=d) \sim f_T^{(r-d+1,\,m-r)}(x;\theta,\kappa,\lambda,a)$, with $\theta$ and $a$ as before, and $\kappa = (1/2m,\, 1/(m+1),\ldots,1/(n-r-1))$ and $\lambda = (1,\,(m-1)/(m+1),\ldots,(r+1)/(n-r-1))$.

$r < d \le m-1$: In this case, the conditional distribution of $\hat\mu-\mu$, given $D = d$, is once again as in (3). Thus, when $n = 2m-1$, conditional on $D = d$, $T/(n-r-s)$ has the same distribution as
$$ \frac{\sum_{i=1}^{m-d} \tilde R_{i:n-d}}{\sum_{i=1}^{d-r} (d-i+1)\tilde L_{i:d} + \sum_{i=1}^{m-d} (d+i-1)\tilde R_{i:n-d} + \sum_{i=m-d+1}^{n-d-s} (n-d-i+1)\tilde R_{i:n-d}} \stackrel{d}{=} \frac{\sum_{i=1}^{m-d} \frac{1}{m+i-1} Z_i}{\sum_{i=1}^{m-d} \frac{m-i}{m+i-1} Z_i + W}, $$
where $W \sim \mathcal{G}(n-r-s-m+d,1) \equiv \mathcal{G}(m-r-s+d-1,1)$. Hence, $T/(n-r-s) \,|\, (D=d) \sim f_T^{(0,\,m-d)}(x;\kappa,\lambda,a)$, where $\kappa = (1/m,\ldots,1/(n-d))$, $\lambda = ((m-1)/m,\ldots,d/(n-d))$ and $a = m-r-s+d-1$. When $n = 2m$, $T/(n-r-s)$ has the same distribution as
$$ \frac{\sum_{i=1}^{m-d} \tilde R_{i:n-d} + \tfrac12 \tilde R_{m-d+1:n-d}}{\sum_{i=1}^{d-r} (d-i+1)\tilde L_{i:d} + \sum_{i=1}^{m-d} (d+i-1)\tilde R_{i:n-d} + \sum_{i=m-d+1}^{n-d-s} (n-d-i+1)\tilde R_{i:n-d}} \stackrel{d}{=} \frac{\frac{1}{2m} Z_1 + \sum_{i=2}^{m-d+1} \frac{1}{m+i-1} Z_i}{\sum_{i=1}^{m-d+1} \frac{m-i+1}{m+i-1} Z_i + W}, $$
where $W \sim \mathcal{G}(n-r-s-m+d-1,1) \equiv \mathcal{G}(m-r-s+d-1,1)$. Hence, $T/(n-r-s) \,|\, (D=d) \sim f_T^{(0,\,m-d+1)}(x;\kappa,\lambda,a)$, where $\kappa = (1/2m,\,1/(m+1),\ldots,1/(n-d))$, $\lambda = (1,\,(m-1)/(m+1),\ldots,d/(n-d))$ and $a = m-r-s+d-1$.

$d = m \ne n-s$: In this case, the conditional distribution of $\hat\mu-\mu$, given $D = m$, is
$$ \hat\mu - \mu \stackrel{d}{=} \begin{cases} -L_{1:m}, & n = 2m-1, \\ \tfrac12 (R_{1:n-m} - L_{1:m}), & n = 2m. \end{cases} $$
Thus, when $n = 2m-1$, conditional on $D = m$, $T/(n-r-s)$ has the same distribution as $-\frac{1}{m}Z_1 \big/ \{\frac{m-1}{m}Z_1 + W\}$, where $W \sim \mathcal{G}(n-r-s-1,1) \equiv \mathcal{G}(2m-r-s-2,1)$; that is, $T/(n-r-s) \,|\, (D=m) \sim f_T^{(0,1)}(x;\kappa,\lambda,a)$ with $\kappa = (1/m)$, $\lambda = ((m-1)/m)$ and $a = 2m-r-s-2$. On the other hand, when $n = 2m$, $T/(n-r-s) \stackrel{d}{=} \frac{1}{2m}(Z_1-Z_2)/(Z_1+Z_2+W)$, where $W \sim \mathcal{G}(n-r-s-2,1) \equiv \mathcal{G}(2m-r-s-2,1)$. Thus, in this case, $T/(n-r-s) \,|\, (D=m) \sim g_T(x;1/2m,\,2m-r-s-2)$ as given in Theorem 6.

$m+1 \le d < n-s$: Using a symmetry argument, we get, when $n = 2m-1$, $T/(n-r-s) \,|\, (D=d) \sim f_T^{(0,\,d-m+1)}(x;\kappa,\lambda,a)$, where $\kappa = (1/m,\ldots,1/d)$, $\lambda = ((m-1)/m,\ldots,(n-d)/d)$ and $a = n-r-s-d+m-1$, while when $n = 2m$, $T/(n-r-s) \,|\, (D=d) \sim f_T^{(0,\,d-m+1)}(x;\kappa,\lambda,a)$, where $\kappa = (1/2m,\,1/(m+1),\ldots,1/d)$, $\lambda = (1,\,(m-1)/(m+1),\ldots,(n-d)/d)$ and $a = n-r-s-d+m-1$.

$d \ge n-s$: Yet again, by a symmetry argument, we get, when $n = 2m-1$, $T/(n-r-s) \,|\, (D=d) \sim f_T^{(d-n+s+1,\,m-s-1)}(x;\theta,\kappa,\lambda,a)$ with $\theta = (n-s,\ldots,d)$, $\kappa = (1/m,\ldots,1/(n-s-1))$, $\lambda = ((m-1)/m,\ldots,(s+1)/(n-s-1))$ and $a = m-r-1$, while when $n = 2m$, $T/(n-r-s) \,|\, (D=d) \sim f_T^{(d-n+s+1,\,m-s)}(x;\theta,\kappa,\lambda,a)$ with $\theta$ and $a$ as before, $\kappa = (1/2m,\,1/(m+1),\ldots,1/(n-s-1))$ and $\lambda = (1,\,(m-1)/(m+1),\ldots,(s+1)/(n-s-1))$.
Now, let $f^*_{T|D}(x|d)$ denote in general the conditional pdf of $T/(n-r-s)$, given $D = d$. Then, by standard arguments, $T \,|\, (D=d) \sim f_{T|D}(x|d) = (n-r-s)^{-1} f^*_{T|D}(x/(n-r-s) \,|\, d)$, and the exact pdf of $T$ is given by
$$ f_T(x) = \frac{1}{2^n} \sum_{d=0}^{n} \binom{n}{d} f_{T|D}(x|d), \qquad x \in \mathbb{R}. $$
Case $s \ge m$: Note here that
$$ \frac{\hat\mu-\mu}{\hat\sigma} = \frac{X_{n-s:n}-\mu}{\hat\sigma} + \log\frac{n/2}{n-s}, $$
and so we actually need the conditional distributions of $T^* = (X_{n-s:n}-\mu)/\hat\sigma$.

$d \le r$: Conditional on $D = d$, $T^*/(n-r-s)$ has the same distribution as
$$ \frac{\sum_{i=1}^{n-d-s} \tilde R_{i:n-d}}{\sum_{i=r-d+2}^{n-d-s} (d+i-1)\tilde R_{i:n-d}} \stackrel{d}{=} \frac{\sum_{i=1}^{r-d+1} \frac{1}{n-d-i+1} U_i + \sum_{i=1}^{n-r-s-1} \frac{1}{s+i} Z_i}{\sum_{i=1}^{n-r-s-1} \frac{n-s-i}{s+i} Z_i}, $$
and hence $T^*/(n-r-s) \,|\, (D=d) \sim f_T^{(r-d+1,\,n-r-s-1)}(x;\theta,\kappa,\lambda,0)$ as given in Theorem 4, where $\theta = (n-r,\ldots,n-d)$, $\kappa = (1/(n-r-1),\ldots,1/(s+1))$ and $\lambda = ((r+1)/(n-r-1),\ldots,(n-s-1)/(s+1))$.

$r+1 \le d < n-s$: Conditional on $D = d$, $T^*/(n-r-s)$ has the same distribution as
$$ \frac{\sum_{i=1}^{n-d-s} \tilde R_{i:n-d}}{\sum_{i=1}^{d-r} (d-i+1)\tilde L_{i:d} + \sum_{i=1}^{n-d-s} (d+i-1)\tilde R_{i:n-d}} \stackrel{d}{=} \frac{\sum_{i=1}^{n-d-s} \frac{1}{s+i} Z_i}{\sum_{i=1}^{n-d-s} \frac{n-s-i}{s+i} Z_i + W}, $$
where $W \sim \mathcal{G}(d-r,1)$. Thus, $T^*/(n-r-s) \,|\, (D=d) \sim f_T^{(0,\,n-d-s)}(x;\kappa,\lambda,a)$, where $\kappa = (1/(s+1),\ldots,1/(n-d))$, $\lambda = ((n-s-1)/(s+1),\ldots,d/(n-d))$ and $a = d-r$.

$d \ge n-s$: Conditional on $D = d$, $T^*/(n-r-s)$ has the same distribution as
$$ \frac{-\sum_{i=1}^{d-n+s+1} \tilde L_{i:d}}{\sum_{i=d-n+s+2}^{d-r} (d-i+1)\tilde L_{i:d}} \stackrel{d}{=} \frac{-\sum_{i=1}^{d-n+s+1} \frac{1}{d-i+1} U_i}{W}, $$
where $W \sim \mathcal{G}(n-r-s-1,1)$. Thus, $T^*/(n-r-s) \,|\, (D=d) \sim f_T^{(d-n+s+1,\,0)}(x;\theta,a)$ as given in Theorem 5, where $\theta = (n-s,\ldots,d)$ and $a = n-r-s-1$. It should be noted that this is the only case in which the conditional distribution can be written as the ratio of two independent random variables.
Now, let $f^*_{T|D}(x|d)$ denote the conditional pdf of $T^*/(n-r-s)$, given $D = d$, and let $K = \log\{(n/2)/(n-s)\}$. Then, $T \,|\, (D=d) \sim f_{T|D}(x|d) = (n-r-s)^{-1} f^*_{T|D}((x-K)/(n-r-s) \,|\, d)$, and the exact pdf of $T$ is given once again by
$$ f_T(x) = \frac{1}{2^n} \sum_{d=0}^{n} \binom{n}{d} f_{T|D}(x|d), \qquad x \in \mathbb{R}. $$
Case $r \ge m$: By symmetry, the conditional distributions can be deduced from the previous case.
5. Exact inference and comparison with asymptotic approach

Since $S$ and $T$ are pivotal quantities for $\sigma$ and $\mu$, respectively, they can be used for developing exact inferential procedures for the two parameters. Their forms are analogous to those of the familiar normal-theory chi-square and t distributed pivots, and so everything works in exactly the same manner. For instance, let $S_{n,r,s;\alpha}$ and $T_{n,r,s;\alpha}$ denote the upper $\alpha$-quantiles of $S$ and $T$ when the sample size equals $n$ and $r$ and $s$ observations have been censored from the left and right, respectively. Then, a $100(1-\alpha)\%$ exact confidence interval for $\sigma$ is $[\hat\sigma/S_{n,r,s;\alpha/2},\; \hat\sigma/S_{n,r,s;1-\alpha/2}]$, while the null hypothesis $\sigma = \sigma_0$ will be rejected in favor of the alternatives $\sigma > \sigma_0$, $\sigma < \sigma_0$ or $\sigma \ne \sigma_0$ if $\hat\sigma/\sigma_0$ is larger than $S_{n,r,s;\alpha}$, smaller than $S_{n,r,s;1-\alpha}$, or outside the interval $[S_{n,r,s;1-\alpha/2},\, S_{n,r,s;\alpha/2}]$, respectively. In a similar vein, a $100(1-\alpha)\%$ exact confidence interval for $\mu$ is given by $[\hat\mu - T_{n,r,s;\alpha/2}\hat\sigma,\; \hat\mu - T_{n,r,s;1-\alpha/2}\hat\sigma]$, and testing the hypothesis $\mu = \mu_0$ against the usual one- or two-sided alternatives is then carried out in the usual manner. Note here that, unless $r = s$, in which case the distribution of $T$ is symmetric about the origin, $T_{n,r,s;1-\alpha/2} \ne -T_{n,r,s;\alpha/2}$. In order to find any required quantile, one needs to solve a nonlinear equation. Although the exact pdfs of $S$ and $T$ look quite cumbersome, the task can be accomplished by using an appropriate strategy. First, from Theorem 1, one can see that in order to calculate quantiles of $S$, the lower incomplete gamma function is needed. This function is readily available in almost any statistics or mathematics package and can be accurately evaluated. Next, even though the conditional cdfs of $T$ in Theorem 2 appear to be more difficult to compute, observe that since $a$ is integer-valued, both the numerator and the denominator of each term of the sum consist of factorized polynomials. Hence, the method of partial fractions can give the required result.
With respect to the accuracy of the calculations, the fact that the $\theta$'s, $\kappa$'s and $\lambda$'s are vectors of either integers or rationals allows us to work only with rational numbers and thus achieve any desired precision. In fact, this is exactly what we did for determining the quantiles of $T$ using Mathematica. In Tables 4 and 5, we have tabulated quantiles of the exact distributions of $T$ and $S$, respectively, for $n$ up to 40 in the case of complete samples. More tables can be found at http://www.unipi.gr/faculty/geh/Quantiles.zip. In the case of complete samples, Bain and Engelhardt (1973) relied on approximations of the distributions of $S$ and $T$ in order to construct confidence intervals and tests of hypotheses for the Laplace parameters. In fact, they started by providing their exact distributions when $n = 3$ and 5, but they then stated that "the derivation (…) becomes quite tedious as $n$ increases". As we have derived here the exact distributions of $S$ and $T$ for all sample sizes, it would be worthwhile to evaluate the actual coverage probability of the approximate confidence intervals proposed by these authors. According to Bain and Engelhardt (1973), the most convenient approach for constructing a confidence interval for $\sigma$ is based on approximating the distribution of $2nS$ by a chi-square distribution with $2nE(S)$ degrees of freedom. By using the exact distribution of $S$, we observed that the above chi-square distribution provides a very good approximation indeed. More specifically, for $n \ge 10$, the actual confidence coefficients of the approximate intervals equal the nominal values of 90%, 95% and 99% when rounded to the third decimal place. In order to construct confidence intervals for $\mu$, Bain and Engelhardt (1973) considered several approaches. One of these is based on the fact that $T^* = n^{1/2}(\hat\mu-\mu)/\sigma$ and $S^* = n^{1/2}(\hat\sigma/\sigma - 1)$ have asymptotic standard normal distributions (see also Chernoff et al., 1967).
Since $\hat\sigma$ is a consistent estimator of $\sigma$, Slutsky's lemma ensures that $n^{1/2}T$ has an asymptotic standard normal distribution as well. However, the confidence intervals obtained by using the latter normal approximation are too narrow for finite samples, and hence they have considerably lower coverage probability than the nominal level. Therefore, they suggested exploiting the fact that, for finite samples, $\hat\mu$ and $\hat\sigma$ are uncorrelated, which implies that $T^*$ and $S^*$ are asymptotically independent. Then, an application of the delta method shows that $n^{1/2}T/(1+T^2)^{1/2}$ also has an asymptotic standard normal distribution. This in turn implies that an asymptotic $100(1-\alpha)\%$ confidence interval for $\mu$ is given by $\hat\mu \pm \hat\sigma z_{\alpha/2}/(n-z^2_{\alpha/2})^{1/2}$, where $z_\alpha$ denotes the upper $\alpha$-quantile of the standard normal distribution. However, the exact coverage probability of this interval falls quite below the nominal confidence level, as can be seen in Table 1. Another simple strategy for constructing confidence intervals for the Laplace parameters is the parametric bootstrap approach, i.e., Monte Carlo sampling from $\mathcal{L}(\hat\mu,\hat\sigma)$. Tables 2 and 3 contain the results of a simulation study on the coverage probabilities of bootstrap confidence intervals for the two parameters. The results are based on 10 000 simulations, with the number of bootstrap samples taken to be 1000. Note here that, for the confidence interval for $\sigma$, the bias-corrected and accelerated percentile method (see Efron and Tibshirani, 1993, pp. 184–188) has been used, since the distribution of $\hat\sigma$ is asymmetric. As we can see, the estimated confidence levels are quite close to the nominal ones for small sample sizes and achieve them for moderate sample sizes. However, this approach does not differ much from estimating the quantiles of $S$ and $T$ by Monte Carlo, which is an easy task since they are pivotal quantities.
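Since $T$ and $S$ are pivots, their quantiles can indeed be estimated by straightforward Monte Carlo from the standard Laplace $\mathcal{L}(0,1)$ (simulated here as the difference of two iid standard exponentials). The sketch below is ours (replication count, seed, and tolerance are arbitrary); it estimates the upper 0.025 quantile of $T$ for a complete sample of size $n = 33$, which Table 4 gives as 0.4128.

```python
import random

random.seed(7)
n, N = 33, 20_000
m = (n + 1) // 2                            # n odd, so mu_hat is X_{m:n}
t_vals = []
for _ in range(N):
    # a standard Laplace variate is the difference of two iid E(1) variates
    xs = sorted(random.expovariate(1.0) - random.expovariate(1.0)
                for _ in range(n))
    mu_hat = xs[m - 1]
    sigma_hat = sum(abs(x - mu_hat) for x in xs) / n
    t_vals.append(mu_hat / sigma_hat)       # T with true mu = 0, sigma = 1
t_vals.sort()
t_hat = t_vals[int(round(0.975 * N)) - 1]   # empirical upper 0.025 quantile
```

With these settings the estimate lands near the exact tabulated value; of course, unlike the exact approach of this paper, the Monte Carlo answer carries simulation error that shrinks only like $N^{-1/2}$.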
Table 1
Exact coverage probabilities of confidence intervals for $\mu$ in the case of complete samples based on the normal approximation proposed by Bain and Engelhardt (1973).

  n    90%    95%    99%
 15   84.8   91.6   98.4
 20   85.9   92.2   98.3
 25   85.6   91.8   98.0
 30   86.4   92.3   98.1
 35   86.1   92.1   98.0
 40   86.7   92.5   98.1
 45   86.5   92.3   98.0
 50   86.9   92.7   98.1
 55   86.8   92.5   98.0
 60   87.1   92.8   98.1
Table 2
Estimated coverage probabilities of bootstrap confidence intervals for $\mu$ in the case of complete samples based on 10 000 simulations and 1000 bootstrap samples.

  n    90%    95%    99%
 15   87.6   93.0   98.3
 20   87.8   93.6   98.6
 25   88.3   93.8   98.7
 30   88.6   94.4   99.0
 35   88.9   94.4   98.8
 40   89.6   94.9   98.9
 45   89.4   94.7   99.1
 50   89.9   95.0   99.0
 55   90.0   95.1   99.0
 60   89.1   94.7   99.0
Table 3
Estimated coverage probabilities of bootstrap confidence intervals for $\sigma$ in the case of complete samples based on 10 000 simulations and 1000 bootstrap samples.

  n    90%    95%    99%
 15   89.5   94.3   98.0
 20   89.8   94.4   98.3
 25   89.3   94.5   98.5
 30   90.3   94.9   98.6
 35   90.4   95.0   98.8
 40   90.5   94.8   98.6
 45   90.3   94.9   98.6
 50   89.8   94.6   98.6
 55   90.4   95.0   98.5
 60   90.2   94.7   98.7
In conclusion, the distribution of $2nS$ can be approximated very well by a particular chi-square distribution, and so the latter can be used in order to avoid solving the rather cumbersome equations that yield the corresponding exact quantiles. On the other hand, the normal approximation of the distribution of $T$ is poor (at least for moderate sample sizes), which means that its exact distribution is necessary for exact inference about the location parameter. Furthermore, the parametric bootstrap approach seems to work well provided that the number of bootstrap samples is large. In any case, for the convenience of users, we have provided in Tables 4 and 5 the most important quantiles of both exact distributions, which facilitate exact inference when dealing with the Laplace distribution.
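The quality of the chi-square approximation for $2nS$ can be probed crudely by simulation: matching the degrees of freedom as $\mathrm{df} = 2nE(S)$ fixes the mean, and if the approximation is good, the variance of $2nS$ should then be close to $2\,\mathrm{df}$, the variance of a $\chi^2_{\mathrm{df}}$ variable. A rough check (ours; $n$, seed, replication count, and tolerance are arbitrary):

```python
import random

random.seed(3)
n, N = 20, 20_000
y = []
for _ in range(N):
    xs = sorted(random.expovariate(1.0) - random.expovariate(1.0)
                for _ in range(n))
    mu_hat = 0.5 * (xs[n // 2 - 1] + xs[n // 2])          # n even: sample median
    y.append(2 * n * sum(abs(x - mu_hat) for x in xs) / n)  # 2nS with sigma = 1
df = sum(y) / N                       # estimates the matched df = 2nE(S)
var = sum((v - df) ** 2 for v in y) / N
# for a good chi-square(df) fit, var should be close to 2 * df
```

This only compares the first two moments; the paper's comparison via the exact distribution of $S$ is, of course, the definitive one.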
6. An illustrative example

Bain and Engelhardt (1973) considered 33 years of flood data from two stations on the Fox River, Wisconsin. They modeled the data using a Laplace distribution and provided 95% approximate confidence intervals (c.i.'s) for the location and scale parameters based on the pivotal quantities $T$ and $S$, respectively. Kappenman (1975) further analyzed these data for
Table 4
Upper quantiles of $T = (\hat\mu-\mu)/\hat\sigma$ in the case of complete samples.

  n      0.1     0.05    0.025     0.01    0.005
  2   2.5      5       10       25       50
  3   1.6646   2.4271   3.3166   5.0990   7.1414
  4   1.0548   1.5024   2.0000   2.7583   3.4456
  5   0.9418   1.3144   1.6841   2.1910   2.6121
  6   0.7532   1.0366   1.3226   1.7135   2.0252
  7   0.7068   0.9702   1.2237   1.5524   1.8030
  8   0.6093   0.8287   1.0421   1.3214   1.5343
  9   0.5843   0.7949   0.9945   1.2483   1.4364
 10   0.5226   0.7061   0.8812   1.1058   1.2730
 11   0.5070   0.6856   0.8536   1.0647   1.2192
 12   0.4635   0.6234   0.7744   0.9656   1.1062
 13   0.4529   0.6098   0.7564   0.9396   1.0725
 14   0.4201   0.5631   0.6973   0.8656   0.9883
 15   0.4125   0.5535   0.6847   0.8478   0.9655
 16   0.3865   0.5168   0.6383   0.7898   0.8997
 17   0.3808   0.5096   0.6290   0.7769   0.8834
 18   0.3597   0.4799   0.5914   0.7300   0.8301
 19   0.3552   0.4743   0.5844   0.7203   0.8178
 20   0.3375   0.4495   0.5531   0.6813   0.7736
 21   0.3340   0.4451   0.5476   0.6737   0.7641
 22   0.3189   0.4241   0.5210   0.6407   0.7266
 23   0.3160   0.4205   0.5166   0.6347   0.7190
 24   0.3030   0.4023   0.4937   0.6062   0.6867
 25   0.3006   0.3994   0.4900   0.6013   0.6806
 26   0.2892   0.3835   0.4701   0.5764   0.6524
 27   0.2871   0.3810   0.4670   0.5724   0.6474
 28   0.2770   0.3670   0.4494   0.5504   0.6225
 29   0.2752   0.3648   0.4468   0.5470   0.6183
 30   0.2662   0.3523   0.4311   0.5275   0.5962
 31   0.2647   0.3505   0.4289   0.5246   0.5926
 32   0.2566   0.3392   0.4148   0.5071   0.5727
 33   0.2552   0.3377   0.4128   0.5045   0.5696
 34   0.2478   0.3275   0.4001   0.4887   0.5517
 35   0.2467   0.3261   0.3984   0.4865   0.5490
 36   0.2399   0.3168   0.3868   0.4721   0.5326
 37   0.2389   0.3156   0.3853   0.4702   0.5303
 38   0.2327   0.3070   0.3747   0.4570   0.5154
 39   0.2318   0.3060   0.3733   0.4553   0.5133
 40   0.2261   0.2981   0.3636   0.4432   0.5000

By symmetry, the corresponding lower quantiles are their negatives.
illustrating his conditional approach. The data are presented in Table 6. Here we provide 95% exact c.i.'s using the distributions of $T$ and $S$ presented in the preceding sections. From the data, we find $\hat\mu = 10.13$ and $\hat\sigma = 3.36091$. From Table 4, we see that the upper 0.025-quantile of $T$ is 0.4128, and so the exact 95% c.i. for the location parameter $\mu$ is
$$ [10.13 - 0.4128 \times 3.36091,\; 10.13 + 0.4128 \times 3.36091] = [8.74,\, 11.52]. $$
For comparative purposes, we note that Bain and Engelhardt gave the approximate 95% c.i. for $\mu$ to be [8.91, 11.35], while Kappenman's conditional approach yielded [8.99, 12.41]. For a c.i. for the scale parameter $\sigma$, we find from Table 5 the 0.975 and 0.025 quantiles of $S$ to be 1.3492 and 0.6745, respectively. So, the 95% equi-tailed c.i. for $\sigma$ is
$$ [3.36091/1.3492,\; 3.36091/0.6745] = [2.49,\, 4.98]. $$
This essentially agrees with Bain and Engelhardt's approximate c.i. and Kappenman's conditional c.i. of [2.49, 4.97]. Childs and Balakrishnan (1996) discussed conditional inference for the Laplace parameters under Type-II right censoring. As an example, they considered the Fox River flood data and assumed that the 10 largest observations had been censored. They reported the 95% conditional c.i.'s for $\mu$ and $\sigma$ to be [7.69, 11.40] and [2.73, 6.30], respectively. Using the distributions derived in the previous sections, we found the 0.975 and 0.025 quantiles of $T$ to be $-0.4193$ and 0.4191, respectively. (Note that in this case the distribution of $T$ is not symmetric, due to the unbalanced censoring, and that is why the two quantiles differ in absolute value. Their absolute values are, however, very close because of the large sample size.) In this case, we have $\hat\mu = 10.13$, $\hat\sigma = 3.88217$ and so the 95% exact c.i. for $\mu$ is
$$ [10.13 - 0.4191 \times 3.88217,\; 10.13 + 0.4193 \times 3.88217] = [8.50,\, 11.76]. $$
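The interval arithmetic of the complete-sample part of this example is easy to reproduce; the snippet below (ours) simply redoes the computations with the quantiles quoted above.

```python
mu_hat, sigma_hat = 10.13, 3.36091        # MLEs from the Fox River data
t_q = 0.4128                              # upper 0.025 quantile of T (Table 4, n = 33)
ci_mu = (mu_hat - t_q * sigma_hat, mu_hat + t_q * sigma_hat)
s_lo, s_hi = 0.6745, 1.3492               # 0.025 and 0.975 quantiles of S (Table 5)
ci_sigma = (sigma_hat / s_hi, sigma_hat / s_lo)
# rounds to [8.74, 11.52] for mu and [2.49, 4.98] for sigma, as in the text
```

The same pattern, with the asymmetric quantiles $-0.4193$ and $0.4191$ and $\hat\sigma = 3.88217$, reproduces the censored-data interval $[8.50, 11.76]$.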
Table 5
Quantiles of $S=\hat{\sigma}/\sigma$ in the case of complete samples. (Column headings are upper-tail probabilities $\alpha$; the entry in column $\alpha$ is the $(1-\alpha)$-quantile of $S$.)

n    0.995   0.99    0.975   0.95    0.9     0.1     0.05    0.025   0.01    0.005
2    0.0050  0.0100  0.0250  0.0501  0.1006  1.6359  2.0565  2.4659  2.9951  3.3887
3    0.0473  0.0672  0.1073  0.1541  0.2246  1.4088  1.6976  1.9763  2.3345  2.6000
4    0.1103  0.1408  0.1962  0.2553  0.3383  1.5006  1.7616  2.0093  2.3228  2.5522
5    0.1591  0.1930  0.2521  0.3124  0.3941  1.4188  1.6374  1.8433  2.1022  2.2908
6    0.2087  0.2459  0.3091  0.3720  0.4554  1.4333  1.6345  1.8228  2.0582  2.2288
7    0.2448  0.2827  0.3459  0.4079  0.4887  1.3884  1.5684  1.7359  1.9444  2.0950
8    0.2818  0.3209  0.3852  0.4474  0.5276  1.3883  1.5567  1.7129  1.9066  2.0460
9    0.3092  0.3482  0.4117  0.4726  0.5503  1.3590  1.5144  1.6579  1.8354  1.9628
10   0.3378  0.3771  0.4406  0.5010  0.5776  1.3551  1.5023  1.6379  1.8050  1.9248
11   0.3595  0.3984  0.4609  0.5200  0.5944  1.3342  1.4723  1.5993  1.7554  1.8671
12   0.3824  0.4112  0.4833  0.5416  0.6147  1.3294  1.4613  1.5824  1.7309  1.8369
13   0.4000  0.4384  0.4995  0.5565  0.6278  1.3134  1.4387  1.5534  1.6939  1.7941
14   0.4188  0.4570  0.5174  0.5737  0.6437  1.3086  1.4290  1.5391  1.6737  1.7695
15   0.4336  0.4713  0.5307  0.5859  0.6542  1.2959  1.4112  1.5164  1.6449  1.7362
16   0.4494  0.4867  0.5455  0.5999  0.6671  1.2914  1.4027  1.5041  1.6279  1.7157
17   0.4620  0.4989  0.5567  0.6100  0.6758  1.2810  1.3882  1.4858  1.6046  1.6889
18   0.4755  0.5120  0.5692  0.6218  0.6845  1.2767  1.3807  1.4752  1.5901  1.6716
19   0.4864  0.5224  0.5787  0.6304  0.6938  1.2681  1.3686  1.4599  1.5708  1.6494
20   0.4981  0.5338  0.5894  0.6404  0.7029  1.2642  1.3619  1.4506  1.5583  1.6345
21   0.5076  0.5429  0.5977  0.6479  0.7092  1.2567  1.3517  1.4377  1.5420  1.6157
22   0.5179  0.5528  0.6071  0.6565  0.7170  1.2532  1.3457  1.4295  1.5310  1.6027
23   0.5264  0.5609  0.6144  0.6631  0.7225  1.2467  1.3368  1.4183  1.5170  1.5867
24   0.5355  0.5697  0.6226  0.6707  0.7293  1.2434  1.3315  1.4110  1.5073  1.5752
25   0.5431  0.5769  0.6291  0.6765  0.7342  1.2378  1.3237  1.4013  1.4951  1.5612
26   0.5513  0.5867  0.6364  0.6832  0.7401  1.2348  1.3189  1.3947  1.4865  1.5511
27   0.5581  0.5912  0.6422  0.6884  0.7445  1.2298  1.3120  1.3861  1.4757  1.5388
28   0.5655  0.5983  0.6487  0.6944  0.7498  1.2270  1.3076  1.3802  1.4679  1.5296
29   0.5717  0.6041  0.6540  0.6991  0.7537  1.2225  1.3015  1.3726  1.4584  1.5187
30   0.5784  0.6105  0.6599  0.7045  0.7584  1.2199  1.2974  1.3672  1.4513  1.5105
31   0.5841  0.6159  0.6647  0.7088  0.7620  1.2159  1.2919  1.3603  1.4427  1.5007
32   0.5902  0.6217  0.6701  0.7136  0.7663  1.2135  1.2882  1.3554  1.4363  1.4932
33   0.5955  0.6266  0.6745  0.7175  0.7695  1.2099  1.2832  1.3492  1.4286  1.4844
34   0.6011  0.6320  0.6793  0.7220  0.7734  1.2076  1.2798  1.3447  1.4227  1.4775
35   0.6059  0.6365  0.6834  0.7256  0.7764  1.2043  1.2753  1.3390  1.4157  1.4695
36   0.6111  0.6414  0.6879  0.7297  0.7799  1.2022  1.2721  1.3348  1.4103  1.4632
37   0.6155  0.6456  0.6916  0.7329  0.7826  1.1991  1.2679  1.3297  1.4039  1.4559
38   0.6203  0.6502  0.6958  0.7367  0.7859  1.1972  1.2650  1.3258  1.3989  1.4501
39   0.6243  0.6541  0.6992  0.7398  0.7884  1.1944  1.2612  1.3210  1.3930  1.4433
40   0.6290  0.6583  0.7031  0.7432  0.7914  1.1925  1.2584  1.3175  1.3883  1.4380
Table 6
Data on differences in flood stages for two stations on the Fox River, Wisconsin, for 33 different years.

 1.96   1.96   3.60   3.80   4.79   5.66   5.76   5.78   6.27   6.30   6.76
 7.65   7.84   7.99   8.51   9.18  10.13  10.24  10.25  10.43  11.45  11.48
11.75  11.81  12.34  12.78  13.06  13.29  13.98  14.18  14.40  16.22  17.06
Similarly, we found the 0.975- and 0.025-quantiles of $S$ in this case to be 1.4190 and 0.6147, respectively. So, the exact 95% c.i. for $\sigma$ is
$$[3.88217/1.4190,\; 3.88217/0.6147]=[2.74,\,6.32],$$
which, as in the case of the complete sample, essentially coincides with that obtained by the conditional approach.
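The interval computations above are simple arithmetic on the pivotal quantities. A minimal sketch (illustrative code, not from the paper; the point estimates and quantiles are the values quoted in the text, and the interval forms follow the text's arithmetic, $\hat{\mu}+t\hat{\sigma}$ for the location and $[\hat{\sigma}/s_{\mathrm{upper}},\hat{\sigma}/s_{\mathrm{lower}}]$ for the scale) reproduces all four 95% intervals:

```python
# Invert the pivotal quantities to obtain equi-tailed confidence intervals,
# following the arithmetic used in the text.

def ci_location(mu_hat, sigma_hat, t_lower, t_upper):
    # Location interval of the form mu_hat + t * sigma_hat,
    # with t_lower/t_upper the 0.025- and 0.975-quantiles of T.
    return (mu_hat + t_lower * sigma_hat, mu_hat + t_upper * sigma_hat)

def ci_scale(sigma_hat, s_lower, s_upper):
    # S = sigma_hat / sigma  =>  sigma in [sigma_hat/s_upper, sigma_hat/s_lower]
    return (sigma_hat / s_upper, sigma_hat / s_lower)

# Complete-sample Fox River example (n = 33)
lo, hi = ci_location(10.13, 3.36091, -0.4128, 0.4128)    # -> (8.74, 11.52)
slo, shi = ci_scale(3.36091, 0.6745, 1.3492)             # -> (2.49, 4.98)

# Right-censored example (10 largest observations censored)
clo, chi = ci_location(10.13, 3.88217, -0.4191, 0.4193)  # -> (8.50, 11.76)
cslo, cshi = ci_scale(3.88217, 0.6147, 1.4190)           # -> (2.74, 6.32)
```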
7. Concluding remarks

Here, we have developed exact distributional results for the pivotal quantities based on the MLEs of the location and scale parameters of the Laplace distribution under general Type-II censoring. Similar results could be developed when the location and scale parameters $\mu$ and $\sigma$ are estimated by other L-estimators, such as the best linear unbiased estimators and the best linear invariant estimators. Also, the results developed here for the case of Type-II censoring could be adapted to the situation when the available sample is progressively Type-II censored (see Balakrishnan, 2007, for details) or Type-I censored. Work on these problems is currently in progress, and we hope to report the findings in a future paper.
Appendix

Lemma 1. Let $U_1,\dots,U_k \stackrel{iid}{\sim} E(1)$ and $\theta=(\theta_1,\dots,\theta_k)$ be a vector of distinct positive real numbers. Then, the pdf of $U=\sum_{i=1}^{k}U_i/\theta_i$ is given by
$$ f_U(u;\theta)=\sum_{j=1}^{k}\Big(\prod_{i=1,\,i\neq j}^{k}\frac{\theta_i}{\theta_i-\theta_j}\Big)\theta_j e^{-u\theta_j},\qquad u>0. $$
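Lemma 1 is the classical hypoexponential density. A quick numerical sketch (illustrative code with arbitrary parameter values, not from the paper) implements it and checks two closed-form consequences: the mixing weights $c_j=\prod_{i\neq j}\theta_i/(\theta_i-\theta_j)$ sum to one (so the density integrates to one), and the mean $\sum_j c_j/\theta_j$ equals $\sum_i 1/\theta_i$:

```python
import math

def hypoexp_weights(thetas):
    # c_j = prod_{i != j} theta_i / (theta_i - theta_j); requires distinct thetas
    return [
        math.prod(t / (t - tj) for i, t in enumerate(thetas) if i != j)
        for j, tj in enumerate(thetas)
    ]

def lemma1_pdf(u, thetas):
    # f_U(u) = sum_j c_j * theta_j * exp(-u * theta_j), u > 0
    return sum(c * t * math.exp(-u * t)
               for c, t in zip(hypoexp_weights(thetas), thetas))

thetas = (1.0, 2.0, 3.0)
w = hypoexp_weights(thetas)                   # (3.0, -3.0, 1.0) for these thetas
total = sum(w)                                # should equal 1 (density integrates to one)
mean = sum(c / t for c, t in zip(w, thetas))  # E[U] = sum_i 1/theta_i = 11/6
```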
Theorem 1. Let $U_1,\dots,U_k \stackrel{iid}{\sim} E(1)$ and $W\sim\Gamma(a,1)$ independently of the $U$'s, where $a$ is a positive integer. Further, let $\theta=(\theta_1,\dots,\theta_k)$ be a vector of distinct positive numbers that are all different from 1. Then, the pdf of $S=\sum_{i=1}^{k}U_i/\theta_i+W$ is given by
$$ f_S^{(k)}(s;\theta,a)=\sum_{j=1}^{k}\Big(\prod_{i=1,\,i\neq j}^{k}\frac{\theta_i}{\theta_i-\theta_j}\Big)\frac{\theta_j e^{-s\theta_j}}{(1-\theta_j)^{a}}\,P(a,s(1-\theta_j)),\qquad s>0, \tag{4} $$
and the corresponding distribution function is given by
$$ F_S^{(k)}(s;\theta,a)=\sum_{j=1}^{k}\Big(\prod_{i=1,\,i\neq j}^{k}\frac{\theta_i}{\theta_i-\theta_j}\Big)\Big\{P(a,s)-P(a,s(1-\theta_j))\,\frac{e^{-s\theta_j}}{(1-\theta_j)^{a}}\Big\},\qquad s>0, $$
where $P(a,x)=\Gamma(a)^{-1}\int_0^x t^{a-1}e^{-t}\,dt$ is the regularized lower incomplete gamma function.
Proof. Consider first the case $k=1$. Then $U_1=\theta_1(S-W)$ and $dU_1/dS=\theta_1$, and so the joint pdf of $(S,W)$ is
$$ f_{S,W}^{(1)}(s,w;\theta_1,a)=\theta_1 e^{-s\theta_1}\,\frac{w^{a-1}e^{-w(1-\theta_1)}}{\Gamma(a)},\qquad s>w>0. $$
Thus, the marginal pdf of $S$ is
$$ f_S^{(1)}(s;\theta_1,a)=\theta_1 e^{-s\theta_1}\int_{w=0}^{s}\frac{w^{a-1}e^{-w(1-\theta_1)}}{\Gamma(a)}\,dw =\frac{\theta_1 e^{-s\theta_1}}{(1-\theta_1)^{a}}\int_{v=0}^{s(1-\theta_1)}\frac{v^{a-1}e^{-v}}{\Gamma(a)}\,dv =\frac{\theta_1 e^{-s\theta_1}}{(1-\theta_1)^{a}}\,P(a,s(1-\theta_1)),\qquad s>0. $$
On the other hand,
$$ F_S^{(1)}(s;\theta_1,a)=\int_{t=0}^{s}\int_{w=0}^{t}\theta_1 e^{-t\theta_1}\,\frac{w^{a-1}e^{-w(1-\theta_1)}}{\Gamma(a)}\,dw\,dt =\int_{w=0}^{s}\Big(\int_{t=w}^{s}\theta_1 e^{-t\theta_1}\,dt\Big)\frac{w^{a-1}e^{-w(1-\theta_1)}}{\Gamma(a)}\,dw $$
$$ =\int_{w=0}^{s}\{e^{-w\theta_1}-e^{-s\theta_1}\}\,\frac{w^{a-1}e^{-w(1-\theta_1)}}{\Gamma(a)}\,dw =P(a,s)-\frac{P(a,s(1-\theta_1))}{(1-\theta_1)^{a}}\,e^{-s\theta_1},\qquad s>0. $$
Now, let $k>1$. Since $S=U+W$ with $U=\sum_{i=1}^{k}U_i/\theta_i$, we have $U=S-W$ and $dU/dS=1$, and by using Lemma 1 we obtain the joint pdf of $(S,W)$ to be
$$ f_{S,W}^{(k)}(s,w;\theta,a)=\sum_{j=1}^{k}\Big(\prod_{i=1,\,i\neq j}^{k}\frac{\theta_i}{\theta_i-\theta_j}\Big)\theta_j e^{-(s-w)\theta_j}\,\frac{w^{a-1}e^{-w}}{\Gamma(a)} =\sum_{j=1}^{k}\Big(\prod_{i=1,\,i\neq j}^{k}\frac{\theta_i}{\theta_i-\theta_j}\Big)f_{S,W}^{(1)}(s,w;\theta_j,a),\qquad s>w>0. $$
Upon integrating with respect to $w$, we obtain the required result. The expression for $F_S^{(k)}(s;\theta,a)$ can be derived in an analogous manner. □
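For positive integer $a$, the regularized lower incomplete gamma function has the elementary form $P(a,x)=1-e^{-x}\sum_{m=0}^{a-1}x^m/m!$, which makes (4) easy to evaluate. The sketch below (illustrative code with arbitrary parameter values, not from the paper) implements the density and distribution function of Theorem 1 and checks that a central finite difference of $F_S$ matches $f_S$:

```python
import math

def P(a, x):
    # Regularized lower incomplete gamma for positive integer a:
    # P(a, x) = 1 - exp(-x) * sum_{m=0}^{a-1} x^m / m!
    return 1.0 - math.exp(-x) * sum(x**m / math.factorial(m) for m in range(a))

def weights(thetas):
    # c_j = prod_{i != j} theta_i / (theta_i - theta_j)
    return [math.prod(t / (t - tj) for i, t in enumerate(thetas) if i != j)
            for j, tj in enumerate(thetas)]

def f_S(s, thetas, a):
    # Equation (4) of Theorem 1
    return sum(c * tj * math.exp(-s * tj) / (1 - tj)**a * P(a, s * (1 - tj))
               for c, tj in zip(weights(thetas), thetas))

def F_S(s, thetas, a):
    # Distribution function of Theorem 1
    return sum(c * (P(a, s) - P(a, s * (1 - tj)) * math.exp(-s * tj) / (1 - tj)**a)
               for c, tj in zip(weights(thetas), thetas))

thetas, a = (0.4, 0.7), 2      # illustrative values with all theta_j in (0, 1)
s, h = 3.0, 1e-5
deriv = (F_S(s + h, thetas, a) - F_S(s - h, thetas, a)) / (2 * h)  # should match f_S(s)
```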
Theorem 2. Let $U_1,\dots,U_\ell,Z_1,\dots,Z_k,W$ be independent random variables, with the $U$'s and the $Z$'s $\stackrel{iid}{\sim} E(1)$ and $W\sim\Gamma(a,1)$, $a>0$. Further, let $\theta=(\theta_1,\dots,\theta_\ell)$, $\lambda=(\lambda_1,\dots,\lambda_k)$ and $\mu=(\mu_1,\dots,\mu_k)$ be vectors of positive numbers such that all $\theta$'s are distinct and $\lambda_i/\mu_i$ is strictly increasing in $i$. Then, the pdf of the random variable
$$ Y=\frac{\sum_{i=1}^{\ell}U_i/\theta_i+\sum_{i=1}^{k}\lambda_i Z_i}{\sum_{i=1}^{k}\mu_i Z_i+W} \tag{5} $$
is
$$ f_T^{(1,k)}(y;\theta_1,\lambda,\mu,a)=\frac{\theta_1}{(1+y\theta_1)^{a}\prod_{i=1}^{k}\{1+\theta_1(y\mu_i-\lambda_i)\}}\Big\{\frac{a}{1+y\theta_1}+\sum_{i=1}^{k}\frac{\mu_i}{1+\theta_1(y\mu_i-\lambda_i)}\Big\}I_{(0,\infty)}(y) $$
$$ -\sum_{j=1}^{k}\frac{\theta_1(\lambda_j-y\mu_j)^{a+k-1}}{(\lambda_j-y\mu_j+y)^{a}\{1+\theta_1(y\mu_j-\lambda_j)\}\prod_{i=1,\,i\neq j}^{k}\{\lambda_j-\lambda_i-y(\mu_j-\mu_i)\}}\Big\{\frac{a\lambda_j}{\lambda_j-y\mu_j+y}+\frac{\mu_j}{1+\theta_1(y\mu_j-\lambda_j)}+\sum_{i=1,\,i\neq j}^{k}\frac{\lambda_j\mu_i-\lambda_i\mu_j}{\lambda_j-\lambda_i-y(\mu_j-\mu_i)}\Big\}I_{(0,\lambda_j/\mu_j)}(y) $$
when $\ell=1$, and
$$ f_T^{(\ell,k)}(y;\theta,\lambda,\mu,a)=\sum_{j=1}^{\ell}\Big(\prod_{i=1,\,i\neq j}^{\ell}\frac{\theta_i}{\theta_i-\theta_j}\Big)f_T^{(1,k)}(y;\theta_j,\lambda,\mu,a) $$
when $\ell>1$.

Proof. Let us first consider the case $\ell=1$ and, for convenience, write $U$, $\theta$ instead of $U_1$, $\theta_1$. The joint pdf of $U,Z_1,\dots,Z_k,W$ is
$$ \frac{w^{a-1}}{\Gamma(a)}\,e^{-u-w-\sum_{i=1}^{k}z_i},\qquad u,w,z_1,\dots,z_k>0. $$
By solving (5) (for $\ell=1$) with respect to $U$, we get $U=\theta\{WY+\sum_{i=1}^{k}(\mu_i Y-\lambda_i)Z_i\}$ and $dU/dY=\theta(W+\sum_{i=1}^{k}\mu_i Z_i)>0$. So, the joint density of $Y,Z_1,\dots,Z_k,W$ is
$$ h^{(1)}(y,w,z_1,\dots,z_k;\theta,\lambda,\mu,a)=\frac{\theta w^{a-1}}{\Gamma(a)}\Big(w+\sum_{i=1}^{k}\mu_i z_i\Big)e^{-(1+y\theta)w-\sum_{i=1}^{k}\{1+\theta(y\mu_i-\lambda_i)\}z_i}, \tag{6} $$
for $w,z_1,\dots,z_k>0$ and $wy+\sum_{i=1}^{k}(y\mu_i-\lambda_i)z_i>0$.

Since $y\mu_i-\lambda_i>0\Leftrightarrow y>\lambda_i/\mu_i$, it follows that, for $y>\lambda_k/\mu_k$, we must integrate $h^{(1)}$ over all $w,z_1,\dots,z_k>0$. On the other hand, if $y\in(\lambda_{k-1}/\mu_{k-1},\lambda_k/\mu_k)$, we have $y\mu_k-\lambda_k<0$ and $y\mu_i-\lambda_i>0$, $i=1,\dots,k-1$, which means that $h^{(1)}$ must first be integrated for $0<z_k<yw/(\lambda_k-y\mu_k)+\sum_{i=1}^{k-1}\{(y\mu_i-\lambda_i)/(\lambda_k-y\mu_k)\}z_i$ and then for $w,z_1,\dots,z_{k-1}>0$. In general, when $y\in(\lambda_{j-1}/\mu_{j-1},\lambda_j/\mu_j)$, $h^{(1)}$ must be integrated in turn for $0<z_k<yw/(\lambda_k-y\mu_k)+\sum_{i=1}^{k-1}\{(y\mu_i-\lambda_i)/(\lambda_k-y\mu_k)\}z_i,\dots,0<z_j<yw/(\lambda_j-y\mu_j)+\sum_{i=1}^{j-1}\{(y\mu_i-\lambda_i)/(\lambda_j-y\mu_j)\}z_i$ and $w,z_1,\dots,z_{j-1}>0$.

In what follows, we make use of the formula
$$ \int_{x=0}^{M}e^{-\varepsilon x}(\gamma+\delta x)\,dx=\frac{1}{\varepsilon}\Big(\gamma+\frac{\delta}{\varepsilon}\Big)-\frac{1}{\varepsilon}e^{-\varepsilon M}\Big(\gamma+\delta M+\frac{\delta}{\varepsilon}\Big), $$
which holds for any $\gamma,\delta\in\mathbb{R}$ and either $M\in[0,\infty)$, $\varepsilon\in\mathbb{R}$ (where in the case $\varepsilon=0$, we must take the limit of the right-hand side as $\varepsilon\to0$), or $M=\infty$, $\varepsilon>0$. Moreover, in order to avoid unnecessarily complicated evaluations, we shall derive the density for all $y$ outside the set $A=\{y:1+\theta(y\mu_j-\lambda_j)=0 \text{ for some } j=1,\dots,k\}$, which is finite and hence has zero probability.

Let us first consider $y>\lambda_k/\mu_k$. Then, the density of $Y$ at $y$ is
$$ \int_{w=0}^{\infty}\int_{z_1=0}^{\infty}\cdots\int_{z_k=0}^{\infty}\frac{\theta w^{a-1}}{\Gamma(a)}\Big(w+\sum_{i=1}^{k}\mu_i z_i\Big)e^{-(1+y\theta)w-\sum_{i=1}^{k}\{1+\theta(y\mu_i-\lambda_i)\}z_i}\,dz_k\cdots dz_1\,dw $$
$$ =\frac{\theta}{1+\theta(y\mu_k-\lambda_k)}\int_{w=0}^{\infty}\int_{z_1=0}^{\infty}\cdots\int_{z_{k-1}=0}^{\infty}\frac{w^{a-1}}{\Gamma(a)}e^{-(1+y\theta)w-\sum_{i=1}^{k-1}\{1+\theta(y\mu_i-\lambda_i)\}z_i}\Big\{w+\sum_{i=1}^{k-1}\mu_i z_i+\frac{\mu_k}{1+\theta(y\mu_k-\lambda_k)}\Big\}\,dz_{k-1}\cdots dz_1\,dw \tag{7} $$
$$ =\cdots=\frac{\theta}{\prod_{i=1}^{k}\{1+\theta(y\mu_i-\lambda_i)\}}\int_{w=0}^{\infty}\frac{w^{a-1}}{\Gamma(a)}e^{-(1+y\theta)w}\Big\{w+\sum_{i=1}^{k}\frac{\mu_i}{1+\theta(y\mu_i-\lambda_i)}\Big\}\,dw $$
$$ =\frac{\theta}{(1+y\theta)^{a}\prod_{i=1}^{k}\{1+\theta(y\mu_i-\lambda_i)\}}\Big\{\frac{a}{1+y\theta}+\sum_{i=1}^{k}\frac{\mu_i}{1+\theta(y\mu_i-\lambda_i)}\Big\}, \tag{8} $$
as stated. Next, for $y\in(\lambda_{k-1}/\mu_{k-1},\lambda_k/\mu_k)\setminus A$, the $z_k$-integral runs from $0$ to $M_k=yw/(\lambda_k-y\mu_k)+\sum_{i=1}^{k-1}\{(y\mu_i-\lambda_i)/(\lambda_k-y\mu_k)\}z_i$. Applying the above formula with this finite upper limit splits the density of $Y$ into two $k$-fold integrals,
$$ \frac{\theta}{1+\theta(y\mu_k-\lambda_k)}\int\cdots\int\frac{w^{a-1}}{\Gamma(a)}e^{-(1+y\theta)w-\sum_{i=1}^{k-1}\{1+\theta(y\mu_i-\lambda_i)\}z_i}\Big\{w+\sum_{i=1}^{k-1}\mu_i z_i+\frac{\mu_k}{1+\theta(y\mu_k-\lambda_k)}\Big\}\,dz_{k-1}\cdots dz_1\,dw $$
$$ -\frac{\theta}{1+\theta(y\mu_k-\lambda_k)}\int\cdots\int\frac{w^{a-1}}{\Gamma(a)}e^{-\{1+y/(\lambda_k-y\mu_k)\}w-\sum_{i=1}^{k-1}\{1+(y\mu_i-\lambda_i)/(\lambda_k-y\mu_k)\}z_i}\Big\{\frac{\lambda_k w}{\lambda_k-y\mu_k}+\sum_{i=1}^{k-1}\frac{\lambda_k\mu_i-\lambda_i\mu_k}{\lambda_k-y\mu_k}z_i+\frac{\mu_k}{1+\theta(y\mu_k-\lambda_k)}\Big\}\,dz_{k-1}\cdots dz_1\,dw. \tag{9} $$
Note that the first $k$-fold integral is the same as that in (7), and so it is equal to (8). On the other hand, the second $k$-fold integral can be evaluated similarly and can be shown to equal
$$ \frac{\theta(\lambda_k-y\mu_k)^{a+k-1}}{(\lambda_k-y\mu_k+y)^{a}\{1+\theta(y\mu_k-\lambda_k)\}\prod_{i=1}^{k-1}\{\lambda_k-\lambda_i-y(\mu_k-\mu_i)\}}\Big\{\frac{a\lambda_k}{\lambda_k-y\mu_k+y}+\frac{\mu_k}{1+\theta(y\mu_k-\lambda_k)}+\sum_{i=1}^{k-1}\frac{\lambda_k\mu_i-\lambda_i\mu_k}{\lambda_k-\lambda_i-y(\mu_k-\mu_i)}\Big\}. \tag{10} $$
The density of $Y$ thus equals (8) minus (10), which proves the assertion for $y\in(\lambda_{k-1}/\mu_{k-1},\lambda_k/\mu_k)\setminus A$. In order to prove it for all of the remaining intervals $(\lambda_{j-1}/\mu_{j-1},\lambda_j/\mu_j)\setminus A$, we can use induction. However, here we will go just one step further and prove the result for $y\in(\lambda_{k-2}/\mu_{k-2},\lambda_{k-1}/\mu_{k-1})\setminus A$. In this case, the density of $Y$ can be found by continuing from (9) and integrating over $z_{k-1}$ from $0$ to $yw/(\lambda_{k-1}-y\mu_{k-1})+\sum_{i=1}^{k-2}\{(y\mu_i-\lambda_i)/(\lambda_{k-1}-y\mu_{k-1})\}z_i$ instead of from $0$ to $\infty$. After the integration with respect to $z_{k-1}$, the resulting $(k-1)$-fold integrals can be arranged into three groups (11). The terms in the first two groups reduce, after performing all the remaining integrations, to (8) and (10), respectively. For the third group, by using the facts that
$$ \frac{1}{1+\theta(y\mu_{k-1}-\lambda_{k-1})}-\frac{\lambda_k-y\mu_k}{\lambda_k-\lambda_{k-1}-y(\mu_k-\mu_{k-1})}=\frac{\{1+\theta(y\mu_k-\lambda_k)\}(\lambda_{k-1}-y\mu_{k-1})}{\{1+\theta(y\mu_{k-1}-\lambda_{k-1})\}\{\lambda_{k-1}-\lambda_k-y(\mu_{k-1}-\mu_k)\}} $$
and
$$ \frac{\mu_{k-1}}{1+\theta(y\mu_{k-1}-\lambda_{k-1})}+\frac{\mu_k}{1+\theta(y\mu_k-\lambda_k)}-\frac{\lambda_k-y\mu_k}{\lambda_k-\lambda_{k-1}-y(\mu_k-\mu_{k-1})}\Big\{\frac{\lambda_{k-1}\mu_k-\lambda_k\mu_{k-1}}{\lambda_{k-1}-\lambda_k-y(\mu_{k-1}-\mu_k)}+\frac{\mu_k}{1+\theta(y\mu_k-\lambda_k)}\Big\} $$
$$ =\frac{\{1+\theta(y\mu_k-\lambda_k)\}(\lambda_{k-1}-y\mu_{k-1})}{\{1+\theta(y\mu_{k-1}-\lambda_{k-1})\}\{\lambda_{k-1}-\lambda_k-y(\mu_{k-1}-\mu_k)\}}\Big\{\frac{\lambda_{k-1}\mu_k-\lambda_k\mu_{k-1}}{\lambda_{k-1}-\lambda_k-y(\mu_{k-1}-\mu_k)}+\frac{\mu_{k-1}}{1+\theta(y\mu_{k-1}-\lambda_{k-1})}\Big\}, $$
we obtain from (11), after carrying out all the integrations, the density of $Y$ as (8) minus (10) minus
$$ \frac{\theta(\lambda_{k-1}-y\mu_{k-1})^{a+k-1}}{(\lambda_{k-1}-y\mu_{k-1}+y)^{a}\{1+\theta(y\mu_{k-1}-\lambda_{k-1})\}\prod_{i=1,\,i\neq k-1}^{k}\{\lambda_{k-1}-\lambda_i-y(\mu_{k-1}-\mu_i)\}}\Big\{\frac{a\lambda_{k-1}}{\lambda_{k-1}-y\mu_{k-1}+y}+\frac{\mu_{k-1}}{1+\theta(y\mu_{k-1}-\lambda_{k-1})}+\sum_{i=1,\,i\neq k-1}^{k}\frac{\lambda_{k-1}\mu_i-\lambda_i\mu_{k-1}}{\lambda_{k-1}-\lambda_i-y(\mu_{k-1}-\mu_i)}\Big\}, $$
and this establishes the result for $\ell=1$.

In order to prove it for $\ell>1$, set $U=\sum_{i=1}^{\ell}U_i/\theta_i$, so that we have
$$ Y=\frac{U+\sum_{i=1}^{k}\lambda_i Z_i}{\sum_{i=1}^{k}\mu_i Z_i+W}. $$
By solving for $U$, we get $U=WY+\sum_{i=1}^{k}(Y\mu_i-\lambda_i)Z_i$ and $dU/dY=W+\sum_{i=1}^{k}\mu_i Z_i$, and upon using Lemma 1, we conclude that the joint pdf of $Y$, the $Z$'s, and $W$ is
$$ h^{(\ell)}(y,w,z_1,\dots,z_k;\theta,\lambda,\mu,a)=\sum_{j=1}^{\ell}\Big(\prod_{i=1,\,i\neq j}^{\ell}\frac{\theta_i}{\theta_i-\theta_j}\Big)\frac{\theta_j w^{a-1}}{\Gamma(a)}\Big(w+\sum_{i=1}^{k}\mu_i z_i\Big)e^{-(1+y\theta_j)w-\sum_{i=1}^{k}\{1+\theta_j(y\mu_i-\lambda_i)\}z_i}, $$
for $w,z_1,\dots,z_k>0$ and $wy+\sum_{i=1}^{k}(y\mu_i-\lambda_i)z_i>0$. Now, in order to integrate out the $z$'s and $w$, we have to work with the terms within each brace separately. However, all these quantities have the form of (6), and this yields the result. □

Remark 1. The distribution presented in Theorem 2, as well as those derived in Theorems 3 and 4 below, may possibly be deduced from the work of Provost and Rudiuk (1994). These authors discussed the distribution of the ratio of dependent linear combinations of chi-square random variables (which are in fact exponential random variables when the degrees of freedom equal two) via inverse Mellin transforms. However, in our special case, we have chosen to derive the required distributions in a straightforward manner through standard transformations of random variables, rather than to try to invert the corresponding Mellin transforms, which are expressed as infinite power series.
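The $\ell=1$ density of Theorem 2 can be sanity-checked numerically. The sketch below (illustrative code, not from the paper; the function name and parameter values are mine) implements the $\ell=1$ formula and evaluates it for $k=1$ with $\theta_1=\lambda_1=\mu_1=a=1$, i.e. $Y=(U+Z_1)/(Z_1+W)$ with $U,Z_1,W$ iid standard exponential; direct integration of the joint density in this special case gives $f(1/2)=5/9$ and $f(2)=5/36$, which the formula reproduces:

```python
import math

def f_T_1k(y, theta, lam, mu, a):
    # Theorem 2 density for l = 1; lam[i]/mu[i] must be strictly increasing
    k = len(lam)
    prod1 = math.prod(1 + theta * (y * mu[i] - lam[i]) for i in range(k))
    dens = (theta / ((1 + y * theta) ** a * prod1)
            * (a / (1 + y * theta)
               + sum(mu[i] / (1 + theta * (y * mu[i] - lam[i])) for i in range(k))))
    for j in range(k):  # subtracted terms, each supported on (0, lam_j/mu_j)
        if 0 < y < lam[j] / mu[j]:
            d = 1 + theta * (y * mu[j] - lam[j])
            prodj = math.prod(lam[j] - lam[i] - y * (mu[j] - mu[i])
                              for i in range(k) if i != j)
            brace = (a * lam[j] / (lam[j] - y * mu[j] + y) + mu[j] / d
                     + sum((lam[j] * mu[i] - lam[i] * mu[j])
                           / (lam[j] - lam[i] - y * (mu[j] - mu[i]))
                           for i in range(k) if i != j))
            dens -= (theta * (lam[j] - y * mu[j]) ** (a + k - 1)
                     / ((lam[j] - y * mu[j] + y) ** a * d * prodj) * brace)
    return dens

inside = f_T_1k(0.5, 1.0, (1.0,), (1.0,), 1)   # on (0, lam_1/mu_1): equals 5/9
outside = f_T_1k(2.0, 1.0, (1.0,), (1.0,), 1)  # beyond lam_1/mu_1: equals 5/36
```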
Theorem 3. Let $Z_1,\dots,Z_k,W$ be independent random variables, with the $Z$'s $\stackrel{iid}{\sim} E(1)$ and $W\sim\Gamma(a,1)$, $a>0$. Further, let $\lambda=(\lambda_1,\dots,\lambda_k)$ and $\mu=(\mu_1,\dots,\mu_k)$ be vectors of positive numbers such that $\lambda_i/\mu_i$ is strictly increasing in $i$. Then, the pdf of the random variable
$$ Y=\frac{\sum_{i=1}^{k}\lambda_i Z_i}{\sum_{i=1}^{k}\mu_i Z_i+W} \tag{12} $$
is
$$ f_T^{(0,k)}(y;\lambda,\mu,a)=\sum_{j=1}^{k}\frac{(\lambda_j-y\mu_j)^{a+k-2}}{(\lambda_j-y\mu_j+y)^{a}\prod_{i=1,\,i\neq j}^{k}\{\lambda_j-\lambda_i-y(\mu_j-\mu_i)\}}\Big\{\frac{a\lambda_j}{\lambda_j-y\mu_j+y}+\sum_{i=1,\,i\neq j}^{k}\frac{\lambda_j\mu_i-\lambda_i\mu_j}{\lambda_j-\lambda_i-y(\mu_j-\mu_i)}\Big\}I_{(0,\lambda_j/\mu_j)}(y). \tag{13} $$

Proof. From (12), we get $Z_k=\sum_{i=1}^{k-1}\{(Y\mu_i-\lambda_i)/(\lambda_k-Y\mu_k)\}Z_i+YW/(\lambda_k-Y\mu_k)$ with $dZ_k/dY=\sum_{i=1}^{k-1}\{(\lambda_k\mu_i-\lambda_i\mu_k)/(\lambda_k-Y\mu_k)^2\}Z_i+W\lambda_k/(\lambda_k-Y\mu_k)^2$, and so the joint pdf of $Z_1,\dots,Z_{k-1},W,Y$ is
$$ h(y,w,z_1,\dots,z_{k-1};\lambda,\mu,a)=\frac{w^{a-1}}{\Gamma(a)(\lambda_k-y\mu_k)^{2}}\Big\{w\lambda_k+\sum_{i=1}^{k-1}(\lambda_k\mu_i-\lambda_i\mu_k)z_i\Big\}e^{-\{1+y/(\lambda_k-y\mu_k)\}w-\sum_{i=1}^{k-1}\{1+(y\mu_i-\lambda_i)/(\lambda_k-y\mu_k)\}z_i}, $$
for $w,z_1,\dots,z_{k-1}>0$ and
$$ \frac{wy}{\lambda_k-y\mu_k}+\sum_{i=1}^{k-1}\frac{y\mu_i-\lambda_i}{\lambda_k-y\mu_k}z_i>0. $$
From the last inequality, we conclude that if $y>\lambda_k/\mu_k$ then the above joint pdf equals zero. Now, working as in the proof of Theorem 2, one has to consider here the cases $y\in(\lambda_{k-1}/\mu_{k-1},\lambda_k/\mu_k),(\lambda_{k-2}/\mu_{k-2},\lambda_{k-1}/\mu_{k-1}),\dots,(0,\lambda_1/\mu_1)$ and carry out the integrations to arrive at the required result. □

Remark 2. Let $U\sim E(1)$ independently of the $Z$'s and $W$. Then, by Theorem 2, for any $\theta>0$,
$$ Y'_{\theta}=\frac{U/\theta+\sum_{i=1}^{k}\lambda_i Z_i}{\sum_{i=1}^{k}\mu_i Z_i+W}\sim f_T^{(1,k)}(y;\theta,\lambda,\mu,a). $$
We have $\lim_{\theta\to\infty}Y'_{\theta}=Y$ almost surely, and consequently in distribution as well. On the other hand, it can be verified that $\lim_{\theta\to\infty}f_T^{(1,k)}(y;\theta,\lambda,\mu,a)=f_T^{(0,k)}(y;\lambda,\mu,a)$ for all $y>0$. Hence, Theorem 3 could be deduced from Theorem 2 by using a limiting argument, provided some additional regularity conditions were satisfied. But direct transformation of variables is sufficient for proving the result.
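Two special cases of (13) have elementary closed forms that make convenient checks: for $k=1$, $\lambda_1=\mu_1=1$, $Y=Z_1/(Z_1+W)$ is Beta$(1,a)$ with density $a(1-y)^{a-1}$; and for $k=2$ with $\lambda=(1,2)$, $\mu=(1,1)$, $a=1$, the vector $(Z_1,Z_2,W)$ normalized by its sum is uniform on the simplex, so $Y$ has the triangular density $f(y)=y$ on $(0,1)$ and $2-y$ on $(1,2)$. A short sketch (illustrative code, not from the paper) confirms both:

```python
import math

def f_T_0k(y, lam, mu, a):
    # Theorem 3 density (13); lam[i]/mu[i] must be strictly increasing
    k = len(lam)
    total = 0.0
    for j in range(k):
        if 0 < y < lam[j] / mu[j]:  # indicator I_(0, lam_j/mu_j)(y)
            prodj = math.prod(lam[j] - lam[i] - y * (mu[j] - mu[i])
                              for i in range(k) if i != j)
            brace = (a * lam[j] / (lam[j] - y * mu[j] + y)
                     + sum((lam[j] * mu[i] - lam[i] * mu[j])
                           / (lam[j] - lam[i] - y * (mu[j] - mu[i]))
                           for i in range(k) if i != j))
            total += ((lam[j] - y * mu[j]) ** (a + k - 2)
                      / ((lam[j] - y * mu[j] + y) ** a * prodj) * brace)
    return total

beta_case = f_T_0k(0.3, (1.0,), (1.0,), 3)         # a*(1-y)**(a-1) = 3*0.7**2
tri_low = f_T_0k(0.5, (1.0, 2.0), (1.0, 1.0), 1)   # triangular: f(y) = y -> 0.5
tri_high = f_T_0k(1.5, (1.0, 2.0), (1.0, 1.0), 1)  # triangular: f(y) = 2-y -> 0.5
```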
Theorem 4. Let $U_1,\dots,U_\ell,Z_1,\dots,Z_k\stackrel{iid}{\sim}E(1)$, and let $\theta=(\theta_1,\dots,\theta_\ell)$, $\lambda=(\lambda_1,\dots,\lambda_k)$ and $\mu=(\mu_1,\dots,\mu_k)$ be vectors of positive numbers such that all $\theta$'s are distinct and $\lambda_i/\mu_i$ is strictly increasing in $i$. Then,
$$ Y=\frac{\sum_{i=1}^{\ell}U_i/\theta_i+\sum_{i=1}^{k}\lambda_i Z_i}{\sum_{i=1}^{k}\mu_i Z_i}\sim f_T^{(\ell,k)}(y;\theta,\lambda,\mu,0). $$

Proof. The proof proceeds exactly as that of Theorem 2, and everything works in the same way except that there is no integral here with respect to $w$. □
Theorem 5. Let $U_1,\dots,U_\ell,W$ be independent random variables, with the $U$'s $\stackrel{iid}{\sim}E(1)$ and $W\sim\Gamma(a,1)$, $a>0$. Further, let $\theta=(\theta_1,\dots,\theta_\ell)$ be a vector of distinct positive numbers. Then, the pdf of the random variable
$$ Y=\frac{\sum_{i=1}^{\ell}U_i/\theta_i}{W} $$
is
$$ f_T^{(\ell,0)}(y;\theta,a)=\sum_{j=1}^{\ell}\Big(\prod_{i=1,\,i\neq j}^{\ell}\frac{\theta_i}{\theta_i-\theta_j}\Big)\frac{a\theta_j}{(1+y\theta_j)^{a+1}},\qquad y>0. $$

Proof. Using Lemma 1, it can be shown rather easily. □
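Each term of the density in Theorem 5 integrates in closed form, giving the CDF $F(y)=\sum_j c_j\{1-(1+y\theta_j)^{-a}\}$ with $c_j=\prod_{i\neq j}\theta_i/(\theta_i-\theta_j)$ (this closed-form CDF is derived here for checking, not quoted from the paper). A short sketch with illustrative parameter values:

```python
import math

def weights(thetas):
    # c_j = prod_{i != j} theta_i / (theta_i - theta_j)
    return [math.prod(t / (t - tj) for i, t in enumerate(thetas) if i != j)
            for j, tj in enumerate(thetas)]

def f_l0(y, thetas, a):
    # Theorem 5 density
    return sum(c * a * t / (1 + y * t) ** (a + 1)
               for c, t in zip(weights(thetas), thetas))

def F_l0(y, thetas, a):
    # closed-form CDF: each density term integrates to 1 - (1 + y*theta_j)**(-a)
    return sum(c * (1 - (1 + y * t) ** (-a))
               for c, t in zip(weights(thetas), thetas))

point = f_l0(0.5, (2.0,), 3)   # single term: a*theta/(1+y*theta)**(a+1) = 6/16 = 0.375
h = 1e-6
deriv = (F_l0(0.7 + h, (1.0, 2.5), 2) - F_l0(0.7 - h, (1.0, 2.5), 2)) / (2 * h)
```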
Theorem 6. Let $U,Z,W$ be independent random variables, with $U,Z\stackrel{iid}{\sim}E(1)$ and $W\sim\Gamma(a,1)$, $a>0$. Also, let $c>0$. Then, the pdf of the random variable
$$ Y=\frac{c(U-Z)}{U+Z+W} $$
is given by
$$ g_T(y;c,a)=\frac{a+1}{2c}\Big(1-\frac{|y|}{c}\Big)^{a}\,I_{(-c,c)}(y). $$

Proof. Since $c$ is just a scale parameter, consider first $c=1$ and set $R=U-Z$, $S=U+Z$. Then, $U=(S+R)/2$, $Z=(S-R)/2$, with Jacobian $\tfrac12$, and so the joint density of $R,S,W$ is
$$ g_{R,S,W}(r,s,w)=\frac{w^{a-1}}{2\Gamma(a)}e^{-s-w},\qquad w>0,\ s+r>0,\ s-r>0. $$
Since $Y=R/(S+W)$, we have $R=Y(S+W)$, $dR/dY=S+W$, and the joint density of $Y,S,W$ is
$$ g_{Y,S,W}(y,s,w)=(s+w)\frac{w^{a-1}}{2\Gamma(a)}e^{-s-w},\qquad w>0,\ (1+y)s+yw>0,\ (1-y)s-yw>0. $$
But
$$ \{(y,s,w):w>0,\,(1+y)s+yw>0,\,(1-y)s-yw>0\}=\{(y,s,w):w>0,\,-1<y<0,\,s>-yw/(1+y)\}\cup\{(y,s,w):w>0,\,0\le y<1,\,s>yw/(1-y)\}, $$
and so
$$ g_T(y;c=1,a)=\begin{cases}\displaystyle\int_{w=0}^{\infty}\int_{s=-yw/(1+y)}^{\infty}(s+w)\frac{w^{a-1}}{2\Gamma(a)}e^{-s-w}\,ds\,dw, & -1<y<0,\\[2ex]\displaystyle\int_{w=0}^{\infty}\int_{s=yw/(1-y)}^{\infty}(s+w)\frac{w^{a-1}}{2\Gamma(a)}e^{-s-w}\,ds\,dw, & 0\le y<1,\end{cases}\;=\;\frac{a+1}{2}(1-|y|)^{a}\,I_{(-1,1)}(y). \qquad\square $$
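Integrating the density of Theorem 6 gives the closed-form CDF $G(y)=\tfrac12(1+y/c)^{a+1}$ for $-c<y<0$ and $1-\tfrac12(1-y/c)^{a+1}$ for $0\le y<c$ (derived here for checking, not quoted from the paper). A short sketch with illustrative parameter values verifies the symmetry of the density and that the CDF's derivative recovers it:

```python
def g_T(y, c, a):
    # Theorem 6 density: (a+1)/(2c) * (1 - |y|/c)**a on (-c, c)
    if abs(y) >= c:
        return 0.0
    return (a + 1) / (2 * c) * (1 - abs(y) / c) ** a

def G_T(y, c, a):
    # Closed-form CDF obtained by integrating g_T
    if y <= -c:
        return 0.0
    if y >= c:
        return 1.0
    if y < 0:
        return 0.5 * (1 + y / c) ** (a + 1)
    return 1.0 - 0.5 * (1 - y / c) ** (a + 1)

c, a = 2.0, 3
center = g_T(0.0, c, a)   # (a+1)/(2c) = 1.0 for these values
```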
References

Arnold, B.C., Balakrishnan, N., Nagaraja, H.N., 2008. A First Course in Order Statistics, classic ed. SIAM, Philadelphia.
Bain, L.J., Engelhardt, M., 1973. Interval estimation for the two-parameter double exponential distribution. Technometrics 15, 875–887.
Balakrishnan, N., 2007. Progressive censoring methodology: an appraisal (with discussions). Test 16, 211–296.
Balakrishnan, N., Cutler, C.D., 1995. Maximum likelihood estimation of Laplace parameters based on Type-II censored samples. In: Nagaraja, H.N., Sen, P.K., Morrison, D.F. (Eds.), Statistical Theory and Applications: Papers in Honor of Herbert A. David. Springer-Verlag, New York, pp. 145–151.
Chernoff, H., Gastwirth, J.L., Johns, M.V., 1967. Asymptotic distribution of linear combinations of functions of order statistics with applications to estimation. The Annals of Mathematical Statistics 38, 52–72.
Childs, A., Balakrishnan, N., 1996. Conditional inference procedures for the Laplace distribution based on Type-II right censored samples. Statistics and Probability Letters 31, 31–39.
Childs, A., Balakrishnan, N., 1997. Maximum likelihood estimation of Laplace parameters based on general Type-II censored samples. Statistical Papers 38, 343–349.
Efron, B., Tibshirani, R., 1993. An Introduction to the Bootstrap. Chapman & Hall, New York.
Grice, J.V., Bain, L.J., Engelhardt, M., 1978. Comparison of conditional and unconditional confidence intervals for the double exponential distribution. Communications in Statistics—Simulation and Computation 7, 515–524.
Iliopoulos, G., Balakrishnan, N., 2009. Conditional independence of blocked ordered data. Statistics and Probability Letters 79, 1008–1015.
Johnson, N.L., Kotz, S., Balakrishnan, N., 1995. Continuous Univariate Distributions, vol. 2, second ed. John Wiley & Sons, New York.
Kappenman, R.F., 1975. Conditional confidence intervals for double exponential distribution parameters. Technometrics 17, 233–235.
Kotz, S., Kozubowski, T.J., Podgórski, K., 2001. The Laplace Distribution and Generalizations. Birkhäuser, Boston.
Provost, S.B., Rudiuk, E.M., 1994. The exact density function of the ratio of two dependent linear combinations of chi-square variables. Annals of the Institute of Statistical Mathematics 46, 557–571.
Sansing, R.C., 1976. The t-statistic for a double exponential distribution. SIAM Journal on Applied Mathematics 31, 634–645.