A new weighted integral goodness-of-fit statistic for exponentiality

Statistics and Probability Letters 78 (2008) 1006–1016 www.elsevier.com/locate/stapro

L. Baringhaus ∗ (Leibniz Universität Hannover, Germany) and N. Henze (Universität Karlsruhe (TH), Germany)

Received 8 May 2007; received in revised form 27 July 2007; accepted 25 September 2007 Available online 20 November 2007

Abstract. We propose a new weighted integral goodness-of-fit statistic for exponentiality. The statistic is motivated by a characterization of the exponential distribution via the mean residual life function. Its limit null distribution is the same as that of a certain weighted integral of the squared Brownian bridge. The Laplace transform and cumulants of the latter are expressible in terms of Bessel functions. © 2007 Elsevier B.V. All rights reserved.

MSC: primary 62G10; secondary 62E20

1. Introduction

Let X_1, ..., X_n be independent and identically distributed nonnegative random variables with some unknown distribution function F(x) = P(X_1 ≤ x), x ≥ 0. It is well known that, under the assumption 0 < E(X_1) < ∞, the distribution of X_1 is exponential, i.e.

    F(x) = 1 - \exp(-\lambda x), \quad x \ge 0,

for some λ > 0, if and only if the mean residual life function is constant, i.e. we have

    E(X_1 - z \mid X_1 > z) = E(X_1) \quad \text{for each } z > 0.    (1)

Baringhaus and Henze (2000) noted that (1) is equivalent to

    E(\min(X_1, z)) = E(X_1)\, F(z) \quad \text{for each } z > 0.    (2)

Arguing that, under the assumptions X_1 ≥ 0 and 0 < E(X_1) < ∞, (2) is a characteristic property of the class {Exp(λ) : λ > 0} of exponential distributions, they suggested a new approach to assess exponentiality. In particular,

∗ Corresponding address: Universität Hannover, Institut für Mathematische Stochastik, Welfengarten 1, 30167 Hannover, Germany. Tel.: +49 511 762 4246; fax: +49 511 762 5385. E-mail address: [email protected] (L. Baringhaus).
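As a quick numerical illustration of the characterization (2) (our own sketch, not part of the paper), the following Python snippet estimates both sides of (2) from simulated data: the gap is near zero for an exponential sample, but clearly bounded away from zero for a uniform sample with the same mean. The function name `char_gap` is ours.

```python
import numpy as np

def char_gap(x, z):
    """Estimate |E min(X, z) - E(X) F(z)| from a sample x at each point of z."""
    x = np.asarray(x, dtype=float)
    emin = np.minimum.outer(x, z).mean(axis=0)                 # estimates E min(X, z)
    rhs = x.mean() * (x[:, None] <= z[None, :]).mean(axis=0)   # estimates E(X) F(z)
    return np.abs(emin - rhs)

rng = np.random.default_rng(1)
z = np.linspace(0.1, 3.0, 30)
gap_exp = char_gap(rng.exponential(size=100_000), z).max()     # small: only sampling noise
gap_unif = char_gap(rng.uniform(0, 2, size=100_000), z).max()  # mean 1, but not exponential
```

For the uniform(0, 2) sample the true gap equals z(2 − z)/4 on [0, 2], with maximum 0.25 at z = 1, so the statistic in Section 1 has something to detect.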

0167-7152/$ – see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.spl.2007.09.060


they proposed the Cramér–von Mises type statistic

    G_n = n \int_0^\infty \left[ \frac{1}{n}\sum_{k=1}^n \min(U_k, z) - \frac{1}{n}\sum_{k=1}^n I(U_k \le z) \right]^2 e^{-z}\,dz,    (3)

where \bar{X} = \frac{1}{n}\sum_{k=1}^n X_k and U_k = X_k/\bar{X}, k = 1, ..., n. Interestingly, the limit null distribution of G_n is the same as the limit null distribution of the classical Cramér–von Mises statistic when testing the simple hypothesis of uniformity on the interval [0, 1]. The goodness-of-fit test based on G_n, rejecting the hypothesis of exponentiality if G_n > g_n, where g_n is the (1 − α)-quantile of the null distribution of G_n for some given α ∈ (0, 1), is consistent against any fixed alternative distribution. The test is discussed in the recent review paper on goodness-of-fit tests for exponentiality by Henze and Meintanis (2005). From the comparative simulation study given there one may conclude that it is a serious competitor, although other procedures show better power against special alternatives. The favorable behavior of these procedures is, however, achieved by varying a weight parameter built into the corresponding test statistic. To obtain a possible gain in power we therefore generalize (3) in a natural way by using the more general weight function e^{-az}, z ≥ 0, where a > −1 is a real parameter. This leads to the test statistic

    G_{n,a} = n \int_0^\infty \left[ \frac{1}{n}\sum_{k=1}^n \min(U_k, z) - \frac{1}{n}\sum_{k=1}^n I(U_k \le z) \right]^2 e^{-az}\,dz.    (4)

By some elementary calculations we see that in the case a ≠ 0 the statistic can be written in the form

    G_{n,a} = \frac{1}{n}\sum_{k,\ell=1}^n \left[ \frac{2}{a^3} - \left(\frac{1}{a} + \frac{1}{a^2}\right)\min(U_k, U_\ell)\left(e^{-aU_k} + e^{-aU_\ell}\right) - \left(\frac{1}{a^2} + \frac{2}{a^3}\right) e^{-a\min(U_k, U_\ell)} + \left(\frac{1}{a} + \frac{1}{a^2}\right) e^{-a\max(U_k, U_\ell)} \right].

In the special case a = 0 the statistic has the alternative representation

    G_{n,0} = \frac{1}{n}\sum_{k,\ell=1}^n \left[ \frac{1}{3}\min(U_k, U_\ell)^3 + \left(\min(U_k, U_\ell) - 1\right)\max(U_k, U_\ell) - \frac{1}{2}\left(\min(U_k, U_\ell) - 1\right)\left(U_k^2 + U_\ell^2\right) \right].

Using the central limit theorem in Hilbert spaces as a method of proof different from that given in Baringhaus and Henze (2000), we show in the next section that the limit null distribution of G_{n,a} is the same as that of \int_0^1 B(t)^2 (1-t)^{a-1}\,dt, where (B(t), 0 ≤ t ≤ 1) is a Brownian bridge. Using this representation we shall be able to express the Laplace transform of the limit null distribution in terms of the Bessel function I_{1/(a+1)} and, additionally, its cumulants via sums of the even powers of the reciprocal zeros of the Bessel function J_{1/(a+1)}. Critical values and empirical power values of the test obtained by simulation are shown in the last section.

2. The limit null distribution of G_{n,a}

The null distribution of G_{n,a} does not depend on the parameter λ of the underlying exponential distribution. We therefore assume that λ = 1, i.e. that X_k has the distribution function 1 − exp(−x), x ≥ 0. Putting

    R_n(z) = \sqrt{n}\left( \frac{1}{n}\sum_{k=1}^n \min(X_k, z) - \bar{X}\,\frac{1}{n}\sum_{k=1}^n I(X_k \le z) \right), \quad z \ge 0,

it follows that

    G_{n,a} = \frac{1}{\bar{X}^3} \int_0^\infty R_n^2(z) \left[ \exp\!\left(-\Big(\frac{1}{\bar{X}} - 1\Big) a z\right) - 1 \right] e^{-az}\,dz + \frac{1}{\bar{X}^3} \int_0^\infty R_n^2(z)\, e^{-az}\,dz.    (5)
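To make the double-sum representation of G_{n,a} concrete, here is a small Python sketch (our own illustration, not code from the paper) that evaluates the statistic for a ≠ 0 both via the double sum and via direct numerical integration of definition (4); the two agree up to quadrature error. The function names `g_na` and `g_na_quad` are ours.

```python
import numpy as np

def g_na(x, a):
    """G_{n,a} via the closed-form double sum (valid for a != 0)."""
    x = np.asarray(x, dtype=float)
    u = x / x.mean()                           # scaled sample U_k = X_k / X-bar
    uk, ul = np.meshgrid(u, u)
    mn, mx = np.minimum(uk, ul), np.maximum(uk, ul)
    term = (2 / a**3
            - (1/a + 1/a**2) * mn * (np.exp(-a * uk) + np.exp(-a * ul))
            - (1/a**2 + 2/a**3) * np.exp(-a * mn)
            + (1/a + 1/a**2) * np.exp(-a * mx))
    return term.sum() / u.size

def g_na_quad(x, a, zmax=10.0, m=100_001):
    """G_{n,a} by trapezoidal integration of (4), for cross-checking.

    The integrand vanishes for z > max(U_k), since mean(U_k) = 1, so a finite
    zmax beyond max(U_k) suffices."""
    x = np.asarray(x, dtype=float)
    n = x.size
    u = x / x.mean()
    z = np.linspace(0.0, zmax, m)
    d = (np.minimum.outer(u, z).mean(axis=0)
         - (u[:, None] <= z[None, :]).mean(axis=0))
    y = d**2 * np.exp(-a * z)
    return n * (y.sum() - 0.5 * (y[0] + y[-1])) * (z[1] - z[0])  # trapezoid rule

x = np.array([0.3, 0.9, 1.1, 2.7])
```

For n = 1 the sample gives U_1 = 1, so G_{1,a} = ∫₀¹ z² e^{−az} dz = 2/a³ − e^{−a}(1/a + 2/a² + 2/a³), which the double sum reproduces exactly.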


We shall show below that for each a > −1 the statistic \int_0^\infty R_n^2(z) \exp(-az)\,dz has a limit distribution. Using this in advance, we first prove that

    \int_0^\infty R_n^2(z) \left[ \exp\!\left(-\Big(\frac{1}{\bar{X}} - 1\Big) a z\right) - 1 \right] e^{-az}\,dz = o_P(1).    (6)

Let us treat the cases a ≥ 0 and −1 < a < 0 separately. If a ≥ 0 we put η = 1/2, a* = −η and ε₀ = 1. For a given 0 < ε ≤ ε₀ we consider the case where |1/\bar{X} − 1| < ε. From

    \left| \exp\!\left(-\Big(\frac{1}{\bar{X}} - 1\Big) a z\right) - 1 \right| \le \varepsilon a z \exp(\varepsilon a z), \quad z \ge 0,

we obtain, using z ≤ e^{ηz}/η for z ≥ 0, that

    \int_0^\infty R_n^2(z) \left| \exp\!\left(-\Big(\tfrac{1}{\bar{X}} - 1\Big) a z\right) - 1 \right| e^{-az}\,dz
      \le \varepsilon a \int_0^\infty R_n^2(z)\, z \exp(-a(1-\varepsilon)z)\,dz
      \le \frac{\varepsilon a}{\eta} \int_0^\infty R_n^2(z) \exp(-[a(1-\varepsilon) - \eta]z)\,dz
      \le \frac{\varepsilon a}{\eta} \int_0^\infty R_n^2(z) \exp(-a^* z)\,dz.

If −1 < a < 0 we put η = (a+1)/4, a* = (a−3)/4 and ε₀ = (1/2)(−1/a − 1). For a given 0 < ε ≤ ε₀ we consider the case where |1/\bar{X} − 1| < ε. From

    \left| \exp\!\left(-\Big(\frac{1}{\bar{X}} - 1\Big) a z\right) - 1 \right| \le \varepsilon z |a| \exp(\varepsilon |a| z), \quad z \ge 0,

we obtain

    \int_0^\infty R_n^2(z) \left| \exp\!\left(-\Big(\tfrac{1}{\bar{X}} - 1\Big) a z\right) - 1 \right| e^{-az}\,dz
      \le \varepsilon |a| \int_0^\infty R_n^2(z)\, z \exp(-a(1+\varepsilon)z)\,dz
      \le \frac{\varepsilon |a|}{\eta} \int_0^\infty R_n^2(z) \exp(-[a(1+\varepsilon) - \eta]z)\,dz
      \le \frac{\varepsilon |a|}{\eta} \int_0^\infty R_n^2(z) \exp(-a^* z)\,dz.

Thus in any case there is some a* > −1 such that, for each 0 < ε ≤ ε₀,

    \int_0^\infty R_n^2(z) \left| \exp\!\left(-\Big(\tfrac{1}{\bar{X}} - 1\Big) a z\right) - 1 \right| \exp(-az)\,dz \le \frac{\varepsilon |a|}{\eta} \int_0^\infty R_n^2(z) \exp(-a^* z)\,dz

if |1/\bar{X} − 1| < ε. Using that \int_0^\infty R_n^2(z) \exp(-a^* z)\,dz has a limit distribution, 1/\bar{X} = 1 + o_P(1) and 1/\bar{X}^3 = 1 + o_P(1), we obtain (6) and, additionally,

    G_{n,a} = \int_0^\infty R_n^2(z) \exp(-az)\,dz + o_P(1).

It remains to derive the limit distribution of \int_0^\infty R_n^2(z)\, e^{-az}\,dz. To this end note that

    R_n(z) = H_n(z) + (\bar{X} - 1) L_n(z), \quad z \ge 0,

where

    H_n(z) = \sqrt{n}\left( \frac{1}{n}\sum_{k=1}^n \left[ \min(X_k, z) - I(X_k \le z) - (X_k - 1)(1 - e^{-z}) \right] \right), \quad z \ge 0,


and

    L_n(z) = \sqrt{n}\left( \frac{1}{n}\sum_{k=1}^n \left[ 1 - e^{-z} - I(X_k \le z) \right] \right), \quad z \ge 0.

H_n and L_n are random elements in the Hilbert space L²(R₊, B₊, µ_a), where R₊ = [0, ∞), B₊ is the Borel σ-field on R₊, and µ_a is the σ-finite measure having density exp(−ax), x ≥ 0, with respect to Lebesgue measure on (R₊, B₊). Since

    \int_0^\infty \mathrm{Var}\!\left( \min(X_1, z) - I(X_1 \le z) - (X_1 - 1)(1 - e^{-z}) \right) d\mu_a(z) = \int_0^\infty \left[1 - e^{-z}\right] e^{-(a+1)z}\,dz < \infty

and

    \int_0^\infty \mathrm{Var}\!\left( 1 - e^{-z} - I(X_1 \le z) \right) d\mu_a(z) = \int_0^\infty \left[1 - e^{-z}\right] e^{-(a+1)z}\,dz < \infty,

the central limit theorem for Hilbert space-valued random variables applies (see Ledoux and Talagrand (1991), Corollary 10.9). Recognizing that

    \rho(w, z) = \min(1 - e^{-w}, 1 - e^{-z}) - (1 - e^{-w})(1 - e^{-z}), \quad w, z \ge 0,    (7)

is the covariance function of the process (H_n(z), z ≥ 0) and also that of the process (L_n(z), z ≥ 0), there is a zero mean Gaussian process (H(z), z ≥ 0) with sample paths in L²(R₊, B₊, µ_a) and covariance kernel (7) such that

    \int H_n^2(z)\,d\mu_a(z) \xrightarrow{D} \int H^2(z)\,d\mu_a(z) \quad \text{and} \quad \int L_n^2(z)\,d\mu_a(z) \xrightarrow{D} \int H^2(z)\,d\mu_a(z),

where "\xrightarrow{D}" means convergence in distribution. Due to \bar{X} − 1 = o_P(1), this implies that

    \int_0^\infty R_n^2(z) \exp(-az)\,dz \xrightarrow{D} \int_0^\infty H^2(z) \exp(-az)\,dz.

The result just proved is summarized as follows.

Theorem 1. The limit null distribution of the test statistic G_{n,a} is that of \int H^2(z)\,d\mu_a(z), where (H(z), z ≥ 0) is a zero mean Gaussian process with sample paths in L²(R₊, B₊, µ_a) and covariance function (7).

The Gaussian process (H(z), z ≥ 0) has the same covariance function as the process (B(1 − e^{-z}), z ≥ 0), where (B(t), 0 ≤ t ≤ 1) is the Brownian bridge. Consequently,

    \int H^2(z)\,d\mu_a(z) \overset{D}{=} \int_0^1 B^2(1-t)\, t^{a-1}\,dt \overset{D}{=} \int_0^1 B^2(t)\, (1-t)^{a-1}\,dt,

where "\overset{D}{=}" denotes equality in distribution. Since the limit distribution may be of independent interest, it is studied further. Let

    G_a = \int_0^1 B(t)^2 (1-t)^{a-1}\,dt.
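Since G_a is just a weighted integral of a squared Brownian bridge, it is easy to simulate. The sketch below (ours, not from the paper) approximates G_a on a grid for a ≥ 1 and compares the sample mean and variance with E(G_1) = ∫₀¹ t(1−t) dt = 1/6 and Var(G_1) = 2∫₀¹∫₀¹ (min(s,t) − st)² ds dt = 1/45, both of which follow directly from Fubini's theorem.

```python
import numpy as np

def simulate_G(a, reps=8000, m=500, rng=None):
    """Monte Carlo draws of G_a = int_0^1 B(t)^2 (1-t)^(a-1) dt, a >= 1,
    with the Brownian bridge evaluated exactly at the grid points t_i = i/m."""
    rng = np.random.default_rng(0) if rng is None else rng
    t = np.arange(1, m + 1) / m
    g = rng.standard_normal((reps, m)) / np.sqrt(m)
    w = np.cumsum(g, axis=1)            # Brownian motion at the grid points
    b = w - t * w[:, -1:]               # Brownian bridge: B(t) = W(t) - t W(1)
    return (b**2 * (1 - t) ** (a - 1)).mean(axis=1)   # Riemann sum (1/m) * sum

ga = simulate_G(1.0)
mean_est = ga.mean()                    # should be close to 1/6
```

The Riemann-sum bias is O(1/m²) here, so the Monte Carlo error dominates; for −1 < a < 1 the weight (1−t)^{a−1} blows up at t = 1 and this naive grid scheme would need refinement near the right endpoint.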

Theorem 2. (a) The Laplace transform of G_a is

    \psi_a(z) = \prod_{j=1}^\infty \left( 1 + \left(\frac{2}{a+1}\right)^2 \frac{2z}{\gamma_j^2} \right)^{-1/2}
              = \left\{ \Gamma\!\left(1 + \frac{1}{a+1}\right) \left( \frac{\sqrt{2z}}{a+1} \right)^{-\frac{1}{a+1}} I_{\frac{1}{a+1}}\!\left( \frac{2}{a+1}\sqrt{2z} \right) \right\}^{-1/2}, \quad z \ge 0,

where I_ν is the modified Bessel function of the first kind of order ν and 0 < γ₁ < γ₂ < ··· are the positive zeros of the Bessel function J_{1/(a+1)} of the first kind.

(b) The first four cumulants of G_a are

    \kappa_1 = E(G_a) = \frac{1}{(a+1)(a+2)},

    \kappa_2 = \mathrm{Var}(G_a) = \frac{2}{(a+1)(a+2)^2(2a+3)},

    \kappa_3 = \frac{16}{(a+1)(a+2)^3(2a+3)(3a+4)},

    \kappa_4 = \frac{48(11a+16)}{(a+1)(a+2)^4(2a+3)^2(3a+4)(4a+5)}.
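The cumulant formulas in part (b) can be checked numerically without any Bessel-function machinery: discretize the covariance operator A f(t) = ∫ k(t, s) f(s) dν_a(s) of the Brownian bridge on a grid and use the trace identity κ_ν = 2^{ν−1}(ν−1)! tr(A^ν) for quadratic functionals of Gaussian processes, which also appears in the proof below. This sketch is our own illustration, not code from the paper.

```python
import numpy as np

def cumulants_numeric(a, m=1000, num=4):
    """First cumulants of G_a via a midpoint discretization of the covariance
    operator of the Brownian bridge on L^2([0,1], (1-t)^(a-1) dt)."""
    t = (np.arange(m) + 0.5) / m                      # midpoint grid on (0, 1)
    w = (1.0 - t) ** (a - 1.0) / m                    # quadrature weights for d nu_a
    K = np.minimum.outer(t, t) - np.outer(t, t)       # kernel k(s,t) = min(s,t) - st
    M = K * w                                         # discretized operator A
    kappas, P, fact = [], M, 1
    for nu in range(1, num + 1):
        kappas.append(2 ** (nu - 1) * fact * np.trace(P))  # 2^(nu-1) (nu-1)! tr(A^nu)
        P = P @ M
        fact *= nu
    return kappas

def cumulants_exact(a):
    """Closed-form cumulants from Theorem 2(b)."""
    return [1 / ((a + 1) * (a + 2)),
            2 / ((a + 1) * (a + 2) ** 2 * (2 * a + 3)),
            16 / ((a + 1) * (a + 2) ** 3 * (2 * a + 3) * (3 * a + 4)),
            48 * (11 * a + 16) / ((a + 1) * (a + 2) ** 4 * (2 * a + 3) ** 2
                                  * (3 * a + 4) * (4 * a + 5))]
```

For a = 1 this recovers the classical Cramér–von Mises values κ₁ = 1/6, κ₂ = 1/45, κ₃ = 8/945 and κ₄ = 8/1575.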

Proof. Although part (a) of the theorem can be obtained from the material presented in Deheuvels and Martynov (2003), we give a derivation for the special case considered here. Note that

    G_a = \int B^2(t)\,d\nu_a(t),

where ν_a is the σ-finite measure with density (1−t)^{a−1} with respect to the Lebesgue measure on the Borel sets B_{[0,1]} of the interval [0, 1]. Let A be the integral operator on L²([0, 1], B_{[0,1]}, ν_a) associated with the covariance kernel k(s, t) = min(s, t) − st, 0 ≤ s, t ≤ 1, of the Brownian bridge. Thus

    A f(t) = \int_0^1 k(t, s) f(s)\,d\nu_a(s), \quad 0 \le t \le 1,

for f ∈ L²([0, 1], B_{[0,1]}, ν_a). The integral operator A is positive definite. It is well known (see, e.g. Vakhania (1981), p. 58) that

    G_a \overset{D}{=} \sum_{k \ge 1} \lambda_k Z_k^2,    (8)

where the Z_k are independent unit normal random variables, and the λ_k are the eigenvalues of A. To obtain these eigenvalues, assume that an eigenfunction f of A with associated positive eigenvalue λ is smooth enough so that, starting with the equation

    \int k(t, s) f(s)\,d\nu_a(s) = \lambda f(t), \quad 0 \le t \le 1,    (9)

we may differentiate twice on both sides of (9). This leads to the differential equation

    \lambda f''(t) = -(1-t)^{a-1} f(t), \quad 0 < t < 1.

From (9) we also infer the boundary conditions f(0) = 0 and f(1) = 0. Putting ϕ(t) = f(1 − t), ϕ satisfies the differential equation

    \lambda \varphi''(t) = -t^{a-1} \varphi(t), \quad 0 < t < 1,    (10)

subject to the boundary conditions

    \varphi(0) = \varphi(1) = 0.    (11)

The general solution of (10) was found by Lommel (1868); for a reference see Nielsen (1904, p. 130). In view of (11), the general solution is

    c\, t^{1/2} J_{\frac{1}{a+1}}\!\left( \frac{2}{a+1}\, \lambda^{-1/2}\, t^{(a+1)/2} \right), \quad 0 \le t \le 1,


where J_ν is the Bessel function of the first kind of order ν, and c is some constant. In what follows, 0 < γ₁ < γ₂ < ··· are the positive zeros of J_{1/(a+1)}. Putting

    \lambda_j = \left( \frac{2}{a+1} \right)^2 \frac{1}{\gamma_j^2}, \quad j = 1, 2, \ldots,

and

    f_j(t) = \frac{(a+1)^{1/2}}{J_{\frac{a+2}{a+1}}(\gamma_j)}\, J_{\frac{1}{a+1}}\!\left( \gamma_j (1-t)^{(a+1)/2} \right) (1-t)^{1/2}

(0 ≤ t ≤ 1, j = 1, 2, …), it follows from

    \int f_j(t) f_k(t)\,d\nu_a(t) = \frac{a+1}{J_{\frac{a+2}{a+1}}(\gamma_j)\, J_{\frac{a+2}{a+1}}(\gamma_k)} \int_0^1 (1-t)^a J_{\frac{1}{a+1}}\!\left( \gamma_j (1-t)^{(a+1)/2} \right) J_{\frac{1}{a+1}}\!\left( \gamma_k (1-t)^{(a+1)/2} \right) dt = \begin{cases} 0, & j \ne k, \\ 1, & j = k \end{cases}

(see Erdélyi et al. (1953, p. 70)) that {f_j, j ≥ 1} is an orthonormal set in L²([0, 1], B_{[0,1]}, ν_a). For f ∈ L²([0, 1], B_{[0,1]}, ν_a) the Fourier–Bessel series \sum_{j \ge 1} \alpha_j f_j with

    \alpha_j = \int f(t) f_j(t)\,d\nu_a(t), \quad j \ge 1,

converges in L²([0, 1], B_{[0,1]}, ν_a), i.e.

    \lim_{n \to \infty} \int \Big| \sum_{j=1}^n \alpha_j f_j(t) - f(t) \Big|^2\, d\nu_a(t) = 0.

This result can be proved in a rather elementary way by using the work of Hochstadt (1967), for example. Thus {f_j, j ≥ 1} is a complete orthonormal set in L²([0, 1], B_{[0,1]}, ν_a). Using

    -f_j(t)(1-t)^{a-1} = \lambda_j f_j''(t), \quad 0 < t < 1,

integration by parts gives

    (1-t) \int_0^t s\, f_j(s) (1-s)^{a-1}\,ds + t \int_t^1 f_j(s) (1-s)^a\,ds = \lambda_j f_j(t), \quad 0 \le t \le 1,

or, equivalently,

    \int k(t, s) f_j(s)\,d\nu_a(s) = \lambda_j f_j(t), \quad 0 \le t \le 1.

Thus the f_j, j ≥ 1, form a complete orthonormal system of eigenfunctions of A with associated eigenvalues λ_j, j ≥ 1. In view of (8) the Laplace transform of G_a is

    \psi_a(z) = \prod_{j=1}^\infty \left( 1 + \left(\frac{2}{a+1}\right)^2 \frac{2z}{\gamma_j^2} \right)^{-1/2}
              = \left\{ \Gamma\!\left(1 + \frac{1}{a+1}\right) \left( \frac{\sqrt{2z}}{a+1} \right)^{-\frac{1}{a+1}} I_{\frac{1}{a+1}}\!\left( \frac{2}{a+1}\sqrt{2z} \right) \right\}^{-1/2}, \quad z \ge 0,

where I_ν is the modified Bessel function of the first kind of order ν. For the second equality see Nielsen (1904, p. 358). This proves part (a) of the theorem. To prove part (b), which asserts an interesting connection between the cumulants


κ_ν of G_a, the latter being an integral involving the Brownian bridge, and the sums of the even powers of the reciprocal zeros γ_j of the Bessel function J_{1/(a+1)}, we note that

    \kappa_\nu = 2^{\nu-1} (\nu-1)! \sum_{j=1}^\infty \lambda_j^\nu = 2^{\nu-1} (\nu-1)! \left( \frac{2}{a+1} \right)^{2\nu} \sum_{j=1}^\infty \left( \frac{1}{\gamma_j} \right)^{2\nu}, \quad \nu \ge 1.

We can derive the first four cumulants of G_a directly by using Fubini's theorem and calculating mixed moments of a zero mean multivariate normal vector with covariance structure specified by the Brownian bridge. Alternatively, we can use that

    \sum_{j=1}^\infty \frac{1}{\gamma_j^2} = \frac{2^{-2}}{1 + \frac{1}{a+1}},

    \sum_{j=1}^\infty \frac{1}{\gamma_j^4} = \frac{2^{-4}}{\left(1 + \frac{1}{a+1}\right)^2 \left(2 + \frac{1}{a+1}\right)},

    \sum_{j=1}^\infty \frac{1}{\gamma_j^6} = \frac{2^{-6} \cdot 2}{\left(1 + \frac{1}{a+1}\right)^3 \left(2 + \frac{1}{a+1}\right) \left(3 + \frac{1}{a+1}\right)},

    \sum_{j=1}^\infty \frac{1}{\gamma_j^8} = \frac{2^{-8} \left( 5 \cdot \frac{1}{a+1} + 11 \right)}{\left(1 + \frac{1}{a+1}\right)^4 \left(2 + \frac{1}{a+1}\right)^2 \left(3 + \frac{1}{a+1}\right) \left(4 + \frac{1}{a+1}\right)}

(see Nielsen (1904, p. 360)) to obtain the first four cumulants stated in part (b) of the theorem. □



It is also possible to derive 'limit test statistics' in the cases a → −1 and a → ∞. It is immediately seen that

    G_{n,-1} = \lim_{a \downarrow -1} G_{n,a} = -2n + \frac{1}{n} \sum_{k,\ell=1}^n \exp\!\left( \min(U_k, U_\ell) \right).

Of course, one can use G_{n,−1} as a test statistic for testing the hypothesis of exponentiality. However, presently we are unable to present any asymptotic theory for G_{n,−1} as n → ∞. In the second case, a → ∞, putting U_{1:n} = min(U_1, ..., U_n) we have

    \lim_{a \to \infty} a^{-2} \exp(a U_{1:n}) \left( a^3\, \frac{G_{n,a}}{n} - 2 \right) = \frac{1}{n} - 2 U_{1:n}.

Thus one may suggest U_{1:n} as the test statistic, rejecting the hypothesis of exponentiality for small values of U_{1:n}. We remark that a test based on this statistic is easily carried out because U_{1:n} has the beta distribution B(1, n − 1) with density (n − 1)(1 − t)^{n−2}, 0 ≤ t ≤ 1. Finally, we remark that for each −1 < a < ∞ and given level α the test obtained by rejecting the hypothesis of exponentiality if G_{n,a} > g_{n,a}(α), where g_{n,a}(α) is the (1 − α)-quantile of G_{n,a} in the case where the hypothesis of exponentiality is true, is consistent against any fixed alternative distribution. A proof of this assertion is easily done by adapting the arguments used by Baringhaus and Henze (2000).

3. Empirical results

For sample sizes n = 10, 20, 30, 40, 50, 100, 200, parameter values a = −0.99, −0.9, −0.5, 0, 0.5, 1, 1.5, 2, 5, 10 and levels α = 0.05, 0.1 we obtained approximations of the critical values g_{n,a}(α) by simulation with 100 000 replications. The results are shown in Tables 1 and 2 of the Appendix. An empirical power study based on simulations with 10 000 replications was done for the sample sizes n = 20, 50, 100 and level α = 0.05. The alternative distributions were chosen from distribution families considered also by Baringhaus and Henze (1991, 2000). The distributions included are the Gamma distributions (G), the Weibull distributions (W), the Lognormal distributions (LN) with scale parameter 1 and shape parameter θ, the uniform distribution U[0, 1], the Half-Normal distribution (HN), the Half-Cauchy distribution (HC), the χ²₁-distribution, the Power distributions (PW) with density θ^{−1} x^{1/θ−1}, 0 < x < 1, the


linear increasing failure rate distributions (LIFR) with density (1 + θx) exp(−(x + θx²/2)), x > 0, and the JSHAPE distributions (JS) with density (1 + θx)^{−1/θ−1}, x > 0. The empirical power values (rounded to the nearest integer) are shown in Tables 3–5 of the Appendix. For estimated power values of various other competing procedures we refer to Baringhaus and Henze (1991) and also to Henze and Meintanis (2005), although there is merely a partial overlap with the alternative distributions considered in the latter paper. We conclude that taking a weight parameter 1 ≤ a ≤ 2 is a rather good choice. For some alternative distributions, for example the Gamma distributions or the Lognormal distributions with shape parameter θ < 1, a choice of a > 2 can be recommended. In these cases the power performance is also better than that of the top-ranked test in the comparative study of Henze and Meintanis (2005).

Acknowledgement

The authors are indebted to Nora Gürtler for assistance in computation.

Appendix. Tables of critical values and power values

Table 1
Critical values of G_{n,a} for α = 0.05

                             Parameter a
Sample size n   −0.99     −0.9    −0.5      0      0.5      1      1.5      2       5      10
 10             5.056    3.944   1.846   1.017   0.634   0.427   0.305   0.226   0.065   0.018
 20             8.069    5.983   2.336   1.147   0.678   0.449   0.313   0.230   0.067   0.021
 30             9.886    7.282   2.597   1.193   0.691   0.450   0.316   0.231   0.068   0.021
 40            11.299    8.107   2.723   1.224   0.702   0.446   0.316   0.234   0.069   0.022
 50            12.433    8.793   2.846   1.249   0.699   0.453   0.320   0.232   0.069   0.022
100            15.848   10.654   3.072   1.279   0.713   0.457   0.316   0.233   0.069   0.022
200            18.338   11.900   3.233   1.296   0.711   0.464   0.320   0.235   0.069   0.022
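The tabulated critical values can be reproduced up to Monte Carlo error; the sketch below is our own illustration of the simulation described in Section 3 (using the a ≠ 0 double-sum form of the statistic, not the authors' code), together with a check of the exact level of the a → ∞ limit test based on U_{1:n}. With n = 20 and a = 1 the simulated 95% quantile should come out near the tabulated value 0.449.

```python
import numpy as np

def g_na(x, a):
    """G_{n,a} via the double-sum representation (a != 0)."""
    u = x / x.mean()
    uk, ul = np.meshgrid(u, u)
    mn, mx = np.minimum(uk, ul), np.maximum(uk, ul)
    term = (2 / a**3
            - (1/a + 1/a**2) * mn * (np.exp(-a * uk) + np.exp(-a * ul))
            - (1/a**2 + 2/a**3) * np.exp(-a * mn)
            + (1/a + 1/a**2) * np.exp(-a * mx))
    return term.sum() / u.size

rng = np.random.default_rng(0)

# (i) Monte Carlo approximation of g_{n,a}(alpha) under the exponential null
n, a, alpha, reps = 20, 1.0, 0.05, 20_000
stats = np.array([g_na(rng.exponential(size=n), a) for _ in range(reps)])
g_crit = np.quantile(stats, 1 - alpha)      # Table 1 entry: 0.449 (100 000 reps)

# (ii) exact critical value of the a -> infinity limit test based on U_{1:n}:
# P(U_{1:n} <= c) = 1 - (1 - c)^(n-1), hence c = 1 - (1 - alpha)^(1/(n-1))
c = 1.0 - (1.0 - alpha) ** (1.0 / (n - 1))
x = rng.exponential(size=(reps, n))
level = (x.min(axis=1) / x.mean(axis=1) <= c).mean()   # empirical level near alpha
```

With 20 000 replications instead of the paper's 100 000 the quantile estimate carries somewhat more noise, but it lands well within a few percent of the tabulated value.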

Table 2
Critical values of G_{n,a} for α = 0.1

                             Parameter a
Sample size n   −0.99     −0.9    −0.5      0      0.5      1      1.5      2       5      10
 10             3.063    2.580   1.406   0.793   0.490   0.332   0.237   0.174   0.049   0.014
 20             4.395    3.526   1.705   0.875   0.515   0.340   0.238   0.176   0.050   0.016
 30             5.341    4.180   1.847   0.908   0.528   0.341   0.240   0.176   0.050   0.016
 40             6.033    4.581   1.945   0.927   0.531   0.343   0.240   0.176   0.051   0.016
 50             6.569    4.946   2.042   0.948   0.536   0.346   0.238   0.175   0.051   0.016
100             8.200    5.929   2.226   0.977   0.539   0.346   0.240   0.177   0.051   0.016
200             9.917    6.961   2.359   0.991   0.544   0.345   0.239   0.176   0.051   0.017

Table 3
Percentage of 10 000 Monte Carlo samples declared significant; level α = 0.05; sample size n = 20

                         Parameter a
Distribution   −0.99   −0.9   −0.5     0    0.5     1    1.5     2     5    10
W(0.6)            59     61     65    69    69    70    72    73    78    80
W(1.2)             1      1      4     9    12    13    14    14    12     8
W(1.4)             0      0     11    26    33    35    37    37    32    20
W(1.6)             0      0     26    50    60    63    64    65    57    39
χ²₁               38     39     44    47    49    52    55    57    66    72
PW(0.8)            1     13     87    94    94    93    92    89    69    40
PW(1.2)            0      1     33    48    48    44    40    35    17     7
PW(1.4)            0      0     16    28    27    24    21    18     9     7
PW(2.0)            1      1      4     8    10    11    12    14    25    37
PW(3.0)           13     17     28    38    46    53    59    64    79    87
LIFR(1)            0      0      6    14    18    19    19    19    13     7
LIFR(2)            0      0     10    24    29    30    30    30    20    11
LIFR(4)            0      0     17    36    42    43    43    42    30    16
LIFR(6)            0      0     23    43    50    51    52    50    37    20
LIFR(10)           0      0     30    53    60    61    61    60    47    26
HN(0, 1)           0      0      7    17    21    21    22    21    14     8
LN(0.7)            2      3     16    34    46    53    59    64    73    70
LN(0.8)            6      6     10    19    23    27    31    34    43    39
LN(1.0)           21     21     20    18    16    15    14    13    11     8
LN(1.5)           66     67     68    67    67    66    65    64    59    53
HC                72     72     73    71    69    67    66    65    59    52
JS(0.5)           50     50     51    49    47    45    44    43    39    35
JS(1.0)           81     82     84    84    83    83    83    82    80    77
U[0, 1]            0      3     58    74    74    71    67    62    38    18
G(0.4)            56     58     65    68    72    75    77    80    86    90
G(0.6)            26     28     30    30    32    33    35    37    44    50
G(0.8)            11     11     12    11    10    10    11    11    13    16
G(1.4)             1      1      4     9    13    14    15    16    14     9
G(1.6)             0      0      6    16    21    24    26    27    25    17
G(1.8)             0      0     10    24    31    34    38    39    38    26
G(2.0)             0      0     15    33    42    46    50    52    49    36
G(2.4)             0      0     28    53    64    69    72    74    72    57
G(3.0)             0      1     49    77    85    89    91    92    91    80

Table 4
Percentage of 10 000 Monte Carlo samples declared significant; level α = 0.05; sample size n = 50

                         Parameter a
Distribution   −0.99   −0.9   −0.5     0    0.5     1    1.5     2     5    10
W(0.6)            86     88     94    96    97    98    98    98    99    99
W(1.2)             0      0      8    21    28    30    31    32    29    22
W(1.4)             0      1     41    66    75    77    78    79    73    61
W(1.6)             0      5     80    95    97    98    98    98    96    90
χ²₁               58     62     75    82    87    89    90    92    95    97
PW(0.8)           50     94    100   100   100   100   100   100    99    89
PW(1.2)            1     20     92    96    94    90    84    78    40    16
PW(1.4)            0      5     72    80    75    65    54    45    15     8
PW(2.0)            0      0     19    30    30    28    29    32    50    66
PW(3.0)           12     19     59    78    87    91    94    95    99   100
LIFR(1)            0      0     20    39    45    45    43    42    29    17
LIFR(2)            0      1     41    64    69    69    67    65    49    31
LIFR(4)            0      2     64    84    87    87    86    85    71    49
LIFR(6)            0      4     75    90    93    93    92    91    81    60
LIFR(10)           0      8     84    95    97    97    96    96    89    72
HN(0, 1)           0      0     28    49    54    53    51    50    34    21
LN(0.7)            4      5     43    75    88    94    96    98   100   100
LN(0.8)           10     10     20    37    51    60    67    73    89    95
LN(1.0)           36     37     36    34    29    25    22    21    22    29
LN(1.5)           91     92     94    95    95    95    94    94    91    84
HC                95     95     96    96    95    94    94    93    88    80
JS(0.5)           78     79     82    82    81    80    79    78    70    61
JS(1.0)           98     98     99    99    99    99    99    99    99    98
U[0, 1]            9     56     99   100   100    99    98    97    79    49
G(0.4)            80     83     93    97    98    99    99    99   100   100
G(0.6)            38     40     51    58    63    67    69    72    78    81
G(0.8)            13     13     16    17    17    18    18    19    23    27
G(1.4)             1      1      7    21    28    32    33    35    35    30
G(1.6)             0      0     18    40    51    56    59    62    62    56
G(1.8)             0      0     33    61    72    77    79    81    82    77
G(2.0)             0      1     49    78    86    90    91    93    93    90
G(2.4)             0      5     79    95    98    99    99    99    99    99
G(3.0)             1     20     97   100   100   100   100   100   100   100

Table 5
Percentage of 10 000 Monte Carlo samples declared significant; level α = 0.05; sample size n = 100

                         Parameter a
Distribution   −0.99   −0.9   −0.5     0    0.5     1    1.5     2     5    10
W(0.6)            98     99    100   100   100   100   100   100   100   100
W(1.2)             0      0     23    44    51    54    56    56    52    44
W(1.4)             0      8     84    96    98    98    98    98    97    93
W(1.6)             8     53    100   100   100   100   100   100   100   100
χ²₁               78     83     95    98    99    99   100   100   100   100
PW(0.8)          100    100    100   100   100   100   100   100   100   100
PW(1.2)           51     94    100   100   100   100   100    98    72    30
PW(1.4)           14     66    100   100    99    96    90    82    28    10
PW(2.0)            0      6     76    81    75    69    66    65    77    89
PW(3.0)           23     47     95    99   100   100   100   100   100   100
LIFR(1)            0      2     57    76    78    77    76    73    56    37
LIFR(2)            0     10     85    95    95    95    95    93    82    61
LIFR(4)            2     31     98   100   100   100   100    99    96    84
LIFR(6)            6     47     99   100   100   100   100   100    99    91
LIFR(10)          14     66    100   100   100   100   100   100   100    97
HN(0, 1)           0      4     70    85    87    86    84    82    64    43
LN(0.7)            6     12     83    99   100   100   100   100   100   100
LN(0.8)           15     16     39    71    85    92    96    97   100   100
LN(1.0)           53     54     56    53    47    44    42    41    45    61
LN(1.5)           99     99    100   100   100   100   100   100    99    98
HC               100    100    100   100   100   100   100   100    99    96
JS(0.5)           94     95     97    97    97    97    96    96    92    85
JS(1.0)          100    100    100   100   100   100   100   100   100   100
U[0, 1]           94    100    100   100   100   100   100   100    98    84
G(0.4)            96     98    100   100   100   100   100   100   100   100
G(0.6)            52     58     77    86    90    92    94    95    97    97
G(0.8)            16     18     24    27    29    31    34    35    40    43
G(1.4)             1      1     22    43    54    59    62    64    66    61
G(1.6)             0      1     49    76    84    88    90    91    93    91
G(1.8)             0      5     75    93    97    98    98    99    99    99
G(2.0)             1     13     91    99    99   100   100   100   100   100
G(2.4)             6     45     99   100   100   100   100   100   100   100
G(3.0)            39     90    100   100   100   100   100   100   100   100


References

Baringhaus, L., Henze, N., 1991. A class of consistent tests for exponentiality based on the empirical Laplace transform. Ann. Inst. Statist. Math. 43, 551–564.
Baringhaus, L., Henze, N., 2000. Tests of fit for exponentiality based on a characterization via the mean residual life function. Statist. Papers 41, 225–236.
Deheuvels, P., Martynov, G., 2003. Karhunen–Loève expansions for weighted Wiener processes and Brownian bridges via Bessel functions. In: Hoffmann-Jørgensen, J., Marcus, M., Wellner, J. (Eds.), High Dimensional Probability III. Birkhäuser, Basel, pp. 57–93.
Erdélyi, A., Magnus, W., Oberhettinger, F., Tricomi, F., 1953. Higher Transcendental Functions, vol. II. McGraw-Hill, New York.
Henze, N., Meintanis, S., 2005. Recent and classical tests for exponentiality: A partial review with comparisons. Metrika 61, 29–45.
Hochstadt, H., 1967. The mean convergence of Fourier–Bessel series. SIAM Rev. 9, 211–218.
Ledoux, M., Talagrand, M., 1991. Probability in Banach Spaces. Springer, New York.
Nielsen, N., 1904. Handbuch der Theorie der Zylinderfunktionen. Teubner, Leipzig.
Vakhania, N.N., 1981. Probability Distributions on Linear Spaces. North Holland, New York.