ARTICLE IN PRESS
Applied Mathematics and Computation 135 (2003) 345–360 www.elsevier.com/locate/amc
Exact Bahadur slope for combining independent tests for normal and logistic distributions

Walid A. Abu-Dayyeh a, Marwan A. Al-Momani a, Hassen A. Muttlak b,*

a Department of Statistics, Yarmouk University, Irbid, Jordan
b Department of Mathematical Sciences, King Fahd University of Petroleum and Minerals No. 1676, Dhahran 31261, Saudi Arabia
Abstract

Combining independent tests of hypotheses is an important and popular statistical practice. Data about a phenomenon often come from different sources at different times, and we wish to combine these data in order to study the phenomenon. Several methods have been used to combine infinitely many independent tests: the Fisher, logistic, sum of P-values and inverse normal methods, for testing simple hypotheses against one-sided alternatives. These non-parametric methods depend on the P-values of the individual tests being combined, and they are compared here via the exact Bahadur slope (EBS). © 2002 Elsevier Science Inc. All rights reserved.

Keywords: Fisher method; Inverse normal method; Logistic distribution; Logistic method; Normal distribution; P-value method
1. Introduction

The problem of combining n independent tests of hypotheses has been considered by many authors. For simple null hypotheses, Birnbaum [4] showed that given any
* Corresponding author. E-mail address: [email protected] (H.A. Muttlak).
0096-3003/02/$ - see front matter © 2002 Elsevier Science Inc. All rights reserved. PII: S0096-3003(01)00336-8
non-parametric combination method that has a monotone increasing acceptance region, there exists a problem for which this method is most powerful against some alternative. Little and Folks [6] studied four methods for combining a finite number of independent tests and found that the Fisher method is better than the other three via Bahadur efficiency. Later, Little and Folks [7] studied all methods of combining a finite number of independent tests and showed that the Fisher method is optimal under some mild conditions. Abu-Dayyeh and Bataineh [2] showed that the Fisher method is strictly dominated by the sum of P-values method via the exact Bahadur slope (EBS) in the case of combining an infinite number of independent shifted exponential tests when the sample size remains finite. Abu-Dayyeh and El-Masri [3] studied the problem of combining n independent tests as n → ∞ in the case of the triangular distribution. They used six methods of combining independent tests, namely the sum of P-values, inverse normal, logistic, Fisher, minimum of P-values and maximum of P-values methods, and showed that the sum of P-values method dominates all the others.

In this paper we study four combination methods, namely the Fisher, logistic, sum of P-values and inverse normal methods, for testing a simple hypothesis against a one-sided alternative as the number of combined tests tends to infinity, in the case of the normal and logistic distributions. The methods are compared via the EBS in the case of simple random sample (SRS) data, using the P-values of the individual tests being combined.

2. Notations and some useful results

Suppose that we have n simple hypotheses

H_0^(i): θ_i = θ_{0i}  vs.  H_1^(i): θ_i > θ_{0i},  i = 1, 2, ..., n,  (1)

where θ_{0i} is known for i = 1, 2, ..., n and H_0^(i) is rejected for sufficiently large values of some continuous real-valued test statistic T^(i), i = 1, 2, ..., n. We want to combine the n hypotheses into one hypothesis as follows:

H_0: (θ_1, θ_2, ..., θ_n) = (θ_{01}, θ_{02}, ..., θ_{0n})  vs.  H_1: θ_i ≥ θ_{0i} for some i, i = 1, 2, ..., n.  (2)
Many methods have been used for combining several tests of hypotheses into one overall test. Among these methods are the non-parametric methods that combine the P-values of the different tests. The P-value of the ith hypothesis is given by

P_i = P_{H_0^(i)}(T^(i) ≥ t) = 1 − F_{H_0^(i)}(t),  (3)
where F_{H_0^(i)}(t) is the cumulative distribution function (c.d.f.) of T^(i) under H_0^(i). Note that P_i ~ U(0, 1) under H_0^(i). In this paper we take θ_{0i} = 0 for i = 1, 2, ..., n, and we assume that T^(1), T^(2), ..., T^(n) are independent. Then (2) reduces to

H_0: θ = 0  vs.  H_1: θ > 0.  (4)

Thus the P-values P_1, P_2, ..., P_n are independent identically distributed (i.i.d.) random variables (r.v.'s) with the uniform distribution U(0, 1) under H_0 given by (4).

We shall assume that the ith problem in the case of the normal distribution is based on T_1^(i), T_2^(i), ..., T_{n_i}^(i), which are independent r.v.'s. By sufficiency we may take n_i = 1 and T^(i) = X_i for i = 1, 2, ..., n. We then consider the sequence {T^(n)} of independent test statistics; that is, we take an SRS X_1, X_2, ..., X_n of size n, let n → ∞, and compare the four non-parametric (omnibus) methods via the EBS. Although X_i is not sufficient for θ_i under H_0^(i) for the other distributions, we will still take n_i = 1 and T^(i) = X_i for i = 1, 2, ..., n; our justification is that in this case we obtain new tests besides other known tests.

We now state some definitions and preliminary results that will be used in this paper.

Definition 1 (Bahadur efficiency and exact Bahadur slope (EBS)). Let X_1, X_2, ..., X_n be i.i.d. from a distribution with probability density function (p.d.f.) f(x; θ), and suppose we want to test H_0: θ = θ_0 vs. H_1: θ ∈ Θ \ {θ_0}. Let {T_n^(1)} and {T_n^(2)} be two sequences of test statistics for testing H_0, and let the significance level attained by T_n^(i) be L_n^(i) = 1 − F_i(T_n^(i)), where F_i(t_i) = P_{H_0}(T_n^(i) ≤ t_i), i = 1, 2. Then there exists a positive-valued function C_i(θ), called the exact Bahadur slope of the sequence {T_n^(i)}, such that

C_i(θ) = lim_{n→∞} (−2 ln L_n^(i))/n

with probability 1 (w.p.1) under θ, and the Bahadur efficiency of {T_n^(1)} relative to {T_n^(2)} is given by φ_12 = C_1(θ)/C_2(θ).

The following tests will be used in this paper:

φ_F = 1 if −2 Σ_{i=1}^n ln(p_i) > c, and 0 otherwise (the Fisher test);

φ_L = 1 if −Σ_{i=1}^n ln[p_i/(1 − p_i)] > c, and 0 otherwise (the logistic test);

φ_S = 1 if −Σ_{i=1}^n p_i > c, and 0 otherwise (the sum of P-values test);
φ_N = 1 if −Σ_{i=1}^n Φ^{−1}(p_i) > c, and 0 otherwise (the inverse normal test).
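As a concrete illustration (not part of the original paper), the four combined statistics can be computed from the individual P-values as follows; this is a minimal sketch in Python, and the function names are our own:

```python
import math
from statistics import NormalDist  # provides the standard normal quantile Phi^{-1}

def fisher(pvals):
    # Fisher test: reject H0 for large -2 * sum(ln p_i)
    return -2.0 * sum(math.log(p) for p in pvals)

def logistic(pvals):
    # logistic test: reject H0 for large -sum(ln(p_i / (1 - p_i)))
    return -sum(math.log(p / (1.0 - p)) for p in pvals)

def sum_p(pvals):
    # sum of P-values test: reject H0 for small sum(p_i), i.e. large -sum(p_i)
    return -sum(pvals)

def inverse_normal(pvals):
    # inverse normal test: reject H0 for large -sum(Phi^{-1}(p_i))
    return -sum(NormalDist().inv_cdf(p) for p in pvals)
```

Small P-values (strong evidence against H_0 in the individual tests) make all four statistics large, so each combined test rejects when its statistic exceeds the critical value c.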
Theorem 1 (Large deviation theorem). Let X_1, X_2, ..., X_n be i.i.d. with distribution F and put S_n = Σ_{i=1}^n X_i. Assume that the moment generating function (m.g.f.) M(z) = E_F(e^{zX}) exists in a neighborhood of zero, and put m(t) = inf_z e^{−tz} M(z). Then (see [8])

lim_{n→∞} (−2/n) ln P_F(S_n ≥ nt) = −2 ln m(t).

Theorem 2 (Bahadur theorem). Let {T_n} be a sequence of test statistics which satisfies the following:

1. Under H_1,

T_n/√n → b(θ) w.p.1, where b(θ) ∈ R.

2. There exists an open interval I containing {b(θ): θ ∈ Θ}, and a function g continuous on I, such that

lim_{n→∞} (−2/n) ln[1 − F_n(t√n)] = g(t).

Then the EBS is given by C(θ) = g(b(θ)); see [8].

The following theorems are due to [1].

Theorem 3. Let X_1, X_2, ..., X_n be i.i.d. with p.d.f. f(x; θ), and suppose we want to test H_0: θ = 0 vs. H_1: θ > 0. For j = 1, 2, let T_{n,j} = Σ_{i=1}^n f_j(x_i)/√n be a sequence of statistics such that H_0 is rejected for large values of T_{n,j}, and let φ_j be the test based on T_{n,j}. Assume that E_θ(f_j(X)) > 0 for all θ ∈ Θ, E_0(f_j(X)) = 0, Var(f_j(X)) > 0, the m.g.f. of f_j(X) exists in a neighborhood of zero, and b_j(θ) = E_θ(f_j(X)) is differentiable, for j = 1, 2. Let C_j(θ) be the EBS of the test φ_j at θ. Then:

1. If the derivative b_j'(0) is finite for j = 1, 2, then

lim_{θ→0} C_1(θ)/C_2(θ) = [Var_{θ=0}(f_2(X))/Var_{θ=0}(f_1(X))] · [b_1'(0)/b_2'(0)]².

2. If the derivative b_j'(0) is infinite for some j = 1, 2, then

lim_{θ→0} C_1(θ)/C_2(θ) = [Var_{θ=0}(f_2(X))/Var_{θ=0}(f_1(X))] · lim_{θ→0} [b_1'(θ)/b_2'(θ)]².
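As an illustration of Theorem 1 above (ours, not from the paper): for i.i.d. N(0, 1) summands, M(z) = e^{z²/2}, so m(t) = inf_z e^{−tz} e^{z²/2} = e^{−t²/2} and −2 ln m(t) = t². Since S_n/√n ~ N(0, 1), the probability P(S_n ≥ nt) = 1 − Φ(t√n) is available exactly and the convergence can be checked directly:

```python
import math

def upper_tail(x):
    # 1 - Phi(x), via erfc for accuracy far in the tail
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ld_rate(t, n):
    # -(2/n) ln P(S_n >= n t) for S_n a sum of n i.i.d. N(0,1) variables
    return -2.0 / n * math.log(upper_tail(t * math.sqrt(n)))

# the rate tends to -2 ln m(t) = t^2 as n grows: for t = 1,
# ld_rate(1, 100) ~ 1.065 and ld_rate(1, 400) ~ 1.020
```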
Theorem 4. Let T_n^(1) and T_n^(2) be two test statistics for testing H_0: θ = 0 vs. H_1: θ > 0 with distribution functions F_0^(1) and F_0^(2) under H_0, respectively, and suppose that T_n^(1) is at least as powerful as T_n^(2) at θ for any α. If φ_j is the test based on T_n^(j), j = 1, 2, then

C_{φ_1}(θ) ≥ C_{φ_2}(θ).

Corollary 4.1. If T_n is the uniformly most powerful test for all α, then it is the best via the EBS.

Theorem 5.

2t ≤ m_s(t) ≤ et  for all t with 0 ≤ t ≤ 0.5,

where

m_s(t) = inf_z e^{−tz} (e^z − 1)/z.

Theorem 6.

1. m_L(t) ≥ 2t e^{−t} for all t ≥ 0;
2. m_L(t) ≤ t e^{1−t} for all t ≥ 0.852;
3. m_L(t) ≤ [t³/(1 + t²)] e^{1−t} for all t ≥ 4,

where

m_L(t) = inf_{z∈(0,1)} {e^{−tz} πz csc(πz)}

and csc is an abbreviation for the cosecant function.

Theorem 7. For x > 0,

φ(x)(1/x − 1/x³) ≤ 1 − Φ(x) ≤ φ(x)/x.

Theorem 8. For x > 0,

1 − Φ(x) > φ(x)/(x + √(2/π)).

The following lemma is from [3].

Lemma 1.

1. m_L(t) ≥ e^{−t} for t ≥ 0.
2. m_L(t) ≤ e^{−t²/(t+1)} · [πt/(t + 1)]/sin(πt/(t + 1)) for t > 0.
3. m_S(t) = inf_{z>0} e^{−zt}(1 − e^{−z})/z ≤ inf_{z>0} e^{−zt}/z = −et for t < 0.
4. m_S(t) ≥ −2t if −1/2 ≤ t ≤ 0.
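The bounds of Theorems 7 and 8 above are easy to confirm numerically; the following check (our own, not part of the paper) verifies them on a grid of points:

```python
import math

def phi(x):
    # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def upper_tail(x):
    # 1 - Phi(x), via erfc for numerical stability in the tail
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for x in [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]:
    t = upper_tail(x)
    # Theorem 7: phi(x)(1/x - 1/x^3) <= 1 - Phi(x) <= phi(x)/x
    assert phi(x) * (1.0 / x - 1.0 / x ** 3) <= t <= phi(x) / x
    # Theorem 8: 1 - Phi(x) > phi(x) / (x + sqrt(2/pi))
    assert t > phi(x) / (x + math.sqrt(2.0 / math.pi))
```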
3. Exact Bahadur slope for SRS from normal distribution

In this section we study testing problem (4) and compare the four methods, viz. the Fisher, logistic, sum of P-values and inverse normal methods, via the EBS. Let X_1, X_2, ..., X_n be i.i.d. N(θ, 1), and suppose we want to test (4). Let T_n be a continuous real-valued test statistic, n = 1, 2, 3, ..., such that H_0 is rejected for sufficiently large values of T_n. The P-value in this case is given by

P_n(T_n) = 1 − F_{H_0}(T_n) = 1 − Φ(T_n).

We take the special case T_i = X_i for i = 1, 2, ..., n, so the P-value becomes

P_n(X_n) = 1 − F_{H_0}(X_n) = 1 − Φ(X_n).

3.1. The exact Bahadur slope using the four combination methods

The EBS's for the tests given in Section 2 are as follows:

A1 (Fisher method). C_F(θ) = b_F(θ) − 2 ln b_F(θ) + 2 ln 2 − 2, where

b_F(θ) = ∫_{−∞}^{∞} −2 ln(1 − Φ(x)) · e^{−(x−θ)²/2}/√(2π) dx.

A2 (Logistic method). C_L(θ) = −2 ln(m_L(b_L(θ))), where

m_L(t) = inf_{0<z<1} e^{−zt} πz csc(πz)  and  b_L(θ) = ∫_{−∞}^{∞} ln[Φ(x)/(1 − Φ(x))] · e^{−(x−θ)²/2}/√(2π) dx.

A3 (Sum of P-values method). C_S(θ) = −2 ln(m_S(b_S(θ))), where

m_S(t) = inf_{z>0} e^{−zt}(1 − e^{−z})/z  and  b_S(θ) = −∫_{−∞}^{∞} (1 − Φ(x)) · e^{−(x−θ)²/2}/√(2π) dx.

A4 (Inverse normal method). C_N(θ) = θ².

Only A1 will be proved here, because the proofs for the other cases are similar.
Proof of A1. Let

T_F = (1/√n) Σ_{i=1}^n −2 ln(1 − Φ(x_i)).

By the strong law of large numbers (SLLN),

T_F/√n → E_θ(−2 ln(1 − Φ(X))) = b_F(θ) w.p.1,

where

b_F(θ) = ∫_{−∞}^{∞} −2 ln(1 − Φ(x)) · e^{−(x−θ)²/2}/√(2π) dx.

Under H_0, −2 ln(1 − Φ(X)) ~ χ²_(2), so

m_F(t) = inf_z e^{−zt} · 1/(1 − 2z) = (t/2) e^{1−t/2}.

Then by Theorem 2,

C_F(θ) = −2 ln(m_F(b_F(θ))) = −2 ln[(b_F(θ)/2) e^{1−b_F(θ)/2}] = b_F(θ) − 2 ln(b_F(θ)) + 2 ln 2 − 2.
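Result A1 can be verified numerically (our own check, not part of the paper): computing b_F(θ) by quadrature gives b_F(0) = 2, hence C_F(0) = 0, and reproduces the Fisher column of Table 1 below, e.g. C_F(1) ≈ 0.857929.

```python
import math

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def g_F(x):
    # g_F(x) = -2 ln(1 - Phi(x)), with the tail computed via erfc
    return -2.0 * math.log(0.5 * math.erfc(x / math.sqrt(2.0)))

def b_F(theta, lo=-12.0, hi=14.0, n=8000):
    # b_F(theta) = E_theta[g_F(X)], X ~ N(theta, 1), by Simpson's rule
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * g_F(x) * phi(x - theta)
    return s * h / 3.0

def C_F(theta):
    # A1: C_F = b_F - 2 ln b_F + 2 ln 2 - 2
    b = b_F(theta)
    return b - 2.0 * math.log(b) + 2.0 * math.log(2.0) - 2.0
```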
3.2. The limiting ratio of the EBS for the different tests with respect to the Fisher method

The limits of the ratios relative to C_F(θ) as θ → 0 are as follows:

B1. lim_{θ→0} C_L(θ)/C_F(θ) = 1.215854.

B2. lim_{θ→0} C_S(θ)/C_F(θ) = 1.170599.

B3. lim_{θ→0} C_N(θ)/C_F(θ) = 1.225849.

It is clear that locally the inverse normal method is the best one, since it has the highest limit in a neighborhood of zero, followed by the logistic method, the sum of P-values method and the Fisher method, respectively.

Proof of B1. b_F(θ) = E_θ(g_F(X)), where g_F(x) = −2 ln(1 − Φ(x)), and

b_F'(θ) = E_θ((X − θ) g_F(X)).

Also, b_L(θ) = E_θ(g_L(X)), where g_L(x) = −ln[(1 − Φ(x))/Φ(x)], and

b_L'(θ) = E_θ((X − θ) g_L(X)).
Now, the numerical values of the derivatives at θ = 0 are b_F'(0) = b_L'(0) = 1.80639, so b_F'(0) < ∞ and b_L'(0) < ∞. Under H_0, g_F(X) ~ χ²_(2) and g_L(X) ~ logistic(μ = 0, β = 1), so Var_{θ=0}(g_F(X)) = 4 and Var_{θ=0}(g_L(X)) = π²/3. Applying Theorem 3, we get lim_{θ→0} C_L(θ)/C_F(θ) = 1.215854. The other two parts are proved similarly.

3.3. The limiting ratio of the EBS for the different tests with respect to the inverse normal method

D1. lim_{θ→∞} C_F(θ)/C_N(θ) = 1.

D2. lim_{θ→∞} C_L(θ)/C_N(θ) = 1.

D3. lim_{θ→∞} C_S(θ)/C_N(θ) = 1/2.
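The numerical constants used in the proof of B1 above can be reproduced by quadrature (our own check, not part of the paper); by the symmetry of φ the two derivatives coincide, b_F'(0) = b_L'(0) ≈ 1.80639, and the limit in B1 is Var(g_F)/Var(g_L) = 4/(π²/3) = 12/π² ≈ 1.215854.

```python
import math

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def tail(x):
    # 1 - Phi(x), via erfc
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def expect0(f, lo=-10.0, hi=10.0, n=4000):
    # E_0[f(X)] for X ~ N(0, 1), by Simpson's rule
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * f(x) * phi(x)
    return s * h / 3.0

# b_F'(0) = E_0[X g_F(X)] and b_L'(0) = E_0[X g_L(X)], with
# g_F(x) = -2 ln(1 - Phi(x)) and g_L(x) = ln(Phi(x)/(1 - Phi(x)))
bF_prime = expect0(lambda x: x * (-2.0 * math.log(tail(x))))
bL_prime = expect0(lambda x: x * (math.log(tail(-x)) - math.log(tail(x))))
```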
Proof of D1.

lim_{θ→∞} C_F(θ)/C_N(θ) = lim_{θ→∞} [b_F(θ) − 2 ln(b_F(θ)) + 2 ln 2 − 2]/θ².

It is sufficient to find lim_{θ→∞} b_F(θ)/θ². Now

lim_{θ→∞} b_F(θ)/θ² = lim_{θ→∞} ∫_{−∞}^{∞} [−2 ln(1 − Φ(x + θ))/θ²] φ(x) dx = ∫_{−∞}^{∞} lim_{θ→∞} [−2 ln(1 − Φ(x + θ))/θ²] φ(x) dx,

by Theorem 4.1 in [5, p. 29]. Using L'Hopital's rule twice, we get

lim_{θ→∞} −2 ln(1 − Φ(x + θ))/θ² = lim_{θ→∞} φ(x + θ)/[θ(1 − Φ(x + θ))] = lim_{θ→∞} (x + θ)φ(x + θ)/[θφ(x + θ) − (1 − Φ(x + θ))] = lim_{θ→∞} (1 + x/θ)/[1 − (1 − Φ(x + θ))/(θφ(x + θ))] = 1.

So

lim_{θ→∞} b_F(θ)/θ² = ∫_{−∞}^{∞} 1 · φ(x) dx = 1,

which completes the proof.
Proof of D2. By Theorem 6 part (1), we have

C_L(θ) ≤ 2b_L(θ) − 2 ln(b_L(θ)) − 2 ln 2;

then

lim_{θ→∞} C_L(θ)/C_F(θ) ≤ lim_{θ→∞} [2b_L(θ) − 2 ln(b_L(θ)) − 2 ln 2]/[b_F(θ) − 2 ln(b_F(θ)) + 2 ln 2 − 2] = lim_{θ→∞} 2b_L(θ)/b_F(θ).

Now

lim_{θ→∞} 2b_L(θ)/b_F(θ) = lim_{θ→∞} 2E_θ(ln[Φ(X)/(1 − Φ(X))])/E_θ(−2 ln(1 − Φ(X))) = lim_{θ→∞} [−2E_θ(ln(1 − Φ(X))) + 2E_θ(ln Φ(X))]/[−2E_θ(ln(1 − Φ(X)))] = 1 − lim_{θ→∞} E_θ(−ln Φ(X))/E_θ(−ln(1 − Φ(X))) = 1 − 0 = 1.

Therefore lim_{θ→∞} C_L(θ)/C_F(θ) ≤ 1. Similarly, from Theorem 6 part (2), we have

C_L(θ) ≥ 2b_L(θ) − 2 ln(b_L(θ)) − 2;

then

lim_{θ→∞} C_L(θ)/C_F(θ) ≥ lim_{θ→∞} [2b_L(θ) − 2 ln(b_L(θ)) − 2]/[b_F(θ) − 2 ln(b_F(θ)) + 2 ln 2 − 2] = lim_{θ→∞} 2b_L(θ)/b_F(θ) = 1.

By the pinching theorem, we have

lim_{θ→∞} C_L(θ)/C_F(θ) = 1.  (5)

From D1 and (5) we conclude that lim_{θ→∞} C_L(θ)/C_N(θ) = 1, which completes the proof.
Proof of D3. By Lemma 1 part (3), we get

C_S(θ) ≥ −2 − 2 ln(−b_S(θ)),

so

lim_{θ→∞} C_S(θ)/C_N(θ) ≥ lim_{θ→∞} [−2 − 2 ln(−b_S(θ))]/θ²,

and by using L'Hopital's rule, we get

lim_{θ→∞} C_S(θ)/C_N(θ) ≥ lim_{θ→∞} −b_S'(θ)/[θ b_S(θ)].
Now,

b_S(θ) = −∫_{−∞}^{∞} (1 − Φ(x)) · e^{−(x−θ)²/2}/√(2π) dx.

Putting t = x − θ, we get

b_S(θ) = −∫_{−∞}^{∞} (1 − Φ(t + θ)) φ(t) dt;

then

b_S'(θ) = ∫_{−∞}^{∞} φ(t + θ) φ(t) dt = e^{−θ²/4}/(2√π).

By L'Hopital's rule,

lim_{θ→∞} −b_S'(θ)/[θ b_S(θ)] = lim_{θ→∞} −b_S''(θ)/[b_S(θ) + θ b_S'(θ)];

but b_S''(θ) = −(θ/2) b_S'(θ), so

lim_{θ→∞} −b_S''(θ)/[b_S(θ) + θ b_S'(θ)] = lim_{θ→∞} (θ/2) b_S'(θ)/[b_S(θ) + θ b_S'(θ)] = lim_{θ→∞} (1/2)/[b_S(θ)/(θ b_S'(θ)) + 1] = 1/2,

since b_S(θ)/(θ b_S'(θ)) → 0 as θ → ∞. Then

lim_{θ→∞} C_S(θ)/C_N(θ) ≥ 1/2.

Also, by Lemma 1 part (4), we have

C_S(θ) ≤ −2 ln 2 − 2 ln(−b_S(θ)),

so

lim_{θ→∞} C_S(θ)/C_N(θ) ≤ lim_{θ→∞} [−2 ln 2 − 2 ln(−b_S(θ))]/θ² = lim_{θ→∞} −b_S'(θ)/[θ b_S(θ)],

which is equal to the previous limit; that is,

lim_{θ→∞} C_S(θ)/C_N(θ) ≤ 1/2.

Hence, by the pinching theorem, we have

lim_{θ→∞} C_S(θ)/C_N(θ) = 1/2,

and this completes the proof.
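The identity used above for b_S'(θ) is just the N(0, 2) density evaluated at θ, ∫φ(t + θ)φ(t) dt = e^{−θ²/4}/(2√π), and it is easy to confirm numerically (our own check, not part of the paper):

```python
import math

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def conv(theta, lo=-12.0, hi=12.0, n=4000):
    # integral of phi(t + theta) * phi(t) over t, by Simpson's rule
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        t = lo + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * phi(t + theta) * phi(t)
    return s * h / 3.0

def closed_form(theta):
    # e^{-theta^2/4} / (2 sqrt(pi)), the N(0, 2) density at theta
    return math.exp(-theta * theta / 4.0) / (2.0 * math.sqrt(math.pi))
```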
Table 1
EBS for the normal distribution

θ      C_F(θ)        C_L(θ)        C_S(θ)        C_N(θ)
0.01   0.00008162    0.00009918    0.00009549    0.00010000
0.05   0.00204523    0.00247947    0.00238690    0.00250000
0.10   0.00820401    0.00991602    0.00954251    0.01000000
0.25   0.05169940    0.06189440    0.05941920    0.06250000
0.50   0.20951100    0.24649200    0.23458400    0.25000000
1.00   0.85792900    0.97206700    0.89315700    1.00000000
1.50   1.96950000    2.15385000    1.87205000    2.25000000
2.00   3.56147000    3.78499000    3.08551000    4.00000000
2.50   5.64526000    5.87618000    4.51160000    6.25000000
3.00   8.22802000    8.44493000    6.15528000    9.00000000
3.50   11.3139000    11.5068000    8.02202000    12.2500000
4.00   14.9053000    15.0720000    10.1162000    16.0000000
4.50   19.0032000    19.1459000    12.4412000    20.2500000
5.00   23.6081000    23.7301000    14.9999000    25.0000000
5.50   28.7199000    28.8246000    17.7946000    30.2500000
6.00   34.3385000    34.4295000    20.8267000    36.0000000
3.4. Comparison of the EBS for the four methods

From the results of Section 3.2 it is clear that locally the best method, i.e. the one with the highest EBS, is the inverse normal method, since it has the highest limit as θ approaches zero. From the results of Section 3.3, the worst method for large values of θ is the sum of P-values method, while the other three methods are equivalent in the limit, since they have the same limit as θ approaches infinity. For the other values of θ, the EBS's are compared numerically; these results are summarized in Table 1.
4. Exact Bahadur slope for SRS from logistic distribution

Let X_1, X_2, ..., X_n be i.i.d. logistic(θ, 1), and suppose we want to test (4). The P-value in this case is given by

P_n(X_n) = 1 − F_{H_0}(X_n) = 1 − F_0(X_n),

where F_0(x) = 1/(1 + e^{−x}).

4.1. The exact Bahadur slope using the four combination methods

The EBS's for the same tests under study are given by the following results:

A1. C_F(θ) = 2 ln(1 − e^{−θ}) + 2θ/(1 − e^{−θ}) − 2 ln θ − 2.
A2. C_L(θ) = −2 ln(m_L(b_L(θ))), where

m_L(t) = inf_{0<z<1} e^{−zt} πz/sin(πz)  and  b_L(θ) = θ.

A3. C_S(θ) = −2 ln(m_S(b_S(θ))), where

m_S(t) = inf_{z>0} e^{−zt}(1 − e^{−z})/z  and  b_S(θ) = [e^θ(1 − θ) − 1]/(e^θ − 1)².

A4. C_N(θ) = (b_N(θ))², where

b_N(θ) = ∫_{−∞}^{∞} Φ^{−1}(1/(1 + e^{−x})) · e^{−(x−θ)}/(1 + e^{−(x−θ)})² dx = e^θ ∫_0^1 Φ^{−1}(y)/[e^θ(1 − y) + y]² dy.

All the statements can be proved in the same way; therefore, we will prove only A3.

Proof of A3. Let

T_S = −(1/√n) Σ_{i=1}^n p_i = −(1/√n) Σ_{i=1}^n 1/(1 + e^{x_i}).

By the SLLN and Theorem 2 part (1), we have

T_S/√n → E_θ(−1/(1 + e^X)) = b_S(θ) w.p.1,

where

b_S(θ) = −∫_{−∞}^{∞} [1/(1 + e^x)] · e^{−(x−θ)}/(1 + e^{−(x−θ)})² dx.

Putting y = e^x, we get

b_S(θ) = −e^θ ∫_0^∞ dy/[(y + 1)(y + e^θ)²].

Then partial fraction integration gives

b_S(θ) = [e^θ(1 − θ) − 1]/(e^θ − 1)².

Now, by Theorem 1, m_S(t) = inf_{z>0} e^{−zt} M_S(z), where M_S(z) = E(e^{zY}) and Y = −P = −1/(1 + e^X). Under H_0, Y ~ U(−1, 0), so M_S(z) = (1 − e^{−z})/z, and by Theorem 2 part (2) the proof is complete; that is, C_S(θ) = −2 ln(m_S(b_S(θ))).
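These closed forms can be sanity-checked numerically (our own sketch, not part of the paper): b_S(θ) from the formula above agrees with direct quadrature of −E_θ[1/(1 + e^X)], and the A1 expression reproduces the Fisher column of Table 2 below, e.g. C_F(1) ≈ 0.246603.

```python
import math

def b_S_closed(theta):
    # b_S(theta) = (e^theta (1 - theta) - 1) / (e^theta - 1)^2
    a = math.exp(theta)
    return (a * (1.0 - theta) - 1.0) / (a - 1.0) ** 2

def b_S_numeric(theta, lo=-30.0, hi=30.0, n=6000):
    # -E_theta[1/(1 + e^X)] for X ~ logistic(theta, 1), by Simpson's rule
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        dens = math.exp(-(x - theta)) / (1.0 + math.exp(-(x - theta))) ** 2
        s += w * (-1.0 / (1.0 + math.exp(x))) * dens
    return s * h / 3.0

def C_F_logistic(theta):
    # A1: C_F = 2 ln(1 - e^{-theta}) + 2 theta/(1 - e^{-theta}) - 2 ln(theta) - 2
    return (2.0 * math.log(1.0 - math.exp(-theta))
            + 2.0 * theta / (1.0 - math.exp(-theta))
            - 2.0 * math.log(theta) - 2.0)
```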
4.2. The limiting ratio of the EBS for the different tests with respect to the Fisher method as θ → 0

B1. lim_{θ→0} C_L(θ)/C_F(θ) = 1.215854.

B2. lim_{θ→0} C_S(θ)/C_F(θ) = 1.333333.

B3. lim_{θ→0} C_N(θ)/C_F(θ) = 1.273239.
Locally, the sum of P-values method is the best one, since it has the highest limit in a neighborhood of zero, followed by the inverse normal, logistic and Fisher methods, respectively. We will prove only B2, since all the statements can be proved in the same way.

Proof of B2. b_F(θ) = E_θ(g_F(X)), where

g_F(x) = −2 ln[e^{−x}/(1 + e^{−x})],

so

b_F(θ) = E_θ(2X) + E_θ(2 ln(1 + e^{−X})) = 2θ + I,

where

I = E_θ(2 ln(1 + e^{−X})) = 2 ∫_{−∞}^{∞} ln(1 + e^{−x}) · e^{−(x−θ)}/(1 + e^{−(x−θ)})² dx.

Putting y = e^{−x} and integrating by parts, we get

I = 2e^{−θ} ∫_0^∞ dy/[(1 + y)(e^{−θ} + y)];

then partial fraction integration gives

I = 2θ/(e^θ − 1).

So b_F(θ) = 2θ + 2θ/(e^θ − 1) = 2θ/(1 − e^{−θ}). Therefore

b_F'(θ) = [2 − 2e^{−θ} − 2θe^{−θ}]/(1 − e^{−θ})²,

and by using L'Hopital's rule twice, we get

lim_{θ→0} b_F'(θ) = 1 < ∞.
From the previous results, we have

b_S(θ) = [e^θ(1 − θ) − 1]/(e^θ − 1)².

By using L'Hopital's rule three times, we get

lim_{θ→0} b_S'(θ) = 1/6 < ∞.

Under H_0, g_F(X) ~ χ²_(2) and g_S(X) = −P ~ U(−1, 0), so

Var_{θ=0}(g_F(X)) = 4  and  Var_{θ=0}(g_S(X)) = 1/12.

Now apply Theorem 3 to get

lim_{θ→0} C_S(θ)/C_F(θ) = [4/(1/12)] · [(1/6)/1]² = 4/3 = 1.333333.
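Result A3 together with the closed form for b_S(θ) reproduces the sum-of-P-values column of Table 2 below (our own numerical sketch, not part of the paper; the crude grid minimizer is our own device): for instance C_S(1) ≈ 0.322667, and C_S(0.01) ≈ 0.0000333 ≈ θ²/3, consistent with the local ratio 4/3 against C_F(θ) ≈ θ²/4.

```python
import math

def b_S(theta):
    a = math.exp(theta)
    return (a * (1.0 - theta) - 1.0) / (a - 1.0) ** 2

def m_S(t, zmax=50.0, n=200000):
    # m_S(t) = inf_{z>0} e^{-zt} (1 - e^{-z}) / z, by a fine grid search;
    # the value tends to 1 as z -> 0+, which serves as the starting bound
    best = 1.0
    for i in range(1, n + 1):
        z = zmax * i / n
        best = min(best, math.exp(-z * t) * (1.0 - math.exp(-z)) / z)
    return best

def C_S(theta):
    # A3: C_S(theta) = -2 ln(m_S(b_S(theta)))
    return -2.0 * math.log(m_S(b_S(theta)))
```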
4.3. The limiting ratio of the EBS for the different tests with respect to the Fisher method as θ → ∞

D1. lim_{θ→∞} C_L(θ)/C_F(θ) = 1.

D2. lim_{θ→∞} C_S(θ)/C_F(θ) = 1.
Proof of D1. By Lemma 1 part (1), C_L(θ) = −2 ln(m_L(b_L(θ))) = −2 ln(m_L(θ)) ≤ 2θ. So

lim_{θ→∞} C_L(θ)/C_F(θ) ≤ lim_{θ→∞} 2θ/[2 ln(1 − e^{−θ}) + 2θ/(1 − e^{−θ}) − 2 ln θ − 2].

By using L'Hopital's rule, we get

lim_{θ→∞} C_L(θ)/C_F(θ) ≤ 1.

Also, by Lemma 1 part (2), we have

C_L(θ) ≥ 2θ²/(θ + 1) − 2 ln[πθ/(θ + 1)] + 2 ln sin[πθ/(θ + 1)].

So

lim_{θ→∞} C_L(θ)/C_F(θ) ≥ lim_{θ→∞} {2θ²/(θ + 1) − 2 ln[πθ/(θ + 1)] + 2 ln sin[πθ/(θ + 1)]}/[2 ln(1 − e^{−θ}) + 2θ/(1 − e^{−θ}) − 2 ln θ − 2].

Applying L'Hopital's rule, we get

lim_{θ→∞} C_L(θ)/C_F(θ) ≥ 1.

By the pinching theorem,

lim_{θ→∞} C_L(θ)/C_F(θ) = 1.

Similarly we can prove D2; unfortunately, however, we could not find lim_{θ→∞} C_N(θ)/C_F(θ).
4.4. Comparison of the EBS for the four methods

From the results of Section 4.2 it is clear that locally the best method, i.e. the one with the highest EBS, is the sum of P-values method, since it has the highest limit as θ approaches zero, followed by the inverse normal, logistic and Fisher methods, respectively. From the results of Section 4.3, for large values of θ the sum of P-values, logistic and Fisher methods are equivalent in the limit, since they have the same limit as θ approaches infinity. For the other values of θ, the EBS's are compared numerically; these results are summarized in Table 2.

Table 2
EBS for the logistic distribution

θ      C_F(θ)        C_L(θ)        C_S(θ)        C_N(θ)
0.01   0.00002500    0.00003040    0.00003333    0.00003183
0.05   0.00062498    0.00075985    0.00083326    0.00079574
0.10   0.00249965    0.00303871    0.00333222    0.00318254
0.25   0.01561150    0.01896180    0.02079000    0.01987250
0.50   0.06228420    0.07542240    0.08264600    0.07922900
1.00   0.24660300    0.29527000    0.32266700    0.31283900
1.50   0.54575600    0.64850000    0.69863500    0.68933600
2.00   0.94894900    1.09683000    1.18168000    1.19177000
2.50   1.44325000    1.63628000    1.74306000    1.80030000
3.00   2.01501000    2.24327000    2.36054000    2.49469000
3.50   2.65111000    2.90355000    3.02076000    3.25617000
4.00   3.33970000    3.60616000    3.71666000    4.06861000
4.50   4.07061000    4.34279000    4.44345000    4.91889000
5.00   4.83544000    5.10710000    5.19700000    5.79685000
5.50   5.62745000    5.89424000    5.97365000    6.69488000
6.00   6.44134000    6.70045000    6.77021000    7.60745000
5. Numerical values of the EBS for the two distributions using SRS

Tables 1 and 2 give the EBS using SRS data for the two distributions under study. We observe that under the normal distribution the inverse normal method is in general the best method, since it has the highest values of the EBS for all values of θ, followed by the logistic method, the Fisher method and the sum of P-values method, respectively. As for the logistic distribution, when θ is small the best method is the sum of P-values method, followed by the inverse normal, logistic and Fisher methods, respectively; when θ becomes large, the inverse normal method becomes better than the sum of P-values method, followed by the logistic and Fisher methods, respectively.
References

[1] W.A. Abu-Dayyeh, Bahadur exact slope, Pitman efficiency and local power for combining independent tests, Ph.D. Thesis, University of Illinois at Urbana-Champaign, 1989.
[2] W.A. Abu-Dayyeh, A. Bataineh, Comparing the exact Bahadur slopes of the Fisher and sum of P-values methods in case of shifted exponential distribution, Mu'tah J. Res. Stud. 8 (1) (1992) 119–130.
[3] W.A. Abu-Dayyeh, A. El-Masri, Combining independent tests of triangular distribution, Statist. Probab. Lett. 21 (1994) 195–202.
[4] A. Birnbaum, Combining independent tests of significance, J. Amer. Statist. Assoc. 49 (1955) 559–574.
[5] E.L. Lehmann, Testing Statistical Hypotheses, second ed., Wiley, New York, 1986.
[6] R.C. Little, L.J. Folks, Asymptotic optimality of Fisher's method of combining independent tests, J. Amer. Statist. Assoc. 66 (1971) 802–806.
[7] R.C. Little, L.J. Folks, Asymptotic optimality of Fisher's method of combining independent tests II, J. Amer. Statist. Assoc. 68 (1973) 193–194.
[8] R.J. Serfling, Approximation Theorems of Mathematical Statistics, Wiley, New York, 1980.