Statistics & Probability Letters 53 (2001) 143 – 152
Blockwise empirical Euclidean likelihood for weakly dependent processes

Lu Lin, Runchu Zhang*

Department of Statistics, School of Mathematical Sciences, Nankai University, Tianjin 300071, People's Republic of China

Received April 2000; received in revised form March 2001
Abstract

This paper introduces a method of blockwise empirical Euclidean likelihood for weakly dependent processes. The strong consistency and asymptotic normality of the blockwise empirical Euclidean likelihood estimator are proved, and the blockwise empirical Euclidean likelihood ratio statistic is shown to be asymptotically a chi-square statistic. These results show that the blockwise empirical Euclidean likelihood estimator is asymptotically more efficient than the original empirical (Euclidean) likelihood estimator and that the method is useful for weakly dependent processes. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Blockwise; Dependent data; Empirical Euclidean likelihood
1. Introduction

We first outline the empirical likelihood as discussed by Owen (1988, 1990) and Qin and Lawless (1994). Let $y_1,\ldots,y_N$ be i.i.d. observations from an unknown $d$-variate distribution $f(y,\theta)$, where $\theta=(\theta_1,\ldots,\theta_p)'$ is an unknown $p$-dimensional parameter vector. We assume that the information about $\theta$ and $f(y,\theta)$ is available in the form of an unbiased estimating function $u(y,\theta)=(u_1(y,\theta),\ldots,u_r(y,\theta))'$, i.e. $E(u(y,\theta))=0$, where $u(y,\theta)$ is known. The empirical log-likelihood ratio is defined as
$$l(\theta)=\sup\Bigl\{\sum_{i=1}^N \log(Np_i)\ \Bigm|\ \sum_{i=1}^N p_i=1,\ p_i\ge 0,\ i=1,\ldots,N,\ \sum_{i=1}^N p_i u(y_i,\theta)=0\Bigr\}.$$
This paper is supported by NNSF project 19771049 of China.
* Corresponding author. E-mail address: [email protected] (R. Zhang).
Using Lagrange multipliers, the empirical log-likelihood ratio can be expressed as
$$l(\theta)=-\sum_{i=1}^N \log\bigl(1+t'(\theta)u(y_i,\theta)\bigr),\qquad(1)$$
where the Lagrange multiplier $t(\theta)=(t_1(\theta),\ldots,t_r(\theta))'$ satisfies
$$\frac1N\sum_{i=1}^N \frac{u(y_i,\theta)}{1+t'(\theta)u(y_i,\theta)}=0.$$
On the other hand, if $-\sum_{i=1}^N\log(Np_i)$ is replaced by the Euclidean distance $\frac12\sum_{i=1}^N(p_i-1/N)^2$ (Owen, 1991; Qin and Lawless, 1994; Luo, 1994, 1997), the empirical Euclidean log-likelihood ratio is defined as
$$l_E(\theta)=\sup\Bigl\{-\frac12\sum_{i=1}^N(Np_i-1)^2\ \Bigm|\ \sum_{i=1}^N p_i=1,\ p_i\ge 0,\ i=1,\ldots,N,\ \sum_{i=1}^N p_i u(y_i,\theta)=0\Bigr\}.$$
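For intuition, this constrained optimization can be evaluated through its closed form. The following minimal numerical sketch (not from the paper) treats the scalar case $r=1$ with the hypothetical estimating function $u(y,\theta)=y-\theta$ (mean estimation):

```python
import numpy as np

# Minimal sketch (illustrative only): the empirical Euclidean
# log-likelihood ratio in the scalar case r = 1, using the closed form
# l_E(theta) = -(N/2) * ubar(theta)^2 / S(theta), with the hypothetical
# estimating function u(y, theta) = y - theta (mean estimation).

def euclidean_loglik(y, theta):
    u = y - theta                   # u(y_i, theta), i = 1..N
    ubar = u.mean()                 # \bar u(theta)
    S = ((u - ubar) ** 2).mean()    # S(theta)
    return -0.5 * len(y) * ubar ** 2 / S

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=200)
# The ratio equals 0 at theta = sample mean and is negative elsewhere.
```

Here the ratio is maximized at the solution of $\sum_i u(y_i,\theta)=0$, which for this choice of $u$ is the sample mean.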
By Lagrange multipliers, the empirical Euclidean log-likelihood ratio can be expressed as
$$l_E(\theta)=-\frac N2\,\bar u'(\theta)S^{-1}(\theta)\bar u(\theta),\qquad(2)$$
where $\bar u(\theta)=(1/N)\sum_{i=1}^N u(y_i,\theta)$ and $S(\theta)=(1/N)\sum_{i=1}^N(u(y_i,\theta)-\bar u(\theta))(u(y_i,\theta)-\bar u(\theta))'$. Like the empirical log-likelihood ratio (1), the empirical Euclidean log-likelihood ratio (2) has several attractive properties: strong consistency, asymptotic normality, and a likelihood ratio statistic that tends to a chi-square statistic. Moreover, compared with (1), the expression (2) is simpler and more convenient, both for studying its properties and for applying it in practice.

It should be noted, however, that the existing literature on empirical Euclidean likelihood focuses only on independent data-generating processes. For models with dependent processes, the method of empirical Euclidean likelihood used in the independent case is invalid; if one wishes to use empirical Euclidean likelihood for general dependent processes, a new device is called for. It is well known that some studies of the bootstrap and of the original empirical likelihood rely on block resampling (Bühlmann and Künsch, 1993; Kitamura, 1997), which preserves the dependence structure of the data through an appropriate choice of data blocks.

In this paper, a method of blockwise empirical Euclidean likelihood for weakly dependent data is introduced. The strong consistency and asymptotic normality of the blockwise empirical Euclidean likelihood estimator are proved, and the blockwise empirical Euclidean likelihood ratio statistic is shown to be asymptotically a chi-square statistic. These results show that the blockwise empirical Euclidean likelihood estimator is asymptotically more efficient than the original empirical (Euclidean) likelihood estimator and that the method is useful for weakly dependent processes.

2. Blockwise empirical Euclidean likelihood

Let $y_1,\ldots,y_N$ be dependent observations from an unknown $d$-variate distribution $f(y,\theta)$.
The information about $\theta$ and $f(y,\theta)$ is available in the form of an unbiased estimating function $u(y,\theta)$, i.e. $E(u(y_i,\theta))=0$, where $y_i\in R^d$, $i=1,\ldots,N$, $\theta\in\Theta\subset R^p$, $u(y,\theta):R^d\times\Theta\to R^r$, $r\ge p$. Let $M$ and $L$ be integers depending only on $N$, satisfying $M=O(N^{1-\eta})$, $0<\eta<1$, and $L/M\to c$ as $N\to\infty$ for some constant $c$, $0<c\le 1$. Denote $B_i=(y_{(i-1)L+1},\ldots,y_{(i-1)L+M})'$, $i=1,\ldots,Q$, $Q=[(N-M)/L]+1$, where $[x]$ stands for the integer part of $x$. One can see that $B_i$ is a block of observations, $M$ is the window-width and $L$ is the separation between the block starting points, with $L=O(N^{1-\eta})$ and $Q=O(N^{\eta})$. Instead of considering the empirical Euclidean likelihood of the original estimating function $u(y,\theta)$, we use a function $T_{i,M,L}(B_i,\theta)$ of the observation blocks $B_i$, where $T_{i,M,L}(B_i,\theta)$ has the particular form
$$T_{i,M,L}(B_i,\theta)=\frac1M\sum_{n=1}^M u(y_{(i-1)L+n},\theta).$$
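As a quick illustration of the blocking scheme (a sketch under the notation above, with hypothetical inputs), the block means can be computed directly from the values $u(y_i,\theta)$:

```python
import numpy as np

# Sketch of the blocking scheme: window-width M, separation L between
# block starting points, Q = [(N - M)/L] + 1 blocks; T_i(theta) is the
# mean of u over the i-th block B_i.

def block_means(u_vals, M, L):
    """u_vals: u(y_1, theta), ..., u(y_N, theta); returns T_1, ..., T_Q."""
    N = len(u_vals)
    Q = (N - M) // L + 1            # number of (possibly overlapping) blocks
    return np.array([u_vals[i * L: i * L + M].mean() for i in range(Q)])

# Example: N = 10, M = 4, L = 2 gives Q = 4 overlapping blocks.
T = block_means(np.arange(10.0), M=4, L=2)
```

With $L=M$ the blocks are non-overlapping (the case $c=1$); with $L<M$ consecutive blocks overlap.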
For convenience, we denote $T_{i,M,L}(B_i,\theta)$ by $T_i(\theta)$. Though more flexibility could be gained by using $M$-moving averages with various weights, for simplicity we shall focus on the equally weighted sum defined above. The blockwise empirical Euclidean log-likelihood ratio for dependent data is then defined as
$$l_{EB}(\theta)=\sup\Bigl\{-\frac12\sum_{i=1}^Q(Qp_i-1)^2\ \Bigm|\ \sum_{i=1}^Q p_i=1,\ p_i\ge 0,\ i=1,\ldots,Q,\ \sum_{i=1}^Q p_i T_i(\theta)=0\Bigr\}.$$
As in Qin and Lawless (1994) and Luo (1994), this optimization problem is solved by Lagrange multipliers. Let
$$\mathcal L=-\frac12\sum_{i=1}^Q(Qp_i-1)^2-Qt'(\theta)\sum_{i=1}^Q p_iT_i(\theta)-\lambda\Bigl(\sum_{i=1}^Q p_i-1\Bigr),$$
where $\lambda$ and $t(\theta)=(t_1(\theta),\ldots,t_r(\theta))'$ are Lagrange multipliers. Taking derivatives with respect to $p_i$, it can easily be verified (Luo, 1994) that
$$l_{EB}(\theta)=-\frac Q2\,\bar T'(\theta)S_B^{-1}(\theta)\bar T(\theta),\quad p_i=\frac1Q+\frac1Q\,\bar T'(\theta)S_B^{-1}(\theta)(\bar T(\theta)-T_i(\theta)),\quad t(\theta)=S_B^{-1}(\theta)\bar T(\theta),\qquad(3)$$
where $\bar T(\theta)=(1/Q)\sum_{i=1}^Q T_i(\theta)$ and $S_B(\theta)=(1/Q)\sum_{i=1}^Q(T_i(\theta)-\bar T(\theta))(T_i(\theta)-\bar T(\theta))'$. If $\hat\theta$ satisfies $l_{EB}(\hat\theta)=\sup_{\theta\in\Theta}l_{EB}(\theta)$, we call $\hat\theta$ a blockwise empirical Euclidean likelihood estimator of $\theta$.

3. Strong consistency and asymptotic normality

To study the properties of the blockwise empirical Euclidean likelihood estimator, we first introduce the following regularity conditions:

A1: $\bar T(\theta^0)=O(Q^{-1/2}\psi(Q))$ a.s., where $\theta^0$ is the true value of $\theta$ and the function $\psi$ satisfies $\psi(Q)=o(Q^{\kappa})$ for some $\kappa>0$;

A2: In a neighborhood of $\theta^0$, $(M/Q)\sum_{i=1}^Q T_i(\theta)T_i'(\theta)\xrightarrow{a.s.}\Sigma(\theta)$ and $\partial\bar T(\theta)/\partial\theta\xrightarrow{a.s.}P(\theta)=E(\partial u(y,\theta)/\partial\theta)$, where $\Sigma(\theta)$ is a positive definite matrix and $\mathrm{rank}(P(\theta))=p$;

A3: In a neighborhood of $\theta^0$, $\partial u(y,\theta)/\partial\theta$ and $\partial^2 u(y,\theta)/\partial\theta_k\partial\theta_j$ are continuous, $j,k=1,\ldots,p$.

Remark 1. Since the dependence of $\{T_i(\theta),\ i=1,2,\ldots\}$ is weaker than that of $\{y_1,y_2,\ldots\}$, it is often supposed that $\{T_i(\theta),\ i=1,2,\ldots\}$ has properties similar to those of independent random variables (Dimitris and Joseph, 1992). For example, if $\{y_i,\ i=1,2,\ldots\}$ is a mixing sequence ($\alpha$-mixing, $\rho$-mixing and so on), then $\{T_i(\theta),\ i=1,2,\ldots\}$ is asymptotically independent (Dimitris and Joseph, 1992). In this case, $\{T_i(\theta)\}$ satisfies the law of the iterated logarithm, $\bar T(\theta^0)=O(Q^{-1/2}\sqrt{\log\log Q})$ a.s., and the strong law of large numbers; the conditions for these laws can be obtained from Lu and Lin (1997). From the law of the iterated logarithm we see that $\{T_i(\theta)\}$ satisfies condition A1. On the other hand, it follows from the strong law of large numbers that $\{T_i(\theta)\}$ satisfies condition A2, because under the usual conditions the limit $\lim_{N\to\infty}\mathrm{Var}\bigl((1/\sqrt N)\sum_{i=1}^N u(y_i,\theta)\bigr)$ exists (Lu and Lin, 1997). If $\Sigma(\theta)$ denotes this limit, then the strong law of large numbers for $\bigl\{(1/M)\sum_{n=1}^M u(y_{(i-1)L+n},\theta)\sum_{n=1}^M u'(y_{(i-1)L+n},\theta),\ i=1,\ldots,Q\bigr\}$,
$$\frac1Q\sum_{i=1}^Q\Bigl[\frac1M\sum_{n=1}^M u(y_{(i-1)L+n},\theta)\sum_{n=1}^M u'(y_{(i-1)L+n},\theta)-E\Bigl(\frac1M\sum_{n=1}^M u(y_n,\theta)\sum_{n=1}^M u'(y_n,\theta)\Bigr)\Bigr]\xrightarrow{a.s.}0,$$
yields the first part of condition A2, $(M/Q)\sum_{i=1}^Q T_i(\theta)T_i'(\theta)-\Sigma(\theta)\xrightarrow{a.s.}0$; and the strong law of large numbers for $\{\partial T_i(\theta)/\partial\theta,\ i=1,\ldots,Q\}$, namely $\partial\bar T(\theta)/\partial\theta-E(\partial\bar T(\theta)/\partial\theta)\xrightarrow{a.s.}0$, yields the second part, $\partial\bar T(\theta)/\partial\theta-P(\theta)\xrightarrow{a.s.}0$.
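The closed form (3) is straightforward to compute. The following sketch (not from the paper; scalar case $r=1$) returns $l_{EB}(\theta)$, the weights $p_i$ and the multiplier $t(\theta)$ from the block means, and one can check numerically that the weights satisfy both constraints:

```python
import numpy as np

# Sketch of the closed form (3) in the scalar case r = 1:
#   l_EB = -(Q/2) * Tbar^2 / S_B,
#   p_i  = 1/Q + (Tbar / (Q * S_B)) * (Tbar - T_i),
#   t    = Tbar / S_B,
# computed from the block means T_1, ..., T_Q.

def blockwise_euclidean(T):
    Q = len(T)
    Tbar = T.mean()
    SB = ((T - Tbar) ** 2).mean()   # S_B(theta)
    t = Tbar / SB                   # Lagrange multiplier t(theta)
    l_EB = -0.5 * Q * Tbar ** 2 / SB
    p = 1.0 / Q + (Tbar / (Q * SB)) * (Tbar - T)
    return l_EB, p, t

T = np.array([0.2, -0.1, 0.4, 0.3, -0.2])
l_EB, p, t = blockwise_euclidean(T)
# The weights sum to 1 and satisfy sum_i p_i * T_i = 0 exactly.
```

The two closing comments follow algebraically: $\sum_i p_i=1$ since the correction terms sum to zero, and $\sum_i p_i T_i=\bar T-(\bar T/S_B)S_B=0$.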
Therefore, conditions A1 and A2 are mild conditions for weakly dependent processes. Some examples illustrating the regularity conditions will be presented in Corollaries 1 and 3.

Theorem 1. If conditions A1–A3, $\eta>2\varepsilon$, $\eta+2\varepsilon>1$ and $\kappa=1/2-\varepsilon/\eta$ hold for some $\varepsilon>0$, then, as $Q\to\infty$, with probability 1, $l_{EB}(\theta)$ attains its maximum value at some point $\hat\theta$ in the interior of the ball $\{\theta\mid\|\theta-\theta^0\|\le N^{-\varepsilon}\}$, and $\hat\theta$ satisfies
$$\frac{\partial l_{EB}(\theta)}{\partial\theta}\Big|_{\theta=\hat\theta}=0,\qquad\hat\theta\xrightarrow{a.s.}\theta^0\quad\text{as }Q\to\infty.$$

Proof. From the definition of $T_i(\theta)$, it is obvious that the $T_i(\theta)$, $i=1,\ldots,Q$, satisfy condition A3. Denote $T_i=(T_{i1},\ldots,T_{ir})'$ and write $\theta-\theta^0=vN^{-\varepsilon}$ with $\|v\|=1$. For $\theta\in\{\theta\mid\|\theta-\theta^0\|=N^{-\varepsilon}\}$, by Taylor expansion we have
$$T_{il}(\theta)=T_{il}(\theta^0)+\frac{\partial T_{il}(\theta^0)}{\partial\theta'}(\theta-\theta^0)+\frac12(\theta-\theta^0)'K(B_i,c_{il},\theta,\theta^0)(\theta-\theta^0),$$
where $0<c_{il}<1$ and $K(B_i,c_{il},\theta,\theta^0)$ is a $p\times p$ matrix whose components are $\partial^2T_{il}(B_i,\theta^0+c_{il}(\theta-\theta^0))/\partial\theta_j\partial\theta_k$. By condition A3, we get
$$\frac1Q\sum_{i=1}^Q T_{il}(\theta)=\frac1Q\sum_{i=1}^Q T_{il}(\theta^0)+\frac1Q\sum_{i=1}^Q\frac{\partial T_{il}(\theta^0)}{\partial\theta'}(\theta-\theta^0)+O(\|\theta-\theta^0\|^2)\quad a.s.$$
Then we obtain
$$\bar T(\theta)=\bar T(\theta^0)+\frac{\partial\bar T(\theta^0)}{\partial\theta'}\,vN^{-\varepsilon}+O(N^{-2\varepsilon})\quad a.s.\qquad(4)$$
and
$$\sqrt M\,\bar T(\theta)=O(Q^{1/(2\eta)-1}\psi(Q))+\frac{\partial\bar T(\theta^0)}{\partial\theta'}\,vO(N^{(1-\eta-2\varepsilon)/2})+O(N^{(1-\eta-4\varepsilon)/2})\quad a.s.\qquad(5)$$
It follows from (5) and conditions A1 and A2 that $\sqrt M\,\bar T(\theta)\xrightarrow{a.s.}0$ for $\theta\in\{\theta\mid\|\theta-\theta^0\|=N^{-\varepsilon}\}$. By (3),
$$MS_B(\theta)=\frac MQ\sum_{i=1}^Q T_i(\theta)T_i'(\theta)-\sqrt M\,\bar T(\theta)\,\sqrt M\,\bar T'(\theta).$$
Using condition A2 and $\sqrt M\,\bar T(\theta)\xrightarrow{a.s.}0$, we have, for $\theta\in\{\theta\mid\|\theta-\theta^0\|=N^{-\varepsilon}\}$,
$$MS_B(\theta)\xrightarrow{a.s.}\Sigma(\theta^0).\qquad(6)$$
Thus, by (3), (4), (6) and $1/2-\varepsilon/\eta=\kappa$, we get
$$\lim_{Q\to\infty}\frac{l_{EB}(\theta)}{MQ^{1-2\varepsilon/\eta}}=\lim_{Q\to\infty}-\frac{Q^{2\varepsilon/\eta}}2\Bigl(\bar T(\theta^0)+\frac{\partial\bar T(\theta^0)}{\partial\theta'}vN^{-\varepsilon}+O(N^{-2\varepsilon})\Bigr)'(MS_B(\theta))^{-1}\Bigl(\bar T(\theta^0)+\frac{\partial\bar T(\theta^0)}{\partial\theta'}vN^{-\varepsilon}+O(N^{-2\varepsilon})\Bigr)$$
$$=\lim_{Q\to\infty}-\frac12\Bigl(O(Q^{-1/2+\varepsilon/\eta}\psi(Q))+\frac{\partial\bar T(\theta^0)}{\partial\theta'}vO\bigl((M/L)^{\varepsilon/\eta}\bigr)+O(N^{-\varepsilon})\Bigr)'(MS_B(\theta^0))^{-1}\Bigl(O(Q^{-1/2+\varepsilon/\eta}\psi(Q))+\frac{\partial\bar T(\theta^0)}{\partial\theta'}vO\bigl((M/L)^{\varepsilon/\eta}\bigr)+O(N^{-\varepsilon})\Bigr)$$
$$=-Cv'P'(\theta^0)\Sigma^{-1}(\theta^0)P(\theta^0)v\le-C\lambda_{\min}\quad a.s.,\qquad(7)$$
where $\lambda_{\min}>0$ is the smallest eigenvalue of $P'(\theta^0)\Sigma^{-1}(\theta^0)P(\theta^0)$ and $C$ is a positive constant. On the other hand,
$$l_{EB}(\theta^0)=-\frac{MQ}2\,\bar T'(\theta^0)(MS_B(\theta^0))^{-1}\bar T(\theta^0)=-MO(\psi^2(Q))\quad a.s.\qquad(8)$$
It follows from (7) and (8) that, with probability 1, $l_{EB}(\theta)\le l_{EB}(\theta^0)$ for $\theta\in\{\theta\mid\|\theta-\theta^0\|=N^{-\varepsilon}\}$. Since $l_{EB}(\theta)$ is continuous, $l_{EB}(\theta)$ attains its maximum at some point $\hat\theta$ in the interior of $\{\theta\mid\|\theta-\theta^0\|\le N^{-\varepsilon}\}$. Hence, using the continuity required in condition A3, we get
$$\frac{\partial l_{EB}(\theta)}{\partial\theta}\Big|_{\theta=\hat\theta}=0,\qquad\hat\theta\xrightarrow{a.s.}\theta^0\quad\text{as }Q\to\infty.$$
The proof is completed.

The weakly dependent sequences appearing in Corollaries 1 and 3 below, for example $\alpha$-mixing, $\rho$-mixing and $\varphi$-mixing sequences, are common, and their definitions can easily be found in the literature (Lu and Lin, 1997; Dimitris and Joseph, 1992). So we omit their definitions and use them directly.

Corollary 1. Assume that condition A3, $\eta>2\varepsilon$, $\eta+2\varepsilon>1$ and $\kappa=1/2-\varepsilon/\eta$ hold for some $\varepsilon>0$. If $\{y_i,\ i=1,2,\ldots\}$ is a $\varphi$-mixing sequence with coefficient $\varphi(n)$ (or a $\rho$-mixing sequence with coefficient $\rho(n)$) satisfying $\lim_{N\to\infty}\mathrm{Var}\bigl((1/\sqrt N)\sum_{i=1}^N u(y_i,\theta)\bigr)=\Sigma(\theta)$, $|u_k(y_i,\theta)|\le C$ a.s. for $i=1,\ldots,N$, $k=1,\ldots,r$ and some positive constant $C$, and $\sum_{n=1}^\infty\varphi^{1/2}(2^nL-M)<\infty$ (or $\sum_{n=1}^\infty\rho(2^nL-M)<\infty$), then the results of Theorem 1 hold.

Proof. To save space, we only provide an outline of the proof. By the method of proving Lemma 1 in Dimitris and Joseph (1992), we can see that $\{T_i(\theta),\ i=1,2,\ldots\}$ is also a $\varphi$-mixing (or $\rho$-mixing) sequence whose coefficient $\varphi_T(n)$ (or $\rho_T(n)$) satisfies $\varphi_T(n)\le\varphi(nL-M)$ (or $\rho_T(n)\le\rho(nL-M)$) for $n\ge[M/L]+1$. Then, using Lemmas 1.2.7 and 1.2.8 and Theorem 8.2.2 in Lu and Lin (1997), we can verify that conditions A1 and A2 hold. The proof then follows from Theorem 1.

Corollary 2. If conditions A1–A3, $\eta>2\varepsilon$, $\eta+\varepsilon>1$ and $\kappa=1/2-\varepsilon/\eta$ hold for some $\varepsilon>0$, then
$$\hat t=t(\hat\theta)=O(N^{1-\eta-\varepsilon})\quad a.s.,\qquad(9)$$
where $t(\theta)$ is the Lagrange multiplier as in (3).

Proof. From (4) and Theorem 1, we have
$$\bar T(\hat\theta)-\bar T(\theta^0)=\frac{\partial\bar T(\theta^0)}{\partial\theta'}(\hat\theta-\theta^0)+O(N^{-2\varepsilon})\quad a.s.$$
and
$$MS_B(\hat\theta)=\frac MQ\sum_{i=1}^Q T_i(\hat\theta)T_i'(\hat\theta)-M\bar T(\hat\theta)\bar T'(\hat\theta)\xrightarrow{a.s.}\Sigma(\theta^0).$$
As a result,
$$t(\hat\theta)=(MS_B(\hat\theta))^{-1}M\bar T(\theta^0)+(MS_B(\hat\theta))^{-1}M(\bar T(\hat\theta)-\bar T(\theta^0))=O(Q^{1/\eta-3/2}\psi(Q))+O(N^{1-\eta-\varepsilon})=O(N^{1-\eta-\varepsilon}).$$
This completes the proof.

Remark 2. (1) Theorem 1 shows that the blockwise empirical Euclidean likelihood estimator $\hat\theta$ belongs to the closed ball with centre $\theta^0$ and radius $N^{-\varepsilon}$. It can be verified from the conditions of Theorem 1 that the infimum of the radius $N^{-\varepsilon}$ for fixed $N$ is $N^{-1/2}$, i.e. $\inf_{\eta,\varepsilon}N^{-\varepsilon}=N^{-1/2}$, which gives the order at which $\hat\theta$ converges almost surely to $\theta^0$. By an argument analogous to Corollary 2, the infimum of the order at which $\hat t$ converges almost surely to 0 is also $N^{-1/2}$. (2) Corollary 1, and Corollaries 3 and 4 below, give some examples illustrating that, for some weakly dependent processes, the blockwise empirical Euclidean likelihood method is valid and conditions A1, A2 and the following A4 are useful.

For convenience in the proof of the following theorem, we denote
$$G_Q(\theta,t)=\frac{t'S_B(\theta)t}2-\bar T'(\theta)t,\qquad U_{1Q}(\theta,t)=\frac{\partial G_Q(\theta,t)}{\partial t}=S_B(\theta)t-\bar T(\theta),\qquad U_{2Q}(\theta,t)=\frac{\partial G_Q(\theta,t)}{\partial\theta}.$$

Theorem 2. If the conditions of Corollary 2 and

A4: $\sqrt{MQ}\,U_{1Q}(\theta^0,0)=-\sqrt{MQ}\,\bar T(\theta^0)\xrightarrow{L}N(0,\Sigma(\theta^0))$

hold, then
$$\sqrt{MQ}\,(\hat\theta-\theta^0)\xrightarrow{L}N(0,V),\qquad\sqrt{M^{-1}Q}\,(\hat t-0)\xrightarrow{L}N(0,U),\qquad(10)$$
where $V=(P'(\theta^0)\Sigma^{-1}(\theta^0)P(\theta^0))^{-1}$ and $U$ will be given in (16).

Proof. Note that condition A4 is based on condition A2. It is obvious that $U_{1Q}(\hat\theta,\hat t)=0$ a.s. and $U_{2Q}(\hat\theta,\hat t)=0$ a.s. By Taylor expansion, Theorem 1 and Corollary 2, we have
$$0=U_{1Q}(\theta^0,0)+\frac{\partial U_{1Q}(\theta^0,0)}{\partial\theta'}(\hat\theta-\theta^0)+\frac{\partial U_{1Q}(\theta^0,0)}{\partial t'}(\hat t-0)+O(N^{2(1-\eta-\varepsilon)})\quad a.s.\qquad(11)$$
and
$$0=U_{2Q}(\hat\theta,0)+\frac{\partial U_{2Q}(\hat\theta,0)}{\partial t'}(\hat t-0)+O(N^{2(1-\eta-\varepsilon)})\quad a.s.\qquad(12)$$
In (12), $U_{2Q}(\hat\theta,0)=0$ and
$$\frac{\partial U_{2Q}(\hat\theta,0)}{\partial t'}(\hat t-0)=\frac{\partial U_{2Q}(\theta^0,0)}{\partial t'}(\hat t-0)+\Bigl(\frac{\partial U_{2Q}(\hat\theta,0)}{\partial t'}-\frac{\partial U_{2Q}(\theta^0,0)}{\partial t'}\Bigr)\hat t=\frac{\partial U_{2Q}(\theta^0,0)}{\partial t'}(\hat t-0)+O(N^{1-\eta-2\varepsilon})\quad a.s.$$
Hence, (12) can be expressed as
$$0=\frac{\partial U_{2Q}(\theta^0,0)}{\partial t'}(\hat t-0)+O(N^{2(1-\eta-\varepsilon)})\quad a.s.\qquad(13)$$
From (11) and (13) it follows that
$$B_Q\begin{pmatrix}\hat t-0\\ \hat\theta-\theta^0\end{pmatrix}=\begin{pmatrix}-U_{1Q}(\theta^0,0)+O(N^{2(1-\eta-\varepsilon)})\\ O(N^{2(1-\eta-\varepsilon)})\end{pmatrix}\quad a.s.,$$
where
$$B_Q=\begin{pmatrix}B_{Q11}&B_{Q12}\\ B_{Q21}&B_{Q22}\end{pmatrix}:=\begin{pmatrix}\dfrac{\partial U_{1Q}(\theta^0,0)}{\partial t'}&\dfrac{\partial U_{1Q}(\theta^0,0)}{\partial\theta'}\\ \dfrac{\partial U_{2Q}(\theta^0,0)}{\partial t'}&0\end{pmatrix}$$
with $MB_{Q11}\xrightarrow{a.s.}B_{11}=\Sigma(\theta^0)$, $B_{Q12}\xrightarrow{a.s.}B_{12}=-P(\theta^0)$, $B_{21}=B_{12}'$ and $B_{22}=0$.
Therefore, by the above equation, we obtain
$$\sqrt{MQ}\,(\hat\theta-\theta^0)=(M^{-1}B_{Q22\cdot1})^{-1}B_{Q21}(MB_{Q11})^{-1}\sqrt{MQ}\,U_{1Q}(\theta^0,0)+o(1)\quad a.s.\qquad(14)$$
and
$$\sqrt{M^{-1}Q}\,\hat t=-\bigl((MB_{Q11})^{-1}+(MB_{Q11})^{-1}B_{Q12}(M^{-1}B_{Q22\cdot1})^{-1}B_{Q21}(MB_{Q11})^{-1}\bigr)\sqrt{MQ}\,U_{1Q}(\theta^0,0)+o(1)\quad a.s.,\qquad(15)$$
where $B_{Q22\cdot1}=B_{Q22}-B_{Q21}B_{Q11}^{-1}B_{Q12}$. Letting $Q\to\infty$, by the asymptotic normality of $\sqrt{MQ}\,U_{1Q}(\theta^0,0)$, (14) and (15), we get
$$\sqrt{MQ}\,(\hat\theta-\theta^0)\xrightarrow{L}N(0,V),\qquad\sqrt{M^{-1}Q}\,(\hat t-0)\xrightarrow{L}N(0,U),$$
where
$$U=(B_{11}^{-1}+B_{11}^{-1}B_{12}B_{22\cdot1}^{-1}B_{21}B_{11}^{-1})B_{11}(B_{11}^{-1}+B_{11}^{-1}B_{12}B_{22\cdot1}^{-1}B_{21}B_{11}^{-1})'=B_{11}^{-1}-B_{11}^{-1}B_{12}(B_{21}B_{11}^{-1}B_{12})^{-1}B_{21}B_{11}^{-1}\qquad(16)$$
with $B_{22\cdot1}=B_{22}-B_{21}B_{11}^{-1}B_{12}$. The proof is completed.
Remark 3. (1) It can be checked that, under some regularity conditions, the asymptotic normality also holds for the empirical Euclidean likelihood estimator without blocking; however, its asymptotic covariance matrix has the form $W(\theta^0)=(P'\Omega^{-1}P)^{-1}P'\Omega^{-1}\Sigma\Omega^{-1}P(P'\Omega^{-1}P)^{-1}$, where $\Omega=E(u(y,\theta^0)u'(y,\theta^0))$. Such an estimator is therefore asymptotically less efficient than $\hat\theta$, which is similar to the result in Kitamura (1997). (2) $\Sigma(\theta^0)$ can be estimated by the blocks-of-blocks jackknife (Dimitris and Joseph, 1992). Focusing attention on the triangular array of the $T_i$, $i=1,\ldots,Q$, define $\mathcal B_j$ to be the block consisting of $b$ consecutive $T_i$'s starting from $T_{(j-1)h+1}$, i.e. $\mathcal B_j=(T_{(j-1)h+1},\ldots,T_{(j-1)h+b})$, $j=1,\ldots,q$, $q=[(Q-b)/h]+1$. Now, using the blocks of blocks $\mathcal B_j$, $j=1,\ldots,q$, we define an estimator of $\Sigma(\theta^0)$ as $\tilde\Sigma=(1/q)\sum_{i=1}^q(\tilde B_i-\sqrt b\,\bar T)^2$, where $\tilde B_i=(1/\sqrt b)\sum_{j=(i-1)h+1}^{(i-1)h+b}T_j$. Then, under some conditions, we have $\tilde\Sigma\xrightarrow{P}\Sigma(\theta^0)$ (Dimitris and Joseph, 1992).
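A scalar sketch of the blocks-of-blocks estimator in part (2), with hypothetical sub-block length $b$ and spacing $h$:

```python
import numpy as np

# Sketch of the blocks-of-blocks estimator of Sigma(theta0) from
# Remark 3(2), scalar case: sub-blocks of b consecutive T_i's at
# spacing h, with q = [(Q - b)/h] + 1 sub-blocks.

def blocks_of_blocks(T, b, h):
    Q = len(T)
    q = (Q - b) // h + 1
    Tbar = T.mean()
    Btilde = np.array([T[i * h: i * h + b].sum() / np.sqrt(b)
                       for i in range(q)])
    return ((Btilde - np.sqrt(b) * Tbar) ** 2).mean()

# With b = h = 1 the estimator reduces to the plain sample variance
# of the T_i, as can be checked directly.
```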
Corollary 3. Assume that condition A3, $\eta>2\varepsilon$, $\eta+\varepsilon>1$ and $\kappa=1/2-\varepsilon/\eta$ hold for some $\varepsilon>0$. If $\{y_i,\ i=1,2,\ldots\}$ is a $\varphi$-mixing sequence with coefficient $\varphi(n)$ (or a $\rho$-mixing sequence with coefficient $\rho(n)$) satisfying $\lim_{N\to\infty}\mathrm{Var}\bigl((1/\sqrt N)\sum_{i=1}^N u(y_i,\theta)\bigr)=\Sigma(\theta)$, $E|u_k(y_i,\theta)|^{2+\delta}\le C$ for $i=1,\ldots,N$, $k=1,\ldots,r$ and some positive constants $C$ and $\delta$, and $\sum_{n=1}^\infty\varphi^{\delta/(2+\delta)}(nL-M)<\infty$ (or $\sum_{n=1}^\infty\rho^{\delta/(2+\delta)}(nL-M)<\infty$), then the results of Theorem 2 hold.

Proof. Using Lemma 1.2.5, Theorem 3.2.1 and Theorem 8.2.2, the relations between $\alpha$-mixing, $\varphi$-mixing and $\rho$-mixing presented in Lu and Lin (1997), and the method of proving Corollary 1, we can verify that conditions A1, A2 and A4 hold. The proof then follows from Theorem 2.

Under mixing dependence there are some other results similar to Corollaries 1 and 3; to save space, we do not discuss them in this paper.
4. Asymptotic chi-square distribution

The following results allow us to use the blockwise empirical Euclidean log-likelihood ratio statistic for testing or for obtaining confidence limits for the parameters.

Theorem 3. If the conditions of Theorem 2 hold, then
$$-2l_{EB}(\theta^0)\xrightarrow{L}\chi^2(r).\qquad(17)$$

Proof. Since $\sqrt{MQ}\,\bar T(\theta^0)\xrightarrow{L}N(0,\Sigma(\theta^0))$ and $MS_B(\theta^0)\to\Sigma(\theta^0)$ a.s., we get
$$-2l_{EB}(\theta^0)=\bigl(\sqrt{MQ}\,\bar T(\theta^0)\bigr)'(MS_B(\theta^0))^{-1}\sqrt{MQ}\,\bar T(\theta^0)\xrightarrow{L}\chi^2(r).$$
This completes the proof.

Theorem 4. If $H_0:\theta=\theta^0$ and the conditions of Theorem 2 hold, then
$$R_1\xrightarrow{L}\chi^2(p),\qquad R_2\xrightarrow{L}\chi^2(r-p),\qquad(18)$$
where $R_1=2l_{EB}(\hat\theta)-2l_{EB}(\theta^0)$ and $R_2=-2l_{EB}(\hat\theta)$.
Proof. It follows from (14), (15) and $\partial\bar T(\theta^0)/\partial\theta'=-B_{Q12}$ that
$$R_2=Q\bar T'(\hat\theta)\hat t=Q\Bigl(\bar T(\theta^0)+\frac{\partial\bar T(\theta^0)}{\partial\theta'}(\hat\theta-\theta^0)+O(N^{2(1-\eta-\varepsilon)})\Bigr)'\hat t$$
$$=\bigl(\sqrt Q\,U_{1Q}(\theta^0,0)+B_{Q12}B_{Q22\cdot1}^{-1}B_{Q21}B_{Q11}^{-1}\sqrt Q\,U_{1Q}(\theta^0,0)+O(N^{2(1-\eta-\varepsilon)})\bigr)'$$
$$\times\bigl((B_{Q11}^{-1}+B_{Q11}^{-1}B_{Q12}B_{Q22\cdot1}^{-1}B_{Q21}B_{Q11}^{-1})\sqrt Q\,U_{1Q}(\theta^0,0)+O(N^{2(1-\eta-\varepsilon)})\bigr)$$
$$=\bigl(\sqrt Q\,U_{1Q}(\theta^0,0)+B_{Q12}B_{Q22\cdot1}^{-1}B_{Q21}B_{Q11}^{-1}\sqrt Q\,U_{1Q}(\theta^0,0)\bigr)'\bigl((B_{Q11}^{-1}+B_{Q11}^{-1}B_{Q12}B_{Q22\cdot1}^{-1}B_{Q21}B_{Q11}^{-1})\sqrt Q\,U_{1Q}(\theta^0,0)\bigr)+o(1)$$
$$=\bigl(\sqrt{MQ}\,(MB_{Q11})^{-1/2}U_{1Q}(\theta^0,0)\bigr)'\bigl(I+B_{Q11}^{-1/2}B_{Q12}B_{Q22\cdot1}^{-1}B_{Q21}B_{Q11}^{-1/2}\bigr)\bigl(\sqrt{MQ}\,(MB_{Q11})^{-1/2}U_{1Q}(\theta^0,0)\bigr)+o(1).$$
Using
$$\sqrt{MQ}\,(MB_{Q11})^{-1/2}U_{1Q}(\theta^0,0)\xrightarrow{L}N(0,I),\qquad I+B_{Q11}^{-1/2}B_{Q12}B_{Q22\cdot1}^{-1}B_{Q21}B_{Q11}^{-1/2}\xrightarrow{a.s.}I+B_{11}^{-1/2}B_{12}B_{22\cdot1}^{-1}B_{21}B_{11}^{-1/2}:=G_1,$$
together with
$$G_1^2=G_1,\qquad\mathrm{tr}(G_1)=r-\mathrm{tr}\bigl(B_{11}^{-1/2}B_{12}(B_{21}B_{11}^{-1}B_{12})^{-1}B_{21}B_{11}^{-1/2}\bigr)=r-p,$$
we get
$$R_2\xrightarrow{L}\chi^2(r-p).$$
On the other hand, we have
$$R_1=Q\bar T'(\theta^0)S_B^{-1}(\theta^0)\bar T(\theta^0)-Q\bar T'(\hat\theta)S_B^{-1}(\hat\theta)\bar T(\hat\theta)$$
$$=QU_{1Q}'(\theta^0,0)B_{Q11}^{-1}U_{1Q}(\theta^0,0)-\bigl(\sqrt Q\,B_{Q11}^{-1/2}U_{1Q}(\theta^0,0)\bigr)'\bigl(I+B_{Q11}^{-1/2}B_{Q12}B_{Q22\cdot1}^{-1}B_{Q21}B_{Q11}^{-1/2}\bigr)\bigl(\sqrt Q\,B_{Q11}^{-1/2}U_{1Q}(\theta^0,0)\bigr)+o(1)$$
$$=\bigl(\sqrt{MQ}\,(MB_{Q11})^{-1/2}U_{1Q}(\theta^0,0)\bigr)'\bigl(-B_{11}^{-1/2}B_{12}B_{22\cdot1}^{-1}B_{21}B_{11}^{-1/2}\bigr)\bigl(\sqrt{MQ}\,(MB_{Q11})^{-1/2}U_{1Q}(\theta^0,0)\bigr)+o(1).$$
Since
$$-B_{Q11}^{-1/2}B_{Q12}B_{Q22\cdot1}^{-1}B_{Q21}B_{Q11}^{-1/2}\xrightarrow{a.s.}-B_{11}^{-1/2}B_{12}B_{22\cdot1}^{-1}B_{21}B_{11}^{-1/2}=B_{11}^{-1/2}B_{12}(B_{21}B_{11}^{-1}B_{12})^{-1}B_{21}B_{11}^{-1/2}:=G_2$$
with
$$G_2^2=G_2,\qquad\mathrm{tr}(G_2)=\mathrm{tr}\bigl(B_{11}^{-1/2}B_{12}(B_{21}B_{11}^{-1}B_{12})^{-1}B_{21}B_{11}^{-1/2}\bigr)=p,$$
we get
$$R_1\xrightarrow{L}\chi^2(p).$$
The proof is completed.

Obviously, by Corollary 3 and Theorems 3 and 4, we get the following corollary.

Corollary 4. (i) If the conditions of Corollary 3 hold, then (17) holds. (ii) If the conditions of Corollary 3 and $H_0:\theta=\theta^0$ hold, then (18) holds.

Acknowledgements

The authors are grateful to the Editor and especially the referees for their constructive comments, which greatly improved the paper.

References

Bühlmann, P., Künsch, H.R., 1993. The Blockwise Bootstrap for General Parameters of a Stationary Time Series. Research Report No. 70.
Dimitris, N., Joseph, P., 1992. A general resampling scheme for triangular arrays of α-mixing random variables with application to the problem of spectral density estimation. Ann. Statist. 20, 1985–2007.
Kitamura, Y., 1997. Empirical likelihood methods with weakly dependent processes. Ann. Statist. 25, 2084–2102.
Lu, C.R., Lin, Z.Y., 1997. Limit Theorem for Mixing Dependent Variables. Science Press, Beijing.
Luo, X., 1994. Large sample properties of the empirical Euclidean likelihood estimation for semiparametric model. Chinese J. Appl. Prob. Statist. 10, 344–352.
Luo, X., 1997. Empirical Euclidean likelihood estimation for two sample semiparametric model. Chinese J. Appl. Prob. Statist. 13, 133–141.
Owen, A., 1988. Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75, 237–249.
Owen, A., 1990. Empirical likelihood ratio confidence regions. Ann. Statist. 18, 90–120.
Owen, A., 1991. Empirical likelihood for linear models. Ann. Statist. 19, 1725–1747.
Qin, J., Lawless, J., 1994. Empirical likelihood and general estimating equations. Ann. Statist. 22, 300–325.