Statistics and Probability Letters 80 (2010) 1348–1353
The asymptotic efficiency of improved prediction intervals

Paul Kabaila (La Trobe University, Australia), Khreshna Syuhada (Institut Teknologi Bandung, Indonesia)

Article history: Received 29 October 2009; Received in revised form 20 April 2010; Accepted 26 April 2010; Available online 10 May 2010.

Keywords: Asymptotic efficiency; Estimative prediction limit; Improved prediction limit

Abstract: We consider the Barndorff-Nielsen and Cox (1994, p. 319) method of modifying an estimative prediction interval to obtain an improved prediction interval with better conditional coverage properties. The parameter estimator, on which this improved interval is based, is assumed to have the same asymptotic distribution as the conditional maximum likelihood estimator. This improved interval depends strongly on the asymptotic conditional bias of this estimator, which can be very sensitive to small changes in this estimator. We show, however, that the asymptotic efficiency of this improved prediction interval does not depend on this bias.
1. Introduction

Suppose that {Y_t} is a discrete-time stochastic process with probability distribution determined by the parameter vector θ, where the Y_t are continuous random variables. Also suppose that {Y^(t)} is a Markov process, where Y^(t) = (Y_{t-p+1}, ..., Y_t). For example, {Y_t} may be an AR(p) process or an ARCH(p) process. The available data are Y_1, ..., Y_n. Suppose that we are concerned with k-step-ahead prediction, where k is a specified positive integer. We use lower case to denote the observed value of a random vector. For example, y^(n) denotes the observed value of the random vector Y^(n).

Firstly, suppose that our aim is to find an upper prediction limit z(Y_1, ..., Y_n) for Y_{n+k} such that it has coverage probability conditional on Y^(n) = y^(n) equal to 1 − α, i.e. such that

$$P_\theta\big(Y_{n+k} \le z(Y_1, \ldots, Y_n) \mid Y^{(n)} = y^{(n)}\big) = 1 - \alpha \quad \text{for all } \theta \text{ and } y^{(n)}.$$

The desirability of a prediction limit or interval having coverage probability 1 − α conditional on Y^(n) = y^(n) has been noted by a number of authors. In the context of an AR(p) process, this has been noted by Phillips (1979), Stine (1987), Thombs and Schucany (1990), Kabaila (1993), McCullough (1994), He (2000), Kabaila and He (2004) and Vidoni (2004). In the context of an ARCH(p) process, this has been noted by Christoffersen (1998), Kabaila (1999), Vidoni (2004) and Kabaila and Syuhada (2008).

Define z_α(θ, y^(n)) by the requirement that P_θ(Y_{n+k} ≤ z_α(θ, y^(n)) | Y^(n) = y^(n)) = 1 − α for all θ and y^(n). The estimative 1 − α prediction limit is defined to be z_α(Θ̂, Y^(n)), where Θ̂ is an estimator of θ with the same asymptotic distribution as the conditional maximum likelihood estimator of θ. This prediction limit may not have adequate coverage probability properties unless n is very large. In Section 2, we recap the argument (due to Cox, 1975, p. 50) showing that the coverage probability of z_α(Θ̂, Y^(n)) conditional on Y^(n) = y^(n) is 1 − α + O(n^{-1}).
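For concreteness, the following sketch (our own illustrative code, not from the paper) computes the estimative upper prediction limit for one-step-ahead prediction in a stationary zero-mean Gaussian AR(1) model. Here the conditional distribution of Y_{n+1} given Y_n = y_n is N(ρy_n, v), so z_α(θ, y_n) = ρy_n + v^{1/2}Φ^{-1}(1 − α); the least-squares estimates of ρ and v stand in for an estimator with the same asymptotic distribution as the conditional maximum likelihood estimator.

```python
import numpy as np
from scipy.stats import norm

def ar1_estimates(y):
    """Least-squares estimates of (rho, v) for a zero-mean Gaussian AR(1)."""
    rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
    v_hat = np.mean((y[1:] - rho_hat * y[:-1]) ** 2)
    return rho_hat, v_hat

def estimative_upper_limit(y, alpha):
    """Estimative 1 - alpha upper prediction limit z_alpha(theta_hat, y_n)
    for Y_{n+1}, conditional on Y_n = y[-1]."""
    rho_hat, v_hat = ar1_estimates(y)
    return rho_hat * y[-1] + np.sqrt(v_hat) * norm.ppf(1.0 - alpha)

# Example: simulate a stationary series and compute the limit.
rng = np.random.default_rng(0)
rho, v, n = 0.9, 1.0, 100
y = np.empty(n)
y[0] = rng.normal(scale=np.sqrt(v / (1 - rho**2)))   # stationary start
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.normal(scale=np.sqrt(v))
print(estimative_upper_limit(y, alpha=0.05))
```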
Methods for obtaining prediction limits with better asymptotic coverage properties, conditional on Y^(n) = y^(n), than the estimative prediction limit have been described by Cox (1975), Barndorff-Nielsen and Cox (1994, p. 319), Corcuera (2001, Section 4), Vidoni (2004, 2009) and Kabaila and Syuhada (2008). For convenience, we refer to Barndorff-Nielsen and Cox (1994, p. 319) as BNC94. The key advantages of the BNC94 method are its simplicity, ease of explanation and accessibility to a relatively wide audience. In Section 2, we recap the argument that shows that the BNC94-improved 1 − α prediction limit has coverage probability conditional on Y^(n) = y^(n) that is 1 − α + O(n^{-3/2}).

The BNC94 method is applicable not only for Θ̂ the conditional maximum likelihood estimator of θ, but also for any estimator Θ̂ with the same asymptotic distribution as this estimator. There are many possible choices for Θ̂. For example, for a stationary Gaussian AR(1) model, commonly-used estimators of the autoregressive parameter include the least-squares, Yule–Walker and Burg estimators. It is natural to ask the following question. What difference does it make to the BNC94-improved prediction limit which estimator Θ̂ is used? The BNC94-improved 1 − α prediction limit is obtained from the estimative 1 − α prediction limit using a correction that includes the asymptotic bias of Θ̂ conditional on Y^(n) = y^(n). This asymptotic conditional bias can be very sensitive to small changes in the estimator Θ̂ (Kabaila and Syuhada, 2007, Section 4 and Syuhada, 2008). For example, the Yule–Walker and least-squares estimators of the autoregressive parameter of a stationary Gaussian AR(1) model have quite different asymptotic conditional biases. So, the form of the BNC94-improved 1 − α prediction limit will typically depend very strongly on the choice of Θ̂. What difference does this make to the asymptotic efficiency of the BNC94-improved prediction limit? In Section 2, we show that the asymptotic efficiency of this improved prediction limit is not influenced by which estimator Θ̂ is used.

We extend these results to prediction intervals as follows. In Section 3, we show that the estimative 1 − α prediction interval has coverage probability conditional on Y^(n) = y^(n) that is 1 − α + O(n^{-1}). In this section, we also present a modification of an estimative 1 − α prediction interval, analogous to the BNC94 modification of an estimative prediction limit, to obtain an improved 1 − α prediction interval with better coverage properties. We show that this improved 1 − α prediction interval has coverage probability conditional on Y^(n) = y^(n) that is 1 − α + O(n^{-3/2}). This modification involves the use of a correction that includes the asymptotic bias of Θ̂ conditional on Y^(n) = y^(n). Does it make any difference to the asymptotic efficiency of this improved prediction interval which estimator Θ̂ of θ is used? In Section 3, we show that it does not make any difference which estimator is used. In Section 5, we present an illustration of this result.

The fact that the asymptotic efficiencies of these prediction limits and intervals do not depend on which estimator Θ̂ of θ is used, provided that it has the same asymptotic distribution as the conditional maximum likelihood estimator, has the following consequence. We can use that estimator Θ̂ whose asymptotic conditional bias is easiest to find. Usually, this will be the conditional maximum likelihood estimator, whose asymptotic conditional bias can be found using the formula of Vidoni (2004, p. 144).

When the extensive algebraic manipulations required to find the BNC94-improved prediction limit or interval are too messy, we can use the Kabaila and Syuhada (2008) simulation-based methodology to find approximations to this improved prediction limit or interval. The implications of the asymptotic efficiency results described in Sections 2 and 3 for these approximations are described in Section 4.
2. Asymptotic efficiency result for improved prediction limits

In this section, we recap the argument, due to Cox (1975), that the conditional coverage probability of the estimative 1 − α upper prediction limit is 1 − α + O(n^{-1}). We then recap the argument, due to BNC94, that the conditional coverage probability of their improved 1 − α upper prediction limit is 1 − α + O(n^{-3/2}). This prediction limit includes a correction term that depends strongly on the asymptotic conditional bias of Θ̂. We show, however, that the asymptotic efficiency of this prediction limit (which we measure to O(n^{-1})) does not depend on this bias (which we also measure to O(n^{-1})).

Let F( · ; θ, y^(n)) denote the cumulative distribution function of Y_{n+k}, conditional on Y^(n) = y^(n). Also, let f( · ; θ, y^(n)) denote the probability density function corresponding to this cumulative distribution function. Assume, as do Cox (1975), BNC94 and Vidoni (2004), that
$$E_\theta\big(\hat\Theta - \theta \mid Y^{(n)} = y^{(n)}\big) = b(\theta, y^{(n)})\,n^{-1} + \cdots \qquad (1)$$

$$E_\theta\big((\hat\Theta - \theta)(\hat\Theta - \theta)^T \mid Y^{(n)} = y^{(n)}\big) = i^{-1}(\theta) + \cdots \qquad (2)$$
where i(θ) denotes the expected information matrix. We assume that every element of i^{-1}(θ) is O(n^{-1}). Henceforth, we use the Einstein summation convention that repeated indices are implicitly summed over.

Define H_α(θ | y^(n)) = P_θ(Y_{n+k} ≤ z_α(Θ̂, y^(n)) | Y^(n) = y^(n)), which is the conditional coverage probability of the 1 − α estimative prediction limit. Using the fact that the distribution of Y_{n+k} given (Y_1, ..., Y_n) = (y_1, ..., y_n) depends only on y^(n), it may be shown that H_α(θ | y^(n)) = E_θ(F(z_α(Θ̂, y^(n)); θ, y^(n)) | Y^(n) = y^(n)). Now define G_α(Θ̂; θ | y^(n)) = F(z_α(Θ̂, y^(n)); θ, y^(n)). Thus H_α(θ | y^(n)) = E_θ(G_α(Θ̂; θ | y^(n)) | Y^(n) = y^(n)). We now use the stochastic expansion

$$G_\alpha(\hat\Theta; \theta \mid y^{(n)}) = G_\alpha(\theta; \theta \mid y^{(n)}) + \left[\frac{\partial G_\alpha(\hat\theta; \theta \mid y^{(n)})}{\partial \hat\theta_i}\right]_{\hat\theta = \theta} (\hat\Theta_i - \theta_i) + \frac{1}{2}\left[\frac{\partial^2 G_\alpha(\hat\theta; \theta \mid y^{(n)})}{\partial \hat\theta_r\,\partial \hat\theta_s}\right]_{\hat\theta = \theta} (\hat\Theta_r - \theta_r)(\hat\Theta_s - \theta_s) + \cdots. \qquad (3)$$
By the definition of z_α(θ, y^(n)), G_α(θ; θ | y^(n)) = 1 − α. Thus H_α(θ | y^(n)) = 1 − α + c_α(θ, y^(n)) n^{-1} + ⋯, where

$$c_\alpha(\theta, y^{(n)})\,n^{-1} = n^{-1}\left[\frac{\partial G_\alpha(\hat\theta; \theta \mid y^{(n)})}{\partial \hat\theta_i}\right]_{\hat\theta = \theta} b(\theta, y^{(n)})_i + \frac{1}{2}\left[\frac{\partial^2 G_\alpha(\hat\theta; \theta \mid y^{(n)})}{\partial \hat\theta_r\,\partial \hat\theta_s}\right]_{\hat\theta = \theta} i^{rs} \qquad (4)$$

where b(θ, y^(n))_i denotes the ith component of the vector b(θ, y^(n)) and i^{rs} denotes the (r, s)th element of the inverse of the expected information matrix i(θ). In other words, the conditional coverage probability of the estimative 1 − α upper prediction limit z_α(Θ̂, Y^(n)) is 1 − α + O(n^{-1}).

Define

$$d_\alpha(\theta, y^{(n)}) = -\,\frac{c_\alpha(\theta, y^{(n)})\,n^{-1}}{f(z_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}. \qquad (5)$$
The BNC94-improved 1 − α prediction limit is

$$z_\alpha^+(\hat\Theta, Y^{(n)}) = z_\alpha(\hat\Theta, Y^{(n)}) + d_\alpha(\hat\Theta, Y^{(n)}).$$

The correction term d_α(Θ̂, Y^(n)) depends strongly on the asymptotic conditional bias b(θ, y^(n)) n^{-1} of the estimator Θ̂. The conditional coverage probability of this improved prediction limit is P_θ(Y_{n+k} ≤ z_α^+(Θ̂, y^(n)) | Y^(n) = y^(n)) = E_θ(F(z_α^+(Θ̂, y^(n)); θ, y^(n)) | Y^(n) = y^(n)). We now use the expansion

$$F(z_\alpha^+(\hat\Theta, y^{(n)}); \theta, y^{(n)}) = F(z_\alpha(\hat\Theta, y^{(n)}); \theta, y^{(n)}) + f(z_\alpha(\hat\Theta, y^{(n)}); \theta, y^{(n)})\,d_\alpha(\hat\Theta, y^{(n)}) + \cdots$$
$$= G_\alpha(\hat\Theta; \theta \mid y^{(n)}) + f(z_\alpha(\theta, y^{(n)}); \theta, y^{(n)})\,d_\alpha(\theta, y^{(n)}) + \cdots.$$

Thus

$$P_\theta\big(Y_{n+k} \le z_\alpha^+(\hat\Theta, Y^{(n)}) \mid Y^{(n)} = y^{(n)}\big) = H_\alpha(\theta \mid y^{(n)}) + f(z_\alpha(\theta, y^{(n)}); \theta, y^{(n)})\,d_\alpha(\theta, y^{(n)}) + \cdots = 1 - \alpha + O(n^{-3/2}).$$

We assess the asymptotic efficiency of an upper prediction limit Z, with coverage probability 1 − α + o(n^{-1}) conditional on Y^(n) = y^(n), by examining the asymptotic expansion of E_θ(Z | Y^(n) = y^(n)). Specifically, for the case that

$$E_\theta\big(Z \mid Y^{(n)} = y^{(n)}\big) = z_\alpha(\theta, y^{(n)}) + a(\theta, y^{(n)})\,n^{-1} + \cdots$$

we assess the asymptotic efficiency of Z by the term a(θ, y^(n)). The larger this term, the lower the asymptotic efficiency of Z. Therefore, we measure the asymptotic efficiency of the improved prediction limit z_α^+(Θ̂, Y^(n)) as follows. Using G_α(Θ̂; θ | y^(n)) = F(z_α(Θ̂, y^(n)); θ, y^(n)), we find that

$$d_\alpha(\theta, y^{(n)}) = -\,n^{-1}\,\frac{\partial z_\alpha(\theta, y^{(n)})}{\partial \theta_i}\,b(\theta, y^{(n)})_i - \left(\frac{f'(z_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}{2 f(z_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}\,\frac{\partial z_\alpha(\theta, y^{(n)})}{\partial \theta_r}\,\frac{\partial z_\alpha(\theta, y^{(n)})}{\partial \theta_s} + \frac{1}{2}\,\frac{\partial^2 z_\alpha(\theta, y^{(n)})}{\partial \theta_r\,\partial \theta_s}\right) i^{rs}.$$
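This formula can also be evaluated numerically when the partial derivatives of z_α are awkward to obtain in closed form. The sketch below is our own illustration, not from the paper: it approximates d_α by central finite differences, assuming the user supplies z_α( · , y^(n)) as a function of θ, the predictive density f and its derivative f′, the conditional bias coefficient b(θ, y^(n)) and the inverse information matrix i^{-1}(θ).

```python
import numpy as np

def d_alpha(z, f, fprime, theta, b, inv_info, n, h=1e-5):
    """Numerical evaluation of the BNC94 correction term d_alpha.

    z        : callable, z(theta) = z_alpha(theta, y^(n)) for the fixed y^(n)
    f, fprime: predictive density and its derivative, called as f(x, theta)
    b        : conditional bias coefficient b(theta, y^(n)), shape (p,)
    inv_info : inverse expected information i^{-1}(theta), shape (p, p),
               with elements of order 1/n
    """
    theta = np.asarray(theta, dtype=float)
    p = theta.size

    def grad(i):  # central difference for dz/dtheta_i
        e = np.zeros(p); e[i] = h
        return (z(theta + e) - z(theta - e)) / (2 * h)

    def hess(r, s):  # central difference for d^2 z / dtheta_r dtheta_s
        er = np.zeros(p); er[r] = h
        es = np.zeros(p); es[s] = h
        return (z(theta + er + es) - z(theta + er - es)
                - z(theta - er + es) + z(theta - er - es)) / (4 * h * h)

    z0 = z(theta)
    ratio = fprime(z0, theta) / (2.0 * f(z0, theta))
    total = -sum(grad(i) * b[i] for i in range(p)) / n
    for r in range(p):
        for s in range(p):
            total -= (ratio * grad(r) * grad(s) + 0.5 * hess(r, s)) * inv_info[r, s]
    return total
```

For a Gaussian AR(1) upper limit, for instance, one would take z(θ) = θ_1 y_n + θ_2^{1/2} Φ^{-1}(1 − α) with θ = (ρ, v); b and i^{-1}(θ) depend on the estimator chosen.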
Now, z_α^+(Θ̂, y^(n)) is equal to

$$z_\alpha(\theta, y^{(n)}) + \frac{\partial z_\alpha(\theta, y^{(n)})}{\partial \theta_i}(\hat\Theta_i - \theta_i) + \frac{1}{2}\,\frac{\partial^2 z_\alpha(\theta, y^{(n)})}{\partial \theta_r\,\partial \theta_s}(\hat\Theta_r - \theta_r)(\hat\Theta_s - \theta_s) + d_\alpha(\theta, y^{(n)}) + \cdots.$$

Thus E_θ(z_α^+(Θ̂, y^(n)) | Y^(n) = y^(n)) is equal to

$$z_\alpha(\theta, y^{(n)}) - \frac{f'(z_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}{2 f(z_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}\,\frac{\partial z_\alpha(\theta, y^{(n)})}{\partial \theta_r}\,\frac{\partial z_\alpha(\theta, y^{(n)})}{\partial \theta_s}\; i^{rs} + \cdots.$$
We see that the asymptotic conditional bias b(θ, y^(n)) n^{-1} does not enter into this expression. This has the following consequence. We can use that estimator Θ̂ whose asymptotic conditional bias is easiest to find.

3. Asymptotic efficiency result for improved prediction intervals

In this section, we extend the results of Section 2 to prediction intervals. We show that the estimative 1 − α prediction interval has coverage probability conditional on Y^(n) = y^(n) that is 1 − α + O(n^{-1}). We also present a modification of an estimative 1 − α prediction interval, analogous to the BNC94 modification of an estimative prediction limit, to obtain an improved 1 − α prediction interval with coverage probability conditional on Y^(n) = y^(n) that is 1 − α + O(n^{-3/2}). This modification includes a correction term that depends strongly on the asymptotic conditional bias of Θ̂. We show, however, that the asymptotic efficiency of this prediction interval (which we measure to O(n^{-1})) does not depend on this bias (which we also measure to O(n^{-1})).
Suppose that our aim is to find a prediction interval [ℓ(Y_1, ..., Y_n), u(Y_1, ..., Y_n)] for Y_{n+k} such that it has coverage probability conditional on Y^(n) = y^(n) equal to 1 − α, i.e. such that

$$P_\theta\big(Y_{n+k} \in [\ell(Y_1, \ldots, Y_n), u(Y_1, \ldots, Y_n)] \mid Y^{(n)} = y^{(n)}\big) = 1 - \alpha \quad \text{for all } \theta \text{ and } y^{(n)}.$$

As in Section 2, define F( · ; θ, y^(n)) and f( · ; θ, y^(n)) to be the cumulative distribution function and probability density function (respectively) of Y_{n+k}, conditional on Y^(n) = y^(n). Suppose that f( · ; θ, y^(n)) is a continuous unimodal function for all y^(n) and θ. Define ℓ_α(θ, y^(n)) and u_α(θ, y^(n)) by the requirements that f(ℓ_α(θ, y^(n)); θ, y^(n)) = f(u_α(θ, y^(n)); θ, y^(n)) and

$$P_\theta\big(Y_{n+k} \in [\ell_\alpha(\theta, y^{(n)}), u_\alpha(\theta, y^{(n)})] \mid Y^{(n)} = y^{(n)}\big) = 1 - \alpha \quad \text{for all } \theta \text{ and } y^{(n)}.$$

If θ is known then [ℓ_α(θ, y^(n)), u_α(θ, y^(n))] is the shortest prediction interval for Y_{n+k} having coverage probability 1 − α conditional on Y^(n) = y^(n). We define the estimative 1 − α prediction interval to be

$$I_\alpha(\hat\Theta, Y^{(n)}) = \big[\ell_\alpha(\hat\Theta, Y^{(n)}),\; u_\alpha(\hat\Theta, Y^{(n)})\big].$$
Assume that (1) and (2) hold true. Using the same sort of methodology as in Section 2, it may be shown that P_θ(Y_{n+k} ∈ I_α(Θ̂, y^(n)) | Y^(n) = y^(n)), the conditional coverage probability of the estimative 1 − α prediction interval I_α(Θ̂, Y^(n)), is 1 − α + c_α(θ, y^(n)) n^{-1} + ⋯ = 1 − α + O(n^{-1}); here c_α(θ, y^(n)) n^{-1} denotes the O(n^{-1}) term in the coverage error of the interval (the analogue of (4)). Suppose that d_α^ℓ(θ, y^(n)) and d_α^u(θ, y^(n)) are both O(n^{-1}) for every θ and y^(n). Also suppose that

$$d_\alpha^\ell(\theta, y^{(n)}) + d_\alpha^u(\theta, y^{(n)}) = -\,\frac{c_\alpha(\theta, y^{(n)})\,n^{-1}}{f(u_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}. \qquad (6)$$

Note that we could replace f(u_α(θ, y^(n)); θ, y^(n)) in the denominator of the expression on the right-hand side by f(ℓ_α(θ, y^(n)); θ, y^(n)), since f(ℓ_α(θ, y^(n)); θ, y^(n)) = f(u_α(θ, y^(n)); θ, y^(n)). The improved 1 − α prediction interval is

$$I_\alpha^+(\hat\Theta, Y^{(n)}) = \big[\ell_\alpha(\hat\Theta, Y^{(n)}) - d_\alpha^\ell(\hat\Theta, Y^{(n)}),\; u_\alpha(\hat\Theta, Y^{(n)}) + d_\alpha^u(\hat\Theta, Y^{(n)})\big].$$
Using the same sort of methodology as in Section 2, it may be shown that the conditional coverage probability of this improved prediction interval is 1 − α + O(n^{-3/2}).

We assess the asymptotic efficiency of a prediction interval [L, U], with coverage probability 1 − α + o(n^{-1}) conditional on Y^(n) = y^(n), by examining the asymptotic expansion of E_θ(U − L | Y^(n) = y^(n)). Specifically, for the case that

$$E_\theta\big(U - L \mid Y^{(n)} = y^{(n)}\big) = u_\alpha(\theta, y^{(n)}) - \ell_\alpha(\theta, y^{(n)}) + a(\theta, y^{(n)})\,n^{-1} + \cdots$$

we assess the asymptotic efficiency of [L, U] by the term a(θ, y^(n)). The larger this term, the lower the asymptotic efficiency of [L, U]. Therefore, we measure the asymptotic efficiency of the improved prediction interval I_α^+(Θ̂, Y^(n)) by examining the asymptotic expansion of E_θ(length of I_α^+(Θ̂, y^(n)) | Y^(n) = y^(n)). Using the same sort of methodology as in Section 2, we find that E_θ(length of I_α^+(Θ̂, y^(n)) | Y^(n) = y^(n)) is equal to

$$u_\alpha(\theta, y^{(n)}) - \ell_\alpha(\theta, y^{(n)}) - \frac{1}{2}\left(\frac{f'(u_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}{f(u_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}\,\frac{\partial u_\alpha(\theta, y^{(n)})}{\partial \theta_r}\,\frac{\partial u_\alpha(\theta, y^{(n)})}{\partial \theta_s} - \frac{f'(\ell_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}{f(\ell_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}\,\frac{\partial \ell_\alpha(\theta, y^{(n)})}{\partial \theta_r}\,\frac{\partial \ell_\alpha(\theta, y^{(n)})}{\partial \theta_s}\right) i^{rs} + \cdots.$$
We see that the asymptotic conditional bias b(θ, y^(n)) n^{-1} does not enter into this expression. This has the following consequence. We can use that estimator Θ̂ whose asymptotic conditional bias is easiest to find.

Now consider the particular case that f( · ; θ, y^(n)) is also symmetric about m(θ, y^(n)) for all y^(n) and θ. In other words, suppose that, for every y^(n) and θ, f(m(θ, y^(n)) − w; θ, y^(n)) = f(m(θ, y^(n)) + w; θ, y^(n)) for all w > 0. In this case, we may choose d_α^ℓ(θ, y^(n)) = d_α^u(θ, y^(n)) = δ_α(θ, y^(n)), say. Define w_α(θ, y^(n)) by the requirement that P_θ(Y_{n+k} ∈ [m(θ, y^(n)) − w_α(θ, y^(n)), m(θ, y^(n)) + w_α(θ, y^(n))] | Y^(n) = y^(n)) = 1 − α for all θ and y^(n). Thus ℓ_α(θ, y^(n)) = m(θ, y^(n)) − w_α(θ, y^(n)) and u_α(θ, y^(n)) = m(θ, y^(n)) + w_α(θ, y^(n)). It may be shown that δ_α(θ, y^(n)) is equal to

$$-\,n^{-1}\,\frac{\partial w_\alpha(\theta, y^{(n)})}{\partial \theta_i}\,b(\theta, y^{(n)})_i - \frac{1}{4 f(u_\alpha(\theta, y^{(n)}); \theta, y^{(n)})}\left[\frac{\partial^2 G_\alpha(\hat\theta; \theta \mid y^{(n)})}{\partial \hat\theta_r\,\partial \hat\theta_s}\right]_{\hat\theta = \theta} i^{rs}$$

where, analogously to Section 2, G_α(θ̂; θ | y^(n)) = F(u_α(θ̂, y^(n)); θ, y^(n)) − F(ℓ_α(θ̂, y^(n)); θ, y^(n)).
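As an illustration (our own sketch, not from the paper), δ_α can be evaluated numerically once m, w_α, F, b and i^{-1} are available; the second derivative of G_α is approximated by central finite differences. The Gaussian AR(1) instance used below, with m(θ, y_n) = ρy_n and w_α(θ, y_n) = v^{1/2}Φ^{-1}(1 − α/2), anticipates the illustration in Section 5; the bias coefficients b used in the example are hypothetical placeholders, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def delta_alpha(m, w, F, theta, b, inv_info, n, h=1e-4):
    """Numerical delta_alpha for a symmetric predictive density.

    m, w : callables of theta giving the centre m(theta, y^(n)) and the
           half-width w_alpha(theta, y^(n)) for the fixed y^(n)
    F    : predictive cdf, called as F(x, theta), for the fixed y^(n)
    b    : conditional bias coefficient, shape (p,)
    inv_info : inverse expected information, shape (p, p), elements O(1/n)
    """
    theta = np.asarray(theta, dtype=float)
    p = theta.size
    # Coverage of the estimative interval as a function of theta_hat,
    # with the true parameter held fixed at theta.
    G = lambda th: F(m(th) + w(th), theta) - F(m(th) - w(th), theta)

    def grad_w(i):
        e = np.zeros(p); e[i] = h
        return (w(theta + e) - w(theta - e)) / (2 * h)

    def hess_G(r, s):
        er = np.zeros(p); er[r] = h
        es = np.zeros(p); es[s] = h
        return (G(theta + er + es) - G(theta + er - es)
                - G(theta - er + es) + G(theta - er - es)) / (4 * h * h)

    u0 = m(theta) + w(theta)
    f_u = (F(u0 + h, theta) - F(u0 - h, theta)) / (2 * h)  # density at u_alpha
    total = -sum(grad_w(i) * b[i] for i in range(p)) / n
    for r in range(p):
        for s in range(p):
            total -= hess_G(r, s) * inv_info[r, s] / (4 * f_u)
    return total

# Gaussian AR(1) example, one step ahead.
rho, v, y_n, n, alpha = 0.9, 1.0, 1.5, 100, 0.05
q = norm.ppf(1 - alpha / 2)
m_ = lambda th: th[0] * y_n
w_ = lambda th: np.sqrt(th[1]) * q
F_ = lambda x, th: norm.cdf((x - th[0] * y_n) / np.sqrt(th[1]))
inv_info = np.array([[(1 - rho**2) / n, 0.0], [0.0, 2 * v**2 / n]])
b = np.array([-2 * rho, -v])  # hypothetical bias coefficients, for illustration only
print(delta_alpha(m_, w_, F_, np.array([rho, v]), b, inv_info, n))
```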
4. Implications for simulation-based improved prediction intervals

The BNC94-improved prediction limit z_α^+(θ̂, y^(n)) may be found algebraically using (4) and (5). When these algebraic manipulations become too complicated, the method of Kabaila and Syuhada (2008) may be used. For any given θ, these authors estimate P_θ(Y_{n+k} ≤ z_α(Θ̂, y^(n)) | Y^(n) = y^(n)) − (1 − α) by Monte Carlo simulation and use this estimate as an approximation to c_α(θ, y^(n)) n^{-1} (which appears in (5)). In particular, these authors propose that this simulation method be used to obtain an approximation to c_α(θ̂, y^(n)) n^{-1}. This is then used to obtain an approximation to d_α(θ̂, y^(n)), and so obtain an approximation to z_α^+(θ̂, y^(n)). In Kabaila and Syuhada (2008), the formula for r(ω, y^(n)) should be n^{-1} c(ω, y^(n))/f(z(y^(n), ω); ω | y^(n)) instead of c(ω, y^(n))/f(z(y^(n), ω); ω | y^(n)), so that d(ω, y^(n)) = n^{-1} c(ω, y^(n)), to order n^{-1}.

The improved prediction interval I_α^+(θ̂, y^(n)), described in Section 3, may be found algebraically using (4) and (6). When these algebraic manipulations become too complicated, a simulation-based method, similar to that described by Kabaila and Syuhada (2008) for prediction limits, may be used. For any given θ, we estimate P_θ(Y_{n+k} ∈ I_α(Θ̂, y^(n)) | Y^(n) = y^(n)) − (1 − α) by Monte Carlo simulation and use this estimate as an approximation to c_α(θ, y^(n)) n^{-1} (which appears in (6)). In particular, we propose that this simulation method be used to obtain an approximation to c_α(θ̂, y^(n)) n^{-1}, which is then used to obtain an approximation to I_α^+(θ̂, y^(n)).

The results of Sections 2 and 3 imply that these simulation-based improved prediction limits and intervals have asymptotic efficiencies that do not depend on which estimator Θ̂ of θ is used, provided that it has the same asymptotic distribution as the conditional maximum likelihood estimator. A sketch of this Monte Carlo estimation appears below.
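The following sketch is our own illustration of the approach just described, not the authors' code. For a stationary zero-mean Gaussian AR(1) process it estimates the conditional coverage error of the estimative interval by Monte Carlo simulation, generating realizations with Y_n fixed at y_n by simulating backwards in time (the stationary Gaussian AR(1) process is time-reversible, with the reversed process an AR(1) with the same ρ). The least-squares estimator is used purely for illustration; the resulting estimate of c_α(θ̂, y_n) n^{-1} then yields approximations to d_α^ℓ and d_α^u via (6).

```python
import numpy as np
from scipy.stats import norm

def coverage_error(rho, v, y_n, n, alpha, reps=20000, seed=1):
    """Monte Carlo estimate of
    P(Y_{n+1} in I_alpha(Theta_hat, y_n) | Y_n = y_n) - (1 - alpha)
    for a stationary zero-mean Gaussian AR(1) with theta = (rho, v)."""
    rng = np.random.default_rng(seed)
    q = norm.ppf(1 - alpha / 2)
    hits = 0
    for _ in range(reps):
        y = np.empty(n)
        y[-1] = y_n
        # simulate backwards: the reversed process is AR(1) with the same rho
        for t in range(n - 2, -1, -1):
            y[t] = rho * y[t + 1] + rng.normal(scale=np.sqrt(v))
        # least-squares estimates from the simulated series
        rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
        v_hat = np.mean((y[1:] - rho_hat * y[:-1]) ** 2)
        # true one-step transition from y_n
        y_next = rho * y_n + rng.normal(scale=np.sqrt(v))
        half = np.sqrt(v_hat) * q
        hits += (rho_hat * y_n - half <= y_next <= rho_hat * y_n + half)
    return hits / reps - (1 - alpha)

# In practice theta is unknown; following Section 4, plug in theta_hat
# computed from the observed data to approximate c_alpha(theta_hat, y_n)/n.
print(coverage_error(rho=0.9, v=1.0, y_n=1.5, n=50, alpha=0.05))
```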
5. Illustration of the main result

In this section, we present an illustration of the result of Section 3 in the same context as that considered in Section 4 of Kabaila and Syuhada (2007). In other words, we consider a stationary zero-mean Gaussian first-order autoregressive process {Y_t} satisfying Y_t = ρY_{t-1} + ε_t for all integer t, where |ρ| < 1 and the ε_t are independent and identically N(0, v) distributed. Let θ = (ρ, v). Suppose that we wish to find a prediction interval for Y_{n+1}. Let q = Φ^{-1}(1 − α/2), where Φ denotes the N(0, 1) cumulative distribution function. If θ is known then

$$I_\alpha(\theta, Y_n) = \big[\rho Y_n - v^{1/2} q,\; \rho Y_n + v^{1/2} q\big]$$

is a prediction interval for Y_{n+1} with coverage probability 1 − α conditional on Y_n = y_n. For a specified estimator Θ̂ = (ρ̂, V̂), the estimative prediction interval is I_α(Θ̂, Y_n). In this case, the improved prediction interval [ℓ_α(Θ̂, Y^(n)) − δ_α(Θ̂, Y^(n)), u_α(Θ̂, Y^(n)) + δ_α(Θ̂, Y^(n))], described at the end of Section 3 of the present paper, is

$$I_\alpha^+(\hat\Theta, Y_n) = \big[\hat\rho Y_n - \hat V^{1/2} q - n^{-1} d(\hat\Theta, Y_n),\; \hat\rho Y_n + \hat V^{1/2} q + n^{-1} d(\hat\Theta, Y_n)\big]$$

where d(θ, y_n) = −c(θ, y_n) v^{1/2}/(2φ(q)), φ denotes the N(0, 1) probability density function, and c(θ, y_n) is defined in Appendix B of Kabaila and Syuhada (2007). Note that the expression d(θ, y_n) = −c(θ, y_n)/(2φ(q)), given by Kabaila and Syuhada (2007), is incorrect.

Consider the estimators Θ̃ = (ρ̃, Ṽ) and Θ̄ = (ρ̄, V̄) of θ, described in Section 4 of Kabaila and Syuhada (2007). As noted in that section, the asymptotic biases of these estimators conditional on Y_n = y_n are quite different. Using the results of that section, we may show the following. For the estimator Θ̃,
$$d(\theta, y_n) = \frac{v^{1/2}}{2}\left(\frac{3}{2} + \left(\frac{y_n^2}{v} - 1\right)\left(1 - \rho^2\right) + \frac{1}{2}\,q^2\right) q.$$
For the estimator Θ̄,

$$d(\theta, y_n) = v^{1/2}\left(\frac{5}{4} + \frac{1}{4}\,q^2\right) q.$$
The expected lengths of I_α^+(Θ̃, Y_n) and I_α^+(Θ̄, Y_n) are both

$$2 q v^{1/2} + n^{-1} v^{1/2}\left(q + \frac{q^3}{2}\right) + \cdots.$$
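The agreement of these two expansions can also be checked numerically. The sketch below is our own illustration, not the authors' code: it assumes, purely for definiteness, that ρ̃ is the least-squares estimator and ρ̄ the Yule–Walker estimator, with corresponding variance estimators (the precise definitions of Θ̃ and Θ̄ are in Kabaila and Syuhada, 2007, Section 4, and may differ from these stand-ins). It compares the lengths of the two improved intervals computed from the d(θ, y_n) formulas displayed above, using a modest number of runs.

```python
import numpy as np
from scipy.stats import norm

def d_tilde(rho, v, y_n, q):
    # correction for the first estimator, from the displayed formula
    return 0.5 * np.sqrt(v) * (1.5 + (y_n**2 / v - 1.0) * (1.0 - rho**2)
                               + 0.5 * q**2) * q

def d_bar(rho, v, y_n, q):
    # correction for the second estimator, from the displayed formula
    return np.sqrt(v) * (1.25 + 0.25 * q**2) * q

rng = np.random.default_rng(0)
n, alpha, rho, v, M = 100, 0.05, 0.9, 1.0, 10000
q = norm.ppf(1 - alpha / 2)
x = np.empty(M)
for i in range(M):
    # simulate the stationary AR(1) series
    y = np.empty(n)
    y[0] = rng.normal(scale=np.sqrt(v / (1 - rho**2)))
    for t in range(1, n):
        y[t] = rho * y[t - 1] + rng.normal(scale=np.sqrt(v))
    # assumed estimator pair (hypothetical stand-ins for Theta-tilde, Theta-bar)
    rho_t = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)   # least squares
    v_t = np.mean((y[1:] - rho_t * y[:-1]) ** 2)
    rho_b = np.sum(y[1:] * y[:-1]) / np.sum(y ** 2)        # Yule-Walker
    v_b = (1 - rho_b**2) * np.mean(y ** 2)
    len_t = 2 * np.sqrt(v_t) * q + 2 * d_tilde(rho_t, v_t, y[-1], q) / n
    len_b = 2 * np.sqrt(v_b) * q + 2 * d_bar(rho_b, v_b, y[-1], q) / n
    x[i] = n * (len_t - len_b) / np.sqrt(v)
print("mean of X:", x.mean(), " IQR:", np.subtract(*np.percentile(x, [75, 25])))
```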
The equality of these expected lengths illustrates the result of Section 3. As a further illustration of this result, we carried out the following simulation study. Suppose that n is given. Let

$$X = n\,\big(\text{length of } I_\alpha^+(\tilde\Theta, Y_n) - \text{length of } I_\alpha^+(\bar\Theta, Y_n)\big)\big/v^{1/2}.$$

It is straightforward to show that the distribution of X does not depend on the parameter v. A consequence of the result of Section 3 is that, for large n, the magnitude of E(X) is small by comparison with the interquartile range of X. Let the ith simulation run consist of an observation x_i of X. We have chosen ρ = 0.9, v = 1 and 1 − α = 0.95. We have used M = 1,000,000 simulation runs for each of n = 50, n = 100, n = 200 and n = 300.
For n = 50, the 95% confidence interval for E(X) is [0.545, 0.557] and the sample interquartile range of x_1, ..., x_M is 2.999. For n = 100, the 95% confidence interval for E(X) is [0.298, 0.309] and the sample interquartile range of x_1, ..., x_M is 2.700. For n = 200, the 95% confidence interval for E(X) is [0.144, 0.169] and the sample interquartile range of x_1, ..., x_M is 2.522. For n = 300, the 95% confidence interval for E(X) is [0.105, 0.116] and the sample interquartile range of x_1, ..., x_M is 2.472. These numerical results are in accord with the result of Section 3.

6. Remarks

Barndorff-Nielsen and Cox (1996) first reduce the data to the maximum likelihood estimator of θ and an appropriately-chosen approximately ancillary statistic A. They then seek a prediction limit with good asymptotic coverage properties conditional on A = a. This method is more difficult to apply than the BNC94 method. However, the two methods are closely linked, since A includes a component consisting of Y^(n) transformed to approximate ancillarity. Corcuera (2001, 2008) has applied the Barndorff-Nielsen and Cox (1996) method to a Gaussian autoregressive process of order p. However, this method seems intractable for processes such as the ARCH process.

In the present paper, we have compared the asymptotic efficiencies of improved 1 − α prediction intervals. We have, however, not sought to compare the asymptotic efficiencies of estimative 1 − α prediction intervals, which have conditional coverage probability 1 − α + O(n^{-1}). The reason for this is as follows (cf. Kabaila and Syuhada, 2007). There can be a tradeoff between the O(n^{-1}) terms in the asymptotic expansion of the conditional coverage probability of a prediction interval and the asymptotic expansion of its conditional expected length. Consider a prediction interval I = [L, U], with coverage probability 1 − α + O(n^{-1}) conditional on Y^(n) = y^(n). Another prediction interval, with coverage probability 1 − α + O(n^{-1}) conditional on Y^(n) = y^(n), is J = [L − n^{-1}, U + n^{-1}]. Now E_θ(length of J | Y^(n) = y^(n)) = E_θ(length of I | Y^(n) = y^(n)) + 2n^{-1}. Yet the prediction interval J cannot be said to be worse than the prediction interval I, as J has coverage probability conditional on Y^(n) = y^(n) that is greater than the coverage probability of I conditional on Y^(n) = y^(n). Specifically, P_θ(Y_{n+k} ∈ J | Y^(n) = y^(n)) = P_θ(Y_{n+k} ∈ I | Y^(n) = y^(n)) + c(θ, y^(n)) n^{-1} + ⋯, where c(θ, y^(n)) > 0. Therefore, the asymptotic expansion of the conditional expected length of the estimative prediction interval cannot be used to assess its asymptotic efficiency. Consequently, it is not possible to compare the asymptotic efficiencies of estimative and improved prediction intervals. This type of tradeoff is not possible for the improved prediction intervals, which have conditional coverage probability 1 − α + O(n^{-3/2}).

Acknowledgement

The authors are grateful to an anonymous referee for some comments and suggestions that helped to improve the paper.

References

Barndorff-Nielsen, O.E., Cox, D.R., 1994. Inference and Asymptotics. Chapman and Hall, London.
Barndorff-Nielsen, O.E., Cox, D.R., 1996. Prediction and asymptotics. Bernoulli 2, 319–340.
Christoffersen, P.F., 1998. Evaluating interval forecasts. International Economic Review 39, 841–862.
Corcuera, J.M., 2001. Prediction in first order autoregressive processes, a small sample simulation. American Journal of Mathematical and Management Sciences 21, 125–143.
Corcuera, J.M., 2008. Approximate predictive pivots for autoregressive processes. Statistics & Probability Letters 78, 2685–2691.
Cox, D.R., 1975. Prediction intervals and empirical Bayes confidence intervals. In: Gani, J. (Ed.), Perspectives in Probability and Statistics. Academic Press, London, pp. 47–55.
He, Z., 2000. Assessment of the accuracy of time series predictions. Ph.D. Thesis, Department of Statistical Science, La Trobe University.
Kabaila, P., 1993. On bootstrap predictive inference for autoregressive processes. Journal of Time Series Analysis 14, 473–484.
Kabaila, P., 1999. The relevance property for prediction intervals. Journal of Time Series Analysis 20, 655–662.
Kabaila, P., He, Z., 2004. The adjustment of prediction intervals to account for errors in parameter estimation. Journal of Time Series Analysis 25, 351–358.
Kabaila, P., Syuhada, K., 2007. The relative efficiency of prediction intervals. Communications in Statistics: Theory and Methods 36, 2673–2686.
Kabaila, P., Syuhada, K., 2008. Improved prediction limits for AR(p) and ARCH(p) processes. Journal of Time Series Analysis 29, 213–223.
McCullough, B.D., 1994. Bootstrapping forecast intervals: an application to AR(p) models. Journal of Forecasting 13, 51–66.
Phillips, P.C.B., 1979. The sampling distribution of forecasts from a first-order autoregression. Journal of Econometrics 9, 241–261.
Stine, R.A., 1987. Estimating properties of autoregressive forecasts. Journal of the American Statistical Association 82, 1072–1078.
Syuhada, K., 2008. Prediction intervals for financial time series and their assessment. Ph.D. Thesis, Department of Mathematics and Statistics, La Trobe University.
Thombs, L.A., Schucany, W.R., 1990. Bootstrap prediction intervals for autoregression. Journal of the American Statistical Association 85, 486–492.
Vidoni, P., 2004. Improved prediction intervals for stochastic process models. Journal of Time Series Analysis 25, 137–154.
Vidoni, P., 2009. A simple procedure for computing improved prediction intervals for autoregressive models. Journal of Time Series Analysis 30, 577–590.