On bootstrap inference in cointegrating regressions


Economics Letters 72 (2001) 1–10

Zacharias Psaradakis, Birkbeck College, University of London, School of Economics, Mathematics and Statistics, 7–15 Gresse Street, London W1P 2LL, UK

Received 11 September 2000; accepted 21 December 2000

Abstract

This paper considers the construction of bootstrap hypothesis tests and confidence regions for the parameters of cointegrating regressions. We suggest using a sieve bootstrap scheme based on resampling residuals from an autoregressive approximation to the innovation process driving the cointegrated system. Simulations demonstrate the small-sample effectiveness of this bootstrap method in the case of two commonly used estimators for cointegrating regressions. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Autoregressive approximation; Cointegrating regression; Sieve bootstrap

JEL classification: C12; C22

1. Introduction

This paper addresses the problem of inference in cointegrating regressions involving I(1) variables. The coefficients of such regressions may be estimated by a variety of different methods which produce estimators that are asymptotically mixed Gaussian and median unbiased and which allow inference to be carried out using asymptotic normal or chi-squared test criteria (see, inter alia, Phillips and Hansen, 1990; Saikkonen, 1991; Park, 1992; Stock and Watson, 1993; Pesaran and Shin, 1999). Monte Carlo studies have revealed, however, that conventional large-sample theory often provides poor approximations to the distributions of associated test statistics in small and moderately sized samples.

One way of improving the quality of finite-sample inference in these situations is by using the bootstrap. In stationary situations, this resampling technique provides approximations to the sampling distributions of estimators and test statistics that are often more accurate than those obtained by first-order asymptotic theory, and has been found to perform impressively in a variety of settings (see Shao and Tu, 1995).



The possibility of using the bootstrap for improved inference in cointegrating regressions was also investigated by Li and Maddala (1997). Their simulation analysis showed that bootstrap methods can be successful both in correcting for estimation bias and in reducing the difference between the true and nominal levels of significance tests that is observed when asymptotic critical values are used.1 Since the errors driving cointegrated systems are typically autocorrelated, Li and Maddala's (1997) procedures were based on the blockwise bootstrap of Künsch (1989) and Liu and Singh (1992) and the stationary bootstrap of Politis and Romano (1994). These block resampling schemes have the attractive feature of not requiring a parametric specification of the structure of autocorrelation, but they often produce bootstrap data that are less dependent than the original series. In addition, samples generated by block resampling exhibit artifacts as the result of joining together randomly selected blocks of data.

In this paper, we investigate an alternative model-based resampling procedure, adapted from proposals that have been put forward for bootstrapping stationary autoregressive processes of infinite order (see Kreiss, 1992; Paparoditis, 1996; Bühlmann, 1997). The main idea is to approximate the (possibly infinite-dimensional) generating mechanism of the errors by an autoregressive model of finite order and use this to resample appropriately defined residuals and to generate bootstrap data. This so-called sieve bootstrap (or AR(∞)-bootstrap) procedure has been used successfully to handle a variety of statistical inference problems, including problems related to spectral density estimation (Swanepoel and van Wyk, 1986), time-series model selection (Paparoditis and Streitberg, 1992), parameter estimation in multivariate models (Paparoditis, 1996), kernel smoothing (Bühlmann, 1998), and unit-root testing (Psaradakis, 2000).

Since, like Li and Maddala (1997), we are interested in situations where the errors driving the cointegrating relationships are autocorrelated, we consider here two estimation methods specifically designed for such cases, namely, the fully modified ordinary least squares (FMOLS) method of Phillips and Hansen (1990) and the canonical cointegrating regressions (CCR) method of Park (1992).2 For inference based on these methods, our analysis demonstrates the small-sample superiority of the sieve bootstrap over both the usual asymptotic approximations and the blockwise bootstrap.

2. Asymptotic inference in cointegrating regressions

Consider the following system for the $\mathbb{R}^m$-valued time series $\{(y_t, x_t^\top)^\top\}_{t=1}^{n}$:

$$y_t = \beta^\top x_t + u_{1t}, \qquad x_t = x_{t-1} + u_{2t}, \tag{1}$$

where $\beta$ is an $(m-1)$-dimensional parameter and $\{u_t = (u_{1t}, u_{2t}^\top)^\top\}$ is a strictly stationary and ergodic random process with zero mean, finite covariance matrix, and continuous spectral density matrix $f_u(\cdot)$ which is positive definite at zero.

1. Li (1994) and Vinod and McCullough (1995) also examined the properties of bootstrap inference procedures for cointegrating regressions. Their studies, however, are limited by the fact that only regressions with exogenous regressors and i.i.d. or AR(1) errors were considered.
2. Bootstrap inference based on cointegrating-vector estimators such as those of Johansen (1991), Saikkonen (1991), Stock and Watson (1993), and Pesaran and Shin (1999) does not require the use of residual-based resampling schemes that allow for autocorrelation, since the models involved have white-noise disturbances.


This is a typical cointegrated system in which $\{x_t\}$ is an I(1) process that is not cointegrated, and where $y_t - \beta^\top x_t = u_{1t}$ defines the only cointegrating relationship in the system.

In the above system, we are interested in inference about $\beta$ based on the FMOLS and CCR estimators. To define these estimators, let $\Lambda = \sum_{j=1}^{\infty} E(u_t u_{t+j}^\top)$ and $\Sigma = E(u_t u_t^\top)$, and decompose the long-run covariance matrix $\Omega = 2\pi f_u(0)$ of $\{u_t\}$ as $\Omega = \Lambda^\top + \Delta$, where $\Delta = \Lambda + \Sigma$. Also, partition $\Omega$ and $\Delta$ conformably with $u_t$ to obtain

$$\Omega = \begin{pmatrix} \omega_{11} & \omega_{21}^\top \\ \omega_{21} & \Omega_{22} \end{pmatrix}, \qquad
\Delta = \begin{pmatrix} \delta_{11} & \delta_{12} \\ \delta_{21} & \Delta_{22} \end{pmatrix}, \qquad
\Delta_2 = (\delta_{12}^\top, \Delta_{22}^\top)^\top.$$

The FMOLS estimator of $\beta$ is then defined as

$$\tilde{\beta} = \left(\sum_{t=1}^{n} x_t x_t^\top\right)^{-1}\left(\sum_{t=1}^{n} x_t \tilde{y}_t - n\tilde{d}\right),$$

where $\tilde{y}_t = y_t - \hat{\omega}_{21}^\top\hat{\Omega}_{22}^{-1}\tilde{x}_t$, $\tilde{x}_t = x_t - x_{t-1}$, $\tilde{d} = \hat{\Delta}_2^\top(1, -\hat{\omega}_{21}^\top\hat{\Omega}_{22}^{-1})^\top$, and $\hat{\Delta}$ and $\hat{\Omega}$ are consistent estimators of $\Delta$ and $\Omega$, respectively. Tests of hypotheses about individual elements of $\beta$ can be based on fully modified t-type statistics, constructed in the usual manner using $\tilde{\beta}$ and the diagonal elements of the covariance matrix

$$\tilde{V} = (\hat{\omega}_{11} - \hat{\omega}_{21}^\top\hat{\Omega}_{22}^{-1}\hat{\omega}_{21})\left(\sum_{t=1}^{n} x_t x_t^\top\right)^{-1}.$$

Such statistics have an N(0,1) asymptotic null distribution.
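To make the construction concrete, here is a minimal numpy sketch of the fully modified estimator for the bivariate case ($m = 2$, scalar $x_t$). It assumes that consistent long-run covariance estimates $\hat{\Omega}$ and $\hat{\Delta}$ are supplied (for example, by the kernel estimator sketched at the end of this section); the function name and interface are illustrative rather than taken from the paper.

```python
import numpy as np

def fmols_bivariate(y, x, omega_hat, delta_hat):
    """FMOLS for y_t = beta x_t + u_1t with a scalar I(1) regressor x_t.

    omega_hat, delta_hat : 2x2 estimates of Omega and Delta for
    u_t = (u_1t, u_2t)', u_2t = x_t - x_{t-1}.  Illustrative interface.
    """
    dx = np.diff(x)                     # x~_t = x_t - x_{t-1}
    y1, x1 = y[1:], x[1:]               # drop the first observation to align with dx
    n = len(y1)

    w21, W22 = omega_hat[1, 0], omega_hat[1, 1]
    # endogeneity correction: y~_t = y_t - w21 W22^{-1} x~_t
    y_tilde = y1 - (w21 / W22) * dx
    # serial-correlation correction: d~ = Delta_2' (1, -w21 W22^{-1})'
    delta_2 = delta_hat[:, 1]           # second column block of Delta_hat
    d_tilde = delta_2 @ np.array([1.0, -w21 / W22])

    beta_fm = (x1 @ y_tilde - n * d_tilde) / (x1 @ x1)
    v_tilde = (omega_hat[0, 0] - w21 ** 2 / W22) / (x1 @ x1)
    # the t-statistic for H0: beta = beta_0 is (beta_fm - beta_0) / sqrt(v_tilde)
    return beta_fm, v_tilde
```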

The CCR estimator of $\beta$ is defined as

$$\bar{\beta} = \left(\sum_{t=1}^{n} \bar{x}_t \bar{x}_t^\top\right)^{-1}\sum_{t=1}^{n} \bar{x}_t \bar{y}_t,$$

where $\bar{y}_t = y_t - [\hat{\Sigma}^{-1}\hat{\Delta}_2\hat{\beta} + (0, \hat{\omega}_{21}^\top\hat{\Omega}_{22}^{-1})^\top]^\top\hat{u}_t$, $\bar{x}_t = x_t - (\hat{\Sigma}^{-1}\hat{\Delta}_2)^\top\hat{u}_t$, $\hat{u}_t = (y_t - \hat{\beta}^\top x_t, \tilde{x}_t^\top)^\top$, $\hat{\beta} = \left(\sum_{t=1}^{n} x_t x_t^\top\right)^{-1}\sum_{t=1}^{n} x_t y_t$, and $\hat{\Sigma}$ is a consistent estimator of $\Sigma$. Asymptotically normal t-type statistics based on $\bar{\beta}$ can be constructed in the usual way using the covariance matrix

$$\bar{V} = (\hat{\omega}_{11} - \hat{\omega}_{21}^\top\hat{\Omega}_{22}^{-1}\hat{\omega}_{21})\left(\sum_{t=1}^{n} \bar{x}_t \bar{x}_t^\top\right)^{-1}.$$

In the sequel, we take $\hat{\Sigma} = n^{-1}\sum_{t=1}^{n}\hat{u}_t\hat{u}_t^\top$ and estimate $\Delta$ and $\Omega$ nonparametrically, as in Andrews and Monahan (1992), using the Parzen lag window, a plug-in bandwidth estimator, and the least-squares residuals $\{\hat{u}_t\}_{t=1}^{n}$ prewhitened via a first-order autoregression.
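The prewhitened kernel estimator just described can be sketched as follows. This is a simplified stand-in for the Andrews and Monahan (1992) procedure: it uses VAR(1) prewhitening and Parzen weights, but it takes the bandwidth as an argument instead of computing the plug-in value and applies the same recolouring to the one-sided matrix $\Delta$; all names are illustrative.

```python
import numpy as np

def parzen(z):
    """Parzen lag window."""
    z = abs(z)
    if z <= 0.5:
        return 1.0 - 6.0 * z**2 + 6.0 * z**3
    if z <= 1.0:
        return 2.0 * (1.0 - z)**3
    return 0.0

def long_run_covariances(u, bandwidth):
    """Kernel estimates of Omega (two-sided) and Delta = Lambda + Sigma
    (one-sided) from an (n, m) array of residuals, with VAR(1) prewhitening."""
    n, m = u.shape
    # prewhitening: u_t = A u_{t-1} + e_t, estimated by least squares
    A = np.linalg.lstsq(u[:-1], u[1:], rcond=None)[0].T
    e = u[1:] - u[:-1] @ A.T
    ne = len(e)

    def gamma(j):                       # ~ E[e_{t+j} e_t']
        return (e[j:].T @ e[:ne - j]) / ne

    omega_e = gamma(0)
    delta_e = omega_e.copy()
    for j in range(1, ne):
        w = parzen(j / bandwidth)
        if w == 0.0:
            break
        g = gamma(j)
        omega_e = omega_e + w * (g + g.T)
        delta_e = delta_e + w * g.T     # Lambda uses E[e_t e_{t+j}'] = gamma(j)'
    # recolour: Omega = (I - A)^{-1} Omega_e (I - A)^{-T}, same for Delta (simplification)
    C = np.linalg.inv(np.eye(m) - A)
    return C @ omega_e @ C.T, C @ delta_e @ C.T
```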

3. Bootstrap hypothesis testing

Suppose we are interested in testing the hypothesis $\mathcal{H}_0\colon \beta_i = \beta_{i,0}$ against $\mathcal{H}_1\colon \beta_i \neq \beta_{i,0}$, where $\beta_i$ is the $i$th element of $\beta$. As noted earlier, a test may be based on an appropriately defined t-statistic


, denoted here by $T(\beta_{i,0})$, which has an N(0,1) asymptotic null distribution. However, under conditions that are relevant to many applications, the use of the normal approximation of the sampling distribution of $T(\beta_{i,0})$ can result in tests having a Type I error probability much larger than the specified nominal significance level. Below, we discuss how the sieve bootstrap may be used to provide an alternative approximation to the null distribution of $T(\beta_{i,0})$.

3.1. Bootstrap algorithm

Our bootstrap method is based on the assumption that $\{u_t\}$ admits an AR(∞) representation

$$u_t = \sum_{j=1}^{\infty} \Pi_j u_{t-j} + e_t, \tag{2}$$

for a zero-mean white-noise process $\{e_t\}$ and an absolutely summable sequence of matrices $\{\Pi_j\}$ such that $\det(I_m - \sum_{j=1}^{\infty}\Pi_j z^j) \neq 0$ for all $|z| \leq 1$. This is an assumption that has been used extensively in the cointegration literature and which is satisfied by a large class of random processes, including causal and invertible multivariate ARMA processes. The idea is to approximate (2) by a finite-order autoregressive model for a suitable proxy for $u_t$ and use this to generate bootstrap observations as in the usual residual-based bootstrap schemes. For a test of $\mathcal{H}_0\colon \beta_i = \beta_{i,0}$ against $\mathcal{H}_1\colon \beta_i \neq \beta_{i,0}$, the bootstrap procedure is as follows.

1. Calculate the residuals $\{\ddot{u}_t = (y_t - \ddot{\beta}^\top x_t, \tilde{x}_t^\top)^\top\}_{t=1}^{n}$, where $\ddot{\beta}$ is the FMOLS or CCR estimate of $\beta$ in (1) obtained under the null hypothesis (so the $i$th element of $\ddot{\beta}$ is $\beta_{i,0}$).
2. For a fixed positive integer $p$, obtain estimates $\hat{\Phi}_1, \ldots, \hat{\Phi}_p$ of the coefficient matrices of an $m$-variate AR($p$) model for $\{\ddot{u}_t\}_{t=1}^{n}$ (these may be Yule–Walker estimates, least-squares estimates, or Burg-type estimates); let $\{\hat{e}_t\}_{t=p+1}^{n}$ be the corresponding residuals.
3. For a given positive integer $r$, take a random sample $\{e_t^*\}_{t=-r+1}^{n}$ from the empirical distribution that puts mass $(n-p)^{-1}$ on each $\hat{e}_t - (n-p)^{-1}\sum_{t=p+1}^{n}\hat{e}_t$ $(t = p+1, \ldots, n)$. Then, construct a bootstrap noise series $\{u_t^*\}_{t=-r+1}^{n}$ via the recursion

$$u_t^* = \sum_{j=1}^{p}\hat{\Phi}_j u_{t-j}^* + e_t^* \qquad (t = -r+1, \ldots, n),$$

where $u_t^* = 0$ for $t \leq -r$.
4. Generate bootstrap replicates $\{x_t^*\}_{t=1}^{n}$ and $\{y_t^*\}_{t=1}^{n}$ by setting

$$x_t^* = x_{t-1}^* + u_{2t}^* \quad \text{and} \quad y_t^* = \ddot{\beta}^\top x_t^* + u_{1t}^* \qquad (t = 1, \ldots, n),$$

where we have partitioned $u_t^*$ as $u_t^* = (u_{1t}^*, u_{2t}^{*\top})^\top$ and set $x_0^* = 0$.
5. Estimate $\beta$ (by FMOLS or CCR) using the bootstrap sample $\{(y_t^*, x_t^*)\}_{t=1}^{n}$ and compute the value of the associated t-statistic for $\mathcal{H}_0\colon \beta_i = \beta_{i,0}$, say $T_1^*(\beta_{i,0})$.
6. Repeat steps 3–5 independently $B$ times to obtain a sample $\{T_b^*(\beta_{i,0})\}_{b=1}^{B}$ of $T(\beta_{i,0})$ values.
7. Approximate the null sampling distribution of $T(\beta_{i,0})$ by the empirical distribution of $\{T_b^*(\beta_{i,0})\}_{b=1}^{B}$.


The bootstrap P-value of the observed realization $\hat{T}(\beta_{i,0})$ of $T(\beta_{i,0})$ is $B^{-1}\sum_{b=1}^{B} 1_{[0,\infty)}\{|T_b^*(\beta_{i,0})| - |\hat{T}(\beta_{i,0})|\}$, where $1_A\{\cdot\}$ is the indicator of $A$.3

Notice that the bootstrap procedure described above uses residuals $\{\ddot{u}_t\}$ obtained under the null hypothesis and test statistics based on the Studentized difference between $\beta_{i,0}$ and an estimate of $\beta_i$, corresponding to what Li and Maddala (1997) defined as sampling scheme S3 and test statistic $T_2$. Bootstrap tests based on such a resampling scheme have generally been found to have good small-sample size and power properties (cf. Nankervis and Savin, 1996; Li and Maddala, 1997; Psaradakis, 2000). Also note that the implementation of the sieve bootstrap necessitates the selection of an appropriate value for the order $p$ of the approximating autoregression for $\ddot{u}_t$. In line with earlier work (Bühlmann, 1997, 1998), we propose choosing $p$ by minimizing the familiar Akaike information criterion (AIC) in a range $0 \leq p \leq p_{\max}$, where $p_{\max}$ grows with the sample size.
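To illustrate steps 1–7, here is a compact numpy sketch of the sieve bootstrap test for the bivariate model, reusing the `fmols_bivariate` and `long_run_covariances` helpers sketched in Section 2 (assumed to be in scope). For brevity, the AR order $p$, the start-up length $r$ and the kernel bandwidth are fixed arguments rather than being chosen by the AIC and the plug-in rule, and only the FMOLS-based statistic is computed; all names and defaults are illustrative.

```python
import numpy as np

def ar_fit(u, p):
    """Least-squares fit of an m-variate AR(p); returns [Phi_1..Phi_p], residuals."""
    n, m = u.shape
    Y = u[p:]
    X = np.hstack([u[p - j:n - j] for j in range(1, p + 1)])
    coef = np.linalg.lstsq(X, Y, rcond=None)[0]          # shape (m*p, m)
    Phi = [coef[(j - 1) * m:j * m].T for j in range(1, p + 1)]
    return Phi, Y - X @ coef

def sieve_bootstrap_pvalue(y, x, beta_0, p=2, r=50, B=399, bandwidth=4.0, seed=0):
    """Sieve bootstrap P-value for H0: beta = beta_0 in y_t = beta x_t + u_1t."""
    rng = np.random.default_rng(seed)
    n = len(y)

    # step 1: residuals under the null, (y_t - beta_0 x_t, x_t - x_{t-1})'
    u0 = np.column_stack([y[1:] - beta_0 * x[1:], np.diff(x)])
    # step 2: AR(p) approximation and centred residuals
    Phi, e_hat = ar_fit(u0, p)
    e_c = e_hat - e_hat.mean(axis=0)

    # observed FMOLS t-statistic (covariances from the restricted residuals)
    om, de = long_run_covariances(u0, bandwidth)
    b_hat, v_hat = fmols_bivariate(y, x, om, de)
    t_obs = (b_hat - beta_0) / np.sqrt(v_hat)

    t_star = np.empty(B)
    for b in range(B):
        # step 3: resample innovations and rebuild a bootstrap noise series
        e_s = e_c[rng.integers(0, len(e_c), size=n + r)]
        u_s = np.zeros((n + r, 2))
        for t in range(n + r):
            u_s[t] = e_s[t] + sum(P @ u_s[t - j]
                                  for j, P in enumerate(Phi, 1) if t - j >= 0)
        u_s = u_s[r:]                                     # discard start-up values
        # step 4: bootstrap data with beta_0 imposed and x*_0 = 0
        x_s = np.cumsum(u_s[:, 1])
        y_s = beta_0 * x_s + u_s[:, 0]
        # step 5: re-estimate and recompute the t-statistic
        u0_s = np.column_stack([y_s[1:] - beta_0 * x_s[1:], np.diff(x_s)])
        om_s, de_s = long_run_covariances(u0_s, bandwidth)
        b_s, v_s = fmols_bivariate(y_s, x_s, om_s, de_s)
        t_star[b] = (b_s - beta_0) / np.sqrt(v_s)

    # steps 6-7: bootstrap P-value
    return np.mean(np.abs(t_star) >= np.abs(t_obs))
```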

3.2. Simulation study

In this subsection, we report on Monte Carlo experiments which evaluate the small-sample performance of sieve bootstrap tests. For ease of computation, the data-generating process used for the simulations is the bivariate version of (1) considered in Phillips and Hansen (1990):

$$y_t = \beta x_t + u_{1t}, \qquad x_t = x_{t-1} + u_{2t}, \tag{3}$$

$$\begin{pmatrix} u_{1t} \\ u_{2t} \end{pmatrix} = \begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{pmatrix} + \begin{pmatrix} 0.3 & \theta \\ -0.4 & 0.6 \end{pmatrix}\begin{pmatrix} \varepsilon_{1,t-1} \\ \varepsilon_{2,t-1} \end{pmatrix}, \tag{4}$$

$$\begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{pmatrix} \sim \text{i.i.d. } N\!\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \sigma \\ \sigma & 1 \end{pmatrix}\right). \tag{5}$$

We set $\beta = 2$ and choose $\theta$ and $\sigma$ from the grid $(\theta, \sigma) \in \{-0.8, -0.4, 0.8\} \times \{-0.85, -0.5, 0.5\}$. In all the simulations, 1,000 independent samples of $n + 30$ observations on $(y_t, x_t)$ are generated according to (3)–(5) by setting initial values equal to zero, but only the last $n$ of these observations are used for estimation and testing purposes. For each artificial sample, the hypothesis $\mathcal{H}_0\colon \beta = 2$ is tested against the alternative $\mathcal{H}_1\colon \beta \neq 2$ using t-type statistics based on FMOLS and CCR estimates of $\beta$. The P-values of these statistics are calculated using a standard normal approximation ("NOR"), a sieve bootstrap approximation ("SB"), and a blockwise bootstrap approximation ("BB") to their null sampling distribution. In the latter case, the bootstrap is implemented in a way similar to that of Li and Maddala (1997), the only difference being that our resampling scheme uses the circular block method of Politis and Romano (1992) and Shao and Yu (1993).4 Like Li and Maddala (1997), we use blocks of length 10 for the blockwise bootstrap, while the approximating order $p$ in step 2 of the sieve bootstrap algorithm is selected by the AIC with $p_{\max} = 3$.
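A minimal generator for the design in (3)–(5) might look as follows; the function name, the seeding, and the handling of the discarded start-up observations are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def simulate_dgp(n, theta, sigma, beta=2.0, burn=30, seed=None):
    """Draw one sample of length n from (3)-(5), discarding `burn` start-up values."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, sigma], [sigma, 1.0]])
    A = np.array([[0.3, theta], [-0.4, 0.6]])

    eps = rng.multivariate_normal(np.zeros(2), cov, size=n + burn + 1)
    u = eps[1:] + eps[:-1] @ A.T        # u_t = eps_t + A eps_{t-1}, cf. (4)-(5)
    x = np.cumsum(u[:, 1])              # x_t = x_{t-1} + u_2t
    y = beta * x + u[:, 0]              # y_t = beta x_t + u_1t
    return y[burn:], x[burn:]

# one Monte Carlo draw, e.g. n = 50, theta = -0.4, sigma = 0.5:
# y, x = simulate_dgp(50, theta=-0.4, sigma=0.5, seed=1)
```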

3. Alternatively, a bootstrap estimate of an $\alpha$-level critical value for $T(\beta_{i,0})$ is given by the $(1-\alpha)$th quantile of the empirical distribution of $\{|T_b^*(\beta_{i,0})|\}_{b=1}^{B}$.
4. Circular block resampling (i.e., resampling of overlapping blocks of consecutive observations that have been wrapped around in a circle) is preferable to the scheme used in Li and Maddala (1997) since it ensures that observations at the start and end of the series have an equal chance of being drawn as observations in the middle. Also, as Lahiri (1999) recently showed, the blockwise bootstrap is generally superior to the stationary bootstrap.


Since the quality of asymptotic approximations is fairly good for moderately sized or large samples, we follow Li and Maddala (1997) in focusing here on cases with $n = 50$. This sample size is admittedly small compared to those that are typical in applied work, but is chosen so as to provide a fairly tough challenge for the sieve bootstrap. The number of bootstrap replications is always $B = 399$.

We note that, if the normal distribution or the empirical distribution obtained by the blockwise or sieve bootstrap provides a good approximation to the true sampling distribution of the t-statistics, then the corresponding 1,000 P-values should be approximately uniformly distributed on $[0,1]$. Furthermore, the empirical level of a nominal $\alpha$-level test (i.e. the fraction of P-values less than or equal to $\alpha$) should be about $\alpha$ for every value of $\alpha$. The results of the experiments are summarized in Table 1, where we report the value of the Kolmogorov goodness-of-fit test statistic for uniformity of the P-values, as well as the empirical levels of tests for $\alpha \in \{0.05, 0.10\}$.
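The uniformity check described above amounts to computing the Kolmogorov statistic of the Monte Carlo P-values against the uniform distribution on [0,1], together with their rejection frequencies at the nominal levels; a small illustrative sketch:

```python
import numpy as np

def kolmogorov_uniform(pvals):
    """Kolmogorov statistic D_n = sup_x |F_n(x) - x| for values on [0, 1]."""
    p = np.sort(np.asarray(pvals))
    n = len(p)
    i = np.arange(1, n + 1)
    return max(np.max(i / n - p), np.max(p - (i - 1) / n))

def empirical_levels(pvals, alphas=(0.05, 0.10)):
    """Fraction of P-values at or below each nominal significance level."""
    pvals = np.asarray(pvals)
    return {a: float(np.mean(pvals <= a)) for a in alphas}
```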

Table 1
Empirical level of t-type tests (a)

                    Kolmogorov stat.            α = 0.05                   α = 0.10
         σ       θ = -0.8   -0.4    0.8       -0.8   -0.4    0.8        -0.8   -0.4    0.8

FMOLS
NOR    -0.85       0.097  0.092  0.077       0.130  0.136  0.099       0.183  0.189  0.158
SB     -0.85       0.026  0.028  0.030       0.059  0.064  0.040       0.119  0.124  0.074
BB     -0.85       0.126  0.138  0.162       0.022  0.016  0.003       0.038  0.039  0.020
NOR    -0.50       0.075  0.068  0.071       0.113  0.104  0.110       0.165  0.160  0.160
SB     -0.50       0.022  0.032  0.045       0.054  0.054  0.032       0.112  0.102  0.078
BB     -0.50       0.146  0.166  0.191       0.003  0.004  0.001       0.027  0.022  0.016
NOR     0.50       0.038  0.033  0.124       0.082  0.078  0.131       0.131  0.130  0.197
SB      0.50       0.051  0.055  0.066       0.035  0.032  0.028       0.091  0.077  0.057
BB      0.50       0.180  0.197  0.174       0.004  0.001  0.001       0.016  0.010  0.018

CCR
NOR    -0.85       0.062  0.069  0.079       0.105  0.109  0.104       0.152  0.163  0.158
SB     -0.85       0.028  0.032  0.040       0.062  0.072  0.040       0.117  0.121  0.074
BB     -0.85       0.103  0.108  0.170       0.022  0.017  0.005       0.043  0.042  0.015
NOR    -0.50       0.073  0.070  0.074       0.118  0.106  0.111       0.164  0.158  0.159
SB     -0.50       0.021  0.025  0.049       0.052  0.054  0.029       0.111  0.104  0.076
BB     -0.50       0.138  0.160  0.191       0.008  0.005  0.002       0.030  0.021  0.016
NOR     0.50       0.042  0.035  0.122       0.088  0.077  0.136       0.131  0.132  0.195
SB      0.50       0.052  0.050  0.064       0.033  0.033  0.030       0.092  0.073  0.060
BB      0.50       0.179  0.190  0.166       0.004  0.003  0.002       0.017  0.010  0.020

(a) An approximate 1% (5%) critical value for the Kolmogorov test is 0.051 (0.043). Empirical levels in the intervals (0.032, 0.068) and (0.076, 0.124) are not significantly different (at the 0.01 level) from the nominal levels 0.05 and 0.10, respectively.


It is immediately clear that the asymptotic N(0,1) approximation to the null distribution of FMOLS-based and CCR-based t-statistics is generally unsatisfactory, yielding tests with empirical levels larger than the nominal value and P-values which are not uniformly distributed (although matters do improve when $\sigma > 0$ and $\theta < 0$). The blockwise bootstrap corrects the problem of over-rejection associated with the asymptotic method but unfortunately does so by more than is necessary. In fact, in the majority of cases, blockwise bootstrap tests are almost as inaccurate as asymptotic tests, albeit in a conservative direction.5

In contrast, the quality of inference improves substantially when the sieve bootstrap approximation is used. For most design points, the sieve bootstrap yields tests that have empirical levels that do not differ significantly from the nominal level and P-values that are uniformly distributed. Only when $\sigma > 0$ does the performance of the sieve bootstrap leave something to be desired, but even then the level distortions are considerably smaller than the distortions of tests that make use of the asymptotic or blockwise bootstrap critical values. It is also particularly noteworthy that the sieve bootstrap works well when $\theta = -0.8$. This is a design point which was included in the analysis as a case where the sieve bootstrap might break down, for $\theta = -0.8$ implies that the moving-average process defined by (4) is non-invertible and so is not representable as an AR(∞) process like (2). It seems, however, that the autoregressive approximation is still good enough for tests to have the correct empirical Type I error probability.6

4. Bootstrap confidence intervals

An alternative way of making inferences about the individual elements of the cointegrating vector is via confidence intervals. Although these can be obtained by inversion of the significance tests discussed earlier, we focus here on a simpler option which has proved popular in practice, namely the percentile-t, or bootstrap-t, method (e.g., Shao and Tu, 1995, pp. 131–132).

Taking the FMOLS estimator as an example, the percentile-t method for constructing a confidence interval for the $i$th element of $\beta$ is based on the Studentized statistic $(\tilde{\beta}_i - \beta_i)/\sqrt{\tilde{v}_{ii}}$, where $\tilde{v}_{ii}$ is the $i$th diagonal element of $\tilde{V}$. The distribution of this statistic may be estimated by means of a sieve bootstrap procedure. For that purpose, the bootstrap algorithm described in subsection 3.1 needs to be modified slightly. More specifically, we need to: (i) replace the restricted residuals $\{\ddot{u}_t\}_{t=1}^{n}$ used in steps 1–2 by the unrestricted residuals $\{\tilde{u}_t = (y_t - \tilde{\beta}^\top x_t, \tilde{x}_t^\top)^\top\}_{t=1}^{n}$; (ii) generate bootstrap replicates $\{y_t^*\}_{t=1}^{n}$ in step 4 using the unrestricted estimate $\tilde{\beta}$ instead of $\ddot{\beta}$; and (iii) calculate standardized replicates $T_b^*(\tilde{\beta}_i) = (\tilde{\beta}_i^* - \tilde{\beta}_i)/\sqrt{\tilde{v}_{ii}^*}$ $(b = 1, \ldots, B)$ in steps 5–6, where $\tilde{\beta}_i^*$ and $\tilde{v}_{ii}^*$ are the bootstrap analogues of $\tilde{\beta}_i$ and $\tilde{v}_{ii}$, respectively. The resulting $(1-\alpha)$ two-sided equal-tailed bootstrap confidence interval for $\beta_i$ is given by

$$\left[\tilde{\beta}_i - t_{1-\alpha/2}^*(\tilde{\beta}_i)\sqrt{\tilde{v}_{ii}},\ \ \tilde{\beta}_i - t_{\alpha/2}^*(\tilde{\beta}_i)\sqrt{\tilde{v}_{ii}}\right], \tag{6}$$

where $t_{\alpha}^*(\tilde{\beta}_i)$ denotes the $\alpha$th quantile of the empirical distribution of $\{T_b^*(\tilde{\beta}_i)\}_{b=1}^{B}$.

5. It is worth noting that different values for the block length failed to improve the performance of the blockwise bootstrap tests to any significant degree.
6. As Bickel and Bühlmann (1997) have shown, the closure (with respect to certain metrics) of the class of stationary linear processes and AR(∞) processes is fairly large. This implies that the sieve bootstrap is likely to work reasonably well even when the data-generating mechanism does not belong to the AR(∞) family.


The corresponding normal confidence interval based on $\tilde{\beta}_i$ is obtained by replacing $t_{1-\alpha/2}^*(\tilde{\beta}_i)$ and $t_{\alpha/2}^*(\tilde{\beta}_i)$ in (6) by the $(1-\alpha/2)$th and $(\alpha/2)$th quantiles of the N(0,1) distribution, respectively. The construction of bootstrap confidence intervals based on the CCR estimator is entirely analogous.7

To shed some light on the small-sample effectiveness of the sieve bootstrap for confidence interval construction, we carry out some sampling experiments. The experimental design is identical to that in subsection 3.2, with $n = 50$ and $B = 399$. Monte Carlo estimates (from 1,000 replications) of the coverage probability of $(1-\alpha)$ two-sided equal-tailed confidence intervals for $\beta$ obtained by using the normal approximation, the sieve bootstrap, and the blockwise bootstrap are reported in Table 2. In all cases, inference based on the asymptotic approximation is the least accurate, with the asymptotic intervals undercovering consistently. Bootstrap confidence intervals have significantly better coverage than the normal approximation, and the sieve bootstrap has a slight advantage over the blockwise bootstrap (especially when $\sigma > 0$).
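Under the same simplifying assumptions as the testing sketch in subsection 3.1 (the `ar_fit`, `long_run_covariances` and `fmols_bivariate` helpers in scope, fixed AR order and bandwidth, illustrative names and defaults), the percentile-t interval (6) can be sketched as follows; the only substantive changes relative to the testing algorithm are the use of unrestricted residuals and of $\tilde{\beta}$ itself when generating and Studentizing the bootstrap replicates.

```python
import numpy as np

def sieve_bootstrap_ci(y, x, alpha=0.05, p=2, r=50, B=399, bandwidth=4.0, seed=0):
    """Percentile-t sieve bootstrap confidence interval (6) for beta (bivariate case)."""
    rng = np.random.default_rng(seed)
    n = len(y)

    def residuals(yy, xx, b):           # (y_t - b x_t, x_t - x_{t-1})'
        return np.column_stack([yy[1:] - b * xx[1:], np.diff(xx)])

    # unrestricted FMOLS estimate beta~ and its variance v~
    b_ols = (x @ y) / (x @ x)                           # preliminary OLS fit
    om, de = long_run_covariances(residuals(y, x, b_ols), bandwidth)
    b_fm, v_fm = fmols_bivariate(y, x, om, de)

    # sieve resampling built on the unrestricted residuals (modification (i))
    Phi, e_hat = ar_fit(residuals(y, x, b_fm), p)
    e_c = e_hat - e_hat.mean(axis=0)

    t_star = np.empty(B)
    for b in range(B):
        e_s = e_c[rng.integers(0, len(e_c), size=n + r)]
        u_s = np.zeros((n + r, 2))
        for t in range(n + r):
            u_s[t] = e_s[t] + sum(P @ u_s[t - j]
                                  for j, P in enumerate(Phi, 1) if t - j >= 0)
        u_s = u_s[r:]
        x_s = np.cumsum(u_s[:, 1])
        y_s = b_fm * x_s + u_s[:, 0]                    # beta~ used (modification (ii))
        om_s, de_s = long_run_covariances(residuals(y_s, x_s, b_fm), bandwidth)
        b_b, v_b = fmols_bivariate(y_s, x_s, om_s, de_s)
        t_star[b] = (b_b - b_fm) / np.sqrt(v_b)         # modification (iii)

    lo, hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    # equal-tailed percentile-t interval (6)
    return b_fm - hi * np.sqrt(v_fm), b_fm - lo * np.sqrt(v_fm)
```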

Table 2
Empirical coverage probability of confidence intervals for β (a)

                        1 - α = 0.95                        1 - α = 0.90
         σ       θ = -0.8   -0.4    0.8             θ = -0.8   -0.4    0.8

FMOLS
NOR    -0.85       0.870  0.864  0.901                0.817  0.811  0.842
SB     -0.85       0.939  0.940  0.920                0.904  0.909  0.862
BB     -0.85       0.928  0.935  0.914                0.890  0.892  0.866
NOR    -0.50       0.887  0.896  0.890                0.835  0.840  0.840
SB     -0.50       0.956  0.962  0.926                0.907  0.913  0.877
BB     -0.50       0.942  0.941  0.916                0.893  0.904  0.862
NOR     0.50       0.918  0.922  0.869                0.869  0.870  0.803
SB      0.50       0.969  0.967  0.946                0.924  0.930  0.889
BB      0.50       0.917  0.935  0.921                0.868  0.886  0.853

CCR
NOR    -0.85       0.895  0.891  0.896                0.848  0.837  0.837
SB     -0.85       0.937  0.920  0.953                0.903  0.888  0.923
BB     -0.85       0.903  0.881  0.881                0.850  0.818  0.820
NOR    -0.50       0.882  0.894  0.889                0.836  0.842  0.841
SB     -0.50       0.918  0.914  0.958                0.897  0.880  0.928
BB     -0.50       0.907  0.912  0.949                0.874  0.865  0.904
NOR     0.50       0.912  0.923  0.864                0.869  0.868  0.805
SB      0.50       0.936  0.922  0.948                0.890  0.884  0.924
BB      0.50       0.909  0.907  0.946                0.869  0.871  0.910

(a) An estimate of the standard error of the empirical coverage probability $\hat{c}$ of an interval is $\sqrt{\hat{c}(1-\hat{c})/R}$, where R = 1000.

7. We should note that the same bootstrap procedure may also be used to estimate the bias of the FMOLS and CCR estimators and thus make a bias correction (cf. Li and Maddala, 1997). We do not investigate this possibility in our simulations because we found that our choice of long-run covariance matrix estimators resulted in the FMOLS and CCR estimators exhibiting little small-sample bias.


5. Conclusion

This paper has investigated the usefulness of bootstrap methods for constructing hypothesis tests and confidence intervals for the parameters of cointegrating regressions with autocorrelated errors. We have proposed using a sieve bootstrap procedure based on residual resampling from an autoregressive approximation to the generating mechanism of the errors. Monte Carlo experiments have demonstrated that sieve bootstrap inference represents a substantial improvement over inference based on first-order asymptotic theory and is often superior to inference based on the blockwise bootstrap.

An important topic for future work is the asymptotic justification of the sieve bootstrap for the type of problems considered here. In the light of the first-order correctness of analogous methods in univariate models with a unit autoregressive root (see Psaradakis, 2000), it is conjectured that sieve bootstrap approximations are at least as sound as conventional first-order asymptotic approximations.

References

Andrews, D.W.K., Monahan, J.C., 1992. An improved heteroskedasticity and autocorrelation consistent covariance matrix estimator. Econometrica 60, 953–966.
Bickel, P.J., Bühlmann, P., 1997. Closure of linear processes. Journal of Theoretical Probability 10, 445–479.
Bühlmann, P., 1997. Sieve bootstrap for time series. Bernoulli 3, 123–148.
Bühlmann, P., 1998. Sieve bootstrap for smoothing in non-stationary time series. Annals of Statistics 26, 48–83.
Johansen, S., 1991. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59, 1551–1580.
Kreiss, J.-P., 1992. Bootstrap procedures for AR(∞)-processes. In: Jöckel, K.-H., Rothe, G., Sendler, W. (Eds.), Bootstrapping and Related Techniques. Lecture Notes in Economics and Mathematical Systems, Vol. 376. Springer-Verlag, Heidelberg, pp. 107–113.
Künsch, H.R., 1989. The jackknife and the bootstrap for general stationary observations. Annals of Statistics 17, 1217–1241.
Lahiri, S.N., 1999. Theoretical comparisons of block bootstrap methods. Annals of Statistics 27, 386–404.
Li, H., Maddala, G.S., 1997. Bootstrapping cointegrating regressions. Journal of Econometrics 80, 297–318.
Li, Y., 1994. Bootstrapping cointegrating regressions. Economics Letters 32, 229–233.
Liu, R.Y., Singh, K., 1992. Moving blocks jackknife and bootstrap capture weak dependence. In: LePage, R., Billard, L. (Eds.), Exploring the Limits of Bootstrap. Wiley, New York, pp. 225–248.
Nankervis, J.C., Savin, N.E., 1996. The level and power of the bootstrap t test in the AR(1) model with trend. Journal of Business and Economic Statistics 14, 161–168.
Paparoditis, E., 1996. Bootstrapping autoregressive and moving average parameter estimates of infinite order vector autoregressive processes. Journal of Multivariate Analysis 57, 277–296.
Paparoditis, E., Streitberg, B., 1992. Order identification statistics in stationary autoregressive moving-average models: vector autocorrelations and the bootstrap. Journal of Time Series Analysis 13, 415–434.
Park, J.Y., 1992. Canonical cointegrating regressions. Econometrica 60, 119–143.
Pesaran, M.H., Shin, Y., 1999. An autoregressive distributed lag modelling approach to cointegration analysis. In: Strøm, S. (Ed.), Econometrics and Economic Theory in the 20th Century: The Ragnar Frisch Centennial Symposium. Cambridge University Press, Cambridge.
Phillips, P.C.B., Hansen, B.E., 1990. Statistical inference in instrumental variables regression with I(1) processes. Review of Economic Studies 57, 99–125.
Politis, D.N., Romano, J.P., 1992. A circular block-resampling procedure for stationary data. In: LePage, R., Billard, L. (Eds.), Exploring the Limits of Bootstrap. Wiley, New York, pp. 263–270.
Politis, D.N., Romano, J.P., 1994. The stationary bootstrap. Journal of the American Statistical Association 89, 1303–1313.
Psaradakis, Z., 2000. Bootstrap tests for an autoregressive unit root in the presence of weakly dependent errors. Journal of Time Series Analysis, forthcoming.


Saikkonen, P., 1991. Asymptotically efficient estimation of cointegration regressions. Econometric Theory 7, 1–21.
Shao, J., Tu, D., 1995. The Jackknife and Bootstrap. Springer-Verlag, New York.
Shao, Q., Yu, H., 1993. Bootstrapping the sample means for stationary mixing sequences. Stochastic Processes and their Applications 48, 175–190.
Stock, J.H., Watson, M.W., 1993. A simple estimator of cointegrating vectors in higher order integrated systems. Econometrica 61, 783–820.
Swanepoel, J.W.H., van Wyk, J.W.J., 1986. The bootstrap applied to power spectral density function estimation. Biometrika 73, 135–141.
Vinod, H.D., McCullough, B.D., 1995. Estimating cointegration parameters: an application of the double bootstrap. Journal of Statistical Planning and Inference 43, 147–156.