Journal of Econometrics 48 (1991) 15-27. North-Holland
Fighting the teflon factor: Comparing classical and Bayesian estimators for autocorrelated errors

Peter Kennedy and Daniel Simons
Simon Fraser University, Burnaby, B.C. V5A 1S6, Canada

Received December 1988, final version received November 1989
This paper claims, on the basis of a Monte Carlo study, that current estimators employed in the context of first-order autocorrelated errors should be abandoned in favor of a Bayesian alternative. Frequentists are confronted on their own playing field: The Bayesian estimator employs an ignorance prior and the criterion adopted is the mean square error of the competing estimators. With the predictable exception of the OLS estimator when the autocorrelation parameter is close to zero, the Bayesian estimator dominates the competition by a substantive margin.
1. Introduction

Like Ronald Reagan, classical statistics seems blessed with a teflon coating: Criticism, no matter how potent, does not stick. This paper is an effort to penetrate this teflon factor: It argues that current estimators employed in the context of first-order autocorrelated errors should be abandoned in favor of a Bayesian alternative. This is of course not a new suggestion; Bayesians have for years been trying to persuade nonbelievers to convert, the most recent examples being Poirier (1988), who introduced the teflon analogy (p. 138), and Zellner (1988). Frequentist reaction, such as Pagan's (1988) reply to Poirier, dismisses these arguments on the grounds that they are primarily theological in nature. We eschew this ideological debate; instead, we confront the frequentists on their own playing field, using an ignorance prior and comparing the sampling distribution properties of Bayesian and frequentist formulas. We report below the results of a Monte Carlo study comparing the mean square error of a Bayesian estimator to the mean square errors of a variety of
estimated generalized least squares (EGLS) and related pretest estimators suggested by frequentists for use in the context of suspected first-order autocorrelated errors. Twenty design matrices are employed, some simulated to match data employed in existing Monte Carlo studies, and others taken from a variety of real-world time series data. The results are unequivocal: With only one exception, in every case the Bayesian estimator dominates all the competition by a substantive margin. The sole exception, as would be expected, is the ordinary least squares (OLS) estimator whenever the autocorrelation parameter is very small. This result is not entirely unknown in the literature. In a paper examining the optimal level of significance for the Durbin-Watson test, Fomby and Guilkey (1978) found that a Bayesian alternative compared favorably to the frequentist competition, but this result appears not to have had much impact on the profession: most textbooks ignore this estimator when discussing autocorrelated errors, as do computer packages. An exception illustrates the problem: Although Judge et al. (1985) cite the Fomby and Guilkey result (p. 293), they do not include this estimator in their summary recommendations (p. 331). The purpose of this paper is to extend the Fomby and Guilkey study by comparing their Bayesian estimator to a larger number of competing estimators in the context of a broader range of design matrices. By providing more convincing evidence of the superiority of the sampling distribution properties of this Bayesian estimator, we hope to persuade authors of textbooks and computing packages to incorporate a Bayesian alternative.
2. The Monte Carlo study

Data on the dependent variable y were generated using the relationship y_t = 1.0 + x_t + ε_t, with ε_t = ρε_{t-1} + u_t, where the u_t are iid N(0, 0.0036). Twenty sets of data on x, fixed in repeated samples, were employed, with sample sizes ranging from 10 to 65; they are described in appendix B. The use of a single explanatory variable is consistent with several other studies, such as Griliches and Rao (1969), Nicholls and Pagan (1977), and Fomby and Guilkey (1978). For each x data set, 600 replications were undertaken for each of ten values of ρ, varying by tenths from 0.0 to 0.9. We followed Judge and Bock (1978, ch. 7) and King and Giles (1984) in looking only at positive values of ρ. From these replications, the mean square error of the slope estimate was estimated for each of the ten estimators described below, and was normalized by dividing by the estimated mean square error of the generalized least squares (GLS) estimator (i.e., using the true value of ρ). Examination of the mean square error of the slope estimate, rather than the sum of the mean square errors of the slope and intercept estimates, follows Griliches and Rao (1969) and Griffiths and Beesley (1984).
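The data-generating process above can be sketched in Python as follows. The function name, the default coefficient values, and the choice to draw the initial error from the stationary AR(1) distribution are our assumptions (the paper does not say how the first error was initialized); σ_u = 0.06 gives var(u_t) = 0.0036 as in the study.

```python
import numpy as np

def simulate_y(x, rho, beta0=1.0, beta1=1.0, sigma_u=0.06, rng=None):
    """Generate y_t = beta0 + beta1*x_t + e_t with AR(1) errors
    e_t = rho*e_{t-1} + u_t, u_t iid N(0, sigma_u^2)."""
    if rng is None:
        rng = np.random.default_rng()
    T = len(x)
    u = rng.normal(0.0, sigma_u, T)
    e = np.empty(T)
    # draw the first error from the stationary AR(1) distribution (our assumption)
    if abs(rho) < 1:
        e[0] = rng.normal(0.0, sigma_u / np.sqrt(1.0 - rho**2))
    else:
        e[0] = u[0]
    for t in range(1, T):
        e[t] = rho * e[t - 1] + u[t]
    return beta0 + beta1 * np.asarray(x) + e
```

A Monte Carlo replication would call this once per draw with x held fixed, consistent with the fixed-in-repeated-samples design described above.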
Ten estimators are compared:
1. OLS. The ordinary least squares estimator.

2. DUR. The Durbin two-stage estimator, incorporating the Prais-Winsten transformation for the first observation. This variant of EGLS was chosen because of its good performance in Monte Carlo studies such as Griliches and Rao (1969), Judge and Bock (1978, ch. 7), and Fomby and Guilkey (1978).

3. MLE. The maximum likelihood estimator, assuming normally-distributed errors, as described in Beach and MacKinnon (1978).

4. PTD5. A pretest estimator choosing between OLS and DUR using the Durbin-Watson statistic. Critical values for a one-sided test were taken from the upper bound distribution at the 5% significance level, as in Judge and Bock (1978, ch. 7). We used this approach, rather than an exact Durbin-Watson test, in the belief that this reflects what most practitioners do, following the advice of Judge et al. (1985, p. 330).

5. PTD50. A pretest estimator choosing between OLS and DUR using an exact Durbin-Watson test at the 50% significance level. This follows the Fomby and Guilkey (1978) recommendation that the significance level of pretest estimators should be in the order of 50% rather than the traditional 5%.

6. PTM5. A pretest estimator comparable to PTD5, but choosing between OLS and MLE.

7. PTM50. A pretest estimator comparable to PTD50, but choosing between OLS and MLE.

8. BPTD. A Bayesian pretest estimator consisting of a weighted average of OLS and DUR, where the weight on OLS is the integral between -0.3 and +0.3 of the marginal posterior distribution of ρ using an ignorance prior (see appendix A for the relevant formula). Several Monte Carlo studies, for example Griliches and Rao (1969), Spitzer (1979), and Magee, Ullah, and Srivastava (1987), have suggested that OLS outperforms EGLS methods for values of ρ less than 0.3 in absolute value. Thus the weight placed on OLS in BPTD reflects the Bayesian probability that OLS is preferred to DUR.
Bayesian pretest estimators traditionally employed in this context, such as that advanced by Griffiths and Dao (1980), require an informative prior, taking the form of a prior probability that p = 0. The methodology described above circumvents this need for an informative prior and thus produces an estimating formula palatable to frequentists. This estimator is new to the literature. The main advantage of this pretest estimator over the traditional pretest estimator is that the weighting is a continuous rather than a discontinuous function of the data. [Results of Cohen (1965) and Zaman (1984) suggest that discontinuous functions of the data are inadmissible.]
9. BPTM. A Bayesian pretest estimator consisting of a weighted average of OLS and MLE, where the weights are the same as for BPTD.

10. BAY. The Bayesian estimator of Fomby and Guilkey (1978), calculated by taking a weighted average of forty GLS estimates, with the weights calculated from the marginal posterior distribution of ρ. See appendix A for the relevant formula and related discussion.

It is of interest to consider these estimators as special cases of a weighted average of all possible GLS estimates. The OLS and EGLS estimators each have nonzero weight on only one estimate. The traditional pretest estimators have nonzero weights on two estimates, with the weights discontinuous functions of the data. The Bayesian pretest estimators have nonzero weights on two estimates, with the weights continuous functions of the data. The Bayesian estimator, as operationalized here, has nonzero weights on forty estimates, with the weights continuous functions of the data. This taxonomy suggests a rationale for the superior performance of the Bayesian estimator.
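As a concrete illustration of two of the building blocks above, the following Python sketch implements the Prais-Winsten GLS transformation and a Durbin-style two-stage EGLS estimator for the single-regressor model. The function names and the clipping of the estimated ρ to the stationary region are our additions; details may differ from the exact variants used in the study.

```python
import numpy as np

def prais_winsten_gls(y, x, rho):
    """GLS for y = a + b*x with AR(1)(rho) errors, using the
    Prais-Winsten transformation (first observation retained)."""
    y, x = np.asarray(y), np.asarray(x)
    T = len(y)
    w = np.sqrt(1.0 - rho**2)
    ys = np.empty(T)
    Xs = np.empty((T, 2))
    ys[0] = w * y[0]
    Xs[0] = [w, w * x[0]]           # first row scaled by sqrt(1 - rho^2)
    ys[1:] = y[1:] - rho * y[:-1]   # quasi-differenced observations
    Xs[1:, 0] = 1.0 - rho
    Xs[1:, 1] = x[1:] - rho * x[:-1]
    coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return coef                      # (intercept, slope)

def durbin_two_stage(y, x):
    """Durbin-style two-stage EGLS: estimate rho as the coefficient on
    y_{t-1} in a regression of y_t on a constant, y_{t-1}, x_t, x_{t-1},
    then apply Prais-Winsten GLS with that estimate."""
    y, x = np.asarray(y), np.asarray(x)
    Z = np.column_stack([np.ones(len(y) - 1), y[:-1], x[1:], x[:-1]])
    coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
    rho_hat = float(np.clip(coef[1], -0.99, 0.99))  # keep inside stationary region
    return prais_winsten_gls(y, x, rho_hat)
```

In the taxonomy above, `prais_winsten_gls` evaluated at a single ρ value is one GLS estimate; DUR places all its weight on the one estimate indexed by the data-dependent ρ̂.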
3. Monte Carlo results

For each estimator, the results are measured as the ratio of its estimated MSE to the estimated MSE of the GLS estimator. Space limitations rule out presenting the output for all twenty cases, so to give a flavor of the overall results, table 1 reports the average of these relative MSEs for the twenty data sets. The conclusion is dramatic: The Bayesian estimator not only dominates the competition, it does so by a substantive magnitude, in sharp contrast to other Monte Carlo studies of this nature, which find that the best of the competing estimators do not differ by much, especially when viewed over the entire range of ρ. Here the closest non-OLS competitor to the Bayesian estimator is the Bayesian pretest estimator BPTD, which by a small margin dominates all other estimators (except the Durbin estimator when ρ = 0.9). When ρ = 0, the Bayesian pretest estimator's MSE is 2.6% higher than that of the Bayesian estimator, and this difference rises steadily as ρ increases, to a dramatically large 23% when ρ = 0.9. For ρ = 0.3, the value at which earlier studies found that EGLS begins to beat OLS, the Bayesian estimator beats both OLS and the Bayesian pretest estimator by more than 5%.

The preceding comments relate to the results as averaged over the twenty individual cases. Because this averaging is done over different design matrices and different sample sizes, it should only be viewed as a means of summarizing the overall results. Fortunately, the individual twenty cases exhibit remarkably similar results; looking at them individually suggests very little modification to these general conclusions. To give a sense of this, tables 2 and 3 present the results of cases 16 and 17, reflecting, respectively, one of
Table 1
Average relative mean square errors.

ρ     OLS    DUR    MLE    PTD5   PTD50  PTM5   PTM50  BPTD   BPTM   BAY
0.0   1.000  1.039  1.050  1.016  1.019  1.019  1.022  1.015  1.030  0.989
0.1   1.007  1.050  1.069  1.027  1.031  1.042  1.041  1.023  1.049  0.997
0.2   1.030  1.070  1.089  1.045  1.048  1.069  1.058  1.041  1.065  1.003
0.3   1.064  1.090  1.114  1.081  1.068  1.100  1.084  1.062  1.087  1.009
0.4   1.150  1.115  1.133  1.125  1.105  1.149  1.118  1.092  1.112  1.020
0.5   1.297  1.145  1.170  1.181  1.149  1.196  1.174  1.135  1.147  1.034
0.6   1.387  1.176  1.202  1.218  1.184  1.246  1.220  1.163  1.182  1.049
0.7   1.743  1.240  1.266  1.328  1.258  1.334  1.284  1.236  1.261  1.063
0.8   2.046  1.299  1.330  1.416  1.320  1.421  1.342  1.297  1.322  1.091
0.9   2.799  1.395  1.415  1.585  1.417  1.580  1.438  1.399  1.428  1.136
Table 2
Relative mean square errors: Case 16.

ρ     OLS    DUR    MLE    PTD5   PTD50  PTM5   PTM50  BPTD   BPTM   BAY
0.0   1.000  1.005  1.006  1.001  1.000  1.004  1.002  1.001  1.005  1.003
0.1   1.002  1.011  1.009  1.003  1.013  1.008  1.007  1.004  1.009  1.006
0.2   1.003  1.015  1.014  1.006  1.038  1.019  1.010  1.008  1.014  1.007
0.3   1.012  1.016  1.029  1.015  1.039  1.032  1.033  1.016  1.020  1.008
0.4   1.047  1.031  1.035  1.016  1.089  1.043  1.046  1.022  1.028  1.014
0.5   1.050  1.033  1.041  1.035  1.116  1.061  1.058  1.036  1.030  1.020
0.6   1.092  1.044  1.042  1.038  1.143  1.068  1.067  1.042  1.034  1.022
0.7   1.126  1.064  1.054  1.061  1.187  1.071  1.075  1.064  1.041  1.024
0.8   1.176  1.069  1.077  1.070  1.193  1.091  1.089  1.068  1.049  1.025
0.9   1.252  1.114  1.083  1.113  1.215  1.100  1.100  1.116  1.161  1.032
Table 3
Relative mean square errors: Case 17.

ρ     OLS    DUR    MLE    PTD5   PTD50  PTM5   PTM50  BPTD   BPTM   BAY
0.0   1.000  1.006  1.004  1.011  1.001  1.009  1.003  1.007  1.001  0.940
0.1   1.001  1.022  1.022  1.019  1.011  1.014  1.012  1.010  1.010  0.951
0.2   1.008  1.026  1.036  1.022  1.019  1.030  1.029  1.014  1.022  0.961
0.3   1.024  1.034  1.043  1.034  1.026  1.050  1.038  1.016  1.024  0.970
0.4   1.057  1.082  1.062  1.062  1.075  1.075  1.056  1.056  1.043  0.972
0.5   1.163  1.112  1.114  1.113  1.108  1.127  1.111  1.097  1.106  0.992
0.6   1.123  1.114  1.138  1.130  1.110  1.146  1.144  1.105  1.112  0.995
0.7   1.469  1.160  1.181  1.193  1.160  1.210  1.186  1.157  1.182  0.998
0.8   1.627  1.243  1.239  1.270  1.243  1.254  1.238  1.243  1.236  1.022
0.9   2.245  1.353  1.343  1.396  1.352  1.365  1.344  1.352  1.343  1.088
the worst and one of the best relative performances of the Bayesian estimator. The sample size for both cases is 65. Results for all the individual cases can be found in Simons (1988). Examining the twenty cases separately modifies the overall conclusions noted above in only one respect: in the majority of cases OLS beats the Bayesian estimator by a small amount when ρ is less than 0.1, a result consistent with Fomby and Guilkey (1978). To be explicit, in seven cases BAY dominates OLS for all ρ (cases 1, 7, 9, 11, 14, 17, 19), in three cases for all ρ except ρ = 0 (cases 6, 13, 15), in seven cases for all ρ except ρ ≤ 0.1 (cases 2, 3, 5, 8, 12, 18, 20), in two cases for all ρ except ρ ≤ 0.2 (cases 4, 16), and in one case for all ρ except ρ ≤ 0.3 (case 10). As is reflected in tables 2 and 3, in those cases in which BAY beat OLS for low values of ρ, it usually beat it by a considerable amount, whereas when OLS beat BAY, the difference in their performances was marginal. This is what gives rise to the uniform superiority of BAY in the averaged results. In only two of the twenty cases did any estimator other than OLS beat BAY for any value of ρ, and then only marginally. In case 10, BAY was beaten by PTM5 and PTM50 for ρ = 0. In case 16, as shown in table 2, BAY was beaten by PTD5 for ρ ≤ 0.2, by BPTD for ρ ≤ 0.1, and by PTD50 for ρ = 0.
Table 4
Percent by which mean square error of MLE exceeds that of BAY.

                                  ρ
T      0.0   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9
30     2.1   3.2   5.6   6.0   7.1  10.9  10.9  16.8  17.3  29.6
60     0.2   1.9   1.6   2.9   3.2   4.5   5.3   6.6   7.4   7.9
90     0.1   0.5   0.9   1.1   1.9   2.3   2.7   3.4   3.9   4.1
120    0.0   0.1   0.2   0.7   0.9   1.2   1.5   1.8   2.2   2.5
We were unable to discern any pattern in the design matrices corresponding to cases in which the Bayesian estimator did exceptionally well or poorly relative to its overall performance. Nor were we able to uncover much evidence regarding the influence of sample size. In theory, both EGLS and the Bayesian estimator should have similar asymptotic properties. As the sample size grows, the sampling distribution of EGLS will collapse on that of the GLS estimator, and the marginal posterior density of p will become heavily concentrated over the true value of p, causing the Bayesian estimator also to collapse to the GLS estimator. This of course does not say anything about the relative rates at which they collapse to GLS, but it does lead one to speculate that their relative performances should not be as disparate when the sample size is large as when the sample size is small. We tried to investigate this by calculating the average performance of MLE relative to BAY for the six cases with sample sizes ranging from 19 to 25 and also for the seven cases with sample sizes ranging from 60 to 65. Although the results of this comparison did indicate that the relative superiority of BAY is less when the sample size is larger, the mixing of design matrices in this way renders the comparison unconvincing. To investigate this phenomenon further, we ran additional simulations using x values generated by the data-generating mechanism of case 3 (see appendix B) and estimated the percent by which the mean square error of the MLE estimator exceeds that of BAY for sample sizes T = 30, 60, 90, and 120. The results, reported in table 4, indicate that although the Bayesian estimator maintains its superiority as the sample size rises, its relative superiority falls. All the results reported thus far are based on Monte Carlo studies in which the errors came from normal distributions. 
Since the formula for the Bayesian estimator is derived using the assumption that the errors are normal, and since in the real world errors may not be distributed normally, it is of interest to know whether the results reported here are sensitive to this assumption. Regardless of the way in which the errors are distributed, the GLS estimator is best linear unbiased. Since our Bayesian estimator is a weighted average
Table 5
Relative mean square errors (entries are MSEs relative to GLS for ρ = 0.0 through 0.9).

Case A: Cauchy errors, T = 20, design matrix case 4
ρ     0.0    0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
DUR   1.300  1.386  1.410  1.471  1.583  1.623  1.741  1.890  1.956  2.065
MLE   1.380  1.399  1.425  1.470  1.565  1.592  1.763  1.782  1.887  1.975
BAY   1.360  1.381  1.389  1.403  1.466  1.513  1.687  1.711  1.791  1.878

Case B: Mixed normal errors, T = 30, design matrix case 3
DUR   1.001  1.285  1.291  1.617  1.742  1.783  1.958  2.642  4.130  4.642
MLE   1.005  1.379  1.394  1.553  1.579  1.764  1.911  2.389  4.409  4.980
BAY   1.001  1.085  1.266  1.385  1.423  1.598  1.894  2.121  2.998  3.096

Case C: Uniform errors, T = 60, design matrix case 15
DUR   1.104  1.005  1.014  1.036  1.049  1.117  1.122  1.129  1.143  1.185
MLE   1.049  1.012  1.017  1.056  1.088  1.129  1.138  1.142  1.159  1.199
BAY   1.018  1.010  1.014  1.023  1.042  1.099  1.111  1.127  1.398  1.173

Case D: t-distribution errors, T = 20, design matrix case 4
DUR   1.300  1.386  1.410  1.471  1.583  1.623  1.741  1.890  1.956  2.065
MLE   1.380  1.399  1.425  1.470  1.565  1.592  1.763  1.782  1.887  1.975
BAY   1.360  1.381  1.389  1.403  1.466  1.513  1.687  1.711  1.791  1.878
of forty GLS estimators, it should perform well as long as the weighting system is suitable. The weighting system comes from the marginal posterior distribution of ρ, calculated using the assumption of normally distributed errors, so it is possible that the performance of the Bayesian estimator could deteriorate as the errors depart from normality. But what about its competition? The best linear unbiased estimator, GLS, is not operational (because ρ is unknown). Its feasible counterpart, EGLS, requires that ρ be estimated; when the errors are distributed nonnormally, the estimator of ρ may deteriorate, causing the performance of EGLS to deteriorate. The maximum likelihood estimator, for example, should deteriorate because its implicit estimate for ρ is based on the assumption of normally distributed errors. To investigate this question, we repeated the Monte Carlo study for a variety of cases using errors generated from Cauchy distributions, mixtures of normal distributions, uniform distributions, and t distributions. The mean squared errors for the DUR, MLE, and BAY estimators relative to those of the GLS estimator were estimated, with some of the results reported in table 5. Comparison of these results with the normally distributed error results reported in Simons (1988) shows that, as expected, all three of these estimators have deteriorated relative to GLS. It is clear from table 5, however, that the Bayesian estimator retains its superiority relative to the EGLS estimators DUR and MLE.
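A minimal sketch of how such nonnormal innovations might be drawn is given below. The mixture weights, degrees of freedom, and scalings are illustrative assumptions; the paper does not report its exact settings for these distributions.

```python
import numpy as np

def draw_innovations(dist, T, rng, scale=0.06):
    """Draw T iid innovations u_t from one of the error families used in
    the robustness experiments. Mixture weights, df, and scaling are
    illustrative assumptions, not the paper's settings."""
    if dist == "normal":
        return rng.normal(0.0, scale, T)
    if dist == "cauchy":
        return scale * rng.standard_cauchy(T)
    if dist == "mixed_normal":
        # contaminated normal: 90% N(0, scale^2), 10% N(0, (5*scale)^2)
        wide = rng.random(T) < 0.10
        return np.where(wide, rng.normal(0.0, 5 * scale, T),
                        rng.normal(0.0, scale, T))
    if dist == "uniform":
        # uniform on (-scale*sqrt(3), scale*sqrt(3)), variance = scale^2
        half = scale * np.sqrt(3.0)
        return rng.uniform(-half, half, T)
    if dist == "t":
        return scale * rng.standard_t(df=3, size=T)
    raise ValueError(f"unknown distribution: {dist}")
```

These draws would replace the normal u_t in the AR(1) error recursion of section 2, leaving the rest of the experiment unchanged.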
4. Conclusions

The results of this Monte Carlo study suggest that the Bayesian estimator, operationalized as a weighted average of only forty GLS estimates, is unequivocally superior to its frequentist competition, as measured by the frequentist mean square error criterion. This superiority is maintained in the face of nonnormal errors and increasing sample size, although in the latter case it diminishes as the sample size increases. These results tempt us to speculate that Bayesian estimators have similar properties when applied to other types of nonspherical errors. Surekha and Griffiths (1984) and Kennedy and Adjibolosoo (1990), for example, find this to be the case for the heteroskedastic error models they examine. But Simons (1988) finds that in the case of MA(1) errors, although the Bayesian estimator dominates the competition whenever the MA parameter is greater than about 0.3, it is in turn dominated by most competitors when this parameter is less than about 0.2. Regardless of the outcome of additional research, the results reported here on the question of first-order autocorrelated errors suggest very strongly that textbooks and computer packages should no longer neglect the Bayesian alternative.
Appendix A

The Bayesian estimator employed in this paper is that used by Fomby and Guilkey (1978), a textbook exposition of which is in Judge et al. (1985, pp. 291-293). Our model is written in the usual notation as y = Xβ + ε, with ε_t = ρε_{t-1} + u_t and u_t ~ N(0, σ²) for t = 1, ..., T. We assume that all parameters are distributed independently a priori, and, following Jeffreys' rule, we assume that the β parameters and ln σ are distributed uniformly over the real line, and ρ has a beta distribution with parameters (1/2, 1/2) over the range |ρ| < 1. This gives rise to the noninformative prior density

g(ρ, β, σ) ∝ (1 - ρ²)^{-1/2} σ^{-1},

which produces the posterior density

g(ρ, β, σ | y) ∝ σ^{-(T+1)} exp[-(y* - X*β)'(y* - X*β)/2σ²],

with y* = Py and X* = PX, where P is the Prais-Winsten transformation matrix. Integrating out σ, we get

g(ρ, β | y) ∝ (RSS)^{-T/2} [1 + (β - β*)'X*'X*(β - β*)/RSS]^{-T/2},

where β* = (X*'X*)^{-1}X*'y* and RSS = (y* - X*β*)'(y* - X*β*). The Bayesian point estimate of β is the mean of the posterior density for β, E(β|y), given by

E(β|y) = ∫ β* g(ρ|y) dρ,

which can be interpreted as a weighted average of GLS estimators (β*'s) with weights given by g(ρ|y), the marginal posterior for ρ. This density, obtained by integrating out β from g(ρ, β | y) above, is given by

g(ρ|y) ∝ (RSS)^{-(T-2)/2} |X*'X*|^{-1/2}.

The integral above for E(β|y) is not analytically tractable, so it is calculated by numerical integration, which, as noted earlier, amounts to taking a weighted average of GLS estimators. To operationalize this, a decision must be made concerning the size of steps in ρ used in the numerical integration, or, what is the same thing, how many GLS estimators to include in the weighted average. Preliminary Monte Carlo work suggested that there is no substantive benefit (in terms of reduced MSE) from averaging more than forty GLS estimates. To contain the computer costs of our Monte Carlo study, and to emphasize the feasibility of building this Bayesian estimator into computer packages, we chose to average only forty β*'s. To be explicit, this was accomplished in the following way. The ρ range from -0.99 to +0.99 was divided into forty equal intervals, with the forty values of ρ for the GLS estimates being the mid-points of these intervals. Then numerical integration of (RSS)^{-(T-2)/2} |X*'X*|^{-1/2} was performed to find the normalizing constant for g(ρ|y). The area under g(ρ|y) was calculated for each ρ interval, producing the forty weights for the averaging.
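The forty-point operationalization described above can be sketched in Python roughly as follows. The function name is ours, and this is an illustrative implementation of the formulas in this appendix (for the single-regressor model of the Monte Carlo study) rather than the authors' code; weights are computed in logs for numerical stability.

```python
import numpy as np

def bayes_ar1_estimator(y, x, n_points=40):
    """Fomby-Guilkey-style Bayesian estimator: a weighted average of
    Prais-Winsten GLS estimates over a grid of rho values, with weights
    proportional to g(rho|y) ∝ RSS^{-(T-2)/2} |X*'X*|^{-1/2}."""
    y, x = np.asarray(y), np.asarray(x)
    T = len(y)
    # mid-points of n_points equal intervals on (-0.99, 0.99)
    edges = np.linspace(-0.99, 0.99, n_points + 1)
    grid = 0.5 * (edges[:-1] + edges[1:])
    betas, logw = [], []
    for rho in grid:
        # Prais-Winsten transformation at this rho
        w0 = np.sqrt(1.0 - rho**2)
        ys = np.empty(T)
        Xs = np.empty((T, 2))
        ys[0] = w0 * y[0]
        Xs[0] = [w0, w0 * x[0]]
        ys[1:] = y[1:] - rho * y[:-1]
        Xs[1:, 0] = 1.0 - rho
        Xs[1:, 1] = x[1:] - rho * x[:-1]
        XtX = Xs.T @ Xs
        beta = np.linalg.solve(XtX, Xs.T @ ys)   # GLS estimate beta*(rho)
        resid = ys - Xs @ beta
        rss = float(resid @ resid)
        # log of RSS^{-(T-2)/2} |X*'X*|^{-1/2} (equal interval widths cancel)
        logw.append(-(T - 2) / 2 * np.log(rss)
                    - 0.5 * np.linalg.slogdet(XtX)[1])
        betas.append(beta)
    logw = np.array(logw) - max(logw)
    wts = np.exp(logw)
    wts /= wts.sum()                              # normalize the posterior weights
    return wts @ np.array(betas)                  # posterior-mean (intercept, slope)
```

The same weights also give the BPTD/BPTM pretest weight of section 2 as the sum of `wts` over grid points with |ρ| < 0.3.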
Appendix B

Listed below are the twenty sets of data used for the explanatory variable. T is the sample size.

1. The x2 variable used by Magee, Ullah, and Srivastava (1987). T = 10.
2. Rescaled U.S. investment expenditure, as used by Zellner and Tiao (1964). T = 15.
3. A trended variable x_t = exp(0.04t) + u_t, where u_t ~ N(0, 0.009), as used by Beach and MacKinnon (1978) and Griffiths and Beesley (1984). T = 30.
4. The Australian rate of inflation, taken from Clements and Taylor (1987). T = 20.
5. A variable evolving as x_t = 0.4x_{t-1} + u_t, where u_t ~ N(0, 1), as used by Griliches and Rao (1969) and Fomby and Guilkey (1978). T = 35.

The next eight data sets are taken from Maddala (1988, pp. 147-157).

6. Unemployment rates in the U.K., 1920-1938. T = 19.
7. The three-month U.S. T-bill rate, monthly, January 1980 to September 1983. T = 45.
8. Per capita U.S. food production, 1922-1941. T = 20.
9. Real U.S. disposable income, 1922-1941. T = 20.
10. U.S. money stock M1, 1959-1983. T = 25.
11. U.S. domestic nonfinancial sector debt (December average), 1959-1983. T = 25.
12. Canadian estimated real interest rate, quarterly, 1955-1 to 1978-2. T = 50.
13. Canadian housing starts, quarterly, 1955-1 to 1978-2. T = 50.

The last seven data sets were taken from the Cansim tapes.

14. Percentage annual change in the Canadian money supply, quarterly, 1957-4 to 1973-4. T = 65.
15. Direct investment in Canada, quarterly, 1970-1 to 1984-4. T = 60.
16. Percentage annual change in Australian money supply, quarterly, 1957-4 to 1973-4. T = 65.
17. Percentage annual change in Australian consumer price index, quarterly, 1961-4 to 1977-4. T = 65.
18. Australian gross fixed capital formation, quarterly, 1961-4 to 1977-4. T = 65.
19. Canadian gross national expenditure, quarterly, 1961-1 to 1975-4. T = 60.
20. Percentage per annum changes in Canadian consumer prices, quarterly, 1957-4 to 1974-4. T = 65.
References

Beach, C.M. and J.G. MacKinnon, 1978, A maximum likelihood procedure for regression with autocorrelated errors, Econometrica 46, 51-58.
Clements, K.W. and J.C. Taylor, 1987, The pattern of financial asset holdings in Australia, in: M.L. King and D.E.A. Giles, eds., Specification analysis in the linear model (Routledge & Kegan Paul, London) 268-288.
Cohen, A., 1965, Estimates of linear combinations of the parameters in the mean vector of a multivariate distribution, Annals of Mathematical Statistics 36, 78-87.
Fomby, T.B. and D.K. Guilkey, 1978, On choosing the optimal level of significance for the Durbin-Watson test and the Bayesian alternative, Journal of Econometrics 8, 203-213.
Griffiths, W.E. and P.A.A. Beesley, 1984, The small sample properties of some preliminary test estimators in a linear model with autocorrelated errors, Journal of Econometrics 25, 49-61.
Griffiths, W.E. and D. Dao, 1980, A note on a Bayesian estimator in an autocorrelation error model, Journal of Econometrics 12, 390-392.
Griliches, Z. and P. Rao, 1969, Small sample properties of several two-stage regression methods in the context of autocorrelated errors, Journal of the American Statistical Association 64, 253-272.
Judge, G.G. and M.E. Bock, 1978, The statistical implications of pre-test and Stein-rule estimators in econometrics (North-Holland, Amsterdam).
Judge, G.G., W.E. Griffiths, H. Lütkepohl, and T. Lee, 1985, The theory and practice of econometrics, 2nd ed. (Wiley, New York, NY).
Kennedy, P.E. and S. Adjibolosoo, 1990, More evidence on the use of Bayesian estimators for nonspherical errors, Journal of Quantitative Economics, forthcoming.
King, M.L. and D.E.A. Giles, 1984, Autocorrelation pre-testing in the linear model: Estimation, testing and prediction, Journal of Econometrics 25, 35-48.
Maddala, G.S., 1988, Introduction to econometrics (McGraw-Hill, New York, NY).
Magee, L., A. Ullah, and V.K. Srivastava, 1987, Efficiency of estimators in the regression model with first-order autoregressive errors, in: M.L. King and D.E.A. Giles, eds., Specification analysis in the linear model (Routledge & Kegan Paul, London) 81-98.
Nicholls, D.F. and A.R. Pagan, 1977, Specification of the disturbance for efficient estimation: An extended analysis, Econometrica 45, 211-217.
Pagan, A.R., 1988, Comment on Poirier: Dogma or doubt, Journal of Economic Perspectives 2, 153-158.
Poirier, D., 1988, Frequentist and subjectivist perspectives on the problems of model building in economics, Journal of Economic Perspectives 2, 121-144.
Simons, D., 1988, The sampling distribution properties of the Bayesian estimator in the case of autocorrelated errors: A Monte Carlo study, Ph.D. dissertation (Simon Fraser University, Burnaby, BC).
Spitzer, J.J., 1979, Small-sample properties of nonlinear least squares and maximum likelihood estimators in the context of autocorrelated errors, Journal of the American Statistical Association 74, 41-47.
Surekha, K. and W.E. Griffiths, 1984, A Monte Carlo comparison of some Bayesian and sampling theory estimators in two heteroscedastic error models, Communications in Statistics - Simulation and Computation 13, 85-105.
Zaman, A., 1984, Avoiding model selection by the use of shrinkage techniques, Journal of Econometrics 25, 73-85.
Zellner, A., 1988, Bayesian analysis in econometrics, Journal of Econometrics 37, 27-50.
Zellner, A. and G.C. Tiao, 1964, Bayesian analysis of the regression model with autocorrelated errors, Journal of the American Statistical Association 59, 763-778.