International Journal of Forecasting 7 (1991) 17-30
North-Holland

Forecasting with vector ARMA and state space methods

Celal Aksu
Accounting Department, School of Business and Management, Temple University, Philadelphia, PA 19122, USA

Jack Y. Narayan
Department of Mathematics, State University of New York at Oswego, Oswego, NY 13126, USA
Abstract: The theory and practice of modeling with vector ARMA and state space methods are reviewed. Currently available computer software packages for these techniques are discussed and compared. Further, the model identification and estimation procedures employed by these packages are illustrated in detail using a data set consisting of monthly observations of '90-day treasury bill rates' and 'changes in money supply'. In addition, the usefulness of the vector ARMA and state space methods is tested empirically by evaluating the out-of-sample one-step-ahead forecasting performance of the univariate and bivariate models of treasury bill rates obtained from each method/package.

Keywords: Forecast evaluation, Vector ARMA methods, State space methods, Forecast accuracy, Vector ARMA and state space software.
1. Introduction

Univariate autoregressive moving average (ARMA) time series models, mainly due to the contributions of Box and Jenkins (1970), are widely used in practice for forecasting. The availability of microcomputer packages and the recent M-competition have undoubtedly contributed to this use. However, vector autoregressive moving average (vector ARMA) and state space time series models have received relatively little attention from practitioners even though they permit the inclusion of explanatory variables and permit testing for lead, lag, independent, contemporaneous, and feedback relationships among them. This may be attributed to: (1) the unavailability of a simple modeling procedure to identify the underlying process in practice, (2) the perceived difficulty in utilizing the available computer software and the lack of a detailed comparative evaluation of such software, and (3) the preconceived notion that the expected gain in accuracy from these complicated models may not justify the effort involved in using them.

The purpose of this paper is: (1) to review vector ARMA and state space techniques, (2) to discuss, compare, and evaluate the currently available computer software packages, and (3) to illustrate the model identification and estimation procedures of these techniques and provide some empirical evidence on the superiority of the multivariate models over their univariate counterparts in terms of one-step-ahead out-of-sample forecasting accuracy.

We discuss the vector ARMA and state space methods in the next section. It should be noted that, in spite of their theoretical equivalence, these two multivariate methods differ with respect to the approaches used to determine the order of the model, their numerical stability, the parameter estimation methods utilized, and their sensitivity to model misspecifications.
© 1991 Elsevier Science Publishers B.V. (North-Holland)
Detailed expositions of the vector ARMA models can be found in Quenouille (1957), Hannan (1970), Granger and Newbold (1977), Tiao and Box (1981), Jenkins and Alavi (1981), and Tiao and Tsay (1983, 1985). Since then, a number of applications with mixed success have appeared (Umashankar and Ledolter, 1983; Downes and Rocke, 1983; Hillmer et al., 1983; Riise and Tjostheim, 1984; James et al., 1985; Kling and Bessler, 1985; Cholette and Lamy, 1986; Pankratz, 1989; Lin, 1989).
Expositions of the state space approach to multivariate forecasting can be found in Akaike (1974a, 1974b, 1975, 1976), Mehra (1979, 1982), Kitagawa (1981), Kitagawa and Gersch (1984), and Aoki (1987). Its applications include Fey and Jain (1982), Narayan and Aksu (1985), Sastri (1985), Nelson (1987), Koehler and Murphree (1988), Gunter and Aksu (1989), and Aksu and Narayan (1990).
In this paper, the vector ARMA analysis is carried out using the SCA Statistical System of Scientific Computing Associates (Liu et al., 1986) and the MTS package of Automatic Forecasting Systems (Reilly, 1986). The former implements the approach developed by Tiao and Box (1981), whereas the latter implements a fast estimation method developed by Spliid (1983). The state space analysis is carried out using the state space procedure of SAS/ETS (SAS Institute, Inc., Version 5, 1984) and the FORECAST MASTER package of SSI (Scientific Systems, Inc., 1986). The former is an implementation of Akaike and Mehra's work, while the latter represents a modification of their work by R. Goodridge and W. Larimore, formerly associated with SSI.
The rest of the paper is arranged as follows: Section 2 reviews the vector ARMA and state space methods, Section 3 includes a comparative discussion of the mainframe and microcomputer software packages for both methods, Section 4 reports the empirical results, and Section 5 provides some conclusions and recommendations.
2. The vector ARMA and state space methods

2.1. The vector ARMA method
The vector ARMA models described below are based on work which was developed initially by Tiao and Box (1981). A vector ARMA model takes the form

    φ_p(B) Φ_P(B^s) Y_t = C + θ_q(B) Θ_Q(B^s) ε_t,    (1)

where Y_t = (Y_1t, ..., Y_kt)' is a stationary k × 1 vector of k time series each having n observations, C is a k × 1 vector of constants, and ε_t = (ε_1t, ..., ε_kt)' is a sequence of white noise vectors, independently and identically distributed as multivariate normal with mean 0 and covariance matrix Σ. The k × k matrices φ_p(B) and θ_q(B) are nonseasonal matrix polynomials in the back shift operator B (with B^j Y_t = Y_{t-j}) and have the form

    φ_p(B) = I - φ_1 B - φ_2 B^2 - ... - φ_p B^p

and

    θ_q(B) = I - θ_1 B - θ_2 B^2 - ... - θ_q B^q,

where I is the k × k identity matrix and the φ's and θ's are k × k matrices. Finally, Φ_P(B^s) and Θ_Q(B^s) are seasonal k × k matrix polynomials in B^s and have the form

    Φ_P(B^s) = I - Φ_1 B^s - Φ_2 B^{2s} - ... - Φ_P B^{Ps}

and

    Θ_Q(B^s) = I - Θ_1 B^s - Θ_2 B^{2s} - ... - Θ_Q B^{Qs},
where the Q’s and 8’s are k x k matrices, and s is the seasonal period. Thus, equation (1) is the matrix representation of the familiar univariate ARMA model ~p(B)QjF(B.~)~=C+~~(B)O,(B”)E,. To ensure the properties
of stationarity and invertibility, all roots of the determinantal polynomials I+P(B)I? I@P(BS)I, l@,(B) I? and I@,( B”) 1 are required to lie outside the unit circle. We refer the reader to Tiao and Box (1981) for more details. As an example, consider the following vector ARMA model where q = & - C is the vector of deviations from some origin C: (I-+,B)I:=
(Z-fliB)~,,
which is equivalent to
Causality and feedback relationships may be investigated by examining the off-diagonal elements. For example, if φ_21 = θ_21 = 0, then Y_2 affects Y_1 but not vice versa. If the off-diagonal elements are all zero, then the two series are not cross correlated except possibly at lag 0, and the model is referred to as a diagonal multivariate time series model (see Umashankar and Ledolter, 1983). Contemporaneous relationships can be detected by checking for the significance of the off-diagonal elements of the error correlation matrix.

2.2. State space method

Let Y_t be a stationary multivariate time series of dimension k. Then the state space representation of Y_t is given by

    Z_{t+1} = F Z_t + G ε_{t+1}    (Z_t: p × 1, F: p × p, G: p × k, ε_{t+1}: k × 1),
    Y_t = H Z_t                    (H: k × p),    (2)
where Z_t, F, and G are the state vector, transition matrix, and input matrix, respectively. The first equation specifies how the state vector evolves through time, while the second equation specifies the relation between the time series and the state vector. An alternative state space representation (Z_{t+1} = F Z_t + G ε_t; Y_t = H Z_t + ε_t) is also in common use (e.g., Aoki, 1987). Akaike (1976) uses the first representation, which is the one adopted by SAS. Z_t contains the set of present and past information that is correlationally relevant for forecasting; its first k components comprise Y_t. ε_{t+1} is a sequence of independent, zero-mean random vectors (called the innovation of Y_{t+1}) with constant covariance matrix. H, under the assumption of nonsingularity of the covariance matrix, takes the form [I 0], where I is a (k × k) identity matrix and 0 is a (k × (p - k)) zero matrix. The upper (k × k) submatrix of G is an identity matrix.

The above state space representation of Y_t is not unique: multiplication of (2) by any nonsingular matrix will yield another state space representation that is also valid. However, Akaike (1976) showed that a multivariate linear stochastic system has a canonical representation which is unique, and identified the state space model in its canonical form. The identification of the canonical state space model is accomplished in two steps. The first step involves the determination of the amount of past information to be used in the canonical correlation analysis. This is achieved by fitting successively higher order autoregressive models to the observed series and computing the Akaike Information Criterion (AIC) for each fitted model (Akaike, 1974b). The optimum lag into the past (p) is chosen as the order of the autoregressive model for which the AIC is minimum. The second step involves the selection of the state vector via canonical correlation analysis between the set of present and past values
    {Y1_t, Y2_t, ..., Yk_t, Y1_{t-1}, ..., Yk_{t-1}, ..., Yk_{t-p}}
and the set of present and future values

    {Y1_t, Y2_t, ..., Yk_t, Y1_{t+1|t}, ..., Yk_{t+1|t}, ..., Yk_{t+p|t}},

where Yi_{t+r|t} denotes the conditional expectation of Yi_{t+r} at time t, that is, E[Yi_{t+r} | Yi_t, Yi_{t-1}, ...]. The
two sets are referred to as the data space and predictor space, respectively. Note that the predictor space does not include the conditional expectations Yi_{t+τ|t} for τ > p since, for any vector ARMA(p, q) process, Yi_{t+τ|t} for τ > p is completely determined by Yi_t, Yi_{t+1|t}, ..., Yi_{t+p|t}. Hence, once the optimum order, p, of the autoregressive process is chosen as described above, the predictor space consists of the conditional expectations Yi_t, Yi_{t+1|t}, ..., Yi_{t+p|t}. Furthermore, if the coefficient matrices of the vector ARMA(p, q) process are less than full rank, then it is possible for some components of the predictor space to be linearly dependent. The state vector is made up of the linearly independent components of the predictor space, and canonical correlation analysis is used to identify these components. The components Y1_t, ..., Yk_t are always part of the state (i.e., the minimum order of the state vector is k), and the remaining components of the predictor space are evaluated successively for inclusion in the state vector. The canonical correlation coefficients are computed for the sample covariance matrices of the sets of a successively increasing number of present and future values and the fixed set of present and past values. If the smallest canonical correlation coefficient of the sample covariance matrix that corresponds to the component being evaluated for inclusion in the state vector at the current step is non-zero, then that particular component is included in the state vector.
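The first identification step, choosing the lag p by minimizing the AIC over successively higher order autoregressive fits, can be sketched in a few lines. The sketch below is a numpy-only illustration, not the SAS or SCA implementation; the function name, the simulated bivariate VAR(2) series, and the range of candidate orders are our own assumptions.

```python
import numpy as np

def var_aic(y, p):
    """AIC of a VAR(p) model fitted to the k-variate series y by least squares."""
    n, k = y.shape
    # Regressors: p stacked lags; note the effective sample shrinks with p.
    X = np.hstack([y[p - j - 1:n - j - 1] for j in range(p)])
    Y = y[p:]
    B = np.linalg.lstsq(X, Y, rcond=None)[0]
    resid = Y - X @ B
    sigma = resid.T @ resid / len(Y)
    # AIC = n * log|Sigma| + 2 * (number of estimated parameters)
    return len(Y) * np.log(np.linalg.det(sigma)) + 2 * p * k * k

rng = np.random.default_rng(1)
# Simulate a bivariate VAR(2) so the criterion has a true order to recover.
phi1 = np.array([[0.4, 0.1], [0.0, 0.3]])
phi2 = np.array([[0.2, 0.0], [0.1, 0.2]])
y = np.zeros((3000, 2))
e = rng.standard_normal((3000, 2))
for t in range(2, 3000):
    y[t] = phi1 @ y[t - 1] + phi2 @ y[t - 2] + e[t]

aics = {p: var_aic(y, p) for p in range(1, 7)}
p_opt = min(aics, key=aics.get)
print(p_opt)
```

With a substantial second-lag coefficient matrix, the AIC at order 1 is clearly larger than at order 2, so the selected order is at least 2.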
It should be noted that canonical correlation analysis is not the only way to select the state vector. For a discussion of two other alternative ways, we refer the reader to Aoki (1987).

Following the selection of the state vector, the F and G matrices are estimated. The optimum autoregressive model provides an estimate of the input matrix G and the variance matrix of the innovation. An estimate of the F matrix is obtained during the canonical correlation analysis. These estimates can be used directly in the forecasting procedure or as initial values in a maximum likelihood procedure for obtaining the final efficient estimates.

The theoretical equivalence of (vector) ARMA and state space models is well documented in the literature and thus will not be demonstrated here. Instead, we refer the reader to Akaike (1974a, 1975, 1976), Aksu and Narayan (1985, 1990), and Aoki (1987) for detailed illustrations of converting different (vector) ARMA models into their corresponding state space forms, and vice versa.
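A small numerical sketch of this equivalence may nevertheless be helpful. Under the innovations representation (2), a univariate ARMA(1, 1) model y_t = φ y_{t-1} + ε_t - θ ε_{t-1} can be written with state Z_t = (y_t, y_{t+1|t})', F = [[0, 1], [0, φ]], G = (1, φ - θ)', and H = (1, 0). The parameter values below are arbitrary, and the check is ours, not taken from any of the packages discussed.

```python
import numpy as np

rng = np.random.default_rng(2)
phi, theta = 0.7, 0.4
n = 200
eps = rng.standard_normal(n)

# Direct ARMA(1,1) recursion: y_t = phi*y_{t-1} + eps_t - theta*eps_{t-1},
# started from y_0 = eps_0.
y_arma = np.zeros(n)
y_arma[0] = eps[0]
for t in range(1, n):
    y_arma[t] = phi * y_arma[t - 1] + eps[t] - theta * eps[t - 1]

# The same model in state space form (2), with state Z_t = (y_t, y_{t+1|t})'.
F = np.array([[0.0, 1.0],
              [0.0, phi]])
G = np.array([1.0, phi - theta])
H = np.array([1.0, 0.0])
Z = np.zeros(2)
y_ss = np.zeros(n)
for t in range(n):
    Z = F @ Z + G * eps[t]   # state transition driven by the innovation
    y_ss[t] = H @ Z          # observation equation

print(np.max(np.abs(y_arma - y_ss)))  # essentially zero
```

Both recursions reproduce the same sample path up to floating point rounding, which is the equivalence in action for the simplest nontrivial case.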
3. Discussion of computer software
The vector ARMA procedure of Scientific Computing Associates (hereafter SCA-VARMA) is designed to model and forecast stationary multivariate time series on mainframe computers in a non-automatic mode. Data with seasonal variation can be modeled without being seasonally adjusted. A nice feature of the SCA system is that the roots of the AR and MA polynomials are available for checking the assumptions of stationarity and invertibility.

In the identification phase, the sample auto- and cross-correlation matrices (CCM) and the sample partial autoregression matrices (PARM) play the role of the sample autocorrelation and sample partial autocorrelation functions in univariate time series analysis. However, unlike the partial autocorrelations in the univariate case, the elements in the PARM may exceed one in absolute value. The sample CCM are useful in characterizing low order vector MA processes, while the sample PARM are useful in identifying low order vector AR processes. The identification of mixed processes is more difficult. The extended cross-correlation matrices (ECCM) introduced by Tiao and Tsay (1983) are sometimes useful in the identification of such processes. In addition to the ECCM method, Tiao and Tsay (1985) introduced the concept of smallest canonical correlation analysis (SCAN) to assist in the task of identifying a mixed process. Both the ECCM and SCAN methods are available in the SCA statistical system.

Low order vector MA processes are characterized by a 'cutting off' of the sample CCM after q lags. A coefficient matrix is considered insignificant if each entry is less than twice its standard error. However,
unless all entries are significant, there are no statistics available to the user for judging the significance of the whole coefficient matrix. Tiao and Box (1981) define the PARM at lag l, P(l), as the last coefficient matrix when an AR(l) model is fitted to the data. This means that, for an AR(p) process, P(l) = 0 for l > p, which makes the PARM a useful tool in identifying the order of vector AR processes. More specifically, a 'die out' pattern exhibited by the sample PARM suggests a vector AR process, the order of which is given by the last significant autoregression matrix. In the SCA system, the order of the AR process is determined by testing the null hypothesis P(l) = 0 against the alternative P(l) ≠ 0 when an AR(l) model is fitted, using the statistic M(l), which is asymptotically distributed as χ² with k² degrees of freedom under the null hypothesis. We refer the reader to Tiao and Box (1981) for more details.

Once the model has been tentatively identified, efficient estimates for the parameter matrices are obtained by maximizing either a 'conditional' or 'exact' likelihood function. It is generally agreed that estimates for the moving average parameters in the exact likelihood approach are superior but more costly.

Diagnostic tests for model misspecification can be carried out using several methods. First, the CCM of the residual series are computed and checked for significant elements. The model is considered adequate if these matrices represent a white noise process. Second, an eigenvalue analysis can be carried out on the error covariance matrix to check for linear dependencies (contemporaneous or lagged) among the series. The analysis enables one to detect such dependencies and limit the subsequent analysis to linearly independent series.

A major difficulty encountered in vector ARMA modeling is the large number of parameters to be estimated. This number varies directly as the square of the number of series modeled.
As a result, the degrees of freedom are quickly exhausted for a model with a modest number of coefficient matrices. Although the procedure is capable of handling several series, the restriction to zero of insignificant parameters can sometimes be a frustrating experience. Another problem is that of deciding whether or not differencing operator(s) should be used in order to achieve stationarity, an action which is often executed in univariate modeling of nonstationary series. As was pointed out by Granger (1981) and Engle and Granger (1987), the vector process X_t, as well as all the individual series X_it which comprise it, may be integrated while there exist linear combination(s) of the levels which are stationary (referred to as a co-integrated system). In such a case, differencing all variables would make the system noninvertible, i.e., the system would be over-differenced. Recent research on co-integration provides some procedures for identifying and testing the existence and dimension of co-integration (Engle and Granger, 1987; Johansen, 1988; Phillips and Ouliaris, 1988). None of the packages provide any procedures on co-integration. Finally, for series which exhibit nonconstant variance, the procedure does not provide any information that would enable the user to determine whether a power transformation (such as the Box-Cox (1964) type) is necessary.

The vector ARMA package of Automatic Forecasting Systems Inc. (hereafter AFS-VARMA) is designed for use on IBM microcomputers and compatibles. The package is menu driven, asking the user to choose/specify options/values in response to prompts. It permits modeling both univariate and multivariate series in either the automatic or non-automatic mode. In the former mode, the identification, estimation, and forecasting are done entirely by the package. The user has to supply only the time series data, periodicity, and stationarity transformations.
In the non-automatic mode, one can study the sample autocorrelation matrices and the sample partial lag autocorrelation matrices (PLCM). The computation of the PLCM is based on the work of Heyse and Wei (1984) and differs from the PARM used by SCA. The coefficients in the PARM may exceed unity in absolute value, whereas the coefficients in the PLCM are restricted to lie in the interval [-1, 1]. In particular, Heyse and Wei, analogous to the PACF in the univariate case, define the PLCM as the autocorrelation between the elements of two vectors in the time series after removing the linear dependence of each on the vectors at intervening lags. In other words, the PLCM is defined as the autocorrelation matrix between the elements of Y_t and Y_{t+l} after removing the linear dependence of each on the
intervening vectors Y_{t+1}, Y_{t+2}, ..., Y_{t+l-1}. That is, letting

    Y_{t+l} = α_{l-1,1} Y_{t+1} + ... + α_{l-1,l-1} Y_{t+l-1} + u_{l-1,t+l}

and

    Y_t = β_{l-1,1} Y_{t+1} + ... + β_{l-1,l-1} Y_{t+l-1} + v_{l-1,t},

the PLCM at lag l, P(l), is defined as the autocorrelation between the elements of the residuals v_{l-1,t} and u_{l-1,t+l}.

The basic difference between the PLCM and the PARM is that the former's coefficients correspond to properly normalized correlation coefficients whereas the latter's do not. However, it should be noted that they have the same 'cut off' property for autoregressive processes: for an AR(p) process, both are zero for lags l > p. In the AFS system, the order of the AR process is determined by testing the null hypothesis P(l) = 0 against the alternative P(l) ≠ 0 when an AR(l) model is fitted to the data, using the statistic X(l), which is asymptotically distributed as χ² with k² degrees of freedom under the null hypothesis. We refer the reader to Heyse and Wei (1984) for more details.

Once the model has been identified, the parameters of the model are estimated automatically. A nice feature of the package is that it provides an option whereby the insignificant estimates may be dropped automatically. The estimation procedure used in AFS-VARMA is based on the work of Spliid (1980, 1983). The algorithm used is essentially a method of moments, but not in the traditional form. It is a very fast and simple algorithm for the estimation of the parameters of large multivariate time series models. The estimates obtained are shown to lie somewhere between the moment estimates and the nonlinear least squares (approximate maximum likelihood) estimates of Box and Jenkins (1976, p. 196, Table 6.7, and p. 239, Table 7.13, respectively). As Spliid points out, the relative efficiency of the algorithm's estimates decreases near the invertibility boundary, which can cause problems when the series have been differenced. This is because differencing tends to introduce root(s) near or on the invertibility boundary into the MA part of the model. Furthermore, Spliid's iterative procedure does not always converge. Stoica et al. (1985) provide the condition required for the iterations to converge.

Diagnostic tests are available to check for model adequacy. A chi-square statistic is used to determine whether an autocorrelation matrix at a certain lag is significant. In the automatic mode, the package makes use of this information to modify the model, if necessary, automatically. The problem of whether or not to difference a series is still in the hands of the analyst, as is the case with SCA-VARMA. However, AFS-VARMA includes the Box-Cox transformation as an option. If a transformation is taken, forecasts are given in terms of both the original and transformed data.

The state space procedure of SAS/ETS (hereafter SAS-SS) can model and forecast stationary multivariate time series. Model identification and estimation steps are performed automatically. In the identification stage, the procedure fits a sequence of increasing order vector AR models and selects the order, p, of the model for which the AIC is minimum as the optimum number of lags into the past that should be used in the canonical correlation analysis. During the canonical correlation analysis, the linearly independent components of the predictor space are identified and included in the state vector. Since
Yi_t is always part of the state, the analysis starts by forming the sample covariance matrix of the fixed data space {Yi_t, Yi_{t-1}, ..., Yi_{t-p}} and the first subset of the predictor space {Y1_t, ..., Yk_t, Y1_{t+1|t}}, and computing the canonical correlations of the covariance matrix. If the smallest canonical correlation coefficient is non-zero, then Y1_{t+1|t} is included in the state vector; otherwise it is not. Similar analysis is performed on the sample covariance matrices of the fixed data space and the other subsets of the predictor space with a successively increasing number of components. Once a given conditional expectation of a particular Yi, say Y3, is deemed to be linearly dependent on previous conditional expectations of Y3, then all subsequent conditional expectations of Y3 are linearly dependent and thus excluded from further consideration. At each step in the analysis, the significance of the smallest canonical correlation coefficient, ρ_min, is determined by Bartlett's χ² statistic and the information criterion c (Akaike, 1976). If c < 0, then
the smallest canonical correlation coefficient that corresponds to the component evaluated is taken to be zero and that component is not included in the state vector.

Following the identification of the state vector, the procedure estimates the parameters of the identified state space model automatically. Both the preliminary and final estimates of the parameters are provided. Preliminary estimates of the input matrix G and the covariance matrix Σ of the innovation are obtained from the optimum autoregressive model fitted at the initial stage. A preliminary estimate of the transition matrix F is obtained from the canonical correlation analysis. Final parameter estimates are obtained by maximizing an approximate likelihood (ML) function (the default option) which is based on the sample autocovariance matrix. However, the user may request that the estimates be computed using the conditional least squares (CLS) method. In addition, the user has the option of initializing and/or restricting the elements of the F and/or G matrices to some specified values.

One problem with the SAS-SS system is that in cases where the series are nearly nonstationary, estimates are often either unstable or fail to converge. The procedure does not provide any readily available checks for model adequacy. If the model is deemed to be inadequate, one can try different stationarity inducing operators, or utilize the FORM statement to change the dimension of the state vector by limiting the number of components to be included or by forcing the program to include certain components in the state vector. The use of the FORM statement is especially desirable if certain components of the predictor space are included in or excluded from the state vector based on information criterion values that are nearly zero. However, in some instances, investigating the alternative models that would arise from opposite decisions regarding the smallest canonical correlation coefficients can be time consuming.
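The canonical correlation computation that underlies this state selection can be sketched with numpy alone. This is an illustration, not the SAS implementation: the QR-and-SVD route to canonical correlations, the simulated AR(1) series, and the choice of two past lags are our own assumptions.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the columns of X and Y (rows = time)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    # Orthonormalize each block, then take singular values of the cross product.
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

rng = np.random.default_rng(3)
# AR(1) series: only one linear combination of the past predicts the future.
y = np.zeros(4000)
e = rng.standard_normal(4000)
for t in range(1, 4000):
    y[t] = 0.8 * y[t - 1] + e[t]

past = np.column_stack([y[1:-1], y[:-2]])   # (y_t, y_{t-1})
future = y[2:].reshape(-1, 1)               # y_{t+1}
cc = canonical_correlations(past, future)
print(np.round(cc, 2))  # a single coefficient, close to the AR parameter 0.8
```

Because the lagged value y_{t-1} adds no predictive information beyond y_t for an AR(1) process, the single canonical correlation essentially equals the lag-one autocorrelation, illustrating why only the linearly independent predictors enter the state.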
In summary, SAS-SS provides the user with much flexibility in modeling. However, its performance is adversely affected if the series are nearly nonstationary.

The state space program in the FORECAST MASTER package of Scientific Systems, Inc. (hereafter SSI-SS) is developed for IBM microcomputers and compatibles. The program, like the SAS-SS software, can analyze and forecast stationary multivariate time series data. However, unlike SAS-SS, all stationarity inducing operators can be specified easily on a user friendly menu. SSI-SS differs from Akaike's state space approach as implemented by SAS in two basic ways: (i) the predictor space does not include the present Y_t (i.e., the components Y1_t, Y2_t, ..., Yk_t). As a result, for a vector ARMA(p, q) process, the state vector consists of the components Y_{t+1|t}, Y_{t+2|t}, ..., Y_{t+p|t}. The advantage of using non-overlapping data and predictor spaces is that it leads to a lower dimensional model; (ii) instead of Akaike's canonical correlation method, a 'generalized canonical variate' procedure is used to identify optimal predictors that minimize a quadratic prediction error criterion. In other words, the canonical variate problem is interpreted as one of choosing components from the data space that optimally predict the future, rather than as one of maximizing correlation between the data space and predictor space. The solution to the canonical variate problem involves primarily putting the covariance structure of the past and future in a canonical form using a singular value decomposition (Larimore, 1983).

SSI-SS provides semiautomatic model order determination. It compares the canonical correlation coefficients, which represent the increase in forecasting power due to each successive increase in the order of the state vector, to corresponding threshold values based upon the AIC. The canonical correlation coefficients that are above the threshold values are deemed to be statistically significant.
The order that the program suggests as optimal is the number of significant canonical correlations. However, since the threshold values are based on an approximation, the suggested order is often erroneous. The user should try other higher and lower orders (especially when the correlations closely parallel the threshold values) to determine the order that minimizes the AIC.

The parameter estimation step is performed automatically. The canonical variables themselves are used directly in obtaining the parameter estimates. Larimore et al. (1984) show that the resulting estimates are extremely close to maximum likelihood estimates. The manual does not describe the estimation procedure utilized, but it is claimed that the algorithm used does not fail for a large number of parameters and is also stable when the series is nearly nonstationary. In such cases, the SAS-SS maximum likelihood method either fails or is unstable. However, unlike in SAS-SS, the user does not have the option of initializing and/or restricting the estimates of the parameters of the F and/or G matrices to prespecified values. This option
gives the SAS user greater flexibility to model the significant auto- or cross-correlations at higher lags as indicated by the residual analysis. This can be accomplished by increasing the order of the state vector for the desired variable(s) only, and by restricting the insignificant parameters to be zero.

Once an order is chosen, the program provides a set of useful model checks to evaluate model adequacy. These include the adjusted R², the Ljung-Box overall chi-square test statistic, and the AIC and Schwarz error statistics. If the model is inadequate, the user can respecify the order of the model or the preprocessing transformations. When the state vector consists of an equal number of linearly independent conditional expectations (i.e., an equal number of predictors) of each Yi, the procedure, upon request, displays the vector ARMA form of the fitted state space model.

In both SAS-SS and SSI-SS, the seasonal components are not explicitly modeled; deseasonalization is first required. In SSI-SS, this is available as an option, whereas in SAS-SS, deseasonalization must be carried out outside the package. In addition, neither package offers a solution to the variance nonstationarity problem.
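The Ljung-Box overall chi-square check can be sketched for a single residual series as follows. This is a numpy illustration of the standard portmanteau statistic; the function name and the simulated residuals are our own, not FORECAST MASTER's implementation.

```python
import numpy as np

def ljung_box(resid, m):
    """Ljung-Box portmanteau statistic on the first m residual autocorrelations."""
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r ** 2)
    q = 0.0
    for k in range(1, m + 1):
        rho_k = np.sum(r[k:] * r[:-k]) / denom  # lag-k sample autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(4)
white = rng.standard_normal(500)  # residuals from an adequate model should look like this
q = ljung_box(white, 10)
print(q)  # compare with the chi-square(10) critical value of 18.31 at the 5% level
```

For white noise residuals the statistic is approximately chi-square with m degrees of freedom, so values well above the critical value signal remaining autocorrelation and hence an inadequate model.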
4. An illustrative example
In this section, the model building procedures employed by the four software packages are further illustrated. The data set used for the illustration consists of 99 monthly observations of the first differences of the logs of '90-day treasury bill rates' (TB) and the levels of 'changes in money supply' (MS) obtained from Business Conditions Digest, covering the January 1974 to March 1982 period. To evaluate the forecasting performance of univariate (TB) and bivariate (MS and TB) models of the TB series, the data set is divided into two sets: the first set, referred to as sample data, consists of 87 observations of each series and is used for initial model identification and estimation; the second set, referred to as post sample data, consists of the last 12 observations of each series and is used to obtain ex ante one-step-ahead forecasts. The approach used may be considered adaptive in the sense that it permits the models to change as new data points become available. The process of updating the sample data, remodeling, and forecasting is repeated for the entire post sample data.

For all procedures except SSI-SS, the insignificant parameters of the identified models are restricted to be zero. Since the number of estimated models is very large, they will not be provided here. Instead, we provide a brief description of the model building approaches used and the bivariate models identified with each procedure only for the last period of the post sample data, consisting of 98 observations. For further details of the models identified, we refer the reader to Aksu and Narayan (1990). For the other post sample periods, we just provide the one-step-ahead forecast errors from the univariate and bivariate TB models that are developed. The forecasting performance of the models is evaluated using both the mean squared error (MSE) and mean absolute error (MAE) metrics.
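The two error metrics, and the percentage reductions used throughout this section, can be computed as follows. The twelve one-step-ahead forecast errors below are hypothetical numbers chosen for illustration; they are not the TB series results.

```python
import numpy as np

def mse(errors):
    """Mean squared error of a vector of forecast errors."""
    return np.mean(np.square(errors))

def mae(errors):
    """Mean absolute error of a vector of forecast errors."""
    return np.mean(np.abs(errors))

# Hypothetical one-step-ahead forecast errors from a univariate and a
# bivariate model over a 12-period post sample.
univ = np.array([0.5, -0.3, 0.8, -0.1, 0.4, -0.6, 0.2, 0.7, -0.2, 0.3, -0.4, 0.1])
biv = np.array([0.4, -0.2, 0.6, -0.1, 0.3, -0.5, 0.2, 0.5, -0.1, 0.2, -0.3, 0.1])

# Percentage reduction achieved by the bivariate model.
reduction_mse = 100 * (1 - mse(biv) / mse(univ))
reduction_mae = 100 * (1 - mae(biv) / mae(univ))
print(round(reduction_mse, 1), round(reduction_mae, 1))  # 42.3 23.9
```

Because the MSE squares the errors, a uniform shrinkage of the errors produces a larger percentage reduction in MSE than in MAE, a pattern also visible in the empirical results reported in this section.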
4.1. SCA-VARMA
The sample cross-correlations ρ̂(l) are significant up to lag 7, and the sample partial autoregression matrices P̂(l) cut off after lag 1 with significant elements at lags 5 and 6, which, when the correlations at lags 5 and 6 are ignored, would be indicative of a vector AR(1) model. Otherwise, a high order vector AR model or a mixed vector ARMA model would be appropriate. The results of the stepwise autoregression procedure indicate the plausibility of a vector ARMA(3, 4) model, whereas both the SCAN and ECCM procedures suggest the plausibility of a vector ARMA(1, 1) model. As a result, we decided to tentatively estimate all three models. All insignificant estimates were restricted to zero, except in the case of the vector ARMA(1, 1) model, where restricting insignificant parameters to zero made the model noninvertible. The vector AR(1) model was dropped from further evaluation because of the large number of significant correlations in the residual series, whereas the vector ARMA(1, 1) model was dropped because it contained insignificant estimates, the restriction of which to zero resulted in a noninvertible model.
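The identification statistics used above can be approximated outside the packages. The sketch below computes a sample cross-correlation matrix at lag 1 and applies the 'twice the standard error' rule (roughly 2/√n); the simulated series merely stand in for TB and MS, with an assumed one-month lead of MS over TB, and are not the paper's data.

```python
import numpy as np

def cross_correlation_matrix(y, lag):
    """Sample cross-correlation matrix of the k-variate series y at a given lag."""
    n, _ = y.shape
    z = (y - y.mean(0)) / y.std(0)      # standardize each series
    # Entry [i, j] approximates corr(y_i at time t, y_j at time t - lag).
    return z[lag:].T @ z[:n - lag] / n

rng = np.random.default_rng(5)
n = 400
ms = rng.standard_normal(n)             # stand-in for the MS series
tb = np.zeros(n)
e = rng.standard_normal(n)
for t in range(1, n):
    tb[t] = 0.4 * tb[t - 1] + 0.5 * ms[t - 1] + e[t]   # MS leads TB by one month

y = np.column_stack([tb, ms])
bound = 2 / np.sqrt(n)                  # rough 'twice the standard error' bound
ccm1 = cross_correlation_matrix(y, 1)
print(np.round(ccm1, 2), round(bound, 2))
# ccm1[0, 1], the correlation of TB_t with MS_{t-1}, should exceed the bound,
# while ccm1[1, 0] should not, pointing to a one-way lead-lag relationship.
```

Significant entries above the bound in one off-diagonal position but not the other are exactly the kind of pattern the CCM-based identification looks for.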
Consequently, our choice was the vector ARMA(3, 4) model. [Estimated coefficient matrices omitted.]

4.2. AFS-VARMA

Since the forecasting performance of the automatic mode is of interest to users, we decided to identify models of the TB series using both the automatic and non-automatic modes and to compare the performance of the models obtained in each mode. The sample CCM (ρ̂ij(l)'s) and PLCM (P̂ij(l)'s) of AFS-VARMA are indicative of a correlation structure that is very similar to the one suggested by the sample CCM and PARM of SCA-VARMA. Accordingly, we entertained the same models that were tentatively identified in SCA-VARMA. All insignificant estimates were restricted to zero automatically by the program. As in the case of SCA-VARMA, the vector ARMA(3, 4) model was preferred since it had the smallest number of significant correlations in the residual series as well as the smallest entries in the error covariance matrix. [Estimated coefficient matrices omitted.]

On the other hand, when AFS-VARMA was used in the automatic mode, a vector ARMA(1, 7) model was selected by the package. [Estimated coefficient matrices omitted.] Diagnostic tests indicate that the simpler vector ARMA(1, 7) model selected by the automatic procedure is also appropriate; hence, by virtue of parsimony, it should be given serious consideration.

4.3. One-step-ahead forecasting performance of SCA-VARMA and AFS-VARMA

The one-step-ahead forecast errors from univariate and bivariate SCA-VARMA and AFS-VARMA models and the corresponding MSE and MAE metrics are given in Panel A of Exhibit 1. The results in Panel A indicate that bivariate models outperform their univariate counterparts in one-step-ahead forecasting with all procedures. The MSE (MAE) of the univariate models is reduced by 23.4% (18%) when the TB series is modeled jointly with the MS series using the SCA-VARMA procedure. On the other hand, the reduction in the MSE (MAE) due to joint modeling is 36.7% (24.2%) and 46.6% (24.5%) with the automatic and non-automatic modes of AFS-VARMA, respectively. The results also indicate that the AFS-VARMA models produce more accurate one-step-ahead forecasts of the TB series than the SCA-VARMA models. It is also interesting to observe that the models obtained with the automatic mode of AFS-VARMA produce more accurate forecasts than the SCA-VARMA models and slightly less accurate forecasts than the models obtained with the non-automatic mode of AFS-VARMA. Our results indicate that substantial gains in forecasting accuracy may be attributed to modeling with multivariate techniques regardless of the procedure utilized. However, the choice between the procedures is not clear cut. In particular, we find the restriction of insignificant estimates to zero to be a tedious process in the SCA system. Special constraint matrices must be set up, and particular elements of the coefficient matrices must be set equal to zero by a method which is cumbersome, especially in cases
Exhibit 1
One-step-ahead post sample error statistics for the univariate (TB) and bivariate (TB and MS) models

Panel A - vector ARMA models

           SCA-VARMA               AFS-VARMA
                                   Automatic mode         Non-automatic mode
Period     Univariate  Bivariate   Univariate  Bivariate  Univariate  Bivariate
4/1981       -1.36      -2.01        -1.14      -1.63       -1.14      -1.80
5/1981        0.98      -1.13         0.85      -0.54        0.85       1.04
6/1981        2.51      -0.11         2.62       0.38        2.62       0.76
7/1981       -2.84      -1.49        -2.72      -0.52       -2.72      -0.72
8/1981        1.36       1.83         0.72       1.72        0.72       1.50
9/1981        0.69      -0.05         0.81       1.09        0.81       0.63
10/1981      -1.29      -1.66        -1.17      -1.18       -1.17      -0.77
11/1981      -0.42      -0.28        -0.58      -0.15       -0.58      -0.45
12/1981      -1.87      -2.37        -1.82      -2.44       -1.82      -1.98
1/1982       -0.17      -0.18        -0.06       0.15       -0.06      -0.36
2/1982        1.32       0.86         1.52       1.09        1.52       0.68
3/1982        0.90      -1.04         0.98      -0.44        0.99      -0.45

Mean squared error and mean absolute error metrics
MSE           2.285      1.750        2.146      1.358       2.148      1.148
MAE           1.309      1.074        1.249      0.947       1.250      0.936

Panel B - state space models

           SAS-SS                  SSI-SS
Period     Univariate  Bivariate   Univariate  Bivariate
4/1981        0.56      -0.44         0.72      -0.82
5/1981        0.62       0.94         2.36       0.73
6/1981       -3.61      -0.07        -2.92      -0.30
7/1981        1.02       1.25         1.12       1.13
8/1981        0.49       1.42         0.46       1.07
9/1981       -0.34      -0.17        -1.00      -1.39
10/1981      -0.49      -0.44        -0.81      -0.70
11/1981      -1.47      -1.51        -2.36      -2.48
12/1981      -0.36      -0.62         0.48      -0.91
1/1982       -0.08       0.95         1.24       0.54
2/1982        0.88      -0.09         0.80      -0.73
3/1982       -3.25      -1.24        -2.25      -0.03

MSE           2.417      0.833        2.565      1.162
MAE           1.098      0.762        1.377      0.903
involving several such estimates. In addition, there does not seem to be a method for selecting the order in which these estimates should be restricted to zero. Moreover, the reduced models may not be estimable, as was the case with our vector ARMA(1, 1) model. Another problem arises because the computer stores the most recent estimates of the parameters to be used as initial estimates in succeeding runs. For data where the estimation method is sensitive to the choice of initial estimates, the program may have to be reinitialized several times, which can be a very time-consuming process. In the automatic mode of AFS-VARMA, in general, there seems to be a bias toward the selection of more complicated models as a result of the program trying to capture many significant correlations, some
of which may be spurious. We feel that the automatic version should be run initially and the resulting model carefully examined. If the selected model appears to be unnecessarily complicated, one should try modeling in the manual mode. Another potential problem with the AFS system is that its estimation routine seems to have problems with data which give rise to models that are nearly noninvertible.

4.4. SAS-SS

For our sample data consisting of 98 observations of each series, AIC is minimum for the vector AR(6) model (i.e., p = 6), and the model identified by the procedure results in an unstable system in which the residual autocorrelation function of the TB series is inflated. In an attempt to obtain a stable system, we used the CLS method to perform the final estimation. However, the instability persisted. As an alternative course of action, we decided to use the FORM statement to change the dimension of the state vector. For that purpose, we analyzed the univariate model which was identified for the TB series. Initially, both the ML and CLS methods failed to converge. Convergence was achieved by increasing the upper limit on the number of iterations in the ML or CLS estimation from the default value of 20 to 72. The program picked an order 6 model, indicating the presence of six conditional expectations of the TB variable in the state vector. The final parameter estimates were obtained using the CLS method. The sample CCM of the residual series obtained from this model was consonant with that of a white noise process. Note that the order of the state vector of the univariate TB model is the same as the order of the optimal vector AR process in the bivariate case for which AIC is minimum. Accordingly, we used the FORM statement to force the program to include six conditional expectations of the TB series in the state vector for the bivariate model.
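The AIC-based choice of the vector AR order described above can be sketched as follows. This is an illustrative OLS-fitted VAR on simulated data, not the SAS procedure: every candidate order is scored on the same effective sample so the AIC values are comparable.

```python
import numpy as np

def var_aic(y, p, pmax):
    """Gaussian AIC of an OLS-fitted VAR(p) with intercept; all orders are
    scored on the same effective sample (observations after the first pmax)."""
    y = np.asarray(y, float)
    n, k = y.shape
    m = n - pmax
    Y = y[pmax:]
    cols = [np.ones((m, 1))]
    for lag in range(1, p + 1):
        cols.append(y[pmax - lag: n - lag])    # lagged regressors
    X = np.hstack(cols)
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    E = Y - X @ B
    sigma = E.T @ E / m                        # residual covariance matrix
    n_params = k * (k * p + 1)
    return m * np.log(np.linalg.det(sigma)) + 2 * n_params

# Simulated bivariate series whose true order is 2.
rng = np.random.default_rng(3)
y = np.zeros((300, 2))
for t in range(2, 300):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(size=2)
pmax = 8
aics = {p: var_aic(y, p, pmax) for p in range(pmax + 1)}
best = min(aics, key=aics.get)
print("AIC-selected order:", best)
```

As the text notes, the AIC-minimizing order is only a starting point; residual diagnostics may still reveal instability or remaining correlation.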
We also specified that the final estimation be done with the CLS method and set the maximum number of iterations in the estimation to 72. This resulted in a stable and adequate model. We then restricted the insignificant parameter estimates of the F and G matrices to be zero. The final parameter estimates of the restricted model are provided below:

Parameter    Final estimate
F(1, 1)       0.200
F(1, 2)      -1.683
G(4, 2)      -0.468
G(5, 2)      -0.252
F(7, 2)      -0.294
F(7, 3)       0.470
F(7, 4)      -0.416
G(3, 1)       0.107
G(4, 1)       0.066
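The restriction step, dropping estimates with small t-ratios and re-estimating the remaining free parameters, can be sketched for a simple VAR(1) fitted equation by equation with ordinary least squares. This is an illustrative stand-in (single pass, hypothetical cutoff |t| >= 2), not the SAS or SCA routine:

```python
import numpy as np

def _ols(X, y):
    # OLS estimates and t-ratios
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    dof = max(len(y) - X.shape[1], 1)
    s2 = resid @ resid / dof
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return b, b / se

def fit_var1_with_restrictions(y, tcrit=2.0):
    """Equation-by-equation OLS for a VAR(1); coefficients with |t| < tcrit
    are restricted to zero and the equation is re-estimated on the
    remaining regressors (one pass, for illustration)."""
    y = np.asarray(y, float)
    X, Y = y[:-1], y[1:]
    k = y.shape[1]
    A = np.zeros((k, k))                       # coefficient matrix Phi_1
    for i in range(k):
        b, t = _ols(X, Y[:, i])
        keep = [j for j in range(k) if abs(t[j]) >= tcrit]
        if keep:                               # refit under the zero restrictions
            b, _ = _ols(X[:, keep], Y[:, i])
            A[i, keep] = b
    return A

# Simulated bivariate VAR(1) whose true Phi_1 has one zero entry.
rng = np.random.default_rng(2)
Phi = np.array([[0.5, 0.4], [0.0, 0.3]])
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = Phi @ y[t - 1] + rng.normal(size=2)
A = fit_var1_with_restrictions(y)
print(np.round(A, 2))
```

In practice the packages iterate this step, and, as noted in the text, the restricted model can fail to be estimable or invertible.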
In order to compare with the form of the models identified with SCA-VARMA and AFS-VARMA, the identified state space model is converted into the corresponding vector ARMA(6, 5) model. [Equation display omitted.]
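The theoretical equivalence underlying this conversion can be illustrated on a small example: a vector ARMA(1, 1) model and one of its innovations-form state space realizations (F = Φ, G = Φ + Θ, H = I; the numerical matrices below are made up) produce identical output paths when driven by the same shocks.

```python
import numpy as np

Phi = np.array([[0.5, 0.2], [0.0, 0.4]])
Theta = np.array([[0.3, 0.0], [0.1, 0.2]])
F, G, H = Phi, Phi + Theta, np.eye(2)       # innovations-form realization

rng = np.random.default_rng(4)
e = rng.normal(size=(50, 2))

# Vector ARMA(1, 1) recursion: z_t = Phi z_{t-1} + e_t + Theta e_{t-1}
z = np.zeros((50, 2))
for t in range(50):
    z[t] = e[t] + (Phi @ z[t - 1] + Theta @ e[t - 1] if t > 0 else 0.0)

# State space recursion: x_{t+1} = F x_t + G e_t,  z_t = H x_t + e_t
x = np.zeros(2)
zs = np.zeros((50, 2))
for t in range(50):
    zs[t] = H @ x + e[t]
    x = F @ x + G @ e[t]

print("identical output paths:", np.allclose(z, zs))
```

The same algebra, applied in reverse to an empirically identified (F, G) pair, yields the vector ARMA representation reported above; with higher-dimensional state vectors the resulting ARMA form is typically less parsimonious, as the conclusion notes.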
4.5. SSI-SS

SSI-SS provides the user with the option of treating each variable as endogenous or exogenous. In the bivariate models, we have specified both variables as endogenous. As discussed before, the model building program suggests a model order which is often erroneous. It also provides an AIC figure for each model
that is fit to the data. Accordingly, we tried a number of different orders, including the one suggested by the program, and selected the model that minimized the AIC. For our sample data of 98 observations, the program suggested a state space model of order 4. However, we finally chose a model of order 8, which provided an adequate fit to the data. For comparison purposes, the state space model of order 8 corresponds to a vector ARMA(4, 4) process, as displayed by the program. [Equation display omitted.]

4.6. One-step-ahead forecasting performance of SAS-SS and SSI-SS
The one-step-ahead forecast errors for the post sample data from the univariate and bivariate models and the corresponding MSE and MAE metrics are given in Panel B of Exhibit 1 for both the SAS-SS and SSI-SS procedures. The results in Panel B indicate that bivariate models outperform their univariate counterparts in one-step-ahead forecasting with both procedures. In SAS-SS, the MSE (MAE) of the univariate models is reduced by 65.5% (30.6%) when the TB series is modeled jointly with the MS series. In SSI-SS, this reduction is 54.7% (34.4%). As is the case with the vector ARMA procedures, there are substantial gains in forecasting accuracy due to modeling the TB series jointly with the MS series, but it is not easy to choose one procedure over the other. In particular, the SAS system provides greater flexibility in modeling by allowing the user (i) to restrict the insignificant parameters to zero, and (ii) to increase the order of the state vector for the desired variable(s) only. This allows the identification of higher order models with a small number of parameters. Neither of these options is available in the SSI system. Increasing the order of the state vector in order to model the significant auto- and cross-correlations at higher lags indicated by the residual analysis leads to nonparsimonious models, which are penalized heavily by all order determination criteria. Consequently, there is a bias towards the choice of low-order models. On the other hand, restricting the insignificant parameters to be zero to obtain a reduced model with parameter estimates that are all significant can be a very tedious task in the SAS system, especially in the case of higher order models with many variables. In addition, the system's estimation routines seem to have stability and/or convergence problems with data that are nearly nonstationary. The SSI system, however, does not have similar stability problems, and is very user friendly, fast and easy to use.
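As a check, the SAS-SS percentage reductions can be recomputed from the forecast errors in Panel B of Exhibit 1 (values as transcribed above):

```python
import numpy as np

def pct_reduction(errors_uni, errors_biv):
    """MSE and MAE of two forecast-error series and the percentage reduction
    achieved by the bivariate model, as reported in the text."""
    eu = np.asarray(errors_uni, float)
    eb = np.asarray(errors_biv, float)
    mse_u, mse_b = np.mean(eu ** 2), np.mean(eb ** 2)
    mae_u, mae_b = np.mean(np.abs(eu)), np.mean(np.abs(eb))
    return {"MSE reduction %": 100 * (1 - mse_b / mse_u),
            "MAE reduction %": 100 * (1 - mae_b / mae_u)}

# SAS-SS one-step-ahead forecast errors, Panel B of Exhibit 1.
uni = [0.56, 0.62, -3.61, 1.02, 0.49, -0.34,
       -0.49, -1.47, -0.36, -0.08, 0.88, -3.25]
biv = [-0.44, 0.94, -0.07, 1.25, 1.42, -0.17,
       -0.44, -1.51, -0.62, 0.95, -0.09, -1.24]
res = pct_reduction(uni, biv)
print(res)   # reproduces the 65.5% (30.6%) reduction quoted in the text
```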
5. Conclusion
In this paper we have reviewed two multivariate techniques for simultaneously modeling two or more time series. One of the techniques is vector ARMA, introduced by Tiao and Box (1981), and the other is state space, as described by Akaike (1976) and Mehra (1982). Computer packages that operationalize these techniques are discussed and compared. In particular, one mainframe package and one microcomputer package are discussed for each technique. For the vector ARMA technique, the mainframe package evaluated is from SCA while the microcomputer package is from AFS. For the state space technique, the corresponding packages are from SAS and SSI, respectively. Both microcomputer packages provide options for (semi)automatic modeling.
An illustrative example is given in which treasury bill rates are modeled simultaneously with a money supply series. It is shown that the forecasts of treasury bill rates from the bivariate models are superior to those from the corresponding univariate models regardless of the technique and/or procedure utilized. The results indicate that the substantial gains in forecasting accuracy accrued from the use of the multivariate models justify the additional effort involved. Our experience indicates that there is no single package that will serve the purpose of every practitioner. For example, the vector ARMA packages permit the explicit modeling of seasonal components in a time series. However, such models tend to have many parameters. Consequently, modeling more than two such series simultaneously can be a challenging task, especially in the SCA system. The state space package from SAS is sensitive to changes in nearly nonstationary data. Instability and convergence problems frequently occur. The microcomputer package from SSI seems to be a good alternative to the SAS procedure. However, the SSI package does not provide the user with the option of restricting any of the parameters to zero. Naturally, this leads to the selection of nonparsimonious models when the order of the state vector is increased to capture the significant correlations at higher lags suggested by the residual analysis. Although state space models are theoretically equivalent to vector ARMA models, the empirically identified models, when converted to their equivalent ARMA structure, tend to include some terms not present in the models identified directly with the vector ARMA procedures. In other words, state space models, in general, are less parsimonious in structure than the vector ARMA models. Moreover, the identified models may not be as easy to explain as the more parsimonious vector ARMA models. For both techniques, the microcomputer packages are easier and faster to apply.
However, the mainframe packages provide the user with more flexibility in modeling, which may lead to the identification of models with superior forecasting performance. Forecasting with multivariate time series techniques requires some level of expertise and can be a very time-consuming process, especially if the mainframe packages are used. However, we feel that the gain in forecasting accuracy and the opportunity to study the lead/lag relationships among the variables are worth the time and effort involved.
References

Akaike, H., 1974a, "Markovian representation of stochastic processes and its application to the analysis of autoregressive moving average processes", Annals of the Institute of Statistical Mathematics, 26, 363-387.
Akaike, H., 1974b, "A new look at the statistical model identification", IEEE Transactions on Automatic Control, AC-19, 716-723.
Akaike, H., 1975, "Markovian representation of stochastic processes by canonical variables", SIAM Journal on Control, 13, 162-173.
Akaike, H., 1976, "Canonical correlation analysis of time series and the use of an information criterion", in: R.K. Mehra and D.G. Lainiotis, eds., System Identification: Advances and Case Studies (Academic Press, New York) 27-95.
Aksu, C., 1987, "An empirical investigation of time series properties of quarterly cash flows and usefulness of earnings in predicting cash flows", Unpublished Ph.D. dissertation (Syracuse University, Syracuse, NY).
Aksu, C. and J.Y. Narayan, 1985, "Model building and forecasting with a state space procedure", Working paper no. 84-59 (Syracuse University, Syracuse, NY).
Aksu, C. and J.Y. Narayan, 1990, "Forecasting with vector ARMA and state space methods", Working paper (Temple University, Philadelphia, PA).
Aoki, M., 1987, State Space Modeling of Time Series (Springer, Berlin).
Box, G.E.P. and D.R. Cox, 1964, "An analysis of transformations", Journal of the Royal Statistical Society, B 26, 211-252.
Box, G.E.P. and G.M. Jenkins, 1970, Time Series Analysis: Forecasting and Control (Holden-Day, San Francisco, CA).
Box, G.E.P. and G.M. Jenkins, 1976, Time Series Analysis: Forecasting and Control, revised ed. (Holden-Day, San Francisco, CA).
Cholette, P.A. and R. Lamy, 1986, "Multivariate ARIMA forecasting of irregular time series", International Journal of Forecasting, 2, 201-216.
Downes, G.W. and D.M. Rocke, 1983, "Municipal budget forecasting with multivariate ARMA models", Journal of Forecasting, 2, 377-387.
Engle, R.F. and C.W.J. Granger, 1987, "Cointegration and error correction: Representation, estimation and testing", Econometrica, 55, 251-276.
Fey, R.A. and N.C. Jain, 1982, "Identification and testing of optimal lag structures and causality in economic forecasting", in: O.D. Anderson and M.R. Perryman, eds., Applied Time Series Analysis (North-Holland, Amsterdam) 65-73.
Granger, C.W.J. and P. Newbold, 1977, Forecasting Economic Time Series (Academic Press, New York).
Granger, C.W.J., 1981, "Some properties of time series data and their use in econometric model specification", Journal of Econometrics, 16, 121-130.
Gunter, S.I. and C. Aksu, 1989, "N-step combinations of forecasts", Journal of Forecasting, 8, 253-267.
Hannan, E.J., 1970, Multiple Time Series (Wiley, New York).
Heyse, J.F. and W.W.S. Wei, 1985, "The partial lag autocorrelation function", Technical report (Temple University, Department of Statistics, Philadelphia, PA).
Hillmer, S.C., D.F. Larcker and D.A. Schroeder, 1983, "Forecasting accounting data: A multiple time-series analysis", Journal of Forecasting, 2, 389-404.
James, C., S. Koreisha and M. Partch, 1985, "A VARMA analysis of the causal relations among stock returns, real output, and nominal interest rates", The Journal of Finance, 40, 1375-1384.
Jenkins, G.M. and A. Alavi, 1981, "Some aspects of modelling and forecasting multivariate time series", Journal of Time Series Analysis, 2, 1-47.
Johansen, S., 1988, "Statistical analysis of cointegration vectors", Journal of Economic Dynamics and Control, 12, 231-254.
Kitagawa, G., 1981, "A nonstationary time series model and its fitting by a recursive filter", Journal of Time Series Analysis, 2, 103-116.
Kitagawa, G. and W. Gersch, 1984, "A smoothness priors-state space modeling of time series with trend and seasonality", Journal of the American Statistical Association, 79, 378-389.
Kling, J.L. and D.A. Bessler, 1985, "A comparison of multivariate forecasting procedures for economic time series", International Journal of Forecasting, 1, 5-24.
Koehler, A.B. and E.S. Murphree, 1988, "A comparison of results from state space forecasting with forecasts from the Makridakis competition", International Journal of Forecasting, 4, 45-55.
Larimore, W.E., 1983, "System identification, reduced-order filtering and adaptive control via canonical variate analysis", in: Proceedings of the 1983 American Control Conference (IEEE, New York) 445-451.
Larimore, W.E. and R.K. Mehra, 1984, "Multivariate adaptive model algorithmic control", in: Proceedings of the 23rd IEEE Conference on Decision and Control (IEEE, New York) 675-680.
Lin, W.T., 1989, "Modeling and forecasting hospital patient movements: Univariate and multiple time series approaches", International Journal of Forecasting, 5, 195-208.
Liu, L.-M. and G.B. Hudak, in collaboration with G.E.P. Box, M.E. Muller and G.C. Tiao, 1986, The SCA Statistical System (Scientific Computing Associates, DeKalb, IL).
Mehra, R.K., 1979, "Kalman filters and their applications to forecasting", in: TIMS Studies in the Management Sciences, 12 (North-Holland, Amsterdam) 75-94.
Mehra, R.K., 1982, "Identification in control and econometrics", in: M. Hazewinkel and A.H.G. Rinnooy Kan, eds., Current Developments in the Interface: Economics, Econometrics, Mathematics (Reidel, Dordrecht) 261-288.
Narayan, J.Y. and C. Aksu, 1985, "Causality testing based on ex ante forecasts", in: O.D. Anderson, ed., Time Series Analysis: Theory and Practice 7 (North-Holland, Amsterdam) 263-275.
Öller, L.-E., 1985, "Macroeconomic forecasting with a vector ARIMA model: A case study of the Finnish economy", International Journal of Forecasting, 1, 143-150.
Pankratz, A., 1989, "Time series forecasts and extra-model information", Journal of Forecasting, 8, 75-83.
Phillips, P.C.B. and S. Ouliaris, 1988, "Testing for cointegration using principal components methods", Journal of Economic Dynamics and Control, 12, 205-230.
Quenouille, M.H., 1957, The Analysis of Multiple Time-Series (Griffin, London).
Reilly, D.P., 1986, MTS (Automatic Forecasting Systems, Hatboro, PA).
Riise, T. and D. Tjøstheim, 1984, "Theory and practice of multivariate ARMA forecasting", Journal of Forecasting, 3, 309-317.
SAS Institute, 1984, SAS/ETS User's Guide, Version 5 (SAS Institute, Cary, NC).
Sastri, T., 1985, "A state space modeling approach for time series forecasting", Management Science, 31, 1451-1470.
Scientific Systems, Inc., FORECAST MASTER (Scientific Systems, Cambridge, MA).
Spliid, H., 1980, "Estimation of time series models by a … algorithm", Technical report no. 17/1980 (Institute of Mathematical Statistics and Operations Research, The Technical University of Denmark, Lyngby, Denmark).
Spliid, H., 1983, "A fast estimation method for the vector autoregressive moving average model with exogenous variables", Journal of the American Statistical Association, 78, 843-849.
Stoica, P., T. Söderström, A. Ahlén and G. Solbrand, 1985, "On the convergence of pseudo-linear regression algorithms", International Journal of Control, 41, 1429-1444.
Tiao, G.C. and G.E.P. Box, 1981, "Modeling multiple time series with applications", Journal of the American Statistical Association, 76, 802-816.
Tiao, G.C. and R.S. Tsay, 1983, "Multiple time series modeling and extended sample cross-correlations", Journal of Business & Economic Statistics, 1, 43-56.
Tiao, G.C. and R.S. Tsay, 1985, "A canonical correlation approach to modeling multivariate time series", in: Proceedings of the American Statistical Association, Business and Economic Statistics Section, 112-120.
Umashankar, S. and J. Ledolter, 1983, "Forecasting with diagonal multiple time series models: An extension of univariate models", Journal of Marketing Research, 20, 58-63.

Biographies

Celal AKSU received his Ph.D. in Business Administration from Syracuse University and is currently an Assistant Professor of Accounting at Temple University. His research interests include time series analysis, combining forecasts, and issues regarding accounting earnings and firm valuation.

Jack Y. NARAYAN received his Ph.D. in Mathematics from Lehigh University. He is currently a Professor of Mathematics at the State University of New York at Oswego. His research interests include time series analysis, numerical analysis, and ordinary differential equations.