Book reviews
not one, but two, Marchetti romps through logistic curve fits to long-term innovation diffusions. I'd label another set of chapters as explorations of emerging technologies. These characterize the state of the art and likely advances of the coming, say, decade in advanced materials, machinery and automation, electronics and information technologies, telecommunications, and biotechnologies. These are quite nice, but are somewhat aged in that they are 1989-vintage analyses. Other chapters explore various aspects of innovation management. These encompass general perspectives on innovation processes, European Community joint ventures to stimulate technology transfer, national programs to stimulate innovation, and case studies. The case studies reach the specificity of describing how one German company, NUKEM, manages technological innovation (I had to get that company name in somehow). Understanding the processes of technological innovation provides the basis for technological forecasting. In that this book gives the flavor of European thinking on innovation processes and how to manage these, it provides a valuable backdrop to forecasting efforts per se.

Alan L. Porter
Technology Policy and Assessment Center, Georgia Tech, Atlanta, USA
James D. Hamilton, 1994, Time Series Analysis, (Princeton University Press, Princeton, NJ), 799 pp., US $55.00, ISBN 0-691-04289-6.

Day-to-day forecasting is a process of combining statistical methods with luck. This book contributes to that process by explaining statistical methods relevant for the modeling of time-series data. Hamilton develops each method from its most simple representation to its most general formulation as simplifying assumptions are relaxed one by one. Rigorous proofs appear in each chapter's appendix; problem sets and empirical applications illustrate the scope of each method. Clear exposition, thorough derivations, and extensive coverage account for the large number of pages.

Chapters 1-4 explain difference equations, lag operators, stationarity, autoregressive and moving-average models, and forecasting with these models; this material is central to the exposition in subsequent chapters. I found interesting the discussion on forecasting with infinite data and on aggregating alternative time-series models. By design, Hamilton derives the analytical results of these chapters treating the parameters as known and devotes the rest of the book to parameter estimation and hypothesis testing for various time-series models. Chapter 5 presents the method of maximum likelihood and applies it to autoregressive and moving-average models; of particular interest is the exposition on the conditional and unconditional maximum likelihood functions. Chapter 6 discusses parameter estimation in the frequency domain, including a nice discussion of aliasing but excluding Engle's Band Spectrum estimator. Chapter 7 describes the various notions of convergence and explains laws of large numbers and central limit theorems for both independent and serially correlated variables; these results are used for deriving the asymptotic distributions of the estimators presented in subsequent chapters. Chapters 8-9 develop estimators based on ordinary least squares and full information maximum likelihood, present test statistics for several hypotheses, derive the associated asymptotic distributions, and discuss parameter identification in simultaneous systems.
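The reviewer's summary of these early chapters can be made concrete with a small sketch. For a Gaussian AR(1), the maximum likelihood estimates conditional on the first observation coincide with ordinary least squares of y_t on a constant and y_{t-1}, and point forecasts decay geometrically toward the estimated mean. The snippet below is an illustrative sketch on simulated data, not material from the book; the parameter values and variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a Gaussian AR(1): y_t = c + phi * y_{t-1} + eps_t
c_true, phi_true, sigma, T = 1.0, 0.7, 1.0, 500
y = np.empty(T)
y[0] = c_true / (1.0 - phi_true)          # start at the unconditional mean
for t in range(1, T):
    y[t] = c_true + phi_true * y[t - 1] + sigma * rng.standard_normal()

# Conditional maximum likelihood (conditioning on y_0): with Gaussian
# errors this reduces to OLS of y_t on a constant and y_{t-1}.
X = np.column_stack([np.ones(T - 1), y[:-1]])
c_hat, phi_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]

# h-step-ahead point forecasts revert geometrically to the estimated mean.
mu_hat = c_hat / (1.0 - phi_hat)
horizons = np.arange(1, 11)
forecasts = mu_hat + phi_hat ** horizons * (y[-1] - mu_hat)

print("estimates:", c_hat, phi_hat)
print("forecasts:", forecasts)
```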
The exposition begins with the classical regression model (deterministic regressors with Gaussian white noise), the assumptions of which are relaxed one by one to obtain additional estimators. For each case, such as the linear model with first-order autocorrelation of the regression residuals and no lagged endogenous variables, Hamilton derives the estimator and its asymptotic distribution patiently and clearly.

Chapters 10-14 derive useful results for macromodeling based on vector-autoregressive (VAR) models. Chapter 10 generalizes the results from Chapters 3 and 6 to the multivariate case, and Chapter 11 examines estimation, testing, and forecasting with VARs; I found the discussion on structural VARs quite useful. The remaining chapters cover Bayesian estimation, Kalman filtering, and the Generalized Method of Moments.

Chapters 15-20 deal with parameter estimation and hypothesis testing for models explaining non-stationary data. From my standpoint, these chapters are the book's jewels. The presentation covers models with deterministic trends and unit roots, the spurious-regression problem and associated solutions, hypothesis testing with unit roots in univariate and multivariate processes, the difficulties in discriminating among time-series formulations, and the unit-root tests of Dickey and Fuller and those of Phillips and Perron. The exposition on cointegration analysis (estimation, testing, and interpretation) covers univariate and multivariate processes, with the techniques of Engle and Granger, Stock and Watson, and Johansen explained, compared, and illustrated with empirical examples. The discussion of Brownian motion and functional limits could use, however, more motivation by relating these topics to subsequent chapters.

Chapters 21 and 22 focus on modeling processes with heteroskedastic errors and with regime shifts. The modeling of heteroskedasticity is widely practiced, but the modeling of regime shifts is not. The study of regime shifts will be, however, increasingly relevant given the difficulties in discriminating among time-series formulations and the accumulation of observations into a lengthy historical record.
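To give a flavor of the cointegration chapters the reviewer praises, the Engle-Granger two-step procedure can be sketched in a few lines: estimate the cointegrating regression by ordinary least squares, then apply a Dickey-Fuller-type regression to the residuals. The snippet below is a hedged illustration on simulated data, not code from the book; the series names, parameter values, and the omission of lagged differences in the residual test are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500

# Simulate two I(1) series that share a common stochastic trend, so y and x
# are cointegrated (here with cointegrating vector roughly (1, -2)).
trend = np.cumsum(rng.standard_normal(T))
x = trend + rng.standard_normal(T)
y = 2.0 * trend + rng.standard_normal(T)

# Step 1: cointegrating regression y_t = a + b * x_t + u_t, estimated by OLS.
X = np.column_stack([np.ones(T), x])
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
u = y - (a_hat + b_hat * x)

# Step 2: Dickey-Fuller-type regression on the residuals,
# du_t = rho * u_{t-1} + e_t, with the t-statistic on rho as the test.
du = np.diff(u)
u_lag = u[:-1]
rho = (u_lag @ du) / (u_lag @ u_lag)
resid = du - rho * u_lag
se = np.sqrt(resid @ resid / (len(du) - 1)) / np.sqrt(u_lag @ u_lag)
t_stat = rho / se

# Compare t_stat with the Engle-Granger (MacKinnon) critical values, which
# are more negative than the standard Dickey-Fuller ones; a sufficiently
# negative statistic rejects the null of no cointegration.
print("residual-based test statistic:", t_stat)
```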
The book concludes with appendices on trigonometry, complex numbers, calculus, matrix algebra, and probability, along with tables of critical values from several distributions.

Hamilton is interested in explaining econometric results to you: What do they mean? How are they derived? How can you use them? This interest makes Time Series Analysis useful for practitioners who might view recent econometric research as a collection of negative discoveries arguing against old methods but unable to agree on a substitute. Hamilton helps translate the new findings into a reality for day-to-day forecasting by supplementing them with the new, and technically demanding, background, all in one consistent notation. The exposition excludes, however, references to the early work on time series as developed by Frisch, Mitchell, Orcutt, Pearson, Slutsky, and Yule (reviewed by Morgan, 1990). Including appendixes (in future editions) with such references would give readers an appreciation of just how tall the giants' shoulders are. In brief, anyone interested in modeling macroeconomic developments will profit from opening Time Series Analysis again and again, especially if that interest involves combining statistical methods with luck.

Jamie Marquez¹
Federal Reserve Board, Washington, DC, USA
References
Morgan, Mary, 1990, The History of Econometric Ideas, (Cambridge University Press, Cambridge).

¹ The views expressed in this paper are the author's and should not be interpreted as reflecting those of the Board of Governors of the Federal Reserve System or other members of its staff.