TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE 13, 149-156 (1979)

Problems of Forecasting and Technology Assessment

WILLIAM ASCHER
ABSTRACT

A broad appraisal of forecasts of U.S. national trends in population, economics, transportation, energy use, and technology reveals some of the limitations of forecasting and some avenues for improvement. The development of greater methodological sophistication has not significantly improved forecast accuracy. The (often linear) deterioration of accuracy with lengthening of forecast time horizons proceeds regardless of method. Methodology expresses (and traces the implications of) core assumptions reflecting the forecasters’ fundamental outlook. Sophisticated methodology cannot save a forecast based on faulty core assumptions. High inaccuracy results from the persistence of out-of-date core assumptions (“assumption drag”), caused by overspecialization, wishful thinking, the infrequency of forecast studies (due to the common preference for expensive approaches), and the weakness of sociopolitical forecasting. This diagnosis calls for more frequent, less elaborate, and interdisciplinary forecasting efforts. Sociopolitical forecasting, required both as a source of core assumptions for projecting other trends and to trace the social impact component of technology assessments, has suffered from a lack of specificity and meaningfulness. The greater uncertainty in forecasting technological developments requiring political decisions and large-scale programs indicates the importance of improving sociopolitical analysis. The social-indicators and scenario approaches are two means for achieving this improvement. Their potential contributions, as well as limitations, are reviewed.
WILLIAM ASCHER teaches political science at the Johns Hopkins University and is the author of Forecasting: An Appraisal for Policy-Makers and Planners, Johns Hopkins University Press, Baltimore, Md., 1978.

© Elsevier North-Holland, Inc., 1979    0040-1625/79/020149-08/$02.25

Introduction

Forecasting is an integral part of technology assessment. Yet despite notable attempts to assess the methodology and records of technology assessment, little systematic appraisal of forecasting per se has been attempted. Perhaps because forecasting so easily captures the public imagination, its reputation has been influenced far too much by the prominent cases: prominent successes and prominent failures, both of which are very misleading. In Forecasting: An Appraisal for Policy-Makers and Planners I have tried to present a systematic appraisal of the accuracy, biases, and utility of forecasting. This assessment covers U.S. national trends in population, economic growth, energy use, and transportation, as well as several aspects of technological forecasting and resource-availability estimation. Finally, I reviewed some of the efforts in social and political forecasting, but because of the general lack of specificity of such forecasts, appraisal of their accuracy was precluded.

The focus on national trends naturally presents some problems of generalizability to more specific or localized trends. Furthermore, generalizing from the findings on technological forecasting is limited by the scarcity of technological trends subjected to
enough forecasting attempts to establish an appraisable record. Yet the high degree of consistency of findings in all of the forecasting areas covered fosters a confidence that the results are indeed generalizable, both to other trends and for future efforts in forecasting.

The first general finding concerns the relevance of methodology to the accuracy of the forecasts. Identifying the method used in a particular forecasting effort is often difficult, not only because forecasters are rarely explicit about their methods, but also because they often employ a mixture of methods even when one method is designated as the primary one. Often we do not know to what extent the forecaster’s judgmental approach is molded by his scratchpad extrapolations, or to what extent he accepts his extrapolations or correlations because they conform to his personal judgment. Nevertheless, in analyzing the relationships between primary methods and forecast accuracy, we find three results.

First, for particular trends, certain methodologies do perform better than others. It is worthwhile to examine the past successes of different methods. When the standard distinctions among judgment, curve fitting, correlation, and modeling are made, advantages of one particular method do emerge specific to each trend. However, no single method or approach proves to be superior in general.

The second, and related, point is that methodological sophistication contributes very, very little to the accuracy of forecasts. The introduction of more sophisticated methods in population forecasting, with the elaborate accounting divisions of components and cohorts, has not resulted in more accurate demographic forecasts. The introduction of econometric modeling for economic forecasting also has not improved forecast accuracy.
The newest econometric models do no better than the earlier models, and in fact, largely judgmental forecasts of GNP are still more accurate than the econometric forecasts produced by Wharton, DRI, and other sophisticated modeling operations. Similarly, both simple and sophisticated methods of forecasting the demand for petroleum and electricity have had the same general level of accuracy. In transportation forecasting, the more elaborate recent modeling approaches of the Federal Aviation Agency (FAA) to predicting both commercial air traffic and the size of the general aviation fleet have not improved their accuracy record. In fact, an examination of many simulation and econometric models of air traffic demand reveals that their forecasts for 1980-1990 have as much dispersion (indicating unreliability) as do comparable forecasts produced by judgment or extrapolation.

The third finding on methodology relates the methods to the core assumptions underlying the forecast. It must be recognized that behind any forecast, regardless of the sophistication of methodology, are irreducible core assumptions representing the forecaster’s basic outlook on the context within which the specific trend develops. These core assumptions are not derivable from methodology; on the contrary, methods are basically the vehicles, or accounting devices, for determining the consequences or implications of core assumptions that were originally chosen more-or-less independently of (and prior to) the method. The choice of method signifies a preconception of future growth. For example, envelope curves are chosen as a method not because all growth patterns can be described that way, but rather because of the forecaster’s preconception that the relevant technology will “explode” in capability under the spur of cumulative breakthroughs. This obviously does not happen to all technologies. Similarly, the decision to use any form of curve-fitting methodology embodies an assumption that some commonly encountered growth pattern will recur, just as the decision to forecast via analogy commits the
forecasters to equate the forecasted growth pattern to a specific historical experience of some other trend.

The core assumptions are the major determinants of forecast accuracy. When the core assumptions are valid, the choice of methodology is either secondary or obvious. When the core assumptions fail to capture the reality of the future context, other factors such as methodology generally can make little difference: they cannot redeem a forecast based on faulty core assumptions. For example, the noted demographers L. I. Dublin and A. J. Lotka, the first to explicitly use the cohort method, produced highly inaccurate U.S. population forecasts simply because they had adopted an empirically unsubstantiated assumption of an equilibrium in population growth. Similarly, the major source of inaccuracy of nuclear-energy capacity forecasts has been the faulty assumption that technological rather than political problems would be the limiting factors in the growth of nuclear capacity.

The next general finding is perhaps the most obvious: the time horizon of the forecast is the strongest and most consistent correlate of accuracy. Although there have been numerous exceptions, the general rule is that shorter forecasts are more accurate, often in a nearly linear relationship. For example, 5-year petroleum-consumption forecasts have had a median error of about 6%, and 10-year forecasts, an error of around 13%. Motor-vehicle registration forecasts have median errors in percentages roughly equal to the forecast lengths in years. Even for technological forecasting, there is a consistent relationship between the remoteness of forecasted innovations and the spread of experts’ predictions, which is an indicator of at least the minimum magnitude of forecast error. Working from Dr.
Joseph Martino’s analysis of correlations between median forecast lengths and the spread of Delphi forecasts [1], we find that each 5-year increment in remoteness (or forecast length) is associated with a 2-year increase in the minimum expected error of a typical technological forecast.

The importance of forecast length has two implications. The first is that although we cannot guarantee the accuracy of forecasts, we can get a rough idea of the likely accuracy of a forecast of given length for a particular trend. In other words, confidence limits can be established, which can give the forecast-user a better feel for what the forecast really signifies in terms of certainty. The second implication, which is also supported by the crucial role of core-assumption validity and the relatively low importance of methodological sophistication, is that reliance on elaborate previously made forecasts, either for direct use or as a basis for projecting related trends, can be costly if it entails the use of what are, in effect, longer forecasts: forecasts of time horizons longer than necessary. For example, on examining the few bad motor-vehicle population forecasts made in the 1960s (the general accuracy of such forecasts is quite high), we discovered that these forecasts were based on information several years out of date.

The need for appropriate core assumptions makes the problem of relying on antiquated core assumptions, which we have called “assumption drag,” particularly important. It has been the source of some of the most drastic errors in forecasting. The worst overall population forecasts were made in the late 1930s and early 1940s, after the assumption of declining birth rates became invalid but was not recognized as such. Similarly, the electricity demand forecasts of the 1960s continued to project fairly low electricity-use increases, even when the actual rates of increase were contradicting this assumption.
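The roughly linear error-versus-horizon figures quoted above can be turned into a crude confidence-band estimator. The sketch below uses the petroleum numbers from the text (about 6% median error at 5 years, about 13% at 10 years); the linear fit and the extrapolation step are purely illustrative devices, not a procedure proposed in this paper.

```python
# Illustrative sketch: given a trend's historical (horizon, median error) pairs,
# fit a straight line and read off a rough expected error for a new horizon.
# Data points are the approximate figures quoted in the text; the linear model
# is an assumption for illustration only.

def fit_line(points):
    """Ordinary least-squares fit of median error (%) against horizon (years)."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in points) / \
            sum((x - mean_x) ** 2 for x, _ in points)
    return slope, mean_y - slope * mean_x

def expected_error(points, horizon):
    """Rough median error (%) expected for a forecast of the given length."""
    slope, intercept = fit_line(points)
    return slope * horizon + intercept

# Median errors of petroleum-consumption forecasts, per the text:
petroleum = [(5, 6.0), (10, 13.0)]
print(round(expected_error(petroleum, 15), 1))  # extrapolated 15-year error: 20.0
```

Such a band does not make a forecast more accurate, but it gives the forecast-user the "better feel for what the forecast really signifies" that the text calls for.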
As one critic of these forecasts pointed out [2], “Utility planners persisted in
their forecasting course even after it began to look askew. So committed were they to traditional theories of load growth that the deviations were rationalized as weather aberrations.”

Assumption drag occurs for several reasons, and these are helpful in understanding why the problem is widespread and chronic. The reasons include aspects of forecasting that could be corrected, but they also include intractable problems created by the uncertainty inherent in forecasting.

The specialization of most forecasters is one reason why obsolete assumptions are often retained. A specialist in one forecasting area (say, energy demand) must rely implicitly or explicitly on forecasts in areas that are beyond his own expertise (e.g., population forecasting). Since his own knowledge of appropriate assumptions outside of his specialty is limited, he will not produce definitive forecasts in these other areas. More importantly, he may not be able to appraise the validity of the older forecasts lying around so conveniently. Unless the resources are available to mount new studies in these supportive areas, forecasters will often rely on older, existing studies, whose assumptions are “frozen” at the times when they were produced.

Another source of assumption drag is the high cost of many forecasting efforts, which forces forecasters in other areas to rely on whatever has been produced, even if it is obsolete. Yet, although some effort and cost is required to generate forecasts using any methodology, some methods require less time and money than others, and thus can be completed more frequently for a given level of resources. Since the choice of methodology, which largely determines the cost of the study, is not as crucial to forecast accuracy as is the appropriate choice of core assumptions, recent inexpensive studies are likely to be more accurate than older, elaborate expensive studies.
We have found that multiple-expert-opinion forecasts, which require very little time or money, do quite well in terms of accuracy because they reflect the most up-to-date consensus on core assumptions. When the choice is between fewer expensive studies and more numerous, up-to-date inexpensive studies, these considerations call for the latter.

The final major source of assumption drag is a more profound and intractable problem. It is the persistent uncertainty as to whether recent data actually represent a new pattern that negates the old assumptions. There is a danger in taking every deviation from the past pattern seriously: the deviation may turn out to be a minor, short-lived “blip” in the basic pattern. This is illustrated by the fact that in the early 1920s some population forecasters, misled by a temporary drop in the growth rate (in fact caused by World War I), concluded that the U.S. population was leveling off to an ultimate ceiling of less than 200,000,000. Forecasters at times wisely choose to ignore departures from the expected pattern because the departures are interpreted as “noise.” When the facts are in, we discover who was “solid” for standing firm on his correct convictions and who was “bullheaded” for refusing to accept the new reality. The only way to cope with this intractable problem is to go with the judgment of the experts but at the same time try to enhance their awareness, and that of the policymakers, that these blips exist and are of potential importance. This is one of many considerations that call for the formation of slightly hysterical “lookout institutions,” which are permitted to cry wolf every so often without becoming completely discredited.

Forecasting and Technology Assessment
Any decent effort in technology assessment already presumes, and indeed is based on, the assumption of interrelatedness of technological development with social, political,
and economic trends. Socioeconomic factors influence technological development and the spread of technological innovations, whereas technological development and the rise of new technologies in turn have social, political, and economic effects that comprise part of the impact that must be assessed.

The importance of this interconnectedness to technological forecasting is illustrated, again, by evidence from Delphi study results expressing the dispersion of expert forecasts on breakthroughs in different technological areas. It turns out that the dispersion, which represents typical errors at least that great, varies according to the nature of the technological development under study. The cluster of technologies with the smallest dispersion includes technological areas in which advancement depends on engineering refinements and the disaggregated, market diffusion of such innovations. The fields of communications, educational technology, and automation fall into this group.

The second cluster of topics involves advances that require large-scale, official programs. Innovations in health-care systems, medical education, and space exploration all require official (though not always governmental) policy decisions at a high level. In other words, future events in these fields require not only engineering refinements, but also discrete high-level decisions, as opposed to the multiple disaggregated decisions relevant to the first cluster. There is less certainty, or at least less agreement, in predicting discrete official policies. For predictions of advancement in large-scale programs, the political aspect adds an additional degree of uncertainty to that already surrounding the technical feasibility of the programs.

The final cluster of technological areas, which showed the greatest disagreement among the experts, consisted of innovations requiring basic scientific breakthroughs.
Scientific breakthroughs in both the physical and biological sciences fall into this category, including medical innovations. Although at first glance it might appear that such breakthroughs are the least related or interconnected to the socioeconomic context, the development of basic research is in fact sensitive to research-funding decisions, the fluctuations of interest on the part of the scientific community, and sometimes political opposition, on top of the inherent uncertainties of predicting the tractability of scientific problems.

Instead of simply dwelling on the phenomenon of interconnectedness, I would like to discuss its operational significance in light of some of the findings on forecast accuracy. The dependence of each forecasting task on several others is, of course, discouraging to many forecasting specialists, who find that their own expertise is not sufficient to accurately project their pet trends. There is no way to reduce this dependence through the selection of sophisticated and clever methodology, because it is inherent in the interdependence of social activity. Therefore, the only constructive approach to the phenomenon of interconnectedness is to determine the best allocation of effort for the forecasting tasks that would make the greatest improvement in other areas of forecasting.

Interconnectedness, or, to put it better, the contextual nature of all the trends we have considered, calls for a better balance between the development of more sophisticated techniques (which has been the major preoccupation of leading forecast theorists) and the currently neglected search for ways to establish core assumptions and to test their validity. Too often, the emphasis on methodology masks the fact that assumptions really do underlie any forecast, and allows the forecasters to neglect the validity of these assumptions.
Assumptions on “background” trends and conditions are often adopted without the careful scrutiny they warrant by virtue of their ultimate importance to forecast
accuracy. Frequently an elaborate and painstaking analysis is employed to forecast a given trend, but the other trend projections on which it depends are casually lifted from existing (and often obsolete) sources with no examination of their validity.
Social Forecasting

Very often, the core assumptions of problematic validity are social or political forecasts. Any observer of social and political forecasting is immediately struck, not by the lack of predictions nor by their accuracy or inaccuracy, but by the impossibility of appraising the record. Sociopolitical predictions abound, but are rarely expressed in terms that permit evaluation. Predictions of discrete events often lack specific dates or sufficient definition to be scored as correct or incorrect. Other predictions are couched in vague, conditional terms, such as “if the situation does not change,” which preclude the verification of the prediction. Of course, vague predictions can be found in any field of forecasting. The problem peculiar to social forecasting is that, even if these requisite conditions of specificity were met, a more fundamental difficulty of general appraisal remains: there is no typical sociopolitical prediction, nor is there an overarching, comprehensive trend whose predictability could summarize or faithfully reflect the difficulty of making other specific social predictions. Beyond the problem of appraising sociopolitical forecasts, their usual lack of specificity hampers their use as decision-making tools, their utility as explicit core assumptions, and their use in the production of other forecasts.

Two approaches are emerging, both designed to make sociopolitical forecasting more meaningful and more appraisable. One is the specification of scenarios that consist of integrated sets of events or conditions, and the other is the use of social indicators as forecasted trends. The advantage of sociopolitical forecasting through scenarios is that it permits the forecaster to convey enough of the social context to make each element of the forecast meaningful. Richness in detail makes the forecast more comprehensible and useful as a basis for core assumptions or as a basis for policymaking.
However, the fact that scenario forecasts are in effect stories involving numerous points complicates their interpretation if one or more points are doubted or turn out to be incorrect. To what extent does the accuracy of one aspect of the scenario depend on the accuracy of another? Rarely are these relationships spelled out in the scenario forecasts being developed today. Moreover, the multiple nature of the events and conditions of a scenario makes its appraisal difficult, since the scenario forecast can be partially wrong, and usually there are no explicit indications of which elements are to be regarded as more important and by how much. Any “rightness” score based on a ratio of right elements to wrong elements would be sensitive to arbitrary decisions on how to divide up the scenario into subevents and conditions.

These problems can be overcome by additional effort on the part of social scientists preparing these forecasts. The elements of a social or political forecast can be “nested”; the aspects that depend on other events or conditions can be designated as such. If probability ranges are to be assigned to the scenario as a whole, it is also feasible to assign probabilities to each aspect, both conditional probabilities and absolute probabilities. Occasionally the same outcome may be an element of more than one scenario, in which case the probabilities can be suitably combined. The result of these efforts would be an
organized set of scenarios and their elements, with clearer indications of relatedness to ease interpretation, and of levels of importance to aid appraisal.

The second promising mode of sociopolitical forecasting is the projection of aggregate measures of social interaction, or “social indicators.” Social indicators are summary measures, usually of society-wide phenomena such as the distribution of wealth, levels of alienation, consumption patterns, broad aspects of the “political climate,” and so forth. Efforts devoted to developing social indicators have focused on the need for universally applicable measures, precisely because of existing difficulties in comparing and differentiating social contexts. Thus social indicators are analogous to the functional capabilities projected in technological forecasting, in that social indicators standardize the outcomes of diverse social structures just as functional capabilities standardize the performance of diverse inventions. When a social indicator is projected, the problem of deciding whether the actual result is very much different from the forecast is eliminated as long as the indicator can be measured; thus appraisability is guaranteed.

The shortcoming of social indicators as opposed to scenarios is that social indicators generally do not paint a full contextual picture that would clarify the relationships among elements of the context. A scenario such as George Orwell’s 1984 is a far more explicit and vivid depiction of the relationships among totalitarian rule, surveillance, dissent, and the meaning of war than are the projections of citizen participation levels, oppositional activity, and war casualties. The advantages of social indicators are that they are explicit, generally widely applicable with relatively little additional contextual information (because they encompass part of the context themselves), and directly appraisable and interpretable.
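The earlier proposal for “nested” scenario elements, with conditional and absolute probabilities that can be “suitably combined” when an outcome appears in more than one scenario, amounts to an application of the law of total probability. The sketch below makes that concrete; the scenario names and all probability values are hypothetical, invented purely for illustration.

```python
# Illustrative sketch of "nested" scenario probabilities: each mutually
# exclusive scenario has an absolute probability, and a shared outcome has a
# probability conditional on each scenario. The overall probability of the
# outcome follows from the law of total probability. All figures are
# hypothetical.

# Hypothetical scenarios; "p" is the scenario's absolute probability and
# "p_outcome_given" is P(shared outcome | scenario).
scenarios = {
    "prolonged oil embargo":  {"p": 0.2, "p_outcome_given": 0.7},
    "gradual supply decline": {"p": 0.5, "p_outcome_given": 0.3},
    "stable supply":          {"p": 0.3, "p_outcome_given": 0.05},
}

def combined_outcome_probability(scenarios):
    """Total probability of the shared outcome across mutually exclusive scenarios."""
    return sum(s["p"] * s["p_outcome_given"] for s in scenarios.values())

print(combined_outcome_probability(scenarios))  # 0.2*0.7 + 0.5*0.3 + 0.3*0.05
```

Spelling out the nesting this way also serves the appraisal goal discussed above: each element's conditional probability can be scored separately once events unfold.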
Now, it may seem strange to employ social indicators for forecasting; they are usually regarded as instruments of appraisal, for measuring the performance of society. However, it was recognized quite early that the study and measurement of social change presupposes the capacity to study and measure existing social conditions. In fact, attempts to develop social indicators to aid social forecasting date back to the work of William F. Ogburn [3] in the 1920s.

The social indicators approach and its application to social forecasting have developed very slowly. The development of the indicators themselves has been slow because of the difficulty of obtaining data, which usually are not the same as the official data collected by governmental agencies. Historical data for establishing trend lines are particularly difficult to recreate. Yet the United States is in an enviable position compared to other countries in terms of our capacity to develop social indicator data, considering the federal government’s data-gathering capacity and the advanced capabilities of American commercial and academic survey organizations.

The second reason for the slow development of forecasting with social indicators is the lack of means to relate social indicators to either their causes or their consequences. Otis Dudley Duncan [4] points out that the social indicators approach, far from being ready to forecast levels of social indicators in the future, is not even certain about which specific measures to develop. This uncertainty, however, has the same origin as the almost universal problem of selecting the right variables to develop theories and explanations: one must first have a theory. Theories are needed to explain and predict the levels of social indicators; other theories are needed to determine the effects that forecasted levels of social indicators will have on other trends, so that the social indicators can be used as meaningful core assumptions.
The problem is not any inherent vagueness in the social indicators themselves, but rather that most current theories in the social sciences are
rarely addressed to explain summary outcomes cast in the same broad terms as the social indicators. This problem is, again, parallel to that of technological forecasting, wherein specific innovations are, of course, acknowledged to occur, but theories explaining the growth of functional capabilities are hardly developed. Ironically, the “grand social theory” of the 19th century is more suitable to linking social indicators to their correlates than is the more specific level of social science theory of today. Unless theories are cast in terms of outcomes that can be related directly to social indicators, their incorporation into any forecasting effort beyond simple and probably inappropriate extrapolation is likely to remain rudimentary.

It is appropriate to end this brief overview by pointing out that although forecasting as a technology has not advanced as successfully as one might have hoped, and in fact the nature of forecasting limits such advances, the study of forecasting and its constraints seems to be advancing to a new stage of awareness on the part of its practitioners. To quote Dr. Harold Linstone [5]: “We have already learned quite a bit about our needs and capabilities, about forecasting and planning. The best minds today are less arrogant and narrow, far more cognizant of the problematique than they were a decade ago. They understand the limitations of the state of the art . . .”

This paper is based on a talk originally prepared for the Workshop on Appraisal of Technology Assessment, University of Dayton Research Institute, December 13-15, 1977, under the sponsorship of the National Science Foundation.

References

1. Martino, Joseph, Technological Forecasting for Decisionmaking, Elsevier, New York, 1972, pp. 41-48.
2. How Wrong Forecasts Hurt the Utilities, Business Week, 44 (13 February 1971).
3. Ogburn, William F., Social Change, Viking Press, New York, 1922; The Social Effects of Aviation, Houghton Mifflin, Boston, 1946.
4. Duncan, Otis D., Social Forecasting: The State of the Art, The Public Interest 17, 88-118 (1969).
5. Linstone, Harold, Editorial Comment, Technol. Forecast. Soc. Change 7, 2 (1975).

Received 14 December 1977; revised 13 February 1978