Technology in Society 30 (2008) 17–29 www.elsevier.com/locate/techsoc
A weakness in diffusion: US technology and science policy after World War II
John A. Alic
Abstract

The post-war shift in US technology and science policies has been somewhat misunderstood. It was not only a shift toward support for research, but a shift away from support for diffusion. Pre-war policies centered on agriculture. They sought to foster the spread of improved farming practices in order to raise living standards at a time of widespread rural poverty. Post-war research policies, motivated by national security, relied on military procurement to drive diffusion. Radically different objectives have obscured the overall nature of the policy shift. Today, the consequences of the US neglect of diffusion are most evident in health care, with government spending huge sums on research while disregarding service delivery.

© 2007 Elsevier Ltd. All rights reserved.

Keywords: Technology diffusion; Economic development; Government policy; R&D
1. Before World War II: agricultural research and extension

By the 1930s, the large corporations that dominated the US economy had learned to manage technology. Except in agriculture, government had not. Although military arsenals had contributed to innovation in piece-part metal manufacturing in the 19th century, the US government did not thereafter follow the path of increasing technological capability marked out by such firms as DuPont and General Electric. In 1923, when the Naval Research Laboratory (NRL) opened its doors—the nation's first military facility with ambitions much beyond routine engineering and testing—some 500 American companies were already operating research laboratories [1]. In 1940, with NRL-developed radars entering the fleet, demonstrating the laboratory's value through the dramatic tactical advantages conferred in making both friends and foes visible at night and in bad weather, the NRL budget was only $370,000. In that year, the US government channeled more money to agricultural research ($29.1 million) than to all military R&D ($28.6 million) [2].

Since the early 19th century, federal, state, and local governments had been attentive to economic development, notably in the form of infrastructure projects such as canals and railroads. On occasion, Congress made direct grants to inventors and entrepreneurs. But science, technology, and innovation policies remained inchoate. The US government had not created an institutional framework and experience base that extended much beyond the patent system, enshrined in the Constitution, until Congress passed legislation creating the US Department of Agriculture (USDA) in 1862.
That same year, the Morrill Act made land grants available to the states for support of colleges to promote "agriculture and the mechanic arts". As President, George Washington himself had urged Congress to establish a national university staffed by faculty charged with "diffusing information to farmers" ([3], p. 34). But policy construction had to await the Civil War—Congress could not muster the votes even for a non-cabinet department of agriculture until the exit of members from slave-holding states who were strongly opposed to anything resembling concentrated federal power. The early post-bellum models for support were European imports [4]. At first, state governments took the initiative, attaching agricultural experiment stations to land-grant colleges and, beginning late in the 19th century, sending out extension agents to work directly with farmers. Then in 1914, when USDA already had in place a well-funded policy of support for research in its own laboratories and in colleges and universities—Congress appropriated $64 million for USDA research that year, compared with $5 million in 1900—the Smith-Lever Act established the federal–state "cooperative" extension system, adding federal dollars to state and local funds for the diffusion of agricultural technologies and practices. The objective: to raise the living standards of farm families at a time when many Americans still lived on, and off, the land.

Research and extension were responses to a social problem, not the perception of a technological problem. To reformers and to interest groups such as the Grange, scientific agriculture was a means to raise the income levels of farm families, many of which lived in poverty, isolated and with nothing that compared to the social networks that helped small manufacturers in the industrial districts of New England learn about recent innovations. Nor did risk-averse small farmers have much incentive to experiment with new livestock breeds, seed varieties, and cultivation practices. They quite understandably preferred low but predictable yields to a venture into the unknown that might end in crop failure and bank foreclosure. County-based extension agents worked closely and continuously with farmers:

    Equipped with his own stock of common sense and with material provided by the Department of Agriculture, supplemented later by that from the agricultural colleges, the [extension] agent traveled about his territory … visiting farmers, inspecting demonstrations, and launching others. In a very real sense, the agent lived with his farmers and was truly an itinerant teacher ([3], p. 227).

Over time, extension agents acquired credibility through demonstration effects. If an agent could persuade one farmer, bolder than his neighbors, to sow a field with a non-traditional crop in a closely watched experiment, a profitable harvest would spur emulation come the next planting season. Diffusion-oriented policies take time to show results; they do not produce discrete "breakthroughs" to be trumpeted. Extension agents had to be trained, build networks of contacts, and earn the trust of those with whom they worked. As they did, productivity increased—by an order of magnitude in growing corn (Fig. 1).
Farmers registered similar gains for crops including wheat and soybeans, while milk production per dairy cow doubled from the 1920s to the 1960s, as did egg-laying rates of chickens [5]. The largest increases in productivity came after 1940, reflecting the cumulative effects of the boost to extension funding provided by USDA beginning at the time of the First World War, ongoing research (e.g., leading to the hybrid corn varieties introduced from the mid-1930s), mechanization, and rising levels of education, which, among its other benefits, provides contextual knowledge that helps in locating, digesting, evaluating, and acting upon information.

To survive politically, diffusion-oriented policies need reliable constituencies. Farmers today have much easier access to information than in the 1920s, when electricity and radio had yet to reach many parts of the country. Private companies are now the chief delivery agents for new agricultural technologies (as they were from the beginning for farm equipment), developing and marketing seeds and agrochemicals (fertilizers, herbicides, pesticides) based in part on USDA-funded research. An entrenched lobby and nostalgia for a vanishing way of life nonetheless keep extension going in an urbanized nation in which agribusiness firms, rather than extension agents, tell contract farmers how to raise their chickens.
[Fig. 1. Labor inputs—corn, in labor hours per 100 bushels, for five-year periods from 1915–19 to 1965–69. Source: Evenson RE ([5], pp. 238–39).]
2. From attritional warfare to high technology

While broadening and deepening its policy support for agriculture during the first several decades of the 20th century, the US government paid little attention to other technologies. Only in the 1950s did the near-disaster of the Korean War, reinforcing the earlier lessons of World War II, implant a fundamentally new perspective. The new policies resembled those in agriculture in their support of research but contrasted sharply in their disregard of diffusion.

Military technological imperatives had been strongly signaled during World War I. As that conflict began, the major European powers had already built up sizable air armadas. The United States had not. At a time when the German, French, and Russian armed forces each had around 500 aircraft, and Britain perhaps half that number, the US Army and Navy counted 19 planes between them [6]. In 1915, Congress established the National Advisory Committee for Aeronautics (NACA) to help prepare for war, but a decade passed before its operations were well underway, and these were to be hampered by continuing political opposition.

When Japan attacked Pearl Harbor in 1941, the United States was again unprepared. Radar and the atomic bomb notwithstanding, the nation fought World War II much as it had fought previous conflicts, i.e., as a war of attrition. When, for example, the US Army crossed the Atlantic in 1942, it had only light tanks, which could not stand up to German armor. By declining to develop heavy tanks, Army leaders had been able to affirm to isolationists inside and outside government, and especially to members of congressional appropriations committees, that American soldiers would not again, as in 1917–18, be sent to fight and die on European battlefields [7]. In 1945, US tanks were still inferior. But American industry could build them in unequaled numbers—25,000 in 1942 alone, compared with 330 two years earlier.

Rapid advances in radar, based in part on British developments, were the principal exception to US mediocrity in technical systems. Jet propulsion developed first in Britain and Germany. German engineers and scientists led in aerodynamics: "Swept-back wings, delta wings, wings with variable sweep-back, leading-edge flaps—all came from Germany during the war" [8]. At sea, "German submarines had better surface and submerged speed and superior sonar, optics, diesel engines, and batteries [and] could dive deeper and faster," while "[t]he Japanese submarine torpedo was far superior" [9]. Labor productivity in US defense plants, on the other hand, exceeded that in Germany by a factor of 2, and that in Japan by a factor of 5 [10,11]. And the Manhattan Project too depended on a production effort of mammoth scale to yield enough fissionable material for a handful of bombs.

Military R&D, puny in 1940, shot upward during the war. Most of the money went for the support of engineering design and development conducted by private firms under Army or Navy contract. Vannevar Bush's Office of Scientific Research and Development (OSRD) concentrated on research and prototyping.
Bush believed that existing government agencies were too cumbersome and cautious to move rapidly into science-based technologies that could help win the war. For this purpose, Bush and his colleagues at OSRD tapped their peers among the nation's scientific and technical elite [12]. Focused as it was on less costly early-stage work, OSRD controlled only a little over 7% of wartime R&D funding. Nonetheless, Bush and his organization deeply influenced post-war policies and practices. Thus, OSRD contracts, derived from arrangements under which MIT had conducted work for industrial firms before the war (Bush was an MIT professor and administrator), became the template for contracting procedures at the Office of Naval Research (ONR). Established in 1946 as OSRD was shutting down, ONR in turn served as the model for sponsorship of extramural research by other arms of the Department of Defense (DoD) and also by civilian agencies such as the National Science Foundation (NSF).

Before World War II, most of the leaders of the US armed forces had been indifferent or resistant to new technologies (aviation was the principal exception). But the experience of war demonstrated the value of high technology in ways that even the most hidebound could not mistake. The lessons of radar, the proximity fuze, and the German lead in aeronautics and rocketry penetrated deeply. As Henry H. (Hap) Arnold, Commanding General of the Army Air Forces, put it in 1944, "For twenty years the Air Force was built around pilots, pilots, and more pilots. The next twenty years is going to be built around scientists" [13].

Even so, military R&D and procurement declined precipitously after 1945. Leaving aside jet propulsion and nuclear weapons (and nuclear-powered submarines), obviously too important to neglect, the US government channeled little money toward new weapons until the nation's next war, on the Korean peninsula. The Korean conflict, for which the United States was again woefully unprepared, led to fundamental changes in perceptions and attitudes toward R&D [14]. Following the 1953 Armistice, which freed up funds from combat operations, the United States embarked on a fundamentally new approach to Cold War security strategy, one based on technology as a "force multiplier" (to use today's terminology). High-technology weapons would, it was hoped, enable the nation's armed forces to fight and win when outnumbered, as they had been by Chinese troops in Korea and as they expected to be if war with the Soviet Union should come (e.g., if nuclear deterrence failed).

Military R&D rose even as the defense budget declined. By 1960, more than 12% of US defense spending went for R&D, compared with 3.5% in 1955; in only one subsequent year, 1970 (with the Vietnam War draining funds away), did the share of R&D dip below 10%. American firms designed more new military aircraft in the 1950s than in all the years since—ten supersonic fighters alone. Large sums went for a nuclear-powered bomber and a space plane called Dyna-Soar intended to skip along Earth's outer atmosphere. Neither Dyna-Soar nor the nuclear bomber was built; both were canceled after R&D expenditures reached nearly $1 billion for each. But as these and other technologically ambitious programs of the time, such as missile defense, illustrate, the policy pushed R&D spending ever upward and bore witness to the US commitment to do almost anything to establish technologically based advantages over any and all possible adversaries.
The new military systems demanded expertise in aerospace and electronics that was scarce within DoD and took time for defense firms to build. Since World War I, the military services had relied almost entirely on private industry for aircraft design and production. Neither service sought to develop internal technical expertise in aviation comparable to that of the Army in ordnance and the Navy in ship design and construction. Now, with the advent of jet propulsion, supersonic flight, guided missiles, and computer-based early warning systems—technologies in which the services lacked broad-based competencies—reliance on private firms for both R&D and production increased.

In spite of their enthusiasm for high technology when incorporated into fielded weapons systems, not a few high-ranking officers viewed research with bemused tolerance at best. Thus, Harvey Sapolsky ascribes the creation of ONR to "bureaucratic accident" and the agency's survival to the budgetary taps opened by the Korean War [15]. Yet indifference or opposition by officers who preferred immediately visible paybacks did not lead to neglect, in part because research, relative to procurement, did not seem that costly. Both the Army and the Air Force established counterparts to ONR, which nonetheless remained the largest single source of federal dollars for basic research in the physical sciences and engineering until 1965. Only then did the faster-growing budget of NSF, the initially stunted fruit of the debate on the future of federally supported research generated by the publication of Vannevar Bush's famous 1945 report Science—The Endless Frontier, overtake ONR's spending.
With the Korean War triggering a fundamental shift in the US approach to military technologies, defense spending heavily influenced the entire US "national system of innovation". By the end of the 1950s, a new architecture for that system had emerged. On the government side, it has changed only modestly since the germinal period of the half-dozen years following the Korean War. Not only has DoD remained by far the largest public-sector supporter of R&D; military procurement continues to be a major influence on technological innovation, as recently illustrated, for instance, by civilian applications of the satellite-based Global Positioning System. The large and lasting influence of defense has inspired reactions ranging from dismay on the part of those who believe that guns and butter trade off directly to applause from those ready to claim that President Ronald Reagan's Strategic Defense Initiative would pay back its costs through spin-offs. Rarely have such reactions been based on meaningful analysis, in part because many military systems are so complex that they cannot be understood by non-specialists, and in part because so many of the critical decisions in government emerge from bureaucratic competition, political deals, and inter- and intra-service rivalries opaque to outsiders.

The important point here is that the Pentagon dominated federal technology and science policies well into the 1960s in terms of objectives, administrative practices, and budget. Health-related research was still modest in scale. USDA remained on the sidelines, isolated and without imitators elsewhere in government. As they competed for DoD's business, defense firms built up their technical capabilities. Universities put graduate students to work on government-funded research projects; when those students moved into the labor market, they took the latest knowledge and methods with them. Conditions were totally different from those in agriculture a half-century earlier—anything resembling the diffusion-oriented approach of USDA and state-level agricultural agencies would have been viewed as senseless.

Despite the continuing skepticism of some in the military services toward research itself, the armed forces were avid customers for new technology incorporated in concrete design proposals. In sharp contrast to the agricultural support policies established a half-century earlier, policymakers assumed that applications would take care of themselves. Sometimes they did. In orchestrating the development of nuclear-powered submarines, the irascible and indomitable Hyman G. Rickover, based in the Navy's powerful Bureau of Ships while at the same time serving on the staff of the Atomic Energy Commission, pushed the Commission and its laboratories to move beyond research to attack the many practical problems that had to be solved before a nuclear reactor could be crammed into a submarine hull [16]. In other cases, decades elapsed before reduction to practice, yet the appeal of possible applications was more than enough to keep funds flowing. Two of the best-known military technologies of the post-Vietnam period, stealth and precision weapons, illustrate this. R&D on stealth began in the early 1950s in the form of experiments on scale models [17]. It took 25 years for the "state of the art" to reach the point where Lockheed could build a pair of full-scale demonstrators under the Have Blue program, paving the way for the F-117 and B-2.
Precision weapons exhibit a more variegated evolutionary tree. The Army Air Forces managed to destroy several bridges in Burma during World War II with radio-controlled bombs (the idea goes back at least to World War I). Many years of largely incremental improvements in sensors and guidance systems then yielded laser-directed munitions capable, by the late stages of the Vietnam War, of destroying targets that had survived hundreds of earlier sorties—871 in the much advertised case of the Thanh Hoa bridge, which Air Force pilots finally managed to take down in 1972 [18].

Over the years, the services strengthened their internal research capabilities, expanded their ties with university research groups, and supported a wide range of speculative projects conducted by defense contractors. DoD's own laboratories worked with industry and with academic experts in fields such as electronics and materials science (the latter created as an independent discipline largely as a result of military patronage). ONR and its counterparts searched out specialists who could help with well-defined problems—algorithms for predicting the propagation of underwater sound, methods for purifying semiconductor materials, mathematical models for heat transfer in jet engine components. Defense contractors hired research-oriented staff to join the design-oriented engineers who had been their predominant technical employees until the 1950s.

It was the contractors' job to turn new knowledge from disparate sources into functioning weapons. This called for new skills and integrative capabilities using systems analysis and management.
Comparing the supersonic F-4 to the World War II Mustang, fighter ace Robin Olds said, "There was an elaborate strap-in procedure. The F-4 seat alone was more complex than the whole P-51!" [19]. Olds exaggerated to make a point. Still, it was no trivial task to integrate an ejection seat with the rest of the plane. Airflows had to be mapped to minimize the risk that an ejecting pilot would be hurled against the tail. Engineers had to make sure missiles launched from beneath the wings would not tumble out of control and endanger the plane that fired them. The services had to learn to oversee large-scale programs in which several contractors worked together. Technological change spurred organizational and sometimes institutional change. Both technological and organizational change had spillover effects in other parts of the economy. Even firms far removed from the defense industry learned to profit from the results of military R&D. Defense acquisition (i.e., R&D plus procurement) became a powerful if not always visible force for change.

World War II and then Korea had transformed the attitudes of the armed forces toward innovation. Military leaders, like pre-war farmers, had many reasons to be conservative in adopting innovations, given that simulation, tests, and maneuvers cannot accurately foreshadow the actual use of weapons under conditions of war. A risk remained that a new system might fail when used in anger, with unpredictable and perhaps disastrous consequences. Nonetheless, military leaders and government officials were willing to spend heavily on R&D to explore almost anything that might help offset the numerical advantages of the Soviet Union and Warsaw Pact (and China). Cold War imperatives meant that diffusion of new technologies into military systems did not seem to be a policy issue.

3. Politics and policy

At the beginning of the 20th century, state and federal policymakers understood that farmers, cash-strapped, credit-starved, and subject to the vagaries of prices set in far-off markets, could not be expected to innovate. Government would innovate for them, through USDA research. Because innovation without adoption would accomplish little, diffusion-oriented policies were necessary. With minor exceptions, the second part of the lesson has been lost. Whether or not the need remains, agricultural extension continues to exist; whether need may exist for diffusion-oriented policies elsewhere in the economy is a question that has rarely been asked. Indeed, the question has often been seen as illegitimate, raising the prospect of policies that tamper with the workings of the market.

Vannevar Bush had opened his proposal for structuring post-war US science and technology policy as follows:

    We all know how much the new drug, penicillin, has meant to our grievously wounded men on the grim battlefronts of this war … Science and the great practical genius of this Nation made this achievement possible. Some of us know the vital role which radar has played … Again, it was painstaking scientific research over many years that made radar possible [20].

Science—The Endless Frontier got much attention following its release (including condensation in a supplement to Fortune magazine) but had little immediate impact. Its lasting influence stems not from the specifics of Bush's recommendations but from the generalized appeal he articulated so effectively for support of "painstaking scientific research".
Bush, himself an engineer, decreed as a matter of OSRD policy that engineers were to be labeled scientists and that engineering be presented as a part of science "in order to counter what he saw as the American military's antipathy toward engineering salesmanship and British snobbery toward engineering" [21]. He must have known that his encomium to the scientific research underlying innovations in radar was a half-truth at best, given the many contributions of trial-and-error engineering, which stemmed, in part, from experience with radio, the field in which Bush himself had begun his career. Indeed, few people were as well placed to appreciate the extent to which new technology depends on old. Yet even if the notion of a simple, straightforward progression from science to applications was false, the underlying appeal was plain: by supporting research, the United States would be acting today to put in place a knowledge base to support innovation tomorrow. Before the war, government had contributed little or nothing to new knowledge.
Bush had seen the consequences first-hand, in the nation's lack of military preparedness before World War II, and wished to avoid a repetition in the future. Regardless of his full range of motives, which are unknowable, and regardless also of the neglect by policymakers of the specifics Bush recommended, his rhetoric and reputation implanted the linear model in the consciousness of policymakers and the public at large. The assumption that technological diffusion proceeds in an autonomous manner, guided efficiently and effectively by the invisible hand of the market, encapsulates views widely accepted in American political economy, laissez-faire for short, as applied to the realm of technology and science, a realm that was relatively unfamiliar to policymakers in the 1940s.

Within a dozen years, students of innovation had demolished the linear model [22,23]. It was too late. The image of a cornucopia of new technologies spewing from a pipeline fed by research had too much appeal. It has retained that appeal; if anything, snowballing increases in R&D spending by the National Institutes of Health (NIH), now more than $28 billion, some 95% of it for basic and applied research, suggest that the influence of the linear model continues to grow.

In one sense, the debate set off by Science—The Endless Frontier ended in 1950 with congressional authorization of NSF. In another sense, it continues as policymakers struggle to draw lines between basic research of a sort that nearly everyone finds appropriate for public support and more applied work, closer to the immediate concerns of private firms but lacking direct connection with accepted government missions such as defense. Non-mission R&D has often been assailed as wastefully inefficient, "corporate welfare" in the language of recent years. The larger issue is hardly new. It goes back to the tangled disagreements between followers of Hamilton and Jefferson, the Federalists and anti-Federalists, mercantilists and anti-mercantilists, over fiscal policy and trade, desirable pathways of economic development, and the role of the state in charting and traversing those pathways.

The current iteration began in the late 1970s with proposals for "industrial policy" in the form of sectoral measures intended to shore up portions of US manufacturing seemingly threatened by international competition. A decade later, it had metamorphosed into a debate over "technology policy," one that continued as Republicans sought to shrink or kill a pair of programs authorized in the 1988 Omnibus Trade and Competitiveness Act—the Advanced Technology Program, set up to provide funds for R&D conducted by private firms without regard to government mission, and the Manufacturing Extension Partnership (MEP, originally called the Manufacturing Technology Centers program), intended to accelerate adoption of advanced production technologies by small establishments. The larger debate into which these programs were swept concerns, most fundamentally, government's ability to promote economic development without succumbing to giveaways such as 19th-century land grants to railroad magnates.
Dobbin [24] argues that aversion to activist industrial policy in the United States stems in considerable part from reaction against the corruption that accompanied the building of the nation's early railway networks, a reaction that led to withdrawal by government at all levels, local, state, and federal, and displacement of subsidies provided through loans, loan guarantees, and land grants by regulations intended to maintain "natural selection" through market mechanisms.

Put differently, the long-running conflict turns on assessment of the risks of government failure as compared with market failure. Opponents of industrial policy (and technology policy and managed trade) believe that political logrolling and bureaucratic sloth, incompetence, and malfeasance lead inevitably to wasteful and counterproductive outcomes; not only to a bottomless pork barrel but also to doomed efforts to prop up firms that should be left to fail and economic sectors that should be left to decline. In this view, the less government attempts the better—markets are the only reliable guide to socially beneficial outcomes. To those in the other camp, markets are abstractions with virtues easier to locate in theory than in a disequilibrium economy populated by myopic decision makers unable to weigh choices open to them on anything approaching a rational basis and not uncommonly blind to their own self-interest. Many in this camp also believe that even if economic actors were on average rational in the sense posited by neoclassical theory, the aggregate of their decisions would not be socially optimal. Worried more about market failures than the inevitable shortcomings of government, they accept the disorderliness of American politics as part of the price to be paid for a society in which outcomes promise to be more equitable than if determined by unfettered or minimally fettered economic forces.

Beneath the veneer of economic theory, market failures are of course ubiquitous. Some are inconsequential; others have serious consequences, more so as service-producing sectors come to dominate post-industrial societies. Intangibles such as health care, for example, have no predetermined identity. They take on attributes only in the course of production and thus cannot be evaluated in advance of delivery.
Because of this, they confront consumers with decisions unlike those among alternative goods, which can be inspected before purchase. This may make little difference—a mobile phone pricing plan can always be dropped. But there are no easy exchanges or warranties for unneeded surgery or unforeseen drug side-effects. While many types of market failure are difficult to analyze, and only the most technocratic of policymakers can hope to have a deep understanding of them, absence of facts or theory poses few obstacles to politicians accustomed to making decisions on matters they do not understand.

In devising agricultural research and extension policy a century ago, elected officials implicitly recognized the existence and consequences of market failure. Farmers were price-takers. They produced commodities, limiting profitability and leaving them without either resources or incentives to innovate, even though the social returns to innovation might be high. A similar rationale underlies public support of research as urged by Vannevar Bush and the science community after 1945. Although market failure in the provision of scientific research had been recognized at least since the writings of J.S. Mill, economists did not explore the consequences in depth until the 1950s, when the policy was already in place. Over the next several decades, the justification for public support of research came to be widely accepted in Congress and the executive branch and among the public at large.

The situation is very different for diffusion. In the simplest theoretical pictures, market pull generated by rational if often implicit economic calculation paces adoption of new technology. Entrepreneurial firms innovate and other businesses adopt these innovations, purchasing new equipment, reorganizing operations, and retraining workers if anticipated returns exceed those of alternative investments. Consumers, likewise, buy flat-screen digital televisions or Botox injections because they value them above alternative choices. In practice, the technological learning involved can be piecemeal and halting, as in the early days of scientific agriculture (and flat-screen video displays and cosmetic surgery). Case studies show that the learning associated with diffusion tends to be heavily experiential, fraught with uncertainty, and almost always and everywhere a matter of trial and error and trial and success [25]. Because of this, useful analytical models have been hard to develop (e.g., Ref. [26]); a stylized adoption-curve sketch appears a few paragraphs below. Although empirical studies of diffusion have been common, there is no comparably well-developed theory to provide justification for policy.

Education and training provide a foundation for technological learning, and during the half-century in which Kansas wheat and Texas beef became iconic images of American prosperity, agricultural research and extension found essential complements in the spread of public schooling. At the beginning of the 20th century, while primary school attendance was common in the United States, fewer than 10% of young people earned a high school diploma (Fig. 2a). Farmers averaged little more than 4 years of education. Even literate subscribers to magazines offering advice on raising crops and livestock could not be expected to distinguish between hucksters and genuine experts, much less choose among, adopt, and then adapt to local conditions the most promising of proffered innovations.
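As an editorial aside, not drawn from the article or its references, the sketch below shows the kind of stylized adoption model the diffusion literature works with: a discrete-time version of the Bass model, in which adoption is driven partly by external influence (advertising, extension agents) and partly by imitation of prior adopters. All parameter values are arbitrary and purely illustrative.

```python
# Illustrative only: a discrete-time simulation of the Bass diffusion model.
# The per-period hazard of adoption is p (external influence) plus q times
# the fraction of the market that has already adopted (imitation, word of
# mouth). Parameter values below are arbitrary.

def bass_adoption(p=0.03, q=0.38, market_size=1000, periods=25):
    """Return the cumulative number of adopters at the end of each period."""
    cumulative = 0.0
    path = []
    for _ in range(periods):
        fraction_adopted = cumulative / market_size
        new_adopters = (p + q * fraction_adopted) * (market_size - cumulative)
        cumulative += new_adopters
        path.append(round(cumulative))
    return path

if __name__ == "__main__":
    # Prints a slowly rising, then steep, then saturating adoption path:
    # the S-curve that empirical diffusion studies repeatedly find.
    for period, adopters in enumerate(bass_adoption(), start=1):
        print(f"period {period:2d}: {adopters:4d} adopters")
```

Even this toy model puts most of the action in the imitation term, that is, in contact with prior adopters, rather than in anything a research budget buys directly; that is consistent with the article's emphasis on extension agents, supplier networks, and other institutions that carry knowledge to potential users.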
The farmers' situation was something like that of consumers of health care today, asked to evaluate the provision of professional services which they may not fully understand even after those services have been provided. Following the First World War, secondary education expanded greatly and so did specialized training in agriculture (Fig. 2b), part of the generalized support from which the sector benefited. By the 1950s, farmers were becoming farm managers. Even so, city folk sometimes depicted them as rubes who could be sold a two-cylinder John Deere tractor on the basis that it would consume half the fuel of a four-cylinder model from Ford or Farmall.

Factory owners and managers, on the other hand, were supposed to be able to look out for themselves. They were "exploiters," not the "exploited," and had no need of government assistance in learning to use new technologies. And yet, during the 1980s it became apparent that small manufacturing establishments did need help. Productivity data showed smaller US manufacturers (those with fewer than 500 employees) to be operating at perhaps three-quarters the efficiency levels of larger establishments, a gap that appeared to be widening [27]. Surveys (e.g., Ref. [28]) found small firms lagging in adoption of productivity-enhancing technologies such as numerically controlled (NC) machine tools and organizational practices such as total quality management (TQM). Many of these same surveys revealed widespread failures by managers to grasp the logic of new technologies. Some, for example, stated that NC equipment was inapplicable to their business while at the same time reporting information on product mix (e.g., number of different part designs, average lot sizes) indicating its appropriateness. Like the small farmers of earlier decades, small manufacturers appeared to be slow learners.

[Fig. 2. Human capital complements to agricultural research and extension: (a) high school graduation rates, in graduates per 100 17-year-olds, 1900–1960 (Source: Snyder TD, editor. 120 years of American education: a statistical portrait. Washington: Department of Education, National Center for Education Statistics; January 1993, Table 19, p. 55); (b) enrollments in federally aided agricultural training programs, in thousands of students, 1920–1960 (Source: Historical statistics of the United States: colonial times to 1970, part 2. Washington: Department of Commerce, Bureau of the Census; September 1975, Series H 572–586, p. 378).]

At a time when Congress could find little ground for agreement on industrial policy or trade policy, consensus on the small manufacturing sector, which could be portrayed as part of the small business community for which politicians of all stripes liked to voice support, held promise. If US manufacturing was in crisis, smaller firms were part of the problem. Although few exported or faced import competition directly,
many sold their output to firms that did, such as the Detroit automakers. Low productivity (and quality) among suppliers raised costs to US firms deeply immersed in the international economy. In the early 1980s, state governments, especially those in the Midwest anxious to retain well-paid manufacturing jobs, began to establish "industrial modernization" programs to help small establishments. As in agriculture, Washington followed the lead of the states, with the MEP program, based, just as agricultural extension was, on federal–state cooperation, channeling funds to fast-growing industrial extension networks. MEP has recently delivered services to about 20,000 firms each year [29].

TQM, just-in-time (JIT), and other ingredients of "lean production" embody a more complex logic of efficiency than NC and require deeper organizational changes. It took about 15 years for even the largest and most sophisticated US manufacturers to understand and adopt these "organizational technologies", which diffused more rapidly in Japan, where most had originated or been substantially reshaped. The differences can be traced to institutions [30]. In Japan, broad-based industry and employer associations helped codify best practices. Large manufacturers transmitted and taught them to suppliers, while government programs helped unaffiliated small firms. In the United States, by contrast, arm's-length relationships left manufacturers large and small to learn the lessons of JIT, TQM, and lean production on their own. Business, trade, and professional groups, consultants, and government played only minor roles in untangling myth and misinformation from methods of proven effectiveness. Consultants, for example, had incentives to hold insights (if they had any) close to their chests, so as to be able to sell the same advice to several clients.

Many of the incremental innovations that coalesced in JIT and lean production originated in learning-by-doing on the factory floor. New medical practices, by contrast, stem principally from laboratory research and clinical trials. Despite this and other differences, the underlying issues of productive efficiency and quality are not that dissimilar. Even as health care expenditures pass 16% of US gross domestic product, suggestive of modest efficiency gains at best, a large and growing body of evidence shows the average quality of health services to be poor. Surveys, for instance, show that little more than half of patients receive care in accordance with recommended best practices [31]. Rising costs and growing recognition of poor quality and, by extension, low productive efficiency have yet to generate much of a response aimed at systematic improvement, even for the relatively short list of chronic illnesses, such as diabetes and heart disease, that afflict millions of Americans and account for a large fraction of morbidity and expenditure. Managed care seems a spent force, given incentives that reward short-term cost control rather than long-term improvements in health (in the long term, some other party is likely to bear the cost of a particular individual's care). Lacking effective pressures for improvement such as those at work in manufacturing, where customers can compare product offerings from Ford and Toyota in dealer showrooms and the pages of Consumer Reports, health care providers have yet to discover and implement practices such as TQM, much less some sort of parallel to lean production.
Meanwhile, NIH spends huge sums on research, and new scientific knowledge issues forth in vast volumes (each month the National Library of Medicine adds more than 30,000 citations to its databases). Although physicians are perhaps the best educated and most extensively trained of major occupational groups, not even the most diligent of the 800,000-plus in the US labor force can hope to keep pace with advances in clinical knowledge. The less diligent seem to forget too quickly too much of what they learned in medical school. Government pays about 45% of the $2-plus trillion US health care bill, around 60% if tax expenditures are included, which provides ample leverage to seek improvements in productivity and the quality of care. But government has shown little willingness to use that leverage. Although the NIH "mission" is to improve public health, questions of service delivery and service quality are mostly considered not part of that mission, no doubt for fear that the search for improvement would be portrayed as straying into "socialized medicine". Government remains content to support research as if improved service delivery will follow automatically, even though the evidence accumulated over the past three decades provides little support for such expectations.

During the Cold War especially, politically inspired rhetoric contrasted the lightly regulated US economy with Soviet-style planning, holding that the fruits of science and technology would be more or less easily harvested once the seeds of innovation had been more or less widely dispersed. Ideological predispositions common in Washington made it easy to assume that demand-driven learning would proceed swiftly and spontaneously.
Only at the end of the 1980s, by which time the success of Japanese firms in the design and manufacture of a wide range of consumer and industrial goods had created great anxiety in some US circles, did Congress create the MEP, a decidedly modest program that by no means represented a break with mainstream policy.

Still, if deliberate encouragement of diffusion has been uncommon, government policies have sometimes contributed to it indirectly. In the illustrations that follow, taken from Alic, Mowery, and Rubin [32] (which includes further citations), stimulus for diffusion was incidental to policies adopted for other reasons. In a noteworthy example from the earliest days of computing, Pentagon managers recognized that an extensive research base and industrial infrastructure would be needed to fully exploit digital processing for military purposes and sought to ensure that technical information would reach the widest possible audience. As recalled by Goldstine [33]:

    A meeting was held in the fall of 1945 at the Ballistic Research Laboratory to consider the computing needs of that laboratory "in the light of its post-war research program." The minutes indicate a very great desire … to make their work widely available. "It was accordingly proposed that as soon as the ENIAC was successfully working, its logical and operational characteristics be completely declassified and sufficient publicity be given to the machine … that those who are interested … will be allowed to know all details …"

(The embedded quotations are from "Minutes, Meeting on Computing Methods and Devices at Ballistic Research Laboratory," 15 October 1945.)

Later, DoD's second-sourcing requirements for semiconductors spurred inter-firm exchanges of knowledge at a time when essentially all microelectronic devices were sold for military and space applications. In both the semiconductor and computer industries, antitrust enforcement also accelerated technology flows. As part of its response to a 1949 Justice Department antitrust suit, AT&T released information on the technical characteristics of its new invention, the transistor, and licensed the relevant patents to all comers. In a 1956 settlement of an antitrust case involving punched card equipment, IBM agreed to unrestricted licensing not only of patents related to punched cards but also of its computer patents. A dozen years later, again under threat of antitrust proceedings, IBM unbundled its software and hardware products, providing a major boost to the fledgling independent software industry. And, still more dramatically, the 1982 settlement of the government's antitrust suit against AT&T restructured the entire US telecommunications industry, spawning a great deal of innovation in both technology and services. Economic regulation, although usually assumed to hinder innovation by stifling competition, has sometimes had the effect of fostering diffusion.

A lightly regulated labor market has likewise fostered knowledge diffusion. Skills rooted in formal education and experiential learning migrate through the economy as workers move from job to job, whether on the factory floor or in engineering, finance, and indeed the executive suite. On the other hand, ideologically rooted aversion to "interference" in the labor market has inhibited development of institutions for vocational education and workforce training of a sort that enhanced the diffusion of know-how in countries including Germany and Japan during much of the 20th century (doing so in very different ways).
In the United States, government-funded training programs have been directed mostly at disadvantaged, displaced, or otherwise unemployed workers. Intended to help people get a job—any job—these programs stress skills of "employability," such as behavior in interviews, rather than skills needed in the workplace. As a result, most of those helped find themselves in low-wage "unskilled" jobs from which they have little prospect of advancing to better-paying positions, in part because businesses provide their employees with training that averages only around 3 days per year [34]. For non-professionals, employer-provided training tends to deal with task- and firm-specific matters such as scripted interactions with customers, computer upgrades, and health or safety regulations. Managers and professionals are more likely to get training in higher-level skills, but among professional occupations only the outcomes of continuing medical education (CME) have been carefully studied, and CME has been found to be generally ineffective. CME, required by many licensing boards for certification renewals, rarely alters physician behavior—e.g., the propensity to follow consensus best practices such as those associated with "evidence-based medicine"—and thus does little to translate research advances into improved care [35]. Even so, CME reform has yet to be widely discussed, much less implemented—another illustration of the pathologies afflicting the health care system and another small facet of the overall US weakness in diffusion.

This weakness stems from forces that policymakers have barely recognized, much less addressed. Politicians generally come to Washington (or to state capitols) with their underlying beliefs more or less fully formed; indeed, they may win election because of the strength of those convictions.
By the 1980s, if not before, the simple linear model, with its assumption that scientific research provides a sufficient underpinning for economic growth, had come to be taken for granted; it is reinforced annually by the appeals of scientists for more money. There has been no comparable set of beliefs concerning diffusion, nor much awareness of it as an issue, and no constituency—not even for training, despite the litany of business complaints concerning skills "shortages". Agriculture remains exceptional in benefiting from large-scale support for diffusion, and that is a carryover from an earlier era. From the 1920s into the 1970s, USDA spending on research and extension rose roughly in parallel, the extension budget averaging roughly half the research budget [36]. Since the 1970s, state and local governments have provided a growing share of funds for extension, and USDA support of extension has fallen to about 25% of the agency's research budget. Such a percentage is still extraordinarily high, given that diffusion-related spending by other agencies seldom approaches even 5% of R&D and more commonly seems to have been less than 1% (much of this evidently for printing reports and conference participation) [37]. No R&D-intensive agency other than USDA has exhibited anything that might be called a strategy for diffusion.

4. Conclusion

Vannevar Bush's linear model must appear absurd on its face to anyone with practical experience of innovation. Yet popular and political discussions, whether of medical practice or energy security, shift back and forth between "innovation" and "research" as if the two were synonymous. Bush's model derives much of its power from its fit with the deep-seated American predisposition to "let the market work". Few policymakers recognize the extent to which the linear model has conditioned their thinking. Nothing illustrates this better than the propensity of Congress and successive administrations to pump ever greater sums into NIH research. At a time when policymakers have seized on school choice and standardized tests as clubs to beat public schools into performance improvement, no such clamor has arisen in health care and no such club has been found. Instead, Americans are urged to have faith that research results will trickle down as miracle cures—linear thinking writ larger, perhaps, than ever before.

Acknowledgments

An earlier version of this paper was presented at the Conference on Science and Technology in the 20th Century: Cultures of Innovation in Germany and the United States, German Historical Institute, Washington, DC, October 15–16, 2004. The author expresses his appreciation to conference participants and to Vernon W. Ruttan and Philip Shapira for helpful comments.

References

[1] Perazich G, Field PM. Industrial research and changing technology. Report no. M-4. Philadelphia: Works Projects Administration; January 1940.
[2] Federal funds for science XI: fiscal years 1961, 1962, and 1963. NSF 63-11. Washington: National Science Foundation; 1963. p. 136.
[3] Scott RV. The reluctant farmer: the rise of agricultural extension to 1914. Urbana, IL: University of Illinois Press; 1970.
[4] Huffman WE, Evenson RE. Science for agriculture: a long-term perspective. Ames, IA: Iowa State University Press; 1993.
[5] Evenson RE. Agriculture. In: Nelson RR, editor. Government and technical progress: a cross-industry analysis. New York: Pergamon; 1982. p. 238–9.
[6] Turnbull AD, Lord CL. History of United States naval aviation. New Haven, CT: Yale University Press; 1949. p. 40.
[7] Hendrix JT. The interwar army and mechanization: the American approach. J Strategic Stud 1993;16:75–108.
[8] Miller R, Sawers D. The technical development of modern aviation. London: Routledge & Kegan Paul; 1968. p. 173.
[9] Blair C Jr. Silent victory: the US submarine war against Japan. Philadelphia/New York: Lippincott; 1975. p. 881.
[10] Milward AS. War, economy and society, 1935–1945. Berkeley, CA: University of California Press; 1977. p. 67.
[11] Harrison M. Resource mobilization for World War II: the USA, UK, USSR, and Germany, 1938–1945. Economic History Review, 2nd series 1988;XLI:171–92.
[12] Zachary GP. Endless frontier: Vannevar Bush, engineer of the American century. New York: Free Press; 1997.
[13] Gorn MH. Harnessing the genie: science and technology forecasting for the air force, 1944–1986. Washington: Office of Air Force History; 1988. p. 17–18.
[14] Blanpied WA, editor. Impacts of the early cold war on the formulation of US science policy: selected memoranda of William T. Golden, October 1950–April 1951. Washington: American Association for the Advancement of Science; 1995.
[15] Sapolsky HM. Science and the navy: the history of the Office of Naval Research. Princeton, NJ: Princeton University Press; 1990. p. 118.
[16] Hewlett RG, Duncan F. Nuclear navy, 1946–1962. Chicago: University of Chicago Press; 1974.
[17] Bahret WF. The beginnings of stealth technology. IEEE Trans Aerospace Electron Syst 1993;29:1377–85.
[18] Werrell KP. Did USAF technology fail in Vietnam? Three case studies. Airpower J 1998;12(1):87–99.
[19] Sims EH. Fighter tactics and strategy, 1914–1970. Fallbrook, CA: Aero Publishers; 1970. p. 244.
[20] Bush V. Science—The endless frontier: a report to the president on a program for postwar scientific research. Washington: National Science Foundation; 1990 reprint of July 1945 original. p. 10.
[21] Kline R. Construing technology as applied science: public rhetoric of scientists and engineers in the United States, 1880–1945. Isis 1995;86:194–221.
[22] Jewkes J, Sawers D, Stillerman R. The sources of invention. 2nd ed. London: Macmillan; 1969.
[23] Isenson RS. Project hindsight: an empirical study of the sources of ideas utilized in operational weapon systems. In: Gruber WH, Marquis DG, editors. Factors in the transfer of technology. Cambridge, MA: MIT Press; 1969. p. 155–76.
[24] Dobbin F. Forging industrial policy: the United States, Britain, and France in the railway age. Cambridge: Cambridge University Press; 1994.
[25] Rosenberg N. Inside the black box: technology in economics. New York: Cambridge University Press; 1982.
[26] Stoneman P. The economics of technological diffusion. Oxford: Blackwell; 2002.
[27] Russell J. Federal policy and manufacturing productivity. In: Teich AH, Nelson SD, McEnaney C, editors. AAAS science and technology policy yearbook—1993. Washington: American Association for the Advancement of Science; 1994. p. 307–17.
[28] Current industrial reports: manufacturing technology—1988. Washington: Department of Commerce; May 1989.
[29] Shapira P. Evaluating manufacturing services in the United States: experiences and insights. In: Shapira P, Kuhlman S, editors. Learning from science and technology policy evaluation: experiences from the United States and Europe. Cheltenham: Elgar; 2003. p. 261–94.
[30] Cole RE. Strategies for learning: small-group activities in American, Japanese, and Swedish industry. Berkeley, CA: University of California Press; 1989.
[31] McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003;348(June 26):2635–45.
[32] Alic JA, Mowery DC, Rubin ES. US technology and innovation policies: lessons for climate change. Arlington, VA: Pew Center on Global Climate Change; 2003.
[33] Goldstine HH. The computer from Pascal to von Neumann. Princeton, NJ: Princeton University Press; 1972. p. 217, note 48.
[34] Frazis H, Gittleman M, Joyce M. Results of the 1995 survey of employer-provided training. Mon Labor Rev 1998(June):3–13.
[35] Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care 2001;39(8, Suppl. 2):II-2–II-45.
[36] Rogers EM, Eveland JD, Bean AS. Extending the agricultural extension model. Stanford, CA: Stanford University, Institute for Communication Research; 1976. p. 156.
[37] Rogers EM. Models of knowledge transfer: critical perspectives. In: Beal GM, Dissanayake W, Konoshima S, editors. Knowledge generation, exchange, and utilization. Boulder, CO: Westview; 1986. p. 37–59.
John Alic has taught at several universities and served as a staff member of the US Congress's Office of Technology Assessment. He is the author or coauthor of many technology-related papers, articles, and case studies. His book Trillions for Military Technology: How the Pentagon Innovates and Why It Costs So Much was published in fall 2007.