The History of Information Security: A Comprehensive Handbook Karl de Leeuw and Jan Bergstra (Editors) Copyright © 2007 Elsevier B.V. All rights reserved.
20 A HISTORY OF COMPUTER SECURITY STANDARDS Jeffrey R. Yost
Charles Babbage Institute, University of Minnesota, Minneapolis, Minnesota, USA

Contents
20.1. Setting technical standards: A brief overview / 596
20.2. Digital beginnings, physical security and electronic radiation / 597
20.3. The early leadership of Willis Ware in computer security research / 600
20.4. Continuing challenges with computer security at the DoD / 603
20.5. James P. Anderson and the air force's focus on computer security / 604
20.6. Bell and LaPadula: modeling of computer security and the foundation for standards / 605
20.7. Moving toward a computer security organizational infrastructure / 605
20.8. The Orange Book (TCSEC) / 606
20.9. Common criteria and the globalization of computer system security standards / 608
20.10. The origin and early developments of cryptographic research in the academic community / 610
20.11. Early attention to and investing in computer security systems development in the private sector / 612
20.12. Cryptographic research and the early digital computer industry / 612
20.13. RSA Data Security: pioneering a security software industry / 613
20.14. Computer security and society / 616
20.15. Conclusion / 619
References / 620
Abstract
While computer security has only achieved widespread attention in recent years, the topic has a long and complex history. This chapter begins by briefly surveying computer security standards within the broader context of technical standard setting. It then details early work to address vulnerabilities resulting from electronic emanation, heightened concerns with the proliferation of computer networking, the ongoing influence of the Department of Defense (DoD) with standards work, and the establishment of formalized national (TCSEC in the US) and international (Common Criteria) computer security standards. Early cryptographic research in the academy and the emergence of the software security industry are also explored, along with the relationship between security and privacy. Ultimately, formalized government standards have been limited, and the global commercial sector has been increasingly influential in seeking to mitigate accelerating threats.

Keywords: computer security, computer security standards, public key, Clipper Chip, RSA Data Security, computer security industry, Orange Book, TCSEC, Common Criteria, Bell-LaPadula.

In the 1980s, early forms of malware, or malicious software, primarily computer viruses that were passed by exchanging infected disks from
computer to computer, posed substantial threats to the integrity and operations of computer software and systems. More recently, computer crimes,
including denial of service attacks and computer-based identity theft, along with new categories of malware, such as worms and Trojan horses, have exacerbated the problem. The renaissance of the Internet and the ubiquity of its use with the advent and growth of the World Wide Web and browser software during the early to mid-1990s further fueled threats of increasingly destructive network-spread malware, cyberterrorism and other forms of computer crime. Concomitantly, these threats have bolstered computer security as: a growing area of study within computer science departments in the academy; a fundamental concern and procurement expense for government, corporations and other organizations; a rapidly expanding sector of the software industry; and an increasingly common topic explored by mass media.

Computer security, however, has a much longer history than most of the literature on the issue implies. It, like digital computing more generally, is an area that first developed from the needs of and funding by the U.S. Department of Defense (DoD). As digital computing technology evolved and proliferated, and the earliest networks were formed, a number of DoD-funded researchers led projects to address potential computer security threats, and to establish common, or standard, security protocols. The following chapter briefly touches upon the early history of technical standard setting work in other fields, and then goes on to survey the origin of computer security standards, changes in computer security with advancing technology, and the appearance and activities of a broad set of institutional, corporate, and individual actors that became involved in defining and shaping computer security standards in government and industry. While the history of computer security standards holds some commonalities with what we know of the broader range of technical standard setting work, computer security standards have also developed within some unique contexts and arguably have been less influential and useful than technical standards in many domains.
20.1 SETTING TECHNICAL STANDARDS: A BRIEF OVERVIEW

Technical standard setting is a rich historical topic that has been understudied. To the extent that historians have explored certain aspects of technical standard setting, it typically has been episodic and as a bit part to a larger story of economic and technological change, such as in David S. Landes' The
Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present or David F. Noble's America By Design: Science, Technology, and the Rise of Corporate Capitalism. In a small number of works, the history of technical standards has figured more prominently, including David A. Hounshell's From
the American System to Mass Production, 1800-1932; Janet Abbate's Inventing the Internet; and Merritt Roe Smith's Harpers Ferry Armory and the New Technology: The Challenge of Change.

The US government, and in particular the military, has long had a hand in setting many types of technical standards, primarily for military, national security and trade purposes. In 1824 the US Congress formed the Office of Standard Weights and Measures. Throughout most of the 19th century, its work was generally limited to setting and regulating measurement for fair commerce. Also during the first half of the 19th century, the US military helped set standards and specifications with armory practice as the nation transitioned from craft to factory methods in manufacturing rifles [54]. This was done with the aim of achieving interchangeability of parts to ease the task of battlefield repairs, as well as to increase efficiency of production [54].¹ Later, the need for efficient rail transportation during the Civil War led to heightened recognition of the need for standard track gauges (distance between rails) [40].

¹ As a result of this book and the broader recognition of the impact of armory practice on factory methods, this is one of the more thoroughly explored episodes of standard setting work in the historical literature. The work highlights the difficulties with instituting standards that conflict with established cultural practices. It also indicates that interchangeability came at great expense rather than leading to lower cost production in its early years.
Figure 20.1. National Bureau of Standards' Standards Eastern Automatic Computer (SEAC), c. early 1950s. (Courtesy of the Charles Babbage Institute.)
Many agencies and departments of the federal government have played a role in setting standards to facilitate trade, better ensure public safety (standards of air travel and air traffic control), and for other purposes. Additionally, industry trade associations and non-government technical organizations have had a major role in standard setting work [52]. More recently, international standards organizations have become ever more influential in an era increasingly defined by global commerce, perspectives, and policies.

At the end of the nineteenth century, the Office of Standard Weights and Measures' purview expanded into conducting science and engineering research for the government in physics, chemistry, engineering and metallurgy. In recognition of this broader program, the organization's name was changed to the National Bureau of Standards (NBS) in 1901. The work of the NBS broadened into digital computing in the early 1950s with the Standards Eastern Automatic Computer (SEAC) and Standards Western Automatic Computer (SWAC).
The NBS became the primary computer security standards setting organization for the federal government in 1965, and though its relative importance in this area has waned to a certain extent, it (NBS changed its name to the National Institute of Standards and Technology, or NIST, in 1990) has remained fundamental to this field to the present day [40].

20.2 DIGITAL BEGINNINGS, PHYSICAL SECURITY AND ELECTRONIC RADIATION
Before exploring the role of the NBS/NIST in computer security standards, and then introducing and outlining the growing role of the National Security Agency (NSA) in this field, the chapter will first introduce the earliest appearance of computer security standards. These standards related to physical security and were designated by the military.
Figure 20.2. Electronic Numerical Integrator and Computer (ENIAC) at the University of Pennsylvania's Moore School of Electrical Engineering, c. 1946. (Courtesy of the Charles Babbage Institute.)
In the early to mid-1940s the first meaningful digital computer, the Electronic Numerical Integrator and Computer (ENIAC), was designed and developed at the University of Pennsylvania's Moore School of Electrical Engineering. It had been financed by the Army Ballistic Research Laboratory (BRL) to provide a tool to quickly and accurately solve mathematical equations for ballistic firing tables during World War II. Though it was not completed until several months after the end of the war, the United States almost immediately became engaged in the Cold War with the Soviet Union, and the DoD continued to finance a number of computer development and design projects at universities, national laboratories, and a few select firms. By the early to mid-1950s, each of the four longtime leaders of the American office machine industry, Burroughs Corporation (accounting machines), National Cash Register (NCR - cash registers and accounting machines), Remington Rand (typewriters and tabulating machines), and International Business Machines (IBM - tabulating machines and punched cards), had entered the computer industry. One of the first applications of the ENIAC was calculating equations for the DoD's classified "Super" project to develop a hydrogen bomb.
Mainframe computers of the early post-World War II era cost from several hundred thousand to millions of dollars, and were only installed at and controlled by elites in the military and to a lesser extent industry (defense contractors in the aerospace field, large insurance firms, etc.). Many of the calculations completed on these early digital computers were secret. Further revising and enhancing security procedures on national defense matters (which had begun with legislation in 1936), in 1944 the military began to use the classification "Top Secret" for documents and information that could lead to "grave damage to the nation" if compromised. In 1946 a number of atomic scientists who had worked on the Manhattan Project successfully lobbied Congress to create a civilian nuclear research agency, the Atomic Energy Commission (AEC). This organization adopted the military's classification system for secret information and documentation, and coupled with related legislation, this formally placed restrictions on the export of nuclear information [50: 48-50].

Despite the secretive nature of the DoD, and to a lesser extent, industry, early computer
security was fairly straightforward and posed few additional problems regarding the protection of confidential information. In the first decade after World War II computer security was typically limited to physical security. It was simply one element of the more general security of the installations where computers were housed. Security protocols were designed to focus upon and prevent or address theft, vandalism, sabotage and natural disasters (hurricanes, floods, tornadoes, volcanoes, earthquakes, etc.). Guards and alarms were used to prevent unauthorized personnel from entering and engaging in theft or destructive acts. One or a small number of skilled computer technicians usually handled the operation of the machines at an installation. Scientific researchers often had no physical contact with the computers that crunched their numbers. The complexity and specialized knowledge involved in operating early mainframes ensured and justified limited access and contributed a substantial degree of security. Input/output (I/O) systems of the time typically consisted of punched cards or paper tape containing encoded data, and jobs were processed in batches, where one job was done before the next one began. Computer operators were few in number, tended to go through strict screening, and rarely posed any risk to the security of computer systems or the data processed by them.

Electronic radiation or emanation, however, did present some potential computer security risks for early mainframe computers. This was a problem for secret communication (or opportunity for spying) that was also present with cipher machines. In his autobiography, one senior British security officer in the early post-World War II era, Peter Wright, states:

Any cipher machine, no matter how sophisticated, has to encipher the clear text of the message into a stream of random letters. In the 1950s, most advanced ciphers were produced by typing the clear text into a teleprinter, which connected into a separate cipher machine, and the enciphered text clattered out on the other side. The security of the whole system depended on thorough screening. If the cipher machine was not electromagnetically screened from the input machine carrying the clear text, echoes of the unencoded message might be carried along the output cables along with the enciphered message.
With the right kind of amplifiers it was theoretically possible to separate the "ghosting" text out and read it off [60: 11].

In fact, it was more than just a "theoretical" possibility. Such techniques allowed the British security agents, in MI5 and GCHQ, to frequently read the high-grade French cipher used by the French Embassy in London in the early 1960s [60]. All electronic equipment emits a level of electrical and electromechanical radiation. During the 1950s some government computer specialists became concerned that "electronic eavesdroppers" could capture and "decipher" emanations from mainframe computers, and could do so with little risk of detection. The level of electronic radiation and the distance of the potential eavesdropper were fundamental factors in determining whether or not emanations were decipherable. Given the secure surroundings of most early government computer installations processing classified data, the risks were modest. Nevertheless, by the latter part of the 1950s the government set the first standard, called TEMPEST, for the level of emanations that remained acceptable when classified information was being processed. In the following decades there were refinements to the original TEMPEST standard on electromechanical radiation.

TEMPEST became a wide-ranging term for technology that suppressed signal emanations from electronic equipment, as well as for the research and development underlying such efforts. TEMPEST products were larger, heavier, and more expensive than their non-TEMPEST counterparts, and generally worked by creating containers or barriers to shield electronic emanations [47]. Physical shields can take the form of specially designed computer rooms or even entire buildings (Secure Compartmented Information Facilities or SCIFs), but more commonly were containers that surrounded and became part of computing equipment. This generally did not pose any feasibility challenges given the remote locations and abundance of space for early military computers. For instance, Lawrence Livermore National Laboratory's Jerome A. Russell described general measures taken at the lab by the mid-1960s to prevent "someone outside the fence" picking up the noises:
With the teletype setup, we have a multiprogramming or multiprocessing system, which we call Octopus. We have twisted pair cables carrying the teletype leads to the physicists' and mathematicians' offices. These cables are shielded according to a classified regulation which says you have to have a shield on it of a certain nature ... We don't share the telephone facility with the regular voice-lined systems [48].
Such security, however, became more difficult and less practical as mainframes, and later, minicomputers began to proliferate in number and location to more "open" environments by the late 1960s. Under TEMPEST, techniques were also used to protect information by adding additional radiating equipment to generate "noise" and confuse and suppress the ability of eavesdroppers to comprehend signals. During the 1960s, even though considerable attention was paid to TEMPEST protection, it lacked uniformity, and the various DoD agencies often had to establish criteria on a project-by-project basis within Requests For Proposals (RFPs). To achieve greater uniformity, in 1974, the first Industrial Tempest Program (ITP) was established with three goals: to set commercial standards; to designate criteria for testing equipment that met the set standards; and to certify vendor equipment. This allowed and encouraged vendors to develop "off-the-shelf" TEMPEST computers and other communications equipment that the government could purchase without devoting time, attention, and expense to the issue of electronic emanations. Over the years, the DoD and other areas of the federal government have remained heavily involved with monitoring and scrutinizing the research, development, standard setting, testing and sale of TEMPEST products [47: 255].

TEMPEST, however, only addressed one potential type of computer security vulnerability: electrical emanation. While this form of risk remained and TEMPEST equipment has continued to be important, wired access to machines and transmission of data over networks soon became the fundamental focus and driving force to devote significant time and resources to the potential and real problem of computer security, and to identify and implement appropriate mechanisms or standards during the 1960s and 1970s [6; 18].
20.3 THE EARLY LEADERSHIP OF WILLIS WARE IN COMPUTER SECURITY RESEARCH
In the 1950s, the physical isolation of machines and their data and programs gradually began to wither alongside early digital computer networking technology, forever altering the computer security landscape. During the mid-1950s IBM received a major contract to work in conjunction with the Massachusetts Institute of Technology (MIT) to develop computers for the Semi-Automatic Ground Environment (SAGE), a complex system of radar and networked computers designed to provide early detection against an enemy air attack.² Preceding and influencing the development of the SAGE project, MIT's Project Whirlwind (originally a flight simulator project that evolved into a broader and more influential digital computer development project during the early 1950s) introduced the technology of time-sharing. MIT remained a leader in this area, and the school's Multiple Access Computer project, or Project MAC (started in 1963), focused on the further development and extension of time-sharing technology [45].

² For an impressive analysis of the work of MIT and MITRE on Whirlwind and SAGE see Redmond and Smith [45].

Time-sharing was one of the main areas of research supported by ARPA's Information Processing Techniques Office (IPTO) and typically involved connecting multiple users at simple teletypewriter terminals to a central computer using telephone lines. By 1963 MIT could connect two dozen users at once over a network. The purpose of time-sharing was to more effectively utilize expensive processing and memory resources of computers. A time-sharing system allocated split-second slices of time to individual users in rapid rotation to seek to create the user experience of a dedicated stand-alone, or individual, system. In university settings, the system often had to be set to prevent overwriting, but access to reading and modifying data was not perceived as a significant issue at many of the early university computer centers [10].
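The rotation of processor time that this entails can be sketched in a few lines of code. The fragment below is purely illustrative - the function, user names, and slice counts are hypothetical and correspond to no historical system - but it captures the round-robin idea of granting each connected user a brief slice of time in turn:

    # Illustrative sketch (hypothetical) of round-robin time-sharing: each user
    # receives a short slice of processor time in rapid rotation until done.
    from collections import deque

    def run_time_shared(jobs, slice_units=1):
        """jobs maps a user to units of work remaining; returns the slice order."""
        queue = deque(jobs.items())
        schedule = []
        while queue:
            user, remaining = queue.popleft()
            schedule.append(user)                # grant this user one slice of time
            remaining -= slice_units
            if remaining > 0:
                queue.append((user, remaining))  # unfinished jobs rejoin the rotation
        return schedule

    # Three users sharing one processor; each perceives steady progress.
    print(run_time_shared({"alice": 2, "bob": 3, "carol": 1}))
    # -> ['alice', 'bob', 'carol', 'alice', 'bob', 'bob']

Because the rotation is fast, each user experiences something close to a dedicated machine, which is precisely why the processor and storage shared underneath soon became a security concern.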
A far different situation existed in the defense community. Nevertheless, there were also close ties between the military and a small number of elite universities in science and technology. With SAGE and other projects, there was a very close partnership between the Massachusetts Institute of Technology and a number of areas of DoD research and development. The research and use of time-sharing at leading universities (like MIT), information technology firms (IBM and others), and in the military led to a new environment with regard to computer security. This was recognized by several individuals who became pioneers in the information technology security field. In the mid-to-late 1960s personnel in or associated with the DoD increasingly came to recognize the importance of computer security in light of time-sharing and other computer networking activity.

Willis Ware was one of the early leaders in identifying and trying to better understand the nature of the emerging problem of computer security in an environment defined by the proliferation of time-shared computing systems. Ware had a strong background in computing and had deep connections to DoD research and development. He had worked with John von Neumann at the Institute for Advanced Study, and completed his PhD in electrical engineering at Princeton University in 1951. Von Neumann, one of the principal figures in early computer technology and a fundamental force behind the definition of the principal architecture in early digital computing (the von Neumann architecture), is examined with great skill and insight by leading computer historian William Aspray in John von Neumann and the Origins of Modern Computing. Willis Ware went on to serve in the Mathematics Department, and later as the longtime Chair of the Computer Science Department, of the RAND Corporation. In its first decades, the RAND Corporation, which was formed in Santa Monica by the Air Force in 1946, existed primarily as an Air Force-sponsored research entity. The Air Force gave the RAND Corporation a great amount of leeway in the 1940s, 1950s and 1960s to decide how best to spend research money to address military issues and problems.
Figure 20.3. Computer security pioneer RAND Corporation's Willis Ware speaking at the 1979 National Computer Conference. (Courtesy of the Charles Babbage Institute.)
While the Air Force was also conducting some internal research on computing systems and classified data (including studies at the Air Force Rome Air Development Center), the combination of having a conglomeration of leading scientists and engineers in the computing field, substantial resources, past exposure to and work on military systems, and the interaction of these scientists and engineers with the broader technical community made RAND an ideal setting for recognizing and seeking to address the problem of computer security. As Ware put it:

We would talk amongst ourselves in the hallways, at conferences, and gradually there emerged a concern that we really ought to do something about finding out how to protect computers and all the information in them, because the country had become too dependent on them [57].

In the 1960s the RAND Corporation, and its spin-off that did much of the programming and system integration work for SAGE, the System Development Corporation (SDC), engaged in some of
the first of the so-called "penetration studies" to try to infiltrate time-sharing systems in order to test their vulnerability. Simultaneously, the National Security Agency, a secretive intelligence gathering organization formed by President Harry S. Truman in 1952, became involved with computer security. In one of the rare public appearances by an NSA employee during the 1960s, the agency's Bernard Peters collaborated with Ware in leading a session on computer security at the 1967 Spring Joint Computer Conference in Fort Meade, Maryland. Both Peters and Ware emphasized the sea change in computer security that had occurred with time-sharing. They both saw how current control programs were entirely unable to maintain the military's classification protocol in comparison to that which existed with paper documents. More specifically, time-sharing systems were incapable of meeting the security needs involved with keeping data separate - providing read-and-write access as appropriate to individuals with designated clearances, while denying access to people without proper clearances [58].

The time-sharing system used at SDC in the mid-1960s, for instance, accommodated up to 53 users at one time, and trying to partition the system so that one user did not have access to another's data was an ongoing challenge. When working on classified DoD material SDC had to "kick everyone out" who did not have clearance for the data being processed. New programs often provided the greatest hurdles, as errors could result in jumping from one part of the system to another. An additional issue was the residual information sometimes left behind after overwriting. To address these types of challenges, Peters concentrated on the importance of the monitor or operating system to ensure security, the certification of the monitor, and embedding critical security functions in small amounts of code [33: 43].

Within the defense community the growing recognition of the security issue, brought about by the increased prominence and importance of resource-sharing systems (time-sharing, multiprogramming, etc.), resulted in the Defense Science Board establishing a task force in October 1967 to
examine security problems with such computing systems. Willis Ware chaired this task force, which included members from the RAND Corporation, SDC, a number of major defense contractors, academic computer scientists from colleges and universities, NSA officials, and CIA officers. This group recognized the vast range of hardware and software systems already in existence and sought to provide a "general compilation of techniques and procedures" that could be flexible and broadly useful in protecting confidential military information. At the same time, the group was cognizant of the limitations involved given the variety of systems, the newness of the field, and continuous innovation. While understanding the advantage of broad standards, they stressed the inevitable need to solve many problems on a case-by-case basis. The group deliberated for two years, bringing in a host of additional experts to testify. On February 11, 1970 they completed their classified report, entitled
Security Control for Computer Systems: Report of Defense Science Board Task Force on Computer Security [58].³

³ The report was declassified in 1975 and was republished by the RAND Corporation in 1979.

The Defense Science Board report was by far the most important and thorough study on technical and operational issues regarding secure computing systems of its time period. In general, the report emphasized how technology had surpassed the knowledge and ability to implement appropriate controls to ensure security for confidential military information. The report focused on the distinction between open environments, which were characterized by access to non-cleared users, unprotected consoles, and unprotected communications; and closed environments, which consisted of only cleared users, protected consoles, and protected communications. Specifically, the report contained seven primary conclusions:

• Providing satisfactory security controls in a computer system is in itself a system design problem.
• Contemporary technology can provide a secure system acceptably resistant to external attack, accidental disclosures, internal subversion, and denial of use to legitimate users for a closed environment.
• Contemporary technology cannot provide a secure system in an open environment.
• It is unwise to incorporate classified or sensitive information in a system functioning in an open environment.
• Acceptable procedures and safeguards exist and can be implemented so that a system can function alternatively in a closed and open environment.
• Designers of secure systems are still on the steep part of the learning curve and much insight and operational experience with such systems is needed.
• Substantial improvement (in cost and performance) in security controlling systems can be expected if certain research areas are pursued [58: iv].

In addition to the conclusions, the report identified a fundamental problem with the current policy of the DoD on matters of computer security. Despite the fact that a number of systems were not entirely secure, the task force recognized that the situation might grow worse if all such systems lacking strong security were off limits, as was then stipulated. The group believed that exposure to and use of systems was critical to gaining understanding and allowing the development of design and operational guidelines. The sole action item of the report stated:

The security policy directives presently in effect prohibit the operation of resource-sharing computer systems. This policy must be modified to permit contractors and military centers to acquire and operate such systems. This first step is essential in order that experience and insight with such systems be accumulated, and in order that technical solutions be tried [58].

Ware's task force also identified one "near action item", to establish a technical agent to define procedures and techniques for certifying security-controlling systems, especially computer software. The group believed that the need for such an agent was immediate, but it also posed a challenge in that it required advanced technical expertise in several disciplines. The group stated that the responsibility for finding and overseeing the work of such an
agent could fall upon the NSA, the Defense Intelligence Agency (DIA), Joint Tactical SIGINT Architecture (JTSA), or a multi-service agency. Finally, the Task Force made one last recommendation, as well as provided a prediction for the near future. They recommended that their group be maintained to provide advice to the Directorate of Security Policy, the Technical Agent, and the designers, certifiers, and users of secure systems. They also emphasized that in the future it would be the computer industry that would have to provide systems with appropriate safeguards. Given this, the Task Force believed their report should be circulated relatively broadly to government agencies, industry, research groups and defense contractors [58].
20.4 CONTINUING CHALLENGES WITH COMPUTER SECURITY AT THE DoD
These last points were related to perhaps the greatest early challenge to establishing computer security standards in what amounted to a secret community within and tied to the DoD. The conundrum was that while there was a recognition of the need to initially classify reports and documents useful to understanding the technical side of security standards, at the same time there was a need for an ever broader community, particularly the computer industry, to effectively take advantage of the rapidly growing possibilities of information technology as the Cold War heightened. Greatly simplified, it played into a tradeoff of speed versus security that had long existed within secret scientific communities within and tied to the DoD. There were also significant costs involved. About ten to fifteen years prior to the widespread proliferation of time-sharing systems, in 1953 the University of California Radiation Laboratory reported their security costs at $503,079 and Los Alamos indicated theirs were approximately $383,000. Adding the complexity of time-sharing systems could only exacerbate the challenges and costs involved (into many millions of dollars), but at the same time this was fundamental to advancing possibilities for security upon which it was difficult to assign a monetary value [50: 54].
The Defense Science Board's 1970 report generally sought to establish policies and mechanisms that would allow the incorporation of existing security infrastructure for classified military information into different computer system environments. The seemingly simple policy used with documents, that individuals could not "read up", or see materials above their level of clearance (the four levels were "top secret", "secret", "confidential" and "unclassified" - an individual with a secret clearance, for instance, could read secret, confidential and unclassified documents, but not top secret ones), was not easy to implement in computing systems given demands of users for computing resources and the internal functioning of existing operating systems. Operating systems are large and complex, and it is difficult to know all the conditions and internal workings of these systems. In essence, multilevel computer security, or security adhering to different levels of classification, was a microcosm of a fundamental and daunting question underlying the young field of computer science: "how did systems actually operate and how did this change under different conditions and contexts?"

Ware's committee had correctly identified the tremendous complexity of computer security both within, and potentially outside of, the military and classification environment. The group had offered a number of insights and recommended mechanisms for the future, but they were not able to provide solutions to the many problems that they raised - other than stressing the need for more openness and the involvement of a wider community to figure out remedies to the dilemmas at hand. Following the Ware-led study, a number of projects on computer security were funded by various agencies of the defense community, but most of these were similar to the Ware project in that they were long on problem identification, and short on solutions or workable practices.
20.5 JAMES P. ANDERSON AND THE AIR FORCE'S FOCUS ON COMPUTER SECURITY
In early 1972, after a couple years of relative uncertainty, Major Roger Schell of the Air Force
Electronic Systems Division initiated a new and influential effort. James P. Anderson, a computer consultant out of Fort Washington, Pennsylvania, headed the study. James Anderson's project was more tightly defined than the exploratory and broader Ware study. It was focused solely on the problem as it existed in the Air Force and was completed in a matter of months, in October 1972. Anderson and his group had a dire assessment of the situation, indicating that there was no current system that could operate securely and have multiple or differential levels of access. He stressed that infiltration groups testing systems, or so-called "tiger teams", could usually break through security mechanisms currently in use, and that inefficiencies with regard to the Air Force's computer systems were resulting in costs of approximately $100 million annually. Additionally, on the rare occasions when tiger teams failed to infiltrate systems, this did not necessarily guarantee that adequate security was in place [33].

At the heart of the problem was the fact that resource-sharing systems relied on operating systems to maintain their separation, and users commonly programmed these operating systems to accomplish their work. In essence, users with different security clearances were accessing the same primary storage, and thus, had access to the same data. Additionally, various applications might contain a "trap door" inserted by programmers to secretly gain subsequent access to systems. Systems developed by programmers without clearances could be particularly vulnerable, or at the very least, uncertain with regard to security. Anderson's group, which consisted of members of NSA, Thompson Ramo Wooldridge, Inc. (TRW), the SDC and MITRE (an MIT spin-off non-profit research corporation formed in July 1958), was not optimistic about the future, and emphasized how major and immediate changes were necessary. Chief among the recommendations was to allocate $8 million to a research and development computer program that consisted of discontinuing the integration of security functions within existing operating systems in favor of creating separate "security kernels", or smaller, simpler, peripheral operating system-like pieces of code to interact directly with system hardware. An implementation
of the kernel concept was achieved during the early 1970s using a kernel called HYDRA in association with an operating system for the Carnegie Mellon Multi-Mini-Processor (C.mmp) at Carnegie Mellon University [12]. Most significantly, Anderson's group introduced the notion of a reference monitor to enforce authorized access relationships between subjects (generally perceived as people at the time) and objects (a passive entity that contains or receives information), and sought to implement a "no read up" system regarding classified data. The security kernel was the key tool to implement the reference monitor concept [2].

20.6 BELL AND LAPADULA: MODELING OF COMPUTER SECURITY AND THE FOUNDATION FOR STANDARDS

In the years following the work of James Anderson's group, a number of researchers within and outside of the DoD conducted research and wrote reports on multilevel security (MLS) systems. Of these, none would be more influential than the model developed by David Elliott Bell and Leonard J. LaPadula, or the Bell-LaPadula model. Bell and LaPadula recognized that "no read up" was not enough to provide security for a military system because individuals, processes, or programs (subjects) could (in the case of individuals, inadvertently or intentionally) write confidential information from higher clearances or classifications to lower ones. Broadening the definition of subjects (to include processes and programs) recognized and attempted to address the possible existence of a Trojan horse (a program that in addition to its designated and perceptible operation or purpose also had a surreptitious or concealed function). Daniel Edwards of the NSA first used the term when he served on Anderson's group, and Trojan horse attacks have long been a major concern of those working to achieve more secure computer systems [33: 46]. Under the Bell-LaPadula model, subjects could not read up, nor could they write down - in theory, and possibly practice, they could write what they could not read. This was a major development mathematically, but it presented problems with implementation. It also highlighted the inflexibility between the military needs and the functionality that was necessary for some non-military systems.
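The "no read up, no write down" rules lend themselves to a brief illustration. The sketch below is an editorial simplification rather than Bell and LaPadula's formal statement of the model, using the four DoD classification levels mentioned earlier:

    # Illustrative simplification of the Bell-LaPadula rules (not the formal model):
    # the four DoD classification levels, ordered from lowest to highest.
    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

    def can_read(subject_clearance, object_classification):
        """Simple security property: a subject may not 'read up' above its clearance."""
        return LEVELS[subject_clearance] >= LEVELS[object_classification]

    def can_write(subject_clearance, object_classification):
        """*-property: a subject may not 'write down' to a lower classification."""
        return LEVELS[subject_clearance] <= LEVELS[object_classification]

    # A subject cleared 'secret' may read 'confidential' data but not write to it,
    # and may write to (though not read) 'top secret' objects.
    assert can_read("secret", "confidential") and not can_write("secret", "confidential")
    assert can_write("secret", "top secret") and not can_read("secret", "top secret")

In a working system such checks would have to be mediated by the reference monitor on every access attempt, which leads directly to the implementation and verification difficulties taken up next.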
Technically, in considering the actual incorporation of the insights of the Bell-LaPadula system, the larger problem of computer security, understanding how a kernel, or an overall operating system, served as a correct implementation of a mathematical model, moved to the forefront. Implementing models in working systems was something that was often easier said than done, and was at the heart of software engineering - of understanding how code conforms to a mathematical ideal. In the 1970s "program proof" became a major area of computer science. While studied in a variety of places, SRI (initially standing for Stanford Research Institute) became a leader in researching "program proof" or verification in software. In 1973 SRI began a project called Provably Secure Operating System (PSOS) to design an operating system to meet security requirements and verify security properties. During the decade SRI's Peter Neumann and Richard Feiertag also worked on developing a kernel model as opposed to an entire operating system. Progress was slow at SRI, as were efforts at other centers focusing on computer security such as the University of California at Los Angeles.

Besides the complexity and challenges involved with designing secure kernels and systems, and verifying the security achieved, other issues existed within the ever changing technical and organizational environment. For instance, classifications within the military were not static, but changed over time. This could pose a number of technical and logistical hurdles. Moreover, computer security needs outside the military did not necessarily correspond to the military model.

20.7 MOVING TOWARD A COMPUTER SECURITY ORGANIZATIONAL INFRASTRUCTURE

In 1965 the Brooks Act designated the National Bureau of Standards as the agency in charge of
research and standard setting for federal procurement. During the 1970s the NBS concentrated its efforts on computer security in two fundamental areas: standards work for researching, building, testing, and procuring secure systems; and the development of national standards for cryptography. While cryptography provides tools for authentication, verification, and secure communications, and is thus related to the system security work discussed above, it is at the same time relatively distinct as an area of research and policy. Given this, NBS and other organizations' system security work will be discussed first, and later in the chapter the research, politics, and organizational efforts regarding cryptography standards will be addressed.

In the late 1960s and early 1970s the NBS engaged in a small number of research efforts and organized collaborations to investigate computer security. On some of these efforts NBS collaborated with other organizations, including the Association for Computing Machinery (ACM). In 1973 NBS established a formal research program in developing standards for computer security, and by 1977 it was running invitational workshops of experts from government, industry, and academia in the computer and software engineering field to define problems and attempt to find solutions to ensure secure computing systems. As a result of the workshops NBS sought to define policy for sensitive, but not classified, data stored in and communicated between computing systems. It attempted to establish evaluation criteria and approved lists of computing equipment/software systems for handling classified data. While the NBS was providing some leadership, there was still substantial research conducted by DoD entities that lacked central coordination. During the mid-1970s ARPA and the different branches of the military were still conducting research with minimal coordination between them.

In 1979, as another outgrowth of the workshops, the MITRE Corporation was given the task of developing criteria for evaluating the trust that could be placed in computing systems for handling classified data. Around this time, the Deputy Secretary of Defense gave the Director of the NSA the duty of extending the use of trusted computer systems within the DoD.
With this responsibility, in 1981, the NSA established the DoD Computer Security Center (CSC). This formalized the NSA's computer security standard setting authority - previously it had been important in shaping the policy of the NBS in this area. Several years later the CSC expanded substantially and assumed control of computer security for the entire federal government, and was renamed the National Computer Security Center (NCSC). This organization would formalize the strong work achieved in the 1970s and first years of the 1980s to produce a landmark work on computer system security.
20.8 THE ORANGE BOOK (TCSEC)
During the 1970s, after the Anderson task force had developed some key principles, Bell and LaPadula provided a fundamental computer security model. Work on design and security verification was also underway at SRI, UCLA, and elsewhere. The military recognized a heightened need to have standard computer security criteria for evaluation in order to avoid independent verification and validation for each new system being considered. The NCSC tackled the problem of bringing key elements of the work of the 1970s in the computer security field into a working framework to standardize the evaluation of system security. The work of the NCSC and MITRE resulted in the Department of Defense Trusted Computer System Evaluation Criteria (TCSEC) book, often referred to as "The Orange Book" because of the color of its cover. The Orange Book was by far the most influential publication to date (published originally in August 1983, and republished with minor revisions in 1985) on computer security, and remained the most significant document on the topic for many years to come [15]. Chief among the goals with TCSEC was to standardize government procurement requirements. It provided a structure for manufacturers to evaluate and measure the security of their systems. More specifically, it outlined a mechanism to designate different levels of security as well as a means to test if a particular system met a designated level. TCSEC recognized security as a continuum and not
an absolute, as well as the great value of having a common language to understand different levels of security within computing systems. The book established standards for the degree of trust that could be placed in computer systems by designating the criteria or properties for achieving a particular level of security. The fundamental underlying premise of TCSEC is that it is possible to evaluate a system and accurately place it in a category that is useful to understanding the degree of trust that is built in with regard to being able to process and maintain the security of a range of classified and unclassified data. At its heart, trust refers to the assurance an evaluator can have that a system does what it is supposed to do and does not do what it is not supposed to do.

Within the TCSEC there are four basic categories of security protection (listed in order from the strongest protection to the weakest):

A - Verified Protection
B - Mandatory Protection
C - Discretionary Protection
D - Minimal Security Protection

Each of these categories then has two or more subdivisions designated by numbers, with higher numbers representing greater security within a category than lower numbers. For instance, C3 is a level or rating of greater security than C2, which in turn, is greater than C1. Any level of B, of course, offers a higher security designation than any of the subcategories of level C (C1, C2 and C3) [15].

TCSEC refers to the totality of protection mechanisms of a computing system as the Trusted Computing Base (TCB). It recognizes that not all parts of the operating system need to be trusted, but rather that the computer architecture, assurance mechanisms, and elements of the TCB must be understood well enough to know that the base is protected against intentional and unintentional interference and tampering. TCSEC emphasizes that manufacturers should make the TCB as simple as possible for the functions it is to perform. For higher levels, the security elements should be centralized in a security kernel that can then be carefully scrutinized to assure the TCB offers superior security protection against tampering. Among
other operations, this kernel implements the reference monitor to enforce authorized access relationships between subjects and objects in the system.

The different categories and subcategories contain a host of requirements in many different areas: discretionary access requirements; identification and verification of user requirements; testing requirements; auditing requirements; system architecture requirements; documentation requirements; and design specification and verification requirements. For summary purposes some basic criteria for the different categories and subcategories, from least secure to most, follow.

Minimal security systems, or D systems, offer little or no security features. It is a category for all systems that do not fit into any higher classifications. The types of systems (such as personal computers) that would fit in this category are not submitted for evaluation. Thus, it is a category that contains no systems, as it would offer no value to manufacturers to have such a designation [47: 155-159].

C1 systems provide very minimal security functions. The level of security of such systems is only appropriate for cooperating users who are all at the same security level. The main requirements protect against users making accidental mistakes that might damage the system (the system is protected against user programs). While there is discretionary protection of files, passwords, and designated access, controls are far less than higher systems. The designation was of limited use in the 1980s and few systems have ever been designated in this subcategory. C2 systems, on the other hand, require accounting and tracking of individual users on the systems, more involved discretionary controls (designated read and write access), object reuse (to assure data left on a system does not become available to a subsequent - unauthorized - user), and far more extensive testing and documentation requirements than C1 systems. C2 systems have been far more common than C1 and included Digital Equipment Corporation's VAX/VMS 4.3 and Control Data Corporation's Network Operating System [11; 47: 155-159].

There is a significant leap between the requirements of C and B1 systems. All B1 subcategories
and higher (B2, B3 and A1) require "Mandatory Access Control", which stipulates that all files (and other significant objects) are given sensitivity labels to enforce the system's security. Protection of files is not discretionary, but instead is designated through a clearly defined classification and access system. The architecture of B1 and higher systems maintains concrete distinctions between the secure parts and non-secure parts of the systems. IBM's MVS/ESA and Unisys' OS 1100 were classified as B1 systems. B2 primarily just extended rather than added to the security features of B1, but it did offer "least privilege", which limits users' and programs' access to the shortest necessary time to complete a function. Testing requirements are significantly higher than B1. Honeywell Information Systems' Multics system is one of the few to achieve a B2 classification. Likewise, B3 systems are rare and require even tighter structure to the TCB to prevent penetration of the system. Honeywell Federal Systems' XTS-200 had this classification.

A1 systems are the highest rated systems in the TCSEC classification, but the publication does discuss the possibility of higher-level systems that have even more stringent designations for architecture, testing, and formal verification (A2). A1 systems go beyond B3 systems in requirements for trusted distribution (in shipping the system to the customer). They require formal proof that the actual design of a system completely matches the specifications or mathematical model. During the 1980s just two systems received the A1 designation: Honeywell Information Systems' Secure Communications Processor (SCOMP) and Boeing Aerospace's Secure Network Server (SNS) Multilevel Secure (MLS) Local Area Network (LAN) system [15].

At the various levels, each of these systems achieved its authentication through extensive testing known as the Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP). This involves testing and evaluation of both the technical aspects of systems and the mechanisms and practices of their operators. Final declaration is typically made by a senior operational commander with specified authority to approve a computer system given the potential risks.
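The ordering of these ratings lends itself to a short illustration. The sketch below is hypothetical - the comparison logic is an editorial simplification, and the class list simply follows the ordering described in the text - showing how a procurement requirement such as "at least B1" could be checked against an evaluated rating:

    # Hypothetical sketch of the TCSEC rating order described above, from the
    # weakest division (D) to the strongest (A1); C3 follows the chapter's
    # ordering example for division C.
    TCSEC_ORDER = ["D", "C1", "C2", "C3", "B1", "B2", "B3", "A1"]

    def meets_requirement(evaluated_class, required_class):
        """True if an evaluated rating is at least as strong as the requirement."""
        return TCSEC_ORDER.index(evaluated_class) >= TCSEC_ORDER.index(required_class)

    # A system evaluated at B2 satisfies a B1 procurement requirement;
    # a C2 system does not.
    assert meets_requirement("B2", "B1")
    assert not meets_requirement("C2", "B1")

In practice, of course, the label itself is only the end product of the extensive testing, documentation, and accreditation requirements summarized above.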
Prior to TCSEC, there was no common and efficient language for different people and organizations to communicate about the level of security of computer systems. The laborious establishment of detailed specifications of security for each system was time-consuming, expensive, and inefficient. Even more problematic, however, was the association of different meanings or criteria for what specifications meant, how they were measured, and the degree to which they were verified. The aim of TCSEC was to ease this process, to make it more efficient, accurate, and reliable by providing clear levels and criteria for measurement that would in turn give manufacturers guidance in how to build their trusted systems for classified or sensitive applications. At the customer end, which was typically a government agency or entity, and often one within the DoD, the acquisition process could be quicker, easier and more precise.
20.9 COMMON CRITERIA AND THE GLOBALIZATION OF COMPUTER SYSTEM SECURITY STANDARDS
While substantial and influential work on computer security was achieved during the 1970s and 1980s in the United States by the DoD, it was not alone in this field: business and the academy, both in the US and internationally, were becoming increasingly active in developing and influencing standards. The networked world, as the ARPANET and other networks were transformed into the Internet by the early 1980s, was fundamentally changing the environment and the overarching models for standards. Of the critiques of the relevance of the DoD-spawned Bell-LaPadula model and TCSEC to other environments and aims, none was stronger than a talk given by David Clark, a senior research scientist at MIT's Laboratory for Computer Science (LCS), and David Wilson, the director of information security services for Ernst & Whinney, at the 1987 Institute for Electrical and Electronics Engineers (IEEE) symposium on security and privacy held in Oakland, California. In this paper these scientists argued that most discussions of computer security have focused on a military-oriented model that privileges disclosure as the primary issue even
though the "core of data processing [is] concerned with business operation and control of assets ... of data integrity". They claimed the military "lattice" model was simply insufficient to control disclosure and provide integrity in commercial settings [11: 184].
Internationally, a considerable amount of research and standard setting was achieved in European countries, Canada, and other nations. In Europe, during the 1980s, standards were established independently by the United Kingdom's Communications-Electronics Security Group (CESG), Germany's Bundesamt für Sicherheit in der Informationstechnik (BSI), France's Central Service for Information System Security (SCSSI) and The Netherlands' National Communications Security Agency (NLNCSA). These nations pioneered in creating internationally recognized standards in a joint effort called Information Technology Security Evaluation Criteria (ITSEC).

Although the standards that developed in Europe were quite similar to those outlined in the United States in TCSEC, novel research and projects were undertaken in applying security standards to new types of systems. One such instance was the UK Ministry of Defense's project to develop a unique microprocessor called VIPER (Verifiable Integrated Processor for Enhanced Reliability) at the Royal Signals and Radar Establishment (RSRE) in 1987. Back in 1986 the UK Cabinet Office's Advisory Council for Applied Research and Development suggested that mathematical proof should become standard for any system in which a system failure could result in ten or more deaths. VIPER was an outgrowth of this, but whether the constructed 32-bit system achieved mathematical proof was controversial. The British firm Charter Technologies Ltd. was a licensee of the RSRE's VIPER and took legal action against the Ministry of Defense arguing that the claim of proof was a misrepresentation [34: 159-164].

Noted sociologist and historian of technology Donald MacKenzie has emphasized how the VIPER episode highlights that using the term "proof" without clear definition and precision can be dangerous. More broadly, he asserts that the reputation of mathematics for precision and certainty has been rhetorically appropriated by those advocating moving from "an empirical to a deductive approach to computer system correctness" [34: 183]. To address this problem, MacKenzie calls for "a sociology of proof" to better understand the notion, understanding, and construction of this term in its mathematical and computer systems contexts [34: 159-164]. This challenge becomes even greater as the need for true international standards becomes ever more imperative in our increasingly global economy.

Recognizing the increasing benefits for efficiency in research, standard setting, and particularly procurement processes that are possible with broader international cooperation, an attempt was made in the first half of the 1990s to bring together a number of allied nations to have a common set of computer security standards. This resulted in a global set of standards that superseded the Orange Book or TCSEC in the United States; ITSEC, for member European nations; and Canadian Trusted Computer Product Evaluation Criteria (CTCPEC).

Various books with different colors, commonly referred to collectively as the Rainbow series, modified and extended TCSEC for greater specialization and sought to address security needs within the changing environment of computing, software, and networking technology. Like TCSEC, the later books aimed to provide a standard for manufacturers regarding security features and assurance levels in order to provide widely available systems meeting certain "trust" requirements for sensitive applications. Of particular significance, the so-called Red Book was issued by the National Computer Security Center in 1987, and provided interpretations of TCSEC security features, assurance requirements, and rating structure to networks of computers, including local area networks (LANs), wide area networks (WANs) and internetwork systems. It also documented the different security services (such as communications integrity, transmission security, and denial of service) with regard to LANs and WANs.

This new set of international computer security standards was developed through a collaboration
of the United States, Great Britain, France, Germany, The Netherlands and Canada in conjunction with the International Standards Organization (ISO). The system was developed and refined in the mid-1990s and the first official set of standards was completed in 1996. These standards were named the Common Criteria (CC), but upon their minor revision to create CC version 2.0 in 1998, they also became known as ISO 15408. The Common Criteria defines seven security levels, called Evaluation Assurance Levels, which are labeled from EAL1 to EAL7. Higher-numbered levels represent systems that have successfully undergone more stringent and extensive evaluations and offer greater trust. These levels are closely related to those set in ITSEC and TCSEC. For instance, EAL3 corresponds very closely to TCSEC level C2. Common Criteria certification is a rigorous process, especially at the higher-numbered levels, and is achieved through testing by a third-party laboratory that has been accredited by the National Voluntary Laboratory Accreditation Program (NVLAP). The National Information Assurance Partnership (NIAP) was created to oversee use of the Common Criteria as the standard for evaluation in the United States, and similar bodies exist in other member nations. In addition to the originators of the Common Criteria, other countries joined in the late 1990s under a sub-treaty-level Common Criteria Mutual Recognition Agreement (MRA), which was established in 1998 by the United States, France, Germany and Canada. Australia, Great Britain and New Zealand joined in 1999, and the following year Norway, Spain, Italy, Greece, The Netherlands, Finland and Israel signed on and became members. Common recognition of standards between all of these countries exists at the lower levels (EAL1 to EAL4), but typically countries will only accept their own certification for levels EAL5 to EAL7. While TCSEC and the subsequent global standards of the Common Criteria have been of fundamental importance to computer security systems professionals in government and industry, they are entirely outside of most individuals' understanding of computer security. Computer security remained a
relatively obscure topic to most people before the advent of personal computers and the widespread dissemination of these machines in people's offices and homes. Some became interested in computer security as a facet of their interest in ensuring the protection of personal privacy. More individuals were exposed to issues of computer security through focusing events surrounding current and future threats and new forms of malware in the age of ubiquitous computer networking with the Internet and World Wide Web.
20.10 THE ORIGIN AND EARLY DEVELOPMENTS OF CRYPTOGRAPHIC RESEARCH IN THE ACADEMIC COMMUNITY
Cryptography has been researched and used in the military, in diplomacy, and in other realms of society for many centuries. This included codes (substitution or exchange of words and phrases) as well as more complex cryptography using cipher text (substitution of individual letters) to conceal embedded information from all but the intended recipient. The first known military use of a cipher text is described in Julius Caesar's Gallic Wars. Caesar discussed how he sent Cicero a message replacing Roman letters with Greek ones. Caesar also developed the so-called "Caesar shift cipher", which shifted the alphabet forward three places in the cipher text [53: 9-11]. Between the ninth and thirteenth centuries, Arab scholars pioneered new types of cryptographic systems, as well as cryptanalysis, or techniques to unearth patterns to decipher secret codes, such as the study of the frequency and patterns of the cipher text. In one of the most famous historical uses of cryptography, Mary, Queen of Scots used cipher text to communicate with conspirators to assassinate Queen Elizabeth (the Babington Plot) and escape from her imprisonment. Thomas Phelippes, a cryptographer gifted in the art of frequency analysis, deciphered the messages between Mary, Queen of Scots and her conspirators and exposed the plan to Queen Elizabeth, and Mary was beheaded [53: 42-44].
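A minimal sketch of the shift cipher just described may help; Python is used here purely for illustration, and the three-place shift follows the classical account rather than any particular historical text. Note that spaces and letter frequencies pass through such a substitution unchanged, which is exactly the regularity that frequency analysis of the kind Phelippes practiced exploits.

```python
import string

def caesar_shift(text: str, shift: int = 3) -> str:
    """Encrypt with a Caesar shift cipher: each letter moves `shift` places forward."""
    alphabet = string.ascii_uppercase
    table = str.maketrans(alphabet, alphabet[shift:] + alphabet[:shift])
    return text.upper().translate(table)

def caesar_unshift(text: str, shift: int = 3) -> str:
    """Decrypt by shifting the alphabet back the same number of places."""
    return caesar_shift(text, -shift % 26)

ciphertext = caesar_shift("ATTACK AT DAWN")
print(ciphertext)                  # DWWDFN DW GDZQ
print(caesar_unshift(ciphertext))  # ATTACK AT DAWN
```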
Each distinct cipher has a corresponding algorithm, or the general specification of the method of encryption, and a key, or the detailed rules for a particular encryption. Over the years these algorithms and keys have become increasingly sophisticated, to stay ahead of the many corresponding advances in the art of cryptanalysis. The highest level of this work has been conducted by secretive government agencies such as GCHQ in Great Britain and the NSA in the United States. Top scholars in the area worked in anonymity developing systems and analytical techniques. In addition to this secret research, several pioneering scholars outside the secret community tackled a fundamental problem with the practical use of cryptography. In doing so, they made a critical discovery and changed the face of the history of cryptography. Whitfield Diffie, a recent graduate of MIT in the mid-1960s, began working at the school's nonprofit spin-off corporation MITRE to avoid the draft. He worked under mathematician Roland Silver at MITRE and also had the opportunity to work at MIT's Artificial Intelligence (AI) Laboratory as a resident guest of esteemed AI expert Marvin Minsky. After his MITRE funding ran out, he moved to Stanford to work in the Stanford AI Lab with John McCarthy. During this time, Diffie became increasingly interested in the possibility of applying mathematics and cryptography to achieve private communications. Diffie was enthralled by David Kahn's book The Codebreakers, a monumental and original work that detailed the development and use of secret communications from ancient times to the present and brought the NSA to public attention, an agency that prior to Kahn's book was so secretive and little known that some Washington insiders jokingly claimed the acronym stood for "No Such Agency" [32: 14]. With the influence of Kahn's book, his devouring anything he could find to read on cryptography, his interaction with various leading computer scientists, and his cross-country travels to research the topic and meet with people, Diffie's interest in this area escalated. At the same time, his suspiciousness of the NSA also grew. On one of his road trips to do research and seek out people knowledgeable in cryptography, he visited and gave a talk at IBM's Thomas J. Watson Laboratory, where Alan Konheim suggested he look up
someone interested in cryptography on the Electrical Engineering faculty at Stanford University: Martin Hellman. Diffie and Hellman met in 1975 and were immediately energized by their common interest in cryptography, a field in which Hellman had a deep interest but which frustrated him as a result of most of the high-level research in the field taking place behind the so-called triple fence of the NSA compound. One practical problem with cryptography at this time was that of key distribution. To use cryptography required a key distribution center: individuals could not communicate prior to exchanging keys or having this function performed by a trusted centralized authority. There were two primary issues with cryptography, privacy and authentication: making certain communications were not intercepted and deciphered by a third party, and knowing that the person who claimed to send a message actually had sent a particular message. Diffie and Hellman, along with a third collaborator who came from Berkeley to join them at Stanford, Ralph Merkle, developed public key cryptography with the concept of a trap door one-way function (a function easily solved in one direction but not the other). Diffie and Hellman reported on their ideas in their seminal article, "New Directions in Cryptography", published in November 1976 in the IEEE Transactions on Information Theory. The article began:

We stand today on the brink of a revolution in cryptography. The development of cheap digital hardware has freed it from the design limitations of mechanical computing and brought the cost of high grade cryptographic devices down to where they can be used in such commercial applications as remote cash dispensers and computer terminals. In turn, such applications create a need for new types of cryptographic systems which minimize the necessity of secure key distribution channels and supply the equivalent of a written signature... Public key distribution systems offer a different approach to eliminating the need for a secure distribution channel [16: 644].
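The key agreement scheme that grew out of this work can be illustrated with textbook-sized numbers. The sketch below is a minimal illustration in Python, not code from any of the actors described here; the tiny parameters (p = 23, g = 5) are assumptions chosen only to make the arithmetic visible, whereas real parameters are hundreds of digits long. Each party publishes only g raised to a secret exponent; recovering that exponent from the published value (the discrete logarithm problem) is what makes the function hard to reverse.

```python
import secrets

# Toy Diffie-Hellman key agreement; parameters are illustrative only.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # first party's private value
b = secrets.randbelow(p - 2) + 1   # second party's private value

A = pow(g, a, p)                   # published value g^a mod p
B = pow(g, b, p)                   # published value g^b mod p

shared_1 = pow(B, a, p)            # (g^b)^a mod p
shared_2 = pow(A, b, p)            # (g^a)^b mod p

assert shared_1 == shared_2        # both sides now hold the same secret key
print(shared_1)
```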
Hellman and Diffie's public key system rested on the notion that the traditional single symmetric secret key would be split so there would be both a public key and a private key, and that these two keys would be mathematically linked (a trap door
one-way function). The public key would be associated with a particular individual and would be readily known and accessible: it could be published in a directory, just like someone's name and phone number in the phone book. The private key, however, would be kept secret. An individual would send a message to someone by encrypting it with the recipient's public key and their own private key (creating a trap door). This idea of splitting the key ran completely counter to accepted notions of cryptography, and therein lay the brilliance of Hellman and Diffie's idea. In addition to Diffie and Hellman's great theoretical accomplishment with key distribution and privacy, the paper also addressed digital signatures for reliable authentication. This would have a profound impact down the road as the work of Diffie and Hellman was complemented and expanded upon by a group of researchers at MIT. First, however, it is important to address some background to the emergence of government encryption standards and the conflict that was developing between the government, particularly the NSA, and the small group of university-based scientists working on cryptography, led by Hellman and Diffie.
20.11
EARLY ATTENTION TO AND INVESTING IN COMPUTER SECURITY SYSTEMS DEVELOPMENT IN THE PRIVATE SECTOR
The computer security environment of the first half of the 1970s was beginning to change. While most investment in the field, including task forces and tiger teams, was funded by the DoD, one private sector company was becoming increasingly focused on the issue: IBM. Between the last half of the 1950s and the first half of the 1960s IBM had built SAGE computers for the Air Force; the so-called Defense Calculator, commercialized as the IBM 701, for defense and industry; and Stretch, an unprecedented and ambitious computer for Los Alamos National Laboratory. The firm had also invested heavily in developing a backward-compatible series of computers and software with the System/360 series and OS/360 operating system
that was announced in 1964. IBM's past experience drove home the importance of future military contracts, but leaders at the company also came to recognize that computer security would be of fundamental importance to all of their different markets in the future.4 This led the company to commit to spending $40 million on computer security issues over a five-year period. By the late 1960s IBM, and its leader, Thomas Watson, Jr., believed it was critical for the computer industry to take the lead in addressing problems regarding computer security and privacy. In a 1968 address to fellow industrialists, Watson, Jr. stated:

I believe we in industry must continue to improve existing technological safeguards which limit access to information stored in electronic systems; and we must devise new ones as the needs arise ... Moreover, I believe we in industry must offer to share every bit of specialized knowledge we have with the users of our machines - the men who set their purposes - in determination to help secure progress and privacy, both together [26].

Despite some existing knowledge concerning the commitment to and the expenditures by IBM on computer security work, the details of the firm's early activities in this field are still somewhat sketchy. The company produced literature for its customers outlining computer security broadly and indicating the wide scope of the firm's computer security agenda [26]. At the same time, much of the specific effort was devoted to developing and enhancing a single IBM product, the Resource Security System (RSS), to address operating system design. Generally speaking, RSS was created to improve the existing level of security in resource-sharing systems [2: 5].

4 Early time-sharing and computer networking work at IBM was funded by the government: IBM's contract for SAGE computers, funded by the Air Force, provided the computer giant with hundreds of millions of dollars in the second half of the 1950s.
20.12 CRYPTOGRAPHIC RESEARCH AND THE EARLY DIGITAL COMPUTER INDUSTRY
IBM's computer security work, however, was not limited to focusing just on the design and workings
of operating systems such as RSS, and the underlying issues involving access to memory. Cryptography was also a fundamental concern and area of IBM's research. At the NSA, this effort was continuous from its founding in 1952, and the work of this secretive agency was far more advanced than anything existing outside of it. For centuries ever more advanced ciphers and deciphering methods had been developed to try to ensure secret communications, while simultaneously uncovering the secret codes used by enemies. The work of the Allies in breaking the Enigma code, which used a sophisticated mechanical rotor machine and changed ciphers daily, and other encryption tools and techniques in the US, Europe, and throughout the world, have an extensive and growing scholarly and popular literature.5 In the late 1960s, IBM focused attention on cryptographic work, hiring Horst Feistel to lead a group that would come up with a data encryption system that was named Demon, and later renamed Lucifer. In later iterations, Lucifer evolved into what was established as the Data Encryption Standard (DES) in 1976 by the National Bureau of Standards. In 1973 Feistel published some details on Lucifer in Scientific American, the most detailed work to that date openly published on current cryptographic research. Feistel had come up with, or at least wrote about, substitution boxes (S-boxes). S-boxes were algorithms for nonlinear equations specifying how letters of cipher code could be shifted. Some believed that the NSA had provided this important development to Feistel in order to embed a controlled level of cryptography into the commercial sector [32: 42-43].6 The NSA, as critics in the academic cryptography community asserted at the time and most people now accept, had a strong hand in influencing the NBS on the structure and specifications of DES (reducing the key size to only 56 bits).7 Diffie, Hellman, and other academic critics of DES believed that the standard was constructed to enable
the secret agency to continue to have the capacity to break it (through a brute force attack), or that DES might even have included a trap door, offering easy decryption by the NSA (the NSA would provide tamper-proof DES chips to approved companies). The critics and some negative press influenced some corporations, such as Citibank, to steer clear of DES and develop their own proprietary standards. With DES, IBM management thought cryptography, a subset of and area contributing to the larger field of computer and communications security, had essentially been solved. Horst Feistel and the cryptographic group at IBM were somewhat let down by the fact that IBM's top management thought the work in their area was done, and that the company could focus all its computer security research resources on the larger problem of operating system security [23]. The tension between Hellman, Diffie, and a growing group of academic researchers and the NSA and federal government would grow more intense and involve additional individuals as researchers at MIT extended the developments of Diffie, Hellman, and Merkle, and sought to commercialize encryption technology. Conflicts would also erupt over intellectual property between the leading cryptographic researchers at Stanford University and those at MIT.

5 A sampling of important and influential works includes: Kahn [28], Denning [13], Diffie and Landau [17], de Leeuw [30], Hodges [24], Singh [53], Levy [32].
6 A colleague of Feistel, Alan Konheim, believed that the idea for S-boxes was given to Feistel by the NSA.
7 DES was formally adopted by NBS in July 1977.
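To see why critics regarded a 56-bit key as within reach of a brute force attack, a rough back-of-the-envelope comparison helps. The sketch below is illustrative only; the trial rate is a hypothetical assumption chosen to show orders of magnitude, not a figure from the chapter.

```python
# Compare the 56-bit DES keyspace with a modern 128-bit keyspace.
keyspace_56 = 2 ** 56
keyspace_128 = 2 ** 128

trials_per_second = 10 ** 9          # assumed attacker rate (hypothetical)
seconds_per_year = 60 * 60 * 24 * 365

years_56 = keyspace_56 / (trials_per_second * seconds_per_year)
years_128 = keyspace_128 / (trials_per_second * seconds_per_year)

print(f"56-bit keys:  {keyspace_56:.2e} possibilities, ~{years_56:.1f} years to exhaust")
print(f"128-bit keys: {keyspace_128:.2e} possibilities, ~{years_128:.1e} years to exhaust")
```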
20.13
RSA DATA SECURITY: PIONEERING A SECURITY SOFTWARE INDUSTRY
Diffie, Hellman and Merkle had created an outline or general explanation for using one-way functions as the basis for secret communication, but had not developed a workable system. This was accomplished on the heels of the Stanford researchers by a group of three scholars at MIT, Ronald Rivest, Adi Shamir and Leonard Adleman, who collectively would be awarded the Turing Award for their discovery. Rivest was intrigued by Diffie and Hellman's "New Directions" article and became increasingly focused on trying to provide the mathematical formulas that would take public key cryptography from theoretical construct to workable
algorithms for a public key system. Rivest's system rested on a one-way function created by taking the product of two very large randomly chosen prime numbers. The public key was the product of the prime numbers. The decryption key, however, could only be uncovered by knowing the primes, a factoring problem that was nearly impossible to do given the size of the randomly chosen prime numbers. While Rivest came up with the algorithm, he had interacted closely with Shamir and Adleman and insisted that all three names appear on the MIT Laboratory for Computer Science Technical Memo where the findings were first published. Like Diffie and Hellman's article, it contained a prescient opening:

The era of electronic mail may soon be upon us; we must insure that two important properties of the current "paper mail" system are preserved [privacy and signatures] [46].

Unlike Diffie and Hellman's conceptualization, the so-called Rivest, Shamir and Adleman Algorithm (RSA Algorithm) was something more concrete that could be tested to see if it stood up to scrutiny. Rivest worked with famed mathematician Martin Gardner to outline the algorithm in Gardner's column in Scientific American. Appearing in the August 1977 issue, the column offered a cash prize ($100) to anyone who could prove the system did not work.
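The construction just described can be sketched with deliberately tiny primes so the arithmetic is visible. The numbers below are a standard textbook illustration, not values from the MIT memo; the security of the real system comes from primes hundreds of digits long, which make factoring the public modulus infeasible.

```python
# Toy RSA: the public key is built from the product of two primes; recovering
# the private key without knowing the primes amounts to factoring n.
p, q = 61, 53
n = p * q                    # 3233: public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # 2753: private exponent (modular inverse of e)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)

assert recovered == message
print(n, ciphertext, recovered)    # 3233 2790 65
```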
By the mid-1970s an increasing number of large corporations in petroleum, banking, and other industries were beginning to encrypt their electronic correspondence [27: 23]. Despite the fact that Rivest, Shamir, and Adleman had no real business experience, they decided to form a company to commercialize the technology of the RSA Algorithm. They received some money from an outside investor (physician and businessman Jack Kelly) and purchased rights to the invention from MIT for $150,000. The company, RSA Data Security, was formed in the early 1980s. It received several other modest investments, but did not have a true product to sell, was burning through cash at a rapid rate, and was in deep financial trouble. In 1985, the company's marketer Bart O'Brien called upon an old friend he had worked with at Paradyne, Jim Bidzos, to come on board, originally just for some management consulting, but soon thereafter, to lead the firm [5].

Bidzos, a talented businessman who had done programming and networking systems work at IBM, and had experience in running his own international IT consulting venture, brought much needed management and discipline to the organization. During the mid-1980s things remained tough for RSA, and after a short while Bidzos was essentially the entire company, as the others departed. The firm had little money, and the founders, Rivest, Shamir and Adleman, were not active in the enterprise. Bidzos recognized that the great opportunity for RSA was to license its encryption technology to be used in the products of other firms. The company rounded the corner to financial security and substantial profitability later in the decade when he orchestrated a deal with Lotus Development to include RSA's algorithm in Lotus Notes, and subsequent sales to Microsoft and Apple Computer [5].

Bidzos also initiated the annual RSA Data Security Conference. In the late 1980s, this was a relatively small event, but Bidzos insightfully made it a broad and inclusive forum for issues of concern regarding computer security. By this time the battle between the NSA and government and the academic cryptographers at universities had become more intense. The government used export restrictions (laws prohibiting the export of weapons or technical know-how harmful to national security) to try both to restrict the academic freedom of the academics and to restrict encryption in products. The NSA wanted to be in sole control of the type and strength of cryptography available to individuals and organizations both in the United States and abroad. With an increasing amount of revenue coming from foreign markets, this created a major disincentive to produce products with embedded encryption technology. By bringing different groups together to air their opinions and differences, including individuals from the government, the RSA Data Security Conference grew to be the premier event each year in the field of computer security. Critics of the government's effort to control encryption standards emphasized the right to privacy,
as well as the importance of having access to foreign markets. Concerns mounted that the government, and especially the NSA, was fighting for standards that were not strong enough to withstand the massive brute force attacks (using extensive computing power to break encryption keys) possible given the computer infrastructure of the NSA, or that the standards included a backdoor that gave the NSA ready access. Bidzos wanted to establish RSA Data Security's technology as the standard, and the widespread attention and visibility of the RSA Data Security Conference helped achieve this. RSA Data Security secured major deals with Netscape and other firms by the mid-1990s as the true value of RSA's technology became apparent in the era of ubiquitous computer networking ushered in by the World Wide Web. While RSA had become successful as a business with the Lotus and Apple contracts, and particularly the Netscape and other deals of the Web era, the greater opportunity for RSA lay with digital signatures and authentication. As Rivest, Adleman and Shamir had vaguely alluded to back in the late 1970s, signatures would be fundamental as computer networking grew. A critical issue with having the digital signature standard, and later domain name standards, rested with the broader IT industry's acceptance of the standard. Bidzos insightfully recognized that this would best be accomplished in collaboration outside the confines of RSA Data Security. Along with a group of partner corporations, including Microsoft, Netscape, America Online (AOL), IBM, and Visa, Bidzos and RSA Data Security spun off and formed VeriSign in 1995 as an independent corporation. This company grew much larger than RSA Data Security, and Bidzos served as Chair of the Board in its early years (and is currently Vice Chair). In 1996 Bedford, Massachusetts-based Security Dynamics Technology, Inc., a firm specializing in authentication that also began in the early 1980s, acquired RSA Data Security for approximately $300 million. The following year it also purchased DynaSoft, a leader in access control solutions. Shortly thereafter Security Dynamics held roughly two-thirds of both the encryption and authentication markets, and sold the popular product line of SecurID software. While the security
products market was without industry-wide standards, the consolidation of the late 1990s and the substantial market share of Security Dynamics led to the emergence of some product standards, such as the SecurID line. These were used by many firms and individuals as electronic commerce began to grow rapidly, eventually reaching more than 15 million users. As SecurID took off and the company also developed the SecurSight enterprise security suite (providing enterprise-wide authentication solutions) in the late 1990s, the company recognized the strategic importance of the RSA brand and took advantage of it by changing its name to RSA Security, Inc. in 1999. By this time the security software industry had grown to include other firms operating on fronts outside of the encryption and authentication areas, most importantly virus detection software. Symantec, a firm that was formed by Dr. Gary Hendrix and a few of his colleagues from the Stanford Research Institute (SRI), was involved in many areas of programming during the 1980s. The firm moved into the security software field after going public in 1990 and merging with Peter Norton Computing, a firm that, behind its founder and leader Peter Norton, had created powerful virus detection software. While Symantec continued an acquisition spree throughout the decade, and had a portfolio of software applications of varying types, its Norton brand and security software became the centerpiece of the corporation. Symantec emerged as the leading security software company in the world in terms of revenue and net profits. Unlike RSA Data Security, Symantec was developing products in part for individual personal computer users rather than just for companies and organizations. The security area has also continued to grow as computer, software and service giants, including IBM and Microsoft, have extended their presence in the field. Microsoft founder Bill Gates gave the keynote address at the 2004 RSA Data Security Conference, where he emphasized that the greatest portion of Microsoft's $6 billion annual research budget is now targeted at the software security area [21]. The security software industry emerged because RSA Data Security, VeriSign, Symantec, and later
other firms, were developing products outside of the context of government that increasingly came to be standards. They were superior to anything that the government was developing, or at the very least, better than anything it was releasing outside the protected confines of the NSA. These products were also developed specifically for commerce rather than military purposes. Needs were different, and information integrity played a much larger role in this arena than in the DoD, where the paradigm was more closely aligned with protecting classified information and integrating computing technology with existing protocols of command and control, disclosure, and the overall infrastructure for classified military information. In the commercial world there was also growing corporate collaboration between computer security firms and many of the largest firms in the world to address computer security and data integrity concerns. Many companies in the Fortune 100 belong to the International Information Integrity Institute (I4), an organization that confidentially shares information on best practices and emerging technologies and techniques in computer security, to better facilitate communication and commerce in a global economy where the World Wide Web has made effective electronic commerce infrastructure, policies, and performance fundamental to surviving and thriving.
20.14 COMPUTER SECURITY AND SOCIETY
Computer security and computer privacy are unique and separate concepts, but they are deeply interrelated and have been in a long and continuing historical dialog. While governments have been at the forefront of defining computer security standards, individual citizens have viewed the growth of massive databases and networking as potential and real threats of unauthorized use of personal information. Computer privacy emerged as a major public issue for the first time in the second half of the 1960s and led some individuals to give initial thought and attention to the notion of computer security long before networked computers became commonplace in offices and homes, but after they had
become fixtures within many government agencies and corporations.8

8 Privacy is both a cultural concept and a legal right that has long been debated in the United States regarding its constitutional foundation. While it is not formally delineated in the Constitution, many have argued it is an implied constitutional right. The "right to privacy", first articulated in a Harvard Law Review article in 1890 in response to the advent of handheld cameras and an emerging paparazzi photographing the rich and famous, has evolved to become the basis for fundamental and controversial legal rights in the United States regarding contraception (Griswold v. Connecticut (1965)) and abortion rights (Roe v. Wade (1973)). Alongside federal rulings and laws on a small number of privacy-related issues, much legislation on privacy has been passed by individual states. Computer security, on the other hand, is a broad term referring to the design of systems, and procedures for using them, and additionally software, hardware, or other technologies or practices to safeguard the integrity of, and control access to, information on, or transmitted over networks by, computers.

The relationship between privacy and security, like that between privacy and technology more generally, has little meaning outside of defined contexts. Privacy is commonly framed as a right, which is defended fiercely by individuals and businesses. Invasions of privacy create a perceived or real need by the state to protect the greater good of society and to better ensure the security and well-being of its people. Computer databanks can potentially reduce individual privacy, while computer encryption technology often facilitates private communications. Such private communications could have positive or negative impacts depending on the context. Private communications can protect individuals and businesses, and allow law enforcement officials to communicate privately, but they can also aid criminals or terrorists in planning crimes or attacks. Some companies want unregulated use of encryption technology to ensure privacy of communication and the protection of trade secrets, yet buy, sell, and trade databases of information on consumers to better apply market segmentation strategies. The individuals or groups and situational factors dictate perspectives. Furthermore, such perspectives are not static, and commonly are influenced by focusing events, such as the terrorist attacks on the World Trade Center and the Pentagon on September 11, 2001. While the history of the evolving and ever contextually based relationship between privacy and
security has existed for many decades, and even centuries, computer security and privacy did not register on the radar of many individuals, groups, and institutions prior to the latter part of the 1960s. Growing concern about the protection of financial and other information on individuals held on government computers, the consolidation of databanks, and the use of social security numbers as a standard identifier led to substantial controversy and public backlash in the second half of the 1960s. The failure of Lyndon B. Johnson's 1967 proposal for a National Data Center, which would have greatly centralized government information maintained on individuals, was indicative of public concerns for individual privacy in the era of computers. Despite the abandonment of the National Data Center, the amount and concentration of information held on individuals by government continued to grow, and privacy concerns persisted. Concerns about computer databanks and privacy were fueled by a number of influential books in the late 1960s and early 1970s. Most notable among these were legal scholars Alan Westin's Privacy and Freedom (1967) and Arthur Miller's The Assault on Privacy (1971). The former was a multifaceted look at privacy, technology, and law, inspired by and attentive to the heightening and changing uses of computers, while the latter concentrated almost exclusively on the new threats to individual privacy posed by the growing use and connectivity of computer databanks. Such works, however, tended to be more focused on real and potential abuses by government, as opposed to outside infiltration and the security of systems, or inappropriate uses in the private sector. The ongoing controversy concerning government as Big Brother, with massive databanks of information on individuals, planted a seed, and the concern would grow as commercial computer networks came into more common use. Such concerns accelerated rapidly as networked minicomputers, and particularly personal computers, became increasingly commonplace, and infiltration of corporate and government systems became increasingly common [55].9

9 In 1976 the General Accounting Office estimated that many of the federal government's 9,000 computers were insufficiently protected against sabotage, vandalism, terrorism, or natural disaster. In June 1976 a computer programmer who claimed to only be trying to prove the laxness of government computer security was convicted for tapping into classified federal energy information.
Growing public pressure for safeguarding and restricting the government's use of concentrated databases of information on individuals led Secretary of Health, Education, and Welfare Elliot Richardson to initiate a committee in 1972 to study and report on government information systems and personal privacy, the Automated Personal Data System (APDS) Committee. As a result of Willis Ware's early involvement and expertise in computer security, coupled with his strong concern for the privacy of individuals, Richardson asked Ware to join, and later chair, the APDS. This committee's work led to the Privacy Act of 1974 and the establishment of the Privacy Protection Study Commission (PPSC). The work of these two groups represented the most important achievements of the 1970s in examining the threats to individual privacy posed by government computer databases containing financial, medical and other types of personal information [61]. While the Privacy Act of 1974 gave individuals in the United States the right to inspect records on themselves held by the federal government, this did not extend to state and local government databases, nor to those held by corporations and non-government organizations. In general the US has favored a policy of responding to privacy problems as they arise and gain attention, rather than developing a broad and comprehensive policy on privacy, as has been the case throughout Western Europe and in a number of other nations during the past quarter century [61]. Overall, despite the Privacy Act and legislation at the state level, government databases containing private information on individuals, such as the systems of the Social Security Administration, failed in tests to prevent infiltration. Such failures and abuses led President Jimmy Carter to seek to tighten security over the government's increasingly unwieldy networks of computers in order to protect individual privacy and prevent fraud and crime. National privacy legislation has generally been more prominent in Western Europe than in the
United States. In Great Britain, the government asked Dr. Ross Anderson of the Computer Laboratory at Cambridge to evaluate and help define security standards for patient clinical information within the country's National Health Service (NHS) network. With the ethical treatment of patients rather than national security as the goal, protocols and standards for secure medical information systems have evolved differently, and have been influenced not only by national groups but also by international ones, such as the European Standardisation group for Security and Privacy of Medical Informatics (CEN TC 251/WG6), which has mandated the encryption of patient medical health data on large networks. In 1996 Anderson set out rules that could be used to uphold the principle of patient consent independently of the details of specific systems, and provide models and protocols that gave the medical profession an initial response to creeping infringement of patient privacy by NHS computer systems [3].

As privacy concerns have escalated, broad public concern has not been limited to government medical and social welfare databases. Concerns about the vulnerability of DoD systems heightened with the growth of networked personal computers, hackers, and computer criminals in the 1980s. In 1983, as the film "War Games" was introducing a scary scenario of hacker infiltration of DoD computers controlling the nation's nuclear arsenal, a number of military experts expressed concern about the penetrability of the department's computer systems. In the second half of that year, the DoD, in its biggest move ever to prevent illegal penetration of its computers, announced it would split its computer networks into separate parts for civilian and military users, and thus limit access by university-based researchers, trespassers, and possible terrorists or spies [10]. While this enhanced security, the notion of impregnable DoD computers was relegated to an idyllic and distant past. For instance, in 1988 the New York Times reported that a West German citizen had secretly gained access to more than 30 DoD and DoD contractor computers over more than two years [37]. On a different front, during the 1980s, the controversy over encryption standards between the
government, particularly the NSA and Commerce Department, and academic cryptographers and industry leaders (essentially just RSA Data Security prior to the 1990s) received modest, but persistent, attention from the New York Times and other media sources. This conflict, however, remained a fairly low-level issue until the early 1990s, when the Clinton Administration backed a plan for a key escrow phone encryption system using DES, and later a DES-like Skipjack algorithm (which used twice as many substitutions as DES, and thus provided much stronger security), that would allow government to have a trap door (the Law Enforcement Access Field, or LEAF) to listen in on the encrypted communications of this commercial cryptosystem, the AT&T Telephone Security Device (TSD) 3600, the so-called Clipper Chip [49].10 The idea for the Clipper Chip originated with the assistant deputy director of the NSA, Clinton Brooks, and National Institute of Standards and Technology acting director Ray Kammer. The argument Brooks, Kammer, and other proponents made for the Clipper Chip was that it was critical for law enforcement and national security to ensure the future ability of the government to wiretap phones to catch and prosecute criminals and terrorists. Furthermore, it could obviate the government's real or perceived need for export controls. The Clipper Chip was heavily pushed by the agency, NIST, the FBI, and others in government, and it rekindled outcries of government abuse that were different in form, but similar in degree and tone, to the public response two and a half decades earlier to President Lyndon B. Johnson's proposed National Data Center. While the Computer Security Act of 1987 had shifted authority in the field away from the NSA and to NBS (and later NIST), the influence of the secretive intelligence organization remained. This was in part because of some common beliefs among the leaders of NBS/NIST and the NSA, but also a result of the far greater human and computing resources and capabilities of the NSA relative to NBS/NIST.

10 Ironically, the original name for the government's trap door for Clipper was LEEF, or Law Enforcement Exploitation Field.
The annual RSA Data Security Conference provided a perfect forum for computer scientists, government officials, and industrialists to debate the Clipper Chip, and the rapidly growing event, which early on had drawn fewer than 100 people, grew exponentially. RSA's Jim Bidzos was the master of ceremonies for these conferences and became the leading spokesperson opposing the Clipper Chip. Bidzos, however, was far from alone on this issue. Opposition to the Clipper Chip included software security companies, advocacy groups and political pundits from the right and the left, many members of the public, and foreign governments (which strongly opposed the importation of an American technology that allowed the US government unfettered access to eavesdrop). A New York Times/CNN poll at the beginning of 1994 showed more than 80 percent of the American public opposed the Clipper Chip [32: 261]. Vice President Albert Gore had originally been sold on the proposed chip by the NSA and FBI, and the Clinton Administration remained firmly in favor of the Clipper Chip throughout 1993 and the first part of 1994. Soon thereafter, however, the administration grew weary of this losing battle, gained greater appreciation for the drawbacks of the technology, and withdrew support. The Clipper Chip controversy highlighted the fact that the privacy of individuals and organizations remained a significant factor in drawing peoples' attention to computer security issues. The media's embrace of the issue of computer security fit in with the growing tendency toward sensationalism that attracted attention by instilling fear and feeding the insecurities of viewers and readers. While computer security was an important issue and threats were real, most reporting took a shallow approach to the underlying complexity of the issue in favor of personalized accounts of cyberpunks, malicious hackers, cyber criminals, and the dangers faced. This got many people to think about computer security on one level, but overall, in terms of secure systems and practices, most remained uninformed or oblivious. Paradoxically, computer security and computer crime, like crime more generally, simultaneously exist as a frequent mass media topic
(indicating its popularity in a ratings-driven media environment) and a topic that most people prefer not to think about. As longtime computer crime and computer security expert Donn Parker has argued:

Everybody likes to be able to communicate in confidence and secrecy when they need to and everybody likes to have an assurance that they are free from attacks from criminals, but as far as security itself is concerned, the locked doors and filing cabinets and having to lock your computer, it is unpleasant and inconvenient and it subtracts in business from profitability, productivity, and growth. So nobody likes it. I have spent thirty-five years of my career working in a field nobody really likes. It has been kind of interesting. The journalists like it for the sensationalism. It sells newspapers; you can get people disturbed [42].
20.15
CONCLUSION
Computer security has been a fundamental aspect of digital computing for decades, with both getting their start and early funding from the DoD, but with developments in industry and the academy becoming all the more prevalent over the past two decades. While the DoD continues to be a force in both networked computing and computer security, its relative place has diminished to an extent as a large and vibrant computer industry has been the epicenter of many computer technology developments and the software security trade has become a fundamental sector of the IT industry. Likewise, the pre-eminent position of the US in computer security standards has declined as the importance of research throughout the world and of international standards becomes more critical in an increasingly global environment and economy. Standards today are created by the marketplace and through corporate interactions to a greater extent than ever before relative to the military. While security products abound and leading ones create some degree of standardization, the reality that no product or system is impenetrable becomes all the more clear. Increasing the dialogue about the historically avoided topic of computer security, both publicly and, when necessary, in closed settings such as leading international corporations, becomes all the more important. Sharing
useful knowledge and techniques, as is done confidentially by major global corporations as part of I4, can lead to a more secure environment with greater information integrity. On both an individual and institutional level, recognizing that products cannot provide magic bullets and there will always be threats, infiltration, and destruction is a critical first step. At the same time, unpredictability can be a very valuable tool, as computer criminals prey on knowing how both people and computing and software systems work, and the continuity and timing of processes. Developing, using, and frequently changing complex passwords, varying the routine of computing systems, and continuously altering the procedures we use to interact with computers can make computer crime far more difficult. In this respect, the notion of standards is altered from clearly defined processes and adhering to exact specifications, as was done within the Bell-LaPadula and TCSEC models, to defining broader evaluation techniques, recognizing differences, embracing variance, and never abandoning efforts to ensure greater integrity of data. Overall, a multifaceted approach of products and processes will be increasingly important to best address an uncertain future with regard to the challenging issue of computer security as computer networking becomes increasingly ubiquitous.

REFERENCES

[1] J. Abbate, Inventing the Internet, MIT Press, Cambridge, MA (1999).
[2] J.P. Anderson, Computer Security Technology Planning Study, Deputy for Command and Management Systems, HQ Electronic Systems Division, Bedford, MA (1972).
[3] R. Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems, Wiley, New York (2001).
[4] W. Aspray, John von Neumann and the Origins of Modern Computing, MIT Press, Cambridge, MA (1990).
[5] J. Bidzos, oral history interview by J.R. Yost, Mill Valley, California (December 11, 2004).
[6] H.W. Bingham, Security Techniques for EDP of Multilevel Classified Information, Technical Report No. RADC-TR 65-415, Rome Air Development Center, Griffiss Air Force Base, New York (December 1965), National Bureau of Standards Collection, Charles Babbage Institute, University of Minnesota.
[7] W.J. Broad, Every computer 'whispers' its secrets, New York Times (April 5, 1983), C1.
[8] W.J. Broad, Global computer network split as safeguard, New York Times (October 5, 1983), A13.
[9] D. Burnham, Computer safeguard is broken in U.S. test, New York Times (January 31, 1977), 18.
[10] M. Campbell-Kelly, From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry, MIT Press, Cambridge, MA (2003).
[11] D. Clark, D. Wilson, A comparison of commercial and military security policies, Proceedings of the IEEE Symposium on Security and Privacy, IEEE Computer Society Press, Los Alamitos, CA (1987), pp. 184-194.
[12] E. Cohen et al., HYDRA: the kernel of a multiprocessor operating system, Communications of the ACM 17(6) (1974), 337-345.
[13] D.E. Denning, Cryptography and Data Security, Addison-Wesley, Reading, MA (1999).
[14] J.B. Dennis, Segmentation and the design of multiprogrammed computer systems, Journal of the Association for Computing Machinery 12 (1965), 589-602.
[15] Department of Defense Trusted Computer System Evaluation Criteria, DoD 5200.28-STD, Department of Defense, Washington, DC (1985).
[16] W. Diffie, M.E. Hellman, New directions in cryptography, IEEE Transactions on Information Theory IT-22 (1976), 644-654.
[17] W. Diffie, S. Landau, Privacy on the Line, MIT Press, Cambridge, MA (1998).
[18] W. van Eck, Electromagnetic radiation from video display units: an eavesdropping risk?, Computers and Security 4(4) (1985), 269-286.
[19] L.D. Faurer, Computer security goals of the Department of Defense, Computer Security Journal (1984).
[20] G.A.O. Finds Security is Lax at U.S. Computer Installations, New York Times (May 11, 1976), 16.
[21] W. Gates, Remarks, RSA Data Security Conference (February 15, 2005), http://www.microsoft.com/billgates/speeches/2005/02-15RSA05.asp.
[22] L. Goff, Worm disables Net, Computerworld 33(40) (1999), 78.
[23] M. Hellman, oral history interview by J.R. Yost, Palo Alto, California (November 11, 2004), Charles Babbage Institute, University of Minnesota.
[24] A. Hodges, Turing: The Enigma, Simon & Schuster, New York (1983).
[25] M. Hunter, U.S. to tighten computer security to halt abuses, New York Times (July 27, 1978), A14.
[26] International Business Machines, The Considerations of Data Security in a Computer Environment, National Bureau of Standards Collection, Charles Babbage Institute, University of Minnesota.
[27] D. Kahn, Tapping computers, New York Times (April 3, 1976), 23.
[28] D. Kahn, The Codebreakers: The Story of Secret Writing, Macmillan, New York (1967).
[29] D.S. Landes, The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present, Cambridge University Press, London (1969).
[30] K. de Leeuw, The Dutch invention of the rotor machine, 1915-1923, Cryptologia 27(1) (2003), 73-94.
[31] E.C. Lesser, G.T. Thompson, How a hacker tried to fool a security expert, New York Times (February 22, 1995), D19.
[32] S. Levy, Crypto: How the Code Rebels Beat the Government - Saving Privacy in the Digital Age, Viking, New York (2001).
[33] D. MacKenzie, G. Pottinger, Mathematics, technology, and trust: formal verification, computer security, and the U.S. military, IEEE Annals of the History of Computing 19(3) (1997), 41-59.
[34] D. MacKenzie, Knowing Machines: Essays on Technical Change, MIT Press, Cambridge, MA (1996).
[35] J. Markoff, Can computer viruses be domesticated to serve mankind?, New York Times (October 6, 1991), E18.
[36] J. Markoff, Computer intruder is put on probation and fined $10,000, New York Times (May 5, 1990), 1-2.
[37] J. Markoff, Top-secret and vulnerable, New York Times (April 25, 1988), D1.
[38] J. Markoff, Virus barely causes sniffle in computers; feared computer plague passes with very few infections, New York Times (March 7, 1992), 1-2.
[39] A. Miller, Privacy: Computers, Databanks, and Dossiers, University of Michigan Press, Ann Arbor, MI (1970).
[40] D.F. Noble, America by Design: Science, Technology, and the Rise of Corporate Capitalism, Knopf, New York (1977).
[41] D.B. Parker, Fighting Computer Crime, Charles Scribner & Sons, New York (1983).
[42] D.B. Parker, oral history interview by J.R. Yost, Los Altos, California (May 14, 2003), Charles Babbage Institute, University of Minnesota.
[43] E. Passaglia, K.A. Beal, A Unique Institution: The National Bureau of Standards, 1950-1969, National Institute of Standards and Technology, Gaithersburg, Maryland (1999).
[44] C. Pursell, The Machine in America, Johns Hopkins University Press, Baltimore, MD (1995).
[45] K. Redmond, T.M. Smith, From Whirlwind to MITRE: The R&D Story of the SAGE Air Defense Computer, MIT Press, Cambridge, MA (2000).
[46] R.L. Rivest, A. Shamir, L. Adleman, A Method for Obtaining Digital Signatures and Public Key Cryptosystems, MIT Laboratory for Computer Science Technical Memo 82 (MIT/LCS/TM-82).
[47] D. Russell, G.T. Gangemi Sr., Computer Security Basics, O'Reilly & Associates, Cambridge, MA (1991).
[48] J.A. Russell, Electromagnetic Radiation from Computers, Security in the Computing Environment: A Summary of the Quarterly Seminar, Research Security Administrators, June 17, 1965, System Development Corporation, Santa Monica, CA (1966), National Bureau of Standards Collection, Charles Babbage Institute, University of Minnesota.
[49] B. Schneier, Secrets & Lies: Digital Security in a Networked World, Wiley, New York (2000).
[50] R.W. Seidel, Secret scientific communities: classification and scientific communication in the DOE and DoD, The History and Heritage of Scientific and Technological Information Systems, M.E. Bowden, ed., Information Today, Medford, NJ (2000).
[51] B. Sinclair, At the turn of a screw: William Sellers, the Franklin Institute, and a standard American thread, Technology and Culture 10 (1969), 20-34.
[52] B. Sinclair, Philadelphia's Philosopher Mechanics: A History of the Franklin Institute, 1824-1865, Johns Hopkins University Press, Baltimore (1974), pp. 170-194.
[53] S. Singh, The Code Book: The Evolution of Secrecy from Mary, Queen of Scots to Quantum Cryptography, Doubleday, New York (1999).
[54] M.R. Smith, Harpers Ferry Armory and the New Technology: The Challenge of Change, Cornell University Press, Ithaca, New York (1977).
[55] Theft by computer: Programmer taps U.S. data secrets, New York Times (June 17, 1976), 58.
[56] F.M. Tuerkheimer, The underpinnings of privacy protection, Communications of the ACM 36(8) (1993), 69-74.
[57] W.H. Ware, oral history interview by J.R. Yost, Santa Monica, California, RAND Corporation (August 11, 2003), Charles Babbage Institute, University of Minnesota.
[58] W.H. Ware, Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security, Office of the Director of Defense Research and Engineering, Washington, DC (11 February 1970).
[59] A.F. Westin, Privacy and Freedom, Atheneum, New York (1967).
[60] P. Wright, Spy Catcher: The Candid Autobiography of a Senior Intelligence Officer, Viking, New York (1987).
[61] J.R. Yost, Reprogramming the Hippocratic Oath: an historical examination of medical informatics and privacy, Proceedings of the Second Conference on the History and Heritage of Science and Technological Information Systems, W.B. Rayward and M.E. Bowden, eds, Information Today for Chemical Heritage Foundation, Medford, New Jersey (2004), pp. 48-60.