The Journal of Systems and Software 61 (2002) 225–232
An empirical study of industrial security-engineering practices

Rayford B. Vaughn Jr. a, Ronda Henning b, Kevin Fox b
a Department of Computer Science, Mississippi State University, P.O. Box 9637, Mississippi State, MS 39762, USA
b Harris Corporation, Government Communications Systems Division, MS W2/9703, P.O. Box 37, Melbourne, FL 32902, USA
Received 1 March 2001; received in revised form 1 July 2001; accepted 1 September 2001
Abstract

This paper presents lessons learned and observations about the state of security-engineering practice from three information security practitioners with different perspectives – two in industry and one in academia. All of the authors have more than 20 years' experience in this field, and two were members of the US National Computer Security Center during the early days of the Trusted Computer System Evaluation Criteria and the strong promotion of trusted operating systems that accompanied the release of that document. It has been argued that, over the last 20 years, security-engineering practices have not kept pace with the escalating threats to information systems. Much has occurred since that time – new security paradigms, the failure of evaluated products to come into common use, new systemic threats, and an increased awareness of the risk faced by information systems. This paper presents an empirical view of lessons learned in security-engineering, experiences in applying the trade, and observations made about the successes and failures of security practices and technology. This work was sponsored in part by NSF Grant CCR-0085749. © 2002 Elsevier Science Inc. All rights reserved.

Keywords: Computer security; Information assurance; Security-engineering; Risk assessment
1. Introduction

Software engineering as a discipline is described by the IEEE as the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. A major component of this discipline has become known as requirements engineering – the process of determining exactly what is to be developed by the software engineer and what performance characteristics it must possess. Thayer and Dorfman (1997) define software requirements engineering as "the science and discipline concerned with establishing and documenting software requirements" and suggest that it consists of several separate stages: elicitation, analysis, specification, verification, and management.
Increasingly, requirements for software and system security defenses and countermeasures are becoming commonplace within customer communities. The increasing demand for, and importance of, security requirements in systems engineering has created a relatively new engineering discipline – security-engineering. Some might say that security-engineering for information systems represents a completely new discipline. The authors' perspective is that security-engineering falls squarely within the realm of systems and software engineering. During the requirements engineering process, the systems and software engineers determine the system user's definition of "security". This can vary from "a guard at every physical door" to comprehensive data confidentiality, integrity, and availability requirements. The user's definition of, and perspective on, security defenses and countermeasures can affect all phases of the system lifecycle, including design, code, integration, test, and maintenance. This paper makes an argument for security-engineering as a systems engineering skill and describes common practices of security-engineering in its industrial application today.
1.1. Historical background

Approximately 20 years ago, a concerted effort began in the United States to strengthen the protection of information assets in automated systems. The effort was a response to several events, but a Rand report and a Defense Science Board Task Force effort (Ware, 1970; Anderson, 1972) are generally cited as two of the most prominent. In summary, these activities cited the increasing use of information systems for command and control applications, and the ease with which both the operating systems and the data within them could be corrupted, as major causes for concern in the maintenance of American defensive force superiority. These reports generated a series of government actions (both domestic and international) to protect information assets stored and processed electronically. Much of the rest of the story is relatively well known within the information security community. Several attempts were made during the 1970s to develop secure operating systems, including experimentation with kernel modularization and the use of formal methods for code design and development. Lessons learned from these experiments were documented (primarily by the MITRE Corporation) and incorporated into architectural and development guidance (DOD, 1983, 1985), which was promulgated by the US DOD through an organization initially named the DOD Computer Security Center (1981) and later the National Computer Security Center (NCSC). The international community adopted various related approaches to trusted systems and their evaluation (CSSC, 1992; GISA, 1989; UKCESG, 1989) and eventually adopted a uniform approach to the evaluation of a system's security properties, the Common Criteria or ISO Standard 15408 (1999). A continuous stream of increasingly restrictive policies and regulations associated with processing private, sensitive but unclassified, and classified data evolved through the 1990s and into the present.
1.2. Technological history
Meanwhile, interesting tangential events were occurring in information technology development. New hardware, software, and middleware were introduced and, seemingly overnight, the hardwired terminal-to-mainframe paradigm for connectivity was shattered. Individual desktops had more power and connectivity at their disposal than entire data centers had possessed only five years before. The Internet and global connectivity boomed; electronic commerce became commonplace almost overnight. Products addressing single-function security solutions exploded in the marketplace and became necessities. Government-mandated evaluation policies and the procurement policies based upon them were unsuccessful at generating sufficient market demand from the vendor communities. Network attacks proved trivial to mount, and availability (denial of service) attacks proved simple. Traditional common law proved inadequate to address cyber crime issues. Today, although we understand the problems and their associated issues much more comprehensively, we may be no closer to solving them than we were 30 years ago. Does this mean that these earlier efforts were failures? Obviously not. The community has developed a strong scientific methodology for building trusted systems, albeit for specialized applications. A level of security awareness that certainly did not exist in the past has been generated. Large numbers of security products are now available (although most are single-problem solutions), and the difficulty of penetrating a system has been increased. This success, however, has not been to the extent originally envisioned. Systems engineers and the user communities have not, for example, been able to "secure" systems, prevent penetrations, deny availability attacks, or later prosecute those identified as attackers. Additionally, there has not been an overwhelming acceptance of evaluated products built to government or international standards, due to the length of time involved in a comprehensive security analysis of a product. The current pervasive view appears to be an approach to securing systems that applies a defense in depth architecture to mitigate the risk to information assets down to an acceptable level. A representative process for this approach is provided in Fig. 1. In this environment, the security-engineer composes various security products to provide sufficient protection. In some cases this judgement proves accurate; in other cases it does not. In the paragraphs that follow, we introduce some thoughts and experience factors gained from practicing security-engineering over several years.
2. A security-engineering process approach

As with other requirements elicitation difficulties, determining what a customer's information security requirements are, and how they can best be satisfied, is left largely in the hands of the systems engineer. In turn, the systems engineer relies on a knowledgeable security-engineer to develop a security architecture that addresses the needs of the customer and meets the comprehensive system-level requirements. Customers and end users are generally incapable of articulating their security needs as anything more than high-level declarations.
Fig. 1. A representative security-engineering process applying a defense in depth solution to mitigate the risk profile to an acceptable level.
To develop a common understanding (or perhaps a common mental model) between the engineer and the customer, some form of business process review (BPR) generally occurs. The BPR involves the engineer, the end customer, and perhaps other stakeholders, who work together to understand the current business process. Over the life of the BPR, the team reaches a comprehensive understanding of what the current business processes are and how they can be changed, within business constraints, to improve security to a level acceptable to the organization. Henning (1997) makes the case that existing requirements engineering tools, such as Zachmann business process models, can be augmented to allow common requirements engineering tools to collect raw input on information security data flows. Sufficiency in information security is achieved when the solution's cost, in operational terms, does not exceed its value in terms of the protection it affords – a principle previously published by the authors (Vaughn, 1998; Vaughn and Henning, 2000). Starting with information gained from working closely with the customer (current processes, constraints, security policies, desired outcomes, etc.), the security-engineer will generally conduct a risk assessment/analysis and a vulnerability analysis, propose an engineered solution, implement the solution, test it, document procedures, and train the organization in the new procedures. This process, depicted in Fig. 2, is cyclic and needs to be repeated periodically, because the solution set tends to deteriorate over time as new vulnerabilities are discovered and publicized and as new attack schemes are devised and employed.
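The sufficiency principle above lends itself to a simple decision rule. The following minimal Python sketch is our illustration, not part of the original process description: all names and figures are invented assumptions. It screens candidate countermeasures, keeping only those whose operational cost does not exceed the value of the protection they afford, within a budget; a real engineer would derive the numbers from the BPR, the risk assessment, and the vulnerability analysis, and would rerun the whole cycle periodically.

```python
# Illustrative sketch of the sufficiency test described above.
# All figures are hypothetical; real assessments would derive them
# from the BPR, the risk assessment, and the vulnerability analysis.

from dataclasses import dataclass

@dataclass
class Countermeasure:
    name: str
    operational_cost: float   # annualized cost to buy, run, and maintain
    risk_reduction: float     # expected annual loss avoided if deployed

def sufficient_solution(candidates, budget):
    """Select countermeasures whose protection value covers their cost.

    A countermeasure is worth deploying only while its cost does not
    exceed the value of the protection it affords (Vaughn, 1998).
    """
    selected, spent = [], 0.0
    # Consider the most cost-effective countermeasures first.
    ranked = sorted(candidates,
                    key=lambda c: c.risk_reduction / c.operational_cost,
                    reverse=True)
    for cm in ranked:
        if cm.operational_cost <= cm.risk_reduction and \
           spent + cm.operational_cost <= budget:
            selected.append(cm)
            spent += cm.operational_cost
    return selected

candidates = [
    Countermeasure("firewall", 20_000, 120_000),
    Countermeasure("intrusion detection", 35_000, 60_000),
    Countermeasure("full formal verification", 500_000, 80_000),  # cost > value: rejected
]
for cm in sufficient_solution(candidates, budget=100_000):
    print(cm.name)
```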
Fig. 2. Security-engineering process view.
Building a system to meet a security requirement is often difficult because the problem being addressed is not static but dynamic. Requirements such as providing an easy-to-use interface, online help facilities, or real-time scheduling are static requirements: the technical solution can be determined when the system is built and delivered, and that solution is generally viable for the life of the system. A security requirement is dynamic for several reasons. First, the security solution depends on several factors:
• the threat against the system,
• the likelihood of the threat being exercised,
• the state of technology available for system protection,
• the state of technology for system attack, and
• the perceived value of the enterprise's information assets.

Second, a security solution (in most cases) needs to be developed to defend against the most likely threats, and the security solution itself is also a dynamic factor. The threat against an enterprise can change in response to specific, identifiable events. If the security solution proposed by an engineer is viewed as static, then the engineer must endeavor to establish a protection solution that addresses the maximum threat, Max(Threat), that can occur. If the solution is viewed as dynamic, then a range of protections can be proposed that address specific threat conditions and the events leading to those conditions. This leads us to suggest the notion of a "system security environment", defined as follows (Vaughn, 1998).

Definition (System security environment). A system security environment, $\mu$, is an element of the set of possible security environments $V$, which may be defined as a three-tuple
$$V = [S, A, F],$$
where $S$ is the set of possible system states of existence, $s_i$, in which a defined risk has been identified and a set of protections has been put in place to mitigate the risk; $A$ is the set of actions, $a_i$, that move the system from one state to another; and $F$ is the set of key protection factors, $f_i$, associated with a specific state, $s_i$.

It then becomes the job of the BPR analyst to work with the enterprise to discover the pairs $(s_i, \{f_j : j = 1, \ldots, n\})$ that represent each specific plausible environment and the protection factors that are important to the enterprise. This notion is depicted in Fig. 3, where the security state of an enterprise is shown changing over time in response to identifiable events. As the state changes, so must the protection environment.
Fig. 3. System security environment.
If the security-engineer can identify these change triggers in advance, the dynamic nature of the protection needed can be planned for, documented, and implemented. This, of course, assumes that the enterprise will treat security as a dynamic entity rather than a static one. If the enterprise chooses to treat security as static, then a single set of protection factors must be selected based on Max(Threat). The dynamic view also raises the need for automated mechanisms that can accept detection (sensor) inputs, or state changes, and automatically increase or decrease a system's protection factors. Tools of this kind, integrated into a dynamic security protection architecture, do not exist today.
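To make the three-tuple concrete, the following Python sketch models a toy system security environment. The states, trigger events, and protection factors are hypothetical examples of the $(s_i, \{f_j\})$ pairs a BPR analyst might elicit; the transition logic merely stands in for the automated, trigger-driven mechanism described above, which does not yet exist as a product.

```python
# Toy model of a system security environment V = [S, A, F].
# All states, actions, and protection factors are hypothetical
# examples of what a BPR analyst might elicit from an enterprise.

# S: states in which a defined risk has been identified.
STATES = {"normal", "elevated", "under_attack"}

# A: actions (trigger events) that move the system between states.
ACTIONS = {
    ("normal", "ids_alert"): "elevated",
    ("elevated", "confirmed_intrusion"): "under_attack",
    ("elevated", "all_clear"): "normal",
    ("under_attack", "incident_closed"): "elevated",
}

# F: key protection factors associated with each state s_i.
FACTORS = {
    "normal": {"firewall_default_policy", "daily_audit_review"},
    "elevated": {"firewall_strict_policy", "hourly_audit_review",
                 "outbound_content_filtering"},
    "under_attack": {"block_external_access", "forensic_logging",
                     "incident_response_team"},
}

def on_event(state: str, event: str) -> tuple[str, set[str]]:
    """Return the next state and the protection factors to enforce."""
    next_state = ACTIONS.get((state, event), state)  # unknown events: stay put
    return next_state, FACTORS[next_state]

state = "normal"
for event in ["ids_alert", "confirmed_intrusion", "incident_closed", "all_clear"]:
    state, factors = on_event(state, event)
    print(f"{event:20s} -> {state:12s} enforce: {sorted(factors)}")
```

A static enterprise, by contrast, would collapse this table to the single Max(Threat) row and enforce the "under_attack" factors at all times.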
3. Experience factors

Certain heuristics are created over time as security-engineers attempt to define protection perimeters for their customers. This experience base becomes a belief system for the engineer and affects future recommendations in similar circumstances. Below are nine key factors and beliefs that the authors see as lessons learned from experience and as factors that affect today's security-engineering practice.
(a) Not all science works well in practice (e.g., trusted operating systems, government evaluation, formal verification, models, etc.). Early architectural and trusted-process guidance was promulgated in the form of the Trusted Computer System Evaluation Criteria (the Orange Book) and the rainbow series of supporting interpretative documents. The resulting trusted commercial operating systems were individually excellent works with a strong scientific basis – but they were simply never adopted to any great extent. Even within the US DOD community that funded the research initiatives leading to these systems, there was never general acceptance of the solution set. The underlying science was sound; however, applying that science in software proved difficult once the trusted operating system had to address not only the needs of sophisticated users and their applications but also the needs of a collaborative networking environment. Early adopters of trust technology made very significant capital investments, only to find that trusted systems were not user friendly or maintained with the same vigilance as a vendor's standard commercial products. The security evaluation of trusted products proved lengthy, costly, and faulty, and was considered a poor but necessary investment if one expected to compete for government procurement dollars. As evaluation delays mounted, commercial vendors were forced to maintain two product baselines, one trusted and one current, which meant early adopters were further penalized because they were always one release behind the commercial state of the art.
(b) Defining a methodology for formal verification of the trusted software necessary for higher levels of assurance proved elusive – not because the science was bad, but because the state of the practice was not mature, and it remains so today. The complexity of operating system services contradicts the security assertions of modularity and "small enough to be analyzed". Software, upon which trusted systems are built, does not easily comply with principles of science. As long as human talent builds the application software, or the software that generates software, it will be subject to human frailty and error. Dr. Dorothy Denning, a noted computer security researcher, addressed many of these same concerns in her opening plenary address at the 22nd National Information Systems Security Conference (NISSC) in October 1999 (see http://www.cs.georgetown.edu/~denning/infosec/award.html).
(c) Not all standards work well, nor do standards equate to security. Those who have worked in the field of software or systems engineering can attest to the difficulty of integrating components that were supposedly built to a common standard. Additionally, the protocol standards used to transmit data between processors and storage are often incomplete, vulnerable, or riddled with errors. Any experience with TCP/IP in its current form will convince an engineer of protocol failings. Likewise, emerging standards for high performance computing protocols such as VIA, Packetway, and Myrinet have security flaws that are not being addressed (Chakravarthi, 1999; Dimitrov and Gleeson, 1998). The lesson learned is that one cannot rely on standards to secure systems, or wait until systems have been fully evaluated against a given standard. Standards can often be the bane of security, in that predictability is the basis upon which many attacks are mounted. Additionally, standards cannot be used to mandate security. The battle cry "C2 by 1992" was intended to jump-start the security marketplace. Instead, as the evaluation process dragged on, it created a market for creative prose: "in evaluation", "under evaluation to the ITSEC", and "designed to meet the requirements of..." became common phrases in vendor literature. This created more confusion in the marketplace and frequently caused acquisition authorities to waive security requirements. As a result, the true demand for secure systems could never be adequately projected.
(d) Firewalls provide a false sense of security, in that an undeserved assumption of protection prevails from their use while holes remain in the security perimeter that can be exploited. In today's world of networking and wide area connectivity, commercial firewalls are generally considered the first line of "technical" defense for an internal network connected to the outside world. Firewalls are often employed to screen incoming packets and determine their validity or intent, and to screen outgoing packets for similar purposes. They may employ sophisticated intrusion detection software, auditing mechanisms, expert systems, or other additional functionality to assist in their task.
Firewalls and related mechanisms can be dangerous when they are improperly set up, configured incorrectly, or purchased on the strength of a salesperson's "emphatic assertion" of their capabilities. In some communities, firewalls are not considered worth the effort of installation, because they are misused so often that they cause more problems than they solve. While the authors do not subscribe to this philosophy, we offer it as evidence of the problem.
(e) Many products and mechanisms are on the market today for a customer's selection – but little is being done to integrate their results. One can purchase a network monitor, an intrusion detection system, smart cards, an audit reduction tool, and other point solutions to assist in mitigating risk. One cannot so easily find products that integrate the results of multiple data points – particularly when they disagree. Automation of "fuzzy" decision making is still lacking, and products are missing. We know of only a few emerging products that address this shortfall – STAT Analyzer from Harris Corporation and at least one other product under development by a major networks corporation scheduled for release in 2001. This particular product integrates the results of several other system and network assessment tools into a common view using a proprietary process known as "FuzzyFusion". The product was reported on at previous conferences (Vaughn and Henning, 2000; Henning and Fox, 1999). More work is sorely needed in this area: system administrators need more automated assistance in determining true system protection status and overall reporting, and point reporting (from a single sensor) is insufficient for today's distributed systems and sophisticated attacks. A simple sketch of the kind of result integration we mean follows.
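As an illustration of result integration (this is not the proprietary FuzzyFusion process, whose details are not public), the sketch below fuses hypothetical normalized scores from independent sensors into one risk estimate and flags strong disagreement for an administrator to adjudicate. All sensor names, weights, and thresholds are assumptions invented for the example.

```python
# Illustrative fusion of point-sensor results into a single view.
# This is NOT the FuzzyFusion algorithm; weights and thresholds
# are invented for the example.

def fuse(readings: dict[str, float],
         weights: dict[str, float]) -> tuple[float, bool]:
    """Combine normalized sensor scores (0 = benign, 1 = hostile).

    Returns the weighted consensus score and a flag that is set when
    the sensors disagree strongly and a human should adjudicate.
    """
    total_weight = sum(weights[name] for name in readings)
    consensus = sum(readings[name] * weights[name]
                    for name in readings) / total_weight
    spread = max(readings.values()) - min(readings.values())
    needs_review = spread > 0.5  # sensors disagree: escalate to an analyst
    return consensus, needs_review

readings = {"network_monitor": 0.9, "ids": 0.8, "audit_reduction": 0.2}
weights = {"network_monitor": 1.0, "ids": 2.0, "audit_reduction": 1.0}
score, review = fuse(readings, weights)
print(f"fused risk score = {score:.2f}, analyst review needed: {review}")
```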
(f) To understand the security of a networked computing system, an analyst must look at the entire system – not just a particular set of technologies or network components. Systems are hard to secure, and complex systems even more so. Systems interact with other systems, thus forming larger systems, and systems have emergent properties: they do things that were not anticipated by their original designers or users. For networked systems, one coping mechanism in the past has been to ignore the system and concentrate on individual network components and specific technologies (e.g., Windows NT vulnerabilities). But security has to be considered as a system within larger systems. Security must be viewed within its context – the context of the larger system being secured, that system's purpose, its value, and the threats it will face.
(g) New architectures introduce new problems that old science does not fix. A key example of this is high performance computing (Dimitrov and Gleeson, 1998). A few years ago, high performance computers were generally monolithic supercomputers dedicated to special applications. Networking advances, parallel computing, and processor speed increases led to a new form of high performance computing: clusters of high-end workstations collaborating on a single problem and communicating with each other over high speed networks. Such co-operative environments were used to quickly break encrypted data that was formerly thought to be secure. Technological advances often challenge conventional wisdom in security-engineering; the message here is that one cannot rely on today's solution for tomorrow's problem. Another example is that attackers keep discovering techniques that earlier wisdom failed to consider, such as the use of information hiding techniques like steganography to move information past a guard.
(h) Security in the large is unachievable – but acceptable risk mitigation is attainable. Acceptable risk mitigation requires a balance of prevention, detection (for those things that could not be prevented), and response and recovery (for those things that could be neither prevented nor detected); a sketch of this layering follows these items. Prevention techniques include firewalls, virtual private networks (VPN), trusted operating systems, encryption, public key infrastructure (PKI), biometrics, smart cards, and others. Detection is necessary because we cannot fully prevent attacks on our systems, data, and networks. Detection techniques include intrusion detection systems, data mining, content filtering, virus checkers, audit log analysis, and some emerging artificial intelligence techniques (e.g., FuzzyFusion; Vaughn and Henning, 2000; Henning and Fox, 1999). We have learned only too well that prevention and detection fail much too often. Response and recovery techniques become important at this stage – but they are lacking in the product market. Such tools include backup software, forensics software, and, to some extent, audit logs.
(i) Policy is important – but most of the fundamentals of security policy are common sense. Engineers do not need to spend a great deal of time in this area, but policy does need to be written down. Policy determines many things: it is often where the engineer can research the intent of the organization relative to security, and it may describe what assets the organization believes need to be protected and the responsibilities for implementing that protection. Policy is almost always in need of update, and much of it can be adapted from readily available boilerplate. Much of the protection architecture for a system, however, can be applied without a deep understanding of the organization's policy. This observation is not meant to downplay the importance of policy within an organization; it is included here to indicate that such policy often does not truly exist – and when it does, it is generally inadequate. The existence of policy, or the lack thereof, does not mean that a security solution cannot be proposed. Often, the policy can be intuitively deduced and later documented. However, the policy and the security solution that implements it must actually be used to be effective; they must be efficient, easy to use, and appropriate.
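The following minimal sketch illustrates the prevention/detection/response layering of item (h). The attack and control names are invented for illustration: each incident is handled by the first layer that can cope with it, and whatever slips past prevention and detection must be absorbed by response and recovery.

```python
# Toy defense-in-depth walk-through for item (h).
# Attack names and control examples are invented for illustration.

PREVENTION = {"port_scan", "known_virus"}        # stopped outright
DETECTION = {"novel_malware", "insider_misuse"}  # seen, then handled
# Anything else falls through to response and recovery.

def handle(attack: str) -> str:
    if attack in PREVENTION:
        return "prevented (e.g., firewall, VPN, encryption)"
    if attack in DETECTION:
        return "detected (e.g., IDS, audit log analysis)"
    return "response/recovery (e.g., backups, forensics)"

for attack in ["port_scan", "novel_malware", "zero_day_exploit"]:
    print(f"{attack:18s} -> {handle(attack)}")
```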
4. Thoughts and considerations

Information assurance, or security-engineering, is sometimes referred to as a "black art" or "arcane science". A good security-engineer is expected to recognize a good security design or implementation by intuition rather than by quantifiable measures. In some respects, security-engineering is more closely related to the legal profession: it relies upon a common body of knowledge and "case law" precedents for given architectures. Security-engineers are reluctant to field the first known implementation of a particular architecture – the penalty for being first is additional scrutiny and analysis, causing cost and schedule impacts. Given that commonly accepted information assurance metrics do not exist today, much of what we do, the tools we choose, and the perimeters we employ are based on empirical measures and past experience. This section contains some observations and heuristics that are founded on experience rather than science.
(a) Different application domains have different security assurance needs. Government intelligence agencies are far more likely (for good reason) to demand evaluated products, formal verification of code, trusted development environments, and high-end encryption; intelligence agency system security must start with the initial design of the system and build security into the overall system itself. Government agencies not in the intelligence business are far more likely to settle for less to handle their abundance of sensitive but unclassified data, and may not need this rigor. Most, in fact, can quite easily accept "add-on" products to build a trust perimeter around a vulnerable system. In this domain, evaluated products are important to the customer but not an overriding priority, and commercial encryption of browser quality is often acceptable. Meanwhile, the commercial customer will rely almost exclusively on the composition of commercial off-the-shelf products for security. Here, evaluation of a product by third-party laboratories is not a key factor; within this customer base, experience with a product or protection architecture is the key to acceptance.
(b) In many applications, past performance and emphatic praise DO count. This is particularly true with commercial clients who want to use products with an excellent reputation, and with service providers whose livelihood depends upon reliable, predictable systems. If a product has performed well for other clients, is reasonably straightforward to install, has a good support structure from the vendor, and has proven itself over time, then the customer and the security-engineer are both likely to favor it. This decision is often made without the benefit of formal product evaluation, trusted development environments, or code verification.
(c) History in the business keeps one from repeating past mistakes – even if it is not the lowest cost proposal. Many "start-up" companies are beginning to seek business in the information assurance area. Government agencies are sometimes bound by the proposal process and low-bid selection. (The authors are well aware of the "best value" selection criteria often cited, but believe these criteria are, in reality, often displaced by a low-bid strategy.) Selection of a security-engineering capability based on price can (and has) led to disaster. Experience, past performance, company commitment to the IA business area, and permanent staff are contractor pluses; other profiles may involve more risk.
(d) Frightening new programming paradigms are taking hold in the dot-com world (e.g., extreme programming; see http://www.extremeprogramming.org, accessed October 9, 2000) that will likely have a negative impact on trusted development, or even on controlled development. Security starts with the coders and the code that is written. This is true whether the code is an operating system, a compiler, an application layer, or any other executable. Testing, quality assurance, documentation, standards, life cycle development, and other standard software engineering practices are important to the assurance we look for during execution. Trends that produce code without such quality measures trade assurance for time to market, and such practices represent a threat to the security of systems. Time-to-market pressures can lower software safety, trust, and reliability; the consumer then becomes the testing ground for the program.
(e) Finally, we note that integration and system composability is a great challenge and is not being addressed to any great extent (or perhaps at all). By this we mean that the ability to add products onto a system and know what results is still a black art. In part this stems from the complexity of systems and their emergent properties. It is entirely possible to install several products that each individually provide some security features and protections, yet whose combination results in system failure. Systems must therefore be viewed as a whole, not considered piecemeal. It is also possible that individual products hold data that, if combined with data from other products, would signal an attack or penetration – but there exists no framework through which the products can communicate with each other. Such frameworks do exist in the network management area; we need them in the security area too.
5. Summary and conclusion
We have attempted in this paper to outline some challenges to conventional wisdom and historic practices. The world is changing, and so is our ability to secure its automation. We have many products in the marketplace today, but we are also finding that the products do not keep pace with the problems needing solutions. Old models of security-engineering do not always work well with today's problem sets, and much of security-engineering still rests on the experience of the engineer, risk management, and even luck. One might expect that, after more than 30 years of research and progress, we would be closer to being able to protect our systems; in reality, it is easier today to mount a significant attack against systems than it was in years gone by. This is primarily due to automated attack scripts, the abundance of attack information available to malicious users, and global interconnection. Training and experience still count for a great deal. Awareness programs and user training remain important. Most important, however, is the training of our systems administration staff – an area of increasing importance and one that is often sorely neglected. The technical talent shortage continues to grow, and finding capable, experienced staff is becoming much more difficult. Service providers that have managed to acquire a critical mass of such individuals are lower-risk companies to provide services to clients; past history is important. Meanwhile, both commercial and government entities must be educated about the value of their information, the exposures in their networks, the threats and risks they face, and thus their need to consider security as a vital system within their larger networked computing systems.
Acknowledgements

The authors wish to acknowledge the National Science Foundation, whose Grant CCR-0085749 partially supported this work. Additionally, we wish to acknowledge the support of the Center for Empirically Based Software Engineering (CeBASE) and its co-directors, Dr. Victor Basili and Dr. Barry Boehm. Finally, we thank Dr. Jack Murphy, security-engineer with EDS Corporation, for his reading of this paper and his insightful comments.
References

Anderson, J., 1972. Computer Security Technology Planning Study, ESD-TR-73-51, ESD/AFSC, Hanscom AFB, Bedford, MA, October 1972 [NTIS AD-758 206].

Chakravarthi, S., 1999. Security of high-performance messaging layers on programmable network interface architecture. In: Proceedings of the 22nd National Information Systems Security Conference, Arlington, VA, October 1999, vol. 1, pp. 61–70.

Canadian System Security Centre, 1992. The Canadian Trusted Computer Product Evaluation Criteria.

Dimitrov, R., Gleeson, M., 1998. Challenges and new technologies for addressing security in high-performance distributed environments. In: Proceedings of the 21st National Information Systems Security Conference, Arlington, VA, October 1998, vol. 2, pp. 457–468.
Department of Defense, 1983, 1985. Trusted Computer System Evaluation Criteria (Orange Book), DoD 5200.28-STD.

German Information Security Agency, 1989. Criteria for the Evaluation of Trustworthiness of Information Technology (IT) Systems, January 1989.

Henning, R., 1997. Use of the Zachmann model for security requirements engineering. In: 20th National Information Systems Security Conference, Baltimore, MD, October 15, 1997.

Henning, R., Fox, K., 1999. The network vulnerability tool (NVT) – a system vulnerability visualization architecture. In: 22nd National Information Systems Security Conference, Washington, DC, October 18–21, 1999.

International Standards Organization, 1999. International Standard 15408, The Common Criteria.
Thayer, R., Dorfman, M., 1997. Software Requirements Engineering, second ed. IEEE Computer Society Press, Silver Spring.

United Kingdom Communications and Electronics Security Group, 1989. UK Systems Security Confidence Levels, CESG Memo No. 3, January 1989.

Vaughn, R., 1998. Sufficiency in information security. In: Proceedings of the 21st National Information Systems Security Conference, Crystal City, VA, October 5–9, 1998.

Vaughn, R., Henning, R., 2000. A pragmatic, applications-oriented approach to information assurance. In: 12th Annual Canadian Information Technology Security Symposium, Ottawa, Canada, June 2000.

Ware, W., 1970. Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security. Rand Report R609-1, The RAND Corporation, Santa Monica, CA, February 1970.