Computers & Security, 16 (1997) 94-114
Information System Defences: A Preliminary Classification Scheme
Fred Cohen
Sandia National Laboratories, 7011 East Avenue, Livermore, CA 94550, USA.
This paper describes 140 different classes of protective methods gathered from many different sources. Where a single source for a single item is available, it is cited in the text. The most comprehensive sources used are not cited throughout the text but rather listed here (Cohen, 1995; Neumann, 1995). Other major sources not identified by specific citation are listed here (Bellovin, 1989, 1992; Bishop, 1996; Cheswick, 1994; Cohen, 1991, 1994a; Denning, 1982; Feustal, 1973; Hoffman, 1990; Knight, 1978; Lampson, 1973; Landwehr, 1983; Linde, 1975; Neumann, 1989; Spafford, 1992).
Background
For some time, people who work in information protection have been considering and analyzing various types of attacks and defences. Several authors have published partial lists of protective techniques, but none have been very complete. In trying to do a comprehensive analysis of the techniques that may apply to a particular system, many people have found themselves looking through the many reference works time and again to assure, to a reasonable degree, that their coverage was fairly comprehensive. That is the reason this paper, and its earlier companion paper on attack methods, were written. This paper is a first cut at putting all of the methods
of defence into a classification scheme and co-locating them with each other so that knowledgeable experts can do a thorough job of considering possible defences without having to look at numerous reference articles, and so that those who wish to gain expertise will have a starting point for their investigation. In addition to the list of methods, it was decided to add examples of each method - hopefully to instill clarity - and to provide an initial assessment of the complexity issues involved in these methods. This has proven most helpful in explaining to many people who think that the protection task is easy or straightforward just how hard the issues we face are and how little has really been done to address them. The best result that could come out of this paper would be for readers to point out all of its flaws and imperfections - by providing more protective methods so the list can be expanded, by identifying related results so that the true complexity of these issues can be formally determined and citations to other reference works can be added, by helping to provide improved examples, and by suggesting ways to make this classification scheme more comprehensive, more widely accepted, and more valuable to the information protection research community.
Properties
In writing this paper, the intent was to provide more comprehensive coverage than was previously available. Along the way, we noticed several properties of the methods of defence:
Property 1: non-orthogonality. The classes described by this classification scheme are not orthogonal. For example, physical security and searches may have significant overlaps. This property makes analyzing the space as a whole quite difficult.
Property 2: synergy. The classes described herein have synergisms, so standard statistical techniques may not be effective in analyzing them. For example, if two defences are each 90% effective, this does not mean that when combined they become 99% effective; when combined they may become 0% effective (the arithmetic is sketched after Property 6 below). Synergistic effects are not yet fully understood in this context; this makes analysis of attack and defence quite complex and may make optimization impossible until synergies are better understood.
Property 3: non-specificity. The classes described are, for the most part, non-specific to an architecture or situation. Actual defences, however, are quite specific, and the devil - as they say - is in the details. In some classes, for example enhanced perimeters, there are hundreds of distinct techniques known to exist. The broadness of these classes makes each of them a substantial area of research.
Property 4: descriptive only. The classes included here are outlined descriptively and - with a few notable exceptions - have not been thoroughly analyzed or even defined in a mathematical sense. Except in those few cases where mathematics has been developed, it is hard to even characterize the issues underlying these classes, much less attempt to provide comprehensive understandings.
Property 5: limited applicability. Each class described here may or may not be applicable in or to any particular situation. While threat profiles and historical information may lead us to believe that certain items are more or less important or interesting, there is no scientific basis today for believing that any of these classes should or should not be considered in any given circumstance. This sort of judgement is made today entirely on the basis of a judgement call by decision makers. Despite this general property, in most cases, analysis is possible and produces reasonably reliable results over a reasonably short time frame.
Property 6: incompleteness. The classes given here are almost certainly incomplete in covering the possible or even realized defences. This is entirely due to the author's and/or reviewers' lack of comprehensive understanding at this time. We can only form a complete system by completely characterizing these issues in a mathematical form - something nobody has even approached doing to date.
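To make Property 2's arithmetic concrete: the 99% figure comes from assuming the two defences fail independently, and synergy is precisely the failure of that assumption. A minimal sketch of the (usually invalid) independence calculation:

```python
# Naive combination of two 90% effective defences, assuming they
# fail independently: an attack succeeds only if it bypasses both.
p_bypass_one = 1 - 0.90
p_bypass_both = p_bypass_one ** 2
print(f"combined effectiveness under independence: {1 - p_bypass_both:.0%}")  # 99%
# Property 2 says real defences interact, so this independence
# arithmetic can be arbitrarily wrong - the true combination may even be 0%.
```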
Defence methods
Defence 1: strong change control. Strong change control requires that research and development be partitioned from production and that an intervening change control area be put in place that (1) prevents unauthorized changes from passing, (2) verifies the propriety of all changes that can be interpreted in a general-purpose way in the production environment, (3) tests all changes against sample production data, (4) only passes verified source code from research and development to production, (5) verifies that all changes are necessary for their stated purpose, (6) verifies that all changes are suitable to their stated purpose, (7) verifies that all changes clearly accomplish their stated purpose and no other purpose, and (8) verifies that the purpose is necessary and appropriate for the production system and authorized by management. Strong change control typically triples programming costs, does not permit rapid changes, and prohibits the use of macros and other similar general-purpose mechanisms now widely embedded in software environments. It also prohibits programming in the production system, requires a strong management commitment to the effort, and is limited to environments where the cost and inconvenience is acceptable. (Refer to Attack 13.)
Defence 2: waste data destruction. Effective destruction of waste products so as to make the cost of exploiting those waste products commensurate with the value of keeping those waste products from being exploited. Examples include shredding of waste paper, electromagnetic destruction of electromagnetic media, removal and destruction of labels and other
related intelligence information, and the introduction of noise to reduce the signal-to-noise ratio of electromagnetic or audio signals left over from normal operations. In most cases, well-known and reasonably inexpensive techniques are available for destruction of waste. Cost and complexity go up for destruction of some materials in a very short time frame with very high assurance, but this is rarely necessary.
Defence 3: detect waste examination. Detection of attempts to exploit waste products. Examples include the planting of false information that is easily traceable to the method of dissemination if discovered, the use of sensors in waste storage and processing areas to detect and observe activities carried out there, and auditing and testing of the destruction process. Detecting some classes of waste examination may be quite complex. For example, detecting the gathering of waste product through collection of electromagnetic emanations may be quite hard. In most cases, relatively low-cost solutions are available.
Defence 4: sensors. Physical and/or informational devices are used to sense properties indicative of abnormal operations or activities. Examples include motion sensors in physically secured areas, explosive-detecting sensors in doorways, and informational sensors in a firewall to detect unauthorized access attempts. The creation and placement of sensors is usually a reasonably simple matter, but analyzing sensor data and acting upon it is a substantially more complex issue.
Defence 5: background checks. Information about the background of an individual or organization is gathered in order to assess their trustworthiness for performing desired functions. Examples include inexpensive commercial background checks commonly used in pre-employment screening, credit and financial checks commonly used in loan processing, and in-depth background checks used for government security clearances. The cost of performing background checks can range from as low as a few hundred dollars to tens of thousands of dollars, depending on the depth and level of detail desired and the activities involved in the individual's history.
Defence 6: feeding false information. False information is fed to attackers in order to inhibit the success of their attacks. Examples include the provision of misleading information to cause foreign governments
to spend money on useless lines of research, the provision of false information that will be easily detected by a potential purchaser of information so that the attacker will lose face, and the creation of honey pots, lightning rods, or similar target systems designed to be attractive targets and redirect attacks away from more sensitive systems. Generating plausible misleading information can be difficult and expensive, but it is often neither.
Defence 7: effective mandatory access control. Access to information and/or systems and infrastructure is controlled by a set of mechanisms that are not under discretionary control and cannot be bypassed. Examples include the objective implementation of trusted systems, POset and lattice-based systems, and systems using digital diodes and/or guard applications to limit access. Despite more than 15 years of substantial theoretical and development efforts and hundreds of millions of dollars in costs, almost no systems to date have been built that provide fully effective mandatory access control for general-purpose computing with reasonable performance. This appears to involve many undecidable problems and some theoretical limitations that appear to be impossible to fully resolve. Examples of unsolvable problems include perfect access control decisions and non-fixed shared resources without covert channels. Highly complex problems include viruses, data aggregation controls, and unlimited granularity in access control decision-making.
Defence 8: automated protection checkers and setters. Automated programs check protection settings of all protected resources in a system to verify that they are properly set. Examples include several Unix-based tools. If proper protection settings can be decided by a fixed-time algorithm, it takes linear time to check (and/or set) all of the protection bits in a system. Making a decision on the proper setting may be quite complex and may interact in non-trivial ways with the design of programs. In many cases, commonly used programs operate in such a way that it is impossible to set privileges properly with regard to the rest of the system - for example, database engines may require unlimited read access to the database while internal database controls limit access by user. Since external programs can directly read the entire database, protection should prohibit access by non-privileged users, but since the database fails under this condi-
tion, protection other functions
has to be set incorrectly to work.
in order for
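As an illustration of the checking half of this class, here is a minimal sketch - not from the paper - of a Unix-style permission checker that flags world-writable files; the path scanned is arbitrary:

```python
import os
import stat

def world_writable(root):
    """Walk a directory tree and report files whose permission
    bits allow writing by any user (a common mis-setting)."""
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable or vanished entry; skip it
            if mode & stat.S_IWOTH:
                findings.append(path)
    return findings

if __name__ == "__main__":
    for path in world_writable("/etc"):
        print("world-writable:", path)
```

The setting half would follow the same walk, calling os.chmod to clear the offending bits; as the text notes, deciding what the proper setting is can be far harder than applying it.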
Defence 9: trusted applications. Trusted programs may be used to implement or interact with applications. Examples include secure Web and Gopher servers designed to provide secure versions of commonly used services (Cohen, 1997), trusted mail guards used to permit information flow that would be in violation of policy in a Bell-LaPadula-based system if not implemented by a trusted program, and most device drivers written for secure operating systems. Trust is often given but rarely fully deserved - in programs, that is. The complexity of writing and verifying a trusted program is at least NP-complete. In practice, only a few hundred useful programs have ever been proven to meet their specifications, and still fewer have had specifications that included desirable security properties.
Defence 10: isolated sub-file-system areas. Portions of a file system are temporarily isolated for the purpose of running a set of processes so as to attempt to limit access by those processes to that subset of the file system. Examples include the Unix chroot environment, mandatory access controls in POset configurations, and VAX's virtual disks. Implementing this functionality is not very difficult, but implementing it so that it cannot be bypassed under any conditions has proven unsuccessful. There appears to be no fundamental reason that this cannot be done, but in practice, interaction with other portions of the system is almost always required, for example, in order to perform input and output, to afford interprocess communication, and to use commonly available system libraries.
Defence 11: quotas. Each user, group of users, function, program, or other consumer of system resources is granted limited resources by the setting of quotas over the use of those resources by the consumers. Examples include disk quotas available in many operating systems, CPU cycle quotas imposed in some environments on a usage-per-time basis, memory usage quotas imposed on many timesharing systems, and file handle quotas commonly used in timesharing systems. Implementing quotas is straightforward and relatively reliable, but it is rarely used in distributed computing environments because of the control complexity, and it is widely considered Draconian by
users who encounter quota limits when trying to do legitimate work.
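A minimal sketch of a per-process quota, using the Unix resource limits exposed by Python's standard library (Unix-only; the 1 MB figure is arbitrary):

```python
import resource
import signal

# Ignore SIGXFSZ so exceeding the file-size limit surfaces as an
# OSError (EFBIG) instead of terminating the process outright.
signal.signal(signal.SIGXFSZ, signal.SIG_IGN)

# Quota: this process may not create a file larger than 1 MB.
resource.setrlimit(resource.RLIMIT_FSIZE, (1_000_000, 1_000_000))

try:
    with open("big.out", "wb") as f:
        f.write(b"\0" * 2_000_000)  # attempts to exceed the quota
except OSError as err:
    print("quota enforced:", err)
```

Disk, CPU, and memory quotas follow the same pattern with other RLIMIT_* constants; the "Draconian" complaint in the text arises exactly when legitimate work hits these hard stops.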
Defence 12: properly prioritized resource usage. Resource usage is prioritized so as to fail to provide proper resources only under conditions where success is not possible. Examples of avoiding improper prioritization abound, but only theoretical solutions typically exist for achieving optima. The resource allocation problem is well known to be at least NP-complete, and in practice, optimal resource allocation is not feasible. Resource limitations are usually addressed by providing more and more resources until the problems go away, but under malicious attack, this rarely works.
Defence 13: detection before failure. Redundancy of some sort is used to detect faults before they result in failures. Examples include multi-version systems, coding-based fault detection, and testing for faults. The general area of fault detection includes many NP-complete, exponential, and factorial problems. The complexity of many other such problems, however, is well within the normal computing capability available today. These issues have to be addressed on a case-by-case basis.
Defence 14: human intervention after detection. Once an incident is detected, people intervene. Examples include most methods for disaster recovery (which require human involvement during the recovery process), updating of known-virus checkers for new viruses, and recovering from most denial-of-service attacks. Reactive human response is limited to situations in which real-time (within a small number of computer cycles) response is not required, and is limited by the skills of the defenders.
Defence 15: physical security. Physical means are used to prevent harm. Examples include guards, fences, and physical alarm systems. Physical security is expensive and never perfect, but without physical security, any other type of information protection is infeasible. The best attainable physical security has the effect of restricting attacks to insiders acting with expert outside assistance.
Defence 16: redundancy. Redundancy is used to mitigate risks. Examples include the use of backups to mitigate the risks of lost information, standby systems to mitigate the risks of downtime, and cyclic redundancy
check (CRC) codes to mitigate the risks of certain classes of noise in transmission. Redundancy costs, and more redundancy costs more. If privacy is to be retained, redundant systems increase the risk of exposure and must also be protected. Avoiding common-mode failures requires that redundancy be implemented using separate and different methods, and this increases costs still further. Redundancy must be analyzed and considered on a case-by-case basis.
Defence 17: uninterruptible power supplies and motor generators. Uninterruptible power supplies provide continuous conditioned power to critical equipment during brownouts, blackouts, and across a wide range of electrical conditions. Motor generators provide longer-term, slower-response backup power in cases of wide-area or long-term outages. This is standard off-the-shelf equipment in widespread use. Periodic verification and testing is needed in order to assure proper operation over time.
Defence 18: encryption. Information is transformed into a form which obscures the content so that the attacker has difficulty understanding it. Examples include the use of DES encryption for protecting transmitted information, the use of the RSA cryptosystem for exchanging session keys over an open channel, and the one-time-pad, which provides perfect secrecy if properly implemented. Shannon's 1949 paper on cryptanalysis (Shannon, 1949) asserted that, with the exception of the perfect protection provided by the one-time-pad, cryptography is based on driving up the workload for the attacker to break the code. The goal is to create computational leverage so that the encryption and decryption processes are relatively easy for those in possession of the key(s) while the same process for those without the key(s) is relatively hard. Proper use of cryptography requires proper key management, which in many cases is the far harder problem. Encryption algorithms which provide the proper leverage are now quite common. (A one-time-pad sketch appears after Defence 27 below.)
Defence 19: overdamped protocols. Protocols that tend to reduce the traffic volume over time are used to prevent positive feedback in network traffic. Examples include protocols that guarantee that the acknowledgement process produces a finite and bounded number of finite packets. The complexity of analyzing a protocol for being overdamped is closely related to
livelock and deadlock analysis, which is probably NP-complete for most current protocols. Nevertheless, protocol analysis is a well-studied field and it is feasible to assure that protocols are overdamped.
Defence 20: temporary blindness. Temporarily ignore certain sets of signals based on the belief that they are unreliable. Examples include temporarily ignoring network nodes that appear to be misbehaving, temporarily disabling accounts for increasing time periods for each failed password attempt, and partitioning a network as an emergency procedure in response to an incident. No detailed analysis of this technique has been performed to date.
Defence 21: fault isolation. When faults are detected, faulty components are isolated from non-faulty components so that the faults do not spread. Examples include the partitioning of corporate networks from the Internet during the Morris Internet virus, the normal partitioning of faulty components of the power grid from the rest of the grid, and the partitioning of the telephone system into critical and non-critical circuits during a national emergency. If designed into a system, fault isolation is feasible. In cases where fault isolation was not previously considered, a lot of effort may be required to implement isolation - primarily because nobody knows the list of links to be cut in order to form the partition, the location of the physical links that need to be severed, or the effect of partitioning on the two subsets of the network.
Defence 22: out-of-range detection. Entries not within acceptable bounds are detected. For example, negative values for entries that should be positive, values for future dates earlier than the present, and pointers to locations outside of the user's addressable space might all be detected. In properly designed systems, legitimate ranges for values should be known and violations should be detectable. Many systems with limited detection capability turn off bounds and similar checking for performance reasons, but few make the implications explicit.
Defence 23: reintegration. Systems or other elements removed from service are put back into service. Examples include the rebooting of systems that have crashed in loosely coupled systems, the use of check-
points and transaction replay for resynchronization in more tightly coupled transaction systems, and the hardware replacement and resynchronization used in redundant computers. Reintegration is relatively straightforward if properly planned beforehand. Otherwise, it may be quite difficult, and sometimes even impossible, to fully reintegrate.
Defence 24: training and awareness. People are made aware of and trained about how to respond to attack techniques. Examples include training and awareness seminars, in-house awareness programmes, and demonstrations wherein users are shown how they might be vulnerable. It's pretty straightforward to make people aware and provide them with defences against most classes of attacks directed toward them.
Defence 25: policies. Organizations provide specifications of what is expected and how governance is to be effected. Examples include a clear usage policy with details of how it is enforced and what happens to those who break the rules, a policy against using illegal copies of software so as to reduce liability, and clear specification of who is responsible for what functions. Making and disseminating policy has organizational and interpersonal complexity, but is easily implemented.
Defence 26: rerouting attacks. Attacks are shunted away from the most critical systems. Examples include honey pots used to lure attackers away from real targets and toward false and planted information, lightning rods used to create attractive targets so that attackers direct their energies away from real targets, shunts used to selectively route attacks around potential targets, and jails used to encapsulate attackers during attacks and gather information about their methods for subsequent analysis and exploitation. All of these techniques are easily implemented at some level.
Defence 27: standards. Specific identified methods are specified to implement protection in the hope that they have been well studied and there is a community investment in their use. Examples include the use of X.509 certificates for interoperable key-managed encrypted data transport, Orange Book B1-approved systems for increased operating system security assurance, and ISO processes for industrial-grade quality assurance. Standards tend to reduce the complexity of meeting assurance require-
ments by structuring them and sharing the work. They also tend to make interoperability easier to attain.
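The one-time-pad mentioned under Defence 18 is simple enough to sketch. A minimal illustration - not from the paper - of its perfect-secrecy construction, where the key is truly random, as long as the message, and never reused:

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """One-time pad: XOR with a fresh random key of equal length.
    With a never-reused key this gives Shannon's perfect secrecy."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"attack at dawn")
assert otp_decrypt(key, ct) == b"attack at dawn"
```

The sketch also makes the key-management burden visible: every message consumes key material of its own length, which is why practical systems accept the workload-based leverage of conventional ciphers instead.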
Defence 28: procedures. Specific identified methods are applied in specific ways to implement protection in the hope that, by uniformly applying these methods, they will be effective. Examples include the use of checklists to verify proper status during preflight, regularly performing backups to assure that information is not inadvertently lost, and a standard method for dealing with bomb threats made over the telephone. Developing and implementing standard procedures is not difficult and can be greatly aided by the use of standard procedure notebooks, checklists, and other similar aides.
Defence 29: auditing. Events of potential security relevance are generated. Examples include audit records made by financial systems and used to identify fraudulent use, sign-in sheets used to identify who has entered and/or left an area and at what times, and computer-generated audit records that track logins, program executions, file accesses, and other resource uses. Generating audit records is not difficult. Care must be taken to secure the audit records from illicit observation and disruption and to prevent audit trails from using excessive time or space.
Defence 30: audit analysis. Audit trails are analyzed in order to detect record sequences indicative of illicit or unexpected activities. Examples include searching for indicators of known attacks that appear in audit records, sorting and thresholding of audit trails to detect patterns of misuse, and the cross-correlation of audit records to detect inconsistencies. Analyzing audit trails can be quite complex, and several NP-complete audit analysis problems appear to have been found.
Defence 31: misuse detection. Indicators of misuse are analyzed in order to detect specific sequences indicative of misuse. Examples include audit-based misuse detection, analysis of system state to detect mis-set values or unauthorized changes, and network-based observation of terminal sessions analyzed to detect known attack sequences. In general, misuse detection appears to be undecidable because it potentially involves detecting all viruses, which is known to be undecidable.
Defence 32: anomaly detection. Patterns of behaviour are
tracked, and changes in these patterns are used to indicate attack. Examples include detection of excessive use, detection of use at unusual hours, and detection of changes in system calls made by user processes. In general, anomaly detection involves a tradeoff between false positives and false negatives (Liepins, 1992); a simple threshold sketch appears after Defence 35 below.
Defence 33: capture and punishment. Once detected, perpetrators are arrested, tried, convicted, and punished. Examples abound - there is an old saying that the best deterrent is swift and certain capture and punishment. There are considerable technical complexities in tracing detected attacks and many legal complexities in punishment, particularly when international issues are added to the mix.
Defence 34: improved morality. People are brought into a level of moral agreement and awareness that prevents them from doing the wrong thing. Examples abound, but situation ethics classes, improved morality training, and ethics classes have been used to effect this goal.
Defence 35: awareness of implications. Making people aware of the penalties and consequences of actions is used to dissuade improper behaviour. Examples include briefings on the people who have been caught and punished, details of what has happened to innocent victims as a result of previous breaches of protection, and personal appearances by people who have been involved in the process.
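The false-positive/false-negative tradeoff behind Defence 32 can be made concrete with a toy profile-based detector; this is an illustrative sketch, not from the paper, and the data are invented:

```python
from statistics import mean, stdev

# Profile-based anomaly detection: flag activity counts that fall
# far outside a user's historical behaviour profile.
history = [12, 9, 14, 11, 10, 13, 12, 8]   # past daily login counts
threshold = mean(history) + 3 * stdev(history)

today = 40
if today > threshold:
    print(f"anomaly: {today} logins exceeds profile threshold {threshold:.1f}")
```

Lowering the multiplier catches more attacks but generates more false alarms; raising it does the reverse. That single knob is the tradeoff Liepins describes.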
Defence 36: periodic reassessment. Protection is reassessed periodically to update methods so as to reflect changes in technology, personnel, and other related factors. Examples include periodic reviews, annual outside audits, and regular testing. Periodic reassessment is easily accomplished.
Defence 37: least privilege. An individual is granted enough privilege to accomplish assigned tasks, but no more. Examples include limitations on control over infrastructure. This is sometimes difficult to achieve in practice because the granularity of control is inadequate to the granularity of functions performed by individuals.
Defence 38: financial situation checking. Financial records of individuals are checked to detect unusual patterns indicative of bribes, extortion, or other impropriety. Examples include periodic examination of banking records, periodic observation to detect excessive or inadequate lifestyle, and credit checks. Privacy and cost issues abound.
Defence 39: good hiring practices. Background checks, reference checks, and other similar hiring practices are used to verify that employees are likely to be loyal, upstanding, honest, and trustworthy. It's hard to do this right.
Defence 40: separation of duties. Responsibilities and privileges should be allocated in such a way as to prevent an individual or a small group of collaborating individuals from inappropriately controlling multiple key aspects of a process and causing unacceptable harm or loss. Examples include limiting need-to-know areas for individuals, eliminating single administrative points of failure, and limiting administrative domains. Analyzing and implementing such controls is not difficult but may involve increased cost.
Defence 41: separation of function. Functions of systems and networks should be separated in such a way as to prevent individual or common-mode faults from producing unacceptable harm or loss. Examples include the use of separate and different media for backups and the division of interlinking functional elements over a set of need-to-know areas. In practice, efficiency often takes precedence over effectiveness, and separation of function is typically placed lower in priority.
Defence 42: multi-person controls. More than one person is required in order to enact a critical function. Examples include controls that require two people to simultaneously turn keys in order to launch a weapon, cryptographic key distribution systems in which keys are shared among many individuals, a subgroup of which is required to participate in order to grant access, and voting systems in which a majority must agree before an action is taken. Although some such systems are quite complex, there is no fundamental complexity with such a system that prohibits its use when the risk warrants it.
Defence 43: multi-version programming. Multiple program versions are independently developed in order to reduce the likelihood of similar faults creating catastrophic failures.
Examples include the redundant software used to control the dynamically unstable Space Shuttle during re-entry, software used in other critical space systems, and software used in safety systems. This technique typically multiplies the cost of implementation by the desired level of redundancy and substantially increases the requirement for detailed specification.
Defence 44: hard-to-guess passwords. Hard-to-guess passwords (something you know) are used to make password guessing difficult. Examples include automatically generated passwords, systems that check passwords when entered, and systems that try to guess passwords in an effort to detect poor selections. There are no difficult complexity issues in this technology, but there are some human limitations that limit the value of this technique.
Defence 45: augmented authentication devices - time or use variant. Something you have or can do is used to augment normal authentication, typically by a time or use variation in the authentication string. Examples include the Secure-ID authentication device and similar cards, algorithmic authentication, and challenge-response cards. Despite several potential vulnerabilities associated with these devices, they are basically encryption devices used for authentication, and the complexity issues are similar to those involved in other areas of cryptography.
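A minimal sketch of the time-variant idea follows; the truncation scheme is in the style later standardized for one-time-password tokens, and the shared secret and 30-second step are illustrative, not from the paper:

```python
import hashlib
import hmac
import struct
import time

def time_token(secret: bytes, step: int = 30) -> str:
    """Time-variant authentication string: HMAC over the current
    30-second window, truncated to a 6-digit code that both sides
    of the authentication can compute but an observer cannot reuse
    for long."""
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

print(time_token(b"shared-secret"))
```

As the text observes, this is really just an encryption device in disguise: the security rests entirely on the keyed function and the secrecy of the shared key.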
Defence 46: biometrics. Unique or nearly-unique characteristics of the individual (something you are) are used to authenticate identity. Examples include fingerprints, voice prints, retinal blood vessel patterns, DNA sequences, and facial characteristics. While the overall field of biometric identification and authentication is quite complex, these methods are normally used to authenticate identity or differentiate between identities from relatively small populations (typically a few thousand individuals), and this process is feasible with fairly common data correlation techniques.
Defence 47: authorization limitation. Authorization is limited in time, space, or otherwise. Examples include limitations on when a user can make certain classes of requests, where requests can be made from, and how much information an individual may attain. Limiting authorization is not complex; however, arbitrarily complex restrictions may be devised, and current techniques do not provide for consistency or completeness checks on complex authorization limitation rules.
Defence 48: security marking and/or labelling. Markings and/or labels are used to identify the sensitivity of information. Examples include electronic marking within computer systems, physical marking on each sheet of paper, and labelling of tapes and other storage media. Labelling is easy to do, but requires systematic procedures that are rigorously followed.
Defence 49: classifying information as to sensitivity. Information is classified as to its sensitivity when it is created. Examples include the standard classification processes in most large businesses and government agencies, the automated classification of information in trusted systems based on the information it is derived from, and the declassification process performed by the government in order to periodically reassess the sensitivity of information. The process of determining proper classification of information is quite complex and normally requires human experts who are trained in the field.
Defence 50: dynamic password change control. Users are required to periodically change authentication information. Examples include periodic password change requirements, use-based change requirements, and protection against reuse of passwords or trivial variations on past passwords. This makes things a bit harder on the users.
Defence 51: secure design. By designing hardware, software, and/or networks more securely, attacks are made more difficult. Examples include the use of proven secure software, verified designs, and inherently secure architectures. It's much harder to make things secure than to make them functional. Nobody knows exactly how much harder, but there is some notion that making something secure might imply verifying the security properties, and this is known to be at least NP-complete.
Defence 52: testing. Tests are used to improve the assurance that protection is effective. Examples include regression testing, functional testing, protection testing, complete tests, and a slew of other techniques (Bishop, 1996; Chung, 1995; Cohen, 1994, 1997; Linn, 1983; Lyu, 1995; Moyer, 1996; Pfleeger, 1989; Puketza, 1996; Sabnani, 1985; Sarikaya, 1982). Testing abounds with complexity issues. For example,
complete tests are almost never feasible, and methods for performing less-than-complete tests have a tendency to leave major missing pieces.
Defence 53: known-attack scanning. Looking for known attack sequences as indicated by state or audit information. Examples include virus scanning, pattern matching in audit trails against known attack signatures, and virus monitors that check each program for known viruses at load time. This class of detection methods is almost always used to identify a finite subset of an infinite class of attacks, and as such is only effective against commonly used copies of known attacks.
Defence 54: accountability. People are held accountable for actions. Examples include vesting ownership in information, tracking activities to individuals, and performance measures of individuals. Accountability can be quite complex, and the issues are further muddied by legal limitations on some organizations.
Defence 55: integrity shells. Cryptographic checksums or secured modification-time information are used to detect changes to information just before the information is interpreted. Examples include several products on the market and experimental systems (Cohen, 1988a, b). The objective of an integrity shell is to drive up the complexity of a corruptive attack by forcing the attacker to make a modification that retains an identical cryptographic checksum. The complexity of attack is then tied to the complexity of breaking the cryptographic system.
Defence 56: fine-grained access control. Access control is provided over increasingly smaller data sets, perhaps down to the bit level. Examples include database field-based access controls, record-by-record controls, and multi-level secure markings of portions of documents. In general, determining accessibility of data analyzed by Turing-capable programs is an NP-complete problem (Denning, 1982), and in some cases it may be undecidable.
Defence 57: change management. Changes are controlled so as to increase the assurance that they are valid. Examples include change control systems for software, roll-back and back-out capabilities for updates, and strong change control processes in use in select critical applications. Proper change control demands that, in the production system, no programming ca-
pability be available. The verification of the propriety of changes is complex and, in general, may be comparable with proof of program correctness, which is well known to be at least NP-complete.
Defence 58: configuration management. Configurations are managed so as to eliminate known vulnerabilities and assure that configurations are in keeping with policies. Examples include many configuration management tools now available for implementing protection policy across a wide range of platforms, menu-based tools used to set up and administer systems, and tools used to configure and manage firewalls. Configuration management normally requires a tool to describe policy controls, a tool to translate policy into the methods available for protection, and a set of tools which implement those controls on each of the controlled machines. In some cases, policy may be incommensurable with implemented protection capabilities; in other cases, proper configuration may require a substantial amount of effort, and the process of changing from one control setting to the next may introduce unresolvable insecurities.
Defence 59: lockouts. Entry points or functions are locked to prevent their use - typically in response to an identified threat. Examples include the use of physical locks to prevent an intruder from escaping, lockout of computer accounts based on incorrect authentication, and lockouts used to prevent people from using systems or network components while under maintenance. The major complexity in lockouts comes when they are used automatically. This leads to the possibility that an enemy will use reflexive control to cause denial of services. Analyzing this class of behaviours is quite complex.
Defence 60: drop boxes and processors. A secured facility is provided for putting valuable information or documents in so that it cannot be removed, or can only be removed by authorized individuals. Examples include the IBM Abyss processor for secure network-based processing, password analysis components that take in passwords and reveal only whether they were right or wrong, and drop boxes used to place money in for night deposit. No major complexity issues arise in this context.
Defence 61: authentication of packets. Packets of information are authenticated. Typical examples include the
use of cryptographic checksums, routing information, and redundancy inherent in data to test the authenticity and consistency of information. Complexity issues are typically peripheral to the concept of authentication of packets, but the use of computational leverage is fundamental to the effectiveness of these techniques against malicious attackers (a keyed-checksum sketch appears after Defence 76 below).
Defence 62: analysis of physical characteristics. Physical characteristics of information, systems, components, or infrastructure are analyzed to detect deviations from expectations. Examples include the use of time-domain reflectometers to detect wiring changes and listening devices, the use of pressure and gas sensors in pipes containing critical wiring to detect breaches of the enclosure, and the analysis of dollar bills to detect forgeries. No fundamental complexity limitations are inherent in the use of physical techniques; however, they may be quite difficult to use in networked environments over which physical processes cannot easily be controlled.
Defence 63: encrypted authentication. Authentication information is encrypted in storage or during transit. Examples include the encryption of plaintext passwords passed over networks, the encryption of authenticating information that uniquely identifies a physical item such as a piece of paper by minor deviations in its surface, and the distribution of authentication over multiple paths to reduce path dependency. The basic issues in encrypted authentication are how to use encryption to improve the effectiveness of the process and what encryption algorithm to use to attain the desired degree of effectiveness.
Defence 64: tempest protection. Protection against the exploitation of electromagnetic emanations. Examples include Faraday cages to reduce emanations in the appropriate frequency ranges, the use of special low-emanation components, power levels, and design, and the filtering of power supplies. Tempest protection is not highly complex, but requires substantial engineering effort.
Defence 65: increased or enhanced perimeters. Perimeter areas are widened or enhanced so as to provide suitable protection. Examples include increased distance, improved strength or quality in the perimeters, and the use of multiple perimeters in combination, such as the combination of a moat and a wall. There may be substantial cost involved in increasing or enhancing perimeters, and it may be hard to definitively determine when the perimeter is strong enough or wide enough.
Defence 66: noise injection. Noise is injected in order to reduce the signal-to-noise ratio and make compromise more difficult. Examples include the creation of deceptions involving false or misleading information, the induction of electrical, sonic, and other forms of noise to reduce the usability of emanations, and the creation of active noise-reducing barriers to eliminate external noises which might be used to cause speech input devices to take external commands or inputs. Noise injection is fairly simple to do, but it may be hard to definitively determine when the noise level is high enough to meet the threat.
Defence 67: jamming. Signals are used to disrupt communications. Examples include desynchronization of cryptographic systems by induction of bit or signal alterations, the introduction of erroneous bits at a rate or distribution such that checksums or CRC codes are unable to provide adequate coverage, and the introduction of noise signals in a particular frequency range so as to reduce effective bandwidth for a particular communications device or system. The basic issue in jamming is related to the efficient use of limited energy. With enough energy, almost any signal can be overwhelmed, but overwhelming signals across a wide frequency band may take more energy than is easily available, and it is normally relatively easy to locate and destroy such a large noise generation system. The complexity then lies in determining the most effective use of resources for jamming.
Defence 68: spread spectrum. The use of frequency-hopping radios or devices to reduce the effect of jamming and complicate listening in on communications. Examples include spread-spectrum portable telephones now available for homes and spread-spectrum radios used in the military. The basic issues in spread spectrum are how to rapidly change frequencies and how to schedule frequency sequences so as to make jamming and listening difficult. The former is an electronics problem that has been largely solved even for low-cost units, while the latter is a cryptographic problem equivalent to other cryptographic problems.
Defence 69: path diversity. Multiple paths are used to
reduce the dependency on a single route of communications or transportation. Examples include the use of different motor routes selected at random near the time of use to avoid hijacking of shipments, the use of multiple network paths to assure reliability and increase the cost of jamming or bugging, and the use of multiple suppliers and multiple apparent delivery names and addresses to reduce the effect of Trojan horses placed in hardware. The basic issues in path diversity are how to rapidly change paths and how to schedule path sequences so as to make jamming and listening difficult. The former is a logistics problem that has been solved in some cases, and the latter is a cryptographic problem equivalent to other cryptographic problems.
Defence 70: quad/tri/multi-angulation. Multiple signals from different views are coordinated in order to uniquely locate a source. Examples include signal triangulation to locate ships on Earth (a near-plane), quadrangulation used in the global positioning system (GPS) to locate a point in 3-space (plus time), and the use of multiple audit trails in a network infrastructure to locate the source of a signal or malicious bit-stream. While Euclidean spatial location is fairly straightforward, additional mapping capability is required in order to use this technique in cyberspace, and the number of observation points and complexity of location has not been definitively published. This would appear to be closely related to mathematical results on finding cuts that partition a network.
Defence 71: Faraday boxes. Conductive enclosures are used to eliminate emanations and induced signals from an enclosed environment. Examples include the use of tempest shielding to reduce emanations from video display units, the use of wire mesh enclosures to eliminate emanations from small installations and computer rooms, and the use of metallic enclosures to prevent electromagnetic pulse weapons from disabling enclosed systems. The only complexity involved in the design of Faraday cages comes from determining the frequency range to be protected and calculating the size of mesh required to eliminate the relevant signals. Some care must be taken to assure that the shielding is properly in place and fully effective.
Defence 72: detailed audit. Auditors examine an information system to provide an opinion on the suitability of the protective measures for effecting the specified controls. Examples include the use of internal auditors and external auditors. Audits tend to be quite complex to perform and require that the auditor have knowledge and skills suitable to making these determinations.
Defence 73: trunk access restriction. High-bandwidth or privileged access ports and lines have restricted access. Examples include protection of trunk telephone lines from access by calls originating on outside lines, limiting access to high-bandwidth infrastructure elements, and physically securing an Ethernet within a building. This appears to be of minimal complexity but may involve substantial cost or require special expertise.
Defence 74: information flow controls. Controls that limit the flow of information. Typical examples include mandatory access controls (MAC) used in trusted systems, router configuration, which is often used to keep data passing between members of an organization within the organization, and information flow controls based on work models. Flow control has been studied in some detail and there are some published results on the complexity of select aspects (Bell, 1973; Biba, 1977; Cohen, 1985b, 1986; Denning, 1975, 1976).
Defence 75: disconnect maintenance access. Maintenance ports are disconnected or controlled so as to effectively eliminate unauthorized access. Examples include the proper use of cryptographic modems for covering maintenance ports, the connection of maintenance ports only when maintenance is underway and properly supervised, and the elimination of non-local use of maintenance functions. This is a relatively simple matter except in large networks or locations where physical access is difficult, expensive, or impossible.
Defence 76: effective protection mindset. People's mindset is orientated toward protection issues. Examples include organizations with effective protection awareness programmes, organizations with a strong tradition and culture of protection, and individuals with a passion for information protection. Dealing with people is always a complex business, but fostering a protection-orientated attitude is not usually difficult if management is supportive.
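Several of the preceding classes - Defence 61 in particular - rest on keyed cryptographic checksums. A minimal illustrative sketch, using Python's standard hmac module (the key handling and packet format are assumptions, not from the paper):

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)   # shared between sender and receiver

def seal(payload: bytes) -> bytes:
    """Append a keyed checksum so tampering in transit is detectable."""
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def verify(packet: bytes) -> bytes:
    """Recompute the checksum; reject the packet on any mismatch."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet authentication failed")
    return payload

packet = seal(b"route update: link 7 down")
assert verify(packet) == b"route update: link 7 down"
```

The computational leverage the text describes is visible here: sender and receiver do one cheap keyed hash each, while a forger without the key faces the full strength of the underlying cryptosystem.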
Defence 77: physical switches or shields on equipment and devices. Peripherals such as microphones and cameras are provided with switches capable of definitively disabling their operation, while disks are physically write-locked and equipment may use smoke and dust filters. Examples include covers over video cameras, switches that electrically or mechanically disconnect microphones, and keyed disconnects for keyboards, mice, and displays. This is not a complex issue to address, but substantial costs may be involved if implemented through a retrofit.
Defence 78: trusted repair teams. Trusted groups of individuals working in teams perform repairs so as to mitigate the risks of the repair process introducing corruptions, leaks, or denial. Examples include cleared repair people and specially contracted cleared repair facilities. People-related issues dominate this protection method.
Defence 79: inventory control. Inventory is tracked at a granularity level sufficient to be able to identify the relevant history of the relevant parts. Examples include physical inventory control systems, inventory audits, and online inventory control over information. Inventory control is not a very complex process to do well, but it requires a consistent level of effort and a systematic approach.
Defence 80: secure distribution. Physical or informational goods are delivered in such a way as to assure security properties. Examples include cryptographic checksums used to seal software distributed over public communications systems, special bonded couriers using secure carrying cases, and buying through phoney or friendly corporations so as to conceal the real destination of a shipment. Some aspects of secure distribution, such as the elimination of covert channels related to bandwidth utilization, can be quite complex - in some cases as complex as cryptography in general - while other aspects are relatively easy to do at low cost.
Defence 81: secure key management. The management of keys in such a way as to retain the security properties those keys are intended to assure. Examples include physical key audits, anti-duplication measures, periodic rekeying, public-key automated key generation and distribution systems, and analysis of physical traits of keys for integrity verification. Key management is one of the least understood and hardest
problem areas in cryptography today, and has been the cause of many cryptosystem failures - perhaps the most widely publicized being the inadequate key management by Germany during World War II that led to rapid decoding of Enigma cyphers. Physical key management is equally daunting and has led to many lock and key design schemes. To date, as far as can be determined from the available literature, no foolproof key management scheme has been devised.
Defence 82: locks. Locks are used to prevent unauthorized entry while permitting authorized entry. Examples include mechanical locks, magnetic locks, and electronic locks, password checkers and similar informational authentication devices, and time-based, use-based, environment-based, or event-based locks. Lock technology has been evolving for thousands of years, and the ideas and realizations vary from the simple to the sublime. In general, locking mechanisms can be as complex as general informational and physical properties.
Defence 83: secure or trusted channels. Secure channels are specially devised communications channels between parties that provide a high degree of assurance that communications security properties are in place. Examples include trusted channels between a user and an operating system required for the implementation of trusted computing bases, cryptographically secured communication channels used in a wide range of circumstances, and physically secured communications paths controlled by gas-filled tubes which lose their communications properties if altered. In general, trusted channels can be as complex as general informational and physical properties.
Defence 84: limited function. Function is limited so as to prevent exploitation of Turing capability or similar expansion of intended function. Examples include limited-function interfaces to systems, special-purpose devices which perform only specific functions dictated by their designers, and special devices designed to fail under attempts to use them for other than their intended purpose. It has been shown that systems with even a very limited set of functions can be used to implement general-purpose systems (Turing, 1936). For example, gates can be used to create digital electronic function, and, subject to tolerance requirements, general-purpose physical machines also exist (Cohen, 1994d).
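A minimal sketch of a limited-function interface in the sense of Defence 84 - the operation names and handlers are invented for illustration:

```python
# Only a fixed table of operations is reachable; there is no general
# syntax, expression language, or escape to Turing-capable behaviour.
OPERATIONS = {
    "balance": lambda acct: f"balance of {acct}",
    "history": lambda acct: f"history of {acct}",
}

def dispatch(command: str, argument: str) -> str:
    handler = OPERATIONS.get(command)
    if handler is None:
        return "refused: unknown operation"
    return handler(argument)

print(dispatch("balance", "acct-42"))
print(dispatch("eval", "os.system('...')"))  # refused: not in the table
```

The Turing caveat in the text is the crucial limit: if any reachable operation can be composed into general computation, the interface is no longer limited in the intended sense.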
Defence 85: limited sharing. Sharing of information is limited so as to prevent unfettered information flow. Examples include the Bell-LaPadula security model, the Biba integrity model, Denning's lattice models, and Cohen's POset model (Bell, 1973; Biba, 1977; Cohen, 1987b; Denning, 1975). Effectively limiting sharing with other than purely physical means has proven to be a highly complex issue. For example, more than 20 years of effort has been put forth in the design of trusted systems to try to achieve this goal, and it appears that another 20 years will be required before the goal is actually realized.
Defence 86: limited transitivity. Limiting the ability of information to flow more than a certain distance (perhaps in hops) from its original source. Examples include special operating systems that track transitive information flow and limit it and technologies that can only be shared a few times before losing their operability. This is not a highly complex area, but there is a tendency for information to rapidly reach its limit of transitivity, and there are restrictions on the granularity at which control can be done without producing high mathematical complexity in the time and space used to track transitive flow (Cohen, 1984, 1994b).
Defence 87: disable unsafe features. Features known to be unsafe in a particular environment are disabled. Examples include disabling network file systems operating over open networks, disabling guest accounts, and disabling dial-in modems connected to maintenance ports except when maintenance is required. This is not difficult to do, but often the feature that is unsafe is used, and this introduces a risk/benefit tradeoff. It is also common to find new vulnerabilities, and if this policy is followed, it may result in numerous changes in how systems are used and thus create operational problems that make the use of many features infeasible.
Defence 88: authenticated information. Information is authenticated as to source, content, and/or other factors of import. Examples include authenticated electronic mail systems based on public-key cryptography, source-authenticated secure channels based on authenticating information related to individuals, and the use of third-party certification to authenticate information. The redundancy required for authentication and the complexity of high-assurance
authentication combine to limit the effectiveness of this method; however, there is an increasing trend towards its use because it is relatively efficient and reasonably easy to do with limited assurance.
Defence 89: integrity checking. The integrity of information, people, systems, and devices is verified. Examples include detailed analysis of code, change detection with cryptographic checksums (sketched below), in-depth testing, syntax and consistency verification, and verification of information through independent means. In general, the integrity checking problem can be quite complex; however, there are many useful systems that are quite efficient and cost effective. There is no limit to the extent to which integrity can be checked, and the question of how certain we are based on which checks we have done is open. As an apparently fundamental limitation, information used to differentiate between two otherwise equivalent things can only be verified by independent means.
Defence 90: infrastructure-wide digging hotlines. Toll-free telephone services are provided at the infrastructure level to provide information on the location of cables, gas lines, and other infrastructure elements prior to digging. This requires tracking all infrastructure elements and introduces a substantial risk that gathered and co-located infrastructure information will be exploited by attackers.
Defence 91: conservative resource allocation. Resources are allocated so as to assure that anything that is started can be completed with the allocated resources. Examples include the use of over-specification to assure operational compliance, preallocation of all required memory for processing in computer operating systems, and static allocation rather than dynamic allocation in telecommunications systems. In general, the allocation problem is at least NP-complete. As a far more important limitation, most systems are designed to handle 90th to 99th percentile load conditions, but the cost of handling worst-case load is normally not justified. In such systems, stress-induced failures are almost certain to be possible.
Defence 92: fire suppression equipment. Special equipment is put in place to suppress fires. Examples include Halon gas, fire extinguishers, and water reserves with dumping capability. This is off-the-shelf equipment and its limitations and properties are well understood by fire departments worldwide.
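The change-detection technique behind Defences 55 and 89 can be sketched in a few lines; the baseline file name and workflow here are illustrative assumptions, not the products the paper cites:

```python
import hashlib
import json

BASELINE = "baseline.json"   # assumed location; itself needs protecting

def digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record(paths):
    """Record a cryptographic checksum for each protected file."""
    with open(BASELINE, "w") as f:
        json.dump({p: digest(p) for p in paths}, f)

def changed(paths):
    """Re-verify just before the files are trusted or interpreted."""
    with open(BASELINE) as f:
        baseline = json.load(f)
    return [p for p in paths if baseline.get(p) != digest(p)]
```

As the integrity-shell discussion notes, the attacker is thereby forced to produce a modification with an identical checksum, tying the cost of corruption to the cost of breaking the hash; the baseline itself must of course be stored where the attacker cannot rewrite it.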
Defence 93: fire doors, fire walls, asbestos suits, and similar fire-limiting items. A variety of physical devices and equipment provides effective limitation of fire's effect on buildings, facilities, equipment, and people. This is off-the-shelf equipment and its limitations and properties are well understood by fire departments worldwide.

Defence 94: concealed services. Concealment is used to provide services only to those who know how to access them. Examples include secret hallways with trick doors, menu items that don't appear on the menu, and programs that don't appear in listings but operate when properly called. This type of concealment is often referred to as security through obscurity because the effectiveness of the protection rests on a lack of knowledge by attackers. It is generally held to be quite weak except in special circumstances, because it is hard to keep a secret about such things, because such things are often found by clever attackers, and because insiders who have the special knowledge are often the sources of attacks.

Defence 95: traps. Traps are devices used to capture an intruder or retain their interest while additional information about them is attained. Examples include man-traps used to capture people trying to illicitly enter a facility, computer traps used to entice attackers to remain while they are traced, and special physical barriers designed to entrap particular sorts of devices used to bypass controls. In general, no theory of traps has been devised; however, it is often easy to set traps and easy to evade them, and, in legal cases, entrapment may void any attempt at prosecution.

Defence 96: content checking. Content is verified to assure that it is within normal specifications or to detect particular content. Examples include verification of parameter values on operating system calls, examination of inbound shipments, and real-time detection of known attack patterns (a small sketch appears after Defence 100 below). In a limited-function system, content checking is limited by the ability to differentiate between correct and incorrect values within the valid input range, while in systems with unlimited function, most of the key things we commonly wish to verify about content are undecidable.
Defence 97: trusted system technologies. Trusted systems are systems that have been certified to meet some set of standards with regard to their design, implementation, and operation. Examples include systems and components approved by a certification body under a published criteria, and standard implementations used for a particular purpose within a particular organization. While these systems tend to have more effective protection than systems that are less standard and not certified, the aura of legitimacy is sometimes unjustified. The certification process introduces substantial complexities and delays, while the advantage of standardization comes from an economy of scale in not having to independently verify the already-certified properties of a trusted system for each application.

Defence 98: perception management. People are caused to believe things that forward the goal. Examples include the appearance of tight security, which tends to reduce the number of attacks; creating the perception that people who attack a particular system will be caught, in order to deter attack; and making people believe that a particularly valuable target is not valuable, in order to reduce the number of attempts to attack it. It is always tricky trying to fool opponents into doing what you want them to do.

Defence 99: deceptions. Typical deceptions include concealment, camouflage, false and planted information, ruses, displays, demonstrations, feints, lies, and insight (Dunnigan, 1995). Examples include facades used to misdirect attackers as to the content of a system, false claims that a facility or system is watched by law enforcement authorities, and Trojan horses planted in software that is downloaded from a site. Deceptions are one of the most interesting areas of information protection, but little has been done on the specifics of the complexity of carrying out deceptions. Some work has been done on detecting imperfect deceptions.

Defence 100: retaining confidentiality of security status information. Information on the methods used for protection can be protected to make successful and undetected attack more difficult. Examples include not revealing specific weaknesses in specific systems, keeping information on where items are purchased confidential, and not detailing internal procedures to outsiders. Many refer to this practice as security through obscurity. There is a tendency to use weaker protection techniques than are appropriate under the assumption that nobody will be able to figure them out. History shows this to be a poor assumption.
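The parameter-value checking mentioned under Defence 96 can be illustrated as follows; the specification table, parameter names, and ranges are hypothetical, and a real system would have to check every interface this way.

```python
# Hypothetical content-checking sketch (Defence 96): parameters are
# validated against explicit specifications before a privileged
# operation is attempted. The specification table is illustrative.
SPEC = {
    "block_size": range(512, 65537),            # 512-65536 bytes permitted
    "mode":       {"read", "write", "append"},  # permitted operation modes
}

def check_request(params):
    """Reject any request whose values fall outside the specification."""
    for name, value in params.items():
        allowed = SPEC.get(name)
        if allowed is None:
            raise ValueError(f"unknown parameter: {name}")
        if value not in allowed:
            raise ValueError(f"{name}={value!r} outside specification")
    return True

check_request({"block_size": 4096, "mode": "read"})    # passes
# check_request({"block_size": 4096, "mode": "exec"})  # would raise ValueError
```

The sketch illustrates the limitation noted above: the table can only encode what is specifiable in advance, and for general-purpose content the corresponding decision problems are undecidable.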
Defence 101: regular review of protection measures. Protection measures are regularly reviewed to assure that they continue to meet requirements. Examples include internal and external protection audits, periodic administrative reviews, and protection posture assessments. Regular review is simple to do, but the quality of the results depends heavily on the reviewers.

Defence 102: independent computer and tool use by auditors. Auditors create independent systems that permit them to review the content of a system under review with reduced dependency on the hardware and software in the system under review. Examples include removing disks from the system under review for review on a separate computer, booting systems from auditor-provided start-up media in order to assure against operating system subversion, and review of external information fed into and generated by the system under review. It can be quite difficult to attain a high degree of assurance against all hardware and software subversions, particularly in the cases of storage media and deliberately corrupted hardware devices.

Defence 103: standby equipment. Special redundant devices are used to assure against systemic faults. Examples include uninterruptable power supplies, hot and cold standby systems, off-site recovery facilities, and n-modular redundancy. This is common practice and is not difficult to do.

Defence 104: protection of data used in system testing. Data used to test systems is protected to prevent attackers from understanding the limits of the tests performed and to assure that the test data doesn't corrupt the system under test. This is approximately as complex as protecting live data, except that test systems can often be physically isolated from the rest of the environment.

Defence 105: Chinese walls. Combinations of access are limited so as to limit the combination of information available to an individual. Examples include the Chinese walls used in the financial industries to prevent traders from gaining access to both future planning information and share value information, need-to-know separation in shared computing bases, and access controls in some firewall environments. Chinese walls can become quite complex if they are being used to enforce fine-grained access controls and if they are expected to deal with time transitivity of information flow.
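A minimal sketch of the Chinese wall idea of Defence 105, loosely in the spirit of the Brewer and Nash policy; the conflict classes and names are invented, and a real implementation must also address sanitized data and the time transitivity just mentioned.

```python
# Hypothetical Chinese wall sketch (Defence 105): once a user accesses
# one dataset in a conflict-of-interest class, other datasets in the
# same class become off limits to that user.
CONFLICT_CLASSES = {
    "oil":   {"PetroA", "PetroB"},
    "banks": {"BankX", "BankY"},
}

accessed = {}  # user -> set of datasets already accessed

def may_access(user, dataset):
    history = accessed.setdefault(user, set())
    for members in CONFLICT_CLASSES.values():
        if dataset in members:
            # Deny if the user already touched a *different* member
            # of the same conflict class.
            if any(prior in members and prior != dataset for prior in history):
                return False
    history.add(dataset)
    return True

print(may_access("alice", "PetroA"))  # True: first access in the class
print(may_access("alice", "PetroB"))  # False: conflicts with PetroA
print(may_access("alice", "BankX"))   # True: different conflict class
```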
Defence 106: tracking, correlation, and analysis of incident reporting and response information. Incident reports and mitigating actions are collected, reported, correlated, and analyzed. Examples include analysis for patterns of abuse, detection of changes in threat profiles, detection of low-rate attacks, detection of increased overall attack levels, improvement of response performance based on feedback from the analysis process, and the collection and reuse of diagnostic and repair information. This is not a complex thing to do, but it is rarely done well. In general, the analysis of the information may be quite complex depending on what is to be derived from it.

Defence 107: minimizing copies of sensitive information. The number of copies of sensitive information is limited to reduce the opportunity for its leakage. Examples include restrictions on the handling of sensitive information in most government and industry facilities. There is a tradeoff between the advantage of retaining fewer copies to prevent leakage and having enough copies to assure availability.

Defence 108: numbering and tracking all sensitive information. Each copy of sensitive information is numbered, catalogued, and tracked to assure that no illicit copies are accidentally made, and to add identifying information to each copy enabling it to be traced to the individual responsible for its handling. This is a bit expensive but is not complex.

Defence 109: independent control of audit information. Audit information is not kept in the control of the systems to which the information applies. For example, audit information may be immediately transmitted to a separate computer for storage and analysis, a trusted computing base may separate the information from the areas that generate it, or audit information may be written to a write-once output device to assure against subsequent corruption or destruction. This is not complex to accomplish with a reasonable degree of assurance; however, under high load conditions, such mechanisms often fail.

Defence 110: low building profile. Facilities containing vital systems or information are kept in low-profile buildings and locations to avoid undue interest or attention. Examples include computer centres located in remote areas, undistinctive and unmarked buildings used for backup storage, and block houses used for storing high-valued information. This is not an inherently complex thing to do, but in many companies computer rooms are showcases, and this makes low profiles hard to maintain.
Defence 111: minimize traffic in work areas. Traffic is limited to reduce the opportunity for diversions. Examples include the separation of areas by partitions, separation of workers on different tasks into different areas, and designing floor and building areas so as to control flow. This is not a highly complex protective measure.

Defence 112: place equipment and supplies out of harm's way. Physical isolation of key equipment and proper placement of all information and processing equipment are used to reduce accidental and intentional harm. This is not hard to do.

Defence 113: universal use of badges. Identifiable badges are worn by every authorized person in the facility. Examples include electromagnetic badges that automatically open locks to areas, smartcard badges that include electronic authentication information, and the more common physical badges which use colour, format, seals, and similar authenticating information to assert authenticity. Badging technology can be quite complex depending on the control requirements the badges are intended to fulfill.

Defence 114: control physical access. Restrictions are placed on who can get to what physical locations when. Examples include gates requiring identification for entry, the use of locks on wire closets, and secure areas for storage of material. Although the complexity of this area is not extreme in the technical sense, there is a substantial body of knowledge on physical access control.

Defence 115: separation of equipment so as to limit damage from local events. Physical distance and separation are used to prevent events affecting one piece of information or technology from also affecting other related ones. Examples include off-site storage facilities, physically distributed processing capabilities, and hot sites for operation during natural disasters and similar events. Physical separation is often mandated by regulation, especially between classified and unclassified computers. Separation can be quite a complex issue because emanations can travel over space and can be detected by a variety of means. Within a secured area, for example, Sandia is required to maintain appropriate separation between classified computing equipment and unclassified equipment, phone lines, etc. The required separations are: 6 inches between classified equipment and unclassified equipment, 6 inches between classified equipment and unclassified cables, and 2 inches between classified cables and unclassified cables. Cables include monitor, keyboard, and printer cables, phone lines, etc., but not power cables. These separations are valid for classified systems up to and including SRD.

Defence 116: inspection of incoming and outgoing materials. All material is examined when crossing some boundaries to assure that corruption or leakage is not taking place. Examples include examination of incoming hardware for Trojan horses, bomb detection, and detection of improperly cleansed material being sent out. Because information can be encoded in an unlimited number of ways, it can be arbitrarily complex to determine whether inbound or outbound information is inappropriate. This is equivalent to the general computer virus and Trojan horse detection problems.

Defence 117: suppression of incomplete or obsolete data. Incomplete or obsolete data is suppressed to prevent its accidental use or intentional abuse. Examples include databases that remove old records, record-keeping processes that expunge data after statutory and business requirements no longer necessitate its availability, and the destruction of old criminal records by government when appropriate. Tracking obsolescence is rarely done well; however, if tracking is properly done, suppression is not complex.

Defence 118: document and information control procedures. Whenever sensitive information is sent or received, a procedure is used to confirm the action with the other party (a sketch follows Defence 119 below). Examples include receipts for information, non-repudiation services in information networks, and advance notice of inbound transmissions. Non-repudiation is a fairly complex issue in information systems, typically involving complex cryptographic protocols. In general, it is as complex an issue as cryptography in general.

Defence 119: individual accountability for all assets. Each information asset is identified as to ownership and stewardship, and the owner and steward are held accountable for those assets. Examples include document tracking, owner-based access control decisions, and mandatory actions taken in cases where individuals fail to carry out their responsibility. This is not complex to do, but may require substantial overheads.
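The receipt mechanism of Defence 118 can be sketched with digital signatures; this fragment assumes the third-party Python "cryptography" package, and the message format is hypothetical. A deployable non-repudiation service would further require certified keys, trusted timestamps, and agreed dispute procedures.

```python
# Hypothetical signed-receipt sketch (Defence 118), using the third-party
# "cryptography" package. This shows only the signature step of a
# non-repudiation protocol; key certification is assumed out of band.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The recipient holds a signing key; the sender knows its public half.
recipient_key = Ed25519PrivateKey.generate()
recipient_pub = recipient_key.public_key()

document = b"Q3 audit report, revision 4"

# On receipt, the recipient signs a receipt binding it to the document.
receipt = b"received:" + document
signature = recipient_key.sign(receipt)

# The sender verifies the receipt; verify() raises on any mismatch.
try:
    recipient_pub.verify(signature, receipt)
    print("receipt verified: recipient cannot plausibly deny delivery")
except InvalidSignature:
    print("receipt invalid")
```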
Defence 120: clear line of responsibility for protection. Responsibility for protection is clear throughout the organization. Examples include executive-level responsibility for overall information asset protection, delineated responsibility at every level of the organization, and spelled-out responsibilities for individual accountability. The challenges of organizational responsibility are substantial because they involve getting people to understand information protection at a level appropriate to their job functions.

Defence 121: program change logs. Changes in programs are accounted for in detail. Examples include automated change tracking systems, version control systems, and full-blown change control systems. Change control is often too expensive to be justified and is commonly underestimated. Sound change control is hard to attain and approximately doubles the software costs of a system (Cohen, 1994b).

Defence 122: protection of names of resources. Names and other identifying information are kept confidential. Examples include keeping the employee phone book and organizational chart from being released to those who might abuse their content, keeping filenames and machine names containing confidential data confidential to make them harder to find, and keeping the names of clients, projects, and other operational information confidential to prevent exploitation. Operations security is a complex field, in large part because of the effects of data aggregation.

Defence 123: compliance with laws and regulations. All applicable laws and regulations are followed. Examples include software copyright laws, anti-trust laws, and regulations applying to the separation of information between traders and investment advisors. The complexity of the legal system is legend.

Defence 124: legal agreements. Binding legal agreements are used to provide a means of recovering from losses due to negligence or non-fulfilment. Examples include insurance contracts relating to information assets, employee contracts that specify the employee's responsibilities and the punishments for non-compliance, and vendor and outsourcing agreements specifying requirements on supplied resources and responsibility for failures in compliance. The complexity of the legal system is legend.

Defence 125: time, location, function, and other similar access limitations. Access is restricted based on time, location, function, and other characterizable parameters (a sketch follows Defence 126 below). Examples include time locks on bank vaults; access controls based on IP address, terminal port, or combinations of these; and barriers which require some time in order to be bypassed. The only real complexity issue is in identifying the proper sets of controls over the conditions for access and non-access for each individual relating to each system.

Defence 126: multidisciplinary principle (GASSP). Measures, practices, and procedures for the security of information systems should address all relevant considerations and viewpoints, including technical, administrative, organizational, operational, commercial, educational, and legal (GASSP, 1995). Security is achieved by the combined efforts of data owners, custodians, and security personnel. Essential properties of security cannot be built in and preserved without other disciplines such as configuration management and quality assurance. Decisions made with due consideration of all relevant viewpoints will be better decisions and will receive better acceptance. If all perspectives are represented when employing the least-privilege concept, the potential for accidental exclusion of a needed capability will be reduced. This principle also acknowledges that information systems are used for different purposes; consequently, the principles will be interpreted over a wide range of potential implementations. Groups will have differing perspectives, differing requirements, and differing resources, to be consulted and combined to produce an optimal level of security for their information systems.
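Defence 125's conditional access can be sketched as a simple predicate over time and source address; the network, working hours, and addresses below are illustrative assumptions only.

```python
# Hypothetical sketch of time- and location-based access limitation
# (Defence 125). The permitted hours and address ranges are illustrative.
from datetime import datetime
from ipaddress import ip_address, ip_network

ALLOWED_NETWORK = ip_network("192.0.2.0/24")  # documentation address range
WORK_HOURS = range(8, 18)                     # 08:00-17:59 local time

def access_permitted(source_ip, when=None):
    """Grant access only from the allowed network during working hours."""
    when = when or datetime.now()
    if when.hour not in WORK_HOURS:
        return False
    if ip_address(source_ip) not in ALLOWED_NETWORK:
        return False
    return True

print(access_permitted("192.0.2.17", datetime(1997, 3, 4, 10, 30)))   # True
print(access_permitted("192.0.2.17", datetime(1997, 3, 4, 23, 0)))    # False: after hours
print(access_permitted("203.0.113.5", datetime(1997, 3, 4, 10, 30)))  # False: wrong network
```

As the text observes, the predicate itself is trivial; the hard part is choosing the right conditions for each individual and system.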
Defence 127: integration principle (GASSP). Measures, practices, and procedures for the security of information systems should be coordinated and integrated with each other and with other measures, practices, and procedures of the organization so as to create a coherent system of security (GASSP, 1995). The most
effective safeguards are not recommended individually, but rather are considered as components of an integrated system of controls. Using these strategies, an information security professional may prescribe preferred and alternative responses to each threat based on the protection needed or the budget available. This model also allows the developer to attempt to place controls at the last point before the loss becomes unacceptable. Since developers will never have true closure on specification or testing, this model prompts the information security professional to provide layers of related safeguards for significant threats; thus, if one control is compromised, other controls provide a safety net to limit or prevent the loss. To be effective, controls should be applied universally. For example, if only visitors are required to wear badges, then a visitor can look like an employee simply by removing the badge.

Defence 128: timeliness principle (GASSP). Public and private parties, at both national and international levels, should act in a timely, coordinated manner to prevent and to respond to breaches of the security of information systems (GASSP, 1995). Due to the interconnected and transborder nature of information systems and the potential for damage to systems to occur rapidly, organizations may need to act together swiftly to meet challenges to the security of information systems. In addition, international and many national bodies require organizations to respond in a timely manner to requests by individuals for corrections of privacy data. This principle recognizes the need for the public and private sectors to establish mechanisms and procedures for rapid and effective incident reporting, handling, and response. It also recognizes the need for information security principles to use current, certifiable threat and vulnerability information when making risk decisions, and current, certifiable safeguard implementation and availability information when making risk reduction decisions. For example, an information system may have a requirement for rapid and effective incident reporting, handling, and response; this may take the form of time limits for reset and recovery after a failure or disaster. Each component of a continuity plan, continuity of operations plan, and disaster recovery plan should have timeliness as a criterion. These criteria should include provisions for the impact the event (e.g. a disaster) may have on resource availability and the ability to respond in a timely manner.

Defence 129: democracy principle (GASSP). The security of an information system should be weighed against the rights of users and other individuals affected by the system (GASSP, 1995). It is important that the security of information systems is compatible with the legitimate use and flow of data and information in the context of the host society. It is appropriate that the nature and amount of data that can be collected is balanced by the nature and amount of data that should be collected. It is also important that the accuracy of collected data is assured in accordance with the amount of damage that may occur due to its corruption. For example, individuals' privacy should be protected against the power of computer matching. Public and private information should be explicitly identified. Organization policy on monitoring information systems should be documented to limit organizational liability, to reduce the potential for abuse, and to permit prosecution when abuse is detected. The monitoring of information and individuals should be performed within a system of internal controls to prevent abuse.

Note: the authority for the following candidate principles has not been established by committee consensus, nor are they derived from the OECD principles. These principles are submitted for consideration as additional pervasive principles.

Defence 130: internal control principle (GASSP). Information security forms the core of an organization's information internal control system (GASSP, 1995). This principle originated in the financial arena but has universal applicability. As an internal control system, information security organizations and safeguards should meet the standards applied to other internal control systems. The internal control standards define the minimum level of quality acceptable for internal control systems in operation and constitute the criteria against which systems are to be evaluated. These internal control standards apply to all operations and administrative functions but are not intended to limit or interfere with duly granted authority related to the development of legislation, rule making, or other discretionary policy making in an organization or agency.
Defence 131: adversary principle (GASSP). Controls, security strategies, architectures, policies, standards, procedures, and guidelines should be developed and implemented in anticipation of attack from intelligent, rational, and irrational adversaries with harmful intent, or harm from negligent or accidental actions (GASSP, 1995). Natural hazards may strike all susceptible assets. Adversaries will threaten systems according to their own objectives. Information security professionals, by anticipating the objectives of potential adversaries and defending against those objectives, will be more successful in preserving the integrity of information. This principle is also the basis for the practice of assuming that any system or interface that is not controlled has been compromised.

Defence 132: continuity principle (GASSP). Information security professionals should identify their organization's needs for continuity of operations and should prepare the organization and its information systems accordingly (GASSP, 1995). Organizations' needs for continuity may reflect legal, regulatory, or financial obligations of the organization, organizational goodwill, or obligations to customers, boards of directors, and owners. Understanding the organization's continuity requirements will guide information security professionals in developing the information security response to business interruption or disaster. The objectives of this principle are to ensure the continued operation of the organization, to minimize recovery time in response to business interruption or disaster, and to fulfill relevant requirements. The continuity principle may be applied through three basic concepts: organizational recovery, continuity of operations, and end-user contingent operations. Organizational recovery is invoked whenever a primary operation site is no longer capable of sustaining operations. Continuity of operations is invoked when operations can continue at the primary site but must respond to less-than-desirable circumstances (such as resource limitations, environmental hazards, or hardware or software failures). End-user contingent operations are invoked in both organizational recovery and continuity of operations.

Defence 133: simplicity principle (GASSP). Information security professionals should favour small and simple safeguards over large and complex safeguards (GASSP, 1995). Simple safeguards can be thoroughly
understood and tested. Vulnerabilities can be more easily detected. Small, simple safeguards are easier to protect than large, complex ones, and it is easier to gain user acceptance of a small, simple safeguard than of a large, complex one.

Defence 134: periods processing and colour changes. Processing at different levels of security is done at different periods of time, with a colour change operation used to remove any information used during one period before the next period of processing begins. It is rather difficult to thoroughly and with certainty eliminate all crosstalk between processing on the same hardware at different periods of time.

Defence 135: alarms. Alarms are used to indicate detected intrusions (a sketch follows Defence 136 below). Examples include automated software-based intrusion detection systems, alarm systems based on physical sensors reporting to central control rooms, and systems in which each sensor alarms on detection. The technology for detection is quite broad and there are many complexity issues that have not been thoroughly researched (Cohen, 1996d).

Defence 136: insurance. Risk can be mitigated by paying others to assume those risks. Insurance is a rather complex issue involving actuarial tables, decisions about whether to self-insure, and a heavy risk management aspect.
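A minimal sketch of the software-based alarm case of Defence 135: a sliding-window threshold over failed login events. The threshold and window are illustrative assumptions, and real intrusion detection involves far richer models.

```python
# Hypothetical alarm sketch (Defence 135): raise an alarm when the number
# of failed login events within a sliding window exceeds a threshold.
from collections import deque

WINDOW_SECONDS = 60
THRESHOLD = 5

failures = deque()  # timestamps (seconds) of recent failed logins

def record_failure(timestamp):
    """Record a failed login; return True if the alarm should sound."""
    failures.append(timestamp)
    # Discard events that have fallen out of the sliding window.
    while failures and failures[0] <= timestamp - WINDOW_SECONDS:
        failures.popleft()
    return len(failures) >= THRESHOLD

# Example: five failures in ten seconds trips the alarm on the fifth.
for t in (0, 2, 4, 6, 8):
    if record_failure(t):
        print("ALARM: possible password-guessing attack at t =", t)
```

Even this trivial detector exhibits the open complexity issues noted above: the threshold trades false alarms against missed low-rate attacks.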
Defence 137: choice of location. Location is often related to risks. Examples include the selection of a neighbourhood and its impact on crime, the selection of a geographical location and its relation to natural disasters, and the selection of a country and its relation to political unrest. Statistical data is available on physical locations and can often be used to analyze risks with probabilistic risk assessment techniques.

Defence 138: filtering devices. Devices are used to limit the pass-through of information or physical items (a sketch follows Defence 139 below). Examples include air filters to limit smoke damage and noxious fumes, packet filters to limit unauthorized and malicious information packets, and noise filters to reduce the impact of external noise. Filtering can be quite complex depending on what has to be filtered.

Defence 139: environmental controls. Devices or methods are used to control temperature, pressure, humidity, and other environmental factors. Examples include heaters and air conditioners, humidity controllers, and air purifiers. These devices are commonplace and well understood.
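The packet-filter example under Defence 138 can be sketched as an ordered rule list with a default-deny posture; the rules and addresses below are illustrative assumptions only.

```python
# Hypothetical packet-filter sketch (Defence 138): packets are matched
# against an ordered rule list and dropped unless explicitly accepted.
from ipaddress import ip_address, ip_network

# Each rule: (source network, destination port, action). First match wins;
# the implicit final rule is "drop".
RULES = [
    (ip_network("192.0.2.0/24"), 25, "accept"),   # internal hosts to mail
    (ip_network("0.0.0.0/0"),    80, "accept"),   # anyone to the web server
]

def filter_packet(src, dst_port):
    for network, port, action in RULES:
        if ip_address(src) in network and dst_port == port:
            return action
    return "drop"  # default-deny posture

print(filter_packet("192.0.2.9", 25))     # accept: internal mail
print(filter_packet("198.51.100.7", 25))  # drop: external host to mail port
print(filter_packet("198.51.100.7", 80))  # accept: web traffic
```

The default-deny final rule reflects the conservative posture discussed throughout this paper: anything not explicitly permitted is refused.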
Defence 140: searches and inspections. Physical, electronic, sonic, and other searches and inspections are designed to reveal the presence of illicit devices or tampering. This activity is typified by manual labour augmented with various special-purpose devices.

It is hoped that this list will be a starting point and not an ending point. If all goes well and many readers comment on this listing, efforts will be made to improve and expand upon the list in the future and to relate the results to the readership again at a later date.

References

Agudo, 1996. Assessment of Electric Power Control Systems Security, Joint Program Office for Special Technology Countermeasures, September 30, 1996.

Bell, D.E. and LaPadula, L.J., 1973. Secure Computer Systems: Mathematical Foundations and Model, The Mitre Corporation, 1973.

Bellovin, S.M., 1989. Security Problems in the TCP/IP Protocol Suite, ACM SIGCOMM Computer Communications Review, April 1989, pp. 32-48.

Bellovin, S.M., 1992. There Be Dragons, Proceedings of the Third Usenix UNIX Security Symposium, Baltimore, September 1992.

Biba, K.J., 1977. Integrity Considerations for Secure Computer Systems, USAF Electronic Systems Division, 1977.

Bishop, M. and Dilger, M., 1996. Checking for Race Conditions in File Access.

Bochmann, G.V. and Gecsei, J., 1977. A Unified Method for the Specification and Verification of Protocols, IFIP Congress, Toronto, 1977, pp. 229-234.

Cheswick, W. and Bellovin, S.M., 1994. Firewalls and Internet Security - Repelling the Wily Hacker, Addison-Wesley, 1994.

Cohen, F., 1984. Computer Viruses - Theory and Experiments, IFIP TC-11 Conference, Toronto, 1984.

Cohen, F., 1985. Algorithmic Authentication of Identification, Information Age, Vol. 7, No. 1, January 1985, pp. 35-41.

Cohen, F., 1985b. A Secure Computer Network Design, IFIP TC-11, Computers & Security, Vol. 4, No. 3, 1985, pp. 189-205.

Cohen, F., 1986. Computer Viruses, ASP Press, 1986.

Cohen, F., 1987. Protection and Administration of Information Networks under Partial Orderings, IFIP TC-11, Computers & Security, Vol. 6, 1987, pp. 118-128.

Cohen, F., 1987b. Introductory Information Protection, ASP Press, 1987.

Cohen, F., 1988. A New Integrity-Based Model for Limited Protection Against Computer Viruses, Masters Thesis, The Pennsylvania State University, College Park, PA, 1988.

Cohen, F., 1988b. Models of Practical Defenses Against Computer Viruses, IFIP TC-11, Computers & Security, Vol. 7, No. 6, 1988.

Cohen, F., 1991. A Short Course on Systems Administration and Security Under Unix, ASP Press, 1991.

Cohen, F., 1994. A Short Course on Computer Viruses, 2nd Ed., John Wiley and Sons, New York, 1994.

Cohen, F., 1994b. Operating Systems Protection Through Program Evolution, IFIP TC-11, Computers & Security, 1994.

Cohen, F., 1994c. It's Alive!!!, John Wiley and Sons, 1994.

Cohen, F., 1995. Protection and Security on the Information Superhighway, John Wiley and Sons, New York, 1995.

Cohen, F., 1997. A Secure World-Wide-Web Server, IFIP TC-11, Computers & Security, 1997, in press.

Cohen, F. and Mishra, S., 1994. Experiments on the Impact of Computer Viruses on Modern Computer Networks, IFIP TC-11, Computers & Security, 1994.

Cohen, F. et al., 1993. Defensive Information Warfare - Information Assurance, Task Order 90-SAIC-019, DOD Contract No. DCA 100-90-C-0058, December 1993.

Cohen, F. et al., 1994. National Info-Sec Technical Baseline - Intrusion Detection and Response, Lawrence Livermore National Laboratory and Sandia National Laboratories, December 1996.

Dagle, J. et al., 1996. Assessment of Information Assurance for the Utility Industry, Electric Power Research Institute, Draft, December 5, 1996, Palo Alto, California, USA.

Danthine, A.A.S., 1982. Protocol Representation with Finite State Machines, in Computer Network Architectures and Protocols, P.E. Green, Jr., Editor, Plenum Press, 1982.

Denning, D.E., 1975. Secure Information Flow in Computer Systems, Ph.D. dissertation, Purdue University, West Lafayette, Indiana, USA, 1975.

Denning, D.E., 1976. A Lattice Model of Secure Information Flow, Communications of the ACM, Vol. 19, No. 5, 1976, pp. 236-243.

Denning, D.E., 1982. Cryptography and Data Security, Addison-Wesley, Reading, Massachusetts, USA, 1982.

Dunnigan, J.F. and Nofi, A.A., 1995. Victory and Deceit - Dirty Tricks at War, William Morrow and Co., 1995.

Feustal, E.A., 1973. On the Advantages of Tagged Architecture, IEEE Transactions on Computers, C-22, No. 7, July 1973, pp. 644-656.

GASSP, 1995. Generally Accepted System Security Principles, Prepared by the GASSP Draft Sub-committee, 1995.

Hailpern, B.T. and Owicki, S.S., 1983. Modular Verification of Computer Communication Protocols, IEEE Communications, Vol. 31, No. 1, January 1983.

Harrison, M. et al., 1976. Protection in Operating Systems, CACM, Vol. 19, No. 8, August 1976, pp. 461-471.

Hecht, H., 1993. Rare Conditions - An Important Cause of Faults, IEEE 0-7803-1251-1/93, 1993.

Hoffman, L.J., 1990. Rogue Programs: Viruses, Worms, and Trojan Horses, Van Nostrand Reinhold, 1990.

Knight, G., 1978. Cryptanalysts Corner, Cryptologia, Vol. 1, January 1978, pp. 68-74.

Lampson, B.W., 1973. A Note on the Confinement Problem, CACM, 16(10), October 1973, pp. 613-615.

Landwehr, C.E., 1983. The Best Available Technologies for Computer Security, IEEE Computer, Vol. 16, No. 7, July 1983.

Liepins, G.E. and Vaccaro, H.S., 1992. Intrusion Detection: Its Role and Validation, Computers & Security, Vol. 11, 1992, pp. 347-355.

Linde, R., 1975. Operating System Penetration, AFIPS National Computer Conference, 1975, pp. 361-368.

Lyu, M., 1995. Handbook of Software Reliability Engineering.

Merlin, P.M., 1979. Specification and Validation of Protocols, IEEE Communications, Vol. 27, No. 11, November 1979.

Neumann, P.G., 1995. Computer Related Risks, Addison-Wesley, ACM Press, 1995.

Neumann, P. and Parker, D., 1989. A Summary of Computer Misuse Techniques, Proceedings of the 12th National Computer Security Conference, October 1989.

NSTAC, 1996. National Security Telecommunications Advisory Committee, Information Assurance Task Force - Electric Power Information Assurance Risk Assessment, November 1996 draft.

Palmer, J.W. and Sabnani, K.K., 1986. A Survey of Protocol Verification Techniques, MilCom, September 1986.

Pekarske, R., 1990. Restoration in a Flash Using DS3 Cross-connects, Telephony, September 10, 1990.

Sabnani, K.K. and Dahbura, A., 1985. A New Technique for Generating Protocol Tests, Computer Communications Review, 1985.

SAIC, 1995. Information Warfare - Legal, Regulatory, Policy, and Organizational Considerations for Assurance, July 4, 1995.

Sarikaya, B. and Bochmann, G.V., 1982. Some Experience with Test Sequence Generation for Protocols, in Protocol Specification, Testing, and Verification II, C. Sunshine, editor, North-Holland Publishing, 1982.

Shannon, C., 1949. Communication Theory of Secrecy Systems, Bell Systems Technical Journal, 1949, pp. 656-715.

Spafford, E., 1992. Common System Vulnerabilities, Software Engineering Research Center, Computer Science Department, Purdue University, March 1992.

Sunshine, C., 1979. Formal Techniques for Protocol Specification and Verification, IEEE Computer, 1979.

Thyfault, M.E. et al., 1992. Weak Links, Information Week, August 10, 1992, pp. 26-31.

Turing, A., 1936. On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, Ser. 2, Vol. 42, November 12, 1936, pp. 230-265.

van Eck, W., 1985. Electromagnetic Radiation from Video Display Units: An Eavesdropping Risk?, Computers & Security, Vol. 4, 1985, pp. 269-286.

Voas, J. et al., 1993. A Model for Detecting the Existence of Software Corruptions in Real-Time, IFIP TC-11, Computers & Security, Vol. 12, No. 3, 1993, pp. 275-283.

Winkelman, 1995. Misdirected phone call shuts down local power, ACM SIGSOFT Software Engineering Notes, Vol. 20, No. 3, July 1995, pp. 7-8.

WSCC, 1996. Western Systems Coordinating Council, WSCC Preliminary System Disturbance Report, August 10, 1996, draft.

Fred Cohen can be reached at tel: +1 510-294-2087; fax: +1 510-294-1225.