Emerging security requirements

Harold Lorin discusses the security requirements of recently evolved information systems
Interest in the security of information systems has increased partly because of evolving systems maturity, and partly in response to dramatic intrusions into major systems. These have included intrusions by amateur 'hackers' which, although embarrassing, have caused no substantial damage. Intrusions from employees are far more damaging but have not been widely publicized. The paper describes the US government's security policy and its implications for private organizations. A security policy is basic to the concept of security and defines the manner in which an information system can access and manipulate data. Protection mechanisms which enforce security policies are discussed. Mandatory and discretionary policies which form a particular security policy are outlined. The characteristics of a formal security model are also defined, and the design of a secure operating system is discussed. The present status of information systems security is outlined.

Keywords: information networks, security, mandatory security policy, discretionary security policy, secure operating system
IBM Systems Research Institute, 205 East 42nd Street, New York, NY 10017, USA

Information systems (IS) are exposed to a variety of potential security risks. Computer users may attempt to falsify identifications and illegitimately by-pass authorization tests. The integrity of an information system may be broken by a number of different users who may have access to the system for different reasons. Operators can also misdirect sensitive data to unsecured devices, they may dump data that they are not authorized to remove from the system, and they may remove media with sensitive data from the system. Maintenance engineers
can disable protection mechanisms and can drain data from the system with private utility programs that can run without restrictions. Systems programmers may make changes in systems software so that programs whose names are known are given illegal functions and drain data, leaving no trace that an illegal program was run.

The interest in security has been developing over the last few years, partly as a result of evolving systems maturity, and partly in response to dramatic intrusions into major systems. Some of these intrusions have come from amateur hackers, who have caused no substantial damage, but have been a source of major embarrassment to the IS community. Other intrusions, which have as a matter of policy been kept extremely secret, have come from employees. There is a growing awareness that employee intrusion into electronic systems causes substantial amounts of resource loss.
US SECURITY POLICY
In September 1984, the White House in the USA issued a Directive 1 establishing a national security policy. While the policy naturally centred on government and military information systems, it suggested that new security policies could be directly relevant to the practices of the private sector. The document deals with systems, particularly network oriented systems, which 'process the private or proprietary information of U.S. persons and businesses' as systems for which there should be a national policy concern. Mention is made of 'appropriate partnership between government and the private sector' to achieve security goals. The directive commits the government to 'offering assistance in the protection of certain private sector information...' In particular, the government will 'encourage, advise and, where appropriate, assist' private
0140-3664/85/060293-06 $03.00 © 1985 Butterworth & Co (Publishers) Ltd vol 8 no 6 december 1985
companies to identify systems which handle sensitive data, to determine the vulnerability of these systems and formulate measures for providing protection. This government position is going to influence private organizations in two fundamental ways. First, of course, the government will require that its contractors and subcontractors working on classified projects be in conformity with Department of Defense policies for securing sensitive information. Secondly, as standards and technologies emerge, all organizations with sensitive corporate or industry data will be interested in adopting them and accepting the leadership of the government, even if they are not legally or contractually required to do so. As a general security interest develops, vendors of systems will find that user organizations will show increasing concern about security issues, and that concern will begin to be reflected in vendor hardware and software. A major goal of the government is to encourage the computer industry to create and make available to the commercial market place systems that can be trusted with the storage, transmission and processing of sensitive data. The government is anxious that its concept of security is understood to be applicable to all organizations that must conform to regulations about security and privacy, and that wish to control the dissemination of information within their own organizations.
SENSITIVITY AND NEED TO KNOW
Basic to the concept of security is the security policy. The security policy states the manner in which an information system can access and manipulate data. The security policy must be enforced by protection mechanisms in the system. The policy is a statement of who can do what, to what data. The protection mechanism is the combination of hardware and software features that translates the policy into systems actions that protect elements in a system from malicious or erroneous intrusion. The hardware protection mechanism includes a control state, a memory address protection feature, and the address creation algorithms. The software mechanisms include a directory containing privileged access lists, an operating system resource allocation mechanism, password and logon facilities, etc. These mechanisms help ensure that systems which process, store, or use and produce classified information will prevent deliberate or inadvertent access by unauthorized persons. The protection mechanisms must be capable of supporting a variety of security policies. The security policies are, hopefully, appropriate to the real risks and threats of loss.

A security policy is a precise statement establishing the nature of each system's control of sensitive information. Security policy has two aspects: mandatory policy and discretionary policy. Mandatory policy involves the enforcement of sensitivity security levels such as secret, top secret, etc. Many government contractors and other companies operate within a regulatory environment at the local, state or federal levels that prescribes how data is to be kept private and secure. The military application of
mandatory policy involves the protection of data in accordance with its classification. To support mandatory policy, a system must show that it has enforceable rules based upon a comparison of the user's and the object's hierarchical security designations. It must also show that it can restrict access on the basis of this comparison, and that it can prevent unauthorized downgrading. An important military aspect of mandatory policy is 'marking' or 'labelling'. The requirement is that each document produced by the system contain appropriate labelling on each page in accordance with the classification of the most classified information in the document.

Discretionary security policies are intended to provide another level of security control. The discretionary policies are defined when access control lists are drawn up, or when the permissible views of data in contemporary systems are defined. The discretionary policies are based upon appreciations of 'need to know' within an organizational structure. Thus, personnel data is restricted from access outside of the personnel department, manufacturing data is controlled and promulgated only to those who are concerned with manufacturing, etc. Figure 1 shows mandatory policy as the horizontal layering of Top secret, Secret, Confidential, etc. The discretionary policies are represented vertically. Figure 1 shows that in order to access a unit of data, one must have the clearance and the 'need to know'.
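The combined check that Figure 1 illustrates, a clearance that dominates the object's level plus possession of the object's need-to-know category, can be sketched as follows. This is only an illustrative sketch: the numeric level ordering, the function name and the category sets are assumptions for the example, not taken from any particular system.

```python
# Illustrative sketch of the clearance-plus-need-to-know check of Figure 1.
# The level ordering and category names are assumed for this example only.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top secret": 3}

def may_access(user_clearance, user_categories, obj_level, obj_category):
    """Mandatory check: the user's clearance must dominate the object's level.
    Discretionary check: the user must hold the object's need-to-know category."""
    return (LEVELS[user_clearance] >= LEVELS[obj_level]
            and obj_category in user_categories)

# Data item A of Figure 1: Secret level, Manufacturing need to know.
print(may_access("Secret", {"Manufacturing"}, "Secret", "Manufacturing"))        # True
# A Confidential clearance does not dominate Secret, so access to A is refused.
print(may_access("Confidential", {"Manufacturing"}, "Secret", "Manufacturing"))  # False
```

Note that both conditions must hold: a Top secret clearance alone does not grant access to Manufacturing data without the Manufacturing need to know.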
[Figure 1. Mandatory policy: horizontal layers show the security levels (Top secret, Secret, Confidential, Unclassified); vertical divisions show the need-to-know categories (Finance, Marketing, Manufacturing, Personnel); A and B mark example data items.]

Thus, in order to access information at A, an individual must have a Secret clearance and a Manufacturing-oriented need to know. In order to access data at B, an individual must have a Confidential clearance and a Marketing-oriented need to know. Facilities for the enforcement of discretionary policies are common in software. IBM offers RACF with MVS, directory access control is built into Multics, and other operating systems or database managers have mechanisms that support the idea of proscribed access. Some MIS organizations have integrated security issues into data design by defining the set of views of data a user or user organization may access. Interest is centred on the 'multilevel security' system, which allows users of diverse clearance and 'need to know' to share a system with data for which not all users have clearance. The sharing requirement is important because the isolation of every vertical and horizontal category into a dedicated system would represent an
intolerable constraint on the design of information systems and might actually create vulnerabilities. The population of operators and systems programmers would necessarily increase, as would the number of devices that would have to be secured. The proliferation of populations of people and devices that must be trusted inevitably increases risk.

CHARACTERISTICS OF A SECURE SYSTEM
Security in an information processing system requires functions that monitor the access points to the system, monitor references to data within the system, and control the sites where sensitive data can be displayed or printed. Logon procedures, authorization lists, possibly some form of encryption, and other functions are required. These will be described in more detail later. However, a system that apparently has these security functions is not necessarily a secure system unless it can demonstrate that these functions are correctly executed, that they fail under only well understood circumstances, and that they cannot be bypassed so that paths to sensitive data can be found that lead around the protection mechanisms.

A secure system must have integrity. Integrity means that the system will perform as it is specified to perform. The components of a system must not only conform to specification, but they must also be shown to be adequately tested. The components in the operational environment must be protected against hardware and software component failure. There must be no failure that can prevent the protection mechanism from enforcing the security policy. A test of integrity is predictability. A system must fail in a predictable manner after attempting corrective action, and properly signal the nature of the failure. The system must be able to enforce its own rules about access to resources under all conditions, so that a malfunction cannot open an impermissible path to data or process. A combined hardware architecture and operating system access control mechanism must always fail by refusing to grant access rather than by incorrectly granting it. At the most rigorous certification level it must also be shown, however, that failure cannot be forced to maliciously deny information to qualified requestors.

An associated requirement of a secure system is auditability.
A system must be capable of examination in order to determine that it is in fact behaving as specified, and that its status at any time conforms to expectation. The system must contain independently auditable parts with formal interfaces, with an ability to log and journal its interfaces, and it must protect these journals from being vulnerable sources of sensitive data. All systems variances must be visible.

Finally, a secure system must be open to control. The system must be changeable, but have resistance to unauthorized change. The system must accept correct definition of scopes of control. There must be sufficient 'granularity' so that data not capable of sharing need not be packaged with data that is capable of sharing in order
to produce the necessary sharing. (Granularity refers to the smallness of a defined protectable area of program or data.) It is better to be able to define scopes of protection that are quite small, because it is easier to control who can have access. As the amount of information in an object decreases, the number of people who must share it also decreases.

These requirements for a trusted computer base imply a number of things about the nature of the hardware, the software, and the application development methods that are associated with an information system. If a precondition for being a secure system is being a correct system, then all the disciplines of software engineering that pertain to program quality and program correctness are relevant to the security environment. This will require not only the control of application development processes in an organization, but also the establishment of processes for the certification of the correctness of operating systems that are accepted into the information environment. Without proper methodology for certification, of course, there is a potential risk that systems known to be insecure will be allowed into the environment. The direct and immediate management implication of this is that the officer who has been given responsibility for security must be given some interests and rights in the development of software engineering methods and in the acceptance of vendor software. Without these rights the security officer has no possibility of creating adequate security environments. In addition, the methodology used for determining acceptance must be well understood and demonstrable. To achieve secure systems in the future, physical protection and functions such as logon protection, authorization lists and encryption are required, as well as certification of the processes that created the system and inspection of its structural correctness and integrity.
Thus, security becomes a mainline concern of systems management, operation and design, and must be controlled by technologically competent management. It can no longer be left to the hired private agency.
FORMAL SECURITY MODELS
The basic security paradigm used by the United States Department of Defense derives from a formal security model called the Bell-LaPadula model. This was introduced in 'Secure Computer Systems: Unified Exposition and Multics Interpretation' (Mitre Corp., USA, July 1976). This model insists that in a shared system it be impossible to:
• gain access to data you should not see,
• declassify sensitive data,
• improperly upgrade your own security rights.
Using functionality and assurance, the United States Department of Defense has defined four systems divisions: A, B, C, and D. At the lowest level the D systems have no creditable security characteristics. The C systems support discretionary security policies. The B systems are resistant
to covert intrusion and have increasingly rigorous correctness criteria, culminating in the A systems, which must be mathematically proven to be correct. To be considered secure a system has to show certain functions. These functions include access control lists, password verification, and process and address isolation, which are part of the government's definitions for the functionality of a secure system. At the most critical secure levels, however, focus is not on what the system does to separate users and data (subjects and objects), but rather on how it can demonstrate that it does these things correctly, and that the mechanisms cannot be avoided or tampered with in any way. Each level builds on the previous level, so that an A level system must have all of the password, audit, process isolation and protection features of lower level systems, with a proof that they are correctly designed to support the security policy, that the design is correct, that the program conforms with the design, that the code is correct, and that it cannot be tampered with or bypassed on the way to data.

In the DOD secure systems definition and classification structure, a B3 system must show that the reference monitor concept is in place and that all accesses to objects are mediated and shown to be correct and permissible. The trusted computing base (TCB) of a B3 system must also show that it is tamperproof, and that it is small enough to be subject to analysis and tests. All code must be excluded that is not relevant to security enforcement. Software engineering methodology must be used during the TCB design, and implementation of the TCB must minimize its complexity. In this way the system has adequate recovery procedures, and is highly resistant to penetration. The B2 and B3 systems must both show strict and effective use of software engineering methodologies. The TCB shall incorporate significant use of layering, abstraction and data hiding.
Significant software engineering shall be directed toward minimizing the complexity of the TCB... An A level system provides verified protection. This class is characterized by the use of formal verification methods to assure that the security controls in the system can effectively protect sensitive data. A level systems are functionally equivalent to B3 systems, in that no functional requirements are added nor are any unique architectural features required. However, at this level a unique demonstration that the system is adequate and correct is required. Formal design specification and verification techniques must be employed that lead to the highest assurance that the TCB is correct.
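The two central rules of the Bell-LaPadula model cited above, that one cannot read data above one's own level and cannot write sensitive data down to a lower level (which would amount to declassification), can be sketched as a pair of checks. This is a simplified illustration of the model's intent, with an assumed level ordering, not the formal statement of the model.

```python
# Simplified sketch of the two central Bell-LaPadula properties.
# The level ordering is an assumption for illustration only.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top secret": 3}

def may_read(subject_level, object_level):
    # Simple security property: no 'read up'.
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    # *-property: no 'write down', preventing declassification by copying
    # sensitive data into a less sensitive object.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(may_read("Secret", "Confidential"))   # True: reading down is permitted
print(may_write("Secret", "Confidential"))  # False: writing down is refused
```

It is exactly this second rule that the trusted processes of systems such as KSOS, discussed below, are permitted to suspend.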
SECURITY AND THE DESIGN OF OPERATING SYSTEMS

Good design in an operating system must naturally achieve the integrity, auditability and controllability requirements of a secure system. Optimistic designers feel that security requirements can be approached as a natural end in the improvement of software quality.
In a secure operating system it must be possible to demonstrate that no process can:
• forge a name to an object,
• pass on the name to another process illegitimately,
• enlarge its rights over an object either by its own manipulation or by any sequence of Calls, Sends, or other communications interfaces,
• have its rights enlarged illegitimately by virtue of some pattern of Calls or Sends.
If this can be demonstrated, then one can make the statement that the operating system can monitor the reference of any process to any object at all times. If the operating system kernel and trusted processes are correct and cannot be tampered with, then this is a demonstration of a secure system. The kernel of an operating system comprises those basic mechanisms which must exist to control sharing and resource utilization in the system.
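One way to picture the requirement that every reference of any process to any object is mediated, and that a process cannot enlarge its own rights, is the following toy sketch of a reference monitor. The class name, the rights table and the operation names are assumptions invented for this illustration; they do not correspond to any real kernel's interface.

```python
# Toy reference monitor: every access is mediated against a rights table that
# processes themselves cannot modify, so no process can enlarge its own rights.
class ReferenceMonitor:
    def __init__(self, rights):
        # rights: mapping of (process, object) to a frozen set of operations.
        # Copied at construction; no interface is offered for later amendment.
        self._rights = dict(rights)

    def access(self, process, obj, operation):
        # Mediate the reference; fail closed when no right is recorded.
        allowed = self._rights.get((process, obj), frozenset())
        return operation in allowed

monitor = ReferenceMonitor({("editor", "report.txt"): frozenset({"read"})})
print(monitor.access("editor", "report.txt", "read"))   # True
print(monitor.access("editor", "report.txt", "write"))  # False: fails closed
```

The essential design choice mirrors the text: the mechanism refuses access whenever no explicit right exists, rather than granting it by default.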
ATTEMPTS TO CREATE SECURE OPERATING SYSTEMS

Over the last decade there have been a number of attempts to create a secure operating system, and a continuing government review of such systems as successful candidates for various secure environments. An early project was the Hydra operating system at Carnegie-Mellon; other work has centred on Multics, VM/370 and Unix*.

There have been a number of attempts to build secure versions of Multics. Multics has many of the nominal properties of a secure operating system. It runs on hardware with a segment oriented (object oriented) addressing scheme; it uses a hardware based hierarchical privilege ring structure, and it has an authorization list facility in its file structure. Attempts to develop Multics systems with kernels have been made at MIT and other places, and have been widely documented. After a long history of failure, due perhaps to the magnitude of the population of Multics segments that ran in the most privileged ring, Multics may well be moving toward some B level accreditation.

Unix has been the target of a number of attempts to build a secure system. The Kernelized Secure Operating System (KSOS) for the PDP/11 is a Unix derivative, as is the UCLA Data Secure Unix. KSOS is a security kernel that enforces the Bell-LaPadula model on user programs but which suspends it for identified trusted processes. Thus 'trusted processes' can change the security level of objects and their 'need to know' classifications, can invoke security kernel functions, and can write high level data into low level objects. The Non-Kernel Security Related (NKSR) category accords roughly with the DOD TCB as an organizational concept. The objection to the KSOS approach is that the trusted processes make it difficult to determine exactly what formal security model KSOS enforces, since the trusted processes represent an accumulation of exceptions.

Work has been in progress to secure VM/370 at Systems Development Corporation beginning in 1976.
*Unix is a trademark of AT&T Bell Laboratories
The result is a product called KVM/370, Kernelized VM. The approach was to retrofit VM into security relevant and non-security relevant modules in accordance with the DOD requirement for B level systems. The system adopts the Bell-LaPadula model of security. It has been partially verified using SDC formal methods, and has been prototyped and demonstrated to the DOD. KVM/370 is a creditable, advanced, major technological effort in the area of general purpose secure operating systems. There have been a number of other experimental efforts at secure operating systems based upon Multics or Unix models. There has also been some recent indication that the VAX VMS system is being moved toward improved security features in a way that would satisfy some DOD requirements. The Secure Communications Processor (SCOMP) of Honeywell provides a hardware basis for a very high level operating system that is undergoing formal verification at the A level.
PRESENT STATUS

Information systems have a multiplicity of security vulnerabilities that are compounded by interconnection. At the networking level the major effort has been the use of encryption on the network facilities, although it is possible that the use of fibre optics, which do not at the moment seem tappable, will improve the protection of data on the network. Current cabling is quite vulnerable to tapping and may be protected only by encryption or shielding. Shielding is not practical along the full run of many network structures. Making tapped information unintelligible through encryption is now a standard technique. However, it is difficult and pits computer against computer, cryptographer against cryptographer.

It does not seem likely that the commercial community will select operating systems on the basis of security features alone, although it is clear that the DOD will. What seems more likely is that all operating systems will evolve slowly to produce improved security properties, partly because the development processes will improve, and partly because they will have to be restructured to serve new systems architectures. The rate of this evolution will be determined by the rate at which the commercial community, with the encouragement of the DOD, convinces itself that security risks are real. If users do not express interest and show a willingness to make performance tradeoffs, then we will not be able to secure sensitive data for a long time to come.

One method that can be used to help secure data is to very rigorously control the functions that can be requested from a user interface. A system with closed functionality, that permits no additional user programming, and that does not permit unanticipated queries, may by virtue of the impenetrability of its interface achieve some measure of protection over the data entrusted to it. It is known, for example, that inadvertent or malicious errors are possible by programming in assembly
language, and to a lesser degree by programming in a higher level language. Similarly, as all functionality is defined by preset menu sequences, or tree structures of menu sequences, it becomes harder and harder for a user to find a covert channel or to circumvent the security mechanisms.

Another approach to security has been to isolate the data in specialized data nodes that place a security filter between programs running on an application processor and the functions of the data node. All data requests are passed through a filter at the data node that determines that the request is correctly formed and that it asks for a view of data that is permissible to send to the relevant user, or node, or terminal. The operating systems at the application nodes, however, still need to be secure if there are multiple users at those nodes. This is because it is necessary to show isolation between users at the application node, as well as isolation between users and data and applications. For single-user stations, however, this systems design, which is an extension of the idea of a file server node, might be adequate for imposing discretionary security with some confidence.

The question of workstation security has been receiving much attention as a result of the proliferation of PC-like systems. It is clear that PCs that are to be linked together and share and send data to each other will have to have password protection for the system. Equally clear is the necessity for access control lists, since data may be referenced from the network, giving the system an essential multiuser characteristic. Each workstation will require some level of classification characterization. The complexity of these functions depends upon the extent to which persons of different need to know and different security level may share the same terminal, if not concurrently, then consecutively.
It will also be necessary for the workstations to conform to the labelling requirements, and mark each screen and copy produced with the appropriate classifications. The entire technology of data sharing, and of movement to and from workstations, is now beginning to take on conceptual shape. The main aim seems to be to control what data can be sent to a workstation, based on station and user defined views that are retained by the 'host system'. But given that control, the issue of how to secure and protect the data that is sent to the workstation remains. There are already versions of PC-like systems that have keys so that they cannot be turned on by anyone but the physical key holder. There is a difference of opinion about whether irremovable discs add to or subtract from security. The argument for their adding to security is the argument used for large closed systems: if one cannot remove the disc, one cannot take the data on it from the protection of the system. However, some security theorists feel that the opposite is true with a small system, since the entire small system can be removed with reasonable ease. Therefore, the security policy requires that all sensitive data be recorded on removable media that can be handled like classified hard copy and locked in legally secure areas when appropriate. Operating systems designers may argue whether or not
it is possible to build systems that are acceptably fast, have acceptable recovery and are not penetrable. It is possible that a hardware and software system that was started from scratch using these three goals might be acceptable. In fact, contemporary operating systems are designed for functionality within the constraints of performance, and have not been designed with the aim of security. The design processes of commercially available operating systems are not able to demonstrate invulnerability. There is no certainty that the protocols of software engineering and structuring concepts can really be applied to provide security in advanced multifunction systems.
REFERENCES
1 National Security Decision Directive 145 (unclassified version), The White House, Washington, DC, USA (17 September 1984)
2 Department of Defense Trusted Computer System Evaluation Criteria, CSC-STD-001-83, Library S 225, 711
3 Lorin, H 'Operating systems' in Flores and Seidman (Eds) Handbook of Computing, Van Nostrand (1984)
4 Gold, B D et al 'A security retrofit of VM/370' Proc. Nat. Computer Conf. (1979) pp 335-344
5 Saltzer, J H and Schroeder, M D 'The protection of information in computer systems' Proc. IEEE Vol 63 No 9 (September 1975) pp 1278-1308
computer communications