Managing network security — Part 2: Where should we concentrate protection?


Network Security, January 1997
©1997 Elsevier Science Ltd

Fred Cohen

Over the last several years, computing has changed to an almost purely networked environment, but the technical aspects of information protection have not kept up. As a result, the success of information security programmes has increasingly become a function of our ability to make prudent management decisions about organizational activities. These articles take a management view of protection. They seek to reconcile the need for security with the limitations of technology.

The makeup of modern networks

Most modern networks include a mix involving some or all of five different classes of computing systems, each with substantially different protection requirements:

• Safety-critical systems: these typically include embedded systems such as computers that control automated manufacturing facilities or aeroplanes in flight.

• Personal workstations: these computers are used by one or a small number of individuals, are on the desktop, and are usually networked to other resources. They typically include personal computers, telephone instruments and peripheral devices.

• File servers: these computers provide shared access to common information, usually within a local area network, but sometimes to a larger organization or, as in the case of a Web server, to the world at large.

• Large systems: these computers provide centralized resources that are typically too expensive or too critical to distribute or replicate throughout the organization. Examples include mainframes, classical data processing systems, supercomputers and archival storage and retrieval systems.

• Infrastructure components: these components provide connectivity but rarely run user-level applications or provide a direct point of presence to non-support personnel. Examples include routers, gateway computers, telephone switching systems and network control systems.

The rest of this article looks at the special protection properties of these different types of systems in more detail.

Safety-critical systems

Safety-critical systems often have very stringent requirements for availability and integrity. For example, some critical control systems used in dynamically unstable aircraft cannot fail for more than a millisecond without causing a crash. Similarly, many of these systems control elements of manufacturing systems that could cause loss of life if not properly controlled.

The stringent requirements for availability and integrity are normally met through an engineering process. This process carefully considers the range of possible events in the physical world, relates them to control events, and produces a design which fails in as safe a mode as possible under each set of conditions that could cause harm. This is typically done through a fault tree analysis which considers all single faults and, in some cases, substantial numbers of multiple simultaneous or sequential faults.

Protection of such a system consists primarily of assuring that the system is in proper and well-defined states throughout its operation. Because typical software systems used in most current computers have inadequate protection to provide this high degree of assurance, special purpose systems such as programmable logic controllers are used to intermediate between highly complex software systems and hardware capable of causing harm. Redundant interlocks are typically used to assure that, regardless of any errors in computation, physical redundancy will prevent unsafe conditions.

High-level systems are then networked, along with detailed data gleaned from instrumentation, to provide interoperation. Interoperating components are typically redundant as well. For example, in an automated manufacturing plant, inspection stations check the results of manufacturing processes to assure that the resulting component meets specification. Subsequent testing provides yet another level of redundancy in order to assure that the overall system is working properly.

Information protection in these systems then consists of redundancy, well-thought-out design and implementation, testing, controls, physical interlocks, failsafes, and other similar methods. The reason this is possible for safety-critical systems is that the loss associated with a failure is substantial and quantifiable, and economies of scale justify the high cost of high assurance based on its positive effect on the bottom line.
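The single-fault analysis described above can be sketched in code. This is an illustrative sketch only, not from the article: the fault tree, component names and AND/OR structure are all assumptions chosen to show how a redundant interlock prevents any single fault from reaching the top (harmful) event.

```python
# Illustrative sketch of single- and double-fault enumeration over a tiny,
# hypothetical fault tree. Harm occurs if (sensor OR controller fails)
# AND the interlock also fails -- the redundant interlock means that no
# single fault alone can cause the top event.
from itertools import combinations

def top_event(faults):
    """True if the given set of failed components causes harm."""
    return (("sensor" in faults or "controller" in faults)
            and "interlock" in faults)

components = ["sensor", "controller", "interlock"]

# Enumerate all single faults, then all pairs of simultaneous faults.
single_fault_failures = [c for c in components if top_event({c})]
double_fault_failures = [set(p) for p in combinations(components, 2)
                         if top_event(set(p))]

print(single_fault_failures)   # [] -- no single fault causes harm
print(double_fault_failures)   # the two pairs that include the interlock
```

Real fault tree analysis of course covers far larger trees and attaches probabilities to basic events, but the enumeration idea is the same.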

Personal workstations

Personal workstations tend to represent the opposite end of the protection spectrum from safety-critical systems. Failures in these systems usually affect only the output of a single worker and, with some notable exceptions, have little effect on the overall operation of a substantial organization. For this reason, it is almost never cost effective to spend much money protecting any particular personal workstation. There are exceptions to this.


In tightly integrated organizations where each worker's output depends on the output of other workers, failures in even a single workstation may produce cascading effects that negatively impact the output of many other workers. For example, in a sales office where all order processing goes through one person, a failure in that person's workstation might cascade so as to reduce the efficiency of the whole organization. In this case, the lack of organizational redundancy makes one personal computer more critical than others.

Some people are stewards for more critical information than other people. Protection failures in those people's personal workstations tend to have a more substantial impact. For example, the strategic plan for introducing a new product line or the secret chemical formula for a popular soft drink might be contained in a personal workstation.

Some attack techniques produce cascade failures that might impact many personal workstations and therefore have a more profound overall effect on the organization. Computer viruses are a good example of such an attack technique.

Common weaknesses may permit widespread abuse. If a bad person knows of a weakness that is pervasive throughout the networked personal workstations of an organization, they may be able to cause widespread harm and thus have a substantial overall impact. This is called a common-mode failure.

Entry into one workstation may be leveraged into a far more pervasive break-in. In most cases, once a trusted internal workstation has been broken into, it can be

exploited to gain far greater access. Published examples indicate that even a single corrupted internal workstation can sometimes be leveraged into a billion-dollar loss.

Cost-effective protection in personal workstations is a very tricky challenge. It is almost never cost justified to prevent simple attacks or failures, but more sophisticated attacks can leverage even a single workstation into staggering harm. For this reason, it is vital to (1) understand which personal workstations must be well-protected and properly protect them, (2) design the work flow of the organization so as to minimize the impacts of individual failures on overall productivity, (3) provide effective organizational-level protection against threats like computer viruses and common-mode failures, and (4) provide adequate internal controls so that break-ins affecting individual workstations can only have a limited overall impact on the organization. Cost-effective mixes of protection often have substantial interaction with the structural design of information networks.
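The difference between independent workstation failures and a common-mode failure can be made concrete with a little arithmetic. All of the numbers below are hypothetical, chosen only to illustrate why a shared weakness across many identical workstations dominates the risk.

```python
# Hypothetical figures, for illustration only: independent failures
# scatter a few losses across the year, while a single common-mode
# event hits many machines at once.
n = 200            # workstations in the organization (assumed)
p_indep = 0.01     # assumed chance any one machine fails independently
p_common = 0.05    # assumed chance the shared weakness is exploited

# Independent failures: losses arrive one machine at a time.
expected_independent = n * p_indep   # 2.0 machines, spread over the year

# Common-mode failure: one event affects every machine simultaneously.
expected_common = p_common * n       # 10.0 machines, in a single incident

print(expected_independent, expected_common)
```

Even with a low per-incident probability, the common-mode case concentrates its expected loss into one organization-wide event, which is why the article argues for organizational-level defences against it.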

File servers

File servers usually support 10 or more personal workstations and provide for shared services like printing, E-mail forwarding and database access. They are commonly used to coordinate activities within a department or based on function. For example, one file server might provide central printing services for a small department, while a different file server might provide E-mail forwarding services to a building.


Because file servers serve many users, downtime, corruption of data, or leakage of data may have a greater effect on the organization as a whole. For example, if the payroll department's file server fails at a critical time, it might be impossible to produce pay cheques in time for a particular pay period.

In most cases, protection for file servers is given higher priority than individual workstations. For example, access controls are normally configured on file servers, trained systems administrators often help to manage file servers on a part-time basis, regular maintenance and backups are done in many cases, and audit trails may be kept and periodically examined. The justification for added protection comes from their increased importance in the overall operation, and the cost-effectiveness of increased protection comes from an economy of scale.
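The group-based access controls typically configured on file servers can be sketched as a small lookup. This is a minimal sketch, not from the article; the resource names, group names and rights are all assumptions.

```python
# Minimal sketch of group-based access control on a file server:
# a user may exercise a right on a resource only if one of the
# user's groups has been granted that right. All names hypothetical.
acl = {
    "/payroll": {"payroll-staff": {"read", "write"}},
    "/printers/dept": {"dept-users": {"print"}},
}
user_groups = {"alice": {"payroll-staff"}, "bob": {"dept-users"}}

def allowed(user, resource, right):
    """True if any of the user's groups grants the right on the resource."""
    grants = acl.get(resource, {})
    return any(right in grants.get(g, set())
               for g in user_groups.get(user, set()))

print(allowed("alice", "/payroll", "read"))   # True
print(allowed("bob", "/payroll", "read"))     # False
```

Real file-server ACLs add owners, inheritance and deny rules, but the group-membership check is the core of the mechanism the article refers to.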

Large systems

The high cost of large systems is usually justified by their high-valued functions and, as such, the requirement for effective protection tends to be far more stringent than it is for file servers. In an engineering company, for example, a large system might be used to perform complex analysis of structural stresses on components of bridges. If bridges fail because of incorrect computations, the consequences are very serious. Similarly, large systems may be used to control large production facilities. Downtime in this situation may result in extreme financial losses, and errors or omissions in the coordination of parts for just-in-time delivery might result in

production line shutdowns or high reject rates.

Large systems tend to have more than one systems administrator, full-time operations staff, systems programmers and other similar human support systems that smaller or less critical systems do not have. Because of the size of the operation, protection in large systems can be tuned on a case-by-case basis to the criticality and financial impact of failures in the protection system. Large systems tend to use user- and group-based access controls, often involve a development environment connected to an operational system through a change-control function, have regularly scheduled and verified backups, produce detailed audit trails that are examined on a regular basis, are periodically audited by internal and external IT auditors, have 24-hour service contracts, and in some cases have continuous on-site service personnel with cold-standby hardware components available for immediate use. The cost of protection for such a system is justified by the system's criticality and the high cost of failures.

Availability is normally given high priority in this environment because the cost of operating the system is so high that the loss from downtime justifies almost any effort to bring the system back up. For example, some new supercomputers cost in the order of US$1 billion to acquire and operate over a five-year period. That comes to almost $1 million a day, or $40 000 per hour, or $10 per second. With direct costs on this scale, not including application-specific losses, it should be relatively easy to justify the continuous presence of an information protection specialist, some hardware engineers, several operators, and a few systems programmers.
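The per-hour and per-second figures above follow from the daily figure by simple division; the short calculation below shows where the article's rounded numbers come from, taking its "almost $1 million a day" as the starting point.

```python
# Working from the article's "almost $1 million a day" figure:
# the exact quotients are rounded in the text to $40 000 per hour
# and $10 per second.
per_day = 1_000_000                  # US$, the article's daily figure
per_hour = per_day / 24              # ~ $41 667 per hour
per_second = per_day / (24 * 3600)   # ~ $11.57 per second

print(round(per_hour), round(per_second, 2))
```

The article rounds both quotients down, which only strengthens its point: even the conservative figures justify permanent protection staff.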

Infrastructure components

Infrastructure components form the basis for interconnecting information systems together. As such, they are vital to the overall effectiveness of a protection programme. One of the raging debates in the information protection community over the past several years has been over the balance between host-based protection and infrastructure-based protection. Some argue that, since infrastructure protection is, and will likely always be, very limited, host-based protection is the key to a successful protection programme. Others take the position that host-based protection for the vast majority of hosts is not cost justifiable and that infrastructure-level protection is the only cost effective option.

My personal view is that a balance is needed. When economies of scale favour infrastructure-based protection and that protection is effective against the threats faced by an organization, it is a good choice. In today's computing environments there are places where infrastructure-level protection is justified. Examples include, but are not limited to, firewalls between the Internet and most organizations, intrusion detection systems in some networks, PBX-based protection of telephone networks, router-based traffic limitations on internal centralized networks, and incident response.


In many cases, infrastructure-level protection covers some, but not all, of the potential vulnerabilities. In these cases, a mix of infrastructure and host-based protection may be required. For example, most modern firewalls don't provide protection against the content of information they allow to pass. An internal Web browser might bypass the firewall's protection mechanism by executing a Trojan horse embedded within an Internet-based Web page. In order to address this challenge, we may find that host-level protection is most cost-effective against things that firewalls do poorly. At the same time, it would be a very expensive task to secure all the hosts in a substantial network against low-level attacks that are easily and effectively prevented by most network firewalls. The key to effective infrastructure-level protection is finding the right mix between host-based and infrastructure-based techniques and gaining the economies of scale provided at the infrastructure level wherever possible.
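The router-based traffic limitation mentioned above works as an ordered rule list with a default-deny policy. The sketch below is illustrative only; the addresses, ports and rule set are assumptions, not anything from the article.

```python
# Illustrative sketch of a first-match-wins packet filter with a
# default-deny rule at the end. All addresses and ports are hypothetical.
rules = [
    # (action, source address prefix, destination port or None for any)
    ("allow", "10.0.", 25),    # internal hosts may reach the mail relay
    ("allow", "10.0.", 80),    # internal hosts may reach the Web proxy
    ("deny",  "",      None),  # everything else is dropped
]

def filter_packet(src_ip, dst_port):
    """Return the action of the first rule that matches the packet."""
    for action, prefix, port in rules:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return action
    return "deny"   # no rule matched: fail closed

print(filter_packet("10.0.3.7", 80))     # allow
print(filter_packet("192.168.1.5", 80))  # deny: not an internal source
```

Note that this filter, like the firewalls the article discusses, decides purely on addresses and ports; it says nothing about the content it passes, which is exactly the gap host-based protection must cover.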

Summary

It is often hard to see the big picture in a big company, but in the case of information protection, only the big picture leads to cost-effective results. Our brief overview of different sorts of networked systems demonstrated a wide range of possibilities, but the devil is in the details. Cost-effective protection in today's networked environments is required, and that can only be achieved through careful detailed analysis of the systems as they operate within their environment. In future issues, I hope to explore the balance in more detail, but for now, I simply wish you a happy and healthy new year.

Real World Anti-Virus Product Reviews and Evaluations - Part 2

Sarah Gordon and Richard Ford

This article discusses frequently encountered errors in the evaluation process relative to anti-virus software selection by examining some of the methods commonly used by corporate and governmental personnel working in the area of Management Information Systems (MIS). In addition to discussing inherent problems, we will suggest alternative methodologies for evaluation. We will examine commercial certification processes, as well as the Information Technology Security Evaluation and Certification (ITSEC) approach, as possible models for anti-virus product evaluation and certification. Finally, we will discuss ways in which the information which is currently available may be used to help select anti-virus software which is both functional and cost efficient.

The independent professional evaluator (IPE)

There are some independent reviewers who possess the expertise to conduct a meaningful review. One good example of such a reviewer is Rob Slade, a frequent contributor to Virus-L and the Fidonet Virus echo and author of several books on computer viruses. His reviews illustrate a major difficulty experienced by others who are attempting to carry out reviews: lack of resources. However, in Slade's case much of this is made up for by his experience and expertise. While Slade represents all that is best about the IPE, there are many self-appointed experts who have neither his experience nor expertise. There is no easy way to discriminate between those who are qualified to carry out such a review and those who are not. One only has to recall the glut of virus 'gurus' who appeared during the 'Great Michelangelo Scare' to see the problems which you will have deciding how much reliance to place in independent reviews of software.

Another notable reviewer (and founder of the Italian Computer Anti-Virus Research Institute), Luca Sambucci, has provided independent testing to computer magazines since 1992. His anti-virus tests are thorough and competent; however, he has not