Setting optimal intrusion-detection thresholds

B. C. Soh and T. S. Dillon

Computer Networks Laboratory, Applied Computing Research Institute, and School of Computer Science & Computer Engineering, La Trobe University, Melbourne, Victoria 3083, Australia

Computers & Security, 14 (1995) 621-631. 0167-4048(95)00017-8

In this paper a model is developed to study an intrusion detection process. From the model, a measure called the Secure Computation Index is proposed. This index is used to quantify the total security aspect of an intrusion-safe (or intrusion-resistant) system. Comparative studies based on the index can assist in making decisions on optimal strategic controls against any possible system intrusion. In this paper, we show how the model can be used to help in setting optimal intrusion-detection thresholds, which will provide the best intrusion coverage with the minimum false positive rate.

Keywords: Intrusion detection, Intrusion process model, Optimal intrusion-detection thresholds.

1. Introduction

In the past few years, several break-ins into computer networks and computer systems have occurred, and break-ins are becoming more widespread. They may be the acts of external intruders (outsiders) or internal intruders (insiders) or both. More often than not, these intruders attack the system with malicious intent to affect system security attributes, namely secrecy, integrity, and service availability. Needless to say, the widespread break-ins have led to considerable interest in (i) the development of strategies and approaches that would contain system intrusions and (ii) the creation and enforcement of policies and laws that would deter system intrusions.

In Section 2, types of system intrusion-detection mechanisms and audit trails are discussed. In Section 3, we develop an analytical model to analyze system intrusion processes, while the mathematical notation is given in Section 4. In Section 5, we present a system of equations, a solution method and an example, and develop a measure called the Secure Computation Index to quantify the total security aspect of an intrusion-safe computer system. In Section 6, we describe the sensitivity studies carried out by varying those parameters which have effects on system security, and present and discuss the results. Section 7 concludes this paper with future directions.

2. System intrusion detection

The main thrust of the intrusion-safe approach is to use audit trails, intrusion-detection mechanisms, and system monitoring techniques. The basic principle of these mechanisms is derived from the hypothesis that system security violations (including system misuses [1]) can be detected from abnormal patterns of system usage [2] and accounting [3]. Note that abnormalities in usage and accounting are correlated, since accounting data are generated from a pattern of usage. Audit data are collected and analyzed against the existing profiles, or historical usage or accounting patterns, for any possible system penetrations or aberrant usage.

The performance or effectiveness of these intrusion-detection mechanisms is measured by the intrusion coverage, denoted by ci, which is the conditional probability of detecting an intrusion or attempted break-in given that the intrusion or attempted break-in has occurred. Automated security analysis of audit trails can help improve the intrusion coverage. However, it is extremely difficult to achieve perfect intrusion coverage (i.e. ci = 1) for two reasons:

(1) normal and abnormal activities can be only partially characterized;

(2) even if normal and abnormal activities could be fully characterized, it might be prohibitively expensive to design intrusion-detection mechanisms covering all types of intrusions.

An alarm is flagged if certain intrusion-detection thresholds are reached. Both the detection sensitivity level and the false alarm rate depend on the thresholds set. When the detection sensitivity level is high, the false alarm rate will also be high. The problem of false positives can be approached by designing an intrusion-detection system which has better discriminating power. In [4], different detection mechanisms were studied to compare intrusion coverage and false alarm rates. From the test results of the study, it was observed that the relationship between the intrusion coverage and the false alarm rates remained relatively constant: the different detection mechanisms produced almost the same pairs of data. An intrusion coverage of about 99% results in a false alarm rate of about 8%, 95% intrusion coverage results in about 4% false alarms, 85% intrusion coverage results in about 2% false alarms, and 70% intrusion coverage results in about 1% false alarms.
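The trade-off described above can be made concrete with a toy threshold sweep. The sketch below is ours, not the experiment of [4]: it draws synthetic anomaly scores for normal and intrusive activity from two overlapping distributions (reflecting reason (1) above) and reports the intrusion coverage and false alarm rate obtained at several hypothetical alarm thresholds.

```python
# Illustrative only: synthetic anomaly scores, not the detectors evaluated in [4].
import random

random.seed(1)

# Hypothetical anomaly scores: normal activity scores low, intrusive activity high,
# but the two distributions overlap (activities are only partially characterized).
normal_scores = [random.gauss(0.0, 1.0) for _ in range(10000)]
intrusion_scores = [random.gauss(2.5, 1.0) for _ in range(10000)]

def rates(threshold):
    """Return (intrusion coverage c_i, false alarm rate) for a given alarm threshold."""
    coverage = sum(s >= threshold for s in intrusion_scores) / len(intrusion_scores)
    false_alarms = sum(s >= threshold for s in normal_scores) / len(normal_scores)
    return coverage, false_alarms

for t in [0.5, 1.0, 1.5, 2.0, 2.5]:
    c, f = rates(t)
    print(f"threshold={t:.1f}  coverage={c:.2f}  false alarm rate={f:.2f}")
```

Raising the threshold lowers the false alarm rate but also lowers the coverage; this is exactly the tension that the rest of the paper quantifies through the Secure Computation Index.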

2.1 Audit trails

Collection and analysis of audit trail information has long been a minimum security requirement in most database systems. In [5], a security audit trail is regarded as: "a set of records that collectively provide documentary evidence of processing used to aid in tracing from original transactions forward to related records and reports, and/or backwards from records and reports to their component source transactions."

In [2], audit trail analysis is suggested to address system intrusion threats posed by:

• Masquerading or successful break-in.
• Penetration by a legitimate user.
• Leakage by a legitimate user.
• Inference by a legitimate user.
• Malicious software.
• Denial of service.

In short, the aims of having effective audit trails are three-fold:

• To detect possible system intrusions.
• To deter possible system intrusions.
• To help assess the amount of damage caused, in the case where system intrusions are detected by other means.

In the past, audit trails have usually been geared more towards host-based intrusion-detection systems. However, in [6] the issues involved in network auditing are explored. These are summarized below in relation to audit data generated in a networked environment:

• Collection, which is concerned with "what events should be auditable and what events should be audited."
• Storage, which is concerned with "what data to store, where to store it, how to reduce its volume, and how long to keep it."
• Protection, which is concerned with "confidentiality, integrity of the data and management functions, controlling access to the data, ensuring that the audit mechanism does not cause denial of service and providing adequate trust in the mechanisms."
• Integration, which is concerned with "correlating related events in different components and compensating for differences in format and content of audit trail data collected by different components."
• Analysis, which is concerned with "finding ways to automate the detection of intrusion and damage assessment functions."
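As a concrete illustration of the collection, storage and integration concerns listed above, a networked audit record might be represented as a small, serializable structure. The field names below are our own assumptions for illustration; they are not prescribed by [5] or [6].

```python
# Hypothetical audit record layout; field names are illustrative, not from [5] or [6].
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class AuditRecord:
    timestamp: float        # when the event occurred (collection)
    host: str               # originating component, to support integration across hosts
    subject: str            # user or process responsible for the event
    event_type: str         # e.g. "login", "file_access", "connection"
    outcome: str            # "success" or "failure"
    details: str = ""       # free-form data kept small to limit storage volume

    def to_json(self) -> str:
        """Serialize in a common format so records from different components can be correlated."""
        return json.dumps(asdict(self))

record = AuditRecord(time.time(), "gateway-1", "uid=1042", "login", "failure", "bad password")
print(record.to_json())
```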

2.2 Types of system intrusion-detection mechanisms

There are various intrusion-detection mechanisms that can be incorporated into a secure system. An intrusion-detection system can be host-based, e.g. IDES [7], or network-based, e.g. N.S.M. [8]; knowledge-based or (data-driven) statistically-based. For a survey of other intrusion-detection systems, see [9].

The targets of a host-based intrusion-detection system are non-networked, stand-alone computers. The basic strategy is to develop user profiles for detecting suspicious user activities. On the other hand, in a network-based intrusion-detection system, the main strategy is to develop profiles of network resource usage in order to detect suspicious network traffic patterns.

The knowledge-based approach and the statistically-based approach both have their own limitations. For example, in the knowledge-based approach, an intrusion scenario that does not trigger a rule will not be detected. On the other hand, the statistically-based approach requires accurate statistical distributions; otherwise, a high false alarm rate will result. Fortunately, these approaches complement one another. Needless to say, to achieve a high intrusion coverage in an intrusion-detection system, a mix of these approaches is required.
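A minimal sketch of how the two approaches can be mixed is given below; the rule, the usage profile and the three-standard-deviation threshold are illustrative assumptions of ours rather than details of IDES or N.S.M.

```python
# Minimal sketch of mixing a knowledge-based rule with a statistical check.
# The rule, the profile statistics and the 3-standard-deviation threshold are
# illustrative assumptions, not taken from IDES or N.S.M.
from statistics import mean, stdev

def rule_based_alarm(event: dict) -> bool:
    # Knowledge-based: fire on a known intrusion scenario (repeated password failures).
    return event.get("type") == "password_failure" and event.get("count", 0) >= 5

def statistical_alarm(observed: float, profile: list, k: float = 3.0) -> bool:
    # Statistically-based: fire when usage deviates from the historical profile
    # by more than k standard deviations.
    mu, sigma = mean(profile), stdev(profile)
    return sigma > 0 and abs(observed - mu) > k * sigma

profile_cpu_hours = [1.2, 0.9, 1.1, 1.3, 1.0, 1.2, 0.8]
event = {"type": "password_failure", "count": 7}

alarm = rule_based_alarm(event) or statistical_alarm(9.5, profile_cpu_hours)
print("alarm raised:", alarm)
```

Combining the two checks raises the coverage, but, as discussed above, each added check also contributes to the false alarm rate.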

3. Model of system intrusion processes

Direct studies of actual intrusions and penetrations in real systems are quite impossible. Analytical models or simulated intrusions need to be used instead. Comparative or sensitivity studies based on the models constructed can assist in making decisions on optimal strategic controls against any possible system intrusion. They can also help in defining the settings of various parameters and their relationship to the likelihood of a successful intrusion.

We shall use a Markov model in this paper to study system intrusion processes. We assume that a system can be a node, a network (LAN or WAN) or a gateway. An important feature of our model is the assumption of a constant state transition rate (i.e. an exponential distribution). The issue of whether the process generated by intrusions satisfies that assumption is often debated. Some argue that the assumption of, for example, a constant transition rate is a simplification of intrusion processes. However, no tractable analytical solution techniques have yet been available for solving a non-Markovian model of intrusion processes.


Therefore, even if constant transition rates are not applicable, this does not mean that the solutions obtained from our model are useless. In fact the solutions could be used for sensitivity studies, giving a relative rank or importance for different intrusion-detection parameters, or strategic controls, for instance. Often this can provide a valuable insight into the relative characteristics of the intrusion-detection system. If necessary, the model can then be refined to account for non-constant transition rates by simulation and/or experiments (where appropriate). The results can then be used to improve the strategies. It is noteworthy that in intrusion processes which are relatively complex, the Markov modeling method appears to be the only feasible way to obtain an analytical solution.

Figure 1 (for the notation used in the figure, see Section 4) shows our model for a system intrusion process, which basically consists of nine states. State 0 is called the normal state. The system is in this state when there are no system security violations or when the first attempted break-in occurs. The system security attributes (information confidentiality, information integrity, service availability and information trustworthiness) are well maintained.

If the first attempted break-in is not detected by the intrusion-detection mechanisms, a transition from State 0 to State 1 occurs. More attempted break-ins may be carried out until either the intruder gives up (transition back to State 0) or penetration is successful (transition to State 2). A high rate of password failures may be generated. The source of attempts can originate from inside (the legitimate users) or outside (the unauthorized users). An intruder can attempt initially to infiltrate a less secure host or an obscure gateway computer and, having succeeded, use it as a way station to launch an attack on the ultimate target(s).

In State 2, the system has been penetrated. The intruder is able either to cross the system's first line of defence (usually the access control barriers) or to exploit the system's known flaws or weaknesses (this is called the "tunnel effect"). The Internet worm [10], for example, took advantage of the tunnel effect that existed in standard software such as the fingerd, gets and sendmail programs installed on many Unix systems [11]. Alternatively, an intruder's intention may be non-malicious (e.g. for fun, for a challenge, to see how many computers the intruder can crack, or for use as a way station to launch attacks on other networks); however, inadvertent harm to the system, users or owners may result.

If the break-in is malicious (e.g. planting malicious software, file deletions, and information theft), then a transition from State 2 to State 3 occurs. The system is under malicious attack in State 3. This is the most serious state, where financial or resource loss is likely and perhaps inevitable.

State 4 is a benign state. From here, the intruder logs out of the system, often leaving behind trap doors (e.g. recompiled login programs) to facilitate future logins.

State 5 is the false alarm state. The false alarm rate depends on the thresholds used in the detection mechanisms. The detection mechanisms fail to discriminate aberrant system usage unrelated to security, e.g. abnormal usage due to software updates or changing work tasks. Obviously, false alarms are treated as system overheads. When the false positive is validated, a transition from this state to State 0 occurs.

In State 6, the system succeeds in detecting the attempted break-in or the non-malicious penetration. When detected, the intruder is either denied access or unmasked. In both cases, a transition from this state to State 0 occurs.

In State 7, the intrusion is detected but loss of system integrity, and/or secrecy, and/or service availability has already occurred. A system clean-up state (State 8) is therefore required.


The system is first isolated and then reconfigured to the normal state in State 8. The isolation is required to prevent further attacks during clean-up. A system audit is carried out to determine the extent of the damage, followed by system reconfiguration and recovery. The clean-up activities may include: deleting any accounts that have been created by an intruder; examining closely any changes to the file systems that affect system security, safety, reliability, etc.; reformatting disks; and installing restorable and working file backups or new software. All this, of course, involves cost and time; how much it costs in total depends on the extent of the malicious attack.
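The state space just described can be summarized in code. The sketch below lists the nine states and the transitions named in the text; the rate symbols in the comments follow the notation introduced in Section 4 and should be read as our summary of the prose, not as a verbatim transcription of Fig. 1.

```python
# The nine states of the intrusion-process model and the transitions described in
# the text above. The arcs and rate symbols are our reading of the prose (using the
# notation of Section 4), not a verbatim copy of Fig. 1.
from enum import Enum

class State(Enum):
    NORMAL = 0            # no violation, or first attempted break-in
    ATTEMPT = 1           # attempted break-in not detected
    PENETRATED = 2        # access control crossed or tunnel effect exploited
    MALICIOUS = 3         # system under malicious attack
    BENIGN = 4            # intruder logged out (possibly leaving trap doors)
    FALSE_ALARM = 5       # detection mechanism fired on benign aberrant usage
    DETECTED = 6          # attempted break-in / non-malicious penetration detected
    DETECTED_DAMAGE = 7   # intrusion detected after loss of integrity/secrecy/availability
    CLEANUP = 8           # isolation, audit, reconfiguration and recovery

TRANSITIONS = [
    (State.NORMAL, State.ATTEMPT, "first break-in attempt not detected"),
    (State.ATTEMPT, State.NORMAL, "intruder gives up"),
    (State.ATTEMPT, State.PENETRATED, "penetration succeeds"),
    (State.ATTEMPT, State.DETECTED, "attempted break-in detected (delta_1)"),
    (State.PENETRATED, State.DETECTED, "non-malicious penetration detected (delta_2)"),
    (State.PENETRATED, State.MALICIOUS, "malicious attack begins"),
    (State.PENETRATED, State.BENIGN, "intruder logs out (beta)"),
    (State.BENIGN, State.PENETRATED, "intruder logs back in (gamma)"),
    (State.NORMAL, State.FALSE_ALARM, "false alarm raised"),
    (State.FALSE_ALARM, State.NORMAL, "false positive validated (mu_1)"),
    (State.DETECTED, State.NORMAL, "intruder denied access or unmasked (mu_2)"),
    (State.MALICIOUS, State.DETECTED_DAMAGE, "attack detected after damage (delta_3)"),
    (State.DETECTED_DAMAGE, State.CLEANUP, "clean-up initiated (nu)"),
    (State.CLEANUP, State.NORMAL, "system reconfigured and recovered (mu_3)"),
]

for src, dst, why in TRANSITIONS:
    print(f"{src.name:16s} -> {dst.name:16s}  {why}")
```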

4. Mathematical notation and description

The following notation will be used in this paper:

(i) 1/λ₁: mean time (in hours) between each subsequent first attempted break-in;

(ii) 1/λ₂: mean time (in hours) to successfully cross the access control barriers (e.g. successfully cracking a password);

(iii) 1/λ₃: mean time (in hours) to complete a malicious attack;

(iv) τ: penetration rate through the tunnel effect (a measure of the levels of penetration through operating systems);

(v) 1/β: mean time (in hours) between each subsequent logout;

(vi) 1/γ: mean time (in hours) between each subsequent login;

(vii) ν: transition rate from the malicious detection state to the system reconfiguration state;

(viii) 1 − p: probability of breaking the access control barriers; based on the reports in [12, 13], the success rate for cracking a password has been estimated at 5%, which may appear to be conservative (see the notes below); p measures the effectiveness of the access control mechanisms;

(ix) μ₁, μ₂, μ₃: system reconfiguration rates from State 5, State 6, and State 8, respectively;

(x) the false alarm rate;

(xi) δ₁, δ₂, δ₃: detection rates from State 1, State 2, State 3, respectively;

(xii) cᵢ: conditional probability that an intrusion or attempted break-in is detected, given that it has occurred (intrusion coverage);

(xiii) Pᵢ(t): probability that the system will be in State i at time t.

Notes for (viii): it has been claimed that, with the availability of Ethernet sniffers and faster password crackers (using vector processors), the success rate of password guesses is as high as 90%. But the claim does not mention whether password security-related measures, such as the use of password checkers and the requirement of periodic password changes, have also been put in place. (A demonstration has shown that, using 120 workstations, 2 parallel supercomputers and about 8 days of CPU time, a 40-bit encryption code could be cracked.) However, we are of the opinion that these are isolated and impractical cases.
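For later reference, the notation can be gathered into a single parameter record. The default values below are the illustrative ones used in the example of Section 5.2; they are not measured quantities.

```python
# Model parameters gathered into one record. Default values are the illustrative
# ones used in the example of Section 5.2; they are not measurements.
from dataclasses import dataclass

@dataclass
class ModelParameters:
    lambda1: float = 1 / 24   # rate of first attempted break-ins (1/lambda1 = 24 h)
    lambda2: float = 1 / 24   # rate of crossing the access control barriers (1/lambda2 = 24 h)
    lambda3: float = 1 / 12   # rate of completing a malicious attack (1/lambda3 = 12 h)
    beta: float = 1 / 12      # logout rate (1/beta = 12 h)
    gamma: float = 1 / 12     # login rate (1/gamma = 12 h)
    tau: float = 0.5          # penetration rate through the tunnel effect
    nu: float = 0.5           # malicious-detection -> reconfiguration transition rate
    p: float = 0.95           # access control effectiveness (1 - p = 0.05 break-in probability)
    mu1: float = 1.0          # reconfiguration rate from State 5 (false alarm)
    mu2: float = 0.5          # reconfiguration rate from State 6 (detection)
    mu3: float = 0.25         # reconfiguration rate from State 8 (clean-up)
    delta1: float = 0.9       # detection rate from State 1
    delta2: float = 0.9       # detection rate from State 2
    delta3: float = 0.9       # detection rate from State 3

params = ModelParameters()
print(params)
```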


5. System of equations

We define a steady-state measure (for comparative studies) called the Secure Computation Index (SCI) to give an insight into the expected response to a system intrusion or penetration of infinite duration. (A continuous-time measure, for real-time analysis, corresponding to the SCI is a subject of future research.) Since the SCI is a steady-state parameter and State 0 (see Fig. 1) relates to the total security aspect (including reconfiguration) of an intrusion-safe system, we define the SCI as the value of P₀ = P₀(∞).

From the model shown in Fig. 1, it is obvious that the Markov chain is irreducible and hence Pᵢ = Pᵢ(∞) exists. Pᵢ also means the long-run proportion of time the system resides in State i; in particular, P₀ is the long-run proportion of time the system resides in State 0. By considering first a set of Kolmogorov's forward equations obtained from the model shown in Fig. 1 and then letting t approach ∞, the following system of linear homogeneous equations for the steady-state probabilities Pᵢ = Pᵢ(∞) can be derived (see [14] for details):

[Equations (1)-(9): the steady-state balance equations for States 0-8 of the model in Fig. 1, written in terms of the rates and probabilities defined in Section 4 and the intrusion coverage cᵢ.]
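Numerically, equations of this kind are the balance equations πQ = 0 of a continuous-time Markov chain, solved together with the normalization condition that the probabilities sum to one. The sketch below shows the generic computation; the 3-state generator matrix used is purely illustrative and is not the 9-state generator of Fig. 1, which we do not reproduce here.

```python
# Generic steady-state solution of a continuous-time Markov chain: solve pi Q = 0
# subject to sum(pi) = 1. The 3-state generator below is purely illustrative; the
# model of Fig. 1 would use a 9x9 generator built from the rates in Section 4.
import numpy as np

def steady_state(Q: np.ndarray) -> np.ndarray:
    """Return the stationary distribution pi of an irreducible CTMC with generator Q."""
    n = Q.shape[0]
    # Replace one balance equation with the normalization constraint sum(pi) = 1.
    A = np.vstack([Q.T[:-1, :], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Illustrative 3-state chain (rates per hour); each row sums to zero.
Q = np.array([
    [-0.06,  0.05,  0.01],
    [ 0.90, -1.00,  0.10],
    [ 0.25,  0.00, -0.25],
])

pi = steady_state(Q)
print("steady-state probabilities:", np.round(pi, 4))
print("SCI would be P0 =", round(pi[0], 4))
```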

5.1 Solution

The steady-state probabilities Pᵢ satisfy the following identity (see [15] for details):

Pᵢ = kᵢ P₀,  i = 1, 2, ..., 8,

where the coefficients kᵢ, in conjunction with the solutions of (1)-(9), can be expressed in terms of the model parameters defined in Section 4.

Fig. 1. State transition diagram for the system intrusion process.

Applying the law of total probability, we obtain

P₀ (1 + k₁ + k₂ + ... + k₈) = 1,

that is,

P₀ = 1 / (1 + k₁ + k₂ + ... + k₈).   (10)

5.2 Example

In order to show the usefulness of the model developed in the previous section, we shall at this point consider a sample system which has the following parameters:

mean time (in hours) between each subsequent first attempted break-in (1/λ₁) = 24;
mean time (in hours) to successfully cross the access control barriers (1/λ₂) = 24;
mean time (in hours) to complete a malicious attack (1/λ₃) = 12;
mean time (in hours) between each subsequent logout (1/β) = 12;
mean time (in hours) between each subsequent login (1/γ) = 12;
penetration rate through the tunnel effect (τ) = 0.5;
transition rate from the malicious detection state to the system reconfiguration state (ν) = 0.5;
probability of breaking the access control barriers (1 − p) = 0.05;
system reconfiguration rate from State 5 in Fig. 1 (μ₁) = 1.0;
system reconfiguration rate from State 6 in Fig. 1 (μ₂) = 0.5;
system reconfiguration rate from State 8 in Fig. 1 (μ₃) = 0.25;
detection rate from State 1 in Fig. 1 (δ₁) = 0.9;
detection rate from State 2 in Fig. 1 (δ₂) = 0.9;
detection rate from State 3 in Fig. 1 (δ₃) = 0.9.

The selection of some of the parameter values, e.g. those of λ₁, λ₂, etc., is for illustrative purposes only. However, the values of the false alarms corresponding to certain intrusion coverages are based on the test results cited in [4]. These are shown in Table 1. In this example we are particularly interested in the SCI versus the intrusion coverage: P₀(∞) versus cᵢ.

TABLE 1. False alarm rate versus intrusion coverage [4]

False alarm rate    Intrusion coverage
0.00                0.20
0.01                0.33
0.02                0.62
0.03                0.82
0.04                0.92
0.05                0.95
0.06                0.99

Figure 2 shows the impact of an intrusion-detection process on the SCI. As can be seen from Fig. 2, an increase in the intrusion coverage does not necessarily mean an increase in the SCI. In fact, the index reaches a maximum value at a certain optimal intrusion coverage, after which it starts to decline sharply. This can be explained by the fact that as intrusion coverage is increased, so is the false positive rate. Given this, the system now resides less of the time in State 0 as it is spending more time in State 5; hence P₀(∞) is reduced. Reducing the false positive rate can be achieved only at the cost of losing some sensitivity to potential intrusions or attempted break-ins. On the other hand, increasing the intrusion coverage can be done only at the expense of increasing the false alarm rates, which in turn will reduce the SCI. The figure shows that an intrusion coverage of about 82% and a false alarm rate of 3% yields the best SCI for this system.

Fig. 2. SCI vs cᵢ for 1/λ₁ = 1/λ₂ = 24, 1/λ₃ = 12, 1/β = 12, 1/γ = 12, δ₁ = 0.90, δ₂ = 0.90, δ₃ = 0.90, p = 0.95, ν = 0.5, τ = 0.5, μ₁ = 1, μ₂ = 0.5, μ₃ = 0.25.

The above analysis is important, as the current intrusion-detection systems, to the best of our knowledge, lack a quantitative approach to setting optimal intrusion-detection thresholds which will provide the best discriminating power.
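In code, the threshold-setting decision amounts to searching the detector's attainable operating points for the one that maximizes the SCI. The sketch below uses the (false alarm rate, intrusion coverage) pairs of Table 1; the sci() function is a placeholder of ours standing in for P₀ from equation (10), not the paper's formula.

```python
# Choosing the detection threshold operating point that maximizes the SCI.
# sci(coverage, false_alarm) is assumed to implement equation (10) for a fixed
# parameter set; the simple stand-in below is only so the sketch runs end to end.
OPERATING_POINTS = [  # (false alarm rate, intrusion coverage), as in Table 1 [4]
    (0.00, 0.20), (0.01, 0.33), (0.02, 0.62), (0.03, 0.82),
    (0.04, 0.92), (0.05, 0.95), (0.06, 0.99),
]

def sci(coverage: float, false_alarm: float) -> float:
    # Placeholder: rewards coverage, penalizes false alarms. Replace with P0 from eq. (10).
    return 0.80 + 0.10 * coverage - 20.0 * false_alarm ** 2

best_fa, best_c = max(OPERATING_POINTS, key=lambda fp: sci(fp[1], fp[0]))
print(f"optimal operating point: coverage={best_c:.2f}, false alarm rate={best_fa:.2f}")
```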

6. Sensitivity analysis

Since, practically, several of the system intrusion parameters would be difficult to obtain for some years, sensitivity analysis is essential to compensate for this difficulty. Results obtained from the analysis can be used to ascertain where future effort should be directed. Hence, studies were carried out on:

• the effect of system reconfiguration rates on the SCI;
• the effect of break-in and attack rates on the SCI;
• the effect of access control mechanisms on the SCI.

The results of these studies are shown in Figs 3-6.

6.1 Discussion of results

Figure 3 shows that if the reconfiguration rates are low, a high intrusion coverage will not help achieve a high SCI, but will instead cause the SCI to deteriorate. It also shows that reconfiguration mechanisms play a significant role in achieving a high SCI. From this we deduce that, from the point of view of total security, good reconfiguration mechanisms are necessary.

In Fig. 4 it is interesting to note that the system reconfiguration rate μ₂ has the most impact on the SCI until cᵢ ≈ 0.95. Thereafter, the system reconfiguration rate μ₁ becomes the most dominant factor. Equally interesting is the observation that both μ₁ and μ₃ have almost the same impact on the SCI when cᵢ varies from 0.20 to 0.35.
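The sweeps behind Figs 3 and 4 can be reproduced in the same style: recompute the SCI over a grid of coverage values for several reconfiguration-rate settings. The sci() function below is again a stand-in placeholder for equation (10), and the three (μ₁, μ₂, μ₃) settings are those of the Fig. 3 legend.

```python
# Sensitivity sweep over the reconfiguration rates, in the spirit of Fig. 3.
# sci(...) is assumed to implement equation (10); the function body below is a
# stand-in so the loop runs. The three (mu1, mu2, mu3) settings follow Fig. 3.
RECONFIG_SETTINGS = [(0.20, 0.10, 0.05), (1.00, 0.50, 0.25), (5.00, 2.50, 1.25)]
COVERAGES = [i / 10 for i in range(0, 11)]

def sci(coverage, mu1, mu2, mu3):
    # Placeholder for P0 from eq. (10) evaluated at the example parameters.
    return 0.70 + 0.05 * (mu1 + mu2 + mu3) ** 0.5 + 0.05 * coverage - 0.10 * coverage ** 2

for mu1, mu2, mu3 in RECONFIG_SETTINGS:
    row = ", ".join(f"{sci(c, mu1, mu2, mu3):.3f}" for c in COVERAGES)
    print(f"mu=({mu1}, {mu2}, {mu3}):  SCI = {row}")
```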

Fig. 3. SCI vs cᵢ for 1/λ₁ = 1/λ₂ = 24, 1/λ₃ = 12, 1/β = 12, 1/γ = 12, p = 0.95, ν = 0.5, δ₁ = 0.90, δ₂ = 0.90, δ₃ = 0.90, τ = 0.5; curves: μ₁ = 0.20, μ₂ = 0.10, μ₃ = 0.05; μ₁ = 1.00, μ₂ = 0.50, μ₃ = 0.25; μ₁ = 5.00, μ₂ = 2.50, μ₃ = 1.25.

Figure 5 shows the impact of the attack rates λᵢ, i = 1, 2, 3. If the attack rates are relatively high, the intrusion coverage needs to be high in order to maintain a satisfactory SCI. If the attack rates are relatively low, high intrusion coverage may have an adverse effect on the SCI, probably due to the fact that any increase in intrusion coverage is matched by an increase in false alarms. Figure 6 shows how p, a measure of the access control mechanism effectiveness, affects the SCI.

Fig. 4. SCI vs cᵢ for λ₁ = λ₂ = 0.05, λ₃ = 0.10, β = 0.10, γ = 0.10, p = 0.95, ν = 0.5, δ₁ = δ₂ = δ₃ = 0.90, τ = 0.5; curves: μ₁ = 4.00, μ₂ = 1.00, μ₃ = 1.00; μ₁ = 1.00, μ₂ = 4.00, μ₃ = 1.00; μ₁ = 1.00, μ₂ = 1.00, μ₃ = 4.00.

In our sample system, p does not seem to have much effect on the SCI. However, for significantly high attack rates λᵢ, i = 1, 2, 3, the effect of p on the SCI will be much greater.

Fig. 5. SCI vs cᵢ for 1/β = 12, 1/γ = 12, p = 0.95, ν = 0.5, τ = 0.5, δ₁ = 0.90, δ₂ = 0.90, δ₃ = 0.90, μ₁ = 1, μ₂ = 0.5, μ₃ = 0.25; curves: 1/λ₁ = 4.8, 1/λ₂ = 4.8, 1/λ₃ = 2.4; 1/λ₁ = 24, 1/λ₂ = 24, 1/λ₃ = 12; 1/λ₁ = 120, 1/λ₂ = 120, 1/λ₃ = 60.

Fig. 6. SCI vs cᵢ for 1/λ₁ = 1/λ₂ = 24, 1/λ₃ = 12, 1/β = 12, 1/γ = 12, ν = 0.5, τ = 0.5, δ₁ = 0.90, δ₂ = 0.90, δ₃ = 0.90, μ₁ = 1, μ₂ = 0.5, μ₃ = 0.25; curves: p = 0.50, p = 0.95, p = 0.99.

7. Future work

Simulation experiments will be carried out to validate the model for the system intrusion process. Future research will also be carried out using the transient state probabilities in a continuous-time Markov model to compute the total security aspect in the time domain. This will give a more accurate picture, as intrusion-detection systems are often on-line and run in real-time. Another aspect that needs to be investigated is the case of multiple, independent intrusions.

References

[1] P. Helman and G. Liepins, Statistical foundations of audit trail analysis for the detection of computer misuse, IEEE Trans. on Software Eng., SE-19 (Sep. 1993) 886-901.
[2] D.E. Denning, An intrusion-detection model, IEEE Trans. on Software Eng., SE-13 (Feb. 1987) 222-232.
[3] C. Stoll, Stalking the wily hacker, Communications of the ACM, 31 (May 1988) 484-497.
[4] G.E. Liepins and H.S. Vaccaro, Intrusion detection: its role and validation, Computers & Security, 11 (1992) 347-355.
[5] DoD (U.S. Dept. of Defense), Trusted Computer System Evaluation Criteria, DoD 5200.28-STD, Dec. 1985.
[6] S.I. Schaen and B.W. McKenney, Research, standards and policy directions for network auditing, The 7th Intrusion Detection Workshop, Menlo Park: SRI International, May 1991.
[7] T.F. Lunt, A. Tamaru, F. Gilham, R. Jagannathan, C. Jalali and P.G. Neumann, A real-time intrusion-detection expert system (IDES), Final Technical Report, SRI International, Menlo Park, CA, Feb. 1992.
[8] L.T. Heberlein, G.V. Dias, K.N. Levitt, B. Mukherjee, J. Wood, and D. Walker, A network security monitor, in Proc. 1990 IEEE Symp. on Research in Security and Privacy, Oakland, CA, pp. 296-303, May 1990.
[9] T.F. Lunt, Automated audit trail analysis and intrusion detection: a survey, in Proc. 11th National Computer Security Conference, Baltimore, MD, Oct. 1988.

[10] E. Spafford, The Internet Worm Incident, Tech. Rep. no. CSD-TR-933, Comp. Sci. Dept., Purdue University, Sep. 1991.
[11] S. Garfinkel and G. Spafford, Practical UNIX Security, O'Reilly & Associates, Sebastopol, CA, 1992.
[12] M. Mandell, The West German hacker incident and other intrusions, in Computers under Attack: Intruders, Worms, and Viruses, P.J. Denning (ed.), ACM Press, New York, NY, 1990, pp. 150-155.
[13] D. Seeley, Password cracking: a game of wits, Communications of the ACM, 32 (June 1989) 700-703.
[14] K. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Science Applications, Prentice-Hall, Englewood Cliffs, NJ, 1982.
[15] L. Kleinrock, Queueing Systems, Volume 1: Theory, John Wiley & Sons, New York, 1975.

Dr B.C. Soh is a faculty member of the School of Computer Science and Computer Engineering at La Trobe University, Melbourne, and a member of IEEE and its affiliates, the Computer Society and the Reliability Society. His main research areas include network security, system intrusion detection, malicious software, system dependability evaluation, and fault-tolerant, secure and safe computing.

T.S. Dillon received a PhD degree from Monash University in 1974. He is now the Professor and Head of Computer Science and Computer Engineering at La Trobe University and a senior member of IEEE. He is Editor-in-Chief of the International Journal of Computer Systems Science and Engineering and the International Journal of Engineering Intelligent Systems. He has published more than 400 research papers and four books and made significant contributions to a number of research areas including expert and intelligent systems, neural networks, object-oriented computing, computer communications, and fault-tolerant, secure and safe computing.