
Expert Systems With Applications, Vol. 2, pp. 153-165, 1991

0957-4174/91 $3.00 + .00 © 1991 Pergamon Press plc

Printed in the USA.

SIPDES: A Simulation Program Debugger Using an Expert System

GEORGIOS I. DOUKIDIS AND RAY J. PAUL

London School of Economics and Political Science, Aldwych, London, England

Abstract--SIPDES is an expert system that has been developed to aid in the debugging of simulation models. The simulation models are written in Pascal and are hand written or are derived from the use of an interactive simulation program generator. User amendments to the program occur because either the generator cannot handle all of the model complexity, or because of a revision to the problem description. These amendments give rise to run time errors or errors seen at the reporting stage of the program run. An expert system was considered to be a suitable method for developing a debugging tool because of the limited domain of the problem (simulation and Pascal); because of the versatility of these systems in handling changes and extensions; and because of the limited availability of experts who had to acquire their expertise empirically. Experiences of system design, knowledge acquisition, and system validation are related. Future uses for such systems are outlined. The type of expert system development tool used is also discussed.

1. INTRODUCTION

SEVERAL AUTHORS HAVE commented (O'Keefe, 1986; Paul, 1989a, 1989b) on the relationship between artificial intelligence and discrete event simulation modeling. Many examples exist of each approach incorporating the other approach, or of using the other approach as a useful adjunct. In this article, we look at the role that artificial intelligence, particularly expert systems, can play in simulation environments, and illustrate this by means of an expert system that has been developed to debug faulty simulation programs.

The Computer Aided Simulation Modeling (CASM) research group at the London School of Economics (LSE) is dedicated to producing computer systems that make the use of simulation models more efficient in both cost and time (Balmer & Paul, 1986). Some experiences using artificial intelligence techniques have already been reported (Doukidis & Paul, 1985, 1986; Paul & Doukidis, 1986). At the heart of the first CASM simulation system is a suite of Pascal routines, the extended Lancaster Simulation Environment (called the eLSE routines). These routines are a modified and enhanced version of a suite of Pascal routines developed at the University of Lancaster, England (Crookes et al., 1986). The eLSE routines provide the support for writing discrete event simulation programs using a particular model structure known as three phase (Crookes et al., 1986). Sampling, queue handling, a time-advance mechanism, and data collection are examples of some of the routines in the eLSE library.

This CASM system is supported by an interactive simulation program generator (ISPG) that, based on information taken from an activity cycle diagram description of the problem, produces a Pascal simulation program using the eLSE suite of supporting routines (Paul & Chew, 1987). This ISPG is similar in style to the CAPS system that produces models in the ECSL language (Clementson, 1982). Powerful though this type of ISPG is, some complex problem decision rules cannot always be handled directly, requiring the amendment of the generated code. Also, reevaluation of the problem being modeled gives the analyst the choice of using the ISPG again or amending the code. For teaching purposes, students write a simulation program without using the ISPG, to enhance their understanding of the modeling structure.

From experience of teaching students at the LSE and of applied work with the systems (for example, El Sheikh et al., 1987), a variety of program errors, both run-time and in the output, have been determined. These errors are typically diagnosed by a limited number of "experts" in the simulation system, whose availability is usually restricted. A number of solutions to this problem of scarcity of expert advice have been devised, and the expert system debugger SIPDES (Simulation Program Debugger using an Expert System) described in this article is one of them.

Requests for reprints should be sent to Ray J. Paul, London School of Economics and Political Science, Houghton Street, Aldwych, London WC2A 2AE, England.


SIPDES is a stand-alone expert system designed to help an analyst or student discover where his simulation program written with the eLSE routines has gone wrong. The error may be a run-time error (for example, attempting to move an entity from an empty queue), or it may be an obvious mistake in the output (for example, nothing happens, or an entity disappears completely over time). The SIPDES system provides messages indicating the nature of the hypothesis being tested, as well as the normal help facilities. SIPDES can be defined (Shannon, Mayer, & Adelsberger, 1985) as an Expert Simulation System since its goal is "to make it possible for engineers, scientists and managers to do simulation studies correctly and easily without such elaborate training." SIPDES can also be considered an advice-giving simulation expert system, a class defined (O'Keefe, 1986) as systems "that assist the simulation scientist and simulation user." Since SIPDES can be applied to a simulation program that models any of this class of simulation problems, it is domain independent. Its knowledge base concerns simulation in general, the eLSE systems in particular, and relevant aspects of Pascal as well.

2. REASONS FOR DESIGNING SIPDES

Software maintenance can be broken down into a number of activities (Land, 1985):
1. corrective maintenance, which is the response to an assessment of failure;
2. adaptive maintenance, which is the response to a change in the data or in the processing environment;
3. perfective maintenance, which is concerned with the elimination of inefficiency, enhancing performance, and improving maintainability.
The overall effort devoted to maintenance activities is estimated to be 45-50 percent (Land, 1985), with the largest effort going into perfective maintenance. Here we are concerned with the corrective maintenance usually known as "debugging," which has been defined as "the art of beating an error once its existence has been established" (Van Tassel, 1978). A known error must exist before debugging, otherwise it is testing. A bug occurs when a system performs contrary to its expected function. Bugs, and the response to bugs, vary with the type and location of the bug and the cost (in time or in money) of correction. For example, types of bugs are:
1. specification,
2. design (logic, control, structure, interface),
3. implementation (coding, syntax, transcription),
4. execution (initialization, computation, data).
Similar classifications based on the type, location, and cost of bugs exist (Basili & Perricone, 1984; Van Tassel, 1978). There are a number of methods and tools available for debugging, ranging from the "improved knowledge

of language" method (Van Tassel, 1978) to the Cornell program synthesiser (Teitelbaum & Reps, 1981). Since we are setting the scene for SIPDES, we shall only briefly mention some of the distinctive computer-aided tools that are used in debugging. They are the debugging compiler, intelligent editors, and various debugging aids written into the program. The compiler's discovery of syntax errors is the most important and most taken-for-granted stage of debugging. The greater the number of errors discovered and corrected at this stage, the easier all later debugging and testing will be. By using a debugging compiler (Van Tassel, 1978) the syntax is more carefully examined, and the interaction of commands is checked. More importantly, numerous checks are also made during execution of the source program. The main disadvantage of the debugging compiler is that the additional checking requires extra effort and hence execution time is generally slow. Programs are not merely text but are hierarchical compositions of computational structures and hence they can be edited, executed, and debugged easily in a programming environment that consistently acknowledges this viewpoint. Such an interactive programming environment is the Cornell program synthesiser (Teitelbaum & Reps, 1981) that has integrated facilities to create, edit, execute, and debug programs. With the synthesizer, programs are created top-down by inserting new statements and expressions at a cursor position within the skeleton of previously entered templates. Routine diagnostic facilities are livewire syntax directed. Discrete computational units of execution correspond exactly to the syntactic units of the editor. When tracing, the screen cursor indicates the location of the instruction pointer in the source code as the program executes. There are a number of debugging aids that are written into a program as it is being written, such as dumps, and traces. 
A dump is a record of information of the status of the program at any given time. It is of limited use mainly because it is provided in machine language. A trace is a record of the path of execution of the program. It can be used to see if the program is executed in the same sequence as the programmer intended and if the variables have the desired values stored in them. For example, the muLISP programming environment has a well designed tracing facility (Doukidis, Shah, & Angelides, 1988). Within the CASM simulation programming environment there is no debugging compiler and obviously the design of an "intelligent editor" similar to a synthesizer would be infeasible in terms of the effort required. The third type of debugging aid can be said to be partly employed since there are display commands in key places of the simulation code. Hence when the program is running, the user is able to follow the logic of the model. Run-time error messages of varying


quality are also given. Unfortunately these facilities are not wholly adequate for dealing with user-induced errors resulting from amendments to the code.

The reasons why SIPDES was developed as a debugging tool are as follows. The problem of debugging simulation program code has all the hallmarks of an expert system application (as outlined below). The availability of an expert system development environment made the task of writing SIPDES feasible. Crucially, the CASM software systems are under constant development. Whilst the systems could be updated for potential user mistakes, these are difficult to determine before the system modifications have been made. Empirical evidence is required to determine new problems that arise. Only run-time errors can be detected within the system. Mistakes that can only be detected at the reporting stage cannot be determined by an inbuilt error system. Reporting stage errors are the most difficult to detect and are the source of most requests for expert advice. With the development of the CASM systems, the incorporation of run-time error detection expands disproportionately with the size of the system, consuming more computer memory and/or slowing the system down. Since an ISPG program will work, and efficiency is a major objective, the incorporation of an expensive and frequently updated run-time error detecting system into the simulation system is an expensive aberration. Run-time errors should be few in number if the systems are used correctly. However, whilst the detection of these errors can be sought from the expert system, its major use is to determine the cause of mistakes uncovered at the report stage.

3. THE EXPERT SYSTEM BACKGROUND TO SIPDES

We have established that during simulation modeling some assistance is needed to discover and correct semantic and logic errors introduced into program code by user modifications. This assistance is provided to the user by an expert.
Ideally, the expert does not simply tell the user how to correct a particular problem, but attempts to educate the user in effective debugging techniques so that the user is given advice on how to isolate and discover the problem. Since the availability of "experts" was limited, we developed a system that could "replace" the experts (the tutors). Each session with the system should strengthen the user's ability to solve problems without assistance and thus reduce the user's dependence on the experts. There are many such debugging systems that can assist users in locating and correcting errors in their programs, and many of them are expert systems (Christensen et al., 1985; Johnson & Soloway, 1985; Smith, Fink, & Lusth, 1985; Hill & Roberts, 1987). There are good reasons why expert systems have been used by ourselves and other authors.


A basic checklist for the "suitability" of an expert system approach to a particular problem could be set out as follows (Forsyth, 1984):

SUITABLE                    UNSUITABLE
Diagnostic                  Calculative
No established theory       Magic formulae exist
Human expertise scarce      Human experts are two a penny
Data noisy                  Facts are known precisely

One of the great advantages of the expert system approach is the flexibility afforded. This is due to the nature of the design of an expert system, with the main control structure separated from the knowledge base. This allows additions to be made to the knowledge base with little or no alteration required to the inference engine. At the beginning of system development there was no indication of how many errors or what types of error the system would need to accommodate. The acquisition of knowledge would be a continuous process throughout and perhaps even beyond the period of system development. The system decided on, then, would have to be able to cope with this continued growth. The expert system structure allows for expansion within its knowledge base with relative ease. One of the important functions of the system would be to educate the user in how to analyze his programs and uncover any problems, thereby operating as a computer-based intelligent tutor. Expert systems are considered to be successful intelligent tutors (Land, 1985; Sleeman & Brown, 1982). One further consideration in favor of the expert system approach was the availability of an expert system development tool known as ASPES (Doukidis & Paul, 1987). Any necessary modification to ASPES could easily be made, thereby enhancing it as well. ASPES is a tool for designing, consulting, and experimenting with expert systems. It is a research tool that is also used as a teaching aid at the London School of Economics. ASPES is a skeletal system written in Pascal: it is an expert system building tool to which the user adds the Pascal code for the particular application. The virtue of this approach is that the modeling process is entirely under the control of the user in that Pascal can be written transparently and is well supported. The system works under VMS on the VAX and also on the IBM PC. 
The architecture of ASPES is presented in Figure 1 and consists of two levels: the executive level and the library units level. Each of the library units consists of declarations, procedures, and functions concerned with some aspect of expert systems programming. The units are knowledge-base maintenance, inference mechanisms, an explanation facility, a user interface, an external system interface, and list manipulation. The executive is a program which consists of the controller, the knowledge-base maintenance tool and the inference


[Figure 1: the executive level (controller, knowledge-base maintenance tool, inference engine) sits above the library units level (knowledge-base maintenance, inference mechanisms, explanation facility, user interface, external system interface, list manipulation).]

FIGURE 1. Structure of ASPES.

engine, to which the user adds his own problem specification using the support provided by the units as necessary. The knowledge base is held in an external text file(s) in the form of if-then rules, possible hypotheses to be proved, and facts with opposite meanings. When the system is run, the knowledge base resides in the working memory as a variety of linked lists of records.

4. SIPDES

True diagnostic expert systems have been defined (Stuart et al., 1985) as systems that "attempt to proceed from symptoms to causes or possible causes of the problem. During a session with these systems the user may be asked to supply additional symptoms as the diagnosis proceeds, but specific instructions about how to obtain these additional symptoms is not provided." Bennet and Clifford (1981) and Hartley (1981) give good examples of such systems.

SIPDES is not a purely diagnostic expert system. It includes detailed step-by-step instructions for troubleshooting when the cause of the problem cannot be identified precisely. It is intended to serve both as a training tool and as a debugging aid for inexperienced (and sometimes experienced) eLSE users. When a hypothesis has been proven, a course of action is recommended to the user. Where it is appropriate, the system provides extra information on what that action may entail, after giving examples of the correct code required.

The overall system control starts with data-driven forward-chaining followed by goal-driven backward-chaining. This approach is similar to that followed by the expert system shell PESYS (Doukidis & Whitley, 1987, 1988). When SIPDES is run, the user is presented with a top level set of goals, as shown below:

At which stage was the error noted?
1--Before all activities have been completed at least once.
2--Sometime in the simulation run.
3--At the simulation final report.

The user, by choosing any of these options, can narrow the search by indicating the subgoal to focus on. Once the specific rules are invoked, SIPDES works as a regular back-chainer. Hence SIPDES can be seen to operate at two levels. In the higher level, SIPDES uses easily observed information about the initial symptoms of the problem to identify a subset of possible bugs. SIPDES then proceeds to the second level where step-by-step instructions about bug finding for the identified subset are provided.

SIPDES can explain its line of reasoning at any point on demand. This capability enhances both its ability to serve as a trainer to the inexperienced user, and its credibility as an expert consultant to the more experienced user. SIPDES uses both the general explanation facility provided by ASPES and one especially designed for this domain.

5. DEVELOPING THE KNOWLEDGE BASE

The first task in the construction of SIPDES was the acquisition of the knowledge base. This was accomplished through discussions with users and experts to determine regular errors, and with the experts (i.e., those familiar with the eLSE routines) to determine the solutions. Experimentation was also carried out to investigate errors and their causes.
As the error information was accumulated, a tree diagram was constructed. The diagram helped to give some indication of the form and size of the problem that the expert system would have to handle. The diagram also helped to give a feel for the causal relationship of errors and how the system should guide the user. These errors are related to misuse of the eLSE routines or errors in the simulation program's logic (e.g., adding an entity to the wrong queue). It became apparent when designing the tree diagram that the errors fell into two broad groups: (i) errors that interrupt the simulation run and (ii) errors discovered from analysis of the report.

The errors in the first group are described in computer terminology as severe errors. They cause the simulation


run to "crash," i.e., the simulation program does not complete its run and the user is placed back in the operating system. One deviation from this pattern of events is an error which sends the program into a continuous loop and which can only be stopped by user interruption. This first section of the knowledge base can be conveniently subdivided into subgroups or segments:

5.1. Segment 1

1a. Access Violation: This branch indicates that an eLSE routine has not been defined or a particular function not declared. The run cannot begin.
1b. File Not Found: Trying to call a file that has not been created.

5.2. Segment 2

2a. Halt Procedure Called: The program has begun to run and fails to complete due to a condition becoming illegal, e.g., beheading an empty queue.
2b. Arithmetic Fault: Here the error relates to a calculation, such as trying to divide by zero during the simulation run.
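To make error type 2a concrete, the following toy sketch shows a queue "behead" operation that halts when its condition becomes illegal. The real eLSE routines are written in Pascal; Python and all names below are used purely for illustration and are not eLSE's own.

```python
class SimulationHalt(Exception):
    """Raised when an illegal condition forces the run to stop,
    analogous to the 'Halt Procedure Called' error (type 2a)."""

def behead(queue):
    """Remove and return the first entity of a queue; the 2a-style
    error occurs if the queue is empty."""
    if not queue:
        raise SimulationHalt("attempt to behead an empty queue")
    return queue.pop(0)

waiting = ["customer1"]
entity = behead(waiting)   # succeeds: one entity is waiting
```

Calling `behead(waiting)` a second time would raise `SimulationHalt`, which is the point at which a user would turn to SIPDES for diagnosis.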

5.3. Segment 3

If the user tries to behead a queue in which the first entity has been incorrectly set as not available, then the program collapses.

In the second group the errors are more complicated and the causes potentially numerous. These errors are discovered when the simulation program has completed its run and a final report has been produced. The errors here are associated with "unexpected" or nonsensical results within the report, e.g., when no activities take place. The expert system is required to help guide the users through their program to determine where their logic fails. The errors in the second group are far more difficult to identify because they are mainly to do with the logic of the model (simulation knowledge is the dominant factor here). Examples of the type of symptom discovered at this stage are: (i) no activities have taken place at all; (ii) one or more activities have taken place, but not all of them; (iii) the number of times one or more activities takes place reaches some maximum, and does not increase further with time; (iv) one or more activities behave reasonably, whilst the rest display erratic or unexpected behavior; and (v) one or more recording requests are not complied with.

6. THE FORMAT OF THE KNOWLEDGE BASE

As the previous discussion suggests, a logical format for the knowledge base design would be to compartmentalize the knowledge. This was done by dividing the knowledge base into files, each containing knowledge about a subproblem. These files are called "knowledge sources." The knowledge base maintenance tool is directed to the appropriate knowledge source by the user's response to the initial menu. Each knowledge source is composed of four blocks of knowledge:
1. Hypotheses
2. Opposites
3. Rules
4. Help facility.
The first three blocks represent the basic structure of the knowledge base provided by ASPES. The last is a modification made to the knowledge structure to help create a system that is particularly user friendly, and closely attuned to the problem domain of eLSE. A sample knowledge source is shown in Appendix 1.
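As a sketch only, a knowledge source divided into these four blocks might be loaded along the following lines. The file layout, block headers, and example content here are invented for illustration; the genuine format is the one shown in Appendix 1.

```python
# Hypothetical knowledge source text; the real SIPDES file format differs.
SAMPLE_SOURCE = """\
HYPOTHESES
Probable error associated with 1st Conditional-Event
OPPOSITES
No activities took place at all | Some activities took place
RULES
IF No activities took place at all? THEN Probable error associated with 1st Conditional-Event
HELP
An activity module tests starting conditions and schedules the end of the activity.
"""

def load_knowledge_source(text):
    """Split a knowledge source into its four blocks, keyed by header line."""
    blocks, current = {}, None
    for line in text.splitlines():
        if line in ("HYPOTHESES", "OPPOSITES", "RULES", "HELP"):
            current = line
            blocks[current] = []
        elif line.strip() and current is not None:
            blocks[current].append(line)
    return blocks

kb = load_knowledge_source(SAMPLE_SOURCE)
```

In ASPES the loaded blocks then reside in working memory as linked lists of records; a dictionary of lists stands in for that structure here.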

6.1. Hypotheses

The first block at the head of each section contains a list of hypotheses that indicate why a particular error may have occurred and what action should be taken to remove the error. These hypotheses are taken in turn by the inference engine until one can be validated from the rules available in the third block. Hypotheses are used mainly when the control mechanism is goal-driven, backward-chaining.

6.2. Opposites

The inclusion of opposites guides the application of the rules (i.e., questions to the user) down one particular branch of the tree. Without these opposites, a particular line of questioning can experience discontinuities. Two branches may be distinct paths to the same goal. Without opposites, the inference engine can switch or jump to a parallel branch because of the way the inference engine processes the rules (via cyclic linked-lists). The use of opposites allows one proved antecedent to cancel out other possible branches. This reduces the amount of time the user needs to be at the terminal and makes for a more comprehensible, user-friendly system.
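The effect of opposites can be sketched in a few lines. This is an illustrative reconstruction of the mechanism, not ASPES code: once a fact is proved, its declared opposite is recorded as false, so the inference engine never needs to question the user on the parallel branch.

```python
# Declared pairs of facts with opposite meanings (content is illustrative).
OPPOSITES = {
    "No activities took place at all": "Some activities took place",
    "Some activities took place": "No activities took place at all",
}

facts = {}

def prove(fact):
    """Record a proved antecedent and cancel its opposite branch."""
    facts[fact] = True
    opposite = OPPOSITES.get(fact)
    if opposite is not None:
        facts[opposite] = False   # parallel branch ruled out without asking

def needs_asking(fact):
    """The user is questioned only about facts with no recorded value."""
    return fact not in facts

prove("No activities took place at all")
```

After `prove(...)`, `needs_asking("Some activities took place")` is false, so the user is spared a redundant question.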

6.3. Rules

The "production rules" make up the major content of the knowledge base. The formalization of these rules represents the translation of the knowledge acquired into a form that may be accessed by the inference engine. SIPDES works on a straightforward production-rule system where the knowledge base has the general format:


IF (antecedent)

THEN (consequence)
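A toy back-chainer over rules of this shape might look as follows. The rule content, dialogue, and data structures are invented for illustration and are far simpler than SIPDES's own.

```python
# Each rule pairs an antecedent (put to the user as a question) with a
# hypothesis (the consequence). Rule content here is illustrative only.
RULES = [
    ("No activities took place at all?",
     "Probable error associated with 1st Conditional-Event"),
    ("Did an entity disappear completely over time?",
     "Probable error: entity routed to the wrong queue at end of activity"),
]

def consult(answers):
    """Back-chain: try each hypothesis in turn until one antecedent is
    confirmed by the user's (here, pre-recorded) yes/no answers."""
    for antecedent, hypothesis in RULES:
        print("Is It True:", antecedent)
        if answers.get(antecedent):
            return hypothesis
    return "No hypothesis could be proved"

result = consult({"Did an entity disappear completely over time?": True})
```

In the real system the answers come interactively from the terminal, and a confirmed hypothesis is followed by a recommended course of action.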


These antecedents seek to describe the symptoms of a particular error as experienced by the user, i.e.,

IF No activities took place at all?
THEN Probable error associated with 1st Conditional-Event

The antecedent is presented to the user as a question, so the above becomes, at the user-screen interface, "Is It True: No activity took place at all?" It is through these questions that the user interacts with SIPDES. Each antecedent, then, has to be carefully phrased if the user is to answer correctly.

In an effort to ease the problem of antecedent interpretation and to make the system more "user dynamic" and friendly, the rules have been divided into two types. The first type are literal rules that use minimal data which are easily accessible. For example, the rule above is literal and requires only one piece of data. The rule is not subject to debate, interpretation, or misunderstanding. Other literal rules may use more than one piece of data. Literal rules encode expertise about "clear cut" situations. The second type of rules are descriptive. These require substantial data and assistance to overcome the problem of antecedent interpretation. This is achieved by introducing the concept of SIGNPOST as an aid for these special antecedents. SIGNPOST serves two purposes: to direct the user to the parts of their simulation program to which the antecedent refers; and to allow greater flexibility in the phraseology of the questions put to the user. This increases user confidence by providing a clear indication of what the antecedent is aiming to achieve. These SIGNPOSTs are identified within the rules, by the inference engine, with the aid of brackets, as shown in Appendix 1.

6.4. Help Facility

The final block of information is provided as an extension to SIGNPOST and the antecedent. Like SIGNPOST, it aids the knowledge engineer by allowing more flexibility in question formulation and provides extra support for the user. "HELP" provides a more detailed explanation of a particular question. Initial trials with SIPDES showed that assumptions concerning the user's familiarity with the eLSE systems could not be taken for granted. The HELP facility is accessed by typing "?" in response to any of the questions put to the user.

7. EXAMPLE CONSULTATION SESSION

Crookes et al. (1986) describe the eLSE routines and the underlying three-phase simulation modeling structure in detail. Activities in a simulation are represented by several blocks of codes or modules. One such module tests whether an activity can start and schedules the end of the activity if appropriate. Each entity or object engaged in the activity has a separate module handling the end of the activity. One possible error that program writers and amenders make is to mix up the end of activity modules with each other. The entities are then put in the wrong queues following the activity. This can lead to subsequent activities failing to meet their starting conditions. The program does not fail, but there is obviously something wrong at the reporting stage. Appendix 2 shows a cut-down example of the modules of code that represent the start and end of an activity. Appendix 3 provides an example of a session with SIPDES that assists the user in finding the reason for the error discussed above. The observed symptom of some activities never starting has a number of different causes. Hence the session progresses through several steps, based on the user's responses, before suggesting the probable cause of the error. Examples of the use of signposts and the help facility can readily be seen in this example.
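The start/end module structure, and the queue mix-up that SIPDES diagnoses here, can be caricatured as follows. The genuine modules are Pascal and are shown in Appendix 2; the queue names and five-unit activity duration below are invented for this sketch.

```python
queues = {"waiting": ["customer"], "served": [], "idle_servers": ["server"]}
schedule = []   # pending (time, end-module, engaged entities) events

def try_start_service(now):
    """Start-of-activity module: test the starting conditions and,
    if they hold, schedule the end of the activity."""
    if queues["waiting"] and queues["idle_servers"]:
        entity = queues["waiting"].pop(0)
        server = queues["idle_servers"].pop(0)
        schedule.append((now + 5, end_service, (entity, server)))

def end_service(engaged):
    """End-of-activity module: each entity must be returned to the RIGHT
    queue. Appending the customer back to 'waiting' here would be the
    classic mix-up: the run does not fail, but downstream activities
    quietly starve and the report looks wrong."""
    entity, server = engaged
    queues["served"].append(entity)
    queues["idle_servers"].append(server)

try_start_service(now=0)
time, end_module, engaged = schedule.pop(0)
end_module(engaged)
```

After one start/end cycle the customer sits in "served" and the server is idle again; with the end modules mixed up, the starting conditions of the next activity would never be met.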

8. CONCLUSIONS

Knowledge acquisition was found to be one of the most frustrating tasks. The system developed should reflect the type of problems users experience, and so it was with the users that the investigation began. The initial interviews proved to be unsuccessful since they took place some time after these users had started using the eLSE package. This was sufficient time for the errors to have been forgotten or to have become very fuzzy. Some clues were forthcoming, however, and they provided the basis for many experiments later.

In our system development the top-down division of the problem into a knowledge tree was used. The construction of the knowledge tree proved invaluable. It gave at a glance the number of errors acquired, their causes, and any patterns that existed. A major proportion of the system development time was spent on the acquisition and development of the knowledge base to an acceptable level. Experimentation


provided much of the initial knowledge base. This proved a long and laborious process. It is worth reiterating that any system like this is only as good as its knowledge base, and this can easily be the major effort. Initial validation highlighted several incorrect assumptions that had been made in the initial formulations and also indicated the need for the HELP facility. An initial assumption made concerned the amount of detail users recall about the nature of their errors. This level of recall is low, mainly because of the descriptive nature of the 'detail' required from the user. It was also hoped that once the system became popular, users would be familiar with the information required by the system, thereby concentrating their minds on determining the symptoms of their error. This hope was not realized. The system has several advantages over, say, a dictionary of symptoms of errors. It is readily expandable both in width (new areas of problems) and in depth (symptoms associated with an error). SIPDES is a guide that will systematically help the user through his program. This dynamic interactive characteristic is the system's most important quality. SIPDES has had a field trial with a new group of eLSE users--the MSc Operational Research students at the LSE. These students, 40 in number, have demonstrated the viability and usefulness of such a debugging tool. The most valuable lesson from the development of SIPDES has been an empirical determination of the balance that should be achieved between incorporating some debugging facilities in the simulation system, and using an external advisor. More recent developments in the CASM systems have many more inbuilt error detection facilities. However, little can be incorporated to handle the problems caused by representing the model logic incorrectly. A revised version of SIPDES is under development. The knowledge acquisition lessons of SIPDES will be invaluable in this task. 
SIPDES still serves a useful practical function, since the eLSE routines are public domain. They are commercially used in the UK, and in education in the UK and Brazil. Developments such as SIPDES are seen as part of the CASM systems and it is anticipated that, given sufficient confidence, they will be incorporated directly. Methods of automatically transferring control to such systems when run-time errors occur are being investigated. Errors in simulation reports can be kept in reporting files so that an adequate problem description can be input to a SIPDES-type system. Running the simulation program itself within the debugging system to determine which hypothesis is correct is a longer term research objective.

Acknowledgments--The programming contributions of Paul Balacky are gratefully acknowledged.


REFERENCES

Balmer, D.W., & Paul, R.J. (1986). CASM--The right environment for simulation. Journal of the Operational Research Society, 37(5), 443-452.
Basili, V.R., & Perricone, B.T. (1984). Software errors and complexity: An empirical investigation. Communications of the ACM, 27(1).
Bennet, J.S., & Clifford, R.H. (1981, August). DART: An expert system for computer fault diagnosis. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, British Columbia.
Christensen, L.C., Stokes, G.E., Hays, B., & Coons, D. (1985, October). TEACH: A knowledge-driven lab assistant for a computer-based instruction system. In K.N. Karna (Ed.), Expert systems in government symposium (pp. 586-595). IEEE Computer Society Press.
Clementson, A.T. (1982). Extended control and simulation language. Cle.Com Ltd., Birmingham, England.
Crookes, J.G., Balmer, D.W., Chew, S.T., & Paul, R.J. (1986). A three-phase simulation system written in Pascal. Journal of the Operational Research Society, 37(6), 603-618.
Doukidis, G.I. (1987). An anthology on the homology of simulation with artificial intelligence. Journal of the Operational Research Society, 38(8), 701-712.
Doukidis, G.I., & Paul, R.J. (1985). Research into expert systems to aid simulation model formulation. Journal of the Operational Research Society, 36(4), 319-325.
Doukidis, G.I., & Paul, R.J. (1986). Experiences in automating the formulation of discrete event simulation models. In E.J.H. Kerckhoffs et al. (Eds.), AI applied to simulation. Simulation Series, 18(1), 79-90.
Doukidis, G.I., & Paul, R.J. (1987). ASPES: A skeletal Pascal expert system. In H.G. Sol et al. (Eds.), Expert systems and artificial intelligence in decision support systems (pp. 227-246). Dordrecht, Holland: D. Reidel.
Doukidis, G.I., & Whitley, E.A. (1987). Developing and running expert systems with PESYS. Future Generation Computer Systems, 3(3), 189-199.
Doukidis, G.I., & Whitley, E.A. (1988). Developing expert systems. Lund, Sweden: Chartwell-Bratt.
Doukidis, G.I., Shah, V.P., & Angelides, M.C. (1988). LISP: From foundations to applications. Lund, Sweden: Chartwell-Bratt.
El Sheikh, A.A.R., Paul, R.J., Harding, A.S., & Balmer, D.W. (1987). A microcomputer based simulation study of a port. Journal of the Operational Research Society, 37(8), 673-681.
Forsyth, R. (1984). The architecture of expert systems. In R. Forsyth (Ed.), Expert systems: Principles and case studies. London: Chapman and Hall.
Hartley, R.T. (1981, August). How expert should an expert system be? In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, British Columbia.
Hill, T.R., & Roberts, S.D. (1987). A prototype knowledge-based simulation support system. Simulation, 48(4), 152-161.
Johnson, W.L., & Soloway, E. (1985). PROUST: Knowledge-based program understanding. IEEE Transactions on Software Engineering, March, 267-275.
Land, F. (1985). Outline functional requirements of a computer-based teaching programme for software maintenance. Research paper, Department of Statistics, London School of Economics.
O'Keefe, R.M. (1986). Simulation and expert systems--A taxonomy and some examples. Simulation, 46(1), 10-16.
Paul, R.J. (1989a). Artificial intelligence and simulation modelling. In M. Pidd (Ed.), Computer modelling for discrete simulation. Chichester: Wiley.
Paul, R.J. (1989b). Combining artificial intelligence and simulation. In M. Pidd (Ed.), Computer modelling for discrete simulation. Chichester: Wiley.
Paul, R.J., & Chew, S.T. (1987). Simulation modelling using an interactive simulation program generator. Journal of the Operational Research Society, 38(8), 735-752.
Paul, R.J., & Doukidis, G.I. (1986). Further developments in the use of artificial intelligence techniques which formulate simulation problems. Journal of the Operational Research Society, 37(8), 787-810.
Shannon, R.E., Mayer, R., & Adelsberger, H.H. (1985). Expert systems and simulation. Simulation, 44(6), 275-284.
Sleeman, D., & Brown, J.S. (1982). Intelligent tutoring systems. New York: Academic Press.
Smith, H.R., Fink, P.K., & Lusth, J.C. (1985). Intelligent tutoring using the integrated diagnostic model. In K.N. Foreman (Ed.), Expert systems in government symposium (pp. 126-135). IEEE Computer Society Press.
Stuart, J.D., Pardue, S.D., Carr, L.S., & Feldcamp, D.A. (1985). TITAN: An expert system to assist in troubleshooting the Texas Instruments 990 minicomputer system. In K.N. Karna (Ed.), Expert systems in government symposium (pp. 439-446). IEEE Computer Society Press.
Teitelbaum, T., & Reps, T. (1981). The Cornell Program Synthesizer: A syntax-directed programming environment. Communications of the ACM, 24(9), 563-573.
Van Tassel, D. (1978). Program style, design, efficiency, debugging, and testing (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.

APPENDIX 1

A Sample Knowledge Source

{hypotheses}
YOU NEED TO CORRECT CALL_NEXT_B_EVENT
ADDTO statement needs to be corrected.
CORRECT CAUSE STATEMENT
ROUTINE REQUIRED TO CHECK FOR EMPTY QUEUE BEFORE BEHEAD, HEAD OR CAUSE
THE ENTITY MUST BE REMOVED FROM QUEUE OR ^.AVAIL SET BACK TO TRUE

{opposites}
THE ERROR IS ASSOCIATED WITH EITHER BEHEAD, HEAD OR CAUSE
The error message given contained the phrase ''ARITHMETIC FAULT''
THE ERROR IS ASSOCIATED WITH EITHER BEHEAD, HEAD OR CAUSE
Error message given by the IBM PC was ''ERROR IN CAUSE''

{rules}
IDENTIFY1
IF   THE ERROR IS ASSOCIATED WITH EITHER THE BEHEAD OR HEAD ROUTINES
THEN QUEUE HAS BECOME EMPTY

IDENTIFY2
IF   Error message given by the IBM PC was ''ERROR IN CAUSE''
THEN ENTITY AT THE FRONT OF THE QUEUE HAS BECOME UNAVAILABLE FOR ACTIVITY

IDENTIFY3
IF   QUEUE HAS BECOME EMPTY
     (Please refer to the procedure CALL_FOR_NEXT_B_EVENT.)
     (Examine the number of B-EVENTS listed in that procedure and)
     (check that this number is the number of B-EVENTS in your program.)
     Your list of B-Events in procedure CALL_FOR_NEXT_B_EVENT is complete
THEN YOU NEED TO CORRECT CALL_NEXT_B_EVENT

IDENTIFY4
IF   QUEUE HAS BECOME EMPTY
     (For the following series of questions you will need to have)
     (determined which queue is in error, you may already have an idea)
     (from previous analysis.)
     (Find the B-EVENT that should be increasing the size of that queue.)
     (If you require further help to find the offending BEHEAD or HEAD)
     (statement, please answer '?' to the following question.)
     The ADDTO function in the B-EVENT adds to the wrong QUEUE
THEN ADDTO statement needs to be corrected.

IDENTIFY5
IF   QUEUE HAS BECOME EMPTY
     (Examine those C-EVENTS within your program that do not rely on all)
     (the queues processed within that C-EVENT having QSIZE >= 1)
     (Find within one of these C_EVENTS the offending queue)
     The QUEUE is processed inside a CAUSE statement by either BEHEAD or HEAD
     THE RELEVANT CAUSE STATEMENT REFERS TO THE WRONG B-EVENT
THEN CORRECT CAUSE STATEMENT

IDENTIFY6
IF   QUEUE HAS BECOME EMPTY
     YOU BEHEAD THE QUEUE OUTSIDE THE CAUSE STATEMENT
     NO STATEMENT HAS BEEN PROVIDED TO CHECK FOR EMPTY QUEUE
THEN ROUTINE REQUIRED TO CHECK FOR EMPTY QUEUE BEFORE BEHEAD, HEAD OR CAUSE

IDENTIFY7
IF   ENTITY AT THE FRONT OF THE QUEUE HAS BECOME UNAVAILABLE FOR ACTIVITY
     The CAUSE in question contains a HEAD statement
     The entity is not removed from the queue after the CAUSE routine
THEN THE ENTITY MUST BE REMOVED FROM QUEUE OR ^.AVAIL SET BACK TO TRUE

IDENTIFY8
IF   ENTITY AT THE FRONT OF THE QUEUE HAS BECOME UNAVAILABLE FOR ACTIVITY
     The CAUSE in question contains a BEHEAD statement
     The entity is not removed from the queue after the CAUSE routine
THEN THE ENTITY MUST BE REMOVED FROM QUEUE OR ^.AVAIL SET BACK TO TRUE
@

{help facility}
???
ERROR MESSAGE GIVEN WAS ''ERROR IN CAUSE''
If you are unsure about this question go back and run your simulation
program again. If the error message is given it will appear
continuously throughout the run, allowing the run to complete.

ADDTO IN B-EVENT IS NOT ADDING TO THE CORRECT QUEUE
Examine the error message given. If you are on the VAX this indicates
at which C-Event the simulation run was terminated. If you are on the
IBM examine the last C-Event that took place.

THE ERROR IS ASSOCIATED WITH EITHER BEHEAD, HEAD OR CAUSE
The error message will give an indication of the eLSE routine in error.

The ADDTO function in the B-EVENT adds to the wrong QUEUE
A C-Procedure would normally begin on the condition that 'all' queues
required for that procedure have at least one entity in them, i.e.
WHILE (QSIZE(QUEUE1) >= 1) AND (QSIZE(QUEUE2) >= 1) DO ....
The user is required to investigate those C-procedures that do not
require all queue sizes to be greater than one for the procedure to
take place.

Your list of B-Events in procedure CALL_FOR_NEXT_B_EVENT is complete
The number of B_EVENTS listed in the procedure CALL_FOR_NEXT_B_EVENT
should contain all the B_EVENTS within your simulation program. The
list is contained within the CASE statement.
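Read as inference rules, the knowledge source above amounts to forward chaining: symptoms the user confirms become facts, and a rule fires when all of its IF conditions hold, asserting its THEN conclusion, which may in turn enable further rules (QUEUE HAS BECOME EMPTY, established by IDENTIFY1, is a condition of IDENTIFY3-6). The sketch below shows that mechanism only; it is written in Python for illustration, the condensed rule strings are ours, and the paper's own system was built with an expert system development tool, not this code.

```python
# Forward-chaining sketch of three of the Appendix 1 rules (illustrative;
# condensed rule strings, not the actual SIPDES knowledge encoding).
RULES = [
    (["error associated with BEHEAD or HEAD"],
     "queue has become empty"),                                # IDENTIFY1
    (["queue has become empty",
      "B-event list in CALL_FOR_NEXT_B_EVENT is complete"],
     "correct CALL_NEXT_B_EVENT"),                             # IDENTIFY3
    (["queue has become empty",
      "queue processed inside CAUSE by BEHEAD or HEAD",
      "CAUSE statement refers to the wrong B-event"],
     "correct CAUSE statement"),                               # IDENTIFY5
]

def forward_chain(facts):
    """Fire every rule whose IF conditions all hold, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)   # THEN part becomes a new fact
                changed = True
    return facts

# Symptoms matching the Appendix 3 session, which ends at IDENTIFY5.
derived = forward_chain([
    "error associated with BEHEAD or HEAD",
    "queue processed inside CAUSE by BEHEAD or HEAD",
    "CAUSE statement refers to the wrong B-event",
])
```

With these facts, IDENTIFY1 establishes the empty-queue hypothesis and IDENTIFY5 then concludes CORRECT CAUSE STATEMENT, mirroring the example session.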

APPENDIX 2

Sample eLSE Program Code

PROCEDURE C2; (* service *)
BEGIN
  WHILE (QSIZE(wait) >= 1) AND (QSIZE(berth_QUEUE) >= 1) DO
  BEGIN
    WRITELN('service STARTS ');
    service_TIME := ROUND(normal(80, 20, 3));
    CAUSE(3, BEHEAD(wait), service_TIME);
    CAUSE(4, BEHEAD(berth_QUEUE), service_TIME);
    berth_UTIME := berth_UTIME + service_TIME;
    Nservice := Nservice + 1;
  END; (* of while loop *)
END; (* of procedure C2 *)

PROCEDURE B3; (* ship ENDS service *)
BEGIN
  WRITELN('END OF service : berth ', CUR_NO_ENT);
  ADDTO(BACK, berth_QUEUE, CURRENT);
END;

PROCEDURE B4; (* berth ENDS service *)
BEGIN
  WRITELN('END OF service : berth ', CUR_NO_ENT);
  ADDTO(BACK, berth_QUEUE, CURRENT);
END;
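As a reading aid for the three-phase pattern in PROCEDURE C2 (start the activity while every queue involved has at least one entity, BEHEAD the queue heads, and CAUSE the B-events that end the activity), a behavioural sketch follows. It is an analogue only, not eLSE code: QSIZE, BEHEAD, and CAUSE are routines from the Pascal eLSE library, the entity names are invented, and a fixed service time stands in for the normal(80, 20, 3) sample.

```python
from collections import deque

# Behavioural sketch (not eLSE code) of the C2 service activity above.
wait = deque(["ship1", "ship2"])      # ships waiting for service
berth_queue = deque(["berthA"])       # free berths
event_list = []                       # (b_event_number, entity, due_time)
clock = 0
SERVICE_TIME = 80   # fixed here; the sample draws from normal(80, 20, 3)

def cause(b_event, entity, delay):
    """Schedule a B-event for `entity` at clock + delay (cf. eLSE CAUSE)."""
    event_list.append((b_event, entity, clock + delay))

# C2: service -- the loop runs only while BOTH queues are non-empty,
# i.e. WHILE (QSIZE(wait) >= 1) AND (QSIZE(berth_QUEUE) >= 1) DO.
while wait and berth_queue:
    ship = wait.popleft()             # BEHEAD(wait)
    berth = berth_queue.popleft()     # BEHEAD(berth_QUEUE)
    cause(3, ship, SERVICE_TIME)      # B3: ship ends service
    cause(4, berth, SERVICE_TIME)     # B4: berth ends service
```

One service starts (ship1 and berthA are paired); ship2 remains queued because no berth is free, which is exactly the state the WHILE condition guards against.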

APPENDIX 3

SIPDES Example Session Log

SIPDES aims to direct the user through his/her simulation program
to aid fault diagnosis. It runs on a series of questions to the user.
For the user to work through these, a listing of the simulation
program may need to be referred to.

Do you wish to continue or quit to fetch a listing of your program.
Answer Y to continue. (Y/N): Y

THROUGHOUT SIPDES THE USER WILL BE GIVEN DIRECTIONS TO PARTS OF THEIR
SIMULATION PROGRAM WITH THE AID OF ''SIGNPOST''

PLEASE PRESS RETURN ===>

OPENING MENU
At which stage was the error noted?
1. Before all B-EVENTS had been executed at least once.
2. Some time into the simulation run.
3. At the simulation report.
INPUT STAGE NUMBER ===> 3

SIGNPOST!
The following series of questions refer to the final report structure
given at the end of the simulation run. Particular attention is given
to the number of times activities have taken place.
PLEASE PRESS RETURN ===>

Is this true: NO ACTIVITIES TOOK PLACE AT ALL
(Y/N) OR ? : n

Is this true: SEVERAL ACTIVITIES HAVE NOT TAKEN PLACE
(Y/N) OR ? : ?

SEVERAL ACTIVITIES HAVE NOT TAKEN PLACE
The report structure usually met here looks like:
(the numbers are fictional)
C1_EVENT   8
C2_EVENT   8
C3_EVENT  10
C4_EVENT   0   NOTE: <=== First event with zero level activity.
C5_EVENT   0
C6_EVENT   0
and so on
PRESS RETURN ===>

Is it true SEVERAL ACTIVITIES HAVE NOT TAKEN PLACE
(Y/N) OR ? : Y

Rule IDENTIFY2 deduces TRUNCATED CYCLE RUN
Rule IDENTIFY2 deduces INCORRECT CONDITIONS
Rule IDENTIFY2 deduces CONDITION FOR ACTIVITY NOT FULFILLED

PLEASE PRESS RETURN ===>

SIGNPOST!
Please refer to the procedure GO_THRU_C_EVENTS. If you have introduced
an extra C-EVENT after LANGEN was used, examine the value of CNUM.
PLEASE PRESS RETURN ===>

Is this true: Your list of C_Events in procedure GO_THRU_C_EVENTS is complete
(Y/N) OR ? : y

SIGNPOST!
PLEASE REFER TO THE 1st C_EVENT WITH ZERO LEVEL OF ACTIVITY IN REPORT
Examine the WHILE..DO conditions at the front of that C-EVENT.
PLEASE PRESS RETURN ===>

Is this true: THE CONDITIONS AT THE FRONT OF THE C_EVENT ARE CORRECT
(Y/N) OR ? : ?

THE CONDITIONS AT THE FRONT OF THE C_EVENT ARE CORRECT
You should check here that the correct queues are involved in the
WHILE..DO condition.
PRESS RETURN ===>

Is it true THE CONDITIONS AT THE FRONT OF THE C_EVENT ARE CORRECT
(Y/N) OR ? : Y

SIGNPOST!
Please refer again to the WHILE..DO condition at the head of the 1st
C-EVENT with zero level of activity. For the following series of
questions you will need to keep in mind those QUEUES involved in the
WHILE..DO condition. One of those is probably in error.
PLEASE PRESS RETURN ===>

Is it true YOU WISH TO CONTINUE
(Y/N) OR ? : Y

(* switch from stage 3 to stage 2 *)

Rule IDENTIFY10 deduces FURTHER INVESTIGATION REQUIRED.
PLEASE PRESS RETURN ===>

SIGNPOST!
Please refer to the procedure CALL_FOR_NEXT_B_EVENT. Examine the
number of B-EVENTS listed in that procedure and check that this
number is the number of B-EVENTS in your program.
PLEASE PRESS RETURN ===>

Is this true: Your list of B-Events in procedure CALL_FOR_NEXT_B_EVENT is complete
(Y/N) OR ? : y

SIGNPOST!
For the following series of questions you will need to have determined
which queue is in error; you may already have an idea from previous
analysis. Find the B-EVENT that should be increasing the size of that
queue. If you require further help to find the offending BEHEAD or
HEAD statement, please answer '?' to the following question.
PLEASE PRESS RETURN ===>

Is this true: The ADDTO function in the B-EVENT adds to the wrong QUEUE
(Y/N) OR ? : n

SIGNPOST!
Examine those C-EVENTS within your program that do not rely on all
the queues processed within that C-EVENT having QSIZE >= 1.
Find within one of these C_EVENTS the offending queue.
PLEASE PRESS RETURN ===>

Is this true: The QUEUE is processed inside a CAUSE statement by
either BEHEAD or HEAD
(Y/N) OR ? : Y

Is this true: THE RELEVANT CAUSE STATEMENT REFERS TO THE WRONG B-EVENT
(Y/N) OR ? : y

Rule IDENTIFY5 deduces CORRECT CAUSE STATEMENT
PLEASE PRESS RETURN ===>

The following action should be taken
CORRECT CAUSE STATEMENT

Thank you for using SIPDES. It has been a pleasure.
I hope you will recommend me to your colleagues.
HAVE A NICE DAY!!!

DO YOU WISH TO TRY AGAIN ? (Y/N) : N