
Computer Physics Communications 45 (1987) 245-257
North-Holland, Amsterdam

THE USE OF SA/SD METHODS IN D0 SOFTWARE DEVELOPMENT

J. LINNEMANN
Dept. of Physics and Astronomy, Michigan State University, MI 48824, USA

J. FEATHERLY, B. GIBBARD, S. KAHN, S. PROTOPOPESCU
Brookhaven National Laboratory

D. CUTTS, J. HOFTUN
Brown University

C. BROWN, A. ITO, A. JONCKHEERE, R. RAJA
Fermi National Accelerator Laboratory

S. HAGOPIAN, S. LINN
Florida State University

D. ZIEMINSKA, A. ZIEMINSKI
Indiana University

A. CLARK, C. KLOPFENSTEIN, S. LOKEN, T. TRIPPE
Lawrence Berkeley Laboratory

S. KUNORI
University of Maryland

D. BUCHHOLZ
Northwestern University

E. GARDELLA
University of Pennsylvania

Y. DUCROS, A. ZYLBERSTEJN
CEN Saclay

R. ENGELMANN, D. HEDIN, K. NG and K. NISHIKAWA
State University of New York at Stony Brook

The D0 experiment has used the 'Structured Analysis/Structured Design' (SA/SD) methodology in its software development for the past year. The data flow diagrams and data dictionaries of structured analysis were the primary tools used in development of an ideal model of the D0 software system. These and the structure charts developed during the design phase form the basic documentation of the system. Real-time structured development techniques, e.g. state transition diagrams, are employed to describe control functions in some areas, e.g. in the calibration software. The SA/SD methodology has proven to be valuable in the formulation of ideas and in communication between software developers. The methodology and its application to D0 software are described, and the benefits and problems are assessed. Problems finding adequate software tools for the VAX environment are discussed, and a data dictionary manager developed by D0 using DEC RDB is described.

1. History

D0 had its first brushes with software engineering methodology during the summer of 1985. At that time, several of us felt a strong motivation to approach software development in our project in a more organized fashion than was common in high energy physics experiments. These sentiments were the result of a mixture of trying to understand what we felt had been good in some of our previous experiments, and of suffering from the effects of rather vigorous aversive therapy administered by others of our previous experiences. By a coincidence, at the same time as I was in Europe talking with people from LEP collaborations, and especially with Gottfried Kellner of Aleph, Stu Loken at LBL had been having discussions on the subject of this strange thing called Structured Analysis/Structured Design with other Aleph members who had been visiting his laboratory. As a result, we invited a representative of Yourdon, Inc., industrial purveyors of seminars on the methods, and Mme. Videau, the visiting Aleph member, to make presentations to our collaboration during a week-long workshop. After some rather vigorous discussions, we decided to adopt (at least provisionally) these methods for our software development. It should be emphasized that the testimony and description of Mme. Videau was vastly more comprehensible and persuasive to our audience than that of the consultant.

1.1. Motivation

Our motivations in adopting this course were various. We felt that the method held promise of providing a better software design than we would have been likely to arrive at by traditional means. This aim is best achieved by having a program with simple communication paths and a minimum number of interactions among the parts. The "better" here is couched not so much in terms of choice of algorithm or raw execution speed as in total time spent on a project for a satisfactory performance level. This separability of parts helps with two pressing concerns in a large HEP software project: parcelling out projects to different institutions, and modifying the program to meet changing requirements (this is research, after all). The same maintenance issues arise when introducing new members into the collaboration; during the rather long lifetime of the experiment, one needs good documentation. Since the method derives the programs from visible, communicable documents, one hopes that the design work can do double duty as documentation as well. Finally, having something visible as a design document allows more effective critique of the design as it progresses, and gives management something concrete to look at other than checking the number of disk blocks taken up by the group libraries.



1.2. Training

The initial training materials on the methods were taken from Aleph software notes [1], which were in turn based on a digestion of the courses given to Aleph at CERN by one of the consultants of Yourdon, Inc. The next level of sophistication was achieved by the purchase of a few copies of a book on the subject [2], which was fairly avidly passed about. At this time, I was chosen by my colleagues as a guinea pig for attending a commercial course on the methods. I came back with more knowledge of inventory control systems than I'd really wanted to acquire, but eager to translate the ideas into HEP terms. I gave a series of lectures to my colleagues on my interpretation of the materials. These served as a next level of introduction to the techniques. Interspersed with the lectures were working sessions built around the pieces of the design already in hand at that point. This was a substitute for the more structured experience of applying the methods to the in-class exercises provided by the commercial course, albeit an imperfect one, and I found myself sometimes making slow headway in introducing ideas from the course not found in the first, now canonical, set of texts. Although scientists, we were disturbed that anything which was claimed to be good could change and then claim to be better. A part of the course involved the right to consult with course instructors on application of the ideas to our setting; this developed into a rather useful set of interactions [3] with Paul Ward, then vice-president of Yourdon, Inc. Eventually, over the course of the next year, some 15 of my colleagues attended the real-time version of the Yourdon courses, and a somewhat larger number have read a series of 3 more recent books on the subject [4].

2. The SA/SD methodology

There are several components which make up the SA/SD methodology, and I think it is useful to separate them. A useful analogy to this method of software design is the process of design of complex electronic circuitry. Here, at least, few would argue against a substantial planning phase before proceeding. I believe the reason is that we are all rather aware of the costs of patching up a botched hardware design. I submit that trying to place band-aids on something as complex as a bungled 100 000-line suite of interconnected programs is also something to avoid.

2.1. Outlook

The SA/SD methodology was formed in a commercial environment. This has focussed attention on decreasing software costs, which in physics terms is just the manpower needed to write and keep running the software for an experiment. In line with industrial experience, this puts considerable emphasis on reduction of maintenance costs for software. The real power of the method is advertised as minimizing the effort over the software life cycle. This has the consequence that real proof of its benefits requires a substantial time to pass. This tends to make one nervous, as there is more emphasis on up-front investment of effort in the face of inevitable pressure to produce working code. As with electronics design, one starts with a broad overview of the system and seeks a cleanly separable set of functional blocks, paying attention to issues such as simplicity of interconnection, minimization of data traffic, ease of isolation and debugging, and replaceability of functional blocks with equivalents meeting the same external specifications. Both proceed in two phases: an analysis phase, in which one decides what must be done, and a detailed design phase, in which one decides on specific methods to accomplish the goals one has agreed upon.

2.2. Tools

Perhaps the most visible aspect of the methodology is the design tools. For the most part, they are graphical; the tools all need to be learned, and this requires practice. They provide a language in which to express a design, and, like most languages, are easier to read than to write. They are analogous to logic diagrams, bus protocols, timing diagrams, state diagrams, and block diagrams of complex electronic systems.

2.3. Methods

The particular design methods associated with the notation form a substantial part of the motivation for the choice of notational tools. However, they stand in some measure apart from the mere notations, in that one could use any of several design techniques and still express the results with the notations. The electronic analogies are various "good practices" and more systematic design methods, such as designing complex logic in terms of finite state machines or, at lower levels, simplification of combinatorial logic by various reduction diagram techniques. The major SA/SD methods are:
1) Stimulus-based decomposition of complex functions. This is an alternative to top-down decomposition which states that, for a (sub-)system with a given number of distinct types of inputs to which it must react, there should be an internal functional element to handle each such input stimulus (a sketch follows this list). I should perhaps note that the term "event" is used in the SA/SD literature; for reasons of clarity we have adopted the term "stimulus" in its place.
2) Derivation of a first-pass structure chart from the dataflow diagram. This method (actually a mix of two strategies) forms the link between structured analysis and structured design.
3) Refinement criteria for preliminary designs. These are basically a set of desiderata which suggest alterations to produce designs that have been shown to be associated with low maintenance costs.
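As an illustration of method 1, the sketch below routes each distinct external stimulus to its own functional element. This is hypothetical Python written for this discussion, not D0 code, and the stimulus and handler names are invented:

    # Stimulus-based decomposition: one functional element per distinct
    # external stimulus. Hypothetical illustration; names are invented.

    def handle_raw_event(data):
        # Stimulated by the arrival of raw event data.
        print("format and record event", data)

    def handle_hardware_alarm(data):
        # Stimulated by a hardware alarm.
        print("log alarm and notify the shift operator:", data)

    def handle_timeout(data):
        # Stimulated by the *absence* of an expected stimulus.
        print("expected event did not arrive; flag a readout problem")

    HANDLERS = {
        "raw_event": handle_raw_event,
        "hardware_alarm": handle_hardware_alarm,
        "timeout": handle_timeout,
    }

    def dispatch(stimulus, data=None):
        # The top level is only a router; all real work lives in the
        # per-stimulus elements, which can be designed and tested separately.
        HANDLERS[stimulus](data)

    dispatch("raw_event", {"run": 1, "event": 42})

The point of the decomposition is visible in the table: adding a new class of stimulus adds one entry and one element, without disturbing the others.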

3. SA/SD in D0

In this section I will outline the D0 experience in the use of the various tools and methods of SA/SD.

3.1. Context diagram

The context diagram in fig. 1 is meant to represent interactions of the software with the outside world. This serves two purposes. First, it shows the scope of the system: what is inside the system and presumably being planned by the software design group, and what is outside the system and thus not a concern other than in terms of interfacing. The second purpose is to show the types of messages which can be exchanged with the world outside the software system thus conceived. The idea here is that people whose domain is outside the software system can check that the interactions are consistent with those they had imagined, and that these will indeed cover all relevant cases. This checking is important, because the design of the software proceeds from these assumptions. New classes of interactions might well have important consequences for the system design, and it is vital that omissions or errors be brought to the attention of the design group as soon as possible, as such errors are vastly easier to fix while the design is in a conceptual stage, on paper, rather than embodied in code already written. The large central circle represents the system the software group thinks it is designing. Entering and leaving this circle are arrows which represent flows of data interchanged with persons or things outside the system. The contents of these data flows are explained in the accompanying data dictionary. Since the system design group began work on our software design while I was off attending a course, and we had not gotten much sense of the use of the context diagram from our introductory material, the context diagram was retrofitted onto the design. This required a considerable amount of synthesis, and the writing of a rather lengthy data dictionary (see below) to describe the data items of the diagram. The diagram was actually presented in several layers so that it could be comprehended. This work forced a coherent overview of the project, which was beneficial in terms of clarity and of getting some sense that the relevant pieces had at least been thought of. The system was of very considerable complexity, which made building the context diagram rather tricky, and we were only moderately successful in getting verification from our colleagues that we had set the right scope for the project. Either they felt that whatever we were doing was OK by them, or they were too busy to worry about it now and would find the problems later when the time came. The attitude that software is infinitely plastic and repairable is still very prevalent.

3.2. Stimulus list

One of the procedures which SA/SD recommends, based on the context diagram, is to make up a list of stimuli from the outside world; it is the purpose of the software system to respond to these. To be complete, this should include things like the lack of a stimulus when one is expected (e.g. a data event timeout). The complexity of the diagram discouraged us from this task, and there was some resistance to introducing too many new concepts at once. We did not make a serious attempt to use this design tool.

[Fig. 1 (D0 software context diagram, overview; author JTL, version 2.1, Sept. 86) shows the software system as a central circle exchanging flows such as Raw_Event_Data, Trigger_Type, Hardware_Alarm, Simulated_Raw_Data, Raw_Monitor, Hardware_Control_Dialog, Trigger_Control_Dialog, Accelerator_Conditions, and processing, simulation-processing, and analysis dialogs with the detector electronic readout, pattern generator, hardware control, trigger control, shift operators, and accelerator control.]

Fig. 1. D0 context diagram.

3.3. Dataflow diagram

The dataflow diagram (fig. 3) is a graphic which focusses on processes and the transfer of data among them. The logical interdependency of the processes is shown by the direction of data transfer; processes are seen as stimulated by the arrival of their input data. Data which does not force the startup of a process, but is necessary for execution, is shown as being transferred in and out of storage areas called stores. This view of the system is closest to what we normally think of as programming, and proved the most natural of the notions. Although it was a fairly natural mode of expression, writing the diagrams took a great deal of practice, and we found that we revised our high-level diagrams quite heavily. During this period, it was very important to have the working group together at one spot. When this was not practical, we did find at least partial remedies. We have at various stages used telefax transmission of hand-drawn diagrams (use dark pen!), or DECMAIL transmission of files which contained diagrams (EVEDT's RECTANGULAR mode is quite helpful here). Among the issues highlighted by the dataflow diagram are the logically necessary interconnection of tasks and the logically required storage. In the analysis phase, which concentrates on an ideal model of the system, considering only complexity forced by requirements, not implementation constraints, this helps expose opportunities for parallelism in implementation.

Complex systems will be represented by hierarchies of such diagrams. Top-down design is particularly prone to revision of upper-level diagrams due to further analysis at a lower level, but this problem exists in any system in which the final logical analysis is not reached without iteration. These inconsistencies are a considerable nuisance, and it was here that the lack of a software tool, especially for enforcing consistency between diagrams and their parents or daughters, was felt. Such a level of functionality in SA/SD support software is not cheap on a VAX-class machine.
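The parent/daughter consistency just mentioned is mechanical enough that a tool could enforce it: a child diagram expanding a parent bubble must consume and produce exactly the flows attached to that bubble. The following is a minimal sketch of such a balance check, in hypothetical Python (flow names loosely follow fig. 3; we had no such tool at the time):

    # Balance check between a parent bubble and its child dataflow diagram:
    # the external inputs/outputs of the child must match the flows attached
    # to the parent bubble. Hypothetical sketch with invented names.

    def external_flows(diagram):
        # Flows that cross the diagram boundary (source or sink is external).
        ins = {f for (src, dst, f) in diagram if src == "EXTERNAL"}
        outs = {f for (src, dst, f) in diagram if dst == "EXTERNAL"}
        return ins, outs

    # Parent view: bubble 4 with its attached flows.
    parent_inputs = {"Begin_Calibration_Dialog", "Trigger_Type"}
    parent_outputs = {"Calibration_Data"}

    # Child diagram for bubble 4: (source, destination, flow) triples.
    child = [
        ("EXTERNAL", "4.1", "Begin_Calibration_Dialog"),
        ("EXTERNAL", "4.2", "Trigger_Type"),
        ("4.1", "4.2", "Calibration_Run_Parameters"),  # purely internal flow
        ("4.2", "EXTERNAL", "Calibration_Data"),
    ]

    child_ins, child_outs = external_flows(child)
    assert child_ins == parent_inputs and child_outs == parent_outputs, \
        "parent bubble and child diagram are out of balance"
    print("DFD 4 balances against its parent bubble")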

(* Data Dictionary for DFD 4, AMJ, 15-Apr-86 *)

Calibration_Dialog = * See DD for DFD Context *
    * Must add the following: *
    [ Calibration_Diagnostic_Request : Calibration_Diagnostic_Response
    | Test_Diagnostic_Request : Test_Diagnostic_Response
    | Calibration_Intervention_Demand : Operator_Response ]

External_Parameters = * See DD for DFD Context *

Calibration_Data = * See DD for DFD 0 *
    * Must add the following: *
    + Trigger_Type + Zero_Suppressed_Flag + Pulser_Amplitude_Readout

Calibration_Monitor_Data = * See DD for DFD 0 *

System_Parameter = * See DD for DFD 0 *

System_Configuration_Dialog = * See DD for DFD 0 *

System_Status = * Contains information needed by the calibration and test
    bubbles to determine which events each should examine, and control
    words to determine actions taken *
    = Run_Number + Interpretation_of_event_type + Number_of_events_requested
    + Gain/Pedestal_request + [ hardware_state | PAUSE | ABORT ]

Done = * Data indicating completion or partial completion of task;
    indicates success or failure and resources that may be released *

Fig. 2. A fragment of a data dictionary.

[Fig. 3 (DFD 4.0, Manage Detector Parameters; authors AMJ, JL, DH; version 2.4, 01-Oct-86) shows bubbles 4.1 (Control Calibration Data Taking), 4.2 (Process Calibration Data), and a Verify & Summarize Calib. Results bubble, connected by flows such as Begin_Calibration_Dialog, End_Calibration_Dialog, System_Configuration_Dialog, Calibration_Run_Parameters, System_Parameters, Trigger_Type, Pulser_Amplitude, Formatted_Detector_Data, Processed_Calibration, Calibration_Data, and Calibration_History.]

Fig. 3. A fragment of the dataflow diagram for calibration.

3.4. Data dictionary

To add substance to the dataflow diagram, it is necessary to define what is meant by the data being passed. This is necessary to understand whether the processes indeed have sufficient information to carry out their jobs. We found the data definition rather time-consuming, and hard to express in a fashion which does not bias the choice of implementation. The data dictionary (fig. 2) is not easily maintained in a hierarchical fashion, since there is inherently a lot of cross-referencing of data used by other dataflow diagrams. Software support of a common data dictionary would have been welcome at an earlier stage than we achieved it; we did the bulk of our design work with simple editor files, which allowed no cross checks or syntax verification. As one enters more detailed design, the contents of the dataflows start to contain data objects (such as Zebra banks) which will be needed for long-term user documentation. After a lot of discussion we decided to hold the Zebra bank definitions in external files which are pointed to by the data dictionary as it descends to that level of detail. There are some good reasons for this separation, as the use of the banks normally requires more structural information than is easily represented in the data dictionary (see entity-relationship diagram below).
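The dictionary notation is regular enough to hold in machine-readable form, which is what makes the cross checks we lacked possible. A minimal sketch, in hypothetical Python (item names follow fig. 2; the representation is invented, and is not the RDB-based tool described in section 5):

    # A data dictionary as a mapping from item name to its definition:
    # a list of component names plus a free-text comment.
    # Illustrative sketch; only the item names are taken from fig. 2.

    DD = {
        "Calibration_Data": {
            "components": ["Trigger_Type", "Zero_Suppressed_Flag",
                           "Pulser_Amplitude_Readout"],
            "comment": "see DD for DFD 0; additions for DFD 4",
        },
        "Trigger_Type": {"components": [], "comment": "primitive item"},
        "Zero_Suppressed_Flag": {"components": [], "comment": "primitive item"},
    }

    def undefined_components(dd):
        # Names used as components but never defined: the syntax check
        # our simple editor files could not perform.
        used = {c for entry in dd.values() for c in entry["components"]}
        return sorted(used - dd.keys())

    print(undefined_components(DD))   # -> ['Pulser_Amplitude_Readout']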

3.5. Process specification

The other objects on the dataflow diagrams which needed definition were the processes. Here, discovering the design in a top-down fashion, combined with the size of the project and the time between meetings, led to difficulties. In principle, one needs process specifications only for the lowest-level, atomic processes. Higher-level processes are described in terms of their components. However, if the design effort is suspended for some time at an intermediate level, it is very easy to lose track of what had been meant by particular high-level processes, even with careful choice of names. There was considerable aversion to this "extra work", and we paid for neglecting it. Even for the lowest-level processes, the writing of these specifications before coding has not been uniform, although all code entering the library does have such a description embedded in prolog comments.

3.6. Entity-relationship diagram

To this point, we have not used the entity-relationship diagram in our design effort. It represents graphically the structural interrelations of data, which are hard to derive from the data dictionary. It is particularly useful in the design of databases, a problem on which we are just embarking. Some of the same function is also carried out in diagrams representing the dependency structure of Zebra banks, which we will need in any case.

[Fig. 4 (state transition diagram for DOCALIB; author JTL, Nov. 1986) shows the states IDLE, SETUP, TAKING DATA, and PAUSED. Transitions are labelled with conditions such as REQUEST, BEGIN, PAUSE, CONTINUE & NOT INCOMPLETE, DONE & NOT LAST, DONE & LAST, and STOP, and with actions such as ACQUIRE RESOURCES, ENABLE/DISABLE TRIGGER, INITIALIZE/UPDATE CALIB STEP, VERIFY RESULTS, and RELEASE RESOURCES.]

Fig. 4. A state transition diagram for the calorimeter calibration program.
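Read as code, fig. 4 is a small finite-state machine: a table mapping (state, condition) to (new state, action). The sketch below is hypothetical Python following the figure's labels, not the DOCALIB implementation:

    # Finite-state machine following fig. 4. Transitions map
    # (state, condition) -> (new state, action). Illustrative only.

    TRANSITIONS = {
        ("IDLE",        "REQUEST"):       ("SETUP",       "acquire resources"),
        ("SETUP",       "BEGIN"):         ("TAKING_DATA", "enable trigger; initialize calib step"),
        ("TAKING_DATA", "PAUSE"):         ("PAUSED",      "disable trigger"),
        ("PAUSED",      "CONTINUE"):      ("TAKING_DATA", "enable trigger"),
        ("TAKING_DATA", "DONE_NOT_LAST"): ("TAKING_DATA", "update calib step"),
        ("TAKING_DATA", "DONE_LAST"):     ("IDLE",        "disable trigger; verify results; release resources"),
        ("TAKING_DATA", "STOP"):          ("IDLE",        "disable trigger; verify results; release resources"),
    }

    def step(state, condition):
        # Look up the transition and report the action taken on the way.
        new_state, action = TRANSITIONS[(state, condition)]
        print(f"{state} --{condition}--> {new_state}: {action}")
        return new_state

    s = "IDLE"
    for c in ("REQUEST", "BEGIN", "PAUSE", "CONTINUE", "DONE_LAST"):
        s = step(s, c)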

3.7. State transition diagram

The state transition diagram (fig. 4) is a tool to represent dynamic activation and deactivation of processes. It is most useful in the implementation design stage, and has clearly helped our thinking. It represents states of the system (boxes), the conditions which cause it to change states (above the lines on the arrows), and the actions that take place upon transition (below the lines on the arrows). The main uses are in online systems, where states represent broad behavior modes of the system, and in human interface description, where there is a good match between a state (a behavior repertoire) and a menu.

3.8. Physical DFD

As one begins implementation, one should alter the dataflow diagrams to add new processes (such as communication interface routines) needed to actually implement the design. We have instead gone directly to structure charts (see below).

3.9. Processor DFD

A transformation of the physical dataflow diagram in which the bubbles are the processors involved highlights interprocessor communication. Similar diagrams can be done for task communication. We have not done so in general. These can help highlight choices in partitioning of processes, or serve as documents showing interrelations of sets of tasks.

3.10. Structure chart

The structure chart (fig. 5) highlights static routine structure and inter-routine communication. We have used these extensively, and they will serve as part of our long-term documentation. As they are fairly close to the code, they are very well accepted by the group. We are also making good use of them to criticize a proposed code organization before actual coding, and believe we have substantially improved our design quality as a result.

3.11. Idealization/logical model

Our first design phase tried quite hard to produce a purely logical model, with the hope of not introducing more complexity into the design than the problem itself warranted. This has been mostly successful, in spite of its being a bit of a strain on our usual habits of mind. We did, however, feel that a design which did not recognize some distinction between online filtering and offline analysis would strain credulity, and we thus incorporated this into our basic logical design.

3.12. Derivation of logical model

A strongly recommended technique for discovering a logical process design is to assign a process to handle each distinct input to the context diagram. Our context diagram was dauntingly complex, but taking this approach for lower-level objects has shown great utility in the design of the interactive and online parts of the system. It is not as well adapted to offline needs, as there is one main input (event data) and an enormously complex response. Here approaches akin to object-oriented design are more promising; one needs to create processes for all data objects to be manipulated, and processes to create all relations between them. The Zebra bank dependency chart or entity-relationship diagrams are more useful starting points here.

3.13. Derivation and refinement of structure charts

The techniques for deriving first-draft structure charts and for refining structure charts [2] are generally well received in the collaboration, and we seem to be making progress at doing better ones. The process is not as algorithmic as the derivation of process structure from the classes of input stimuli, as the techniques are more in the form of a list of desiderata (short data-passing paths, keeping actual input and output far from high-level routines, avoiding passage of control flags, leaving decisions at high levels and work at low levels) than methods for obtaining them. However, it is clear to us even at this stage that public exposure of a design before it is cast into code, and comparison against these criteria, has materially improved our software quality in terms of giving easily understood and readily modifiable code.
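One of these desiderata, avoiding the passage of control flags, is easiest to see in code. In the first version below the caller passes a flag and the subordinate makes the decision; in the refined version the decision stays at the high level and each subordinate just does work. This is a hypothetical Python sketch, not D0 code:

    # Before: a control flag couples caller and subordinate; the subordinate
    # must know about the caller's decision logic.
    def process_event_flagged(event, is_calibration):
        if is_calibration:
            print("accumulate calibration sums for", event)
        else:
            print("reconstruct physics event", event)

    # After: the decision stays at the top; each subordinate has one job
    # and can be replaced or tested in isolation.
    def accumulate_calibration(event):
        print("accumulate calibration sums for", event)

    def reconstruct(event):
        print("reconstruct physics event", event)

    def top_level(event, event_type):
        # Decisions at a high level, work at a low level.
        if event_type == "calibration":
            accumulate_calibration(event)
        else:
            reconstruct(event)

    top_level({"id": 1}, "calibration")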

Fig. 5. A structure chart.


3.14. Walkthroughs

Walkthroughs are the formal review of parts of the software. Criteria for good walkthroughs are described in the works in the references. We have been doing a lot of informal reviewing, but have not been able to discipline ourselves to hold formal reviews. Even so, the informal effort seems highly worthwhile. Our situation might be a bit easier if our software effort were more centralized.

4. Acceptance

The overall acceptance of the methods, and of their aims, in the collaboration is generally rather good. Training is fairly important in raising the level of acceptance: if you get people to sit in one place long enough to listen and then actually practice the methods, they are generally pleased by the results. Reading alone is helpful, but experience with the methods is irreplaceable. We find SA/SD does slow down progress on some items where wide assent is needed, and we try to remedy this as best we can with telecommunications replacing plane fares, or with centralization, which would tend to cut universities out of the software effort entirely. The large front-end investment is of some concern when there are delivery time pressures. So far, our strategy has been to invest some of our best people in design efforts, first on the broad structures, and then in the specific areas most in trouble.


5. Software support

5.1. Purchased

The only software support we have purchased for SA/SD is a PC-based program from STRUCSOFT. Its features are quite good, but the difficulty of accessing PCs over a network has discouraged its introduction. It has been difficult to convince people to spend money not only for the courses, but also for the rather expensive software needed for good support of the methods. At this stage, with design documents totalling over 100 pages, it would take considerable effort, and not all of it clerical, even to enter the current design into a support system.

5.2. Homegrown

We have some home-grown software support. A graphics interface draws the processes and text of our dataflow diagrams, but not the flows connecting the objects. It requires explicit coordinates for placement of objects. This, plus periodic mailing, is the core of our graphics support. EVEDT draws state transition diagrams and structure charts in a form suitable for DECMAIL. We have also moved the information from the files specifying the dataflow diagram drawings, and the associated data dictionaries, to a database on our VAXes. Remote login allows remote access to these. There are rather sophisticated data dictionary manipulation facilities (printouts associated with given diagrams or given processes), but checking diagrams across levels has not yet been implemented.
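Our manager was built on DEC RDB, whose calls cannot be reproduced faithfully here; the flavor of the schema and of a "printout for a given diagram" query can, however, be suggested with Python's sqlite3 as a stand-in. All table, column, and item names below are invented for illustration:

    # Flavor of a data dictionary database, with sqlite3 standing in for
    # DEC RDB. Schema and names are invented for illustration.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE dd_entry (name TEXT PRIMARY KEY, definition TEXT);
        CREATE TABLE diagram_flow (diagram TEXT, flow TEXT,
                                   FOREIGN KEY (flow) REFERENCES dd_entry(name));
    """)
    con.executemany("INSERT INTO dd_entry VALUES (?, ?)", [
        ("Calibration_Data", "Trigger_Type + Zero_Suppressed_Flag"),
        ("Trigger_Type", "* primitive *"),
    ])
    con.executemany("INSERT INTO diagram_flow VALUES (?, ?)", [
        ("DFD 4", "Calibration_Data"),
        ("DFD 4", "Trigger_Type"),
    ])

    # "Printout associated with a given diagram": every flow on the diagram
    # together with its dictionary definition.
    for row in con.execute("""
            SELECT f.flow, e.definition
            FROM diagram_flow f JOIN dd_entry e ON e.name = f.flow
            WHERE f.diagram = 'DFD 4' ORDER BY f.flow"""):
        print(*row)

A cross-level check like the balance check sketched in section 3.3 would be one more query over the same tables, which is the step we had not yet implemented.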

6. Spinoffs

I would like to mention two very pleasant spinoffs of the use of these methods. As a result of our introduction to SA/SD, dataflow diagram techniques were used for the design of a factory for fabrication of PC boards for our calorimeters (fig. 6). The dataflow diagram emphasizes precisely those features of greatest interest: potential for parallelism and need for storage. This design was done after 20 minutes of informal instruction to colleagues who had not been associated with the software effort. Dataflow diagrams were also used at the Berkeley International Conference on High Energy Physics to design the flow of papers and registration information. In both cases, the people implementing the design (physicists and technicians in one case, secretaries in the other) remarked on how easy the design was to understand and how it clarified their thinking about the process. In both cases the designers remarked that they reached a markedly better solution for these problems than their first intuitive design.

[Fig. 6 (12/23/86, HR & PM) shows the board fabrication flow, with steps such as washing, special inspection, and curing of boards, and stores for boards on racks and in a long-term storage room.]

Fig. 6. The D0 signal board factory.

7. Comparison with motivation: evaluation

If we compare the results to date with our expectations, we can make a few useful statements. These should be tempered with the knowledge that SA/SD is a life-cycle philosophy, and we are only now entering even the daily-use stage of software written with this philosophy, so in a real sense the jury is still out. As for the hoped-for benefits in terms of analysis of requirements, and making distributed software development possible, I would rate our experience to date as quite positive. The documentation quality generated by the method rates fair to good. The misgivings have to do with the fate of the dataflow diagrams in the long term without software support, and with the tension between data dictionaries and bank descriptions. The attraction of rigorous documentation, where the program is derived from the specification, is great, but its achievement still seems distant. As for the results in terms of raising software quality by critique and revision, I would immodestly rate our experience as very good. We all feel that we are writing better software now than we were before we started this venture. We are committed to continuing in this direction, and would do it over again if we were starting today.

References

[1] G. Kellner, Aleph note 127, Support of Aleph Software Development; and Aleph note 142, Practical Guidelines for SA/SD.
[2] M. Page-Jones, The Practical Guide to Structured Systems Design (Yourdon Press, 1980).
[3] P. Ward and the D0 System Design Group, Essential Modelling of a High Energy Physics Data Collection/Analysis System, presented at Structured Development Forum VII, 23-26 February 1986, San Francisco.
[4] S. Mellor and P. Ward, Structured Development for Real-Time Systems (3 vols.) (Yourdon Press, 1986).