J. SYSTEMS SOFTWARE 1992; 18:101-108

Controversy Corner
It is the intention of the Journal of Systems and Software to publish, from time to time, articles cut from a different mold. This is one in that series. The object of the Controversy Corner articles is not so much to present information as to stimulate thought. Topics chosen for this coverage are not traditional formal discussions of research work, but rather are informal presentations of key issues in the systems and software world.

This series will succeed only to the extent that it stimulates not just thought, but action. If you have a strong reaction to the article that follows, either positive or negative, write to Robert L. Glass, Editor, Journal of Systems and Software, Computing Trends, P.O. Box 213, State College, PA 16804. We will publish the best of the responses as Controversy Revisited.

Real-Time Systems: Another Perspective

Wolfgang A. Halang, University of Groningen, Department of Computing Science, Groningen, The Netherlands

1. INTRODUCTION

1.1 Definition of Real-Time Systems

The term real-time system is, unfortunately, often misused. Therefore, it is necessary to precisely define the notions relating to real-time systems. The fundamental characteristic of this field is the real-time operating mode, which is defined in the German industry standard DIN 44 300 [1] as follows: The operating mode of a computer system in which the programs for the processing of data arriving from the outside are permanently ready, so that their results will be available within predetermined periods of time; the arrival times of the data can be randomly distributed or be already a priori determined, depending on the different applications.

It is the task of digital computers working in this operating mode to execute programs, which are associated with external technical processes. The program processing must be temporally synchronized with events occurring in the external processes and must keep pace with them. Hence, real-time systems are always to be considered as embedded in a larger environment and so are also called "embedded systems."

1.2 Real-Time System Requirements

Real-time operation distinguishes itself from other forms of data processing by the explicit involvement of the dimension time. This is expressed by the following two fundamental user requirements, which real-time systems must fulfill even under extreme load conditions:

- timeliness
- simultaneity

As we shall see later, these requirements are supplemented by two further ones of equal importance:

- predictability
- dependability


On request from the external process, data acquisition, evaluation, and appropriate reactions must be performed on time. This requires not only mere processing speed, but also the timeliness of the reactions within predefined and predictable time bounds. Hence, it is characteristic of real-time systems that their functional correctness depends not only on the processing results, but also on the instants when these results become available. Correct instants are determined by the environment, which cannot be forced to yield to the computer's processing speed, as in batch and time-sharing systems.

According to the external processes into which real-time systems may be embedded, there are environments with hard and with soft time constraints. They are distinguished by the consequences of violating the timeliness requirement: whereas soft real-time environments are characterized by rising costs with increasing lateness of results, such lateness is not permitted under any circumstances in hard real-time environments, because late computer reactions are either useless or dangerous. In other words, the costs for missing deadlines in hard real-time environments are infinitely high. Hard time constraints can be determined precisely and typically result from the physical laws governing the technical processes being controlled [2].
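This cost distinction can be formalized with a penalty function over completion time. The formulation below is our illustration of the argument, not the author's notation: for a result due at deadline d and delivered at time t,

```latex
\[
c_{\mathrm{soft}}(t) =
\begin{cases}
  0        & t \le d \\
  f(t - d) & t > d
\end{cases}
\qquad
c_{\mathrm{hard}}(t) =
\begin{cases}
  0      & t \le d \\
  \infty & t > d
\end{cases}
\]
```

where f is some nondecreasing penalty function: in a soft real-time environment the cost grows with increasing lateness, whereas in a hard one any lateness at all is unacceptable.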

The second requirement, for the simultaneous processing of external process requests, implies that real-time systems must essentially be distributed and must also provide parallel processing capabilities.

The definition of the real-time operating mode has important consequences for the dependability of real-time systems, since the demanded permanent readiness of computers working in this mode can only be provided by systems that are fault tolerant and robust, especially with respect to inadequate handling. These dependability requirements hold both for hardware and software. They are particularly important for those applications in which computer malfunctions not only cause loss of data, but also endanger people and major investments. Naturally, expecting high dependability does not mean that a system will never fail; no technical system is absolutely reliable. However, by using appropriate measures, one must strive to make the violation of deadlines and the corresponding damages quantifiable and as unlikely as possible. In so doing, the individual limitations of a real-time system must be recognized and the risk of its use for process control must be carefully considered [2].
Data to be processed may arrive randomly distributed in time, according to the definition of the real-time operating mode. This fact has led to the widespread conclusion that the behavior of real-time systems cannot and may not be deterministic. This conclusion, which found expression in the nondeterministic selection of a rendezvous partner as the semantics of the selective-wait statement in the Ada language, is based on a thinking error! Indeed, the external technical process may be so complex that its behavior appears random. The reactions to be carried out by the computer, however, must be precisely planned and fully predictable. This holds in particular for the case of the simultaneous occurrence of several events, leading to competition for service, and also includes transient overload and other error situations. Then, the user expects the system to degrade its performance gracefully in a transparent and foreseeable way. Only fully deterministic system behavior will allow the safety licensing of programmable devices for safety-critical applications. The notion of predictability and determinism appropriate in this context can be illustrated by this example: we do not know when or if a house will burn, but we expect the prompt arrival of the fire department when it is called. As we have seen, predictability of system behavior is of central importance for the real-time operating mode. It supplements the timeliness demand, for the latter can only be guaranteed if the system behavior is precisely predictable, both in time and with respect to the reactions to external events.
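To illustrate the kind of determinism meant here, the following minimal sketch resolves simultaneous events by a fixed priority order, so that the reaction to any combination of pending events is completely planned in advance. The event names and handler bodies are hypothetical placeholders, not taken from the article.

```c
/* Deterministic servicing of simultaneous events by fixed priority,
 * in contrast to a nondeterministic selective wait. */
#include <stdbool.h>

#define N_EVENTS 3

static volatile bool pending[N_EVENTS]; /* set by the interrupt layer */

static void handle_emergency_stop(void) { /* application-specific */ }
static void handle_sensor_update(void)  { /* application-specific */ }
static void handle_operator_input(void) { /* application-specific */ }

/* Index = priority; the lower index is always served first. */
static void (*const handler[N_EVENTS])(void) = {
    handle_emergency_stop,
    handle_sensor_update,
    handle_operator_input,
};

static void dispatch_one_event(void)
{
    for (int i = 0; i < N_EVENTS; ++i) {
        if (pending[i]) {
            pending[i] = false;
            handler[i]();
            return;            /* rescan from the top afterwards */
        }
    }
}

int main(void)
{
    pending[2] = pending[0] = true; /* two "simultaneous" events      */
    dispatch_one_event();           /* deterministically serves 0 ... */
    dispatch_one_event();           /* ... and only then 2            */
    return 0;
}
```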

1.3 The Need for New Thinking Categories and Optimality Criteria

The definition cited in section 1.1 implies that some prevailing misconceptions (in addition to the ones mentioned in [3]) about real-time systems need to be overcome: neither time-sharing nor fast systems alone are necessarily real-time systems. Commencing the processing of tasks in a timely fashion is much more significant than speed. Thinking in probabilistic or statistical terms, which is common in computer science with respect to questions of performance evaluation, is as inappropriate in the real-time domain as are the notion of fairness for the handling of competing requests or the minimization of average reaction times as optimality criteria of system design. Instead, worst cases, deadlines, maximum run times, and maximum delays need to be considered. To realize predictable and dependable real-time systems, reasoning in static terms and the acceptance of physical constraints are necessary: all dynamic and "virtual" features are considered harmful.
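A small worked example (the numbers are ours) shows why minimizing average reaction times is the wrong criterion. Consider two hypothetical designs answering requests that carry a 10 ms deadline:

```latex
\[
\begin{array}{lccc}
 & \text{response times (ms)} & \text{average} & \text{maximum} \\
\text{Design A} & 1,\ 1,\ 1,\ 25 & 7~\text{ms}  & 25~\text{ms} \\
\text{Design B} & 8,\ 8,\ 8,\ 8  & 8~\text{ms}  & 8~\text{ms}
\end{array}
\]
```

Design A is superior on average yet misses the deadline; only the worst case decides real-time feasibility.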

Despite the best planning of a system, a transient overload in a node resulting from an emergency situation is always possible. To handle such cases, load-sharing schemes have been devised that migrate tasks between the nodes of distributed systems. In industrial embedded real-time systems, however, such ideas are generally not applicable, because they hold only for computing tasks. In contrast, control tasks are highly I/O bound, and the permanent wiring of the peripherals to certain nodes makes load sharing impossible. In consequence, research must be more application and reality oriented.
Maximum processor utilization is a major issue by the thinking criteria of classic computer science and is still the subject of many articles. For embedded real-time systems, however, it is totally irrelevant whether processor utilization is suboptimal, as costs have to be seen in a larger context, viz., in the framework of the controlled external process and with regard to the latter's safety requirements. Taking into account the costs of a technical process and the possible damage that a processor overload may cause, the cost of a processor is usually negligible and, in light of steadily declining hardware costs, of decreasing significance. A one-hour production stoppage of a medium-size chemical facility due to a computer malfunction and the required clean-up costs $50,000. This is about the price of the computer controlling the process itself; a processor board costs only a fraction of that amount. Hence, processor utilization is not a feasible design criterion for embedded real-time systems. Lower processor utilization is a small price to pay for system and software simplicity (an experienced system engineer's salary may run up to $1,600 per day) as a prerequisite to achieving dependability and predictability.

2. CONSOLIDATION: THE EUROPEAN CONTRIBUTION

The first intensive research activities in real-time systems date back some 30 years. Until recently, owing to language barriers and the literature not being easily accessible, there was a considerable lack of communication between the different national research groups, resulting in a number of parallel developments and repetitions. Several important and rather early developments that were carried out in Europe and appear to be relatively unknown will be highlighted below.

Electrical, chemical, and control systems engineers first elaborated the real-time systems field in the early 1960s. Research and development efforts were aimed toward improving the then unsatisfactory software situation. Thus, the first, and later the majority, of high-level real-time languages were defined and developed in Europe: RTL/2 and Coral 66 were developed in Britain, PROCOL and LTR in France, and PEARL in Germany. Although it was standardized 10 years after PEARL, Ada, the U.S. Department of Defense language for programming embedded systems (a French development), does not meet the functionality of PEARL with respect to real-time features. In close connection to these language developments and to the use of special-purpose process-control peripherals, the research on real-time operating systems advanced considerably. Owing to the exceptional requirements for the implementation of PEARL, supporting real-time operating systems with still unmatched capabilities were developed as early as the mid-1970s. During that decade, German research activities in all areas of real-time systems were boosted by the Federal Government-funded project PDV. The results were published in several hundred dissertations and project reports.

In Britain, provoked by legislation, much attention was directed to the safety aspects of real-time systems; important progress has been made in this area. For example, a new microprocessor, the VIPER, was developed, with the unique feature that the correctness of its design has been formally proven. British designers have also taken the lead in establishing methods for the formal verification of software. The fundamentals on safety engineering of programmable systems, as elaborated by the TC 7 of the European Workshop on Industrial Computer Systems, have already been incorporated into international standards [16].

As a consequence of these systematization efforts, the field receives considerable attention in European institutions of higher education. The first comprehensive textbook on real-time systems, for instance, was published as early as 1976 [5]. Although initial research activities abated somewhat in the early 1980s, the environments for carrying out real-time projects have steadily improved. Thus, in Europe, the use of assembly language programming for real-time applications is now rare. Instead, software tools for supporting the entire development process of real-time systems, from hardware configuration and software requirements specification to code generation and documentation, are already in widespread use. The first and most comprehensive of such tools is EPOS [6, 7], whose origins date back to the mid-1970s.

3. A COMPREHENSIVE REAL-TIME SYSTEMS RESEARCH PROGRAM

In this section, we identify and discuss a number of topics that, in our estimation, are or will be the major areas of research activity on real-time systems. To provide a motivation for the theme of this contribution and to place it into a larger context, we shall pay special attention to all time-related aspects. The following list comprises all important directions in which real-time systems research efforts are or should be heading:

- Conceptual foundations of real-time computing;
- Predictability and techniques for schedulability analysis;
- Requirements engineering and design tools;
- Reliability and safety engineering, with special emphasis on the quality assurance of real-time software;
- High-level languages and their concepts of parallelism, synchronization, communication, and time control;
- Real-time operating systems;
- Scheduling algorithms;
- Distributed, fault-tolerant, language- and/or operating system-oriented innovative computer architectures;
- Hardware and software of process interfacing;
- Communication systems;
- Distributed data bases with guaranteed access times;
- Artificial intelligence, with special emphasis on real-time expert and planning systems;
- Practical utilization in process automation and real-time control;
- Standardization.

Academic real-time systems research has to be based on a solid, realistic model derived from the application domain and incorporating the new thinking categories and optimality criteria outlined above. Computer science has no well-developed concept of time. As a matter of fact, time appears to be systematically suppressed. Therefore, it is necessary to reflect on the role time plays. A clear understanding of the reasons why and the manner in which time is involved in the design of real-time systems is needed as a prerequisite for a sound methodology. We expect that the systematic exploration of common-sense notions about time and of analogies with everyday-life solutions for time-related problems will yield principles for the design of real-time systems.

A time metric must be introduced to realize the predictability requirement. Achieving temporal predictability and full determinism of system behavior will be a major effort, in the course of which many features of existing programming languages, compilers, operating systems, and hardware architectures will have to be questioned. To this end, real-time systems must be designed in all aspects to be as simple as possible, for simplicity fosters understandability and enhances dependability and operational safety. Even at the cost of losses in (average) speed, all features impairing the predictability of system behavior, such as direct memory access, caches, and virtual memory, must be renounced. As far as possible, parallelism is to be implemented physically to prevent problems.

Over the past decade, there has been a proliferation of formal specification methods that incorporate some notion of time. But while these methods may have some use in verifying qualitative timing properties, they are of little value in reducing complexity. Electrical engineers designing real-time systems do not yet have requirements engineering and design tools at their disposal that are oriented to their way of thinking and that allow them to precisely express all the timing constraints they encounter. Absolute timing or temporal supervision of activities, expressed in a system-independent language, is still not possible. A framework is needed in which time planning can be carried out and that allows for the mutual balancing of tasks with different temporal urgencies.

We are convinced that graphic methods are best suited to express the concurrency, cooperation, and temporal behavior of real-time systems in a fully predictable way, because they take advantage of the inherent capability of pictures to convey complex information effectively. Moreover, they allow for straightforward formalization, a prerequisite of their use for program specification and verification.

When developing real-time programs, not only software correctness, in the sense of mathematical mappings as in sequential processing environments, has to be proved; the intended behavior in the time dimension and the interaction of concurrently active processes also need verification. Although not yet fully developed, there is already a host of methods available to carry out the former task. For the latter, analytical methods are required that work in close cooperation with requirements engineering and design tools and perform a static, a priori check of whether the specified time conditions can be met ("schedulability analysis" [8]). Such new verification procedures must be quantitative and time oriented, and should utilize the partial synchronization of tasks implied by the timing constraints.
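As a concrete, if simplified, instance of such a static a priori check, the sketch below applies the classical utilization test for preemptive earliest-deadline-first scheduling of independent periodic tasks whose deadlines equal their periods. It is not the schedulability analyzer of [8], and the task set is hypothetical.

```c
#include <stdio.h>

struct task {
    const char *name;
    double wcet;    /* C_i: worst-case execution time (ms) */
    double period;  /* T_i: activation period (ms)         */
};

/* Necessary and sufficient for preemptive EDF with deadline = period:
 * the task set is feasible iff sum(C_i / T_i) <= 1. */
static int edf_feasible(const struct task *set, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; ++i)
        u += set[i].wcet / set[i].period;
    printf("total utilization U = %.3f\n", u);
    return u <= 1.0;
}

int main(void)
{
    struct task set[] = {           /* hypothetical controller tasks */
        { "sample_sensors",  2.0,  10.0 },
        { "control_law",     5.0,  20.0 },
        { "update_display", 10.0, 100.0 },
    };
    int n = (int)(sizeof set / sizeof set[0]);
    puts(edf_feasible(set, n) ? "task set is feasible"
                              : "task set may miss deadlines");
    return 0;
}
```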

The guiding principle for the development of the next generation of high-level real-time programming languages should be to support predictable system behavior and inherent software safety without impairing understandability. Language constructs for the formulation of absolute and relative time dependencies and for controlling the operating system's resource scheduling algorithms must be provided. The latter feature will enable the compiler to perform, to a large extent, checks for the feasible executability of task sets. New languages should further support the reusability of modules, distributed software, various time-dependent fault-tolerance mechanisms, and the programming of programmable logic controllers. New user-oriented synchronization methods that employ time as an easily conceivable and natural control mechanism must be devised and provided. The next-generation languages should combine the advantages of PEARL [9] and Real-Time Euclid [17] and should, for safety purposes, incorporate as many ideas from NewSpeak [10] as practically feasible.

Future real-time operating systems will be expected to guarantee the deadlines and precedence relations between tasks, under observation of fault-tolerance measures, on the basis of integrated resource scheduling. The common deadlines of several cooperating tasks must be met in distributed systems while taking the transmission overhead into account. Frequent temporal supervision measures must be taken during program execution to guarantee timely system behavior or to initiate a graceful degradation of performance. The arrival of tasks ready for execution and requesting resources can no longer be considered a random process. For the sake of predictability, a more deterministic procedure must be applied that utilizes the information about the future instants when tasks will enter the ready state. Thus, static scheduling methods are applicable to a large extent, and future resource conflicts can be detected and possibly resolved at a very early stage. A real-time operating system will have to be able to predict at any time whether all active tasks will meet their deadlines. For this purpose, the sufficient conditions suggested by Halang [4] and Henn [11] may be used. Together with the corresponding scheduling algorithms, they are already useful in the design phase for forecasting scheduling sequences and transient overloads and for supporting capacity planning. Feasible algorithms for resource scheduling that render easily understandable, predictable, and modifiable temporal system behavior must be developed and applied.
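One time-honored embodiment of such static scheduling is a table-driven cyclic executive: the complete dispatching order is fixed before run time, so feasibility can be checked offline and nothing is decided during operation. The following is a minimal sketch; the task names, frame layout, and timer hook are our assumptions.

```c
#include <stddef.h>

static void sample_sensors(void) { /* application-specific */ }
static void control_law(void)    { /* application-specific */ }
static void log_state(void)      { /* application-specific */ }
static void idle(void)           { /* spare capacity */ }

/* One major cycle of four minor frames; the table itself is the
 * schedule - nothing about the execution order is decided at run time. */
static void (*const frame_table[][2])(void) = {
    { sample_sensors, control_law },  /* frame 0 */
    { sample_sensors, log_state   },  /* frame 1 */
    { sample_sensors, control_law },  /* frame 2 */
    { sample_sensors, idle        },  /* frame 3 */
};

static void cyclic_executive(void)
{
    size_t n_frames = sizeof frame_table / sizeof frame_table[0];
    for (size_t frame = 0; ; frame = (frame + 1) % n_frames) {
        /* here one would block on a periodic hardware timer that
         * marks the start of each minor frame (platform-specific) */
        for (size_t i = 0; i < 2; ++i)
            frame_table[frame][i]();
    }
}

int main(void)
{
    cyclic_executive();   /* runs forever, one frame per iteration */
    return 0;
}
```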

A uniform theory of correctness, timeliness, and dependability is urgently needed as a foundation for the design of large distributed and fault-tolerant systems. Fault-tolerance measures take time; therefore, there is a danger that they may be sacrificed to efficiency arguments. Accordingly, research into fault tolerance should yield effective, time-bounded methods for error handling and the administration of redundancy. If possible, all system components should continuously supervise their own operation and should react to failures in a predefined way. The guiding principle for devising error-handling facilities is to minimize a malfunction's effect. Thus, a system will reduce its performance gracefully in the presence of failures; this degradation behavior must be fully predictable. The effect of system load on the fault susceptibility of real-time systems has not yet been investigated.

When developing new architectures for real-time computers, the designers' main objective must be the support of programming languages, operating systems and scheduling algorithms, fault tolerance, and time management, as well as error handling and time-bounded communication. This contributes to increased speed and a narrowing of the semantic gap between hardware and software. Favorable interconnection topologies and specialized components with inherently low internal data transmission requirements are needed for distributed architectures. For predictability reasons and to meet the demands of the applications, new process peripherals with accurate user-timed behavior must be developed in connection with the realization of time-based synchronization primitives.

The innovations in VLSI technology will continue to have a major impact on the entire field of computer design by providing higher levels of integration. Within 10 years, with the advent of gigascale integration, some 100 million to 1 billion transistors should fit on a single die. Thus, it should be possible to accommodate up to four processors on one chip. These processors do not necessarily have to be equal; besides a general-purpose task processor, there could be dedicated processors for fast interrupt recognition and response, for the operating system kernel, and for I/O handling. Such an architecture would reflect the parallelism inherent in real-time operation and would balance real-time performance. A thorough analysis of the application domain will reveal whether this approach is feasible or whether it would be more advantageous to put the processors on separate chips and use the enormous transistor count for integrated memories and I/O devices to prevent communication bottlenecks.

With respect to real-time communication systems, future research should emphasize predictably timely network behavior, integrated scheduling algorithms for communication channels and other resources, and dynamic routing for message transmissions with guaranteed deadlines. Since in real-time systems data are time dependent and of only limited temporal validity, the latter needs to be supervised and the communication system must block out-of-date messages.

To meet the high-speed requirements of distributed real-time data base systems, a maximum degree of parallelism has to be realized for transaction processing. A theory of the integrated control of this parallelism and the corresponding resource scheduling is needed; it should aim to maximize parallelism and minimize the worst-case transaction processing time under observation of the boundary conditions of data consistency, transaction correctness, and transaction deadlines.
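Blocking out-of-date messages can be as simple as stamping each message with an explicit validity deadline and discarding it on reception once that deadline has passed. A minimal sketch, with the message fields and the POSIX clock source assumed by us:

```c
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

struct message {
    uint64_t valid_until;  /* absolute time in microseconds    */
    double   value;        /* the time-dependent process datum */
};

/* stand-in clock: CLOCK_MONOTONIC in microseconds (POSIX assumed) */
static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

/* Deliver the datum only while it is still temporally valid;
 * stale messages are blocked instead of being processed late. */
static bool accept_message(const struct message *m, double *out)
{
    if (now_us() > m->valid_until)
        return false;                 /* out of date: drop it */
    *out = m->value;
    return true;
}

int main(void)
{
    struct message m = { now_us() + 5000u, 42.0 };  /* valid for 5 ms */
    double v;
    return accept_message(&m, &v) ? 0 : 1;
}
```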

In real-time systems, artificial intelligence methods based on heuristic knowledge are mainly applied for the control and scheduling of time-bounded processes. The best possible solution to a problem is to be found within dynamically given time limits. It is an open research problem to develop symbol-processing methods observing such time limits in a predictable way. Among others, new storage management techniques other than garbage collection need to be devised.

In general, real-time systems research must be much more application oriented than other areas of computer science. The methods employed are generally process specific, because the process is part of the control loop closed by and in the computer. For example, this holds for overload-handling and error-recovery procedures, which can be designed by exploiting the processes' inertia and corresponding typical time constants. A real-time system is subject to variable time conditions depending on the process speed. It may be possible to relax them in case of overload, for example, by reducing the speed of a robot arm. Adaptive, self-correcting systems with carefully designed graceful performance degradation behavior in response to errors can only be constructed by full utilization of the process characteristics. Analogously, the comparison of two different designs or systems is possible only on the basis of application-specific benchmarks; mere MIPS figures do not say anything.
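A common pattern for such time-bounded problem solving is an "anytime" computation: the answer is improved iteratively, and whatever best solution exists when the dynamically given time limit expires is returned. A minimal sketch with a hypothetical refinement step:

```c
#include <stdio.h>
#include <time.h>

/* stand-in for one short, bounded improvement step of a heuristic */
static double refine(double current_best, int round)
{
    return current_best + 1.0 / (1.0 + round);
}

/* Improve the solution until the budget expires; since every step is
 * short and bounded, the limit is overrun by at most one step. */
static double solve_within(double budget_seconds)
{
    clock_t start = clock();
    double best = 0.0;
    int round = 0;
    while ((double)(clock() - start) / CLOCKS_PER_SEC < budget_seconds)
        best = refine(best, round++);
    return best;   /* best solution available at the deadline */
}

int main(void)
{
    printf("best after 10 ms of CPU time: %f\n", solve_within(0.010));
    return 0;
}
```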

4. AN IDEALISTIC VISION OF FUTURE DEVELOPMENT

If future real-time systems are not to fail as did the very early, fully centralized ones, VLSI technology-driven hardware development must be accompanied by a consolidation process with respect to architectural and software issues, for mission-oriented as well as commercial systems: "Military applications involving real-time systems at their base are becoming rapidly prevalent while the science and technology to support the credible design, construction, and enhancement of such systems is woefully deficient" [12]. The consolidation effort must center around fulfilling the timeliness, predictability, and dependability requirements, because they have not yet been met. These objectives can only be achieved by choosing simplicity as the fundamental design principle, i.e., by following Dijkstra's advice stressing the need for simplification [13]:

Computing's core challenge is how not to make a mess of it. . . so we better learn how not to introduce complexity in the first place. . . . It is only too easy to design resource sharing systems with such intertwined allocation strategies that no amount of applied queueing theory will prevent most unpleasant performance surprises from emerging. The designer that counts performance predictability among his responsibilities tends to come up with designs that need no queueing theory at all. . . . The moral is clear: prevention is better than cure, in particular if the illness is unmastered complexity, for which no cure exists. . . . Both the final product and the design process [must] reflect a theory that suffices to prevent a combinatorial explosion of complexity from creeping in. . . . It is time to unmask the computing community as a Secret Society for the Creation and Preservation of Artificial Complexity.

Since there is practically no real-time system without safety relevance in one way or another, the nontemporal requirement of dependability and high integrity is paramount. We must reach a stage where real-time systems are engineered with sufficient dependability to allow licensing authorities to formally approve their use for safety-critical control purposes. Simplicity is a precondition for this. The fundamental importance of simplicity is established by its position in the following causal chain:

Simplicity → (easy) Predictability → Dependability

At first, it is surprising to encounter the notion of predictability in the context of computing, since, in principle, all digital computers are fully deterministic and are, therefore, predictable in their behavior. To express precisely the special meaning of predictability that is appropriate as a fundamental concept of real-time computing, the adjective "easy" was used in the causal chain. It qualifies the notion of predictability as defined by Stankovic and Ramamritham [14] ("predictability means that it should be possible to show, demonstrate, or prove that requirements are met subject to any assumptions made, for example, concerning failures and workloads") by paying tribute to the economic and intellectual effort that must be invested in order to establish the property for a given real-time system. If the system is simple, it can be easily understood, which is the main step toward verification of its correct behavior in the sense of Descartes: "verum est quod valde clare et distincte percipio" (that is true which I perceive very clearly and distinctly).

Future hardware functions will be even more readily available than today's, will work at higher speeds, and will cost less. Hence, following the new thinking criteria, problem-oriented architectures for real-time systems that achieve predictability based on simplicity must be designed. Measures for performance enhancement are valuable only if their effects can be analyzed and quantified a priori, and if they are fully deterministic. As advantageous utilizations of cheap hardware, we envisage the separation of functions, their encapsulation into standardized and safety-licensed modules, and the provision of physical parallelism reflecting the parallelism inherent in the embedding processes.

Thus, real-time kernels may be implemented in firmware or hardware, and interrupt processing may be delegated to a separate unit in order to solve a serious problem, described by Dijkstra [13] as ". . . the real-time interrupt . . . its effect was the introduction of nondeterminism and endless headaches . . .". In addition, new functions could be realized, such as radio sets in every processing node for the reception of time signals from an official, global time reference, to replace the current inaccurate and difficult-to-synchronize computer clocks (signals of the official international time, derived by standards institutes such as the National Bureau of Standards from their atomic clocks, are continuously transmitted by the satellite-based Global Positioning System or, in some countries, via terrestrial radio, e.g., the long-wave station DCF 77 in Germany [15]).

Following the simplicity argument, we shall see a coexistence of standard real-time operating systems, i.e., various versions of UNIX upgrades, for less critical applications with softer time constraints, and of specialized small kernels in the domain of safety-sensitive embedded systems with tight deadlines. It is the latter, more demanding application area in which the first real-time systems fulfilling all four fundamental requirements will be realized. Some dedicated kernels already come close to guaranteeing timely and predictable operation. To catch up with respect to dependability, the endeavor of formally proving a kernel's correctness has been initiated.

There will be a new generation of real-time languages whose characteristics will not only be user- and application-oriented concepts but also, and more importantly, simplicity and inherent safety (Dijkstra calls Ada a monstrum [13]!). These languages will have to be the vehicles of a new programming paradigm: program design and the corresponding proof design, which are closely linked [13], will always be carried out together and, because of the dependability requirement, there will be no more real-time programs without proof. This paradigm necessitates the development of correct compilers. The high costs involved will foster software reusability; little progress has been made in this area because of "artistic" attitudes and a lack of (self-)discipline on the part of programmers. To a large extent, future critical real-time software will be assembled from standardized and safety-licensed modules, which may even be marketed in "ROM-canned" form.

Once hardware and real-time operating systems with predictable behavior and well-defined new languages have made possible the design of predictable real-time systems, it will be the task of comprehensive development environments to facilitate the design process. These tools will accompany all realization phases, viz., requirements engineering, design and implementation of combined hardware and software systems, system integration of preferably standardized components, and automatic generation of accompanying documents. They will allow implementation-independent structuring and description of systems and precise formulation of timing constraints, and, at the same time, will restrict freedom of choice to prevent errors. However, the major novel feature of these environments will be that, for each development phase, they will incorporate appropriate analyzers and formal verification tools. As a consequence of the predictability and dependability requirements, this will bring about another paradigm change: from empirical a posteriori validation (testing), which cannot prove the absence of errors, to design-integrated verification with mathematical rigor.

REFERENCES

1. DIN 44 300 A2: Informationsverarbeitung. Beuth-Verlag, Berlin-Cologne, 1985.
2. R. G. Herrtwich, Echtzeit, Informatik-Spektrum 12, 93-96 (1989).
3. J. A. Stankovic, Misconceptions About Real-Time Computing: A Serious Problem for Next Generation Systems, IEEE Computer 21, 10-19 (1988).
4. W. A. Halang, A Practical Approach to Pre-emptable and Non-pre-emptable Task Scheduling with Resource Constraints Based on Earliest Deadlines, in Proceedings of the Euromicro '90 Workshop on Real Time, IEEE Computer Society Press, Washington, 1990, pp. 2-7.
5. R. Lauber, Prozessautomatisierung, Vol. 1, 2nd ed., Springer-Verlag, Berlin, 1989.
6. R. Lauber, Development Support Systems, IEEE Computer 15, 36-46 (1982).
7. R. Lauber and P. Lempp, Integrated Development and Project Management Support System, in Proceedings of the 7th IEEE International Computer Software and Applications Conference COMPSAC '83, Chicago, 1983, pp. 412-421.
8. A. Stoyenko, A Schedulability Analyzer for Real-Time Euclid, in Proceedings of the IEEE 1987 Real-Time Systems Symposium, San Jose, California, 1987, pp. 218-227.
9. DIN 66 253: Programming Language PEARL. Part 1: Basic PEARL, 1981; Part 2: Full PEARL, 1982; Part 3: Multiprocessor-PEARL, 1989. Beuth-Verlag, Berlin-Cologne.
10. I. Currie, NewSpeak, in High Integrity Software (C. T. Sennett, ed.), Pitman, London, 1989, pp. 122-158.
11. R. Henn, Feasible Processor Allocation in a Hard-Real-Time Environment, Real-Time Syst. 1, 77-93 (1989).
12. A. van Tilborg, Preface to the ONR Kickoff Workshop on Foundations of Real-Time Computing Research Initiative, Real-Time Syst. Newslett. 5, 6-7 (1989).
13. E. W. Dijkstra, The Next Forty Years: Personal Note, EWD 1051, 1989.
14. J. A. Stankovic and K. Ramamritham, Editorial: What is Predictability for Real-Time Systems? Real-Time Syst. 2, 247-254 (1990).
15. G. Becker, Die Sekunde, PTB-Mitteilungen, January 1975. Physikalisch-Technische Bundesanstalt, Braunschweig.
16. International Electrotechnical Commission, Standard 880: Software for Computers in the Safety Systems of Nuclear Power Stations, IEC, Geneva, 1986.
17. A. Stoyenko and E. Kligerman, Real-Time Euclid: A Language for Reliable Real-Time System Design, IEEE Trans. Software Eng. 12, 941-949 (1986).