Copyright © IFAC Software for Computer Control, Graz, Austria 1986
PARALLEL PROGRAMMING IN ADA AND IN THE HUNGARIAN ADA COMPILER

J. Boo

Computer and Automation Institute, Hungarian Academy of Sciences, Budapest, Hungary
Abstract. The Ada language definition includes, besides the classical programming techniques, new concepts to overcome various problems in high-level programming. One of the most powerful tools found in Ada is that for real-time programming, with facilities to model parallel tasks. In Ada the task is the basic unit for defining a sequence of actions that may be executed in parallel with other similar units. Synchronization is achieved by a rendezvous between a task issuing an entry call and a task accepting the call. The real-time facilities are briefly discussed in this paper, together with the special features of the Hungarian Ada compiler.

Keywords. Ada; compiler; parallel processing; real-time system.

INTRODUCTION
Multiprogramming and real-time programming appeared some 20-25 years ago. A set of so-called real-time primitives was introduced to help the programming of sequential processes which execute in parallel, compete for the processor, and cooperate with each other. These are called parallel processes. Later on, these primitives were built into some languages. Concurrent Pascal was the first language which not only offers means for parallel programming but also checks their correct use. Some interesting proposals have been developed for parallel languages, too. The most important ones were CSP (Communicating Sequential Processes), suggested by Hoare in 1978, and DP (Distributed Processes), developed by Brinch Hansen also in 1978. Some of their concepts strongly influenced the structure of the Ada language.

The design of Ada began in 1978 and was standardized by 1983 with the issuing of the Ada Reference Manual. This was the first language which was planned from the very beginning to satisfy the requirements of parallel programming, and for this reason these facilities are in accordance with the other concepts of Ada.

PARALLEL PROCESSING IN ADA: TASKING

In Ada, tasks are entities whose execution proceeds in parallel. This means that each task can be considered as being executed by a logical processor of its own. Different tasks are executed independently, except at certain points where they are synchronized. Parallel tasks can be implemented on either single or multiprocessor architectures.

Task Type and Task Objects

Ada is one of the first languages which handles the task as an elementary type; it can be used for object declarations and for declaring composite types.

A task consists of two parts: the task specification and the task body. For example, let us see the well-known circular buffer program written in Ada. The BUFFER task ensures that at any time only one task can access the buffer, and it balances the speed of the producing and consuming tasks.

   task type BUFFER is
      -- entry declarations
      entry PUT(C : in CHARACTER);
      entry GET(C : out CHARACTER);
   end;                                                      (1)

   task body BUFFER is
      -- declarative part
      SIZE      : constant INTEGER := 100;
      POOL      : array (1 .. SIZE) of CHARACTER;
      COUNT     : INTEGER := 0;
      IN_INDEX  : INTEGER := 1;
      OUT_INDEX : INTEGER := 1;
   begin
      -- sequence of statements
      loop
         select
            when COUNT < SIZE =>
               accept PUT(C : in CHARACTER) do
                  POOL(IN_INDEX) := C;
               end;
               IN_INDEX := IN_INDEX mod SIZE + 1;
               COUNT := COUNT + 1;
         or
            when COUNT > 0 =>
               accept GET(C : out CHARACTER) do
                  C := POOL(OUT_INDEX);
               end;
               OUT_INDEX := OUT_INDEX mod SIZE + 1;
               COUNT := COUNT - 1;
         or
            terminate;
         end select;
      end loop;
   end BUFFER;

The task specification contains entries which can be seen and called from outside. The execution of a task is defined by its body. The task is an Ada unit and it can refer to the entities of its surroundings according to the usual "visibility" rules.
It can also have its own entities defined in the declarative part. The optional exception handlers can process exceptions raised during the execution of the body. This declaration defines a task type, and several task objects can be created from this type. We call a task object static if it is created by an object declaration, and dynamic if it is accessed by a pointer and created by allocation on the heap. A static task is said to be dependent on a task, subprogram or block if it was created within it. A dynamic task is said to be dependent on a unit if the access type pointing to the task type was defined within it. The unit on which the task depends is called its parent unit.

Task Activation and Termination

The execution of a task is divided into the elaboration of the declarative part (activation) and the execution of the statement part of the task body. During the object declaration only the interface to the task is created (its entries can be called from this time on), but its execution generally starts later. More precisely, static tasks created in a declarative part start their activation at the end of the declarative part, in parallel with each other and in undefined order. Dynamic tasks are activated immediately.

   declare
      type POINTER is access BUFFER;
      X, Y : BUFFER;
      Z : POINTER;
      W : POINTER := new BUFFER;   -- activation of the task pointed to by W
   begin
      -- activation of X and Y
      Z := new BUFFER;             -- activation of the task pointed to by Z
   end;

On the other hand, the execution of a task may complete earlier than the life of the object. A call to an entry of a task which has completed its execution causes an exception, so the interface to the object is cancelled even if the object is still alive.

There are points where tasks are synchronized. One of these synchronizations takes place at activation time: the task or the main program causing the activation of other tasks is suspended until the activations are finished. Another point of synchronization is when a task unit or the main program completes its execution. Normally this happens when the end of the statement part is reached, but it can also be the consequence of raising an exception or of the task being aborted. A task cannot be terminated, and a subprogram or block cannot be left, until all of its dependent tasks are terminated.
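To illustrate the last rule, the following sketch (it simply reuses the BUFFER type of (1); the block itself and the character 'X' are only assumed for the example) shows a block that cannot be left before its dependent task is able to terminate; it is the terminate alternative in the select statement of BUFFER that allows the block to be left.

   declare
      B : BUFFER;        -- B depends on this block
   begin
      B.PUT('X');
   end;   -- the block waits here until B is terminated or able to terminate
          -- (B reaches its terminate alternative once no further calls can arrive)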
Rendezvous

Entry calls and the corresponding accept statements are the primary means of synchronization and of passing values between tasks. Their execution is called a rendezvous in Ada. The entry call is very similar to a procedure call, but the statements belonging to it are executed by the called task during the accept statement, after the parameters have been passed. Because of its importance we examine the rendezvous in detail.

Tasks T1 and T2 can meet at a rendezvous if, for example, T1 calls an entry of T2 and T2 reaches an accept statement for this entry. If a task calls an entry of another task before the accept statement for this entry is reached in the task owning it, the calling task is suspended. On the other hand, a task is also suspended if it reaches an accept statement for one of its entries prior to any call of that entry. When an entry has been called and the corresponding accept statement has been reached, the accept statement is executed by the called task, while the calling task remains suspended. When the accept statement finishes, both tasks continue their execution in parallel.

For example, in (1), after creating a task object

   TASK_OBJ : BUFFER;

the producing and consuming tasks can issue the entry call statements:

   TASK_OBJ.PUT(CHAR);
   TASK_OBJ.GET(CHAR);
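For a concrete picture of the calling side, a producing task could be sketched as follows (the PRODUCER task and the way it obtains its characters are only assumed here; they are not part of the paper's example):

   task PRODUCER;

   task body PRODUCER is
      CHAR : CHARACTER := 'A';    -- in a real program the character would be computed or read
   begin
      loop
         TASK_OBJ.PUT(CHAR);      -- rendezvous with the BUFFER object;
                                  -- PRODUCER is suspended until BUFFER accepts the call
      end loop;
   end PRODUCER;

A consuming task would be written symmetrically around TASK_OBJ.GET.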
Identification Problems in Rendezvous

As can be seen, the rendezvous is characterized by an asymmetric naming scheme: the callers of an entry must name the other task, while the called task generally does not know the identity of its caller. This is very useful in many applications, for example when tasks are characterized as services and users. A user certainly needs to know the name of the requested service; on the other hand, a service does not need to know the names of the users. For example, a library program which provides resources to arbitrary users can be written in Ada. However, there are cases when the identity of the callers should be known. Let us consider the problem of scheduling a single resource between N users. The identity checking of the user tasks can be solved by the use of a family of entries.

   type USER_ID is range 1 .. N;

   task RESOURCE is
      entry REQUEST(ID : in USER_ID);
      entry RELEASE(USER_ID);      -- family of entries
   end;                                                      (3)

   task body RESOURCE is
      USER : USER_ID;
   begin
      loop
         accept REQUEST(ID : in USER_ID) do
            USER := ID;
         end;
         accept RELEASE(USER);
      end loop;
   end RESOURCE;

The user must supply its identity when requesting the resource, and after this the scheduler will accept only a call for release with this identity as index.

There can be another identification problem connected with Ada tasks. Assume that in the previous example the users are similar to each other, and for this reason we want to define one task body with different task objects for them. The different user tasks could identify themselves by an identifier of their own used as a parameter. Identifying the similar users for the RESOURCE task is a bit more complicated. A possible solution is to define the users as an array of tasks of type USER:

   task type USER is
      entry IDENTIFY(X : in INTEGER);
   end;

   task body USER is
      INDEX : INTEGER;
   begin
      accept IDENTIFY(X : in INTEGER) do
         INDEX := X;
      end;
   end USER;

   US : array (1 .. N) of USER;

The RESOURCE task must have an identification loop:

   for J in 1 .. N loop
      US(J).IDENTIFY(J);   -- IDENTIFY is an entry in the USER task
   end loop;

and hence INDEX can be used for identification. A disadvantage of this algorithm is that the USER tasks cannot begin their execution immediately after activation, because they have to wait for the identification, which takes place in order.

In an earlier version of Ada there was an entity, the so-called family of tasks, which could solve our problem. Generally, families of tasks could be very useful when there are several physical devices and distinct but similar tasks are required to handle them. For example, when disks are controlled by a disk handler, the disks would communicate with it by calling its entries and passing their family index as a parameter, so that the handler would know which disk is calling. In the Ada Reference Manual families of tasks have been dropped, since they do not conform to the type concept of tasking.
Entry Queues and Priority

If several tasks call the same entry before the corresponding accept statement, the calls are attached to a queue associated with the entry and handled in the order of arrival.

In Ada a priority value can be defined for each task. But this priority is not taken into account when a call is selected from the queue, only during the rendezvous.

It might be thought that the "first in, first out" nature of the entry queue is a severe constraint in cases when some requests have high priority. Nevertheless, the handling of requests with priority can be solved by using a different entry for each level. For example, let us see the RESOURCE task in (3) when the users are divided into two parts according to their priority.

   type USER_ID is range 1 .. N;
   type LEVEL is (QUICK, SLOW);

   task RESOURCE_WITH_PRIOR is
      entry REQ(LEVEL)(ID : in USER_ID);
      entry REL(USER_ID);
   end;

   task body RESOURCE_WITH_PRIOR is
      USER : USER_ID;
   begin
      loop
         select
            accept REQ(QUICK)(ID : in USER_ID) do
               USER := ID;
            end;
         or
            when REQ(QUICK)'COUNT = 0 =>
               accept REQ(SLOW)(ID : in USER_ID) do
                  USER := ID;
               end;
               -- E'COUNT gives the number of
               -- entry calls queued on the entry E
         end select;
         accept REL(USER);
      end loop;
   end RESOURCE_WITH_PRIOR;
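As a usage note (the identity value 1 is only assumed for the illustration), a quick user calls the family entry by giving first the family index and then the parameters, while the release call names the family member directly:

   RESOURCE_WITH_PRIOR.REQ(QUICK)(1);   -- family index QUICK, parameter ID = 1
   RESOURCE_WITH_PRIOR.REL(1);          -- the family member indexed by the user's identity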
Select Statement

For the realization of non-deterministic features, Ada introduced the select statement. To illustrate it, let us see an algorithm for the well-known problem of the dining philosophers described by Hoare, given at the end of the paper. The program contains a simple form of the selective wait statement with a delay alternative. For the execution of the statement the guards (the conditions after when) are first evaluated. If there is no condition, or the condition is true, the alternative is called open; otherwise it is closed. First the open accept alternatives are considered. Selection of such an alternative takes place immediately if a rendezvous with it is possible. After selection, the accept statement and the possible sequence of statements are executed. If no rendezvous is possible before the specified delay has elapsed, the (open) delay alternative will be selected.

There are some other very useful means in Ada for parallel processing; a short sketch of some of them follows the list below.

Conditional entry call - issues an entry call which is cancelled if the rendezvous cannot start immediately.
Timed entry call - issues an entry call which is cancelled if the rendezvous cannot start within the given delay.

Delay - suspends the execution of the task for at least the duration of the specified value.
Abort - causes one or more tasks to be aborted.

Priority - for each task a priority value can be declared; and so on.
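The following sketch shows the shape of these constructs (it reuses the BUFFER object of (1); the task name T, the CONSUMER task, the 5.0 second limit and the priority value 5 are only assumed for the example):

   -- conditional entry call: the call is cancelled
   -- if the rendezvous cannot start immediately
   select
      TASK_OBJ.PUT(CHAR);
   else
      null;                     -- do something else instead
   end select;

   -- timed entry call: the call is cancelled if the
   -- rendezvous cannot start within 5 seconds
   select
      TASK_OBJ.GET(CHAR);
   or
      delay 5.0;
   end select;

   abort T;                     -- aborts the task T; several names may be listed

   task CONSUMER is
      pragma PRIORITY(5);       -- static priority declared in the task specification
   end CONSUMER;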
TASKING IN THE HUNGARIAN ADA COMPILER

The Hungarian Ada compiler, like traditional compilers, consists of the lexical and syntactic analysers, the semantic analyser, and the abstract and real code generators. The abstract code generator produces an intermediate target code (A-code) for a hypothetical A-machine. Task handling was realized by non-interruptable A-instructions connected to the synchronization points.

The implementation of tasking is one of the most challenging and complicated jobs when developing a compiler. In spite of this fact, the first version of the Ada compiler (which was a subset compiler) already contained all the features of tasking.

At the beginning of the implementation we had to decide whether we should use the tasking services of an operating system, which would increase the safety and efficiency of the generated code, or whether we should organize the real-time system ourselves. We chose the latter, because we wanted our compiler to generate code for several different computers, but the operating systems of these machines support tasking in different ways and do not entirely cover the possibilities of Ada. We developed our real-time system for the single-processor architecture of the IBM/3031. The complete Ada program runs in an OS task, and the Ada tasks share the time of this task. The concept of A-code gives the possibility to organize the real-time system on different computers in another way.

The essential questions for an implementation of the tasking facilities are as follows: storage allocation, implementation of scheduling, organization of queues, and the implementation of the select statement.

Storage Allocation

The storage necessary for an Ada unit includes the activation record (area for control information, static and temporary objects), the dynamic stack (for objects with dynamic size), and the heap (for objects created by allocators). For sequential programs the area for the activation records can be allocated as elements of a memory stack. But in the case of parallel processes several stack-like storage areas have to live together at the same time. This is why in the first version of the compiler a heap with a first-fit allocation strategy was used as the basic storage management technique. With the help of a garbage collector this technique is very efficient from the point of view of memory usage, but it takes a long time.
In the second version of the compiler we are going to use a cactus stack. In the cactus stack a heap element is allocated only for every activation of a task. This element is used as a stack to store the activation record of the task and the activation records of all subprograms in the calling chain of the task. A disadvantage of this solution is that a system-defined storage size will belong to every task, giving the possibility of an overflow.

Scheduling

To every task (and to the main program) there belongs a task descriptor. It contains all the information necessary for running tasks concurrently and for ensuring their communication with each other. Generally several tasks are ready to run and compete for the processor. These tasks are waiting in the ready queue. This queue is divided into as many subqueues as the number of priority levels in the implementation. When a process must be scheduled, the first task of the first nonempty subqueue with the highest priority is selected (a small sketch of this selection is given after the queue descriptions below). In our implementation the scheduler must be invoked after each A-instruction which puts a task into the queue or removes a task from the queue. As I/O interrupts, expiring delay times and the time slice do not break the run of a program, it is advisable to check often whether such a condition has occurred. If it has, then the scheduler has to be invoked to satisfy the request as well.

Organization of Queues

Our run-time system must handle several chains of tasks. One of them is the ready queue, divided into subqueues as described above. The tasks created and allocated in a unit are linked into an activation chain. The tasks depending on a unit are linked into a dependence chain. The unit cannot be left (cannot terminate in the case of tasks) until all tasks hanging on this chain are terminated or able to terminate. The delay queue is the list of tasks waiting for a time delay. All tasks that are waiting on a call of the same entry are placed in the queue of this entry and are handled in the order of arrival.
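Purely as an illustration of the selection rule described under Scheduling (the run-time system itself is not written in Ada; the names below, such as READY_QUEUE and FIRST_READY, the subqueue bound of 100 and the assumed seven priority levels are all invented for this sketch):

   MAX_PRIORITY : constant := 7;                     -- assumed number of priority levels
   type PRIORITY_LEVEL is range 0 .. MAX_PRIORITY;   -- higher value = higher priority
   type TASK_ID is new INTEGER;                      -- stands for a task descriptor

   type TASK_LIST is array (1 .. 100) of TASK_ID;    -- simplified bounded subqueue
   type SUBQUEUE is
      record
         LENGTH : INTEGER := 0;
         ITEMS  : TASK_LIST;
      end record;

   READY_QUEUE : array (PRIORITY_LEVEL) of SUBQUEUE;

   -- Select the first task of the nonempty subqueue with the highest priority.
   function FIRST_READY return TASK_ID is
   begin
      for P in reverse PRIORITY_LEVEL loop
         if READY_QUEUE(P).LENGTH > 0 then
            return READY_QUEUE(P).ITEMS(1);
         end if;
      end loop;
      raise PROGRAM_ERROR;                           -- no task is ready to run
   end FIRST_READY;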
The States of Tasks

The tasks in our system may exist in one of the following main states:

DORMANT - after creation but before activation.

UNDER ACTIVATION - during activation.

ACTIVE - after activation until the beginning of termination.

COMPLETE - from the beginning of its termination until it can be terminated.

TERMINATE - from the time it can be terminated until the deallocation of the task descriptor.
The tasks of "UNDER ACTIVATION" and "ACTIVE" state may be -suspended during their run. A task can be suspended in several states, for example in: SELECT
ENTRY CALL
ACCEPT RETURN
when the task reaches an accept statement or a selective wait statement where no alternative can be chosen. when the task reaches a call for an entry and the rendezvous can not start. when the task is on an entry call and the rendezvous starts.
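As an illustration only (the descriptor layout of the Hungarian compiler is not given in the paper; all type and field names below are invented), the bookkeeping described above could be modelled as:

   type TASK_STATE is (DORMANT, UNDER_ACTIVATION, ACTIVE, COMPLETE, TERMINATED);
   type SUSPENSION_STATE is (NOT_SUSPENDED, IN_SELECT, IN_ENTRY_CALL, IN_ACCEPT_RETURN);

   type TASK_DESCRIPTOR;
   type TASK_DESCRIPTOR_PTR is access TASK_DESCRIPTOR;

   type TASK_DESCRIPTOR is
      record
         STATE          : TASK_STATE       := DORMANT;
         SUSPENSION     : SUSPENSION_STATE := NOT_SUSPENDED;
         PRIORITY       : INTEGER          := 0;
         NEXT_READY     : TASK_DESCRIPTOR_PTR;   -- link in a ready subqueue
         NEXT_DEPENDENT : TASK_DESCRIPTOR_PTR;   -- link in the dependence chain
         NEXT_DELAYED   : TASK_DESCRIPTOR_PTR;   -- link in the delay queue
      end record;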
The state of a task is registered in its task descriptor. Here we describe the effects of some A-instructions concerning the state of tasks.

INITIALIZE_TASK is executed after the creation of the task object, that is, after allocating memory for the task descriptor on the dynamic stack of the parent unit. It fills in some fields of the descriptor, for example it links the task descriptor to the appropriate activation and dependence chains. It sets the status of the task to the "DORMANT" state.

ACTIVATE_TASK starts the activation of "DORMANT" tasks and changes their state to "UNDER ACTIVATION". It links these tasks to the ready subqueue corresponding to the priority of the activating task and suspends the activating task by removing it from the ready queue.

ENTER_TASK is the first instruction of a task. It allocates space for the activation record and fills in its administrative part.

EXECUTE_TASK finishes the elaboration of the declarative part. It fills in the address of the exception handler and sets the task into the ACTIVE state. It links the task into the ready subqueue corresponding to the actual priority, if this is needed. If the task was the last one hanging on the activation chain, then it links the activating task to the ready queue.

COMPLETE_TASK is the last but one instruction of the task. It causes an exception in the units which are waiting for a rendezvous with the task. If the dependence chain of the task is not empty, then it sets the task into the COMPLETE state and removes it from the ready queue. If the chain is empty, then it sets the task into the TERMINATE state.

TERMINATE_TASK is the last instruction of the task. It removes the task from the ready queue and from the dependence chain
of the parent unit. If the task was the last one hanging on the chain, then it sets the parent unit into the TERMINATE state and links it into the ready queue. It deallocates the dynamic objects, the heap elements and the activation record of the task.

Implementation of the Select Statement

The information necessary to run entry call and accept statements is described in the entry descriptor. The entry descriptors are placed at the end of the task descriptor. The state of an entry is open if the task is ready to accept a rendezvous for this entry and no call to it has been issued; otherwise it is closed. The entry descriptor contains - besides many other things - the state of the entry, the head of the queue of the tasks that are waiting on a call to the entry, and, in the case of open entries, the address of an accept statement.

As an illustration let us see some effects of the following two A-instructions.

CALL_ENTRY realizes the simple entry call. It causes an exception at the point of the call if the called task is in the COMPLETE or TERMINATE state. If the called entry is closed, then it links the task to the entry queue, removes the task from the ready queue and suspends it in the ENTRY CALL state. If the called entry is open, then it removes the task from the ready queue and suspends it in the ACCEPT RETURN state, closes all open entries of the called task, links the called task to the appropriate ready queue and gives the control to that accept statement whose address is in the entry descriptor.

SELECTIVE_WAIT realizes the accept statement and the simple selective wait statement which has only accept alternatives. First it examines the alternatives. If there is no open alternative, then an exception is raised at the point of the selective statement. If there is no called open alternative, then the task is removed from the ready queue and suspended in the SELECT state. The state of the entries with open alternatives is set to open and the addresses of the accept statements are saved in the entry descriptors. If there are called open alternatives, then one of them is selected by chance. The partner in the rendezvous will be the first task in the selected entry queue. The state of this calling task is set to the ACCEPT RETURN state, while the control is given to the accept statement.
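Again purely as an invented sketch of the bookkeeping just described (none of the names below come from the paper), an entry descriptor could be modelled as:

   type ENTRY_STATE is (OPEN, CLOSED);

   type WAITING_TASK;                               -- a queued caller
   type WAITING_TASK_PTR is access WAITING_TASK;
   type WAITING_TASK is
      record
         NEXT : WAITING_TASK_PTR;                   -- next caller in FIFO order
      end record;

   type ENTRY_DESCRIPTOR is
      record
         STATE          : ENTRY_STATE := CLOSED;
         QUEUE_HEAD     : WAITING_TASK_PTR;         -- tasks waiting on a call to this entry
         ACCEPT_ADDRESS : INTEGER := 0;             -- A-code address of the open accept statement
      end record;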
THE ADA PROGRAM FOR DINING PHILOSOPHERS

This program was written by J. Vladik from Czechoslovakia. The execution of the program starts with the identification of the philosophers. During the run of the program every philosopher eats 5 times, requesting two forks and releasing them 10 seconds later. After all of them have finished eating, no call will arrive at the entries of the FORK_HANDLER task, and this is why the delay alternative will be selected and the exit statement will be executed.

procedure DINING is

   task FORK_HANDLER is
      entry REQUEST(0 .. 4);
      entry RELEASE(0 .. 4);
   end;

   task type PHILOSOPHER is
      entry MY_IDENT(I : in INTEGER);
   end;

   PHILOSOPHERS : array (0 .. 4) of PHILOSOPHER;

   task body FORK_HANDLER is
      FORK : array (0 .. 4) of INTEGER range 0 .. 2 := (others => 2);

      procedure INCREASE(I : in INTEGER) is
      begin
         FORK((I+1) mod 5) := FORK((I+1) mod 5) + 1;
         FORK((I+4) mod 5) := FORK((I+4) mod 5) + 1;
      end;

      procedure DECREASE(I : in INTEGER) is
      begin
         FORK((I+1) mod 5) := FORK((I+1) mod 5) - 1;
         FORK((I+4) mod 5) := FORK((I+4) mod 5) - 1;
      end;

   begin
      loop
         select
            when FORK(0) = 2 => accept REQUEST(0) do DECREASE(0); end;
         or
            when FORK(1) = 2 => accept REQUEST(1) do DECREASE(1); end;
         or
            when FORK(2) = 2 => accept REQUEST(2) do DECREASE(2); end;
         or
            when FORK(3) = 2 => accept REQUEST(3) do DECREASE(3); end;
         or
            when FORK(4) = 2 => accept REQUEST(4) do DECREASE(4); end;
         or
            accept RELEASE(0) do INCREASE(0); end;
         or
            accept RELEASE(1) do INCREASE(1); end;
         or
            accept RELEASE(2) do INCREASE(2); end;
         or
            accept RELEASE(3) do INCREASE(3); end;
         or
            accept RELEASE(4) do INCREASE(4); end;
         or
            delay 10.0;
            exit;
         end select;
      end loop;
   end FORK_HANDLER;

   task body PHILOSOPHER is
      ACT_IDENT : INTEGER;
   begin
      accept MY_IDENT(I : in INTEGER) do
         ACT_IDENT := I;
      end;
      for J in 1 .. 5 loop
         FORK_HANDLER.REQUEST(ACT_IDENT);
         delay 10.0;
         FORK_HANDLER.RELEASE(ACT_IDENT);
      end loop;
   end PHILOSOPHER;
begin
   for J in 0 .. 4 loop
      PHILOSOPHERS(J).MY_IDENT(J);
   end loop;
end DINING;
CONCLUSION

The Ada language offers powerful tools for real-time programming. It was planned from the very beginning to satisfy the requirements of parallel programming, and this is why these facilities are in accordance with the other concepts of Ada. The efficient implementation of the tasking facilities - activation, synchronization and termination of tasks, the realization of the select statements, and so on - is one of the most important questions when developing a compiler.

REFERENCES

Ada Reference Manual (1983). U.S. DoD.
Bőszörményi, L. (1981). Developing multi-task systems on high-level languages. MTA SZTAKI Working Paper.
Gupta, R., and Soffa, M.L. (1985). The efficiency of storage management schemes for Ada programs. In J.G.P. Barnes and G.A. Fisher (Eds.), Ada in Use. Cambridge University Press, pp. 164-172.
Ichbiah, J., et al. (1979). Rationale for the design of the Ada programming language. SIGPLAN Notices, 14(6), Part B.
Lovengreen, H.H., and Bjorner, D. (1980). On a formal model of the tasking concept in Ada. SIGPLAN Notices, 15, 213-273.
Welsh, J., and Lister, A. (1981). A comparative study of task communication in Ada. Software - Practice and Experience, 11, 257-290.