Comput. Lang. Vol. 8, No. 2, pp. 51-60, 1983
0096-0551/83 $3.00+0.00 Copyright © 1983 Pergamon Press Ltd
Printed in Great Britain. All rights reserved
A DISTRIBUTED SYNCHRONIZATION MECHANISM FOR INTERACTING PROCESSES

W. C. YEN and K. S. FU

School of Electrical Engineering, Purdue University, West Lafayette, IN 47907, U.S.A.
(Received 23 April 1982; revision received 24 February 1983)

Abstract--Reliable synchronization is intended to ensure a graceful degradation of the system in the event of a failure. Solutions to synchronization problems of this kind under constant space conditions are presented. Lamport's bakery algorithm and generalized critical region have been modified and extended for application to this problem.

Concurrent processes    Synchronization    Critical section    Reliability
1. INTRODUCTION
The non-deterministic nature of concurrent execution of sequential processes with multiple processors poses problems for both hardware and software design. The unpredictable pattern of demand for hardware resources causes contention [1] which degrades system throughput significantly [2]. The indeterminacy of the interleaving among sequential processes can be so serious that programs may not execute consistently. Synchronization attempts to impose a structure that circumvents this undesirable indeterminacy. Reliable synchronization is intended to ensure that a failure of any individual component in a system (e.g. a multiprocessor with shared memory or a network of computers with disjoint memories) will result in only a graceful degradation of the system. Ideally, a solution to a problem of this kind should not rely on any central software controller or single hardware component. Moreover, owing to advances in VLSI technology, it is very likely that future systems will be built from a large number of single-chip computers, each possessing a communicating processor and a very limited amount of memory. Consequently, it is highly desirable to maintain a constant space complexity in all solutions explored.

The purpose of this paper is to illustrate that ideal solutions with constant space complexity for a class of synchronization problems can be obtained. This is done by considering a set of synchronizing primitives which provide solutions to various problems. Consider N cyclic processes operating concurrently and asynchronously. Each process has an interacting period followed by a non-interacting, or disjoint, period. In the disjoint period, a process operates on its own private data only. The value of N is fixed and known to all processes. The processes act independently in the sense that each process fully controls its own state transitions.
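The cyclic process model just described can be sketched as a simple loop. The function names below (doorway, critical_section, exit_region, disjoint_work) are illustrative placeholders for this sketch, not part of the paper's notation.

```python
# A minimal sketch of one cyclic process in the model described above.
# Each cycle is an interacting period (doorway + critical section +
# exit) followed by a disjoint period operating on private data only.

def run_process(i, n_cycles, doorway, critical_section, exit_region, disjoint_work):
    for _ in range(n_cycles):
        doorway(i)           # establish an order among interacting processes
        critical_section(i)  # interacting period: may inspect shared objects
        exit_region(i)       # reset this process's global variables
        disjoint_work(i)     # disjoint period: private data only

# Trivial single-process demonstration of the cycle structure.
trace = []
run_process(
    0, 2,
    doorway=lambda i: trace.append("doorway"),
    critical_section=lambda i: trace.append("cs"),
    exit_region=lambda i: trace.append("exit"),
    disjoint_work=lambda i: trace.append("disjoint"),
)
```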
For each process, a portion of its memory is designated as the global memory and the rest is absolutely private. A process can only write into its own memory but may read other processes' global memories at any time. A read or write operation is assumed to be indivisible, i.e. simultaneous operations on the same entity take place one after another but in an unknown order. This, in fact, is the atom provided by the hardware as the unit of interleaving among sequential processes. Inspection of another process's global memory is the only way that a process can communicate with others. Thus, a process may not even be aware of inspection by other processes; on the other hand, a process must name the processes it inspects. The speeds or speed ratios of the various processes at any instant are totally unknown. However, a process is assumed to be always in progress, though possibly very slow, as long as it is not dead. A process is dead when the content of its global memory is set to zero. A failed process may be restarted at a certain predefined location of its code. The system will continue to operate

This work was supported by NSF Grant ECS 80-16580.
correctly in spite of a failure of any individual component, even under repeated failing and restarting. If a process dies in such a manner that it simply stops execution during the interacting period, however, the system will still consider the process alive. The proposed system is fully distributed and does not rely on any central software controller or hardware component. Moreover, communication among processes is kept under minimal constraint. Since we are interested in providing a general solution of the synchronization problem, its detailed implementation, which depends on a specific hardware organization, is not addressed here.

2. CRITICAL REGION
A critical region [3, 4] is a concurrent language construct defined by the notation REGION v DO E which associates a statement E with a shared variable v. The code E is usually called the critical section [5], which is the unit of ordering faced by programmers. As a result, a programmer is provided explicit control over the degree of interleaving. Associating critical sections with a shared variable enables a compiler to check whether or not shared variables are used strictly inside the critical sections. It is desirable that an implementation of the critical region statement meet the following conditions:

(C1) One and only one process is in the critical section at any time.
(C2) The system will not be deadlocked.
(C3) The competition for entering critical sections should be fair (first-come, first-served; FCFS). That is, there exists a code in the system called the doorway. When a process starts to execute a critical region statement, it first executes the doorway to get a ticket and then waits for its turn to enter the critical section. If process A passes the doorway before process B, process A is guaranteed to be served ahead of process B. Furthermore, the execution of the doorway is independent of the actions of other processes.

This problem was first considered by Lamport [6] but, unfortunately, in Lamport's solution a process that repeatedly fails and restarts will deadlock the entire system. Rivest and Pratt [7] and Peterson and Fischer [8] improved Lamport's solution by allowing condition (C3) not to be completely satisfied. Katseff [9] incorporated Rivest and Pratt's, Peterson and Fischer's, and Eisenberg and McGuire's [10] ideas into one algorithm by which all three conditions are met. Nevertheless, that algorithm has time complexity O(n²) and space complexity O(n). An algorithm with time complexity O(n) and space complexity O(1) is given as follows:
Algorithm 1. (* REGION v DO E *)
GLOBAL qindex[i] := 0, done[i] := 0
PRIVATE j, k
BEGIN (* for process i *)
L0.  FOR j IN [1..N] DO
       IF done[j] = 1 THEN GO TO L3;
L1.  qindex[i] := 1;
L2.  FOR k IN [1..N] DO
       IF qindex[k] > qindex[i]-1 THEN qindex[i] := qindex[k] + 1;
     GO TO L5;
L3.  qindex[i] := qindex[j] + 1;
L4.  FOR k IN [1..j-1, j+1..N] DO
       IF qindex[k] > qindex[i]-1 THEN qindex[i] := qindex[k] + 1;
L5.  done[i] := 1;
L6.  FOR j IN [1..N] DO
L6.1   IF qindex[j] <> 0 AND (qindex[j], j) < (qindex[i], i) THEN GO TO L6.1;
L7.; (* critical section E *)
L8.  qindex[i] := 0; done[i] := 0;
END

It should be noted that the order of execution in the FOR statement is not important as long as each element in the set has been gone through once. An execution of a critical region statement is interpreted as an interaction of a cyclic process. The code from L0 to L4 is the doorway. When it is in the doorway and qindex[i] is set greater than zero, process i is said to hold a partial queue-index (Pqindex_i). After passing through the doorway, process i sets done[i] to obtain a full queue-index (Fqindex_i). The basic idea is described in the following. All interacting processes join an implicit queue. A process first joins the queue with a partial index whose value is equal to the lowest queue index that can ever be held. Then it keeps building up the index, or "walking" toward the end of the implicit queue, until a full index is obtained. The index invariants can thus be established: if at the time process i joins the queue there is a process j holding a full index, then

    Fqindex_j < Pqindex_i ≤ Fqindex_i    (R1)

otherwise

    1 ≤ Pqindex_i ≤ Fqindex_i    (R2)

It is obvious that

    Pqindex_i^n < Pqindex_i^{n+1},  0 < n < N    (R3)

where Pqindex_i^n denotes the nth value of Pqindex_i. The partial queue-index always increases monotonically. Note that the properties (R1)-(R3) are invariant in the sense that they are valid as long as processes i and j are in their current cycles. The privilege of entering the critical section is granted to a process as soon as it knows that all other interacting processes hold, or are going to hold, a higher index than its own. Ties are broken by ordering process names alphanumerically. In other words, the ordered pair (qindex[i], i) forms a unique virtual ordering among interacting processes (all processes in the implicit queue). Without any confusion, we will also refer to a queue index as the corresponding ordered pair.

In Algorithm 1, the value of qindex could be very large if the disjoint period is much shorter than the interacting period. Such an unbounded value problem has been pointed out in the previous literature [6, 11, 12]. A read or a write operation is assumed to be indivisible in our system in order to guarantee that the value of a new qindex will be at most one greater than the largest value of the qindices. Hence, the range of the qindex among interacting processes cannot be larger than N. This condition enables us to reconfigure the implicit queue into a circular queue of size Z as shown in Fig. 1. As we know, in order to preserve the order of processes in such a circular queue without requiring any additional information, the outer distance between any two qindices has to be at least
Fig. 1. The implicit circular queue (size Z > 2N - 1).
Fig. 2. State transition diagram of P_i.
one greater than the inner distance so that the value of a turned-over qindex can be interpreted correctly. To do so in the algorithm, qindex[a] should be stored modulo Z each time, and the result of a logical comparison between qindices should be complemented if |qindex[a] - qindex[b]| > N. This is a common technique used in communication protocols and is also adopted in Ref. [12]. Note that the reconfigured circular queue still satisfies the invariant relations (R1)-(R3). In Lamport's system, only a correct write operation is assumed and a read operation may provide any arbitrary value, so that a read and a write to a single location can occur simultaneously. However, under such a condition there is no way to bound the value of the bakery number (similar to qindex). The indivisibility of a read or a write operation is, in fact, not a strong assumption since in most cases the value of qindex can always be accommodated in a single memory word.

2.1 Correctness proof

We now introduce the notations for the proof of correctness of Algorithm 1. A state of process
P_i is represented by (α, β) where α ∈ {all possible values of the global variable qindex} and β ∈ {D_i, Ieq_i, Icq_i, Ihq_i}. D_i denotes the disjoint period; Ieq_i, Icq_i, and Ihq_i denote the periods of establishing its order in the queue, checking its order in the queue, and being at the head of the queue, respectively. The state-transition diagram of P_i is specified and illustrated in Fig. 2. (0, D_i) is the initial state of P_i and a cycle is completed when P_i returns to this state. The only state transition which depends on the states of other processes is Icq_i → Ihq_i. According to Fig. 2, we know that this transition occurs only at the time that statement L6 is completed. A system state is described by (S, DS, IN) where S denotes the set of all N processes, and DS and IN are the sets of processes in the disjoint and interacting periods respectively. The set IN consists of three subsets QR_i, {i}, and QF_i, where QR_i is the set of all interacting processes with orders behind the order of P_i and, on the contrary, QF_i is the set of all interacting processes with orders preceding the order of P_i. The sizes of these sets, except S, are not constant, but

    DS ∪ IN = S    (S1)

    QR_i ∪ {i} ∪ QF_i = IN    (S2)

    DS ∩ QR_i ∩ {i} ∩ QF_i = ∅    (S3)
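Algorithm 1 can be exercised directly with a threaded transcription. The sketch below relies on the fact that CPython's per-element list reads and writes are indivisible, which approximates the paper's atomic read/write assumption; the harness names (region, body, peak) are illustrative and the busy-waiting is a literal rendering of L6.1, not a practical implementation.

```python
# A hedged sketch of Algorithm 1 (REGION v DO E) using Python threads.
# Statement labels L0-L8 from the paper are noted in comments.
import threading

N, CYCLES = 3, 10
qindex = [0] * N          # GLOBAL qindex[i] := 0 (single writer per slot)
done = [0] * N            # GLOBAL done[i]  := 0
count, inside, peak = 0, 0, 0
meter = threading.Lock()  # verification only; not part of the algorithm

def region(i, body):
    # Doorway (L0-L4): obtain a queue index one past the current maximum.
    for j in range(N):
        if done[j] == 1:                      # L0: someone holds a full index
            qindex[i] = qindex[j] + 1         # L3
            for k in range(N):                # L4
                if k != j and qindex[k] > qindex[i] - 1:
                    qindex[i] = qindex[k] + 1
            break
    else:
        qindex[i] = 1                         # L1
        for k in range(N):                    # L2
            if qindex[k] > qindex[i] - 1:
                qindex[i] = qindex[k] + 1
    done[i] = 1                               # L5: full queue-index obtained
    for j in range(N):                        # L6
        while qindex[j] != 0 and (qindex[j], j) < (qindex[i], i):
            pass                              # L6.1: wait for preceding processes
    body(i)                                   # L7: critical section E
    qindex[i] = 0; done[i] = 0                # L8: leave the interacting period

def body(i):
    global count, inside, peak
    with meter: inside += 1; peak = max(peak, inside)
    count += 1   # unguarded increment: safe only if (C1) holds
    with meter: inside -= 1

threads = [threading.Thread(target=lambda i=i: [region(i, body) for _ in range(CYCLES)])
           for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

If mutual exclusion (C1) holds, peak stays at 1 and no increment of count is lost.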
Corollary 2.1. If P_j is in QF_i then P_i is in QR_j, for all P_i, P_j ∈ IN, and vice versa.

Proof. From (S2) and because P_i, P_j ∈ IN, P_i must be in either QR_j or QF_j. P_j in QF_i implies that (qindex[j], j) < (qindex[i], i); therefore P_i is in QR_j. The proof of the converse is similar.

Corollary 2.2. The transition Icq_i → Ihq_i occurs if and only if QF_i is empty.
Fig. 3. Process flows of P_i.
Proof. We know that Icq_i → Ihq_i occurs if L6 of Algorithm 1 is completed, i.e. for all P_j ∈ S, either qindex[j] = 0 or (qindex[j], j) ≥ (qindex[i], i). It is clear that qindex[j] = 0 for P_j ∈ DS, (qindex[j], j) > (qindex[i], i) for P_j ∈ QR_i, and (qindex[i], i) = (qindex[i], i). Hence, by (S1) and (S2), the only possible set that process i has to wait for is QF_i. Furthermore, neither of the two conditions can ever be met by any process in QF_i, so L6 of the algorithm can never be completed as long as there is a process in QF_i. The proof is thus completed.

Corollary 2.3. No process enters QF_i through the entire period of Icq_i + Ihq_i; processes can only leave QF_i.

Proof. At the beginning of the Icq_i period, we assume that the state of the system for P_i is represented by DS⁰, QR_i⁰, and QF_i⁰. There are three possibilities for P_j ∈ QF_i⁰ to leave QF_i: (1) P_j in the interacting period dies and enters DS, (2) P_j in its Ihq_j period completes its current cycle and enters DS, and (3) P_j in its Ieq_j period gets a new index value which makes the order of P_j higher than that of P_i, whereupon P_j enters QR_i. There is only one possibility for P_j ∈ DS⁰ to leave DS, and it must enter QR_i. That is, P_j in its D_j period completes the period and starts to establish its order; according to (R1), it has to enter QR_i. There is also only one possibility for P_j ∈ QR_i⁰ to leave QR_i, and it must enter DS. That is, P_j in its Ieq_j + Icq_j period dies and thus enters DS. It is not possible for P_j ∈ QR_i⁰ to enter QF_i for the following reason. Since P_j ∈ QR_i, we know that P_i does not wait for P_j at L6.1 of Algorithm 1. This implies that (qindex[j], j) > (qindex[i], i). It is clear that, according to (R3), P_j will never enter QF_i since the above inequality always holds until P_j or P_i completes its current cycle.
A directed graph which illustrates all the possible process flows among DS, QR_i, and QF_i is shown in Fig. 3.

Lemma 2.1. Algorithm 1 satisfies condition (C1).

Proof. This is equivalent to proving that if P_i is in period Ihq_i there is no P_j ∈ {S - i} in period Ihq_j. Firstly, we show that Icq_i → Ihq_i and Icq_j → Ihq_j cannot occur simultaneously. From Corollary 2.2 we know that QF_i is empty when the transition Icq_i → Ihq_i occurs. According to (S2) and Corollary 2.1, P_j is in QR_i and P_i is in QF_j for all P_j ∈ {IN - i}. Therefore, by Corollary 2.2, no other transition can occur simultaneously. Secondly, from Corollary 2.3, it is clear that QF_i stays empty and thus the above argument stays valid until P_i completes its interacting period.

Lemma 2.2. Algorithm 1 satisfies condition (C2).

Proof. P_i is blocked if QF_i is not empty. According to Corollary 2.1 and Corollary 2.3, QF_i will be empty eventually. Therefore, the system will never be deadlocked.

Lemma 2.3. Algorithm 1 satisfies condition (C3).

Proof. Since the time required to execute the doorway is equivalent to the period Ieq, from (R1) we know that P_j is in QR_i if Ieq_i is completed before Ieq_j is started. According to Corollaries 2.1, 2.2, and 2.3, Ihq_i must precede Ihq_j. Furthermore, P_i does not wait for any particular states of other processes during period Ieq_i.

3. GENERALIZED CRITICAL REGION
Lamport [13] has generalized the conditional critical region statement by extending the values of the shared variable v to some finite but arbitrary set, and by associating two additional boolean functions, "conflict" and "should-precede", with each process. Later, this has been further extended in Ref. [14].
Two interacting processes may not conflict, owing to requests for different resources or to the fact that they request the same resource but the intended operations need not be mutually exclusive. Thus, two interacting processes are said to conflict only if conflict(v_i, v_j) = true. Here, we restrict v to have the form (r, op) where r is the name of a resource and op is the type of operation intended on r. Note that all shared variables which represent shared objects in the system are called resources. Furthermore, it is often desirable in a synchronization problem to schedule various processes more flexibly than by simple FCFS. The should-precede function was introduced to specify when a process should precede and ignore the ordinary FCFS order. Here, we redefine an integer-valued priority function of the form pri(op) to denote a conceptually simpler and different function. The priority function specifies the priority queue that an interacting process should join. In other words, there is no longer only one FCFS queue in the system but several, depending on the number of priority levels used. An interacting process in a higher priority queue always blocks the processes in lower priority queues if they conflict with each other. The extra flexibility provided by the conflict and priority functions enables us to solve a number of problems more easily. A typical example is the second readers'/writers' problem [15], in which only a writer needs to access the file exclusively and, once a writer is ready to write, it performs the write as soon as possible [16]. The operations of a reader and a writer can be expressed in terms of the generalized critical region as follows:

reader: REGION (rfname, read) DO E.
writer: REGION (wfname, write) DO E.

where conflict(v_i, v_j) = true iff r_i and r_j are equal and op_i or op_j is "write". In addition, pri(op) = 1 if op is "read" and pri(op) = 2 if op is "write".
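The conflict and priority functions for this readers'/writers' instance can be written down directly. The tuple encoding of v = (r, op) below follows the paper; the function bodies themselves are an illustrative sketch.

```python
# Sketch of the conflict and priority functions for the second
# readers'/writers' problem, with v encoded as (resource, operation).

def conflict(v_i, v_j):
    """Two region statements conflict iff they name the same resource
    and at least one of the intended operations is a write."""
    (r_i, op_i), (r_j, op_j) = v_i, v_j
    return r_i == r_j and "write" in (op_i, op_j)

def pri(op):
    """Writers join a higher-priority queue than readers."""
    return 2 if op == "write" else 1

# Two readers never conflict; a reader and a writer on the same file do.
print(conflict(("f", "read"), ("f", "read")))    # False
print(conflict(("f", "read"), ("f", "write")))   # True
print(conflict(("g", "write"), ("f", "write")))  # False: different resources
print(pri("write") > pri("read"))                # True
```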
The dining philosophers problem [17] can also be scheduled safely (deadlock-free, starvation-free) as follows:

philosopher i: REGION (fork_i + fork_{(i+1) mod 5}, use) DO eat.

In addition, the explicit process scheduling problem is automatically solved by the conflict function without resorting to event queues [18]. An implementation of the generalized critical region statement should still satisfy the three conditions (C1)-(C3), though slightly modified, with an additional condition, as follows:
(C1') One and only one conflicting process may be in the critical section at any time.
(C2') Assuming that a process requests and releases all resources at once, the system will not be deadlocked.
(C3') FCFS should be maintained for processes on the same level of priority.
(C4') A higher priority process should enter its critical section as soon as possible [15].
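For the dining philosophers encoding above, conflict detection reduces to testing whether two region statements name a common fork. The set-based encoding of the resource name below is an illustrative sketch, not the paper's notation.

```python
# Sketch: conflict for the dining philosophers, where a resource name is
# the set of forks used and two regions conflict iff the sets intersect.

N_PHIL = 5

def forks(i):
    """Resource named by philosopher i: its left and right forks."""
    return {i, (i + 1) % N_PHIL}

def conflict(v_i, v_j):
    (r_i, _), (r_j, _) = v_i, v_j
    return bool(r_i & r_j)   # a shared fork makes "use" mutually exclusive

def con(i):
    """Con(i): the conflict set of philosopher i (its two neighbours)."""
    return [j for j in range(N_PHIL) if j != i
            and conflict((forks(i), "use"), (forks(j), "use"))]

print(con(0))   # the neighbours of philosopher 0
```

Only neighbouring philosophers appear in each conflict set, so a philosopher never needs to inspect the rest of the table.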
An algorithm for process i is given as follows:
Algorithm 2. (* generalized REGION v DO E *)
GLOBAL qindex[i] := 0, done[i] := 0, mode[i] := v, p[i] := pri(op)
PRIVATE j, k
BEGIN (* for process i *)
L0.  FOR j IN Con(i) DO
       IF p[j] = p[i] AND done[j] = 1 THEN GO TO L3;
L1.  qindex[i] := 1;
L2.  FOR k IN Con(i) DO
       IF p[k] = p[i] AND qindex[k] > qindex[i]-1 THEN
         qindex[i] := qindex[k] + 1;
     GO TO L5;
L3.  qindex[i] := qindex[j] + 1;
L4.  FOR k IN Con(i) DO
       IF p[k] = p[i] AND qindex[k] > qindex[i]-1 THEN qindex[i] := qindex[k] + 1;
L5.  done[i] := 1;
L6.  FOR j IN Con(i) DO
L6.1   IF conflict(mode[i], mode[j]) THEN
         IF p[j] > p[i] OR #(j) < #(i) THEN GO TO L6.1;
L7.  p[i] := Pmax;
L8.  FOR j IN Con(i) DO
       IF p[j] = Pmax AND conflict(mode[i], mode[j]) THEN
         BEGIN p[i] := pri(op); GO TO L6 END;
L9.; (* critical section E *)
L10. qindex[i] := 0; done[i] := 0; mode[i] := 0; p[i] := 0;
END.

where #(j) < #(i) denotes {p[j] = p[i] AND qindex[j] <> 0 AND (qindex[j], j) < (qindex[i], i)}. Pmax is an arbitrary constant which is larger than the maximum possible priority used and is known to all processes. Since two processes can never conflict with each other if they do not share any resource, a process only needs to inspect those processes which could possibly compete with it. Thus, for each process i, let Con(i) denote the conflict set [13], which consists of all processes which could conflict with it. Algorithm 2 actually reconfigures the implicit queue into multiple priority queues. The statement L6 alone can no longer effectively block a conflicting P_j when P_i is in the critical section and pri(op_j) > pri(op_i). Therefore, L8 is added to serve as a second lock. As a result, any process that passes L6 without knowing that a lower priority one is already in the critical section will be branched back to L6 and wait there. Moreover, two conflicting processes that pass L6 simultaneously without knowing each other will both be branched back to L6, and then the higher priority one will get past and enter the critical section first. If a process keeps failing and restarting then all lower priority conflicting processes will be blocked indefinitely.
This is the same situation as when a stream of higher priority jobs keeps the low priority jobs waiting indefinitely. The following relations always maintain the total ordering of the system:

(1) pri(op_i) > pri(op_j) and pri(op_j) > pri(op_k) imply pri(op_i) > pri(op_k),
(2) #(i) > #(j) and #(j) > #(k) imply #(i) > #(k).

In other words, the complicated checking for the potential deadlock of the should-precede function considered in Ref. [14] can be omitted. It should be noted that the introduction of the conflict set in the algorithm does not affect the above relations, particularly relation (2).
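The blocking test at L6.1 is a direct combination of the priority comparison and the FCFS pair ordering, and the modulo-Z storage of qindex from Section 2 only changes how two indices are compared. The transcription below is a sketch; in particular, circ_less encodes an interpretation of the paper's complemented-comparison rule for indices stored modulo Z with Z > 2N - 1.

```python
# Sketch of the L6.1 precedence test of Algorithm 2, plus the circular
# qindex comparison from Section 2. All helper names are illustrative.

def precedes(p, qindex, j, i):
    """#(j) < #(i): same priority, j holds an earlier (qindex, id) pair."""
    return (p[j] == p[i] and qindex[j] != 0
            and (qindex[j], j) < (qindex[i], i))

def blocks(p, qindex, mode, conflict, j, i):
    """True iff process j blocks process i at L6.1."""
    return conflict(mode[i], mode[j]) and (p[j] > p[i] or precedes(p, qindex, j, i))

def circ_less(a, b, n, z):
    """Compare two nonzero qindices stored modulo z (z > 2n - 1):
    the plain comparison is complemented when the indices are more
    than n apart, so a wrapped-around index still reads as larger."""
    if abs(a - b) > n:
        return not (a < b)
    return a < b

# A priority-2 writer (process 0) blocks a priority-1 reader (process 1).
same = lambda a, b: a == b
print(blocks([2, 1], [1, 1], ["f", "f"], same, 0, 1))  # True
print(blocks([2, 1], [1, 1], ["f", "f"], same, 1, 0))  # False

# With n = 3, z = 7: true indices 6 and 8 are stored as 6 and 1,
# yet 6 is still recognized as the earlier index.
print(circ_less(6, 1, 3, 7))   # True
print(circ_less(2, 4, 3, 7))   # True (no wrap-around)
```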
4. GENERALIZED CONDITIONAL CRITICAL REGION

The conditional critical region [19] is known to be a convenient synchronizing primitive in a situation where a process wishes to wait until the components of shared data satisfy a certain condition. It has the notation: REGION v WHEN B DO E.
A typical example is the disk track allocation problem. For an "acquire" operation, whether the number of available tracks is sufficient or not can be simply checked in terms of a condition. Another example is the bounded buffer problem. Senders have to check if the buffer is full and
receivers have to check if the buffer is empty. However, the required implementation of a conditional critical region statement for these two examples may be quite different. For the disk track allocation problem, the "acquire" operation should be given a lower priority than the "release" operation. In addition, two operations of the same type may have different priorities owing to the different numbers of tracks involved. This calls for the parameter op to represent not only the type of operation but also the quantity involved in the operation. In this case, the additional "conditional waiting" can be implemented by just inserting a conditional branching statement (CBS) between L8 and L9 of Algorithm 2. A CBS statement does the following:

CBS. IF NOT B THEN BEGIN p[i] := pri(op); GO TO L6 END;

Nevertheless, the bounded buffer problem is different. As long as the buffer is neither empty nor full there usually is no reason to give priority to the senders or to the receivers. In other words, in common situations the senders and the receivers should be served on the same FCFS basis. Now, if a sender enters the critical section and finds its condition false, applying CBS to implement the conditional waiting will deadlock the system, since the sender is still at the head position of the queue. In fact, the semantics of a conditional critical region statement [19] calls for a process to evaluate condition B inside its critical section. If B is true the process continues to execute E; otherwise the process leaves its critical section temporarily and re-enters the critical section after another process has successfully completed a conflicting critical section. Unfortunately, the general algorithm in Ref. [13] fails to show that this key problem can be handled. A straightforward and simple solution is to branch the sender to L10, re-entering the queue, instead of branching back to L6. This scheme will not deadlock the system. However, it is an unfair implementation, especially when the buffer is heavily used. Another alternative, which does achieve fair scheduling, is given as follows:
Algorithm 3. (* generalized REGION v WHEN B DO E *)
GLOBAL qindex[i] := 0, done[i] := 0, mode[i] := v, p[i] := pri(op), hand[i] := 0, ok[i] := 0
PRIVATE j, k, hp
BEGIN (* for process i; statements not shown are as in Algorithm 2 *)
L6.  IF QF(i) not empty THEN
     BEGIN
       IF 0 <> hp := help(i, QF(i)) THEN
       BEGIN
         ok[i] := 1; p[i] := Pmax;
L6.1     IF hand[hp] = 0 THEN BEGIN p[i] := pri(op); GO TO L6 END
         ELSE IF hand[hp] = 1 THEN GO TO L6.1
         ELSE GO TO L9
       END;
       GO TO L6
     END;
L9.  IF NOT B THEN
     BEGIN
       hand[i] := 1;
L9.1   FOR j IN Con(i) DO
         IF conflict(mode[i], mode[j]) AND ok[j] = 1 THEN GO TO L9.2;
       GO TO L9.1;
L9.2   hand[i] := 2;
L9.3   IF conflict(mode[i], mode[j]) AND ok[j] = 1 THEN GO TO L9.3;
       GO TO L9
     END;
L10.; (* critical section E *)
L11. qindex[i] := 0; done[i] := 0; mode[i] := 0; p[i] := 0; hand[i] := 0; ok[i] := 0;
END.

where QF(i) is the set of conflicting processes with orders preceding that of process i, i.e. p[j] > p[i] or #(j) < #(i). Now, process i is allowed to enter the critical section even if QF(i) is not empty, in the case that a conflicting process hp is in its critical section but B_hp is false and process i is the only process that can make B_hp true. The name of such a conflicting process hp will be returned by the function "help", and then an explicit handshake between processes i and hp will be carried out in order to pass on the privilege of entering a critical section safely. The correctness proofs of Algorithms 2 and 3, which can be easily extended from the proof given in Section 2.1, are left to interested readers.

5. CONCLUDING REMARKS
It is known that the availability of a physically distributed system does not automatically imply its reliability. We have presented algorithms to achieve reliable synchronization of interacting processes with very few assumptions about interprocess communication and memory space requirements. A common notion behind all the algorithms presented in this paper is that a process always establishes its own status first before inspecting others to make a decision. This should be taken as a rule of thumb for implementing synchronization among fully distributed processes without a priori knowledge about their speeds and actions.

REFERENCES

1. W. C. Yen and K. S. Fu, Performance analysis on multiprocessor memory organization, Proceedings of the ACM Pacific '80 Conference on Distributed Processing, pp. 142-153 (November 1980).
2. W. C. Yen and K. S. Fu, Analysis of multiprocessor cache organizations with alternative main memory update policies, Proceedings of the 8th Annual International Symposium on Computer Architecture, pp. 89-105 (May 1981).
3. P. Brinch Hansen, Concurrent programming concepts, Comput. Surveys 5, 223-245 (1973).
4. C. A. R. Hoare, Towards a theory of parallel programming, International Seminar on Operating System Techniques, Belfast, Northern Ireland (August-September 1971).
5. E. W. Dijkstra, Cooperating sequential processes, Tech. Univ., Eindhoven, The Netherlands (1965).
6. L. Lamport, A new solution of Dijkstra's concurrent programming problem, Communs Ass. comput. Mach. 17, 453-455 (1974).
7. R. L. Rivest and V. R. Pratt, The mutual exclusion problem for unreliable processes: preliminary report, Proceedings of the 17th Annual Symposium on Foundations of Computer Science, pp. 1-8 (October 1976).
8. G. L. Peterson and M. J. Fischer, Economical solutions for the critical section problem in a distributed system, Proceedings of the 9th Annual ACM Symposium on Theory of Computing, pp. 91-97 (May 1977).
9. H. P. Katseff, A new solution to the critical section problem, Proceedings of the 10th Annual ACM Symposium on Theory of Computing, pp. 86-88 (May 1978).
10. M. A. Eisenberg and M. R. McGuire, Further comments on Dijkstra's concurrent programming control problem, Communs Ass. comput. Mach. 15, 999 (1972).
11. D. P. Reed and R. K. Kanodia, Synchronization with eventcounts and sequencers, Communs Ass. comput. Mach. 22, 115-123 (1979).
12. G. Ricart and A. K. Agrawala, An optimal algorithm for mutual exclusion in computer networks, Communs Ass. comput. Mach. 24, 9-17 (1981).
13. L. Lamport, The synchronization of independent processes, Acta Informatica 7, 15-34 (1976).
14. G. Holober and L. Snyder, Scheduling parallel processes without a common scheduler, Proceedings of the 1979 International Conference on Parallel Processing, pp. 186-195.
15. P. J. Courtois, F. Heymans and D. L. Parnas, Concurrent control with "readers" and "writers", Communs Ass. comput. Mach. 14, 667-668 (1971).
16. P. J. Courtois, F. Heymans and D. L. Parnas, Comments on "A comparison of two synchronizing concepts by P. B. Hansen", Acta Informatica 1, 375-376 (1972).
17. E. W. Dijkstra, Hierarchical ordering of sequential processes, Acta Informatica 1, 115-138 (1971).
18. P. Brinch Hansen, Operating System Principles, Prentice-Hall, Englewood Cliffs, NJ (1973).
19. P. Brinch Hansen, A comparison of two synchronizing concepts, Acta Informatica 1, 190-199 (1972).
About the Author--WEI C. YEN received the Ph.D. degree in Electrical Engineering from Purdue University, West Lafayette, Indiana. Currently he is a member of technical staff at Hewlett-Packard Laboratories, Palo Alto, California, involved in the research and development of computer communication networks. Before he joined HPL, he was with Fairchild Advanced Research and Development Laboratory, Palo Alto, CA, where he worked on the design of high performance microprocessors. His research interests include computer networks, operating systems, and computer architectures. Dr Yen is a member of Phi Kappa Phi and Sigma Xi.

About the Author--KING-SUN FU received his Ph.D. from the University of Illinois in 1959. He is presently Goss Distinguished Professor of Engineering and Professor of Electrical Engineering at Purdue University. He is a Fellow of IEEE, a member of the National Academy of Engineering and Academia Sinica, and a Guggenheim Fellow. He received the Certificate of Appreciation (1977, 1979), Honor Roll (1973), Special Award (1982) and the Outstanding Paper Award (1977) of the IEEE Computer Society, the 1981 ASEE Senior Research Award, the 1982 IEEE Education Medal and the 1982 AFIPS Harry Goode Memorial Award. He is the Vice President for Publications of the IEEE Computer Society, Editor of Information Sciences, and an Associate Editor of Pattern Recognition, Computer Graphics and Image Processing, Journal of Cybernetics, and International Journal of Computer and Information Sciences. He is the first President of the International Association for Pattern Recognition (IAPR). He is the author of the books Sequential Methods in Pattern Recognition and Syntactic Methods in Pattern Recognition, published by Academic Press in 1968 and 1974 respectively, Statistical Pattern Classification Using Contextual Information, published by Wiley in 1980, and Syntactic Pattern Recognition and Applications, published by Prentice-Hall in 1982.