Algorithms for file replication in a distributed system


J. SYSTEMS SOFTWARE 1991: 14:173-181


Anna Hać
AT&T Bell Laboratories, Naperville, Illinois

Xiaowei Jin Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland

Jo-Han Soo Advansoft Research Corporation, Santa Clara, California

This paper describes an implementation and performance evaluation of file replication in a locally distributed system. Different mechanisms are used to update the copies of a replicated file. The algorithms use both sequential and concurrent update methods. When updating the files sequentially, the algorithm is executed on the host on which the file has been written. Concurrent update is executed by using one machine or by using all the remote machines on which the file is replicated. These algorithms are implemented in a locally distributed system, and their performance is compared for different loads and various read and write accesses to the files.

1. INTRODUCTION

Address correspondence to Anna Hać, AT&T Bell Laboratories, Room 1F-364, 1200 E. Warrenville Rd., Naperville, IL 60566.

© Elsevier Science Publishing Co., Inc., 655 Avenue of the Americas, New York, NY 10010

The availability of resources in a distributed environment is an important factor to be considered when designing a distributed system. Resource sharing includes physical devices, data bases, files, compilers, and other processing elements. Resource sharing in a distributed environment may be achieved through replication. Replication increases availability and system reliability; it may decrease performance, however, due to the necessity of updating remote copies. Moreover, the cost of having resources replicated may be too high. There are a number of approaches and solutions to file replication in the distributed environment. The problem of file allocation was introduced in Chu [2], which discusses how multiple files are placed on a multiprocessor system. An algorithm that determines file allocation in a distributed computer communication network is presented in Laning and Leonard [12]. This algorithm minimizes the sum of file storage costs and message transmission costs in order to decide on file placement. An algorithm for file replication and migration in a distributed system is proposed in Hać [8]. The decision where to replicate a file depends on the system load and the number of read and write accesses to the files. The optimization of the number of copies in a distributed database is discussed in Coffman et al. [3]. The update of remote copies of a file that has been written may cause consistency problems [1, 4, 5, 6, 14]. There are a number of methods to update the copies [10], and to synchronize the update [13]. Performance analysis and evaluation of a distributed file system with a synchronization mechanism is presented in Hać [7]. Performance evaluation and implementation of deadlock prevention algorithms in a distributed file system is discussed in Hać et al. [9]. In this paper we introduce three algorithms to update the copies of a replicated file. The algorithms use both sequential and concurrent update methods. When updating the files sequentially, the algorithm is executed on


the host on which the file has been written. Concurrent update is executed by using one machine or by using all the remote machines on which the file is replicated. These algorithms are implemented in a locally distributed system. The performance of the algorithms is compared for different loads and various read and write accesses to the files.

2. THE DISTRIBUTED SYSTEM AND WORKLOAD

The system in which the algorithms were implemented is a loosely coupled distributed system consisting of a number of hosts connected by a local area network. The configuration of this system is shown in Figure 1. Five AT&T 3B2 computers are connected in a ring by the 3BNET (an Ethernet-compatible, 10 Mb/s) local area network. A host consists of a CPU, a disk, and a number of terminals. The model of a host is shown in Figure 2. The system is homogeneous since the machines have the same software and hardware architecture. An experimental version of the AT&T distributed UNIX system [11] is available for our study. The distributed file system allows read and write accesses to the files placed on the remote machines, and execution of files and commands on the remote hosts. The distributed system also allows remote process execution and data transfer over the network. The local processor can invoke a remote processor to execute the local process. The remote processor can automatically read the data from the terminal or from the files on the local processor. When the process is completed, the remote processor transfers the data back to the terminal or to the files on the local processor.

Figure 2. A model of a host.

The system workload consists of different processes. The processes can be executed locally or remotely. The files have various sizes and can be accessed locally or remotely. The description of the processes and files is given in the section on the experiments and results. The experimental results presented in the paper depend on the system configuration. These results depend on the network topology as well as on the operating system executed in the network. This paper is a case study of the file replication algorithms in a ring network running the AT&T distributed UNIX system.

3. ALGORITHMS FOR FILE REPLICATION

File replication algorithms are executed on the hosts on which there exists a copy of a replicated file. The algorithms for file replication considered in this study use a number of procedures to access and update a replicated file. The read and write accesses are identical for all the algorithms. The file can be locked for reading if it is not being written. The file can be locked for writing if it is not being read or written. Locking of files is implemented as a critical section by using a semaphore provided by UNIX System V. The description of the procedures is as follows.

ReadLock (file name)
(1) if (file name is currently under writing) then
(2)     return control to the calling process with the error code set
(3) else
(4)     return control to the calling process with the successful code set

WriteLock (file name)
(1) if (file name is currently under reading or writing) then
(2)     return control to the calling process with the error code set
(3) else
(4)     lock file name for writing
(5)     return control to the calling process with the successful code set

Figure 1. A configuration of a distributed system.

These procedures are used in the file replication algorithms. In these algorithms, the synchronization problem is solved by using the UNIX system approach: the most recently written copy of the file is the updated copy. When more than one user wants to update the same file at the same time, the most recently written copy becomes the updated copy. Two different copies of the same file cannot exist in the system because this is not allowed by the file replication algorithm. The file locking mechanism is used when updating the file so that the file can be locked before it is copied to a remote host. There is no fairness or priority procedure implemented in the file replication algorithms; therefore a race condition can occur when more than one user wants to update the same file. Since updating a file is a mutually exclusive operation, the user processes have to race to get into the critical section to update the file. It is assumed that the processes eventually release the files that they locked. Because the main objective of the paper is to compare the performance of various replication techniques, a fairness or priority algorithm would cause additional overhead. This overhead would have to be considered in the performance comparison and might detract from the main goal of the evaluation.
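The ReadLock/WriteLock rules above amount to a non-blocking readers-writer try-lock. A minimal in-memory sketch of that behavior follows; the class and method names are illustrative, not code from the paper:

```python
# Sketch of the locking rules: reading fails while a write is in progress,
# writing fails while any read or write is in progress. Non-blocking, as in
# the paper's procedures, which return an error code rather than wait.
class FileLock:
    def __init__(self):
        self.readers = 0
        self.writing = False

    def read_lock(self):
        # ReadLock: refused if the file is currently under writing
        if self.writing:
            return False          # error code set
        self.readers += 1
        return True               # successful code set

    def write_lock(self):
        # WriteLock: refused if the file is under reading or writing
        if self.readers > 0 or self.writing:
            return False
        self.writing = True
        return True

    def unlock_read(self):
        self.readers -= 1

    def unlock_write(self):
        self.writing = False

lock = FileLock()
assert lock.read_lock()       # a reader succeeds on an idle file
assert not lock.write_lock()  # a writer is refused while a reader holds it
lock.unlock_read()
assert lock.write_lock()      # the writer succeeds once the file is idle
```

In the actual implementation this state lives in a UNIX System V semaphore rather than process memory, but the admission rules are the same.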

3.1 File Replication by Using a Single-Host Sequential Update Algorithm

The Sequential Update File Replication Algorithm (called SU-Algorithm) updates the target files sequentially, i.e., the updating routine is executed sequentially on one machine for as many times as the number of hosts. In this algorithm, every file on the remote host must be locked before copying can take place. The description of the update procedure in the algorithm is as follows.

SU-Algorithm
UpdateFile (file name)
(1) for every remote host Hi do
(2)     lock the file
(3) for every remote host Hi do
(4)     copy file name into host Hi
(5) return control to the calling process with the successful code set

4. FILE REPLICATION BY USING A SINGLE-HOST CONCURRENT UPDATE ALGORITHM

The Concurrent Update File Replication Algorithm (called CU-Algorithm) uses only one machine to update the target files. The host machine creates as many child processes as the number of machines; thus each child process updates the target file on one machine. Since these child processes are executed concurrently, this algorithm is expected to be more efficient than the sequential update file replication method. The description of the update procedure in the algorithm is as follows.

CU-Algorithm
UpdateFile (file name)
(1) for every remote host Hi do
(2) begin
(3)     fork a child process
(4)     if (current process is a child process) then
(5)         lock the file and copy file name into host Hi
(6)         terminate the current child process
(7) end
(8) return control to the calling process with the successful code set

4.1 File Replication by Using a Multiple-Host Update Algorithm

The Multiple-Host Update File Replication Algorithm (called MHU-Algorithm) uses a monitor on every machine. The name of the file to be updated, along with its location (the machine on which the file resides), is placed into the mailbox of every other machine. The mailboxes in the network have to be locked before processes can be forked to update the file. The monitor that runs on each machine then gets the information from the mailbox and copies the target file over. In this way the updating jobs are evenly distributed among the hosts and thus no machine becomes heavily loaded. The description of the update and monitor procedures in the algorithm is as follows.

MHU-Algorithm
Update-File (file name)
(1) for every remote host Hi do
(2)     lock the mailbox on host Hi
(3) for every remote host Hi do
(4) begin
(5)     fork a child process and let this process pass file name and host name (the location of the target file) into the mailbox on host Hi
(6)     terminate the child process
(7) end
(8) return control to the calling process with the successful code set

Monitor
(1) while (true) do
(2)     if (mailbox is not empty) then
(3)         for every file fi in mailbox do
(4)         begin
(5)             fork a child process
(6)             if (current process is the child process) then
(7)                 let this child process copy file fi from the host where file fi resides
(8)                 terminate this child process
(9)             else {the current process is the main process}
(10)                be idle for the variable amount of time which is necessary for the child process to get the address of file fi
(11)            remove file fi from the mailbox
(12)        end
(13)    else
(14)        be idle for a certain amount of time
(15) endwhile
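The fork-based fan-out used by the CU-Algorithm can be sketched on a POSIX system, with remote hosts simulated as local directories. This is a sketch under stated assumptions: the per-file locking step is omitted, and the function and directory names are illustrative, not code from the paper:

```python
# Sketch of the CU-Algorithm's update procedure: the writing host forks one
# child per remote host, and each child copies the file concurrently.
import os
import shutil
import tempfile

def update_file_concurrent(path, remote_dirs):
    pids = []
    for host_dir in remote_dirs:          # step (1): for every remote host Hi
        pid = os.fork()                   # step (3): fork a child process
        if pid == 0:                      # child process branch, step (4)
            shutil.copy(path, host_dir)   # step (5): copy the file into Hi
            os._exit(0)                   # step (6): terminate the child
        pids.append(pid)
    for pid in pids:                      # reap children so the update is done
        os.waitpid(pid, 0)
    return True                           # step (8): successful code set

# Usage: one source file replicated to three simulated "hosts".
base = tempfile.mkdtemp()
src = os.path.join(base, "f.txt")
with open(src, "w") as f:
    f.write("data")
hosts = [tempfile.mkdtemp() for _ in range(3)]
update_file_concurrent(src, hosts)
```

The sketch also shows why a heavily loaded host suffers under this scheme: all P - 1 children compete for the same local CPU, which is the effect measured in Section 6.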

Note that the mailbox is the critical section and all operations on the mailbox are atomic. In the monitor, the child process gets the address of the file to be replicated, which allows the parent process to remove the file from the mailbox. The elapsed time of this process can vary. The file replication algorithms were implemented in the locally distributed system. In this system the UNIX function fork was used to create child processes from the parent process when updating the copies of the replicated file concurrently. The monitor in the Multiple-Host Update Algorithm was implemented by using a file as the mailbox for the processes that deposit the message about the file to be updated. This mailbox is implemented as a critical section. To ensure that no more than one process writes to the mailbox at the same time, a semaphore is used to prevent more than one user from entering the critical section. The semaphore uses the UNIX function creat, which allows only one user to create the file with the given name and thus provides the mutual exclusion.

5. ALGORITHM IN THE SYSTEM WITHOUT FILE REPLICATION

The No File Replication Algorithm (called NFR-Algorithm) uses remote reads and writes. This algorithm is different from the file replication algorithms. The idea of the algorithm is similar to that of a file server. Instead of keeping a copy of a file on every machine, the No File Replication Algorithm keeps only one copy of the file in the distributed system. If two processes attempt to access the same file, one of them is refused access to the file. If two processes access different files, they can run concurrently with no conflict. In the system without file replication, the user has to access the file remotely. The advantage of this algorithm is that it does not require the file updating procedures. The description of the algorithm is as follows:

NFR-Algorithm
ReadLock (file name, host)
(1) if (file name on host is currently under writing) then
(2)     return control to the calling process with the error code set
(3) else
(4)     lock file name on host for reading
(5)     return control to the calling process with the successful code set

WriteLock (file name, host)
(1) if (file name on host is currently under reading or writing) then
(2)     return control to the calling process with the error code set
(3) else
(4)     lock file name on host for writing
(5)     return control to the calling process with the successful code set

This algorithm was implemented in a locally distributed system. The comparison of the NFR-Algorithm and the file replication algorithms is given in the next sections.

6. PERFORMANCE COMPARISON OF FILE REPLICATION ALGORITHMS

The performance of file replication algorithms depends on the system hardware and software, the load on every host and in the network, and on the implementation of the algorithms. Because every write access invokes updating of the remote copies of the file that has been written, read and write accesses should be carefully distinguished in the system. The performance of the system depends on the type of file accesses involved (read or write, local or remote). In this section, general results that hold under certain assumptions are derived. The experimental results that confirm the introduced general results and the assumptions are presented in the next section.

If the files are read only, then the system performance is improved by using file replication algorithms. The performance of a user job on host Hi is improved in terms of the elapsed time of the user job by using any one of the three file replication algorithms in comparison with no file replication if there are only remote file readings involved in the user job and no file update is processed on host Hi. File replication algorithms are executed on the hosts on which there exists a copy of a replicated file. By using any one of the three file replication algorithms, all remote file reading becomes local file reading. Since only remote file reading is involved in the user processes, no UpdateFile procedure is invoked. Thus the performance of the user job is improved because local file reading is more efficient than remote file reading and no overhead caused by file update exists on host Hi. The overhead to check whether the replicated files are currently written on the local host by using a system semaphore is negligible in comparison to the elapsed time of remote file reading.

If the files are written, then the system performance is improved by using the MHU-Algorithm. The performance of a user job on host Hi is improved in terms of the elapsed time of the user job by using the MHU-Algorithm in comparison with no file replication if there is some remote file writing involved in the user job and no file update is processed by the monitor on host Hi. File replication algorithms are executed on the hosts on which there exists a copy of a replicated file. Because of the replication of the files, all remote file writing will become local file writing. After writing the files the user process will invoke the UpdateFile procedure of the MHU-Algorithm to pass the name of the file and the address of the local files to the mailboxes on all the remote machines. It is assumed that this broadcast causes lower overhead than the overhead of remote file writing. Since no file update is processed by the local monitor, there is no additional overhead on the local host due to file replication (note that the local monitor will be idle so that the overhead is negligible). Thus the performance of this user job is improved. However, the monitors on the remaining processors in the system will find a file in the mailbox on host Hi and copy the file from host Hi to the local host, which will cause a higher load on each host (except for host Hi) in comparison with the system without file replication, and will slow down the execution of the processes on these hosts.

If various files are written, then the system performance is improved by using the MHU-Algorithm rather than by using the CU-Algorithm. The more different files are written remotely by the user processes on host Hi, the more the performance of a


user job on host Hi is improved in terms of the elapsed time of the user job by using the MHU-Algorithm than by using the CU-Algorithm if no file update is currently processed by the monitor on host Hi. Let P denote the number of hosts in the system and Hi denote host i for i = 1, ..., P. Without loss of generality, assume that there are K (K >= 1) files written by some user processes on host H1, and that the file writing can be completed at the same time or at different moments of time. By the implementation of the CU-Algorithm, the user processes will invoke the UpdateFile procedure to update the copies of the files that have been written. This will generate (P - 1) x K child processes. Clearly, this will significantly overload host H1 and slow down the execution of the user processes on host H1. However, the overhead due to file update is distributed evenly over all remote hosts if the MHU-Algorithm is used. This will free host H1 from being heavily loaded and improve the performance of user processes on host H1 if the overhead caused by the execution of the monitor on host H1 is negligible, since there is no file update involved in the process on the host by the assumption. Note that the overhead caused by the execution of the monitor to update the copies of the files on the remaining hosts will increase the load on each of these hosts in comparison with no file replication and may slow down the execution of user processes on these hosts. Also, if the overhead caused by the Monitor to broadcast the information to the remote hosts and the overhead caused by the file update on the remote hosts are negligible, then the user processes may perform better than in the system without file replication. This also depends on how the user process writes the files on all the remote hosts.

On the heavily loaded host, the SU-Algorithm allows for better performance than the CU-Algorithm if the file to be updated is not requested by any process. The SU-Algorithm allows for better performance of host Hk than the CU-Algorithm on host Hk if for any replicated file fi on host Hk there is no other user process writing or reading file fi before the update of the copies of file fi is completed. Given any file fi on host Hk, by the CU-Algorithm (P - 1) child processes (where P is the total number of hosts in the distributed system) will be generated at almost the same time. The SU-Algorithm generates the same processes sequentially. Thus the CU-Algorithm will seriously decrease the performance of host Hk. This degradation is higher than that caused by using the SU-Algorithm because host Hk can become heavily loaded. This will affect the execution of other user processes which are executed concurrently with the processes updating the copies of file fi. Because file fi is not required by the other user processes before the update of the copies of the file is completed, there is no delay due to waiting for completion of the update of the copies of file fi.

On the lightly loaded host, the CU-Algorithm allows for better performance than the MHU-Algorithm. The CU-Algorithm on host Hk allows for better performance than the MHU-Algorithm in terms of the average elapsed time of all user processes in the system if host Hk is the least loaded, with the number of processes lower than Av - P, where Av is the average number of processes on all remote hosts and P is the number of hosts in the system. Given any file fi on host Hk, by the CU-Algorithm P - 1 child processes (where P is the total number of hosts in the distributed system) will be generated. Because there are fewer than Av - P processes on host Hk, the CU-Algorithm will make the system load more balanced in terms of the number of processes on each host than the MHU-Algorithm does. The elapsed time of a process on a host is an increasing function of the number of processes on the host. Therefore, the CU-Algorithm will allow for better performance than the MHU-Algorithm in terms of the average elapsed time of all user processes in the system.

7. THE DESCRIPTION OF THE EXPERIMENTS AND THE RESULTS
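Before turning to the experiments, the selection rule derived in the preceding comparison (concurrent update on a host with fewer than Av - P processes, multiple-host update otherwise) can be sketched as a small decision function. The function name and interface are illustrative, a reading of the paper's condition rather than code from it:

```python
# Sketch of the load-based choice between the CU- and MHU-Algorithms.
def choose_update_algorithm(local_processes, remote_process_counts):
    P = len(remote_process_counts) + 1                      # total hosts
    Av = sum(remote_process_counts) / len(remote_process_counts)
    if local_processes < Av - P:
        return "CU"    # lightly loaded host can absorb P - 1 forked children
    return "MHU"       # otherwise spread the update over the remote monitors

# A host with 2 local processes, remote hosts averaging 20 processes, in a
# 5-host system: 2 < 20 - 5, so the local host forks the updates itself.
assert choose_update_algorithm(2, [20, 18, 22, 20]) == "CU"
assert choose_update_algorithm(30, [20, 18, 22, 20]) == "MHU"
```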

Figure 3. User elapsed time of a process using either the Single-Host Concurrent Update or the Single-Host Sequential Update File Replication Algorithm.

The algorithms for file replication were implemented in a locally distributed system. A number of experiments were executed in our system to evaluate the performance of the system under various loads. The system load consists of the processes that read and write files. The files that are accessed have various sizes. The elapsed time of a process is measured for different file sizes, and shown in Figures 3-9. Every process can read and/or write a different number of files. In our figures, process 1, which writes 2 files and reads 3 files, is represented by the triple P1, 2W, 3R. Process 2, which reads a file only, is represented by P2, R. The results of our experiments showed that system performance is improved by using any file replication algorithm in comparison with the system without file replication. The elapsed time improves from three to five times for one process in the system reading and writing various combinations of files, and for different file sizes. For two processes executed on the same machine and accessing different files, the performance also improves from three to five times in the system with file replication in comparison with the performance of the system without replication. In the system without replication, all read and write accesses are the remote ones. Because the processes access different files, there is no waiting in the system with replication for the updating of the copies of the file that has been written. In case of

Figure 4. User elapsed time of a process using either the Single-Host Concurrent Update or the Single-Host Sequential Update File Replication Algorithm, where two processes are executed concurrently on the same machine and access different files.

Figure 5. User elapsed time of a process using either the Single-Host Concurrent Update or the Multiple-Host Update File Replication Algorithm.

the processes accessing the same files, the performance depends on the system load, the number of files that are held, the implementation of the algorithms, etc. The file replication algorithms are compared in Figures 3-9 on the example of the elapsed time of a process reading and writing different files. The performance comparison of the Single-Host Concurrent Update and the Single-Host Sequential Update Algorithm presented in Figures 3-4 shows that the elapsed time of a process improves when the files are updated concurrently rather than sequentially. This result also depends on the load on the host and on the sequence of accesses to the files. The performance comparison of the Single-Host Concurrent Update and the Multiple-Host Update Algorithm is given in Figures 5-9. The results presented in Figures 5-6 show that when one process is executed in the system, both algorithms perform similarly (Figure 5). For two identical processes executed on the same machine and accessing different files, the performance of the system with the Single-Host Concurrent Update Algorithm is slightly better than the performance of the system with the Multiple-Host Update Algorithm (Figure 6). The reason is that the host is lightly loaded and the execution of the replication algorithm does not appreciably decrease performance on the local host. On a heavily loaded host, the Multiple-Host Update Algorithm allows for better performance than the Single-Host Concurrent Update Algorithm. This is shown in Figures 7-9 on the example of the elapsed time of two processes, one of which accesses different files while the other performs intensive computation. The elapsed time and the overhead of the process reading and writing files are smaller in the system with the Multiple-Host Update Algorithm (Figures 7-8). Also, the elapsed time of the CPU-intensive process is smaller in the system with the Multiple-Host Update Algorithm (Figure 9). The reason is that the CPU-intensive process loads the local host, and the additional load caused by the Single-Host Concurrent Update Algorithm decreases system performance. When using the Multiple-Host Update Algorithm, the additional load due to the execution of the algorithm is balanced over the remaining remote hosts.

Figure 6. User elapsed time of a process using either the Single-Host Concurrent Update or the Multiple-Host Update File Replication Algorithm, where two processes are executed concurrently on the same machine and access different files.

Figure 7. User elapsed time of a process using either the Single-Host Concurrent Update or the Multiple-Host Update File Replication Algorithm, and executed concurrently with a CPU-intensive process on the same machine.

Figure 8. Overhead of a process using either the Single-Host Concurrent Update or the Multiple-Host Update File Replication Algorithm, and executed concurrently with a CPU-intensive process on the same machine.

Figure 9. User elapsed time of a CPU-intensive process executed concurrently on the same machine with the process using either the Single-Host Concurrent Update or the Multiple-Host Update File Replication Algorithm.

8. CONCLUSION

In this paper the file replication algorithms were described and implemented in a locally distributed system. The experiments executed in the system show that the performance can be improved by using file replication algorithms. The proposed algorithms were compared for different system loads, numbers of files read or written, and different file sizes. It was shown that different algorithms should be chosen in different working conditions to maximize the system performance. For instance, file replication algorithms should be used when processes mostly read files. The Multiple-Host Update Algorithm outperforms the Single-Host Concurrent Update Algorithm if various files are written by the user processes. The Single-Host Sequential Update Algorithm outperforms the Single-Host Concurrent Update Algorithm on the heavily loaded host if the file to be updated is not requested by any process. Finally, the Single-Host Concurrent Update Algorithm allows for better performance than the Multiple-Host Update Algorithm on the lightly loaded hosts.

REFERENCES
1. P. Brereton, Detection and Resolution of Inconsistencies Among Distributed Replicates of Files, ACM Operating Systems Review, 16, 10-13 (1982).
2. W. W. Chu, Multiple File Allocation in a Multiple Computer System, IEEE Transactions on Computers, C-18, 885-889 (1969).
3. E. G. Coffman, Jr., E. Gelenbe, and B. Plateau, Optimization of the Number of Copies in a Distributed Data Base, IEEE Transactions on Software Engineering, SE-7, 78-84 (1981).
4. C. A. Ellis, Consistency and Correctness of Duplicate Database Systems, Proc. Sixth ACM Symposium on Operating Systems Principles, Nov. 1977, 67-84.
5. H. Garcia-Molina, T. Allen, B. Blaustein, R. M. Chilenskas, and D. R. Ries, Data-Patch: Integrating Inconsistent Copies of a Database after a Partition, Proc. Third IEEE Symposium on Reliability in Distributed Software and Database Systems, October 17-19, 1983, 38-44.
6. J. Gray, Notes on Data Base Operating Systems, IBM Research Report RJ2188(30001), Feb. 23, 1978, IBM Research Laboratory, San Jose, California.
7. A. Hać, A Decomposition Solution to a Queueing Network Model of a Distributed File System with Dynamic Locking, IEEE Transactions on Software Engineering, SE-12, 521-530 (1986).
8. A. Hać, A Distributed Algorithm for Performance Improvement Through File Replication, File Migration and Process Migration, IEEE Transactions on Software Engineering, 15, 1459-1470 (1989).
9. A. Hać, X. Jin, and J.-H. Soo, A Performance Study of Deadlock Prevention Algorithms in a Distributed File System, Software Practice and Experience, 19, 461-489 (1989).
10. E. Holler, Multiple Copy Update, in Distributed Systems-Architecture and Implementation, Lecture Notes in Computer Science 105, Springer-Verlag, 1981, 284-303.
11. R. M. Klein, Portable Distributed UNIX User Guide, AT&T Information Systems, Oct. 1984.
12. L. J. Laning and M. Leonard, File Allocation in a Distributed Computer Communication Network, IEEE Transactions on Computers, C-32, 232-244 (1983).
13. G. LeLann, Synchronization, in Distributed Systems-Architecture and Implementation, Lecture Notes in Computer Science 105, Springer-Verlag, 1981, 266-282.
14. D. S. Parker, G. J. Popek, G. Rudisin, A. Stoughton, B. Walker, E. Walton, J. Chow, D. Edwards, S. Kiser, and C. Kline, Detection of Mutual Inconsistency in Distributed Systems, IEEE Transactions on Software Engineering, SE-9, 240-247 (1983).