Interfaces in Computing, 3 (1985) 153 - 162
THE USE OF A REMOTE PROCEDURE CALL PROTOCOL FOR HIGH SPEED DATA TRANSFER ON AN ETHERNET

ROBIN TASKER, FRANCES RAKE, PAUL KUMMER and DAVID HINES

Daresbury Laboratory, Science and Engineering Research Council, Warrington WA4 4AD (Gt. Britain)

(Received February 12, 1985)

Summary

In this paper the development of a local area network based on Ethernet running a remote procedure call protocol is described. A working definition of the software protocol is provided together with the implementation details for an LSI 11/23 computer running RT11 and for a VAX 11/750 computer running VMS. The performance characteristics of this network have been measured and data rates of at least 62 kbytes s⁻¹ have been consistently recorded for disk-to-disk data transfer between these two machine types.

1. Introduction

In a previous paper [1] we reported on the performance measurements of an Ethernet local area network. This work was carried out as an evaluation exercise to determine whether Ethernet technology was suitable for our particular requirement.

The synchrotron radiation source (SRS) of the Daresbury Laboratory provides research facilities for visiting scientists and currently supports more than 20 separate experimental areas. All experiments are computer controlled, with each experimental station having either a PDP 11/04 or an LSI 11/23 computer. These systems are fully utilized during experimental runs, controlling experiments and collecting and storing the data to disk. Most stations are unable to carry out any data analysis and it is therefore essential that the data files can be transferred onto larger computers for analysis and also for security. The data are currently stored on the Laboratory mainframe computer, but they are initially concentrated on a VAX 11/750 where a certain amount of analysis is carried out. The current configuration between the experimental stations and the VAX is reaching its limit both in terms of the number of stations that can be supported and the data rate that the network can deliver. Future experimental stations will wish to transfer considerably more data and to do so at much higher data rates.


We have taken the pragmatic view that the most important criterion in the evaluation of a local area network for this environment is a measure of the data throughput that a user of such a network could expect. This was felt to be a reasonable starting point given the known future requirements for high speed data transfer. The data rates cited in the requirements included a peak working rate of 45 kbytes s⁻¹ and a guaranteed rate of 30 kbytes s⁻¹. As a result of our measurements it was proposed that an Ethernet local area network be installed. The subsequent software developments are described in this paper and the measured throughputs that have been achieved using the SRS Ethernet network running our remote procedure call (RPC) software are reported. Figure 1 shows in diagrammatic form the layout of the new RPC network together with some typical activities associated with the attached computers.


Fig. 1. Diagram of the RPC network: experimental station computers (LSI 11/23 computers) are connected to the VAX via the Ethernet cable. Typical activities of the connected computers are shown: data storage, experimental control and data collection at the stations; wide area network connection, data analysis, image processing and disk storage at the VAX.

2. Software protocols

Having elected to use Ethernet technology, we needed to select a software protocol which would provide a reliable means of data transfer and still maintain the high data rates that we had previously measured. Our experimental protocol provided no more than a sequence number check for data loss and the ability to vary the acknowledge window size between processes running in two network machines. Clearly the software we required needed to be more sophisticated, to allow several experimental stations to transfer data files to the VAX concurrently.

We made the early decision to run, as the carrier service, the logical link control, class 1 (LLC1), as defined by the IEEE [2]. This decision will allow future network-level protocols to be added to provide additional services without disruption to what is already in use. While the use of many lightweight protocols has been reported [3 - 7], none seemed to provide the ideal solution to our particular requirement. The concept of RPC seemed to be best suited to our needs, also providing the flexibility for future developments. The remote procedure call schemes we investigated appeared to us to be overcomplicated for our application, given that the primary requirement was for high speed reliable data transfer. We have, however, incorporated many of these concepts in our protocol.
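Since LLC1 is the carrier for everything that follows, a sketch of its framing may help. The header layout below is simply the standard IEEE 802.2 encoding referenced in [2]; the C rendering is ours, not part of the Daresbury software, and the service access point values that were actually used are not stated in this paper.

    /* Minimal sketch of an IEEE 802.2 LLC class 1 header [2].  LLC1
     * provides connectionless service, carrying each RPC packet in an
     * unnumbered information (UI) frame inside the Ethernet frame. */
    #include <stdint.h>

    #define LLC_CONTROL_UI 0x03u        /* U-format control byte for UI */

    struct llc1_header {
        uint8_t dsap;                   /* destination service access point */
        uint8_t ssap;                   /* source service access point */
        uint8_t control;                /* LLC_CONTROL_UI for class 1 */
    };                                  /* RPC data follow this header */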

3. Remote procedure call

A full specification of this protocol is given elsewhere [8], but a brief summary of its working is outlined below. The basis of RPC is that a process running in one network machine may call a specified procedure (or group of procedures) in another network machine. This would normally involve the exchange of two messages across the network. The first would make the request and pass parameters to the requested procedure, while the second would carry the response, which may include returning parameters. The initiator of all RPC activity is called the requester, which communicates its requests to a server. An RPC manager is defined, responsible for all network-level functions: checking for adherence to the RPC protocol, routing incoming RPC traffic to the correct requester or server and monitoring the RPC traffic to detect and report data loss on the network. Figure 2 shows a diagrammatic representation of the RPC protocol components in one network machine.

Fig. 2. Diagrammatic representation of RPC protocol components in one network machine.

Although RPC is inherently a connectionless operation on the network, our initial application (of reliable high speed file transfer) does imply that a connection be established at the application level. We decided to include sequencing in the RPC protocol and to make sequence number checking the responsibility of the RPC manager. A "single-shot" mechanism is also defined within RPC to allow for true connectionless applications, but the use of this mechanism is the subject of future developments.

RPC defines two extensions to the basic mechanism of request-response. The first allows a remote procedure to be invoked but does not require a response; the second requires that, before an RPC exchange, the requester should transmit a multicast message on the Ethernet containing either the name of the required network machine or the generic name of the service required.

The former extension allows for transmission of bulk data at a higher rate than the simple request-response mechanism. The latter extension removes the need for each network machine to know the physical Ethernet address of all other machines on the network, and which generic servers they each support. If, in response to a received multicast, a network machine is able to service the named request, it must reply and in so doing return its physical Ethernet address.

Error recovery is the responsibility of the requester process, but RPC defines three rules to ensure that recovery is always possible from a network error. Firstly, requests are classified as one of two types: either the request invokes some non-repeatable action (e.g. to open a file) or it invokes some repeatable action (e.g. to close a file). Secondly, no-response requests are only available for data transfer, and the last block of the data transfer must request a response. Thirdly, when a non-repeatable request fails, the error recovery must invoke some repeatable request and, when a repeatable request fails, it should be repeated up to some specified limit.

In addition to these rules RPC defines a check status facility that may be invoked by a requester once a network error has been reported. The requester sends to the remote RPC manager a check status request for the particular server that it controls. The manager will return the sequence number of the last packet successfully processed by the server, the number of packets waiting to be processed by the server, the current known status of the server and the value of the next sequence number that the manager is willing to accept for this server. Only when the queue of packets for the server is empty, and the server is not processing, can the requester commence error recovery. The sending of a check status request is the fundamental example of a repeatable request being invoked on error detection.

Because requesters are the controlling processes of an RPC network, the complexity of the entire system is much reduced. This has the benefit that the control of an RPC exchange is located in one network machine, and the problems of timers distributed over the network [9] are avoided. A requester must provide a timeout period for the execution of a request and proceed to error recovery as described above should this timer expire. The timeout period is clocked in the network machine containing the requester; the remote network machine involved in the RPC exchange knows nothing about it.

Generally an RPC exchange follows a set pattern and Fig. 3 gives an example of such an exchange. The requester issues a multicast for a particular service which is received by the manager process running in a network machine. As the manager knows about the requested service, it responds, returning its physical Ethernet address. The requester then issues a create server request to the remote manager specifying the name of the service that is required, in this example to create a filestore server. Having successfully created this server, the manager responds with success and the requester is then able to communicate directly with the newly created server. The manager checks only that the network has not lost data and that the RPC protocol has not been violated. An open file request is next sent to the server, to which a response is required. Subsequently, requests are sent to the server containing data to be written to the file that has just been opened. To increase data throughput, not all these requests demand a response from the server, but only when a response is sent can the requester be certain that no data have been lost. When all the data have been transferred, the file is closed via a close file request to which the server must respond, and, when this has been successfully achieved, the requester sends a delete server request to the remote manager. The manager deletes the process and responds with a successful completion to the requester.

Fig. 3. A typical set of RPC requests and responses.
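The exchange of Fig. 3 maps naturally onto a requester-side program. The sketch below is illustrative only: the subroutine names and types are invented for this example rather than taken from the protocol specification [8], and error recovery is reduced to a single check status call.

    /* Illustrative requester-side file transfer following Fig. 3.  The
     * library interface below is a hypothetical stand-in, not the one
     * defined in the protocol specification [8]. */
    typedef struct { unsigned char mac[6]; } rpc_addr_t;  /* Ethernet address */
    typedef struct { int id; } rpc_server_t;              /* server handle */
    enum { RPC_OK = 0 };

    int rpc_multicast(const char *service, rpc_addr_t *who);
    int rpc_create_server(const rpc_addr_t *who, const char *service,
                          rpc_server_t *srv);
    int rpc_delete_server(const rpc_addr_t *who, rpc_server_t *srv);
    int rpc_open_file(rpc_server_t *srv, const char *name);
    int rpc_close_file(rpc_server_t *srv);
    int rpc_write(rpc_server_t *srv, const char *block, int want_resp);
    int rpc_check_status(const rpc_addr_t *who, rpc_server_t *srv);

    #define BLOCKS_PER_ACK 3   /* demand a response every third write */

    int transfer_file(const char *service, const char *remote_name,
                      const char *blocks[], int nblocks)
    {
        rpc_addr_t vax;
        rpc_server_t srv;

        /* Multicast for the named service; the replying manager returns
         * its physical Ethernet address. */
        if (rpc_multicast(service, &vax) != RPC_OK)
            return -1;

        /* Create a filestore server (a non-repeatable request). */
        if (rpc_create_server(&vax, service, &srv) != RPC_OK)
            return -1;

        /* Open the remote file; a response is demanded. */
        if (rpc_open_file(&srv, remote_name) != RPC_OK)
            goto cleanup;

        /* Write the data.  Most writes carry no response, to raise
         * throughput, but the last block must demand one so that loss
         * is always detectable. */
        for (int i = 0; i < nblocks; i++) {
            int want_resp = ((i + 1) % BLOCKS_PER_ACK == 0)
                         || (i == nblocks - 1);
            if (rpc_write(&srv, blocks[i], want_resp) != RPC_OK) {
                /* First step of error recovery: the repeatable check
                 * status request.  Retransmission is omitted here. */
                rpc_check_status(&vax, &srv);
                goto cleanup;
            }
        }

        rpc_close_file(&srv);          /* a repeatable request */
    cleanup:
        return rpc_delete_server(&vax, &srv);
    }

Note how the error recovery rules appear in the sketch: the final block always demands a response, and the first action on error is the repeatable check status request.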

RPC is unconcerned with the data which have been exchanged and is able only to report network-level errors. However, within an RPC packet a field has been defined for a server to return an error to its requester. The contents and use of this field are of no concern to RPC.
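The wire format is defined in the protocol specification [8]; the structure below is only a plausible reading of the fields that this paper itself mentions (a sequence number checked by the manager, a request type, a response flag and the server's error field), with names and widths invented for illustration.

    #include <stdint.h>

    /* Hypothetical RPC packet header; the authoritative layout is in
     * the protocol specification [8]. */
    struct rpc_header {
        uint16_t sequence;     /* checked by the receiving RPC manager */
        uint8_t  request;      /* e.g. create server, open file, write data */
        uint8_t  flags;        /* e.g. "no response required" for bulk writes */
        uint16_t server_error; /* set by a server; opaque to RPC itself */
    };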

4. Implementation details

The initial development of RPC was designed to provide the means whereby experimental users of the SRS could transfer files from an experimental station to the data concentrator, i.e. the VAX 11/750. The implications of this requirement were that a VAX-VMS filestore server and an RT11 requester had to be developed. In both cases an RPC manager was required but, because of space constraints in the experimental station computers, the manner of the implementations differed significantly. In both cases the code was written in the C language, with only a very few assembler code routines. For the experimental station software an interface in FORTRAN was also provided.

4.1. Experimental station software

The RPC requester software has been implemented as a set of subroutines to perform the tasks involved in an RPC exchange. The software has been structured into layers, with access available to the user at each layer. This was designed to enable the most efficient use to be made of the software in the different environments in which it is to be used.

The highest level of software is the utility. At this level a stand-alone utility or a utility subroutine can be run to perform the whole of the required operation. This has been designed for the novice user who wishes, for example, to transfer a data file to the VAX. This level hides all detail of the RPC software, and use of Ethernet, from the user.

A lower level, called the user level, was required so that RPC subroutines could be included as an integral part of data acquisition software, and so that best use could be made of limited space by overlay. This level also hides all details of RPC and Ethernet from the user, and has been implemented as a set of subroutines which interface to the requester.

The lowest level of access is the requester level, and this allows data acquisition programs to be tailored to the particular hardware or for high performance. At this level a knowledge of RPC is required together with an understanding of Ethernet. However, no knowledge of the interface to the Ethernet device driver is necessary, as the requester level is implemented as a set of subroutines which interface to the driver. In this way, a distinct RPC manager does not exist, and the functions of a manager are distributed between the subroutines which make up the requester software. The task is simplified by the fact that only one requester will ever be active at any one time in an RT11 experimental station computer.
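To make the layering concrete, a caller might see something like the following three groups of entry points; the names and signatures are invented for this sketch, not taken from the Daresbury software.

    /* Hypothetical entry points at the three access levels of the
     * experimental station software (names invented for illustration). */

    /* Utility level: one call does the whole job; RPC and Ethernet are
     * completely hidden.  Suitable for the novice user. */
    int util_send_file(const char *local_file, const char *vax_file);

    /* User level: individual steps, still hiding RPC and Ethernet, so
     * the subroutines can be overlaid within data acquisition software. */
    int usr_connect(const char *service);
    int usr_put_block(const void *buf, unsigned len);
    int usr_disconnect(void);

    /* Requester level: the caller drives RPC directly (and must
     * understand it), but the Ethernet driver interface stays hidden. */
    int req_multicast(const char *service);
    int req_send(const void *packet, unsigned len, int want_response);

A data acquisition program short of memory would overlay the user-level subroutines; a program chasing throughput would drop to the requester level.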


4.2. VAX-VMS software

The VAX RPC software has an RPC manager permanently running as a detached process. This process controls all other parts of the RPC software currently implemented. The link control software (LLC1), which interfaces to the Ethernet driver software, is run as a subprocess of the manager and started by the manager. Servers are created by the manager as subprocesses in response to requests issued by requesters. The manager maintains a list of all server types that it is able to create. This design has two consequences: it is simple to add new server types to the VAX software, and, because each server is a separate process, a requester has the exclusive use of a server. Clearly, when more than one requester is active, the total bandwidth of the Ethernet will be shared, and throughput reduced, but this is a characteristic of any network. The important factor is that, having reached the target machine, further delays are reduced to a minimum. Full details of the implementation of RPC software on the VAX are described elsewhere [10].
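A minimal sketch of the shape of the manager's dispatch loop follows, assuming hypothetical helpers (llc1_receive, spawn_subprocess and so on) in place of the actual VMS mechanisms; the real implementation is described in ref. [10].

    /* Illustrative shape of the VAX RPC manager's dispatch loop.  Every
     * identifier is a hypothetical stand-in; bodies are not shown. */
    struct rpc_packet { int request, service, server; /* ... */ };
    enum { REQ_CREATE_SERVER, REQ_DELETE_SERVER, REQ_CHECK_STATUS };

    void llc1_receive(struct rpc_packet *p);     /* from LLC1 subprocess */
    int  sequence_ok(const struct rpc_packet *p);
    void report_network_error(const struct rpc_packet *p);
    int  server_type_known(int service);
    int  spawn_subprocess(int service);
    int  kill_subprocess(int server);
    void respond(const struct rpc_packet *p, int status);
    void respond_status(const struct rpc_packet *p);
    void route_to_server(const struct rpc_packet *p);

    static void manager_loop(void)
    {
        struct rpc_packet pkt;
        for (;;) {
            llc1_receive(&pkt);
            if (!sequence_ok(&pkt)) {       /* manager owns sequencing */
                report_network_error(&pkt);
                continue;
            }
            switch (pkt.request) {
            case REQ_CREATE_SERVER:         /* spawn a server subprocess */
                respond(&pkt, server_type_known(pkt.service)
                                  ? spawn_subprocess(pkt.service) : -1);
                break;
            case REQ_DELETE_SERVER:
                respond(&pkt, kill_subprocess(pkt.server));
                break;
            case REQ_CHECK_STATUS:          /* last seq. no., queue length, ... */
                respond_status(&pkt);
                break;
            default:                        /* direct requester-server traffic */
                route_to_server(&pkt);
            }
        }
    }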

5. Performance measurements

As discussed above, we have taken the view that data throughput is the most important criterion in the evaluation of the network. We have therefore equated the measurement of performance with the measurement of data throughput. The measurements we have made show two aspects of the working of the RPC network: firstly, the throughput that a user might experience under different network conditions, and, secondly, a measure of the network performance.

5.1. Methods

In all cases there were no other users of the computer systems. The results are based on the measurement of data transfer from an experimental station computer to the data-concentrating computer. The experimental station computers are all LSI 11/23 computers running RT11 version 5.1, using Interlan Q-bus interface boards (NI2010) connected to the Ethernet by an Interlan transceiver. The data-concentrating computer is a VAX 11/750 running VMS version 3.7, using an Interlan Unibus interface board (NI2020) connected to the Ethernet by an Interlan transceiver. All Ethernet equipment used in this network was compatible with the IEEE P802.3 specification [11].

All measurements were based on the time taken to transfer a file of 1000 kbytes from the disk of an experimental station computer to the VAX. Times were recorded on four different experimental stations when there was no other station transmitting, one other station transmitting and two other stations transmitting. The results represent the average of at least five separate timings for each station under all the experimental conditions.

5.2. Results

Table 1 shows the time taken to transfer the test file from the different experimental stations to the VAX for the different experimental conditions, together with the average for all stations. The results show a degree of variability between the experimental stations. Table 2 shows data throughput per station and for the total network for the different experimental conditions. This shows a general decline in data throughput per station as more experimental stations access the network simultaneously. The throughput per station falls from 62.5 kbytes s⁻¹ for an experimental station having sole use of the network to 26.8 kbytes s⁻¹ for an experimental station sharing the network with two other experimental stations. The total network throughput increases from 62.5 kbytes s⁻¹ when one experimental station is active to 80.4 kbytes s⁻¹ when three experimental stations are simultaneously active.

TABLE 1
Times (in seconds) to transfer 1000 kbytes from an experimental station computer (an LSI 11/23) to a VAX 11/750 under different network loadings determined by the number of experimental stations simultaneously transmitting

             Transfer times (s) for the following numbers of
             stations transmitting simultaneously
             1            2            3

Station 1    15.5         24.9         Not tested
Station 2    17.4         26.1         32.0
Station 3    15.2         28.5         40.0
Station 4    15.9         28.9         39.8
Mean time    16.0         27.1         37.3

TABLE 2
Data throughput (in kbytes per second) measured per station and for the total network under different network loadings determined by the number of experimental stations simultaneously transmitting

                           Data throughput (kbytes s⁻¹) for the following
                           numbers of stations transmitting simultaneously
                           1            2            3

Data rate per station      62.5         36.9         26.8
Total network data rate    62.5         73.8         80.4
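The Table 2 figures follow arithmetically from Table 1: the per-station rate is the 1000 kbyte file size divided by the mean transfer time, and the network total is that rate multiplied by the number of active stations. A few lines of C reproduce them:

    #include <stdio.h>

    /* Derive Table 2 from Table 1: per-station rate = 1000 kbytes / mean
     * time; total network rate = per-station rate * active stations. */
    int main(void)
    {
        const double file_kbytes = 1000.0;
        const double mean_time_s[] = { 16.0, 27.1, 37.3 };  /* Table 1 means */

        for (int n = 1; n <= 3; n++) {
            double per_station = file_kbytes / mean_time_s[n - 1];
            printf("%d station(s): %.1f kbytes/s each, %.1f kbytes/s total\n",
                   n, per_station, per_station * n);
        }
        return 0;   /* prints 62.5/62.5, 36.9/73.8, 26.8/80.4 */
    }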

In the cases of two and of three experimental stations transmitting simultaneously it was noted that error recovery procedures were used by the experimental station software. These occurrences were very infrequent, and no more than three errors were ever observed on any transfer. They resulted from the experimental station software timing out while waiting for an acknowledgment to data packets. In all cases, recovery was successful, and the test file was successfully transferred.

5.3. Discussion

When an experimental station has the sole use of the network, the time taken to transfer the test file reflects the configuration of the experimental station. The results show variability from 15.2 to 17.4 s to transfer the test file, and this difference can be explained by the different Winchester disk systems that the experimental station computers possess. There were no other known differences between the systems. The average data rate for an experimental station having sole use of the network, 62.5 kbytes s⁻¹, is slightly higher than the equivalent data rate that we reported earlier [1]. The RPC software is more sophisticated than our original test software and so this result is very pleasing. Clearly we designed the RPC software to minimize overheads which would slow the rate of data transfer, and equally these results confirm that our original test software was somewhat crude. There seemed to be little point in spending much time on improving the test software when our original efforts demonstrated that Ethernet technology was suitable for our requirement.

The decline in the data rates seen as more experimental stations accessed the VAX simultaneously was expected. The fact that the network throughput increased under these conditions suggests that the degradation in throughput at an experimental station resulted from increased loading on the VAX. The network errors detected by the experimental station software resulted from RPC timeouts. It is most unlikely that these errors were the result of collisions on the Ethernet [12, 13], and they can be taken as further indication that the VAX was becoming increasingly the limiting part of the network. Network errors could result from either the VAX being unable to keep up with the received data or the VAX being unable to process data fast enough, and both of these have been observed. The latter implies that, although the data acknowledgments were sent, the timer running in the experimental station software expired before they were received. During the development of RPC we have adjusted this timer interval so that this error occurs very infrequently. This suggests that the observed network errors resulted from packets being ignored by the Interlan interface board when its buffers were full.

The fact that RPC network errors occurred suggests that a degradation of the network performance should have been recorded. Table 2 shows the absolute data rate to be increasing as the number of experimental stations increases, but the rate of increase is declining. The decline results in part from the network errors that have been detected and are discussed above, but the infrequency of their occurrence implies that they are a minor contribution to this decline. The major contributing factor is the decline in processing speed with increased load on the VAX. The acknowledgments are received by the requester within the timeout period, but towards the end of that period. As a requester is not allowed to transmit further data until the acknowledgment is received, this will clearly reduce individual experimental station throughput and the network throughput.

These measurements confirm that the original requirements for a new data acquisition network have been met. The software is now in use in a service environment. Further developments are planned, in particular to allow requesters in the VAX to access servers elsewhere on the network and to extend this network to the Laboratory mainframe computer.

References

1 F. Rake, R. Tasker and P. Kummer, Performance measurements of an Ethernet local area network, Interfaces Comput., 2 (1984) 221 - 227.
2 Local area networks -- logical link control, IEEE Stand. P802.2, 1985 (IEEE).
3 S. K. Shrivastava and F. Panzieri, The design of a reliable remote procedure call mechanism, IEEE Trans. Comput., 31 (7) (1982) 692 - 697.
4 S. K. Shrivastava, Structuring distributed systems for recoverability and crash resistance, IEEE Trans. Software Eng., 7 (4) (1982) 436 - 447.
5 B. W. Lampson, Atomic transactions, Lect. Notes Comput. Sci., 105 (1981) 246 - 265.
6 D. R. Brownbridge, L. F. Marshall and B. Randell, The Newcastle connection, Software Pract. Exper., 12 (1982) 1147 - 1162.
7 Courier: the remote procedure call protocol, Xerox System Integration Stand. XSIS 038112, 1981 (Xerox Corporation, Stamford, CT).
8 P. S. Kummer and R. Tasker, Remote procedure call for Ethernet, Protocol Specif. DL/CSE/TM35, 1984 (available from Daresbury Laboratory, Science and Engineering Research Council, Warrington WA4 4AD, Gt. Britain).
9 L. Lamport, Time, clocks and the ordering of events in a distributed system, Commun. ACM, 21 (1978) 558 - 565.
10 F. M. Rake, R. Tasker and P. S. Kummer, VAX/VMS remote procedure call system documentation, Internal Doc. Release 1.1, 1984.
11 Local area networks -- CSMA/CD baseband technology, IEEE Stand. 802.3, 1983 (IEEE).
12 J. F. Shoch and J. A. Hupp, Measured performance of an Ethernet local area network, Commun. ACM, 23 (1980) 711 - 721.
13 J. F. Shoch, A brief note on performance of an Ethernet system under high load, Comput. Networks, 4 (1980) 187 - 188.