An Implementation of the X25 Interface in a Datagram Network

1. Introduction

D.L.A. Barber and T. Kalin
European Informatics Network

C.M. Solomonides
National Physical Laboratory, Teddington, UK

The design of a system to allow the EIN network and its hosts to connect to X25-based networks, or alternatively to enable X25 hosts to communicate through EIN, is described. The use of multi-microprocessors for implementing the system results in a modular architecture of functionally divided elements. This makes it possible to configure the system in a variety of ways to achieve different goals with the minimum of extra effort.

Keywords: Data transmission, packet-switched networks, multi-microprocessors, X25 interface.

Tomaž Kalin is a member of the research staff at the "J. Stefan" Institute, Ljubljana, Yugoslavia. He received an engineer's degree, as well as an M.Sc. and a doctorate, from the Physics Department of the University of Ljubljana. During his work in computational physics he became involved in the computer science field. His interests include computer performance evaluation and modelling, and computer communication networks. In 1977/78 he worked for the Executive Body of the European Informatics Network project, Teddington, UK, where he was responsible for the development of the multi-microprocessor based protocol conversion unit.

Early in 1973, Derek Barber became Director of the European Informatics Network Project, which was established by agreement between 10 countries. Since then he has been deeply involved in the administrative and technical problems associated with an international research project involving the cooperation of several Computer Science Research Centres in various countries. As a result he has developed a growing interest in user aspects and in the social implications of the use of data communications networks. He has recently become chairman of IFIP WG 6.1 (the International Network Working Group). Prior to joining EIN, Derek Barber was head of the Information Systems Branch of the NPL's Computer Science Division, which built the pioneering NPL data network, and he also chaired the committee that prepared the British Standard 4421 Interface Specification. The work at NPL led to his collaboration with Donald Davies on their successful book "Communication Networks for Computers". Before joining the NPL, Derek was with the British Post Office, so he has been interested in communications for much of his career.

C.M. Solomonides graduated in physics from the University of Sussex in 1966. He subsequently received a D.Phil. degree from the same University for experimental work in particle physics. He joined the staff of the Computer Science Division at the National Physical Laboratory in October 1974 and worked on packet-switched network design using simulation techniques. He is now project leader for the work on "Protocols and Standards" at the Laboratory.

1 On leave from Republiški Računski Center, Ljubljana, Yugoslavia.
2 This paper was presented at the Computer Network Protocols Symposium, held in Liège (Belgium) in February 1978 and organized by the University of Liège. Permission to reprint this paper is gratefully acknowledged.

© North-Holland Publishing Company
Computer Networks 2 (1978) 340-345

The communications subsystem of the European Informatics Network (EIN) is based on a datagram service [1], while future public packet-switching networks will be based on the CCITT X25 recommendation, using virtual circuits [2]. In a datagram network, fully addressed individual packets are entered into the network at any time by the host computer, and are delivered in whatever order they arrive at their various destinations. The host computers are responsible for packet ordering and for recovery from packet loss by retransmission. To use a virtual circuit network, a calling data terminal equipment has to go through a call-establishment phase. Once the connection is established, data can be transferred; the network takes full responsibility for error detection and recovery, and for packet ordering. When the data transfer is complete, the terminal has to clear the virtual call.

It is obvious that an appreciable number of functions must be added if an X25 interface is to be introduced in a datagram network. One possible approach would be to change the nodal software of the communication subnetwork to accommodate the X25 interface in addition to the existing datagram service, but this would mean a complete rewriting of that software, with the accompanying disruption of existing services. So another way has been chosen: a Black Box, interposed between a node of the datagram network and the host computer, which uses the X25 interface to access the computer network [3]. The box has been named BBX25, i.e. Black Box X25 of the A variety, or Box A for short.
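The host-side responsibilities of a datagram service can be illustrated with a small sketch. This is our own illustration, not EIN code: a host receiving fully addressed datagrams must buffer out-of-order arrivals and release them in sequence.

```python
# Hypothetical sketch: host-side packet ordering over a datagram service.
# Packets may arrive in any order; the host releases them in sequence.

class HostReorderBuffer:
    """Buffers out-of-order datagrams and delivers them in sequence order."""

    def __init__(self):
        self.next_seq = 0   # next sequence number to deliver
        self.pending = {}   # seq -> payload, held until its turn

    def receive(self, seq, payload):
        """Accept one datagram; return the list of payloads now deliverable."""
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

buf = HostReorderBuffer()
assert buf.receive(1, "b") == []          # out of order: held back
assert buf.receive(0, "a") == ["a", "b"]  # gap filled: both delivered in order
```

Recovery from loss would sit alongside this: the sender retransmits any sequence number not acknowledged within a timeout, which is exactly the burden the virtual circuit approach moves into the network itself.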


Fig. 1 shows other types of unit, or Black Boxes, which have been discussed in the EIN Project. Two of them, C and B, are alternative means of replacing the present EIN leased lines by the use of the public packet-switching network, Euronet. Two boxes C, and a permanent virtual circuit connecting them, can be directly substituted for one leased line between two EIN nodal computers. The source Box C puts EIN packets into envelopes with X25 addressing fields, and sends them via the PVC to the destination Box C. Box B offers a more flexible solution, because it can be used in two ways: it may join a Host computer directly to the virtual circuit network, or it may connect one of the Host computer ports of an existing EIN network switch to the public network. With the second possibility, the present switches can be retained and used for local switching. Box C1 is just a simplified version of Box A, since it can connect only two X25 hosts via the EIN subnetwork.

2. Requirements

[Fig. 1. Some possible X25 adapter units. The diagram shows the SC-NSC interface between the EIN sub-net and the various boxes, and illustrates: EIN sub-net leased lines replaced by permanent virtual circuits in the X25 network; X25 VC computer systems using the EIN subnetwork; EIN DG subscriber computers linked by switched virtual circuits in the X25 network; and the EIN sub-net providing virtual call and circuit facilities. Note: all units have one datagram interface (O) and one virtual circuit interface (X).]

The X25 interface must be introduced in such a way that new X25 hosts can communicate with hosts connected to the network via Boxes A, as well as with the existing EIN hosts. This must be done without causing any disturbance to communications between the present EIN hosts. To permit this, a transparent path for EIN datagrams has to be established between two X25 hosts, or between an EIN and an X25 host. Obviously, all EIN high-level protocols will have to be implemented on those X25 hosts wishing to communicate with the EIN hosts using the datagram service. Fig. 2 shows the use of Box A. The Transport Station and the Virtual Terminal Protocol will have to be implemented on top of Level 3 of X25, to give access to the EIN community.

3. Outline of the solution

The Black Box A, which is intended to enable the present EIN subnetwork to offer an X25 interface to new users, is a Data Circuit Terminating Equipment (DCE). The unit must handle both incoming and outgoing calls, interacting with the DTE according to the X25 recommendation during call set-up and clear-down, and mapping the X25 data packet addresses into full datagram addresses for transfer through the subnetwork.
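The address-mapping task just described can be sketched as a call table. All names and the address format here are illustrative assumptions of ours, not the EIN implementation: an X25 data packet carries only a short logical channel number, which Box A must expand into a full datagram address.

```python
# Illustrative sketch: a DCE call table mapping X25 logical channels
# to full datagram addresses, filled in at call set-up and consulted
# for every data packet. Names and address format are hypothetical.

class CallTable:
    def __init__(self):
        self.by_channel = {}   # logical channel -> remote datagram address

    def call_setup(self, channel, remote_datagram_addr):
        # On a Call Request, record where datagrams for this
        # virtual call must be sent through the subnetwork.
        self.by_channel[channel] = remote_datagram_addr

    def map_data_packet(self, channel):
        # An X25 data packet carries only its logical channel number;
        # the box expands it into a fully addressed datagram.
        return self.by_channel[channel]

    def clear_call(self, channel):
        # Clear-down releases the channel for reuse.
        del self.by_channel[channel]

table = CallTable()
table.call_setup(channel=5, remote_datagram_addr="node7/host2")
assert table.map_data_packet(5) == "node7/host2"
table.clear_call(5)
assert 5 not in table.by_channel
```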

[Fig. 2. The use of Box A to interconnect different types of host computer. The figure shows EIN hosts (with SC-NSC interface and Transport Station) and X25 hosts joined through Box A units and the EIN subnetwork, with the end-to-end protocol operating between the two boxes. UP = user process; TS = transport station.]

A Box A must be able to establish a virtual call to any other Box A, and must interact with it to control parameters such as the rate of flow of data.

After a thorough examination of possible solutions, including one using a minicomputer, all were discarded in favour of a multi-microprocessor arrangement. This suits the natural division of the logical functions into four largely independent parts with very clearly defined interfaces: X25 Level 2, X25 Level 3, the EIN line and packet level, and a "Process in the Middle" (PIM) which takes care of translating packet formats, flow and error control, and is the supervisor for the whole machine. The role of X25 Levels 2 and 3 and of the EIN line and packet interface is clear enough, and needs no further discussion; but the PIM, which gives the Black Box its character, will now be examined in more detail. Since Box A has to pass EIN-type packets as well as X25 ones, two distinct information paths and corresponding processes have been provided.

a) To allow an X25 host to speak to an EIN host, an EIN datagram process must be included. It puts an X25 Level 3 envelope around the packets coming from the EIN subnetwork, and sends them via a permanent virtual circuit to the X25 host, where they will be passed to a Transport Station. The EIN datagram process has to recognise any EIN packets which are too long (more than 128 bytes), and divide them into up to three X25 packets, with a "more data" bit set, for transmission to the host. Because of this division into fragments, the present EIN Transport Station has to be modified, or front-ended, with a simple process which recognises fragmented packets and concatenates them before they are submitted to it. In the opposite direction, packets not exceeding 128 bytes have to be delivered to Level 3, with appropriate signalling to cause, when necessary, a "more data" bit to be set for transmission over the X25 interface to Box B. Similarly, the EIN datagram process will concatenate two or three X25 packets with "more data" bits set into one EIN packet, for transmission through the EIN subnetwork to an EIN host or to a similarly equipped X25 host. Note that for the datagram information path the X25 interface is used only for information transfer between the host and Box A. But this does not imply that an X25-type service can be expected when using this alternative, because it does not provide packet ordering or safe delivery.

b) In its second role of adding a virtual call outer layer in EIN, Box A incorporates procedures for operating an end-to-end protocol. The protocol is implemented between two Box A units. Figure 2 shows the X25 virtual call path between two X25 hosts connected to EIN via Box A processors. The end-to-end protocol must provide recovery from packet loss and packet sequencing, as well as flow control between the two boxes on a per-virtual-call basis. Two alternatives have been examined for the end-to-end protocol. One, which has not been selected, is similar to the flow control mechanism employed in the EIN Transport Station, where the receiver gives the sender a "credit", i.e. the number of packets it may send. The chosen one is very like the window mechanism employed by X25 Level 3; it has been chosen because it can use the same program modules as are used in the module handling the X25 interface. A software checksum is used to detect any corruption of data that occurs while it is passing from the PIM of one Box A to that of another.

The end-to-end protocol is logically and physically separate from Level 3 of X25. They each use the same packet numbering, but their window operations are decoupled. The Level 3 window mechanism for data flowing into the box is advanced to allow the host to send more packets, and is operated with a window size of 2. This ensures that the number of buffers committed per virtual call is kept to a minimum. The end-to-end protocol operates with a larger window (the maximum possible is 7), so that virtual call flows are not inhibited by the end-to-end window size. Level 3 does not advance its window until a request for more packets is received from the process handling the end-to-end protocol. This separation of the local and the end-to-end windows of course implies that the Level 3 acknowledgement is not end-to-end.

The local procedures for implementing the end-to-end protocol reside in the PIM. They store out-of-sequence packets until the missing information arrives, and they employ retransmission mechanisms to recover from loss of data. Attempts are made to recover from these conditions at all times during which communication exists with the remote PIM. Calls will be cleared only if there is total loss of communication, and reset if an unrecoverable loss of synchronisation has occurred in the end-to-end protocol. A number of liaisons using the above end-to-end protocol can be active concurrently, one for each established virtual circuit. The number of logical channels Box A can support depends on the window size, i.e. the number of outstanding acknowledgements, in relation to the number of buffers in the packet store. A short discussion of this point follows in the next section.
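The fragmentation rule of the EIN datagram process can be sketched directly from the figures in the text (the code itself is our illustration, not the EIN source): packets longer than 128 bytes are split into up to three X25 packets, each but the last carrying the "more data" bit, and the receiving side concatenates until a packet without the bit arrives.

```python
# Sketch of the fragmentation rule described above (illustrative code):
# EIN packets longer than 128 bytes are split into up to three X25
# packets; each fragment except the last has the "more data" bit set.

MAX_X25_DATA = 128

def fragment(ein_packet: bytes):
    """Split an EIN packet into (payload, more_data_bit) X25 packets."""
    chunks = [ein_packet[i:i + MAX_X25_DATA]
              for i in range(0, len(ein_packet), MAX_X25_DATA)]
    assert len(chunks) <= 3, "EIN packets fit in at most three fragments"
    return [(chunk, i < len(chunks) - 1) for i, chunk in enumerate(chunks)]

def reassemble(x25_packets):
    """Concatenate fragments until a packet without the M-bit arrives."""
    data = b""
    for payload, more_data in x25_packets:
        data += payload
        if not more_data:
            break
    return data

packet = bytes(300)                      # a 300-byte EIN packet
frags = fragment(packet)
assert [m for _, m in frags] == [True, True, False]
assert reassemble(frags) == packet
```

The front-ending of the Transport Station mentioned above corresponds to the `reassemble` side: it hides the fragments so that the station still sees whole EIN packets.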

4. Microprocessor implementation

4.1. Introduction

A multi-microprocessor system has been selected for the implementation of the Black Box [4]. The reasons for this are:
- each of the functional units can run on a single microcomputer, without the complex scheduling schemes needed in a minicomputer solution;
- having each module on a separate processor, with clean and simple interfaces to the other modules, gives a large degree of flexibility. The microcomputer boards, which have associated PROMs holding all the software, can be moved and used to perform the same basic functions in different systems, provided that simple buffer-handling conventions are satisfied;
- the good price/performance of microcomputers.

The 8080/8085 family of microcomputers has been selected for its worldwide use, its second source in Europe, and because multi-processing is a standard feature of the Single Board Computer SBC 80/20. An additional common board has been developed to provide common diagnostic software, watchdog timers and interrupt generation.

4.2. Brief description of the hardware

The use of a single-board microcomputer has a very useful feature: it enables use of the internal bus without disturbing the common system bus. Up to 16 microcomputers, sharing the same system bus, can execute their programs, located in local PROM and RAM, without interfering with each other. Only when non-local memory is addressed does the bus logic seek permission to use the system bus. This feature allows each of the four processes which comprise the Box A to be independent, since all the software needed for their operation is in local PROM. They need to access the system bus only when transferring information from the line into the packet store or vice versa, for concatenation or fragmentation, or for interprocess communication. The four microcomputer boards are (Fig. 3):
- line driver and Level 2 of X25
- Level 3 of X25
- PIM and system monitor
- line driver and EIN link access procedure

[Fig. 3. Basic architecture of all black boxes. The figure shows, on the X25 interface side, an Intel 8085 with an HDLC chip running the X25 link level; a common utility board; an Intel SBC 80/20 running the X25 packet level with a 16 kbyte RAM packet store; an Intel SBC 80/20 running the PIM; an optional second 16 kbyte RAM packet store; and, on the EIN interface side, an Intel 8085 running the EIN link and packet level.]

The most interesting board is the one incorporating an HDLC chip, a DMA chip and an 8085 microprocessor, used for both I/O ends of the Black Box. HDLC chips recognise flags, provide bit-stuffing and calculate cyclic redundancy checks. Paired with the direct memory access chip, they can transfer data from the line into the packet store without loading the 8085 CPU, which can concurrently process the line-level protocol (i.e. Level 2 in the case of X25). For the first phase, while this board is being designed, its functions are performed by an SBC 80/20, controlling character-by-character transfer from the HDLC chip to the common packet store on an interrupt basis.

There is another common utility board, which offers an elegant solution for the diagnostic facilities. The Black Box must be an unattended auxiliary device and must therefore have a powerful self-diagnostic check built into it. At each start-up, or on the failure of any processor to regularly toggle the watchdog timer, the self-test diagnostic program residing on the common utility board will be initiated. It will be run by all processors concurrently, with some parts of the test common to all. Provision is also made for extended diagnostics, using a terminal which can be connected to each processor board in turn. After the self-test phase an initialisation phase is entered, during which the HDLC chips and common packet store are initialised. The utility board holds the watchdog timers, the bus access priority arbitration that resolves system bus request clashes, and logic for the generation of interrupts. This allows a general inter-process communication facility: an alerted CPU can look up its communication area in the common memory. A common memory is available on the system bus to all CPUs for interprocessor communication and for the packet store. Memory size is to be 16 kbytes, expandable to 48 kbytes if necessary.
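The watchdog arrangement described above can be sketched in outline. This is a hypothetical model of ours, not the utility board's firmware: each processor toggles its watchdog regularly, and any processor missing its deadline would trigger the self-test program.

```python
# Hypothetical sketch of the watchdog supervision described above:
# each processor regularly toggles its watchdog; one that misses its
# deadline would cause the utility board to start the self-test program.

import time

class Watchdog:
    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_toggle = {}      # processor name -> time of last toggle

    def toggle(self, cpu_id):
        # Called regularly from each processor's main loop.
        self.last_toggle[cpu_id] = time.monotonic()

    def stalled(self, now=None):
        # Return processors that have missed their toggle deadline;
        # in the real unit this condition initiates the diagnostics.
        now = time.monotonic() if now is None else now
        return [cpu for cpu, t in self.last_toggle.items()
                if now - t > self.timeout]

wd = Watchdog(timeout=1.0)
wd.toggle("x25_level2")
wd.toggle("pim")
wd.last_toggle["pim"] -= 5.0       # simulate a stalled PIM processor
assert wd.stalled() == ["pim"]
```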

5. Discussion of the system

To the user, i.e. to the DTEs, a pair of Boxes A with the connecting datagram network should look like an X25 communication subnetwork. It is obvious that only a limited number of logical channels, as defined in the X25 recommendations, can be provided by the approach which has been used. With approximately 50 buffers available in 16 kbytes of memory for passing packets, and with a window size of 3, there must be no more than ten virtual calls active at any one time, because the box is obliged to accept thirty packets from the X25 side. With no, or very limited, flow control on the EIN side, and with some possible fragmentation of large EIN packets, lock-up could occur if the number of active virtual calls were increased.

This project is very interesting, because it can contribute to answering the question of how a virtual call network should be structured internally. Is it better to have a virtual circuit network with fixed paths and reserved resources, like some public networks now under development, or can a datagram system with an end-to-end procedure serve equally well?

At first sight, there seems to be a problem in testing the implementation, as there is no X25 public network available. Plans were made to test the X25 part of Box A using three boxes, one of them of a special design with an X25 interface on both ends. This approach is attractive because all testing can be done in-house, but it requires additional effort and provides a test of only limited value, since it compares one interpretation of the X25 specifications against itself. Fortunately, the French Post Office kindly agreed to make available their X25 interface test equipment, which helped considerably in the verification of the design.
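The buffer arithmetic quoted above can be checked directly. The figures come from the text; the calculation itself is our worked example:

```python
# Worked check of the figures quoted above: with ~50 buffers and a
# window size of 3, at most ten virtual calls can be active, since the
# box must accept window_size packets per call from the X25 side.

buffers = 50        # packet buffers available in 16 kbytes of store
window_size = 3     # packets the box is obliged to accept per call
max_calls = 10      # virtual calls active at any one time

committed = max_calls * window_size
assert committed == 30          # thirty packets may arrive from the X25 side
assert committed <= buffers     # the remaining ~20 buffers are the headroom
                                # for EIN-side traffic and fragmentation
```

The tightness of that headroom is exactly why the text warns that lock-up could occur with more active calls, given the limited flow control on the EIN side.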

6. Future possibilities

The decision to adopt a modular architecture, to facilitate the construction of several types of unit, has a number of important consequences. Firstly, some of the modules already developed may be employed for other purposes, e.g. as a basis for an interface tester, or in the construction of a terminal concentrator for X25 or datagram-type networks. Secondly, the relatively low cost of the multi-microprocessor unit suggests that a similar system could be used at each of the hosts of a closed user group to implement high-level protocols in a common manner. This would reduce considerably the cost of interfacing a variety of different hosts to the public data network. If this were done, some interaction between the interfacing units might be introduced. This could help to give a uniform interface to different host systems, and might even allow information about the status of hosts in the network to be made available to users. These possibilities are described in ref. [5].

The authors wish to acknowledge the many helpful discussions with colleagues at NPL and in EIN. In particular, they express their appreciation of the invaluable contribution from Brian Hicks of the University of Queensland; while a guest worker at NPL he was responsible for much of the detailed design of the system.

References

[1] SESA-Logica: EIN, Final Documentation, Doc. No. 714/5 1520-1005 2/01 (Dec. 1975).
[2] Recommendation X25, CCITT, VIth Plenary Assembly, Orange Book, Vol. VIII.2, Public Data Networks, ITU, Geneva, 1977.
[3] D.L.A. Barber and R.H. Willmott, An X25 Interface for EIN: Some Technical Issues, EIN/TAG/76/024 (Dec. 1976).
[4] An Outline of the Structure of BBX25, EIN/MANC/77/020 (July 1977).
[5] D.L.A. Barber and T. Kalin, The THING-Ring Approach to Network Architecture (Sept. 1977), submitted to the On-Line Conference.