Future challenges for the adaptation layer

Julio Escobar
Advances in networking and data processing technology are fostering new high-rate applications demanding individual throughput in the hundreds of megabits-per-second. These applications are harder to classify than conventional applications in terms of their network-service requirements, and will present new challenges to the design of the adaptation layer for ATM networks. In this paper we discuss three such challenges: the need for mixed adaptation-layer services by a single application, the need to avoid retransmissions, and the need for throughput that exceeds the link rates of most cell networks being planned.

Keywords: ATM, adaptation layer, high-speed networks
The crucial role of the adaptation layer is to match the needs of its client applications to the technology of a network. For ATM networks, the adaptation layer must provide services of data segmentation and cell reassembly, bit-error control for cell payload, and cell-loss detection, among others. Traditional ATM Adaptation Layer (AAL) protocol designs target specific types of applications. However, emerging technologies are fostering new high-rate applications which are hard to classify along conventional lines. These applications will challenge the current design of adaptation layers. Real-time medical imaging applications, for example, can generate steady bit streams with low tolerance to loss or bit-error corruption, unlike conventional constant-bit-rate applications. Moreover, their high data rate, in the hundreds of Mbit/s, represents a significant fraction of the link rate for most ATM networks proposed as LANs and MANs. This leaves little spare capacity for retransmission. Likewise, for WANs with high rate-delay products, a source would have to buffer tens of megabits to service a potential
retransmission, since that many bits may have been sent by the time an end-to-end problem is detected at a source operating at Gbit/s. In the worst case, high-rate applications in the fields of supercomputing, medical imaging, scientific visualization and entertainment will have individual bandwidth requirements exceeding the link rate of many commercial ATM switches. This situation threatens to relegate high-rate applications to specialized network platforms, which would slow down their development and limit their benefits.

Why should we care about these emerging applications? Even though they currently represent a small subset of existing applications, the sustained growth of super mini-computers, mini-supercomputers, and supercomputers [1] means that these applications are already riding on a multi-billion dollar industry. The current drive to address critical computationally-intensive problems like global climate prediction [2] makes the impact of these applications hard to overstate. These applications represent a solid entry in the field of high-speed networking, with high potential for marketing and research funding.

This paper concentrates on three specific challenges that we expect to encounter from these emerging applications: the need to support hybrid services, the need to minimize retransmissions, and the need for high-rate connections. We discuss these challenges in the context of adaptation layer design for ATM networks, especially for LAN and MAN environments. The ideas in this paper are based in part on earlier research on the design of a segmentation and reassembly protocol [3].
BBN, 10 Moulton Street, Cambridge, MA 02138, USA (e-mail: jescobar@bbn.com)
CURRENT ADAPTATION LAYER DESIGN
AAL protocols being standardized at the moment [4-6] are tailored to different types of applications. Their
design divides the adaptation layer into an upper sublayer, the convergence sublayer, and a lower sublayer, the segmentation and reassembly sublayer. The segmentation and reassembly sublayer implements per-cell services - a sequence number, for example. The convergence sublayer implements services that treat the data unit of the client as a single unit - a data length count, for example.

Standards committees are considering four protocols at the moment. AAL 1 targets constant-bit-rate connection-oriented applications. It includes a cell sequence number for cell-loss detection, stresses bandwidth efficiency, and intends to have a lightweight or non-existent convergence sublayer. AAL 2 targets variable-bit-rate connection-oriented applications, for example compressed video. It has a cell sequence number and error control. Its convergence sublayer includes delimiters for the data unit of the client protocol, a length count, and flow control services. AAL 3/4 merges two earlier proposals and targets variable-bit-rate connectionless data: for example, datagrams for network control information or data services. It has the same cell sequence number and error control fields as AAL 2, but includes a cell-multiplexing ID field on each cell. Its convergence sublayer is the same as for AAL 2. AAL 5 also targets connectionless data services, but stresses bandwidth efficiency and minimality of functions. It uses no bits at the segmentation and reassembly sublayer, and uses one bit from the ATM header to mark the end of its convergence sublayer data unit. It can multiplex data services over a single AAL connection; unlike AAL 3/4, this multiplexing would happen at the convergence sublayer. It has stronger burst-error detection capabilities, perhaps anticipating that many early networks will run on copper and not optical fibre. Its convergence sublayer implements the error control and length count, and contains a control field (useful in multiplexing).

The fixed overhead per cell is one octet for AAL 1, two octets for AAL 2, four octets for AAL 3/4, and no overhead for AAL 5. Several companies plan to run their cell networks over copper T1 (1.5 Mbit/s) and T3 (45 Mbit/s) technology, which explains in part the priority that AAL 5 gives to bandwidth efficiency. For the last three protocols there is also a convergence sublayer overhead. When large data units are used at the convergence sublayer, the average impact of this overhead on a cell is minimal. When small data units of a few cells are used, the impact is much greater, but tends to be overshadowed by the inefficiency of padding the final cell when the data unit is not an integer multiple of cell payloads.
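The bandwidth cost of these per-cell overheads is easy to quantify. A minimal sketch, assuming only the standard 53-octet ATM cell format (5-octet header, 48-octet payload) and the per-cell overheads listed above:

```python
# Rough payload efficiency of the four AAL proposals, assuming the
# standard 53-octet ATM cell: 5 octets of header, 48 octets of payload.
CELL = 53          # octets on the wire
PAYLOAD = 48       # octets left after the ATM header
sar_overhead = {"AAL 1": 1, "AAL 2": 2, "AAL 3/4": 4, "AAL 5": 0}

for aal, octets in sar_overhead.items():
    useful = PAYLOAD - octets            # octets carrying client data
    print(f"{aal}: {useful}/{CELL} = {useful / CELL:.1%} of the link rate")
# AAL 1: 88.7%, AAL 2: 86.8%, AAL 3/4: 83.0%, AAL 5: 90.6%
# (ignores convergence-sublayer overhead and final-cell padding)
```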
AAL 1 offers no error control. AAL 2 and AAL 3/4 use 10 bits of CRC (Cyclic Redundancy Code) per cell. This CRC can correct a cell payload with one bit in error, or detect when a cell payload has two bits in error. AAL 5 uses 32 bits of CRC at the convergence sublayer. This CRC offers the same single-error and double-error capabilities, but over one convergence sublayer data unit. Because this data unit typically spans many cells, AAL 5 is not as strong against independent bit errors. However, it has stronger burst-error detection capabilities: up to 32-bit bursts [7].

AAL 1, AAL 2 and AAL 3/4 have a 4-bit cell-sequence number. The sequence is used to detect lost cells. ATM networks will probably multiplex many applications whose rates have a high peak-to-average ratio. This makes it likely that cells will be lost in clumps, during congestion. AAL 5 relies on its CRC to detect missing cells.

Commercial ATM switches will typically use optical links at 155 and 622 Mbit/s rates, with 1.2 and 2.4 Gbit/s to follow eventually. Many companies are also planning switches that will use T1 and T3 rates. The switches do not process the cell headers in real time (self-routing), which makes for very fast switching. This is ideal for applications requiring high rate or low delay. We will now consider some of the most demanding among these applications. By high-rate applications we mean applications that will individually require rates of approximately 100 Mbit/s and higher.
FUTURE APPLICATIONS

Supercomputer traffic requires high throughput rates, as evidenced by the industry momentum behind the 800 Mbit/s HIPPI (High Performance Parallel Interface) standard [8]. Traffic contents may represent sensor data or numerical results of computations [9, 10], and can consume close to the full capacity of the link. In both cases the traffic requirements may go from near constant-bit-rate to bursty, depending on the behaviour of the sensor targets or the phase of the computation being performed. The need for reliability may also change, depending on how critical the information is considered by the application at any given time. The nature of the computation will dictate the sensitivity of the final results to corrupted data. This in turn determines the application's tolerance to bit errors and lost cells.

A paramount example of the need for reliability is medical imaging. There are important applications in the fields of radiology, tomography, magnetic resonance, pathology, and others [11]. The image contents can range from a few megabits to 400 Mbit for the most demanding among them. A real-life example of a connection-oriented application is the remote manipulation of a microscope in real time by a pathologist [12]. Because these applications involve human lives and legal liability, the medical community is highly concerned with reliable, high-resolution reproduction of images through their networks. This limits the ability to compress and the tolerance to bit errors or lost cells. Rate requirements will typically be in the hundreds of
megabits-per-second. For motion applications, with maximum-resolution images and little compression, it would be necessary to supply a throughput of several gigabits-per-second.

One of the main traits of the applications we are discussing is their individual requirement for hundreds of megabits-per-second. Many of these applications are visual in nature. HDTV (High Definition Television) gives one example. Depending on the video compression and display parameters being standardized, its rate requirements will typically vary from under 100 Mbit/s to 800 Mbit/s [13, 14]. (For entertainment applications, however, much better compression ratios will appear.) Such high rates challenge the individual port (link) capacity of first- and second-generation ATM switches (45, 155 and 622 Mbit/s).

As the desired rates increase, it becomes less and less practical to use small data units, because header processing operations have to be invoked more and more often. Moreover, for visually oriented applications like medical imaging, the basic user data unit of importance is an image. Where reliability is a concern, an image will typically not be displayed unless it is certain or extremely likely that all its pixels will be available. For high-resolution colour images, this data unit can easily exceed 50 Mbit.

Similar examples of high-resolution colour applications exist for scientific visualization [15]. A computer may solve the time evolution of a physical system and colour-code it for motion display at the graphics terminal of a researcher. A visualization may be tolerant to the corruption of information while a system exhibits simple behaviour, but may require high reliability of display as transients or turbulence set in. In this respect it exhibits the same ambiguity of requirements as its parent supercomputer applications. At peak resolution and reliability, the image sizes and rate requirements compare with or exceed those of uncompressed HDTV (over half a gigabit-per-second).

We have tried to underline the following trends. There are applications which are hard to characterize as constant-bit-rate versus variable-bit-rate, or reliable versus unreliable. There are important constant-bit-rate applications that require reliability. There are individual applications whose rate requirements are a large fraction of, and even exceed, the link rate of many planned ATM networks. For visual applications, the basic data unit of interest to the user will be very large.
MIXED SERVICES

In a sense, the challenge of providing mixed services to individual applications stems from the fact that many of the emerging applications are partly data and partly visual display, as in visualization, or partly predictable and partly unpredictable, as in distributed computations among supercomputers. One appropriate approach may be a minimum-common-denominator service, fusing the critical services of the different designs and removing unnecessary functions to minimize overhead.

Consider a visualization application using 50 Mbit images, with high resolution, a large grey or colour range, and little compression. A 155 Mbit/s link could sustain a motion display of approximately 3-30 images per second, depending on compression (see the sketch below). Note that for scientific visualization, the way to achieve good compression without losing critical data may not be obvious. The application would like as clean a display as possible, so it would prefer error control. To minimize retransmissions it prefers cell sequence numbers, so that it can simply omit the pixels affected by lost cells. These functions are present together in AAL 2 and AAL 3/4. However, services like flow control or the cell-multiplexing ID may not be necessary. The other two AAL alternatives are simpler to implement and more bandwidth efficient, but the application must then choose between sequence numbers and error control. If visualization is being used interactively [9], the person in control may want to switch from a compressed, low-reliability (quality) display which economizes link capacity to an uncompressed, high-reliability display, based on the results of the simulation. Switching among AAL protocols within an application's lifetime would be especially cumbersome at high rate.
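The 3-30 figure follows from simple arithmetic; a minimal sketch (the 155 Mbit/s link and 50 Mbit image size come from the example above, and the compression ratios are illustrative):

```python
# Sustainable image rate for 50 Mbit images over a 155 Mbit/s link,
# for a range of illustrative compression ratios.
LINK_RATE = 155e6      # bit/s
IMAGE_SIZE = 50e6      # bits per uncompressed image

for compression in (1, 2, 5, 10):
    images_per_sec = LINK_RATE / (IMAGE_SIZE / compression)
    print(f"{compression}:1 compression -> {images_per_sec:.1f} images/s")
# 1:1 -> 3.1, 2:1 -> 6.2, 5:1 -> 15.5, 10:1 -> 31.0 images/s
```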
The approach of designing a minimum-common-denominator service consists of trying to provide all the main services required in one protocol, and then minimizing the overhead required by these services, instead of specializing to specific applications. The idea is that applications which require a mix of services will not be forced to choose an inefficient protocol for the sake of obtaining all necessary services. A minimum-common-denominator approach would simplify implementation, since the same type of AAL protocol applies to all ATM connections, and should be more bandwidth efficient on average for those applications whose service requirements may change within the application's lifetime. AAL 5 embodies this idea within the context of datagram services, and intends to multiplex several data applications on a single AAL connection. The design in Escobar and Partridge [3] pursued the same goal, but including possible connection-oriented applications.
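As a purely hypothetical illustration (the field widths below are ours, chosen to match the sequence-number and CRC sizes discussed in this paper; they are not the actual design of [3] or of any standard), a minimum-common-denominator segmentation and reassembly header might pack both critical services into a few octets per cell:

```python
# Hypothetical minimum-common-denominator per-cell header: a 10-bit
# sequence number (cell-loss detection, and resequencing across links)
# plus a 10-bit CRC (1-bit correction / 2-bit detection per cell),
# packed into 3 octets. Field widths are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SarHeader:
    seq: int      # 0..1023, modulo-1024 cell sequence number
    crc10: int    # 0..1023, CRC computed over the cell payload

    def pack(self) -> bytes:
        word = (self.seq << 10) | self.crc10    # 20 bits used of 24
        return word.to_bytes(3, "big")

    @classmethod
    def unpack(cls, raw: bytes) -> "SarHeader":
        word = int.from_bytes(raw, "big")
        return cls(seq=(word >> 10) & 0x3FF, crc10=word & 0x3FF)
```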
AVOIDING RETRANSMISSIONS
High-speed networks and high-speed applications each present a problem for retransmissions. The higher the link transmission rate and the higher the end-to-end delay of a high-speed network, the larger the buffer requirements to service retransmissions. For optical networks, signals propagate through fibre at approximately
0.7c, where c is the speed of light in vacuum. A WAN joining Boston and New York would have a round-trip delay of approximately 3 ms. Between Boston and Los Angeles the round-trip delay would be close to 45 ms. An application running at 100 Mbit/s would have sent close to 40 kilobytes in the first case, and close to 600 kilobytes in the second case, before it detects a need for retransmissions. At a 1 Gbit/s transmission rate the buffer requirements increase tenfold. In MANs and LANs, whose coverage is ten or a hundred times smaller, the buffer problems are much smaller.

However, high-rate applications present their own problem for retransmissions, namely that they leave little spare capacity for them. Consider the example of an HDTV optical-distribution system using a codec that requires 125 Mbit/s on average [13]. The rate corresponds to approximately a 6:1 compression ratio. The application will be run over an ATM network operating with link rates of 155 Mbit/s. Because of the SDH/SONET* framing overhead at the physical layer and the 5 octets of ATM cell overhead, the maximum payload rate for these links is 135 Mbit/s, and the spare link capacity would be 10 Mbit/s on average. The largest convergence-sublayer data unit for standard adaptation layers is 65.5 kilobytes (AAL 2, AAL 3/4, AAL 5). If one of these data units must be retransmitted, it would require 50 ms on average at 10 Mbit/s. This is the average time for 1.5 more images, which implies delaying the motion display at the destination by 2.5 images. Most applications would prefer to skip the image affected by the corrupted data. Using smaller data units alleviates this problem, but brings its own problem of unnecessarily frequent header processing.

Higher link rates and improvements in codec technology would void the problem of slow retransmissions illustrated with this specific example. However, it is clear that this problem returns if one considers sharing a link among multiple users, or serving higher-rate applications, some of which will not permit the compression ratio assumed for the codec in our example.

To avoid retransmissions the system design must guard against corruption of information. The two main problems leading to corrupted information are cell loss and bit errors. A network can attempt to prevent cell losses by judicious allocation of resources to applications. Other schemes propose redundancy to recover lost cells [16, 17]. We wish to concentrate on bit-error control, which is more directly relevant to the adaptation layer.

*SDH: Synchronous Digital Hierarchy (CCITT terminology). SONET: Synchronous Optical Network (ANSI terminology).
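The buffering and retransmission-delay figures above can be checked with back-of-envelope arithmetic; a minimal sketch (the city distances, roughly 300 km Boston-New York and 4500 km Boston-Los Angeles along the fibre path, are our assumptions):

```python
# Back-of-envelope check of the retransmission figures in the text.
# Assumes light propagates in fibre at ~0.7c and approximate distances
# of 300 km (Boston-New York) and 4500 km (Boston-Los Angeles).
C = 3e8                       # speed of light in vacuum, m/s
V = 0.7 * C                   # propagation speed in fibre, m/s

for city, km in (("New York", 300), ("Los Angeles", 4500)):
    rtt = 2 * km * 1e3 / V                  # round-trip time, s
    in_flight = 100e6 * rtt / 8             # bytes in flight at 100 Mbit/s
    print(f"Boston-{city}: RTT ~{rtt * 1e3:.0f} ms, ~{in_flight / 1e3:.0f} kB buffered")

# Retransmitting a maximum-size convergence-sublayer data unit (65.5 kB)
# over the 10 Mbit/s of spare capacity from the HDTV example:
retx = 65.5e3 * 8 / 10e6                    # seconds
print(f"retransmission takes ~{retx * 1e3:.0f} ms "
      f"(~{retx * 30:.1f} image times at 30 images/s)")
```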
The tolerance to bit errors must consider that for many visual applications the basic user data unit is a high-resolution colour image, which arrives every few tens of milliseconds and is displayed to a human. A similar situation may apply to supercomputer applications exchanging large blocks of data at rates of hundreds or thousands of Mbit/s.

For optical fibre links we must consider two kinds of errors: burst bit-errors and independent identically distributed (IID) bit-errors. Because fibres are very resistant to electromagnetic interference, burst errors are typically induced mechanically. Apparently, typical splicing or connectorization events induce burst errors that last a few tens of milliseconds [18]. For a link operating at 155 Mbit/s this translates into thousands of cells, and will probably be detected by the ATM header CRC. (For copper links and lower rates, burst errors may be less than one cell long. We will focus on optical networks.)

Regarding IID bit errors, bit error rates ranging from 10^-9 to 10^-11 are quoted in the literature [13, 19]. In fact, the value depends on the designer's choice of transmitter power and receiver sensitivity, which will be primarily influenced by the price-performance ratio of market devices and systems. A special problem arises for LAN environments. It is likely that ATM LANs will be privately operated and maintained by the owner, as is the case for most current LANs. This means that untrained personnel will be in charge of optical connector operations, and LANs may experience suboptimal bit error rates.

When the bit error probability p is small (p << 1), the probability of one error in L bits is of the order of Lp, and the probability of two errors in L bits is of the order of (Lp)^2. To illustrate the implications of these parameters, consider an application using 5 Mbit images at 30 images per second. This figure is consistent with the HDTV example above. An application using an AAL with no error correction will observe on average one error every 11 minutes if p = 10^-11, and one every 7 seconds if p = 10^-9. If the AAL can correct one bit error per cell, the average times between errors observable by the application are 5000 years and 7 months, respectively, depending on p. If the AAL can simply correct one bit error over the data unit at the convergence sublayer, and we assume the maximum size for this data unit (65.5 kilobytes), the figures are 5 years and 4 hours, respectively. These calculations show that for applications which do not tolerate bit errors and that use 5 Mbit images, choosing no error correction is risky, yet choosing error correction at the convergence sublayer is sufficient.

Consider now images of 400 Mbit, with 24 bits of colour or grey scale and 4K x 4K pixels, for the most ambitious medical imaging applications [11]. (Note that for a real-time application, this would require the very high rate of 12 Gbit/s.) In the worst case (p = 10^-9), an AAL that can correct one bit error per cell leads to an average of 2.5 days between errors observable by the application, but an AAL that corrects 1-bit errors at the convergence sublayer leads to approximately 3 minutes between observable errors.
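These mean-time-between-error estimates can be reproduced directly from the Lp approximations; a minimal sketch under the stated assumptions (IID bit errors, 48-octet cell payloads, 65.5 kilobyte convergence-sublayer units):

```python
# Mean time between errors observable by the application.
# For small p, P(1 error in L bits) ~ L*p and P(2 errors) ~ (L*p)**2.
CELL_PAYLOAD = 48 * 8      # bits protected by a per-cell CRC
CS_UNIT = 65536 * 8        # bits in a maximum convergence-sublayer data unit

def seconds_between_errors(rate, p, block=None):
    """Mean seconds between observable errors at `rate` bit/s; with
    block=None nothing is corrected, otherwise one bit per `block` bits
    is corrected, so only double errors in a block remain observable."""
    if block is None:
        return 1.0 / (rate * p)
    return 1.0 / ((rate / block) * (block * p) ** 2)

rate = 5e6 * 30            # 5 Mbit images at 30 images/s = 150 Mbit/s
for p in (1e-11, 1e-9):
    print(p,
          seconds_between_errors(rate, p),                # ~670 s / ~6.7 s
          seconds_between_errors(rate, p, CELL_PAYLOAD),  # ~5500 yr / ~7 months
          seconds_between_errors(rate, p, CS_UNIT))       # ~4 yr / ~3.5 h

# The 400 Mbit medical image: 4K x 4K pixels x 24 bits at 30 images/s.
rate_med = 4096 * 4096 * 24 * 30                          # ~12 Gbit/s
print(seconds_between_errors(rate_med, 1e-9, CELL_PAYLOAD) / 86400)  # ~2.5 days
print(seconds_between_errors(rate_med, 1e-9, CS_UNIT) / 60)          # ~3 minutes
```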
Data for the typical bit-error-rate performance of optical links is just beginning to emerge, and the bit-error-rate tolerance of emerging high-rate applications has not yet been characterized. However, error control at the convergence sublayer should be sufficient for most of them, and is the most bandwidth efficient. Per-cell error correction would only be needed if individual applications reach rates of several gigabits-per-second.
AGGREGATE BANDWIDTH

We assume that in the early years of ATM deployment, LANs and MANs with link rates of 45, 155 and 622 Mbit/s will predominate. Unfortunately, most of the applications discussed in this paper demand transmission rates in the hundreds of Mbit/s, and processing power is increasing at a rapid pace. This poses a bandwidth challenge to the early generations of ATM networks. Network designers can simply wait for higher link rates to become established, or can assemble specialized platforms to serve the high-rate applications. A more attractive approach is to exploit the aggregate bandwidth of their network resources.

ATM switches operate as fast cross-bar switches, with several input ports dynamically cross-connected to several output ports. Many planned switches will have eight or more pairs of input-output ports. An eight-by-eight ATM switch with ports operating at 155 Mbit/s has over 1 Gbit/s of total throughput, and could theoretically accommodate one or a few high-rate applications.

To achieve this, the main protocol problem to solve is to preserve the cell sequence across multiple physical links. Different physical links require independent ATM Virtual Circuit Identifiers (VCIs). While ATM standards require that cells of the same VCI be delivered in sequence, a different mechanism would be needed to ensure cell sequence across multiple VCIs. The natural candidate to provide this service is the adaptation layer, especially if a cell sequence number already exists for detection of lost cells. For WANs with tens of milliseconds of end-to-end delay, the potential delay difference among physical paths is thousands of cells; but for LANs and MANs the challenge is much more tractable. Their difference in path delays will typically range between a few microseconds for LANs and a hundred microseconds for MANs. Even with 1 Gbit/s link rates, this delay difference represents only 3 to 344 ATM cells. A 10-bit sequence number (a modulo-1024 count) would be sufficient to deliver the cells in the correct sequence through an adaptation layer connection [3].

Bandwidth aggregation would require a round-robin cell service of the segmented data at the source, and a resequencing and reassembly operation at the destination (see the sketch at the end of this section). It would also require new algorithms to allocate bandwidth to ATM connections, and host-
interface hardware that can serve multiple ATM links with a single adaptation layer connection. While these problems are not simple, the first two are well understood, and the third admits many heuristic approaches. Perhaps the greatest challenge is to decouple the AAL hardware operation from the ATM-link hardware operations. These problems do not constitute a fundamental technological challenge, and the difficulties must be seen in the light of the new high-rate applications that would be served. Because it appears likely that LANs and MANs will dominate the early stages of ATM deployment, and will provide important testbeds for high-rate applications, the ability to aggregate the bandwidth of several links could turn early-generation ATM switches into powerful tools for gigabit applications. Moreover, supercomputing applications are just beginning to emerge, and their bandwidth demand, driven by computationally-intensive problems [2], may continue to outpace the link rate of most commercial ATM networks for several years.
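A minimal sketch of the destination-side resequencing just described, assuming the modulo-1024 sequence number discussed above (the class and function names are ours, and a real design would also need a timeout to skip cells lost in transit):

```python
# Resequencing cells striped round-robin across several physical links,
# using a 10-bit (modulo-1024) adaptation-layer sequence number.
MOD = 1024   # 10 bits: enough to cover LAN/MAN path-delay differences

class Resequencer:
    """Releases cells in sequence order despite per-link delay skew."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}            # seq -> payload, held until the gap fills

    def receive(self, seq, payload):
        """Accept a cell from any link; return the cells now deliverable."""
        self.pending[seq] = payload
        out = []
        while self.next_seq in self.pending:
            out.append(self.pending.pop(self.next_seq))
            self.next_seq = (self.next_seq + 1) % MOD
        return out

def stripe(cells, n_links=4):
    """Source side: tag each cell with a sequence number and assign links
    round-robin; returns (seq, link, payload) triples."""
    return [(i % MOD, i % n_links, cell) for i, cell in enumerate(cells)]
```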
CONCLUSIONS

Advances in network technology and processing power foster the development of interesting computer applications requiring individual throughputs of hundreds of Mbit/s. ATM networks are quickly becoming the architecture of choice for the next generation of high-speed networks. Because of the rapid pace of technological advance, it pays to look ahead and examine what challenges are in store.

The different proposals available for the ATM adaptation layer propose different formats and capabilities. In part the differences stem from the different target applications that each proposal is trying to serve. But a new set of applications is emerging for which the conventional taxonomy of applications appears ill-suited. The advantage of application-specific AAL protocols is that they can optimize their design for their applications and economize on bandwidth. A minimum-common-denominator approach is less bandwidth-efficient for some applications, but may better serve complex applications and prove more bandwidth efficient on average. It would also extend the concept of integrated services from the cell layer into the adaptation layer, facilitating implementation and interoperability.

The high individual rate of some applications leaves little spare link capacity for retransmissions. Even if high rate-delay values for the network, as for WANs, are not a concern, it is convenient to avoid retransmissions. This calls for error control. The ability to correct one bit in error and detect when two bit errors occur is sufficient. Error control over many cells at the convergence sublayer is very efficient regarding bandwidth,
and will be sufficient for most applications. When applications move into gigabit-per-second rates, it may eventually be necessary to use stronger error control, like per-cell correction of 1-bit errors.

For high-rate optical links it appears that burst errors will result in thousands of cells lost, and other network mechanisms will have to detect this problem, for example by detecting ATM header errors. Just as for bit error control, it is necessary to make cell-loss events very infrequent, either by preventing contention for network resources or by using redundancy to recover lost cells.

The aggregation of bandwidth across ATM virtual circuits by using a sequence number in the adaptation layer is very feasible in LAN and MAN scenarios. Less than two octets of sequence number per cell are sufficient to support this bandwidth aggregation. This service has the potential of extending the range of applications of early ATM switches to an important domain of high-rate applications. The consideration is particularly important given that LANs and MANs are expected to dominate the early stages of ATM deployment, and to provide the testbeds for many high-rate applications.

AAL functions that must be implemented on a per-cell basis imply greater bandwidth overhead. However, the service rendered to the applications may outweigh this inefficiency, especially if it provides services not available otherwise, like bandwidth aggregation.
ACKNOWLEDGEMENTS

This paper is based on ideas developed with Craig Partridge at BBN during the design of a segmentation and reassembly protocol for ATM.
REFERENCES
1 Patton, P 'Survey forecasts supercomputer market growth at 35% to 40%', Supercomputing Rev. (September 1988) pp 28-29
2 Grand Challenges 1993: High Performance Computing and Communications, Technical Report, Committee on Physical, Mathematical, and Engineering Sciences, Office of Science and Technology Policy, USA (1993)
3 Escobar, J and Partridge, C 'A proposed segmentation and re-assembly protocol for use with asynchronous transfer mode', 2nd IFIP WG6.1/WG6.4 Int. Workshop on Protocols for High-Speed Networks, Palo Alto, CA (November 1990)
4 Sinha, R (ed) Broadband Aspects of ISDN, Baseline Document, Technical Report T1S1.5/90-001 R2, ANSI T1S1 Technical Sub-Committee (June 1990)
5 CCITT Draft Recommendations for BISDN, Technical Report I.361-I.363, CCITT SG XVIII, Geneva, Switzerland (June 1990)
6 Bergman, W C (contact) AAL 5 - A New High Speed Data Transfer AAL, Technical Report T1S1.5/91-449, ANSI T1S1.5, USA (November 1991)
7 Lyles, B Reliability Analysis of Proposed AAL 5, Technical Report T1S1.5/91-458, ANSI T1S1.5, USA (November 1991)
8 High-Performance Parallel Interface - Framing Protocol, Technical Report X3T9.3/89-013, Rev 2.6, ANSI X3T9.3, USA (July 1990)
9 Hibbard, W and Santek, D 'Visualizing large data sets in the earth sciences', Computer, Vol 22 No 8 (August 1989) pp 53-57
10 Boulanger, A and Escobar, J 'Loosely coupled environmental models', Earth & Space Science Information Systems Conf., Pasadena, CA (February 1992)
11 Kohli, J 'Medical imaging applications of emerging broadband networks', IEEE Commun. Mag., Vol 27 No 12 (December 1989) pp 8-16
12 Weinstein, R, Bloom, K J and Rozek, S 'Static and dynamic imaging in pathology', IEEE Proc. Image Manage. & Commun., Los Alamitos, CA (1989)
13 Fleischer, P, Lan, R and Laltas, M 'Digital transport of HDTV on optical fiber', IEEE Commun. Mag., Vol 29 No 8 (August 1991) pp 36-41
14 Kishimoto, R and Yamashita, I 'HDTV communication systems in broadband communication networks', IEEE Commun. Mag., Vol 29 No 8 (August 1991) pp 28-35
15 DeFanti, T, Brown, M and McCormick, B 'Visualization: expanding scientific and engineering research opportunities', Computer, Vol 22 No 8 (August 1989) pp 12-25
16 Shacham, N and McKenney, P 'Packet recovery in high-speed networks using coding and buffer management', Proc. Infocom, San Francisco, CA (June 1990)
17 McAuley, A 'Reliable broadband communication using a burst erasure correcting code', Comput. Commun. Rev., Vol 20 No 4 (September 1990) pp 297-306
18 Goldstein, F Abnormal Error Rates in Broadband ISDN Installations, Technical Report T1S1.5/90-007, ANSI T1S1.5, USA (February 1990)
19 Muller, N and Davidson, R LANs to WANs: Network Management in the 1990s, Artech House, NY (1990)