Protocol testing techniques
Baoyu Wang* and David Hutchison† provide a survey of testing techniques for communication protocols
*Computer Centre, Nanjing Institute of Technology, Nanjing, Jiangsu, People's Republic of China
†Department of Computing, University of Lancaster, UK
This paper deals with testing techniques for communication protocols. Several protocol testing support systems designed for testing protocol layers of the ISO/OSI (International Standards Organisation/Open Systems Interconnection) Reference Model are introduced. These systems are mostly designed for the conformance testing of protocols produced by third parties, which is intended not only to provide test services for both suppliers and users but also to meet the requirements of standards. Several test case generation methodologies are described, and the authors' experience in using one of the methods is presented.
Keywords: communication networks, protocol testing, conformance, third party testing
In data communications it is generally accepted that protocols should be standardized, otherwise it will be hard to reach the goal of open communication among different systems. However, users must be assured that products based on the same standard but produced by different suppliers can communicate with each other successfully and reliably. Protocol testing can be used to show whether the products under test meet the standard specification. In this situation suppliers and users may rely on independent agencies who provide a test service for the conformance of protocol products to standards (such an agency is called a third party).
How are communication protocols tested? The major effort in testing has been in two areas. The first area involves building a support test system; this is because the implementation of a protocol should be tested by a trusted independent assessment system. The test system should provide a simulated environment which ensures that the protocol is working under actual running conditions. Traditional testbeds for testing protocol implementations are designed separately by suppliers and users to their own requirements. This has two main disadvantages. First, the cost of building individual testers is high. Second, as different products choose their own reference product for communication, standardization of protocol products will hardly be attained. A third party can therefore be employed to test products, eliminating both disadvantages. The second area is the creation or application of a suitable method for generating efficient test cases to test implementation products. During one test, thousands of test cases may be executed. How can better test cases be applied, and yet all the errors still be found? How can a test system generate test cases with less human involvement? These are the main research interests in this area.
CONFORMANCE TESTING
The term 'testing' indicates that some operations are carried out to certify the conformity of an implementation to its specification. In data communications, protocol testing means that the test object is a protocol implementation. It can be defined simply as executing a sequence of coordinated operations to drive the protocol implementation through a given sequence of input stimuli and observing how the implementation responds to well determined actions. However, testing a protocol also involves testing the services which are provided by the protocol. Such testing is called 'service testing', and it verifies the interaction of the protocol implementation with its user. Faults detected in service testing can be evidence of a problematic protocol implementation, but the absence of faults detected in service testing may be insufficient to declare that the protocol implementation is correct.
In software engineering, testing can usually be designed from either a functional or a structural point of view. Structural testing is often called 'white-box' testing, and is particularly used by the implementer to check the internal operation of a program. Functional testing is commonly known as 'black-box' testing: it treats the program or system as a black box and assumes that the internal structure of the implementation is not available. For communication protocol testing both sorts of test are necessary. It is also necessary to have a trusted protocol (a 'peer protocol' or reference protocol) to communicate with. A reference protocol implementation should be connected with the implementation under test in order to access the test object within an operational environment. If the protocol conforms to its specification, the communication should be successful, without any error.
Normally the tests are done by two parties: one is the supplier and the other the user. For communication protocol products, third parties are active in testing product conformance to the ISO's (International Standards Organization's) OSI (Open Systems Interconnection) standard network model. The supplier, as the first party, should test the conformance of a product to its specification and obtain enough evidence to show how reliable the product is. The user, as the second party, does not need to understand different implementations of a protocol, and is only concerned with the services provided. As the service specification is semantic-free, all correct implementations of a protocol should provide the same services. The users have to ensure that the products can meet their own requirements and thus will set certain criteria to measure whether the product is of good quality. The third party is independent of both suppliers and users, and normally provides several testers which are designed for individual layers of the OSI standard. Any OSI product to be tested is connected with the test system so that conformance testing can be carried out in a standard communication environment.
During the last few years, interest in third party testing of communication protocols has gradually increased internationally. At present most test systems deal with the network and transport protocols of the OSI standard, and work is continuing on test facilities for the higher layers. A successful tester should include an efficient test case generator; researchers are therefore investigating methodologies for generating more effective test cases, in order to achieve the goal of testing protocols automatically with very little or no human interference.
SURVEY OF TESTING SYSTEMS
In recent years, several protocol testing systems have been designed worldwide, notably in the UK, USA, Canada, France, Italy and the FRG. Most of these have already been put into operation, and all are built for particular layers of the OSI Reference Model. A brief introduction to these test systems and tools, comparing their major characteristics with those of the UK's well known NPL assessment centre, is given below.
NPL assessment centre
The National Physical Laboratory (NPL) in the UK started to develop appropriate testing techniques for protocol implementation assessment in 1980 [1], and is considered one of the pioneers in the field of protocol testing. The main objective of the NPL is to establish techniques applicable to international standards. Their approach is to test one layer at a time (layer N) by checking its conformance to the ISO/OSI Reference Model; all layers below the layer under test (layers N-1, N-2, etc.) are assumed to have been adequately tested. Early work in the field provided a basis for the development of many protocols, and concentrated on the requirements for testing OSI products by using a real network connection [2]. The simple physical architecture of the assessment centre consists of three components: the AT, the NET and the SUT. The AT (active tester) communicates with the SUT (system under test) via a communication medium (NET). At the NPL, British Telecom's Packet SwitchStream (PSS) is used as the primary communication medium, so the test can be conducted many miles away from the system under test. When the (N-1) service is not end-to-end, an advanced physical architecture is used. This architecture inserts a transportable box called the EMU (environment manipulation and monitoring unit) between the communications medium and the client's system. The AT is the key part of the assessment centre and has been developed in two phases: the first adopts the simple physical architecture and is restricted in the types of error to which the SUT can be subjected; the second uses the advanced physical architecture, which includes the test driver, communication medium and EMU, and is closer to the real structure of the corresponding OSI service and protocols. The logical architecture of assessment testing can be described in terms of the OSI layers as indicated in Figure 1.
Figure 1. Logical structure of assessment centre (active tester: test driver, (N) protocol with error encoder/decoder; client's system: test responder, implementation under test (IUT); both sides use the (N-1) protocol implementation, with an exception generator, over the underlying communication channel)
In its initial model, this testing system was built for testing protocols for the network layer of the OSI model. Later on, however, it proved to be satisfactory for testing protocols above the network layer when the underlying service was expected to be end-to-end. The NPL is working on adapting the active tester from layer-by-layer testing to multilayer testing, which will not consider intermediate interfaces, so that the testing process will become more manageable. NPL is now taking the lead in developing an OSI standard for an OSI conformance testing methodology and framework.
NCC service
At the National Computing Centre (NCC) in the UK, a third party testing service has been built under the name of NCC Comms-AID (Communication Software Assessment and Interactive Development) [3]. The NCC objectives are [2]:
• to establish an effective resource to assist the OSI user community in the UK;
• to provide test facilities for as many protocols as practicable;
• to provide a better indication of commercial feasibility;
• to test and develop further techniques;
• and to promote the OSI model and the concept of third party testing.
The NPL and NBS test systems (see below), which are used for testing network and transport layer protocols respectively, were both installed at the NCC in 1983. The NCC Comms-AID service is a remote testing facility: the tester is located at the NCC site, the product under test is installed on the client's own system, and the two sites are connected via PSS. The tester can generate test data and interpret responses as well. The significant feature of this system is that the test can be controlled either at the NCC or at the client's site. NCC test facilities are already available for testing the network layer and most parts of the transport layer; higher layer testing facilities are under development. The NCC will use their system to provide a commercial testing service.
NBS testing system
The National Bureau of Standards (NBS) in the USA started to build a certification centre in 1982 [4]. The initial purpose was to meet the requirement of testing the conformance of their own protocol implementations to Federal Information Processing Standards. The architecture of the tester was designed for the transport layer of the OSI standard, as shown in Figure 2. Both tester and client are connected via the network protocol during testing. The major difference from NPL's tester is that it does not require the test driver-responder protocol. The aims of this testing centre are to ensure that the implementation under test can respond correctly to all
valid events at the service interface or from the peer protocol; that it can reject errors passed across the service interface and peer protocol errors; and that it can handle timeouts and multiplexed connections correctly as well [4]. Although this system has been successfully used in transport layer protocol testing, the NBS considers that it is unreasonable to construct one testing system for each protocol. Consequently, an adaptation of this system has been planned for use in testing protocols of other layers of the OSI model.
Figure 2. NBS tester (test centre driver (TC) with operator console, scenario file, data generator, data sink and exception generator; transport protocol implementation; network interface and network protocol)
Two Canadian testing systems
Bochmann's group at the University of Montreal have developed some systematic methods to generate test sequences for communication protocols, particularly for transport layer protocols [5]. They have designed a simple and general test architecture as an experimental bed (see Figure 3) on which to try their methods for generating test cases. The test bed consists of an active tester connected to a test responder.
Figure 3. A test bed structure (active tester: peer unit, network protocol; test responder: unit under test, network protocol; linked by the test protocol over a communication link)
At the BNR laboratory in Ottawa, an integrated test centre has been built for testing SL-10 (a trade mark of Northern Telecom) packet network products. This centre provides four test tools which make network testing more manageable for the implementer, manufacturer and network operator, and also automatically allows
terminals and host computers, as well as total networks, to be tested for conformance and performance [6]. The functions of these four tools can be described simply as follows: IPT (Interactive Protocol Tester) checks a protocol implementation against its specification; NLTS (Network Load Test System) generates simulated traffic and measures the network's performance; NPM (Network Process Monitor) examines operating software at the module level; and NTS (Network Test System) provides an automatic or interactive test sequence for coordinating and controlling distributed network testing. Each tool can be operated individually by using a standard data terminal. The IPT, NLTS and NPM tools run on SL-10 network processors and can be operated remotely via the network or the NTS driver. By using these tools in combination, a fully integrated and automatic operational test centre can be achieved.
French test systems
France has been the most prolific source of development of testing systems [2]. Before 1983, the ADI group worked on the RHIN project, which developed three test systems [7] for protocol implementations, namely CERBERE, GENEPI and STQ. Since 1984 another group, BULL, has been working on building an OSI protocol testing environment based on a distributed testing architecture (a remote testing system and a related responder system) called STP [8-11].
CERBERE is a tool designed to be introduced between two pieces of equipment (the implementation under test and a reference implementation) running high level protocols. It acts as a relay in Layer 3 of the OSI model, and is a portable tester that is connected between the subnetwork and the system under test. Concurrently, it is able to monitor, analyse or perturb the traffic between that system and the tester. GENEPI is a protocol data unit generator which was initially used for testing Layers 4 and 5 of the OSI model; it can now be configured to test a single layer or a pair of adjacent layers. It takes the same position in the architecture as the encoder/decoder.
STQ (Test and Qualification System) consists of software designed to run on a certification centre for testing an (N)-layer protocol. The main characteristic of STQ is that it allows testing of Layers 4, 5 and 6 of the OSI model separately or simultaneously, and it assumes that there is direct testing access to the lower layer interface of the implementation under test. The architecture is shown in Figure 4 and the system is available for testing Layer 4 of the OSI model.
Figure 4. STQ test system (certification centre with vendor test driver and N reference entity; test responder with the N entity under test; test protocol, N protocol and (N-1) service over a public network)
Finally, STP (System for Testing Protocols) is a system in which the defined responder is simple, small and easy to implement. This system allows an efficient methodology for testing OSI protocol entities. The general architecture of STP is shown in Figure 5. The TS of the STP system constructs (N)-protocol data units to be sent to the (N) EUT (entity under test).
Figure 5. STP test system; TC = test centre, TS = testing system, SUT = system under test, AR = astride responder, (N) EUT = N entity under test, CT = connection for testing, CUT = connection under test
The testing system (TS) is different from most other test systems (but see also the NBS system in Figure 2) because it employs a 'scenario' which generates the test cases, rather than using a reference implementation. PDUs are produced by the tester, and the (N-1) service is used directly to exchange the (N) PDUs with the (N) EUT. Another difference is that the (N) EUT allows messages from the TS to gain access to either the upper interface, through the astride responder (AR), or the lower interface; in contrast, most other testing systems allow messages from the active tester to enter only one interface. The STP system has been implemented and is used for interactive testing of a transport entity.
CREI system
The CREI group in Italy started their research project to design a testing system in 1983 [12]. The system is intended to be used to test protocol implementations in terms of both protocol and service testing. The basic architecture of the system is shown in Figure 6.
Figure 6. CREI test system (test driver (TD) and test responder (TR) linked by the testing protocol; reference implementation (PIR) and implementation under test (IUT) over the underlying layer)
A special feature of this system is that its user can be either the implementer of the object under test or the operator of the system, without making changes to the system structure. An implementer using the system can work locally to debug the implementation under test on line and perform the test with a remote reference implementation (PIR). When the test centre operator uses the system, a remote customer implementation (IUT) is tested against the local reference implementation (PIR). This system allows the TD (tester driver) to transfer information about the service primitives to the TR (tester responder) before the testing session starts. Therefore the TR is required to have memory and an interpreter for storing and performing the TD instructions. The major functions performed by the TD and TR are: notification of the testing session opening and closing; exchange of the data needed for the synchronization of the TD and TR behaviour; and indication of any failure [12]. The TD consists of five modules, namely the database, operator-interface, control, test processing and mapping modules. The TR consists only of the test processing and mapping modules. The implementation of all modules depends on the layer being tested. It is proposed that the system should be able to test any layer of the OSI architecture, a sublayer, or even a combination of several layers of the OSI.
GMD system
At GMD in the FRG, a special protocol tester for the testing and diagnosis of higher level protocols has been developed [13]. The structure of the tester is similar to Figure 4 (the French STQ system). The proposed testing techniques assume that tests are driven under remote control by the client. At the beginning, a strategy of manually driven tests was used, which included the following: no test driver-responder protocol; no constraints on test responders; and a test driver manually controlled by the client using test commands. Later, a strategy of automatic tests was developed, based on common agreement of test protocols, definition of test responders and an active test driver which reads the test commands from a file. At GMD, the active tester can be accessed via X.25 or X.28 and can be called as a test partner from the client's system. It can also be inserted into the connection between two partners for the purpose of arbitration testing (which means determining the cause of a problem), and it allows the testing of several parallel test connections. The GMD project is one of the collaborative projects funded by the CEC (Commission of the European Communities).
Remarks
As shown above, test systems are commonly constructed by using an active tester connected to a test responder for testing OSI or similar products. A protocol product under
test is located at the responder site and a reference product is set at the active tester. All test systems should have a mechanism for generating test cases so that the operator does not need to input the test data manually. Most systems provided by third parties are designed to operate on the active tester's local site and to look on test responders as customers. Only a few, like GMD and CREI, allow the test responder to call the tester and can be operated at both sites. Current standardization work by ISO and CCITT is contributing in the following areas: first, the development of a portable test system for the OSI File Transfer Access and Management (FTAM) (Layer 7) protocol; second, automatic generation of test cases from formal protocol specifications; and third, the development of standards for conformance testing and formal protocol description techniques. When building a test system, designing the test case generator is a vitally important part of the work. Therefore, research to find better methods for generating test cases is very valuable.
METHODOLOGIES IN PROTOCOL TESTING
How can a system be produced with effective test cases, and how can these be generated automatically without looking at the details of the implementation? These problems have attracted researchers to concentrate their attention on test case design methodologies, and as a result a number of techniques for generating test cases have recently become available. For instance, Sarikaya and Bochmann have found that various methods for testing state machines can be applied to the selection of test sequences for protocols specified as a finite state machine [5]. They have applied three methods, transition tour, checking sequence and W-method, in testing a transport layer protocol. Ural and Probert [14] have adapted the method of context-free grammars to protocol testing. Most recently, Sabnani and Dahbura [15] have created a new method to produce unique input-output sequences for protocol testing.
Transition tours
The method of transition tours, like the others, assumes that the protocol to be tested is specified as a finite state machine (FSM). This is one of the simplest methods. A transition tour is an input sequence of the FSM which starts with the initial state and covers all the transitions in the FSM state table at least once. It detects all operation errors (errors in the output function), but it does not detect all transfer errors (errors in the next state function). As the test sequence is based on a traversal of the FSM graph, the test should start from the initial state, go through all possible transitions, and then come back to the initial state.
Figure 7. A finite state machine example; numbers represent states, letters represent inputs
In Reference 5 the transport protocol has been used to illustrate this method. Here a simple finite state machine (see Figure 7) will be used to show the transition tour sequence. Assuming that State 1 is the initial state, a transition tour sequence may be:
Input:   a b b d a c d c a c b a c g
State: 1 2 1 1 1 2 3 3 1 2 3 1 2 3 1
(the State row gives the initial state followed by the state reached after each input)
The sequence starts from and terminates at State 1 and directs the machine through all possible transitions. This method was first demonstrated by Naito [16] in his paper on software testing. Later, Sarikaya and Bochmann applied it in testing a transport layer protocol implementation, and found that the method was applicable to incompletely specified machines provided the machines were strongly connected, contrary to the original assumption that the machine under test should be minimal, strongly connected and fully specified. This method is generally applicable to all protocols specified by a finite state machine. The limitation is its fault detection capability: it finds operation errors but not transfer errors.
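As a concrete illustration, the short Python sketch below builds a transition tour for a small machine. It is only a sketch: the transition table is an assumption loosely reconstructed from the Figure 7 example, and the greedy strategy used (walk to the nearest state with an untraversed transition, take it, and finally return to the initial state) is just one simple way of producing a tour, not necessarily the shortest one and not the algorithm of Reference 5.

```python
from collections import deque

# Hypothetical transition table, loosely reconstructed from the Figure 7
# example (states 1-3, inputs a, b, c, d, g); the exact edges are an assumption.
FSM = {
    1: {'a': 2, 'b': 1, 'd': 1},
    2: {'b': 1, 'c': 3},
    3: {'b': 1, 'c': 1, 'd': 3, 'g': 1},
}

def walk_to(fsm, src, goals):
    """BFS over the state graph: shortest input sequence from src to any
    state in goals (empty if src is already a goal)."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        state, path = queue.popleft()
        if state in goals:
            return state, path
        for inp, nxt in fsm[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))
    raise ValueError('machine is not strongly connected')

def transition_tour(fsm, initial):
    """Greedy tour: repeatedly move to the nearest state that still has an
    untraversed outgoing transition, take one, then return to the start."""
    untraversed = {(s, i) for s in fsm for i in fsm[s]}
    state, tour = initial, []
    while untraversed:
        pending = {s for (s, _) in untraversed}
        state, path = walk_to(fsm, state, pending)
        inp = next(i for (s, i) in untraversed if s == state)
        untraversed.discard((state, inp))
        tour += path + [inp]
        state = fsm[state][inp]
    _, back = walk_to(fsm, state, {initial})   # close the tour at the initial state
    return tour + back

print('.'.join(transition_tour(FSM, 1)))
```

For a strongly connected machine this always terminates, although the tour it produces is generally longer than an optimal one.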
W-method
The W-method involves two sets of input sequences: one is the W-set, the other the P-set. The W-set is a characterization set of the minimal FSM, and consists of input sequences that can distinguish between the behaviours of every pair of states. The P-set consists of all partial paths. The W-method gives a set of test sequences formed by the concatenation of the P-set and W-set. Each test sequence starts with the initial state and returns to it again afterwards. It is also guaranteed to detect any misbehaviour of the machine. We again use the example in Figure 7 to describe how to form a P.W set. As the W-set sequences should distinguish the behaviours of every two states, input 'b' can serve as the W-set for this finite state machine. The P-set can be obtained from the tree in Figure 8.
Figure 8. Testing tree
P = { {}, b, a, d, a.{b,c}, a.c.{d,a,c,b,g} }
PW = { b, b.b, a.b, d.b, a.{b,c}.b, a.c.{d,a,c,b,g}.b }
The W-set is constructed using a special method [17], and the P-set can be formed from a testing tree [18], which shows every transition from State i to State j on each input. However, the limitation of this method is that it is not certain that every finite state machine will have a W-set, especially if it is an incompletely specified machine. Therefore, before using this method, one should first make sure that the defined machine has a W-set. When a machine does not have a W-set, a procedure can be applied to form a completely specified machine which does, for example by adding an 'error' state and declaring all unspecified transitions to lead to this state.
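The sketch below shows how a P.W test set might be computed mechanically. The three-state Mealy machine is an assumed example (the Figure 7 machine is given without outputs), and the brute-force search for the W-set is only illustrative; Reference 17 gives the proper construction.

```python
from itertools import product

# Hypothetical three-state Mealy machine (state -> input -> (output, next state)).
# Assumed example for illustration only.
FSM = {
    1: {'a': ('x', 2), 'b': ('y', 1)},
    2: {'a': ('x', 3), 'b': ('x', 1)},
    3: {'a': ('y', 1), 'b': ('x', 3)},
}
INPUTS = ['a', 'b']

def output_seq(fsm, state, seq):
    """Output sequence produced when seq is applied starting in state."""
    out = []
    for inp in seq:
        o, state = fsm[state][inp]
        out.append(o)
    return tuple(out)

def w_set(fsm, max_len=4):
    """Characterization set: one input sequence per pair of states on which
    their output sequences differ."""
    w, states = set(), list(fsm)
    for i, s in enumerate(states):
        for t in states[i + 1:]:
            for n in range(1, max_len + 1):
                seq = next((c for c in product(INPUTS, repeat=n)
                            if output_seq(fsm, s, c) != output_seq(fsm, t, c)), None)
                if seq is not None:
                    w.add(seq)
                    break
            else:
                raise ValueError(f'states {s} and {t} cannot be distinguished')
    return w

def p_set(fsm, initial):
    """Transition cover built from a breadth-first testing tree: the path to
    every tree node plus that path extended by each defined input."""
    paths, frontier, visited = {()}, [(initial, ())], {initial}
    while frontier:
        state, path = frontier.pop(0)
        for inp, (_, nxt) in fsm[state].items():
            paths.add(path + (inp,))
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + (inp,)))
    return paths

# Test set: every partial path followed by every characterizing sequence (P.W).
P, W = p_set(FSM, 1), w_set(FSM)
for test in sorted({p + w for p in P for w in W}):
    print('.'.join(test))
```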
Checking sequences
A checking sequence is a test sequence which consists of three parts:
• an initial sequence
• a state recognition sequence
• a transition checking sequence
Users of this method should ensure that the FSM under test has a 'distinguishing sequence' (DS). The DS is similar to the W-set mentioned above: it is an input sequence on which, for each initial state, the machine will produce a different output sequence. One can obtain a DS by constructing a successor tree [19] for the FSM. At the start of testing, one should first use an initial sequence to bring the FSM to the initial state. Then a state recognition sequence is used to show the response of each state to the DS, followed by a transition checking sequence to check all individual transitions in the state machine. Each transition from a certain state using input Xi (say) is checked by applying the sequence Xi.DS. The checking sequence is formed by the concatenation of all Xi.DS
sequences. The problem is that when the number of states is large it can be difficult to find a DS for a specified machine. Normally, a 'read state' input/output facility is added to help obtain a DS.
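The sketch below illustrates the core of the method under simplifying assumptions: the Mealy machine is again an invented example, the DS is found by brute-force search rather than from a successor tree, and the explicit initial and state recognition phases are folded into a single pass that checks every transition with Xi followed by the DS, so it omits the bookkeeping of the full construction.

```python
from collections import deque
from itertools import product

# Hypothetical Mealy machine (state -> input -> (output, next state));
# an assumed example, since the Figure 7 machine is given without outputs.
FSM = {
    1: {'a': ('x', 2), 'b': ('y', 1)},
    2: {'a': ('x', 3), 'b': ('x', 1)},
    3: {'a': ('y', 1), 'b': ('x', 3)},
}
INPUTS = ['a', 'b']

def run(fsm, state, seq):
    """Apply seq from state; return (output sequence, final state)."""
    out = []
    for inp in seq:
        o, state = fsm[state][inp]
        out.append(o)
    return tuple(out), state

def distinguishing_sequence(fsm, max_len=4):
    """Brute-force DS search: one input sequence whose output sequence is
    different for every possible starting state."""
    for n in range(1, max_len + 1):
        for cand in product(INPUTS, repeat=n):
            outs = [run(fsm, s, cand)[0] for s in fsm]
            if len(set(outs)) == len(outs):
                return list(cand)
    raise ValueError('no distinguishing sequence found up to this length')

def transfer(fsm, src, dst):
    """Shortest input sequence taking the machine from src to dst (BFS)."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        s, path = queue.popleft()
        if s == dst:
            return path
        for inp in fsm[s]:
            nxt = fsm[s][inp][1]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))

ds = distinguishing_sequence(FSM)
checking, current = [], 1          # assume the machine starts in state 1
for state in FSM:                  # check every transition with Xi followed by the DS
    for inp in FSM[state]:
        checking += transfer(FSM, current, state)   # bring the machine to the head state
        checking += [inp] + ds                      # apply Xi.DS
        _, current = run(FSM, state, [inp] + ds)
print('DS =', '.'.join(ds), ' checking sequence =', '.'.join(checking))
```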
Context-free grammars
Grammars are often used to define languages. The context-free grammar method discussed here uses an attributed grammar to build a programming language for specifying and generating test sequences from the service or protocol specifications. The use of an attributed grammar to generate test cases in software testing was suggested by Duncan and Hutchison [20]. Ural and Probert have adapted this method from its use in general software testing, and have developed a method to generate test sequences for transport layer service and protocol testing [14]. When the protocol is specified using a context-free grammar, it can easily be tested using this method. To apply the method to protocol testing it is necessary, first, to understand correctly the functional requirements of the communication protocol; second, to construct an attributed context-free grammar and specify test sequences from the functional requirements; third, to use a language defined by the grammar to specify a generator for test sequences; fourth, to implement the grammar as a generator program; finally, to run the programs in a controlled fashion and generate a set of representative test sequences randomly, systematically or selectively. Although the test sequence generator can produce test sequences automatically, the generator design needs careful human insight. Therefore, this is called a semiautomatic method.
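As a rough illustration of the idea, the sketch below derives test sequences from a toy context-free grammar for an assumed connection-oriented exchange. The PDU names (CR, CC, DT, AK, DR, DC) and the grammar itself are invented for the example; this is not the attributed transport grammar of Ural and Probert, and attributes are omitted.

```python
import random

# A toy context-free grammar for an assumed connection-oriented exchange;
# nonterminals map to lists of alternative productions, terminals are PDU names.
GRAMMAR = {
    'session':    [['connect', 'transfers', 'disconnect']],
    'transfers':  [[], ['transfer', 'transfers']],       # zero or more data phases
    'transfer':   [['DT'], ['DT', 'AK']],                # data with or without an ack
    'connect':    [['CR', 'CC']],
    'disconnect': [['DR', 'DC']],
}

def derive(symbol, rng, depth=0, max_depth=6):
    """Expand a symbol into a sequence of terminal PDUs by randomly choosing
    productions; a depth bound forces the shortest rule to keep derivations finite."""
    if symbol not in GRAMMAR:                 # terminal: a PDU name
        return [symbol]
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rules = [min(rules, key=len)]
    seq = []
    for sym in rng.choice(rules):
        seq += derive(sym, rng, depth + 1, max_depth)
    return seq

rng = random.Random(0)
for _ in range(3):                            # three randomly generated test sequences
    print(' '.join(derive('session', rng)))
```

In the real method the grammar carries attributes (sequence numbers, lengths, and so on) so that generated sequences are semantically valid as well as syntactically correct; that is the part which still requires careful human design.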
UIO sequence generation
Sabnani and Dahbura [15] have introduced another technique for generating protocol tests by means of a novel procedure which can generate test sequences. The procedure is based on the assumption that the protocol is specified as a minimal finite state machine. The key idea in the procedure is first to compute a unique input/output (UIO) sequence for each state in the protocol specification, and then to generate a test sequence which visits all states and state transitions by using the UIO sequences. The testing covers all the state transitions and checks the UIO sequence for each state. It can also detect problems that exist in the implementation, such as an incorrect edge label, an incorrect head state or an incorrect tail state. The procedure for generating test sequences consists of two main steps. The first step is to compute a UIO sequence for each state, as follows:
• For each edge label, compute the list of edges with that label (for later use);
• Compute all input/output sequences of length 1 for each state;
• Check if these are unique; if yes, a UIO sequence for that state has been found;
• If not, compute sequences of length 2 for each state which does not yet have a UIO sequence;
• Continue computing sequences of length l + 1 and checking uniqueness until a UIO sequence has been found for every state.
The second step is to generate a test sequence, as follows:
(1) Generate a test to check whether each state is reachable from the initial state and whether it has a UIO sequence (by using the method above);
(2) Generate a test to check whether each edge has the correct head state, the correct tail state, and the correct label;
(3) Concatenate the tests from (1) and (2), and remove redundancy in order to reduce the test sequence length.
The format of the test sequences is the edge label sequence I1/O1 I2/O2 I3/O3 . . .; the label of an edge contains the input part (I) and the expected output part (O). The test sequence gives a clear overview of the input and output behaviour, and is therefore very helpful for comparing results. When the input parts of the labels are taken as a test input sequence for the protocol testing, a sequence of results is obtained. If there is any difference between the actual results and the output parts of the test sequence, it is obvious that the protocol under test is faulty. This method has also been applied to the Alternating Bit Protocol [15] and the Proway protocol [22].
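A minimal sketch of the first step is given below. The Mealy machine is an assumed example; the code simply searches input sequences of increasing length until each state's input/output behaviour is unique, printing UIO sequences in the I1/O1 I2/O2 format used above.

```python
from itertools import product

# Hypothetical Mealy machine: state -> input -> (output, next state); assumed example.
FSM = {
    1: {'a': ('x', 2), 'b': ('y', 1)},
    2: {'a': ('x', 3), 'b': ('x', 1)},
    3: {'a': ('y', 1), 'b': ('x', 3)},
}
INPUTS = ['a', 'b']

def io_seq(fsm, state, inputs):
    """Input/output behaviour of `state` on `inputs`, as (I, O) pairs."""
    pairs = []
    for inp in inputs:
        out, state = fsm[state][inp]
        pairs.append((inp, out))
    return tuple(pairs)

def uio_sequences(fsm, max_len=4):
    """Step 1 of the procedure: for each state, try sequences of length 1, 2, ...
    until its I/O behaviour is unique among all states."""
    uio = {}
    for length in range(1, max_len + 1):
        for state in fsm:
            if state in uio:
                continue
            for cand in product(INPUTS, repeat=length):
                behaviour = io_seq(fsm, state, cand)
                if all(io_seq(fsm, other, cand) != behaviour
                       for other in fsm if other != state):
                    uio[state] = behaviour
                    break
        if len(uio) == len(fsm):
            return uio
    raise ValueError('some state has no UIO sequence up to this length')

for state, seq in uio_sequences(FSM).items():
    print(state, ' '.join(f'{i}/{o}' for i, o in seq))
```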
Comparison
Except for the context-free grammar method, which is mainly used when the protocol is specified by a context-free grammar, all the others are based on a finite state machine specification. Bochmann has carried out a comparison of the first three methods mentioned above. For fault detection capability, the checking sequence is best, and the W-method is better than the transition tour. For the length of sequence, the transition tour gives the shortest test sequence, because the test sequence is formed directly from the FSM diagram and does not need any extra characterizing sequence to be added. For the W-method and checking sequence methods, the problem is that it is sometimes difficult to find the DS or W-set for a finite state machine module, resulting in much more work. All these methods test only whether every path is reachable or not, and give no guidance on choosing further test cases. The UIO sequence technique is an advance on the others in that it is an approach to generating tests automatically. The technique emphasises testing of each state rather than testing the whole machine. Any protocol layer which is specified by an FSM can be tested by this technique. Those who do the testing need only input an
edge set which consists of head state, tail state and edge label, together with the number of edges; a serial test sequence will then be generated for each state and each edge. A great advantage of this technique is that it is possible to make short test sequences, because although each state has a different UIO sequence, the majority of UIO sequences are of length one.
EXPERIENCE IN IMPLEMENTING A UIO GENERATOR
As has already been described, the technique for generating UIO test sequences is fairly straightforward and the ideas involved are easy to understand. The range of application is wide: any protocol specified by a finite state machine can use it directly to generate test sequences, provided the machine is minimal; for machines that are not minimal it is enough to add a minimization step first. The authors have implemented this technique [22] and found that the overall test sequence generator consists of four basic parts:
• The UIO sequence generator: generates a unique input/output sequence for each state.
• The shortest path generator: generates a sequence giving the shortest path to each state.
• The edge test sequence generator: generates a test sequence for each edge, formed by the shortest path to the edge's head state, plus the edge label, plus the UIO sequence of the edge's tail state.
• The state test sequence generator: generates a test sequence for each state, formed by the shortest path to the state, plus the UIO sequence of that state.
The unoptimized test sequence is the concatenation of the state and edge test sequences. In order to remove sequences that are completely contained in others, and thus make the overall test sequence shorter, another comparison procedure is needed; this is a fifth part of the test sequence generator. The authors have applied the technique to generate test sequences for testing Proway [21] protocol implementations. The inputs were based entirely on the state machine diagram of the protocol specification. After processing the inputs, the generator produces a set of test sequences automatically; these sequences are input directly to the implementation under test, so test results were easily obtained without doing any more work. Once the complete results, including the test sequences generated by the procedure, were obtained, it was only necessary to compare the outputs produced by the implementation with the output parts of the test sequences. The authors' experience of implementing and using this technique shows that it is simple, effective and applicable to testing most protocols.
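A compact sketch of such a generator is shown below, covering the first four parts plus a simple redundancy check. The machine, like the earlier examples, is an assumption (the Proway state table is not reproduced here); edge tests are built as the shortest path to the head state, then the edge label, then the UIO sequence of the tail state, and state tests as the shortest path plus the state's own UIO sequence.

```python
from collections import deque
from itertools import product

# Hypothetical Mealy machine: state -> input -> (output, next state); assumed example.
FSM = {
    1: {'a': ('x', 2), 'b': ('y', 1)},
    2: {'a': ('x', 3), 'b': ('x', 1)},
    3: {'a': ('y', 1), 'b': ('x', 3)},
}
INPUTS, INITIAL = ['a', 'b'], 1

def io_seq(fsm, state, inputs):
    """Run inputs from state and return the edge labels as 'I/O' strings."""
    labels = []
    for inp in inputs:
        out, state = fsm[state][inp]
        labels.append(f'{inp}/{out}')
    return labels

def uio(fsm, state, max_len=4):
    """UIO sequence generator: shortest input sequence whose I/O behaviour is unique to state."""
    for n in range(1, max_len + 1):
        for cand in product(INPUTS, repeat=n):
            if all(io_seq(fsm, s, cand) != io_seq(fsm, state, cand)
                   for s in fsm if s != state):
                return list(cand)
    raise ValueError(f'no UIO sequence for state {state}')

def shortest_path(fsm, src, dst):
    """Shortest path generator: BFS input sequence from src to dst."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        s, path = queue.popleft()
        if s == dst:
            return path
        for inp in fsm[s]:
            nxt = fsm[s][inp][1]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [inp]))

tests = []
for state in FSM:
    # State test: shortest path to the state plus its UIO sequence.
    tests.append(io_seq(FSM, INITIAL, shortest_path(FSM, INITIAL, state) + uio(FSM, state)))
    for inp in FSM[state]:
        # Edge test: shortest path to the head state + edge label + UIO of the tail state.
        prefix = shortest_path(FSM, INITIAL, state) + [inp]
        tests.append(io_seq(FSM, INITIAL, prefix + uio(FSM, FSM[state][inp][1])))

# Simple optimisation pass: drop any test that is a prefix of another test.
tests = [t for t in tests if not any(t != u and u[:len(t)] == t for u in tests)]
for t in tests:
    print(' '.join(t))
```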
CONCLUSION
In this paper a survey of protocol testing systems has been
presented and some methods used in generating test cases have been described. Existing support test systems are all designed for particular layers of the OSI model, especially for testing network and transport layer protocols. These systems are successfully used in fulfilling their original goal: conformance testing for protocol standardization. With such testing systems, standard protocol products will have a large market and data communication between different devices will be much easier than before. Finding new and simple methods for generating test cases is very important in the protocol testing area. As the specification of a protocol is usually described formally in a standards document, designing a test case generator will not generally be too difficult. The five methods introduced in this paper can all be used to generate test cases for communication protocols, and most of them aim at automatic testing. Once active testers are equipped with automatic test case generators, automatic testing can be fully realized.
ACKNOWLEDGEMENTS The authors are grateful to the two referees for their helpful and detailed criticism of the original version of this paper.
REFERENCES
1 Rayner, D 'A system for testing protocol implementations' in Sunshine, C (ed) Protocol specification, testing and verification North-Holland (1982)
2 Davidson, I 'Testing conformance to OSI standards' Comput. Commun. Vol 8 No 4 (August 1985) pp 170-179
3 Davidson, I 'Independent testing of protocols' Proc. Networks '84 (June 1984)
4 Nightingale, J S 'Protocol testing using a reference implementation' in Sunshine, C (ed) Protocol specification, testing and verification North-Holland (1982)
5 Sarikaya, B and Bochmann, G V 'Some experience with test sequence generation for protocols' in Sunshine, C (ed) Protocol specification, testing and verification North-Holland (1982)
6 Hornbeek, M W A 'An integrated test centre for SL-10 packet networks' ACM Comput. Commun. Rev. Vol 15 No 4 (September 1985)
7 Ansart, J P 'A protocol independent system for testing protocol implementation' in Sunshine, C (ed) Protocol specification, testing and verification North-Holland (1982)
8 Rafiq, O and Haddad, J 'Description of protocol testing scenarios' COMNET '85 (Conference on 'Services conveyed by computer networks') Budapest, Hungary (1-4 October 1985)
9 Rafiq, O 'Tools and methodology for testing OSI protocol entities' Int. Symp. Fault Tolerant Computing IEEE, Ann Arbor, USA (19-21 June 1985)
10 Rafiq, O 'A good approach for testing protocol implementations' Int. Semin. Comput. Network. Perf. Eval. Tokyo, Japan (18-20 September 1985)
11 Rafiq, O et al. 'Towards an environment for testing OSI protocols' Fifth International Workshop on Protocol Specification, Testing and Verification Toulouse-Moissac, France (10-13 June 1985)
12 Palazzo, S et al. 'A layer independent architecture for a testing system of protocol implementations' in Rudin, H and West, C H (eds) Protocol specification, testing and verification Elsevier Science (1983)
13 Giebler, A 'Testing and diagnosis aids for higher level protocols' in Rudin, H and West, C H (eds) Protocol specification, testing and verification Elsevier (1983)
14 Ural, H and Probert, R 'User-guided test sequence generation' in Rudin, H and West, C H (eds) Protocol specification, testing and verification Elsevier (1983)
15 Sabnani, K and Dahbura, A 'A new technique for generating protocol tests' ACM Comput. Commun.
Rev. Vol 15 No 4 (September 1985)
16 Naito, S and Tsunoyama, M 'Fault detection for sequential machines by transition tours' Proc. IEEE Fault Tolerant Computing Conf. (1982)
17 Gill, A Introduction to the theory of finite-state machines McGraw-Hill, USA (1962)
18 Chow, T S 'Testing software design modelled by finite state machines' IEEE Trans. on Software Engineering Vol SE-4 No 3 (May 1978)
19 Kohavi, Z Switching and finite automata theory McGraw-Hill, USA (1978)
20 Duncan, A G and Hutchison, J S 'Using attributed grammars to test designs and implementations' Proc. 4th Int. Conf. Software Engineering (1981)
21 British Standards Institution Draft: Process data highway, type C, for distributed process control system (1985)
22 Wang, B 'Implementation and testing of computer network protocols' MPhil thesis, University of Lancaster, UK (1986)