Conformance Testing for OSI Protocols *

Richard J. Linn, Jr.
National Institute of Standards and Technology, National Computer Systems Laboratory, Gaithersburg, MD 20899 USA

Abstract. In the early 1980's, research and development was initiated on methods to test the developing International Standards Organization's Open Systems Interconnection (ISO/OSI) communications protocols. In 1983, ISO initiated work to develop standardized test methods. A tutorial overview of the test methods and the test notation named TTCN which were developed within ISO is presented. Issues regarding multi-layer test methods are still not completely resolved. These issues are identified and alternatives to ISO's methods are explored. Application of the formal description techniques named ASN.1 and Estelle to multi-layer test systems is illustrated. Concluding remarks summarize the status of current practice in conformance testing and the status of evolving OSI testing standards.

Keywords. ASN.1, communications protocols, conformance testing, ferry methods, Open Systems Interconnection (OSI), Estelle, test architectures, test methodology, Tree and Tabular Combined Notation (TTCN).

1. Introduction

Given that we have international standards for communication protocols which define their behavior and encoding of messages, testing implementations of protocols should be done in a uniform way and be relatively easy. This is the shared goal of the standards bodies who define the protocols and test methods, test laboratories, and vendors who implement the standards in products. (Consumers of communication products just want them to work together!) But given the complexity of Open Systems Interconnection (OSI) protocols and the state of conformance evaluation methods, "Is the goal a reality or a fantasy ... ?" The reality is that conformance testing is a difficult technical issue fraught with problems. So, we start with a discussion of the issues and problems, which includes at least the following:
- Initially, standards may be based upon a paper design, and as a result, may contain errors, omissions and ambiguities.
- Several options or alternative behaviors may be allowed.
- Often, conformance criteria are not precisely specified in a standard.
- In a layered protocol architecture, is a stack of protocols to be tested as individual layers, as a collective unit, or both? Individual layer testing implies exposed interfaces must exist at each layer (interfaces might not be exposed in a product).
- Multi-layer testing (a group of two or more layers) raises issues of how layers should be grouped for testing; i.e., at which layers will interfaces exist and what services are actually implemented? As a result of decisions made regarding grouping, multi-layer testing may restrict the amount of testing possible on lower layers within the group.
- Is the testing to be conducted locally (the test system has direct access to the interfaces of the implementation), or remotely (the test system has access to the implementation via underlying communications services)?

Richard (Jerry) Linn is a computer scientist and manager of the Automated Protocol Methods Program within the National Computer Systems Laboratory at the National Institute of Standards and Technology (NIST) in Maryland. Before joining NIST, he was a senior systems engineer and managed groups supporting data communications, field engineering, and systems engineering of data acquisition and process control systems used in research laboratories at Virginia Tech. His recent research and development activities at NIST have focused on the formal description techniques for OSI protocols named Estelle and Abstract Syntax Notation One (ASN.1) and conformance evaluation methodology for OSI protocols. He has contributed to international standards in both areas. Linn received his B.S. degree in Forestry in 1972 and an M.S. degree in Computer Science in 1981 from Virginia Polytechnic Institute and State University.


* This work is a contribution of the National Institute of Standards and Technology (formerly, the National Bureau of Standards) and is not subject to copyright.


North-Holland Computer Networks and ISDN Systems 18 (1989/90) 203-219

0169-7552/90/$3.50 © 1990, Elsevier Science Publishers B.V. (North-Holland)

- What test coordination procedures are to be used (if any), and what degree of coordination is desirable and/or acceptable?
- Are test coordination procedures to be automated? If so, what test coordination protocol definitions are to be used in an automated environment?
- May requirements beyond those specified in a protocol standard be placed upon a supplier to make a product testable?
- How are tests to be specified: as test cases interpreted/executed by a software tool; by a suite of programs, each testing specific functions; or by a reference entity enhanced with features for testing?
- Are the tests themselves to be standardized? If so, are test cases generated by manual, semi-automated or fully automated methods?
- How is test suite coverage to be measured, and how many tests are enough?
- How are test cases to be selected from a test suite, given optional features in a protocol and valid implementation choices?
- What information must be recorded during testing to support verdicts rendered either manually or automatically? What is the form and structure of the information recorded?
- What is the format of test reports: what information must be reported; what information must be treated as confidential; who owns the information; and when must a test report be made public?
- Are executable test systems to be standardized?
- Are testing procedures to be standardized?
- How is mutual recognition of test results and test reports to be achieved?

The issues range from technical to political and legal areas. The issues and the assumptions underlying the issues influence answers to the questions. Different resolutions lead to different ways to conduct conformance testing. Nonetheless, all test methods attempt to assess the static and dynamic aspects of a protocol implementation with respect to criteria specified in a standard. Examples of the static aspects of testing are assessing the validity of implementation choices regarding options and services offered, and assessing encodings


of protocol data units (PDUs). Examples of the dynamic aspects of testing are assessing the validity of messages sent in response to messages received and assessing error recovery mechanisms. The process of testing is outlined after ISO's test methodology is introduced.

ISO has provided some answers in an evolving standard composed of five parts [11-13]. The scope of the standard includes definition of concepts and terms, test methods, a test notation, and proformas for use in protocol standards and test reports. It also defines requirements for suppliers of test systems and executable test suites, test laboratories, and their clients. Since this work is intended to be applied by other working groups within ISO and CCITT, other answers are found in the protocol standards themselves: annexes to standards identify specific testing requirements and solicit specific answers from suppliers of OSI products. Additional answers are found in test suites being developed by ISO and CCITT for individual protocols or groups of protocols (e.g., what test methods will be standardized for a particular protocol).

This tutorial focuses on a subset of the technical issues. It begins with an overview of early research and development. Then an overview of the ISO work on conformance evaluation methodology presents a number of alternatives and answers to some of the questions raised. Finally, problems unanswered by current ISO work are identified, and alternative solutions to the problems are proposed.
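To make the distinction concrete, the two kinds of assessment can be sketched in a few lines. This is a hypothetical illustration only: the feature names, stimulus codes, and response table below are invented for the example and do not come from any OSI standard.

```python
# Hypothetical sketch: static vs. dynamic conformance checks.
# Feature names and event codes are invented for illustration.

MANDATORY_FEATURES = {"connection_establishment", "data_transfer"}

def static_check(declared_features):
    """Static conformance: every mandatory feature must be declared."""
    return MANDATORY_FEATURES.issubset(declared_features)

# Legal responses to a stimulus, per an imaginary protocol table:
VALID_RESPONSES = {
    "CR": {"CC", "DR"},   # connect request -> connect confirm or disconnect
    "DT": {"AK"},         # data -> acknowledgement
}

def dynamic_check(stimulus, response):
    """Dynamic conformance: the observed response must be legal for the stimulus."""
    return response in VALID_RESPONSES.get(stimulus, set())

print(static_check({"connection_establishment", "data_transfer", "expedited"}))
print(dynamic_check("CR", "CC"), dynamic_check("CR", "AK"))
```

A real static check would also validate PDU encodings and parameter ranges; the point is only that static checks examine declarations and encodings, while dynamic checks examine stimulus/response behavior.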

2. Background

Within Europe in the early 1980's, formal collaborative research efforts were initiated to establish methods for testing implementations of communications protocols for conformance, as international standards were emerging under the umbrella of Open Systems Interconnection (OSI). Initial collaboration was between Agence de l'Informatique (ADI), Paris, France; Gesellschaft für Mathematik und Datenverarbeitung (GMD), Darmstadt, FRG; and the National Physical Laboratory (NPL), Teddington, Middlesex, England. Each research laboratory had industrial partners and focused upon a different aspect of testing. ADI designed and implemented an X.25


tester; GMD developed a language-oriented analytic tool for passive monitoring and error detection for the ISO Session protocol; NPL developed a test system for a Network Service. Since the National Institute of Standards and Technology (NIST) in the USA was developing a test system for the ISO Transport Class 4 protocol, NIST was invited to participate. Each of the tools developed was based upon different design architectures and philosophies. By 1984, other researchers from Europe and Canada were invited to join the effort. Much of the early work is reported in the proceedings of the first through fifth IFIP workshops on Protocol Specification, Testing and Verification [1-5]. Early work considered a variety of test architectures and test languages, and demonstrated the viability of several test methods, automated generation of test sequences from formal specifications, and specification languages for protocols. In 1986, Bochmann [25] and his students surveyed this work and identified tools for development of formal specifications, code generation from formal specifications and tools employed for testing.

Results and terminology coming out of the initial research efforts were subsumed by ISO's work on conformance testing methodology. Initiated in 1983, it subsequently became a joint effort with the International Telegraph and Telephone Consultative Committee (CCITT). A sequence of demonstrations between 1984 and 1988 showed the progress of this work. Among the earliest demonstrations of OSI protocols was one at the National Computer Conference in 1984. For six months prior to the show, the National Bureau of Standards and General Motors Corporation conducted testing on thirteen vendors' prototype implementations of the Transport Class 4 and IEEE 802.3/4 protocols [34]. In 1985, the Industrial Technology Institute conducted tests on prototype implementations of the File Transfer and Management, Session, Transport, Connectionless Network and IEEE 802.3/4 protocols in preparation for a demonstration at the AUTOFACT'85 trade show [34]. In 1988, the Enterprise Networking Event (ENE '88) demonstrated Message Handling Systems (electronic mail) as well as the other protocols operating over local networks and global interconnections via wide area networks. Corporation for Open Systems demonstrated testing of OSI products at ENE '88. Pre-ENE '88 testing employed tools which implement


some of the test methods described in the next section. However, this does not imply that all aspects of OSI protocol testing are mature. Testing of lower layer protocols is the most mature because they are older (e.g., X.25 and Transport); upper layer protocols are newer and present unique problems for two reasons: (1) several layers of protocol may be embedded in a single product; and (2) use of Abstract Syntax Notation One (ASN.1) [7-8] to describe message syntax permits many encodings of the same message. The next two sections 1 introduce ISO's test methods and test notation (Tree and Tabular Combined Notation--TTCN). The terminology of ISO is adopted and is introduced in context.
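The ASN.1 problem noted above can be made concrete: the Basic Encoding Rules permit the length octets of a value to be written in short form or in long form, so two byte-for-byte different octet strings can encode the same abstract value, and a tester cannot simply compare PDUs byte by byte. The decoder below is a deliberately minimal sketch for this one case, not a conforming BER implementation:

```python
# Two legal BER encodings of the same value, INTEGER 5:
short_form = bytes([0x02, 0x01, 0x05])        # tag=INTEGER, short-form len=1, value=5
long_form  = bytes([0x02, 0x81, 0x01, 0x05])  # tag=INTEGER, long-form len (0x81 = one
                                              # length octet follows), value=5

def decode_integer(data):
    """Minimal BER INTEGER decoder handling short form and long-form lengths."""
    assert data[0] == 0x02, "not an INTEGER tag"
    if data[1] & 0x80:                       # long form: low bits = number of length octets
        n = data[1] & 0x7F
        length = int.from_bytes(data[2:2 + n], "big")
        value_octets = data[2 + n:2 + n + length]
    else:                                    # short form: the octet is the length itself
        length = data[1]
        value_octets = data[2:2 + length]
    return int.from_bytes(value_octets, "big", signed=True)

print(decode_integer(short_form), decode_integer(long_form))
```

Both encodings decode to the same integer even though the octet strings differ, which is exactly why upper-layer testers must decode PDUs rather than match them literally.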

3. ISO's Single Layer Conformance Methods

A brief overview of the methodology and framework developed by ISO is presented. The concepts and vocabulary used within ISO also provide the framework to discuss other work. The ISO documents [11-13] and Rayner [40-42] provide more detail. It is important to remember that this discussion focuses on models, or a framework, which are independent of implementation. ISO's work defines a framework for the description of tests whose purpose is to provide sufficient information to determine if an implementation of a protocol conforms to a standard. A judgment is made based upon static and dynamic aspects of an implementation. Static conformance criteria focus on proper formation of data structures, coding of protocol data units and conditional criteria which depend upon legitimate choices by an implementor about services offered by the Implementation Under Test (IUT). Dynamic conformance criteria focus on the behavior of an IUT as it interacts with another implementation (or a test system).

3.1. The Local Method

The local method of conformance testing (Fig. 1) is important as the basis for a large body of

1 Due to overlapping subject matter, parts of Sections 3, 4 and 6 are nearly identical to materials contained in another paper [38].


Fig. 1. Local method.

terminology and concepts that are subsequently applied to other models. A basic assumption of the local method is that exposed interfaces exist above and below the IUT. These interfaces serve as points of control and observation (PCOs); i.e., points at which a real test system can control inputs to and observe outputs from an IUT. Using ISO's conventions, the layer under test is referenced as the N-layer and the next lower layer as the (N-1)-layer. Protocol service definitions define abstract service primitives (ASPs) exchanged at the top interface of a protocol entity. Abstractly, each protocol entity exchanges (N-1)-ASPs with an underlying service provider. ASPs comprise a logical set of test events (with parameters) which can be controlled and observed at the interfaces of an IUT. The set of test events exchanged at the top interface is denoted as (Nt)-ASPs; t for top. The same is true of the bottom interface, and the test events are called (Nb-1)-ASPs; b for bottom. Each protocol defines a set of messages to be exchanged with a peer entity; they are called Protocol Data Units (PDUs). (N)-PDUs are exchanged as data in the parameters of some of the (Nb-1)-ASPs; other (Nb-1)-ASPs convey control information to use the services of the (N-1)-layer.

The test method includes two logically distinct elements called the upper tester and lower tester because of their relationship to the interfaces of the IUT. In earlier work, an implementation of a lower tester was called either a test driver or an encoder/decoder; an upper tester was called a test responder. But ISO did not adopt these terms. The upper tester is assumed to generate and receive a set of test events complementary to the set of (Nt)-ASPs generated and received by the IUT at its top interface. Similar assumptions are made for the bottom interface of the IUT and the lower tester. The model in Fig. 1 comprises a test harness around the IUT which coordinates the actions of the upper and lower testers. The roles of the upper and lower testers are to stimulate the IUT by exchanging test events at the top and bottom interfaces of the IUT. The lower tester is also assumed to record (log) test events so that the behavior of the IUT may be assessed and there is evidence supporting pass/fail (or possibly inconclusive) verdicts.

Test cases provide the means of specifying the actions of the upper and lower testers. Test cases specified in TTCN may be interpreted by a test system (manually translated, or compiled into some executable form). The statements of a test case may make reference to named PCOs, (Nt)-ASPs, (Nb-1)-ASPs, (N)-PDUs and their component fields. Thus, test cases define the actions of the upper and lower testers which control and observe test events exchanged at the upper and lower interfaces of the IUT. In the local method, test coordination procedures are expressed abstractly by the ASPs used to specify the actions of the upper and lower testers.

Note, even though the lower tester is below the IUT, it functions as a peer protocol entity. The lower tester exchanges N-PDUs with the IUT through its bottom interface. Therefore, the lower tester may emulate normal behavior of an N-layer protocol entity during testing, or it may inject errors to test for error recovery. In summary, the local method of testing assumes a test harness around the IUT and exposed interfaces above and below the IUT. It may not be applicable when conformance evaluation is done by a test laboratory because of these assumptions.
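These roles can be sketched in miniature. All class names, event names, and behaviors below are invented for illustration: the "IUT" is a toy entity, not a real protocol implementation, and the harness models only the local method's essentials (two exposed PCOs and a shared event log supporting the verdict).

```python
# Minimal sketch of the local method: upper and lower testers share one
# harness around the IUT and exchange test events at two exposed PCOs.
# Names and events are invented; the "IUT" is a toy echo-like entity.

class ToyIUT:
    """Stand-in implementation under test with exposed top/bottom interfaces."""
    def from_top(self, asp):            # (Nt)-ASP arriving at the top PCO
        return ("N-PDU", asp)           # PDU emitted at the bottom interface
    def from_bottom(self, pdu):         # (Nb-1)-ASP delivering a peer PDU
        return ("Nt-ASP-ind", pdu[1])   # indication issued at the top interface

class LocalHarness:
    """Coordinates upper and lower testers and logs all events for the verdict."""
    def __init__(self, iut):
        self.iut, self.log = iut, []
    def upper_tester_send(self, asp):
        self.log.append(("UT!", asp))
        self.log.append(("LT?", self.iut.from_top(asp)))    # lower tester observes
    def lower_tester_send(self, pdu):
        self.log.append(("LT!", pdu))
        self.log.append(("UT?", self.iut.from_bottom(pdu))) # upper tester observes

harness = LocalHarness(ToyIUT())
harness.upper_tester_send("CONreq")
harness.lower_tester_send(("N-PDU", "CC"))
print(harness.log)
```

Because both testers live in the same harness, every event lands in one log with a single ordering, which is precisely the synchronization advantage the local method loses when the testers are split across systems.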

3.2. Distributed Method

When an implementor (client) arranges for a test laboratory to test a protocol implementation, direct access to the bottom interface of the IUT is not likely to be available (except in the data link layer, where media standards require an exposed interface). The distributed method (Fig. 2) is one of three models which make no assumptions about the existence of a PCO at the bottom of the IUT.


Fig. 2. Distributed method.

In all methods employing a communications service, ISO denotes the complementary set of abstract service primitives available to the lower tester as (Nb-1)-ASP''s (double prime). Note, the lower tester and IUT reside in two different systems. The lower tester and IUT are connected by an underlying OSI service which offers an (N-1)-service using lower-layer protocols and the physical media connecting the systems. Despite its name, the lower tester is obviously a peer entity of the IUT in this figure. The arrows between the IUT and OSI service do not imply a real interface, just the conceptual flow of N-PDUs. Also note, an upper tester is part of the model and an exposed top interface is assumed at the PCO. Several consequences flow from the changed assumptions: (1) abstract test cases written for the local method are not applicable; they must be rewritten to reflect the abstract service primitives available at the lower interface of the lower tester. Generally, they are complementary to those assumed available at the bottom interface of the IUT in the local method; (2) the lower tester and IUT are physically separated, with the implication that they observe the same test event at different times; (3) data loss, delivery out of sequence, and data corruption are possible, particularly at lower layers, due to the quality of some lower-layer services;


(4) synchronization and control are more difficult because elements of the test system are distributed over two systems. Synchronization and control (test coordination procedures) may be specified by ASPs exchanged at PCOs (or possibly by a test management protocol, which is not standardized). The distributed method relies on the protocol being tested to provide sufficient synchronization to achieve test purposes. Therefore, judgments and verdicts formulated depend upon behavior observed by the lower tester. In practice, there must be some coordination between upper and lower testers; the same tests must be selected and executed concurrently. However, the distributed method does not address how these issues are resolved.

In summary, the distributed method is a logical equivalent to the local method with lower tester and IUT interconnected by a communications service. However, the structure of the local method implicitly gives the capability to synchronize and control the upper and lower testers because they are elements of the same test harness. Information collection and sharing is possible (although a local issue). Thus, the distributed method is not a functional equivalent to the local method.

3.3. Coordinated Method

Two features that distinguish the coordinated method (Fig. 3) from the distributed method are: (1) no exposed upper interface is necessary within the IUT (although this is not precluded); and

Fig. 3. Coordinated method.


(2) a standardized test management protocol (TMP) and test management protocol data units (TMPDUs) are used to automate test management and coordination procedures. Often it is assumed that the lower tester is the master and the upper tester is a slave, to minimize the effort in realizing an upper tester. This is the most sophisticated model. It allows a very high degree of coordination and reporting of information observed and collected at both the upper and lower testers. Upper and lower testers may be synchronized, information received by the upper tester may be reported back to the lower tester for formulation of verdicts, selection of test cases by an operator may be conveyed from the lower tester to the upper tester, and branching logic may be implemented for selecting tests if a previous test case fails. To date, ISO has assumed minimal coordination (e.g., test cases will be executed in a predefined order with no branching if the verdict of a test case is fail). This assumption can lead to serious synchronization problems if the test management protocol does not resynchronize the upper and lower testers after a test fails. Communications between upper and lower testers may be in-band (TMPDUs are carried as data via the protocol being tested) or out-of-band (use of a lower-layer protocol which is assumed to be reliable enough to carry TMPDUs). Test management using out-of-band services raises issues about which layer services are assumed to be exposed. Unfortunately, ISO conformance methodology has not addressed the following issues:
- Definition of a test-management-protocol kernel which is independent of its application. Without a standardized test-management-protocol kernel, standards groups will have to invest the effort to invent their own. This has happened with the Session and Transport abstract test suites, and the test management protocols are distinctly different.
If test laboratories invented their own test-management protocols, test services would be incompatible, which is even worse. Currently, ISO is studying the issue of a standardized kernel of test management functions.
- Recommendations for in-band or out-of-band communications. Both are feasible; both have drawbacks. In-band communications may be difficult at the application layer but is feasible in some cases (e.g., electronic mail). Out-of-band communications below the network layer and above the transport layer raise serious problems with exposed interfaces. Historically, it is generally accepted that the transport layer will have an exposed interface and be the basis of a "test platform". Thus, the transport layer is a logical candidate for providing out-of-band communications for upper-layer testing.

Fig. 4. Remote method.
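Because ISO has not standardized a TMP kernel, any concrete coordination scheme is laboratory-specific. The sketch below invents a trivial master/slave exchange only to show the shape such a protocol might take; every TMPDU name and the canned observations are made up for the example.

```python
# Hypothetical master/slave test-management exchange (coordinated method).
# TMPDU names are invented; ISO standardizes no TMP kernel, which is the
# gap described in the text. In-band, these TMPDUs would ride as data in
# the protocol under test.

def upper_tester(tmpdu):
    """Slave: obeys TMPDUs from the lower tester and reports observations."""
    if tmpdu[0] == "SELECT":
        return ("READY", tmpdu[1])
    if tmpdu[0] == "REPORT":
        return ("OBSERVED", ["CONind", "DATind"])   # canned observation log
    return ("ERROR", tmpdu[0])

def lower_tester(test_case):
    """Master: selects the test case, runs it, then pulls the slave's log."""
    ready = upper_tester(("SELECT", test_case))
    assert ready == ("READY", test_case)
    observed = upper_tester(("REPORT", test_case))
    # The verdict may combine observations from both testers:
    return "pass" if "CONind" in observed[1] else "inconclusive"

print(lower_tester("TC_CONN_01"))
```

The master/slave split mirrors the assumption in the text that the lower tester drives coordination so that upper testers stay cheap to realize.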

3.4. Remote Method

The last method defined by ISO for single layer testing is the remote method (Fig. 4). The significant features are that no interface at the top of the IUT is assumed, and no explicit test coordination procedures are assumed. (Test coordination, if any, is manual.) The method relies solely on the protocol being tested for synchronization of the lower tester and the IUT. The method assumes that the state of the IUT is known from actions specified for the lower tester, including knowledge of N-PDUs transmitted and received via (Nb-1)-ASP''s. Verdicts must be formulated based upon stimulus provided by the lower tester and the responses of the IUT as observed by the lower tester. This method is widely used for testing implementations of X.25.
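The remote method's reliance on the lower tester's observations alone can be sketched as follows. The event codes and toy IUT are invented; a real lower tester would exchange encoded PDUs over an (N-1) service rather than call a function.

```python
# Sketch: in the remote method the verdict is formed solely from what the
# lower tester sends and what it observes in return. The "IUT" is a toy
# response table standing in for a remote implementation.

def remote_iut(pdu):
    """Imaginary IUT reachable only via the underlying service."""
    return {"CR": "CC", "DT": "AK", "DR": "DC"}.get(pdu, "ERR")

def lower_tester_run(script, iut):
    """Drive the IUT through (stimulus, expected-response) pairs; the
    inferred IUT state is implicit in the position reached in the script."""
    log = []
    for stimulus, expected in script:
        observed = iut(stimulus)
        log.append((stimulus, observed))
        if observed != expected:
            return "fail", log
    return "pass", log

verdict, log = lower_tester_run([("CR", "CC"), ("DT", "AK"), ("DR", "DC")], remote_iut)
print(verdict)
```

Note that nothing in the verdict depends on events at the (unexposed) top of the IUT, which is exactly the limitation the remote method accepts in exchange for requiring no upper tester.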

4. Tree and Tabular Combined Notation (TTCN)

The usual assumption regarding conformance evaluation is: a test case (also called a test scenario) is written with a particular test purpose in mind. If test cases are to be the basis for ISO/OSI


conformance testing, it is clear that a notation or language is required to specify behaviors of a test system and a protocol entity to be tested. The language must contain sufficient features to describe tests for all OSI protocols. With these purposes in mind, ISO has defined a notation which is called Tree and Tabular Combined Notation (TTCN). It is defined in the third document (called Part 3 for brevity) of the ISO Conformance Testing Methodology and Framework [12]. A test case written in TTCN specifies, as a sequence of atomic test events, the behaviors of a test system and a protocol entity in sufficient detail to judge the behavior of an implementation of the protocol to formulate a verdict of pass/fail. TTCN is intended to be applied by protocol developers within CCITT and ISO. They will define abstract test suites. (An abstract test suite is independent of the features of a protocol incorporated in a particular IUT, and it is independent of any test system(s) employed by test laboratories to conduct protocol testing. As such, abstract test suites are not necessarily executable and TTCN was not intended to be an executable test language.) Thus, when developing an abstract test suite, a standards body may assume a particular test method from those outlined in the previous section. However, standards bodies are not responsible for executable test suites or the implementation aspects of the methods employed to describe tests. To understand TTCN (and its name) one must understand the structure of the notation and what it is intended to express: TTCN is employed to describe the dynamic behaviors and exchange of messages between a test system and a protocol entity. A graph of the dynamic behavior of a protocol entity may be drawn with the nodes of the graph labeled with inputs and edges labeled with outputs (or vice versa). The resulting graph is a tree reflecting the dynamic behavior of a protocol entity in response to its inputs. 
(Similar information is often expressed as state graphs with both inputs and outputs used to label the edges.) Thus, TTCN is a language for describing trees of behavior and actions associated with sending/receiving test events and protocol data units (PDUs). TTCN's name is derived from the fact that the graphic definitions in Part 3 use trees of behavior specified in a tabular form to describe the dynamic aspects of a test case, and additional tables to describe the static aspects of a test suite. For example, tables are used to declare variables, identify points of control and observation, define references to other standards where data types are defined (structure and fields of protocol data units), and to specify assumed constraints. (This form is often called the graphics form, or TTCN-GR.) For every tabular element in TTCN-GR, Part 3 also includes BNF syntax definitions for the machine-processable form of TTCN (TTCN-MP). TTCN-MP is also called the transfer syntax of TTCN because it allows for electronic exchange of test suites.

The dynamic elements of TTCN include assignment and arithmetic operators, predefined functions (e.g., string manipulation), label declarations, a goto statement (used to specify cyclic behaviors), and input/output operations. The send operation is denoted "!"; receive is denoted "?". Both may be qualified by the name of a point of control and observation. These dynamic aspects of TTCN are presented on paper in tabular form. The column containing the text describing actions is significant; i.e., blanks and tabs are significant in the semantics of the language. When describing a tree of possible input/output sequences, tabs are used as a shorthand notation for the text in the same column on previous lines; i.e., tabs are interpreted as if the text above were repeated.

An example may help. Consider a connection-oriented protocol which a lower tester employs as an (N-1) service and assume the distributed test method (Fig. 2). Five dynamic behaviors that might be found in an input/output trace (log) are tabulated below. They are presented in Table 1 as sets of events and in Table 2 as a tree of behaviors that may be observed at the PCO of the lower tester (as they might be specified in TTCN). (Nb-1)-ASP's initiated by the lower tester are in bold font in both tables; those observed by the lower tester in response to actions taken by the IUT are in Roman font. The numbers in the first column of the two tables identify the corresponding set of events.

Table 1
A set of dynamic behaviors for a connection-oriented service

1  Con.Req  Con.Conf  Dat.Req  Dat.Ind  Dis.Req
2  Con.Req  Con.Conf  Dat.Req  Dat.Ind  Dis.Ind
3  Con.Req  Con.Conf  Dat.Req  Dis.Ind
4  Con.Req  Con.Conf  Dis.Ind
5  Con.Req  Dis.Ind

Abbreviations: Con = Connect, Dat = Data, Dis = Disconnect; Req = Request, Conf = Confirm, Ind = Indication.

Table 2
A tree of dynamic behaviors observable by a lower tester

1  !Con.Req  ?Con.Conf  !Dat.Req  ?Dat.Ind  !Dis.Req
2                                           ?Dis.Ind
3                                 ?Dis.Ind
4                       ?Dis.Ind
5            ?Dis.Ind -TIME

Legend: ! Send, ? Receive.

In Table 2, empty entries in the left-hand columns of a row are interpreted as if the text above it in the same column were copied into the row. Table 2 is not a test case. In a test case, the entries in row 1 of Table 2 could be put into separate rows, and each entry in the table could be followed by additional test steps which might include verdict assignment (verdicts are specified in a separate column). Comments are placed in the last column of a test case. A group of test steps may be named and referenced as a group.

In TTCN, the scope of variables is global. Behavioral expressions are deterministic; i.e., textual order of test events in a test case is used to resolve nondeterminism. Basic assumptions underlying TTCN are that a protocol entity in an IUT can be driven into an assumed "state" by a sequence of inputs and outputs (test steps). Once reaching that state, another sequence of test steps judges the behavior of a particular aspect of the IUT and a verdict is formed. Finally, the IUT is driven back to a known initial state by subsequent actions specified in a test case. This suggests a preamble, followed by a sequence of test steps, followed by a postamble. The notions of preamble and postamble are used in organization of a test suite, where certain named groups of test steps may be used repeatedly (e.g., connection establishment and termination). This functional grouping of test steps leads to the notion of libraries of named entries composing a test suite. TTCN also facilitates organization of the elements of a test into larger units and, ultimately, into a test suite; i.e., a hierarchical collection of test cases. A test case may be composed of a

named preamble, body, and postamble; each may be composed of one or more named elements. (However, a preamble and postamble are optional.) Since all identifiers are global, individual elements of a test case may make reference to previously identified objects (e.g., types, variables, labels associated with a tree of behaviors). TTCN allows reference to elements of a test suite by attaching a sequence of named test steps. Conceptually, this is equivalent to the "include" or "copy" facilities of some programming languages; however, text substitution follows specific recursive rules. Names may be qualified by other names denoting the inclusion of hierarchically structured elements of text.

An organization strategy for test suites suggested by ISO is:
- basic interconnection tests, which are intended to establish that an IUT conforms sufficiently to justify further testing (or identify severe cases of non-conformance);
- capability tests, which are intended to establish that the static conformance requirements of a protocol are met (but not probe the IUT for detailed behaviors); and
- behavior tests, which are intended to test a full range of dynamic behaviors which the supplier of a product claims to support.

Together, capability and behavior tests are intended to establish conformance or non-conformance of an IUT. Rayner [42] describes the purpose of this organization of tests in detail.

Operational semantics are being defined for TTCN. (Early versions had no formal semantics.) Without rigorous semantics: (1) TTCN is subject to subtly different interpretations; and (2) it is possible for either humans or automated tools to interpret TTCN differently and introduce errors when translating TTCN into an executable test suite. Wiles [46] describes an environment for TTCN which includes both a syntax-directed editor and interpretive execution.
Currently, this is the only environment known which directly executes abstract test cases. Probert et al. [39] are developing an integrated environment for the specification of TTCN test suites which includes facilities to edit and transform canonical representations of test cases into TTCN-GR, TTCN-MP, and an executable form. Others are also working on TTCN tool kits. Since machine translation and execution/interpretation of TTCN test suites is feasible, it is obvious that rigorous semantics are required if the intent of those individuals writing test suites is to have the same interpretation.
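The preamble/body/postamble organization described above can be modeled in ordinary code. The sketch below is an illustrative model only, not TTCN itself; the step names and toy protocol states are hypothetical:

```python
# Illustrative model of TTCN test case organization: a preamble drives the
# IUT to an assumed state, a body probes one aspect and forms a verdict, and
# a postamble drives the IUT back to a known initial state.

def run_test_case(preamble, body, postamble, iut_state):
    """Each phase is a sequence of test steps: functions of the IUT state."""
    for step in preamble:                 # reach the state assumed by the body
        iut_state = step(iut_state)
    verdict, iut_state = body(iut_state)  # judge one aspect, form a verdict
    for step in postamble:                # restore a known initial state
        iut_state = step(iut_state)
    return verdict, iut_state

# Hypothetical steps for a toy connection-oriented protocol:
connect = lambda s: "connected" if s == "idle" else s
disconnect = lambda s: "idle"

def body(state):
    # PASS if the IUT is where the preamble should have left it.
    return ("PASS" if state == "connected" else "FAIL"), state

verdict, state = run_test_case([connect], body, [disconnect], "idle")
print(verdict, state)   # -> PASS idle
```

Because connection establishment and termination recur across test cases, the `[connect]` and `[disconnect]` sequences correspond to the named, reusable library entries the text describes.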

5. Synthesis of the Concepts Presented

Before introducing new topics, let us summarize the discussion thus far. ISO has defined TTCN as a language for the specification of abstract test suites for OSI protocols and ISDN systems. TTCN was not intended to be an executable language. Nonetheless, tools are being built to directly interpret or translate TTCN into executable tests. The grammar of TTCN-MP and operational semantics provide the necessary definitions to support these developments. Thus, TTCN must be considered an executable language. ISO has defined four single layer test methods for OSI protocols. Upper and lower testers are abstractions of elements that may be found in real test systems; they serve as virtual interpreters of TTCN within the constraints of a particular test method. These aspects are common to the four test methods:
- The role of the lower tester is to interpret abstract test cases and formulate verdicts based upon behaviors specified in a test case. (Note, there is no requirement that elements of a real test system formulate verdicts in real-time.)
- The lower tester acts as a peer entity of the IUT and exchanges N-PDUs with the IUT.
These are the differences in the test methods:
- Control and observation by the lower tester is specified in terms of: (Nb-1)-ASPs in the local method; (Nb-1)-ASP's in the other methods.
- The remote method assumes no control and observation using an upper tester.
- The coordinated method assumes no upper interface and employs TM-PDUs to control the upper tester.
- The local and distributed methods assume an upper tester plus control and observation specified in terms of (Nt)-ASPs.
- The distributed method may employ a non-standard test management protocol (TMP).


However, a non-standard TMP is unlikely to be used in a standardized test suite.

5.1. The Testing Process

Thus far, the process of testing an IUT has been ignored; ISO calls this a test campaign. Figure 5 depicts how testing proceeds given some of the elements discussed. It also introduces some new elements. The following description is idealized: some steps may be combined, omitted and/or reordered. The process model assumes a standards committee has done the following for each protocol to be tested:
- developed at least one standardized abstract test suite for the protocol using one of the test methods described earlier;
- developed a Protocol Implementation Conformance Statement (PICS) which defines a set of questions that a product supplier answers before taking a product to a laboratory for testing;
- developed a Protocol Implementation eXtra Information for Testing (PIXIT) statement. The PIXIT identifies implementation-specific information regarding the IUT that is necessary for testing (e.g., timer values required, number of PDUs acknowledged in a single acknowledgment, addressable elements in a stack of protocol entities).
The ISO test methodology identifies the latter two items as required parts of a protocol standard.

Fig. 5. Test campaign.

At this time, these elements do not exist for all OSI protocols. A test laboratory must either have the means to interpret an abstract test suite directly or have transformed it into an executable test suite by some means. If a laboratory offers more than one test method, it is the client's choice which will be used to test an IUT. The information contained in the PICS provided by a client identifies what options and features of a protocol have been implemented. The client's PICS is used by a test laboratory for static conformance assessment. Specifically, a laboratory checks to assure that mandatory features are included and conditional features (those dependent upon other options) are also implemented as required. The PICS is also used for test selection; i.e., the laboratory eliminates tests for features and options which are not implemented. Information contained in the PIXIT is used to parameterize a test suite. For example, timers and addresses of each component protocol of the IUT may have to be set before testing commences. Tests are executed, results are analyzed, and two reports are generated. A Protocol Conformance Test Report (PCTR) contains sufficient information to uniquely identify the system and protocol tested, identifies the test suite and test method employed, contains verdicts for each test case executed (or a note indicating that a test was not executed), and identifies the conformance log supporting the verdicts rendered. Since more than one protocol may be tested, there may be more than one PCTR. A System Conformance Test Report (SCTR) identifies the system which was tested and gives a summary conformance statement for each component protocol tested.
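The PICS-driven selection step can be sketched in a few lines. The feature names and test-case attributes below are hypothetical, not taken from any standardized test suite:

```python
# Hypothetical sketch: selecting test cases from a suite using PICS answers.
# The PICS is modeled as a mapping from feature name to "implemented?" answer;
# each abstract test case declares the features it exercises.

def select_tests(test_suite, pics):
    """Keep only tests whose required features the supplier claims to support."""
    return [tc for tc in test_suite
            if all(pics.get(feature, False) for feature in tc["requires"])]

suite = [
    {"name": "CONN_EST_01", "requires": ["connection"]},
    {"name": "EXP_DATA_01", "requires": ["connection", "expedited_data"]},
]
pics = {"connection": True, "expedited_data": False}

selected = select_tests(suite, pics)
print([tc["name"] for tc in selected])   # -> ['CONN_EST_01']
```

Static conformance assessment could be layered on the same data: before selection, the laboratory checks that every feature the protocol marks mandatory is answered affirmatively in the PICS.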

6. Embedded and Multi-layer Testing

Thus far, single layer test methods were presented. In the following subsections, we look at ISO's embedded test methodology, and then report the status of multi-layer testing within ISO.

6.1. Embedded Methods

Currently, ISO has only defined what are known as the embedded methods for testing one or more layers of a multi-layer IUT without exposed layer interfaces. Embedded testing focuses on one protocol layer at a time, where the layer under test is embedded one or more layers down in a stack of protocols within an IUT. Usually, testing proceeds bottom-up within the stack, and upper layers are tested incrementally as conformance of the lower layers is established; i.e., the focus of testing switches between layers as testing proceeds. ISO has defined embedded testing for each of the four methods outlined in Section 3. Conceptually, it is easy to consider lower layer protocols as envelopes wrapped around upper layer PDUs. This is certainly true for the data transfer phase. At each successive layer, a header prefixes "data" from the adjacent upper layer. It is this relatively simple model that underlies the notions of embedded testing. Assume that the N-layer is to be tested and (N + 1)-PDUs are to be embedded in N-PDUs. Conceptually, two-layer embedded testing defines a set of (N + 1)-PDUs and then envelopes this set of data in appropriate N-PDUs. This model is depicted in Fig. 6. The direct consequence of these assumptions is that each test case must concurrently specify all the actions of two layers (or at least a functional subset of the (N + 1)-layer) in order to achieve a test purpose. A test case must have a specific test purpose and must reflect the dynamic behaviors possible for two layers of protocol in the IUT. Additionally, test cases must reflect the context of an (N + 1)-protocol in the lower tester (e.g., results of negotiating options, or variables which record protocol control information that influence subsequent behavior) as the protocol progresses through its phases (e.g., connection establishment, data transfer, and connection termination).

Fig. 6. Embedded method with test coordination.

A test suite consists of describing all possible behaviors between two peer protocol entities and reflecting the context of the (N + 1)-protocol in order to prevent a test from being aborted. In fact, this becomes quite complex very rapidly and becomes progressively more difficult when attempting to specify behaviors for more than two layers. At this time, only limited experience exists with the method. It is viable for two layers, but usually, test cases use a relatively small subset of the (N + 1)-protocol. The advantage of an embedded method is that it does not assume an exposed interface at every layer of the IUT. The disadvantages are:
- test cases become more complex in a non-linear relationship for each additional layer above the layer to be tested;
- it is correspondingly more difficult to anticipate all possible behaviors given that suppliers have valid implementation options and choices for each protocol that is involved in the test. Thus, verdict assignment is more difficult and the probability of either an inconclusive or invalid verdict increases;
- test suites are defined assuming a particular set of protocols; a new test suite must be written to test the same layer if the combination of protocols above the N-layer changes. Furthermore, if any protocol in the combination changes, the entire test suite must at least be checked, if not entirely rewritten.
Due to the reasons identified above, embedded testing presents some significant problems to those defining abstract test suites for embedded methods. Test laboratories employing embedded methods face additional problems: they must obtain or create, and subsequently maintain, executable test suites and systems which reflect the intent of the abstract test suites. Given the possibility of single-layer and embedded methods, ISO could produce test suites for every possible combination of the local, distributed, coordinated and remote methods and groupings of protocols.
This would require significant resources and add to the complexity of testing. In most cases, standardized abstract test suites are likely to be written employing one method per protocol (or group of protocols). But there are exceptions: abstract test suites being developed by ISO for the Session protocol include the single-layer coordinated and single-layer embedded methods (under the File Transfer, Access and Management, and Message Handling Systems protocols).

6.2. Multi-Layer Testing

Part I of the ISO documents defines multi-layer testing as "Testing the behavior of a multi-layer IUT as a whole, rather than testing it layer by layer". However, no further guidance is given on the topic, and currently multi-layer testing is the subject of research. Advocates of the Ferry Clip method (introduced in the next section) suggest that it may be used for multi-layer testing. However, an assumption they make is that the IUT has exposed interfaces at each layer, which is not valid for many commercial products. An example of one formal method to conduct multi-layer testing is included in the next section. While the methodology employs Estelle and ASN.1, it may also be exploited by proponents of LOTOS and SDL.
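The envelope model underlying embedded testing (Section 6.1), in which each layer prefixes a header to the "data" of the layer above, can be sketched as nested framing. The 2-byte header layout (PDU type, payload length) is invented for illustration and does not correspond to any OSI protocol:

```python
# Illustrative sketch: an (N+1)-PDU carried as user data inside an N-PDU.
# Each layer prefixes a hypothetical 2-byte header (PDU type, payload length)
# to the "data" handed down from the layer above.

import struct

def wrap(pdu_type: int, payload: bytes) -> bytes:
    """Prefix one layer's header to the payload from the layer above."""
    return struct.pack("!BB", pdu_type, len(payload)) + payload

def unwrap(pdu: bytes):
    """Strip one layer's header, returning (pdu_type, payload)."""
    pdu_type, length = struct.unpack("!BB", pdu[:2])
    return pdu_type, pdu[2:2 + length]

n_plus_1_pdu = wrap(0x01, b"user data")   # the (N+1)-PDU
n_pdu = wrap(0x02, n_plus_1_pdu)          # embedded in an N-PDU

t, inner = unwrap(n_pdu)                  # the lower tester peels the N-layer
assert (t, unwrap(inner)) == (0x02, (0x01, b"user data"))
```

The complexity the text warns about follows directly from this nesting: a two-layer test case must constrain both the outer N-PDU and the inner (N + 1)-PDU at every exchange, and the state of both layers simultaneously.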

7. Open Topics within ISO Conformance Methodology

While the following topics are under study by ISO, we include them as part of an overview of conformance testing for OSI. The first is ferry test methods. Some argue that the ferry methods are simply a means to realize the test coordination implied by either the distributed or coordinated methods. Others believe they are distinct test methods. Regardless of opinion, ISO has been requested to consider the ferry methods as one possibility for realizing test coordination as part of a larger question: definition of a kernel set of test management functions and/or a test management protocol. Second, ISO is studying the potential roles of formal methods and formal description techniques within conformance testing. The scope of this topic includes generation of test sequences as well as application of the formal description techniques named Estelle [9], LOTOS [10] and SDL [18]. Since an explanation of test generation methodology is beyond the scope of this paper and the topic is surveyed in another paper [38], only some of the most recent research results are noted below.


Sabnani and Dahbura, and Aho et al. [43,23] describe methods to optimize generation of test sequences. A proposal was made to incorporate the results of optimization in the developing abstract test suite for X.25. Sidhu and Leung [44] report on the ability of several test generation methods to detect faults. Ural [45] defined a method for selection of test cases based upon static control flow and data flow analysis of descriptions of protocols written in Estelle. Barbeau and Sarikaya have developed a computer-aided design tool [24] which can display both control flow and data flow graphs of protocol specifications written in Estelle. Forghani and Sarikaya have created a tool to generate TTCN test suites from Estelle specifications [31]. To date, most applications of LOTOS have focused on protocol specification and verification rather than testing. Exceptions are the work of Brinksma [27] and de Meer [30]. The following subsections report on topics raised in ballots on the ISO documents and are outstanding questions to be resolved.

7.1. Ferry Control and Ferry Clip Methods

Zeng [47] defined an alternative to both the distributed and coordinated methods. The upper tester is moved from the client's system to the test laboratory's system. The upper tester in the client's system is replaced by a ferry control protocol entity (the name was originally derived from the notion of a ferry boat). The model is depicted in Fig. 7. Note, the block labeled "Test Coordination Procedures" includes a peer ferry control entity (although not explicitly identified). It is called an "active" ferry entity because it assumes the role of a master; the entity above the IUT is called a "passive" entity because it responds to actions taken by and directives issued by its peer.

Fig. 7. Ferry method.

The following summarizes the concepts found in the original work and assumes in-band communications except as noted. The ferry control entity serves as a carrier of PDUs received from either the upper or lower testers by retransmitting them to the other entity; i.e., it functions in logical loop-back mode. Minimal state information and headers are required to realize a ferry control implementation, and it can be designed in a relatively layer-independent manner. The advantages of the ferry control protocol are: (1) it places a relatively small burden on an implementor, (2) it is protocol independent, (3) it is independent of the test management protocol, and (4) most test coordination problems are the test laboratory's. Its disadvantages are: (1) an exposed interface is assumed at each layer of the IUT, (2) synchronization problems unique to the ferry method may arise (it is a state-based protocol entity), (3) it requires an "interface adaptor" for each layer, and (4) it simply may not be applicable to Application and Presentation layer protocols. The ferry method does not resolve the in-band/out-of-band communications issues described earlier. Either may be used if the state of the IUT is not changed by conveying data to and from the upper and lower testers; otherwise, out-of-band communications is required. In his original work, Zeng proposed out-of-band communications via a so-called "ferry control channel". In Fig. 7, some reliable (N - k)-service may be used to realize the ferry control channel (e.g., the Transport protocol) if there is an exposed lower layer interface. Otherwise, the ferry control channel may have to be derived by some other medium.
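The loop-back role of the passive ferry entity can be sketched as follows; the class and interface names are hypothetical and the framing details are omitted:

```python
# Hypothetical sketch of a passive ferry entity in loop-back mode: PDUs
# received on the ferry channel from the active (laboratory) side are handed
# to the IUT's exposed upper interface unchanged, and events observed there
# are ferried back verbatim. Note how little state the entity needs.

class PassiveFerry:
    def __init__(self, iut_upper_interface, ferry_channel):
        self.iut = iut_upper_interface   # exposed interface above the IUT
        self.channel = ferry_channel     # reliable ferry control channel

    def on_ferry_pdu(self, pdu: bytes):
        """Directive from the active ferry: deliver to the IUT unchanged."""
        self.iut.submit(pdu)

    def on_iut_event(self, event: bytes):
        """Observation at the IUT's upper boundary: ferry it back verbatim."""
        self.channel.send(event)
```

The near-absence of protocol knowledge in this entity is exactly the layer-independence and small implementor burden claimed for the method; the corresponding cost, visible in the constructor, is that an exposed interface above the IUT must exist for `iut_upper_interface` to attach to.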
Extensions Zeng has proposed to his original work are known as the ferry clip method [48]. In brief, the ferry clip method assumes that exposed interfaces exist at all layers of interest in the IUT, and that the passive ferry entity can be attached to multiple PCOs in the IUT. Thus, the ferry entities must multiplex streams of data to/from the upper and lower testers for multiple layers of protocol within the IUT. Additionally, the ferry entities must provide a reliable data transport mechanism when testing lower layer protocols (e.g., network and below) if the method is to provide a reliable means of observation and control. It remains to be seen if ISO will adopt either ferry method.

7.2. Architectural Refinement

Current ISO methodology restricts the number of points of control and observation in a lower tester to exactly one PCO. Therefore, it is impossible to make reference to any PCO other than the one immediately above the (N - 1)-service. Specifically, within a test case specified in TTCN, it is impossible to make reference to (Nt)-ASP's containing (N + 1)-PDUs which might be generated by any other higher layer entity within a lower tester; i.e., a "higher-layer test case" or reference entity. For our purposes, assume a reference entity: (1) is an implementation of a protocol which supports all functions and options specified in a protocol standard, (2) can interoperate with any implementation which realizes a valid subset of behaviors specified in a standard, and (3) has been tested sufficiently to demonstrate the first two features. While a reference entity may have been derived from a formal description of a protocol by automated methods, this is not strictly necessary. Whether or not an IUT is implemented with interfaces at each layer, a test method may be developed with layered components, each focusing on a specific aspect of testing a multi-layer implementation. Such a test method (system) acts as if the IUT is layered, but it does not depend upon the internal structure of the IUT, as no assumptions regarding layering and interfaces are necessary. Thus, the OSI Reference Model [6] suggests an alternative to embedded methods. The Reference Model refines the problem of communications into layers of protocols, each with distinct functions and services. This approach suggests reference entities could also be used above the N-layer in multi-layer test systems to maintain the protocol context for one or more layers above the layer under test (Fig. 8).

Fig. 8. Architectural refinement.

Indeed, Davis [29] describes the architecture of a test system which employs reference entities above and below the layer under test. Current practice at the Corporation for Open Systems (COS) demonstrates the viability of the concept. COS employs a reference implementation of the Transport Class 4 protocol above the test entity for the CLNP (Protocol for Providing the Connectionless Network Service). Reasonable arguments support developing a standardized test method which allows reference entities or single-layer test entities to be interchanged at every layer of a real test system. For example, ISO's FTAM and CCITT's MHS protocols use different functional subsets of the Session protocol and may run over different classes of the Transport protocol. Maintenance of test suites for embedded methods will be expensive to ISO and test laboratories. Perhaps the strongest argument is ISO's definition of multi-layer testing: "Testing the IUT as a whole". Given the approach depicted in Fig. 8 and a multi-layer test method, the N-layer could be tested with an arbitrary combination of protocols above it. Note, this approach de-emphasizes the role of test cases and puts the emphasis on reference entities. The consequences are: (1) for purposes of testing, the test case/behavior observed focuses on a single layer; and (2) test scenarios play a significantly smaller role and consequently become significantly simpler.


This architectural refinement of a "lower tester" into a stack of reference entities above and below the N-layer raises new issues:
- What role do test scenarios play and how do they differ from single layer test scenarios?
- What points of control and observation are relevant within a test system, and how is error recovery on the part of the IUT tested?
- If any protocol entity above or below the layer under test behaves invalidly: how can this be observed; what verdict should be assigned; and what information should be logged?
- How should reference entities be specified and/or realized?
If it is desirable to observe and report invalid behavior on the part of the IUT in more than one layer, nothing prohibits a test system designer from extending a reference entity into a test entity with this feature.
7.3. Formal Specification of Multi-layer Test Systems

CCITT has produced a Formal Description Technique (FDT) named the Specification and Description Language (SDL) [18]. Jointly, ISO and CCITT have produced Abstract Syntax Notation One (ASN.1) [7,8]. ISO has produced two FDTs, Estelle [9,28,35] and LOTOS [10,26]. They became international standards in 1988; thus, their application has been limited. To date, most OSI protocol standards have used natural language and state tables to define the standard. ASN.1 has been employed to define message syntax for upper layer protocols. If standardized FDTs are applied to protocol specification, then it is quite natural to augment their formal descriptions using the same FDT to specify the behavior of a test entity. Since translators for the FDTs exist, it is also quite natural to realize executable test entities by translation. However, testing requires mechanisms to inject errors to test for error recovery on the part of the IUT. A logical way to proceed is simply to extend a standardized protocol specification by specifying the behaviors necessary to invoke error recovery on the part of the IUT. This implies three basic changes to a formal description of a standardized protocol: (1) generation and transmission of valid PDUs at points disallowed in the standard (inopportune PDUs); (2) transmission of parameters of PDUs which are either out of range or semantically inconsistent with the protocol standard; and (3) invalid encoding of PDUs. The first two cover most dynamic aspects of testing the error recovery mechanisms in a standard. The first can readily be achieved by defining actions in response to (Nt)-ASP's that are never valid user sequences (e.g., request data transmission before a connection is established). The second set can be achieved by defining additional test services which override normal protocol constraints; i.e., define specific actions in response to new abstract service primitives. Note, these additional (Nt)-ASP's would not be invoked by a "normal user" and therefore will have no effect unless specifically invoked in either a single-layer or multi-layer test system. The third set raises pragmatic issues. Totally undecodable PDUs are likely to generate inconclusive results unless the protocol has specific error recovery mechanisms for loss and corruption of PDUs. Exhaustive testing in this domain is infeasible. Thus, the usual approach is to intuitively construct a small set of tests covering likely failures on the part of implementors. The approach outlined above is more than theory. Figure 9 depicts the refined architecture of a test system developed at NIST for testing FTAM through the Presentation layer protocols using either the remote or coordinated methods. Thirteen modules are specified in Estelle: from the Lower Tester Manager down through a generic Session interface. There are two major clusters of modules: (1) the upper cluster of modules which were designed to test FTAM [14-17] and the layers below it, and (2) the lower cluster of modules which are augmented test entities that implement the FTAM, ACSE and Presentation protocols.
Fig. 9. Multi-layer test architecture for the FTAM, ACSE and Presentation protocols.

All are derived from Estelle specifications of the corresponding protocol entities. Estelle is used to specify and define the dynamic aspects of the upper cluster of modules and the stack of protocols in the lower cluster. Augmented ASN.1 type definitions are used to specify the encoder/decoders for the protocol stack and to define the test language of the system. Regardless of the internal structure of an IUT, the components of the test system are able to observe behavior at all three layers, detect and report invalid behavior on the part of the IUT, and log observed behavior (as ASN.1 values). These components are also employed as part of a system designed to test an application layer gateway for ISO's FTAM and DoD's File Transfer Protocol (FTP). Translators for Estelle [22] and ASN.1 [32,33] are used to realize implementations of both test systems. Another test system for an electronic mail gateway employs the same methodology. Details of these systems [20,21] and the methods employed to realize them are reported elsewhere [36-38].
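The three categories of injected errors described in this section can be illustrated with a toy PDU format (a 1-byte type octet followed by a 1-byte window parameter). All names and the layout are invented for illustration and are not drawn from the NIST system or any ISO specification:

```python
# Hypothetical sketch of the three error-injection categories applied to a
# toy PDU format: 1-byte type followed by a 1-byte "window" parameter (1..7).

DATA = 0x10

def encode(pdu_type: int, window: int) -> bytes:
    """Normal encoder for the toy PDU."""
    return bytes([pdu_type, window])

def inopportune_pdu(state: str) -> bytes:
    """(1) A valid PDU at a disallowed point: DATA before any connection."""
    assert state == "idle"            # a conforming user would never do this
    return encode(DATA, 1)

def out_of_range_pdu() -> bytes:
    """(2) A well-formed PDU carrying a semantically invalid parameter."""
    return encode(DATA, 9)            # window 9 exceeds the allowed range 1..7

def corrupted_pdu() -> bytes:
    """(3) An invalid encoding: the type octet is garbled after encoding."""
    pdu = encode(DATA, 1)
    return bytes([pdu[0] ^ 0xFF]) + pdu[1:]
```

In the approach the text describes, the first two categories would be reached through additional abstract service primitives added to the augmented formal description, so that a test scenario can invoke them deliberately while a normal user never would.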

8. Concluding Remarks

We conclude with comments on fault detection, tool development, progress of testing standards and current testing practice. The fault detection capacity of the test methods described is the subject of open debate. Fault detection is not only related to the test method, but is related to fault models, the test language employed, test suite coverage, and synchronization issues. If all were equal (and they are not), an assumption often made is that, from most powerful to least, the fault detection capacity of ISO's single layer test methods ranks as the local, coordinated, distributed and remote methods. Embedded methods limit observability and control and are generally considered weaker than single layer methods. Proponents of the ferry methods argue that the ferry methods have fault detection capacity equal to the local method. The Conformance Testing Service (CTS) projects (CTS-WAN and CTS-LAN), sponsored by the Commission of the European Communities (CEC), have produced a substantial number of test tools and test suites. Many will be employed within European test centers which have been established to test OSI products for use throughout the CEC. Information regarding the tools and test suites may be ordered from the National Computing Centre in the UK. Parts 1, 2, 4, and 5 of the ISO testing methodology should advance to international standard status within a year; Part 3 is likely to lag by up to 18 months. CCITT is applying TTCN to standardized abstract test suites for the X.400 series recommendations for electronic mail (Message Handling Systems), X.25 and its components, and ISDN D-channel testing. These are destined to become CCITT recommendations by the end of the current study period (1992). ISO has started development of abstract test suites for the Transport, Session and upper-layer protocols in TTCN. If current schedules within ISO are met, many of these test suites will become international standards between 1990 and 1992. Current testing practice is best described as a patchwork quilt of old and new test systems (with quite a few holes). The Corporation for Open Systems (USA) and the National Computing Centre (UK) are using a mixture of older tools (adaptations of 1985 and earlier vintage tools) to test lower-layer protocols and newer tools developed by the National Computing Centre (UK) for File Transfer, Access and Management (FTAM) and Message Handling Systems (MHS). The gaping holes are test suites. Many test suites written in TTCN for MHS and X.25 are available; FTAM test cases are scarce. Older tools use ad hoc test suites, but their history of use makes them relatively complete.

References IFIP: International Federation for Information Processing [1] Protocol Testing - Towards Proof, D. Rayner and R.W.S.

Hale, eds., Vol. 1-2, INWG/NPL Workshop, National Physical Laboratory, Teddington, Middlesex, T W l l OWL, United Kingdom, 1981. [2] Proc. Protocol Specification, Testing, and Verification II, C. Sunshine, ed. (North-Holland, Amsterdam, 1982). [3] Proc. Protocol Specification, Testing, and Verification III, H. Rudin and C.H. West, eds. (North-Holland, Amsterdam, 1984). [4] Proc. Protocol Specification, Testing, and Verification IV, Y. Yemini, R. Strom and S. Yemini, eds. (North-Holland, Amsterdam, 1985). [5] Proc. Protocol Specification, Testing, and Verification V, M. Diaz, ed. (North-Holland, Amsterdam, 1986). Organization for Standardization, ISO Secretariat for JTC1/SC21, ANSI, 1430 Broadway, New York, NY 10018, USA. [6] Information Processing Systems - Open Systems lnterconnection - Basic Reference Model, IS 7498, 1984. [7] Information Processing Systems - Open Systems InterconISO: International

nection - Specification of Abstract Syntax Notation One (ASN.1), IS 8824, 1987. [8] Information Processing Systems - Open Systems Interconnection - Specification of Basic Encoding Rules for Abstract Syntax Notation One (ASN.1), IS 8825.2, 1987. [9] Estelle: A Formal Description Technique Based on an Extended State Transition Model, IS 9074, 1988. [10] Information Processing Systems - Open Systems lnterconnection - LOTOS - A Formal Description Technique Based on Temporal Ordering of Observed Behavior, IS

8807, 1988. [11] Information Processing Systems - 0S1 Conformance Testing Methodology and Framework, ISO/IEC JCT 1/SC 21 DIS 9646, Parts 1-2, November 1988. [12] Information Processing Systems - 05rI Conformance Testing Methodology and Framework, ISO/IEC JCT 1/SC 21 N3077, Part 3, February 1989. [13] Information Processing Systems - OSI Conformance Testing Methodology and Framework, ISO/IEC JCT 1/SC 21 DIS 9646, Parts 4-5, March 1989. [14] Information Processing Systems - Open Systems Interconnection - File Transfer, Access and Management Part I: General Introduction, IS 8571/1. [15] Information Processing Systems - Open Systems Interconnection - File Transfer, Access and Management Part H: The Virtual Filestore, IS 8571/2. [16] Information Processing Systems - Open Systems Interconnection - File Transfer, Access and Management Part IH: File Service Definition, IS 8571/3. [17] Information Processing Systems - Open Systems Interconnection - File Transfer, Access and Management Part IV: File Protocol Profile, IS 8571/4.

Other Publications

[18] SDL, CCITT Recommendations Z.101-Z.104 (Blue Book Series), Consultative Committee for International Telegraph and Telephone, International Telecommunications Union, Place des Nations, CH 1211, Switzerland, 1988.
[19] Message Handling Systems, CCITT Recommendations X.400, X.401, X.408, X.409, X.410, X.420, X.430 (Red Book Series), 1984.
[20] A Test System for Implementation of MHS/SMTP Gateways, National Institute of Standards and Technology, National Computer Systems Laboratory, ICST/SNA 87/5, Parts 1-3, September 1987.
[21] A Test System for Implementations of FTAM/FTP Gateways, National Institute of Standards and Technology, National Computer Systems Laboratory, ICST/SNA 88/6, Parts 1-3, October 1988.
[22] Users Guide for the NBS Prototype Compiler for Estelle, NBS, ICST, ICST/SNA 87/3, (Rev.) January 1989.
[23] A.V. Aho, A.T. Dahbura, D. Lee and M.U. Uyar, An Optimization Technique for Protocol Conformance Test Generation Based on UIO Sequences and Rural Chinese Postman Tours, in: Proc. Protocol Specification, Testing and Verification VIII (North-Holland, Amsterdam, 1989).
[24] M. Barbeau and B. Sarikaya, A Computer-Aided Design Tool for Protocol Testing, in: Proc. INFOCOM '88 (IEEE, New York, 1988) 86-95.
[25] G. Bochmann, Usage of Protocol Development Tools, in: Proc. Protocol Specification, Testing and Verification VII (North-Holland, Amsterdam, 1988) 139-161.
[26] T. Bolognesi and E. Brinksma, Introduction to the ISO Specification Language LOTOS, Comput. Networks ISDN Systems 14 (1) (1987) 25-60.
[27] E. Brinksma, A Theory for the Derivation of Tests, in: Proc. Protocol Specification, Testing, and Verification VIII (North-Holland, Amsterdam, 1989).
[28] S. Budkowski and P. Dembinski, An Introduction to Estelle: A Specification Language for Distributed Systems, Comput. Networks ISDN Systems 14 (1) (1987) 3-24.
[29] W.B. Davis, Architecture and Design of a Portable OSI Protocol Tester, in: Proc. 1st International Workshop on Protocol Testing, Comput. Sci. Dept., Univ. of British Columbia, Vancouver, B.C., Canada V6T 1W5 (1988).
[30] J. de Meer, Derivation and Validation of Test Scenarios Based upon the Formal Specification Language LOTOS, in: Proc. Protocol Specification, Testing, and Verification III (North-Holland, Amsterdam, 1987) 203-216.
[31] B. Forghani and B. Sarikaya, Automatic Dynamic Behavior Generation in TTCN Format from Estelle Specifications, Dept. of Electrical and Computer Eng., Concordia Univ., 1455 de Maisonneuve Blvd. W. 915, H3G 1M8, Canada, 1989.
[32] P. Gaudette, S. Trus and S. Collins, An Object-Oriented Model for ASN.1, in: K. Turner, ed., Proc. FORTE '88 (North-Holland, Amsterdam, 1989) 121-134; also Report No. ICST/SNA 88/4.
[33] P. Gaudette, S. Trus and S. Collins, ASN.1 Free Value Tool, National Bureau of Standards, Institute for Computer Sciences and Technology, ICST/SNA 88/2, January 1989.
[34] R.J. Linn, Testing to Assure Interworking of Implementations of ISO/OSI Protocols, Comput. Networks 11 (4) (1986) 277-286.

[35] R.J. Linn, The Features and Facilities of Estelle, National Bureau of Standards, Institute for Computer Sciences and Technology, ICST/SNA 87/6, November 1988 (Rev.).
[36] R.J. Linn and J.P. Favreau, Application of Formal Description Techniques to the Specification of Distributed Test Systems, in: Proc. INFOCOM '88 (IEEE, New York, 1988) 96-109; also Report No. ICST/SNA 87/9.
[37] R.J. Linn, J.P. Favreau, L. Gebase and A. Iwabuchi, An Overview of Formally Specified Multi-Layered Test Systems, National Bureau of Standards, Institute for Computer Sciences and Technology, ICST/SNA 88/1, January 1988.
[38] R.J. Linn, Conformance Evaluation Methodology and Protocol Testing, IEEE J. Select. Areas Comm. 7 (7) (1989) 1143-1158.
[39] R.L. Probert, H. Ural and M.W.A. Hornbeek, An Integrated Software Environment for Developing & Validating Standardized Conformance Tests, in: Proc. Protocol Specification, Testing, and Verification VIII (North-Holland, Amsterdam, 1989).
[40] D. Rayner, Towards an Objective Understanding of Conformance, in: Proc. Protocol Specification, Testing, and Verification III (North-Holland, Amsterdam, 1984).
[41] D. Rayner, Towards Standardized OSI Conformance Tests, in: Proc. Protocol Specification, Testing, and Verification V (North-Holland, Amsterdam, 1986) 441-460.
[42] D. Rayner, OSI Conformance Testing, Comput. Networks ISDN Systems 14 (1) (1987) 79-98.
[43] K.K. Sabnani and A. Dahbura, A Protocol Test Generation Procedure, Comput. Networks ISDN Systems 15 (4) (1988) 285-297.
[44] D. Sidhu and T.K. Leung, Fault Coverage of Test Methods, in: Proc. INFOCOM '88 (IEEE, New York, 1988) 80-85.
[45] H. Ural, Test Sequence Selection Based on Static Data Flow Analysis, Comput. Comm. 10 (5) (1987) 234-242.
[46] A. Wiles and B. Pehrson, ITEX - An Interactive TTCN Editor and Executer, Swedish Institute of Computer Science, Design Methodology Laboratory, Box 1263, S-163 13 Spanga, Sweden.
[47] H.X. Zeng and D. Rayner, The Impact of the Ferry Concept on Protocol Testing, in: Proc. Protocol Specification, Testing, and Verification V (North-Holland, Amsterdam, 1986).
[48] H.X. Zeng, X.F. Du and C.S. He, Promoting the "Local" Test Method with the New Concept "Ferry Clip", in: Proc. Protocol Specification, Testing and Verification VIII (North-Holland, Amsterdam, 1989).