Harmonisation of defence standards for safety-critical software


Microprocessors and Microsystems 21 (1997) 41-47

W. Marsh
Software and Systems Integrity Department, ERA Technology Ltd, Cleeve Road, Leatherhead, Surrey KT22 7SA, UK

Abstract

Increasingly, UK defence procurement is being carried out as part of internationally collaborative programmes. Purchases may be of off-the-shelf design or of equipment meeting the requirements of more than one country. This paper addresses the issue of differing national standards for safety-critical software and the need for harmonisation. Differing standards give rise to a number of issues peculiar to software. One such issue is that all standards for software place requirements on the process used to develop the software. When an existing design is purchased, the software development process has been completed and therefore cannot be modified. The requirements of differing standards for physical properties such as structural strength can be compared either by reference to an appropriate scientific theory or by experiment. Unfortunately, neither of these approaches can be used to compare objectively the requirements of software standards, especially when the software is safety-critical. The paper compares some of the existing standards applying to safety-critical software in military avionics and describes developments taking place in different countries. © 1997 Elsevier Science B.V.

Keywords: Defence procurement; International standards; Safety-critical software; Military avionics

1. Introduction

Software is an ever more important factor in the safety of state-of-the-art avionics. As integration of avionic systems increases, safety functions can no longer be assigned exclusively to stand-alone hardware subsystems, or even to simple stand-alone systems containing software. In this paper, we examine how the need to demonstrate the safety of systems containing software affects the procurement of defence equipment, including avionics. Our focus is on procurement and, as a consequence, standards, since standards inform and mediate the technical relationship between purchasers and suppliers.

1.1. Software safety

The standards of interest concern 'software safety'. Since software is not itself hazardous, the phrase 'software safety' could be confusing. More precisely, software cannot exhibit safety hazards, or be shown to be free of hazards to safety, except in the context of a particular system. Therefore, 'software safety' is a shorthand for 'the absence of hazards to the safety of a system arising from the behaviour of software'.

1.2. International procurement

Increasingly, the procurement of defence equipment is international. Systems may be purchased from foreign companies or be developed by international consortia. Even when equipment is developed to the detailed requirements of a particular purchaser, important subsystems may be purchased off-the-shelf, both to save cost and to achieve commonality. In all these cases, a particular purchaser cannot always completely determine the technical standards used. Software safety standards for defence equipment have mostly been developed nationally, so that international procurement must compare software developed to one standard against the requirements of another. Section 2 of the paper considers the conceptual difficulties of such a comparison. This is followed by a brief comparison of some existing standards, illustrating some of the difficulties which affect software procurement. Section 5 considers how greater harmonisation could be achieved.

2. Complying with software safety standards

When defence equipment is purchased off-the-shelf from a 'foreign' supplier, it may be necessary to show compliance with 'domestic' standards, which differ from the standards originally specified. Typically, this involves using one of the following processes, listed in order of increasing difficulty:



1. comparing the standards, showing that the requirements are 'equivalent', or
2. reassessing existing data, gathered to show compliance with the original standard, to show that the requirements of the new standard are also satisfied, or
3. making additional measurements.

This approach can be applied when compliance with the requirements of the standard can be judged objectively, using measurements founded on a physical theory, such as that for the strength of a structure. Although it may be most cost effective to produce the necessary measurements during the development process, in principle it is not difficult to make further measurements, or to reassess existing data, against the new standard. In this section, the reasons why this approach is harder to apply to software are considered.

2.1. Measuring software reliability

Software used in safety-critical applications must be highly reliable¹. Typical requirements for safety-critical systems are for no more than one failure in 10⁶ or even 10⁸ h. As described by Butler and Finelli [1], reliabilities of this size cannot be measured directly (a worked illustration of the timescales involved follows the list in Section 2.2). Moreover, there is no theory which allows software failure rates to be predicted accurately. This is because software fails as a result of design errors rather than as a result of a random process of decay.

¹ In Section 3.1 it is noted that reliability is not the same as safety and that additional software attributes are needed for safety.

2.2. Development processes

Since it is not possible to test a program sufficiently to establish that it is highly reliable, all standards covering software quality and safety place requirements on the process to be used for software development. None of the three processes noted above for comparing the products of one standard against the requirements of another can readily be applied to standards which constrain the development process.

1. There is no direct way to compare different development processes. This arises because there is little empirical or theoretical justification for choosing one development process over another, or for assuming that a particular process achieves a given reliability. The choice of development process is based on engineering experience and judgement rather than science. This is not, of itself, a reason for doubting that modern software engineering processes are worthwhile, but the differences between standards result from differences of opinion between experts.
2. The data presented to show compliance with a software engineering standard are the documents which result from the software engineering process. Of course, trivial differences, such as the way that the activities of the process are recorded in different documents, can be discounted, but otherwise comparison of data does not differ from the comparison of processes.
3. Additional measurements on software to establish compliance with a standard are not possible because of the inherent difficulty of measuring software reliability, described above. Some forms of rework can be carried out on software in order to comply with the requirements of the new standard, but this is not generally possible. For example, additional testing is possible, but the requirement to use a particular design technique or notation cannot be complied with once the design phase of the development has been completed.
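The worked illustration promised in Section 2.1 follows. The sketch below is ours, not the paper's: it assumes failures arrive at a constant (exponential) rate and that no failures at all are observed during testing, which gives the most optimistic possible estimate of the test time needed to support a reliability claim.

    import math

    def required_test_hours(max_failure_rate, confidence):
        # Failure-free operating hours needed before one can claim, with
        # the given confidence, that the true failure rate is below
        # max_failure_rate (constant-rate assumption, zero failures seen).
        return -math.log(1.0 - confidence) / max_failure_rate

    # To support a claim of fewer than one failure in 10^8 hours
    # with 99% confidence:
    hours = required_test_hours(1e-8, 0.99)
    print("%.3g failure-free test hours" % hours)            # ~4.61e+08 hours
    print("about %d years of testing" % (hours / (24 * 365)))  # ~52,600 years

Even under these generous assumptions the test duration is measured in tens of millennia, which is the essence of Butler and Finelli's infeasibility argument.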

2.3. Interpreting standards

Bhansali [2] distinguishes the 'whats', which are the objectives set by standards, from the 'hows', which are the techniques available to achieve the objectives. Most software standards contain many 'whats' without giving details of the corresponding 'hows'. This is partly intended; for example, in Section 1.1 of RTCA DO-178B (see Table 1) it is stated that it is intended to provide guidelines in the form of 'objectives for software life cycle processes' together with 'descriptions of the evidence that indicate that the objectives have been satisfied'. A standard which follows this approach has the advantage that it contains only essential requirements and does not unnecessarily prescribe non-essential aspects of the development process. However, there is also a difficulty which arises from the use of standards based on objectives. Consider DO-178B Section 5.2, which places requirements on the software design process. This design process consists of one or more steps of refining the high-level software requirements to a software architecture and to lower-level requirements on the components in the architecture.

Table 1. Selected military and aerospace software and safety standards

Number                  Year    Title
MIL-STD-882C            1993    System Safety Program Requirements
MIL-STD-498             1994    Software Development and Documentation
Int Def Stan 00-55/1    1991a   The Procurement of Safety-Critical Software in Defence Equipment
Int Def Stan 00-56/1    1991    Hazard Analysis and Safety Classification of the Computer and Programmable Electronic System Elements of Defence Equipment
RTCA/DO-178Bb           1992    Software Considerations in Airborne Systems and Equipment Certification
NSS 1740.13             1996    Software Safety Standard

a Following the circulation of revised drafts for consultation in 1995, new versions of 00-55/56 are expected to be published shortly.
b RTCA DO-178B is a guidance document rather than a standard; it is used by US and European civil aviation authorities as part of their regulatory requirements.

The first objective is: 'the software architecture and low-level requirements are developed from the high-level requirements'. Further guidance is given, including: 'low-level requirements and software architecture developed during the software design process should conform to the Software Design Standard and be traceable, verifiable and consistent'. But how is this to be interpreted? For example, what constitutes a verifiable low-level requirement, and how are such requirements to be traced to the initial requirements? Since the techniques are not prescribed, different practices may grow up in different companies and countries, even when the same standard is required.
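The traceability objective can at least be made mechanically checkable, even though DO-178B does not say how. The sketch below is our illustration, not anything prescribed by the standard; the requirement identifiers are hypothetical, and in practice the trace links would be extracted from the project's design documents.

    # Hypothetical requirement identifiers and trace links.
    high_level = {"HLR-1", "HLR-2", "HLR-3"}
    low_level = {
        "LLR-1": {"HLR-1"},           # LLR-1 refines HLR-1
        "LLR-2": {"HLR-1", "HLR-2"},  # LLR-2 refines two requirements
        "LLR-3": set(),               # no parent recorded: flagged below
    }

    # A low-level requirement with no parent, or a high-level requirement
    # never refined, breaks the traceability objective.
    untraced_llrs = [llr for llr, parents in low_level.items() if not parents]
    covered = set().union(*low_level.values())
    uncovered_hlrs = high_level - covered

    print("Low-level requirements with no parent:", untraced_llrs)   # ['LLR-3']
    print("High-level requirements never refined:", uncovered_hlrs)  # {'HLR-3'}

A check of this kind settles whether every requirement is traced, but not whether each requirement is 'verifiable'; that judgement remains exactly the kind of interpretation the text above describes.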

3. Existing software safety standards

Table 1 shows a small selection of standards relating to safety-critical software. The primary focus of our analysis is the US and UK standards for software safety in military systems. MIL-STD-882C and MIL-STD-498 are the US defence standards covering system safety and software quality respectively. The Interim Defence Standards 00-55 and 00-56 represent the latest UK thinking on the military application of safety-critical software. The international civil avionics guidance document DO-178B, described in Ref. [3], and the recent NASA Software Safety Standard NSS 1740.13 are included for comparison. The NASA standard is accompanied by a guidebook [4], described in Ref. [5]. There are many other standards relating to software and safety: see Refs. [2,6] for comparisons of a wider range of them. Here, we do not give a full comparison even of the small selection shown in Table 1; instead, examples from these standards are used to illustrate some of the difficulties of complying with differing software safety standards.

3.1. Software safety concepts

Safe software is achieved by a combination of:

1. software safety analysis, and
2. software correctness verification.

Correctness and safety are different properties. Software is correct if it meets its requirements, ensuring that it is reliable in operation, while it is safe if it never contributes to a hazard.

3.1.1. Software safety analysis

This attempts to ensure that the software does not contribute to hazards, even if it fails. The following activities are included in software safety analysis, which is carried out as part of a system safety programme.

3.1.1.1. Identification of software safety requirements. The safety requirements for the programmable subsystem are identified in the system safety analysis. The subsystem design may use software to meet some safety requirements, giving rise to software safety requirements. These safety requirements are traced down through the software design to identify requirements on the lowest level software modules which are important for safety.

3.1.1.2. Distinguishing critical software. Software which implements safety requirements is considered safety-critical. Software which could 'affect' safety-critical software is also considered critical, using specified criteria for the effect of one software element upon another.

3.1.1.3. Analysis of hazards arising from software. As well as the hazards identified by the system level analysis, additional hazards may be introduced by the decisions made during the implementation of the software. An analysis is carried out to identify any potential hazards, which, if possible, are then eliminated or controlled elsewhere in the design. This form of analysis can be applied at all levels in the software process, starting with the software requirements and going down to the code itself.

3.1.2. Software verification

Verification includes both analysis and testing; both can be applied at each stage of the software development process. For example, a design review could be used to verify the design against the requirements, and a code review to verify the code against the design. Testing is also carried out in stages: at first, the software modules are tested individually, then in combination, until system tests verify that the complete software system meets its requirements.

4. Standards of disharmony

Most safety-critical software standards combine the two concepts: software safety and software correctness. However, there are also some important differences in the realisation of these concepts in the standards of Table 1.

4.1. Comparing software safety analysis requirements

4.1.1. Software safety requirements

Identifying safety requirements is a central activity in a system safety programme. According to Leveson [7], the system safety concept in the USA developed in the postwar period, receiving impetus from the ICBM programmes in the fifties. The overall aim of a safety programme is to design out hazards. Once the overall system safety requirements are known, analysis of the system design determines the safety requirements for each subsystem. When software was used in the implementation of systems with safety requirements, it needed to be included in the safety programme. A version of MIL-STD-882 published in 1987 (MIL-STD-882B, Notice 1) distinguished special software safety analysis tasks. These tasks were removed from the 1993 version of the standard (MIL-STD-882C), on the grounds that the activities of a safety programme can be applied to systems implemented in any medium. However, the standard does not describe how the safety analyses can be performed for software; this sort of detailed information may be more appropriate in a handbook than in a standard, but to date no such handbook has appeared.

Int Def Stan 00-56 is based on concepts which may appear similar, but which also have significant differences, although Froome [8] states that the first draft 'largely conformed to MIL-STD-882B'. The central idea is to evaluate the risks arising from the use of a system and to ensure that they are acceptably small. Risk is a combination of the hazard severity and the hazard probability, so that a severe hazard may be tolerable if it can be shown to be sufficiently improbable. Shaw [9] describes how risk assessment has arisen from UK safety legislation, recently leading to the requirement in many industries for a safety case to be produced. A safety case is a reasoned argument to show that the risks arising from a system have been reduced as far as is reasonably practical. Software is distinguished in Int Def Stan 00-56 because its failures occur systematically, rather than randomly. As a result, the rate of software failure cannot be predicted using the techniques applicable, for example, to electronic components.

The concept of risk assessment is also present in MIL-STD-882C, but it is secondary to the elimination of hazards through design. Risk assessment is used to prioritise the resources available to eliminate hazards, especially if a hazard is discovered in the later stages of a system's development, when the cost of design changes is larger. Risk assessment is also used to determine the acceptability of hazards which cannot be eliminated by design: so-called 'residual risks'.
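The combination of severity and probability into a risk class can be shown as a simple lookup table. The category names and the class boundaries below are generic illustrations in the spirit of MIL-STD-882C and Def Stan 00-56, not the tables from either standard, which each define their own categories and tolerability criteria.

    # Illustrative risk classes indexed by (severity, probability).
    # Class A is intolerable; class D is broadly acceptable.
    SEVERITIES = ["catastrophic", "critical", "marginal", "negligible"]
    PROBABILITIES = ["frequent", "probable", "occasional", "remote", "improbable"]

    RISK_CLASS = [
        # frequent probable occasional remote improbable
        ["A",      "A",     "A",       "B",   "C"],  # catastrophic
        ["A",      "A",     "B",       "C",   "C"],  # critical
        ["A",      "B",     "C",       "C",   "D"],  # marginal
        ["B",      "C",     "C",       "D",   "D"],  # negligible
    ]

    def risk_class(severity, probability):
        return RISK_CLASS[SEVERITIES.index(severity)][PROBABILITIES.index(probability)]

    # A severe hazard may still be tolerable if sufficiently improbable:
    print(risk_class("catastrophic", "improbable"))  # 'C'

The last line illustrates the point made above: the same catastrophic severity yields a different risk class depending on how probable the hazard is shown to be.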

4.1.2. Safety-critical software

Software which implements a safety requirement is considered to be safety-critical or, equivalently, software is safety-critical if its failure could lead to a hazard. Int Def Stan 00-56 and RTCA DO-178B distinguish multiple levels of criticality for software. In 00-56 the criticality depends on the risks posed by the software failing. Since the probability component of the risk cannot be determined for software, the analysis is reversed, asking what probability of failure must be achieved for the risk to be acceptable. The software failure rate is expressed in qualitative terms, rather than quantitatively. Software with a lower acceptable rate of failure must be developed to more rigorous standards, including the use of the most rigorous verification techniques, which are described below. In DO-178B, there is no concept of software failure rate. Instead, the software criticality depends only on the extent to which failure of the software could lead to loss of the aircraft. The criteria used are most similar to the severity of the hazard, without taking account of its probability.

MIL-STD-882C distinguishes five levels of safety-critical software, although the concept is less central. The definition of the levels is similar to the approach taken in DO-178B; for example, software has the highest level of criticality if it directly controls potentially hazardous systems. The levels are used to prioritise the safety analysis tasks; MIL-STD-882C does not relate the software level to the software development techniques which must be used. The same software levels are described in the guidebook [4] for the application of the NASA standard NSS 1740.13. In this case, the approach of IEC 1508 [10], relating software development techniques to the software criticality level, is also adopted.
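The DO-178B scheme described above amounts to a direct lookup from the worst-case failure condition to a software level. The failure-condition categories and levels A to E below are those defined in DO-178B; the function itself is only our sketch of how the assignment works.

    # DO-178B assigns a software level from the most severe failure
    # condition the software can contribute to; the probability of the
    # software failing plays no part in the assignment.
    DO178B_LEVEL = {
        "catastrophic": "A",
        "hazardous":    "B",
        "major":        "C",
        "minor":        "D",
        "no effect":    "E",
    }

    def software_level(failure_conditions):
        # Return the level for the worst failure condition contributed to.
        order = ["catastrophic", "hazardous", "major", "minor", "no effect"]
        worst = min(failure_conditions, key=order.index)
        return DO178B_LEVEL[worst]

    print(software_level(["minor", "hazardous"]))  # 'B'

This contrasts with the 00-56 approach sketched earlier, where a severity is tempered by an argued probability before any development requirement is derived.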

4.1.3. Analysis of hazards arising from software

The analysis of hazards arising at each level in a system design is a central feature of a safety programme complying with MIL-STD-882. Software safety analysis is not distinguished from the analysis of possible hazards from electronics or other technologies; it is explicitly included in the Subsystem Hazard Analysis (Task 204) and the System Hazard Analysis (Task 205). MIL-STD-882C does not describe how such hazard analyses are to be accomplished. This is a particular problem for systems containing software, since applicable techniques are not widely known or used [11]. The NASA Guidebook [4, Section 5] contains safety checklists and describes some techniques, such as software fault trees. This form of hazard analysis is mentioned briefly in DO-178B Section 2.1. Int Def Stan 00-56 places most emphasis on the top-down analysis of systems, to determine how the top-level safety requirements flow down to the subsystems. There is less emphasis on the bottom-up analysis of subsystems, to detect any additional hazards which may arise at each level, although the 'functional analysis' or 'component failure analysis' activities may include bottom-up analysis.
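Software fault tree analysis, one of the techniques the NASA Guidebook describes, works backwards from a hazardous condition to the combinations of events that could produce it. The minimal sketch below evaluates a toy fault tree of AND/OR gates; the tree and the event names are invented for illustration only.

    # A toy fault tree: the hazard occurs if the interlock check is
    # bypassed AND either the sensor value is stale OR its range check
    # is missing. Leaves are basic events with assumed truth values.
    def evaluate(node, events):
        kind = node[0]
        if kind == "leaf":
            return events[node[1]]
        children = [evaluate(child, events) for child in node[1]]
        return all(children) if kind == "and" else any(children)

    tree = ("and", [
        ("leaf", "interlock_bypassed"),
        ("or", [("leaf", "sensor_stale"), ("leaf", "range_check_missing")]),
    ])

    events = {"interlock_bypassed": True,
              "sensor_stale": False,
              "range_check_missing": True}
    print(evaluate(tree, events))  # True: this combination reaches the hazard

In a real analysis the leaves would be conditions in the software requirements, design or code, and each combination that makes the top event true identifies a path to a hazard to be eliminated or controlled.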

4.1.4. Safety verification

Task 401 of MIL-STD-882C requires specific testing of a system against its safety requirements. This contrasts sharply with the approach of DO-178B, which requires all software requirements to be tested, with the degree of criticality of the software determining the thoroughness of the testing. Safety verification is mentioned in Int Def Stan 00-55.

4.2. Comparing software verification requirements

In this section, we consider two different approaches to verifying the correctness of software.

4.2.1. Testing with coverage measurement

This method of program verification is based on a measurement of the extent of the tests performed on a program. A program could be shown to be correct if it could be tested completely: this would require every combination of the input values to be tested, with the outputs meeting the specification. For any program, except the most trivial, the number of test cases required for complete testing is impossibly large. Therefore, measures of the extent of the testing are defined which provide different levels of assurance of correctness, but always falling short of complete testing. In DO-178B, the measures of the extent of testing are based on the branching structure of the program. Three levels of structural test coverage are defined: statement coverage ensures all statements are tested, decision coverage ensures all branches in the program are tested, and modified condition/decision coverage (MCDC) ensures that every element of every conditional expression is tested. Consider the very simple program, with boolean expressions c1 and c2 and statement S:

    IF c1 AND c2 THEN S END

This program has two statements: the 'if' statement itself and S. Table 2 shows a possible set of test cases required for each level of coverage. Test coverage is mentioned in Int Def Stan 00-55, but less detail is given.

Table 2. A possible set of test cases required for each level of coverage

Test case         Statement   Branch   MCDC
c1 = T, c2 = T    X           X        X
c1 = T, c2 = F                X        X
c1 = F, c2 = T                         X
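What each criterion demands of the test set can be checked mechanically. The sketch below is our illustration using the two-condition program above: it runs a set of test cases and records which decision outcomes, condition values and statements were exercised, so the growth from statement to branch to the MCDC set of Table 2 is visible in the output.

    # The paper's program: IF c1 AND c2 THEN S END
    def program(c1, c2, log):
        decision = c1 and c2
        log["decision_outcomes"].add(decision)  # branch coverage needs {True, False}
        log["c1_values"].add(c1)                # MCDC additionally needs each
        log["c2_values"].add(c2)                # condition to take both values
        if decision:
            log["statements_reached"].add("S")  # statement coverage needs S to run

    def run(tests):
        log = {"decision_outcomes": set(), "c1_values": set(),
               "c2_values": set(), "statements_reached": set()}
        for c1, c2 in tests:
            program(c1, c2, log)
        return log

    print(run([(True, True)]))                        # statement coverage only
    print(run([(True, True), (True, False)]))         # adds branch coverage
    print(run([(True, True), (True, False), (False, True)]))  # Table 2's MCDC set

Note that this simplified log only records the values each condition took; full MCDC additionally requires each condition to be shown independently to affect the decision, which the three tests of Table 2 achieve for this program.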

4.2.2. Formal verification

An alternative approach to verification is taken by Int Def Stan 00-55, which requires the use of 'formal methods'. A programming language can be considered to be a mathematical notation, so that a program is a large formula. In 00-55, a program must be specified 'formally', that is mathematically, and mathematical proof must be used to show that the program produces only the results allowed by the specification. By stating only what output is required for each input, but not how it is to be calculated, a formal specification can be completely precise, without being as long or complex as the program itself.

Int Def Stan 00-55 proved controversial when originally published. It was considered by some [12] that formal methods 'were untried, untested and had very limited tool support'. An information system for air-traffic control which has been developed using formal methods is described by Hall [13]. Formal methods were used, together with conventional methods, for both the specification and design of parts of the software, which totalled approximately 200 k lines of code. The software was delivered in 1992 and proved highly reliable: after twenty months of operation about 0.75 faults per thousand lines had been found. Metrics from this project have been analysed in detail by Pfleeger and Hatton [14]. They found that in this project formal methods coupled with thorough testing had led to highly reliable code, although they were not able to conclude from the data available how much of the reliability improvement resulted from the use of formal methods rather than from other factors. Despite the extensive use of formal methods, the development process used on the project described by Hall would not have satisfied the requirements of 00-55, since no formal proof was carried out. (There was no requirement to comply with Def Stan 00-55.) Moreover, Hall reports that the use of formal methods was more successful for functional specification than for design, noting that 'the use of formal methods in large-system design is less understood than their use in specification'.

Formal methods are also described in the NASA Guidebook, although the NASA standard NSS 1740.13 does not mandate formal methods. Compared to Int Def Stan 00-55, the approach is more pragmatic: 'Formal methods is not an all-or-nothing approach .... Although a complete formal verification of a large complex system is impractical at this time, a great increase in confidence in the system can be obtained by the use of formal methods in key locations in the system. [4, Section 4.2.3.2]'

Butler and others [15] report on a programme aimed at transferring formal methods technology to US industry. Some of the projects were carried out within the NASA Space Shuttle programme, focusing on the use of formal methods for requirements analysis.
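The point that a formal specification states what is required without saying how can be made concrete with a small example. The sketch below is a loose illustration of our own, not the Z or VDM notations used on projects like Hall's: the postcondition for an integer square root is written as an executable predicate and checked against one possible implementation. Checking by running tests is, of course, exactly what 00-55 considers insufficient; the standard requires the postcondition to be proved for all inputs.

    def isqrt_spec(x, r):
        # Postcondition: r is the integer square root of x.
        # Nothing here says how r is to be computed.
        return r >= 0 and r * r <= x < (r + 1) * (r + 1)

    def isqrt(x):
        # One possible implementation among many that satisfy the
        # specification; a proof would show the loop maintains r*r <= x.
        r = 0
        while (r + 1) * (r + 1) <= x:
            r += 1
        return r

    for x in range(100):
        assert isqrt_spec(x, isqrt(x))  # testing, not the proof 00-55 demands

The specification is a single line; the implementation could equally be a binary search or a hardware instruction, and the specification would not change.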

4.2.3. Static code analysis

Since Int Def Stans 00-56 and 00-55 have only recently been introduced and are not yet fully adopted, previous practices within the UK defence sector are still relevant. Evaluation of military avionics, including safety, is carried out at the Defence Test and Evaluation Organisation (DTEO) at Boscombe Down, whose requirements are described in Ref. [16]. This covers hazard analysis and the identification of safety-critical software. For the latter, static code analysis (SCA) is required to provide assurance of software correctness. SCA covers a range of techniques, with the most rigorous requiring formal mathematical analysis of the software code. Since SCA has been used mainly in the UK, it has sometimes been confused elsewhere with software complexity measurement techniques, which are used to ensure that software is not over-complex but cannot verify correctness. Ward [17] describes an application of SCA to a nuclear reactor shutdown system. Static code analysis was used to verify the software source code against its specification rather than specifically for safety analysis. The use of related techniques applied to C-130J avionic software is described in Ref. [18], though in this case the techniques were used during software development rather than retrospectively.
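One of the simpler members of the SCA family is data-flow analysis, which finds anomalies such as a variable read before it is ever assigned, without executing the code. The toy sketch below works only on straight-line code and is far simpler than the flow-sensitive and semantic analyses actually applied to avionic software; the variable names are invented for illustration.

    # Each statement is (defined_variable, set_of_used_variables).
    # A use of a variable with no earlier definition is an anomaly
    # that static analysis reports without running the program.
    def undefined_uses(statements, inputs):
        defined = set(inputs)
        anomalies = []
        for target, uses in statements:
            for v in uses:
                if v not in defined:
                    anomalies.append((target, v))
            defined.add(target)
        return anomalies

    straight_line = [
        ("alt_rate", {"altitude", "dt"}),   # 'dt' is never defined: anomaly
        ("warning",  {"alt_rate", "limit"}),
    ]
    print(undefined_uses(straight_line, inputs={"altitude", "limit"}))
    # [('alt_rate', 'dt')]

The most rigorous SCA techniques go well beyond this, constructing a mathematical model of the code's semantics and comparing it against the specification, as in the Sizewell B work described by Ward [17].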



5. Possible approaches to achieving harmony

In this section, we consider briefly how standards for software safety in defence could be harmonised. The most direct approach would be to adopt a single common standard for software safety in military avionics, following the example of the civil aircraft industry with RTCA DO-178B. This standard does not cover the same scope as existing military software safety standards, but it could be a starting point. Another possible starting point is the standard being prepared by the International Electrotechnical Commission: IEC 1508 [10]. This standard covers the functional safety of safety-related systems, including both hardware and software. It is intended to be 'generic' and to be tailored to the needs of each industry sector. The relationship between DO-178B and IEC 1508 is examined by Garnsworthy and Johnston [19], who note the significance of the international consensus which IEC 1508 represents. DO-178B is also the result of a process of consultation which achieved a consensus in the civil aircraft industry.

In this paper, we have argued that, at the present time, no such consensus exists between the US and UK defence communities, at least as evidenced by the different standards. It is possible, however, that the standards do not give a complete picture. Of those we have considered, MIL-STD-882C and DO-178B have the longest history: according to Leveson [7], MIL-STD-882 is derived from a US Air Force document published in 1966, while the first version of DO-178 was published in 1981. These standards are most likely, therefore, to correspond with the practices adopted in industry. Int Def Stans 00-56 and 00-55, however, seem to have been put forward to improve existing practices. For safety analysis, Int Def Stan 00-56 and IEC 1508 both appear to require a risk analysis from first principles. Within a particular application such as avionics, these standards could usefully be supplemented by more specific guidance, building on existing experience. The consensus in industry on current best practice therefore needs to be investigated further.

The absence of a technical consensus on software safety is a significant obstacle to the adoption of a common standard. Moreover, since the requirements of software standards often require interpretation, a common standard adopted before a technical consensus had been reached might not be sufficient to ensure common practices. A better appreciation of any differences in the approach taken to the safety of defence software in the US, the UK and other European countries is therefore an essential first step towards the harmonisation of standards.

6. Conclusions

Divergent software standards pose a problem for organisations wishing to procure a system, particularly a safety-critical one, when it contains software and it is necessary to show that the software complies with a software safety standard other than the one used by the original developers. The difficulties result from both general and specific factors. A general difficulty is that the processes used to qualify existing equipment to a new standard may not be effective for software safety. The preponderance of requirements relating to the development process in software standards is the principal difficulty, augmented by the degree of interpretation required to determine whether the standard has been met.

A brief comparison of US and UK defence standards, RTCA DO-178B and a recent standard and guidebook for software safety from NASA has illustrated some specific differences. The software safety analyses required by US and UK defence standards appear similar and could be interpreted as being similar. However, in our opinion, there are significant differences between these standards. The safety programme of MIL-STD-882C focuses on the elimination of hazards by design, treating a system component implemented in software in the same way as any other component: tracing safety requirements down through the design decomposition and checking, at each level, for any hazards introduced by the realisation of the requirements. In comparison, the UK's Int Def Stan 00-56 focuses on the assessment of risks arising from software failures. When the risks are high, the most stringent design and verification techniques must be employed to reduce the probability of failure.

The processes required by different standards to achieve the highest levels of reliability for safety-critical software also differ. DO-178B requires the use of very high levels of test coverage, while Int Def Stan 00-55 requires specifications to be written in a formal mathematical notation and the correctness of the program to be shown by logical proof, in addition to testing. Since 00-55 is not yet a fully adopted standard, the existing UK MOD requirement for the use of static code analysis to demonstrate the correctness of software has continued. MIL-STD-498, however, has no special requirements for safety-critical software, except that software safety analysis must be integrated with the system safety programme.

The greatest differences may be between the safety cultures of the US and UK defence communities; it is not clear how much the systems produced in the two cultures differ technically. The effects of the differences we have described between the standards are not clear: are there significant differences in the design or safety of systems produced to the differing standards? However, it is enough for the concepts of software safety to differ between defence communities for significant obstacles to international procurement to be created. Although the adoption of a common standard, such as IEC 1508, may be the way forward, a better appreciation of the different perspectives on software safety is an essential first step. Without first achieving a common understanding of the software safety problem, a common standard will not be effective.


References

[1] R.W. Butler, G.B. Finelli, The infeasibility of experimental quantification of life-critical software reliability, ACM Software Engineering Notes 16 (5) (December 1991) 66-76.
[2] P.V. Bhansali, Survey of software safety standards shows diversity, Computer 26 (1) (January 1993) 88-89.
[3] D.J. Hawkes, W.F. Struck, L.L. Tripp, An international safety-critical software standard for the 1990s, in: Proceedings of the 1993 Software Engineering Standards Symposium, Brighton, UK, IEEE, August 1993, pp. 178-187.
[4] NASA guidebook for safety critical software--analysis and development, Technical Report NASA-GB-1740.13-96, Lewis Research Center, Office of Safety and Mission Assurance, 1996.
[5] C.F. Radley, Software safety progress in NASA, Technical Report NASA Contractor Report 198,~12, Lewis Research Center, NASA, October 1995.
[6] D. Sparkman, Techniques, processes, and measures for software safety and reliability, Technical Report, Nuclear Systems Safety Program, Lawrence Livermore National Laboratory, May 1992.
[7] N.G. Leveson, SAFEWARE: System Safety and Computers, Addison-Wesley, 1995.
[8] P.K.D. Froome, Interim Defence Standard 00-56: hazard analysis and safety classification of the computer and programmable electronic system elements of defence equipment, Reliability Engineering and System Safety 43 (1994) 151-158.
[9] R. Shaw, Safety cases--how did we get here? in: R. Shaw (Ed.), Safety and Reliability of Software Based Systems, Proceedings of the Twelfth Annual CSR Workshop, Bruges, Springer-Verlag London Ltd, September 1996.
[10] IEC, IEC 65A/179-185, Draft IEC 1508--Functional safety: safety-related systems, Parts 1-7, Technical Report, IEC, June 1995.
[11] J.A. McDermid, Software hazards and safety analysis: opportunities and challenges, in: F. Redmill, T. Anderson (Eds.), Safety-critical Systems: The Convergence of High Tech and Human Factors, Proceedings of the Fourth Safety-critical Systems Symposium, Leeds, Safety-critical Systems Club, Springer, 1996, pp. 209-222.
[12] C. Rees, G. Oddy, Safety critical software for defence systems: requirements of the Interim Defence Standard 00-55, GEC Journal of Research Incorporating the Marconi Review and the Plessey Research Review 12 (1) (1995) 43-49.
[13] A. Hall, Using formal methods to develop an ATC information system, IEEE Software (March 1996) 66-76.


[14] S.L. Pfleeger, L. Hatton, How do formal methods affect code quality? IEEE Computer 30 (2) (1997) 33-43.
[15] R.W. Butler, J.L. Caldwell, V.A. Carreño, C.M. Holloway, P.S. Miner, B.L. Di Vito, NASA Langley's research and technology-transfer program in formal methods, in: COMPASS '95, Proceedings of the Tenth Annual Conference on Computer Assurance, Gaithersburg, MD, USA, June 1995, pp. 135-149.
[16] DTEO, Outline of requirements for the provision of a safety case by the Design Authority, Technical Report Reference AEN/18/103 Issue 1, DTEO Boscombe Down, February 1992.
[17] N.J. Ward, The rigorous retrospective static analysis of the Sizewell 'B' primary protection system software, in: J. Górski (Ed.), SAFECOMP '93: Proceedings of the 12th International Conference on Computer Safety, Reliability and Security, Springer-Verlag, October 1993, pp. 171-181.
[18] M. Croxford, J. Sutton, Breaking through the V and V bottleneck, in: Proceedings of Ada in Europe '95, Frankfurt, Springer-Verlag, October 1995.
[19] J.R. Garnsworthy, M.H. Johnston, Is IEC 1508 'Safety-related Systems: Functional Safety' relevant to the avionics industry? 1995 Avionics Conference and Exhibition, Heathrow, UK, ERA Technology Ltd, November 1995.

William Marsh graduated from Cambridge University in 1983 with an honours degree in Engineering and Electrical Sciences. He subsequently received an MSc in Computation from Oxford University. He started his career developing real-time process monitoring systems, mainly in the water industry. After a brief interlude applying expert systems in the insurance sector, he joined Program Validation Ltd, applying software verification techniques, including static analysis and formal proof, to safety-critical software. At this time he was involved in the formal definition of SPARK, a subset of the Ada programming language which has been adopted in several avionics systems. Since joining ERA's Software and Systems Integrity Department in 1994, he has been involved in the assessment of safety-related programmable systems in various industries. He is responsible for ERA's business in critical software assessment in the avionics and automotive sectors. This work has included advising the MOD on the procurement of safety-critical software. His particular interest is in techniques for safety-critical software verification, including hazard analysis, formal verification and testing, and how these techniques should be used in combination to achieve very high integrity.