
Nuclear Engineering and Design 71 (1982) 367-374
North-Holland Publishing Company

Contribution

THE ROLE OF DATA ANALYSIS AND SURVEILLANCE IN RELIABILITY ANALYSIS *

W. BASTL

Gesellschaft für Reaktorsicherheit (GRS) mbH, Forschungsgelände, D-8046 Garching, Fed. Rep. Germany

As an introduction, the data problem is illustrated by discussing failure modes and the question of random versus systematic failures. Structural failures are addressed briefly, while system failures are treated with reference to control and instrumentation. The relevant German standards are examined regarding probabilistic requirements. It is concluded that reliability and risk assessments are still directed towards comparison, which nevertheless provides a powerful tool for a well-balanced system design. Finally, some examples are given of the updating of literature data by operating experience.

1. Introduction

The effect of quality control and surveillance programs on the updating of reliability estimations is considered. Furthermore, the problem areas shall be identified where statistical data are urgently needed. The latter already leads to the general data problem, which is still the critical area in reliability analysis. Especially in nuclear power technology there are plenty of quality assurance measures, at the beginning and throughout the lifetime of the plant. The basis of all the quality control programs is still a deterministic one, i.e., a certain initial failure probability or a certain failure probability during the lifetime of the plant is not guaranteed explicitly. This situation exists for good reasons: the high quality of the products used in reactor technology, especially for the relevant safety systems, and the rapid technical progress make it very difficult to arrive at statistically founded reliability estimations, at least in the sense of "hard figures" which can be used with quality standards or safety guidelines. This lack of a statistical or probabilistic basis generates many problems in failure data analysis, especially when trying to quantify the influence of all types of quality assurance measures on the reliability of the plant. For all these reasons, it is apt to discuss the failure data problem in general.

* This paper was presented as an introduction to Session IV, 'The role of data analysis and surveillance in reliability analysis', of the SMiRT Post Conference Seminar, Paris, 24-25 August 1981.

0029-5493/82/0000-0000/$02.75 © 1982 North-Holland

Though in the strict sense it is often difficult to distinguish between a structural failure and a system failure, they are treated here in two separate sections. Following adopted practice, a structural failure is associated with a large structure or heavy components, e.g., reactor vessel, piping.

2. The data problem

Following the practice used in the reliability analysis of nuclear power plant systems, we can think of four different failure types, depending on the scope of the system or system part (operational or stand-by) and its nature (passive or active) (fig. 1). In this sense a large-size control valve would comprise a "structure" (the housing) and a "system" (actuator, stem and cone), and the expected failure rates will certainly be different; it could be either an operational (failure types 1 and 3) or a stand-by device (failure types 2 and 4). Bearing in mind the specific failure mode which has to be considered depending on the mission, e.g., valve fails to close, fails to open, sticks, internal leakage, we get in the end a variety of different failure rates. Thus we can see even from this simplified scheme that the sampling of relevant field data is a very tedious and extensive task. As mentioned above, the scheme chosen follows established practice and represents a very coarse mesh of parameters which have to be considered with failure rates. In principle, we would like to have the parameters influencing the failure rates (load, ambient conditions, etc.) available in a continuous form. But it is almost impossible to achieve this by field experience; we would have to rely upon special factory tests, which up to now have been applied successfully only with electronic and electric equipment.

14", B a s t l / The role oJ data analysis and surceillance

368

Operational Systems

Stand-by Systems

Passive Systems/ System Parts ("structures")

Active Systems/ System Parts ("systems")

Examples: Pipes,Vessels

C,[ Systems

Control Valves Failure Type Ior2

Failure Type 3or{,

even more difficult. Problems concerning C M F and human error are discussed more deeply in ref. 1. From the above discussion, the tight interconnection becomes evident between failure data analysis/sampling and system reliability analysis, which needs the data as an input. As cited in ref. 2, data analysis is an integral part of probabilistic risk analysis; it is the process by which the evidence that becomes available to us is incorporated in our models. The theoretical basis for the incorporation of the various types of experience into the reliability prediction is the well-known Bayesian theorem. The paper cited above discusses some problems when applying this theorem in risk analysis.

Failure Types land 3 or
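As a concrete illustration, the sketch below combines a generic prior for a component failure rate with plant-specific evidence using the conjugate gamma-Poisson form, the standard textbook mechanism for such Bayesian updating. All numbers are invented; this is not the specific procedure of ref. 2.

```python
# A minimal sketch of Bayesian updating of a failure rate, assuming a
# gamma prior (e.g. derived from literature data) and a Poisson count of
# plant-specific failures. All numbers are invented for illustration.

# Prior: lambda ~ Gamma(alpha, beta), mean alpha/beta
alpha_prior, beta_prior = 2.0, 2.0e5      # prior mean 1e-5 per hour

# Plant-specific evidence: n failures in T component-hours
n_failures, T_hours = 3, 4.0e5

# Conjugate update: posterior is Gamma(alpha + n, beta + T)
alpha_post = alpha_prior + n_failures
beta_post = beta_prior + T_hours

print(f"prior mean rate:     {alpha_prior / beta_prior:.1e} per hour")
print(f"posterior mean rate: {alpha_post / beta_post:.1e} per hour")
```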

3. Structural failure


As this subject is covered by various papers presented in these proceedings, only a few remarks will be made here. In the reliability analysis of nuclear power plants there existed from the beginning the problem of failure data for large structures. They are of course in most cases unique, and even medium-size heavy components are built in comparatively small quantities. In combination with the high quality, the sample sizes achieved are too small. The only way to solve this problem is to provide probabilistic models for describing the mechanical behaviour of the structure. The various efforts in this field have resulted in a new discipline, probabilistic fracture mechanics, initiated and developed by Freudenthal [3] and in Germany successfully applied and further developed mainly by Schuëller [4]. All investigations performed in this field until now have shown the crack distribution in the structure to be the essential parameter, with the most critical areas for cracks being the welds and their surroundings. For estimating the crack distribution two methods offer themselves [5,6]:
- the assessment of the results of non-destructive acceptance tests (where the crack distribution is given by the distribution of crack depths);
- a model for the development of cracks based on the welding protocol.
Once the initial crack distribution is known, the further behaviour of the cracks can be estimated considering the load history throughout the lifetime of the structure. In ref. 7 the influence of the ultrasonic inspection of the welds, the hydro-test of the vessel, and crack growth under normal, upset and test conditions is discussed with reference to the probability of brittle failure of the reactor pressure vessel. As for the load history of the


structure, it becomes evident that, e.g., calculation of the essential parameters (pressure, temperature) under transient conditions will be necessary. In order to be able to consider the uncertainties of such a code in a probabilistic manner, the response surface methodology can be used. Besides the above example, this method is well suited to estimate the uncertainties of codes in various other types of application, e.g., coherent blockage in fuel elements due to LOCA, or structural behaviour due to seismic occurrences [8].
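The following sketch illustrates the response surface idea under invented assumptions: a simple stand-in function replaces the expensive transient code, a quadratic surface is fitted to a small design of code runs, and the input uncertainty is then propagated through the cheap surrogate. All functions, ranges and numbers are placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def transient_code(p, t):
    # Stand-in for an expensive transient computation (e.g. peak stress
    # as a function of pressure p and temperature t); invented formula.
    return 0.3 * p**2 + 0.05 * p * t + 2.0 * t + 10.0

# 1) Small design of code runs over the input ranges
p_pts = np.linspace(10.0, 20.0, 5)
t_pts = np.linspace(280.0, 320.0, 5)
P, T = np.meshgrid(p_pts, t_pts)
y = transient_code(P, T).ravel()

def quad_design(p, t):
    # Design matrix of a full quadratic response surface in (p, t)
    return np.column_stack([np.ones_like(p), p, t, p**2, p * t, t**2])

# 2) Fit the surface by least squares
coef, *_ = np.linalg.lstsq(quad_design(P.ravel(), T.ravel()), y, rcond=None)

# 3) Monte Carlo on the cheap surrogate instead of the code itself
p_mc = rng.normal(15.0, 1.5, 100_000)
t_mc = rng.normal(300.0, 8.0, 100_000)
y_mc = quad_design(p_mc, t_mc) @ coef

print(f"P(response > 950) ≈ {(y_mc > 950.0).mean():.3f}")
```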

4. System failure

In nuclear reactor technology we are concerned with high-reliability systems. The high reliability is achieved by means of high quality components and, in many cases, by means of redundancy. Therefore the best way to arrive at statistically meaningful failure data is to collect those of the components. Consequently, failure of the system is described by component failures (or a combination of component failures) and the structure of the system. From the point of view of failure data sampling, this procedure has the advantage of at least presenting the chance to obtain large enough population sizes. It is evident that C+I systems should be well suited, and that problems are to be expected for mechanical systems. As a matter of fact, experience has confirmed this situation, but it has also shown the difficulties for C+I, still existing or recently arising, for example:
- proper treatment of reliability effects which are due to the design of electronic circuits (and not due to

the quality of the components);
- fast development in electronic component technology;
- common mode failures, especially those stemming from design/technological deficiencies (see the point before).
This shall be illustrated by discussing some problems that arise with integrated circuits (ICs), especially the type of testing which is used to detect deficiencies of internal contacts. Highly integrated circuits have as many as a thousand internal contacts, and somewhat more than 100 different contacting techniques have been developed. Therefore contacts not only play the dominant role in the reliability of the IC, but also cause substantial difficulties for the proper treatment in a reliability model. The internal contacts can be divided into three categories:
(i) Contacts for heat removal. They connect the substrate to the casing and must have good electrical and thermal conduction properties. Tests to detect deficiencies: storage, temperature shock.
(ii) Wire and band contacts. They connect the microcontacts of the semiconductor systems to the outside contacts of the casing. Tests to detect deficiencies: ultrasonic, mechanical shock, humidity.
(iii) Contacts to the substrate. They are the most complex and complicated part of the contacting techniques used. Metallization is applied in one or several layers. Tests to detect deficiencies: thermal storage, humidity.
In order to give a rough view of the situation, table 1

Table 1
The failure mode distribution of integrated circuits and associated screening tests [9]

Failure modes: Si-crystal surface effects; crystal fixing; metallization; oxidation; bonding; casing; electrical stabilization.
Screening tests: visual check; temperature variation; moist heat; shocks; vibration; leakage checks; burn-in; heat storage.
(Matrix of detection marks not reproduced.)


shows the failure mode distribution of ICs and the associated screening tests according to ref. 9. Looking only at these few points, we can already imagine the complexity of reliability prediction. On the other hand, there has been considerable progress in understanding the specific technological problems, and also substantial progress in the failure rate estimation of integrated circuits [9,10]. Among others, models to predict the reliability of ICs can be found in the MIL-Handbook 217B. They are derived from test and field data. Failure rates of ICs can be evaluated depending on:
- the amount of quality control;
- the stress during application;
- the degree of integration.
(The degree of integration is given by the number of transistors, gates and bits (for memory units), respectively.) A schematic sketch of this model structure is given below.
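The general multiplicative ("pi-factor") structure of such handbook models can be sketched as follows. The factor names mirror the handbook's pattern (quality, stress, environment, complexity), but the formula and all numerical values below are invented placeholders, not the actual 217B model or tables.

```python
# Schematic sketch of the multiplicative pi-factor structure used by
# MIL-HDBK-217-style models for microcircuits. Factor names follow the
# handbook's general pattern; the numbers are invented placeholders.

def ic_failure_rate(c1, pi_q, pi_t, pi_e):
    """Predicted failure rate in failures per 10^6 hours.

    c1   -- complexity factor, grows with the degree of integration
            (number of gates/transistors/bits)
    pi_q -- quality factor (amount of screening/quality control)
    pi_t -- temperature/stress factor for the application
    pi_e -- environment factor (e.g. ground fixed vs. airborne)
    """
    return c1 * pi_q * pi_t * pi_e

# Illustrative comparison: the same chip with and without screening
lam_screened = ic_failure_rate(c1=0.01, pi_q=1.0, pi_t=2.0, pi_e=1.0)
lam_unscreened = ic_failure_rate(c1=0.01, pi_q=10.0, pi_t=2.0, pi_e=1.0)
print(f"screened:   {lam_screened:.3f} per 1e6 h")
print(f"unscreened: {lam_unscreened:.3f} per 1e6 h")
```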

For all the reasons discussed above, we still have the situation of the relative character of reliability statements, in spite of all the progress in data analysis and data sampling. What has certainly improved is the engineering judgement on the meaning of reliability figures and on the order of magnitude of failure probabilities for specific components or systems. This situation is reflected by the manner in which reliability considerations are treated in reactor safety guidelines. Below are some examples from German guidelines.
(i) Safety criteria of the Federal Minister of the Interior, Criterion 1.1, Basic Principles of Safety Precautions, para. 1: (1) ... according to common engineering experience, malfunctions of plant components or systems (abnormal operating conditions) can occur during the lifetime of the plant. To cope with these abnormal operating conditions, systems shall be provided for operational control and monitoring. These systems shall be designed such that incidents as a result of abnormal operating conditions are avoided with sufficient reliability. Comment regarding methodology: to ascertain that the safety concept is well balanced, the reliability of safety related systems and plant components - supplementing the overall assessment of the nuclear power plant's safety on the basis of deterministic methods - shall be determined with the aid of probabilistic methods, as far as the required accuracy can be achieved according to the state of science and technology.
(ii) KTA Safety Standard 3501, Reactor Protection System and Monitoring of Engineered Safeguards, Section 5, Design of Reactor Protection System, para. 5.1.2, Reliability and Quality Control:

5.1.2.1. The reliability of the equipment shall be determined. Note: This can be done by qualification tests, by statistical methods, failure effect analyses, worst-case tests or by evaluations of operating experience.
5.1.2.2. Within the framework of the factory tests, production runs shall be checked for the required quality by checking a representative random sample under operating and worst-case conditions.
(iii) KTA Safety Standard 3701.1, General Requirements for the Electrical Power Supply of the Safety System in Nuclear Power Plants; Part 1: Single-Unit Plants, Section 3, General Requirements, para. 3.2, Reliability: The power supply for the safety system shall be designed to be of such reliability that it is not the determining factor for the unavailability of the systems to be supplied. The components employed shall be qualification tested or service proved with regard to the intended use and the assumed conditions during use, and shall be as maintenance-free as possible. An adequate reliability shall be demonstrated for the specified normal operation and for incidents to be considered; every component and sub-system of the power supply of the safety system shall be considered for this.
As can be seen from the guidelines, probabilistic methods are recommended as an additional aid to assess safety related systems, and somewhat more explicit reliability investigations are described for reactor protection systems. In the following, let us explore the subject a little further. Factory tests comprise two essential steps, the testing of the nominal data (e.g. nominal supply voltage and ambient conditions as given in the test room) and the testing under worst-case conditions. Probabilistic methods are used in this context with acceptance criteria. Qualification tests are performed by institutions independent of the vendor. There are two KTA Safety Standards presently in preparation:
- KTA 3503, Equipment Related Qualification Tests of Electronic Units of the Reactor Protection System;
- KTA 3505, Qualification Tests of Transmitters.
The draft version of KTA 3503 contains only some aspects of the probabilistic approach, in that the failure effect analysis is described as a means to determine failure modes and subsequently the associated failure rates of the electronic unit. In connection with KTA 3505, proposals have been made to include guidelines for the determination of failure rates. These take into consideration sample sizes, observation time, appropriate test cycles, and the reduction of testing time.


In general, a distinction is made between theoretical failure rate estimation (from the literature) and practical failure rate estimation (from field experience). In the latter case, conditions such as the following are proposed:
- the equipment tested has to be in operation for longer than two years;
- the operation time has to be longer than 10^7 h;
- if the two above conditions are not fulfilled, comparability with existing equipment has to be demonstrated: it has to be shown that no new component types have been used and that the same design principles and ambient conditions apply. If comparability is given, the failure rates of the existing equipment can be used.
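For the practical estimation from field experience, a classical point estimate with a two-sided 90% confidence interval can be sketched as follows. The failure count and operating time are invented, and the chi-square construction is the standard one for a Poisson failure process, not a procedure quoted from the KTA proposals.

```python
from scipy.stats import chi2

# Invented field observation: n failures over T cumulative operating hours
n, T = 4, 2.0e7

lam_hat = n / T                      # point estimate, failures per hour

# Two-sided 90% confidence bounds for a Poisson failure process
lower = chi2.ppf(0.05, 2 * n) / (2 * T)
upper = chi2.ppf(0.95, 2 * (n + 1)) / (2 * T)

print(f"estimate: {lam_hat:.1e} per hour")
print(f"90% interval: [{lower:.1e}, {upper:.1e}] per hour "
      f"(upper/lower ratio {upper / lower:.1f})")
```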

So far, we have attempted to illustrate the situation with reference to the safety guidelines and safety standards used in nuclear reactor technology. It appears that factory tests of production lots are the area where statistical methods are applied explicitly. Until now the German guidelines do not ask for factory investigations which would guarantee certain reliability figures throughout the lifetime of the equipment. However, this will be the consequential next step, and the present guidelines at least guarantee the same initial reliability of the equipment delivered. Evaluation of practical experience is an important element of qualification tests. We have to note, however, some substantial restrictions. When broadening the sample size by including non-nuclear experience, it is often difficult to demonstrate whether the application conditions are really comparable. This at least is the practice in Germany, but it should be mentioned that the lack of guidelines causes additional uncertainties. Nevertheless, improvement in this area can be expected when even more standardization of C+I equipment takes place, and when reliability requirements in non-nuclear applications increase. Then proper evaluation of field data will be improved. There is no doubt that careful failure data sampling and evaluation by probabilistic methods have substantially influenced the quality of commercial electronics, as is the case for Japanese vendors [11]. On the other hand, the safety guidelines also show where probabilistic methods can be used successfully in spite of the data problem, namely when assessing the balance of different safety measures. This more or less directly leads to the idea of risk assessment, which can be used, e.g., for comparing the risks of the environmental impact of different types of industries. Risk evaluations inherently have the problem of many uncertainties which do not originate from the probabilistic methodology; in this context consequence modelling should be mentioned. For this reason, the failure data problem is somewhat easier, and larger confidence intervals are acceptable. Besides reliability analyses of technical systems for the sake of finding weaknesses (bottlenecks), the probabilistic approach to assessing the system quality achieved so far is of great importance in the context of risk studies [12,13]. Let me now give some examples of the way the data problem is handled in the German Risk Study, and how practical experience is used to update failure data.

5. Updating of literature data by experience

Assessing data from the literature with the aim of using them for a specific case study is very often difficult because of the lack of technical specifications. Frequently, statistical data are also not sampled for the purpose of failure rate estimation, so that even important information like the failure mode, operational conditions, etc. is not given. This leads to quite a large scattering of the data. As we know from risk studies of nuclear power plants, ratios of 10-50 between the upper and lower limits of the confidence interval are quite frequent. Therefore, worldwide efforts have been made over the past several years to build up data banks. In Germany, a reliability data bank system has been built up by GRS, where data from nuclear power plants are sampled, partly from licensing event reports and partly from specific plants, whereby contracts with the utilities enable a much more detailed reporting scheme. In the German Risk Study, data were generally represented in such a way as to fully reflect the scattering found in the literature. This sometimes leads to a large bandwidth which, on the other hand, could be diminished by the additional assessment of operational data as a subsequent step. In the following, some results are discussed [14].

Fig. 2. Fitted log-normal distribution of pump failures.

Fig. 3. Fitted log-normal distribution of the failure of differential pressure transducers.

Fig. 5. Pump fails during operation, 90% confidence interval.

Fig. 2 shows the fitted log-normal distribution of pump failures. The points represent the values found in the literature. As can be seen, 58% of the failure rates are below 10^-5/h, and about 94% are below 5 × 10^-5/h. The scattering of the data is quite large; the band containing 90% of the values spreads over 1.5 orders of magnitude. The figure shown is one of the worse examples, though quite good agreement with the log-normal distribution can be noticed. One of the better cases is shown in fig. 3. From a comparison for "pump fails to start", we find a factor of 36 between the lower and upper limits of the 90% confidence interval for the literature data, and a factor of 8 for the experience data (fig. 4). While the mean value is the same there, this is not the case for "pump fails during operation". This is due to the fact that the literature data contain a greater number of highly loaded operational pumps, whereas the experience data relate to relatively lowly loaded pumps, which are partly in stand-by (fig. 5). Again the confidence intervals for the experience data are considerably smaller. Relatively good agreement between the literature and experience data can be seen for "motor valve fails to open or to close" (fig. 6). As a conclusion the investigator states that:
- the data derived from experience show much less scattering than the literature data;
- in 50% of the cases the agreement of the expectation values is very good;
- in those cases where substantial differences have been observed, the experience data yield the more favourable values.
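A minimal sketch of the log-normal treatment used for figs. 2-6, with invented literature values: the rates are fitted on the logarithmic scale, and the central 90% band and its upper-to-lower ratio follow from the fitted parameters.

```python
import numpy as np

# Invented literature failure rates (per hour) for one failure mode
rates = np.array([2e-6, 5e-6, 8e-6, 1e-5, 2e-5, 4e-5, 9e-5])

mu = np.log(rates).mean()            # fitted log-normal parameters
sigma = np.log(rates).std(ddof=1)

median = np.exp(mu)
# central 90% band: exp(mu +/- 1.645 sigma)
lower, upper = np.exp(mu - 1.645 * sigma), np.exp(mu + 1.645 * sigma)

print(f"median rate: {median:.1e} per hour")
print(f"90% band:    [{lower:.1e}, {upper:.1e}] per hour")
print(f"band ratio:  {upper / lower:.0f} (~{np.log10(upper / lower):.1f} decades)")
```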

Fig. 4. Pump fails to start, 90% confidence interval.

Fig. 6. Motor valve fails to open or close.

A further possibility for the comparison of data is also discussed in ref. 15. Very often it is more practical to look for experience data from subsystems, because they can be taken directly from the results of the relevant periodic tests, e.g., the testing of the HP injection. These data can then be compared with the failure rates of the subsystems as calculated from the component failure rates via fault trees (table 2).

Table 2
Comparison between actual subsystem failure rates (nuclear experience) and values calculated from component failure rates via fault trees

Subsystem                  Nuclear experience   Calculation
HP injection               8 × 10^-3            1.6 × 10^-2
LP injection               8 × 10^-3            1.6 × 10^-2
LP sump recirculation      1.6 × 10^-2          2.4 × 10^-2
Component cooling train    8 × 10^-3            1.8 × 10^-2
Service water train        1 × 10^-2            1.6 × 10^-2
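To make the comparison concrete, the following sketch shows how such a calculated subsystem failure probability per demand is composed from component values with elementary fault-tree gate logic. The tree structure and numbers are invented, and independence of failures is assumed, which is exactly the assumption the CMF discussion in section 2 qualifies.

```python
# A minimal sketch of composing a subsystem failure probability from
# component failure probabilities via fault-tree logic. The structure
# and numbers are invented, not the German Risk Study's actual trees.

def or_gate(*p):   # fails if ANY input fails (series logic)
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):  # fails only if ALL redundant inputs fail
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Hypothetical injection train: pump OR valve OR power supply fails;
# two redundant trains must both fail for system failure on demand.
p_train = or_gate(3e-3, 1e-3, 5e-4)
p_system = and_gate(p_train, p_train)
print(f"train failure on demand:  {p_train:.2e}")
print(f"system failure on demand: {p_system:.2e}")
```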


When estimating component failure rates from the failure reports of electronic circuits, it has to be considered that in most cases only the failure of the electronic unit is reported. This is especially true for plug-in units, which are simply replaced when they fail; it is then not possible to trace the failure back to the component. A methodology which is suited to cope with this problem and which enables a better estimate from the available sample sizes is discussed and applied in ref. 16. It considers:
- the degree of utilization, a (taking into account that not all functions of the electronic unit are used in a specific application);
- the degree of failure detection, w (taking into account that not all component failures cause a failure of the unit, e.g., components employed to improve the dynamic function of the unit).
Given n_ij as the number of components of type i that failed in units of type j during the observation time T, the reduced failure rate λ'_ij can be estimated as

λ'_ij = n_ij / (N_j N_ij a_ij w_ij T),

with N_j the number of units of type j and N_ij the number of components of type i in unit j. If all the electronic units in a plant are to be considered, numerator and denominator have to be summed with respect to j. For practical application it is useful to use typical global factors a and w; their values can then be estimated according to the situation in the plant under consideration. For the practical analyses performed in ref. 16, aw was estimated to be 0.5. This was based mainly on an investigation of failure data from the coal-fired plants at Pleinting, Schwandorf and Aschaffenburg, where all the electronic units of the control systems were analyzed by means of an automatic test device after five years of operation.
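A direct transcription of this estimate, sketched with invented counts; only the global product aw = 0.5 is taken from the discussion above.

```python
# Sketch of the reduced failure-rate estimate described in the text.
# The counts are invented; aw = 0.5 follows the estimate from ref. 16.

def reduced_failure_rate(n_ij, N_j, N_ij, a, w, T):
    """lambda'_ij = n_ij / (N_j * N_ij * a * w * T)

    n_ij -- failures of component type i observed in units of type j
    N_j  -- number of units of type j in the plant
    N_ij -- number of components of type i per unit of type j
    a    -- degree of utilization of the unit's functions
    w    -- degree of failure detection
    T    -- observation time in hours
    """
    return n_ij / (N_j * N_ij * a * w * T)

# Example: 2 transistor failures in 200 plug-in units of one type,
# 30 transistors per unit, observed over 5 years (~4.4e4 h)
lam = reduced_failure_rate(n_ij=2, N_j=200, N_ij=30, a=1.0, w=0.5, T=4.4e4)
print(f"lambda' ≈ {lam:.1e} per hour ({lam * 1e9:.0f} per 1e9 h)")
```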

In table 3 the estimated "real" failure rates λ for some of the components used in the control systems SIMATIC-P and GEAMATIC-1150 are given and compared with data from MIL-HDBK 217B. The original data for the systems are taken from refs. 17 and 18. As can be seen from the result, the MIL-HDBK data evidently overestimate the failure rates of the components used in the control systems investigated.

Table 3
Failure rates of components from control systems (in 10^-9 h^-1)

Component    GEAMATIC   SIMATIC-P   DECONTIC-B   MIL-HDBK 217B a)
Resistor     0.46       0.4         0.9          15
Capacitor    14.2       5.4         2.7          6
Transistor   48         130         17           180
Diode        3.8        3.0         5.0          120

a) Quality factor 1; ground fixed.


6. Conclusion

As failure data sampling of components is not the principal problem, further progress should be possible by putting even more emphasis on the national and international data banks which have been enlarged in recent years. It should then be possible to arrive at more meaningful data, i.e. with confidence bands small enough to achieve more useful results for the overall reliability or risk of the plant under consideration, and finally to make possible a better comparison of the risks involved in different technologies. Designing and manufacturing complex technical systems very often requires large computer codes. For an overall probabilistic approach to assess these systems, it is also necessary to consider the uncertainties of the codes; response surface methods are a means to estimate them. Structural failure data would appear to be a greater problem. Since large structures are in most cases of unique configuration and/or design, the prediction of failure probabilities involves much more modelling as compared to the components. Consequently, validating the data will most probably remain a considerable problem for the years to come. In that context the quantification of expert opinion and generic experience becomes especially important. Therefore, the application of Bayesian methodologies should be promoted much more than up to now, but this also involves the willingness of the analysts to explicitly accept the subjectivistic character of probability prediction. Even for the specialist in the field it is often not clear that the subjectivity involved does not degrade the result of the prediction, but rather reflects the situation as it stands. Moreover, the outsider tends to put subjectivity and uselessness on the same level. It is also to be expected that engineers and system analysts will have to be better trained in probabilistic methods and probabilistic thinking. This will certainly be the hope if we are to seriously apply probabilistic methods for decision making in all types of technologies where failures are associated with high risks.

References

[1] H. Hoertner, Problems of failure data with respect to systems reliability analysis, these proceedings.
[2] G. Apostolakis, Data analysis in risk assessments, these proceedings.
[3] A.M. Freudenthal, Safety, reliability and structural design, J. Struct. Div. 87, No. ST3 (1961).
[4] G.I. Schuëller, Einführung in die Sicherheit und Zuverlässigkeit von Tragwerken (W. Ernst & Sohn Verlag, Berlin, 1981).
[5] W. Schmitt and R. Wellein, Model of the flaw size distribution in welds, Fraunhofer-Institut für Werkstoffmechanik, Freiburg, Fed. Rep. Germany.
[6] R. Wellein, private communication.
[7] R. Wellein, Influence of pre/in-service inspections and tests on the reliability of reactor pressure vessels, Kraftwerk Union AG, Erlangen, Fed. Rep. Germany.
[8] A.C. Lucia, Response surface methodology approach for structural reliability analysis: an outline of typical applications performed at CEC-JRC, Ispra, Commission of the European Communities, Joint Research Centre - Ispra Establishment.
[9] H. Wilde, Aspekte der Zuverlässigkeit bei Baugruppen und Geräten, Seminar Series QZE on Quality and Reliability Assurance in the Electronics Industry, Productronica 77, Munich (22-26 November 1977).
[10] W. Gerling and K. Werner, Sicherung der Zuverlässigkeit von Bauelementen - technologische Massnahmen und Ergebnisse bei Halbleiter-Produktion, Seminar Series QZE on Quality and Reliability Assurance in the Electronics Industry, Productronica 77, Munich (22-26 November 1977).
[11] W. Toelle, Probleme der Zuverlässigkeit bei Rundfunk- und Fernsehgeräten und den zugehörigen Baugruppen, Seminar Series QZE on Quality and Reliability Assurance in the Electronics Industry, Productronica 77, Munich (22-26 November 1977).
[12] Reactor Safety Study, An assessment of accident risks in US commercial nuclear power plants, WASH-1400 (NUREG-75/014) (October 1975).
[13] Deutsche Risikostudie Kernkraftwerke, Eine Untersuchung zu dem durch Störfälle in Kernkraftwerken verursachten Risiko, Hauptband und Fachband 2, Verlag TÜV Rheinland (1980).
[14] Ergebnisse der Deutschen Risikostudie, 3. GRS-Fachkonferenz, München, 18-19 September 1980, GRS-34 (September 1981).
[15] E. Lindauer and P. Kafka, Auswertung von Betriebserfahrungen durch die GRS, GRS-Fachgespräch, Köln (30-31 October 1980).
[16] E. Schrüfer et al., Ausfallraten ausgewählter Bauelemente und Geräte der Leittechnik, Lehrstuhl und Laboratorium für Elektrische Messtechnik, EMT 1/79.
[17] H.D. Hager and U. Steimel, Zuverlässigkeit elektronischer Baugruppen beim Einsatz in der Kraftwerksleittechnik, Elektrizitätswirtschaft 75 (1976) 24, 931-933.
[18] G. Meyer, Die Zuverlässigkeit von Baugruppen der Leittechnik im Kraftwerk Irsching, Lehrstuhl für Elektrische Messtechnik, Interner Bericht (November 1978).