Reliability Engineering 12 (1985) 43-53
Probabilistic Fracture Mechanics (PFM)--A Meeting Report--March 1983
Notes on the informal discussion meeting held at the London Headquarters of the Central Electricity Generating Board on 7 March, 1983, are presented. The papers consider the use of reliability analyses in risk assessments and in the design of structures to avoid failure. Professor P. Stanley (University of Manchester) chaired the meeting at which four speakers presented aspects of their work in the field of structural reliability. After each talk there was a discussion open to all participants.
The Use of Probabilistic Fracture Mechanics in Devising Quality Control Policies in the Fabrication Industry, by J. Rogerson, Cranfield Institute of Technology, Cranfield, Bedford.
Synopsis It is felt that the development of PFM is inhibited by the lack of sufficient and reliable data so that accurate failure predictions are difficult to make. Nevertheless, PFM analyses can be used as sensitivity analyses for ranking 'quality improving' factors in order of importance and therefore aid the derivation and implementation of quality control policies. Examples taken from the offshore structure field indicate the possibilities and the uncertainties.
Summary The general approach to PFM analysis was first considered, leading on to more specific analyses. For high reliability structures such as a pressure
vessel or offshore structure, failure could be catastrophic. Because such failure rates are low, there will be insufficient data from past history for accurate reliability predictions. In these cases, PFM can be used as an aid to the assessment of reliability. Insufficient data on design, material and quality levels can still be a limitation of PFM. There is often genuine disagreement about the type of assumptions to be made, for example with regard to stress level, reliability of non-destructive testing (NDT) and fracture toughness measurement. The strength of the approach lies in its ability to predict reliability and to rank the various quality improving factors. The specific analyses presented here involve a time-independent failure integral calculation. Distribution functions for actual defect sizes and critical defect sizes are combined to give an estimate of the failure integral. The analysis is kept simple since there are few data on other variables or possible interactions. Figure 1 shows the results of two inspections, plotted as relative frequency against defect size. Curves 1 and 2 represent results from two different organisations and procedures, but on the same structure. The apparent discrepancy between the two curves is probably due to the less sensitive inspection (2) being unable to distinguish the sizes of small defects; plotting relative frequency thus gives a peak around 3 mm. This simply emphasises the care required in interpreting NDT records. A Weibull distribution was used for the defect sizes. The parameter estimates were chosen to be conservative and such that the distribution was most exact at larger defect sizes.
Fig. 1. Relative frequency against defect size (mm) for the two inspections (curves 1 and 2).
The upper tail is the area of particular importance for reliability estimates and, additionally, the NDT reliability should also be more accurate there. Some 1000 m of weldment were used to obtain the data. For the critical defect size distribution (i.e. that derived from the fracture toughness), different distributions were appropriate under different conditions. The large spread of data for upper shelf behaviour gave quite a good fit to a normal distribution, both for ferritic steels and their weldments. In the transition region, a lognormal distribution seemed a closer approximation. Comparisons were made of results before NDT and repair (at the fabrication stage) and after NDT and repair (after the structure had been cleared of defects). In the upper shelf regime, a difference of two or three orders of magnitude was observed, emphasising the requirement for highly reliable techniques for NDT and repair, in particular for the few large defects. Some sensitivity to assumptions about toughness (Jc) was noted, but stress concentration factors and residual stresses were less important. In the transition region, failure probabilities were much higher and there was a less marked difference between the values before and after NDT and repair. Absolute reliability estimates are of little worth on their own. However, they point the direction for quality control, yielding semi-quantitative judgements, as indicated by the sensitivity analyses just summarised. Further details of the work may be found in the paper by Saldanha Peres and Rogerson in Reliability Engineering, 8(3) (1984), pp. 149-64.
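The failure integral calculation described above can be sketched numerically. In the fragment below, actual defect sizes follow a Weibull distribution (as in the talk) and critical defect sizes a normal distribution (as suggested for upper-shelf behaviour); all parameter values are illustrative assumptions, not the values fitted from the 1000 m of weldment, so the output is indicative only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000  # Monte Carlo sample size

# Actual defect sizes (mm): Weibull, as used in the talk.
# Shape k and scale lam are illustrative guesses only.
k, lam = 1.2, 5.0
defect = lam * rng.weibull(k, n)

# Critical defect sizes (mm): normal, as for upper-shelf toughness.
# Mean and standard deviation are again purely illustrative.
critical = rng.normal(60.0, 15.0, n)

# Failure integral: probability that an actual defect reaches the
# critical size.  Raw counting needs enormous samples for small
# probabilities, so also condition on the critical size and use the
# Weibull survival function S(x) = exp(-(x/lam)**k).
pf_raw = np.mean(defect >= critical)
pf_cond = np.mean(np.exp(-(np.clip(critical, 0.0, None) / lam) ** k))
print(f"raw estimate:         {pf_raw:.2e}")
print(f"conditional estimate: {pf_cond:.2e}")
```

The conditional estimator makes the role of the upper tail explicit: the result is driven by the Weibull survival function evaluated at the critical sizes, which is why the fit was made conservative at large defect sizes.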
Design for Zero Failure, by P. D. T. O'Connor, British Aerospace Dynamics Group, Stevenage--Bristol Division, Stevenage, Herts.

Synopsis The talk covered the way in which distributed load and strength interact to cause weakening and failure, and how this relates to the predictability of the reliability of mechanical items. The paper stressed the practical implications in relation to failure-free design.

Summary Where possible, a structure or component will be designed so that it will not fail (in contrast with the fail-safe philosophy).
Fig. 2. Distributed load (L) and strength (S): a case of high SM and low LR.
Failure costs (e.g. warranty, product liability) can be high, so the product must be correct 'first time'. Reliability predictions generally involve distributed values of load (L) and strength (S) parameters. (For example, in Rogerson's paper, L represents the defect size and S the critical defect size.) The means and standard deviations of L and S can be used to define a safety margin (SM) and a loading roughness (LR) (Fig. 2). These in turn give an indication of the proportion of items failing upon application of a load L' (as shown by the shaded areas in Fig. 3). Overstress tests can truncate the strength distribution at the lower end. Figure 4 shows curves of failure rate per application of load (on a log10 scale) against SM for high and low loading roughness (Carter, 1979). This assumes normal distributions for L and S, though similar curves can be derived for other assumptions.
Fig. 3. (a) Low SM, low LR; (b) low SM, high LR.
Fig. 4. Failure rate per application of load (log scale) against SM, for high and low loading roughness.
The general curve is given in Fig. 5, which divides the values of SM into three groups (for constant loading roughness): (1) the failure rate is too high; (2) the failure rate is too sensitive to small changes in SM; and (3) designs having a virtually zero failure rate, that is, designs which are intrinsically reliable. In reliability assessments, it is high load and low strength values that are of interest. It is not sufficient to consider the distributed values close to the mean, and indeed a unimodal distribution may not be sufficient: the distribution may be multi-modal, with interactions. The mean strength may not be constant (it may fall, for example, through degradation owing to cumulative damage), and the standard deviation may also change.
Fig. 5. The general curve of failure rate against SM (for constant loading roughness), showing regions (1), (2) and (3).
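For the normal-distribution case, the single-application interference probability follows directly from SM. The sketch below uses Carter's usual definitions of SM and LR; the numerical values are invented for illustration, and the repeated-load behaviour captured by Carter's curves (where LR matters) would need a fuller treatment.

```python
from math import sqrt
from statistics import NormalDist

def interference(mean_s, sd_s, mean_l, sd_l):
    """Safety margin, loading roughness and single-application failure
    probability for normally distributed strength S and load L."""
    combined = sqrt(sd_s**2 + sd_l**2)
    sm = (mean_s - mean_l) / combined   # safety margin
    lr = sd_l / combined                # loading roughness
    # S - L is normal, so Pr(failure) = Pr(S - L < 0) = Phi(-SM)
    pf = NormalDist().cdf(-sm)
    return sm, lr, pf

# Illustrative values only (arbitrary units)
sm, lr, pf = interference(mean_s=500, sd_s=40, mean_l=300, sd_l=30)
print(f"SM = {sm:.2f}, LR = {lr:.2f}, pf per load application = {pf:.2e}")
```

Truncating the strength distribution by overstress testing, as mentioned above, removes exactly the low-strength tail that dominates this calculation.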
In summary, high reliability design principles may be expressed as follows:
(a) Determine the most likely distributions of S and L;
(b) Evaluate the SM which will ensure high reliability;
(c) By analysis of the nature of the S and L distributions, determine the most effective protection methods;
(d) Analyse strength degradation modes;
(e) Plan a test programme to corroborate the analysis results, and analyse the test results; and
(f) Take action to correct or control (redesign).
The presentation then moved on to reliability prediction and its associated credibility. Predictions are generally made to give an early indication of a system's performance, to assess cost implications, to identify design areas requiring further work and to assess possible trade-offs (such as cost-benefit analyses). Methods of reliability prediction include parts count, stress analysis, load/strength analysis (statistical design) and reliability modelling (fault tree/Markov). When large numbers are involved, the central limit theorem may be used to make predictions about stochastic behaviour. For example, Boyle's law is experimentally verified, but predictions based upon it use the kinetic energy of the gas molecules. Since the numbers of molecules involved in typical systems are vast, the central limit theorem enables the essentially empirical Boyle's law to be used with a high degree of credibility. Similarly, high credibility is achieved with simpler problems where there are few interactions (for example, the predicted path of a snooker cue ball). The real problem lies with moderately large numbers of actions and interactions, e.g. 100-1000, typical of reliability and failure physics situations. Human variability is also unpredictable and often defies statistical analysis. Accurate reliability prediction is therefore not feasible, and we must be aware of the uncertainty inherent in any such work.
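The central limit argument can be made concrete with a small numerical experiment: the scatter of the average of n independent contributions falls as 1/sqrt(n), so an aggregate over a vast n (as in Boyle's law) is effectively deterministic, whereas n of the order 100-1000 leaves appreciable scatter. The exponential distribution and the sample sizes below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each contribution is strongly scattered and non-normal
# (exponential, coefficient of variation 1.0).  The standard
# deviation of the average of n contributions falls as 1/sqrt(n).
for n in (10, 1_000, 100_000):
    means = np.array([rng.exponential(1.0, n).mean() for _ in range(2_000)])
    print(f"n = {n:>7}: scatter of the average = {means.std():.4f}")
```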
Bibliography
O'Connor, P. D. T. Practical Reliability Engineering. John Wiley, New York, 1981.
Carter, A. D. S. Reliability Reviewed. Proc. I.Mech.E., 193(4) (1979).
Use of Advanced First-Order Reliability Methods in the Treatment of Fracture and Fatigue, by M. J. Baker, Imperial College of Science and Technology, London SW7.
Synopsis Advanced first-order reliability analysis is a powerful tool which has already been used in the study of a variety of structural safety problems and in the development of a number of deterministic structural codes. These methods are now being applied to the reliability assessment of components under conditions of fracture and fatigue. The aim of the analysis is to include all the relevant sources of uncertainty and to identify those which have a major effect on the risk of failure. Improvements in reliability can then be achieved by appropriate control measures.
Summary Several aims of reliability analyses were identified. Immediate aims are the determination of component and system reliabilities. More globally, reliability analyses provide a rational treatment of uncertainty, improved decision making, a rationalisation of control measures (inspection, testing and monitoring), maintenance and repair, improved design procedures (design rules and safety factors) and an aid to maximising the utility of the system. There are three types of uncertainty: (1) physical variability, e.g. in material properties; (2) statistical uncertainty, i.e. uncertainty in distribution parameters owing to lack of data; and (3) model uncertainty, arising from inadequacies of modelling. In the most general sense, the reliability of a structure is its ability to fulfil its design purpose for some specified time. In a narrower sense, it is the probability that a structure will not attain each limit state during a specified period. The advanced first-order second-moment (AFOSM) method is sometimes referred to as the Level 2 method. The steps in an AFOSM analysis are as follows:
--Identify failure criteria.
--Develop a mathematical model for each failure criterion.
--Identify the relevant basic variables.
--Model the various uncertainties by appropriate probability distributions.
--Set up a multi-dimensional failure surface.
--Map the failure surface into a standard normal space (mean = 0, standard deviation = 1).
--Use a suitable algorithm to find the closest point of the surface to the origin of the normal space: this is the 'design point'. The distance from the origin to this point is the reliability index, β.

For example, consider the simple case of the elastic deflection of a simply supported beam of span L under a uniformly distributed load, w:

M = Δp - 5wL^4/(384EI) = g(w, L, E, I)

where Δp is some critical value of deflection and M is the safety margin or failure indicator: M < 0 gives failure and M > 0, safety. In practice, it is Pr{M < 0} that is required. Because of the properties of the standard normal space, the failure probability pf is approximated by

pf ≈ Φ(-β)

where β = reliability index and Φ = cumulative normal distribution function. The direction cosines of the line joining the origin to the closest point on the failure boundary can be thought of as sensitivity factors: they give the relative ranking of importance of the variables for the failure mode being considered. This theory has been developed and applied to a wide range of structural problems; details of methods and some applications can be found in Thoft-Christensen and Baker (1982).
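A minimal sketch of the AFOSM iteration (the Hasofer-Lind/Rackwitz-Fiessler algorithm) for the beam example is given below. All means, standard deviations and the critical deflection Δp are invented for illustration (the talk quoted no numbers for this example), and the variables are taken as independent normals so that the mapping to standard normal space is a simple scaling.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical data (SI units) -- illustrative only, not from the talk.
mu  = np.array([10e3, 5.0, 210e9, 8e-5])   # means of w (N/m), L (m), E (Pa), I (m^4)
sig = np.array([1.5e3, 0.05, 10e9, 8e-6])  # standard deviations
delta_p = 0.008                            # critical deflection (m), assumed

def g(x):                                  # safety margin M = g(w, L, E, I)
    w, L, E, I = x
    return delta_p - 5 * w * L**4 / (384 * E * I)

def grad_g(x):                             # analytic gradient of g
    w, L, E, I = x
    c = 5 / 384
    return np.array([-c * L**4 / (E * I),
                     -4 * c * w * L**3 / (E * I),
                      c * w * L**4 / (E**2 * I),
                      c * w * L**4 / (E * I**2)])

# Rackwitz-Fiessler iteration in standard normal space u = (x - mu)/sig
u = np.zeros(4)
for _ in range(100):
    x = mu + sig * u
    grad_u = grad_g(x) * sig               # chain rule: dg/du = (dg/dx) * sig
    u_new = (grad_u @ u - g(x)) / (grad_u @ grad_u) * grad_u
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)                   # distance to design point
alpha = -u / beta                          # direction cosines (sign conventions vary)
print(f"beta = {beta:.3f}, pf ~ Phi(-beta) = {norm.cdf(-beta):.2e}")
print("alpha for (w, L, E, I):", np.round(alpha, 3))
```

The printed alpha values reproduce the ranking role of the direction cosines described above: the variable with the largest component dominates the failure mode.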
A notable contribution to the field has been Madsen's application of Level 2 methods to fatigue and fracture. As an example, consider a fracture mechanics application incorporating fatigue crack growth. Assuming an initial defect size of a_0 (Fig. 6), various failure criteria may be identified:

a_c - a ≤ 0
K_Ic - K_I ≤ 0
t_c - t ≤ 0

where a = defect size after N cycles, a_c = specified critical crack size, K_I = stress intensity factor, K_Ic = fracture toughness, t = actual time to failure and t_c = design life. In this example, the first criterion (a_c - a ≤ 0) will be used.
Fig. 6. Growth of a fatigue crack from an initial defect size a_0.
Consider constant amplitude loading of stress range Δσ; then, after N cycles, the failure probability pf is

pf = Pr{a_c - a ≤ 0} = Pr{a_c - a(N) ≤ 0}

The Paris law for fatigue crack growth is taken as

da/dN = C(ΔK)^m,   with ΔK = BΔσ√(πa)

where C and m are random variables. For m > 2, integrating

∫ a^(-m/2) da (from a_0 to a(N)) = ∫ CB^m(Δσ)^m π^(m/2) dN (from 0 to N)

gives

a(N)^m' = a_0^m' + m'CB^m(Δσ)^m π^(m/2) N,   where m' = (2 - m)/2

Putting a(N) = a_c, the maximum allowable defect size, gives the safety margin

M = a_0^m' + m'CB^m(Δσ)^m π^(m/2) N - a_c^m'
In the example given, distributions for the variables were assumed as given in Table 1.
52
Probabilistic fracture mcchanic.~
a meeting report
March / ~ ' ¢
TABLE 1

Variable             a_c         m       C               B          Δσ
Distribution         lognormal   normal  lognormal       lognormal  extreme value Type I
Mean                 2 mm        3.0     1.92 x 10^-13   1.12       50 N mm^-2
Standard deviation   0.5 mm      0.25    0.445 x 10^-13  0.056      20 N mm^-2
The calculated failure probabilities ranged from 3 x 10^-11 after 10 cycles to 0.58 after 10^7 cycles. In practice, it is of more benefit to concentrate on the decision making aspects by using sensitivity factors than to attempt to interpret very low failure probabilities. The results indicate that the α (sensitivity) factors are dominated by m, with a lesser contribution from variations in Δσ. Uncertainties in the distribution of the number of cycles, N, are of lower priority. In more recent applications, these methods have been applied to fixed offshore structures under random wave loading.
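As a rough check on numbers of this kind, the Table 1 example can be simulated directly; a sketch follows. Note that the initial defect size a_0 is not quoted in the report, so the 0.5 mm used here is an assumption, as are the working units (mm and N mm^-2). Crude sampling can only resolve the upper end of the quoted range: a probability of the order of 3 x 10^-11 would require a first-order method such as AFOSM, or importance sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Table 1 values.  Working units assumed: mm and N mm^-2.
# a_0 is NOT given in the report; 0.5 mm is an assumed value.
a0 = 0.5

def lognormal(mean, sd, size):
    """Sample a lognormal from its arithmetic mean and standard deviation."""
    s2 = np.log(1.0 + (sd / mean) ** 2)
    return rng.lognormal(np.log(mean) - s2 / 2.0, np.sqrt(s2), size)

ac = lognormal(2.0, 0.5, n)                 # critical size a_c (mm)
m  = rng.normal(3.0, 0.25, n)               # Paris exponent
C  = lognormal(1.92e-13, 0.445e-13, n)      # Paris coefficient
B  = lognormal(1.12, 0.056, n)              # geometry factor
scale = 20.0 * np.sqrt(6.0) / np.pi         # Gumbel scale from sd = 20
ds = rng.gumbel(50.0 - 0.57722 * scale, scale, n)  # stress range, EV Type I
ds = np.clip(ds, 0.0, None)                 # guard against rare negative draws

mp = (2.0 - m) / 2.0                        # m' = (2 - m)/2
for N in (1e5, 1e6, 1e7):
    # Safety margin M = a0^m' + m' C B^m (ds)^m pi^(m/2) N - ac^m'
    M = a0**mp + mp * C * B**m * ds**m * np.pi**(m / 2) * N - ac**mp
    print(f"N = {N:.0e}: pf ~ {np.mean(M <= 0):.3e}")
```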
Bibliography
Thoft-Christensen, P. and Baker, M. J. Structural Reliability Theory and its Applications. Springer-Verlag, Berlin, 1982.
Madsen, H. O. Deterministic and Probabilistic Models for Damage Cumulation due to Time-Varying Loading. DIALOG 5-82, Engineering Academy of Denmark, 1982.
Awareness of Risk, by M. H. Ogle, The Welding Institute, Abington Hall, Abington, Cambridge.
Synopsis The theme of the talk was the problem of anticipating sources of risk when designing, constructing and maintaining structures. Particular reference was made to the author's experience in the field of steel bridges. It was questioned whether the creation of increasingly elaborate procedures to control known sources of risk is in fact the best way of reducing overall risk.

Summary Engineering judgement plays a very important role in the avoidance of
TABLE 2

Low frequency                   High frequency
Gross error in design           Loading uncertainty
Gross abuse                     Inaccuracy in calculational method
Error of communication          Deficiency in material
Gross error in calculation
structural failure. Imagination is also important for the correct interpretation of results, especially where 'large' factors may not be as dominant as 'small' ones. For example, the main peak (high frequency) part of a force distribution may well be of little concern compared with the tail of the distribution, which is assigned a low frequency of occurrence; similarly for the resistance distribution, where low frequency, low resistances may be important owing to incorrect material heat treatment or even mixed identity (the wrong material). CIRIA 63 classifies a number of possible variations as low or high frequency. Low frequency means that one departure from the normal could cause failure, whereas for high frequency variations two or three departures would be required before failure (Table 2). Low frequency variations can be further classified into:
(a) Structural misconception.
(b) Design oversight.
(c) Unforeseen construction deviation.
(d) Unforeseen service deviation.
A number of case histories for each category were then presented. These were real pf = 1 cases, i.e. failures which actually occurred.