Computers and Structures 87 (2009) 930–947
Scalable uncertainty and reliability analysis by integration of advanced Monte Carlo simulation and generic finite element solvers

M.F. Pellissetti, G.I. Schuëller

Institute of Engineering Mechanics, University of Innsbruck, Technikerstrasse 13, A-6020 Innsbruck, Austria
Article history: Received 5 February 2009; Accepted 9 April 2009; Available online 1 May 2009

Keywords: Stochastic structural analysis; Parallel processing; Advanced simulation; Structural analysis software

Abstract

This contribution describes how the uncertainty associated with structures can be modeled and analyzed, in context with state-of-the-art FE software and modern computing infrastructure. Uncertainty modeling with high-dimensional random variables and random fields motivates the adoption of advanced Monte Carlo methods for reliability analysis. On the implementation side, object-orientation and parallelization have been embraced to ensure flexibility and performance. A novel, Matlab-based toolkit, COSSAN-X, embodying these characteristics, is presented. The application to a satellite under harmonic excitation and a turbine blade under centrifugal loading indicates the importance of considering spatial fluctuations and the scalability with respect to realistic FE models.

© 2009 Elsevier Ltd. All rights reserved.
1. Introduction

By and large, structural engineers worldwide rely on analysis and design methods which account for the unavoidable uncertainties by making grossly simplifying assumptions, such as using extremely large or small values, or average values, for the parameters characterizing the structure. In other words, the adopted methods are still based on deterministic reasoning, and the uncertainties are – at best – accounted for in an intuitive fashion, but surely not in a systematic way that reflects the omnipresence of inherent, random uncertainties – an omnipresence that is inevitably confirmed by the – all too rare – experimental testing campaigns.

The lack of penetration of rational uncertainty analysis methods in engineering practice is hardly due to insufficient theoretical foundations or to a scarcity of efficient algorithms. Indeed, uncertainty and reliability analysis in structural engineering have been vibrant topics of research for several decades (cf. for instance [1]). While a large portion of the associated efforts has focussed on shedding light on the fundamental and theoretical aspects and on the application of the uncertainty analysis methods to strongly simplified, reduced-order models of structures, significant progress has also been made recently in the rational treatment of uncertainties in large FE models of complex structures [2].

A frequently sensed motivation behind the refusal to apply uncertainty analysis in a systematic way is that the associated computational effort is viewed as overwhelming. While it is certainly true that processing uncertainties in the context of
computational models of structures involves significant additional computational effort, it is surprising that the incessant growth of computational power – which is available to structural engineers by virtue of the corresponding progress in computer hardware technology – does not seem to alleviate the computational burden associated with uncertainty analysis, at least in the perception of structural engineers. The reason for this may well lie in the fact that the computational power gained with each new generation of desktop computers and workstations is invested in ever more sophisticated and refined FE models. The predictive quality of such sophisticated models is not necessarily commensurate with the trust that is placed by the general public and by decision makers in the associated results, as has recently been remarked in [3].

In summary, it is clear that the demands placed on the accuracy of finite element models are usually disproportionate to the simplifications made with respect to the uncertainties associated with the model. The accuracy afforded by the finite element models is therefore frequently elusive. Indeed, for a given level of uncertainties, there is clearly a delimiting point in the degree of resolution, beyond which any further refinement does not improve the FE model with respect to its intended use, because the scatter in the predictions due to the uncertainties is significantly larger than the error reduction resulting from the refinement [4].

The above mentioned imbalance is mirrored by the highly contrasting numbers of software codes for finite element analysis and for structural reliability analysis. This very imbalance represents a plausible explanation for the limited diffusion of uncertainty analysis. While the popularity of the FE method has entered a virtuous cycle with the development of general-purpose software codes [5,6], the corresponding evolution
of general-purpose tools for uncertainty and reliability analysis has lagged behind. This picture of the situation has emerged from a recent Special Issue on structural reliability software coordinated by the authors [7,8]. While there appears to be a comforting trend reversal – in view of several recent developments in the reliability software arena – the possibilities offered by the presently available codes appear limited when compared to the status quo of general-purpose FE codes.

The present manuscript discusses tools and methods for uncertainty and reliability analysis with special emphasis on the respective implications for their implementation in general-purpose software. The guiding principle is the compatibility of the uncertainty analysis software with the state of the art in FE modeling. Consequently, from the methodological point of view, advanced simulation methods are in the focus, since they have been shown to form the most scalable class of methods. As far as the implementation is concerned, the issues related to the interfacing of FE software and uncertainty analysis software are stressed, since it is viewed as imperative that the uncertainty analysis capitalize on the existing, highly developed and tested FE codes.

The above issues are discussed and elucidated in the context of a software tool for uncertainty and reliability analysis, COSSAN™, which has been developed by the authors and the members of the Institute of Engineering Mechanics (IfM). At the end of the manuscript application examples are presented, which have been performed using the presented software. The purpose of these examples is to demonstrate the applicability of uncertainty and reliability analysis to large FE models of complex structural assemblies by means of suitable software tools, whilst making use of the nowadays widely available possibilities of parallel processing. The key message to be conveyed is that in times where high-performance computing is within everybody's reach, there is no reason to forego the possibilities offered by systematic uncertainty analysis in structural engineering.

Presently a number of software packages for uncertainty and reliability analysis are available. Compared to these packages, the software presented here has the following distinguishing characteristics: (i) the Matlab-based engine permits the direct use of the software's data structures from within the de-facto standard tool for scientific computing (Matlab); (ii) random fields can be used to model the spatial fluctuations of the properties of structural components, due to the spatial heterogeneity of materials and to manufacturing processes; (iii) problems with a very large number of uncertain parameters can be handled thanks to advanced Monte Carlo simulation based procedures for assessing the relative importance of the uncertain parameters and for estimating the reliability; the execution of these advanced simulation procedures in parallel mode is fully automated.
2. Methods of analysis
2.1. General remarks

The computational analysis of the structural performance in view of uncertainties requires two modeling steps, namely the construction of a mathematical–mechanical model and – on the basis of the latter – the modeling of the uncertainties. As for the mathematical–mechanical model of structures, the finite element method has established itself as the standard tool for structural analysis and is used in a variety of fields of engineering. Turning then to the modeling of the uncertainties, the concepts and notions of probability theory were adopted early on for capturing uncertainties in computational mechanics and constitute a versatile and powerful framework for this purpose. In the present manuscript, the focus is hence on computational tools based on the finite element method for the structural analysis and on probabilistic methods for uncertainty modeling and propagation.

2.2. Modeling of uncertainties in structural properties and loading

A large portion of the parameters which characterize any numerical model utilized for structural analysis is affected by uncertainty [9,10]. Examples of such parameters are the strength or the mass density of a given construction material, the section dimensions of a structural member or the magnitude of a load assumed to act on the considered structure. A convenient and rational way of representing this uncertainty consists in modeling these parameters as random variables. Formally, the latter are collected in a vector of random variables,

$$X = X(\theta) = [X_1\; X_2\; \ldots\; X_d], \qquad X_i = X_i(\theta), \qquad \theta \in S, \tag{1}$$

where θ denotes the random event, S the so-called sample space and d the dimension of the random vector X. The latter is fully characterized by its joint cumulative distribution function (CDF),

$$F_X(x) = P[X_1 \le x_1, X_2 \le x_2, \ldots, X_d \le x_d], \tag{2}$$

where x = [x_1 x_2 … x_d] is an arbitrary value of the vector X and P[·] denotes the probability of the event enclosed in the brackets. In many practical applications numerous uncertain parameters can be assumed to be statistically mutually independent, thus simplifying the definition of the joint CDF in Eq. (2).

However, the representation of the uncertainty by means of random variables is not adequate for parameters and quantities of the numerical model which are spatially or temporally distributed and which exhibit spatial or temporal fluctuations [11–13]. Examples of such quantities are the thickness of a shell structure, which fluctuates in space, and the ground acceleration at a selected location, which varies randomly over time. Physical phenomena of this type have the character of a random function, rather than a random variable,

$$a(x) = a(x, \theta) \quad \text{spatial fluctuations (random field)}, \tag{3}$$
$$a(t) = a(t, \theta) \quad \text{temporal fluctuations (stochastic process)}, \tag{4}$$
$$a(x, t) = a(x, t, \theta) \quad \text{spatial and temporal fluctuations}. \tag{5}$$

For use in numerical computations, efficient methods are available for the discretization of random fields and stochastic processes, such as the widely used Karhunen–Loève (K–L) expansion [14–16].
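To make the K–L discretization concrete, the following minimal Matlab sketch discretizes a one-dimensional Gaussian random field with an exponential covariance kernel on a grid and draws one realization from the truncated expansion. All parameter values (grid, mean, standard deviation, correlation length, number of retained terms) are illustrative assumptions, not values from this paper.

```matlab
% Minimal K-L sketch: 1-D Gaussian random field with exponential
% covariance, discretized on a grid; all parameter values are assumed.
n   = 200;  m = 20;                  % grid points, retained K-L terms
x   = linspace(0, 10, n)';           % spatial grid
mu  = 1.0; sig = 0.1; lc = 2.0;      % mean, std. dev., correlation length
X1  = repmat(x, 1, n);
C   = sig^2 * exp(-abs(X1 - X1') / lc);    % covariance matrix on the grid
[Phi, Lam] = eigs(C, m);             % m dominant eigenpairs of C
xi  = randn(m, 1);                   % independent standard normal variables
a   = mu + Phi * (sqrt(diag(Lam)) .* xi);  % one realization a(x, theta)
```

Each new draw of xi yields a new realization of the field; the truncation order m controls how much of the field's variance is retained.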
2.2.1. Stochastic finite element analysis

The finite element method is the standard tool for certain classes of partial differential equations arising in various fields of engineering, and in particular for those arising in solid mechanics. For linear systems, enforcing global static or dynamic equilibrium, the FE method leads to a system of linear equations and to a system of linear ODEs (assuming viscous damping), respectively,

$$K U = F, \qquad M \ddot{U}(t) + C \dot{U}(t) + K U(t) = F(t), \tag{6}$$
where the matrices K and M are the global stiffness and mass matrices, respectively, obtained by adding the contributions of all element matrices,
$$K = \sum_e K^e, \qquad M = \sum_e M^e. \tag{7}$$
The latter matrices have the form,
$$K^e = \int_{\Omega^e} B^{e\,T} D^e B^e \, d\Omega^e, \qquad M^e = \int_{\Omega^e} \rho^e H^{e\,T} H^e \, d\Omega^e, \tag{8}$$
where B^e is the matrix relating element displacements and strains, D^e is the elasticity matrix relating stresses and strains, ρ^e is the mass density and Ω^e is the spatial domain of the element. The global damping matrix C is typically formulated in terms of M and K.

If some of the physical parameters of the model, which enter the analysis either through the element matrices in Eq. (8), or through the loading – i.e. the right-hand sides of Eq. (6) – and the boundary conditions, are affected by uncertainties, then the concepts introduced in Section 2.2 may be used to rationally represent these uncertainties. In this case, the numerical schemes for the analysis of the uncertainty associated with the quantities of interest of the structural analysis are termed Stochastic Finite Element methods (SFEM) [17–19,2]. Following the notation in Eq. (1), the set A is introduced, which contains the random fields and stochastic processes used to model quantities whose spatial or temporal fluctuations are assumed to have a significant influence on the structural performance,
$$A = A(\theta) = \{a_1(x, t, \theta)\; a_2(x, t, \theta)\; \ldots\; a_{d'}(x, t, \theta)\}, \tag{9}$$
where d′ is the number of random fields and stochastic processes utilized in the uncertainty modeling. The propagation of the uncertainty associated with the physical parameters modelled as random variables – contained in X(θ) – and those modelled as random fields or stochastic processes – contained in A(θ) – is made explicit in the following by noting the dependence on θ, which denotes the randomness. With reference to Eqs. (8) and (7),
$$D^e = D^e(X(\theta), A(\theta)) = D^e(\theta) \;\Rightarrow\; K^e = K^e(\theta) \;\Rightarrow\; K = K(\theta). \tag{10}$$
An analogous causal sequence applies to the mass and damping matrices and to the force vector. In view of the fact that the uncertainty propagates to the structural response through the discretized equations of (static and dynamic) equilibrium, Eq. (6), the following equations result, with the randomness of each quantity denoted by θ,
$$K(\theta) U(\theta) = F(\theta), \tag{11}$$
$$M(\theta) \ddot{U}(t, \theta) + C(\theta) \dot{U}(t, \theta) + K(\theta) U(t, \theta) = F(t, \theta). \tag{12}$$
On the basis of the above equations, the goal of stochastic finite element analysis is to characterize the uncertain structural response, U(t, θ), and eventually also other quantities of interest, Q(θ), which are representative of the structural performance, such as stresses, accelerations, etc.

2.3. Uncertainty propagation in structural analysis

2.3.1. Monte Carlo simulation

The most general and robust method for processing and estimating the uncertainty associated with the structural performance is Monte Carlo simulation (MCS), which consists in generating independent samples of the physical parameters modelled as random variables or random fields (stochastic processes). With reference to Eqs. (1) and (9), the following ensembles are created using a random number generator,
$$\{X^{(j)}\}_{j=1}^{N} = \left\{ \left[ X_1^{(j)}\; X_2^{(j)}\; \ldots\; X_d^{(j)} \right] \right\}_{j=1}^{N}, \tag{13}$$
$$\{A^{(j)}\}_{j=1}^{N} = \left\{ a_1^{(j)}(x, t, \theta)\; a_2^{(j)}(x, t, \theta)\; \ldots\; a_{d'}^{(j)}(x, t, \theta) \right\}_{j=1}^{N}. \tag{14}$$
For each of the samples, the corresponding response is computed by substituting the j-th realization of the above ensemble for the uncertain model parameters in Eqs. (10)–(12). This operation results in the ensemble of N samples of the quantities of interest of the structural performance,

$$\{Q^{(j)}\}_{j=1}^{N} = \left\{ \left[ Q_1^{(j)}\; Q_2^{(j)}\; \ldots\; Q_q^{(j)} \right] \right\}_{j=1}^{N}. \tag{15}$$

The above ensemble can be further processed in order to obtain, for instance, statistical estimators of the moments of the random performance parameters, such as means, standard deviations and correlations.
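The sampling loop of Eqs. (13)–(15) amounts to very little code once the deterministic analysis is available; the following Matlab sketch uses an anonymous function as a purely illustrative stand-in for what would, in practice, be a full FE run per sample.

```matlab
% Direct MC propagation through a black-box model, cf. Eqs. (13)-(15).
run_model = @(x) 1 ./ (1 + x(1)^2) + 0.1 * x(2);  % placeholder for an FE run
N = 1000; d = 2;                    % number of samples and of inputs
X = randn(N, d);                    % ensemble of Eq. (13) (standard normal)
Q = zeros(N, 1);
for j = 1:N
    Q(j) = run_model(X(j, :));      % j-th response sample, Eq. (15)
end
Q_mean = mean(Q);  Q_std = std(Q);  % estimators of the response moments
```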
While MCS is outperformed by alternative methods under special circumstances, such as the uncertainty propagation for linear static structural analysis in the presence of only a few random variables, it enjoys superior generality and robustness. The former attribute is due to the method's applicability to any type of problem for which the corresponding deterministic problem can be solved, whereas robustness is ensured by the availability of error control in the form of confidence intervals. Furthermore, MCS is particularly amenable to parallel processing. As the availability of parallel computing infrastructure keeps increasing, this fact strongly improves the competitive position of MCS even for analysis tasks in which alternative methods excel.

2.3.2. Accelerated uncertainty propagation by approximate response representations

Conceptually, in the case of Monte Carlo simulation, the exact mapping between the uncertain input parameters and the quantities of interest is used to propagate the uncertainty. The drawback of this approach is that each invocation of the exact mapping requires a full FE analysis, which is typically associated with significant computational cost. An alternative approach consists in constructing an approximate representation of the response, which can then be utilized either to directly derive the probabilistic quantities of interest, or to estimate the latter again by Monte Carlo simulation; in this case the approximate response representation is used to map the samples of the input parameters to the quantities of interest, and the computational cost is hence comparably low. Prominent examples of approximate response representations are – in chronological order – the Perturbation method, based on a low-order Taylor series expansion [20,18], the Neumann expansion [21,22] and the spectral stochastic FE method [17], in which the response is expressed with respect to a so-called polynomial chaos basis.

2.4. Structural reliability analysis

2.4.1. Problem definition

The assessment of the reliability of structures, defined as the probability that the structure will satisfy (or equivalently, fail to satisfy) predefined performance levels, requires a quantitative definition of failure. For this purpose it is common practice to define a so-called limit state function g, which characterizes the state of the structure according to the following rule,
$$g = g(N) \quad \text{such that} \quad \begin{cases} g(N) > 0 \iff N \in S, \\ g(N) \le 0 \iff N \in F, \end{cases} \tag{16}$$
where S and F denote the safe set and the failure set, respectively. The vector N consists of mutually independent standard normal random variables,
$$N = [N_1\; N_2\; \ldots\; N_{d''}], \tag{17}$$
and is typically used in reliability analysis for convenience. The uncertain parameters introduced in Eq. (1) and the random fields listed in Eq. (9) can be represented in terms of the components of N by suitable transformations, with the advantage that all uncertainties in the stochastic structural analysis model can ultimately be traced back to a set of random variables of very basic form. The goal of a reliability analysis is then to compute the probability of failure,
$$p_F = \int_F f_N(n)\, dn = \int 1_F(n)\, f_N(n)\, dn, \qquad \text{where } 1_F(N) = \begin{cases} 0 \iff N \in S, \\ 1 \iff N \in F, \end{cases} \tag{18}$$
where f_N(n) is the multi-dimensional probability density function (PDF) of N and 1_F(N) is the so-called indicator function. The computational cost of a reliability analysis is driven by the time of an FE analysis run for an individual realization of N, required in order to determine whether N ∈ F, since the failure region is not defined explicitly. Numerous methods have been developed for the estimation of the failure probability pF in the context of structural mechanics [23–25]. These methods may be classified into approximate methods, which include the first-order and second-order reliability methods (FORM, SORM), and simulation methods, which provide estimates of pF on the basis of ensembles of realizations of the uncertain parameters and of the limit state function. According to a recent benchmark study [26], members of the latter class of methods exhibit superior robustness and generality for problems involving large numbers of uncertain parameters.

2.4.2. Direct Monte Carlo Simulation (DMCS)

Direct Monte Carlo simulation [27,25] consists in the generation of N realizations of the basic random variables with a random number generator, and the subsequent evaluation of the corresponding realizations of the indicator function, 1_F(N^(j)). The Monte Carlo estimator p̂_F for the probability of failure pF = P[F] then has the form,
$$\hat{p}_F = \frac{1}{N} \sum_{j=1}^{N} 1_F(N^{(j)}). \tag{19}$$
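As an illustration of Eq. (19), the following Matlab sketch estimates pF for a toy linear limit state function in standard normal space; the function g is an assumption made so that the snippet is self-contained (its exact failure probability is Φ(−3) ≈ 1.35 · 10⁻³), whereas in practice each evaluation of g would require an FE run.

```matlab
% Direct MCS estimate of the failure probability, cf. Eq. (19).
d = 10; N = 1e5;
g  = @(n) 3 - sum(n, 2) / sqrt(d);  % toy limit state, exact pF = Phi(-3)
Ns = randn(N, d);                   % N realizations of the vector N
I  = (g(Ns) <= 0);                  % indicator function 1_F
pF_hat = mean(I);                   % Eq. (19), approx. 1.35e-3
```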
In the case of Direct Monte Carlo simulation the coefficient of variation (CoV) of the estimate has the form,
$$\mathrm{CoV}_{\hat{p}_F} = \sqrt{\mathrm{Var}[\hat{p}_F]}\,/\,p_F = \sqrt{(1 - p_F)/(N\, p_F)}, \tag{20}$$
where Var[p̂_F] is the variance of the estimator. While the accuracy of the estimator, expressed by CoV_{p̂_F}, is independent of the dimension d″ of the random vector N, for high levels of reliability, i.e. for small values of pF, a very large number of samples (proportional to 1/pF) is needed for an accurate estimate.

2.4.3. Advanced Monte Carlo simulation methods

In view of the above remarks, for complex structural systems with high target reliability levels, direct MCS is usually not a viable option. For this purpose advanced MCS techniques [28,25,29] have been developed, which aim at reducing the variance and hence the CoV of the estimator of pF, with the effect that a smaller number N of samples suffices to achieve a desired accuracy. Two rather recent methods that have been shown to be particularly efficient in a number of practical applications are the so-called Line Sampling method and the Subset Simulation method.

The Line Sampling method [25] is a robust sampling technique particularly suitable for high-dimensional reliability problems in which the performance function g(N) exhibits moderate non-linearity with respect to the uncertain parameters N. The key step consists in the identification of a direction in the high-dimensional input parameter space pointing to regions which strongly contribute to the overall failure probability. Once such an important direction has been identified, samples are then evaluated along lines parallel to this direction, starting from randomly selected points, and the intersection of each of these lines with the failure region is determined. The ensemble of intersection points associated with the ensemble of lines then leads to the desired estimate of the failure probability.
This is visualized in the left portion of Fig. 1, where the important direction is denoted by e_α and the failure region is shaded.

The Subset Simulation method [28] overcomes the inefficiency of direct MCS in estimating small probabilities by expressing the failure probability pF as a product of larger, conditional failure probabilities,

$$p_F = P(F_m) = P(F_1) \prod_{i=1}^{m-1} P(F_{i+1} \mid F_i). \tag{21}$$
In the above equation, {F_i}, i = 1, …, m, denotes a decreasing sequence of m failure events (subsets) such that F_1 ⊃ F_2 ⊃ ⋯ ⊃ F_m = F. In the schematic representation of Subset Simulation in the right portion of Fig. 1, the subsets F_1, …, F_4 are delimited by red lines; the actual failure domain ("region of interest") is the filled pink area. With an appropriate definition of the intermediate failure domains, the probabilities P(F_1) and P(F_{i+1} | F_i), for all i ≥ 1, are large enough that a rather small number of samples, say 100, suffices for their estimation.
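The following compact Matlab sketch illustrates the principle of Eq. (21) for the same toy limit state used above. It uses a plain Metropolis step to sample the intermediate conditional distributions and deliberately omits the refinements of production implementations (component-wise proposals, adaptive proposal scaling, statistical error estimation), such as the one presented in Section 3.6; all parameter values are assumptions.

```matlab
% Simplified Subset Simulation sketch, cf. Eq. (21); illustrative only.
d = 10; N = 500; p0 = 0.1; Ns = ceil(p0 * N);   % Ns seeds per level
g  = @(n) 3 - sum(n, 2) / sqrt(d);   % toy limit state (exact pF = Phi(-3))
n  = randn(N, d);  gn = g(n);        % level 1: direct Monte Carlo
pF = 1;
for lev = 1:20
    [gsrt, ix] = sort(gn);           % order samples towards failure
    b = gsrt(Ns);                    % intermediate threshold defining F_i
    if b <= 0                        % failure domain F reached
        pF = pF * mean(gn <= 0);  break
    end
    pF = pF * p0;                    % accumulate factor P(F_{i+1} | F_i)
    s  = n(ix(1:Ns), :);             % seeds lying inside F_i
    n  = s;  gn = g(s);
    while size(n, 1) < N             % grow Markov chains from the seeds
        prev = s;
        c    = s + 0.8 * randn(Ns, d);                     % symmetric proposal
        acc  = exp((sum(prev.^2, 2) - sum(c.^2, 2)) / 2);  % Metropolis ratio
        mv   = rand(Ns, 1) < min(1, acc);
        s(mv, :) = c(mv, :);
        out  = g(s) > b;             % reject moves that leave F_i
        s(out, :) = prev(out, :);
        n = [n; s];  gn = [gn; g(s)];
    end
    n = n(1:N, :);  gn = gn(1:N);
end
% pF now holds the Subset Simulation estimate of Eq. (21)
```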
3. Software technology

3.1. General remarks

In order to be of relevance for engineering applications of practical interest, the methodologies for uncertainty propagation and reliability analysis presented in the previous section have to be made available in the form of general-purpose software. In this context it is worth noting that the success of the FE method is inextricably tied to the development of general-purpose computer codes, which started quite early in the history of the FE method and has led to a substantial diversity of commercial FE programs.

In the present section, the methodologies for processing uncertainties in structural analysis will be revisited from the perspective of software technology, i.e. from the perspective of the programmer faced with the task of implementing these methodologies in a software code. Since any software for the uncertainty analysis of complex engineering structures necessarily also requires access to finite element modeling and solver tools, the developments will start with a review of the main components of deterministic finite element programs. Then the software-related aspects of the propagation of uncertainties through the mechanical model will be addressed, including a discussion of the available levels of parallelism and their exploitation in the software implementation. Finally, the main features of a software code for stochastic structural analysis developed by the authors and the members of the Institute of Engineering Mechanics are presented.

3.2. Finite element software

The number of existing FE software codes is very large, but since they are merely different implementations of the same numerical modeling and analysis methodology, several common, general features may be observed. In particular, most FE software codes involve three main phases, namely (i) the pre-processing, (ii) the assembly and solution, and (iii) the post-processing (see e.g. [30,31]).

The pre-processing phase involves the definition of the numerical model, in particular of the geometry and of the element types to be used in a particular portion of the model, as well as of the material properties. Based on this definition, and upon the selection of meshing control parameters, the finite element mesh is generated. The pre-processing phase is completed by the definition of the
Fig. 1. Schematic representations of the Line Sampling method (left) and the Subset Simulation method (right).
boundary conditions and the loads, of the analysis type and of the quantities of interest to be returned. The entire set of definitions is usually gathered in one or more input files for the finite element program, which entirely characterize the finite element model as far as the FE solver is concerned, and which can be used to execute the analysis in batch mode. As far as the pre-processor is concerned, the finite element model is usually saved in a database file, which characterizes the model in terms of features, such as geometry entities, which are not explicitly contained in the input file.

In the solution phase, the FE solver uses the specifications of the pre-processing phase to assemble the element matrices corresponding to the adopted formulation, e.g. the mass, damping and stiffness matrices for a vibration or transient dynamic analysis. These element matrices are then used to either (i) assemble the global matrices, which are then used in the direct solution of the systems of equations, or to (ii) perform the matrix–vector operations arising in the iterative solution of the system of equations, such as with conjugate-gradient methods ("element-by-element techniques"). From the perspective of CPU time, the solution phase is the most time-consuming phase of the FE modeling and analysis process, even though the meshing of complex domains, too, requires significant computational resources.

The final step of the FE analysis, post-processing, consists in the recovery of derived quantities of interest from the solution vector, which in most FE codes corresponds to the vector of the displacements at the nodal DOFs. Furthermore, post-processing involves the visualization of results – for instance in the form of contour and deformation plots – in order to facilitate their interpretation by the analyst. The post-processing phase also includes the evaluation of a-posteriori error estimators, which help to quantify the accuracy of the predicted results and hence permit the verification of the numerical model.

3.2.1. Integration of CAD and FE software

Despite the formidable progress brought about by the developers of finite element software, the potential of FE analysis is not yet exploited to a satisfactory degree in the design loops of manufactured products (such as structures), due to the insufficient integration with computer aided design (CAD) software [32,33]. This lack of integration is mainly due to the different ways in which a structure is represented in the CAD program and in the FE program; the consequence is that unavoidable modifications in the design lead
to time-consuming and error-prone modifications of the FE model. Recent alliances of some CAD giants with FE software companies may be interpreted as a response to the above shortcoming.

3.3. Interoperability between FE code and uncertainty analysis software

A fundamental characteristic of a software code for uncertainty analysis of structural engineering applications consists in the way it interfaces with the software that handles the finite element modeling and solution. In the following, the main strategies for implementing the required interoperability between the FE software and the uncertainty analysis software are introduced.

3.3.1. Standalone implementation

In this type of implementation the FE analysis is viewed as a black box in the analysis process, and the FE code is communicated with through a generic interface, namely through its input files. The uncertainty analysis program controls the FE code by automatically modifying the input files: identifiers set in the input files govern the automatic generation of input-file samples by the stochastic solver, using pattern matching and replacement. Clearly this approach suffers from a certain performance loss due to the overheads associated with pattern matching and the repeated execution of full FE solver runs. Furthermore, the productivity of practicing engineers familiar with a specific FE code will generally be inferior due to the lack of integration in the FE code. The major advantage of this approach for interfacing FE code and uncertainty analysis software is the portability with respect to the FE code: indeed, an interface for any FE solver based on ASCII input files can be implemented in a rather effortless fashion.

3.3.2. Integration into FE program

The first scenario for this type of implementation consists in a native integration of uncertainty analysis methods in the FE solver, thus leading to a monolithic program capable of both deterministic and uncertainty-aware structural analysis. This clearly leads to the best performance of the stochastic solver for a given FE code, since direct access to the data structures is available. Clearly, this scenario is not generally applicable to FE codes, as it requires access to the source code of the numerical solver, which is usually not granted by commercial, proprietary FE codes.
An alternative implementation concept consists in programming the stochastic solver by utilizing the FE solver's functionalities accessible through its API (application programming interface), where available. The resulting flexibility in the implementation of the stochastic solver and the associated performance depend, however, on the flexibility provided by the API.

A general characteristic of software in which the integration in the FE solver is implemented in such a tight fashion is that the implementation is not portable to a different FE solver. Indeed, a dedicated implementation of the stochastic solver is then required for each FE program. However, the advantages afforded by the tight integration consist in superior performance and a wider array of possibilities as far as the uncertainty propagation methods are concerned, as will be discussed in Section 3.4.

3.3.3. Integration into FE pre-processor

The previous two approaches for handling the interoperability between uncertainty analysis software and FE software are related to the execution stage of the stochastic structural analysis. It is well known that in typical industrial workflows the analyst spends most of the time interacting with the pre-processor, first preparing the model and later visualizing and interpreting the results. This is no different if the analyst performs uncertainty analysis with the FE model. Consequently, the workflow of the uncertainty analysis will in most cases be significantly expedited if the interoperability of the pre-processor and the uncertainty analysis software is enabled, in the form of a plugin to the pre-processor.

3.4. Uncertainty propagation

In Section 2.3 several computational methods for propagating the uncertainties associated with properties of the structural analysis model to the quantities of interest have been introduced. From the software point of view, these methods may be classified into (i) intrusive methods and (ii) non-intrusive methods. The former require access to the system matrices and/or intermediate results – such as partial derivatives – of the FE solver. Non-intrusive methods, on the other hand, simply require the FE code to return the quantities of interest for a given combination of the uncertain input parameters.

3.4.1. Uncertainty propagation with non-intrusive methods

Among the methods introduced in Section 2.3, only Monte Carlo simulation may be called truly non-intrusive. Consequently this class of methods is well suited for standalone implementations treating the FE code as a black box, which returns the structural response as the output to a given input consisting in an arbitrary realization of the uncertain FE model parameters.
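The essence of this black-box interfacing can be sketched in a few lines of Matlab; the file names, the identifier convention and the solver command below are illustrative assumptions, not the actual conventions of the presented software.

```matlab
% Black-box interfacing by input-file templating (illustrative names).
tpl = fileread('model_template.inp');     % FE input file with identifiers
E   = 7.0e10 * (1 + 0.08 * randn);        % sampled Young's modulus [Pa]
txt = strrep(tpl, '<YOUNG_MODULUS>', sprintf('%12.5e', E));
fid = fopen('model_run.inp', 'w');
fwrite(fid, txt);  fclose(fid);
status = system('fe_solver model_run.inp');   % hypothetical solver call
```

One such modify-write-execute cycle is performed per sample, which is what makes the approach portable to any ASCII-input-based FE solver.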
3.4.2. Uncertainty propagation with intrusive methods

Intrusive SFEMs involve a modification of the workflow of the underlying deterministic FE analysis. For instance, in the Perturbation method the partial derivatives of the stiffness matrix with respect to the uncertain parameters are required. To obtain these, the execution mode of the FE code must deviate from the solution sequence followed in the regular FE analysis, since the derivatives need to be computed internally, or the stiffness matrix of the perturbed FE model must be output in order to approximate the derivative with a finite difference quotient. As already suggested by this example, the implementation of intrusive uncertainty propagation may be approached in two alternative ways:

• Export of the matrices and vectors which are needed to solve the equations of the utilized approximate response representation method. The solution of these equations is then conducted externally, e.g. in Matlab, after importing the matrices.
• Direct construction and utilization of the required matrices – from inside the FE code – for the evaluation of the above equations.

The former procedure – based on the export of matrices to files – involves the minimum degree of intrusion and can typically be accomplished without major difficulties with most FE codes, by means of corresponding statements in the input file. Due to the overhead associated with writing and reading files, the performance of this approach (in terms of wall-clock time) is inferior to the alternative, significantly more intrusive, modus operandi, in which the matrices are accessed directly in the memory or file databases of the FE program. Clearly, this approach is only feasible if the uncertainty analysis software can be integrated into the FE solver, either natively or through an API.

3.5. Parallel processing

The use of parallel processing in structural reliability software is not as widespread as the parallel potential of most uncertainty and reliability analysis algorithms would suggest [8]. Interestingly, similar observations have been made by researchers from other scientific disciplines such as economics and astrophysics: the authors of references [34,35] attribute this mainly to the relative difficulty of parallel programming and furthermore [35] to the existence of a large body of legacy codes, for which posterior parallelization is not attractive. In stochastic structural mechanics there are mainly two levels of parallelism: a high-level, coarse-grain parallelism associated with the stochastic algorithms and a low-level parallelism related to the deterministic problem.

3.5.1. High level of parallelism

This level of parallelism is associated with the stochastic analysis method used to propagate the uncertainty to the response quantities of interest. For the vast majority of these algorithms, the computationally expensive part consists in the repeated execution of computations with system matrices which differ in some way from the nominal ones. In most cases these repeated executions can be carried out in parallel, thereby leading to this high level of parallelism.

Fig. 2. Task decomposition and mapping; left: sample set decomposition, right: domain decomposition.

The high level parallelism is shown conceptually in the left portion of Fig. 2, where two different realizations of the entire structural domain are mapped to two different processing units, P1 and P2. These units may correspond to two different processors
of a compute cluster or of a shared-memory multi-processor computer. With reference to more recent computer architectures, the processing units may of course also correspond to processors from two different clusters, in the case of grid computing, or to two different cores of a multi-core processor. The most straightforward implementation of this parallelization approach is clearly associated with the standalone implementation of the stochastic solver, in which the FE code is interacted with in a black-box fashion. In this case, np instances of the FE solver are executed concurrently, where np is the number of available processors. An obvious drawback of this approach is due to the limited number of concurrent instances of the FE solver that can be executed. This limitation is typically imposed in terms of the maximum number of network licenses that can be obtained simultaneously from the license manager. Due to the significant costs associated with each single network license, it is typically not possible to exploit the full potential of parallel computers, which nowadays ordinarily include several dozens of processors. The above limitation may be overcome by transferring the control over the parallelization into the FE solver. In this case the need to generate multiple instances of the FE solver vanishes. Clearly, this requires the availability of a suitable library or API for message passing or direct access to the source code of the FE solver. From the algorithmic point of view, Monte Carlo based uncertainty and reliability analysis methods are particularly suited for the high-level parallelization, see e.g. [36]. Indeed, typically the corresponding samples may be processed with extremely modest requirements for synchronization and communication and hence qualify for an embarrassingly parallel implementation. This typically leads to virtually linear speedup,
$$S(np) = \frac{T(1)}{T(np)} \approx np, \tag{22}$$
where np is the number of processors and T(1) and T(np) are the execution times on one and on np processors, respectively. In general, the more advanced the Monte Carlo based algorithm, the higher the synchronization requirements and, consequently, the less trivial the parallel implementation. As investigated in [37], the parallelization of the Line Sampling method and, in particular, of the Markov-chain based Subset Simulation method (cf. Section 2.4) poses challenges, since the algorithms involve synchronization steps in order to rapidly meet the convergence targets. In the case of Line Sampling, the speedup is bounded by the number of lines, N_L, processed simultaneously,
$$S(np) \le N_L. \tag{23}$$
Similarly, in the parallel execution of Subset Simulation, the speedup is bounded by the number of chains, N_C, that are advanced simultaneously,

$$S(np) \le N_C. \tag{24}$$

This is exemplified in Fig. 3, where the execution times and the corresponding speedups are shown for Subset Simulation runs with three different numbers of chains (N_C = 23, 90 and 360). Clearly, the speedup is bounded above by the respective number of chains.

Fig. 3. Execution times and speedup of Subset Simulation.

With respect to parallelization, the spectral stochastic finite element method (SSFEM), introduced in Section 2.3, differs significantly from most stochastic solution algorithms, in that the solution – rather than involving the repetition of the analysis with different realizations of the deterministic system matrices – consists in solving systems of equations which are orders of magnitude larger than the underlying deterministic one [38]. However, thanks to the particular structure of the large matrices, high-level parallelism can still be exploited very efficiently [39], if iterative solvers are used and the matrix–vector products are performed in parallel, in which case the assembly of the large system matrix can be foregone.

3.5.2. Low level parallelism of the deterministic analysis

A second, lower level of parallelism is associated with the deterministic problem: in order to exploit the parallelism at this level, the programmer has to partition the computational tasks required for a single solution of the deterministic problem in a way that permits concurrent execution of the partitions. The goal of parallel efficiency is typically more difficult to accomplish at this lower level of parallelism, since two of the main factors, load balancing and communication, are more difficult to control, compared to the high level parallelism inherent in most stochastic algorithms. Nevertheless, in deterministic structural mechanics considerable efforts have been devoted to exploiting the parallelism at the level of the deterministic model (cf. e.g. [40]). In particular, the methods based on domain decomposition and substructuring (see e.g. [41,42]) have attained a remarkable level of maturity, which has resulted in the implementation of domain decomposition based parallelization in commercial finite element solvers. The basic idea of domain decomposition is shown conceptually in the right portion of Fig. 2, where the physical domain is divided into subdomains Ω1 and Ω2, which are separated by the boundary Γ. Besides domain decomposition, additional approaches to exploiting the parallelism of the deterministic analysis relate to the development of parallel solvers (see e.g. [43,44]) and of parallel time stepping algorithms (see e.g. [45,46]).

As a concluding note on the different levels of parallelism, in most cases it will be preferable to first exploit the parallelism of the stochastic algorithm, since it will typically be possible to achieve a high parallel efficiency with limited programming effort. Provided that this has not fully depleted the available pool of
processors, one may then subsequently further reduce the wall-clock time by parallelizing the execution of the individual FE analyses, e.g. by domain decomposition methods.

3.5.3. Multi-core architecture

In the last few years, computer architecture has witnessed a distinct trend towards multi-core processors [47]. This trend is a consequence of the limitations which increasingly prevent the use of the traditional measure for improving the computational power of processors, namely increasing the clock rate. For the field of computational mechanics this change of paradigm clearly has consequences, in that parallel implementations will have to account for the particular characteristics of the multi-core architecture [48].

Computational stochastic structural analysis, too, is impacted by the new architectural trend. This is particularly the case for Monte Carlo based methods, which are amenable to embarrassingly parallel implementation on distributed memory systems. As a recent study in the context of the dynamic reliability analysis of a multi-story building model has shown [37], parallel Monte Carlo simulation on systems consisting of multi-core processors yields speedups which are far from those achieved on single-core systems. In order to fully exploit the potential computational power of clusters of multi-core processors, adaptive parallel implementations will therefore be required, i.e. the decomposition and assignment of tasks must depend on the characteristics of the analyzed structure and in particular on the size of the corresponding numerical model.

3.6. COSSAN – object oriented toolkit for computational stochastic structural analysis

3.6.1. Background

The methodologies for uncertainty and reliability analysis presented in Sections 2.3 and 2.4 have been implemented in COSSAN-X [49], an object oriented program which builds on developments started in the nineties with a modular standalone toolbox [50,51].
The latter was a monolithic, standalone software package for finite element based uncertainty and reliability analysis, corresponding to the first type of implementation concept described in Section 3.3. Since then, the focus of structural reliability software has gradually shifted to interfacing the uncertainty analysis features with third-party FE codes, as the recent Special Issue on reliability software confirmed [52]. This trend is motivated by the need of practicing engineers to perform uncertainty analysis without abandoning the finite element program of their choice.

3.6.2. Software concept – interoperability with FE software

Conceptually, the software can operate in three different modes, which differ from each other in the way the interoperability with the FE software, i.e. with the solver and the pre-processor, is managed. These three modes of operation are represented schematically in Fig. 4.

In the first mode, a generic FE code is interacted with in a black-box fashion, through automated input-file modification. With reference to Section 3.3, in this mode the code acts as a standalone application, which communicates with the FE solver merely via input files. It is characteristic of this mode of operation that the analysis procedures – also referred to as solution sequences – provided by the FE code are used as a black box, i.e. they are not modified.

This is the distinguishing feature of mode 2, in which the analysis flow inside the FE solver is modified. The purpose for which this operation mode has been implemented is twofold: firstly, it may lead to a more efficient operation of a non-intrusive uncertainty propagation method; secondly, operation mode 2 permits the implementation of intrusive algorithms, which require access to data structures created and used by the FE code, such as system matrices. As indicated in the corresponding portion of Fig. 4, mode 2 may be implemented either by modifying or extending the source code – as can be done for instance in Abaqus [53] – or by utilizing the command language provided by the FE code. The latter applies to Nastran [54], which features the high-level language DMAP. In the current version, operation mode 2 has been implemented for the FE code Nastran.

In the third mode of operation, interoperability between the uncertainty analysis engine and the FE pre-processor is afforded.
Fig. 4. Interoperability with FE software.
The implementation typically consists in a plugin, which can be developed with the high-level command language provided by many pre-processors. From the practitioner's point of view, this operation mode implies improved productivity, as the uncertainty modeling can be performed while interacting with the graphical representation of the structure inside the pre-processor. Fig. 5 shows the plugin developed for MSC.Patran.

3.6.3. Uncertainty analysis engine

The software engine, represented by the corresponding box in Fig. 4, has been implemented as an object-oriented toolbox in Matlab [56]. This choice is motivated by a combination of factors, such as the ease of implementation, the availability of a rich library of mathematical functions (including linear algebra, probability, statistics, optimization) and of features for plotting and visualization. Furthermore, the flexibility of Matlab with respect to its mode of operation is an important factor, as it permits using the software engine both interactively and in the form of a standalone application or library, thanks to the Matlab compiler. The interactive operation mode is used by the largest portion of Matlab's extremely large user community (Matlab is considered a de-facto standard in scientific computing). In this familiar environment it is possible to construct tailor-made solution sequences in the form of scripts, which utilize the implemented classes and combine them with the standard data structures of Matlab.

Since Matlab programs are interpreted at runtime, depending on the implementation of a particular stochastic solution algorithm, performance may be somewhat reduced compared to what could be achieved with a compiled program. However, this overhead is typically of marginal importance, because in computational stochastic mechanics the computational cost is typically driven by the processing of the finite element model. Furthermore, if the stochastic solution algorithm involves heavy numerical linear algebra, then Matlab's characteristic of outsourcing numerically intensive tasks to compiled and highly efficient numerical libraries, such as BLAS, LAPACK and ARPACK, again greatly reduces the impact of the interpretation overhead.

Fig. 6 shows the class diagram of the uncertainty analysis engine in UML (Unified Modeling Language) notation. The diagram
emphasizes the pivotal role of the three classes evaluator, rvset and probmodel. Hence, for these classes selected attributes (or fields, in the Matlab nomenclature) and operations (or methods) are indicated. The lines connecting the individual classes denote the existing relationships:

• Associations: these are denoted by lines without arrow heads and represent a relationship in which objects of the two classes work together.
• Aggregations: this stronger form of relationship is denoted by a line with a diamond-shaped arrow head and means that the class next to which the diamond is placed owns an object of the other class.
• Generalizations: this relationship corresponds to inheritance and is denoted by the classical arrow head; the class next to which the arrow head is placed represents the more general class from which the other class inherits the attributes (fields) and operations (methods).

The evaluator class is responsible for returning the response quantities of interest for a given sample of the uncertain input parameters. This is accomplished primarily using the classes connector and mio. The former handles the interaction with the FE solver; it "connects" the FE model to the probabilistic model. The mio class covers the cases in which the quantities of interest are computed by a Matlab function; in other words, an m-file maps the input to the output. The uncertain input parameters are modeled mainly with the class rvset, whose most important methods produce samples and perform the mapping between the standard normal space and the physical space. Both evaluator and rvset are aggregated to the class probmodel, which includes methods for uncertainty quantification and reliability estimation, in particular advanced Monte Carlo simulation methods for quantifying the failure probability pF.
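A hypothetical usage sketch, based only on the class names and relationships of Fig. 6, could look as follows; the constructor signatures, argument names and the reliability method shown are assumptions for illustration and do not reproduce the actual COSSAN-X API.

```matlab
% Hypothetical sketch of the object model in Fig. 6 (assumed signatures).
xrv  = rvset('distributions', {'truncated gaussian', 'lognormal'}, ...
             'means', [2.1e11, 0.02], 'covs', [0.08, 0.30]);
conn = connector('solver', 'nastran', 'maininput', 'model.dat');
ev   = evaluator('connector', conn);      % maps input samples to responses
pm   = probmodel('evaluator', ev, 'rvset', xrv);
pF   = reliability(pm, 'method', 'subset', 'samplesperlevel', 500);
```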
Fig. 5. COSSAN-X plugin for MSC.Patran [55].
Fig. 6. UML class diagram of uncertainty analysis engine.
3.6.4. Parallel processing

Several of the algorithms for uncertainty and reliability analysis, in particular the most scalable ones, are Monte Carlo simulation based and hence involve the evaluation of batches of samples. The key parallelization feature is therefore the parallel execution of batches of samples, which is implemented in two classes, targeting different analysis scenarios.

Class grid: In this class the parallel sample evaluation is fully managed by the Grid Engine [57], a widely used open-source batch queuing system. Whenever a grid object has been aggregated to an object of type connector, the application of the corresponding evaluator object to a batch of samples results in the submission of a job to the Grid Engine. The latter distributes the samples based on (i) the available resources and (ii) the user-defined selections stored in the grid object. On each of the assigned nodes the sample evaluation is then executed based on the definition of the aggregated connector object. This class is suitable for parallel sample evaluation with evaluator objects which consist in finite element models accessible through a connector object.

Class parmio: This class is specifically geared towards the parallel execution of evaluator objects defined in terms of a mio object. The parmio class exploits the possibility of Matlab to compile functions into libraries which can then be linked against from a C or C++ program. This permits running the resulting executable on as many compute nodes as desired, provided that each node can access either the corresponding Matlab installation or an instance of the MCR (Matlab compiler runtime). It should be noted that the execution of a compiled Matlab function does not consume any license; consequently, no limitation on the number of parallel sample evaluations arises from the license side. The construction of a parmio object triggers (i) the compilation of the Matlab function associated with a specified mio object into a C shared library and (ii) the linking with a driver program which distributes the evaluation
of samples. The driver program is coded in C and utilizes the message passing library MPI [58] for communication. Load balancing is accomplished by adopting the master–slave paradigm, in which the samples are distributed by the master one at a time to the slaves. In this way, faster slaves automatically process more samples than slower ones, and the time to complete the batch of samples is hence minimized. The parmio class is suitable for cases in which the limit state function is defined in terms of a Matlab function rather than in terms of an FE model for a third-party FE code; it has recently been utilized in the parallel reliability analysis of a multi-story building [37].
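The idea of the parallel batch evaluation can be conveyed in plain Matlab using parfor (Parallel Computing Toolbox) as a conceptual stand-in for the grid/parmio machinery described above; the limit state function below is a toy placeholder, not part of the presented software.

```matlab
% Embarrassingly parallel evaluation of a batch of samples (conceptual
% analogue of the grid/parmio classes, not their implementation).
limit_state = @(x) 1 - sum(x.^2) / 25;   % toy stand-in for one FE run
N = 1000; d = 5;
X = randn(N, d);
Q = zeros(N, 1);
parfor j = 1:N                           % samples are mutually independent
    Q(j) = limit_state(X(j, :));
end
pF_hat = mean(Q <= 0);                   % direct MCS estimate, cf. Eq. (19)
```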
3.6.5. Uncertainty propagation and analysis methods

For the efficient propagation of uncertainties through the FE model, methods based on approximate response representations, such as the Perturbation method, the Neumann expansion method and the spectral stochastic finite element method, are implemented in addition to the direct Monte Carlo simulation method. With the exception of the latter, the propagation methods require some degree of intrusion; hence, dedicated implementations have been developed for several FE codes, namely Abaqus [53], Ansys [59], FEAP [60] and Nastran [54]. Both approaches for the implementation of intrusive methods discussed in Section 3.4 are implemented in the class sfem. The less intrusive procedure, based on the export and import of the system matrices to and from files, has been implemented for all of the above FE codes. The more intrusive approach, in which the matrices are accessed directly, is currently only supported for Nastran. For this implementation the high-level language DMAP (Direct Matrix Abstraction Program) [54] is used.

The reliability analysis is performed with alternative methods, the respective merits of which depend on the problem at hand.
These include the widely used reliability algorithms for low-dimensional problems, such as FORM and Importance Sampling; for this purpose several alternative algorithms for the identification of the design point are implemented. More importantly, advanced simulation methods such as Line Sampling and Subset Simulation (cf. Section 2.4) are implemented. Thanks to their robustness with respect to the number of uncertain parameters of the reliability problem, these methods extend the range of applicability of the reliability analysis to large FE models of complex structures. For the identification of the sampling direction in the Line Sampling algorithm the program provides an efficient gradient estimation scheme [61,62]. As shown in Fig. 6, these methods are implemented in the class probmodel.

A number of optimization algorithms are included in the toolkit; these are currently utilized mainly in reliability analysis, i.e. for the identification of the design point. Furthermore, the class
mizer may be used to perform so-called reliability-based optimization [63], in which the structural optimization is conducted under the constraint of a prescribed target reliability. The implemented algorithms include standard gradient based optimization algorithms (NLPQL, SQP), gradient-free algorithms (Cobyla), as well as evolutionary and genetic algorithms. Where applicable, the implementation utilizes the algorithms provided by the optimization and genetic algorithms toolboxes of Matlab [56].
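Since Subset Simulation plays a central role in the applications below, the following Matlab fragment sketches the method of [28] for a generic limit state g(x) <= 0 in standard-normal space, using a simple component-wise Metropolis step; the limit state, sample sizes and proposal are hypothetical choices for illustration, not the COSSAN-X implementation.

% Subset Simulation sketch: p_F = P(g(x) <= 0) is estimated as a product
% of larger conditional probabilities, one per intermediate level.
d  = 100;  N = 500;  p0 = 0.1;          % dimension, samples/level, level prob.
g  = @(x) 3.5 - sum(x)/sqrt(d);         % hypothetical limit state (beta = 3.5)
Nc = round(p0*N);                       % seeds (= chains) per level

x  = randn(d, N);                       % level 0: direct Monte Carlo
gx = arrayfun(@(k) g(x(:,k)), 1:N);
pF = 1;
for lvl = 1:20
    [gs, ix] = sort(gx);                % ascending: smallest g = closest to failure
    b = gs(Nc);                         % intermediate threshold
    if b <= 0                           % failure domain reached at this level
        pF = pF * mean(gx <= 0);
        break
    end
    pF = pF * p0;
    x  = x(:, ix(1:Nc));  gx = gs(1:Nc);         % seeds for the next level
    for k = 1:Nc                                 % grow each chain to length 1/p0
        cur = x(:,k);  gcur = gx(k);
        for s = 1:round(1/p0) - 1
            cand = cur + randn(d, 1);            % random-walk proposal
            r    = exp(0.5*(cur.^2 - cand.^2));  % component-wise pdf ratio
            rej  = rand(d, 1) >= min(1, r);
            cand(rej) = cur(rej);                % reject individual components
            gc = g(cand);
            if gc <= b, cur = cand; gcur = gc; end   % stay within current subset
            x = [x cur];  gx = [gx gcur];        % append chain state
        end
    end
end
fprintf('estimated pF = %.3e (exact: %.3e)\n', pF, 0.5*erfc(3.5/sqrt(2)));

For this linear limit state the exact failure probability is about 2.3e-4, which a direct Monte Carlo run with 500 samples could not resolve at all.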
4. Large-scale applications

4.1. Satellite structure

The first numerical example involves the frequency response analysis of a satellite subjected to base excitation.
Fig. 7. FE model of the INTEGRAL-satellite (tilted) (courtesy of ESA).
Table 1
Assumed distributions and COV of the uncertain parameters in the satellite model.

Element/material type                 Property               COV (σ/μ)   Distribution
Isotropic material                    Young's modulus        8%          Truncated Gaussian
                                      Poisson's ratio        3%
                                      Shear modulus          12%
                                      Mass density           4%
Orthotropic shell element material    Young's modulus        8%          Truncated Gaussian
                                      Poisson's ratio        3%
                                      Shear modulus          12%
                                      Mass density           4%
Solid element anisotropic material    Mat. property matrix   12%         Truncated Gaussian
                                      Mass density           4%
Simple beam                           Section dimension      5%          Truncated Gaussian
                                      Non-structural mass    8%
Layered composite material            Non-structural mass    8%          Truncated Gaussian
                                      Thickness of plies     12%
                                      Orientation angle      σ = 1.5
Spring element property               Elastic prop. value    8%          Truncated Gaussian
Shell element                         Membrane thickness     4%          Truncated Gaussian
                                      Non-structural mass    8%
Spring element                        Stiffness              10%         Truncated Gaussian
Concentrated mass                     Mass                   3%          Truncated Gaussian
Damping                               Modal damping          Various     Log-normal
The Nastran model of the INTEGRAL satellite of the European Space Agency (ESA) [64–66] is shown in Fig. 7. The uncertainty associated with the structural properties (material and geometry) has been modeled by random variables, as listed in Table 1. The assumptions on the magnitude of the variability (COV) are partially based on data published in the literature [67]. Applying the assumptions in Table 1 to all corresponding instances in the FE model results in a total of 1320 random variables.

4.1.1. Reliability analysis

The present numerical example focuses on the analysis of the failure probability of the INTEGRAL satellite with COSSAN-X. More specifically, the limit state function defining the failure event has the following form,
g = a_thresh − max_{f ∈ [45, 70] Hz} a(f),    (25)
i.e. failure occurs if the maximum acceleration amplitude due to a harmonic support excitation in the frequency band 45–70 Hz exceeds the threshold value a_thresh. The observation point is located at the tip of one of the SAS (Sun Acquisition Sensor) booms mounted on the top of the satellite, cf. Fig. 7. In view of the large number of uncertain parameters, the present FE model required the use of a reliability analysis method suitable for high-dimensional reliability problems. The Subset Simulation method described in Section 2.4 fulfills this requirement and has hence been applied in the present case. As for the interoperability with the FE code (MSC.Nastran) – with reference to Fig. 4 – mode 1 has been adopted in this case. Fig. 8 shows the frequency response functions associated with various levels of the Subset Simulation. With reference to Fig. 1, each frequency response function may be associated with one of the black dots.
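For clarity, evaluating the limit state (25) from a computed frequency response function amounts to a one-line post-processing step; the following Matlab fragment uses a dummy FRF, and all names and numbers are hypothetical.

% Limit state (25): failure if the peak acceleration amplitude in the
% 45-70 Hz band exceeds the threshold a_thresh (g <= 0 means failure).
freq    = linspace(5, 100, 400);                    % dummy frequency axis [Hz]
accFRF  = abs(1e5 ./ (2200 - freq.^2 + 4i*freq));   % dummy FRF amplitude
athresh = 600;                                      % acceleration threshold
band    = (freq >= 45) & (freq <= 70);              % frequency band of interest
g       = athresh - max(accFRF(band));              % value of the limit state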
Fig. 8. Subset Simulation of the INTEGRAL satellite – frequency response functions associated with subset simulation levels 1 through 5 (500 samples per level).
Fig. 9. Subset Simulation – failure probability versus acceleration threshold level.
The fact that the amplitudes of the acceleration frequency response functions tend to increase with the subset level is characteristic of the Subset Simulation method. Indeed, with each subset level the performance function – in this case the frequency response function – approaches the threshold value which defines failure. The conditional Markov chain Monte Carlo samples obtained with Subset Simulation have been used to quantify the probability of failure. Fig. 9 plots the failure probability versus the threshold level of the acceleration; the latter corresponds to a_thresh in Eq. (25). The different line styles refer to Subset Simulation runs with different numbers of samples per level (100, 300 and 500). For the runs with 300 and 500 samples per level, the magnitude of the estimated standard deviation of the failure probability estimate is depicted at the threshold values for which log10 p̂_F is an integer. This so-called standard error serves as an error estimate for the failure probability estimate. The computational cost associated with this reliability analysis is reported in Fig. 10: the wall clock time of the Subset Simulation based analysis is compared with the (hypothetical) time which would be required for a direct Monte Carlo simulation of comparable accuracy. As a rough orientation, estimating p_F = 10^−4 with a coefficient of variation of 30% by direct Monte Carlo requires on the order of N ≈ (1 − p_F)/(δ² p_F) ≈ 10^5 samples, compared to a few thousand samples for Subset Simulation.

Fig. 10. Wall-clock time for p_F = 10^−4; Subset Simulation: 300 samples per level.

4.2. Turbine blade

The present section describes the application of uncertainty analysis to a finite element model of a turbine blade. The model is based on the one described in [68]; compared to the original model it is somewhat simplified, since the fir-tree region at the bottom is not included.
4.2.1. Model description

The mesh of the FE model has been constructed using the commercial pre-processor Patran [55] and consists of 12,933 tetrahedral (Tet4) elements, with a total of 3119 nodes and hence 18,714 DOFs. The model is fully fixed at the bottom and subjected to an inertial load in the vertical direction, corresponding to the centrifugal force due to the rotation of the turbine disc at a velocity of 12,200 rotations per minute. In the present example this results in a centripetal acceleration of 350 km/s². The analyzed blade has a height of 45 mm and the assumed material properties of the nominal model correspond to those of a nickel-based alloy, with a density of 8.9 g/cm³, a Young's modulus of 200 kN/mm² and a Poisson ratio of 0.31.

Fig. 11. Turbine blade – von Mises stresses, deterministic model.
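As a consistency check of the quoted figures (our arithmetic, not stated in the source), the relation a = ω²r implies the rotation radius at the evaluation point:

\omega = 12{,}200 \cdot \frac{2\pi}{60}\ \mathrm{rad/s} \approx 1278\ \mathrm{rad/s}, \qquad r = \frac{a}{\omega^2} = \frac{3.5 \times 10^{5}\ \mathrm{m/s^{2}}}{(1278\ \mathrm{rad/s})^{2}} \approx 0.21\ \mathrm{m}.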
4.2.2. Deterministic analysis

Prior to the uncertainty analysis, a deterministic analysis has been performed with the nominal model. Fig. 11 shows the von Mises stress distribution over the blade. Clearly, the maximum predicted stresses occur at the lower end of the lower support plate and of the curved portion of the blade, respectively. This is expected, since the loading consists exclusively of the distributed inertial forces associated with the mass density of the material. The corresponding deformations are shown in Fig. 12, indicating a slight bending of the curved turbine blade due to a small eccentricity.

Fig. 12. Deformed configuration of deterministic model.

4.2.3. Stochastic model

In the present study the uncertainty in the Young's modulus has been modeled as a random entity. A basic assumption is that the material properties of the curved blade, the horizontal plate and the vertical support plate are distinct and mutually independent. Two different assumptions on the nature of the random variability of the Young's modulus have been made and separately investigated, namely variability without and with spatial fluctuations.

(1) Without spatial fluctuations: The Young's moduli of the three components have been assumed to be three spatially invariant, mutually independent random variables with a CoV of 10%. The random Young's moduli are assumed to follow a (truncated) Gaussian distribution with a mean value corresponding to the nominal Young's modulus, i.e. 200 kN/mm².

(2) With spatial fluctuations: In this case the Young's modulus of each of the three components has been separately modeled as a homogeneous Gaussian random field. The covariance function has been assumed to be of exponential type, R(x, y) = σ² exp(−d/b), where d = ‖x − y‖ and where b is the correlation length. The standard deviation σ has been assumed to be 10% of the mean, thus resulting in the same CoV of the spatially fluctuating Young's modulus as in the case without spatial fluctuations.

Fig. 13 depicts two random realizations of the spatial distribution of the random Young's modulus, for different values of the correlation length b. Clearly, for the realization associated with the smaller correlation length, the rate of change of the spatial fluctuation is significantly greater.

Fig. 13. Random realizations of Young's modulus; left: b = 8 mm; right: b = 100 mm.
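A common way to realize such a homogeneous Gaussian random field on a FE mesh is a discrete Karhunen-Loeve expansion of the covariance matrix; the following Matlab fragment is a minimal sketch with dummy centroid coordinates (the array xyz, the truncation criterion and all numbers are assumptions, not the COSSAN-X implementation).

% Sample a Gaussian random field with covariance R(x,y) = s^2*exp(-d/b)
% at n points via eigendecomposition (discrete Karhunen-Loeve expansion).
n   = 500;                          % number of field points
xyz = 45*rand(n, 3);                % dummy centroids in a 45 mm box [mm]
mu  = 200e3;                        % mean Young's modulus [N/mm^2]
s   = 0.10*mu;                      % standard deviation (CoV = 10%)
b   = 8;                            % correlation length [mm]

D = zeros(n);                       % pairwise distances d = ||x - y||
for i = 1:3
    D = D + bsxfun(@minus, xyz(:,i), xyz(:,i)').^2;
end
D = sqrt(D);
C = s^2 * exp(-D/b);                % exponential covariance matrix

[Phi, Lam] = eig(C);                % KL modes and variances
[lam, ix]  = sort(max(diag(Lam), 0), 'descend');
m   = find(cumsum(lam)/sum(lam) > 0.95, 1);   % keep 95% of the variance
Phi = Phi(:, ix(1:m));

xi = randn(m, 1);                   % independent standard normal variables
E  = mu + Phi * (sqrt(lam(1:m)) .* xi);   % one realization of the field

Each call with a fresh xi yields one realization of the spatially fluctuating Young's modulus, as in Fig. 13; a small correlation length b requires more retained modes m for the same fraction of variance.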
4.2.4. Uncertainty analysis

The variability of the response has been quantified with the statistical estimator of the CoV, based on a Monte Carlo simulation with 100 samples. Of particular interest in the present study was the question whether different assumptions with respect to the spatial fluctuations of the properties have a significant impact on the predicted variability. Therefore, two sets of Monte Carlo simulation runs have been conducted: one in which the Young's moduli of each of the three portions of the blade model are fully correlated (i.e. without spatial fluctuations), and one in which the Young's moduli are modeled as random fields, as described above, with a correlation length of b = 8 mm.

Fig. 14 reveals the significant difference between representing the uncertain Young's modulus with the random variable model on one hand and with the random field model on the other hand. Indeed, for this response quantity of interest the variability is essentially negligible in the random variable case, as it remains below 0.1% throughout the considered domain, with the exception of small portions at the surface of the middle plate. For these localized portions, however, the significance of the CoV is greatly diminished, since – as Fig. 11 reveals – they correspond to regions in which the mean stress is close to zero. In contrast, if the uncertainty in the Young's modulus is modeled with random fields (right portion), then the CoV of the von Mises stress is more than an order of magnitude higher and reaches peaks between 1% and 5%. The physical explanation of this behavior is that a spatially varying Young's modulus causes stress redistributions, whereas a fully correlated Young's modulus does not affect the stress distribution, but only the displacements. This is confirmed by Fig. 15, where the variability of the vertical displacement due to the two different models for the uncertainty in the Young's modulus is shown. Indeed, for this quantity of interest the variability (CoV) is constantly about 10% throughout the domain in the case in which the Young's modulus is modeled as a random variable (left portion).
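For reference, the CoV fields shown in Figs. 14 and 15 correspond to the standard sample estimator, sketched below in Matlab with dummy data (the array S and its dimensions are hypothetical):

% Elementwise CoV estimate from nSim Monte Carlo response samples:
% S is an nElem-by-nSim array, one row per element (dummy data here).
S   = 70 + 3*randn(12933, 100);
CoV = std(S, 0, 2) ./ mean(S, 2);   % coefficient of variation per element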
Fig. 14. Variability of von Mises stress (CoV); left: random variable model for Young’s modulus; right: random field model, b ¼ 8 mm.
Fig. 15. Variability of vertical displacement (CoV); left: random variable model for Young’s modulus; right: random field model, b ¼ 8 mm.
The variability is therefore significantly larger than in the case in which the Young's modulus is modeled as a random field. As the right portion of Fig. 15 shows, in the latter case the CoV is around 5% over most of the domain. It should be noted that the small CoVs in the region near the support have been set artificially; this is acceptable because the significance of the CoV in this region is very limited, since the mean displacement approaches zero.

These results confirm the importance of correctly modeling the spatial fluctuations of the variability in structural properties. The impact of including the spatial fluctuations on the response variability is diametrically opposite for the stress and the displacement response: the stress variability increases due to the spatial fluctuations in the Young's modulus, whereas the displacement variability decreases. Similar observations have been made in an earlier study on a cylindrical shell [69].

4.2.5. Reliability analysis

The different impact of the alternative choices of the uncertainty model on the stress variability obviously carries over to the analysis of the failure probability.
Fig. 16. Failure probability with respect to stress threshold of 70 MPa; left: random variable model for Young’s modulus; right: random field model, b ¼ 8 mm.
In the present study, local failure has been defined at each point of the structure in terms of the following limit state function,

g = σ_thresh − σ_vM,    (26)

where σ_vM denotes the von Mises stress at the considered location and σ_thresh the stress threshold; the results in Fig. 16 refer to σ_thresh = 70 MPa. Fig. 16 shows the local failure probability throughout the domain. In the case in which the Young's modulus is modeled as a spatially constant random variable, the failure probability is below the level that can be estimated with the utilized number of samples. In contrast, for the uncertainty model which does include the spatial fluctuations, there are two regions near the support in which the failure probability assumes values in excess of 50%. This shows again the importance of considering the random spatial fluctuations of the material properties in the utilized uncertainty model, since otherwise stress-based reliability estimates may be grossly in error on the unsafe side.
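The corresponding post-processing is elementary; the following Matlab fragment sketches it with dummy data (array names and numbers are hypothetical): the local failure probability is estimated as the fraction of Monte Carlo samples in which the local von Mises stress exceeds the threshold.

% Elementwise failure probability estimate for a 70 MPa stress threshold.
S        = 65 + 3*randn(12933, 100);   % dummy von Mises stresses [MPa]
pF_local = mean(S >= 70, 2);           % estimate of P(sigma_vM >= 70 MPa)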
5. Conclusions

The present manuscript discussed the implementation of uncertainty modeling and analysis procedures, with the objective of scalability, i.e. the ability to permit uncertainty analysis of state-of-the-art numerical models of growing complexity. Essential aspects and challenges have been elaborated upon, with special emphasis on the interoperability with finite element analysis software. The presented software code, COSSAN-X, has been developed with the motivation to facilitate the application of structural reliability analysis methods to numerical models of industrial applications, such as the analyzed satellite structure and turbine blade. The main conclusions emerging from this work are as follows:

- The cornerstones of this integration are (i) multi-modal interoperability between finite element software and the uncertainty modeling and analysis tools, (ii) advanced, scalable uncertainty propagation and reliability analysis algorithms, and (iii) the utilization of high performance computing.
- Due to the efficiency of recently developed, highly scalable reliability analysis methods – such as Subset Simulation – the probability of extremely unlikely, unfavorable events can be estimated also for large-scale structures modeled with sophisticated finite element models.
- The spatial fluctuations of the uncertain material properties can have a significant impact on the probability of unfavorable events. As the blade example indicates, neglecting these fluctuations may lead to non-conservative estimates of the failure probability. The currently implemented features for random field modeling are a critical tool for capturing the effects associated with this spatial variability.
- The presented object-oriented implementation consists of a set of classes running in the widely used Matlab computing environment; it represents a flexible and extensible development environment for applications requiring the integration of computational mechanics and uncertainty modeling. Examples of such applications are reliability-based optimization, robustness analysis and V&V (verification and validation).

Finally, there is no longer a plausible motivation to forego the possibilities offered by systematic uncertainty analysis in structural engineering, since all ingredients – i.e. the theoretical basis, the efficient algorithms and the computing resources – are in place.
Acknowledgements

The financial support of the Austrian Science Fund (FWF) through Project L269-N13 is gratefully acknowledged. The help of Mr. Pichler and Mr. Panayirci in the preparation of the numerical examples is greatly appreciated.
References

[1] Schuëller GI, editor. A state-of-the-art report on computational stochastic mechanics. J Prob Eng Mech 1997;12(4):197–321.
[2] Schenk CA, Schuëller GI. Uncertainty assessment of large finite element systems. Berlin/Heidelberg/New York: Springer-Verlag; 2005.
[3] Guide for verification and validation in computational solid mechanics. An American standard ASME V&V 10-2006. American Society of Mechanical Engineers; 2006.
[4] Charmpis D, Schuëller GI. Finite element mesh refinement in view of physical uncertainties. In: Ramm E, Wall W, Bletzinger K-U, Bischoff M, editors. Proceedings of IASS/IACM 2005 (CD-ROM), Salzburg; 2005.
[5] Stavroulaki M, Stavroulakis G, Leftheris B. Modelling prestress restoration of buildings by general purpose structural analysis and optimization software, the optimization module of MSC/NASTRAN. Comput Struct 1997;62:81–92.
[6] Richter C-H. Structural design of modern steam turbine blades using ADINA. Comput Struct 2003;81:919–27.
[7] Schuëller GI, editor. Structural reliability software. Structural Safety – Special Issue 2006;28(1–2):1–216.
[8] Pellissetti MF, Schuëller GI. On general purpose software in structural reliability. Struct Safety 2006;28(1–2):3–16.
[9] Ang A-S, Tang H. Probability concepts in engineering planning and design. John Wiley and Sons; 1975.
[10] Schuëller G. Einführung in die Sicherheit und Zuverlässigkeit von Tragwerken. Berlin, München: Wilhelm Ernst und Sohn; 1981.
[11] Vanmarcke E. Random fields: analysis and synthesis. Cambridge, Massachusetts: MIT Press; 1983.
[12] Bendat J, Piersol A. Random data: analysis & measurement procedures. 3rd ed. New York: Wiley-Interscience; 2000.
[13] Grigoriu M. Stochastic calculus: applications in science and engineering. Boston: Birkhäuser; 2002.
[14] Loeve M. Probability theory. 4th ed. New York: Springer-Verlag; 1977.
[15] Ghanem R. Ingredients for a general purpose stochastic finite elements implementation. Comput Meth Appl Mech Eng 1999;168(1–4):19–34.
[16] Schuëller GI, Pradlwarter HJ, Schenk C. Nonstationary response of large linear FE-models under stochastic loading. Comput Struct 2003;81(8–11):937–47.
[17] Ghanem R, Spanos P. Stochastic finite elements: a spectral approach. Berlin: Springer-Verlag; 1991.
[18] Kleiber M, Hien T. The stochastic finite element method: basic perturbation technique and computer implementation. John Wiley and Sons; 1992.
[19] Chakraborty S, Dey S. A stochastic finite element dynamic analysis of structures with uncertain parameters. Int J Mech Sci 1998;40(11):1071–87.
[20] Crandall S. Perturbation techniques for random vibration of nonlinear systems. J Acoust Soc Am 1963;35:1700–5.
[21] Shinozuka M, Deodatis G. Response variability of stochastic finite element systems. J Eng Mech ASCE 1988;114(3):499–519.
[22] Brenner C. Stochastic finite element methods (literature review). Technical report, Institute of Engineering Mechanics, Leopold-Franzens University, Innsbruck, Austria, EU; 1991.
[23] Schuëller GI, Stix R. A critical appraisal of methods to determine failure probabilities. J Struct Safety 1987;4:293–309.
[24] Ditlevsen O, Madsen HO. Structural reliability methods. Chichester: John Wiley and Sons; 1996.
[25] Schuëller GI, Pradlwarter HJ, Koutsourelakis P. A critical appraisal of reliability estimation procedures for high dimensions. Probab Eng Mech 2004;19(4):463–74.
[26] Schuëller GI, Pradlwarter HJ. Benchmark study on reliability estimation in higher dimensions of structural systems – an overview. Struct Safety 2007;29(3):167–82.
[27] Rubinstein R. Simulation and the Monte Carlo method. New York/Chichester/Brisbane/Toronto: John Wiley and Sons; 1981.
[28] Au S-K, Beck J. Estimation of small failure probabilities in high dimensions by subset simulation. Probab Eng Mech 2001;16(4):263–77.
[29] Katafygiotis L, Cheung SH. Wedge simulation method for calculating the reliability of linear dynamical systems. Probab Eng Mech 2004;19:229–38.
[30] Zienkiewicz O, Taylor R. The finite element method. 5th ed. Oxford, UK: Butterworth-Heinemann; 2000.
[31] Smith I, Griffiths D. Programming the finite element method. 3rd ed. Chichester, UK: John Wiley and Sons; 1998.
[32] Shephard M, Beall M, O'Bara R, Webster B. Toward simulation-based design. Finite Elem Anal Des 2004;40(12):1575–98.
[33] Niggl A, Rank E, Mundani R-P, Bungartz H-J. A framework for embedded structural simulation: benefits in building design. In: Joint international conference on computing and decision making in civil and building engineering, Montréal, Canada; 2006. p. 1768–77.
[34] Creel M. User-friendly parallel computations with econometric examples. Comput Econ 2005;26:107–28.
[35] Noble M. Getting more from your multicore: exploiting OpenMP from an open-source numerical scripting language. Concurr Comput: Pract Experience 2008;20:1877–91.
[36] Papadrakakis M, Kotsopulos A. Parallel solution methods for stochastic finite element analysis using Monte Carlo simulation. Comput Meth Appl Mech Eng 1999;168(1–4):305–20.
[37] Pellissetti M. Parallel processing in structural reliability. Int J Struct Eng Mech 2009;32(1).
[38] Ghanem R, Kruger R. Numerical solution of spectral stochastic finite element systems. Comput Meth Appl Mech Eng 1996;129:289–303.
[39] Keese A, Matthies H. Hierarchical parallelisation for the solution of stochastic finite element equations. Comput Struct 2005;83:1033–47.
[40] Sotelino E. Parallel processing techniques in structural engineering applications. J Struct Eng ASCE 2003;129(12):1698–706.
[41] Farhat C, Lesoinne M. Automatic partitioning of unstructured meshes for the parallel solution of problems in computational mechanics. Int J Numer Meth Eng 1993;36(5):745–64.
[42] Quarteroni A, Valli A. Domain decomposition methods for partial differential equations. Numerical mathematics and scientific computation. New York, USA: Oxford University Press, Inc.; 1999.
[43] Bitzarakis S, Papadrakakis M, Kotsopulos A. Parallel solution techniques in computational structural mechanics. Comput Meth Appl Mech Eng 1997;148(1–2):75–104.
[44] Wriggers P, Boersma A. A parallel algebraic multigrid solver for problems in solid mechanics discretisized by finite elements. Comput Struct 1998;69(1):129–37.
[45] Krysl P, Bittnar Z. Parallel explicit finite element solid dynamics with domain decomposition and message passing: dual partitioning scalability. Comput Struct 2001;79(3):345–60.
[46] Farhat C, Cortial J, Dastillung C, Bavestrello H. Time-parallel implicit integrators for the near-real-time prediction of linear structural dynamic responses. Int J Numer Meth Eng 2006;67:697–724.
[47] Stenstrom P. The paradigm shift to multi-cores: opportunities and challenges. Appl Comput Math 2007;2:253–7.
[48] Thomaszewski B, Pabst S, Blochinger W. Parallel techniques for physically based simulation on multi-core processor architectures. Comput Graph 2008;32:25–40.
[49] COSSAN-X, Version 811. Institute of Engineering Mechanics, University of Innsbruck, Austria, EU; 2008.
[50] COSSAN, COmputational Stochastic Structural ANalysis, Part A, User's Manual. Institute of Engineering Mechanics, University of Innsbruck, Austria, EU; 1996.
[51] Schuëller GI, Pradlwarter HJ. Computational stochastic structural analysis (COSSAN) – a software tool. Struct Safety 2006;28(1–2):68–82.
[52] Pellissetti MF, Schuëller GI. On general purpose software in structural reliability. Struct Safety 2006;28(1–2):3–16.
[53] Abaqus, Version 6.6. ABAQUS, Inc., Providence, RI, USA; 2006.
[54] MSC.Nastran, Version 2005. MSC.Software Corporation, Santa Ana, CA, USA; 2005.
[55] Patran, MD R2.1. MSC.Software Corporation, Santa Ana, CA, USA; 2008.
[56] Matlab, Version 2007b. The Mathworks, Natick, MA, USA; 2007.
[57] Sun Microsystems, Inc., Santa Clara, CA. N1 Grid Engine 6 User's Guide; 2005.
[58] Gropp W, Lusk E, Doss N, Skjellum A. A high-performance portable implementation of the MPI message passing interface standard. Parallel Comput 1996;22(6):789–828.
[59] Ansys, Version 11. ANSYS, Inc., Canonsburg, PA, USA; 2008.
[60] FEAP, Version 7.5.m. Regents of the University of California, Berkeley, CA, USA; 2004.
[61] Pradlwarter HJ. Relative importance of uncertain structural parameters, part I: algorithm. Comput Mech 2007;40(4):627–35.
[62] Pellissetti MF, Pradlwarter HJ, Schuëller GI. Relative importance of uncertain structural parameters, part II: applications. Comput Mech 2007;40(4):637–49.
[63] Jensen H, Valdebenito M, Schuëller G. An efficient reliability-based optimization scheme for uncertain linear systems subject to general Gaussian excitation. Comput Meth Appl Mech Eng 2008;194(1):72–87.
[64] Notarnicola M, Paron A, Tizzani L, Evans E. Integral – structural mathematical model description and dynamic analysis results. Technical report INT-TN-Al0089, Issue 2, Alenia Aerospazio Space Division, Turin, Italy; 1998.
[65] Moreno C. Integral – service module structure mathematical model description. 1st ed. Technical report INT-TN-CAS-1002. CASA Space Division, Madrid, Spain; 1998.
[66] Oxfort M. Integral – PLM payload module structure FEM description. 2nd ed. Technical report INT-RP-OCW-0002. Oerlikon-Contraves BU Space, Zürich-Seebach, Switzerland; 1997.
[67] Esnault P, Klein M. Factors of safety and reliability – present guidelines & future aspects. In: Proceedings of the conference on spacecraft structures, materials & mechanical testing, SP-386, European Space Agency, Noordwijk, The Netherlands; 1996. p. 109–19.
[68] Witek L. Failure analysis of turbine disc of an aero engine. Eng Fail Anal 2006;13:9–17.
[69] Charmpis D, Schuëller GI, Pellissetti M. The need for linking micromechanics of materials with stochastic finite elements: a challenge for materials science. Comput Mater Sci 2007;41:27–37.