International Journal of Impact Engineering 136 (2020) 103418
Rigorous uncertainty quantification and design with uncertain material models

X. Sun^a, T. Kirchdoerfer^b, M. Ortiz^⁎,a

^a Division of Engineering and Applied Science, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA
^b Lawrence Livermore National Laboratory, Livermore, CA 94550, USA

⁎ Corresponding author. E-mail address: [email protected] (M. Ortiz).

https://doi.org/10.1016/j.ijimpeng.2019.103418
Received 21 July 2019; Received in revised form 9 September 2019; Accepted 21 October 2019; Available online 25 October 2019
0734-743X/© 2019 Elsevier Ltd. All rights reserved.

ARTICLE INFO

Keywords: Uncertainty quantification; Safe design; Quantification of margins and uncertainties; Concentration-of-measure inequalities; Sub-ballistic impact; AZ31B Mg alloy

ABSTRACT

We assess a method of quantification of margins and uncertainties (QMU) in applications where the main source of uncertainty is an imperfect knowledge or characterization of the material behavior. The aim of QMU is to determine adequate design margins given quantified uncertainties and a desired level of confidence in the design. We quantify uncertainties through rigorous probability bounds computed by exercising an existing deterministic code in order to sample the mean response and identify worst-case combinations of parameters. The resulting methodology is non-intrusive and can be wrapped around existing solvers. The use of rigorous probability bounds ensures that the resulting designs are conservative to within a desired level of confidence. We assess the QMU framework by means of an application concerned with sub-ballistic impact of AZ31B Mg alloy plates. We assume the design specification to be a maximum allowable backface deflection of the plate. As a simple scenario, we specifically assume that, under the conditions of interest, the plate is well-characterized by the Johnson-Cook model, but the parameters of the model are uncertain. In calculations, we use the commercial finite-element package LS-Dyna and DAKOTA Version 6.7. The assessment demonstrates the feasibility of the approach and shows how it results in high-confidence designs that are well within the practical range of engineering application.

1. Introduction

There are many applications in engineering in which imperfect knowledge of material behavior is the main source of uncertainty in predicted quantities of interest for purposes of design. In such cases, the geometry, loading and other operating conditions of the system are known with a high degree of certainty, whereas the material behavior is only imperfectly known due to its complexity, stochastic character, paucity of experimental data, and other factors. These uncertainties render deterministic analysis of limited value. Instead, it becomes necessary to estimate the likely spread of performance metrics and relevant design parameters in order to provide an adequate design margin and meet specifications with sufficient confidence. Within computational science, Uncertainty Quantification (UQ) refers to a family of solution strategies that aim to characterize the variability of a given analysis and the spread in the predicted performance of a system.

The work presented in this paper focuses specifically on systems in which the main source of uncertainty is an imperfect knowledge of material behavior, as described by a parameterized constitutive model. These models often represent the primary link between experimental inputs and predicted outputs and thus constitute the strongest source of physical fidelity in a given calculation. Traditionally, the experimental inputs would be characterized almost exclusively by laboratory experiments. More recently, the source of constitutive data might just as easily be subgrid-scale simulations. Regardless of the source, analysts are most commonly restricted to a limited set of constitutive model forms, either by their simulation tool of choice or by the significant effort required to formulate, implement and characterize a new constitutive model. In practice, this restriction limits how well a fixed set of constitutive parameters can represent a broad range of complex constitutive behaviors.

Our work focuses on the design of systems against critical thresholds using Quantification of Margins and Uncertainties (QMU) [1–4]. The aim of QMU is to determine adequate design margins given quantified uncertainties and a desired level of confidence in the design. The specific approach by which we quantify uncertainties is through rigorous upper bounds on the probability that the system fail to perform within the design margin. Specifically, the probability upper bounds that we use, due to McDiarmid [5], belong to a general class known as concentration-of-measure (CoM) inequalities [6].


Such bounds are rigorous, i.e., they are guaranteed to be conservative and to result in safe designs; they become sharper with an increasing number of input variables (the 'blessing of dimensionality'); and they require knowledge of the ranges of the input variables only, not of their full probability distributions, as is the case for Bayesian methods. The bounds are computed by exercising an existing deterministic code in order to sample the mean response of the system and identify worst-case combinations of parameters. The resulting UQ methodology is, therefore, non-intrusive [1,3,4,7,8].

In this work, we aim to assess this QMU approach in applications in which the main source of uncertainty is the material model. We carry out this assessment by means of an example concerned with the sub-ballistic impact of an elastic-plastic AZ31B Mg alloy plate struck by a heavy elastic ball. We adopt as design condition a maximum allowable value of the backface deflection of the plate. For simplicity, we specifically hypothesize that the material behavior of the plate is known to be well-represented by the Johnson-Cook constitutive model [9], but the corresponding material constants are only imperfectly characterized experimentally. Thus, we assume that we are given an experimental data set of yield strength values measured over a sample of effective plastic strains, strain rates and temperatures and that, in order to cover a percentile 1 − ϵ″ of the experimental data, the Johnson-Cook parameters must be allowed to vary over a certain range. In calculations, we specifically use the data set of Hasenpouth [10], who conveniently provides parameter ranges ensuring 95% coverage of the data, or ϵ″ = 0.05. Calculations are carried out over a range of impact velocities and obliquity angles using the commercial finite-element package LS-Dyna [11] on a single converged mesh. This strategy is entirely analogous to design testing for impact resistance [12], wherein performance is evaluated relative to a targeted set of characterized impact conditions. The mean maximum backsurface deflection is computed using the DAKOTA Version 6.7 software package [13] of Sandia National Laboratories.

The remainder of the paper is structured as follows. For completeness, in Section 2 we start by reviewing the QMU theoretical framework and the probability bounds used for purposes of UQ. In Section 3, we proceed to illustrate the QMU framework by means of an application concerned with sub-ballistic impact of AZ31B Mg alloy plates. A critique of the approach, a discussion of methodological tradeoffs and suggestions for further work are finally presented in Section 4.

2. Methodology

For the sake of completeness and convenience, in this section we briefly summarize the QMU theoretical framework as it applies to safe design. Further details and extensions of the theory can be found in Refs. [8,14].

2.1. Quantification of margins and uncertainties (QMU)

We are concerned with a system whose performance is described by a known response function
\[
Y = F(X), \tag{2.1}
\]
where X ≡ (X₁, …, X_N) are N real-valued random variables, expressing imperfectly known or uncertain properties of the system, and Y is a real-valued random variable, or performance measure. The function F also depends on parameters representing operating conditions and design parameters, but such dependence is omitted for economy of notation. For instance, in the application to impact presented subsequently: X are uncertain material constants; Y is the maximum backsurface deflection of the plate; the operating parameters are the mass of the projectile, the impact velocity and the angle of attack, or obliquity; and the design parameter of choice is the thickness of the plate. The design specifications require that Y remain below a critical value Yc. Thus, the design fails if Y ≥ Yc. The system is designed with complete certainty if
\[
\mathbb{P}[Y < Y_c] = 1, \tag{2.2}
\]
i.e., if the probability that the system perform according to specification is 1. In general, complete certainty may not be attainable or results in overly conservative designs. In such cases, we may tolerate a small probability of failure ϵ and accept a design if
\[
\mathbb{P}[Y \ge Y_c] \le \epsilon. \tag{2.3}
\]
Another difficulty encountered in practice is that the probability of failure ℙ[Y ≥ Yc] cannot be computed exactly due to an imperfect knowledge of the probability distribution of X, the complexity of the calculation, or other factors. In such cases, a conservative design can be ensured if an upper bound
\[
\mathbb{P}[Y \ge Y_c] \le P_{UB} \tag{2.4}
\]
is known. Indeed, by requiring that
\[
P_{UB} \le \epsilon, \tag{2.5}
\]
the design criterion (2.3) is automatically satisfied. Evidently, the tighter the bound P_UB the more economical the design. However, increasing tightness comes at increasing computational expense, which sets forth a fundamental trade-off between economy of design and computability. In the present work, we use McDiarmid's bound [5] as a working compromise between tightness and computational complexity. One advantage of McDiarmid's bound is that its evaluation does not require the probability distribution of X, which is often unknown or imperfectly known. It is enough to know that the values of the input random variables lie within intervals (E₁, …, E_N), i.e., x₁ ∈ E₁, …, x_N ∈ E_N, with probability 1 − ϵ″. On this basis, McDiarmid's inequality states that, with probability 1 − ϵ″,
\[
\mathbb{P}\big[F(X) - \mathbb{E}[F(X)] \ge r\big] \le \exp\left(-2\,\frac{r^2}{D^2}\right), \tag{2.6}
\]
where r ≥ 0 is a slack variable, 𝔼[F(X)] is the expected system performance and
\[
D = \left(\sum_{i=1}^{N} D_i^2\right)^{1/2} \tag{2.7}
\]
is the system diameter. In (2.7), D_i denotes the subdiameter of the input variable X_i, which follows from the optimization problem
\[
D_i = \Big(\sup_{\hat{x}_i \in \hat{E}_i,\; x_i, x_i' \in E_i} \big|F(\hat{x}_i, x_i) - F(\hat{x}_i, x_i')\big|^2\Big)^{1/2}, \tag{2.8}
\]
where we write \(\hat{x}_i = (x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_N)\), \(\hat{E}_i = E_1 \times \cdots \times E_{i-1} \times E_{i+1} \times \cdots \times E_N\), \((\hat{x}_i, x_i) = (x_1, \dots, x_{i-1}, x_i, x_{i+1}, \dots, x_N)\) and \((\hat{x}_i, x_i') = (x_1, \dots, x_{i-1}, x_i', x_{i+1}, \dots, x_N)\). McDiarmid's inequality (2.6) expresses the remarkable property that, when the input random variables are restricted to intervals, the probability of deviation of the response from the mean decays exponentially on the scale of the diameter of the system. The subdiameter D_i measures the largest deviation in system performance, in the sense of the modulus of variation, resulting from a finite variation of the corresponding input variable. It can also be regarded as a worst-case finite sensitivity measure in which variations of the input variable are allowed to span its full range. Substituting r = (Yc − 𝔼[Y])₊ in Eq. (2.6), where x₊ := max(x, 0), and rearranging terms, we obtain
\[
\mathbb{P}[Y \ge Y_c] \le \exp\left(-2\,\frac{(Y_c - \mathbb{E}[Y])_+^2}{D^2}\right) \equiv P_{UB}, \tag{2.9}
\]
which provides an upper bound on the probability of failure. A conservative design criterion is, then,


\[
\exp\left(-2\,\frac{(Y_c - \mathbb{E}[Y])_+^2}{D^2}\right) \le \epsilon. \tag{2.10}
\]
A straightforward manipulation gives the equivalent criterion
\[
(Y_c - \mathbb{E}[Y])_+ \ge D\,\sqrt{\tfrac{1}{2}\log\tfrac{1}{\epsilon}}. \tag{2.11}
\]
Evidently,
\[
M = (Y_c - \mathbb{E}[Y])_+ \tag{2.12}
\]
measures the design margin, whereas
\[
U = D \tag{2.13}
\]
provides an unambiguous definition and measure of uncertainty. With these identifications, the design criterion (2.11) can be expressed as
\[
M \ge U\,\sqrt{\tfrac{1}{2}\log\tfrac{1}{\epsilon}}. \tag{2.14}
\]
The ratio CF = M/U of margin to uncertainty measures the confidence that can be placed on the design and is referred to as the confidence factor. The design criterion (2.14) simply requires that the confidence in the design, as measured by the confidence factor, be greater than a minimum value.
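To make the bookkeeping concrete, the bound (2.9) and the margin-uncertainty relations (2.12)–(2.14) can be evaluated in a few lines. The following Python sketch is illustrative only; the function names and scalar interface are ours, not part of the paper's toolchain.

```python
import numpy as np

def mcdiarmid_pof_bound(y_c, y_mean, diameter):
    """Upper bound P_UB on P[Y >= Yc], Eq. (2.9)."""
    margin = max(y_c - y_mean, 0.0)            # M = (Yc - E[Y])_+, Eq. (2.12)
    return np.exp(-2.0 * margin**2 / diameter**2)

def design_is_admissible(y_c, y_mean, diameter, eps):
    """Design criterion (2.14): CF = M/U >= sqrt(log(1/eps)/2), with U = D."""
    cf = max(y_c - y_mean, 0.0) / diameter     # confidence factor, Eq. (2.13)
    return cf >= np.sqrt(0.5 * np.log(1.0 / eps))
```

By construction, `design_is_admissible` is equivalent to requiring `mcdiarmid_pof_bound(...) <= eps`, i.e., to the criterion (2.5) with the bound (2.9).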

The preceding methodology can be extended to the case in which the exact mean performance 𝔼[Y] is not available and the mean performance must instead be estimated. To this end, we assume that a sample of predicted performance measures y₁, y₂, …, y_n is obtained by conducting n independent evaluations of the model F(X) based on an unbiased sampling of the random input variables. The corresponding empirical mean performance is
\[
\langle Y \rangle = \frac{1}{n} \sum_{k=1}^{n} y_k. \tag{2.15}
\]
Lucas et al. [8] showed that the probability of failure ℙ[Y ≥ Yc] can be determined to within confidence intervals by considering the randomness of the estimated mean ⟨Y⟩. The result is
\[
\mathbb{P}\left[\,\mathbb{P}[Y \ge Y_c] \ge \exp\left(-2\,\frac{(Y_c - \langle Y \rangle - \alpha)_+^2}{D^2}\right)\right] \le \epsilon', \tag{2.16}
\]
where 1 − ϵ′ is a pre-specified confidence level for the mean estimation and
\[
\alpha = D\,\sqrt{\frac{-\ln \epsilon'}{2n}} \tag{2.17}
\]
represents a loss of margin. Provided that this margin hit is taken into account, we have, with probability greater than 1 − ϵ′,
\[
\mathbb{P}[Y \ge Y_c] \le \exp\left(-2\,\frac{(Y_c - \langle Y \rangle - \alpha)_+^2}{D^2}\right) \equiv P_{UB}, \tag{2.18}
\]
which supplies an upper bound on the probability of failure. A conservative design criterion then follows as
\[
(Y_c - \langle Y \rangle - \alpha)_+ \ge D\,\sqrt{\tfrac{1}{2}\log\tfrac{1}{\epsilon}}, \tag{2.19}
\]
which may in turn be recast in the form (2.14) with margin
\[
M = (Y_c - \langle Y \rangle - \alpha)_+ \tag{2.20}
\]
and uncertainty Eq. (2.13).
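As a minimal sketch of the empirical-mean correction, assuming only Eqs. (2.15) and (2.17) (function and variable names are ours):

```python
import numpy as np

def empirical_mean(samples):
    """Empirical mean performance <Y>, Eq. (2.15)."""
    return np.mean(samples)

def margin_hit(diameter, n, eps_prime):
    """Margin hit alpha = D * sqrt(-ln(eps') / (2n)), Eq. (2.17)."""
    return diameter * np.sqrt(-np.log(eps_prime) / (2.0 * n))

# Example: for n = 2,000 and eps' = 1e-3, margin_hit(1.0, 2000, 1e-3)
# evaluates to about 0.0416, reproducing the alpha/D entry of Table 3.
```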

2.2. Performance rating

The QMU analysis described in the foregoing provides the basis for rating system performance for purposes of safe design. Begin by assuming that the mean performance is computed exactly, or ϵ′ = 0, and that the input parameters take values in the intervals E₁ × ⋯ × E_N for sure. Then, from (2.18), the probability that the design meet specifications satisfies the bound
\[
1 - \mathbb{P}[Y \ge Y_c] \ge 1 - P_{UB}, \tag{2.21}
\]
with P_UB as in (2.18) and ⟨Y⟩, D and α as defined previously. More generally, with partial confidence in the estimated mean performance, ϵ′ > 0, and in the material parameter ranges, ϵ″ > 0, conditioning of probabilities gives
\[
1 - \mathbb{P}[Y \ge Y_c] \ge (1 - \epsilon')(1 - \epsilon'')(1 - P_{UB}), \tag{2.22}
\]
or
\[
\mathbb{P}[Y \ge Y_c] \le 1 - (1 - \epsilon')(1 - \epsilon'')(1 - P_{UB}), \tag{2.23}
\]
which bounds above the probability of failure of the design. The design is then conservatively rated if
\[
1 - (1 - \epsilon')(1 - \epsilon'')(1 - P_{UB}) \le \epsilon, \tag{2.24}
\]
or
\[
P_{UB} \le 1 - \frac{1 - \epsilon}{(1 - \epsilon')(1 - \epsilon'')} \equiv \epsilon''', \tag{2.25}
\]
where ϵ is the probability-of-failure tolerance. We note that, since P_UB ≥ 0, we must necessarily have
\[
\epsilon \ge 1 - (1 - \epsilon')(1 - \epsilon''), \tag{2.26}
\]
which sets a lower bound on the probability-of-failure tolerance that can be allowed for. Inserting (2.19) into (2.25) and solving, we obtain
\[
Y_c \ge \langle Y \rangle + \alpha + \mathrm{CF}\cdot D \equiv Y_{\min}, \tag{2.27}
\]
with
\[
\mathrm{CF} = \sqrt{\tfrac{1}{2}\log\tfrac{1}{\epsilon'''}} \tag{2.28}
\]
by way of effective confidence factor. Eq. (2.27) gives the smallest threshold deflection Ymin that the system can be rated for.

Suppose now that a system needs to be designed so as to meet a specified rating requirement Ymin ≤ Yc. For simplicity, we assume that the system performance, and therefore its rating Ymin(t), depends on a single design parameter t. Then, we seek to determine intervals of the design parameter that guarantee a rating Ymin(t) ≤ Yc with confidence CF. In this scenario, computing the empirical mean performance ⟨Y⟩, margin hit α and system diameter D as functions of t and inserting the resulting values into (2.27) gives the design condition
\[
Y_{\min}(t) = \langle Y \rangle(t) + \alpha(t) + \mathrm{CF}\cdot D(t) \le Y_c, \tag{2.29}
\]
which determines the admissible range of t.
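The rating and design conditions (2.25) and (2.27)–(2.29) chain together as follows. This is a sketch under the paper's equations, with names of our own choosing; the thickness scan assumes tabulated values of ⟨Y⟩(t), α(t) and D(t) such as those produced in Section 3.

```python
import numpy as np

def rating_threshold(y_mean, alpha, diameter, eps, eps_p, eps_pp):
    """Smallest ratable threshold Ymin = <Y> + alpha + CF*D, Eqs. (2.25)-(2.28)."""
    eps_ppp = 1.0 - (1.0 - eps) / ((1.0 - eps_p) * (1.0 - eps_pp))  # Eq. (2.25)
    if eps_ppp <= 0.0:
        raise ValueError("tolerance below the floor set by Eq. (2.26)")
    cf = np.sqrt(0.5 * np.log(1.0 / eps_ppp))                       # Eq. (2.28)
    return y_mean + alpha + cf * diameter                           # Eq. (2.27)

def min_admissible_t(ts, y_means, alphas, diameters, y_c, eps, eps_p, eps_pp):
    """Smallest tabulated design parameter t with Ymin(t) <= Yc, Eq. (2.29)."""
    for t, ym, a, d in sorted(zip(ts, y_means, alphas, diameters)):
        if rating_threshold(ym, a, d, eps, eps_p, eps_pp) <= y_c:
            return t
    return None  # no admissible design in the tabulated range
```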

3. Numerical experiments

We proceed to illustrate the QMU framework described in the foregoing by means of an application concerned with the sub-ballistic impact of AZ31B Mg alloy plates, Fig. 1. We assume the design specification to be a maximum allowable backface deflection of the plate, Fig. 2. We assume that all uncertainty arises from an imperfect characterization of the constitutive response of the plate. As a simple scenario, we assume that, under the conditions of interest, the plate is well-characterized by the Johnson-Cook model [9], but the parameters of the model are uncertain. Specifically, they must be allowed to vary over certain ranges in order to cover the experimental data with prescribed probability. For simplicity, the projectile is assumed to be elastic and uncertainty-free. Parameters describing the operating conditions of the plate are the mass of the projectile, its impact velocity and its angle of attack, or obliquity. The main design parameter is assumed to be the thickness of the plate. The objective of the QMU analysis is to determine the design margin that is required in order to meet the design specifications with a prescribed confidence factor.


Table 1
Lower and upper bounds of 95% confidence intervals for the AZ31B/RD Mg alloy Johnson-Cook parameters [10].

Parameter   Lower bound   Upper bound
A (MPa)     200.372       249.970
B (MPa)     150.682       186.010
n           0.160         0.324
C           0.012         0.014
m           1.523         1.577

Fig. 1. Schematic of the computational model. (a) Perspective view of the projectile/plate system. (b) Mid y–z cross-section showing impact velocity v, angle of attack θ and plate thickness t.

Fig. 2. Visualization of the dynamic indentation process. (a) Perspective view of the projectile/plate system. (b) Cross-sectional view with maximum backface deflection labeled dmax.

Table 2
Fixed system parameters used in the ballistic problem. Material constants for AZ31B Mg alloy from Ref. [10].

Plate (AZ31B Mg)         Value    (unit)
Mass density             1.77     (g/cm³)
Young's modulus          45.0     (GPa)
Poisson's ratio          0.35
Reference strain rate    0.001    (s⁻¹)
Reference temperature    298.0    (K)
Reference melt. temp.    905.0    (K)
Plate width              10.0     (cm)

Projectile (elastic)     Value    (unit)
Mass density             11.34    (g/cm³)
Young's modulus          14.0     (GPa)
Poisson's ratio          0.42
Diameter                 1.12     (cm)
Total mass               8.34     (g)

Table 3
Margin hit vs. required confidence 1 − ϵ′ for sample size n = 2,000, Eq. (2.17).

ϵ′     10⁻¹        10⁻²        10⁻³        10⁻⁴        10⁻⁵
α/D    0.0239926   0.0339307   0.0415565   0.0479853   0.0536492

3.1. Constitutive behavior

We assume that the constitutive behavior of the plate is well-characterized by the Johnson-Cook model [9],
\[
\sigma(\epsilon_p, \dot{\epsilon}_p, T) = \big[A + B\,\epsilon_p^{\,n}\big]\big[1 + C \ln \dot{\epsilon}_p^{\,*}\big]\big[1 - T^{*m}\big], \tag{3.1}
\]
where σ is the true Mises stress, ϵ_p is the equivalent plastic strain, ϵ̇_p is the plastic strain rate, and T is the temperature. The normalized plastic strain rate ϵ̇_p* is defined as
\[
\dot{\epsilon}_p^{\,*} := \frac{\dot{\epsilon}_p}{\dot{\epsilon}_{p0}}, \tag{3.2}
\]
where ϵ̇_{p0} is a reference strain rate. The model also makes use of the normalized temperature
\[
T^* := \frac{T - T_0}{T_m - T_0}, \tag{3.3}
\]
where T₀ is a reference temperature and T_m is the melting temperature. The model parameters are: A, the yield stress; B, the strain-hardening modulus; n, the strain-hardening exponent; C, the strain-rate strengthening coefficient; and m, the thermal-softening exponent. We regard the set X ≡ (A, B, n, C, m) of Johnson-Cook parameters as the main source of uncertainty in the analysis. In practice, the material parameters are derived from specific data sources, e.g., experiments or sub-grid simulations. We assume that the data are sufficient to determine confidence intervals for each parameter. In our calculations, we specifically use the AZ31B Mg alloy characterization of Hasenpouth [10], which, conveniently, includes the lower and upper bounds of the 95% confidence intervals for each parameter, Table 1. The remaining values of the material parameters used in the calculations are tabulated in Table 2.
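For reference, Eqs. (3.1)–(3.3) amount to the following evaluation. The default constants below are illustrative mid-range values within the Table 1 bounds, not a recommended calibration; the reference quantities are those of Table 2.

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_dot_p, T,
                        A=225.0, B=168.0, n=0.24, C=0.013, m=1.55,
                        eps_dot_p0=1e-3, T0=298.0, Tm=905.0):
    """Johnson-Cook flow stress in MPa, Eqs. (3.1)-(3.3)."""
    T_star = (T - T0) / (Tm - T0)                            # Eq. (3.3)
    rate_factor = 1.0 + C * np.log(eps_dot_p / eps_dot_p0)   # uses Eq. (3.2)
    return (A + B * eps_p**n) * rate_factor * (1.0 - T_star**m)

# Example: quasi-static response at room temperature and 5% plastic strain.
print(johnson_cook_stress(0.05, 1e-3, 298.0))  # ~307 MPa with these constants
```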

3.2. Forward solver

For a given realization of the system parameters, the maximum backface deflection Y of the plate is computed using the explicit dynamics solver available within the commercial FEA software package LS-Dyna [11]. The initial conditions of the computational model are shown in Fig. 1a. Fig. 1b shows a cross-sectional view of the system before deformation with the variables of interest labeled. As already mentioned, the Johnson-Cook parameters X ≡ (A, B, n, C, m) are assumed to be uncertain and known to within 95% percentile intervals only. We take the impact velocity v and the obliquity angle θ relative to the target face-normal as free, or operating, parameters for purposes of parametric studies. The main design parameter is the thickness t of the plate. All other system parameters are fixed and listed in Tables 1 and 2.

The backface nodes of the target near the edges are fully constrained to prevent displacement in all directions. The projectile is resolved using 864 hex elements, while the number of elements for the plate depends on the simulated plate thickness, e.g., 70,000 hex elements for t = 0.35 cm. The elements are linear with single-point integration and hourglass control. The specified time-step size is 1.0 × 10⁻⁴ µs, with all simulations running for 500 µs before termination. This simulation duration is sufficiently long to allow for the rebound and separation of the projectile from the plate in all the calculations. The calculations are adiabatic, with the initial temperature set to room temperature.
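Because the UQ methodology is non-intrusive, the forward solver only needs to be exposed as a black box. The sketch below is schematic: the template file, the solver invocation and the output parse are hypothetical placeholders, not the actual LS-Dyna/DAKOTA coupling used in the paper.

```python
import re
import subprocess
from pathlib import Path

def run_impact_case(jc_params, workdir="case", template="template.k"):
    """Hypothetical wrapper: substitute Johnson-Cook constants {A, B, n, C, m}
    into a keyword-deck template, run the solver, and parse the maximum
    backface deflection from a plain-text summary written by a post-script."""
    deck = Path(template).read_text().format(**jc_params)
    case = Path(workdir)
    case.mkdir(exist_ok=True)
    (case / "run.k").write_text(deck)
    subprocess.run(["lsdyna", "i=run.k"], cwd=case, check=True)  # placeholder command
    summary = (case / "summary.txt").read_text()                 # placeholder output
    return float(re.search(r"dmax\s*=\s*([-\d.eE+]+)", summary).group(1))
```

A wrapper of this kind is all that CoM-based QMU requires of the solver: repeated black-box evaluations of Y = F(X).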

3.3. QMU analysis

A complete QMU analysis consists of two parts: (i) the computation of the subdiameters D_i; and (ii) the computation of the mean performance.


Fig. 3. Empirical estimation of the mean maximum backface deflection ⟨Y⟩. System parameters v = 200 m/s, θ = 0°, t = 0.35 cm. (a) Convergence with sample size n. (b) Histogram of the Y distribution for sample size n = 2,000.

3.3.1. Computation of mean response

The mean maximum backface deflection ⟨Y⟩ is estimated by means of the empirical mean (2.15) with uniform Monte Carlo sampling using the Latin hypercube scheme in the DAKOTA Version 6.7 software package [13] of Sandia National Laboratories. Fig. 3 illustrates salient aspects of the calculation for the case of impact velocity v = 200 m/s, angle of attack θ = 0°, and plate thickness t = 0.35 cm. Fig. 3a shows the convergence of ⟨Y⟩ with sample size. As may be seen from the figure, ⟨Y⟩ is ostensibly converged for a sample size n = 2,000. With this sample size, we compute ⟨Y⟩ = 0.99541 cm. Fig. 3b shows the corresponding sampled distribution of the maximum backface deflection Y. The margin hit (2.17) incurred as a result of the empirical estimation of the mean maximum backface deflection is shown in Table 3 for different required levels of confidence 1 − ϵ′. As may be seen from the table, for a sample size n = 2,000 the margin hit remains a modest 5% of the system diameter up to a stringent ϵ′ = 10⁻⁵. These values indicate that the empirical estimation of mean performance is not likely to restrict designs significantly, even at high confidence levels.
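A minimal stand-in for this sampling step, using SciPy's Latin hypercube generator in place of DAKOTA (the forward function is a black-box wrapper in the sense of Section 3.2; bounds from Table 1):

```python
import numpy as np
from scipy.stats import qmc

# Johnson-Cook parameter bounds (A, B, n, C, m) from Table 1.
LO = np.array([200.372, 150.682, 0.160, 0.012, 1.523])
HI = np.array([249.970, 186.010, 0.324, 0.014, 1.577])

def estimate_mean(forward, n=2000, seed=0):
    """Latin hypercube estimate of <Y>, Eq. (2.15)."""
    unit = qmc.LatinHypercube(d=5, seed=seed).random(n)
    x = qmc.scale(unit, LO, HI)
    y = np.array([forward(xi) for xi in x])
    return y.mean(), y  # samples also serve, e.g., for the histogram of Fig. 3b
```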

3.3.2. Computation of sub-diameters

Computing sub-diameters entails a constrained optimization over the space of input variables aimed at determining the largest deviation in the performance measure. A flowchart of the subdiameter calculations is shown in Fig. 4. We recall that genetic algorithms (GAs) are global derivative-free optimization methods and, as such, are particularly well-suited to the computation of sub-diameters. Another advantage of GAs is their high degree of concurrency, since the individuals in each generation can be evaluated independently across multiple processors. In calculations, we use the DAKOTA Version 6.7 software package [13] of Sandia National Laboratories. We choose throughout a fixed population size of 32. One seed in the initial population is generated by setting the two repeated optimization variables associated with the sub-diameter at the two limits of that parameter's range, with the remaining optimization variables set at the mid-span of their respective ranges. The remaining individuals in the initial population are selected randomly. We find that this initial setup accelerates the convergence of the GA iterations. Using a crossover rate of 0.8 and a mutation rate of 0.1, the GA calculations start to converge after 20 to 30 generations.

Fig. 5 shows the sub-diameters computed for each of the material parameters of the Johnson-Cook model for the case of impact velocity v = 200 m/s, angle of attack θ = 0°, and plate thickness t = 0.35 cm. The numerical values are also tabulated in Table 4. The total computed diameter for this case is D = 0.15449 cm. An important property of the sub-diameters is that they all have equal units, i.e., they are measured in units of the performance measure. A direct consequence of this property is that the sub-diameters can be compared and rank-ordered, which in turn provides a quantitative metric of the relative contributions of the parameters to the overall uncertainty of the response. From Fig. 5, we deduce this rank-ordering to be A > n > B > C > m, with A and n the parameters that contribute the most to the uncertainty, C and m the least, and B intermediate. It is noteworthy that the parameter A has a smaller percentile variation than n, cf. Table 1, but contributes more to the total uncertainty in the system performance. This example evinces how relative uncertainties cannot, in general, be directly deduced from the variability of the input parameters, but also depend critically on the nonlinear sensitivity of the system response to the parameters.

Fig. 4. Process flow for the calculation of subdiameters.

Fig. 5. Sub-diameters of random parameters in the Johnson-Cook model. System parameters v = 200 m/s, θ = 0°, t = 0.35 cm. The corresponding total diameter is D = 0.15449 cm.
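A compact stand-in for the sub-diameter optimization, using differential evolution (a derivative-free global method, like the GA used in the paper) from SciPy rather than DAKOTA's GA; names and settings are ours:

```python
import numpy as np
from scipy.optimize import differential_evolution

def subdiameter(forward, bounds, i, **de_opts):
    """Sub-diameter D_i of Eq. (2.8): maximize |F(x_hat, x_i) - F(x_hat, x_i')|
    over x_hat in the remaining intervals and x_i, x_i' in E_i."""
    others = [b for j, b in enumerate(bounds) if j != i]

    def neg_gap(z):
        x_hat, xi, xi_prime = z[:-2], z[-2], z[-1]
        x = np.insert(x_hat, i, xi)                # (x_hat, x_i)
        x_prime = np.insert(x_hat, i, xi_prime)    # (x_hat, x_i')
        return -abs(forward(x) - forward(x_prime))

    result = differential_evolution(neg_gap, others + [bounds[i], bounds[i]],
                                    seed=0, **de_opts)
    return -result.fun

# Total diameter, Eq. (2.7): D = np.sqrt(sum(D_i**2 for each parameter)).
```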


Table 4
Sub-diameters of Johnson-Cook random parameters. System parameters v = 200 m/s, θ = 0°, t = 0.35 cm.

Johnson-Cook parameter   A         B         n         C         m
Sub-diameter (cm)        0.11782   0.05039   0.08347   0.01676   0.01461

3.4. Performance rating and safe design

We are now in a position to rate the performance of a particular plate/projectile system at fixed operating conditions. We recall that Eq. (2.27) gives the smallest threshold deflection Ymin for which the system can be rated. By way of example, we consider again the plate/projectile system analyzed in the foregoing, corresponding to a plate thickness t = 0.35 cm, impact velocity v = 200 m/s and angle of attack θ = 0°. The rating plate-deflection thresholds for this system, using the reported values of ⟨Y⟩, D and α and choosing ϵ′ = 10⁻³ and ϵ″ = 0.05, are tabulated in Table 5 as a function of the design probability-of-failure tolerance ϵ. As may be seen from the table, despite the conservative character of McDiarmid's bound the resulting ratings are well within the practical range of engineering application with a high degree of confidence.

Suppose now that the plate needs to be designed so as to meet a specified threshold requirement Y < Yc. Specifically, we seek to determine the minimum thickness tmin of the plate that guarantees a rating Ymin < Yc with confidence CF. In this scenario, computing the empirical mean performance ⟨Y⟩, margin hit α and system diameter D as functions of t and inserting the resulting values into (2.27) gives the design condition (2.29). The dependence of ⟨Y⟩ and D on t for an impact velocity v = 200 m/s and angle of attack θ = 0° is tabulated in Table 6 and shown in Fig. 6. As expected, the mean maximum backface deflection ⟨Y⟩ decreases with increasing plate thickness, Fig. 6a. Less intuitive is the finding that the diameter D, which measures the uncertainty in the response, also decreases with plate thickness, Fig. 6b. Fig. 7 finally shows the minimum thickness tmin required to achieve a range of design thresholds Yc and confidence factors. We see from the figure that, as expected, tmin increases with decreasing threshold Yc and increasing confidence factor. Again we note that, despite the conservative character of McDiarmid's inequality, the range of plate thicknesses identified by the QMU analysis is well within the practical realm of engineering application at high levels of confidence.

Table 5
Rating threshold of plate deflection as a function of design probability-of-failure tolerance.

Design tolerance ϵ           0.052     0.064     0.076     0.088     0.100
Rating threshold Ymin (cm)   1.28684   1.22801   1.21010   1.19857   1.18987

Table 6
System performance vs. plate thickness.

Plate thickness (cm)    0.35      0.40      0.45      0.50      0.55
Estimated mean (cm)     0.99541   0.86918   0.76153   0.67432   0.59596
Total diameter (cm)     0.15449   0.11996   0.10278   0.08644   0.06821

Fig. 6. Dependence of system performance on the plate thickness. (a) Estimated mean maximum backface deflection ⟨Y⟩. (b) Total system diameter D.

Fig. 7. Minimum plate thickness required to meet a range of design thresholds and confidence factor specifications.
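As a consistency check, the ϵ = 0.100 entry of Table 5 follows from the reported ⟨Y⟩, D and α/D values via Eqs. (2.25), (2.27) and (2.28); `rating_threshold` here is the sketch from Section 2.2.

```python
D = 0.15449              # total diameter at t = 0.35 cm, Tables 4 and 6
alpha = 0.0415565 * D    # margin hit: (alpha/D from Table 3 at eps' = 1e-3) * D
y_min = rating_threshold(0.99541, alpha, D, eps=0.100, eps_p=1e-3, eps_pp=0.05)
print(round(y_min, 4))   # ~1.1899 cm, matching Table 5 to rounding
```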

3.5. Parametric studies

The preceding QMU analysis can be augmented by a parametric study on the parameters describing the operating conditions of the system, i.e., parameters that are variable but deterministic. Such parametric studies can provide valuable insight into response regimes and into how uncertainty is driven under varying operating conditions. Fig. 8 shows the variation of the sub-diameters with impact velocity over the range of 100 m/s to 200 m/s and with obliquity angle over the range of 0° to 60°. Fig. 9 shows the corresponding variation of the mean maximum backface deflection. We see from Fig. 8 that, for all operating conditions under consideration, the Johnson-Cook parameters maintain their sub-diameter ordering, A > n > B > C > m. The persistence of this ordering suggests a single response regime under all operating conditions. We also see from Fig. 8 that the uncertainty in the response increases with impact velocity and decreases with obliquity. In addition, Fig. 9 shows that, simultaneously, the maximum backface deflection increases with increasing impact velocity and decreases with increasing obliquity. Thus, it follows from (2.27) that the largest velocity of 200 m/s combined with normal impact is the worst-case scenario for purposes of design over the range of operating conditions considered.

Fig. 8. Parametric dependence of Johnson-Cook sub-diameters. (a) Effect of impact velocity. (b) Effect of impact obliquity.

Fig. 9. Parametric dependence of mean maximum backface deflection. (a) Effect of impact velocity. (b) Effect of impact obliquity.


4. Summary and concluding remarks

We have assessed a method of quantification of margins and uncertainties (QMU) [1–4] in applications where the main source of uncertainty is an imperfect knowledge or characterization of the material behavior. The objective of QMU is to determine the design margin required to meet design specifications with a prescribed confidence factor. We quantify uncertainties through rigorous probability bounds computed by exercising an existing deterministic code in order to sample the mean response and identify worst-case combinations of parameters. We specifically use McDiarmid's inequality [5], which belongs to a general class of probability bounds known as concentration-of-measure (CoM) inequalities [6]. The resulting methodology is non-intrusive and can be wrapped around existing solvers. The use of rigorous probability bounds ensures that the resulting designs are conservative to within a desired level of confidence. We have assessed the QMU framework by means of an application concerned with sub-ballistic impact of AZ31B Mg alloy plates. For purposes of this assessment, we assume the design specification to be a maximum allowable backface deflection of the plate. As a simple scenario, we additionally assume that, under the conditions of interest, the plate is well-characterized by the Johnson-Cook model [9], but the parameters of the model are uncertain. Thus, we assume that we have access to an experimental data set of yield strength values measured over a sample of effective plastic strains, strain rates and temperatures and that, in order to cover a certain percentile of the experimental data, the Johnson-Cook parameters must be allowed to vary over a certain range. We specifically use the data set of Hasenpouth [10], who conveniently provides parameter ranges ensuring coverage of a certain percentile of the data. In calculations, we use the commercial finite-element package LS-Dyna and DAKOTA Version 6.7. The assessment demonstrates the feasibility of the approach and shows how it results in high-confidence designs that are well within the practical range of engineering application.

Several features of the QMU analysis developed in the foregoing are noteworthy. We begin by noting that Eq. (2.13) provides a clear and unambiguous quantitative measure of uncertainty, U = D, in the system response. In particular, the level of uncertainty U sets the scale for the margin required to achieve a desired level of confidence in the design. We also see from (2.14) that, as the failure tolerance ϵ decreases, the required minimum confidence factor increases, indicating that greater margins or smaller uncertainties are needed in order to achieve an admissible design. From (2.13) and (2.7), it follows that the subdiameters D_i (2.8) measure the uncertainty in the response of the system due to each random input variable, i.e., the contribution of each variable to the overall uncertainty budget. Remarkably, only the ranges of the input variables, and not their detailed probability distribution functions, need to be known for the computation of the subdiameters and, by extension, for uncertainty quantification. In particular, variables with large subdiameters contribute the most to the system uncertainty and may be targeted for higher-fidelity modeling. We also emphasize that, for nonlinear problems, considering finite, possibly large, variations in the input variables is essential to rigorously bounding uncertainties. By comparison, methods that rely on linearized sensitivity analysis have the potential for underestimating uncertainties, especially in systems whose response is not differentiable due to yielding, contact or other factors. Conveniently, the calculation of the sub-diameters does not require differentiation of the response function, which constitutes an additional advantage over linearized sensitivity analysis. We also note that the effect of estimating the mean response of the system is a margin hit, with no effect on uncertainty. The extent of the margin hit can be controlled through the size of the sample used to compute the empirical mean response (2.15).

We recall that Eq. (2.27) gives the smallest performance measure Ymin that the system can be rated for. We note that Ymin depends on the empirical mean performance ⟨Y⟩, the system diameter D, the margin hit α due to the empirical estimation of the mean performance and the desired confidence CF in the rating. Specifically, the larger the mean performance, performance uncertainty, margin hit and confidence factor, the larger the performance measure that the system can be rated for. In particular, a high level of uncertainty and poor sampling have a detrimental effect on the system rating. Thus, it follows that a system cannot be rated conservatively based on deterministic calculations of mean performance alone. Instead, conservative rating also requires a careful assessment of performance and sampling uncertainties.

The present approach is predicated on probability inequalities as a means of bounding uncertainties. Evidently, the tighter the bound the better the design. However, increasing tightness comes at increasing computational expense, which sets forth a trade-off between economy of design and computability. Simple probability inequalities, such as McDiarmid's [5], supply a working compromise between tightness and computational complexity. However, it is both interesting and useful to investigate the tightness of the bounds and the attendant conservativeness of the designs. To this end, Fig. 10 shows the estimates of the probability of failure for various failure thresholds using Monte Carlo sampling and the McDiarmid inequality for the projectile/plate system with impact velocity v = 200 m/s, angle of attack θ = 0°, and plate thickness t = 0.35 cm. As expected, the McDiarmid bound lies uniformly above the Monte Carlo estimate, which illustrates the conservative character of the bound and, by extension, of the corresponding designs. A large number of probability inequalities are available that improve on McDiarmid's, albeit at the expense of added complexity (cf., e.g., [14]). Evidently, such inequalities could be taken as a basis for tighter UQ schemes, but such extensions and enhancements are beyond the scope of this work.
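The comparison of Fig. 10 can be reproduced from any set of performance samples. A sketch, with names of our own choosing (in the paper, the samples come from the n = 2,000 LS-Dyna evaluations):

```python
import numpy as np

def pof_curves(samples, diameter, thresholds):
    """Empirical (MC) probability of failure vs. the McDiarmid bound (2.9)."""
    y = np.asarray(samples)
    mc = np.array([(y >= yc).mean() for yc in thresholds])
    com = np.array([np.exp(-2.0 * max(yc - y.mean(), 0.0)**2 / diameter**2)
                    for yc in thresholds])
    return mc, com  # the CoM curve bounds the MC curve from above
```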

Fig. 10. Comparison of the probability of failure computed from direct Monte Carlo (MC) sampling and bounded by means of McDiarmid's inequality [5] for the projectile/plate system with impact velocity v = 200 m/s, angle of attack θ = 0°, and plate thickness t = 0.35 cm.

In sum, we have shown that CoM inequalities, and the demonstrated framework that implements them, provide a simple but powerful analysis tool for use in connection with complex nonlinear problems. While the UQ methods viable for nonlinear problems remain numerically expensive, the present framework provides useful analysis without requiring high-order accuracy of the characterized statistical sample space. Especially useful are CoM diameters and subdiameters, which provide easily understood descriptions of maximum variability in the units of the performance measure. As such, they readily inform intuition regarding sources of uncertainty and allow for the direct comparison of model forms of interest. For our specific demonstration of an elastic impactor striking a magnesium alloy plate, the subdiameters of the Johnson-Cook constitutive parameters are consistently ordered A > n > B > C > m. This ordering clearly identifies the specific parameters where improvements are best targeted. Such a clear set of relationships also characterizes the constitutive regime of interest, especially when compared against other boundary and/or initial conditions. For example, were a new family of simulations introduced using the same constitutive model, any re-ordering of the input parameter set might highlight major changes in the attendant material states. Subsequent to such a finding, strategies for how best to improve the model would need to be devised or, if the design failed, a new set of parameters might need to be developed for the alternative regime.

In addition to the characteristic ordering of the subdiameters, significant information is to be found in the values of the calculated diameters and subdiameters. These diameters express outcomes that correlate directly to the performance metric of interest. Here we emphasize that the clarity and ease of discussion provided by the ability to directly relate sensitivity metrics to outcomes of interest is of great value to those trying to communicate the specific conclusions of any given study. In particular, we note that the total diameter is directly comparable to that of any other parameterized model under similar conditions. Thus, if the system fails design and the parameter bounds cannot be tightened, the only remaining path forward might be to characterize a new model form. In that case, the total diameter supplies a convenient metric against which the new model could be directly compared.

We note in closing that the use of CoM inequalities provides conservative statistical estimates through both the use of intuitive intermediate metrics and a minimal number of assumptions with regard to parameter variability. These characteristics, when paired with the reduced sampling requirements of the response function, combine to form a tool that is general enough for a broad array of problems, rigorous enough to be trusted, and simple enough to explain and understand. As such, we trust that these strategies for constitutive modeling will significantly aid those seeking to provide clarity about the predictive capacity of simulations encompassing complex material behavior.

Acknowledgments

We gratefully acknowledge the support of the U.S. Army Research Laboratory (ARL) through the Materials in Extreme Dynamic Environments (MEDE) Collaborative Research Alliance (CRA) under Award Number W911NF-11-R-0001. We are grateful to Jaroslaw Knap, Richard Becker and Jeffrey Lloyd of the ARL at Aberdeen Proving Ground, MD, for useful input and illuminating discussions.

References

[1] Sharp DH, Wood-Schultz MM. QMU and nuclear weapons certification: what's under the hood? Los Alamos Sci 2003;28:47–53.
[2] Eardley D (Study Leader). Quantification of margins and uncertainties (QMU). Tech. Rep. McLean, Virginia: The MITRE Corporation; 2005.
[3] Goodwin BT, Juzaitis R. National certification methodology for the nuclear weapons stockpile. Tech. Rep. Livermore, CA: Lawrence Livermore National Laboratory; 2006.
[4] Pilch M, Trucano TG, Helton JC. Ideas underlying quantification of margins and uncertainties (QMU): a white paper. Unlimited Release SAND2006-5001. Albuquerque, NM: Sandia National Laboratories; 2006.
[5] McDiarmid C. On the method of bounded differences. Surveys in Combinatorics 1989;141:148–88.
[6] Ledoux M. The concentration of measure phenomenon. Mathematical surveys and monographs, vol. 89. American Mathematical Society; 2001.
[7] Acharjee S, Zabaras N. A non-intrusive stochastic Galerkin approach for modeling uncertainty propagation in deformation processes. Comput Struct 2007;85(5–6):244–54.
[8] Lucas LJ, Owhadi H, Ortiz M. Rigorous verification, validation, uncertainty quantification and certification through concentration-of-measure inequalities. Comput Methods Appl Mech Eng 2008;197(51–52):4591–609.
[9] Johnson GR, Cook WH. A constitutive model and data for metals subjected to large strains, high strain rates, and high temperatures. Proc 7th Int Symp Ballistics. 1983. p. 541–7.
[10] Hasenpouth D. Tensile high strain rate behavior of AZ31B magnesium alloy sheet. MASc thesis. University of Waterloo; 2010.
[11] Hallquist JO, et al. LS-DYNA keyword user's manual, version 970. Livermore Software Technology Corporation; 2007.
[12] Mukasey MB, Sedgwick JL, Hagy D. Ballistic resistance of body armor, NIJ Standard-0101.06. US Department of Justice; 2008.
[13] Adams BM, Bauman L, Bohnhoff W, Dalbey K, Ebeida M, Eddy J, et al. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 6.7 user's manual. Tech. Rep. SAND2014-4253. Sandia National Laboratories; 2014.
[14] Owhadi H, Scovel C, Sullivan TJ, McKerns M, Ortiz M. Optimal uncertainty quantification. SIAM Rev 2013;55(2):271–345.