Computers and Mathematics with Applications
Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/camwa
A new learning function for Kriging and its applications to solve reliability problems in engineering

Zhaoyan Lv, Zhenzhou Lu*, Pan Wang
School of Aeronautics, Northwestern Polytechnical University, Xi'an, 710072, PR China
Article info
Article history: Received 3 July 2013; Received in revised form 8 August 2014; Accepted 12 July 2015; Available online xxxx.
Keywords: Reliability; Kriging; Learning function; Information entropy; Line Sampling
Abstract
In structural reliability, an important challenge is to reduce the number of calls to the performance function, especially when it is a finite element model in an engineering problem, which usually involves complex computer codes and requires time-consuming computations. To address this, a metamodel, Kriging, is introduced as a surrogate for the original model. Kriging presents interesting characteristics such as exact interpolation and a local index of uncertainty on the prediction, which can be used in an active learning method. In this paper, a new learning function based on information entropy is proposed. The new learning criterion helps select the next point effectively and add it to the design of experiments to update the metamodel. It is then applied in a new method constructed in this paper that combines Kriging and Line Sampling to estimate the reliability of structures more efficiently. Finally, several examples involving non-linearity, high dimensionality and engineering problems are performed to demonstrate the efficiency of the methods with the proposed learning function. © 2015 Elsevier Ltd. All rights reserved.
1. Introduction

In structural reliability analysis, a fundamental problem is to compute the integral of the joint probability density function (PDF) of the random variables over the failure domain, i.e. to solve the multifold probability integral defined as [1]
Pf = ∫_{G(x)≤0} f(x) dx    (1)
where Pf is the failure probability of the structure, x = [x1, ..., xn]^T is a vector of random input variables such as loads, environmental factors, material properties, structural geometry and so on, G(x) is the performance function characterizing the response of the structure, and G(x) ≤ 0 means the structure fails at the point x. The border between the failure and safe domains is the limit state G(x) = 0, and f(x) denotes the joint PDF of x. It is difficult to evaluate Eq. (1) directly for general structures, because identifying G(x) and numerically performing the multi-dimensional integration of f(x) over the failure domain are both time-consuming. Therefore, various methods have been developed to solve the integral, among which Monte Carlo Simulation (MCS) [2,3] is one of the most widely used for handling complex problems. However, it is computationally demanding for engineering problems. First- and second-order reliability methods (FORM, SORM) [4,5] approximate the failure probability based on knowledge of the most probable point (MPP, also known as the design point), and require a significantly smaller number of performance
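To make Eq. (1) concrete, the crude Monte Carlo estimator mentioned above can be sketched as follows. The performance function used here, G(x) = 3 − x1 − x2 with independent standard normal inputs, is purely illustrative (it is not from the paper); its exact failure probability is Φ(−3/√2) ≈ 0.017.

```python
import numpy as np

def mc_failure_probability(g, sampler, n=100_000, seed=0):
    """Crude Monte Carlo estimate of Eq. (1): Pf = P[G(x) <= 0]."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)                # n draws from the joint PDF f(x)
    return float(np.mean(g(x) <= 0.0)) # fraction of samples in the failure domain

# Hypothetical linear performance function: G(x) = 3 - x1 - x2
g = lambda x: 3.0 - x[:, 0] - x[:, 1]
sampler = lambda rng, n: rng.standard_normal((n, 2))
pf = mc_failure_probability(g, sampler)
```

As the text notes, the estimator needs very many samples when Pf is small, which is what motivates the variance reduction techniques discussed next.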
* Corresponding author. Tel.: +86 29 88460480; fax: +86 29 88460480.
E-mail addresses: [email protected] (Z. Lv), [email protected] (Z. Lu).
http://dx.doi.org/10.1016/j.camwa.2015.07.004
0898-1221/© 2015 Elsevier Ltd. All rights reserved.
function evaluations. In practice, however, they share the same limitation: the MPP is difficult to find. Novel variance reduction techniques, such as Subset Simulation (SS) [6,7], Line Sampling (LS) [8,9] and Importance Sampling (IS) [10,11], have therefore been proposed to address the computational problems. The basic idea of IS is to generate points around the MPP in the vicinity of the limit state surface. SS computes the failure probability as a product of conditional probabilities, which can be easily estimated by Markov Chain Monte Carlo (MCMC) simulation [12,13]. LS computes the failure probability based on the optimal important direction, which runs from the origin of the coordinate system to the MPP in the standard normal space. These techniques prove to be robust and require far fewer samples than MCS, and LS is especially good for high-dimension, low-failure-probability problems. It is, however, still not practical to apply these methods to reliability analyses of engineering problems involving finite element models. As a result, we need a model to substitute for the initial expensive model as the performance function. Such a model is called a metamodel, and various kinds exist: Quadratic Response Surfaces [14], Polynomial Chaos [15], Kriging [16,17], Neural Networks [18], Support Vector Machines [19,20], etc. In this paper, Kriging is employed. Kriging, originated by Krige [21] and further developed by Matheron [22], is an exact interpolation method and a form of generalized linear regression for the formulation of an optimal estimator in a minimum mean square error sense. A Kriging metamodel is constructed from a design of experiments (DoE) and can then provide not only the predicted response at any point, but also a local uncertainty measure on the response, called the Kriging variance: the higher the variance, the less certain the prediction. So if a sample point with higher variance is added to the DoE, the Kriging model improves more.
This process is called active learning, meaning that the Kriging model is updated by adding new points intelligently. Accordingly, active learning functions, which determine whether points are selected for addition, have been proposed in recent years. The Expected Feasibility Function (EFF), the first learning function, proposed by Bichon in an efficient global reliability analysis method [16], aims at adding new points with high Kriging variance in the vicinity of the limit state surface. A learning function U was later proposed and first applied in the Active learning reliability method combining Kriging and Monte Carlo Simulation (AK-MCS) [23], and subsequently in the method called AK-IS, for active learning and Kriging-based IS [24]. Dumas et al. [25] proposed a learning criterion named PBC based on the distribution of the prediction. This paper proposes a new learning function H that originates from information entropy theory; Kriging with this learning function can be applied in many reliability methods such as MCS, IS, LS and so on. As LS is a very efficient reliability method, even for high-dimension, low-failure-probability problems, a new active learning method combining Kriging and LS, called AK-LS, is constructed to offer more choices for computing structural reliability. The paper is organized as follows. Section 2 recalls the basic theory of the Kriging method for the limit state function. In Section 3, the new active learning function H is proposed and verified. Section 4 introduces a method combining the LS method and Kriging with the new active learning function. Several numerical and engineering examples, including finite-element-based reliability problems, are computed in Section 5 to demonstrate the efficiency of the proposed method. The paper ends with conclusions in Section 6.

2. Basic theory of the Kriging method

The Kriging metamodel is an interpolation technique based on statistical theory, consisting of a parametric linear regression model and a nonparametric stochastic process. It needs a design of experiments to define its stochastic parameters, after which the response can be predicted at any unknown point. Given an initial design of experiments (initial DoE) X = [x1, x2, ..., xN0], with xi ∈ R^n (i = 1, 2, ..., N0) the ith experiment, and G = [G(x1), G(x2), ..., G(xN0)], with G(xi) ∈ R the corresponding responses to X, the approximate relationship between any experiment x and the response G(x) can be denoted as
Ĝ(x) = F(β, x) + z(x) = f^T(x)β + z(x)    (2)
where β = [β1, ..., βp]^T is a regression coefficient vector. Similar to the polynomial built by the response surface method, f(x) = [f1(x), f2(x), ..., fp(x)]^T provides a global approximation over the design space. In ordinary Kriging, F(β, x) is a scalar and is always taken as F(β, x) = β. So the estimated Ĝ(x) can be simplified as Ĝ(x) = F(β, x) + z(x) = β + z(x).
Here z(x) is a stationary Gaussian process [26] whose statistical characteristics can be denoted as

E(z(x)) = 0    (3)

Var(z(x)) = σz²    (4)

Cov[z(xi), z(xj)] = σz² R(xi, xj)    (5)
where σz² is the process variance, and xi, xj are arbitrary points from the whole sample set X. R(xi, xj) is the correlation function of xi and xj with a correlation parameter vector θ. Several models are available to define R(xi, xj); the widely used Gaussian correlation model, which performs best in application, is selected in this paper and can be formulated as

R(xi, xj) = exp(−Σ_{k=1}^{n} θk |xi^k − xj^k|^δ),    θk ≥ 0, 0 ≤ δ ≤ 2    (6)
Fig. 1. Kriging predictions.
where n is the dimension of the sample, and xi^k, xj^k and θk are the kth components of xi, xj and θ respectively. δ controls the smoothness of the metamodel and usually takes the value of 2. In addition, θ is often replaced by a scalar θ. Eq. (6) can therefore be written as

R(xi, xj) = exp(−θ Σ_{k=1}^{n} (xi^k − xj^k)²),    θ ≥ 0.    (7)
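The correlation model of Eq. (7) can be sketched as follows; the point set and the value of θ are illustrative only.

```python
import numpy as np

def gaussian_correlation(X, theta):
    """Correlation matrix of Eq. (7): R_ij = exp(-theta * sum_k (x_i^k - x_j^k)^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # pairwise squared distances
    return np.exp(-theta * d2)

# three illustrative 2-D experiments
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
R = gaussian_correlation(X, theta=0.5)
```

By construction R is symmetric with unit diagonal, so each point is perfectly correlated with itself, which is what makes Kriging an exact interpolator.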
Define R = [R(xi, xj)]_{N0×N0} and let F be an N0 × 1 unit vector; then β and σz² are estimated as

β̂ = (F^T R⁻¹ F)⁻¹ F^T R⁻¹ G    (8)

σ̂z² = (1/N0) (G − β̂F)^T R⁻¹ (G − β̂F).    (9)
The correlation parameter θ can be obtained through maximum likelihood estimation:

θ = arg min_θ { N0 ln σ̂z² + ln |R| }.    (10)
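The fitting step, Eqs. (8)–(10), can be sketched as follows for ordinary Kriging. The grid search over θ, the candidate grid and the diagonal jitter are choices of this sketch rather than prescriptions of the paper (the DACE toolbox cited later uses a proper pattern-search optimizer).

```python
import numpy as np

def fit_ordinary_kriging(X, G, thetas=np.logspace(-2, 2, 41)):
    """Estimate beta (Eq. (8)), sigma_z^2 (Eq. (9)) and theta (Eq. (10))
    by minimising N0*ln(sigma_z^2) + ln|R| over candidate theta values."""
    n0 = len(X)
    F = np.ones(n0)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    best = None
    for theta in thetas:
        R = np.exp(-theta * d2) + 1e-10 * np.eye(n0)   # Eq. (7) plus a small jitter
        Ri = np.linalg.inv(R)
        beta = (F @ Ri @ G) / (F @ Ri @ F)             # Eq. (8)
        res = G - beta
        s2 = (res @ Ri @ res) / n0                     # Eq. (9)
        _, logdetR = np.linalg.slogdet(R)
        obj = n0 * np.log(max(s2, 1e-300)) + logdetR   # objective of Eq. (10)
        if best is None or obj < best[0]:
            best = (obj, theta, beta, s2)
    return best[1], best[2], best[3]

# illustrative 1-D DoE and responses (not from the paper)
theta_hat, beta_hat, s2_hat = fit_ordinary_kriging(
    np.array([[0.0], [1.0], [2.0], [3.0]]), np.array([0.0, 1.0, 0.0, -1.0]))
```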
For any θ , there is an interpolation model corresponding with it, so the best Kriging model can be gotten after optimizing the θ . ˆ (x) is shown to be a Gaussian Then at an unknown point x, the Best Linear Unbiased Predictor (BLUP) of the response G ˆ (x) ∼ N µ ˆ (x), σ ˆ (x) where random variate G G G
µGˆ (x) = β + r (x)R −1 (G − β F ) σ (x) = σ (x)(1 + u (x)(F R 2 ˆ G
2 z
T
T
−1
(11) F)
u (x) − r (x)R
−1 T
T
−1
r (x))
(12)
where r(x) = [R(x, x1), R(x, x2), ..., R(x, xN0)]^T and u(x) = F^T R⁻¹ r(x) − 1. µĜ(x) is usually taken as the estimate of Ĝ(x) at point x. The calculation of µĜ(x) and σĜ(x) above can be implemented with the MATLAB toolbox DACE [27,28]. It is easy to prove that µĜ(xi) = G(xi) and σĜ²(xi) = 0 for i = 1, ..., N0. That means Kriging is an exact interpolation method (see Fig. 1 as an example). The variance σĜ²(x) is defined as the minimum of the mean squared error between Ĝ(x) and G(x). The variances of the points in the initial DoE are zero, but the variances of other points are generally not zero, as shown in Fig. 1. When σĜ²(x) is very large, the prediction there is not exact. Therefore, σĜ²(x) is important in the unexplored areas and provides a good index to improve the initial DoE.

3. A new learning function H based on information entropy

The learning function is usually defined to determine the best next point, one that carries important uncertainty or is close to G(x) = 0, to update the Kriging metamodel. The widely used learning function EFF [16] is defined as
EFF(x) = ∫_{Ḡ(x)−ε}^{Ḡ(x)+ε} [ε − |Ḡ(x) − Ĝ(x)|] f(Ĝ(x)) dĜ(x)    (13)
where Ḡ(x) = G(x) is the constraint equation and f(Ĝ(x)) is the PDF of Ĝ(x). ε is proportional to σĜ(x) and is usually taken as 2σĜ(x). Let G⁻(x) = Ḡ(x) − ε and G⁺(x) = Ḡ(x) + ε; then Eq. (13) can be written as

EFF(x) = (µĜ(x) − Ḡ(x)) [2Φ((Ḡ(x) − µĜ(x))/σĜ(x)) − Φ((G⁻(x) − µĜ(x))/σĜ(x)) − Φ((G⁺(x) − µĜ(x))/σĜ(x))]
    − σĜ(x) [2φ((Ḡ(x) − µĜ(x))/σĜ(x)) − φ((G⁻(x) − µĜ(x))/σĜ(x)) − φ((G⁺(x) − µĜ(x))/σĜ(x))]
    + ε [Φ((G⁺(x) − µĜ(x))/σĜ(x)) − Φ((G⁻(x) − µĜ(x))/σĜ(x))]    (14)
where Φ is the standard normal cumulative distribution function (CDF) and φ the standard normal PDF. In reliability analysis, Ḡ(x) always takes the value of 0. Therefore, EFF indicates how well the prediction at any point x is expected to satisfy the equality constraint G(x) = 0 over the region [G⁻(x), G⁺(x)]. From Eq. (13), it is not difficult to see that the value of EFF is larger when Ĝ(x) is in the vicinity of the constraint function or when the prediction at x carries more uncertainty. If such points are added to the initial DoE, the metamodel will improve considerably and become more accurate. Recently, a new learning function U was proposed by B. Echard [23]. Compared with EFF, it gives more weight to points in the close neighborhood of the predicted limit state rather than to farther points with high Kriging variance. It is constructed as

U(x) = |µĜ(x)| / σĜ(x).    (15)
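The learning function U of Eq. (15) and its stopping rule (discussed next) can be sketched as follows; the candidate prediction values are illustrative only.

```python
import numpy as np

def learning_U(mu, sigma):
    """Learning function U of Eq. (15): distance of the predicted mean from
    G = 0, measured in units of the Kriging standard deviation."""
    return np.abs(mu) / np.maximum(sigma, 1e-12)

# illustrative Kriging predictions at three candidate points
mu = np.array([0.1, -2.0, 3.0])
sigma = np.array([0.5, 0.1, 1.0])
U = learning_U(mu, sigma)
best = int(np.argmin(U))            # the best next point has the minimal U
converged = bool(U.min() >= 2.0)    # stopping condition U_min >= 2
```

Here the first point is selected: it is near the predicted limit state (small |mu|) and still uncertain (large sigma), which is exactly what U rewards.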
The stopping conditions of the two learning methods are EFFmax ≤ 0.001 and Umin ≥ 2 respectively, where EFFmax denotes the maximal value of all EFF and Umin the minimal value of all U. According to B. Echard, the learning function U is slower than EFF in roughly estimating the probability of failure but faster in converging towards an accurate probability of failure at the stopping condition. To explore more learning functions, a new learning function named H, based on information entropy, is proposed in this paper. To verify the feasibility and superiority of the proposed method, comparisons are made among EFF, U and H in Section 5.

3.1. Learning function H

In Section 2, it was shown that for any point x we can obtain the predictions µĜ(x) and σĜ²(x), as illustrated in Fig. 1. In calculation, µĜ(x) is taken as the estimated response Ĝ(x); actually Ĝ(x) follows a Gaussian distribution, Ĝ(x) ~ N(µĜ(x), σĜ²(x)), whose PDF is shown in Fig. 2. According to information entropy theory, introduced by Shannon [29] to measure uncertainty, the information entropy of Ĝ(x) can be described as

H(Ĝ(x)) = −∫ f(Ĝ(x)) ln f(Ĝ(x)) dĜ(x).    (16)
The information entropy H(Ĝ(x)) describes the degree of disorder of Ĝ(x) and can be used to quantitatively judge the uncertainty of Ĝ(x): the prediction is more certain when the information entropy is lower. In order to select the important points for the limit state function, we define the learning function as

H(Ĝ(x)) = −∫_{G⁻(x)}^{G⁺(x)} f(Ĝ(x)) ln f(Ĝ(x)) dĜ(x)    (17)

where G⁺(x) is defined as +2σĜ(x) and G⁻(x) as −2σĜ(x) to guarantee the accuracy near the limit state function (see Fig. 2). Eq. (17) can then be derived as

H(x) = [ln(√(2π) σĜ(x)) + 1/2] [Φ((2σĜ(x) − µĜ(x))/σĜ(x)) − Φ((−2σĜ(x) − µĜ(x))/σĜ(x))]
    − ((2σĜ(x) − µĜ(x))/(2σĜ(x))) φ((2σĜ(x) − µĜ(x))/σĜ(x))
    − ((2σĜ(x) + µĜ(x))/(2σĜ(x))) φ((−2σĜ(x) − µĜ(x))/σĜ(x)).    (18)
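As a quick numerical check (not part of the paper), the closed form of Eq. (18) can be compared against direct quadrature of Eq. (17); the two should agree to quadrature accuracy.

```python
import math
from scipy.integrate import quad
from scipy.stats import norm

def H_closed(mu, sigma):
    """Closed form of Eq. (18) for given Kriging mean and standard deviation."""
    b = (2.0 * sigma - mu) / sigma
    a = (-2.0 * sigma - mu) / sigma
    return ((math.log(math.sqrt(2.0 * math.pi) * sigma) + 0.5)
            * (norm.cdf(b) - norm.cdf(a))
            - (2.0 * sigma - mu) / (2.0 * sigma) * norm.pdf(b)
            - (2.0 * sigma + mu) / (2.0 * sigma) * norm.pdf(a))

def H_numeric(mu, sigma):
    """Direct quadrature of Eq. (17): -integral of f*ln(f) over [-2s, +2s]."""
    f = lambda g: norm.pdf(g, loc=mu, scale=sigma)
    val, _ = quad(lambda g: -f(g) * math.log(f(g)), -2.0 * sigma, 2.0 * sigma)
    return val

err = abs(H_closed(0.5, 1.3) - H_numeric(0.5, 1.3))
```

Note also that when |µĜ| is many standard deviations from zero, the interval [−2σĜ, +2σĜ] carries almost no probability mass and H vanishes, so points far from the predicted limit state are never selected.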
ˆ (x). Fig. 2. The PDF of G
Fig. 3. Hmax = 4.4209, Pf = 0.0001.
The learning function H(x) measures the uncertainty of the predicted response Ĝ(x) at point x and helps construct the metamodel in the neighborhood of the limit state function. The detailed derivation of Eq. (18) from Eq. (17) is given in the Appendix.

3.2. Stopping criterion and the feasibility of H

For every point xi, the corresponding H(xi) (i = 1, 2, ..., N) can be obtained by Eq. (18). The larger H is, the more uncertain the predictions will be. If the point with the maximal H (Hmax) is added to the initial DoE, the predictions should improve considerably under the updated Kriging metamodel. Moreover, when H(xi) (i = 1, 2, ..., N) is computed with the updated metamodel, the new Hmax will be smaller. Repeating this procedure until Hmax approaches zero makes the predictions steady and accurate. To demonstrate the convergence of Hmax, the following problems are investigated. Consider an automotive semi-axle problem in engineering: the torque conveyed by the semi-axle is T, the torque strength of its material is r, and the design radius of the semi-axle is d. Take T, r, d as normal variables with distributions T ~ N(11760 N·m, (976 N·m)²), r ~ N(1050 MPa, (40 MPa)²) and d ~ N(42 mm, (0.21 mm)²) respectively. Taking the limit stress into account, the limit state function can be constructed as

G(T, r, d) = r − 16T/(πd³).    (19)
Generate N0 = 3 samples by MCS as the initial DoE and calculate their responses by Eq. (19); the Kriging metamodel can then be constructed. Sample N = 10 000 points, and the predictions at every point can be obtained through the constructed Kriging metamodel. The initial failure probability calculated by MCS and the values of H obtained by Eq. (18) are shown in Fig. 3, together with Hmax. In Fig. 3, i (i = 1, 2, ..., N) stands for the ith point of the N = 10 000 samples. Search for the point with Hmax and calculate its response by Eq. (19). Then add this point and its response to the initial DoE, and the Kriging metamodel can be updated. With the updated metamodel, new values of H and Pf are obtained, as shown in Fig. 4. Repeating the above process gives the results shown in Figs. 5–9. If the process is repeated further from the results of Fig. 9, the results remain the same as in Fig. 9, which means the iteration has converged.
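The loop described above, applied to the semi-axle problem of Eq. (19), can be sketched as follows. The ordinary-Kriging routine uses a fixed θ on standardized inputs instead of the MLE search of Eq. (10), and the population size, iteration cap and jitter are choices of this sketch, not of the paper.

```python
import numpy as np
from scipy.stats import norm

def G(T, r, d):
    """Eq. (19): semi-axle limit state (T in N*m, r in MPa, d in mm)."""
    return r - 16.0 * (T * 1000.0) / (np.pi * d ** 3)  # N*m -> N*mm, result in MPa

def krige(Xt, Gt, Xp, theta=1.0):
    """Minimal ordinary Kriging (Eqs. (8)-(12)) with a fixed theta."""
    n0 = len(Xt)
    d2 = ((Xt[:, None, :] - Xt[None, :, :]) ** 2).sum(axis=2)
    R = np.exp(-theta * d2) + 1e-8 * np.eye(n0)
    F = np.ones(n0)
    Ri = np.linalg.inv(R)
    beta = (F @ Ri @ Gt) / (F @ Ri @ F)                    # Eq. (8)
    res = Gt - beta
    s2 = (res @ Ri @ res) / n0                             # Eq. (9)
    r = np.exp(-theta * ((Xp[:, None, :] - Xt[None, :, :]) ** 2).sum(axis=2))
    mu = beta + r @ Ri @ res                               # Eq. (11)
    u = r @ Ri @ F - 1.0
    quad_term = np.einsum('ij,jk,ik->i', r, Ri, r)
    var = s2 * np.clip(1.0 + u ** 2 / (F @ Ri @ F) - quad_term, 0.0, None)  # Eq. (12)
    return mu, np.sqrt(var)

def learning_H(mu, s):
    """Learning function H of Eq. (18)."""
    s = np.maximum(s, 1e-12)
    b, a = (2.0 * s - mu) / s, (-2.0 * s - mu) / s
    return ((np.log(np.sqrt(2.0 * np.pi) * s) + 0.5) * (norm.cdf(b) - norm.cdf(a))
            - (2.0 * s - mu) / (2.0 * s) * norm.pdf(b)
            - (2.0 * s + mu) / (2.0 * s) * norm.pdf(a))

rng = np.random.default_rng(1)
N = 20_000
pop = np.column_stack([rng.normal(11760.0, 976.0, N),  # T [N*m]
                       rng.normal(1050.0, 40.0, N),    # r [MPa]
                       rng.normal(42.0, 0.21, N)])     # d [mm]
Ustd = (pop - pop.mean(axis=0)) / pop.std(axis=0)  # standardize inputs (a choice
                                                   # of this sketch) so one theta fits

idx = list(rng.choice(N, size=3, replace=False))   # initial DoE, N0 = 3
for _ in range(30):                                # active learning iterations
    Gt = G(pop[idx, 0], pop[idx, 1], pop[idx, 2])  # exact responses of the DoE
    mu, s = krige(Ustd[idx], Gt, Ustd)
    h = learning_H(mu, s)
    h[idx] = -np.inf                               # never re-select DoE points
    if h.max() <= 0.5:                             # stopping criterion H_max <= 0.5
        break
    idx.append(int(h.argmax()))                    # add the point with maximal H

pf = float(np.mean(mu <= 0.0))                     # MCS on the final metamodel
```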
Fig. 4. Hmax = 3.2292, Pf = 0.0017.
Fig. 5. Hmax = 2.6625, Pf = 0.0019.
Fig. 6. Hmax = 0.9577, Pf = 0.0018.
Moreover, if we calculate Pf directly from Eq. (19) rather than the metamodel, the result is Pf = 0.0018. So the precision of Pf estimated by the Kriging metamodel is high enough.
Fig. 7. Hmax = 0.5916, Pf = 0.0018.
Fig. 8. Hmax = 0.1377, Pf = 0.0018.
Fig. 9. Hmax = 0, Pf = 0.0018.
It is easy to see that the predictions of the metamodel are almost exact after three new points have been added to the DoE, as shown in Fig. 6. That is to say, for this problem the stopping criterion can be defined as Hmax ≤ 1. On the other hand, from Fig. 3 to Fig. 9 it is obvious that the values of the learning function become smaller and smaller as new points are continuously added. When the metamodel is accurate enough, Hmax converges to zero.
Table 1
The values of Hmax and Pf with the jth point added.

j      0       1       2       3       4        5        6       7       8       9       10      11
Hmax   127.96  45.21   18.73   1.186   0.7255   0.5155   0.3206  0.4208  0.3976  0.2227  0       0
Pf     0.0256  0.0259  0.0263  0.0265  0.00268  0.00268  0.0267  0.0269  0.0268  0.0268  0.0268  0.0268
To explore the stopping criterion further, another example is considered. Suppose there are four input variables in this numerical example, normally distributed as xp ~ N(10, 3) (p = 1, 2, 3, 4). The non-linear limit state function is

G(x) = x1² + 5x1 + 2x2² + 7x2 + x3² − 8x3 + x4² − 10x4 − 200.    (20)
The initial Kriging metamodel is built as in the previous example, except that N0 = 20. The values of Hmax and Pf after the jth point is added to the DoE are listed in Table 1. From Table 1, the process of adding new points can be ended once the third point (j = 3) is selected, so the stopping condition Hmax ≤ 1 is suitable for this problem. For a more accurate metamodel, the stopping condition can be tightened to Hmax ≤ 0.4. From the results of these two examples, and of many other examples not shown in the paper, we conclude that it is reasonable to define the stopping criterion as Hmax ≤ 1, or a smaller value for higher precision. Hmax ≤ 0.5 is selected in this paper.
4. The proposed AK-LS method

After the Kriging metamodel is constructed, the next step is to apply a reliability method such as MCS, IS or LS. All these methods can be combined with the Kriging method, following processes similar to the AK-MCS method in [23] and the AK-IS method in [24]. MCS is simple and easy to implement, so it is applied in this paper as shown in Section 5. However, for a structure with failure probability Pf, MCS usually needs at least (10²–10⁴)/Pf samples to ensure its accuracy. Moreover, when constructing the Kriging metamodel, the learning function, whether EFF, U or H, must be evaluated at every sample to find the next point to join the DoE. This is time-consuming for low-failure-probability problems. LS is a simulation method developed for efficiently estimating multidimensional, low-failure-probability problems of structural reliability [30]. Consequently, a method named AK-LS, combining Kriging with the learning function H and LS, is constructed in this paper to offer more choices for Kriging-based reliability computation. The efficiency of LS depends on the determination of the important direction, usually denoted by the unit vector eα, which optimally points towards the MPP from the origin of the standard normal coordinate system. Searching for the optimal important direction and guiding the samples to estimate the failure probability are the key steps of LS. In the standard normal space, the direction is defined from the origin to the MPP in the failure domain. Two methods are widely used to find the MPP: FORM and MCMC. As the performance function obtained by the Kriging metamodel is implicit, MCMC is applied in this paper. Suppose that the performance function G(x) and the samples xj (j = 1, 2, ..., N) obtained by MCS are transformed into g(u) and uj (j = 1, 2, ..., N) in the standard normal space.
Then, for a two-dimensional problem, the process of LS can be illustrated as in Fig. 10. Here u⊥j = uj − ⟨eα, uj⟩eα is the component of uj perpendicular to eα. The points uji = ci eα + u⊥j and the corresponding g(uji) are obtained for three assumed coefficients ci (i = 1, 2, 3). The coefficient c̃j can then be calculated by quadratic interpolation through the three points (ci, g(uji)) (i = 1, 2, 3), so that g(ũj) = 0 with ũj = c̃j eα + u⊥j. Therefore, the probability of failure can be estimated as

P̂f = (1/N) Σ_{j=1}^{N} Pj = (1/N) Σ_{j=1}^{N} Φ(−c̃j).    (21)
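The Line Sampling step of Eq. (21) can be sketched as follows. The three trial distances and the root-selection tie-break are assumptions of this sketch; the paper specifies only the quadratic interpolation through three points and the condition g(c̃j) = 0.

```python
import numpy as np
from scipy.stats import norm

def line_sampling_pf(g, e_alpha, samples, cs=(1.0, 2.0, 3.0)):
    """Line Sampling estimate of Eq. (21) in the standard normal space."""
    e = e_alpha / np.linalg.norm(e_alpha)      # unit important direction
    proj = samples @ e
    perp = samples - np.outer(proj, e)         # u_perp_j, orthogonal to e_alpha
    c_tilde = np.empty(len(samples))
    for j, up in enumerate(perp):
        ys = [g(c * e + up) for c in cs]       # g at three points on the line
        coeffs = np.polyfit(cs, ys, 2)         # quadratic interpolation in c
        roots = np.roots(coeffs)               # candidates with g(c_tilde) = 0
        real = roots[np.isreal(roots)].real
        # tie-break (an assumption of this sketch): keep the real root
        # closest to the centre of the trial range
        c_tilde[j] = real[np.argmin(np.abs(real - np.mean(cs)))] if len(real) else np.inf
    return float(np.mean(norm.cdf(-c_tilde)))

# check on a hypothetical linear g(u) = 3 - u1, whose exact Pf is Phi(-3)
rng = np.random.default_rng(0)
pf = line_sampling_pf(lambda u: 3.0 - u[0], np.array([1.0, 0.0]),
                      rng.standard_normal((50, 2)))
```

For a linear limit state every line crosses g = 0 at the same distance, so the estimator is exact with essentially zero variance, which illustrates why LS is so efficient for nearly linear, low-failure-probability problems.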
The process shown in Fig. 11 mainly consists of three stages: first, constructing the Kriging metamodel with the DoE and updating the model with new points selected by the learning function H proposed in Section 3; second, obtaining the MPP by MCMC simulation; last, computing the failure probability with the LS process. In the second and last stages, the responses Ĝ(x) are estimated by the Kriging prediction, and the blue parts in Fig. 11 denote steps related to the prediction. Moreover, the C in Cov(P̂f) = √(Var(P̂f))/P̂f < C is a critical value that can be set according to practice.
Fig. 10. Line Sampling diagram in the standard normal space.
5. Validation examples

5.1. Example 1: One-variable numerical problem

Suppose that x is a uniform variable, x ~ U[−5, 5], and the performance function is defined as

G(x) = x³ − 10.    (22)
To test the accuracy of the proposed learning function H, N0 = 3 samples are generated by MCS as the initial DoE to construct the initial metamodel. Then N = 10⁵ points are sampled by MCS and H is calculated for each sample, so the point with the maximal H can be found. This point is added to the DoE, and the metamodel is updated. The above process is iterated until 3 points have been added. Afterwards, for the variable x in the interval [−5, 5], the curve of the predictor Ĝ(x) can be estimated by the Kriging metamodel. In the same way, curves are obtained by Kriging with EFF and U respectively, as given in Fig. 12, where "Accurate" denotes the exact curve of G(x) computed directly from Eq. (22). Note that the initial DoE is the same for the different learning functions, here and in the following examples. From the results shown in the figure, it is easy to see that the curve from Kriging with the learning function H approximates the accurate curve better than those from the other learning functions.

5.2. Example 2: Dynamic response of a non-linear oscillator

The following example consists of a non-linear undamped single-degree-of-freedom system [23], as shown in Fig. 13. The original variables all follow the normal distribution, while here the distributional parameters of the random variables are changed slightly, as listed in Table 2. The performance function is given as:
G(m, c1, c2, r, F1, t1) = 3r − |(2F1/(mω0²)) sin(ω0 t1/2)|,    with ω0 = √((c1 + c2)/m).    (23)
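Eq. (23) evaluated at the mean values of Table 2 can be sketched as follows; at the means the system is safe (G > 0), and failure occurs only in the tails of the input distributions.

```python
import numpy as np

def g_oscillator(m, c1, c2, r, F1, t1):
    """Performance function of Eq. (23) for the non-linear oscillator."""
    w0 = np.sqrt((c1 + c2) / m)                       # natural frequency
    return 3.0 * r - np.abs(2.0 * F1 / (m * w0 ** 2) * np.sin(w0 * t1 / 2.0))

# response at the mean values of Table 2
g_mean = float(g_oscillator(m=1.0, c1=1.0, c2=0.1, r=0.5, F1=1.0, t1=0.5))
```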
In this problem, 28 points are sampled by MCS as the initial DoE for the Kriging process, and 10⁵ points are used for the learning functions to choose new points and to calculate the failure probability. In this example, the number of important points added to the DoE depends only on the stopping condition, without other limits. The results are shown in Table 3, where Ncall denotes the number of calls to the performance function (23). This is a typical example of non-linearity and low failure probability. In the results of Table 3, AK-MCS + EFF indicates that the Kriging process of AK-MCS is updated by the learning function EFF; AK-MCS + U, AK-MCS + H and the following AK-LS + H are interpreted similarly. 28 + 1 means that there are 28 samples in the initial
Fig. 11. Flow diagram of the improved Line Sampling method.
Fig. 12. The simulated curves made by Kriging with EFF, U and H.
DoE and 1 point is added to the DoE by the learning function during the whole calculation. Note that the initial DoE of 28 samples for constructing the Kriging metamodel is the same for the different methods. As listed in Table 3, in the process of choosing new points by the learning function, EFF and U do not work. That is because, after the initial metamodel is established, EFFmax = 0 and Umin = 45.23 are obtained, which satisfy the stopping conditions
Fig. 13. Non-linear oscillator.
Fig. 14. Cantilever beam structure.

Table 2
The distributional parameters of random variables.

Random variable        m       c1         c2         r       F1      t1
Distribution           Normal  Lognormal  Lognormal  Normal  Normal  Normal
Mean value             1       1          0.1        0.5     1       0.5
Variation coefficient  0.1     0.1        0.1        0.1     0.2     0.2
Table 3
The estimated values of failure probability.

Method  MCS          AK-MCS + EFF  AK-MCS + U  AK-MCS + H
Ncall   10 000 000   28 + 0        28 + 0      28 + 1
Pf      2.38 × 10⁻⁵  0             0           2.0 × 10⁻⁵
EFFmax ≤ 0.001 and Umin ≥ 2 by a wide margin. However, Hmax = 0.58 makes the learning function H (with stopping criterion Hmax < 0.5) work and yields a reasonable failure probability.

5.3. Example 3: Cantilever beam structure

Consider a cantilever beam with a rectangular section, uniformly loaded, as shown in Fig. 14. Supposing that the deflection of the beam's free end cannot exceed L/325, the limit state function is constructed as g(ω, b, L) = L/325 − ωbL⁴/(8EI), where ω, b, L, E, I are the unit load, sectional dimension, length of the beam, elastic modulus and sectional moment of inertia respectively, with E = 26 GPa and I = b⁴/12; the others are normally distributed random variables listed in Table 4. The failure probability is then calculated by MCS, AK-MCS and AK-LS respectively, and the results are listed in Table 5. The initial Kriging metamodels, constructed from 15 samples, are the same for the different methods. Moreover, the 10⁵-point sample set from which the learning functions choose the next important points is also the same. In the calculation, 10 important points are selected by every method.
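A crude Monte Carlo check of the cantilever limit state can be sketched as follows, assuming ω is a pressure in N/m² (consistent with the ωb term in g) and converting the Table 4 values to SI units; this parametrization, the sample size and the seed are assumptions of the sketch, so its estimate need not coincide with Table 5.

```python
import numpy as np

def g_beam(w, b, L, E=26e9):
    """Cantilever limit state: g = L/325 - w*b*L^4/(8*E*I), with I = b^4/12."""
    I = b ** 4 / 12.0
    return L / 325.0 - w * b * L ** 4 / (8.0 * E * I)

rng = np.random.default_rng(0)
n = 200_000
w = rng.normal(1000.0, 100.0, n)      # omega: mean 1000 N/m^2, c.o.v. 0.1
L = rng.normal(6.0, 0.9, n)           # L: mean 6 m, c.o.v. 0.15
b = rng.normal(0.25, 0.0375, n)       # b: mean 250 mm, c.o.v. 0.15
pf = float(np.mean(g_beam(w, b, L) <= 0.0))
```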
Table 4
The distributional parameters of random variables.

Random variable        ω          L     b
Mean value             1000 N/m²  6 m   250 mm
Variation coefficient  0.1        0.15  0.15
Table 5
The estimated values of failure probability.

Method    MCS      AK-MCS + EFF  AK-MCS + U  AK-MCS + H  AK-LS + H
Ncall     10⁶      15 + 10       15 + 10     15 + 10     15 + 10
Pf        0.00511  0.00682       0.00581     0.00561     0.00528
Cov(Pf)   0.0139   0.0382        0.0413      0.0421      1.368 × 10⁻¹⁰
Table 6
The distributional parameters of the random variables of the stiffener structure.

Random variable        d/mm  E/GPa  P1/Pa   P2/Pa   F1/N  F2/N    F3/N    F4/N    F5/N  F6/N
Mean value             5     5000   35 239  23 758  5949  16 245  10 140  19 185  100   5000
Variation coefficient  0.05  0.05   0.05    0.05    0.05  0.05    0.05    0.05    0.05  0.05
Fig. 15. The leading edge stiffener.
As the results in Table 5 reveal, the final metamodel updated by H is better than those updated by EFF and U, and the results of AK-LS are more accurate than those of AK-MCS.

5.4. Example 4: The leading edge stiffener of a civil aircraft

The leading edge stiffener is important for the aircraft (see Fig. 15), as it effectively improves the strength of the airfoil. The geometric dimensions of the stiffener structure are presented in Fig. 16. There are six holes in the stiffener, the biggest of which is used to fasten the engine that controls the movement of the slat. At the top is the lightening hole, which allows the pipeline or cable to pass through. The remaining holes provide support for the slideway of the slat. The material of the stiffener is aluminum alloy 7050-T7451. The stiffener bears not only the concentrated loads on its web, but also the aerodynamic loading on its edge, as Fig. 17 shows. Suppose the thickness of the web is d, the elastic modulus is E, the aerodynamic loads are P1 and P2, and the concentrated loads are F1, F2, F3, F4, F5, F6. These ten parameters are independent, normally distributed random variables whose parameters are shown in Table 6.
Fig. 16. The geometry dimensions of the leading edge stiffener.
Fig. 17. Diagram of the force distributed on the stiffener.
According to Figs. 16 and 17, a parametric model of the stiffener can be constructed and its static analysis can be performed with ANSYS, as depicted in Fig. 18. The reliability of the stiffener can then be calculated through MATLAB calling ANSYS. In the reliability analysis, the initial DoE consists of 10 points sampled by MCS, and the number of points used for providing important points to the DoE and for the subsequent failure probability calculation is 10⁵. The constraint condition in the computation is Cov(Pf) ≤ 0.05. The results are listed in Table 7, from which the same conclusions can be drawn. Therefore, as in the previous examples, the learning function H and the AK-LS method proposed in this paper are reasonable and can be widely applied in engineering, especially to finite element models.

6. Conclusion

This paper proposes a new learning function H based on information entropy for the Kriging method. Kriging is an excellent method for providing a metamodel for the time-demanding finite element models in engineering. The metamodel is
Fig. 18. The finite element model and static analysis of the stiffener web.

Table 7
The estimated values of failure probability.

Method    MCS     AK-MCS + EFF  AK-MCS + U  AK-MCS + H  AK-LS + H
Ncall     10⁵     10 + 3        10 + 6      10 + 3      10 + 3
Pf        0.0456  0.0470        0.0429      0.0461      0.0458
Cov(Pf)   0.0145  0.0142        0.0149      0.0144      3.21 × 10⁻⁸
always defined by only a few points of the population that need to be evaluated with the performance function. The new learning function H proposed in this paper helps determine points concentrated in the vicinity of the limit state to update the metamodel effectively. Moreover, Kriging with the learning function H can be applied with many reliability methods such as MCS, IS and so on. To offer more choices for Kriging in the reliability analysis of engineering structures, an active learning method named AK-LS, combining Kriging with Line Sampling, is then proposed. AK-LS is efficient in assessing the failure probability of complex structures. It combines the advantages of the two methods: the efficiency of LS in solving the reliability problem and the accuracy of Kriging with the learning function H in approximating the performance function. Therefore, the AK-LS method is accurate and efficient in computing reliability with small failure probabilities and multidimensional variables. Several examples in Section 5 have been studied to demonstrate the feasibility and efficiency of the applications of the proposed learning function H.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. NSFC 51175425) and the Aviation Foundation (Grant No. 2011ZA53015).

Appendix

For Ĝ(x) ~ N(µĜ(x), σĜ²(x)), Eq. (17) can be written as
$$
H\big(\hat{G}(x)\big) = -\int_{G^-(x)}^{G^+(x)} f\big(\hat{G}(x)\big) \ln f\big(\hat{G}(x)\big)\, d\hat{G}(x)
= -\int_{G^-(x)}^{G^+(x)} \frac{1}{\sqrt{2\pi}\,\sigma_{\hat{G}}(x)}\, e^{-\frac{(\hat{G}(x)-\mu_{\hat{G}}(x))^2}{2\sigma_{\hat{G}}^2(x)}} \ln\!\left[\frac{1}{\sqrt{2\pi}\,\sigma_{\hat{G}}(x)}\, e^{-\frac{(\hat{G}(x)-\mu_{\hat{G}}(x))^2}{2\sigma_{\hat{G}}^2(x)}}\right] d\hat{G}(x). \quad (24)
$$
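As a point of reference (not part of the original derivation): when the integration limits extend over the whole real line, the integral in Eq. (24) reduces to the classical differential entropy of a Gaussian, $\ln(\sqrt{2\pi e}\,\sigma_{\hat{G}}(x))$. A quick numerical check, sketched in Python with scipy assumed available:

```python
import numpy as np
from scipy.stats import norm

sigma = 0.7
# Closed-form differential entropy of N(mu, sigma^2): ln(sqrt(2*pi*e) * sigma)
h_closed = np.log(np.sqrt(2 * np.pi * np.e) * sigma)
# scipy's built-in differential entropy for the same distribution
h_scipy = norm(loc=1.3, scale=sigma).entropy()
# The two agree; note the entropy does not depend on the mean
assert np.isclose(h_closed, h_scipy)
```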
Before deriving Eq. (18), some preliminary transformations are made as follows:
$$
\begin{aligned}
&-\int_{G^-}^{G^+} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(\hat{G}-\mu)^2}{2\sigma^2}} \ln\!\left[\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(\hat{G}-\mu)^2}{2\sigma^2}}\right] d\hat{G} \\
&\quad= -\int_{G^-}^{G^+} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(\hat{G}-\mu)^2}{2\sigma^2}} \left[-\ln\big(\sqrt{2\pi}\,\sigma\big) - \frac{(\hat{G}-\mu)^2}{2\sigma^2}\right] d\hat{G} \\
&\quad= \ln\big(\sqrt{2\pi}\,\sigma\big)\int_{G^-}^{G^+} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(\hat{G}-\mu)^2}{2\sigma^2}}\, d\hat{G} + \int_{G^-}^{G^+} \frac{1}{\sqrt{2\pi}\,\sigma}\,\frac{(\hat{G}-\mu)^2}{2\sigma^2}\, e^{-\frac{(\hat{G}-\mu)^2}{2\sigma^2}}\, d\hat{G} \\
&\quad= \ln\big(\sqrt{2\pi}\,\sigma\big)\left[\Phi\!\left(\frac{G^+-\mu}{\sigma}\right) - \Phi\!\left(\frac{G^--\mu}{\sigma}\right)\right] - \frac{1}{2\sqrt{2\pi}\,\sigma}\int_{G^-}^{G^+} (\hat{G}-\mu)\, d\!\left(e^{-\frac{(\hat{G}-\mu)^2}{2\sigma^2}}\right) \\
&\quad= \ln\big(\sqrt{2\pi}\,\sigma\big)\left[\Phi\!\left(\frac{G^+-\mu}{\sigma}\right) - \Phi\!\left(\frac{G^--\mu}{\sigma}\right)\right] - \frac{1}{2\sqrt{2\pi}\,\sigma}\left[(\hat{G}-\mu)\, e^{-\frac{(\hat{G}-\mu)^2}{2\sigma^2}}\Big|_{G^-}^{G^+} - \int_{G^-}^{G^+} e^{-\frac{(\hat{G}-\mu)^2}{2\sigma^2}}\, d\hat{G}\right] \\
&\quad= \left[\ln\big(\sqrt{2\pi}\,\sigma\big) + \frac{1}{2}\right]\left[\Phi\!\left(\frac{G^+-\mu}{\sigma}\right) - \Phi\!\left(\frac{G^--\mu}{\sigma}\right)\right] - \frac{G^+-\mu}{2\sigma}\,\phi\!\left(\frac{G^+-\mu}{\sigma}\right) + \frac{G^--\mu}{2\sigma}\,\phi\!\left(\frac{G^--\mu}{\sigma}\right)
\end{aligned} \quad (25)
$$
where $\hat{G}$ denotes $\hat{G}(x)$, $\sigma$ denotes $\sigma_{\hat{G}}(x)$, $\mu$ denotes $\mu_{\hat{G}}(x)$, $G^+$ denotes $G^+(x)$, and $G^-$ denotes $G^-(x)$.

Substituting $G^+(x) = +2\sigma_{\hat{G}}(x)$ and $G^-(x) = -2\sigma_{\hat{G}}(x)$ into the above formula, Eq. (24) becomes
$$
H\big(\hat{G}(x)\big) = \left[\ln\!\big(\sqrt{2\pi}\,\sigma_{\hat{G}}(x)\big) + \frac{1}{2}\right]\left[\Phi\!\left(\frac{2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{\sigma_{\hat{G}}(x)}\right) - \Phi\!\left(\frac{-2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{\sigma_{\hat{G}}(x)}\right)\right] - \left[\frac{2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{2\sigma_{\hat{G}}(x)}\,\phi\!\left(\frac{2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{\sigma_{\hat{G}}(x)}\right) + \frac{2\sigma_{\hat{G}}(x)+\mu_{\hat{G}}(x)}{2\sigma_{\hat{G}}(x)}\,\phi\!\left(\frac{-2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{\sigma_{\hat{G}}(x)}\right)\right]. \quad (26)
$$
The right-hand side of Eq. (26) is also a function of x. Therefore, Eq. (26) can be written as
$$
H(x) = \left[\ln\!\big(\sqrt{2\pi}\,\sigma_{\hat{G}}(x)\big) + \frac{1}{2}\right]\left[\Phi\!\left(\frac{2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{\sigma_{\hat{G}}(x)}\right) - \Phi\!\left(\frac{-2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{\sigma_{\hat{G}}(x)}\right)\right] - \left[\frac{2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{2\sigma_{\hat{G}}(x)}\,\phi\!\left(\frac{2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{\sigma_{\hat{G}}(x)}\right) + \frac{2\sigma_{\hat{G}}(x)+\mu_{\hat{G}}(x)}{2\sigma_{\hat{G}}(x)}\,\phi\!\left(\frac{-2\sigma_{\hat{G}}(x)-\mu_{\hat{G}}(x)}{\sigma_{\hat{G}}(x)}\right)\right]. \quad (27)
$$
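The closed form in Eq. (27) can be checked against direct numerical quadrature of the defining integral in Eq. (24). A minimal Python sketch, assuming scipy is available (the function names are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def H_closed(mu, sigma):
    """Closed-form H(x) of Eq. (27), with G+ = +2*sigma and G- = -2*sigma."""
    up = (2 * sigma - mu) / sigma    # (G+ - mu) / sigma
    lo = (-2 * sigma - mu) / sigma   # (G- - mu) / sigma
    return ((np.log(np.sqrt(2 * np.pi) * sigma) + 0.5)
            * (norm.cdf(up) - norm.cdf(lo))
            - ((2 * sigma - mu) / (2 * sigma) * norm.pdf(up)
               + (2 * sigma + mu) / (2 * sigma) * norm.pdf(lo)))

def H_numeric(mu, sigma):
    """Direct quadrature of Eq. (24): -integral of f*ln(f) over [-2s, +2s]."""
    f = lambda g: norm.pdf(g, mu, sigma)
    integrand = lambda g: -f(g) * np.log(f(g))
    val, _ = quad(integrand, -2 * sigma, 2 * sigma)
    return val

# Agreement over several (mu, sigma) pairs supports the derivation in Eq. (25)
for mu, sigma in [(0.4, 1.2), (-1.0, 0.5), (3.0, 2.0)]:
    assert np.isclose(H_closed(mu, sigma), H_numeric(mu, sigma), atol=1e-8)
```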
That is Eq. (18).

References

[1] O. Ditlevsen, H.O. Madsen, Structural Reliability Methods, Wiley, Chichester, 1996.
[2] Y.T. Wu, Y.S. Monhant, Variable screening and ranking using sampling based sensitivity measures, Reliab. Eng. Syst. Saf. 91 (6) (2006) 634–647.
[3] Y.T. Wu, Computational methods for efficient structural reliability and reliability sensitivity analysis, AIAA J. 32 (8) (1994) 1717–1723.
[4] R. Rackwitz, B. Fiessler, Structural reliability under combined random load sequences, Comput. Struct. 9 (1978) 489–494.
[5] P.L. Liu, A. Der Kiureghian, Optimization algorithms for structural reliability, Struct. Saf. 9 (1991) 161–178.
[6] S.K. Au, J.L. Beck, Estimation of small failure probabilities in high dimensions by subset simulation, Probab. Eng. Mech. 16 (2001) 263–277.
[7] S.K. Au, Reliability-based design sensitivity by efficient simulation, Comput. Struct. 83 (2005) 1048–1061.
[8] G.I. Schuëller, H.J. Pradlwarter, P.S. Koutsourelakis, A critical appraisal of reliability estimation procedures for high dimensions, Probab. Eng. Mech. 19 (4) (2004) 463–473.
[9] G.I. Schuëller, H.J. Pradlwarter, P.S. Koutsourelakis, A comparative study of reliability estimation procedures for high dimension, in: 16th ASCE Engineering Mechanics Conference, University of Washington, Seattle, 2003.
[10] G.I. Schuëller, R. Stix, A critical appraisal of methods to determine failure probabilities, Struct. Saf. 4 (4) (1987) 293–309.
[11] Y. Ibrahim, Observations on applications of importance sampling in structural reliability analysis, Struct. Saf. 9 (4) (1991) 269–281.
[12] C.J. Geyer, E.A. Thompson, Constrained Monte Carlo maximum likelihood for dependent data, J. R. Stat. Soc. Ser. B Stat. Methodol. 54 (3) (1992) 657–699.
[13] X. Descombes, R. Morris, J. Zerubia, et al., Maximum likelihood estimation of Markov random field parameters using Markov Chain Monte Carlo algorithms, Lecture Notes in Comput. Sci. (1997).
[14] I. Kaymaz, C.A. McMahon, A response surface method based on weighted regression for structural reliability analysis, Probab. Eng. Mech. 20 (2005) 11–17.
[15] R.G. Ghanem, P.D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer, Berlin, 1991.
[16] B. Bichon, M. Eldred, L. Swiler, et al., Efficient global reliability analysis for nonlinear implicit performance functions, AIAA J. 46 (10) (2008) 2459–2468.
[17] I. Kaymaz, Application of Kriging method to structural reliability problems, Struct. Saf. 27 (2) (2005) 133–151.
[18] M. Papadrakakis, N. Lagaros, Reliability-based structural optimization using neural networks and Monte Carlo simulation, Comput. Methods Appl. Mech. Engrg. 191 (32) (2002) 3491–3507.
[19] J.E. Hurtado, An examination of methods for approximating implicit limit state functions from the viewpoint of statistical learning theory, Struct. Saf. 26 (3) (2004) 271–293.
[20] C.C. Zhou, Z.Z. Lu, X.K. Yuan, Use of relevance vector machine in structural reliability analysis, J. Aircr. 50 (6) (2013) 1726–1733.
[21] M. Guarascio, M. David, C. Huijbregts, Advanced geostatistics in the mining industry, in: Proceedings of the NATO Advanced Study Institute, Istituto di Geologia Applicata of the University, Rome, Italy, 1975.
[22] G. Matheron, The intrinsic random functions and their applications, Adv. Appl. Probab. 5 (3) (1973) 439–468.
[23] B. Echard, N. Gayton, M. Lemaire, AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation, Struct. Saf. 33 (2) (2011) 145–154.
[24] B. Echard, N. Gayton, M. Lemaire, et al., A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models, Reliab. Eng. Syst. Saf. 111 (2013) 232–240.
[25] A. Dumas, B. Echard, N. Gayton, et al., AK-ILS: An active learning method based on kriging for the inspection of large surfaces, Precis. Eng. 37 (2013) 1–9.
[26] C.E. Rasmussen, C.K.I. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006.
[27] S.N. Lophaven, H.B. Nielsen, J. Sondergaard, DACE, A Matlab Kriging Toolbox, Version 2.0, Technical University of Denmark, 2002.
[28] S.N. Lophaven, H.B. Nielsen, J. Sondergaard, Aspects of the Matlab Toolbox DACE, Technical University of Denmark, 2002.
[29] C.E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27 (1948) 379–423.
[30] P.S. Koutsourelakis, H.J. Pradlwarter, G.I. Schuëller, Reliability of structures in high dimensions, part I: algorithms and application, Probab. Eng. Mech. 19 (2004) 409–417.