An Interval Uncertainty Analysis Method for Structural Response Bounds Using Feedforward Neural Network Differentiation


Highlights:

- A new interval uncertainty analysis method using feedforward neural networks is proposed.
- A feedforward neural network is capable of approximating the partial derivatives of an arbitrary function.
- The subinterval method is used to obtain more reliable results for problems with relatively large uncertainty levels.
- Three numerical examples show that the present method can achieve fine precision with less computational cost.


Liqun Wang a,b, Zengtao Chen b, Guolai Yang a,*

a School of Mechanical Engineering, Nanjing University of Science & Technology, Nanjing 210094, PR China.
b Department of Mechanical Engineering, University of Alberta, Edmonton T6G 1H9, Canada.
* Corresponding author. E-mail: [email protected]

Abstract: This paper proposes a new interval uncertainty analysis method for the structural response bounds of problems with uncertain-but-bounded parameters, based on feedforward neural network (FNN) differentiation. For many complicated engineering problems, the partial derivatives of the response are not available analytically. To overcome this drawback, FNNs of the real structural responses with respect to the structural parameters are first constructed in this work. The first-order and second-order partial derivative formulas of an FNN are derived via the backward chain rule of partial differentiation, so that the partial derivatives can be evaluated directly. In particular, the influence of the structure of a multilayer FNN on the accuracy of its first-order and second-order partial derivatives is analyzed. A numerical example shows that an FNN with appropriate structure parameters is capable of approximating the first-order and second-order partial derivatives of an arbitrary function. Based on the parameter perturbation method using these partial derivatives, the extrema of the FNN can be approximated without requiring much computational time. Moreover, the subinterval method is introduced to obtain more accurate and reliable results for structural responses with relatively large interval uncertain parameters. Three specific examples, a cantilever tube, a Belleville spring, and a rigid-flexible coupling dynamic model, are employed to show the effectiveness and feasibility of the proposed interval uncertainty analysis method compared with other methods.

Keywords: Interval uncertainty, Interval uncertainty analysis, Uncertainty propagation, Surrogate model, Feedforward neural network

1. Introduction

In structural design and analysis, calculating the structural responses is of primary importance. However, a variety of uncertainties are inherent in material properties, external loads, dimensional tolerances, and boundary conditions. These uncertainties inevitably affect the final system performance, and small parameter uncertainties may lead to relatively large uncertainties in the structural response, especially for nonlinear systems.

The traditional way to investigate structural response problems with uncertain parameters is the probabilistic approach [1-3], in which the uncertain parameters are treated as stochastic variables with known probability distributions. However, determining a precise probability distribution requires complete stochastic information and knowledge, which are usually hard and expensive to obtain in practical engineering problems. To circumvent this drawback, alternative tools, such as fuzzy sets [4], interval sets, and Dempster-Shafer (D-S) evidence theory [5], have been widely adopted to quantify uncertainties. Among them, the interval approach uses the upper and lower bounds of uncertain-but-bounded parameters to describe uncertainty, and the bounds of the structural responses are obtained via so-called interval analysis methods. It is the preferred choice when only the bounds of the uncertain parameters are available.

Since Moore [6] and the pioneering work of Alefeld and Herzberger [7], developing accurate and efficient interval analysis methods has drawn much research attention. Several approaches, including interval arithmetic based methods, the interval vertex method (IVM), interval perturbation methods (IPM), and global searching algorithms (GSA), have been proposed to solve for static and dynamic structural responses with respect to uncertain parameters. Based on interval arithmetic operations [8, 9], the interval algorithm can quickly calculate bounds on the true solution; however, its inherent deficiency is a large overestimation of the bounds, usually called the "wrapping effect" [10]. IVM is a better choice for calculating the exact bounds of the solution to linear interval equations. Qiu et al. [11-13] gave a mathematical proof of the vertex solution theorem and applied it to calculate the static displacement [11] and the stress and deformation [12] bounds of structures with uncertain-but-bounded parameters. However, the applications of IVM are limited to monotonic cases, so it is not suitable for complex nonlinear problems. IPM is a Taylor expansion based approximation method, which can produce an approximate solution close to the exact one. Based on first-order Taylor expansion and parameter perturbation, Chen et al. [14] and McWilliam [15] used IPM to solve the uncertain static displacement problem of structures with interval parameters; Impollonia and Muscolino [16] evaluated the static response of structures with interval axial stiffnesses; and Wang et al. [17] proposed a modified interval parameter perturbation finite element method (MIPPM) for coupled structural-acoustic field prediction and structural design with uncertainties.

Moreover, Qiu and Wang [18] applied the first-order parameter perturbation method to evaluate the range of dynamic responses of structures with interval parameters. The IPM has fine computational efficiency, but two main deficiencies hinder its engineering application. The first is that the method is limited to problems with a small uncertainty level, and its accuracy decreases as the uncertainty level increases. To overcome this drawback, the subinterval perturbation method [19-21] was proposed, which divides the large intervals into several subintervals and applies the interval perturbation to each subinterval combination, thereby obtaining more accurate bounds. The second deficiency is that the first-order or higher-order partial derivatives of the structural responses with respect to the interval variables are required in IPM. In most practical engineering systems, however, the explicit functional relationship between system inputs and outputs is too complex to be derived, and as a result the first-order and higher-order derivatives may be unavailable. One may argue that the finite difference method can calculate the derivatives, but it is not a sensible choice for computation-intensive simulation models.

Most of the aforementioned methods can only solve simple engineering problems, whereas GSA is a choice for complex ones. In this method, the extrema of the structural responses within the uncertain domain are calculated using two deterministic optimization processes. For instance, Jiang et al. [22, 23] and Cheng et al. [24] used the intergeneration projection genetic algorithm (IP-GA) and genetic algorithms (GA) as optimization operators to search for the lower and upper bounds of performance functions, respectively. Li et al. [25] modeled the structural eigenvalue problem with interval parameters as a series of quadratic programming problems with box constraints, and then solved it by using the difference of convex functions algorithm (DCA). The biggest merit of GSA is its fine accuracy, but the optimization solver usually incurs a high computational cost, especially for complex engineering problems using computation-intensive simulation models. Surrogate models were therefore introduced into interval analysis to replace the original simulation model, so that the interval bounds of structural responses can be calculated efficiently. Wu et al. [10] presented a new interval analysis method using the Chebyshev polynomial series. Liu et al. [26] introduced the radial basis function (RBF) into structural interval analysis, in which a surrogate model of the real structural response with respect to the interval parameters is constructed with the RBF and the extrema of the surrogate model are calculated by auxiliary methods such as GA. Liu et al. [27] proposed a Bayesian collocation method (BCM) for static analysis of structures with interval uncertainties.

Among the various surrogate models, multilayer feedforward neural networks (FNNs) have been proven to possess the universal approximation property [28-30]. Leshno et al. [31] and Pinkus [32] further noted that standard multilayer FNNs with a locally bounded piecewise continuous activation function can accurately approximate an arbitrary function and its partial derivatives if and only if the network's activation function is not a polynomial. Moreover, Maiorov and Pinkus [33] proved that two-hidden-layer FNNs using a fixed finite number of units in each layer can approximate any continuous function with arbitrary precision. The application of FNNs in the domain of interval uncertainty analysis is promising, yet mostly unexplored.

This paper mainly focuses on the following challenges in solving the structural response bounds of complex engineering problems with uncertain-but-bounded parameters: the partial derivatives are difficult to obtain directly, and the auxiliary optimization solver usually incurs a high computational cost. The paper proposes a new interval uncertainty analysis method with both high efficiency and fine precision by using FNNs. In this work, the FNNs of real structural responses with respect to the interval parameters are first constructed. The method to calculate the first-order and second-order partial derivatives of the FNNs is discussed, so that the partial derivative information can be determined directly. In particular, the influence of the structure parameters of a multilayer FNN on its first-order and second-order partial derivatives is analyzed, which has rarely been addressed in the literature. Based on the first-order or second-order parameter perturbation method using these partial derivatives, the extrema of the FNN can be approximated without requiring expensive computational time. Moreover, the subinterval method is introduced to obtain more accurate and reliable results for structural response problems with relatively large interval uncertain parameters.

This paper is organized as follows. In Section 2, a statement of the structural response problems with interval uncertain parameters is given, and some existing interval analysis methods are reviewed briefly. The detailed methods to calculate the first-order and second-order partial derivatives with FNNs are presented in Section 3, and a numerical example is used to demonstrate their validity. Then, in Section 4, the process of implementing FNN differentiation in interval uncertainty analysis is introduced. In Section 5, three numerical examples are given to show the effectiveness and feasibility of the proposed method. Finally, conclusions are drawn in Section 6.

2. Overview of the structural response problems with interval uncertainty parameters

In this section, a statement of the structural response problems with interval uncertainty parameters is given, and the existing interval uncertainty analysis methods used for comparison in Section 5 are reviewed briefly.

2.1. Problem statement


Without loss of generality, most structural response problems can be described by a nonlinear functional equation:

$$\mathbf{Y} = \mathbf{F}(\mathbf{x}), \qquad (1)$$

where $\mathbf{Y} = [Y_1, Y_2, \dots, Y_q]^T$ is a $q$-dimensional vector of structural responses, $\mathbf{F} = [F_1, F_2, \dots, F_q]^T$ is a $q$-dimensional vector of nonlinear functions, and $\mathbf{x} = [x_1, x_2, \dots, x_n]^T$ is an $n$-dimensional vector of design parameters. Eq. (1) represents a deterministic model. When interval uncertainties are considered, the uncertain parameters are collected in an interval vector $\mathbf{x}^I$, where the superscript $I$ denotes an interval, and Eq. (1) can be written as

$$\mathbf{Y}^I = \mathbf{F}(\mathbf{x}^I), \qquad (2)$$

where $\mathbf{x}^I = [x_1^I, x_2^I, \dots, x_n^I]^T$ with $x_i^I = [\underline{x}_i, \overline{x}_i]$ consists of the lower and upper bounds of the uncertain parameters. Because of the transitivity of the uncertain parameters, $\mathbf{Y}$ is also an interval vector rather than a single value. The solution of Eq. (2) can be stated as

$$\mathbf{Y}^I = [\underline{\mathbf{Y}}, \overline{\mathbf{Y}}] = \{\mathbf{Y} : \mathbf{Y} = \mathbf{F}(\mathbf{x}),\ \mathbf{x} \in \mathbf{x}^I\}. \qquad (3)$$

The lower and upper bounds can be obtained by

$$\underline{\mathbf{Y}} = \min_{\mathbf{x} \in \mathbf{x}^I} \{\mathbf{Y} : \mathbf{Y} = \mathbf{F}(\mathbf{x})\}, \qquad (4)$$

$$\overline{\mathbf{Y}} = \max_{\mathbf{x} \in \mathbf{x}^I} \{\mathbf{Y} : \mathbf{Y} = \mathbf{F}(\mathbf{x})\}. \qquad (5)$$

The bounds of the response interval are essentially the extrema of the functional equations. For most practical problems, the precise interval bounds are difficult to obtain directly through Eqs. (4)-(5). Thus several methods, such as IVM, IPM, and GSA, have been applied to obtain approximate bounds of the response interval.

2.2. A brief review of existing interval uncertainty analysis methods

2.2.1. Interval vertex method

In IVM, it is assumed that the performance functions are monotonic within the uncertain domain. Thus the bounds of the response interval are attained at the vertices of the uncertain domain, which can be computed by the following equations:

$$\overline{\mathbf{Y}} = \max\{\mathbf{F}(\mathbf{x}_{B_1}), \mathbf{F}(\mathbf{x}_{B_2}), \dots, \mathbf{F}(\mathbf{x}_{B_{2^n}})\}, \quad \underline{\mathbf{Y}} = \min\{\mathbf{F}(\mathbf{x}_{B_1}), \mathbf{F}(\mathbf{x}_{B_2}), \dots, \mathbf{F}(\mathbf{x}_{B_{2^n}})\}, \qquad (6)$$

where

$$\mathbf{x}_{B_1} = [\underline{x}_1, \underline{x}_2, \dots, \underline{x}_n], \quad \mathbf{x}_{B_2} = [\overline{x}_1, \underline{x}_2, \dots, \underline{x}_n], \quad \dots, \quad \mathbf{x}_{B_{2^n}} = [\overline{x}_1, \overline{x}_2, \dots, \overline{x}_n]. \qquad (7)$$

Eqs. (6) and (7) are essentially a full factorial design of experiments (DOE). If there are $n$ uncertain parameters, $2^n$ vertices need to be evaluated. The smallest and largest structural responses over these vertices are taken as the lower and upper response bounds, respectively. IVM can only handle monotonic cases, which requires a deep understanding of the original process, and is therefore not suitable for most practical problems.
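For illustration, the vertex enumeration of Eqs. (6)-(7) can be sketched in a few lines of Python; the function name and the example response below are illustrative stand-ins and not part of the original formulation, and the result is only valid under the monotonicity assumption stated above.

```python
# A minimal sketch of the interval vertex method (Eqs. (6)-(7)): evaluate the
# response at all 2^n vertex combinations of the box and take min/max.
import itertools
import numpy as np

def vertex_bounds(F, lower, upper):
    """Return element-wise (min, max) of F over the 2^n box vertices."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    vertices = itertools.product(*zip(lower, upper))      # all corner points
    Y = np.array([np.atleast_1d(F(np.array(v))) for v in vertices])
    return Y.min(axis=0), Y.max(axis=0)

# simple monotonic example response
F = lambda x: np.array([x[0] * x[1] + 2.0 * x[2]])
Y_lo, Y_hi = vertex_bounds(F, lower=[1.0, 2.0, 3.0], upper=[1.2, 2.4, 3.6])
print(Y_lo, Y_hi)
```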

x x

(8) Eq. (8) can be easily derived with some auxiliary optimization algorithms such as GA. GSA is not limited by the research object and has been applied to many practical engineering problems. Moreover, for some complex engineering problems with computation-intensive simulation models, a surrogate model is usually constructed to replace the original model, and combined with an optimization algorithm to obtain high computational efficiency. In general, GSA has a fine accuracy especially for highly nonlinear problems, but the solver usually leads to a high computational cost.
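As a hedged illustration of Eq. (8), the following sketch performs the two box-constrained global searches; SciPy's differential evolution is used here merely as a stand-in for the GA operator discussed in the text, and the response function is a made-up example.

```python
# Global searching sketch for Eq. (8): the lower/upper response bounds come
# from a minimization and a maximization over the interval box.
import numpy as np
from scipy.optimize import differential_evolution

def gsa_bounds(f, bounds, seed=0):
    """Return (min f, max f) over the box via two global searches."""
    res_min = differential_evolution(f, bounds, seed=seed)
    res_max = differential_evolution(lambda x: -f(x), bounds, seed=seed)
    return res_min.fun, -res_max.fun

f = lambda x: x[0] ** 3 * np.sin(x[1] + 2.0) + 4.0 * x[0] * x[1]
print(gsa_bounds(f, bounds=[(0.0, 5.0), (0.0, 5.0)]))
```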


2.2.3. Interval perturbation method

IPM is a Taylor expansion based approximation method. According to interval mathematics, the $i$-th interval parameter $x_i^I = [\underline{x}_i, \overline{x}_i]$ can also be represented by

$$x_i^I = \langle x_i^c, x_i^w \rangle, \quad x_i^c = \frac{\underline{x}_i + \overline{x}_i}{2}, \quad x_i^w = \frac{\overline{x}_i - \underline{x}_i}{2}, \qquad (9)$$

where $x_i^c$ and $x_i^w$ are the nominal value and the radius of the interval parameter, respectively. The $i$-th interval parameter can be further expressed as

$$x_i^I = x_i^c + x_i^w \Delta_i, \quad \Delta_i \in [-1, 1]. \qquad (10)$$

Applying a Taylor expansion at $x_i^c$, $i = 1, 2, \dots, n$, the structural response $Y_j$ ($j = 1, 2, \dots, q$) can be approximated as

$$Y_j = F_j(\mathbf{x}^I) = F_j(\mathbf{x}^c) + \sum_{i=1}^{n} \frac{\partial Y_j}{\partial x_i}\Big|_{\mathbf{x}=\mathbf{x}^c} x_i^w \Delta_i + \frac{1}{2!}\sum_{i=1}^{n}\sum_{k=1}^{n} \frac{\partial^2 Y_j}{\partial x_i \partial x_k}\Big|_{\mathbf{x}=\mathbf{x}^c} x_i^w x_k^w \Delta_i \Delta_k + R_n(\mathbf{x}), \qquad (11)$$

where $R_n(\mathbf{x})$ is an interval remainder term. A strict Taylor model can guarantee that the real solution lies inside the obtained bounds; however, calculating the interval remainder usually results in a high computational cost. Therefore, a first-order or second-order Taylor model is usually used and the interval remainder term is ignored. With the interval expansion method, the bounds of the structural response $Y_j$ ($j = 1, 2, \dots, q$) can be calculated by the following expressions:

$$\overline{Y}_j = F_j(\mathbf{x}^c) + \sum_{i=1}^{n}\left|\frac{\partial Y_j}{\partial x_i}\Big|_{\mathbf{x}=\mathbf{x}^c}\right| x_i^w + \frac{1}{2!}\sum_{i=1}^{n}\sum_{k=1}^{n}\left|\frac{\partial^2 Y_j}{\partial x_i \partial x_k}\Big|_{\mathbf{x}=\mathbf{x}^c}\right| x_i^w x_k^w, \qquad (12)$$

$$\underline{Y}_j = F_j(\mathbf{x}^c) - \sum_{i=1}^{n}\left|\frac{\partial Y_j}{\partial x_i}\Big|_{\mathbf{x}=\mathbf{x}^c}\right| x_i^w - \frac{1}{2!}\sum_{i=1}^{n}\sum_{k=1}^{n}\left|\frac{\partial^2 Y_j}{\partial x_i \partial x_k}\Big|_{\mathbf{x}=\mathbf{x}^c}\right| x_i^w x_k^w. \qquad (13)$$

It should be mentioned that the Taylor expansion is accurate only if the uncertainty levels of the interval parameters are small enough; otherwise it leads to an unsuccessful approximation of the response interval. Hence, the subinterval method is suggested for such cases to divide a large interval into several subintervals with small uncertainty levels, which can be stated as

$$\left[x_i^I\right]_{S_i} = \left[\underline{x}_i + \frac{2(S_i - 1)x_i^w}{t_i},\ \underline{x}_i + \frac{2 S_i x_i^w}{t_i}\right], \quad S_i = 1, 2, \dots, t_i, \qquad (14)$$

where $[x_i^I]_{S_i}$ and $t_i$ are the $S_i$-th subinterval and the subinterval number of the $i$-th ($i = 1, 2, \dots, n$) interval parameter, respectively. For each subinterval combination, Eqs. (10)-(13) are used to calculate the bounds of the structural response interval, and the final solution is obtained based on interval set theory:

$$\underline{\mathbf{Y}} = \min\{\mathbf{F}(\mathbf{x}_{C_1}), \mathbf{F}(\mathbf{x}_{C_2}), \dots, \mathbf{F}(\mathbf{x}_{C_r})\}, \quad \overline{\mathbf{Y}} = \max\{\mathbf{F}(\mathbf{x}_{C_1}), \mathbf{F}(\mathbf{x}_{C_2}), \dots, \mathbf{F}(\mathbf{x}_{C_r})\}, \qquad (15)$$

where

$$r = \prod_{i=1}^{n} t_i, \qquad (16)$$

$$\mathbf{x}_{C_1} = \left[[x_1^I]_{S_1}, [x_2^I]_{S_1}, \dots, [x_n^I]_{S_1}\right], \quad \mathbf{x}_{C_2} = \left[[x_1^I]_{S_1}, [x_2^I]_{S_1}, \dots, [x_n^I]_{S_2}\right], \quad \dots, \quad \mathbf{x}_{C_r} = \left[[x_1^I]_{S_{t_1}}, [x_2^I]_{S_{t_2}}, \dots, [x_n^I]_{S_{t_n}}\right]. \qquad (17)$$

A total of $r$ subinterval combinations need to be calculated. The smallest and largest structural responses over these subinterval combinations are taken as the lower and upper response bounds, respectively. In general, IPM is a good choice for increasing efficiency, but it can only be used when the required derivatives are easy to compute, whereas most practical problems are non-differentiable functional systems.
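To make the subinterval perturbation procedure concrete, the following minimal sketch implements the first-order version of Eqs. (12)-(15); the helper names are illustrative, and the gradient is assumed to be supplied by the user (the later sections of this paper replace it with FNN differentiation).

```python
# First-order (sub)interval perturbation sketch: bounds per Eqs. (12)-(13),
# subinterval division per Eq. (14), and interval-set merge per Eq. (15).
import itertools
import numpy as np

def first_order_bounds(F, grad_F, xc, xw):
    """Eqs. (12)-(13) truncated at first order for one scalar response."""
    spread = np.sum(np.abs(grad_F(xc)) * xw)
    return F(xc) - spread, F(xc) + spread

def subinterval_bounds(F, grad_F, lower, upper, n_sub=4):
    """Apply the bounds on every subinterval combination and merge them."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    edges = [np.linspace(lo, up, n_sub + 1) for lo, up in zip(lower, upper)]
    lows, highs = [], []
    for combo in itertools.product(*(range(n_sub) for _ in lower)):
        sub_lo = np.array([edges[i][s] for i, s in enumerate(combo)])
        sub_up = np.array([edges[i][s + 1] for i, s in enumerate(combo)])
        xc, xw = (sub_lo + sub_up) / 2.0, (sub_up - sub_lo) / 2.0
        lo, hi = first_order_bounds(F, grad_F, xc, xw)
        lows.append(lo); highs.append(hi)
    return min(lows), max(highs)

# illustrative response with an analytically known gradient
F = lambda x: x[0] ** 2 + np.sin(x[1])
grad_F = lambda x: np.array([2.0 * x[0], np.cos(x[1])])
print(subinterval_bounds(F, grad_F, lower=[1.0, 0.0], upper=[2.0, 1.0]))
```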

As mentioned above, these methods have their own limitations. To increase efficiency while ensuring accuracy, we propose an interval analysis method using FNN differentiation, which is described in detail in the following sections.

3. Feedforward neural network differential equations

The methods to calculate the first-order and second-order partial derivatives of a multilayer FNN are presented and demonstrated in this section.

3.1. System equations of the feedforward neural network

Consider the multilayer FNN in Fig. 1, which consists of an input layer, $M$ hidden layers, and an output layer. Here, the input layer is denoted as the 0-th layer and the output layer as the $(M+1)$-th layer. $I_i$ ($O_i$) denotes the input (output) parameters of the network, and $o_i^k$, $n_i^k$, $\omega_{j,i}^k$, $b_i^k$, $f_k(\cdot)$, and $S_k$ are, respectively, the net output of the $i$-th unit in the $k$-th layer, the net input of the $i$-th unit in the $k$-th layer, the connection weight from the $j$-th unit in the $(k-1)$-th layer to the $i$-th unit in the $k$-th layer, the offset factor of the $i$-th unit in the $k$-th layer, the activation function of the $k$-th layer, and the number of units in the $k$-th layer.

Fig. 1 The multilayer feedforward neural network

The input vector $\mathbf{I}$ is introduced at the input layer and then transferred to the hidden layers. For the $k$-th layer, the net input to unit $i$ is a weighted sum of the output vector of the preceding layer:

$$n_i^k = \sum_{j=1}^{S_{k-1}} \omega_{j,i}^k\, o_j^{k-1} + b_i^k, \quad j = 1, 2, \dots, S_{k-1}, \quad i = 1, 2, \dots, S_k, \quad k = 1, 2, \dots, (M+1). \qquad (18)$$

The net output of unit $i$ of layer $k$ is obtained by applying the activation function:

$$o_i^k = f_k(n_i^k), \quad i = 1, 2, \dots, S_k, \quad k = 1, 2, \dots, (M+1). \qquad (19)$$

Substituting Eq. (18) into Eq. (19) and rewriting in matrix form gives the system equation

$$\mathbf{o}^k = f_k\!\left(\mathbf{W}^k \mathbf{o}^{k-1} + \mathbf{b}^k\right), \quad k = 1, 2, \dots, (M+1). \qquad (20)$$

In this manner, the output vector of the $k$-th layer is transferred to the subsequent hidden layer and finally to the output layer, which generates the output vector.
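To make Eqs. (18)-(20) concrete, the following minimal Python sketch propagates an input vector through a network with tansig (tanh) hidden layers and a purelin output layer; the random weights and layer sizes are illustrative stand-ins for a trained FNN.

```python
# Forward propagation per Eq. (20): o^k = f_k(W^k o^{k-1} + b^k).
import numpy as np

def forward(x, weights, biases):
    """Return the lists of net inputs n^k and outputs o^k for every layer."""
    o = np.asarray(x, float)
    ns, os_ = [], [o]
    for k, (W, b) in enumerate(zip(weights, biases)):
        n = W @ o + b                                      # Eq. (18), matrix form
        o = np.tanh(n) if k < len(weights) - 1 else n      # tansig / purelin, Eq. (19)
        ns.append(n); os_.append(o)
    return ns, os_

rng = np.random.default_rng(0)
sizes = [3, 20, 20, 4]                      # S_0, two hidden layers, S_{M+1} = q
weights = [rng.standard_normal((sizes[k + 1], sizes[k])) for k in range(3)]
biases = [rng.standard_normal(sizes[k + 1]) for k in range(3)]
ns, os_ = forward([1.0, 2.0, 3.0], weights, biases)
print(os_[-1])                              # network output vector O
```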

oik oik nik   f k  nik  (kj ,i ) , i  1, 2,..., S k , j  1, 2,..., S k 1. okj 1 nik okj 1

(21)

Eq. (21) can also be expressed as a matrix form

δk  f k  nk W k ,

(22)

 

where δ k is the first-order partial derivative matrix of the k-th layer, the symbol f k nk

6

is a diagonal matrix

which has the following properties:



  ,

fk(nk )  diag fk  n1k  , fk  n2k  ,..., f k nSk k

(23) diag(V ) denotes a function that converts the vector V into a diagonal matrix. According to the backward chain rule of partial differentiation proposed by Hashem [34], the first-order partial derivative of the output parameter Oi with respect to the input parameter I j is formulated as follows

Oi oiM 1 niM 1 o M n M o1 n1  ... I j niM 1 o M n M o M -1 n1 o0j +1 M  M  f M 1  niM 1 W( M ... f 1  n1 W(1:, j ) , i  1, 2,..., S M 1 , j  1, 2,..., S 0 . i,:) f M  n W

(24)

Rewrite Eq. (24) in a matrix form, we can have:

δ  f M +1  n M +1 W Μ +1 f M  n M W Μ ... f 1  n1 W 1 ,

which is a ( q  n )-matrix. q  S

M 1

is the number of output parameters, n  S

0

(25) is the number of input

parameters,  ij is the first-order partial derivative of the i-th output parameter with respect to the j-th input parameter. 3.3. Second-order partial derivative feature Once the first-order partial derivative is obtained, the second-order ones can be easily calculated. For the unit i (i  1, 2,..., S k ) in the layer k (k  1, 2,...,(M  1)) , its second-order partial derivative with respect to units h and k 1 j (h and j  1, 2,..., S ) , can be expressed as:

 2 oik nik   oik   k k k k       f n    f n  (kj ,i ) f k nik  (kh ,i ) .       k i ( j ,i )  ( j ,i ) k i o kj 1ohk 1 ohk 1  o kj 1  ohk 1  ohk 1

(26)

Rewrite Eq. (26) in a matrix form, the following equation can be obtained: T  2 oik  W(ki ,:)  f k nik W(ki ,:) . k 1 2  o



 



(27) These results extend automatically to units of an FNN. Applying the backward chain rule of partial differentiation to calculate the first derivative of Eq. (24), the second-order partial derivative of the output parameter Oi with respect to the input vector I can be formulated as: M 1 M +1 M M 1 1  2Oi   f M 1  ni W( i, :) f M  n W ... f 1  n W   I 2 I M 1 M +1   f M 1  ni W( i, :)    f M  n M W M ... f 1  n1 W 1  I   f M  n M W M  +1 f M 1  niM 1 W( M f M -1  n M -1 W M -1 ... f 1  n1 W 1  i, :) I ...... 

f M 1  n

M 1 i

W

M +1 ( i, :)

f M  n

M

W

M

... f 2  n W 2

2

  f 1  n1 W 1  I

, i  1, 2,..., q. (28)

And then, the following equation can be derived easily: 2 T  Oi +1 M  M   f M 1  niM 1 W( M ... f 1  n1 W 1  W( Mi, :)+1 f M  n M W M ... f 1  n1 W 1  i, :) f M  n W 2 I  f M  n M W M f M -1  n M -1 W M -1 ... f 1  n1 W 1  diag  f M 1  niM 1 W( Mi, :)+1  W M f M -1  n M -1 W M -1 ... f 1  n1 W 1      ......  T

+1 M  f 1 n1 W 1  diag  f M 1  niM 1 W( M  M ... f 2  n 2 W 2  W 1 , i  1, 2,..., q, i, :) f M  n  W    T

(29)

7

where



 .

fk(nk )  diag fk n1k  , fk n2k  ,..., f k nSk k

(30)

 2 Oi is a ( n  n )-matrix, and a total of q second-order partial derivative I 2 matrices can be obtained for a neural network with q output parameters. Take a feedforward network with one single hidden layer as a special case, Eq. (28) can be rewritten as: T T  2Oi   f 2 ni2 W( 2i, :) f1  n1 W 1  W( 2i, :) f 1  n1 W 1   f 1 n1 W 1  diag  f 2  ni2 W(2i, :)  W 1 . 2 I (31) As another special case, the second-order partial derivative formula of a feedforward network with two hidden layers, can be expressed as: T  2 Oi   f3 ni3 W( 3i, :) f 2  n 2 W 2 f 1  n1 W 1  W( 3i, :) f 2  n 2 W 2 f 1  n1 W 1  2 I

The second-order partial derivative

 f 2 n 2 W 2 f 1  n1 W 1  diag  f3  ni3 W( 3i, :)  W 2 f 1  n1 W 1      T

 f 1 n1 W 1  diag  f3  ni3 W( 3i, :) f 2  n 2 W 2  W 1 .     (32) From the above description, we can see that the final calculation results mainly depend on the network structure, activation functions, and connection weights. Once an FNN is constructed, the first-order and second-order partial derivatives of the output vector O with respect to the input vector I can be easily calculated. A key factor that affects the success of process modeling, is the ability to extract information about the model structure and the relationships between its inputs and outputs. T
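The formulas above are straightforward to implement once the weights of a trained network are available. The following hedged sketch evaluates Eq. (25) and the single-hidden-layer case of Eq. (31) for a tansig/purelin network with random stand-in weights, and verifies both against central finite differences; because purelin has zero second derivative, only the second term of Eq. (31) survives in this case.

```python
# Numerical check of Eqs. (25) and (31) for a single-hidden-layer (M = 1)
# tansig/purelin network with random stand-in weights.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 20, 4
W1, b1 = rng.standard_normal((n_hid, n_in)), rng.standard_normal(n_hid)
W2, b2 = rng.standard_normal((n_out, n_hid)), rng.standard_normal(n_out)

def output(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def jacobian(x):                                   # Eq. (25)
    n1 = W1 @ x + b1
    return W2 @ np.diag(1.0 - np.tanh(n1) ** 2) @ W1

def hessian(x, i):                                 # Eq. (31) for output O_i
    n1 = W1 @ x + b1
    t = np.tanh(n1)
    d2f1 = -2.0 * t * (1.0 - t ** 2)               # tansig second derivative
    return (np.diag(d2f1) @ W1).T @ np.diag(W2[i, :]) @ W1

x0 = np.array([0.3, -0.7, 1.1])
h = 1e-6
J_fd = np.column_stack([(output(x0 + h * e) - output(x0 - h * e)) / (2 * h)
                        for e in np.eye(n_in)])
print("max |J - J_fd| =", np.max(np.abs(jacobian(x0) - J_fd)))

i, h2 = 0, 1e-4
H_fd = np.empty((n_in, n_in))
for a in range(n_in):
    for b in range(n_in):
        ea, eb = h2 * np.eye(n_in)[a], h2 * np.eye(n_in)[b]
        H_fd[a, b] = (output(x0 + ea + eb)[i] - output(x0 + ea - eb)[i]
                      - output(x0 - ea + eb)[i] + output(x0 - ea - eb)[i]) / (4 * h2 * h2)
print("max |H - H_fd| =", np.max(np.abs(hessian(x0, i) - H_fd)))
```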

3.4. A numerical example and discussion

In general, the multilayer FNN with the associated error backpropagation (BP) learning algorithm is the most widely used paradigm among current FNNs. This paradigm has three kinds of differentiable activation functions, namely

$$\text{Logsig: } f(x) = \frac{1}{1 + \exp(-x)}, \quad \text{Tansig: } f(x) = \frac{2}{1 + \exp(-2x)} - 1, \quad \text{Purelin: } f(x) = x. \qquad (33)$$

Hence, a BP neural network is used in a mathematical example to investigate the accuracy of the proposed method in an intuitive way. The following functions are considered:

$$\begin{cases}
f_1 = x_1 x_2^2 + 3x_3 + 4x_1 x_3^2 + 1 + 0.05\exp(x_3),\\
f_2 = x_1^3\sin(x_2 + 2) + 10\cos(x_3) + 4x_1 x_3 + 2x_2^2 + 3x_2,\\
f_3 = 0.1x_1^2 + 5x_3 + x_1\sin(x_3) + 0.1\exp(x_2),\\
f_4 = 0.1x_2\exp(x_1 + 2x_3) + 10x_3,\\
x_1, x_2, x_3 \in [0, 5].
\end{cases} \qquad (34)$$

Eq. (34) contains linear functions and most typical nonlinear functions, such as trigonometric, quadratic, cubic, and exponential functions, so that the applicability of the proposed method to partial derivative prediction can be well demonstrated. Besides, the structure of an FNN, such as the number of hidden layers and the number of units in the hidden layers, has a significant impact on its capability to approximate an unknown mapping; undoubtedly, it also affects the partial derivatives. Thus, the influences of these structure parameters of a multilayer FNN on its first-order and second-order partial derivatives are discussed in this example.

3.4.1. First-order partial derivative

Six single-hidden-layer BP FNNs, in which the numbers of hidden layer units are 10, 20, 40, 60, 80, and 100, respectively, are first trained. The activation functions of the hidden layer and the output layer are set as Tansig and Purelin, respectively. R-squared (R2), which indicates how much of the variation of a dependent variable is explained by the independent variables in a surrogate model, is applied to test the accuracy of these FNNs. The results are listed in Table 1. The first-order derivatives of the output parameters with respect to the input parameters are then calculated. The approximate values for f1 and f2 under different numbers of hidden layer units, together with the corresponding exact values (EV), are illustrated in Fig. 2.

Table 1 R2 of the FNNs with different numbers of hidden layer units
Hidden layer units   f1       f2       f3       f4
10                   0.9987   0.9985   0.9990   0.9996
20                   1.0000   1.0000   1.0000   1.0000
40                   1.0000   1.0000   1.0000   1.0000
60                   1.0000   1.0000   1.0000   1.0000
80                   1.0000   1.0000   1.0000   1.0000
100                  1.0000   1.0000   1.0000   1.0000

Fig. 2 The first-order partial derivatives of f1 and f2 under different hidden layer units

Table 2 RMSEs of the first-order partial derivatives under different hidden layer units
Formulation No.   10 units   20 units   40 units   60 units   80 units   100 units
f1                2.3748     0.4123     0.0897     0.0747     0.0494     0.1298
f2                3.7600     0.9522     0.1370     0.2020     0.1246     0.3038
f3                1.8598     0.1847     0.0548     0.0266     0.0291     0.0157
f4                13.3893    1.2012     0.8870     0.6856     0.7437     0.3238

Table 1 shows that all the trained FNNs have a good capability to approximate the mapping in Eq. (34). However, once the number of hidden layer units reaches a threshold, adding further hidden layer units is not recommended as a way to improve the nonlinear mapping capacity of a neural network; in this example, 20 hidden layer units are enough to construct a highly credible FNN. Fig. 2, on the other hand, shows that the number of hidden layer units has a significant impact on the first-order partial derivative calculation: the larger this number, the more likely the network is to give precise first-order partial derivatives. When the number of hidden layer units is 10, the FNN cannot approximate the exact first-order partial derivatives with high confidence. As the number of hidden layer units increases, the first-order partial derivatives obtained from the FNN tend to reach fine precision, and when the number is large enough, such as 60 in this example, they are almost identical to the exact ones. To show this phenomenon in a more intuitive way, the root mean squared error (RMSE) is adopted to evaluate the deviations between the approximate values and the exact ones, as given in Table 2; a total of 1000 samples are generated when calculating the RMSE. Overall, Table 2 shows that the RMSEs exhibit a decreasing trend as the number of hidden layer units increases. When the number of hidden layer units is 80, the first-order partial derivatives of f1 and f2 have the best precision, with RMSEs of 0.0494 and 0.1246, respectively. The RMSEs of f3 and f4 are smallest when the number of hidden layer units is 100, at 0.0157 and 0.3238, respectively. Moreover, Fig. 2 also shows that the present method is prone to large deviations for the first-order derivatives in the close neighborhood of the two interval bounds of X. Fortunately, this defect is resolved as the number of hidden layer units increases.
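For readers who want to reproduce the flavour of this study, the following hedged sketch trains one-hidden-layer networks of increasing width on an illustrative function (not necessarily the exact Eq. (34)) and reports the RMSE of the analytic first derivative obtained from Eq. (25); scikit-learn's MLPRegressor replaces the BP implementation used in the paper.

```python
# Train tansig/purelin networks of increasing width and compare the analytic
# first derivative (Eq. (25)) with the exact one on 1000 test samples.
import numpy as np
from sklearn.neural_network import MLPRegressor

f = lambda x: 0.1 * x[:, 0] ** 2 + 5.0 * x[:, 2] + x[:, 0] * np.sin(x[:, 2]) + 0.1 * np.exp(x[:, 1])
df_dx1 = lambda x: 0.2 * x[:, 0] + np.sin(x[:, 2])          # exact d f / d x1

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 5.0, size=(2000, 3))
X_test = rng.uniform(0.0, 5.0, size=(1000, 3))

for width in (10, 20, 40, 60):
    net = MLPRegressor(hidden_layer_sizes=(width,), activation="tanh",
                       solver="lbfgs", max_iter=10000).fit(X_train, f(X_train))
    W1, b1 = net.coefs_[0].T, net.intercepts_[0]
    W2 = net.coefs_[1].T
    # Eq. (25) with a tansig hidden layer and a purelin output, per test point
    d_approx = np.array([(W2 @ np.diag(1.0 - np.tanh(W1 @ x + b1) ** 2) @ W1)[0, 0]
                         for x in X_test])
    rmse = np.sqrt(np.mean((d_approx - df_dx1(X_test)) ** 2))
    print(width, "hidden units, RMSE of dO/dx1:", round(rmse, 4))
```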

To analyze the influences of the number of hidden layers, three BP FNNs with 1, 2 and 3 hidden layers, respectively, are trained. The activation functions of the first hidden layer, the subsequent hidden layers, and the output layer are set as Tansig, Logsig, and Purelin, respectively. Moreover, the total number of hidden layer units is 60, which is evenly distributed to each hidden layer. The values of R2 of the FNNs with the different numbers of hidden layers are listed in Table 3. The first-order derivatives of the outputs with respect to the inputs are computed, and the results of f3 and f4 are plotted in Fig. 3. Furthermore, the RMSEs are calculated and shown in Table 4.

Table 3 R2 of the FNNs with different numbers of hidden layers
Hidden layers   f1       f2       f3       f4
1               1.0000   1.0000   1.0000   1.0000
2               1.0000   1.0000   1.0000   1.0000
3               1.0000   1.0000   1.0000   1.0000

Fig. 3 The first-order partial derivatives of f3 and f4 under different hidden layers

Table 4 RMSEs of the first-order partial derivatives under different hidden layers
Formulation No.   1 hidden layer   2 hidden layers   3 hidden layers
f1                0.0747           0.0872            0.0271
f2                0.2020           0.1633            0.0873
f3                0.0266           0.0332            0.0212
f4                0.6856           0.8626            0.5810

Fig. 3 shows that the FNNs with the three different numbers of hidden layers all have good precision and are capable of accurately approximating the functions in Eq. (34). Since the total number of hidden layer units (60) is large enough, the first-order derivatives obtained from the FNNs are almost identical to the exact ones. However, Fig. 3 also shows that a considerable error exists in the close neighborhood of the two interval bounds. As the number of hidden layers increases, the first-order derivatives near the interval bounds of X become more accurate. This is further confirmed in Table 4 in a more intuitive way: the RMSEs show a decreasing trend as the number of hidden layers increases, and with 3 hidden layers the RMSEs of f1, f2, f3, and f4 are smallest, at 0.0271, 0.0873, 0.0212, and 0.5810, respectively. Therefore, we can conclude that increasing the number of hidden layers or the number of hidden layer units allows the proposed method to obtain more accurate first-order derivatives.

3.4.2. Second-order partial derivative

The six FNNs with different numbers of hidden layer units mentioned above are revisited to calculate the second-order derivatives of the output parameters with respect to the input parameters. The results for f1 and f2 are illustrated in Fig. 4, and the corresponding RMSEs are calculated and listed in Table 5.

Fig. 4 The second-order partial derivatives of f1 and f2 under different hidden layer units

Table 5 RMSEs of the second-order partial derivatives under different hidden layer units
Formulation No.   10 units   20 units   40 units   60 units   80 units   100 units
f1                2.2476     0.9684     0.7142     0.2771     0.5491     0.6529
f2                8.7824     5.1734     0.3809     1.2274     1.2829     0.5725
f3                4.9333     0.6573     0.1027     0.0908     0.7250     0.1522
f4                37.9994    3.6846     0.4817     3.5433     1.0343     1.7864

Obviously, Fig. 4 shows that, compared with the first-order derivatives, approximating the second-order derivatives of an arbitrary function with an FNN is more difficult. The number of hidden layer units has a significant impact on the second-order partial derivative approximation. For example, the precision of $\partial^2 f_2/\partial x_2^2$ in Fig. 4 first increases and then decreases slightly as the number of hidden layer units increases, reaching its peak when this number is 40. Nonetheless, the FNNs with 60, 80, and 100 hidden layer units give more precise $\partial^2 f_2/\partial x_2^2$ than those with 10 and 20 hidden layer units. Similar behavior appears in the other second-order derivatives in Fig. 4. This phenomenon is further confirmed in Table 5 in a more intuitive way. When the number of hidden layer units is 10, the FNN cannot accurately approximate the exact second-order derivatives. When the number of hidden layer units is 60, the second-order derivatives of f1 and f3 have the best precision, with RMSEs of 0.2771 and 0.0908, respectively. The RMSEs of f2 and f4 are smallest when the number of hidden layer units is 40, at 0.3809 and 0.4817, respectively. Thus, an appropriate number of hidden layer units needs to be specified according to the practical problem when calculating the second-order derivatives of an FNN. Moreover, Fig. 4 also shows that, if the number of hidden layer units is inappropriate, the presented method is prone to large deviations when computing the second-order derivatives of linear functions, such as $\partial^2 f_1/\partial x_1^2$ and $\partial^2 f_1/\partial x_2^2$. Fortunately, this condition is usually not limiting, as the functions are nonlinear or even highly nonlinear in most practical structural problems. Similarly to the first-order case, Fig. 4 shows that the second-order derivatives of the FNN in the close neighborhood of the two interval bounds of X are less accurate than at other positions. To analyze the influence of the number of hidden layers on the second-order derivatives of an FNN, the three FNNs with different numbers of hidden layers mentioned above are revisited. The results for f3 and f4 are illustrated in Fig. 5, and the RMSEs are calculated and listed in Table 6.


Fig. 5 The second-order partial derivatives of f3 and f4 under different hidden layers

Table 6 RMSEs of the second-order partial derivatives under different hidden layers
Formulation No.   1 hidden layer   2 hidden layers   3 hidden layers
f1                0.2771           0.2748            0.1816
f2                1.2274           0.3151            0.2544
f3                0.0908           0.1099            0.0932
f4                3.5433           1.5469            1.5227

Since the total number of hidden layer units is 60, which is an appropriate value for this example, the results in Fig. 5 show fine precision. However, Fig. 5 shows that, compared with the first-order derivatives, the errors of the second-order derivatives in the close neighborhood of the two interval bounds of X are more obvious. As the number of hidden layers increases, the results near the interval bounds of X become much more accurate; when the number of hidden layers is 3, the second-order derivatives from the trained FNNs are almost identical to the exact ones. This is further confirmed in Table 6. For most functions in Eq. (34), such as f1, f2, and f4, the RMSEs show a decreasing trend as the number of hidden layers increases. When the number of hidden layers is 3, the RMSEs of these functions are smallest, at 0.1816, 0.2544, and 1.5227, respectively. Moreover, the result for $\partial^2 f_4/\partial x_2^2$ in Fig. 5 shows that the presented method is prone to large deviations when computing the second-order derivatives of linear functions.

The results of the current detailed assessment show that an FNN with appropriate numbers of hidden layer units and hidden layers is capable of accurately approximating the first-order and second-order derivatives of an arbitrary function. This method provides an alternative way to perform the differential calculations of practical engineering problems, and it is especially suitable for complex, non-differentiable, nonlinear functional systems. However, it should be noted that this research only uses the classical activation functions in Eq. (33). One may ask whether there are other activation functions for which fewer hidden layers and fewer hidden units than those needed with Eq. (33) would suffice. The answer to this question is positive, for example the sigmoidal functions in [35, 36], although such activation functions are not always efficient in practice. Guliyev and Ismailov [35] proved that a single-hidden-layer FNN with a specifically constructed smooth, sigmoidal, computable activation function and only two units in the hidden layer can approximate any continuous univariate function arbitrarily well. Besides, the relevant research results [33, 36] show that two-hidden-layer FNNs with a fixed number of hidden neurons can approximate any continuous n-variable function with arbitrary precision. The activation functions exploited in [35, 36] have nice properties such as sigmoidality, smoothness, computability, and weak monotonicity. Therefore, there seems to be reason to conjecture that these activation functions may be significantly more promising for approximating the derivatives of a function with an arbitrary degree of accuracy using fewer hidden layers and fewer hidden units, at least from a purely approximation-theoretic point of view. This problem certainly warrants further study.

4. An interval uncertainty analysis method with FNN differentiation

To address the limitations of the existing interval analysis methods discussed in Section 2, the FNNs and their differentiation are implemented within IPM, yielding a new method abbreviated as FANNBIA. The FNNs replace the original model to calculate the structural responses, and the first-order and second-order partial derivatives in Eqs. (11)-(13) are computed directly with the differential equations of the FNNs. The flowchart of the solution process is plotted in Fig. 6 and can be described as follows.

Step (1): Determine the uncertain parameters and their corresponding ranges.
Step (2): Select sampling points in the interval space, and then calculate the corresponding structural responses using the original model.
Step (3): Specify the structure parameters of the FNN, including the number of hidden layer units, the number of hidden layers, and the activation functions. Train the FNN with the limited training samples and test its accuracy.
Step (4): Divide the uncertain domains into subintervals with small uncertainty levels.
Step (5): For each subinterval, calculate the responses and the partial derivatives at the nominal values of $\mathbf{x}^I$ with the trained FNN, and obtain the lower and upper bounds of the surrogate model using the Taylor expansion based IPM.
Step (6): Combine the bounds of all subinterval combinations by the interval-set operation of Eq. (15) to obtain the final lower and upper bounds of the structural responses.

Fig. 6 The solving process of the interval analysis method with FNN differentiation
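As a hedged end-to-end illustration of the flowchart in Fig. 6 for one scalar response, the sketch below samples an inexpensive stand-in model, trains a one-hidden-layer surrogate with scikit-learn's MLPRegressor (tanh/identity activations standing in for the BP network of the paper), differentiates the surrogate analytically via Eq. (25), and applies the first-order subinterval perturbation of Eqs. (12)-(15); all names, bounds, and the response function are illustrative assumptions.

```python
# FANNBIA-style workflow sketch: surrogate training, FNN differentiation,
# and first-order subinterval perturbation bounds.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor

def true_model(x):                       # stand-in for the expensive simulation
    return x[:, 0] ** 2 * np.sin(x[:, 1]) + 3.0 * x[:, 2]

lower, upper = np.array([1.0, 0.5, 2.0]), np.array([1.4, 0.9, 2.6])

# Steps (1)-(3): sampling and surrogate training
rng = np.random.default_rng(0)
X = rng.uniform(lower, upper, size=(500, 3))
net = MLPRegressor(hidden_layer_sizes=(40,), activation="tanh",
                   solver="lbfgs", max_iter=5000).fit(X, true_model(X))
W1, W2 = net.coefs_[0].T, net.coefs_[1].T          # rearranged to the W^k convention
b1, b2 = net.intercepts_[0], net.intercepts_[1]

def surrogate(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def gradient(x):                                    # Eq. (25), single hidden layer
    n1 = W1 @ x + b1
    return (W2 @ np.diag(1.0 - np.tanh(n1) ** 2) @ W1).ravel()

# Steps (4)-(6): subinterval division, perturbation bounds, interval-set merge
n_sub, lows, highs = 4, [], []
edges = [np.linspace(lo, up, n_sub + 1) for lo, up in zip(lower, upper)]
for combo in itertools.product(range(n_sub), repeat=3):
    sub_lo = np.array([edges[i][s] for i, s in enumerate(combo)])
    sub_up = np.array([edges[i][s + 1] for i, s in enumerate(combo)])
    xc, xw = (sub_lo + sub_up) / 2.0, (sub_up - sub_lo) / 2.0
    spread = np.sum(np.abs(gradient(xc)) * xw)
    yc = surrogate(xc).item()
    lows.append(yc - spread); highs.append(yc + spread)
print(min(lows), max(highs))
```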

5. Numerical examples

Three numerical examples are used here to demonstrate the validity of the proposed interval analysis method.

5.1. A cantilever tube

A cantilever tube, shown in Fig. 7, is subjected to the external forces F1, F2, and P, and the torsion T [37]. Here, we modified it by adding five interval parameters. The performance function is defined as the maximum von Mises stress on the top surface of the tube at the origin, which is stated as

$$\sigma_{VMS} = \sqrt{\sigma_x^2 + 3\tau_{zx}^2}. \qquad (35)$$

The normal stress can be calculated by

$$\sigma_x = \frac{P + F_1\sin\theta_1 + F_2\sin\theta_2}{A} + \frac{Md}{2I}, \qquad (36)$$

where $M$ is the bending moment, given by

$$M = F_1 L_1\cos\theta_1 + F_2 L_2\cos\theta_2, \qquad (37)$$

and

$$A = \frac{\pi}{4}\left[d^2 - (d - 2t)^2\right], \quad I = \frac{\pi}{64}\left[d^4 - (d - 2t)^4\right]. \qquad (38)$$

The torsional stress $\tau_{zx}$ is calculated by

$$\tau_{zx} = \frac{Td}{4I}. \qquad (39)$$

Fig. 7 A cantilever tube
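For reference, Eqs. (35)-(39) can be evaluated directly; the sketch below uses the nominal parameter values quoted in the following paragraph (converted to SI units, angles in radians), and the function name is illustrative.

```python
# Cantilever-tube performance function, Eqs. (35)-(39).
import numpy as np

def von_mises_stress(t, d, L1, L2, F1, F2, P, T, theta1, theta2):
    A = np.pi / 4.0 * (d ** 2 - (d - 2.0 * t) ** 2)                 # Eq. (38)
    I = np.pi / 64.0 * (d ** 4 - (d - 2.0 * t) ** 4)
    M = F1 * L1 * np.cos(theta1) + F2 * L2 * np.cos(theta2)         # Eq. (37)
    sigma_x = (P + F1 * np.sin(theta1) + F2 * np.sin(theta2)) / A + M * d / (2.0 * I)  # Eq. (36)
    tau_zx = T * d / (4.0 * I)                                      # Eq. (39)
    return np.sqrt(sigma_x ** 2 + 3.0 * tau_zx ** 2)                # Eq. (35)

# nominal values from the text (m, N, N*m, rad)
sigma = von_mises_stress(t=0.005, d=0.042, L1=0.120, L2=0.060,
                         F1=3.0e3, F2=3.0e3, P=12.0e3, T=90.0,
                         theta1=np.radians(5.0), theta2=np.radians(10.0))
print(sigma / 1e6, "MPa")
```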

The parameters in this example are all continuous, and the nominal values of t, d, L1, L2, F1, F2, P, T, θ1, and θ2 are 5 mm, 42 mm, 120 mm, 60 mm, 3.0 kN, 3.0 kN, 12.0 kN, 90.0 N·m, 5 deg, and 10 deg, respectively. t, d, F1, F2, and T are considered as interval uncertainty parameters, and twelve cases are analyzed in which the uncertainty levels (β) of the interval parameters are ±0.1%, ±1.0%, ±2%, ±4%, ±6%, ±8%, ±10%, ±12%, ±14%, ±16%, ±18%, and ±20%, respectively. The considered uncertainty levels are up to 20%, which covers most situations in engineering practice, so the performance of the proposed FANNBIA can be demonstrated comprehensively, especially for large uncertainty levels. The proposed method and the existing methods introduced in Section 2.2 are applied to obtain the interval bounds of σVMS, where IVM is applied to obtain the exact values (EV) of the response bounds. The results versus uncertainty levels are plotted in Fig. 8, their relative errors are listed in Table 7, and the computing times of the different methods are listed in Table 8. Four subintervals are used in Eq. (14) when applying the IPM, and two cases, first-order (FO) and second-order (SO), are calculated via Eqs. (9)-(17). Since GA exhibits fine global convergence and only needs function values, it is employed as the optimization operator for the GSA; the population size, crossover fraction, and migration fraction are set as 50, 0.9, and 0.3, respectively, and the stopping criterion is 5000 generations. Moreover, 1000 samples are selected within the uncertain domains to construct a BP neural network with three hidden layers and 60 hidden layer units for the proposed FANNBIA. All calculations are completed on a desktop computer with 16 GB RAM and an Intel(R) Core(TM) i5-4590 CPU clocked at 3.30 GHz. The following conclusions can be summarized:

(i) Fig. 8 shows that all these methods can approximate the real solution with high precision, which is further confirmed in Table 7 in a more intuitive way. Moreover, Table 7 shows that the errors of the approximate solutions derived from IPM and FANNBIA increase as the uncertainty level increases from 0 to 20%, reaching a maximum at 20%.

(ii) Although the uncertainty level in this example is up to 20%, Table 7 shows that the relative errors of σVMS derived from FANNBIA remain below 1.558%, a fine precision for most practical problems. The proposed FANNBIA thus retains fine precision even for large uncertainty levels.

(iii) As shown in Table 7, the response bounds yielded by FANNBIA match the results of IPM very well. The two methods have similar precision, so FANNBIA can be used as an effective alternative to the traditional IPM.

(iv) Table 8 clearly illustrates the much higher computational efficiency of IPM and FANNBIA compared with GSA. Therefore, the proposed FANNBIA can be considered an effective and efficient uncertainty analysis method for structural response problems with interval parameters.

(v) Table 7 shows that GSA has good precision overall, but it has no obvious advantage over the other methods in some cases, such as β = 8.0%. This is because GA uses random numbers to generate trial individuals, so the result of each run is different.

(vi) It is noted that the relationships between σVMS and the uncertain parameters are monotonic, so high accuracy and low computational cost can be achieved by applying IVM to this type of problem.

Fig. 8 Structural response bounds of a cantilever tube versus uncertainty levels

Table 7 The relative errors (%) of σVMS by different methods
Lower bound:
β (%)   IPM (FO)   IPM (SO)   GSA      FANNBIA (FO)   FANNBIA (SO)
0.1     0.0000     0.0000     0.0000   0.0000         0.0001
1.0     0.0022     0.0044     0.0000   0.0019         0.0047
2.0     0.0088     0.0177     0.0166   0.0036         0.0146
4.0     0.0345     0.0695     0.0036   0.0294         0.0694
6.0     0.0762     0.1540     0.1851   0.0783         0.1578
8.0     0.1331     0.2697     0.0001   0.1275         0.2661
10.0    0.2045     0.4153     0.0000   0.2044         0.4153
12.0    0.2896     0.5898     0.2584   0.2805         0.5846
14.0    0.3880     0.7920     0.0000   0.4342         0.8310
16.0    0.4990     1.0210     0.0095   0.4999         1.0214
18.0    0.6222     1.2761     0.0138   0.6213         1.2793
20.0    0.7572     1.5566     0.0000   0.7533         1.5577
Upper bound:
β (%)   IPM (FO)   IPM (SO)   GSA      FANNBIA (FO)   FANNBIA (SO)
0.1     0.0000     0.0000     0.0002   0.0001         0.0000
1.0     0.0023     0.0000     0.0001   0.0023         0.0001
2.0     0.0091     0.0000     0.0001   0.0179         0.0102
4.0     0.0373     0.0003     0.1011   0.0406         0.0023
6.0     0.0857     0.0014     0.0001   0.0879         0.0053
8.0     0.1558     0.0035     0.1860   0.1592         0.0099
10.0    0.2489     0.0073     0.2555   0.2491         0.0079
12.0    0.3667     0.0133     0.0017   0.3701         0.0244
14.0    0.5109     0.0223     0.1312   0.5150         0.0296
16.0    0.6836     0.0350     0.4344   0.6837         0.0359
18.0    0.8869     0.0523     0.0940   0.8979         0.0733
20.0    1.1231     0.0753     0.5288   1.1241         0.0821

Table 8 The computing time (s) of IVM, GSA, IPM, and FANNBIA
Case    IVM (EV)   GSA      IPM (FO)   IPM (SO)   FANNBIA (FO)   FANNBIA (SO)
Time    0.0001     3.6370   0.0519     0.3551     0.1798         0.6552

5.2. A Belleville spring

As the second example, a Belleville spring [38, 39] is depicted in Fig. 9. Here, we slightly modified it by adding four interval uncertainty parameters. The design parameters of this problem are the external diameter de, the internal diameter di, the thickness t, and the free height h. These parameters are all continuous, and the nominal values of de, di, t, and h are 0.3 m, 0.211 m, 7.272 mm, and 5.0 mm, respectively.

Fig. 9 A Belleville spring

The performance functions of this example are the rated load P (N) and the maximal stress σmax (Pa), which can be formulated as follows:

$$P = \frac{E\,\delta_{\max}}{(1 - v^2)\,\alpha\,(d_e/2)^2}\left[\left(h - \frac{\delta_{\max}}{2}\right)(h - \delta_{\max})\,t + t^3\right], \qquad (40)$$

$$\sigma_{\max} = \frac{E\,\delta_{\max}}{(1 - v^2)\,\alpha\,(d_e/2)^2}\left[\beta\left(h - \frac{\delta_{\max}}{2}\right) + \gamma\, t\right], \qquad (41)$$

where $E$ is Young's modulus (210 GPa), $\delta_{\max}$ is the maximum allowable deflection, which is equal to $h$, $v$ is Poisson's ratio (0.3), $\rho$ is the mass density (7850 kg/m³), and

$$\alpha = \frac{6}{\pi\ln K}\left(\frac{K - 1}{K}\right)^2, \quad \beta = \frac{6}{\pi\ln K}\left(\frac{K - 1}{\ln K} - 1\right), \quad \gamma = \frac{6}{\pi\ln K}\,\frac{K - 1}{2}, \quad K = \frac{d_e}{d_i}. \qquad (42)$$
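Eqs. (40)-(42) are likewise easy to evaluate directly; the following sketch computes P and σmax at the nominal design quoted above with δmax = h, and the function name is illustrative.

```python
# Belleville-spring performance functions, Eqs. (40)-(42).
import numpy as np

def belleville(de, di, t, h, E=210e9, v=0.3):
    K = de / di
    alpha = 6.0 / (np.pi * np.log(K)) * ((K - 1.0) / K) ** 2        # Eq. (42)
    beta  = 6.0 / (np.pi * np.log(K)) * ((K - 1.0) / np.log(K) - 1.0)
    gamma = 6.0 / (np.pi * np.log(K)) * (K - 1.0) / 2.0
    delta = h                                                       # maximum deflection
    c = E * delta / ((1.0 - v ** 2) * alpha * (de / 2.0) ** 2)
    P = c * ((h - delta / 2.0) * (h - delta) * t + t ** 3)          # Eq. (40)
    sigma_max = c * (beta * (h - delta / 2.0) + gamma * t)          # Eq. (41)
    return P, sigma_max

P, s = belleville(de=0.300, di=0.211, t=7.272e-3, h=5.0e-3)
print(P / 1e3, "kN", s / 1e9, "GPa")
```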



Obviously, this is an example with explicit functional relationships, and its partial derivatives can be obtained directly. de, di, t, and h are considered as interval parameters, and nine cases with different uncertainty levels are analyzed. The proposed method and the existing methods introduced in Section 2.2 are applied to obtain the interval bounds of P and σmax. It is noted that P and σmax are monotonically increasing functions of di, h, and t, and monotonically decreasing functions of de; thus IVM can be applied to obtain the exact values of the response bounds. Ten subintervals are used in Eq. (14) when applying the IPM, and two cases, first-order (FO) and second-order (SO), are calculated via Eqs. (9)-(17). GA is employed as the optimization operator for the GSA; the population size, crossover fraction, and migration fraction are set as 50, 0.9, and 0.3, respectively, and the stopping criterion is 5000 generations. Moreover, 1000 samples are selected within the uncertain domains to construct a BP neural network with three hidden layers and 60 hidden layer units for the proposed FANNBIA. The results versus uncertainty levels are plotted in Fig. 10, and their relative errors are listed in Tables 9 and 10. The following conclusions can be summarized:


(i) Fig. 10 shows that when the uncertainty level is small, i.e., β ≤ 4.0%, all these methods can approximate the real solution with high precision, which is further confirmed in Tables 9 and 10 in a more intuitive way. Moreover, Tables 9 and 10 show that the errors of the approximate solutions derived from IPM and FANNBIA increase as the uncertainty level increases from 0 to 10%, reaching a maximum at 10% that is still less than 1.9% in this example, a fine precision for most practical engineering problems.

(ii) Fig. 10 clearly illustrates that when the uncertainty level is 15%, the upper bounds of P and σmax obtained by the first-order IPM and FANNBIA show errors that cannot be ignored. This is because the performance functions of P and σmax are highly nonlinear. In such cases, although the second-order IPM and FANNBIA require more computational cost, their precision for P and σmax is much higher.

(iii) As shown in Tables 9 and 10, the results obtained by IPM and FANNBIA are almost identical, especially when the uncertainty level is small. Thus the two methods have similar precision, and the proposed FANNBIA can be used as an alternative to the traditional IPM.

(iv) Tables 9 and 10 show that when the uncertainty level is small, GSA has no obvious advantage in precision and is even worse than the other methods in some cases, such as β = 4.0% in Table 9 and β = 6.0% in Table 10. However, when the uncertainty level is large, it provides much higher confidence in approximating the exact solutions.

Fig. 10 Structural response bounds of a Belleville spring versus uncertainty levels: (a) the interval bounds of P; (b) the interval bounds of σmax

Table 9 The relative errors (%) of P by different methods
Lower bound:
β (%)   IPM (FO)   IPM (SO)   GSA      FANNBIA (FO)   FANNBIA (SO)
0.1     0.0001     0.0001     0.0002   0.0001         0.0001
1.0     0.0059     0.0118     0.1852   0.0059         0.0118
2.0     0.0223     0.0448     0.2223   0.0223         0.0448
4.0     0.0805     0.1625     0.6932   0.0805         0.1625
6.0     0.1660     0.3364     0.0184   0.1596         0.3330
8.0     0.2740     0.5572     0.0630   0.2711         0.5535
10.0    0.4021     0.8201     0.6768   0.3914         0.8146
12.0    0.5492     1.1232     0.0000   0.7584         1.2818
15.0    0.8060     1.6538     0.9301   0.7712         1.6161
Upper bound:
β (%)   IPM (FO)   IPM (SO)   GSA      FANNBIA (FO)   FANNBIA (SO)
0.1     0.0001     0.0000     0.0008   0.0001         0.0000
1.0     0.0067     0.0000     0.0000   0.0068         0.0002
2.0     0.0290     0.0004     0.0000   0.0290         0.0004
4.0     0.1373     0.0040     0.0002   0.1375         0.0043
6.0     0.3790     0.0190     0.0006   0.3804         0.0221
8.0     0.8651     0.0680     0.0001   0.8625         0.0652
10.0    1.8495     0.2202     0.0005   1.8839         0.2866
12.0    4.0036     0.7268     0.0199   4.5685         1.8827
15.0    16.0016    6.1290     0.0005   17.0250        8.8288

Table 10 The relative errors (%) of σmax by different methods
Lower bound:
β (%)   IPM (FO)   IPM (SO)   GSA      FANNBIA (FO)   FANNBIA (SO)
0.1     0.0000     0.0001     0.0005   0.0000         0.0001
1.0     0.0036     0.0072     0.0466   0.0036         0.0072
2.0     0.0132     0.0265     0.1180   0.0132         0.0265
4.0     0.0451     0.0911     0.0710   0.0451         0.0911
6.0     0.0881     0.1786     0.5635   0.0908         0.1805
8.0     0.1377     0.2800     1.2522   0.1360         0.2787
10.0    0.1914     0.3900     0.0064   0.1864         0.3867
12.0    0.2474     0.5054     0.6822   0.1602         0.4505
15.0    0.3343     0.6847     0.8164   0.3495         0.6981
Upper bound:
β (%)   IPM (FO)   IPM (SO)   GSA      FANNBIA (FO)   FANNBIA (SO)
0.1     0.0000     0.0000     0.0004   0.0001         0.0000
1.0     0.0043     0.0000     0.0004   0.0044         0.0001
2.0     0.0191     0.0003     0.0002   0.0192         0.0003
4.0     0.0957     0.0028     0.0001   0.0959         0.0030
6.0     0.2789     0.0139     0.0002   0.2828         0.0199
8.0     0.6721     0.0526     0.0004   0.6699         0.0505
10.0    1.5165     0.1802     0.0004   1.5343         0.2156
12.0    3.4644     0.6283     0.0001   3.7975         1.3324
15.0    15.0060    5.7468     0.0199   15.5572        7.2995


The subinterval number of FANNBIA has a significant impact on its accuracy and computational cost. The uncertainty level β=10% is selected as a special circumstance, and six cases are further calculated in which the subinterval number is 5, 6, 7, 8, 9, and 10, respectively. The results versus subinterval numbers are plotted in Fig. 11 for the purpose of comparison.

(a) The lower and upper bounds of P (kN) (b) The lower and upper bounds of σmax (GPa)
Fig. 11 Structural bounds versus subinterval numbers (legend: EV, FANNBIA (FO), FANNBIA (SO); horizontal axes: subinterval number, 5 to 10)

Fig. 11 shows that the response bounds of P and σmax obtained by FANNBIA become more accurate as the subinterval number increases from 5 to 10, at the expense of additional computational cost. For practical interval uncertainty problems, the uncertainty level is usually small, so only a few subintervals are needed. Under this circumstance, high precision and low computational cost can be achieved simultaneously by applying IPM and FANNBIA to solve nonlinear uncertain problems.

5.3. A rigid-flexible coupling dynamic model
A large-caliber artillery model, as illustrated in Fig. 12 [40, 41], is investigated as another example to verify the applicability of the proposed method to practical engineering problems. The flexible parts, including the barrel, cradle and top carriage, are created from finite element modal neutral files, whilst the remaining parts are rigid. The coupling of the rigid and flexible multibody models is realized by using the modal synthesis method. The connections between the front (rear) bush and the barrel, and between the toothed sector and the elevating gear, are set as rigid-flexible contacts. As a whole, this model contains 13 parts (including 3 flexible bodies), 5 revolute pairs, 3 sliding pairs, and 11 fixed joints, with a total of 133 degrees of freedom. For more details of this model, refer to Xiao et al. [40].

Fig. 12 The rigid-flexible coupling dynamics model (labeled components: barrel, breech, cradle, front bush, trunnion, elevating gear, toothed sector, top carriage, bottom carriage, trails, base plate)

The clearance between the barrel and the front bush cb, the vertical (horizontal) mass eccentricity of the recoiling parts ey (ez), the height of the trunnion center hy, and the axial position of the front bush lx are considered as interval uncertain parameters. These parameters are all continuous, and their nominal values are 1.05 mm, 6.23 mm, -2.56 mm, 0.0 mm, and 100.0 mm, respectively. The vibration parameters, including the vertical muzzle angular displacement Ωv and the vertical muzzle angular velocity Φv, are used as the structural responses to demonstrate the performance of the proposed FANNBIA. Obviously, the relationships between the interval parameters and the vibration behaviors are too complex to be derived analytically and require a computation-intensive simulation model. The IVM and the IPM cannot solve this problem because of the unknown monotonicity and the complex partial derivative calculations. Wang et al. [41] used a combination of GSA and FNNs in an optimization problem to search for the approximate structural response bounds. Here, the proposed FANNBIA is applied to obtain the interval bounds of Ωv and Φv, and the aforementioned method is used to generate the reference bounds. The results versus uncertainty levels (β) are plotted in Fig. 13, and the detailed values, as well as their relative errors from the reference ones, are listed in Tables 11-12. The intervals are divided into five subintervals in Eq. (14) when applying FANNBIA. GA is used as the optimization operator for the GSA; the population size, crossover fraction, and migration fraction are set as 50, 0.9, and 0.3, respectively, and the stopping criterion is set as 500 generations. Moreover, Latin hypercube sampling is employed to generate a total of 145 samples within the uncertain domains to construct a BP neural network with three hidden layers for this problem. We refer the readers to Table 3 in Section 3.2, p. 294 of [40] for the detailed data.
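For illustration, the sketch below reproduces this sampling and surrogate set-up in a simplified form: a Latin hypercube sample of the five interval parameters is drawn around their nominal values and a three-hidden-layer feedforward (BP) network is fitted to the responses. The layer widths, the scikit-learn regressor, the treatment of the zero-valued nominal of hy, and the run_dynamics_model evaluator are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Nominal values of cb, ey, ez, hy, lx (mm) and a 10% uncertainty level.
nominal = np.array([1.05, 6.23, -2.56, 0.0, 100.0])
beta = 0.10
# Interval radius; a unit radius is assumed for the zero-valued nominal of hy.
radius = beta * np.where(nominal != 0.0, np.abs(nominal), 1.0)
lo, up = nominal - radius, nominal + radius

def latin_hypercube(n, dim, rng):
    # One stratified, independently permuted coordinate per dimension.
    strata = rng.permuted(np.tile(np.arange(n), (dim, 1)), axis=1).T
    return (strata + rng.random((n, dim))) / n

def run_dynamics_model(x):
    # Cheap placeholder for the expensive rigid-flexible coupling simulation.
    return 0.008 + 1e-4 * x[0] - 2e-5 * x[1] + 3e-5 * x[2] ** 2 + 1e-5 * x[3] - 1e-6 * x[4]

X = lo + latin_hypercube(145, 5, rng) * (up - lo)   # 145 samples in the uncertain domain
y = np.array([run_dynamics_model(x) for x in X])

# Three hidden layers, as in the text; the widths are illustrative.
# (In practice the inputs would be normalized before training.)
net = MLPRegressor(hidden_layer_sizes=(20, 20, 20), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X, y)
```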

(a) The interval bounds of Ωv (deg) (b) The interval bounds of Φv (deg/s)
Fig. 13 Structural response bounds of a rigid-flexible coupling dynamics model versus uncertainty levels (legend: GSA, FANNBIA (FO), FANNBIA (SO); horizontal axes: uncertainty level (%))

Table 11 The interval bounds of Ωv (deg) and their relative errors (%, in parentheses) from reference values

        Lower bound                                          Upper bound
β (%)   GSA       FANNBIA (FO)       FANNBIA (SO)            GSA       FANNBIA (FO)       FANNBIA (SO)
0.1     0.00823   0.00823 (0.0037)   0.00823 (0.0037)        0.00828   0.00828 (0.0000)   0.00828 (0.0000)
1.0     0.00800   0.00799 (0.0056)   0.00799 (0.0077)        0.00852   0.00852 (0.0002)   0.00852 (0.0016)
2.0     0.00774   0.00774 (0.0051)   0.00774 (0.0145)        0.00878   0.00878 (0.0000)   0.00878 (0.0070)
4.0     0.00725   0.00724 (0.0291)   0.00724 (0.0732)        0.00931   0.00931 (0.0065)   0.00931 (0.0320)
6.0     0.00678   0.00678 (0.0637)   0.00677 (0.1770)        0.00981   0.00982 (0.1606)   0.00983 (0.2137)
8.0     0.00635   0.00634 (0.1325)   0.00633 (0.3540)        0.01031   0.01032 (0.0631)   0.01033 (0.1542)
10.0    0.00595   0.00594 (0.1997)   0.00592 (0.5676)        0.01077   0.01078 (0.1127)   0.01079 (0.2624)
12.0    0.00559   0.00557 (0.2838)   0.00554 (0.8316)        0.01118   0.01120 (0.1699)   0.01123 (0.3878)
14.0    0.00525   0.00523 (0.3012)   0.00519 (1.0565)        0.01157   0.01159 (0.2276)   0.01163 (0.5182)
16.0    0.00491   0.00488 (0.5653)   0.00483 (1.7078)        0.01191   0.01194 (0.2791)   0.01198 (0.6414)
18.0    0.00458   0.00454 (0.8988)   0.00447 (2.4188)        0.01222   0.01226 (0.3195)   0.01231 (0.7480)
20.0    0.00423   0.00420 (0.6069)   0.00412 (2.6395)        0.01249   0.01254 (0.3461)   0.01260 (0.8317)

Table 12 The interval bounds of Φv (deg/s) and their relative errors (%, in parentheses) from reference values

        Lower bound                                            Upper bound
β (%)   GSA        FANNBIA (FO)        FANNBIA (SO)            GSA        FANNBIA (FO)        FANNBIA (SO)
0.1     -3.91164   -3.91170 (0.0014)   -3.91170 (0.0014)       -3.89481   -3.89481 (0.0000)   -3.89481 (0.0000)
1.0     -3.98668   -3.98675 (0.0016)   -3.98678 (0.0026)       -3.81787   -3.81790 (0.0007)   -3.81786 (0.0003)
2.0     -4.06822   -4.06833 (0.0027)   -4.06848 (0.0063)       -3.73065   -3.73068 (0.0007)   -3.73052 (0.0034)
4.0     -4.22523   -4.22579 (0.0132)   -4.22637 (0.0269)       -3.55059   -3.55077 (0.0050)   -3.55016 (0.0122)
6.0     -4.37402   -4.37564 (0.0370)   -4.37691 (0.0659)       -3.36613   -3.36389 (0.0666)   -3.36250 (0.1078)
8.0     -4.51607   -4.51798 (0.0422)   -4.52015 (0.0903)       -3.16930   -3.17050 (0.0380)   -3.16798 (0.0418)
10.0    -4.64516   -4.65295 (0.1677)   -4.65622 (0.2382)       -2.97245   -2.97126 (0.0403)   -2.96725 (0.1752)
12.0    -4.77768   -4.78142 (0.0784)   -4.78583 (0.1707)       -2.76463   -2.76694 (0.0836)   -2.76115 (0.1259)
14.0    -4.89902   -4.90399 (0.1015)   -4.90962 (0.2165)       -2.55687   -2.55850 (0.0637)   -2.55067 (0.2423)
16.0    -5.01485   -5.02108 (0.1242)   -5.02804 (0.2631)       -2.34479   -2.34701 (0.0950)   -2.33701 (0.3316)
18.0    -5.12502   -5.13286 (0.1529)   -5.14145 (0.3206)       -2.13269   -2.13371 (0.0474)   -2.12151 (0.5247)
20.0    -5.23025   -5.23907 (0.1687)   -5.24948 (0.3678)       -1.93072   -1.91987 (0.5617)   -1.90561 (1.3004)

Fig. 13 shows that the present FANNBIA can approximate the real response bounds with high precision, especially when the uncertainty level is small. This is further confirmed, in a more intuitive way, by Tables 11-12: the results of FANNBIA almost match those of GSA. Moreover, as shown in Tables 11-12, the relative errors of Ωv and Φv derived from FANNBIA increase with the uncertainty level and reach their maxima at an uncertainty level of 20%, namely 2.6395% and 1.3004%, respectively. This precision is very satisfactory for general engineering problems. Furthermore, the computing times of GSA, FANNBIA (FO), and FANNBIA (SO) are 15.3407 s, 0.3550 s, and 2.9477 s, respectively, so the computational efficiency of FANNBIA is much higher than that of GSA. This example demonstrates that high accuracy and low computational cost can be achieved by the proposed method when solving practical engineering problems. It is also noted from Tables 11-12 that the proposed method still produces a slight overestimation, which cannot be totally avoided when using a Taylor expansion, but the error is confined within an acceptable range.

6. Conclusions
The first-order and higher-order derivative information may be unavailable in practical engineering systems. To overcome this drawback, the FNNs of the real structural responses with respect to the structure parameters are first constructed in the present work. The method to calculate the first-order and second-order partial derivatives with the FNNs is derived, so that the partial derivative information can be determined directly. The results of the detailed assessment show that an FNN is capable of approximating the first-order and second-order partial derivatives of an arbitrary function. However, it should be noted that the partial derivatives of the FNNs in the close neighborhood of the two bounds of the interval parameters are less accurate than those at other positions, which is more obvious in the second-order cases. Moreover, the method is prone to large deviations when computing the second-order derivatives of linear functions. Fortunately, these defects can commonly be overcome when the FNNs have appropriate numbers of hidden-layer units and hidden layers. Thus, the appropriate structure parameters of the FNNs need to be specified. Furthermore, only classical activation functions are considered in this paper. Other activation functions, such as the sigmoidal functions in [35, 36], can potentially be applied to approximate the derivatives of a function with an arbitrary degree of accuracy using fewer hidden layers and fewer hidden units. This problem warrants further study.

A new interval uncertainty analysis method based on FNN differentiation has been proposed for the calculation of the structural response bounds of nonlinear systems with uncertain-but-bounded parameters. Based on the first-order or second-order parameter perturbation method using these partial derivatives, the extrema of the FNNs can be approximated without requiring much computational time. Moreover, the subinterval method is introduced to obtain more accurate and reliable results for structural response problems with relatively large interval uncertain parameters. Two numerical examples and an engineering application show that the present method can achieve fine precision and low computational cost simultaneously when solving nonlinear uncertain problems. Although it still produces a slight overestimation, the error is confined within an acceptable range. The proposed method provides an alternative way for the interval uncertainty analysis of practical engineering problems and is especially suitable for complex, non-differentiable, nonlinear systems.

Acknowledgments
This research was financially supported by the National Natural Science Foundation of China [Grant Nos. 301070603 and 11572158]. The first author is supported by the China Scholarship Council [Grant No. 201806840028] and the University of Alberta.


References
[1] S.S. Isukapalli, A. Roy, P.G. Georgopoulos, Stochastic response surface methods (SRSMs) for uncertainty propagation: Application to environmental and biological systems, Risk Anal. 18 (1998) 351–363. doi:10.1111/j.1539-6924.1998.tb01301.x.
[2] S. Huang, S. Mahadevan, R. Rebba, Collocation-based stochastic finite element analysis for random field problems, Probabilistic Eng. Mech. 22 (2007) 194–205. doi:10.1016/j.probengmech.2006.11.004.
[3] M.M.R. Williams, A method for solving stochastic eigenvalue problems II, Appl. Math. Comput. 219 (2013) 4729–4744. doi:10.1016/j.amc.2012.10.089.
[4] A.S. Balu, B.N. Rao, High dimensional model representation based formulations for fuzzy finite element analysis of structures, Finite Elem. Anal. Des. 50 (2012) 217–230. doi:10.1016/j.finel.2011.09.012.
[5] H. Shah, S. Hosder, T. Winter, Quantification of margins and mixed uncertainties using evidence theory and stochastic expansions, Reliab. Eng. Syst. Saf. 138 (2015) 59–72. doi:10.1016/j.ress.2015.01.012.
[6] R.E. Moore, Methods and Applications of Interval Analysis, Society for Industrial and Applied Mathematics, 1979.
[7] G. Alefeld, J. Herzberger, Introduction to Interval Computation, Academic Press, 1983.
[8] H.U. Köylüoğlu, A. Çakmak, S.R.K. Nielsen, Interval algebra to deal with pattern loading and structural uncertainties, J. Eng. Mech. 121 (1995) 1149–1157. doi:10.1061/(ASCE)0733-9399(1995)121:11(1149).
[9] R.L. Muhanna, R.L. Mullen, Uncertainty in mechanics problems - interval-based approach, J. Eng. Mech. 127 (2001) 557–566. doi:10.1061/(ASCE)0733-9399(2001)127:6(557).
[10] J. Wu, Y. Zhang, L. Chen, Z. Luo, A Chebyshev interval method for nonlinear dynamic systems under uncertainty, Appl. Math. Model. 37 (2013) 4578–4591. doi:10.1016/j.apm.2012.09.073.
[11] Z. Qiu, X. Wang, J. Chen, Exact bounds for the static response set of structures with uncertain-but-bounded parameters, Int. J. Solids Struct. 43 (2006) 6574–6593. doi:10.1016/j.ijsolstr.2006.01.012.
[12] Z. Qiu, Y. Xia, J. Yang, The static displacement and the stress analysis of structures with bounded uncertainties using the vertex solution theorem, Comput. Methods Appl. Mech. Eng. 196 (2007) 4965–4984. doi:10.1016/j.cma.2007.06.022.
[13] Z. Qiu, Z. Lv, The vertex solution theorem and its coupled framework for static analysis of structures with interval parameters, Int. J. Numer. Methods Eng. 112 (2017) 711–736. doi:10.1002/nme.5523.
[14] S. Chen, H. Lian, X. Yang, Interval static displacement analysis for structures with interval parameters, Int. J. Numer. Methods Eng. 53 (2002) 393–407. doi:10.1002/nme.281.
[15] S. McWilliam, Anti-optimization of uncertain structures using interval analysis, Comput. Struct. 79 (2001) 421–430. doi:10.1016/S0045-7949(00)00143-7.
[16] N. Impollonia, G. Muscolino, Interval analysis of structures with uncertain-but-bounded axial stiffness, Comput. Methods Appl. Mech. Eng. 200 (2011) 1945–1962. doi:10.1016/j.cma.2010.07.019.
[17] C. Wang, Z. Qiu, X. Wang, D. Wu, Interval finite element analysis and reliability-based optimization of coupled structural-acoustic system with uncertain parameters, Finite Elem. Anal. Des. 91 (2014) 108–114. doi:10.1016/j.finel.2014.07.014.
[18] Z. Qiu, X. Wang, Parameter perturbation method for dynamic responses of structures with uncertain-but-bounded parameters based on interval analysis, Int. J. Solids Struct. 42 (2005) 4958–4970. doi:10.1016/j.ijsolstr.2005.02.023.
[19] Z. Qiu, I. Elishakoff, Antioptimization of structures with large uncertain-but-non-random parameters via interval analysis, Comput. Methods Appl. Mech. Eng. 152 (1998) 361–372. doi:10.1016/S0045-7825(96)01211-X.
[20] Y.T. Zhou, C. Jiang, X. Han, Interval and subinterval analysis methods of the structural analysis and their error estimations, Int. J. Comput. Methods. 3 (2006) 229–244. doi:10.1142/S0219876206000771.
[21] B. Xia, D. Yu, Modified sub-interval perturbation finite element method for 2D acoustic field prediction with large uncertain-but-bounded parameters, J. Sound Vib. 331 (2012) 3774–3790. doi:10.1016/j.jsv.2012.03.024.
[22] C. Jiang, X. Han, G.R. Liu, G.P. Liu, A nonlinear interval number programming method for uncertain optimization problems, Eur. J. Oper. Res. 188 (2008) 1–13. doi:10.1016/j.ejor.2007.03.031.
[23] C. Jiang, X. Han, G.P. Liu, A sequential nonlinear interval number programming method for uncertain structures, Comput. Methods Appl. Mech. Eng. 197 (2008) 4250–4265. doi:10.1016/j.cma.2008.04.027.
[24] J. Cheng, Z. Liu, M. Tang, J. Tan, Robust optimization of uncertain structures based on normalized violation degree of interval constraint, Comput. Struct. 182 (2017) 41–54. doi:10.1016/j.compstruc.2016.10.010.
[25] Q. Li, Z. Qiu, X. Zhang, Eigenvalue analysis of structures with interval parameters using the second-order Taylor series expansion and the DCA for QB, Appl. Math. Model. 49 (2017) 680–690. doi:10.1016/j.apm.2017.02.041.
[26] Y. Liu, X. Wang, L. Wang, Interval uncertainty analysis for static response of structures using radial basis functions, Appl. Math. Model. 69 (2019) 425–440. doi:10.1016/j.apm.2018.12.018.
[27] Y. Liu, X. Wang, L. Wang, Z. Lv, A Bayesian collocation method for static analysis of structures with unknown-but-bounded uncertainties, Comput. Methods Appl. Mech. Eng. 346 (2019) 727–745. doi:10.1016/j.cma.2018.08.043.
[28] K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural Networks. 2 (1989) 359–366. doi:10.1016/0893-6080(89)90020-8.
[29] K. Hornik, M. Stinchcombe, H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks, Neural Networks. 3 (1990) 551–560. doi:10.1016/0893-6080(90)90005-6.
[30] D. Costarelli, R. Spigler, Constructive approximation by superposition of sigmoidal functions, Anal. Theory Appl. 29 (2013) 169–196. doi:10.4208/ata.2013.v29.n2.8.
[31] M. Leshno, V.Y. Lin, A. Pinkus, S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural Networks. 6 (1993) 861–867. doi:10.1016/S0893-6080(05)80131-5.
[32] A. Pinkus, Approximation theory of the MLP model in neural networks, Acta Numer. 8 (1999) 143–195. doi:10.1017/S0962492900002919.
[33] V. Maiorov, A. Pinkus, Lower bounds for approximation by MLP neural networks, Neurocomputing. 25 (1999) 81–91. doi:10.1016/S0925-2312(98)00111-8.
[34] S. Hashem, Sensitivity analysis for feedforward artificial neural networks with differentiable activation functions, in: Proc. 1992 Int. Jt. Conf. Neural Networks. 1 (1992) 419–424. doi:10.1109/IJCNN.1992.287175.
[35] N.J. Guliyev, V.E. Ismailov, On the approximation by single hidden layer feedforward neural networks with fixed weights, Neural Networks. 98 (2018) 296–304. doi:10.1016/j.neunet.2017.12.007.
[36] N.J. Guliyev, V.E. Ismailov, Approximation capability of two hidden layer feedforward neural networks with fixed weights, Neurocomputing. 316 (2018) 262–269. doi:10.1016/j.neucom.2018.07.075.
[37] J. Guo, X. Du, Reliability sensitivity analysis with random and interval variables, Int. J. Numer. Methods Eng. 78 (2009) 1585–1617. doi:10.1002/nme.2543.
[38] S. Gunawan, Parameter sensitivity measures for single objective, multi-objective, and feasibility robust design optimization, University of Maryland, 2004.
[39] N. Hirokawa, K. Fujita, Mini-max type formulation of strict robust design optimization under correlative variation, in: Proc. ASME Des. Eng. Tech. Conf., 2002: pp. 75–86.
[40] H. Xiao, G. Yang, J. Ge, Surrogate-based multi-objective optimization of firing accuracy and firing stability for a towed artillery, J. Vibroengineering. 19 (2017) 290–301. doi:10.21595/jve.2016.17108.
[41] L. Wang, G. Yang, H. Xiao, Q. Sun, J. Ge, Interval optimization for structural dynamic responses of an artillery system under uncertainty, Eng. Optim. (2019). doi:10.1080/0305215X.2019.1590563.
