Semi-empirical Neural Network Based Approach to Modelling and Simulation of Controlled Dynamical Systems

Available online at www.sciencedirect.com
ScienceDirect
Procedia Computer Science 123 (2018) 134–139
doi: 10.1016/j.procs.2018.01.022

8th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2017

Mikhail V. Egorchev and Yury V. Tiumentsev
Moscow Aviation Institute (National Research University), Moscow, Russia
[email protected], [email protected]

Abstract

A modelling and simulation approach is discussed for nonlinear controlled dynamical systems under multiple and diverse uncertainties. The main goal is to demonstrate the capabilities of semi-empirical neural network based models, which combine theoretical domain-specific knowledge with the training tools of the artificial neural network field. Training of the dynamical neural network model for multi-step ahead prediction is performed in a sequential fashion. Computational experiments are carried out to confirm the efficiency of the proposed approach.

Keywords: nonlinear dynamical system, semi-empirical model, neural network, sequential learning

1877-0509 © 2018 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
Peer-review under responsibility of the scientific committee of the 8th Annual International Conference on Biologically Inspired Cognitive Architectures.

1 Introduction

A modelling and simulation problem for a multidimensional, highly nonlinear and nonstationary controlled dynamical system, such as a maneuverable aircraft, is considered. The traditional approach to mathematical modelling and computer simulation of dynamical systems relies upon differential equations. However, dynamical system models based on differential equations lack adaptivity, which motivates the search for alternatives. One possibility would be to develop dynamical system models based on artificial neural networks (ANN). This would allow for model adaptivity, but at the same time it would significantly restrict the admissible level of complexity of the plant and thus prohibit application to most practical problems. The reason is that the traditional ANN approach treats the plant as a “black box” [17], which leads to a significant increase of model dimensionality and, as a consequence, to a growth of the required training dataset size up to values unattainable in real-world problems. The basic idea of the suggested approach is to introduce available theoretical knowledge about the plant into the purely empirical model in order to decrease both model dimensionality and the required training set size. Such semi-empirical (“gray box”) models [5, 3, 15] possess the required adaptivity feature and utilize both theoretical knowledge about the plant and experimental data on its behavior. Models of this class attain high accuracy and performance, as evidenced by computational experiments.

Learning recurrent neural networks to perform multistep prediction is a difficult optimization problem. In the following sections, we present a sequential learning algorithm designed to




circumvent some of these difficulties, and we illustrate the efficiency of the proposed approach by results of computer simulations.

2 Semi-empirical neural network based model development

The development process for a semi-empirical neural network based model of a dynamical system consists of the following stages:

1. development of a continuous-time theoretical model for the considered dynamical system, as well as acquisition of experimental data about the behavior of the system;
2. accuracy assessment for the theoretical model of the dynamical system using the collected data;
3. conversion of the original continuous-time model into a discrete-time model [18];
4. generation of an ANN-representation for the discrete-time model [4, 12];
5. learning of the ANN-model [13];
6. structural adjustment of the ANN-model to fit modelling accuracy requirements.

To estimate the efficiency of the proposed approach, let us consider a problem of modelling and simulation of aircraft three-axis rotational motion. The traditional continuous-time theoretical model for aircraft flight dynamics consists of 14 ordinary differential equations [2], omitted here for the sake of brevity. State variables of the corresponding dynamical system include: roll angular rate p, pitch angular rate q and yaw angular rate r (degree/second); roll angle φ, yaw angle ψ and pitch angle θ (degree); angle of attack α and angle of sideslip β (degree); angles of all-moving tailplane deflection δe, rudder deflection δr and aileron deflection δa (degree); and the angular rates of all-moving tailplane, rudder and aileron deflections δ̇e, δ̇r, δ̇a (degree/second), respectively. Control inputs include the command signals supplied to the all-moving tailplane, rudder and aileron: δe_act, δr_act, δa_act (degree), respectively. This theoretical model contains 6 unknown nonlinear functions of several variables that correspond to the aerodynamic coefficients of the axial Cx(α, β, δe, q), transverse Cy(α, β, δr, δa, p, r) and normal Cz(α, β, δe, q) aerodynamic forces, as well as the roll Cl(α, β, δe, δr, δa, p, r), pitch Cm(α, β, δe, q) and yaw Cn(α, β, δe, δr, δa, p, r) aerodynamic moments.
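As a toy illustration of stage 3 (conversion of the continuous-time model into a discrete-time one), the sketch below applies an explicit Euler scheme to a deliberately simplified two-state longitudinal model. The dynamics function, its coefficients and the step size are invented for illustration; they are not the paper's 14-equation aircraft model.

```python
import numpy as np

def f(x, u):
    """Continuous-time dynamics dx/dt = f(x, u) of a hypothetical short-period model.
    x = [alpha, q] (angle of attack, pitch rate); u = tailplane deflection.
    All coefficients are made-up illustrative values."""
    alpha, q = x
    d_alpha = q - 0.5 * alpha                # simplified lift contribution
    d_q = -2.0 * alpha - 0.8 * q + 1.5 * u   # simplified pitch moment
    return np.array([d_alpha, d_q])

def euler_step(x, u, dt):
    """One explicit Euler step: x(k+1) = x(k) + dt * f(x(k), u(k))."""
    return x + dt * f(x, u)

# Simulate 100 steps from rest under a small constant control input.
dt, x = 0.01, np.zeros(2)
for _ in range(100):
    x = euler_step(x, u=0.1, dt=dt)
```

The same one-step recursion is what the Euler difference scheme in Fig. 1a encodes as a layered network, with the unknown aerodynamic terms supplied by trainable modules.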
These six unknown aerodynamic coefficient functions are replaced by feedforward neural network modules with one hidden layer. The hidden layers include 1, 5, 3, 5, 10 and 5 neurons with sigmoid activation functions for the modules Cx, Cy, Cz, Cl, Cm and Cn, respectively. Output layer neurons have linear activation functions. This semi-empirical neural network based model has a rather complex structure, so for illustration purposes we consider a restricted case of longitudinal rotational motion: the structure of this simplified model, based on the Euler difference scheme, is given in Fig. 1a. Colored arrows in the figure correspond to inter-neuron connections with varying (“learnable”) weights; colored nodes represent neurons that have at least one such connection as an input. For comparison, the structure of the completely empirical NARX model (Nonlinear AutoRegressive neural network with eXogenous inputs) is given in Fig. 1b.
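A single such module can be sketched as a minimal one-hidden-layer network with sigmoid hidden units and a linear output neuron, standing in for, e.g., the pitch moment coefficient Cm(α, β, δe, q) with its 10 hidden neurons. The weight values below are random placeholders rather than trained values; in the semi-empirical model they would be fitted from flight data.

```python
import numpy as np

rng = np.random.default_rng(0)

class CoefficientModule:
    """One-hidden-layer feedforward module: sigmoid hidden layer, linear output."""
    def __init__(self, n_inputs=4, n_hidden=10):
        self.W1 = 0.1 * rng.standard_normal((n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        self.W2 = 0.1 * rng.standard_normal(n_hidden)
        self.b2 = 0.0

    def __call__(self, x):
        h = 1.0 / (1.0 + np.exp(-(self.W1 @ x + self.b1)))  # sigmoid hidden layer
        return self.W2 @ h + self.b2                         # linear output neuron

c_m = CoefficientModule()
value = c_m(np.array([2.0, 0.0, -5.0, 0.1]))  # C_m at some (alpha, beta, delta_e, q)
```

Plugging such modules into the discretized equations of motion is what distinguishes the gray-box model of Fig. 1a from the black-box NARX model of Fig. 1b.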

3 Sequential learning algorithm and simulation results

Learning of long input sequences with recurrent ANNs is difficult due to the existence of spurious valleys on the error surface [9], effects of exponential decrease or increase of the gradient norm [16], and possible unbounded growth of network outputs. Thus, gradient optimization methods fail to find a satisfactory solution except for rare occurrences when the initial values of the network parameters are very close to such a solution. Now we might consider the problem of finding such initial values for the network parameters. This problem may be stated as an optimization problem for a slightly perturbed objective function of the original problem. Following this logic, we need to find a sequence of optimization problems such that the first one is simple to solve for almost any initial values of the network parameters, each subsequent problem is similar to the previous one, and the sequence converges to the original difficult problem. A similar approach was previously discussed in [7, 11, 19, 1] and was reported to provide a substantial improvement of learning results. For the problem of learning recurrent ANNs to perform multistep prediction, it is natural to suggest a sequence of optimization problems generated by varying the prediction horizon length k:

J_k(\{x_i, u_i\}_{i=1}^{n}, w) = \frac{1}{k(n-k)} \sum_{i=1}^{n-k} \sum_{j=1}^{k} \bigl\| x_{i+j} - \mathrm{net}(\ldots \mathrm{net}(x_i, u_i; w), \ldots, u_{i+j-1}; w) \bigr\|,

where x_i and u_i represent the vector of state variables and the vector of control variables at discrete time instant i, and w is the vector of adjustable parameters of the neural network model. We also use the notation argmin_w J_k(X, w; w_0) for a general iterative minimization algorithm applied to the objective function J_k on dataset X using w_0 as the initial guess for the parameter values w.

[Figure 1: ANN-models of aircraft longitudinal rotational motion: (a) semi-empirical neural network model; (b) empirical NARX model.]

The proposed sequential learning algorithm basically works as follows: it starts by minimizing the 1-step prediction objective function for n − 1 initial states (all states x_i from a training set trajectory, except for the last one); it then increments the prediction horizon, excludes the last initial state and performs minimization using the previously found minimum as the initial guess; it terminates when the prediction horizon equals n − 1 and there is only one initial state x_1. The rest of the algorithm performs some form of steplength adaptation for the prediction horizon increment. This algorithm has proven successful for learning a semi-empirical recurrent neural network to perform 1000-step prediction of aircraft motion. The Levenberg-Marquardt algorithm [8] was used to find solutions to the intermediate optimization problems on steps 7 and 11 of Algorithm 1. The Real-Time Recurrent Learning algorithm [10] was used for computation of the Jacobi matrices. A representative training set is obtained using a polyharmonic (multisine) excitation signal that has proven to be very effective for the considered class of problems [6]. The test set is generated using a random-steps excitation signal. Simulation results are given in Table 1 and Fig. 2. From Fig. 2 we see that the prediction errors for all observable state variables are sufficiently small. Moreover, these errors do not tend to increase with time, which serves as evidence of




Algorithm 1 Sequential learning algorithm for dynamic neural network model

1: Prepare training set X^train ← {x_i^train, u_i^train}_{i=1}^{n}
2: Prepare validation set X^val ← {x_i^val, u_i^val}_{i=1}^{m}
3: Choose target value ε_goal for the objective function
4: Choose maximum acceptable increase Δ_max in the objective function on the training set
5: Choose maximum acceptable number s_max of subsequent learning epochs that increase the objective function on the validation set, and initialize the corresponding counter s ← 0
6: Choose initial values w* for the model parameters (for example, random ones)
7: Initialize the current number of prediction steps k* ← 1 and solve the optimization problem w* ← argmin_w J_{k*}(X^train, w; w*)
8: If J_{k*}(X^train, w*) > ε_goal, return to step 6
9: Find the minimum number of prediction steps k+ ∈ [k*, n − 1] such that J_{k+}(X^train, w*) ≥ J_{k*}(X^train, w*) + Δ_max
10: If k+ = k*, return to step 6
11: Perform backtracking to find the maximum number of prediction steps k− ∈ [k*, k+] such that the solution of the optimization problem w− ← argmin_w J_{k−}(X^train, w; w*) satisfies J_{k−}(X^train, w−) ≤ ε_goal
12: If k− = k*, return to step 6
13: If the validation error increases, J_{m−1}(X^val, w−) > J_{m−1}(X^val, w*), increment s ← s + 1
14: If s > s_max, return to step 6
15: Accept w* ← w− and k* ← k−
16: If k* < n − 1, return to step 9; otherwise terminate: w* are the desired model parameters.

Number of prediction steps   RMSE_α   RMSE_β   RMSE_p   RMSE_q   RMSE_r
2                            0.1376   0.2100   1.5238   0.4517   0.4523
4                            0.1550   0.0870   0.5673   0.4069   0.2738
6                            0.1647   0.0663   0.4270   0.3973   0.2021
9                            0.1316   0.0183   0.1751   0.2931   0.0530
14                           0.0533   0.0109   0.1366   0.1116   0.0300
21                           0.0171   0.0080   0.0972   0.0399   0.0193
1000                         0.0171   0.0080   0.0972   0.0399   0.0193

Table 1: Simulation errors on the test set for the semi-empirical model at different learning stages
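The core horizon-growing idea of Algorithm 1 can be sketched in a few lines. The example below is a strong simplification: a hypothetical scalar linear model net(x, u; w) = w0·x + w1·u is fitted to noise-free toy data by finite-difference gradient descent, warm-starting each longer horizon from the previous solution; the backtracking and validation logic of steps 9-16 is omitted, and the paper itself uses Levenberg-Marquardt with RTRL-computed Jacobians rather than gradient descent.

```python
import numpy as np

def net(x, u, w):
    """Toy one-step model standing in for the recurrent ANN."""
    return w[0] * x + w[1] * u

def J_k(xs, us, w, k):
    """Mean squared k-step-ahead prediction error over all admissible start points."""
    n = len(xs) - 1
    total, count = 0.0, 0
    for i in range(n - k + 1):
        x = xs[i]
        for j in range(1, k + 1):
            x = net(x, us[i + j - 1], w)   # roll the model forward j steps
            total += (xs[i + j] - x) ** 2
            count += 1
    return total / count

# Toy trajectory generated by the "true" dynamics x(k+1) = 0.9 x(k) + 0.5 u(k).
rng = np.random.default_rng(1)
us = rng.uniform(-1, 1, 50)
xs = [0.0]
for u in us:
    xs.append(0.9 * xs[-1] + 0.5 * u)
xs = np.array(xs)

w = np.array([0.5, 0.0])          # deliberately wrong initial parameters
for k in [1, 2, 4, 8]:            # growing prediction horizon, warm-started
    for _ in range(500):
        eps = 1e-5                # central finite-difference gradient of J_k
        grad = np.array([(J_k(xs, us, w + eps * e, k) - J_k(xs, us, w - eps * e, k))
                         / (2 * eps) for e in np.eye(2)])
        w -= 0.1 * grad
```

After the short-horizon stage the parameters are already close to the true values (0.9, 0.5), so the longer-horizon problems, which would be hard from a random start, converge immediately; this is exactly the continuation effect the sequential algorithm exploits.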

the good generalization ability of the neural network model. Changes of the model prediction error at intermediate stages of the learning algorithm are presented in Table 1. We can also estimate the error of the identified aerodynamic coefficients by comparing the outputs of the corresponding neural network modules with available experimental data [14]. Root mean-square errors for the modules are: RMSE_Cy = 5.4257 × 10^−4, RMSE_Cz = 9.2759 × 10^−4, RMSE_Cl = 2.1496 × 10^−5, RMSE_Cm = 1.4952 × 10^−4, RMSE_Cn = 1.3873 × 10^−5. It is important to note that a conceptually similar approach was suggested in [7]. That algorithm imposed limitations on the temporal window within which patterns could be processed by a recurrent neural network. This was performed by periodically restricting access to the network's prior internal states via the recurrent connections. These limitations were gradually relaxed as learning proceeded. Such an approach was found to resemble the conditions under which children


learn natural language, namely the working memory capacity increase that occurs during maturational changes. It was hypothesized that these early limitations assist efficient learning of natural language.

[Figure 2: Generalization error estimate: Eα, Eβ, Ep, Eq, Er represent prediction errors for the corresponding observed values; δe_act, δr_act, δa_act represent the test excitation signals.]
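A polyharmonic (multisine) excitation signal of the kind used above to build the training set can be sketched as follows. The base frequency, number of harmonics, amplitudes and random-phase choice are illustrative assumptions, not the actual test-signal design of [6].

```python
import numpy as np

def multisine(t, f0=0.05, n_harmonics=8, amplitude=1.0, seed=0):
    """Sum of n_harmonics sinusoids at integer multiples of base frequency f0 (Hz),
    with random phases; normalized so the peak stays within +/- amplitude."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0, 2 * np.pi, n_harmonics)
    s = sum(np.sin(2 * np.pi * f0 * (k + 1) * t + phases[k])
            for k in range(n_harmonics))
    return amplitude * s / n_harmonics

t = np.linspace(0.0, 40.0, 4001)   # a 40 s record, matching the plotted time span
signal = multisine(t)
```

Such a signal excites several frequencies of the plant simultaneously, which is why it yields a representative training set with a comparatively short flight-test record.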

4 Conclusions

Simulation results clearly demonstrate that the proposed ANN-based approach to modelling complex nonlinear dynamical systems is very effective from the standpoint of simulation accuracy, especially if we combine ANN learning techniques with some knowledge about the simulated object. This approach can be implemented for systems operating under various uncertainty conditions using adaptation mechanisms based on ANN training tools. The suggested sequential learning algorithm also proves useful for training such models to perform multistep prediction.




References

[1] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 41–48, New York, NY, USA, 2009. ACM.
[2] A. F. Bochkariov, V. V. Andreyevsky, V. M. Belokonov, V. I. Klimov, and V. M. Turapin. Aeromechanics of Airplane: Flight Dynamics. Mashinostroyeniye, Moscow, 2nd edition, 1985. [In Russian].
[3] G. Dreyfus. Neural Networks: Methodology and Applications. Springer, 2005.
[4] G. Dreyfus and Y. Idan. The canonical form of nonlinear discrete-time models. Neural Computation, 10(1):133–164, Jan 1998.
[5] M. V. Egorchev, D. S. Kozlov, and Yu. V. Tiumentsev. Neural network adaptive semi-empirical models for aircraft controlled motion. In Proceedings of the 29th Congress of the International Council of the Aeronautical Sciences, volume 4, Sep 2014.
[6] M. V. Egorchev and Yu. V. Tiumentsev. Learning of semi-empirical neural network model of aircraft three-axis rotational motion. Optical Memory and Neural Networks, 24(3):201–208, July 2015.
[7] J. L. Elman. Learning and development in neural networks: the importance of starting small. Cognition, 48(1):71–99, 1993.
[8] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, Upper Saddle River, NJ, USA, 2nd edition, 1998.
[9] J. Horn, O. De Jesus, and M. T. Hagan. Spurious valleys in the error surface of recurrent networks: Analysis and avoidance. IEEE Transactions on Neural Networks, 20(4):686–700, April 2009.
[10] O. De Jesus and M. T. Hagan. Backpropagation algorithms for a broad class of dynamic networks. IEEE Transactions on Neural Networks, 18(1):14–27, Jan 2007.
[11] J. Ludik and I. Cloete. Incremental increased complexity training. In ESANN 1994, 2nd European Symposium on Artificial Neural Networks, Brussels, Belgium, April 20-22, 1994, Proceedings, 1994.
[12] O. Nerrand, P. Roussel-Ragot, L. Personnaz, G. Dreyfus, and S. Marcos. Neural networks and nonlinear adaptive filtering: Unifying concepts and new algorithms. Neural Computation, 5(2):165–199, March 1993.
[13] O. Nerrand, P. Roussel-Ragot, D. Urbani, L. Personnaz, and G. Dreyfus. Training recurrent neural networks: why and how? An illustration in dynamical process modeling. IEEE Transactions on Neural Networks, 5(2):178–184, Mar 1994.
[14] L. T. Nguyen, M. E. Ogburn, W. P. Gilbert, K. S. Kibler, P. W. Brown, and P. L. Deal. Simulator study of stall/post-stall characteristics of a fighter airplane with relaxed longitudinal static stability. Technical Report TP-1538, NASA, December 1979.
[15] Y. Oussar and G. Dreyfus. How to be a gray box: dynamic semi-physical modeling. Neural Networks, 14(9):1161–1172, 2001.
[16] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML'13, pages III-1310–III-1318. JMLR.org, 2013.
[17] I. Rivals and L. Personnaz. Black-Box Modeling with State-Space Neural Networks, volume 15 of Robotics and Intelligent Systems, pages 237–264. World Scientific Pub Co Inc, 1996.
[18] L. R. Scott. Numerical Analysis. Princeton University Press, 2011.
[19] J. A. K. Suykens and J. Vandewalle. Learning a simple recurrent neural state space model to behave like Chua's double scroll. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 42(8):499–502, Aug 1995.
