Neurocomputing 28 (1999) 37–51
Implementation of neural network based non-linear predictive control

P.H. Sørensen*, M. Nørgaard, O. Ravn, N.K. Poulsen

Department of Automation, Bldg. 326, Technical University of Denmark, 2800 Lyngby, Denmark
Department of Mathematical Modelling, Bldg. 321, Technical University of Denmark, 2800 Lyngby, Denmark
Abstract

This paper describes a control method for non-linear systems based on generalized predictive control. Generalized predictive control (GPC) was developed to control linear systems, including open-loop unstable and non-minimum-phase systems, but it has also been proposed as an extension for the control of non-linear systems. GPC is model based, and in this paper we propose the use of a neural network to model the system. Based on the neural network model, a controller with an extended control horizon is developed and the implementation issues are discussed, with particular emphasis on an efficient quasi-Newton algorithm. The performance is demonstrated on a pneumatic servo system. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Predictive control; Quasi-Newton optimization; Implementation issues
1. Introduction

Neural networks have been used in non-linear control applications in a variety of ways, which can basically be divided into two classes: control systems where the controller itself is a neural network [9,16,2], and model-based control systems where the model of the non-linear dynamic system is a neural network. The former methods largely make use of some form of inverse neural model, which often gives rise to severe stability and robustness problems in the control of non-minimum-phase systems, or of systems whose inverse is close to the stability boundary. One form of control employing only a neural network model of the system which has received some attention recently is known as 'local instantaneous linear
model' [11,15,20]. In this case, a linear model is extracted from the neural network model at each sample and used in the controller design. This opens access to the whole range of linear control designs, but it requires that the system dynamics do not change too rapidly. We will pursue a model-based control strategy originating in the idea of predictive control. Predictive control can stabilize open-loop unstable systems and non-minimum-phase systems, and even in the non-linear extension it possesses some attractive stability features [13]. The basis for predictive control is the idea of the receding horizon, which is intuitively simple and reduces tuning to selecting a few horizon parameters and a single weighting factor. In this paper we follow the ideas of generalized predictive control as proposed in [3,4], extended to include a neural network model. Neural-network-based predictive control has been dealt with in several papers; see [17,22,8,14,15,19]. Usually, the control horizon is limited to one, as the controller in this case is far easier to implement and is sufficiently flexible for many practical applications. The implementation of controllers with general control horizons is treated in [14,19]. In the former case, a Newton-based Levenberg–Marquardt algorithm is used, while in the latter paper an ordinary Newton algorithm is suggested. Both schemes yield fast convergence, but they are difficult to implement as they require both the gradient and the Hessian of the cost function. In this paper we detail a very efficient algorithm for dealing with longer control horizons, based on a quasi-Newton search method. The convergence is similar to that of a full Newton-based algorithm, and for the problem under consideration, fewer computations are generally required in each iteration. Additionally, the quasi-Newton method is much easier to implement, as the exact Hessian is not required.
Thus, we believe that the optimization task is reduced enough to be applicable in real time for systems with reasonable time constants.
2. Generalized predictive control

The idea behind generalized predictive control is, at each iteration, to minimize a criterion of the following type:

J(t, U(t)) = \sum_{i=N_1}^{N_2} [r(t+i) - \hat{y}(t+i)]^2 + \rho \sum_{i=1}^{N_u} [\Delta u(t+i-1)]^2   (1)

with respect to the N_u future control inputs

U(t) = [u(t) \; \cdots \; u(t+N_u-1)]^T   (2)

and subject to the constraint

\Delta u(t+i) = 0, \quad N_u \le i \le N_2 - d.   (3)

Here r denotes the reference (the desired output), \hat{y} a prediction of the output, and u the control input. \Delta is the difference operator, \Delta u(t) = u(t) - u(t-1). The tuning
parameters of the controller are N_1, N_2, N_u, and \rho. N_1 is called the minimum cost horizon, N_2 the prediction (or maximum cost) horizon, and N_u the (maximum) control horizon. \rho is a weighting factor penalizing changes in the control inputs. For non-linear systems, the optimization problem must be solved at each sample, resulting in a sequence of future control inputs. From this sequence the first component, u(t), is then applied to the system. One of the prime characteristics of predictive control is the idea of a receding horizon. That is, at each sample the control signal is determined to achieve a desired behavior over the following N_2 time steps. A very appealing property of this idea is that many control tasks carried out by humans are done in a somewhat similar fashion. This intuitive foundation can to some extent guide the tuning of the design parameters. Another important attribute is the notion of a control horizon which is smaller than the prediction horizon. The idea here is that only the first N_u future control inputs are determined; from that point on, the control input is assumed constant. A long horizon allows a more active control signal, thereby enabling a higher performance, while a short horizon generally makes the control system more robust [18]. As the computational burden increases dramatically with the length of this horizon, it is typically kept as short as possible. In fact, N_u = 1 is sufficient for obtaining an acceptable performance in most applications [3]. Minimization of the GPC criterion when the predictions are determined by a non-linear relation of the future control inputs constitutes a complex non-linear programming problem. Unfortunately, the problem does not become less involved when real-time implementation issues are taken into account. These demand that a prespecified maximum response time exists (which preferably is short) and that the control law is numerically robust and has no convergence problems.
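As a concrete illustration, criterion (1) can be evaluated numerically for a candidate input sequence. The following is a minimal sketch; the function name, argument layout, and toy values are illustrative assumptions, not taken from the paper:

```python
def gpc_criterion(r, y_hat, u, u_prev, rho):
    """GPC cost in the spirit of (1): squared tracking error over the
    costing horizon plus a rho-weighted penalty on control increments.

    r, y_hat : reference and predicted output for k = N1 .. N2
    u        : candidate input sequence u(t) .. u(t+Nu-1)
    u_prev   : the input u(t-1) applied in the previous sample
    rho      : move-suppression weight
    """
    tracking = sum((ri - yi) ** 2 for ri, yi in zip(r, y_hat))
    # control increments Delta u(t), ..., Delta u(t+Nu-1)
    moves = [u[0] - u_prev] + [u[i] - u[i - 1] for i in range(1, len(u))]
    return tracking + rho * sum(d ** 2 for d in moves)
```

Note that with a perfect prediction and a constant input sequence the move penalty vanishes, so the cost reduces to the tracking term alone.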
Moreover, the minimization must be executed automatically, since there will be no possibility for user interference. Consequently, it cannot depend on numerous design parameters which need proper adjustment to achieve a satisfactory performance.

2.1. The criterion

To alleviate the derivation of the control law, the criterion is rewritten in vector notation as follows:

J(t, U(t)) = [R(t) - \hat{Y}(t)]^T [R(t) - \hat{Y}(t)] + \rho \tilde{U}(t)^T \tilde{U}(t) = E(t)^T E(t) + \rho \tilde{U}(t)^T \tilde{U}(t),   (4)

where

R(t) = [r(t+N_1) \; \cdots \; r(t+N_2)]^T,
\hat{Y}(t) = [\hat{y}(t+N_1|t) \; \cdots \; \hat{y}(t+N_2|t)]^T,
E(t) = [e(t+N_1|t) \; \cdots \; e(t+N_2|t)]^T,
\tilde{U}(t) = [\Delta u(t) \; \cdots \; \Delta u(t+N_u-1)]^T   (5)

and

e(t+k|t) = r(t+k) - \hat{y}(t+k|t) \quad \text{for } k = N_1, \ldots, N_2.   (6)
2.2. The predictor

Provided that the system to be controlled can be modelled by an NNARX model, the one-step-ahead prediction is given by

\hat{y}(t) \equiv \hat{y}(t|t-1) = g(y(t-1), \ldots, y(t-n), u(t-d), \ldots, u(t-d-m)),   (7)

where g is some function realized by a neural network and d is the time delay, which is assumed to be at least one. As the above model takes as input n past outputs and m past control inputs and has delay d, the model is sometimes referred to as an NNARX(n, m, d) model. The k-step-ahead prediction of the system's output is calculated by shifting the expression forward in time while substituting predictions for actual measurements where these do not exist:

\hat{y}(t+k) \equiv \hat{y}(t+k|t) = g(\hat{y}(t+k-1), \ldots, \hat{y}(t+k-\min(k,n)),
    y(t-1), \ldots, y(t-\max(n-k, 0)),
    u(t-d+k), \ldots, u(t-d-m+k)).   (8)
It is assumed that the measurement of the output is available only up to time t-1. For this reason, \hat{y}(t) enters the expression instead of y(t). This avoids the computational time delay which would appear if u(t) were to be calculated between the AD- and DA-converter calls at time t; instead, u(t) can be calculated after the DA-call in the previous sample. This maneuver is essentially analogous to the principle behind control by state feedback where the states are estimated by a classical predictive observer [10]. Inserting g, which here is assumed to be a two-layer MLP network with tanh activation functions in the hidden units and a linear output unit, one obtains

\hat{y}(t+k) = \sum_{j=1}^{n_h} W_j f(\tilde{h}(k,j)) + W_0,   (9)

where f(x) = \tanh(x) and

\tilde{h}(k,j) = \sum_{i=1}^{\min(k,n)} w_{j,i} \hat{y}(t+k-i) + \sum_{i=\min(k,n)+1}^{n} w_{j,i} y(t+k-i)
    + \sum_{i=0}^{m} w_{j,n+1+i} u(t-d+k-i) + w_{j,0}.   (10)

It is also possible to train N_2 - N_1 + 1 networks to directly produce each of the predictions \hat{y}(t+k|t), N_1 \le k \le N_2. However, this method has certain deficiencies as more distant future predictions are considered. The networks require an additional input each time k is increased, so to obtain reasonably accurate long-range predictions, the condition of proper excitation demands gigantic training sets. For this reason, only predictions determined in the recursive fashion described above will be considered here.
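The recursive prediction scheme can be sketched as follows, with a caller-supplied one-step model g standing in for the trained network. All names and the dictionary-based time indexing are illustrative assumptions:

```python
def k_step_predict(g, y_meas, u, n, m, d, k):
    """Recursive k-step-ahead NNARX prediction in the spirit of (8).

    g      : one-step model mapping a regressor [outputs..., inputs...]
             to the next output
    y_meas : dict tau -> y(t+tau), measurements for tau <= -1
    u      : dict tau -> u(t+tau), past and hypothesized future inputs
    Predictions replace measurements as soon as a regressor entry
    refers to time t or later.
    """
    y_hat = {}
    for j in range(k + 1):                 # y_hat(t), ..., y_hat(t+k)
        reg = []
        for i in range(1, n + 1):          # n most recent outputs
            tau = j - i
            reg.append(y_hat[tau] if tau >= 0 else y_meas[tau])
        for i in range(m + 1):             # inputs u(t+j-d) .. u(t+j-d-m)
            reg.append(u[j - d - i])
        y_hat[j] = g(reg)
    return y_hat[k]
```

For a first-order toy model g(reg) = 0.5*reg[0] + reg[1] with n = 1, m = 0, d = 1, the recursion simply iterates the underlying difference equation, which makes it easy to verify by hand.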
2.3. Derivation of the control law

Minimization of the GPC criterion when the predictions are non-linear in the control inputs is a quite elaborate optimization problem. In order to determine the minimum, it is necessary to apply an iterative search method similar to the strategies used for neural network training:

U^{i+1} = U^i + \mu^i f^i.   (11)

U^i specifies the current iterate of the sequence of future control inputs, \mu^i the step size, and f^i the search direction. As for neural network training, many schemes for determining the search direction and step size may come into consideration. However, the present problem is far from identical to the problem of neural network training. In particular, one has to take into account the following characteristics:

- Most often, fast convergence is an absolute necessity.
- The minimum must be found in real time, i.e., a maximum response time must exist.
- Numerical robustness is crucial.
- The criterion is not of the mean-square-error type.
- Only a few parameters need to be determined, since the control horizon, N_u, usually is small.
- Accuracy is often not of particular importance; accuracy better than the resolution of the DA-converter is unnecessary.

These aspects motivate a somewhat different approach to the optimization problem. A gradient descent method will generally fail to satisfy the first two demands, and thus only Newton or Newton-related methods should be considered. A full Newton method involves the calculation of the Hessian matrix, which becomes quite cumbersome, in particular for a control horizon larger than one, and is sometimes even unrealistic for real-time implementation. In many ways the obvious choice of minimization method for the considered problem is a quasi-Newton method. In fact, a quasi-Newton algorithm is typically the method of choice for small- and medium-sized non-least-squares problems when the Hessian is difficult to derive or cumbersome to compute. In the quasi-Newton methods, one directly constructs a positive-definite approximation to the inverse Hessian matrix from the information embedded in previous evaluations of the gradient and criterion. The algorithm is known to have excellent convergence properties, in particular near a minimum [6]. This is interesting in relation to the present problem: as the last N_u - 1 elements of the vector U(t-1) can be used as an initial guess on the first N_u - 1 elements of U(t), the minimization algorithm will usually be invoked close to a minimum. The Newton search direction is given by

f^i = -B^i G(U^i(t)),   (12)
where B^i specifies the (approximate) inverse Hessian and G(U^i(t)) is the gradient of the cost function with respect to the control inputs. The quasi-Newton method approximates the full
Newton search direction, and it is consequently necessary to complement it with a line search to ensure convergence. There are very restrictive rules for how this line search should be implemented to guarantee the validity of the update for the inverse Hessian matrix. A complete quasi-Newton algorithm is suggested later.

2.4. Calculation of the gradient

The gradient in (12) is given by
G(U^i(t)) = \frac{\partial J(t, U(t))}{\partial U(t)}
    = \left[ -2 \frac{\partial \hat{Y}(t)^T}{\partial U(t)} E(t) + 2\rho \frac{\partial \tilde{U}(t)^T}{\partial U(t)} \tilde{U}(t) \right]_{U(t) = U^i(t)}.   (13)

Before calculating the gradient, it is necessary to determine the various partial derivatives entering the expression. These are derived in the following.

The partial derivative \partial \tilde{U}(t)^T / \partial U(t): since \Delta u(t) = u(t) - u(t-1), this is

\frac{\partial \tilde{U}(t)^T}{\partial U(t)} =
\begin{bmatrix}
 1 &  0 & \cdots & 0 & 0 \\
-1 &  1 & \cdots & 0 & 0 \\
 \vdots & \ddots & \ddots & & \vdots \\
 0 & 0 & \cdots & -1 & 1
\end{bmatrix},   (14)

which is a matrix of dimension N_u \times N_u. The derivative is clearly independent of time and can be constructed beforehand.

The partial derivative \partial \hat{Y}(t)^T / \partial U(t): this is a matrix of dimension N_u \times (N_2 - N_1 + 1),

\frac{\partial \hat{Y}(t)^T}{\partial U(t)} =
\begin{bmatrix}
 \dfrac{\partial \hat{y}(t+N_1)}{\partial u(t)} & \cdots & \dfrac{\partial \hat{y}(t+N_2)}{\partial u(t)} \\
 \vdots & \ddots & \vdots \\
 \dfrac{\partial \hat{y}(t+N_1)}{\partial u(t+N_u-1)} & \cdots & \dfrac{\partial \hat{y}(t+N_2)}{\partial u(t+N_u-1)}
\end{bmatrix}.   (15)

To accommodate the calculation of the derivatives of which this matrix is composed, the following separation between terms depending on past and future control inputs is made:

\tilde{h}(k,j) = \sum_{i=1}^{\min(k-d,\,n)} w_{j,i} \hat{y}(t+k-i)
    + \sum_{i=0}^{\min(k-d-N_u,\,m)} w_{j,n+1+i} u(t+N_u-1)
    + \sum_{i=\max(0,\,k-d-N_u+1)}^{\min(k-d,\,m)} w_{j,n+1+i} u(t-d+k-i)
    + \sum_{i=k-d+1}^{\min(k,\,n)} w_{j,i} \hat{y}(t+k-i)
    + \sum_{i=k+1}^{n} w_{j,i} y(t+k-i)
    + \sum_{i=k-d+1}^{m} w_{j,n+1+i} u(t-d+k-i) + w_{j,0}.   (16)
The first three sums depend on future control inputs, while the remaining three depend on past control inputs only. For all k \in [d, N_2] and all l \in [0, \min(k-1, N_u-1)], determine

\frac{\partial \hat{y}(t+k)}{\partial u(t+l)} = \sum_{j=1}^{n_h} W_j f'(\tilde{h}(k,j)) \frac{\partial \tilde{h}(k,j)}{\partial u(t+l)}
    = \sum_{j=1}^{n_h} W_j f'(\tilde{h}(k,j)) h(k,l,j),   (17)

where

h(k,l,j) = \sum_{i=1}^{\min(k-d,\,n)} w_{j,i} \frac{\partial \hat{y}(t+k-i)}{\partial u(t+l)}
    + \sum_{i=0}^{\min(k-d-N_u,\,m)} w_{j,n+1+i} \frac{\partial u(t+N_u-1)}{\partial u(t+l)}
    + \sum_{i=\max(0,\,k-d-N_u+1)}^{\min(k-d,\,m)} w_{j,n+1+i} \frac{\partial u(t-d+k-i)}{\partial u(t+l)}.   (18)

Since

\frac{\partial u(t+N_u-1)}{\partial u(t+l)} = \begin{cases} 1, & l = N_u - 1, \\ 0, & \text{otherwise}, \end{cases}   (19)

\frac{\partial u(t-d+k-i)}{\partial u(t+l)} = \begin{cases} 1, & l = k - d - i, \\ 0, & \text{otherwise}, \end{cases}   (20)

\frac{\partial \hat{y}(t+k-i)}{\partial u(t+l)} = 0, \quad l \ge k - d - i + 1,   (21)

the expression for h(k,l,j) can be reduced to

h(k,l,j) = \begin{cases}
 \displaystyle\sum_{i=1}^{\min(k-d-l,\,n)} w_{j,i} \frac{\partial \hat{y}(t+k-i)}{\partial u(t+l)} + \sum_{i=0}^{\min(k-d-N_u+1,\,m)} w_{j,n+1+i}, & l = N_u - 1, \\[1ex]
 \displaystyle\sum_{i=1}^{\min(k-d-l,\,n)} w_{j,i} \frac{\partial \hat{y}(t+k-i)}{\partial u(t+l)} + w_{j,n+1+k-d-l}, & \max(0,\, k-d-m) \le l \le N_u - 2, \\[1ex]
 \displaystyle\sum_{i=1}^{\min(k-d-l,\,n)} w_{j,i} \frac{\partial \hat{y}(t+k-i)}{\partial u(t+l)}, & 0 \le l < \max(0,\, k-d-m).
\end{cases}   (22)

As we are assuming tanh activation functions in the hidden layer, we must insert

f(x) = \tanh(x) \;\Rightarrow\; f'(x) = 1 - (f(x))^2.   (23)
In case another type of activation function is used, the derivative of this function should be inserted instead.
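Identity (23) is convenient in an implementation because the derivative can be formed directly from values already computed in the forward pass. A tiny sketch, with a finite-difference comparison added purely for illustration:

```python
import math

def tanh_prime(fx):
    """f'(x) = 1 - f(x)**2, evaluated from the activation value f(x),
    so the hidden-unit outputs of the forward pass can be reused."""
    return 1.0 - fx * fx

# central finite-difference check at an arbitrary point
x, h = 0.7, 1e-6
numeric = (math.tanh(x + h) - math.tanh(x - h)) / (2 * h)
analytic = tanh_prime(math.tanh(x))
```

In a network evaluation this means no extra tanh calls are needed when forming the h(k, l, j) terms: the hidden-unit activations are simply cached during the prediction pass.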
The order in which the derivatives in (17) are calculated is very important, since the calculation should consist of known quantities only. Follow this procedure to construct the matrix in (15):

for k = d to N_2
    for l = 0 to min(k-1, N_u-1)
        calculate \partial \hat{y}(t+k) / \partial u(t+l)
    end
    (the following should only be performed once)
    for l = k to N_u - 1
        set \partial \hat{y}(t+k) / \partial u(t+l) = 0
    end
end
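The index bookkeeping in this recursion is easy to get wrong, so it is useful to check an implementation of the analytic derivative matrix against finite differences. A hedged sketch of such a check; the predict function and its calling convention are assumptions for illustration:

```python
def jacobian_fd(predict, U, eps=1e-6):
    """Finite-difference approximation of the derivative matrix:
    element [l][r] ~ d y_hat(t+N1+r) / d u(t+l).

    predict(U) must return the predictions [y_hat(t+N1), ...,
    y_hat(t+N2)] for the candidate input sequence
    U = [u(t), ..., u(t+Nu-1)].
    """
    base = predict(U)
    J = []
    for l in range(len(U)):
        Up = list(U)
        Up[l] += eps          # perturb one future input at a time
        pert = predict(Up)
        J.append([(p - b) / eps for p, b in zip(pert, base)])
    return J
```

Agreement between this matrix and the analytic recursion, on a few random input sequences, is a strong indication that the recursion has been implemented correctly.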
3. Implementation of the quasi-Newton algorithm

Apart from being difficult to implement, the weakness of the full Newton-type methods considered in [14,19] is that, from a computational perspective, they become ever less manageable as the prediction and (in particular) the control horizons are increased. The computational burden is to a large extent due to the formation of the Hessian matrix and the Cholesky factorization needed for computation of the search direction. An alternative which to some extent remedies this computational overhead is the so-called quasi-Newton algorithm. In this algorithm, the inverse Hessian is approximated directly from the information about the criterion and gradient contained in the past iterations. Different update formulas for approximating the inverse Hessian exist. The most popular one, which is also the one used here, is the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm [6]. In addition to the approximation of the inverse Hessian, the quasi-Newton algorithms must also be accompanied by a line search for determination of the step size. This can be carried out in different ways, but due to the nature of the present problem, it is important that the search requires a minimum of criterion evaluations. In particular, when the prediction horizon is long, evaluation of the criterion becomes very expensive. Two algorithms are presented below: the basic quasi-Newton algorithm and an algorithm implementing the line search. Both algorithms are thoroughly discussed in [12], and a similar treatment of the subject can be found in [6]. While the basic algorithm is straightforward, the line search is somewhat involved. This is because it serves an additional purpose apart from the natural one of ensuring that the criterion decreases from one iteration to the next. Provided that B^{i-1} is positive definite, the BFGS update of B^i is only guaranteed to be positive definite if the condition (\Delta U^i)^T \Delta G^i > 0 is satisfied.
The line search must therefore also take
care of this. An exact line search does so but requires far too many criterion evaluations. Instead, a soft line search such as the one suggested in [12] should be used.

Basic algorithm
Step 0: Select an initial sequence of future control inputs. Evaluate the criterion J(U) and the gradient G(U). Initialize the approximation of the inverse Hessian, B = I. Set i = 0.
Step 1: Determine the search direction: f^i = -B^i G(U^i).
Step 2: Determine the step size \mu^i by the soft line search; see the algorithm below.
Step 3: Update the sequence of future control inputs: U^{i+1} = U^i + \mu^i f^i.
Step 4: Go to step 7 if the stopping criterion |U^{i+1} - U^i| < \delta or i > (maximum number of iterations) is satisfied.
Step 5: i = i + 1.
Step 6: Update the approximation of the inverse Hessian with the BFGS formula:

B^i = \left( I - \frac{\Delta U^i (\Delta G^i)^T}{(\Delta G^i)^T \Delta U^i} \right) B^{i-1} \left( I - \frac{\Delta G^i (\Delta U^i)^T}{(\Delta G^i)^T \Delta U^i} \right) + \frac{\Delta U^i (\Delta U^i)^T}{(\Delta G^i)^T \Delta U^i},

where

\Delta U^i \equiv U^i - U^{i-1}, \quad \Delta G^i \equiv G^i - G^{i-1}.

Go to step 1.
Step 7: Accept the sequence of future control inputs and terminate.

Algorithm for selecting the step size
As mentioned previously, this algorithm has been adapted from [12]; it is based on considerations outlined in [5]. The conditions

J(U^i + \mu f^i) \le J(U^i) + \delta \mu (f^i)^T G(U^i),   (24)

(f^i)^T G(U^i + \mu f^i) \ge \beta (f^i)^T G(U^i)   (25)

play an important role in the step size selection; for brevity we refer to them below only by their reference numbers.
Step 0: \mu = 1; I = [b_1, b_2]; J_{b_1} = J(U^i); G_{b_1} = G(U^i).
Step 1: Evaluate the criterion J(U^i + \mu f^i) and determine the gradient G(U^i + \mu f^i). Set J_{b_2} = J(U^i + \mu f^i) and G_{b_2} = G(U^i + \mu f^i).
Step 2: If both (24) and (25) are satisfied, go to step 10.
Step 3: If (24) is not satisfied, go to step 5.
Step 4: I = [\mu, 2\mu]; \mu = 2\mu; J_{b_1} = J_{b_2}; G_{b_1} = G_{b_2}. Go to step 1.
Step 5: Determine b as the extremum of the second-order polynomial P(x) = p_2 x^2 + p_1 x + p_0 possessing the following properties: P(b_1) = J_{b_1}, P'(b_1) = (f^i)^T G_{b_1}, and
P(b_2) = J_{b_2}, which is equivalent to

p_2 = \frac{J_{b_2} - J_{b_1} - (f^i)^T G_{b_1} (b_2 - b_1)}{(b_2 - b_1)^2},
p_1 = (f^i)^T G_{b_1} - 2 p_2 b_1,
p_0 = J_{b_1} - p_1 b_1 - p_2 b_1^2.

That is, b is then determined by b = -p_1 / (2 p_2).
Step 6: If \min(b - b_1, b_2 - b) \ge 0.1 (b_2 - b_1), then \mu = b; otherwise, \mu = (b_1 + b_2)/2.
Step 7: Evaluate the criterion J(U^i + \mu f^i) and determine the gradient G(U^i + \mu f^i).
Step 8: If both (24) and (25) are satisfied, go to step 10.
Step 9: If (24) is satisfied, set I = [\mu, b_2], J_{b_1} = J(U^i + \mu f^i) and G_{b_1} = G(U^i + \mu f^i), and go to step 5. Otherwise, set I = [b_1, \mu], J_{b_2} = J(U^i + \mu f^i) and G_{b_2} = G(U^i + \mu f^i), and go to step 5.
Step 10: Accept the step and return to the main algorithm.

The quantity \delta must be a small positive number, \delta < 0.5. In this digital control context, it is natural to apply a value of the same magnitude as one DA-converter quant (selecting \delta in this way does not guarantee that this precision is obtained, though). \beta should be chosen so that \beta \in (\delta, 1). A typical choice is a number close to 1, e.g., \beta = 0.9.

Implementing the quasi-Newton algorithm in a real-time system demands certain additional constraints on the algorithm. In practice, it is necessary to impose a limit on the number of times the criterion and gradient are evaluated. Although the loops 1-4-1 and 5-6-7-9-5 should not be executed more than a few times, it is nevertheless recommended to impose a limitation for safety reasons.

4. The pneumatic control problem

The pneumatic servomechanism model that will be used in the simulation study is a model of the laboratory set-up described in [21]. The servomechanism consists of a linear compressed-air cylinder lifting an inertial weight. The objective is to control the position of the piston using a system of servo valves. A diagram of the servomechanism is depicted in Fig. 1. Compared to the real set-up, certain simplifications are made in the simulation model.
The most important ones are the following:
- In reality, the cylinder is fed from a system of four valves that each consist of a parallel assembly of nine identical solenoid on-off valves. Nevertheless, it is assumed that the servo valves open proportionally to their control signal.
- The valves are operated so that S1 = S4 = U for U \ge 0 and S2 = S3 = U for U < 0. In this way, the servomechanism can be treated as a SISO system with the common control input U.
- The stiction and Coulomb friction in the piston bearings are not modeled.
- The servo valve opening characteristics are approximately linear, and since the friction is neglected, the main non-linear behavior is due to the cylinder itself. The
cylinder-chamber compression dynamics are position dependent, and the servo valve flow characteristic is non-linear.

To make sure that the entire operating range is present in the training data, a high-frequency signal is applied in some periods of the experiment, while in other periods the system is allowed to reach stationarity. It is also attempted to have the
Fig. 1. The pneumatic servomechanism.
Fig. 2. The training data. Top panel: control signal. Bottom panel: response (position in m).
entire range of possible positions, from -0.245 to +0.245 m, present in the data set. Depending on one's patience, as much data as desired can be collected. In this study, 3000 samples (300 s) are considered sufficient for training. The complete training set is shown in Fig. 2. In a similar fashion, a data set is produced for validation purposes. Since no noise or external disturbances are acting on the system and the training set is large, it is fairly easy to identify a neural network model. Utilizing the knowledge that the system is of order four, an NNARX(4,4,1) model structure is selected. By gradually increasing the number of hidden units while evaluating the test error of the trained networks, it is found that the minimum test error is achieved with 12 hidden units, corresponding to 121 weights. There is no time delay in the system except for the usual delay of one sampling period. The minimum costing horizon (N_1) is therefore set to 1. The prediction horizon (N_2) is set to 10, since this value is recommended in [3] as a good default choice for a large class of practical systems. The only design parameters left to tune are the control horizon (N_u) and the penalty factor (\rho). These are adjusted to achieve a response that is reasonably fast and has little or no overshoot while the control signal is kept smooth. In addition, the control input should be in the range from -5 to +5, since the training set contained data from this range only. The result of a closed-loop simulation with the tuned controller is shown in Fig. 3. Here, the
Fig. 3. Closed-loop simulation of the neural predictive controller and pneumatic servomechanism. N_1 = 1, N_2 = 10, N_u = 2, and \rho = 0.05.
position reference is a series of steps of increasing amplitude, the final amplitude being close to the full extent of the piston. In this way, the whole dynamic range of the non-linear system is tested. The controller is seen to give a smooth response, with no discernible change from small to large amplitude, and the control input stays within the allowed range.
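The per-sample optimization at the heart of such a controller can be sketched with a compact BFGS iteration. The sketch below uses a simple backtracking rule in place of the soft line search of [12], and a quadratic test criterion stands in for the true GPC cost; it is a simplified illustration under these assumptions, not the authors' implementation:

```python
def quasi_newton_min(J, grad, U0, iters=20):
    """Minimize J with a BFGS inverse-Hessian approximation and a
    backtracking line search on the sufficient-decrease condition."""
    n = len(U0)
    B = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = list(U0)
    g = grad(U)
    for _ in range(iters):
        # search direction f = -B g
        f = [-sum(B[i][j] * g[j] for j in range(n)) for i in range(n)]
        mu, slope = 1.0, sum(fi * gi for fi, gi in zip(f, g))
        while J([u + mu * fi for u, fi in zip(U, f)]) > J(U) + 1e-4 * mu * slope:
            mu *= 0.5
            if mu < 1e-10:
                break
        U_new = [u + mu * fi for u, fi in zip(U, f)]
        g_new = grad(U_new)
        s = [a - b for a, b in zip(U_new, U)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy > 1e-12:   # curvature condition keeps B positive definite
            By = [sum(B[i][j] * y[j] for j in range(n)) for i in range(n)]
            yBy = sum(y[i] * By[i] for i in range(n))
            for i in range(n):
                for j in range(n):
                    B[i][j] += ((sy + yBy) * s[i] * s[j] / sy ** 2
                                - (By[i] * s[j] + s[i] * By[j]) / sy)
        U, g = U_new, g_new
    return U
```

In a receding-horizon setting the minimizer would be called once per sample with the shifted previous solution as warm start, and only the first element of the returned sequence would be applied to the plant.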
5. Conclusion

A neural-network-based predictive controller has been described that is able to control non-linear, open-loop unstable, non-minimum-phase plants and attain good stability and robustness properties. The controller has only a few tuning parameters in the cost function, notably the minimum and maximum cost horizons, the control horizon, and the penalty weighting of the control cost. In most neural-network-based predictive control schemes, the algorithms have been developed with a control horizon limited to one. In this paper, we have given a detailed description of an effective numerical algorithm for minimization of a general extended-control-horizon cost function. The algorithm was based on a quasi-Newton optimization in which the inverse of the cost-function Hessian matrix is approximated using the BFGS algorithm. Complemented with a line search algorithm, this ensures convergence in real time. The controller was demonstrated on a simulation of a pneumatic servo system. A limitation of the presented approach is that it does not cover constraints on the input and/or output. In practice, the magnitude of the input signal will usually be constrained, but it might also be desirable to impose rate limitations on the input, constraints on the magnitude of the output, etc. A common way to accommodate input constraints is clipping, where an input beyond the allowed interval is truncated before it is applied to the system. This may work in practice; however, the solution is generally not optimal in the GPC sense. A correct way to deal with constraints is to take them into account when solving the optimization problem. Unfortunately, this tends to complicate the optimization a great deal. For linear constraints, such as constraints on the magnitude of the input and rate limitations, it is possible to extend the quasi-Newton scheme. However, other methods may also come into consideration for solving the problem effectively [7].
There are many references on constrained generalized predictive control for linear systems; see, e.g., [18,1].
References

[1] E. Camacho, Constrained generalized predictive control, IEEE Trans. Automat. Control 38 (2) (1994) 327–332.
[2] F. Chen, K. Khalil, Adaptive control of nonlinear systems using neural networks - a dead-zone approach, Proceedings of the American Control Conference, 1991, pp. 667–672.
[3] D.W. Clarke, C. Mohtadi, P.S. Tuffs, Generalized predictive control - Part I. The basic algorithm, Automatica 23 (2) (1987) 137–148.
[4] D.W. Clarke, C. Mohtadi, P.S. Tuffs, Generalized predictive control - Part II. Extensions and interpretations, Automatica 23 (2) (1987) 149–160.
[5] J.E. Dennis, J.J. More, Quasi-Newton methods, motivation and theory, SIAM Rev. 19 (1) (1977) 46–89.
[6] J.E. Dennis, R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, NJ, 1983.
[7] P. Gill, W. Murray, M.H. Wright, Practical Optimization, Academic Press, New York, 1981.
[8] K. Hunt, D. Sbarbaro, Neural networks for control and systems, in: Studies in Neural Network Based Control, Peter Peregrinus, London, 1992 (Chapter 6).
[9] K. Hunt, D. Sbarbaro, R. Zbikowski, P. Gawthrop, Neural networks for control systems - a survey, Automatica 28 (6) (1992) 1083–1112.
[10] H. Kwakernaak, R. Sivan, Linear Optimal Control Systems, Wiley, New York, 1972.
[11] G. Lightbody, G. Irwin, A novel neural internal model control structure, Proceedings of the American Control Conference, 1995, pp. 350–354.
[12] K. Madsen, Optimization without constraints, Number 46, Department of Mathematical Modelling, Bldg. 321, Technical University of Denmark, 1984 (in Danish).
[13] D. Mayne, H. Michalska, Receding horizon control of nonlinear systems, IEEE Trans. Automat. Control 35 (7) (1990) 814–824.
[14] M. Norgaard, P. Sorensen, Generalized predictive control of a nonlinear system using neural networks, Preprints of the 1995 International Symposium on Artificial Neural Networks, Hsinchu, Taiwan, ROC, 1995, pp. B1-33–40.
[15] M. Norgaard, P. Sorensen, N. Poulsen, O. Ravn, L. Hansen, Intelligent predictive control of nonlinear processes using neural networks, Proceedings of the 1996 IEEE International Symposium on Intelligent Control (ISIC), Dearborn, MI, USA, IEEE Press, New York, 1996, pp. 374–379.
[16] D. Psaltis, A. Sideris, A.A. Yamamura, A multilayer neural network controller, IEEE Control Systems Mag. 8 (2) (1988) 17–21.
[17] J. Saint-Donat, N. Bhat, T. McAvoy, Neural net based model predictive control, Int. J. Control 54 (6) (1991) 1453–1468.
[18] R. Soeterboek, Predictive Control - A Unified Approach, Prentice-Hall, Englewood Cliffs, NJ, 1992.
[19] D. Soloway, P. Haley, Neural/generalized predictive control, a Newton-Raphson implementation, Proceedings of the Eleventh IEEE International Symposium on Intelligent Control, 1996, pp. 277–282.
[20] O. Sorensen, Neural networks in control applications, Ph.D. Thesis, Department of Control Engineering, Aalborg University, 1994.
[21] P.H. Sorensen, P.K. Sinha, K. Al-Mutib, Identification of a pneumatic servo mechanism using neural networks, Proceedings of the International Conference on Machine Automation, vol. 2, Tampere, Finland, 1994, pp. 499–512.
[22] M. Willis, G. Montague, C. Di Massimo, M. Tham, A. Morris, Artificial neural networks in process estimation and control, Automatica 28 (6) (1992) 1181–1187.
Paul H. Sørensen was born in southern Denmark in 1956. He obtained his M.Sc. from the University of Aarhus, Denmark, in 1982 and his Ph.D. degree from the University of Cambridge, UK, in 1985, both in the field of theoretical high-energy physics. From 1985 to 1989 he worked as an Assistant Professor at the Department of Control Engineering, the Technical University of Denmark (DTU). Since 1989 he has been employed by the Department of Automation, DTU, as an Associate Professor. In 1993 he spent one year as a visiting professor at the University of Reading, UK. Currently, he is the Director of Studies for the graduate school of DTU and is working on a textbook on optimal control. His research interests are control engineering, neural networks in control, robotics, and autonomous systems.
Magnus Nørgaard was born in Copenhagen, Denmark in 1968. He received the M.S. and Ph.D. degrees in electrical engineering from the Technical University of Denmark (DTU) in 1992 and 1996, respectively. Since 1996 he has been working as an assistant research professor at DTU. His current research interests include non-linear state estimation techniques and sensor fusion systems for autonomous guided vehicles, and system identification and control with neural networks.
Ole Ravn was born in Frederiksberg, Denmark in 1959. He received his M.Sc. and Ph.D. degrees in electrical engineering from the Department of Automation at the Technical University of Denmark. He has been employed at the Technical University of Denmark since 1987, and since 1991 as an associate professor at the Department of Automation. His primary research interests are control of robots and autonomous systems, adaptive control, and computer-aided control engineering.
Niels Kjølstad Poulsen was born in the central part of Sjælland, Denmark, in 1956. He received his M.Sc. and Ph.D. degrees in electrical engineering from the Institute of Mathematical Statistics and Operations Research (IMSOR), the Technical University of Denmark, in 1981 and 1984, respectively. He has been employed at the Technical University of Denmark since 1984, and since 1990 as an associate professor at the Department of Mathematical Modelling. His primary research interests are in stochastic control theory, system identification, and adaptive control.