Enforcing Stability in Steady-State Optimization


16th IFAC Symposium on System Identification The International Federation of Automatic Control Brussels, Belgium. July 11-13, 2012

Radek Beňo ∗, Daniel Pachner ∗∗, Vladimír Havlena ∗,∗∗

∗ Department of Control Engineering, Czech Technical University in Prague, Czech Republic (e-mail: [email protected], [email protected])
∗∗ Honeywell Prague Laboratory, V Parku 2326/18, 148 00 Prague 4, Czech Republic (e-mail: [email protected], [email protected])

Abstract: The article deals with the stability constraint in nonlinear continuous-time dynamic model identification. The identification is formulated as a boundary value problem. Constraining the norm of the terminal sensitivity to the initial condition is used to drive the model state to a stable equilibrium. Solving such a boundary value problem on an extending finite time horizon may be numerically more appealing than constraining the eigenvalues of the Jacobian matrix evaluated at the equilibrium point in the state space.

Keywords: Steady-state stability, System identification, Nonlinear systems, Convex optimisation, Parameter identification, Continuous-time systems

1. INTRODUCTION

This article treats the identification of the parameters of a nonlinear continuous-time dynamic model based on steady state data. The information in steady state data is certainly not sufficient for full model identification. Nonetheless, projecting that information onto the estimates of the model parameters is still a meaningful problem formulation. It will be shown how a cost function matching the steady state data can be constructed. It should be understood, however, that the cost function actually minimized during parameter identification may include other terms as well, related to transient data prediction errors or to violations of physical assumptions. The algorithm presented should therefore not be understood as a complete identification procedure, which may be more complex.

Steady state fitting might be done simultaneously with transient data fitting by minimizing the sum of the two criteria. However, the identification process can be more efficient when the steady state information is used first and the transient data prediction errors are added to the cost function later, see Stewart et al. [2010]. First, the extracted steady states can represent a significantly smaller amount of data than the transient data while still being representative of the process nonlinearity over its whole operating range. Because of the data set size, steady state fitting may be performed with much less effort. Second, it is often possible to guess which parameters have no impact on the steady state, and those parameters can be excluded from the optimization focused on the steady states. For example, the fact that the heat accumulation rate is zero in the steady state often implies that certain heat capacities cannot affect the steady state process values. The resulting steady state parameter optimization problem may therefore be defined in a lower dimensional space, which simplifies the optimization. Certainly, the parameters which have been left out must be identified in the next step of the modeling process, when the transient behavior is also matched.

[Fig. 1. Collection of the process steady state values during step testing: averaged process input and output samples yield the steady state pairs {u1, y1}, {u2, y2}, ...]

Another advantage of treating the steady state information separately is that the steady state model accuracy may be emphasized. This is especially useful when the model is used for steady state process optimization. On the other hand, this approach is not applicable to unstable, integrating or oscillating processes. Without stability control, the numerical optimization of the steady states usually works only if the initial model is stable and the initial guess of the parameters is sufficiently accurate. With stability control, the convergence can be significantly improved. It will be shown on a simple example that the model may lose stability during the numerical identification even if started from a stable

DOI: 10.3182/20120711-3-BE-2027.00315

initial guess. Once the model becomes unstable, it is virtually impossible to continue with numerical identification methods due to the numerical inaccuracies accumulated when solving the unstable differential equations over longer time intervals. The numerical methods then fail.

Model stability enforcement for nonlinear model reduction via a dissipation constraint has recently been treated by Bond et al. [2010]. The method requires defining a storage function for the model.

The article is organized as follows. In section 2, the identification problem is cast as a finite horizon model trajectory optimization, not just an equilibrium optimization. The cost function is a function of the terminal point of the trajectory only; the end point is the only relevant part of the trajectory. Such a problem is a boundary value problem. Linearization of the equality constraint dx/dt = 0 is proposed to handle the steadiness constraint in section 3. However, this method alone does not appear sufficient. To improve it, a variational calculus method is used to control the stability in section 4: the end point sensitivity to the initial condition is minimized numerically. The algorithm is applied to a simple nonlinear differential equation in section 5.

A list of selected references treating model trimming (steady state optimization), identification of a priori stable systems, or projection of the information in steady state data onto dynamic model parameters follows. None of the methods is directly related to the method developed here. For targeting general nonlinear models created using a library of subsystems, a very general, almost brute-force approach based on numerical convex optimization is taken in this article; no special assumption on the model structure is made. Least squares methods minimizing the quadratic norm of the state derivatives are often proposed to solve steady state related optimizations, see De Marco et al. [2007]. This formulation may suffer from multiple local optima: many points may exist where the norm of the state derivative is locally minimal but nonzero. Such points do not represent equilibria, just points with locally minimum absolute state rates. The time derivatives can vanish at a multiplicity of points in the state space, and a global nonlinear equation solver is needed to handle all of them. Sequential quadratic programming with the nullity of the state derivatives as an equality constraint is usually used when solving model trimming problems; the Matlab Simulink function trim is a professional example, see Optimization Toolbox User's Guide [2010]. Another class of methods is based on the relation between the dynamic model parameters and the static characteristics, see Aguirre et al. [2004]. Methods for accounting for prior information about process stability or static gain in system identification for adaptive control applications are treated in Tulleken [1992]; they are based on establishing a geometric relationship between the model parameters and the Jacobian eigenvalues, together with a convex hull approximation of that relationship, and deal only with linear systems.

Model stability has also been used as an inequality constraint with subspace identification methods, as shown in Lacy and Bernstein [2003]. These methods can handle linear systems only.

2. FORMULATION OF THE IDENTIFICATION PROBLEM

Let us suppose a process model is being created in the form of a block diagram defined by the process causality flow. The individual blocks of the diagram represent model components modeling various physical phenomena such as heat transfer, thermodynamic processes, chemical reactions, energy and mass flows and conversions, etc. The individual model components are described by first principles of physics whenever possible; such a model structure usually results from physical insight into the modeled process. Even when physical modeling is preferred, it usually happens that certain phenomena are unknown or too complex to be modeled strictly by first principles. The model creator may decide to approximate such phenomena by empirical, or partly empirical, functions to keep the model complexity acceptable. Maintaining model stability during parameter optimization may become difficult if such empirical nonlinear mappings appear in closed loops inside the model structure. Also, the first principles equations may contain a number of unknown parameters, such as reaction rates, heat transfer coefficients, etc. A common technique is to identify the unknown parameters based on the input and output data of the individual model blocks. This may not always be possible due to a lack of measurements. Moreover, when identifying the model components separately, the overall model accuracy is not optimized because the propagation of errors across the model structure is not taken into account. The overall model accuracy can be optimized only when minimizing the whole model (and not just component) prediction errors with respect to the model parameters, see Stewart et al. [2010].

Let the emphasis be put on the stable equilibria which have been observed on the process. Such equilibria are usually obtained during step tests, in which each manipulated process input is changed by a certain value at a time. The process is then left to settle for some amount of time.
The process steady states may be collected by averaging the samples measured before the next input step change, as in Figure 1. Let the plant model be dynamic, time invariant, deterministic, nonlinear and time-continuous, represented by first order state space differential equations and output equations written in the usual form:

dx(t)/dt = f(x(t), u(t), p),
y(t) = g(x(t), u(t), p).    (1)

In practice, the model functions f and g can be evaluated numerically by evaluating the functions in the block diagram, but analyzing them in closed form would be awkward because of this model representation. The usual notation is used: x(t) is the state vector, u(t) is the vector of


external inputs, y(t) is the vector of measured outputs, and p is the vector of the model parameters which are not known accurately; all are finite dimensional. Further, a set of steady state input and output measurement data is available: D = {uk, yk, k = 1, 2, . . . }. Each pair {uk, yk} consists of input and output values actually observed on the process while the process was supposedly at a stable equilibrium. Further, let a known vector of initial conditions x0 be specified. The reason is that the algorithm will optimize the parameters p to approximate the observed equilibria reached from this initial condition. By fixing the initial condition, the algorithm avoids the ambiguity of multiple equilibria, i.e. multiple solutions of the set of nonlinear equations f(x(t), u(t), p) = 0. Usually at least one combination of the values of the state variables is known to be contained in the desired model stability region; these values often correspond to the process cold start or minimum energy state, and we recommend using them as x0. Supposedly, the process trajectory from this initial condition converges to a strictly stable equilibrium point under fixed input values uk, and the same holds for the model with the initial parameter values p0. These initial parameter estimates have been obtained through earlier modeling steps, e.g. by analyzing the individual model components separately. They should be as accurate as possible because the following optimization is only locally convergent. The assumption that the initial model parameter estimates define a stable trajectory from x0 will be relaxed later in section 4. The solution of the state space equation for a given initial condition, given constant input values and given parameter values is denoted as:

x(t) = Φ(x0, uk, p, t).    (2)
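Numerically, the solution operator Φ of (2) can be approximated with any ODE integrator. A minimal sketch using SciPy follows; the scalar model dx/dt = p(u − x) is a hypothetical stand-in for the block-diagram model, used only to illustrate the interface:

```python
import numpy as np
from scipy.integrate import solve_ivp

def Phi(f, x0, u, p, T):
    """Approximate Phi(x0, u, p, T): integrate dx/dt = f(x, u, p)
    from x0 with the external input held constant at u."""
    sol = solve_ivp(lambda t, x: f(x, u, p), (0.0, T), x0,
                    method="LSODA", rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

# Hypothetical scalar model dx/dt = p[0]*(u - x): stable for p[0] > 0,
# so Phi approaches the equilibrium x = u as T grows.
f = lambda x, u, p: p[0] * (u - x)
xT = Phi(f, np.array([0.0]), 2.0, np.array([1.5]), T=20.0)
```

For a stable model, increasing T makes Phi(x0, uk, p, T) a practical approximation of the steady state used throughout sections 2 and 3.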

The identification algorithm should improve the values of p0 based on the information in the static data by minimizing the sum of squared prediction errors:

J(p) = Σk ||ŷk(uk, p) − yk||²₂.    (3)

The model steady state prediction ŷk(uk, p) satisfies equations (1) with zero values of the state derivatives. Combining (2) and (3), the criterion function can be rewritten as follows:

J(p) = lim_{T→∞} Σk ||g(Φ(x0, uk, p, T), uk, p) − yk||²₂.    (4)

3. CONSTRAINT LINEARIZATION

Because the term steady state translates to zero derivatives with respect to time, the static identification problem can be understood as a constrained optimization with an equality constraint on the state derivatives. Due to the initial model stability assumption, it is possible to approximate the steady state solution by solving the model (1) numerically until the state derivatives are negligible; the time when this happens will be denoted T. Thus, it is possible to find an initial steady state point via a numerical solution on [0, T]. Starting from such a point, the optimization can continue by linearizing the equality constraint dx(t)/dt|_T = 0. To do so, the right hand side of (1) is differentiated at the equilibrium point at t = T:

dx(t)/dt = f(x(t), u(t), p) = 0,
f(x(t) + dx, uk, p + dp) ≈ (∂f(x(t), uk, p)/∂x) dx + (∂f(x(t), uk, p)/∂p) dp = 0.    (5)

Using equation (5), the sensitivity of the known stable terminal solution Φ(x0, uk, p, T) to the model parameters is:

dx/dp = −(∂f(x(t), uk, p)/∂x)⁻¹ (∂f(x(t), uk, p)/∂p),    (6)

where the derivatives are evaluated at p = p0, x = Φ(x0, uk, p0, T).

This result has the following geometric interpretation. In the joint space of the state variables and parameters {x, p}, a parameter perturbation changes the equilibrium solution along a direction tangent to the surface defined by f(x, uk, p) = 0. Hence a parameter perturbation and the implied change of the terminal solution are linearly related, as given by (6). The inverse matrix in (6) should exist because the assumed terminal solution is a strictly stable equilibrium point: the Jacobian eigenvalues all have negative real parts and none is zero. The sensitivity of the model steady state prediction ŷk(uk, p) is then given by:

dŷk(uk, p)/dp = ∂g(x(t), uk, p)/∂p − (∂g(x(t), uk, p)/∂x)(∂f(x(t), uk, p)/∂x)⁻¹(∂f(x(t), uk, p)/∂p),    (7)

where the derivatives are evaluated at p = p0, x = Φ(x0, uk, p0, T).

The sensitivity of the steady output with respect to the parameters can be used to minimize the cost function (3) via an optimization method. The algorithm proceeds iteratively as follows:

(I) The model (1) is solved to get a feasible equilibrium point.
(II) The constraint dx(t)/dt|_T = 0 is linearized, the model steady state output sensitivity with respect to the parameters is evaluated by means of (7), and the parameters are updated. A stopping condition is tested and the algorithm returns to (I) or stops.

Though this linearization method may work, it does not appear very efficient. The model will not reach a stable equilibrium point at (I) if x0 is pushed outside the model stability region from iteration to iteration; the method then fails when trying to solve an unstable differential equation.
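The linearized sensitivity formulas (6)–(7) can be sanity-checked numerically. The sketch below uses a hypothetical linear two-state model (the matrix A, the input u and the output map y = x1 + x2 are illustrative assumptions, not the paper's plant) and compares the formula with a finite-difference estimate of the steady-state output derivative:

```python
import numpy as np

# Hypothetical linear test model: f(x, u, p) = A @ x + p*u*e1, y = x[0] + x[1].
A = np.array([[-2.0, 1.0], [1.0, -3.0]])   # stable Jacobian df/dx
u, p0 = 1.0, 0.7

def steady_state(p):
    # f = 0  =>  A x = -p*u*e1 (solvable because A is nonsingular)
    return np.linalg.solve(A, -np.array([p * u, 0.0]))

def y_of(p):
    return steady_state(p).sum()

# Equation (7): dy/dp = dg/dp - (dg/dx)(df/dx)^{-1}(df/dp); here dg/dp = 0.
dg_dx = np.array([1.0, 1.0])
df_dp = np.array([u, 0.0])
dy_dp = -dg_dx @ np.linalg.solve(A, df_dp)

# Finite-difference check against the re-solved equilibrium.
eps = 1e-6
dy_dp_fd = (y_of(p0 + eps) - y_of(p0 - eps)) / (2 * eps)
```

For this linear model the agreement is exact up to rounding; for a nonlinear f the same comparison holds locally around the equilibrium.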


Therefore, the algorithm development is continued in the next section to improve the situation.

4. STABILITY ENFORCEMENT

The implicit stability assumption should not be forgotten even if it makes the problem more difficult. The equilibrium stability should be taken as known a priori for any steady state which has actually been observed and measured on the process; unstable equilibria usually cannot be observed in a straightforward way. Stability at all observed equilibria should be formally incorporated into the optimization problem as a constraint. The equilibrium stability constraint means that the eigenvalues of a certain Jacobian should have negative real parts. Zero real parts, i.e. integrators, are excluded because the static characteristics should be unique, independent of the initial condition. However, such an eigenvalue constraint may be difficult to handle. Though the eigenvalues of a matrix are known to be continuous functions of its elements, the problem is ill conditioned; Wilkinson's polynomial is the notorious example, due to Wilkinson [1959].

An additional term in the cost function (3) seems necessary to ensure that the optimized trajectory end point x(T → ∞) remains strictly stable. A term penalizing the norm of the state derivatives is not sufficient, as it does not discriminate between stable and unstable equilibria. It is therefore proposed to augment the cost function (3) by a term which penalizes unstable trajectories. The additional term is related to the Lyapunov exponents, which measure the average divergence rate of neighboring trajectories. The divergence rate will be denoted δx(t)/δx0; it can be interpreted as the sensitivity of the trajectory to a sufficiently small initial point perturbation δx0.
This quantity satisfies a differential (variational) equation which can be formally derived by differentiating (1) with respect to x:

d/dt (δx(t)/δx0) = (∂f(x(t), uk, p)/∂x) (δx(t)/δx0),    (8)

where the derivatives are evaluated along the nominal trajectory x = Φ(x0, uk, p, t).
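Equation (8) can be integrated jointly with the nominal trajectory. A minimal sketch for a scalar model follows; it anticipates the example of section 5 (ẋ = ax + bx² + 1 with a = −4, b = 3 is taken from there, the horizon T = 10 is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = -4.0, 3.0   # example parameters; stable equilibrium at x = 1/3

def rhs(t, z):
    x, S = z                        # nominal state and sensitivity dx(t)/dx0
    dfdx = a + 2.0 * b * x          # Jacobian of f(x) = a*x + b*x**2 + 1
    return [a * x + b * x**2 + 1.0, dfdx * S]

# Integrate from x0 = 0 with S(0) = 1 (identity for a scalar state).
sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 1.0], rtol=1e-9, atol=1e-12)
xT, ST = sol.y[0, -1], sol.y[1, -1]
```

Along this stable trajectory the Jacobian stays negative, so the sensitivity S(T) decays toward zero, which is exactly the quantity penalized in the augmented cost function.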

The divergence rates are described by a vector of Lyapunov exponents λ(i). For a perturbation along the i-th eigenvector of the trajectory sensitivity, the Lyapunov exponent is defined as:

λ(i) = lim_{T→∞} lim_{δx0→0} (1/T) log ||δx(T)/δx0(i)||.    (9)

The trajectory will be asymptotically stable (in the Lyapunov sense) if maxi λ(i) < 0, see Carbonell et al. [2002]. Adding a quadratic norm of the trajectory sensitivity, ||δx(T)/δx0||²₂, to the cost function (3) is proposed. The sensitivity is pushed to zero through the optimization, and the Lyapunov exponents can thus be made negative. However, unlike the Lyapunov exponents, the sensitivity norm is evaluated at a time T < ∞; in this respect it is only an approximation.

The question is how efficiently the sensitivity norm can be minimized. Numerical optimization methods such as steepest descent or nonlinear least squares require an efficient calculation of the sensitivity of the new criterion term with respect to the model parameters p. This information can be obtained by evaluating the mixed sensitivity (the sensitivity of the sensitivity), solving a linear differential equation for δ²x(T)/δpδx0. The sought equation is the linear second order variational equation:

d/dt (δ²x(t)/δpδx0) = (∂²f(x(t), uk, p)/∂p∂x) (δx(t)/δx0) + (∂f(x(t), uk, p)/∂x) (δ²x(t)/δpδx0),    (10)

where the derivatives are evaluated at p = p0, x = Φ(x0, uk, p, t).

The solution of (10) can be used to evaluate the gradient and an approximate Hessian of ||δx(T)/δx0||²₂ with respect to the parameters p. The calculations are recursive. First, (1) is solved to get the nominal trajectory and the terminal prediction error. Then, the linear time varying differential equation (8) is solved for a full set of linearly independent directions in the state space, e.g. (1, 0, 0, . . . ), (0, 1, 0, . . . ); the quadratic norms of the terminal points of all these trajectories are added to the cost function. Both the sensitivity and the nominal trajectories are used in equation (10) to get the full matrix of mixed sensitivities at the terminal state; equation (10) has to be solved again for a full regular set of parameter perturbations. Altogether, (1 + nx)(1 + np) differential equations have to be solved, where nx is the number of states and np the number of model parameters. All of the equations are linear time varying except for the model equation (1).

The stable equilibria matching problem can now be summarized as a boundary value problem for the model equation (1): find parameter values p such that, as T → ∞:
(1) the model output at T is sufficiently close to the observed data;
(2) the derivative norm ||dx(t)/dt|| goes to zero;
(3) the sensitivity norm ||δx(T)/δx0|| goes to zero.

It is possible to employ a second order optimization method for this nonlinear least squares problem when using (10), which suggests that the convergence could be superlinear. Unfortunately, such an approach would be analogous to the classic single shooting approach. It is well known that shooting approaches are not sufficiently stable, especially for long time horizons, which is exactly the problem solved here, as T → ∞. It is, however, possible to employ the multiple shooting approach, Bock and Plitt [1984].
We propose a simpler method which adaptively extends the solution time horizon during the optimization process, i.e. increases the end time to Tk+1 once the three conditions above are optimized for some Tk < Tk+1. This can be done as follows:


(I) Solve the model equation (1) up to the assumed settling time T, or stop earlier at Tk if the cost function becomes too large.
(II) Use the shooting approach on [0, Tk] to decrease the cost function value, i.e. decrease the end point sensitivity to the initial condition and bring the trajectory end point closer to the data.
(III) If the current end point time Tk is less than T, go to (I); terminate otherwise.
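Steps (I)–(III) can be sketched as an extending-horizon least squares loop on the scalar example of section 5, ẋ = ax + bx² + 1, with target steady state y = 1. Everything below (the horizon schedule, the unit residual weighting, the blow-up penalty) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(p, T):
    """End point x(T) and its sensitivity S(T) = dx(T)/dx0, from x0 = 0."""
    a, b = p
    def rhs(t, z):
        x, S = z
        return [a * x + b * x**2 + 1.0, (a + 2.0 * b * x) * S]
    sol = solve_ivp(rhs, (0.0, T), [0.0, 1.0], rtol=1e-8, atol=1e-10)
    if not sol.success:            # trajectory escaped in finite time
        return 1e6, 1e6
    return sol.y[0, -1], sol.y[1, -1]

def residuals(p, T, y_target=1.0, w=1.0):
    xT, ST = simulate(p, T)
    # Data misfit plus the stability-enforcing sensitivity penalty.
    return [xT - y_target, w * ST]

p = np.array([-4.0, 3.0])          # stable initial guess (equilibrium 1/3)
for T in (1.0, 2.0, 5.0, 10.0):    # adaptively extended horizon
    p = least_squares(residuals, p, args=(T,)).x
xT, ST = simulate(p, 10.0)
```

Driving both residuals to zero corresponds to conditions (1)–(3): the end point matches the data and its sensitivity to x0 vanishes, so the trajectory terminates at a stable equilibrium.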

The idea is to start with a shorter trajectory and not to continue further until the end point is close to the target. In section 2, an assumption was made that the initial trajectory is stable; with this extending horizon approach, that assumption can eventually be relaxed.

[Fig. 2. Trajectory optimization process based on steady state constraint linearization.]

5. NUMERICAL EXAMPLE

An autonomous model described by a first order differential equation is used as an example:

ẋ(t) = ax(t) + bx²(t) + 1,
y(t) = x(t),  x0 = 0.    (11)

The steady state solution which has to be achieved through the parameters a and b is x = 1; this value might represent the observed process steady state. Optimizing the values a and b to achieve a stable equilibrium at y = 1 is the steady state identification problem. Note that this identification problem is simpler than the generic problem (3), for the sake of simplicity and clarity of visualization. First, the model (11) is autonomous; it has no observed input-output stable equilibrium values {uk, yk}. Instead, there is just one observed stable equilibrium value y1 = 1. Furthermore, we can expect that it will be possible to match this steady state exactly when manipulating the two parameters a and b simultaneously. In more typical identification problems, the number of observed equilibria is greater than the number of parameters and it is only possible to fit the steady state predictions in the least squares sense. The transient behavior of the model is not regarded in this example.

To simplify the notation, let Sα(t) be the sensitivity of the nominal trajectory to α as a function of time, where α may be any of x, a, b or the mixed combinations (a, x) and (b, x). The evolution of all the sensitivities can be derived by differentiating the model equation (11). The equations have to be solved in this order along a nominal trajectory of (11):

Ṡx(t) = (a + 2bx(t)) Sx(t),    (12)
Ṡa(t) = x(t) + (a + 2bx(t)) Sa(t),    (13)
Ṡb(t) = x²(t) + (a + 2bx(t)) Sb(t),    (14)
Ṡax(t) = (1 + 2b Sa(t)) Sx(t) + (a + 2bx(t)) Sax(t),    (15)
Ṡbx(t) = (2x(t) + 2b Sb(t)) Sx(t) + (a + 2bx(t)) Sbx(t).    (16)

Let the initial parameter values be a = −4, b = 3. These values imply the initial steady state value x = 1/3. The point x = 1 is the stability boundary of this steady state, i.e. all trajectories starting at x0 > 1 are unstable; all others

converge to 1/3. In practical modeling problems involving, for example, complex chemical or combustion processes, the quadratic term bx²(t) could be added on a semi-empirical basis, and the unstable equilibrium can in fact be just a fictitious solution added unintentionally far from the physically sensible x(t) values, without any connection to the plant behavior.

Figures 2 and 3 compare how the two optimization methods update the initial trajectory. For both methods, the steady state was approximated by solving equation (11) numerically with T = 10. The minimization method with the sensitivity norm (8) achieved the desired terminal state with a stable trajectory, see Figure 3. The other method was based on the equality constraint linearization only. Both methods started by updating the parameters in almost identical directions, achieving the value x = 1 at time T. However, the result of the linearization-only method in Figure 2 is useless because the state does not stand still at T; the actual error in the steady state characteristics is therefore infinite in this case. The same method would fail numerically on a longer simulation horizon due to numerical instability.

The analysis of the example is continued to illustrate the difference between a finite (though possibly extending) horizon and dealing with the equilibrium points directly. Figure 4 shows the Jacobian eigenvalues at the equilibria of the differential equation (11): two eigenvalues corresponding to the two steady states, depending on the parameter b, with a fixed at −1. For b in (0, 1/4) there is one stable and one unstable steady state, with eigenvalues λ2 < 0 and λ1 > 0 respectively. For b > 1/4, there is no finite equilibrium. The numerically calculated sensitivity function Sx(T) = δx(T)/δx0, evaluated as T → ∞ and assuming x0 = 0, is related to the stable eigenvalue through (9). Figure 4 shows approximations for T ∈ {1, 5, 10, 15}: the value (1/T) log |Sx| converges to λ2 as T increases. The advantage of the finite time calculation is that the value is always defined, finite and single valued even if the model has no equilibrium, or two.


[Fig. 3. Trajectory optimization process involving the terminal sensitivity norm minimization.]

[Fig. 4. Jacobian eigenvalues depending on b, and a finite horizon approximation of the stable one.]

6. CONCLUSION

An alternative viewpoint on the optimization of the steady states of a dynamic system was presented. The algorithm optimizes the end points of trajectories which all start at a fixed initial condition. This is simpler than the general constrained optimization problem with the nullity of the state derivatives as equality constraints. The trajectories are calculated numerically by solving the model differential equations for a set of constant input values; it is a simulation-in-optimization approach. The stability is controlled by minimizing the norm of the sensitivity of the trajectory end point to the initial condition. When the optimization is successful, both the sensitivity and the derivatives approach zero, resulting in stable trajectories terminating at stable equilibria. The sensitivity value is related to the degree of instability. This formulation is a boundary value problem which can be solved via numerically stable multiple shooting methods known from nonlinear control and nonlinear system identification. This article proposed a similar approach, extending the finite time horizon adaptively during the optimization to avoid trajectories too far from the target point.

The advantage of using the sensitivity norm compared to computing the Jacobian eigenvalues is that the sensitivity function is always finite and smooth for a sufficiently short time horizon, even if the initial model parameters are inaccurate. The numerical experiments indicate that the problem nonlinearity is reduced for short time horizons. Moreover, the mixed sensitivity, or second order variation, can easily be calculated by solving linear differential equations and can be used for the nonlinear least squares update, which improves the convergence rate. The Jacobian eigenvalues at the equilibrium points are more difficult to evaluate, and their sensitivity to the parameters is ill-conditioned.

ACKNOWLEDGEMENTS

This work was partially supported by the ALFA programme of the Technology Agency of the Czech Republic, project TA01030170.

REFERENCES

L.A. Aguirre, M.F.S. Barroso, R.R. Saldanha, and E.M.A.M. Mendes. Imposing steady-state performance on identified nonlinear polynomial models by means of constrained parameter estimation. IEE Proc.-Control Theory Appl., volume 151, No. 2, 2004.

H.G. Bock and K.J. Plitt. A Multiple Shooting Algorithm for Direct Solution of Optimal Control Problems. IFAC 9th World Congress, volume 9, pages 242-247, 1984.

B.N. Bond, Z. Mahmood, Y. Li, R. Sredojevic, A. Megretski, V. Stojanovic, Y. Avniel, and L. Daniel. Compact Modeling of Nonlinear Analog Circuits Using System Identification Via Semidefinite Programming and Incremental Stability Certification. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, volume 29, No. 8, August 2010.

F. Carbonell, J.C. Jimenez, and R. Biscay. A numerical method for the computation of the Lyapunov exponents of nonlinear ordinary differential equations. Applied Mathematics and Computation, volume 131, pages 21-37, 2002.

A. De Marco, E.L. Duke, and J.S. Berndt. A General Solution to the Aircraft Trim Problem. AIAA Modeling and Simulation Technologies Conference and Exhibit, Hilton Head, South Carolina, August 2007.

S.L. Lacy and D.S. Bernstein. Subspace Identification With Guaranteed Stability Using Constrained Optimization. IEEE Transactions on Automatic Control, volume 48, No. 7, July 2003.

Mathworks. Optimization Toolbox, User's Guide. 2010.

G. Stewart, F. Borrelli, J. Pekar, D. Germann, D. Pachner, and D. Kihas. Toward a Systematic Design for Turbocharged Engine Control. In L. del Re et al. (Eds.): Automotive Model Predictive Control, LNCIS 402, pages 211-230. Springer-Verlag Berlin Heidelberg, 2010.

H.J.A.F. Tulleken. Grey-box Modelling and Identification Topics. Techn. Univ. Delft, 1992.

J.H. Wilkinson. The evaluation of the zeros of ill-conditioned polynomials. Part I. Numerische Mathematik, volume 1, pages 150-166, 1959.
