Zero Finding via Feedback Stabilisation


IFAC PapersOnLine 50-1 (2017) 8133–8138

T. Mylvaganam ∗  A. Astolfi ∗∗

∗ Department of Aeronautics, Imperial College London, London SW7 2AZ, UK (Email: [email protected]).
∗∗ Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK and Dipartimento di Ingegneria Civile e Ingegneria Informatica, Università di Roma “Tor Vergata”, Via del Politecnico 1, 00133 Roma, Italy (Email: [email protected]).

Abstract: Two iterative algorithms for solving systems of linear and nonlinear equations are proposed. For linear problems the algorithm is based on a control theoretic approach and it is guaranteed to yield a converging sequence for any initial condition provided a solution exists. Systems of nonlinear equations are then considered and a generalised algorithm, again taking inspiration from control theory, is proposed. Local convergence is guaranteed in the nonlinear setting. Both the linear and the nonlinear algorithms are demonstrated on a series of numerical examples.

Keywords: Numerical algorithms, numerical methods, convergence of numerical methods, control system design, numerical analysis.

1. INTRODUCTION

A key objective which is central to control theory is to feedback stabilise an equilibrium of a system, i.e. given a system one seeks to design a control input which renders an equilibrium of the system globally (or locally) asymptotically stable. A consequence of this is that the state of the system converges to a certain equilibrium point. Moreover, it may be desirable that a certain convergence rate be achieved. For linear systems the rate of convergence is determined by the locations of the eigenvalues associated with the closed-loop system (Dorf and Bishop [2011]). In contrast, stability analysis for nonlinear systems often relies on Lyapunov methods (Isidori [1995], Khalil [1996]). On the other hand, numerical analysis concerns algorithms used to obtain solutions to mathematical problems when closed-form solutions are not readily available. In general, numerical methods can be divided into two approaches: direct methods and iterative methods. Direct methods, such as Gaussian elimination and the simplex method, allow one to compute the solution of a problem in a finite number of steps. Moreover, given infinite precision, a solution found via such methods is exact. Differently from direct methods, iterative methods yield sequences which converge to a solution of the underlying mathematical problem. Examples of such methods include Newton's method. In practice, if an iterative method is convergent, it is terminated when a sufficiently accurate solution is obtained, thus providing an approximate solution of the problem.

Several links can be drawn between numerical analysis and dynamical systems theory: for example, the solution of a system of equations can be interpreted as the equilibrium of a dynamical system. There is a further similarity between the dynamics of a system and iterations of an iterative numerical method. Thus, there is a clear analogy between the control theoretic problem of designing a stabilising controller and that of determining a convergent iterative method. In this paper we explore the similarities between the two notions and propose a numerical method for obtaining the solution of a system of linear equations which is centred about a control theoretic approach. Due to its simplicity (in terms of computational requirements), the developed technique is well-suited for large-scale problems. Exploiting the system theoretic interpretation of the proposed numerical method, it is modified to solve systems of nonlinear equations, with local convergence results.

Different iterative methods for solving systems of equations exist. The Jacobi method and the Gauss-Seidel method, which fall within the category of stationary iterative algorithms, are two such examples (see, for instance, Ortega [1990]). The convergence of these algorithms can only be guaranteed for certain classes of problems (see Ortega [1990]). Iterative methods for nonlinear systems of equations include the well-known Newton's method and variations thereof (see, for example, Dennis Jr. and Schnabel [1996], Ortega and Rheinboldt [1970]). However, the convergence of Newton's method for nonlinear equations relies heavily on the selection of the initial condition and the method is not generally globally convergent (Dennis Jr. and Schnabel [1996]). Other iterative methods include a range of algorithms for solving unconstrained, convex optimisation problems (see, for example, Boyd and Vandenberghe [2004]) or Krylov subspace methods (see, for example, Van Der Vorst [2000]). A summary of developments made in the 20th century, relating to iterative methods for solving linear systems of equations, can be found in Saad and Van Der Vorst [2000]. In Smale [1976] a so-called global Newton method, which relies on the introduction of an ordinary differential equation, is provided. Properties of the method are further considered in Keenan [1981].



With roots in dynamical systems theory, the global Newton method considered in Smale [1976] and Keenan [1981] has been developed in the context of mathematical economics. The parallels between control theory and iterative algorithms have been studied from several different perspectives in the literature. In Bhaya and Kaszkurewicz [2007] iterative methods for solving linear and nonlinear equations are viewed as regulation problems for dynamical systems with feedback, and several iterative methods are studied using this approach. The connections between the convergence of iterative methods and stability analysis based on Lyapunov functions are considered in Ortega [1973]. More recently, the convergence properties of several well-known iterative methods have been studied from a system theoretic point of view in Kashima and Yamamoto [2007]. The link between iterative algorithms and dynamical systems is also utilised in Harbor [1989] and Han and Han [2010]. Differently from the aforementioned literature, in Hasan et al. [2013] the effects of finite precision on numerical methods are investigated from a control theoretic point of view.

In this paper, iterative algorithms are proposed based on the discretisation of the ordinary differential equations (ODEs) describing dynamical systems with stabilising state feedback. The parallels between control theory and numerical analysis have not yet been fully exploited: the results in this paper are intended to highlight new ways in which these parallels can be taken advantage of. Moreover, this work is intended to pave the way for the construction and the study of convergence properties of iterative methods for solving systems of coupled algebraic Riccati equations (AREs) which arise in linear quadratic differential games (see, for example, Mylvaganam et al. [2015], Engwerda [2005]). Coupled AREs are, in general, not straightforward to solve and, consequently, there is a need for novel numerical methods to deal with such problems.

The remainder of this paper is organised as follows. The problem of solving a system of linear equations is introduced, along with a brief summary of available numerical methods, before a novel approach for solving such systems of equations, based on a control theoretic approach, is provided in Section 2. The approach results in a globally convergent algorithm for solving systems of linear equations. Systems of nonlinear equations are then considered. The approach introduced in Section 2 is generalised and applied to nonlinear problems in Section 3, where a locally convergent algorithm for solving systems of nonlinear equations is provided. Both methods (for linear and nonlinear problems, respectively) and the resulting algorithms are illustrated on a series of numerical examples in Section 4. Finally, some concluding remarks and directions for further research are provided in Section 5.

The following standard notation is used in the remainder of this paper. The set of real numbers is denoted by R and the open left half complex plane is denoted by C−. The identity matrix is denoted by I. The spectrum of the matrix A is denoted by σ(A), i.e. σ(A) = {λ1, . . . , λN}, where λi, i = 1, . . . , N, are the eigenvalues of A. The spectral radius of the matrix A is denoted by ρ(A), i.e. ρ(A) = max{|λ1|, . . . , |λN|}. The notation A = {aij} is used as a compact representation of the matrix A. The weighted norm of a vector x ∈ Rn is denoted by ‖x‖R, i.e. ‖x‖R = √(x⊤Rx), with R ∈ Rn×n. Finally, given a vector x ∈ Rn, maxi {|xi|} denotes the element of x with the largest magnitude.

2. AN ITERATIVE ALGORITHM FOR SYSTEMS OF LINEAR EQUATIONS

Consider the system of linear equations
Ax + d = 0 ,          (1)
where A ∈ Rn×n, x ∈ Rn and d ∈ Rn, and suppose we want to find the solution x∗ of equation (1). In what follows the following assumption is made.

Assumption 1. The matrix A in (1) is non-singular, i.e. det(A) ≠ 0 (or, equivalently, rank(A) = n).

Clearly, the solution of (1) is
x∗ = −A⁻¹d .          (2)
However, it is of interest to obtain the solution without performing computationally demanding operations, such as matrix inversions. Alternative numerical methods become particularly important for large-scale systems of equations, i.e. when n is large. In what follows, iterative numerical methods for solving the system of equations (1) are considered.

In Harbor [1989] iterative numerical methods for solving equations of the form (1) are considered from a control theoretic perspective. In particular, it is demonstrated that an iterative method, such as the well-known Jacobi iteration and the Gauss-Seidel iteration, can be interpreted as a feedback control system. An iterative method can thus be described as a discrete-time system and its convergence can be studied using standard tools for stability analysis. In Han and Han [2010] an algorithm for solving systems of nonlinear equations is provided. The algorithm proposed therein relies on the observation that a solution of such a system of equations can be interpreted as an equilibrium of a dynamical system (see also Deuflhard [2011]). In the remainder of this paper we pursue the idea of developing numerical methods based on control theoretic concepts. In particular, we seek to solve the following problem.

Problem 1. Consider the system of linear equations (1), where A satisfies Assumption 1. Derive an iterative numerical method which allows one to determine the solution x∗ (given by (2)) of the system without performing matrix inversions.

Drawing inspiration from Harbor [1989] and Han and Han [2010], an iterative method for solving Problem 1 which is centred around dynamical systems and feedback control is presented in this section. In particular, the problem of solving the system of linear equations (1) is recast as a problem of designing a dynamic feedback controller for a linear dynamical system. A similar, yet preliminary, approach has been adopted in Mylvaganam and Astolfi [2016a] and Mylvaganam and Astolfi [2016b] to obtain solutions for coupled algebraic Riccati equations (AREs), which are nonlinear. To this end, consider the variables x(t) ∈ Rn and u(t) ∈ Rn and the extended system


ẋ = Au + d ,
u̇ = w ,          (3)
with input w.

Table 1. A-LI: Iterative algorithm for linear equations
Input: Parameters A, d, K1, K2, T, ε.
Output: Estimated solution x∗.
1 : Initialization: Set initial conditions x0, u0.
2 : Repeat
3 :     xk+1 := xk + (Auk + d)T
4 :     uk+1 := uk + (K1 xk + K2 uk)T
5 :     k := k + 1
6 : Until maxi {|Auk + d|i} < ε
7 : x∗ = uk
8 : End


Problem 2. Consider the extended system (3). Design a state feedback w such that the closed-loop system has an exponentially stable equilibrium.

Remark 1. In the dynamic setting introduced above, the constant term d in (1) can be interpreted as a static disturbance affecting the dynamical system (3). □

Let Ki ∈ Rn×n, for i = 1, 2, and let
$$A_{cl} = \begin{bmatrix} 0 & A \\ K_1 & K_2 \end{bmatrix}.$$
A solution of Problem 2 is provided by the following simple statement.


Lemma 1. Let Ki, i = 1, 2, be such that σ(Acl) ∈ C− and suppose K1 is invertible. Then the state feedback
w = K1 x + K2 u ,          (4)
is such that the dynamical system (3) is exponentially stable and lim_{t→∞} u(t) = x∗. □

Substituting (4) into (3), the closed-loop system is described by
$$\begin{bmatrix} \dot{x} \\ \dot{u} \end{bmatrix} = A_{cl} \begin{bmatrix} x \\ u \end{bmatrix} + \begin{bmatrix} d \\ 0 \end{bmatrix}. \qquad (5)$$
A selection of K1 and K2 satisfying the assumptions in Lemma 1, and which does not require computing matrix inverses, is provided in the following statement.

Proposition 1. The selection
K1 = −A⊤ ,  K2 = −M ,          (6)
for some positive definite matrix M, is such that σ(Acl) ∈ C−. □

Remark 2. Any positive definite matrix M is such that the equilibrium of the closed-loop system (5) is exponentially stable. A simple selection is then provided by
M = λI ,          (7)
where λ > 0. □

Lemma 1 implies that the solution of (1) can be obtained by considering the evolution of the states of the closed-loop system (5). Thus, the discretisation of (5) starting from the initial conditions x(0) = x0 and u(0) = u0 results in an iterative algorithm for obtaining the solution of (1), as shown in Table 1, where T is the step size and ε > 0 is a tolerance value indicating the accuracy required by the algorithm. In what follows the algorithm is referred to as Algorithm A-LI. Algorithm A-LI in Table 1 results from applying the Euler method for differential equations to the ordinary differential equation (5) and it is guaranteed to converge for a sufficiently small step size (provided K1 and K2 are such that σ(Acl) ∈ C−). Thus, Algorithm A-LI provides a solution for Problem 1, i.e. it provides a simple algorithm for obtaining an asymptotic solution of (1). The conditions for Algorithm A-LI to converge, for any initial conditions x0 and u0, are provided in the following statement.

Lemma 2. (Chapter 7, Ortega [1990]). Suppose that the parameters K1, K2 and T in Algorithm A-LI are selected such that ρ(I + Acl T) < 1. Then Algorithm A-LI is convergent, i.e. lim_{k→∞} uk = x∗. □
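As an illustration, a minimal NumPy sketch of Algorithm A-LI is given below. It uses the selection K1 = −A⊤ and K2 = −λI of Proposition 1 and Remark 2; the routine name a_li and the default parameter values are illustrative choices rather than part of the paper.

import numpy as np

def a_li(A, d, T=0.05, lam=10.0, eps=1e-6, max_iter=100000):
    # Algorithm A-LI (Table 1): Euler discretisation of the closed-loop system (5)
    # with K1 = -A^T and K2 = -lam*I (Proposition 1 and Remark 2).
    n = A.shape[0]
    x = np.zeros(n)                     # initial condition x0
    u = np.zeros(n)                     # initial condition u0
    K1, K2 = -A.T, -lam * np.eye(n)
    for _ in range(max_iter):
        r = A @ u + d                   # residual A u_k + d
        if np.max(np.abs(r)) < eps:     # termination criterion
            break
        # simultaneous updates, as in lines 3-4 of Table 1
        x, u = x + r * T, u + (K1 @ x + K2 @ u) * T
    return u                            # u_k approximates x* = -A^{-1} d

Following Lemma 2, one may check, when the problem size allows it, that ρ(I + Acl T) < 1 for Acl = np.block([[np.zeros((n, n)), A], [K1, K2]]) before running the iteration, for instance via np.linalg.eigvals.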

Remark 3. The matrices K1 and K2 and the step size T can be selected to maximise the speed of convergence of Algorithm A-LI. In fact, the rate of convergence is determined by ρ(H), where H = I + Acl T, and the aforementioned parameters can be selected to minimise ρ(H), thus maximising the rate of convergence (see Chapter 7, Ortega [1990]). In some cases it may be useful to apply Gershgorin's Theorem, or generalisations thereof (see, for instance, Feingold and Varga [1962]), to minimise the spectral radius of H without the direct computation of its eigenvalues. □

The speed of the algorithm may be further improved according to the following two remarks, essentially relating to the method of discretising the continuous-time dynamical system (3) and the selection of the step size T.

Remark 4. Different schemes for discretising the system (3), resulting in iterations different from those in Algorithm A-LI, can be considered and may result in improved convergence rates. Higher order one-step methods, such as the classic Runge-Kutta method, or multistep methods may be considered (see, for example, Chapter 5, Ortega [1990]). □

Remark 5. The step size T is constant in Algorithm A-LI. Utilising an adaptive step size may prove useful in improving the algorithm. Adaptive schemes are, for example, considered in Ilie et al. [2008]. Note that the system (5) with K1 and K2 in (6) is a Hamiltonian system. It may therefore be of interest to explore discrete-time models for port-controlled Hamiltonian systems as given in Laila and Astolfi [2006] and Stuart and Humphries [1998]. □

3. AN ITERATIVE ALGORITHM FOR SYSTEMS OF NONLINEAR EQUATIONS

Algorithm A-LI in Section 2 is applicable only to systems of linear equations. However, the algorithm can be generalised to deal with systems of nonlinear equations. To this end, consider the system of equations
f(x) = 0 ,          (8)
where x ∈ Rn and f : Rn → Rn is a smooth mapping, and suppose we want to find a solution x∗ satisfying (8), i.e. f(x∗) = 0. As in the case of linear systems discussed in Section 2, it is often of interest to obtain such a solution numerically. That is, we seek to solve the following problem (which is a direct extension of Problem 1).


Problem 3. Consider the system of nonlinear equations (8). Derive an iterative numerical method which allows one to determine the solution x∗ of the system.

Table 2. A-NLI: Iterative Algorithm for Nonlinear Equations
Input: Parameters f(x), M(xk, uk), T, ε.
Output: Estimated solution x∗.
1 : Initialization: Set initial conditions x0, u0.
2 : Repeat
3 :     xk+1 := xk + f(uk)T
4 :     uk+1 := uk − (F(uk)⊤ xk + M(xk, uk)uk)T
5 :     k := k + 1
6 : Until maxi {|f(uk)|i} < ε
7 : x∗ = uk
8 : End
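A minimal NumPy transcription of Table 2 is sketched below, assuming the user supplies f and the matrix-valued function F of Remark 7 below (so that f(x) = F(x)x + f(0)), and taking the simple constant choice M(x, u) = λI; the routine name a_nli and the default values are illustrative.

import numpy as np

def a_nli(f, F, n, lam=10.0, T=0.1, eps=1e-6, max_iter=100000):
    # Algorithm A-NLI (Table 2): Euler discretisation of (9) with the feedback
    # w = -F(u)^T x - M(x, u) u, cf. (12) below, using the constant choice M = lam*I.
    x = np.zeros(n)                                  # initial condition x0
    u = np.zeros(n)                                  # initial condition u0
    for _ in range(max_iter):
        fu = f(u)
        if np.max(np.abs(fu)) < eps:                 # termination criterion
            break
        # simultaneous updates, as in lines 3-4 of Table 2
        x, u = x + fu * T, u - (F(u).T @ x + lam * u) * T
    return u                                         # u_k approximates a solution of f(x) = 0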

In the following a second algorithm, generalised to handle systems of nonlinear equations, is provided.

Remark 6. Unlike the case with systems of linear equations, systems of nonlinear equations of the form (8) may have several solutions. Therefore, in what follows, local solutions for Problem 3 are considered. □

Let x∗ denote a solution of (8). Consider, as in the linear case in Section 2, the variables x(t) ∈ Rn and u(t) ∈ Rn and the extended system
ẋ = f(u) ,
u̇ = w ,          (9)
with w as input. The "nonlinear equivalent" of Problem 2 follows.

Problem 4. Consider the extended system (9). Design a state feedback w such that the closed-loop system has a (locally) asymptotically stable equilibrium at u = x∗.

Remark 7. There exists a matrix-valued function F(u) : Rn → Rn×n such that the system (9) can be written in the form¹
ẋ = F(u)u + d ,
u̇ = w ,          (10)
where d = f(0). □

Suppose an equilibrium, denoted by (xe, ue), of the system (9) exists and consider the function V : Rn × Rn → R given by
$$V(x,u) = \frac{1}{2}\begin{bmatrix} x - x_e \\ u - u_e \end{bmatrix}^\top \begin{bmatrix} \lambda I & \delta F(u_e) \\ \delta F(u_e)^\top & \lambda I \end{bmatrix}\begin{bmatrix} x - x_e \\ u - u_e \end{bmatrix}, \qquad (11)$$
with λ > 0 and δ > 0. Let M : Rn × Rn → Rn×n denote a positive definite matrix-valued function, i.e. M(x, u) > 0 for all x and u, and let ∆1 and ∆2 be matrix-valued mappings such that ∆1(u, ue)(u − ue) = F(u)u − F(ue)ue and ∆2(u, ue, xe)⊤(x − xe) = F(u)⊤x − F(ue)⊤xe. Finally, let
$$\Gamma_{11} = \tfrac{1}{2}\,\delta\left(F(u_e)\Delta_2^\top + \Delta_2 F(u_e)^\top\right), \quad \Gamma_{12} = \tfrac{1}{2}\left(\delta F(u_e)M + \lambda(\Delta_2^\top - \Delta_1)\right), \quad \Gamma_{22} = \lambda M - \tfrac{1}{2}\,\delta\left(F(u_e)^\top \Delta_1 + \Delta_1^\top F(u_e)\right).$$

Proposition 2. Let
w = −F(u)⊤x − M(x, u)u .          (12)
Suppose F(ue) is invertible, λ and δ are such that
$$\lambda I - \frac{\delta^2}{\lambda} F(u_e)^\top F(u_e) > 0, \qquad (13)$$
and suppose the trajectories of the closed-loop system (9)-(12) satisfy
$$\Gamma_{11} > 0 , \qquad \Gamma_{22} - \Gamma_{12}^\top (\Gamma_{11})^{-1} \Gamma_{12} \le 0 \qquad (14)$$
in a neighbourhood N containing the equilibrium (xe, ue). Then the closed-loop system (9)-(12) is such that the equilibrium (xe, ue) is (locally) exponentially stable and lim_{t→∞} u(t) = x∗, where x∗ is the solution of (8). □

¹ The statement follows directly from the observations that ẋ = f(u) − f(0) + f(0) and that the first two terms can be written as f(u) − f(0) = F(u)u.

Remark 8. In the linear case F is constant and ∆1 = ∆2 = A. Thus, in the linear case the inequalities (13) and (14) are trivially satisfied (for suitable values of λ and δ). From a control theoretic perspective the challenge, which becomes apparent in the nonlinear case, is that we seek to stabilise an unknown equilibrium of the system. □

Similarly to what has been done in Section 2 for systems of linear equations, the above statement is utilised to develop an algorithm for solving Problem 3. In particular, the iterative numerical algorithm, given in Table 2, for obtaining an approximate solution of (8) follows from the discretisation of the closed-loop system (9)-(12). Henceforth the algorithm will be referred to as Algorithm A-NLI. Similarly to what has been seen in Section 2, Algorithm A-NLI in Table 2 simply represents the Euler approximation of the dynamical system (9).

Remark 9. As seen in Lemma 2 in the linear case, convergence of Algorithm A-NLI depends on the step size T. Similarly to the observations made in Remark 3, the parameters may be selected to maximise the rate of convergence and, as seen in Remarks 4 and 5, alternative methods of discretising the dynamical system (9) may yield further improvements. □

4. NUMERICAL EXAMPLES

Three numerical examples are provided in this section. A system of linear equations, a nonlinear equation, which is a scalar ARE, and a system of nonlinear equations are considered separately in the following.

4.1 System of Linear Equations

Consider first a system of equations of the form (1) with 1000 unknowns, i.e. n = 1000, and the error
ek = maxi {|Auk + d|i} ,          (15)

which quantifies the maximum (absolute) residue at each iteration. Note that this also provides the termination criterion of the algorithm. A matrix A = {aij}, such that A ∈ R1000×1000, and such that its diagonal elements satisfy 1 < aii < 10 and its non-diagonal elements satisfy −0.1 < aij < 0.1, and a vector d ∈ R1000 are randomly generated in MATLAB. Algorithm A-LI shown in Table 1 is applied with the following selection of parameters. The initial condition x0 = u0 = 0 is selected, the sampling time is taken to be T = 0.05 seconds and ε = 5 · 10⁻⁶. In line with Proposition 1, K1 = −A⊤ and M = 10I.
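As a sketch, the experiment just described can be reproduced with the a_li routine given after Lemma 2; the random-number calls and the distribution of d, which is not specified in the text, are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)                       # arbitrary seed, for reproducibility
n = 1000
A = rng.uniform(-0.1, 0.1, size=(n, n))              # non-diagonal entries in (-0.1, 0.1)
np.fill_diagonal(A, rng.uniform(1.0, 10.0, size=n))  # diagonal entries in (1, 10)
d = rng.uniform(-1.0, 1.0, size=n)                   # distribution of d assumed, not given in the text

x_star = a_li(A, d, T=0.05, lam=10.0, eps=5e-6)
print(np.max(np.abs(A @ x_star + d)))                # final residual (should be below the tolerance 5e-6)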

Fig. 1. Time history of the error e associated with Algorithm A-LI (solid line) and with an alternative algorithm based on the Runge-Kutta method (dashed line).

Following Remark 4, an alternative to Algorithm A-LI, in which the discretisation is done using the classical (fourth order) Runge-Kutta method in place of the Euler method (see, for instance, Chapter 5, Ortega [1990]), is also considered, for the same selection of parameters with the exception of the step size, which is taken to be T = 0.1. Note that once the desired accuracy is reached the algorithms are terminated. The time history of the error e is shown in Figure 1 for Algorithm A-LI (solid line) and for the alternative algorithm based on the fourth order Runge-Kutta method (dashed line). Note that, despite the parameters not having been selected to optimise the convergence rate, the alternative algorithm requires half the number of iterations compared to Algorithm A-LI, indicating that, as noted in Remarks 4 and 5, the convergence rate may be further improved.
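As an illustration of the Runge-Kutta alternative mentioned above (cf. Remark 4), one classical fourth-order Runge-Kutta step for the linear closed-loop system (5) can be sketched as follows; rk4_step is an illustrative name, and the initialization and termination logic of Algorithm A-LI remain unchanged.

import numpy as np

def rk4_step(Acl, b, z, T):
    # One classical (fourth order) Runge-Kutta step for the linear ODE
    # z' = Acl z + b, with z = (x, u) and b = (d, 0) as in (5).
    k1 = Acl @ z + b
    k2 = Acl @ (z + 0.5 * T * k1) + b
    k3 = Acl @ (z + 0.5 * T * k2) + b
    k4 = Acl @ (z + T * k3) + b
    return z + (T / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

Replacing lines 3-4 of Table 1 with z_{k+1} = rk4_step(Acl, b, z_k, T), where Acl and b are built from A, d, K1 and K2, gives a variant of the kind compared in Figure 1 (here with T = 0.1).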

Fig. 2. Time histories of x (top) and u (bottom).


Fig. 3. Time history of the error e.


4.2 Scalar ARE


Consider the case in which n = 1 and
f(x) = 2ax − b²x² + q ,          (16)
where a ∈ R, b ∈ R and q ∈ R. Note that, similarly to one of the examples considered in Mylvaganam and Astolfi [2016b], the quadratic equation (16) is a scalar ARE. Note also that (16) can be written in the form (10) in Remark 7. Thus, Algorithm A-NLI can be implemented with F(u) = 2a − b²u and d = q. Consider the case in which a = −0.1, b = 0.2 and q = 10. The parameters for Algorithm A-NLI are selected as M = 10, ε = 10⁻⁶ and T = 0.2. The resulting time histories of x (top) and u (bottom) are shown in Figure 2, whereas the time history of the nonlinear error
ek = maxi {|f(uk)|i} ,          (17)

is shown in Figure 3. The circular markers and dashed vertical lines in Figures 2 and 3 indicate the iteration at which Algorithm A-NLI terminates and returns a solution of (16), namely x∗ = 13.5078. In this case the algorithm converges to the positive root. Note that in the context of optimal control and differential games one seeks to obtain positive definite solutions of AREs.
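Using the a_nli sketch given after Table 2, this example can be reproduced along the following lines; the array-valued lambdas and the call are illustrative, with f and F as discussed above.

import numpy as np

a, b, q = -0.1, 0.2, 10.0
f = lambda x: np.array([2 * a * x[0] - b**2 * x[0]**2 + q])   # scalar ARE (16)
F = lambda u: np.array([[2 * a - b**2 * u[0]]])               # so that f(x) = F(x) x + q, cf. Remark 7

x_star = a_nli(f, F, n=1, lam=10.0, T=0.2, eps=1e-6)
print(x_star)    # approximately 13.5078, the positive root reported above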


4.3 System of Nonlinear Equations with Two Unknowns

Consider the system
$$f(x) = \begin{bmatrix} a_1 x_1 x_2 + a_2 & \sin x_2 \\ 0 & a_3 x_2 + a_4 \end{bmatrix} x + \underbrace{d}_{f(0)} ,$$
and consider the case in which a1 = a2 = a3 = 1, a4 = 0.5 and d = [−5, −1]⊤. Algorithm A-NLI is applied to the problem with M = 20I, ε = 10⁻⁶ and T = 0.1. Let u = [u1, u2]⊤. The time histories of x1 (solid line) and x2 (dashed line) are shown in Figure 4 (top), whereas the time histories of u1 (solid line) and u2 (dashed line) are shown in Figure 4 (bottom). The time history of the error (17) is shown in Figure 5. The circular markers and dashed vertical lines in Figures 4 and 5 indicate the iteration at which Algorithm A-NLI terminates, returning the solution x∗ = [1.831, 0.7808]⊤. Note that the error response is not monotonic.

Fig. 4. Top: time histories of x1 (solid line) and x2 (dashed line). Bottom: time histories of u1 (solid line) and u2 (dashed line).

Fig. 5. Time history of the error e.
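A corresponding sketch for this example, again assuming the a_nli routine given after Table 2 and the reconstruction of the system displayed above:

import numpy as np

a1, a2, a3, a4 = 1.0, 1.0, 1.0, 0.5
d = np.array([-5.0, -1.0])

def F(x):
    # F(x) as in the display above, so that f(x) = F(x) x + d with d = f(0)
    return np.array([[a1 * x[0] * x[1] + a2, np.sin(x[1])],
                     [0.0,                   a3 * x[1] + a4]])

def f(x):
    return F(x) @ x + d

x_star = a_nli(f, F, n=2, lam=20.0, T=0.1, eps=1e-6)
print(x_star)    # approximately [1.831, 0.7808], as reported above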

5. CONCLUSION

In this paper an iterative numerical algorithm for solving systems of linear equations is proposed. The algorithm, which takes its inspiration from control theory, is then generalised to obtain solutions for systems of nonlinear equations.


The developed theory is demonstrated on a series of numerical examples. Directions for future work include improving the performance of the proposed algorithms by using methods different from the Euler method for discretising the dynamical systems which form the foundations of the algorithms, following the observations made in Remarks 4 and 5 and in Section 4.1. It is also of interest to perform a more thorough analysis of complexity and convergence of the proposed algorithms, including a comprehensive comparison with other numerical methods. Underdetermined and overdetermined systems of linear equations will also be considered. Extensions of the proposed idea to matrix equations and, in particular, coupled AREs, such as those encountered in Mylvaganam and Astolfi [2016b,a], will be investigated. Such systems of AREs typically arise in linear quadratic nonzero-sum differential games. As mentioned in Section 4.2, in optimal control and differential games, one typically seeks positive definite (and symmetric) solutions of the AREs. Therefore, systematic ways of selecting particular roots of the coupled AREs (if they exist) will be sought.

REFERENCES

A. Bhaya and E. Kaszkurewicz. A control-theoretic approach to the design of zero finding numerical methods. IEEE Transactions on Automatic Control, 52(6):1014–1026, 2007.
S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, 2004.
J. E. Dennis Jr. and R. B. Schnabel. Numerical methods for unconstrained optimization and nonlinear equations, volume 16. Society for Industrial and Applied Mathematics, 1996.
P. Deuflhard. Newton methods for nonlinear problems: affine invariance and adaptive algorithms, volume 35. Springer Science & Business Media, 2011.
R. C. Dorf and R. H. Bishop. Modern Control Systems. Pearson Prentice Hall, 12th edition, 2011. ISBN 9780136024583.
J. C. Engwerda. LQ dynamic optimization and differential games. Chichester: John Wiley & Sons, 2005.
D. G. Feingold and R. S. Varga. Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem. Pacific Journal of Mathematics, 12(4):1241–1250, 1962.
T. Han and Y. Han. Solving large scale nonlinear equations by a new ODE numerical integration method. In Applied Mathematics, volume 1, pages 222–229, 2010.

R. D. Harbor. A control theory perspective of iterative numerical methods. In IEEE Proceedings of the Twenty-First Southeastern Symposium on System Theory, pages 71–73, 1989.
A. Hasan, E. C. Kerrigan, and G. A. Constantinides. Control-theoretic forward error analysis of iterative numerical algorithms. IEEE Transactions on Automatic Control, 58(6):1524–1529, 2013.
S. Ilie, G. Söderlind, and R. M. Corless. Adaptivity and computational complexity in the numerical solution of ODEs. Journal of Complexity, 24(3):341–361, 2008.
A. Isidori. Nonlinear Control Systems. Springer, 1995.
K. Kashima and Y. Yamamoto. System theory for numerical analysis. Automatica, 43(7):1156–1164, 2007.
D. Keenan. Further remarks on the global Newton method. Journal of Mathematical Economics, 8(2):159–165, 1981.
H. K. Khalil. Nonlinear Systems. Prentice-Hall, New Jersey, 1996.
D. S. Laila and A. Astolfi. Construction of discrete-time models for port-controlled Hamiltonian systems with applications. Systems & Control Letters, 55(8):673–680, 2006.
T. Mylvaganam and A. Astolfi. Dynamic algorithms for solving coupled algebraic Riccati equations arising in mixed H2/H∞ control for scalar linear systems. Submitted to the 55th Annual Conference on Decision and Control, 2016a.
T. Mylvaganam and A. Astolfi. A Nash game approach to mixed H2/H∞ control for input-affine nonlinear systems. In Proceedings of the 10th IFAC Symposium on Nonlinear Control Systems, 2016b.
T. Mylvaganam, M. Sassano, and A. Astolfi. Constructive ε-Nash equilibria for nonzero-sum differential games. IEEE Transactions on Automatic Control, 60(4):950–965, 2015.
J. M. Ortega. Stability of difference equations and convergence of iterative processes. SIAM Journal on Numerical Analysis, 10(2):268–282, 1973.
J. M. Ortega. Numerical analysis: a second course, volume 3. SIAM, 1990.
J. M. Ortega and W. C. Rheinboldt. Iterative solution of nonlinear equations in several variables, volume 30. Society for Industrial and Applied Mathematics, 1970.
Y. Saad and H. A. Van Der Vorst. Iterative solution of linear systems in the 20th century. Journal of Computational and Applied Mathematics, 123(1):1–33, 2000.
S. Smale. A convergent process of price adjustment and global Newton methods. Journal of Mathematical Economics, 3(2):107–120, 1976.
A. Stuart and A. R. Humphries. Dynamical systems and numerical analysis, volume 2. Cambridge University Press, 1998.
H. A. Van Der Vorst. Krylov subspace iteration. Computing in Science and Engineering, 2(1):32–37, 2000.
