Carleman State Feedback Control Design of a Class of Nonlinear Control Systems

7th IFAC Workshop on Distributed Estimation and Control in Networked Systems
Chicago, IL, USA, September 16-17, 2019

IFAC PapersOnLine 52-20 (2019) 229–234
Available online at www.sciencedirect.com (ScienceDirect)

Arash Amini *, Qiyu Sun **, Nader Motee *

* Mechanical Engineering and Mechanics, Lehigh University, Bethlehem, PA, USA, 18015 (e-mails: {ara416, motee}@lehigh.edu)
** Department of Mathematics, University of Central Florida, Orlando, FL 32816, USA (e-mail: [email protected])

Abstract: We consider optimal feedback control design for nonlinear control systems with polynomial right-hand sides. The control objective is to minimize a quadratic cost functional over all state feedback control laws with polynomial structure, subject to the dynamics of the system. First, we utilize ideas from Carleman linearization to lift a given finite-dimensional nonlinear system into an infinite-dimensional linear system. Finite-order truncations of the resulting infinite-dimensional linear system are investigated, and connections between (local) stability properties of the original nonlinear system and its finite-order truncations are established. We show that the optimal feedback control design can be approximated and cast as an optimization problem with bilinear matrix equation constraints. Through several simulations, we show that this approximate method can be efficiently implemented using the Alternating Direction Method of Multipliers (ADMM).

Copyright © 2019. The Authors. Published by Elsevier Ltd. All rights reserved.

Keywords: Nonlinear control systems, Carleman linearization, optimal feedback control, stability, ADMM.

1. INTRODUCTION

Developing efficient tools for stability analysis and optimal feedback control design of nonlinear systems has been an active research area for several decades now (Khalil and Grizzle [2002]), (Sastry [2013]). In 1932, (Carleman [1932]) formulated an embedding technique to linearize a class of nonlinear systems of finite dimension into a linear system of infinite dimension. The idea was explored in more detail by other researchers (Brockett [1976]) and (Krener [1974]). One of the early works on the stability of nonlinear systems using Carleman linearization was presented by (Loparo and Blankenship [1978]), where the authors utilized nonlinear feedback control to estimate the domain of attraction using Carleman linearization. There have been several follow-up works on the stability of nonlinear systems; for example, (Banks [1992]) uses Carleman embedding on the Lyapunov equation, (Hernandez and Banks [2004]) investigates properties of infinite-dimensional matrices for stability purposes, and (Mozyrska and Bartosiewicz [2006]) explores the controllability of special cases of infinite-dimensional nonlinear dynamical systems. In (Rauh et al. [2009]), the authors create a procedure for control design and state estimation based on Carleman linearization. More recent work by (Armaou and Ataei [2014]) applies ideas from Carleman embedding to design piecewise constant feedback control laws for a class of nonlinear control systems.

In this paper, we build upon ideas from Carleman linearization and propose an approximate method for optimal feedback control design for nonlinear systems with polynomial right-hand sides. It is shown that local stability properties of a nonlinear system can be related to the stability of the corresponding Carleman linearization and its finite truncations. Using this stability result, we consider an optimal control problem where the control objective is to minimize a given quadratic cost functional over all feedback control laws that have polynomial forms with finite orders. It is shown that efficient approximations can be obtained for this class of optimal control problems. The resulting truncated optimal control problem, then, can be cast as an optimization problem whose constraint is a bilinear matrix equation that can be solved efficiently using Alternating Direction Method of Multipliers (ADMM) methods. The usefulness of our proposed method is shown via several examples and simulations.

2. MATHEMATICAL PRELIMINARIES

In this section we explain important notation, definitions and key lemmas used in this paper. For any matrices A ∈ R^{n×m}, B ∈ R^{p×q}, the Kronecker product is defined as

    A ⊗ B = [ a_11 B   a_12 B   ...   a_1m B
              ...       ...            ...
              a_n1 B   a_n2 B   ...   a_nm B ].

The Kronecker power of a vector x ∈ R^n is defined by

    x^[i] = x ⊗ ··· ⊗ x   (i operands),

with x^[i] ∈ R^{n^i}.

Lemma 1. (Lancaster and Farahat [1972]) If A ∈ R^{n×n}, B ∈ R^{m×m}, then the following equality holds for all l_p norms:
The Published by Elsevier reserved.

Copyright © 2019 IFAC 229 Copyright © under 2019 IFAC 229 Control. Peer review responsibility of International Federation of Automatic Copyright © 2019 IFAC 229 10.1016/j.ifacol.2019.12.163 Copyright © 2019 IFAC 229

2019 IFAC NecSys 230 Chicago, IL, USA, September 16-17, 2019 Arash Amini et al. / IFAC PapersOnLine 52-20 (2019) 229–234

    ‖A ⊗ B‖_p = ‖A‖_p ‖B‖_p.

Lemma 2. If A ∈ R^{n×n} has eigenvalues λ_i, i ∈ {1, ..., n}, and B ∈ R^{m×m} has eigenvalues µ_j, j ∈ {1, ..., m}, then the eigenvalues of A ⊗ B are λ_i µ_j for (i, j) ∈ {1, ..., n} × {1, ..., m}.

Lemma 3. (Horn and Johnson [2012]) Let A ∈ R^{n×n} and B ∈ R^{m×m} be given. If λ is an eigenvalue of A with corresponding eigenvector x ∈ C^n, and µ is an eigenvalue of B with corresponding eigenvector y ∈ C^m, then µ + λ is an eigenvalue of the Kronecker sum (I_m ⊗ A) + (B ⊗ I_n), where I_n denotes the n × n identity matrix, and y ⊗ x ∈ C^{nm} is a corresponding eigenvector. In other words, if σ(A) = {λ_1, ..., λ_n} and σ(B) = {µ_1, ..., µ_m}, then σ((I_m ⊗ A) + (B ⊗ I_n)) = {λ_i + µ_j : i = 1, ..., n, j = 1, ..., m}.
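These Kronecker identities are easy to check numerically. The following sketch (not part of the original paper; it assumes only NumPy) builds Kronecker powers and verifies the spectra stated in Lemmas 2 and 3 on random matrices.

# Illustrative numerical check of Lemmas 2 and 3 (assumes only NumPy).
import numpy as np

def kron_power(x, i):
    """Kronecker power x^[i] = x (kron) ... (kron) x with i operands."""
    out = x
    for _ in range(i - 1):
        out = np.kron(out, x)
    return out

def same_spectrum(e1, e2, tol=1e-8):
    """Check that two eigenvalue multisets coincide up to a tolerance."""
    return len(e1) == len(e2) and all(np.min(np.abs(e2 - v)) < tol for v in e1)

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
eigA, eigB = np.linalg.eigvals(A), np.linalg.eigvals(B)

# Lemma 2: eigenvalues of A (x) B are the pairwise products lambda_i * mu_j.
assert same_spectrum(np.linalg.eigvals(np.kron(A, B)),
                     np.multiply.outer(eigA, eigB).ravel())

# Lemma 3: eigenvalues of (I_m (x) A) + (B (x) I_n) are the pairwise sums lambda_i + mu_j.
ksum = np.kron(np.eye(m), A) + np.kron(B, np.eye(n))
assert same_spectrum(np.linalg.eigvals(ksum),
                     np.add.outer(eigA, eigB).ravel())

x = rng.standard_normal(n)
print(kron_power(x, 3).shape)  # (27,) = (n**3,)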

3. PROBLEM STATEMENT

Let us consider the following class of nonlinear control systems

    ẋ = f(x) + g(x)u,                                              (1)

where x ∈ R^n is the vector of state variables and u is the scalar control input. It is assumed that the functions f, g : R^n → R^n are finite-degree polynomials of x, i.e., they can be represented using Kronecker powers of x as

    f(x) = Σ_{i=1}^{Nf} f_i x^[i]   and   g(x) = Σ_{i=0}^{Ng} g_i x^[i],

with the convention x^[0] = 1, in which f_i and g_i are constant matrices with f_i, g_i ∈ R^{n×n^i}. The control objective is to find the optimal control law

    u = Σ_{i=1}^{Nu} k_i^T x^[i]                                    (2)

that minimizes the following quadratic cost functional

    J(x, u) = ∫_0^∞ ( x^T(t) Q x(t) + r u^2(t) ) dt,                (3)

where Q is a positive semidefinite matrix and r > 0, subject to the dynamics of the system (1). In the following, it is shown that an efficient approximate solution using the Carleman lifting operator can be obtained for this nonlinear optimal control problem.

Remark 4. In order to improve traceability of our derivations, we focus only on nonlinear control systems (1) with scalar inputs. Our results can be extended to systems with multiple inputs, but with more involved notations.

4. CARLEMAN LIFTING OPERATOR AND LINEARIZATION

Let us consider the following nonlinear system

    ẋ = F(x) = Σ_{i=1}^{M} F_i x^[i]                                (4)

with initial condition x(0) = x_0. The Carleman lifting operator is an infinite-dimensional nonlinear map whose elements are defined using monomials of x_1, ..., x_n and can be represented as

    Ψ(x) := colm( x^[1], x^[2], x^[3], ... ).

It is shown by (Forets and Pouly [2017]) that the representation of nonlinear system (4) with respect to the Carleman lifting operator is the following infinite-dimensional linear system

    Ψ̇ = A Ψ                                                        (5)

with initial condition Ψ(0) = Ψ(x_0), where

    A = [ A^1_1   A^1_2   A^1_3   ...   A^1_M     0           0           ...
          0       A^2_2   A^2_3   ...   A^2_M     A^2_{M+1}   0           ...
          0       0       A^3_3   ...   A^3_M     A^3_{M+1}   A^3_{M+2}   ...
          ...     ...     ...                                              ]

and each block A^i_{i+j-1} is defined by

    A^i_{i+j-1} = Σ_{v=1}^{i} I_{n×n} ⊗ ··· ⊗ F_j ⊗ ··· ⊗ I_{n×n},   (6)

where F_j appears in the v-th position of the i-fold Kronecker product. This procedure is known as Carleman linearization of a nonlinear system. The dynamical behavior of the original nonlinear system (4) with initial condition x_0 and its lifted linear system (5) with initial condition Ψ(x_0) are identical. The resulting linear operator A is usually unbounded. Thus, for analysis and computational purposes, we consider finite truncations of the infinite-dimensional system (5) by forming finite sections of the operator A. For a given truncation length N ≥ 1, the corresponding truncated system is given by

    Ψ̇_N = A_N Ψ_N                                                   (7)

with Ψ_N(x) = colm( x^[1], ..., x^[N] ), initial condition Ψ_N(0) = Ψ_N(x_0), and state matrix

    A_N = [ A^1_1   A^1_2   A^1_3   ...
            0       A^2_2   A^2_3   ...
            ...             ...     A^{N-1}_N
            0       ...     0       A^N_N     ].

Remark 5. In (Forets and Pouly [2017]), it is proven that, under some reasonable assumptions, the first n elements of the state vector Ψ_N stay close to the trajectory of the original nonlinear system (4) over a characterizable time interval.
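To make the construction in (6)-(7) concrete, the sketch below (an illustrative implementation, not code from the paper; it assumes only NumPy and example data chosen here) assembles the truncated matrix A_N from given coefficient matrices F_1, ..., F_M and checks that its spectrum is governed by F_1, as used in the stability analysis of the next section.

# Illustrative sketch: assemble the truncated Carleman matrix A_N of (6)-(7)
# from coefficients F_1, ..., F_M (F_j has shape (n, n**j)). Assumes NumPy only.
import numpy as np

def carleman_block(F, j, i, n):
    """Block A^i_{i+j-1}: sum over v of I x ... x F_j x ... x I (F_j in slot v)."""
    block = np.zeros((n**i, n**(i + j - 1)))
    for v in range(1, i + 1):
        term = np.eye(n**(v - 1))                  # identity on the first v-1 slots
        term = np.kron(term, F)                    # F_j in the v-th slot
        term = np.kron(term, np.eye(n**(i - v)))   # identity on the remaining slots
        block += term
    return block

def carleman_matrix(F_list, N):
    """Truncated Carleman matrix A_N for F_list = [F_1, ..., F_M]."""
    n = F_list[0].shape[0]
    dims = [n**i for i in range(1, N + 1)]
    offsets = np.concatenate(([0], np.cumsum(dims)))
    A_N = np.zeros((offsets[-1], offsets[-1]))
    for i in range(1, N + 1):                      # block row i (dynamics of x^[i])
        for j, Fj in enumerate(F_list, start=1):
            col = i + j - 1                        # block column index
            if col > N:
                continue                           # truncated away
            A_N[offsets[i-1]:offsets[i], offsets[col-1]:offsets[col]] = \
                carleman_block(Fj, j, i, n)
    return A_N

# Example: a 2-state system xdot = F1 x + F2 x^[2] with a Hurwitz linear part.
n, N = 2, 3
F1 = np.array([[0.0, 1.0], [-2.0, -1.0]])                      # eigenvalues in the open left half-plane
F2 = np.array([[0.0, 0.0, 0.0, 0.0], [0.5, 1.0, 0.0, 0.0]])    # coefficients of x^[2]
A_N = carleman_matrix([F1, F2], N)
print(A_N.shape)                             # (2 + 4 + 8, 2 + 4 + 8) = (14, 14)
print(np.max(np.linalg.eigvals(A_N).real))   # negative: A_N inherits stability from F1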

5. STABILITY ANALYSIS

By substituting (2) into (1), the closed-loop system takes the following form

    ẋ = Σ_{i=1}^{Nf} f_i x^[i] + ( Σ_{i=0}^{Ng} g_i x^[i] ) ( Σ_{i=1}^{Nu} k_i^T x^[i] ) = Σ_{i=1}^{M} F_i x^[i],    (8)

where M = max{Nf, Ng + Nu} and the coefficients F_k are matrices with appropriate dimensions that can be determined explicitly in terms of the matrices f_i, g_i, k_i. It is straightforward to verify that each F_i is an affine function of the feedback gains k_1, ..., k_{Nu}. In this section, we establish a connection between stability of the original nonlinear system (8) and its corresponding lifted linear system.


Theorem 6. For a given integer N ≥ 1, let us represent the truncation of the corresponding Carleman linearization of (8) by

    Ψ̇_N = A_N Ψ_N.                                                  (9)

Then, (8) is locally stable [1] if and only if (9) is stable.

[1] A dynamical system is locally stable if there exists a pair (δ, t_0) such that if ‖x(t_0)‖ ≤ δ then the equilibrium point x = 0 is asymptotically stable, i.e., there exist constants C, µ such that ‖x(t)‖ ≤ C ‖x(t_0)‖ e^{−µ(t−t_0)} for all t ≥ t_0.

Proof. According to our assumption, all F_i's for i = 1, ..., M are constant. As a result, (4) can be written in the following form:

    ẋ = Σ_{i=1}^{M} F_i x^[i] = F_1 x + h(x),

where

    h(x) = Σ_{i=2}^{M} F_i x^[i].

By applying the triangle inequality, we get

    ‖h(x)‖ = ‖ Σ_{i=2}^{M} F_i x^[i] ‖ ≤ Σ_{i=2}^{M} ‖F_i x^[i]‖,    (10)

and according to Lemma 1, for all i ≥ 2 we have

    ‖F_i x^[i]‖ ≤ ‖F_i‖ ‖x^[i]‖ = ‖F_i‖ ‖x ⊗ ··· ⊗ x‖ = ‖F_i‖ ‖x‖^i.   (11)

Therefore, it follows that

    lim_{‖x‖→0} ‖h(x)‖ / ‖x‖ ≤ lim_{‖x‖→0} ( Σ_{i=2}^{M} ‖F_i‖ ‖x‖^i ) / ‖x‖ = 0,   (12)

which implies

    lim_{x→0} ‖h(x)‖ / ‖x‖ = 0.                                      (13)

If (13) holds and the matrix F_1 is real and all its eigenvalues have negative real parts, then according to ([Verhulst 2006, Theorem 7.1 (Poincaré–Lyapunov)]) there exist δ, t_0 with x(t_0) = x_0 such that if ‖x_0‖ ≤ δ then the solution x = 0 is asymptotically stable and the attraction is exponential in a δ-neighborhood of x = 0. This reveals that the local stability of the nonlinear system depends on the eigenvalues of the matrix F_1. On the other hand, we have the following relationship for the truncated system of order N:

    det(A_N − λ I_q) = det [ A^1_1 − λI_11   A^1_2            ...   A^1_N
                             0               A^2_2 − λI_22    ...   A^2_N
                             ...                              ...   ...
                             0               ...              0     A^N_N − λI_NN ]
                     = Π_{i=1}^{N} det(A^i_i − λ I_ii),

where q = (n^{N+1} − n)/(n − 1) and the new notation I_ii = I_{n^i} is introduced for simplicity. This equality means that

    λ(A_N) = ∪_{i=1}^{N} λ(A^i_i).                                   (14)

We recall that

    A^i_i = Σ_{v=1}^{i} I_{n×n} ⊗ ··· ⊗ F_1 ⊗ ··· ⊗ I_{n×n},

and according to Lemma 2, the eigenvalues of each term I_{n×n} ⊗ ··· ⊗ F_1 ⊗ ··· ⊗ I_{n×n} are the same as those of F_1. If σ(F_1) = {λ_1, ..., λ_n}, by applying Lemma 3 repeatedly to each matrix A^i_i, we will have that σ(A^i_i) = { λ_{k_1} + ··· + λ_{k_i} : k_1, ..., k_i = 1, ..., n }. If all eigenvalues of F_1 have negative real parts, then all eigenvalues of A^i_i, which are sums of eigenvalues of F_1, will have negative real parts. From (14), it will follow that all eigenvalues of A_N have negative real parts. Thus, we can conclude that the truncated linear system (9) is exponentially stable if (8) is locally stable. Now, suppose that (9) is exponentially stable. Then, the matrix A_N is Hurwitz, which means that all eigenvalues of the A^i_i's have negative real parts, including those of A^1_1 = F_1. According to ([Verhulst 2006, Theorem 7.1]), the nonlinear system (8) will then be locally stable.
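As a quick numerical illustration of this local stability result (a sketch with example data assumed here, not taken from the paper), one can simulate a polynomial system whose linear part F_1 is Hurwitz and observe that trajectories starting near the origin decay, while F_1 itself, and hence by (14) every truncation A_N, is Hurwitz.

# Illustrative check of Theorem 6 on a small example (assumes NumPy and SciPy).
import numpy as np
from scipy.integrate import solve_ivp

# Closed-loop polynomial system xdot = F1 x + F2 x^[2] with Hurwitz F1.
F1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
F2 = np.array([[0.0, 0.0, 0.0, 0.0], [0.5, 1.0, 0.0, 0.0]])

def rhs(t, x):
    return F1 @ x + F2 @ np.kron(x, x)

# A trajectory starting close to the origin decays (local stability) ...
small = solve_ivp(rhs, (0.0, 20.0), [0.1, -0.1], rtol=1e-8)
print(np.linalg.norm(small.y[:, -1]))      # close to zero: converges to the equilibrium

# ... while F1 (and therefore every truncation A_N, by (14)) is Hurwitz.
print(np.max(np.linalg.eigvals(F1).real))  # negative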

6. STATE FEEDBACK CONTROL DESIGN

Using Carleman linearization, we can equivalently cast our finite-dimensional nonlinear optimal control problem

    minimize_{x,u}   J(x, u) = ∫_0^∞ ( x^T(t) Q x(t) + r u^2(t) ) dt
    subject to:      ẋ = f(x) + g(x)u,
                     u = Σ_{i=1}^{Nu} k_i^T x^[i]

as the following infinite-dimensional linear optimal control problem

    minimize_{Ψ,K}   ∫_0^∞ Ψ^T ( Q + r K K^T ) Ψ dt                  (15)
    subject to:      Ψ̇ = A Ψ.

In this reformulation, Q is a linear operator whose (1, 1) block is the weight matrix Q and the rest of its block elements are zero, and K = [ k_1^T, ..., k_{Nu}^T, 0, 0, ... ]^T. The key observation is that A is an affine function of the feedback gains k_1, ..., k_{Nu}; see Lemma 7. In order to make our design problem tractable in finite dimensions, we propose an approximate method by truncating the optimal control problem (15) and calculating a near-optimal state feedback control using the truncated problem. Using the local stability result of the previous section, our goal is to find feedback control gains in (2) that stabilize the corresponding truncated linear system, while minimizing the cost functional (3). First, we represent the state feedback control law (2) and the integrand in the cost functional (3) in terms of the state vector of the truncated system, i.e.,

    u = Σ_{i=1}^{Nu} k_i^T x^[i] = K_N^T Ψ_N,
    x^T Q x + r u^2 = Ψ_N^T Q_N Ψ_N + r Ψ_N^T K_N K_N^T Ψ_N = Ψ_N^T ( Q_N + r K_N K_N^T ) Ψ_N


for a positive semidefinite matrix Q_N whose (1, 1) block element is the weight matrix Q and the rest of its blocks are zero. The truncated optimal control problem is given by

    minimize_{Ψ_N, K_N}   ∫_0^∞ Ψ_N^T ( Q_N + r K_N K_N^T ) Ψ_N dt    (16)
    subject to:           Ψ̇_N = A_N(K_N) Ψ_N.

Since this problem involves minimizing a quadratic cost functional subject to linear dynamics, similar to the standard Linear Quadratic Regulator (LQR), it is straightforward to calculate the value function as

    J* = Ψ_N^T(0) P Ψ_N(0),                                           (17)

where P is the unique positive definite solution of the following Lyapunov equation

    A_N(K_N)^T P + P A_N(K_N) + Q_N + r K_N K_N^T = 0.

In order to eliminate the effect of specific initial conditions in our design procedure, we assume that the components of the initial state x(0) = x_0 = ( x_1(0), ..., x_n(0) ) are drawn from a Gaussian distribution with mean 0 and variance σ^2. As a result, the covariance matrix of the initial state of the truncated system is given by

    E[ Ψ_N(0) Ψ_N(0)^T ] = [ E[ x_0^[1] (x_0^[1])^T ]   ...   E[ x_0^[1] (x_0^[N])^T ]
                             ...                        ...   ...
                             E[ x_0^[N] (x_0^[1])^T ]   ...   E[ x_0^[N] (x_0^[N])^T ] ].

If we further assume that each variable in the vector of initial conditions is independent of the other variables, then we will get

    E[ X_1^{p_1} X_2^{p_2} ··· X_n^{p_n} ] = E(X_1^{p_1}) E(X_2^{p_2}) ··· E(X_n^{p_n}).

For a random variable z ∼ N(0, σ^2), one has

    E(z^p) = 0                    if p is odd,
    E(z^p) = σ^p (p − 1)!!        if p is even,

where p!! = Π_{k=0}^{⌈p/2⌉−1} (p − 2k).
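These closed-form moments make Σ = E[Ψ_N(0) Ψ_N(0)^T] computable entry by entry: every entry of Ψ_N is a monomial in x_1(0), ..., x_n(0), so each entry of Σ is a product of univariate Gaussian moments. The sketch below (an illustration with assumed small dimensions, not code from the paper) builds Σ this way using only NumPy.

# Illustrative computation of Sigma = E[Psi_N(0) Psi_N(0)^T] for independent
# N(0, sigma^2) initial states, using the moment formulas above.
import itertools
import numpy as np

def gaussian_moment(p, sigma):
    """E[z^p] for z ~ N(0, sigma^2): 0 if p is odd, sigma^p (p-1)!! if p is even."""
    if p % 2 == 1:
        return 0.0
    double_fact = 1
    for k in range(p - 1, 0, -2):      # (p-1)!! = (p-1)(p-3)...1
        double_fact *= k
    return sigma**p * double_fact

def monomial_exponents(n, N):
    """Exponent vector of each entry of Psi_N = colm(x^[1], ..., x^[N])."""
    exps = []
    for i in range(1, N + 1):
        for idx in itertools.product(range(n), repeat=i):   # entries of x^[i]
            e = np.zeros(n, dtype=int)
            for l in idx:
                e[l] += 1
            exps.append(e)
    return np.array(exps)

def initial_covariance(n, N, sigma):
    exps = monomial_exponents(n, N)
    q = exps.shape[0]                    # q = n + n^2 + ... + n^N
    Sigma = np.empty((q, q))
    for a in range(q):
        for b in range(q):
            p_vec = exps[a] + exps[b]    # exponents of the product monomial
            Sigma[a, b] = np.prod([gaussian_moment(p, sigma) for p in p_vec])
    return Sigma

Sigma = initial_covariance(n=2, N=2, sigma=1.0)
print(Sigma.shape)      # (6, 6), since q = 2 + 4
print(np.diag(Sigma))   # e.g. E[x1^2] = 1, E[x1^4] = 3, E[x1^2 x2^2] = 1, ...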

Putting all these pieces together, the approximate optimal feedback control design problem is equivalent to the following optimization problem

    minimize_{P, K_N}   Tr(P Σ)                                       (18)
    subject to:         A_N^T P + P A_N + Q_N + r K_N K_N^T = 0,

in which Σ = E[ Ψ_N(0) Ψ_N(0)^T ] can be computed explicitly.

We emphasize that optimization problem (18) is nonconvex and can potentially be challenging to solve. We employ a version of the Alternating Direction Method of Multipliers (ADMM) (Boyd et al. [2011]) to solve our optimization problem: (i) fix K_N and solve for P; (ii) then fix P and solve for K_N; (iii) repeat this process until a pre-specified error bound is achieved. In steps (i) and (ii), our nonconvex optimization problem boils down to a convex optimization problem with linear constraints, which can be solved with existing standard optimization tools; a simplified numerical sketch of this design procedure is given at the end of this section. In Section 7, we show the effectiveness of our approach via several examples. In order to complete our analysis in this section, we still need to show that A_N is an affine function of the vector of state feedback gains K_N.


Lemma 7. The state matrix A_N(K_N) is affine with respect to K_N.

Proof. In order to show that A_N(K_N) is affine with respect to K_N, we only need to show that each block matrix A^i_{i+j-1} is affine with respect to K_N. From (6), it follows that

    A^i_{i+j-1} = Σ_{v=1}^{i} I_{n×n} ⊗ ··· ⊗ F_j ⊗ ··· ⊗ I_{n×n}.

If F_j is affine w.r.t. K_N, then F_j ⊗ I_{n×n} and I_{n×n} ⊗ F_j will be affine w.r.t. K_N. Therefore, we only need to show that every F_j is affine w.r.t. K_N. From (8), we recall that

    ẋ = Σ_{i=1}^{Nf} f_i x^[i] + ( Σ_{i=0}^{Ng} g_i x^[i] ) ( Σ_{i=1}^{Nu} k_i^T x^[i] ) = Σ_{i=1}^{M} F_i x^[i].

More careful examination reveals that

    ( Σ_{i=0}^{Ng} g_i x^[i] ) ( Σ_{i=1}^{Nu} k_i^T x^[i] ) = Σ_{p=1}^{Ng+Nu} Σ_{i=1}^{p} ( k_i^T x^[i] ) g_{p−i} x^[p−i].

Let us denote j = p − i; we have

    ( k_i^T x^[i] ) g_j x^[j] = ( (x^[i])^T k_i ) g_j x^[j].

By representing the h-th row of g_j by γ_{jh}^T, it follows that

    [ ( k_i^T x^[i] ) g_j x^[j] ]_h = (x^[i])^T k_i γ_{jh}^T x^[j]                    (19)
                                    = ( (x^[j])^T ⊗ (x^[i])^T ) vec( k_i γ_{jh}^T )   (20)
                                    = (x^[p])^T vec( k_i γ_{jh}^T )                   (21)
                                    = (x^[p])^T ( γ_{jh} ⊗ k_i ).                     (22)

Here we employ the fact that vec(a b^T) = b ⊗ a for a ∈ R^n and b ∈ R^m. Taking the transpose of a scalar does not change its value; therefore,

    (x^[p])^T ( γ_{jh} ⊗ k_i ) = ( ( γ_{jh} ⊗ k_i )^T x^[p] )^T = ( γ_{jh}^T ⊗ k_i^T ) x^[p].

Stacking these rows, we finally have

    ( k_i^T x^[i] ) g_j x^[j] = ( g_j ⊗ k_i^T ) x^[p].

This results in

    F_p = f_p + Σ_{i=1}^{p} g_{p−i} ⊗ k_i^T,                                          (23)

which shows that F_p is affine with respect to the feedback gains k_i. Thus, the state matrix A_N is affine w.r.t. the feedback gains.

Remark 8. To ensure stability of the truncated system by Theorem 6, the matrix F_1 must be Hurwitz. From (23), one gets F_1 = f_1 + g_0 ⊗ k_1^T = f_1 + g_0 k_1^T. As a consequence, F_1 can be made Hurwitz if the pair (f_1, g_0) is controllable.

Remark 9. Since the matrices K_N and Q_N have sparse structures, one may argue that it is not necessary to consider truncation orders greater than Nu when solving the optimization problem (18). Our extensive simulations assert that ignoring truncation orders greater than Nu will affect the precision of the resulting solution.
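The following sketch puts the pieces of Sections 4-6 together for a toy problem. It is illustrative only: it builds A_N(K_N) from (23) and (6)-(7), evaluates the cost Tr(P Σ) of (18) through a Lyapunov solve, and then, instead of the ADMM iteration described above, simply minimizes that cost over the feedback gains with a general-purpose derivative-free optimizer (Nelder-Mead). All dimensions, system data, and the identity covariance Σ are assumptions made here for illustration; only standard NumPy/SciPy calls are used.

# Illustrative end-to-end sketch for problem (18) on a toy system (not the
# paper's ADMM algorithm: the K-step is replaced by a derivative-free search).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

n, N, Nu, r = 2, 2, 1, 1.0

# Toy open-loop data: xdot = f1 x + f2 x^[2] + (g0 + g1 x^[1]) u.
f1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
f2 = np.array([[0.0, 0.0, 0.0, 0.0], [0.5, 1.0, 0.0, 0.0]])
g0 = np.array([[0.0], [1.0]])                # n x 1  (coefficient of x^[0] = 1)
g1 = np.array([[0.0, 0.0], [1.0, 0.0]])      # n x n  (so the input gain in row 2 is 1 + x1)
f_list, g_list = [f1, f2], [g0, g1]
Q = np.diag([1.0, 1.0])

def closed_loop_F(k_list):
    """Closed-loop coefficients F_p = f_p + sum_{i<=p} g_{p-i} (kron) k_i^T, eq. (23)."""
    M = max(len(f_list), len(g_list) - 1 + len(k_list))
    F = [np.zeros((n, n**p)) for p in range(1, M + 1)]
    for p in range(1, M + 1):
        if p <= len(f_list):
            F[p - 1] += f_list[p - 1]
        for i, k in enumerate(k_list, start=1):
            j = p - i
            if 0 <= j < len(g_list):
                F[p - 1] += np.kron(g_list[j], k.reshape(1, -1))
    return F

def carleman_matrix(F_list):
    """Truncated Carleman matrix A_N built from the blocks (6), as in (7)."""
    dims = [n**i for i in range(1, N + 1)]
    off = np.concatenate(([0], np.cumsum(dims)))
    A = np.zeros((off[-1], off[-1]))
    for i in range(1, N + 1):
        for j, Fj in enumerate(F_list, start=1):
            col = i + j - 1
            if col > N:
                continue
            blk = np.zeros((n**i, n**col))
            for v in range(1, i + 1):
                blk += np.kron(np.kron(np.eye(n**(v - 1)), Fj), np.eye(n**(i - v)))
            A[off[i-1]:off[i], off[col-1]:off[col]] = blk
    return A

q = sum(n**i for i in range(1, N + 1))
QN = np.zeros((q, q)); QN[:n, :n] = Q    # (1,1) block is Q, the rest is zero
Sigma = np.eye(q)                        # assumed initial-state covariance, for illustration

def cost(k_flat):
    """Tr(P Sigma) with P from the Lyapunov equation in (18); penalize unstable A_N."""
    k_list = [k_flat[i*n:(i+1)*n] for i in range(Nu)]
    AN = carleman_matrix(closed_loop_F(k_list))
    if np.max(np.linalg.eigvals(AN).real) >= -1e-6:
        return 1e6                       # A_N(K_N) must be Hurwitz
    KN = np.zeros(q); KN[:n*Nu] = k_flat
    rhs = -(QN + r * np.outer(KN, KN))
    P = solve_continuous_lyapunov(AN.T, rhs)   # A_N^T P + P A_N = -(Q_N + r K_N K_N^T)
    return float(np.trace(P @ Sigma))

k0 = np.array([-1.0, -1.0])              # stabilizing initial guess (F1 Hurwitz)
res = minimize(cost, k0, method="Nelder-Mead")
print(res.x, res.fun)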


Fig. 1. Performance loss compared to optimal control for 1000 samples for example A: (a) Nu = 1, (b) Nu = 2.

Fig. 2. Comparison between signals with Carleman linearized feedback and optimal feedback for example B.

7. EXAMPLES AND SIMULATIONS

Throughout this section, we apply our method to nonlinear systems and find feedback laws using the Carleman linearization. In order to measure the performance of the feedback designed by our method, we compare the result with the solution of the Hamilton-Jacobi-Bellman (HJB) equation, which provides the optimal feedback for the problem.

7.1 Example A

For the first example we study a modified version of the Van der Pol oscillator. We changed the input multiplier so that the pair (f_1, g_0) becomes controllable, by changing g(x) = x in the classic oscillator to g(x) = 1 + x:

    ẍ − (1/2) ẋ ( 1 − (1 + x)^2 ) + x = u (1 + x).                   (24)

By taking x_1 = x and x_2 = ẋ, we can rewrite the system in the following form:

    ẋ_1 = x_2,
    ẋ_2 = −x_1 + (1/2) x_2 x_1^2 + x_1 x_2 + (1 + x_1) u.

Assuming the objective function J = ∫_0^∞ ( x_2^2 + u^2 ) dt, system (24) has an analytical solution of the HJB equation, discussed in (Nevistic and Primbs [1996]). The optimal feedback for this case is

    u* = −g(x) x_2 = −(1 + x_1) x_2.

From here we lift the system to find the Carleman linearization. For solving the optimization problem over the truncated system, we start with an initial guess of K that makes the truncated system stable (i.e., F_1(K) is Hurwitz); then we find the matrix P that solves the Lyapunov equation. From there, we update the K matrix and repeat the procedure until the algorithm converges. For the simulation, we randomly choose 1000 samples of the initial condition from a normal distribution with mean zero and variance 1. Table 1 shows the results for this example; a single-trajectory simulation sketch is given after the table. As Figure 1 shows for the distribution of performance loss, we lose performance as Nu increases. Increasing Nu makes the optimal solution of the truncated system deviate from that of the nonlinear system. Because the objective function does not contain any term that represents x_1, the optimal solution of the truncated system diverges from the real nonlinear system as the complexity of the feedback increases.

             Nu = 1     Nu = 2      Nu = 3
Example A    9.6317     29.8836     ∞
Example B    56.2       6.39        6.67

Table 1. Performance losses
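As a sanity check of the kind of comparison reported in Table 1 (an illustrative sketch only; the polynomial gains below are assumptions, not the gains computed by the paper's optimization), one can integrate the closed loop of (24) under the analytical optimal feedback u* = −(1 + x_1) x_2 and under a simple linear feedback, and compare the accumulated costs ∫ (x_2^2 + u^2) dt for a single initial condition.

# Illustrative cost comparison for Example A (assumes NumPy/SciPy; the
# linear gains k1 are placeholders, not the paper's computed gains).
import numpy as np
from scipy.integrate import solve_ivp

def simulate_cost(feedback, x0, T=30.0):
    """Integrate the closed loop of (24) together with the running cost x2^2 + u^2."""
    def rhs(t, z):
        x1, x2, _ = z
        u = feedback(x1, x2)
        dx1 = x2
        dx2 = -x1 + 0.5 * x2 * x1**2 + x1 * x2 + (1.0 + x1) * u
        return [dx1, dx2, x2**2 + u**2]       # third state accumulates the cost
    sol = solve_ivp(rhs, (0.0, T), [x0[0], x0[1], 0.0], rtol=1e-8, atol=1e-10)
    return sol.y[2, -1]

optimal = lambda x1, x2: -(1.0 + x1) * x2     # analytical HJB feedback for (24)

k1 = np.array([-0.2, -1.0])                   # assumed linear gains (Nu = 1), for illustration
linear = lambda x1, x2: k1 @ np.array([x1, x2])

x0 = np.array([0.3, -0.3])
J_opt = simulate_cost(optimal, x0)
J_lin = simulate_cost(linear, x0)
print(J_opt, J_lin, 100.0 * (J_lin - J_opt) / J_opt)   # percentage performance loss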

7.2 Example B

For this example we choose the system to have the dynamics (25):

    ẍ − x ( 1 + ẋ + u ) − ( x + u ) = 0.                             (25)

By taking x_1 = x and x_2 = ẋ, we obtain the system of equations

    ẋ_1 = x_2,
    ẋ_2 = x_1 x_2 + x_1 + (1 + x_1) u.

For this case, the HJB equation does not have an analytical solution; therefore we need to solve a boundary value problem to find the optimal feedback and compare it with the feedback found by Carleman linearization. The objective function for this system is

    J = ∫_0^∞ ( x^T x + u^2 ) dt.

Solving the HJB equation is expensive; therefore, in this example, we compare our method with the optimal solution for just one initial condition x_0 = [0, 0.5]. Figure 2 compares the response under the Carleman linearized feedback to the optimal control response from the HJB solution. The modification of the objective function keeps the optimal solutions of the truncated and nonlinear systems close to each other. One can observe in Table 1 that, by increasing the order of the polynomial feedback, the performance improves. We also calculated the value of the objective function for 1000 samples of initial conditions; this time, however, we compare the distributions of the loss functions, since it is not possible to solve 1000 HJB equations. For this example we consider x_0 ∈ X_0 ∼ N(0, 0.5); we shrunk the initial condition ball in view of Theorem 6, to ensure that all the initial conditions yield a stable response for the nonlinear system. Distributions of the error for Nu = 1, 2, 3 are shown in Figure 3.


Fig. 3. Distribution of the loss function for 1000 samples of the initial condition.

The results of these two examples show that, since there is no optimality condition forcing the truncated system to stay close to the nonlinear system over a long horizon, there is no guarantee that the feedback found by our approach is close to the optimal feedback for the problem. However, Example B shows that for some cases (depending on the definition of the objective function) the truncation gives promising results.

8. CONCLUSION

We show that optimal feedback control design for a class of nonlinear systems can be approximated and reduced to finite-dimensional (nonconvex) optimization problems with bilinear constraints. It is shown that such optimization problems can be solved efficiently using existing ADMM methods. Our future work includes obtaining explicit error bounds for our approximation method.

REFERENCES

Armaou, A. and Ataei, A. (2014). Piece-wise constant predictive feedback control of nonlinear systems. Journal of Process Control, 24(4), 326–335.
Banks, S. (1992). Infinite-dimensional Carleman linearization, the Lie series and optimal control of non-linear partial differential equations. International Journal of Systems Science, 23(5), 663–675.
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J., et al. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 1–122.
Brockett, R.W. (1976). Volterra series and geometric control theory. Automatica, 12(2), 167–176.
Carleman, T. (1932). Application de la théorie des équations intégrales singulières aux équations différentielles de la dynamique. T. Ark. Mat. Astrom. Fys. B, 22, 1.
Forets, M. and Pouly, A. (2017). Explicit error bounds for Carleman linearization. arXiv preprint arXiv:1711.02552.
Hernandez, C.N. and Banks, S. (2004). A generalization of Lyapunov's equation to nonlinear systems. IFAC Proceedings Volumes, 37(13), 745–750.
Horn, R.A. and Johnson, C.R. (2012). Matrix Analysis. Cambridge University Press.
Khalil, H.K. and Grizzle, J.W. (2002). Nonlinear Systems, volume 3. Prentice Hall, Upper Saddle River, NJ.
Krener, A.J. (1974). Linearization and bilinearization of control systems. In Proc. 1974 Allerton Conf. on Circuit and System Theory, volume 834. Monticello.

Lancaster, P. and Farahat, H. (1972). Norms on direct sums and tensor products. Mathematics of Computation, 26(118), 401–414.
Loparo, K. and Blankenship, G. (1978). Estimating the domain of attraction of nonlinear feedback systems. IEEE Transactions on Automatic Control, 23(4), 602–608.
Mozyrska, D. and Bartosiewicz, Z. (2006). Dualities for linear control differential systems with infinite matrices. Control and Cybernetics, 35, 887–904.
Nevistic, V. and Primbs, J.A. (1996). Constrained nonlinear optimal control: A converse HJB approach. Technical report.
Rauh, A., Minisini, J., and Aschemann, H. (2009). Carleman linearization for control and for state and disturbance estimation of nonlinear dynamical processes. IFAC Proceedings Volumes, 42(13), 455–460.
Sastry, S. (2013). Nonlinear Systems: Analysis, Stability, and Control, volume 10. Springer Science & Business Media.
Verhulst, F. (2006). Nonlinear Differential Equations and Dynamical Systems. Springer Science & Business Media.