7th IFAC Symposium on System Structure and Control
Sinaia, Romania, September 9-11, 2019
Available online at www.sciencedirect.com
ScienceDirect
IFAC PapersOnLine 52-17 (2019) 25–30
A Simple Method for Orthogonal Polynomial Approximation of Linear Time-Varying System Gramians

Kamen L. Perev ∗
∗ Systems and Control Department, Technical University of Sofia, 1756 Sofia, Bulgaria (e-mail: kperev@tu-sofia.bg)

Abstract: The paper considers the problem of computing and approximating the reachability and observability finite interval gramians of linear time-varying systems. Both gramians are obtained from system trajectories. The reachability gramian is derived from the adjoint state impulse response, and the observability gramian is obtained from the zero-input output response. The application of the adjoint system is discussed and the two basic impulse response characteristics, namely the regular and the adjoint state impulse responses, are presented. The relation with the linear time-invariant case is also discussed, and the role of the state transition matrix in computing the gramians is shown. An algorithm for deriving the state transition matrix is proposed, which is based on integrating the state equation with the Runge-Kutta method. The gramians are approximated in terms of Legendre orthogonal series representations of the system trajectories.

Copyright © 2019. The Authors. Published by Elsevier Ltd. All rights reserved.

Keywords: linear systems, regular state impulse response, adjoint state impulse response, zero-input output response, reachability and observability gramians.

2405-8963 Copyright © 2019 IFAC. Peer review under responsibility of International Federation of Automatic Control.
10.1016/j.ifacol.2019.11.021

1. INTRODUCTION

The reachability and observability gramians have an important energy interpretation in linear system theory. The reachability gramian contains the energy of the state impulse response and accounts for the energy distribution at the system input. The observability gramian contains the energy of the zero-input output response and accounts for the energy distribution at the system output. Both gramians are related to some important system properties, and their nonsingularity is a criterion for system complete reachability and observability. The reachability gramian participates in the expression for minimum energy least squares control, which transforms the system from one state to another. The observability gramian is used to determine the initial state vector by observing the output signal. Both gramians are part of the balancing algorithms and can be used for solving the model order reduction problem. The most important part in deriving the similarity transformation matrices for system balancing is the computation of the gramians.

For linear time-invariant systems, balanced model reduction is a well developed area, where the computation of system gramians is mainly based on solving certain Lyapunov equations, see Antoulas (2005). This is not the case for linear time-varying systems. The system matrices for such systems are functions of time, and the algebraic Lyapunov equations for computing the gramians are replaced by differential Lyapunov equations. Solving such equations for large scale systems is a computationally cumbersome task and requires serious computational resources. Moreover, there exist different types of balancing and gramians definitions. Three types of gramians for linear time-varying systems are defined in Verriest & Kailath (1983): finite interval gramians, infinite interval gramians and sliding interval gramians. Sliding interval gramians are initially used in Verriest & Kailath (1983) and then in Verriest (2008) for system balancing. However, the procedure for determining the gramians requires computing high order matrix derivatives, which creates certain numerical problems for large-scale systems. The concept of sliding interval gramians is also used in Shokoohi et al. (1983) for defining uniformly balanced realizations. Computing the gramians by solving differential equations of Lyapunov is presented in Lang et al. (2016). The proposed method uses backward differentiation formulas and the procedure of Rosenbrock. A different approach for computing the gramians of linear time-varying systems is proposed in Sandberg & Rantzer (2004). The method is based on solving time-dependent linear matrix inequalities. Solving linear matrix inequalities for obtaining the gramians in the discrete domain is also suggested in Lall & Beck (2003). Bounds on the error of approximation are presented in Sandberg & Rantzer (2004) and Lall & Beck (2003), where the time-dependence of the Hankel singular values leads to different expressions for the bounds. Another approach, which proposes an algorithm for computing the gramians in the discrete domain for linear time-varying periodic systems, is presented in Ma et al. (2010). The periodic discrete-time case is also considered in Varga (2000).

A different approach for computing the gramians is to use the trajectories of the system and to obtain the solution from data snapshots of the system trajectories, see Sirovich (1987). The state trajectory is discretized in equally distributed state points called snapshots, which are further employed for low dimensional approximation of system states. The trajectories based approach avoids solving the usual Lyapunov equations and rather uses the snapshot matrices, which one can obtain either from experiment or from simulation. The trajectories based approach relates closely to the approach of empirical gramians, see Lall et al. (1999) and Himpe & Ohlberger (2013). The trajectories based approach is also considered in Perev (2018a), where the main derivations and results are obtained for linear time-invariant systems. It is shown that the gramians can be computed from the system state impulse and zero-input output responses. The paper gives the general framework for orthogonal polynomial approximation of system gramians and presents basic information about the errors of approximation. The obtained results are extended to the linear time-varying case by employing the empirical gramians approach.
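As a minimal illustration of the trajectory-based idea in the time-invariant case, the sketch below computes the reachability gramian from snapshots of the state impulse response x(t) = e^{At}B and checks it against the algebraic Lyapunov equation. The matrices A, B and the horizon T are illustrative choices, not taken from the paper.

```python
import numpy as np

def expm(Z):
    # matrix exponential via eigendecomposition (Z assumed diagonalizable)
    vals, vecs = np.linalg.eig(Z)
    return (vecs @ np.diag(np.exp(vals)) @ np.linalg.inv(vecs)).real

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable example system
B = np.array([[0.0], [1.0]])
T, M = 30.0, 3000                          # horizon, number of snapshots

# snapshots of the state impulse response x(t) = e^{At} B
t = np.linspace(0.0, T, M)
X = np.array([(expm(A * ti) @ B).ravel() for ti in t])

# Wr(0,T) = integral of x(t) x(t)^T dt, trapezoidal rule over the snapshots
dt = t[1] - t[0]
outer = X[:, :, None] * X[:, None, :]
Wr_snap = dt * (outer[1:-1].sum(axis=0) + (outer[0] + outer[-1]) / 2.0)

# check: for large T, Wr(0,T) approaches the solution of the algebraic
# Lyapunov equation A W + W A^T + B B^T = 0 (solved here by vectorization)
n = A.shape[0]
K = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
Wr_lyap = -np.linalg.solve(K, (B @ B.T).ravel()).reshape(n, n)
```

The snapshot matrix X here comes from a closed-form simulation; in the empirical gramians setting the same matrix would be filled with measured or numerically integrated trajectories.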
The present paper considers the problem of finite interval gramians computation by using orthogonal polynomial approximations of the adjoint system state impulse response and the zero-input output response. A new feature of the proposed method is the application of the adjoint system state trajectories for computing the reachability gramian. The presented algorithm is restricted to computing finite interval gramians. It cannot be used for computing infinite interval gramians, since the adjoint system is unstable and, on an infinite time scale, the adjoint state impulse response approaches infinity. In contrast to the empirical gramians approach, the presented approach for computing the reachability gramian is not experimental, because the adjoint state impulse response can only be obtained by simulation. Both gramians for linear time-varying systems are functions of two variables, namely the initial time moment and the final time moment. If the initial time moment is fixed, then the state impulse response of the adjoint time-varying system can be considered as its Green function. In order to obtain the system trajectories, we can use the Runge-Kutta method for numerical integration, which requires only several evaluations of the system functions per step. The proposed method avoids solving the matrix differential equations of Lyapunov, which in the simplest case requires solving algebraic Lyapunov equations at each step of the algorithm. We claim that the proposed method is more efficient than the Lyapunov approach, since it replaces the integration of matrix differential equations with the simpler operation of integrating vector differential equations, thus reducing the computational cost.

2. LINEAR TIME-VARYING SYSTEM GRAMIANS

Consider the stable linear time-varying system described by its state space model:

ẋ(t) = A(t)x(t) + B(t)u(t),  t ≥ 0
y(t) = C(t)x(t),  x(0) = x0,    (1)

where x(t) ∈ R^n, u(t) ∈ R^m and y(t) ∈ R^p. The reachability map of system (1) on the interval [t0, t1] is defined by the expression Lr : PC([t0, t1]) → R^n as Lr : u[t0,t1] → ∫_{t0}^{t1} Φ(t1, τ)B(τ)u(τ)dτ, where Φ(t1, t) is the state transition matrix on the interval [t, t1]. The finite interval reachability gramian of system (1) on the interval [t0, t1] is defined by the expression:

Wr(t0, t1) = ∫_{t0}^{t1} Φ(t1, τ)B(τ)B(τ)^T Φ(t1, τ)^T dτ    (2)

From (2) it is clear that the reachability gramian is a symmetric positive semi-definite matrix. If we fix t0 and consider the function X : t1 → Wr(t0, t1), the reachability gramian can be computed as a solution of the following differential Lyapunov equation, see Callier & Desoer (1991):

Ẋ(t) = A(t)X(t) + X(t)A(t)^T + B(t)B(t)^T    (3)

with initial condition X(t0) = 0. Similarly, the observability map of system (1) on the interval [t0, t1] is defined by the expression Lo : R^n → PC([t0, t1]) as Lo : x0 → C(t)Φ(t, t0)x0 for every t ∈ [t0, t1]. The finite interval observability gramian of system (1) on the interval [t0, t1] is defined by the expression:

Wo(t0, t1) = ∫_{t0}^{t1} Φ(τ, t0)^T C(τ)^T C(τ)Φ(τ, t0) dτ    (4)

From expression (4) it is clear that the observability gramian is also a symmetric positive semi-definite matrix. If we fix t1 and consider the function Y : t0 → Wo(t0, t1), the observability gramian can be computed as a solution of the following differential Lyapunov equation, see Callier & Desoer (1991):

Ẏ(t) = −A(t)^T Y(t) − Y(t)A(t) − C(t)^T C(t),    (5)

with final condition Y(t1) = 0. In the linear time-invariant case, i.e. when the system matrices are constant, the gramians are defined as follows: Wr(0, T) = ∫_0^T e^{At}BB^T e^{A^T t} dt and Wo(0, T) = ∫_0^T e^{A^T t}C^T Ce^{At} dt. In the time-invariant case, the gramians are functions of one variable only, namely the difference between the final and initial time moments. For such systems, the state impulse response for u(t) = δ(t) is obtained as x(t) = e^{At}B. If we apply at the input a shifted signal u(t) = δ(t − t0), the obtained state impulse response is also a shifted function x(t) = e^{A(t−t0)}B. Yet, it has the same shape and form, and this is the reason to excite the linear time-invariant system at t0 = 0. The reachability gramian can be computed by using the state impulse response x(t) = e^{At}B as Wr(0, T) = ∫_0^T x(τ)x(τ)^T dτ. Similarly, the observability gramian can be obtained as Wo(0, T) = ∫_0^T y(τ)^T y(τ) dτ, where y(t) = Ce^{At} is the zero-input output response of the linear system due to sequentially selected unity initial conditions. This is not the case for linear time-varying systems. The state impulse response, obtained by exciting the system with the delta impulse u(t) = δ(t − t0), is w(t, t0) = Φ(t, t0)B(t0). It is clear that the state impulse response depends on two variables: the initial time moment t0 and the final time moment t. Moreover, for different values of t0 the obtained characteristics will be different. The reason is that the system parameter values change with time, and if we change the initial time moment, the parameter values will also change and therefore the time response will be different. For example, by applying a shifted delta impulse u(t) = δ(t − t0^1), the obtained state impulse response is w(t, t0^1) = Φ(t, t0^1)B(t0^1), and if the input signal is u(t) = δ(t − t0^2), then the response
will be w(t, t20 ) = Φ(t, t20 )B(t20 ). The impulse responses w(t, t10 ) and w(t, t20 ) are not only shifted in time, but they are with different shape and form. Therefore, the reachability gramian computed for different initial time moments and the same duration, will also be different. In this sense, the gramians Wr (t10 , t11 ) and Wr (t20 , t21 ) are different, although the length of the time intervals, where the gramians are defined are the same, i.e. t11 −t10 = t21 −t20 . The reachability gramian can be computed as: t1 (6) Wr (t0 , t1 ) = w(t1 , τ )w(t1 , τ )T dτ
where expression (8) has been used. Therefore, in order to compute the reachability gramian, the state transition matrix of the adjoint system (7) has to be derived and appropriately used in the integral (9). However, there exists a major obstruction in computing Ψ(t, t0 ) due to instability of system (7). There exists a fundamental difference between the regular and adjoint state impulse responses. While the regular state impulse response for a stable system converges to zero, the adjoint state impulse response will diverge. Therefore, the adjoint state impulse response can not be computed on an infinite time interval. The state impulse response for an unstable system can be computed only on a finite interval of time. The procedure of computing the finite interval reachabilty gramian follows the idea behind equation (3), where we fix the initial time moment t0 . The application of the trajectorybased approach to the adjoint system depends also on the trajectories, obtained from the regular system. Since the adjoint system is unstable, the interval of integration for this system trajectories is determined entirely from the interval of integration of the original regular system trajectories. The algorithm for computing the system gramians is further presented:
t0
The state impulse response of the linear time-varying system is Green function and for u(t) = δ(t − t0 ) can be computed as w(t, t0 ) = Φ(t, t0 )B(t0 ). The Green function for a given system is determined by the kernel of the integral operator and denotes the response of the system to a concentrated into a given point unit input signal. The unit input signal is concentrated at the time moment t0 and is presented by a delta function u(t) = δ(t − t0 ). In the empirical gramians approach, we derive the state impulse response w(t, t0 ), where the fixed time variable is t0 and the time variable which changes is t. In the finite interval gramians case however, the fixed time variable for the integrand w(t1 , τ ) is the final time moment t1 , while the changing one is the initial time moment τ , see expression (2). Therefore, the integration variables of the integral kernel have to be replaced. This can be achieved by switching the time variables in the state transition matrix. This switching of time variables in the state transition matrix can be obtained by using the adjoint system description, see Perev (2018b). The homogeneous adjoint of system (1) is defined by the equation: p(t) ˙ = −A(t)T p(t),
The adjoint system of system (1) is given by the equation

ṗ(t) = −A(t)^T p(t), (7)

where p(t) ∈ R^n is the state vector of the adjoint system (7). We denote by Ψ(t1, t0) the state transition matrix of the adjoint system (7) on the time interval [t0, t1]. The relation between the state transition matrices of the original system (1) and the adjoint system (7) is given by the expression, see Callier & Desoer (1991):

Ψ(t, t0) = Φ(t0, t)^T. (8)

It is clear that the ordering of the time variables for the adjoint system is reversed with respect to their ordering in the original system. Using (8), the state transition matrix of the original system can be written in the form Φ(t, τ) = Φ(t, t0)Φ(t0, τ) = Φ(t, t0)Ψ(τ, t0)^T. Therefore, computing the state transition matrix as a function of its first argument reduces to computing Φ(t, t0) for the original system (1) and Ψ(τ, t0) for the adjoint system (7). The reachability gramian of the original system (1) can be computed as:

Wr(t0, t1) = Φ(t1, t0) ∫_{t0}^{t1} Ψ(τ, t0)^T B(τ)B(τ)^T Ψ(τ, t0) dτ Φ(t1, t0)^T. (9)

Algorithm for computation of system gramians

The reachability gramian is obtained as follows:
• Fix t0. Apply the Runge-Kutta algorithm to system (1), compute its state impulse response and determine the state transition matrix Φ(t, t0), t ≥ t0.
• Determine the final time moment, and therefore the interval of integration [t0, t1], from stability considerations.
• Apply the Runge-Kutta algorithm to the adjoint system on this interval and compute the matrices Ψ(τ, t0)^T as a function of the current time moment.
• Use the expression Φ(t, τ) = Φ(t, t0)Φ(t0, τ) = Φ(t, t0)Ψ(τ, t0)^T to obtain the state transition matrix of system (1) as a function of its first argument.
• Determine the state trajectory w(t, τ) = Φ(t, τ)B(τ) and compute Wr(t0, t1) = ∫_{t0}^{t1} w(t, τ)w(t, τ)^T dτ.

The observability gramian is obtained as follows:
• Apply the Runge-Kutta algorithm to system (1) and determine the state transition matrix Φ(t, t0), t ≥ t0.
• Obtain the transpose of the zero-input output response h(t, t0) = Φ(t, t0)^T C(t)^T by sequentially selecting the initial conditions as the columns of the identity matrix. To see this, consider the zero-input output response y(t) = C(t)Φ(t, t0)x0. If we fix the initial time moment t0 and compute the output for different initial state vectors, we obtain the following expression:

Y(t) = [ y1(t) y2(t) ··· yn(t) ] = [ C(t)Φ(t, t0)x0,1 ··· C(t)Φ(t, t0)x0,n ], (10)

where the initial state vectors are selected as the columns of the identity matrix, i.e. x0,j = ej, j = 1, 2, ···, n. Therefore, Y(t)^T = Φ(t, t0)^T C(t)^T = h(t, t0) is an [n × 1] vector in the SISO case.
• Compute the observability gramian as:

Wo(t0, t1) = ∫_{t0}^{t1} h(τ, t0)h(τ, t0)^T dτ. (11)
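The two procedures above can be sketched in a few lines of NumPy. The fragment below is an illustrative sketch, not the paper's code: the function names, the trapezoidal quadrature, and the scalar test system A = −1, B = C = 1 are assumptions made for the demonstration. It integrates Φ̇ = A(t)Φ and the adjoint equation Ψ̇ = −A(t)^T Ψ with the classical fourth order Runge-Kutta scheme and assembles the gramians according to (9) and (11).

```python
import numpy as np

def rk4_step(f, t, X, h):
    """One classical fourth order Runge-Kutta step for X' = f(t, X)."""
    k1 = f(t, X)
    k2 = f(t + h / 2, X + h / 2 * k1)
    k3 = f(t + h / 2, X + h / 2 * k2)
    k4 = f(t + h, X + h * k3)
    return X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def gramians(A, B, C, t0, t1, steps=2000):
    """Finite interval gramians of x' = A(t)x + B(t)u, y = C(t)x on [t0, t1],
    computed from trajectories of the original and the adjoint system."""
    n = A(t0).shape[0]
    h = (t1 - t0) / steps
    Phi = np.eye(n)  # Phi(t, t0), integrated from Phi' = A(t) Phi
    Psi = np.eye(n)  # Psi(tau, t0) = Phi(t0, tau)^T, from Psi' = -A(t)^T Psi
    t = t0
    ws, hs = [], []
    for k in range(steps + 1):
        ws.append(Psi.T @ B(t))    # Phi(t0, tau) B(tau); premultiplied below
        hs.append(Phi.T @ C(t).T)  # h(tau, t0) = Phi(tau, t0)^T C(tau)^T
        if k < steps:
            Phi = rk4_step(lambda s, X: A(s) @ X, t, Phi, h)
            Psi = rk4_step(lambda s, X: -A(s).T @ X, t, Psi, h)
            t += h
    Phi10 = Phi  # Phi(t1, t0)
    wts = np.full(steps + 1, h)  # trapezoidal quadrature weights
    wts[0] = wts[-1] = h / 2
    Wr_inner = sum(c * w @ w.T for c, w in zip(wts, ws))
    Wr = Phi10 @ Wr_inner @ Phi10.T                 # equation (9)
    Wo = sum(c * v @ v.T for c, v in zip(wts, hs))  # equation (11)
    return Wr, Wo

# Scalar sanity check (assumed test system, not from the paper):
# A = -1, B = C = 1 on [0, 1] gives Wr(0,1) = Wo(0,1) = (1 - e^{-2})/2.
A = lambda t: np.array([[-1.0]])
B = lambda t: np.array([[1.0]])
C = lambda t: np.array([[1.0]])
Wr, Wo = gramians(A, B, C, 0.0, 1.0)
```

For the scalar test system the closed form value Wr(0, 1) = Wo(0, 1) = (1 − e^{−2})/2 ≈ 0.4323 is recovered to quadrature accuracy, which gives a quick correctness check before moving to time-varying matrices. Note that only vector and small matrix initial value problems are integrated, never a matrix Lyapunov differential equation.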
After computing both gramians for the initial time moment t0, we can change the initial time moment as t0 → t0 + δ and repeat the whole procedure. We apply the trajectory-based approach for computing the finite interval gramians by utilizing the fourth order Runge-Kutta algorithm. The order of accuracy of the presented method is O(h^4), where h is the discretization step. The main computational task is the numerical integration of the adjoint state equation, which is a vector differential equation. A different approach for computing the gramians is to solve the matrix differential equations of Lyapunov. In the simplest case, when the system matrices are constant, most of the popular algorithms require solving algebraic Lyapunov equations at each step of the algorithm, see Lang et al. (2015), Behr et al. (2018). However, when the system matrices are functions of time, the numerical integration of the Lyapunov matrix differential equations is unavoidable, see Benner & Stykel (2017), Behr et al. (2018). The main advantage of the proposed method is that it replaces the integration of matrix differential equations with the integration of vector differential equations. An additional source of efficiency is the possibility for parallelization of the computations, which further reduces the calculation time.

3. LEGENDRE POLYNOMIAL APPROXIMATION OF SYSTEM GRAMIANS

The Legendre polynomials form a complete set of orthogonal functions in the Hilbert space L2[−1, 1]. The n-th order Legendre polynomial is defined as in Abramowitz & Stegun (1972):

Pn(t) = 1/(2^n n!) d^n/dt^n (t^2 − 1)^n, n = 0, 1, 2, ··· (12)

The Legendre polynomials can also be computed by using the following recurrence relation, see Abramowitz & Stegun (1972):

P_{n+1}(t) = ((2n + 1)t Pn(t) − n P_{n−1}(t)) / (n + 1), P0(t) = 1, P1(t) = t, n = 1, 2, ··· (13)

The Legendre polynomials can be normalized by using the functions φn(t) = sqrt((2n+1)/2) Pn(t), which satisfy the orthonormality condition with a weighting function w(t) = 1. When the interval of definition is different from [−1, 1], the Legendre functions are rescaled and the so-called shifted Legendre functions are obtained. For the Hilbert space L2[t0, t1], the shifted Legendre functions are given by φn(t) = sqrt((2n+1)/2) Pn(2t/(t1 − t0) − (t1 + t0)/(t1 − t0)). Then every function f(t) ∈ L2[t0, t1] can be approximated on the interval [t0, t1] by the Legendre polynomial series as f(t) ≈ Σ_{n=0}^{N} qn sqrt((2n+1)/2) Pn(2t/(t1 − t0) − (t1 + t0)/(t1 − t0)), where the Fourier coefficients are computed as:

qn = sqrt((2n+1)/2) · (2/(t1 − t0)) ∫_{t0}^{t1} f(t) Pn(2t/(t1 − t0) − (t1 + t0)/(t1 − t0)) dt,

with N being the order of series truncation in the Legendre orthogonal series approximation. The mean square error of approximation is determined by the expression, see Schetzen (1989), as ε²_N = ∫_{t0}^{t1} f(t)² dt − Σ_{n=0}^{N} qn². In the vector case, the error of approximation is determined from the expression ε²_N = ∫_{t0}^{t1} f(t)^T f(t) dt − Σ_{n=0}^{N} qn^T qn.

We consider the Legendre orthogonal series approximation of system gramians. Assume first the SISO case. As a first step we fix the initial time moment t0. Based on energy considerations, we determine the final time moment t1. We partition the time interval [t0, t1] uniformly with a step δ = (t1 − t0)/n. The next step is to compute the adjoint state impulse response w(t1, τ), τ ∈ [t0, t1], by using the algorithm presented in the previous section. Then, we determine its Legendre series approximation as w(t1, τ) ≈ Σ_{k=0}^{N} qk sqrt((2k+1)/2) Pk(2τ/(t1 − t0) − (t1 + t0)/(t1 − t0)), where qk, k = 0, 1, 2, ···, N are the Fourier vector coefficients of the Legendre series expansion of the impulse response, which are determined as qk = sqrt((2k+1)/2) · (2/(t1 − t0)) ∫_{t0}^{t1} w(t1, τ) Pk(2τ/(t1 − t0) − (t1 + t0)/(t1 − t0)) dτ. The reachability gramian is determined as:

Wr(t0, t1) = ∫_{t0}^{t1} w(t1, τ)w(t1, τ)^T dτ ≈ ((t1 − t0)/2) Σ_{k=0}^{N} qk qk^T.

In the MIMO case, the reachability gramian approximation is computed by using the Dyadic Expansion Lemma for the product of two matrices, see Callier & Desoer (1991), as Wr(t0, t1) ≈ ((t1 − t0)/2) Σ_{k=0}^{N} Σ_{i=1}^{m} qk^i (qk^i)^T, where qk^i, k = 0, 1, 2, ···, N is the Fourier vector coefficient corresponding to the i-th input signal component, i = 1, 2, ···, m, when the other components are zero, see Perev (2018a). Similarly to the reachability gramian case, we approximate the observability gramian as Wo(t0, t1) = ∫_{t0}^{t1} h(τ, t0)h(τ, t0)^T dτ ≈ ((t1 − t0)/2) Σ_{k=0}^{N} fk fk^T, where fk, k = 0, 1, 2, ···, N are the Fourier vector coefficients of the Legendre series expansion of the zero-input output response of system (1) and are computed as fk = sqrt((2k+1)/2) · (2/(t1 − t0)) ∫_{t0}^{t1} h(τ, t0) Pk(2τ/(t1 − t0) − (t1 + t0)/(t1 − t0)) dτ, where h(τ, t0) = Φ(τ, t0)^T C(τ)^T. In the MIMO case the observability gramian is approximated by the expression Wo(t0, t1) ≈ ((t1 − t0)/2) Σ_{k=0}^{N} Σ_{j=1}^{p} fk^j (fk^j)^T, where fk^j, k = 0, 1, 2, ···, N is the Fourier vector coefficient in the Legendre series expansion for the j-th output response, j = 1, 2, ···, p, see Perev (2018a).
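The coefficient computation can be illustrated with a short sketch. The fragment below is again illustrative, not the author's implementation: the names legendre_P and legendre_gramian and the scalar test signal are assumptions. It evaluates the Legendre polynomials through the recurrence (13), forms the shifted arguments 2τ/(t1 − t0) − (t1 + t0)/(t1 − t0), computes the Fourier coefficients qk by trapezoidal quadrature, and assembles the gramian approximation W ≈ ((t1 − t0)/2) Σ_k qk qk^T.

```python
import numpy as np

def legendre_P(N, x):
    """P_0 .. P_N evaluated at the points x, via the recurrence (13)."""
    P = np.zeros((N + 1,) + x.shape)
    P[0] = 1.0
    if N >= 1:
        P[1] = x
    for n in range(1, N):
        P[n + 1] = ((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1)
    return P

def legendre_gramian(w_samples, taus, N):
    """Approximate W = int_{t0}^{t1} w(tau) w(tau)^T dtau from the
    shifted-Legendre Fourier coefficients: W ~ (t1 - t0)/2 * sum_k q_k q_k^T.
    w_samples has shape (n, K): columns are samples of w on the grid taus."""
    t0, t1 = taus[0], taus[-1]
    s = (2 * taus - t1 - t0) / (t1 - t0)  # map [t0, t1] onto [-1, 1]
    P = legendre_P(N, s)
    h = taus[1] - taus[0]
    wts = np.full(taus.size, h)  # trapezoidal quadrature weights
    wts[0] = wts[-1] = h / 2
    q = [np.sqrt((2 * k + 1) / 2) * 2 / (t1 - t0) * ((w_samples * P[k]) @ wts)
         for k in range(N + 1)]
    W = (t1 - t0) / 2 * sum(np.outer(qk, qk) for qk in q)
    return W, q

# Scalar check (assumed test data, not from the paper): w(t1, tau) = e^{tau-1}
# on [0, 1] is the adjoint state impulse response of x' = -x with b = 1 and
# t1 = 1, so the exact gramian is Wr(0, 1) = (1 - e^{-2})/2.
taus = np.linspace(0.0, 1.0, 10001)
w = np.exp(taus - 1.0)[None, :]
Wr, q = legendre_gramian(w, taus, N=10)
```

In the MIMO case the same routine is applied once per input (or output) component and the dyadic sums are accumulated, as in the expressions above; the Parseval-type factor (t1 − t0)/2 comes from the shifted functions being orthonormal only up to that interval scaling.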
4. NUMERICAL EXAMPLE

Consider the linear time-varying stable system (1), where the system matrices are determined as follows:

A(t) = [ −1 + cos²t      1 − a e^{−0.2t} sin t cos t
         −1 + a sin²t   −1 − a e^{−0.4t} sin t cos t ],

B(t) = [ 1
         1 ],   C(t) = [ 1.5  0.5 ],

where the parameter a assumes the value a = 1.5. The state impulse response characteristics are obtained on the interval [0, 0.5] with discretization step δ = 0.005.

It is clear that the adjoint state impulse response is unstable and is bounded only on a finite interval of time, as can be seen in fig.1. If we reverse the time variable by using τ = t1 − t, we obtain the adjoint reversed-time state impulse responses in fig.2.

Fig. 1. Adjoint state impulse responses

Fig. 2. Adjoint reversed-time state impulse responses

The next step is to apply the Legendre orthogonal series approximation of these characteristics and to compute the system gramians based on the approximated curves. We use the same discretization step δ = 0.005 for the series expansion, and the order of series truncation is N = 25. The approximated adjoint reversed-time state impulse responses are shown in fig.3. The approximate zero-input output responses, obtained by using a 25th order Legendre series approximation of the output characteristics, are shown in fig.4. Fig.5 contains the Hankel singular values of system (1) for the time interval t ∈ [0, 0.5], where the time variable is the initial time moment t = t0. The corresponding Hankel singular values on the same time interval, obtained by orthogonal approximation of the adjoint state impulse response and the zero-input output response of system (1), are shown in fig.6, where both sets of Hankel singular values are again functions of the initial time moment t0.

Fig. 3. Approximated adjoint reversed-time state impulse responses

Fig. 4. Approximated zero-input output responses

Fig. 5. Hankel singular values as functions of t0

Fig. 6. Approximated HSV as functions of t0

5. CONCLUSION

The paper considers the problem of Legendre orthogonal polynomial approximation of system finite interval gramians. A simple method for computing the gramians, instead of solving differential Lyapunov equations, is presented. The proposed method is based on approximation of the adjoint system state impulse response and the regular system zero-input output response. The numerical implementation of the method uses the Runge-Kutta algorithm for performing linear time-varying system simulations. Both gramians are approximated by using Legendre orthogonal polynomial representations of the system adjoint state impulse response and the zero-input output response.

ACKNOWLEDGEMENTS

This work was supported in part by the Research and Development Sector and the Faculty of Automatics, Systems and Control Department at the Technical University of Sofia.

REFERENCES

Abramowitz, M. and I. Stegun, eds. Handbook of mathematical functions with formulas, graphs and mathematical tables. Dover Publ., New York, NY, 1972.
Antoulas, A. Approximation of large-scale dynamical systems. SIAM Publ., Philadelphia, PA, 2005.
Behr, M., P. Benner, and J. Heiland. Solution formulas for differential Sylvester and Lyapunov equations. arXiv:1811.08327v1 [math.NA], Cornell University, 2018.
Benner, P. and T. Stykel. Model order reduction for differential-algebraic equations: A survey. In Surveys in differential-algebraic equations IV, Differential-algebraic equations forum, A. Ilchmann and T. Reis, eds., Springer Verlag, Berlin, pages 107-160, 2017.
Callier, F. and C. Desoer. Linear system theory. Springer Verlag, New York, NY, 1991.
Himpe, C. and M. Ohlberger. A unified software framework for empirical gramians. Journal of Mathematics, 2013, pages 1-6, 2013.
Lall, S. and C. Beck. Error-bounds for balanced model-reduction of linear time-varying systems. IEEE Transactions on Automatic Control, 48, 6, pages 946-956, 2003.
Lall, S., J. Marsden, and S. Glavaski. Empirical model reduction of controlled nonlinear systems. Proceedings of the IFAC World Congress, pages 473-478, 1999.
Lang, N., H. Mena, and J. Saak. On the benefits of the LDL^T factorization for large-scale differential matrix equation solvers. Linear Algebra and its Applications, 480, 1, pages 44-71, 2015.
Lang, N., J. Saak, and T. Stykel. Balanced truncation model reduction for linear time-varying systems. Mathematical and Computer Modelling of Dynamical Systems, 22, 4, pages 267-281, 2016.
Ma, Z., C. Rowley, and G. Tadmor. Snapshots-based balanced truncation for linear time-periodic systems. IEEE Transactions on Automatic Control, 55, 2, pages 469-473, 2010.
Perev, K. Legendre orthogonal polynomials approximation of system gramians and its application in balanced truncation. International Journal of Systems Science, 49, 10, pages 2170-2186, 2018a.
Perev, K. Computation of system gramians for linear time-varying systems. AIP Conference Proceedings: Application of Mathematics in Engineering and Economics, AMEE'18, 2048, pages 050006-1 - 050006-8, Sozopol, BG, 2018b.
Sandberg, H. and A. Rantzer. Balanced truncation of linear time-varying systems. IEEE Transactions on Automatic Control, 49, 2, pages 217-229, 2004.
Schetzen, M. The Volterra and Wiener theories of nonlinear systems. Krieger Publ. Corp., Malabar, 1989.
Shokoohi, S., L. Silverman, and P. Van Dooren. Linear time-variable systems: Balancing and model reduction. IEEE Transactions on Automatic Control, 28, 8, pages 810-822, 1983.
Sirovich, L. Turbulence and the dynamics of coherent structures. Part I-III. Quarterly of Applied Mathematics, 45, pages 561-590, 1987.
Varga, A. Balanced truncation model reduction of periodic systems. In Proceedings of the Conference on Decision and Control, Sydney, AU, pages 2379-2384, 2000.
Verriest, E. Time variant balancing and nonlinear balanced realizations. In Model order reduction: Theory, research aspects and applications, W. Schilders, H. Van der Vorst, and J. Rommes, eds., Springer-Verlag, Berlin, pages 213-250, 2008.
Verriest, E. and T. Kailath. On generalized balanced realizations. IEEE Transactions on Automatic Control, 28, 8, pages 833-844, 1983.