Proceedings of the 18th World Congress The International Federation of Automatic Control Milano (Italy) August 28 - September 2, 2011
An explicit formula for computation of the state coordinates for nonlinear i/o equation ⋆

Juri Belikov ∗ Ülle Kotta ∗ Maris Tõnso ∗
∗ Institute of Cybernetics, Tallinn University of Technology, Akadeemia tee 21, 12618 Tallinn, Estonia (e-mail: {jbelikov, kotta, maris}@cc.ioc.ee)
Abstract: For a nonlinear input-output equation defined on a homogeneous time scale, an explicit formula is given that allows one to compute recursively the differentials of the state coordinates. The recursive formula is based on the Euclidean division algorithm.

Keywords: nonlinear control system, input-output models, state-space realization, polynomial methods, time scales.

1. INTRODUCTION

Time scale calculus unifies the theories of differential and difference equations, see Bohner and Peterson (2001). Both the continuous- and the discrete-time (in terms of the difference operator) cases are merged in the time scale formalism into a general framework which provides not only unification but also an extension. The main concept of time scale calculus is the so-called delta derivative, which generalizes both the time derivative and the difference operator (and accommodates further possibilities, e.g. the q-difference operator). However, the time scale formalism does not allow one to study discrete-time systems described in terms of the more conventional shift operator, because the shift operator, unlike the difference operator, is not a delta derivative. Though less popular, the description of dynamical systems based on the difference operator has been advocated by many researchers (see, for example, Middleton and Goodwin (1990)). Recently, it was demonstrated that a difference-operator-based NARX model improves the numerical properties of system structure detection, known to be a very difficult subtask in system identification, and provides a model that is closely linked to the continuous-time system both in terms of parameters and structure, see Anderson and Kadirkamanathan (2007).
formula herein also covers the discrete-time nonlinear systems represented in terms of the difference operator. The explicit formula is easier to implement, and the program based on it produces the results much faster than the one based on the recursive algorithm. Finally, note that the realization problem for linear and nonlinear systems defined on time scales from the i/o map has been studied in Bartosiewicz and Pawłuszewicz (2006) and Bartosiewicz and Pawłuszewicz (2008), respectively.

The paper is organized as follows: Section 2 recalls the notions from time scale calculus used in the paper. In the next section the problem setting is explained and a brief overview of the algebraic framework on a homogeneous time scale is given. Section 4 is devoted to the theory of noncommutative polynomial rings. In Section 5 the main result is presented. It is followed by examples and conclusions.

2. CALCULUS ON TIME SCALE

For a general introduction to the calculus on time scales, see Bohner and Peterson (2001). Here we give only those notions and facts that we need in our paper. A time scale T is an arbitrary nonempty closed subset of the set R of real numbers. The standard cases include T = R, T = Z, and T = hZ for h > 0, but also T = q^Z := {q^k : k ∈ Z} ∪ {0}, q > 1, is a time scale.
The realization problem, i.e. the problem of recovering the state-space model from the input-output (i/o) delta-differential equation (DDE) relating the inputs and outputs of the system together with their delta derivatives, has been studied for nonlinear control systems defined on homogeneous time scales in Casagrande et al. (2010). In that paper the state coordinates were computed using a recursive algorithm. Note that, first, the application of the algorithm is time-consuming and, second, it requires solving a system of equations, which may fail when implemented in the CAS Mathematica for systems of medium complexity. In this paper we provide for a nonlinear DDE a straightforward formula for finding the differentials of the state coordinates, generalizing the results obtained for linear systems in Rapisarda and Willems (1997) and for nonlinear continuous-time systems in Tõnso and Kotta (2009). The novelty of our result, besides unification, lies in the fact that the
If µ ≡ const, then the time scale T is called homogeneous. In this paper we assume that the time scale T is homogeneous.

Definition 1. The delta derivative of a function f : T→R at t is the number f^∆(t) such that for each ε > 0 there exists a neighborhood U(ε) ⊂ T of t such that for all τ ∈ U(ε),
|f(σ(t)) − f(τ) − f^∆(t)(σ(t) − τ)| ≤ ε|σ(t) − τ|.
⋆ This work was partially supported by the Estonian Science Foundation Grant no. 8787.
The typical special cases of the delta operator are summarized in the following remark.
978-3-902661-93-7/11/$20.00 © 2011 IFAC
The following operators on T are often used:
• the forward jump operator σ : T→T, defined by σ(t) := inf{τ ∈ T : τ > t} and σ(sup T) = sup T, if sup T ∈ T,
• the backward jump operator ρ : T→T, defined by ρ(t) := sup{τ ∈ T : τ < t} and ρ(inf T) = inf T, if inf T ∈ T,
• the graininess function µ : T→[0, ∞), defined by µ(t) := σ(t) − t.
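These definitions are easy to make concrete. A minimal sketch (the function names are ours) for the homogeneous time scale T = hZ and the non-homogeneous time scale T = q^Z:

```python
# Jump operators and graininess on two standard time scales.

# T = hZ: sigma(t) = t + h, rho(t) = t - h, so mu(t) = h is constant
# and the time scale is homogeneous.
def sigma_hZ(t, h):
    return t + h

def rho_hZ(t, h):
    return t - h

def mu_hZ(t, h):
    return sigma_hZ(t, h) - t  # graininess: mu(t) = sigma(t) - t

# T = q^Z = {q^k : k in Z} | {0}, q > 1: sigma(t) = q*t, so
# mu(t) = (q - 1)*t depends on t and this time scale is NOT homogeneous.
def sigma_qZ(t, q):
    return q * t

def mu_qZ(t, q):
    return sigma_qZ(t, q) - t
```

On T = R one has σ(t) = t and µ ≡ 0, recovering ordinary calculus.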
10.3182/20110828-6-IT-1002.00091
Remark 1. (i) If T = R, then f : R→R is delta differentiable at t ∈ R if and only if f^∆(t) = lim_{s→t} (f(t) − f(s))/(t − s) = f′(t), i.e. iff f is differentiable in the ordinary sense at t. (ii) If T = TZ, where T > 0, then f : TZ→R is always delta differentiable at every t ∈ TZ with f^∆(t) = (f(σ(t)) − f(t))/µ(t) = (f(t + T) − f(t))/T, meaning the usual forward difference operator.

Proposition 1. Let f : T→R, g : T→R be two delta differentiable functions defined on T and let t ∈ T. Then the delta derivative satisfies the following properties:
(i) f^σ = f + µf^∆,
(ii) (αf + βg)^∆ = αf^∆ + βg^∆, for any constants α and β,
(iii) (fg)^∆ = f^σ g^∆ + f^∆ g,
(iv) if g g^σ ≠ 0, then (f/g)^∆ = (f^∆ g − f g^∆)/(g g^σ).
For a function f : T→R we define the second delta derivative f^[2] := (f^∆)^∆, provided that f^∆ is delta differentiable on T. Similarly, we define the higher order delta derivatives f^[n].
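On T = hZ the delta derivative is the forward difference of Remark 1(ii), and the properties of Proposition 1 can be checked numerically. A small sketch (the names `delta` and `shift` are ours):

```python
# Delta derivative on the homogeneous time scale T = hZ, where it reduces
# to the forward difference f^Delta(t) = (f(t + h) - f(t)) / h (Remark 1(ii)).
def delta(f, h):
    return lambda t: (f(t + h) - f(t)) / h

def shift(f, h):
    # f^sigma(t) = f(sigma(t)) = f(t + h); note mu(t) = h on T = hZ
    return lambda t: f(t + h)

h, t = 0.5, 2.0
f = lambda s: s**2
g = lambda s: 3*s + 1

# Property (i) of Proposition 1: f^sigma = f + mu * f^Delta
assert abs(shift(f, h)(t) - (f(t) + h * delta(f, h)(t))) < 1e-9

# Property (iii): (fg)^Delta = f^sigma * g^Delta + f^Delta * g
fg = lambda s: f(s) * g(s)
lhs = delta(fg, h)(t)
rhs = shift(f, h)(t) * delta(g, h)(t) + delta(f, h)(t) * g(t)
assert abs(lhs - rhs) < 1e-9
```

Both identities hold exactly on hZ, so the assertions are satisfied up to floating-point error.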
σ(F)(y, . . . , y^[n−1], u, . . . , u^[k]) := F(y^σ, . . . , (y^[n−1])^σ, u^σ, . . . , (u^[k])^σ),   (3)

where (y^[l])^σ = y^[l] + µy^[l+1] for 0 ≤ l < n − 1, (y^[n−1])^σ = y^[n−1] + µφ(y^[0..n−1], u^[0..s]), (u^[k])^σ = u^[k] + µu^[k+1] for k ≥ 0, and

∆(F)(y, . . . , y^[n−1], u, . . . , u^[k+1]) := ∫₀¹ {grad F(y^[0..n−1] + hµ[y^[1..n−1], φ(y^[0..n−1], u^[0..s])], u^[0..k] + hµu^[1..k+1]) · [y^[1..n−1], φ(y^[0..n−1], u^[0..s]), u^[1..k+1]]^T} dh.   (4)

We will use σ(F) and F^σ interchangeably to denote the action of σ on F; similarly, both ∆(F) and F^∆ will be used.
3. PROBLEM STATEMENT AND ALGEBRAIC FRAMEWORK
In case σ is not injective, there may exist non-zero functions φ such that σ(φ) = 0, meaning that the operator σ is not well-defined on the field K. For the operator σ to be injective, the i/o equation (1) has to satisfy an assumption from Kotta et al. (2011):

rank_K [ 1 + Σ_{i=0}^{n−1} (−1)^{n−i−1} µ^{n−i} ∂φ/∂y^[i] ;  Σ_{j=0}^{s} (−1)^{s−j+1} µ^{s−j+2} ∂φ/∂u^[j] ] = 1.   (5)
For f : T→R and i ≤ k, denote f^[i..k] := (f^[i], . . . , f^[k]). Let y : T→Y ⊆ R and u : T→U ⊆ R be the output and the input functions, respectively, such that y is delta differentiable up to the order n and there exist the delta derivatives of u of any order. Consider a single-input single-output dynamical system described by a higher order input-output (i/o) DDE on the homogeneous time scale T:

y^[n] = φ(y, . . . , y^[n−1], u, . . . , u^[s]),   (1)
The operator ∆ satisfies a generalization of the Leibniz rule:
(FG)^∆ = F^σ G^∆ + F^∆ G   (6)
for F, G ∈ K. A derivation satisfying rule (6) is called a σ-derivation (see, for example, Cohn (1965)). Therefore K is a field equipped with a σ-derivation ∆. We assume furthermore that the operator σ : K→K is an automorphism¹. Under the latter assumption, the pair (K; ∆) is an inversive σ-differential field, meaning that the backward jump operator ρ is well-defined, i.e. for a function φ ∈ K there exists ψ such that ψ^σ = φ, or alternatively, φ^ρ = ψ.
Denote σ^n := σ ∘ · · · ∘ σ (n times) and f^{σ^n} := f ∘ σ^n.
Proposition 2. (Kotta et al. (2009)). Let f and f^∆ be delta differentiable functions on a homogeneous time scale T. Then
(i) f^{∆σ} = f^{σ∆},
(ii) f^{σ^n} = Σ_{k=0}^{n} C_n^k µ^k f^[k].
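Item (ii) of Proposition 2 can be verified numerically on T = hZ, where µ ≡ h and f^[k] is the k-fold forward difference; the helper names below are ours:

```python
from math import comb, isclose

h = 0.5

def delta(f):
    # Delta derivative on T = hZ (forward difference).
    return lambda t: (f(t + h) - f(t)) / h

def delta_n(f, n):
    # n-th delta derivative f^[n]
    for _ in range(n):
        f = delta(f)
    return f

f = lambda t: t**3 - 2*t
t, n = 1.0, 3

lhs = f(t + n * h)  # f^{sigma^n}(t) = f(t + n*h) on T = hZ
rhs = sum(comb(n, k) * h**k * delta_n(f, k)(t) for k in range(n + 1))
assert isclose(lhs, rhs, abs_tol=1e-9)
```

On hZ this is exactly Newton's forward-difference identity, so the two sides agree up to rounding.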
where φ is a real analytic function defined on Y^n × U^{s+1}. Moreover, for causality, one assumes that s < n, where n and s are nonnegative integers.
The realization problem is defined as follows. Given a nonlinear system described by the i/o equation of the form (1), find, if possible, the state coordinates x ∈ X ⊆ R^n, x = ψ(y, . . . , y^[n−1], u, . . . , u^[s]), such that in these coordinates the system takes the classical state-space form
x^∆ = f(x, u),
y = h(x),   (2)
called the realization of (1).
Below we briefly recall the algebraic formalism for nonlinear control systems defined on homogeneous time scales, described in Bartosiewicz et al. (2007), Casagrande et al. (2010), and Kotta et al. (2009). Let K denote the field of meromorphic functions in a finite number of the (independent) variables
C = {y, y^∆, . . . , y^[n−1], u^[k], k ≥ 0}.
The operators σ : K→K and ∆ : K→K are defined by (3) and (4), respectively.
Over the σ-differential field K one can define the vector space
E := span_K{dy, dy^∆, . . . , dy^[n−1], du^[k] : k ≥ 0}.
The elements of E are called one-forms. For F ∈ K we define d : K→E as follows:
dF := Σ_{i=0}^{n−1} (∂F/∂y^[i]) dy^[i] + Σ_{j=0}^{k} (∂F/∂u^[j]) du^[j].
dF is said to be the total differential (or simply the differential) of the function F, and it is a one-form. Any element of E is a vector of the form ω = Σ_i α_i dϕ_i, where all α_i ∈ K and the ϕ_i are functions of variables from the set C. The operators σ : K→K and ∆ : K→K induce the operators σ : E→E and ∆ : E→E by
σ(ω) := Σ_i σ(α_i) d[σ(ϕ_i)],   (7)
∆(ω) := Σ_i {∆(α_i) dϕ_i + σ(α_i) d[∆(ϕ_i)]}.   (8)
Since σ(α_i) = α_i + µ∆(α_i), (8) may alternatively be written as
∆(ω) = Σ_i {∆(α_i) dϕ_i + (α_i + µ∆(α_i)) d[∆(ϕ_i)]}.
¹ For this assumption to hold, one has to extend the field K up to its inversive closure, which is always possible, see Cohn (1965).
It has been proved that ∆(dF) = d[F^∆], σ(dF) = d[F^σ] and ∆σ = σ∆, see Bartosiewicz et al. (2007).
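The commutation ∆σ = σ∆ is immediate to check numerically on T = hZ, where σ shifts by h and ∆ is the forward difference (helper names are ours):

```python
# On T = hZ, check that the shift and the delta derivative commute:
# (f^sigma)^Delta = (f^Delta)^sigma, i.e. Delta(sigma(f)) = sigma(Delta(f)).
h = 0.5

def delta(f):
    return lambda t: (f(t + h) - f(t)) / h

def shift(f):
    return lambda t: f(t + h)

f = lambda t: t**3 + 2*t
t0 = 1.25
assert abs(delta(shift(f))(t0) - shift(delta(f))(t0)) < 1e-9
```

Both sides equal (f(t + 2h) − f(t + h))/h exactly, so the identity holds pointwise.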
Since σ is an automorphism, the ring of the left differential polynomials is an Ore ring, see McConnell and Robson (1987).
The relative degree r of a one-form ω ∈ E is defined to be the least integer such that ω^[r] ∉ span_K{dy, . . . , dy^[n−1], du, . . . , du^[s]}. If such an integer does not exist, we set r = ∞. A sequence of subspaces {H_k} of E is defined by
H_1 = span_K{dy, . . . , dy^[n−1], du, . . . , du^[s]},   (9)
H_{k+1} = {ω ∈ H_k | ω^∆ ∈ H_k}, k ≥ 1.
Let σ^n := σ ∘ · · · ∘ σ (n times) and denote σ^n(a) by a^{σ^n} for a ∈ K.
Note that H_k contains the one-forms whose relative degree is equal to k or higher. It is clear that the sequence (9) is decreasing. Denote by k* the least integer such that H_1 ⊃ H_2 ⊃ · · · ⊃ H_{k*} ⊃ H_{k*+1} = · · · =: H_∞. In what follows we assume that the i/o delta differential equation (1) is in the irreducible form, that is, H_∞ is trivial, i.e. H_∞ = {0}, see Kotta et al. (2009). An nth-order realization of equation (1) will be accessible if and only if system (1) is irreducible. System (2) is said to be single-experiment observable if the observability matrix has generically full rank:
rank_K ∂(h(x), [h(x)]^∆, . . . , [h(x)]^[n−1])/∂x = n.
We say that ω ∈ E is an exact one-form if there exists ξ ∈ K such that dξ = ω. A one-form ω for which dω = 0 is said to be closed. A subspace is integrable, or closed, if it has a basis which consists only of closed one-forms. Note that closed one-forms are locally exact. The Frobenius theorem can be used to check the integrability of a subspace of one-forms; in the theorem presented below the symbol ∧ denotes the exterior (wedge) product.
Theorem 1. (Choquet-Bruhat et al. (1982)). Let V = span_K{ω_1, . . . , ω_r} be a subspace of E. V is closed if and only if
dω_i ∧ ω_1 ∧ . . . ∧ ω_r = 0   (10)
for all i = 1, . . . , r.
Theorem 2. (Casagrande et al. (2010)). The nonlinear system Σ, described by the irreducible input-output delta differential equation (1), has an observable and accessible state-space realization iff for 1 ≤ k ≤ s + 2 the subspaces H_k defined by (9) are completely integrable. Moreover, the state coordinates can be obtained by integrating the basis vectors of H_{s+2}.

4. POLYNOMIAL FRAMEWORK
Lemma 1. (Kotta et al. (2009)). Let a ∈ K. Then ∂^n · a ∈ K[∂; σ, ∆] for n ≥ 0, and ∂^n · a = Σ_{i=0}^{n} C_n^i (a^[n−i])^{σ^i} ∂^i.
Lemma 2. (Kotta et al. (2009)). Let F ∈ K and n ∈ N. Then ∂^n(dF) = dF^[n].
A ring D is called an integral domain, or a domain, if it does not contain any zero divisors. The latter means that if a and b are two elements of D such that ab = 0, then a = 0 or b = 0.
Proposition 3. (McConnell and Robson (1987)). The ring K[∂; σ, ∆] is an integral domain.
Let us define ∂^k dy := dy^[k] and ∂^l du := du^[l], k, l ≥ 0, in the vector space E. Since every one-form ω ∈ E has the form ω = Σ_{i=0}^{n−1} A_i dy^[i] + Σ_{j=0}^{k} B_j du^[j], where A_i, B_j ∈ K, ω can be expressed in terms of left differential polynomials as ω = (Σ_{i=0}^{n−1} A_i ∂^i) dy + (Σ_{j=0}^{k} B_j ∂^j) du. A left differential polynomial can be considered as an operator acting on the vectors dy and du from E via (Σ_{i=0}^{k} A_i ∂^i)(α dν) := Σ_{i=0}^{k} A_i (∂^i · α) dν, with A_i, α ∈ K and dν ∈ {dy, du}, where by Lemma 1, ∂^i · α = Σ_{k=0}^{i} C_i^k (α^[i−k])^{σ^k} ∂^k. It is easy to note that ∂(ω) = ∆(ω) for ω ∈ E. Instead of working with equation (1), describing the control system, we can work with its differential
dy^[n] − Σ_{i=0}^{n−1} (∂φ/∂y^[i]) dy^[i] − Σ_{j=0}^{s} (∂φ/∂u^[j]) du^[j] = 0.   (12)
By definition ∂^i dy := dy^[i] and ∂^j du := du^[j], therefore (12) can be rewritten as
p(∂) dy + q(∂) du = 0   (13)
with p(∂) = ∂^n − Σ_{i=0}^{n−1} p_i ∂^i, q(∂) = −Σ_{j=0}^{s} q_j ∂^j and p_i = ∂φ/∂y^[i] ∈ K, q_j = ∂φ/∂u^[j] ∈ K. Equation (13) describes the behavior of the dynamical system in terms of the two differential polynomials p(∂), q(∂) in the operator ∂ over the σ-differential field K. Since K[∂; σ, ∆] is an Ore ring, one can construct its division ring of fractions. If p(∂) = p_1(∂)p_2(∂) and deg p_1(∂) > 0, then p_1(∂) is called a left divisor of p(∂) and p(∂) is said to be left divisible by p_1(∂).
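For a concrete φ the coefficients p_i and q_j of (13) are plain partial derivatives. A small SymPy sketch (the sample right-hand side φ is our choice, a van der Pol-type equation with n = 2, s = 0):

```python
import sympy as sp

# Independent coordinates of K: y, y^Delta, u.
y, yd, u = sp.symbols('y y_delta u')

# Hypothetical sample right-hand side of y^[2] = phi(y, y^Delta, u):
phi = (1 - y**2) * yd - y + u

# Coefficients of p(d) = d^2 - p_1*d - p_0 and q(d) = -q_0, cf. (13):
p0, p1 = sp.diff(phi, y), sp.diff(phi, yd)
q0 = sp.diff(phi, u)

assert p0 == -2*y*yd - 1
assert p1 == 1 - y**2
assert q0 == 1
```

So here p(∂) = ∂² − (1 − y²)∂ + 2y y^∆ + 1 and q(∂) = −1.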
Consider the differential field K with the σ-derivation ∆, with σ being an automorphism of K. A left differential polynomial is an element which can be uniquely written in the form a(∂) = Σ_{i=0}^{n} a_i ∂^{n−i}, a_i ∈ K, where ∂ is a formal variable and a(∂) ≠ 0 if and only if at least one of the functions a_i, i = 0, . . . , n, is nonzero. If a_0 ≢ 0, then the integer n is called the degree of the left polynomial a(∂), denoted by deg a(∂). In addition, we set deg 0 = −∞. For a ∈ K, multiplication is defined by the commutation rule
To find the left divisor one can use the left Euclidean division algorithm, see Bronstein and Petkovšek (1996); for performing it, it is sufficient that σ be an automorphism. Given two polynomials p_1(∂) and p_2(∂) with deg p_1(∂) > deg p_2(∂), there exist a unique quotient polynomial γ(∂) and a unique left remainder polynomial r(∂) such that p_1(∂) = p_2(∂)γ(∂) + r(∂) and deg r(∂) < deg p_2(∂).
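This left division can be sketched for the continuous-time specialization T = R, where σ = id and ∆ = d/dt, so the commutation rule reads ∂·a = a∂ + ȧ. The representation (coefficient lists, `p[i]` holding the coefficient of ∂^i) and all function names are ours; this is an illustration, not the authors' Mathematica implementation:

```python
import sympy as sp

t = sp.symbols('t')

def d_mul(p):
    """Left-multiply p(d) by d using the continuous-time rule d*a = a*d + a'."""
    out = [sp.S(0)] * (len(p) + 1)
    for i, c in enumerate(p):
        out[i + 1] += c           # a * d^(i+1)
        out[i] += sp.diff(c, t)   # a' * d^i
    return [sp.expand(c) for c in out]

def ore_mul(p, q):
    """Product p(d)*q(d) in K[d; id, d/dt]."""
    out = [sp.S(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        qi = list(q)
        for _ in range(i):
            qi = d_mul(qi)        # d^i * q(d)
        for j, c in enumerate(qi):
            out[j] += a * c
    return [sp.expand(c) for c in out]

def trim(p):
    p = [sp.expand(c) for c in p]
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def left_divide(p1, p2):
    """Left Euclidean division: p1 = p2 * gamma + r with deg r < deg p2."""
    rem = trim(p1)
    quot = [sp.S(0)] * max(len(p1) - len(p2) + 1, 1)
    while len(rem) >= len(p2) and rem != [sp.S(0)]:
        k = len(rem) - len(p2)
        c = sp.cancel(rem[-1] / p2[-1])
        quot[k] += c
        term = ore_mul(p2, [sp.S(0)] * k + [c])
        rem = trim([a - b for a, b in zip(rem, term)])
    return trim(quot), rem

# Example: divide d^2 by d - t. One finds d^2 = (d - t)(d + t) + (t^2 - 1).
quot, rem = left_divide([sp.S(0), sp.S(0), sp.S(1)], [-t, sp.S(1)])
assert quot == [t, 1] and rem == [t**2 - 1]
```

Iterating such divisions by ∂ is the core computation behind the state-coordinate formula of Section 5.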
∂ · a := a^σ ∂ + a^∆.   (11)

This rule can be uniquely extended to multiplication of monomials by (a∂^n) · (b∂^m) = (a∂^{n−1}) · (b^σ ∂^{m+1} + b^∆ ∂^m). The ring of differential polynomials will be denoted by K[∂; σ, ∆].

5. PROBLEM SOLUTION: POLYNOMIAL APPROACH
We introduce certain one-forms in terms of which the main result will be formulated. Let
ω_{k,l} := p_{k,l}(∂) dy + q_{k,l}(∂) du   (14)
for k = 1, . . . , s + 2, l = 1, . . . , n, where p_{k,l}(∂) and q_{k,l}(∂) are Ore polynomials, which can be recursively calculated as left quotients from the equalities
p_{k,l−1}(∂) = ∂ · p_{k,l}(∂) + r_{k,l}, deg r_{k,l} = 0,
q_{k,l−1}(∂) = ∂ · q_{k,l}(∂) + ρ_{k,l}, deg ρ_{k,l} = 0,   (15)
with the initial polynomials defined as p_{1,0}(∂) := ∂^n, q_{1,0}(∂) := 0 for k = 1 and
p_{k,0}(∂) := ∂^n − Σ_{i=n−k+1}^{n−1} p_i ∂^i,  q_{k,0}(∂) := −Σ_{j=s−k+2}^{s} q_j ∂^j   (16)
for k = 2, . . . , s + 2. The following two lemmas are necessary to prove the main result.
Lemma 3. The one-forms ω_{k,l} for k = 1, . . . , s + 2 and l = 1, . . . , n, defined by (14), satisfy the relationships
ω_{k+1,l} = ω_{k,l} + α(∂) dy + β(∂) du,   (17)
where
α(∂) = Σ_{i=0}^{n−k−l} (−1)^i C_i^{l+i−1} (p_{n−k}^[i])^{ρ^{l+i}} ∂^{n−k−l−i} if 1 ≤ l ≤ n − k, and α(∂) = 0 if n − k < l ≤ n,
and
β(∂) = Σ_{i=0}^{s−k−l+1} (−1)^i C_i^{l+i} (q_{s−k+1}^[i])^{ρ^{l+i}} ∂^{s−k−l−i+1} if 1 ≤ l ≤ s − k + 1, and β(∂) = 0 if s − k + 1 < l ≤ s.
Proof. From (14),
ω_{k+1,l} = p_{k+1,l}(∂) dy + q_{k+1,l}(∂) du.   (18)
We find the relationships between p_{k,l}(∂) and p_{k+1,l}(∂) and between q_{k,l}(∂) and q_{k+1,l}(∂), respectively. By repeated application of the recursive formula (15), equation (18) can be replaced by a formula which allows one to find p_{k,l}(∂) directly from p_{k,0}(∂):
p_{k,0}(∂) = ∂^l p_{k,l}(∂) + R_{k,l}(∂),  deg R_{k,l}(∂) < l.   (19)
By (16), the relationship (19) for k + 1 may be written as
p_{k,0}(∂) − p_{n−k} ∂^{n−k} = ∂^l p_{k+1,l}(∂) + R_{k+1,l}(∂),   (20)
where deg R_{k+1,l}(∂) < l. Divide the left-hand side of (20) by ∂^l to find the quotient p_{k+1,l}(∂). Notice that the left quotient of p_{k,0}(∂) and ∂^l is p_{k,l}(∂) by (19). Therefore, it remains to find the left quotient of p_{n−k} ∂^{n−k} and ∂^l, denoted by α(∂). If l > n − k, the quotient is zero. If l ≤ n − k, the quotient may be written as
α(∂) = α_{n−k−l} ∂^{n−k−l} + α_{n−k−l−1} ∂^{n−k−l−1} + · · · + α_1 ∂ + α_0,   (21)
where α_{n−k−l}, . . . , α_0 ∈ K are unknown coefficients. Multiplying the right-hand side of (21) by ∂^l from the left and using Lemma 1, one can rewrite formula (21) as
p_{n−k} ∂^{n−k} = Σ_{i=0}^{l} C_i^l (α_{n−k−l}^[l−i])^{σ^i} ∂^{n−k−l+i} + · · · + Σ_{i=0}^{l} C_i^l (α_0^[l−i])^{σ^i} ∂^i.   (22)
Equating the coefficients of the same powers of ∂ on both sides of formula (22) yields the system of linear equations
C_l^l (α_{n−k−l})^{σ^l} = p_{n−k},
C_{l−1}^l (α_{n−k−l}^∆)^{σ^{l−1}} + C_l^l (α_{n−k−l−1})^{σ^l} = 0,
. . .
C_0^l α_{n−k−l}^[l] + · · · + C_{l−1}^l (α_1^∆)^{σ^{l−1}} + C_l^l (α_0)^{σ^l} = 0.
It is easy to see that the system of equations is in triangular form, and one can find its solution recursively as
α_{n−k−l} = (p_{n−k})^{ρ^l},
α_{n−k−l−1} = −C_1^l (p_{n−k}^∆)^{ρ^{l+1}},
. . .
α_0 = (−1)^{n−k−l} C_{n−k−l}^{n−k−1} (p_{n−k}^[n−k−l])^{ρ^{n−k}}.
After substituting the solution into (21), we obtain
α(∂) = Σ_{i=0}^{n−k−l} (−1)^i C_i^{l+i−1} (p_{n−k}^[i])^{ρ^{l+i}} ∂^{n−k−l−i}.
Thus, p_{k+1,l}(∂) = p_{k,l}(∂) + α(∂). In a similar manner one can prove that q_{k+1,l}(∂) = q_{k,l}(∂) + β(∂), where β(∂) = 0 if s − k + 1 < l ≤ s and β(∂) = Σ_{i=0}^{s−k−l+1} (−1)^i C_i^{l+i} (q_{s−k+1}^[i])^{ρ^{l+i}} ∂^{s−k−l−i+1} if 1 ≤ l ≤ s − k + 1.
Lemma 4. For the one-forms ω_{k,l}, defined by (14), the property
ω_{k,l}^∆ = ω_{k,l−1} − r_{k,l} dy − ρ_{k,l} du   (23)
holds, where deg r_{k,l} = deg ρ_{k,l} = 0, for l = 2, . . . , n and k = 1, . . . , s + 2.
Proof. Taking the delta derivative of (14),
ω_{k,l}^∆ = (p_{k,l}(∂) dy)^∆ + (q_{k,l}(∂) du)^∆
= p_{k,l}^∆(∂) dy + p_{k,l}^σ(∂) dy^∆ + q_{k,l}^∆(∂) du + q_{k,l}^σ(∂) du^∆
= (p_{k,l}^∆(∂) + p_{k,l}^σ(∂)∂) dy + (q_{k,l}^∆(∂) + q_{k,l}^σ(∂)∂) du   (24)
and, using the commutation rule (11), equation (24) can be rewritten as
ω_{k,l}^∆ = (∂ · p_{k,l}(∂)) dy + (∂ · q_{k,l}(∂)) du.   (25)
Next, we replace in (25) the terms ∂ · p_{k,l}(∂) and ∂ · q_{k,l}(∂) by the corresponding expressions from (15) to obtain
ω_{k,l}^∆ = (p_{k,l−1}(∂) − r_{k,l}) dy + (q_{k,l−1}(∂) − ρ_{k,l}) du
and use the definition of ω_{k,l−1}, given by (14), to get
ω_{k,l}^∆ = ω_{k,l−1} − r_{k,l} dy − ρ_{k,l} du.
Theorem 3. For the input-output model (1), the subspaces H_k can be calculated as
H_k = span_K{ω_{k,l}, du, . . . , du^[s−k+1]}   (26)
for k = 1, . . . , s + 1 and
H_{s+2} = span_K{ω_{s+2,l}},   (27)
where the ω_{k,l}, l = 1, . . . , n, are defined by (14).
Proof. The proof is by induction. We first show that formula (26) holds for k = 1. From (14) it follows that
p_{1,l}(∂) = ∂^{n−l} and q_{1,l}(∂) = 0 for l = 1, . . . , n. Consequently, ω_{1,l} = ∂^{n−l} dy = dy^[n−l] and H_1 = span_K{dy, . . . , dy^[n−1], du, . . . , du^[s]}, which agrees with the definition of H_1 in (9). Assume now that formula (26) holds for k and prove it to be valid for k + 1. The proof is based on the definition of the subspaces H_k. We have to show that H_{k+1} = span_K{ω_{k+1,l}, du, . . . , du^[s−k]} in (26) satisfies condition (9). First, we show that the basis one-forms ω_{k+1,l}, du, . . . , du^[s−k] are in H_k. It is obvious that du, . . . , du^[s−k] ∈ H_k. Lemma 3 represents the one-forms ω_{k+1,l} as a linear combination of the vectors ω_{k,l}, dy, . . . , dy^[n−k−l], du, . . . , du^[s−k−l+1]. It is easy to see that ω_{k,l}, du, . . . , du^[s−k−l+1] are in H_k by the definition of the subspaces (9). Though dy, . . . , dy^[n−k−l] are not listed explicitly among the basis vectors of H_k in (26), they can be expressed as a linear combination of the other basis vectors. From (15) and (16) it follows that the leading coefficients of the polynomials p_{k,l}(∂) are always 1, and deg p_{k,l}(∂) = n − l for l = 1, . . . , n − 1 and deg q_{k,l}(∂) = s − l for l = 1, . . . , s. This means that p_{k,l}(∂) and q_{k,l}(∂) have the form
p_{k,l}(∂) = ∂^{n−l} − Σ_{j=0}^{n−l−1} p_{k,l,j} ∂^j,  q_{k,l}(∂) = −Σ_{j=0}^{s−l} q_{k,l,j} ∂^j.
For l = n we get p_{k,n}(∂) = 1 and q_{k,n}(∂) = 0. Consequently, ω_{k,n} = dy. The rest of the differentials dy^∆, . . . , dy^[n−k] can be recursively computed from (15) as
dy^[l] = ω_{k,n−l} + Σ_{j=0}^{l−1} p_{k,n−l,j} dy^[j] + Σ_{j=0}^{s−n+l} q_{k,n−l,j} du^[j]
for l = 1, . . . , n − k.
Second, we have to show that the delta derivatives of the one-forms, computed according to (26) and defined by (14), also belong to H_k. Again, it is obvious that du^∆, . . . , du^[s−k+1] ∈ H_k. We have to prove that ω_{k+1,l}^∆ ∈ H_k. Note that according to Lemma 4,
ω_{k+1,l}^∆ = ω_{k+1,l−1} − r_{k+1,l} dy − ρ_{k+1,l} du,
where deg r_{k+1,l} = deg ρ_{k+1,l} = 0 for l = 2, . . . , n. It was proved in the previous step that the ω_{k+1,l}, l = 2, . . . , n, and dy are in H_k. Therefore, only for l = 1 do we have to show separately that ω_{k+1,1}^∆ ∈ H_k. From (25) we have
ω_{k+1,1}^∆ = (∂ · p_{k+1,1}(∂)) dy + (∂ · q_{k+1,1}(∂)) du.
Increasing k by 1 and taking l = 1 in (15) allows us to express ∂ · p_{k+1,1}(∂) and ∂ · q_{k+1,1}(∂) and substitute them into the previous equality:
ω_{k+1,1}^∆ = (p_{k+1,0}(∂) − r_{k+1,1}) dy + (q_{k+1,0}(∂) − ρ_{k+1,1}) du.
Replacing in the above equality the initial polynomials p_{k+1,0}(∂) and q_{k+1,0}(∂) by their expressions (16) and using the relations ∂^i dy = dy^[i] for i = n − k, . . . , n and ∂^j du = du^[j] for j = s − k + 1, . . . , s, we obtain
ω_{k+1,1}^∆ = dy^[n] − Σ_{i=n−k}^{n−1} p_i dy^[i] − Σ_{j=s−k+1}^{s} q_j du^[j] − r_{k+1,1} dy − ρ_{k+1,1} du.
Finally, replacing dy^[n] in the above equality by the right-hand side of (12), we get
ω_{k+1,1}^∆ = Σ_{i=0}^{n−k−1} p_i dy^[i] + Σ_{j=0}^{s−k} q_j du^[j] − r_{k+1,1} dy − ρ_{k+1,1} du.
The latter means that the one-form ω_{k+1,1}^∆ can be expressed as a linear combination of the basis vectors of H_k. This completes the proof.
The differentials of the state coordinates can be found from the subspace H_{s+2}, see Theorem 2. Though in the case of a realizable i/o equation H_{s+2}, defined by (27), is completely integrable, the one-forms ω_{s+2,l}, l = 1, . . . , n, are not necessarily exact. Therefore, one has to find for H_{s+2} a new integrable basis, using linear transformations. From Theorem 3 the next corollary can be concluded².
Corollary 1. For a realizable i/o equation (1), the differentials of the state coordinates can be calculated as the integrable linear combinations of the one-forms
ω_l = p_l(∂) dy + q_l(∂) du, l = 1, . . . , n,   (28)
where p_l(∂) and q_l(∂) can be computed iteratively as
p_{l−1}(∂) = ∂ · p_l(∂) + r_l,  q_{l−1}(∂) = ∂ · q_l(∂) + ρ_l,
with the initial polynomials defined by p_0(∂) := p(∂), q_0(∂) := q(∂).
In the case T = R (the continuous-time case) the i/o delta-differential equation (1) turns into an ordinary differential equation
y^(n) = φ(y, . . . , y^(n−1), u, . . . , u^(s))   (29)
and Theorem 3 yields the following corollary.
Corollary 2. For the i/o model (29) the subspaces H_k can be calculated as H_k = span_K{ω_{k,l}, du, . . . , du^(s−k+1)} for k = 1, . . . , s + 1 and H_{s+2} = span_K{ω_{s+2,l}}, where the ω_{k,l}, l = 1, . . . , n, are defined by (14), in which ∂ has to be interpreted as d/dt, and the commutation rule (11) is ∂ · a = a∂ + ȧ.
Note that the results of Corollary 2 were proved in Tõnso and Kotta (2009). Moreover, note that the results for the discrete-time case (T = TZ for T > 0), with ∂ interpreted as the difference operator, see Remark 1 item (ii), are new, since Kotta and Tõnso (2008) cover the discrete-time shift-operator-based case.
Example 1: Consider the model of the controlled van der Pol oscillator in the form of the i/o DDE
y^[2] = (ξ_1 − ξ_2 y^2) y^∆ − ξ_3 y + ξ_4 u,   (30)
where ξ_i ∈ R. Usually, system (30) is considered separately for the continuous- (T = R, Remark 1 item (i)) and discrete-time
² The first index s + 2 is omitted in Corollary 1.
(T = Z, Remark 1 item (ii)) cases, see for example Anderson and Kadirkamanathan (2007). Here, however, we consider the time scale based case which accommodates both the continuous- and discrete-time models. System (30) can be described by the two polynomials p(∂) = ∂^2 − (ξ_1 − ξ_2 y^2)∂ + 2ξ_2 y y^∆ + ξ_3 and q(∂) = −ξ_4. Note that n = 2 and s = 0. To find the state coordinates one has to, according to Corollary 1, compute the one-forms ω_1 and ω_2, defined by (28). From the equalities p_0(∂) := p(∂) = ∂ · p_1(∂) + r_1 and q_0(∂) := q(∂) = ∂ · q_1(∂) + ρ_1 one finds that p_1(∂) = ∂ − ξ_1 + ξ_2 (y^2)^ρ and q_1(∂) = 0, respectively. Furthermore, from the equality p_1(∂) = ∂ · p_2(∂) + r_2 one finds that p_2(∂) = 1. Finally, the one-forms that define the differentials of the state coordinates can be computed, according to (28), as
ω_1 = p_1(∂) dy + q_1(∂) du = (∂ − [ξ_1 − ξ_2 (y^2)^ρ]) dy,
ω_2 = p_2(∂) dy = dy.
Thus H_3 = span_K{ω_1, ω_2} = span_K{dy, dy^∆} and, integrating the basis elements, the state coordinates are x_1 = y and x_2 = y^∆, yielding the state equations
x_1^∆ = x_2,
x_2^∆ = (ξ_1 − ξ_2 x_1^2) x_2 − ξ_3 x_1 + ξ_4 u,
y = x_1.
Example 2: Consider the control system in the form of the i/o DDE
y^[2] = u y^∆ − µ u^∆ + u y^2   (31)
that can be described by the two polynomials p(∂) = ∂^2 − u∂ − 2uy and q(∂) = µ∂ − y^∆ − y^2. Note that n = 2 and s = 1. To find the state coordinates one has to, according to Corollary 1, compute the one-forms ω_1, ω_2, defined by (28). From the equalities p_0(∂) := p(∂) = ∂ · p_1(∂) + r_1 and q_0(∂) := q(∂) = ∂ · q_1(∂) + ρ_1 one finds that p_1(∂) = ∂ − u^ρ and q_1(∂) = µ, respectively. Furthermore, from the equalities p_1(∂) = ∂ · p_2(∂) + r_2, q_1(∂) = ∂ · q_2(∂) + ρ_2 one finds that p_2(∂) = 1, q_2(∂) = 0. Finally, the one-forms that define the differentials of the state coordinates can be computed, according to (28), as
ω_1 = p_1(∂) dy + q_1(∂) du = (∂ − u^ρ) dy + µ du,
ω_2 = p_2(∂) dy + q_2(∂) du = dy.
Thus H_3 = span_K{ω_1, ω_2} = span_K{dy^∆ + µ du, dy} = span_K{dy, d(y^∆ + µu)} and, integrating the basis elements, the state coordinates are x_1 = y and x_2 = y^∆ + µu, yielding the state equations
x_1^∆ = x_2 − µu,
x_2^∆ = (x_1^2 + x_2 − µu) u,
y = x_1.

6. CONCLUSION

Theorem 3 provides an alternative, polynomial method for computing the basis vectors of the subspaces H_k, k = 1, . . . , s + 2. Note that the polynomial method has advantages in computer implementation. First, it is direct, meaning that there is no need to compute step-by-step all the H_k's in order to find H_{s+2}. Second, its program code is shorter and more compact. The algebraic method requires solving a pseudolinear system of equations, which is linear with respect to the unknowns, but whose coefficients are nonlinear functions and not real numbers. If
the expressions found in the previous steps have not been sufficiently simplified, there is a chance that Mathematica may be unable to solve the pseudolinear system of equations and the computation fails. Meanwhile, the polynomial method does not require solving any system of equations. The algebraic method requires inserting additional simplification commands into the code, while the polynomial method is able to produce the result without any intermediate simplification.

REFERENCES

S.R. Anderson and V. Kadirkamanathan. Modelling and identification of nonlinear deterministic systems in delta-domain. Automatica, 43:1859–1868, 2007.
Z. Bartosiewicz and E. Pawłuszewicz. Realizations of linear control systems on time scales. Control and Cybernetics, 35(4):769–786, 2006.
Z. Bartosiewicz and E. Pawłuszewicz. Realizations of nonlinear control systems on time scales. IEEE Transactions on Automatic Control, 53(2):571–575, 2008.
Z. Bartosiewicz, Ü. Kotta, E. Pawłuszewicz, and M. Wyrwas. Algebraic formalism of differential one-forms for nonlinear control systems on time scales. Proceedings of the Estonian Academy of Sciences, 56(3):264–282, 2007.
M. Bohner and A. Peterson. Dynamic Equations on Time Scales. Birkhäuser, Boston, USA, 2001.
M. Bronstein and M. Petkovšek. An introduction to pseudo-linear algebra. Theoretical Computer Science, 157(1):3–33, 1996.
D. Casagrande, Ü. Kotta, M. Tõnso, and M. Wyrwas. Transfer equivalence and realization of nonlinear input-output delta-differential equations on homogeneous time scales. IEEE Transactions on Automatic Control, 55(11):2601–2606, 2010.
Y. Choquet-Bruhat, C. DeWitt-Morette, and M. Dillard-Bleick. Analysis, Manifolds and Physics, Part I: Basics. North-Holland, Amsterdam, The Netherlands, 1982.
R.M. Cohn. Difference Algebra. Wiley-Interscience, New York, USA, 1965.
Ü. Kotta and M. Tõnso. Realization of discrete-time nonlinear input-output equations: polynomial approach. In The 7th World Congress on Intelligent Control and Automation, pages 529–534, Chongqing, China, June 2008.
Ü. Kotta, Z. Bartosiewicz, E. Pawłuszewicz, and M. Wyrwas. Irreducibility, reduction and transfer equivalence of nonlinear input-output equations on homogeneous time scales. Systems and Control Letters, 58(9):646–651, 2009.
Ü. Kotta, B. Rehak, and M. Wyrwas. On submersivity assumption for nonlinear control systems on homogeneous time scales. Proceedings of the Estonian Academy of Sciences, 60(1):25–37, 2011.
J.C. McConnell and J.C. Robson. Noncommutative Noetherian Rings. John Wiley and Sons, New York, USA, 1987.
R.H. Middleton and G.C. Goodwin. Digital Control and Estimation: A Unified Approach. Prentice Hall, New Jersey, USA, 1990.
P. Rapisarda and J.C. Willems. State maps for linear systems. SIAM Journal on Control and Optimization, 35(3):1053–1091, 1997.
M. Tõnso and Ü. Kotta. Realization of continuous-time nonlinear input-output equations: polynomial approach. In Computer Aided Systems Theory - EUROCAST 2009, Lecture Notes in Control and Information Sciences, pages 633–640. Springer, Berlin/Heidelberg, 2009.