MATRIX POLYNOMIAL EQUATIONS IN CONTROL THEORY

R. Rutman* and Y. Shamash**

*Department of Electrical Engineering, Southeastern Massachusetts University, North Dartmouth, Massachusetts 02747
**Department of Electronics, Tel-Aviv University, Tel-Aviv, Israel

*This work was partly done when the author was with Tel-Aviv University, Tel-Aviv, Israel, and on leave with the University of Toronto, Toronto, Canada.

Abstract. The linear combination of polynomial matrices is considered as an entity, the matrix polynomial equation. Necessary and sufficient conditions for the existence of a solution are considered, and the general solution is studied. It is shown that a number of computational algorithms, such as the Euclidean algorithm, the partial fractions problem, and the spectral factorization problem, may be considered using the concept of matrix polynomial equations; as a result, methods of solution are derived that are computationally simpler than existing methods. Further, the concept and methods of polynomial equations are used to analyze some problems in multivariable control, such as the system invariants under state feedback, conditions for exact model matching, and the problem of decoupling.
1. MATRIX POLYNOMIAL EQUATIONS

Consider the linear combination

    Σ_i A_ij(s) X_i(s) B_ij(s) = C_j(s)                    (1.1)
where A_ij(s), X_i(s) and B_ij(s) are matrix polynomials (Gantmakher, 1959), i.e. polynomials in the complex variable s whose coefficients are matrices with entries from the complex field; i and j are positive integers; the polynomial matrices involved are of compatible dimensions. A matrix polynomial can also be considered as a polynomial matrix. A polynomial in s with coefficients from the field is considered as a particular case of the polynomial matrix; a small Roman letter will normally (but not necessarily) be reserved for such a polynomial. If the polynomial matrices A_ij(s), B_ij(s) and C_j(s) are given, (1.1) can be considered as a matrix polynomial equation for X_i(s) (this equation is a special case of linear equations over non-commutative rings). Clearly, (1.1) is an underdetermined equation.
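For concreteness, a polynomial matrix can be represented in code as a list of constant coefficient matrices. The following sketch (ours, not from the paper; all names and values are illustrative) stores A(s) = Σ_k A_k s^k that way and evaluates one product term A(s)X(s)B(s) of the linear combination (1.1) by coefficient convolution:

```python
import numpy as np

def poly_mat_mul(A, X):
    """Multiply two polynomial matrices given as lists of coefficient
    matrices: (A*X)_k = sum over i+j=k of A_i @ X_j."""
    dA, dX = len(A) - 1, len(X) - 1
    rows, cols = A[0].shape[0], X[0].shape[1]
    C = [np.zeros((rows, cols)) for _ in range(dA + dX + 1)]
    for i, Ai in enumerate(A):
        for j, Xj in enumerate(X):
            C[i + j] = C[i + j] + Ai @ Xj
    return C

# A(s) = A0 + A1*s, X(s) = X0 + X1*s, B(s) = B0 (constant), all 2x2:
A = [np.eye(2), np.array([[1.0, 0.0], [2.0, 1.0]])]
X = [np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2)]
B = [np.array([[1.0, 1.0], [0.0, 1.0]])]

# One term A(s)X(s)B(s) as in (1.1):
term = poly_mat_mul(poly_mat_mul(A, X), B)
for k, Tk in enumerate(term):
    print("coefficient of s^%d:" % k)
    print(Tk)
```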
Two special cases of (1.1) are to be considered here:

    A(s)X(s) + Y(s)B(s) = C(s)                             (1.2)

or

    A(s)X(s) + B(s)Y(s) = C(s)                             (1.3)

Specifically, we will consider the existence of a solution, the general solution and the minimal solution.

Existence of Solution

The conditions for the existence of a solution to equation (1.2) are given by the following two theorems.

Theorem 1 (Roth, 1952). A necessary and sufficient condition that there exists a solution [X(s), Y(s)] to equation (1.2) is that there exist polynomial matrices P(s) and Q(s) such that det P(s) and det Q(s) do not depend on s and are non-zero, and

    P(s) [ A(s)  C(s) ] Q(s) = [ A(s)   0   ]
         [  0    B(s) ]        [  0    B(s) ]

Definition 1. The degree of a polynomial matrix, δA(s), is equal to the highest power of s in any of the entries of A(s).

Theorem 2 (Barnett, 1971). For square A(s), B(s) and C(s) with δC(s) ≤ δA(s) + δB(s) - 2, there exists a unique solution to equation (1.2) such that δX(s) < δB(s), δY(s) < δA(s), if and only if det[A(s)] and det[B(s)] are relatively prime.

The existence of a solution for equation (1.3) may be shown in the following way.
Definition 2. Let R[A(s), B(s)] ≜ the left common divisor of A(s) and B(s) of the highest degree, and define a reference polynomial equation:
    A(s)X(s) + B(s)Y(s) = R[A(s), B(s)]                    (1.4)
Lemma 1. There exists a solution to the reference equation (1.4).

Proof. The lemma is proved by constructing a solution using the Euclidean algorithm. Thus, taking B(s) = R_0(s), it follows that
    A(s)       = R_0(s)Q_1(s) + R_1(s)
    R_0(s)     = R_1(s)Q_2(s) + R_2(s)
    R_1(s)     = R_2(s)Q_3(s) + R_3(s)
        ...
    R_{v-2}(s) = R_{v-1}(s)Q_v(s) + R_v(s)                 (1.5)

and

    R_v(s) ≜ R[A(s), B(s)].

Let X_1(s) ≜ I, Y_1(s) ≜ -Q_v(s). Therefore

    R_{v-2}(s)X_1(s) + R_{v-1}(s)Y_1(s) = R[A(s), B(s)]    (1.6)

But from (1.5) we have

    R_{v-1}(s) = R_{v-3}(s) - R_{v-2}(s)Q_{v-1}(s)

so that

    R_{v-3}(s)[-Q_v(s)] + R_{v-2}(s)[I + Q_{v-1}(s)Q_v(s)] = R[A(s), B(s)]

Setting X_2(s) ≜ -Q_v(s), Y_2(s) ≜ I + Q_{v-1}(s)Q_v(s) and continuing as before, we get

    A(s)X(s) + B(s)Y(s) = R[A(s), B(s)].

Thus a solution to (1.4) has been shown to exist by construction.

Theorem 3. There exists a solution to (1.3) if and only if

    C(s) = R[A(s), B(s)] C_0(s)                            (1.7)

where C_0(s) is any polynomial matrix of compatible dimensions.

Proof. The proof easily follows from Lemma 1, since multiplying the solution to the reference equation from the right by C_0(s) proves sufficiency, while necessity is proved by contradiction.

General Solution

In this section the structure of the general solution to equations (1.2) and (1.3) is studied. We will first consider the solution to equation (1.2).

Theorem 4. A solution to the homogeneous equation

    A(s)X(s) + Y(s)B(s) = 0                                (1.8)

is

    X(s) = Θ(s)B(s) + A¹(s)Π_1(s)                          (1.9)
    Y(s) = -A(s)Θ(s) + Π_2(s)B¹(s)                         (1.10)

where A¹(s) and B¹(s) are particular solutions to the equations

    A(s)X(s) = 0;    Y(s)B(s) = 0                          (1.11)

and Θ(s), Π_1(s) and Π_2(s) are arbitrary matrix polynomials.

Proof. Follows from substituting (1.9) and (1.10) into (1.8).

Corollary. X(s) and Y(s) of Theorem 4 can also be expressed in the forms

    X(s) = Θ(s)B(s) + (I - A^R(s)A(s))Π_1(s)
    Y(s) = -A(s)Θ(s) + Π_2(s)(B^L(s)B(s) - I)

where A^R(s) and B^L(s) are particular solutions to the non-homogeneous equations A(s)X(s) = I; Y(s)B(s) = I.

Theorem 5. The general solution to the non-homogeneous equation (1.3) is the sum of the general solution to the homogeneous equation and a particular solution to (1.3).

Proof. By linearity of equation (1.3).

The structure of the solution for equation (1.3) is clarified by the following theorem.

Theorem 6. If the pair (X_r(s), Y_r(s)) is a solution for the reference equation (1.4), then

    X(s) = X_r(s)C_0(s) + (I - A^R(s)A(s))Π_1(s)
    Y(s) = Y_r(s)C_0(s) + (I - B^R(s)B(s))Π_2(s)

is a solution for equation (1.3), where Π_1(s) and Π_2(s) are arbitrary, C_0(s) is as defined in Theorem 3, and A^R(s) and B^R(s) are solutions for the equations A(s)X(s) = I and B(s)Y(s) = I.

Proof. By substitution.
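To make the construction in the proof of Lemma 1 concrete, here is a minimal sketch (ours, not the paper's; scalar polynomials only, using numpy's ascending-coefficient convention) of the Euclidean recursion (1.5) with the back-substitution that yields a solution pair:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def bezout(a, b, tol=1e-9):
    """Extended Euclidean algorithm for scalar polynomials given as
    ascending coefficient arrays: returns (x, y, r) with a*x + b*y = r,
    where r is an (unnormalized) greatest common divisor."""
    # Invariants maintained: r0 = a*x0 + b*y0 and r1 = a*x1 + b*y1.
    r0, r1 = np.atleast_1d(a).astype(float), np.atleast_1d(b).astype(float)
    x0, y0 = np.array([1.0]), np.array([0.0])
    x1, y1 = np.array([0.0]), np.array([1.0])
    while r1.size and np.max(np.abs(r1)) > tol:
        q, rem = P.polydiv(r0, r1)            # r0 = r1*q + rem, as in (1.5)
        rem = np.trim_zeros(rem, 'b')
        x0, x1 = x1, P.polysub(x0, P.polymul(q, x1))
        y0, y1 = y1, P.polysub(y0, P.polymul(q, y1))
        r0, r1 = r1, rem
    return x0, y0, r0

# a(s) = (s+1)(s+2), b(s) = (s+1)(s+3); the gcd is proportional to s+1.
a = P.polymul([1, 1], [2, 1])
b = P.polymul([1, 1], [3, 1])
x, y, r = bezout(a, b)
print("gcd coefficients (ascending):", r)
print("check a*x + b*y:", P.polyadd(P.polymul(a, x), P.polymul(b, y)))
```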
2. COMPUTATIONAL ALGORITHMS
In this section, a number of computational problems are formulated in terms of polynomial equations, which simplifies the computational requirements of the corresponding algorithms. An essential aspect is the determination of a minimal solution to equation (1.3).
The Minimal Solution to the Polynomial Equation
Consider the case when C(s) in (1.2) or (1.3) is a polynomial which is not necessarily a common factor of A(s) and B(s). The solution pair [X(s), Y(s)], if it exists, is not unique. Let [X_0(s), Y_0(s)] be the minimal solution, i.e. the solution where X_0(s) and Y_0(s) are of least degree. The minimal solution must satisfy the following constraints:

    δ{X_0(s)} ≤ δ{B(s)} - 1 = m - 1                        (2.1)
    δ{Y_0(s)} ≤ δ{A(s)} - 1 = n - 1                        (2.2)

where n = δ{A(s)}, m = δ{B(s)} and δ{C(s)} ≤ m + n - 1 (Volgin, 1962); a method for finding the minimal solution was suggested there. Another method was suggested by Shamash (1976) which, assuming m > n, requires the inversion of an (n × n) matrix only.
The method is based on rewriting (1.2) or (1.3) in the form

    X(s) - C(s)/A(s) = -[B(s)/A(s)]Y(s)

or

    Σ_{i=0}^{m-1} δ_i s^i - Σ_{i=m}^{m+n-1} u_i s^i = -[ Σ_{i=0}^{n-1} y_i s^i ][ B(s) / Σ_{i=0}^{n} a_i s^i ]      (2.3)

where the u_i are the coefficients of the expansion of C(s)/A(s) and δ_i = x_i - u_i, i = 0, 1, ..., m - 1. Cross multiplying and equating coefficients in (2.3), as far as the term in s^{m+n-1}, gives a set of (m + n) simultaneous equations, the last n of which depend on the y_i's only. These are solved first, and the computed values of the y_i's are then substituted in the first m equations to give the x_i's.
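A straightforward (if less economical than the paper's scheme, which inverts only an n × n matrix) way to obtain the minimal solution in the scalar case is to solve the full (m+n) × (m+n) coefficient system directly. A sketch of ours, under the degree assumptions (2.1)-(2.2):

```python
import numpy as np

def minimal_solution(a, b, c):
    """Minimal-degree solution of a(s)x(s) + b(s)y(s) = c(s) for scalar
    polynomials (ascending coefficient arrays), with deg x <= m-1 and
    deg y <= n-1, where n = deg a, m = deg b, deg c <= m+n-1."""
    n, m = len(a) - 1, len(b) - 1
    S = np.zeros((m + n, m + n))
    for j in range(m):                  # columns multiplying x-coefficients
        S[j:j + n + 1, j] = a
    for j in range(n):                  # columns multiplying y-coefficients
        S[j:j + m + 1, m + j] = b
    rhs = np.zeros(m + n)
    rhs[:len(c)] = c
    sol = np.linalg.solve(S, rhs)       # nonsingular iff a, b are coprime
    return sol[:m], sol[m:]             # x-coefficients, y-coefficients

# a(s) = s^2 + 3s + 2, b(s) = s + 3 (coprime), c(s) = 1:
x, y = minimal_solution([2., 3., 1.], [3., 1.], [1.])
print(x, y)   # x = [0.5], y = [0., -0.5]: 0.5*a(s) - 0.5*s*b(s) = 1
```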
Expansion Into Partial Fractions

Let the expression G(s) to be expanded into partial fractions be given by

    G(s) = p(s)/q(s)                                       (2.4)

where

    q(s) = q_1(s)q_2(s).                                   (2.5)

The problem is to find p_1(s) and p_2(s) such that

    p(s)/q(s) = p_1(s)/q_1(s) + p_2(s)/q_2(s)              (2.6)

There is no loss of generality in assuming δ{p(s)} < δ{q(s)}. Then (2.6) may be rewritten in the form

    q_2(s)p_1(s) + q_1(s)p_2(s) = p(s)                     (2.7)

which is exactly the same form as (1.2) or (1.3) [since for the scalar case, equations (1.2) and (1.3) are equivalent]. Thus the algorithm outlined above may be used to compute p_1(s) and p_2(s).
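Equation (2.7) is linear in the unknown coefficients of p_1(s) and p_2(s). A minimal sketch of ours (coprime q_1, q_2 assumed, with deg p_1 < deg q_1 and deg p_2 < deg q_2) solves it through a Sylvester-type coefficient matrix:

```python
import numpy as np

def conv_matrix(a, cols):
    """Matrix M such that M @ x gives the coefficients of a(s)*x(s)
    for any x(s) with deg x < cols (ascending coefficients)."""
    M = np.zeros((len(a) + cols - 1, cols))
    for j in range(cols):
        M[j:j + len(a), j] = a
    return M

def partial_fractions(p, q1, q2):
    """Solve q2*p1 + q1*p2 = p, equation (2.7), for coprime q1, q2."""
    n1, n2 = len(q1) - 1, len(q2) - 1
    S = np.hstack([conv_matrix(q2, n1), conv_matrix(q1, n2)])
    rhs = np.zeros(S.shape[0])
    rhs[:len(p)] = p                    # deg p < n1 + n2 assumed
    x = np.linalg.solve(S, rhs)
    return x[:n1], x[n1:]               # coefficients of p1, p2

# 1/((s+1)(s+2)) = 1/(s+1) - 1/(s+2):
p1, p2 = partial_fractions([1.0], [1.0, 1.0], [2.0, 1.0])
print(p1, p2)   # expect p1 = [1.0], p2 = [-1.0]
```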
The problem of computing the greatest common divisor of two polynomials was also formulated in terms of polynomial equations (Shamash, 1976), and a computational algorithm was given which is computationally superior to other existing methods. Feinstein and Shamash (1977) considered the problem of spectral factorization of rational matrices; again using the concept of polynomial equations, a solution was suggested which is computationally simple and conceptually easy.
3. APPLICATIONS TO CONTROL PROBLEMS

Consider the system

    ẋ = Ax + Bu
    y = Cx                                                 (3.1)

where x ∈ R^n, y ∈ R^m and u ∈ R^m; A, B, and C are constant matrices of compatible dimensions. The matrix transfer function of (3.1) is given by

    G(s) = C(sI_n - A)^{-1}B = P(s)/q(s)                   (3.2)

where P(s) is an (m × m) polynomial matrix and q(s) is a monic polynomial of degree n:

    q(s) = det(sI_n - A).                                  (3.3)

The linear state-variable feedback (l.s.v.f.) control law is

    u = Fx + Kw                                            (3.4)

where w ∈ R^m, and F and K are matrices of compatible dimensions, with K assumed non-singular to ensure the linear independence of the m external inputs. The matrix transfer function of the closed-loop system is given by

    G_FK(s) = C(sI_n - A - BF)^{-1}BK.                     (3.5)

Using the equality

    [I_n - BF(sI_n - A)^{-1}]^{-1}B = B[I_m - F(sI_n - A)^{-1}B]^{-1}

we have

    G_FK(s) = G(s)[I_m - F(sI_n - A)^{-1}B]^{-1}K
            = P(s)[q(s)I_m - FJ(s)]^{-1}K                  (3.6)

where J(s) = adj(sI_n - A)B.
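The polynomial data q(s), P(s) and J(s) in (3.2)-(3.6) can be generated directly from (A, B, C). A sketch of ours (the Faddeev-LeVerrier recursion is one standard way to get adj(sI - A) and det(sI - A); it is not prescribed by the paper):

```python
import numpy as np

def faddeev_leverrier(A):
    """Return (c, N), where c holds the ascending coefficients of
    q(s) = det(sI - A) = s^n + c[n-1]s^{n-1} + ... + c[0], and N[k]
    is the coefficient of s^k in adj(sI - A)."""
    n = A.shape[0]
    c = np.zeros(n + 1)
    c[n] = 1.0
    N = [np.zeros((n, n)) for _ in range(n)]
    M = np.eye(n)
    for k in range(1, n + 1):
        N[n - k] = M                      # coefficient of s^{n-k}
        AM = A @ M
        c[n - k] = -np.trace(AM) / k
        M = AM + c[n - k] * np.eye(n)
    return c, N

# Illustrative single-input, single-output data:
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
c, N = faddeev_leverrier(A)
Pk = [C @ Nk @ B for Nk in N]   # coefficients of P(s) = C adj(sI-A) B
Jk = [Nk @ B for Nk in N]       # coefficients of J(s) = adj(sI-A) B
print("q(s) coefficients (ascending):", c)
```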
G_FK(s) may be written in the form

    G_FK(s) = U(s)/v(s)                                    (3.7)

where U(s) is an (m × m) polynomial matrix and v(s) is a scalar monic polynomial. Substituting in (3.6) leads to

    U(s)K^{-1}[q(s)I_m - FJ(s)] = v(s)P(s)                 (3.8)

i.e.

    U(s)Z(s) - v(s)P(s) = 0                                (3.9)

where Z(s) is given by

    Z(s) = K^{-1}[q(s)I_m - FJ(s)].                        (3.10)

Hence it follows that

    G_FK(s) = P(s)Z(s)^{-1}                                (3.11)

where Z(s) is invertible. It is easy to see that equations (3.9) and (3.10) are special cases of equation (1.2). In control problems we are interested in choosing F and K such that G_FK(s) has certain properties. For example, G_FK(s) may be required to be identical to a given matrix transfer function of a model system, i.e. "exact model matching". Another important problem is to choose F and K such that G_FK is decoupled. These problems, and the problem of feedback invariants, are discussed next.
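The identities (3.6) and (3.11) are easy to sanity-check numerically. A small sketch of ours (F, K and the evaluation point are arbitrary choices; faddeev_leverrier is the routine from the previous sketch and must be pasted above this snippet to run):

```python
import numpy as np

# System and feedback (illustrative values; m = 1 here for brevity).
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
F = np.array([[-1., -2.]])
K = np.array([[2.]])
n, m = 2, 1

c, N = faddeev_leverrier(A)              # from the previous sketch
s = 1.7 + 0.3j                           # arbitrary test point
q = sum(c[k] * s**k for k in range(n + 1))
P = sum((C @ N[k] @ B) * s**k for k in range(n))
J = sum((N[k] @ B) * s**k for k in range(n))

Z = np.linalg.solve(K, q * np.eye(m) - F @ J)                     # (3.10)
G_direct = C @ np.linalg.solve(s * np.eye(n) - A - B @ F, B @ K)  # (3.5)
print(np.allclose(G_direct, P @ np.linalg.inv(Z)))                # (3.11): True
```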
F-Invariants

In this section some properties of the system that are not affected by changes in F are discussed. Denote the ith rows of P(s) and U(s) by P_i(s) and U_i(s). Then from (3.9) we have

    U_i(s)Z(s) = v(s)P_i(s).                               (3.12)

We have

    δ{J(s)} ≤ δ{q(s)} - 1 = n - 1,    δ{P(s)} ≤ n - 1.

Since K is assumed non-singular, and using (3.10), we have

    δ{U_i(s)Z(s)} = δ{U_i(s)} + δ{Z(s)} = δ{U_i(s)} + δ{q(s)}

and using (3.12), we find the F-invariants of the system:

    ω_i ≜ δ{q(s)} - δ{P_i(s)} - 1 = n - 1 - δ{P_i(s)}.     (3.13)

Let

    D_i ≜ the coefficient of s^{n-1-ω_i} in P_i(s).        (3.14)

Hence, and by (3.12), it follows that

    D_i^{FK} = D_i K                                       (3.15)

thus D_i is F-invariant. Finally, by letting

    h(s) ≜ q(s) det G(s)   and   ψ_K(s) ≜ q_FK(s) det G_FK(s),

where q_FK(s) = det(sI_n - A - BF), it can be shown that

    ψ_K(s) = h(s) det K.                                   (3.16)

Thus by simply formulating the problem in terms of polynomial equations, we have easily derived the F-invariants of the system in equations (3.13), (3.15) and (3.16).
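These invariants are easy to probe numerically: ω_i and D_i computed from P(s) and q(s) should be unchanged, up to the factor K in (3.15), when recomputed for the closed loop. A rough sketch of ours (the row-degree extraction is the obvious one, not code from the paper; it assumes no row of P(s) is identically zero):

```python
import numpy as np

def row_invariants(P_coeffs, n, tol=1e-9):
    """P_coeffs[k] is the coefficient matrix of s^k in P(s).
    Returns (omega, D): omega[i] = n - 1 - deg P_i(s) as in (3.13),
    D[i] = leading coefficient row of P_i(s) as in (3.14)."""
    m = P_coeffs[0].shape[0]
    omega, D = [], []
    for i in range(m):
        deg = max(k for k in range(len(P_coeffs))
                  if np.max(np.abs(P_coeffs[k][i])) > tol)
        omega.append(n - 1 - deg)
        D.append(P_coeffs[deg][i])
    return omega, np.array(D)

# To test F-invariance: rebuild the coefficient matrices of P_FK(s)
# from (A + B@F, B@K, C) and compare; omega is identical and the
# leading rows satisfy D_FK = D @ K, equation (3.15).
```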
Exact Model Matching

The problem may be stated as follows: find a feedback pair {F, K} such that the transfer function matrix of the closed-loop system, namely C(sI - A - BF)^{-1}BK, is identical to the given transfer function matrix of a model system. It should be noted that the input u(t) may also appear linearly in the output equation of the system, and the following analysis can then be easily adjusted (Rutman and Shamash, 1974).

Consider equations (3.9) and (3.10), in which for this particular problem U(s) and v(s) are specified; also P(s) and q(s) are given. The problem is to find F and K such that (3.9) and (3.10) hold. From (3.8) it can be seen that K^{-1}[q(s)I_m - FJ(s)] is a polynomial matrix of degree n. Thus we have the condition that the system can be exactly matched to the desired transfer function only if

    δ{v(s)} - δ{U_i(s)} - 1 = ω_i,    i = 1, 2, ..., m.

Since q(s) and v(s) are monic polynomials, it follows from (3.9) and (3.10) that Z(s) is a polynomial matrix of degree n. Let

    Z(s) = Σ_{i=0}^{n} T_i s^i = T(s)                      (3.17)

where the T_i, i = 0, 1, ..., n, are constant matrices and det T_n ≠ 0, and let

    q(s) = Σ_{i=0}^{n} q_i s^i,    q_n = 1.                (3.18)

Substituting (3.17) and (3.18) into (3.9) and (3.10) leads to

    K^{-1} = T_n,  or  K = T_n^{-1}.                       (3.19)

Using this solution for K, equation (3.10) becomes

    T_n^{-1}Z(s) + FJ(s) = q(s)I_m                         (3.20)

or

    F [ Σ_{i=0}^{n-1} J_i s^i ] = q(s)I_m - T_n^{-1}Z(s).  (3.21)

Using the first n linearly independent columns of [J_0 J_1 ... J_{n-1}] and using (3.21), we compute F, which is the required solution.

Thus by formulating the problem in the form of polynomial equations we have determined the conditions that must be satisfied for G_FK(s) to be realized by l.s.v.f., and also the required F and K matrices are easily computed.
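Given the coefficient matrices T_i of Z(s) and J_i of J(s), equations (3.19) and (3.21) reduce to ordinary linear algebra. A schematic sketch of ours (it assumes Z(s) has already been found as a polynomial matrix and that the stacked J-coefficient matrix has full row rank):

```python
import numpy as np

def model_match_FK(T, J, qc):
    """Solve (3.19) and (3.21) by coefficient matching.
    T[i]  : coefficient of s^i in Z(s), i = 0..n (T[n] invertible)
    J[i]  : coefficient of s^i in J(s), i = 0..n-1
    qc[i] : coefficient of s^i in q(s), with qc[n] = 1."""
    n = len(J)
    m = T[0].shape[0]
    K = np.linalg.inv(T[n])                        # (3.19)
    # Right-hand side of (3.21), coefficient by coefficient
    # (the s^n terms cancel because q is monic and K = T_n^{-1}):
    R = [qc[i] * np.eye(m) - K @ T[i] for i in range(n)]
    Jstack = np.hstack(J)                          # n x (n*m)
    Rstack = np.hstack(R)                          # m x (n*m)
    # F @ Jstack = Rstack, solved in the least-squares sense:
    F = np.linalg.lstsq(Jstack.T, Rstack.T, rcond=None)[0].T
    return F, K
```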
Decoupling Using l.s.v.f.

The problem of decoupling may be stated as follows: given the system (3.1), find F and K such that G_FK(s), equation (3.7), will be diagonal and invertible (Wonham and Morse, 1970; Silverman and Payne, 1971). Let the desired closed-loop decoupled system be of the form

    G_FK(s) = diag{t_1(s), t_2(s), ..., t_m(s)}            (3.23)

where

    t_i(s) = u_i(s)/v_i(s)                                 (3.24)

and u_i(s) and v_i(s) are monic polynomials with scalar coefficients. Equation (3.9) may then be rewritten in the form

    Z(s) = [ (v_1(s)/u_1(s))P_1(s) ]
           [ (v_2(s)/u_2(s))P_2(s) ]
           [           ...         ]                       (3.25)
           [ (v_m(s)/u_m(s))P_m(s) ]

But Z(s) must reduce to a polynomial matrix of degree n, and hence u_i(s) must be a divisor of all the entries of P_i(s), i.e.

    P_i(s) = u_i(s)ξ_i(s),    i = 1, 2, ..., m.

Theorem 7 (Silverman and Payne, 1971). A necessary and sufficient condition for the system (3.1) to be decoupled by l.s.v.f. is that the matrix D defined by

    D ≜ [D_1^T  D_2^T  ...  D_m^T]^T                       (3.28)

is non-singular; the D_i are as defined in (3.14). It follows that

    K = D^{-1}Λ                                            (3.29)

while F is determined using (3.21) and (3.22).

Lemma 2. For the case where the system (3.1) is decouplable, h(s) defined by

    h(s) ≜ q(s) det G(s)

is a polynomial.

Lemma 3. The number of arbitrarily assigned poles in decoupling the system is given by

    N = Σ_{i=1}^{m} [ω_i + 1 + δ{u_i(s)}].                 (3.30)
The proofs of both lemmas may be found in the authors' works (Shamash and Feinstein, 1976; Rutman and Shamash, 1975). It should be noted that the above procedure has been extended to the problem of decoupling systems with an excess number of inputs.

Other Applications

Among other applications, the dynamic response evaluation by use of transfer functions should be mentioned, which necessitates the inversion of Laplace transforms. This often entails the application of partial fraction expansions. Numerical complexities encountered in partial-fraction expansion may stem from the presence of complex conjugate poles and/or pole multiplicities. In the case of single-variable systems, the problem can be effectively handled by using the method for expansion into partial fractions described above, where G(s) represents the transfer function in question. For multivariable systems, p(s), and thus p_1(s) and p_2(s), must be replaced by (k × ℓ) polynomial matrices. In this case, equation (2.7) is repeatedly solved for each entry of the (k × ℓ) polynomial matrix p(s).
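For the multivariable case just described, one simply loops the scalar routine over the entries. A sketch of ours (partial_fractions is the scalar solver of (2.7) from the earlier example and must be defined for this to run):

```python
def matrix_partial_fractions(P_entries, q1, q2):
    """Apply the scalar split p/(q1*q2) = p1/q1 + p2/q2, equation (2.7),
    entry by entry to a k x l polynomial matrix given as a nested list
    of ascending coefficient arrays."""
    P1, P2 = [], []
    for row in P_entries:
        r1, r2 = [], []
        for p in row:
            p1, p2 = partial_fractions(p, q1, q2)   # scalar solver of (2.7)
            r1.append(p1)
            r2.append(p2)
        P1.append(r1)
        P2.append(r2)
    return P1, P2
```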
4. COMMENTS AND CONCLUSIONS
A concept of matrix polynomial equations has been introduced, and its applications in computational algorithms and in some problems of control theory were discussed. Essentially two specific types of matrix polynomial equations have been considered.
The existence of solutions has been demonstrated in both cases, and the minimal solution has been defined.
For the case of polynomial equations with scalar coefficients, an efficient algorithm has been introduced which computes the minimal polynomial solutions.

It has been shown that a number of computational problems (e.g. partial fraction expansion, g.c.d., the Euclidean algorithm and spectral factorization) may be formulated as a set of polynomial equations that are then easily solved, with a great saving in computation.

The problems of exact model matching and of decoupling of linear time-invariant systems were formulated in terms of polynomial equations; hence constraints on the design were naturally imposed, and a solution to the problem found.

REFERENCES

Barnett, S. (1971). Matrices in Control Theory. Van Nostrand Reinhold, London.
Feinstein, J. and Shamash, Y. (1977). Spectral factorization using polynomial equations. IEEE Trans. on Information Theory, IT-23, 534-538.
Gantmakher, F.R. (1959). The Theory of Matrices, 2 vols. Chelsea Publ., New York.
Roth, W.E. (1952). The equations AX - YB = C and AX - XB = C in matrices. Proc. Amer. Math. Soc., 3, 392-396.
Rutman, R. and Shamash, Y. (1974). On exact model matching. Proc. 12th Allerton Conference, University of Illinois, Urbana, Ill., 450-455.
Rutman, R. and Shamash, Y. (1975). Design of multivariable systems via polynomial equations. Int. J. Control, 22, 729-737.
Shamash, Y. (1976). Partial fraction expansion via polynomial equations. IEEE Trans. on Circuits and Systems, CAS-23, 562-566.
Shamash, Y. and Feinstein, J. (1976). The decoupling of multivariable systems using matrix polynomial equations. Int. J. Systems Sci., 7, 759-768.
Silverman, L.M. and Payne, H.J. (1971). Input-output structure of linear systems with application to the decoupling problem. SIAM J. Control, 9, 199-233.
Volgin, L.N. (1962). Elements of the Theory of Computerized Control. Sovyetskoye Radio, Moscow.
Wonham, W.M. and Morse, A.S. (1970). Decoupling and pole assignment by dynamic compensation. SIAM J. Control, 8, 317-337.