Copyright © IFAC Software for Computer Control, Madrid, Spain 1982
COMPUTATION OF MATRIX-FRACTION DESCRIPTION FOR TRANSFER FUNCTION MATRICES

M. Alvarez and J. M. Sainz de Baranda

Cátedra de Matemáticas I, Escuela Técnica Superior de Ingenieros Industriales de Madrid, Spain
Abstract. A new algorithm for computing irreducible matrix-fraction descriptions of transfer function matrices is presented. In this communication we give a method for obtaining an irreducible right (left) factorization from an arbitrary left (right) factorization. The problem is to obtain a basis-matrix of the set of solutions of a homogeneous linear system of equations, which is a polynomial module M. The algorithm constructs a set of subspaces of M by elementary operations on a scalar matrix. From these subspaces one can write by simple inspection a basis-matrix for the module M which is in proper column form.

Keywords. Computational methods; minimal realization; polynomials; transfer functions.

INTRODUCTION

Let G(s) be a (p,m) strictly proper rational transfer function matrix whose elements are proper polynomial fractions with coefficients in a field K. An important problem in most areas of multivariable control systems design is to make factorizations of G(s) according to

G(s) = N1(s) D1(s)^-1 = D2(s)^-1 N2(s)          (1)

where N1(s), D1(s) are (p,m) and (m,m) relatively right prime polynomial matrices, and D2(s), N2(s) are (p,p) and (p,m) relatively left prime polynomial matrices. See Wolovich (1974) for general references on this problem.

It is easy to obtain factorizations of G(s) from the expression

G(s) = (1/d(s)) N(s)          (2)

where d(s) is the least common denominator of all the terms of G(s). In fact we can write from this equality

G(s) = N(s) (d(s) Im)^-1 = (d(s) Ip)^-1 N(s)          (3)

as right and left factorizations of G(s). In general N(s) and d(s)Im (or d(s)Ip and N(s)) are not relatively prime right (left) polynomial matrices. Then the important problem is the simplification of polynomial matrix fractions. This task can be performed by computing the highest right (left) common divisor of N(s) and d(s)Im (d(s)Ip and N(s)) and subsequent division. The standard method for extraction of the highest common divisor of polynomial matrices (MacDuffee, 1956) requires elementary row or column operations with polynomials, and is therefore difficult to implement in a computer. Several other algorithms (Bitmead, Kung and Anderson, 1978; Silverman and Van Dooren, 1981) have been arranged to avoid this problem, but the division by the h.c.d. cannot be omitted from these methods.

In this communication we give a method for obtaining an irreducible right (left) factorization from an arbitrary left (right) factorization without the computation of the h.c.d. This method is strongly inspired by the existing solutions for the minimal design problem (Wang and Davison, 1973; Forney, 1975). For an alternative approach to the computation of matrix-fraction descriptions, which is based on state space methods, see Patel (1981).
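For illustration, the construction (2)-(3) can be sketched in a few lines of SymPy; the rational matrix G used below is an arbitrary illustration, not an example taken from the paper.

    # SymPy sketch of (2)-(3): extract the least common denominator d(s)
    # and the polynomial matrix N(s), and form the trivial factorizations.
    # The matrix G below is an illustrative assumption.
    import functools
    import sympy as sp

    s = sp.symbols('s')
    G = sp.Matrix([[1/(s + 1), s/(s**2 + s + 1)],
                   [2/(s + 1), 1/(s**2 + s + 1)]])      # strictly proper (2,2)

    denominators = [sp.denom(sp.cancel(g)) for g in G]
    d = functools.reduce(sp.lcm, denominators)          # d(s), equation (2)
    N = sp.simplify(d * G)                              # polynomial numerator

    p, m = G.shape
    # equation (3):  G(s) = N(s) (d(s) Im)^-1 = (d(s) Ip)^-1 N(s)
    assert sp.simplify(N * (d * sp.eye(m)).inv() - G) == sp.zeros(p, m)
    assert sp.simplify((d * sp.eye(p)).inv() * N - G) == sp.zeros(p, m)
    print(sp.factor(d))                                 # (s + 1)*(s**2 + s + 1)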
THEORETICAL BACKGROUND

In this section we review some concepts which are well known in the current literature. See especially Forney (1975). Suppose that some arbitrary left factorization of a (p,m) transfer matrix is given,

G(s) = D(s)^-1 N(s)          (4)

and introduce the (p,p+m) polynomial matrix

P(s) = [ D(s)   -N(s) ]          (5)

Now, consider the homogeneous linear system of equations given by

P(s) z(s) = 0          (6)
From rank P(s) = p, which is obvious because D(s) is a nonsingular polynomial matrix, we can conclude that the set of solutions of system (6) is a submodule M of K^(p+m)[s] whose dimension is m. Let

Q(s)  =  [ V(s) ]
         [ U(s) ]          (7)

be a (p+m,m) polynomial matrix whose columns form a basis of M. It is easy to see that the matrices V(s) and U(s), respectively (p,m) and (m,m), are right coprime polynomial matrices. In fact, if we have the factorization

[ V(s) ]     [ V0(s) ]
[ U(s) ]  =  [ U0(s) ] R(s)          (8)

where R(s) is a non-unimodular square polynomial matrix of order m, the matrix Q(s) in (7) cannot be a basis-matrix of M. From the relation

D(s) V(s) - N(s) U(s) = 0          (9)

we can conclude that if U(s) is a nonsingular matrix then V(s) U(s)^-1 is an irreducible right factorization of G(s). So, the problem is to obtain a basis-matrix Q(s) in which U(s) meets this condition.
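A small SymPy check of relation (9) and of the resulting right factorization can be written as follows; D, N and the candidate basis-matrix Q below are illustrative assumptions, not data from the paper.

    # Check D(s)V(s) - N(s)U(s) = 0 and form the irreducible right MFD.
    import sympy as sp

    s = sp.symbols('s')
    p, m = 1, 2
    D = sp.Matrix([[s + 1]])                 # left factorization G = D^-1 N
    N = sp.Matrix([[1, 2]])

    Q = sp.Matrix([[0, 1],                   # Q = [V; U]; columns solve (6)
                   [2, s + 1],
                   [-1, 0]])
    V, U = Q[:p, :], Q[p:, :]

    assert sp.simplify(D * V - N * U) == sp.zeros(p, m)     # relation (9)
    G_right = sp.simplify(V * U.inv())                      # right factorization
    assert sp.simplify(G_right - D.inv() * N) == sp.zeros(p, m)
    print(G_right)                           # [1/(s + 1), 2/(s + 1)]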
Let

z(s)  =  [ y(s) ]
         [ u(s) ]          (10)

be a solution of equation (6). Then the pair of polynomial vectors y(s) and u(s) satisfies

y(s) = G(s) u(s)          (11)

and from the fact that G(s) is a strictly proper rational transfer function matrix we can write

deg(y(s)) < deg(u(s))          (12)

and also

deg(z(s)) = deg(u(s))          (13)

Then we have

[Q(s)]h  =  [    0    ]
            [ [U(s)]h ]          (14)

where [Q(s)]h and [U(s)]h are the scalar (p+m,m) and (m,m) matrices constituted with the highest degree coefficients of the column vectors of Q(s) and U(s). Then if Q(s) is built up as a "column proper" polynomial matrix (Wolovich, 1974) we have

det [U(s)]h ≠ 0          (15)

and U(s) will be a nonsingular polynomial matrix. In the following section we explain our algorithm to do this.
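The test (14)-(15) can be sketched as below: compute the highest-column-degree coefficient matrix [Q(s)]h and check that its lower block [U(s)]h is nonsingular. The helper name and the matrix Q are illustrative assumptions.

    import sympy as sp

    s = sp.symbols('s')

    def highest_coefficient_matrix(Q, var):
        """[Q(s)]h: entry (i, j) is the coefficient of var**dj in Q[i, j],
        where dj is the degree of the j-th column of Q."""
        rows, cols = Q.shape
        H = sp.zeros(rows, cols)
        for j in range(cols):
            dj = max(sp.degree(Q[i, j], var) for i in range(rows))
            for i in range(rows):
                H[i, j] = sp.Poly(Q[i, j], var).coeff_monomial(var**dj)
        return H

    p, m = 1, 2                              # Q = [V; U], V (p,m), U (m,m)
    Q = sp.Matrix([[0, 1],
                   [2, s + 1],
                   [-1, 0]])
    Qh = highest_coefficient_matrix(Q, s)    # upper p rows are zero, as in (14)
    Uh = Qh[p:, :]
    print(Qh, Uh.det() != 0)                 # det [U(s)]h != 0 => Q column proper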
THE ALGORITHM

We have here, in concise form, a method intended for building up a proper column basis-matrix for the submodule M of solutions of (6). For a more detailed exposition see Sainz de Baranda (1982).

For every nonnegative integer k introduce the set

Mk = { z(s) ∈ M : deg(z(s)) ≤ k }          (16)

Each Mk is a vector subspace of M, and it is clear that Mk-1 ⊂ Mk and also that if z(s) is in Mk-1 then s·z(s) is in Mk. Let Nk be the vector subspace of Mk composed of the elements of Mk-1 and their products by s, and let Lk be a supplementary subspace of Nk in Mk. Then, from a certain integer r on, we have

Lk = {0}    (k > r)          (17)

and if we assemble bases of L0, L1, ..., Lr we obtain a minimal basis for the module M.

The preceding scheme can be implemented algorithmically in the following fashion. Suppose that

P(s) = P0 s^n + P1 s^(n-1) + ... + Pn-1 s + Pn
z(s) = z0 s^k + z1 s^(k-1) + ... + zk-1 s + zk          (18)

Then if z(s) is a solution of (6) it is easy to see that the coefficients zi of z(s) satisfy the equations

[ P0   0    ...  0  ]   [ z0 ]
[ P1   P0   ...  0  ]   [ z1 ]
[ ..   ..        .. ]   [ .. ]
[ Pn   Pn-1 ...     ] . [ zk ]  =  0          (19)
[ 0    Pn   ...     ]
[ ..   ..        .. ]
[ 0    0    ...  Pn ]
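The scalar coefficient matrix of (19) is block Toeplitz (a generalized Sylvester matrix) and can be assembled as sketched below; the function name and the tiny data at the end are assumptions made for illustration only.

    import numpy as np

    def coefficient_matrix(P_coeffs, k):
        """P_coeffs = [P0, P1, ..., Pn], each of shape (p, p+m).
        Returns the scalar matrix of (19) acting on the stack [z0; ...; zk]."""
        n = len(P_coeffs) - 1
        p, q = P_coeffs[0].shape                 # q = p + m
        S = np.zeros(((n + k + 1) * p, (k + 1) * q))
        for j in range(k + 1):                   # block column j multiplies zj
            for i, Pi in enumerate(P_coeffs):    # Pi lands in block row i + j
                S[(i + j) * p:(i + j + 1) * p, j * q:(j + 1) * q] = Pi
        return S

    # p = 1, m = 1, n = 1:  P(s) = P0 s + P1 = [s + 1, -1]
    P0 = np.array([[1.0, 0.0]])
    P1 = np.array([[1.0, -1.0]])
    print(coefficient_matrix([P0, P1], k=1))     # the (3, 4) matrix of (19)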
In this way the determination of a basis for the subspace Mk is reduced to the determination of a basis for the kernel of a scalar matrix. A standard way to do this for a matrix S is to echelon, by elementary column operations, the matrix
[ S ]
[ I ]          (20)

where I is the identity matrix of appropriate order. If the final matrix is

[ S'  0 ]
[ X   Z ]          (21)

then the columns of Z form a basis-matrix for the kernel of S.
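The kernel computation (20)-(21) can be sketched with NumPy as follows; the routine is a plain column-echelon reduction of the bordered matrix and is only an illustration of the idea, not the authors' implementation.

    import numpy as np

    def kernel_by_column_echelon(S, tol=1e-10):
        rows, cols = S.shape
        A = np.vstack([S, np.eye(cols)])          # the matrix (20)
        pivot_row, pivot_col = 0, 0
        while pivot_row < rows and pivot_col < cols:
            # pick a pivot in the current row among the remaining columns
            j = pivot_col + np.argmax(np.abs(A[pivot_row, pivot_col:]))
            if abs(A[pivot_row, j]) < tol:
                pivot_row += 1                    # no pivot in this row
                continue
            A[:, [pivot_col, j]] = A[:, [j, pivot_col]]
            A[:, pivot_col] /= A[pivot_row, pivot_col]
            for c in range(cols):                 # clear the row elsewhere
                if c != pivot_col:
                    A[:, c] -= A[pivot_row, c] * A[:, pivot_col]
            pivot_row += 1
            pivot_col += 1
        Z = A[rows:, pivot_col:]                  # border under the zero columns
        return Z                                  # basis-matrix of ker S, as in (21)

    S = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])
    Z = kernel_by_column_echelon(S)
    print(np.allclose(S @ Z, 0), Z.shape)         # True (3, 2)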
A slight modification of this idea allows us to determine, in a recursive manner, the bases of the subspaces Lk introduced above. The steps of the algorithm are the following:

1. Take the matrix

[ S0 ]
[ I  ]          (22)

where S0 is the coefficient matrix of system (19) for k = 0, and effect the echelon reduction to obtain

[ S0'  0  ]
[ X0   Z0 ]          (23)

Then the columns of Z0 form a basis of L0.

2. Consider now the matrix (24), built in the same way from the coefficient matrix of system (19) for k = 1, in which the reduction already obtained in step 1 can be reused, and complete its reduction to echelon form to obtain (25). Then the columns of Z1 determine a basis-matrix of L1.

3. Effect a similar procedure with the matrix (26), corresponding to k = 2, and iterate this way to obtain a set of m columns. Then we successively attain basis-matrices of L2, L3, ..., Lr. From these bases one can write by simple inspection a basis-matrix Q(s) for the submodule M which is in proper column form.
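The complete search of steps 1-3 can be sketched as below. For simplicity this sketch recomputes the kernel of the matrix of (19) at each degree k (here with an SVD) instead of reusing the echelon reduction of the previous step, and then keeps only the kernel vectors that are new with respect to the shifts of the columns already selected; all names and the toy data are illustrative assumptions, not the authors' code.

    import numpy as np

    def block_sylvester(P_coeffs, k):
        n = len(P_coeffs) - 1
        p, q = P_coeffs[0].shape
        S = np.zeros(((n + k + 1) * p, (k + 1) * q))
        for j in range(k + 1):
            for i, Pi in enumerate(P_coeffs):
                S[(i + j) * p:(i + j + 1) * p, j * q:(j + 1) * q] = Pi
        return S

    def null_space(S, tol=1e-10):
        _, sv, vh = np.linalg.svd(S)
        rank = int(np.sum(sv > tol))
        return vh[rank:].T                        # orthonormal basis of ker S

    def minimal_basis(P_coeffs, m, max_deg=20, tol=1e-10):
        """Return m pairs (degree k, coefficient stack [z0; ...; zk])."""
        q = P_coeffs[0].shape[1]
        selected = []                             # columns picked so far
        for k in range(max_deg + 1):
            K = null_space(block_sylvester(P_coeffs, k), tol)
            # shifts s^j * b of the already-selected vectors, embedded in Mk
            shifts = []
            for d, b in selected:
                for j in range(k - d + 1):
                    v = np.zeros((k + 1) * q)
                    v[(k - d - j) * q:(k - d - j + d + 1) * q] = b
                    shifts.append(v)
            old = np.array(shifts).T if shifts else np.zeros(((k + 1) * q, 0))
            for col in K.T:                       # keep the genuinely new vectors
                trial = np.column_stack([old, col])
                if np.linalg.matrix_rank(trial, tol) > old.shape[1]:
                    selected.append((k, col))
                    old = trial
                if len(selected) == m:
                    return selected
        raise RuntimeError("degree bound reached")

    # toy data: P(s) = [s + 1, -1]  (p = 1, m = 1), i.e. G(s) = 1/(s + 1);
    # the minimal basis is spanned by [1; s + 1], found at degree k = 1
    P0 = np.array([[1.0, 0.0]])
    P1 = np.array([[1.0, -1.0]])
    print(minimal_basis([P0, P1], m=1))

Each selected pair gives one column of Q(s): the stack [z0; ...; zk] lists its matrix coefficients of s^k, ..., s^0.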
NUMERICAL EXAMPLE

Let G(s) be a (2,3) strictly proper transfer function matrix all of whose entries have the least common denominator

d(s) = s^2 + s + 1

so that, as in (3), G(s) = (d(s) I2)^-1 N(s) with N(s) a (2,3) polynomial matrix. We have in this case the (2,5) matrix

P(s) = [ d(s) I2   -N(s) ]

By applying the above algorithm we find that there are no vectors of zero degree, there are two linearly independent vectors of degree one and only one vector of degree two. The basis-matrix of solutions of (6) is in this case a (5,3) polynomial matrix Q(s) in proper column form, with two columns of degree one and one column of degree two, and from this expression we obtain, partitioning Q(s) as in (7),

G(s) = V(s) U(s)^-1

as a right-coprime factorization.
REFERENCES

Bitmead, R. R., S. Y. Kung and B. D. O. Anderson (1978). Greatest common divisor via generalized Sylvester and Bezout matrices. IEEE Trans. Autom. Control, 23, 1043-1047.
Forney, G. D. (1975). Minimal bases of rational vector spaces, with application to multivariable linear systems. SIAM J. Control, 13, 493-520.
MacDuffee, C. C. (1956). The Theory of Matrices. Chelsea, New York.
Patel, R. V. (1981). Computation of matrix fraction descriptions of linear time-invariant systems. IEEE Trans. Autom. Control, 26, 148-161.
Sainz de Baranda, J. M. (1982). Aspectos algebraicos de la teoría de realización de los sistemas dinámicos lineales. Ph.D. dissertation, Dep. Mathematics, E.T.S.I. Industriales de Madrid.
Silverman, L. M. and P. Van Dooren (1981). A system theoretic interpretation for G.C.D. extraction. IEEE Trans. Autom. Control, 26, 1273-1276.
Wang, S. H. and E. J. Davison (1973). A minimization algorithm for the design of linear multivariable systems. IEEE Trans. Autom. Control, 18, 220-225.
Wolovich, W. A. (1974). Linear Multivariable Systems. Springer-Verlag, New York.