Automatica, Vol. 23, No. 5, pp. 635-640, 1987
0005-1098/87 $3.00 + 0.00 Pergamon Journals Ltd. © 1987 International Federation of Automatic Control
Printed in Great Britain.

Brief Paper

Computation of Transfer Function Matrices of Linear Multivariable Systems*†

P. MISRA‡ and R. V. PATEL‡

Key Words—Numerical methods; transfer functions; control systems; computer-aided system design.

Abstract—This paper is concerned with the computation of transfer function matrices of linear multivariable systems described by their state-space equations. The algorithm proposed here performs orthogonal similarity transformations to find the minimal order subsystem corresponding to each input-output pair and uses some determinant identities to determine the elements of the transfer function matrices. Numerical examples are included to illustrate the performance of the proposed algorithm.

1. Introduction
CONSIDER the linear time-invariant, multivariable system described by

ẋ(t) = Ax(t) + Bu(t)          (1.1a)

y(t) = Cx(t) + Eu(t)          (1.1b)

where x(t) ∈ R^n, u(t) ∈ R^m and y(t) ∈ R^p. A number of design procedures, e.g. Horowitz (1963), Rosenbrock (1974), Postlethwaite and MacFarlane (1980), Owens (1981) and Vidyasagar (1985), require knowledge of the system's transfer function matrix W(s) defined by

W(s) = C(sI_n − A)^{-1}B + E.          (1.2)

A direct method of computing W(s) is to determine the resolvent matrix (sI_n − A)^{-1}. However, when the order of the system is high, this approach is extremely prone to numerical round-off errors. Using an alternative approach proposed by Varga and Sima (1981) and also by Emami-Naeini and VanDooren (1982), which we shall refer to as the pole-zero approach, the coefficients of the denominator polynomial of the (i, j)-th element of W(s) are determined by computing the eigenvalues of the state matrix of the controllable and observable subsystem corresponding to the (j, i)-th input-output pair. The coefficients of the corresponding numerator polynomial are obtained by solving a generalized eigenvalue problem and finding a constant multiplier. The accuracy of this scheme depends on the accuracy of the computed eigenvalues and generalized eigenvalues. For systems with ill-conditioned eigenvalue problems or ill-conditioned generalized eigenvalue problems, the computed coefficients of the numerators and denominators of transfer function elements can be very inaccurate. These difficulties can be avoided in many cases by computing the coefficients directly. In subsequent sections, we present an efficient technique with good numerical performance for doing this. This technique is based on some determinant identities and uses a variant of Hyman's method (Wilkinson, 1965) to compute the characteristic polynomial of an upper Hessenberg matrix.

2. Relevant facts
We shall use the following facts from linear algebra and control theory for the development of the algorithm:

Fact 1: a matrix of the form

          [1  ...  0  ...  0  ...  0]
          [.       .       .       .]
          [0  ...  σ  ...  γ  ...  0]   <- i-th row
P_ij  =   [.       .       .       .]
          [0  ... -γ  ...  σ  ...  0]   <- j-th row
          [.       .       .       .]
          [0  ...  0  ...  0  ...  1]

where σ² + γ² = 1, is called a plane rotation in the (i, j)-th plane. If x ∈ R^n, then there exists a plane rotation P_ij such that y = P_ij x, where

y_k = x_k,  k ≠ i, j
y_i = (x_i² + x_j²)^{1/2}
y_j = 0.

Moreover, the plane rotations are orthogonal and therefore "perfectly conditioned" (Stewart, 1973; Wilkinson, 1965).

Fact 2: a single-input system (A, b) can always be reduced to an upper Hessenberg form (UHF) (Patel and Misra, 1984) by means of a sequence of plane rotations T = P_1 P_2 ..., such that F = T^T A T is an upper Hessenberg matrix (Stewart, 1973) and g = T^T b = [g_11 0 ... 0]^T. The element g_11 ≠ 0 and F is an unreduced upper Hessenberg matrix if and only if (A, b) is a controllable pair. Further, if the system is not completely controllable, then the above transformation will reduce (A, b) to (F, g) such that

F = [F_11  F_12]
    [ 0    F_22]          (2.1a)

and

g = [g_1]
    [ 0 ]          (2.1b)

where F_11 ∈ R^{n_1×n_1} is an unreduced upper Hessenberg matrix, F_22 ∈ R^{(n−n_1)×(n−n_1)} and g_1 (∈ R^{n_1}) = [g_11 0 ... 0]^T. Note that (F_11, g_1) is a controllable pair and the eigenvalues of F_22 correspond to the uncontrollable modes of (A, b).

Fact 3: similar results can be stated for a single-output system (A, c^T), except that the results apply to the observability properties of the system, with F = T^T A T in "lower Hessenberg form" (Patel and Misra, 1984) and c^T T = [c_11 0 ... 0]. The unobservable modes of the system are also defined in a similar manner.

* Received 23 April 1986; revised 21 November 1986; revised 25 February 1987. The original version of this paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Editor H. Austin Spang III.
† This research was supported by the Natural Sciences and Engineering Research Council of Canada under grant A1345.
‡ Department of Electrical Engineering, Concordia University, Montreal, Quebec, Canada H3G 1M8.
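Facts 1 and 2 can be sketched in a few lines. The fragment below is in Python/NumPy rather than the paper's MATLAB, and the helper names `plane_rotation` and `hessenberg_pair` are ours, not the paper's: it forms plane rotations and sweeps them to reduce a pair (A, b) to the form (2.1); for an uncontrollable pair a zero appears on the sub-diagonal of F, which is how the dimension of the controllable part can be read off.

```python
import numpy as np

def plane_rotation(n, i, j, xi, xj):
    """Plane rotation P in the (i, j)-th plane (Fact 1), sigma**2 + gamma**2 = 1,
    chosen so that applying P to a vector whose entries in positions (i, j)
    are (xi, xj) zeros position j."""
    P = np.eye(n)
    r = np.hypot(xi, xj)
    if r > 0.0:
        sigma, gamma = xi / r, xj / r
        P[i, i] = P[j, j] = sigma
        P[i, j], P[j, i] = gamma, -gamma
    return P

def hessenberg_pair(A, b):
    """Reduce (A, b) to (F, g) = (T.T @ A @ T, T.T @ b) with F upper
    Hessenberg and g = [g11, 0, ..., 0].T, as in Fact 2."""
    n = len(b)
    F, g, T = A.astype(float).copy(), b.astype(float).copy(), np.eye(n)
    for j in range(n - 1, 0, -1):            # compress b onto e1, bottom up
        P = plane_rotation(n, j - 1, j, g[j - 1], g[j])
        g, F, T = P @ g, P @ F @ P.T, T @ P.T
    for c in range(n - 2):                   # Givens sweep: F -> Hessenberg;
        for r in range(n - 1, c + 1, -1):    # rotations act on rows 2..n only,
            P = plane_rotation(n, r - 1, r, F[r - 1, c], F[r, c])  # so g keeps
            g, F, T = P @ g, P @ F @ P.T, T @ P.T                  # its shape
    return F, g, T

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)
F, g, T = hessenberg_pair(A, b)
# F is upper Hessenberg, g = T.T @ b has a single leading non-zero entry,
# and T is orthogonal, so F = T.T @ A @ T is an orthogonal similarity.
```

For an uncontrollable pair, e.g. A = diag(1, 2, 3) with b = [1, 1, 0]^T, the reduced F comes out with a zero sub-diagonal entry, exactly the partition (2.1a).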

Fact 4: for a single-input, single-output system (A, b, c^T), we may write (Patel and Munro, 1982)

det (sI_n − A + bc^T) = det (sI_n − A) + c^T adj (sI_n − A) b.

Further,

c^T (sI_n − A)^{-1} b = c^T adj (sI_n − A) b / det (sI_n − A)          (2.2)
                      = det (sI_n − A + bc^T) / det (sI_n − A) − 1.

Now, in (1.2), the (i, j)-th element of W(s) is given by

w_ij(s) = c_i^T (sI_n − A)^{-1} b_j + e_ij          (2.3)

which may be written, using (2.2), as

w_ij(s) = det (sI_n − A + b_j c_i^T) / det (sI_n − A) − 1 + e_ij.          (2.4)

Fact 5: the determinant of a matrix whose k-th row can be expressed as a sum of row vectors a_k^T + ā_k^T may be written as (Stewart, 1973)

det [a_1 a_2 ... a_k + ā_k ... a_n]^T = det [a_1 a_2 ... a_k ... a_n]^T + det [a_1 a_2 ... ā_k ... a_n]^T.          (2.5)

Fact 6: the characteristic polynomial of an upper Hessenberg matrix A can be computed efficiently using a variant of Hyman's method (Wilkinson, 1965); e.g. for a 3 × 3 matrix A, writing sI_3 − A = [a_kl(s)], this involves the determination of u_13(s) and u_23(s) such that

[a_11(s)  a_12(s)  a_13(s)] [u_13(s)]   [ã_13(s)]
[a_21(s)  a_22(s)  a_23(s)] [u_23(s)] = [   0   ]
[   0     a_32(s)  a_33(s)] [   1   ]   [   0   ]

The required determinant is then given by the product

det (sI_3 − A) = ã_13(s) a_21(s) a_32(s).          (2.6)

3. The algorithm and its properties
The proposed method determines one element of the transfer function matrix at a time. Assume that the controllable and observable subsystem (A, b_j, c_i^T, e_ij) is in its UHF; then (2.4) may be written as

w_ij(s) = [det (Ã + b_j c_i^T) / det (Ã)] − 1 + e_ij          (3.1)

where Ã = (sI_n − A). Note that due to the structure of b_j, the product b_j c_i^T will have only its first row as a non-zero row, given by b_1j c_i^T, where b_1j is the first element of b_j. Next, we define Â as the matrix obtained from Ã by replacing its first row by b_1j c_i^T. Then, using (2.5), we have

w_ij(s) = [det (Â) / det (Ã)] + e_ij = [ñ_ij(s) + e_ij d_ij(s)] / d_ij(s) = n_ij(s) / d_ij(s)          (3.2)

where ñ_ij(s) = det (Â) and d_ij(s) = det (Ã).

The determinant identity and its application described above will be used in Algorithm TFM to compute the numerator and denominator polynomials of individual elements of the transfer function matrix. However, since the single-input, single-output subsystems (A, b_j, c_i^T, e_ij) may not be controllable and/or observable for each input-output pair, we must remove the input and/or output decoupling zeros before computing the transfer function element corresponding to that input-output pair (Varga and Sima, 1981; Emami-Naeini and VanDooren, 1982). The algorithm to compute the transfer function matrix therefore has the following three main steps:

(1) removal of output decoupling zeros for the i-th output;
(2) removal of input decoupling zeros for the j-th input;
(3) computation of the transfer function element corresponding to the controllable and observable subsystem of (A, b_j, c_i^T, e_ij).

Steps (1) and (2) ensure that there is no cancellation in the numerator and denominator polynomials of the elements of the transfer function matrix, and the third step evaluates the required transfer function element for the minimal order subsystem obtained from steps (1) and (2). The algorithm may be formally described as follows.

Algorithm TFM: (Transfer Function of Linear Multivariable Systems)

Step I: (Initialize)
(1) Set i := 0, n_c := 0, n_o := 0.
(2) Set i := i + 1, j := 0, n := order of the given multi-input, multi-output system.
(3) Set (F, G, h^T, e^T) := (A, B, c_i^T, e_i^T).

Step II: (Remove output decoupling zeros)
(1) Find an orthogonal transformation matrix U ∈ R^{n×n} such that

h^T U = [h_11 0 ... 0]

and set h^T := h^T U, F := U^T F U and G := U^T G.
(2) Find an orthogonal transformation matrix V ∈ R^{n×n} such that V^T F V is a lower Hessenberg matrix and the structure of h is preserved:

h^T V = [h^(o)T  0^T].

(3) Partition the system and set (F, G, h^T, e^T) := (F^(o), G^(o), h^(o)T, e^T) and n_o := dim (F^(o)).

Comment. Note that the dimension of the observable subsystem is easily determined by inspecting the first super-diagonal of the lower Hessenberg matrix V^T F V. The partition is performed if any of the elements on the super-diagonal is zero. For a completely observable system, n_o = n.

Step III: (Remove input decoupling zeros)
(1) Set j := j + 1 and (F, g, h^T, e) := (F, g_j, h^T, e_ij).
(2) Find an orthogonal transformation matrix U ∈ R^{n_o×n_o} such that

U^T g = [g_11 0 ... 0]^T

and set g := U^T g, F := U^T F U and h^T := h^T U.
(3) Find an orthogonal transformation matrix V ∈ R^{n_o×n_o} such that V^T F V is an upper Hessenberg matrix and the structure of g is preserved:

h^T V = [h^(c)T  h^(uc)T]

and

V^T g = [g^(c)T  0^T]^T.

(4) Partition the system and set (F, g, h^T, e) := (F^(c), g^(c), h^(c)T, e) and n_c := dim (F^(c)).


Comment. Note that at this stage we have isolated the controllable and observable subsystem of (A, b_j, c_i^T, e_ij) and, therefore, are in a position to evaluate the numerator and denominator polynomials of the corresponding subsystem.

Step IV: (Compute the transfer function element w_ij(s))
(1) Compute the vector [u_1(s) u_2(s) ... u_nc(s)] by using the recurrence relation

u_k(s) = (1/f_{k+1,k}) [(s − f_{k+1,k+1}) u_{k+1}(s) − Σ_{r=k+2}^{n_c} f_{k+1,r} u_r(s)],  k = n_c − 1, ..., 1,

where u_{n_c}(s) = 1.0.
(2) Compute the denominator polynomial from

d_ij(s) = [(s − f_11) u_1(s) − Σ_{r=2}^{n_c} f_{1r} u_r(s)].

(3) Evaluate ñ_ij(s) as

ñ_ij(s) = g_1j [Σ_{r=1}^{n_c} h_r u_r(s)].

(4) Compute the numerator polynomial from

n_ij(s) = ñ_ij(s) + e_ij d_ij(s)

and set w_ij(s) = n_ij(s)/d_ij(s).
(5) If j < m, go to Step III-(1); else, if i < p and j = m, go to Step I-(2); else, END.

At the end of the algorithm, we will have the desired transfer function matrix W(s).
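Steps IV-(1) and IV-(2) amount to a Hyman-style evaluation of det (sI − F) for an unreduced upper Hessenberg F. A minimal sketch (in Python/NumPy rather than the paper's MATLAB; the function name `charpoly_hessenberg` is ours, and each u_k(s) is stored as a coefficient array in descending powers of s) is:

```python
import numpy as np

def charpoly_hessenberg(F):
    """Characteristic polynomial of an unreduced upper Hessenberg F via the
    recurrence of Step IV; the result is normalized to be monic at the end."""
    n = F.shape[0]
    u = [None] * n
    u[n - 1] = np.array([1.0])                        # u_nc(s) = 1.0
    for k in range(n - 2, -1, -1):                    # Step IV-(1)
        p = np.polymul([1.0, -F[k + 1, k + 1]], u[k + 1])
        for r in range(k + 2, n):
            p = np.polysub(p, np.polymul([F[k + 1, r]], u[r]))
        u[k] = p / F[k + 1, k]     # sub-diagonal inversion, see remark (2)
    d = np.polymul([1.0, -F[0, 0]], u[0])             # Step IV-(2)
    for r in range(1, n):
        d = np.polysub(d, np.polymul([F[0, r]], u[r]))
    return d / d[0]

F = np.array([[-3.0, 2.0],
              [-1.0, 0.0]])
d = charpoly_hessenberg(F)   # s**2 + 3s + 2, i.e. [1.0, 3.0, 2.0]
```

The divisions by f_{k+1,k} introduce the same scale factor into both determinants of (3.2); normalizing the final polynomials (as the Appendix listing does when it divides by deno(1)) removes it.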

4. Discussion and remarks about the algorithm
In this section, we will discuss various properties and implementation issues associated with Algorithm TFM.

(1) In Algorithm TFM, the minimal order subsystems corresponding to each input-output pair are obtained using only orthogonal similarity transformations. This is very desirable from the numerical point of view, because the round-off errors incurred by the use of finite precision arithmetic are well bounded for orthogonal transformations (Stewart, 1973).

(2) Unlike the pole-zero method, the proposed method is direct. It should be pointed out that the pole-zero method obtains the minimal order subsystems corresponding to each input-output pair in a similar manner. However, having obtained the minimal order subsystem, it is necessary to solve the algebraic eigenvalue problem for the corresponding state matrix to obtain the denominator polynomial, followed by a generalized eigenvalue problem to obtain the numerator polynomial. The methods for solving the algebraic and generalized eigenvalue problems are iterative in nature and are, therefore, computationally much more expensive. Moreover, if these problems for a system are ill-conditioned, then the transfer function matrix computed using this approach may be inaccurate. For example, if the eigenvalues of the system differ from each other by several orders of magnitude, then while computing the coefficients of various powers of s of the denominator polynomial, there could be a significant loss in accuracy. A similar problem can occur while computing the coefficients of the numerator polynomial. The proposed algorithm works directly from the unreduced upper Hessenberg form of the minimal order state matrix and therefore does not suffer from the shortcomings discussed above. Since the algorithm computes the determinant of a matrix in a single step, it is computationally inexpensive. However, the user must be cautioned about the implementation of the algorithm. In Step IV-(1), there is an inversion of the sub-diagonal elements of the Hessenberg matrix. If these elements are extremely small, one could run into a possible loss of accuracy or floating point overflow (Wilkinson, 1965). However, if the elements of the vector u(s) are stored in a product form until the final step, floating point overflow or underflow can be avoided.

(3) If the elements of the system (A, B, C, E) are not well scaled, balancing the system (Emami-Naeini and VanDooren, 1982) prior to the computation of the transfer function matrix would, in general, improve the accuracy of the computed numerator and denominator polynomials.

(4) The bulk of the computational effort is involved in finding the minimal order subsystem for each input-output pair. From the algorithm, it is clear that we require "p" reductions of single-input systems to their lower Hessenberg forms followed by "m" reductions of the observable subsystems to their upper Hessenberg forms for each output. In the worst case, when the system is controllable and observable from each input-output pair, the reductions to Hessenberg form will require approximately 3n³(p + m) floating point operations. The final step for computing the two determinants can be accomplished in approximately n_c² + n_c floating point operations, where n_c is the dimension of the controllable and observable subsystem for a particular input-output pair. It should be pointed out that the pole-zero method also requires the initial reduction to Hessenberg form to obtain the minimal order subsystems and subsequently needs to solve the computationally expensive algebraic eigenvalue as well as generalized eigenvalue problems to obtain the elements of the transfer function matrix. It suffices to say that Algorithm TFM has a significant advantage in computational cost over the pole-zero technique.

(5) It is probably true to say that the pole-zero method gives better accuracy than the proposed method if the associated algebraic eigenvalue problem and the generalized eigenvalue problem are better conditioned than the problem of computing the characteristic polynomial, and vice versa. However, without carrying out a detailed error analysis, it is not possible to state explicitly the conditions under which one method performs better than the other. The proposed method and the pole-zero method for computing the elements of a transfer function matrix are based on entirely different principles; one cannot, therefore, directly compare the performance of the two approaches. However, the computational costs discussed above and the numerical examples given in the next section illustrate that the proposed method has significant advantages in many cases.

We have included a listing of a MATLAB (Moler, 1980) version of Algorithm TFM which computes the transfer function of a single input, single output system. The subroutines should be called in the sequence "lhf", "uhf" and "txfun". The input to the program should be the parameters of the single input, single output system (A, b, cT, e). On output, the coefficients of the decreasing powers of s in numerator and denominator polynomials will appear in arrays "nume" and "deno" respectively. All numerical results given in the next section were computed using this program. One can easily generalize the program to a multi-input, multi-output system by appropriately defining the input and output vectors in the calling program. Note that it is computationally more efficient to evaluate a complete row (column) of the transfer function matrix before the next row (column) is computed. In this way, we will avoid repeated removal of output (input) decoupling zeros.
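Because Fact 4 equates c_i^T adj (sI_n − A) b_j with a difference of two characteristic polynomials, the arrays "nume" and "deno" produced by the listing can be cross-checked independently. The fragment below (Python/NumPy; it is a check, not Algorithm TFM: np.poly works through an eigenvalue computation, precisely the route the proposed method avoids) does this for a hypothetical 2nd-order single-input, single-output system of our own choosing:

```python
import numpy as np

# Cross-check of one transfer function element via Fact 4:
#   d(s)  = det(sI - A)
#   n~(s) = c_i^T adj(sI - A) b_j = det(sI - A + b_j c_i^T) - d(s)
# Since np.poly(M) returns the characteristic polynomial det(sI - M),
# det(sI - A + b_j c_i^T) is simply np.poly(A - b_j c_i^T).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
bj = np.array([[0.0], [1.0]])    # j-th column of B
ci = np.array([[1.0, 0.0]])      # i-th row of C
eij = 0.0                        # (i, j) entry of E
deno = np.poly(A)                              # deno ≈ [1, 3, 2]
ntil = np.polysub(np.poly(A - bj @ ci), deno)  # ntil ≈ [0, 0, 1]
nume = np.polyadd(ntil, eij * deno)
# so w_ij(s) = 1 / (s**2 + 3*s + 2) for this hypothetical system
```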

5. Examples
Here, we present an illustrative example and several numerical examples to demonstrate the performance of the proposed algorithm.

Example 1. This 3rd-order example is included to show the steps of the algorithm and not to illustrate its numerical properties. The system is described by (1.1) with n = 3, m = 1 and p = 2, and with E = [0 1]^T. At the end of Step III of the algorithm we have, for i = 1, j = 1, the (already controllable and observable) third-order subsystem in upper Hessenberg form. Applying Step IV, we get

w_11(s) = (s² − 4s − 5) / (s³ − s² − 9s + 9).

Similarly, for i = 2 and j = 1, we have

F^(c) = 1, g^(c) = 1, h^(c)T = 1 and e_21 = 1

and, applying Step IV, we obtain

w_21(s) = s / (s − 1).

The complete transfer function (vector) is, therefore, given by

W(s) = [(s² − 4s − 5) / (s³ − s² − 9s + 9)]
       [s / (s − 1)                       ]

Next, we demonstrate the numerical performance of the proposed algorithm by means of several numerical examples. All computations were carried out on a VAX 11/780, using MATLAB. The accuracy of the coefficients computed using reduced precision is shown by the italic digits.

Example 2. In this example, we use the system in Kalman (1963) and Patel (1981a, b). The coefficients of the transfer function are known and therefore provide an excellent basis for comparison. The coefficients were computed using double precision (accurate to 15 significant digits) and compared with those computed using reduced precision (accurate to 7 significant digits). For the sake of brevity, only the coefficients of w_34(s) are given in Table 4.1.

TABLE 4.1. Coefficients of numerator and denominator of w_34(s), Example 2

            Power of s | Actual coeff. | Coeff. in full precision  | Coeff. in reduced precision
nume. poly.     2      | 10.0          | 0.1000000000000000d + 02  | 0.9999999821186066d + 01
                1      | 54.0          | 0.5399999999999999d + 02  | 0.5399999618530273d + 02
                0      | 68.0          | 0.6799999999999999d + 02  | 0.6799998760223389d + 02
deno. poly.     3      | 1.0           | 0.1000000000000000d + 01  | 0.1000000090000000d + 01
                2      | 9.0           | 0.8999999999999998d + 01  | 0.8999999165534973d + 01
                1      | 23.0          | 0.2300000000000000d + 02  | 0.2299999725818634d + 02
                0      | 15.0          | 0.1500000000000000d + 02  | 0.1499999684095383d + 02

Example 3. In this example we consider the 9th-order boiler model (Davison and Wang, 1974) with two inputs and two outputs. This is an extremely ill-conditioned system with eigenvalues ranging from ~10^{-10} to ~10. The pole-zero approach in this case will not be reliable because forming the coefficients with eigenvalues so far apart will lead to a loss in accuracy of several significant figures. The coefficients computed using the proposed algorithm with full and reduced precision show that the algorithm is quite reliable. The italic digits indicate the accuracy of the reduced precision computation (Tables 4.2 and 4.3).

TABLE 4.2. Numerator polynomial of w_11(s)

Power of s | Full precision           | Reduced precision
    6      | 2.096991388799999d + 02  | 2.096991405487061d + 02
    5      | 2.240810395996802d + 03  | 2.240810379028320d + 03
    4      | 8.058142994149724d + 03  | 8.058142883300781d + 03
    3      | 1.054129331589858d + 04  | 1.054129321289063d + 04
    2      | 2.667315391676311d + 03  | 2.667315307617188d + 03
    1      | 1.470991471959572d + 02  | 1.470991382598877d + 02
    0      | 1.189248279895743d + 00  | 1.189248189330101d + 00

TABLE 4.3. Denominator polynomial of w_11(s)

Power of s | Full precision           | Reduced precision
    8      | 1.000000000000000d + 00  | 1.000000000000000d + 00
    7      | 1.089330000000039d + 01  | 1.089329999685287d + 01
    6      | 4.255744585600835d + 01  | 4.255744528770447d + 01
    5      | 6.701215774012081d + 01  | 6.701215744018555d + 01
    4      | 3.338350289656865d + 01  | 3.338350939750671d + 01
    3      | 6.338532196643809d + 00  | 6.338539093732834d + 00
    2      | 4.170298739139534d - 01  | 4.170327670872211d - 01
    1      | 5.786203563399542d - 03  | 5.786365509266034d - 03
    0      | 2.266130444584768d - 05  | 2.266192825572944d - 05

Example 4. This example is the 16th-order model of the F100 turbofan engine (Sain et al., 1978). Due to the lack of space, we only give the coefficients of the (1, 1) element. Tables 4.4 and 4.5 show the numerator and denominator polynomials respectively, using full precision as well as reduced precision.

TABLE 4.4. Numerator polynomial of w_11(s)

Power of s | Full precision           | Reduced precision
   15      | 2.237571713959001d + 01  | 2.237571341425345d + 01
   14      | 1.280893826437945d + 04  | 1.280893190104345d + 04
   13      | 3.459829385969874d + 06  | 3.459826384353035d + 06
   12      | 5.399744232136825d + 08  | 5.399737153460229d + 08
   11      | 5.256249713956723d + 10  | 5.256241313617192d + 10
   10      | 3.347738522604136d + 12  | 3.347733035155416d + 12
    9      | 1.439849685457888d + 14  | 1.439847625294940d + 14
    8      | 4.261046730400941d + 15  | 4.261042474433011d + 15
    7      | 8.741971749283574d + 16  | 8.741968503036248d + 16
    6      | 1.238556601681131d + 18  | 1.238557132355414d + 18
    5      | 1.191167969266945d + 19  | 1.191169620431854d + 19
    4      | 7.511916701377381d + 19  | 7.511935543443150d + 19
    3      | 2.929289296127906d + 20  | 2.929300605064148d + 20
    2      | 6.441179416321844d + 20  | 6.441215973664072d + 20
    1      | 6.813651655176550d + 20  | 6.813711798559055d + 20
    0      | 2.411682586770263d + 20  | 2.411721911565391d + 20

TABLE 4.5. Denominator polynomial of w_11(s)

Power of s | Full precision           | Reduced precision
   16      | 1.000000000000000d + 00  | 1.000000000000000d + 00
   15      | 1.0637927...d + 03       | 1.063792415201664d + 03
   14      | 3.780490569816117d + 05  | 3.780488482913048d + 05
   13      | 6.691526311348407d + 07  | 6.691521157614168d + 07
   12      | 7.021546974802544d + 09  | 7.021540417170378d + 09
   11      | 4.777151336769724d + 11  | 4.777146362873559d + 11
   10      | 2.215003008088449d + 13  | 2.215000618149209d + 13
    9      | 7.194934651568205d + 14  | 7.194927245652294d + 14
    8      | 1.658384199538869d + 16  | 1.658382750858897d + 16
    7      | 2.715113660072002d + 17  | 2.715112046649806d + 17
    6      | 3.125501870096960d + 18  | 3.125501327570240d + 18
    5      | 2.473853219739258d + 19  | 2.473854246689722d + 19
    4      | 1.297307808544943d + 20  | 1.297309403119946d + 20
    3      | 4.257759876067597d + 20  | 4.257749672779555d + 20
    2      | 7.998317653320628d + 20  | 7.998284223781448d + 20
    1      | 7.430402303469340d + 20  | 7.430346889636519d + 20
    0      | 2.411884944069373d + 20  | 2.411921441067980d + 20

6. Conclusions
A computationally efficient method was presented for determining the transfer function matrices of linear multivariable systems. The proposed method computes one element of the transfer function matrix at a time. The method removes the uncontrollable and unobservable modes corresponding to each input-output pair so that the resulting entry of the transfer function matrix has no cancellations. Numerical experiments carried out so far indicate that the proposed algorithm gives comparable and in many cases better accuracy than other methods.

References
Davison, E. J. and S. H. Wang (1974). Properties and calculation of transmission zeros of linear multivariable systems. IEEE Trans. Aut. Control, 10, 643-658.
Emami-Naeini, A. and P. VanDooren (1982). On computation of transmission zeros and transfer functions. Proc. IEEE Control and Dec. Conf., Orlando, pp. 51-56.
Horowitz, I. (1963). Synthesis of Feedback Systems. Academic Press, New York.
Kalman, R. E. (1963). Mathematical description of linear dynamical systems. SIAM J. Control, 1, 152-192.
Moler, C. (1980). MATLAB users' guide. Tech. Rep. CS81-1, Dept Comp. Sci., Univ. New Mexico, U.S.A.
Owens, D. H. (1981). Multivariable root-loci: an emerging design tool. IEE Int. Conf. on Control, Warwick, U.K.
Patel, R. V. (1981a). Computation of minimal-order state-space realizations and observability indices using orthogonal transformations. Int. J. Control, 33, 227-246.
Patel, R. V. (1981b). Computation of matrix fraction description of linear time-invariant systems. IEEE Trans. Aut. Control, 26, 148-161.
Patel, R. V. and P. Misra (1984). Numerical algorithms for eigenvalue assignment by state feedback. Proc. IEEE, 72, 1755-1764.
Patel, R. V. and N. Munro (1982). Multivariable System Theory and Design. Pergamon Press, New York.
Postlethwaite, I. and A. G. J. MacFarlane (1980). Complex Variable Methods for Linear Multivariable Feedback Systems. Taylor and Francis, London.
Rosenbrock, H. H. (1974). Computer-Aided Control Systems Design. Academic Press, London.
Sain, M. K., J. L. Peczkowski and J. L. Melsa (1978). Alternatives for Linear Multivariable Control. Nat. Eng. Consortium, Chicago.
Stewart, G. W. (1973). Introduction to Matrix Computations. Academic Press, New York.
Varga, A. and V. Sima (1981). Numerically stable algorithm for transfer function matrix evaluation. Int. J. Control, 33, 1123-1133.
Vidyasagar, M. (1985). Control System Synthesis: A Factorization Approach. MIT Press, Cambridge, MA.
Wilkinson, J. H. (1965). The Algebraic Eigenvalue Problem. Oxford University Press, Oxford.

Appendix (MATLAB program for Algorithm TFM)
Algorithm TFM was implemented using MATLAB (Moler, 1980). The implementation given below consists of three subroutines "lhf", "uhf" and "txfun". The user should include the subroutines as individual files within a directory. A calling program that specifies the appropriate input-output pair and a data file containing the system parameters should also be provided. The code given below should be used only on an experimental basis and with caution. The user can generate a FORTRAN or any other programming language code using the general guidelines in the listing below.

//
// Subroutine lhf
// Input to the routine:
//   A ∈ R^{n×n}
//   B ∈ R^{n×m}
//   C ∈ R^{p×n}
//   D ∈ R^{p×m}
//   n, m, p, zero
//   i, j correspond to the output and input respectively for the
//   transfer function element to be computed
//
no = 0; nc = 0; a1 = a'; b1 = c(i,:)'; c1 = b';..
<u, b1, v> = svd(b1); a1 = u'*a1*u; c1 = c1*u;..
<u, a1> = hess(a1); c1 = c1*u; dim = 0; flag = 0;..
//
// Find the dimension of the observable subsystem
//
if abs(b1(1)) > zero, dim = dim + 1; else, flag = 1;
for l = 1:n - 1,..
if abs(a1(l + 1,l)) > zero, dim = dim + 1;..
else, exit;..
end;..
//
no = dim; if flag = 1, no = 0; clear dim;
if no = 0, w(i,:) = d(i,:); else, if no <> 0,..
a1 = a1(1:no,1:no)'; x = c1(:,1:no)';..
c1 = b1(1:no)'; b1 = x; clear x;

//
// Subroutine uhf, to be executed after lhf
//
<u, b2, v> = svd(b1(:,j)); a2 = u'*a1*u; c2 = c1*u;..
<u, a2> = hess(a2); b2 = u'*b2; c2 = c2*u; dim = 0; flag = 0;..
if abs(b2(1)) > zero, dim = dim + 1; else, flag = 1;
//
// Find the dimension of the controllable subsystem
//
for l = 1:no - 1,..
if abs(a2(l + 1,l)) > zero,..
dim = dim + 1; else, exit;..
end;
//
nc = dim; if flag = 1, nc = 0; clear no; clear dim;
if nc = 0, flag = 2; else, if nc <> 0,..
a2 = a2(1:nc,1:nc); b2 = b2(1:nc); c2 = c2(1:nc);..
if flag = 2, w(i,j) = d(i,j); else,..
if nc = 1, p = (1; -a2(1,1)); r(2,1) = c2(1);..
r = r + d(i,j)*p, else, if nc > 1, exec('txfun')
//
// Subroutine txfun, to be executed after uhf
//
y(nc,nc) = 1; x = 1/a2(nc,nc - 1); y(nc - 1,nc - 1) = x;..
y(nc,nc - 1) = -x*a2(nc,nc);
//
// Compute the transformation matrix y using modified Hyman's method
//
for l = nc - 2:-1:1,..
y(l:nc - 1,l) = y(l + 1:nc,l + 1)/a2(l + 1,l);..
y1 = (1/a2(l + 1,l))*y(l + 1:nc,l + 1:nc)*a2(l + 1,l + 1:nc)';..
y(l + 1:nc,l) = y(l + 1:nc,l) - y1(:,1);..
end;
//
clear q; clear x; clear y1;
//
// Compute denominator and numerator polynomials
//
deno(nc + 1,1) = 0; deno(1:nc,1) = y(:,1)/b2(1);..
y1 = (1/b2(1))*y*a2(1,:)';..
deno(2:nc + 1,1) = deno(2:nc + 1,1) - y1;..
nume(2:nc + 1,1) = y*c2'; nume = nume + d(i,j)*deno;
long e; nume = nume/deno(1), deno = deno/deno(1),
//
clear q1; clear nc; clear no; clear y; clear flag;
clear y1; clear a1; clear b1; clear c1; clear a2;
clear b2; clear c2; clear dim;
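The three MATLAB subroutines above can also be mirrored compactly in NumPy. The sketch below is ours, not the paper's (the name `tfm_siso` is invented), and it makes a simplifying assumption: the given (A, b, c^T, e) is already controllable and observable, so it performs only the Hessenberg reduction of Fact 2 and the Step IV recurrence, skipping the decoupling-zero removal of Steps II and III.

```python
import numpy as np

def tfm_siso(A, b, c, e, tol=1e-12):
    """Sketch of Algorithm TFM for one input-output pair, assuming (A, b, c)
    is already controllable and observable. Returns (nume, deno) as
    coefficient arrays in descending powers of s, with deno monic."""
    n = len(b)
    F = A.astype(float).copy()
    g = b.astype(float).copy()
    h = c.astype(float).copy()

    def rot(i, j, vi, vj):
        # plane rotation in the (i, j)-th plane zeroing position j (Fact 1)
        P = np.eye(n)
        r = np.hypot(vi, vj)
        P[i, i] = P[j, j] = vi / r
        P[i, j], P[j, i] = vj / r, -vj / r
        return P

    # reduce (A, b) to upper Hessenberg (F, g), g = [g11, 0, ..., 0] (Fact 2)
    for j in range(n - 1, 0, -1):
        if abs(g[j]) > tol:
            P = rot(j - 1, j, g[j - 1], g[j])
            g, F, h = P @ g, P @ F @ P.T, h @ P.T
    for col in range(n - 2):
        for row in range(n - 1, col + 1, -1):
            if abs(F[row, col]) > tol:
                P = rot(row - 1, row, F[row - 1, col], F[row, col])
                g, F, h = P @ g, P @ F @ P.T, h @ P.T

    # Step IV: recurrence for u_k(s), then d(s), n~(s) and n(s)
    u = [None] * n
    u[n - 1] = np.array([1.0])
    for k in range(n - 2, -1, -1):
        p = np.polymul([1.0, -F[k + 1, k + 1]], u[k + 1])
        for r in range(k + 2, n):
            p = np.polysub(p, np.polymul([F[k + 1, r]], u[r]))
        u[k] = p / F[k + 1, k]
    d = np.polymul([1.0, -F[0, 0]], u[0])
    for r in range(1, n):
        d = np.polysub(d, np.polymul([F[0, r]], u[r]))
    nt = np.array([0.0])
    for r in range(n):
        nt = np.polyadd(nt, np.polymul([g[0] * h[r]], u[r]))
    nume = np.polyadd(nt, e * d)
    return nume / d[0], d / d[0]   # normalize: deno monic, as in the listing

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
nume, deno = tfm_siso(A, b, c, 0.0)
# w(s) = 1/(s**2 + 3*s + 2): nume -> [0, 0, 1], deno -> [1, 3, 2]
```

The result can be cross-checked against the eigenvalue route the paper avoids, e.g. deno against np.poly(A) and nume against np.poly(A − b c^T) − np.poly(A), which is the identity of Fact 4.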