On Some Uses of Factorizations in Linear System Theory


Copyright © IFAC System Structure and Control, Nantes, France, 1995

ON SOME USES OF FACTORIZATIONS IN LINEAR SYSTEM THEORY

P.A. Fuhrmann*

*Earl Katz Family Chair in Algebraic System Theory, Department of Mathematics, Ben-Gurion University of the Negev, Beer Sheva, Israel

Abstract. We present a short, and far from exhaustive, survey of some of the uses of factorization theory in the study of linear systems. The whole development is based on model theory, be it in the context of vectorial polynomial, rational or Hardy spaces. Invariant subspaces of operators are related to factorizations of matrix functions. The connection to system theory is via realizations based on various coprime factorizations. Among the topics touched upon are the study of factorizations of rational functions and in particular the important cases of inner and all-pass functions. We survey the subject of coprime factorizations and their connection, via spectral factorizations, to various Riccati equations. In this connection we describe some geometric aspects of Wiener-Hopf factorizations. We proceed to describe connections with geometric control theory and give a Hilbert space characterization of stabilizability and detectability subspaces. Finally we describe a complete parametrization of all minimal spectral factors in the case of a coercive spectral density function.

Key Words. Factorization, spectral factorization, coprime factorizations, invariant subspaces, realization theory, inner functions, Wiener-Hopf factorizations, geometric control theory, parametrization of spectral factors.

1. Introduction

Factorization theory has its roots in ancient mathematics, motivated by the study of primes and the factorization of integers into products of primes. Rational numbers were known to the Greeks, though their properties were studied mainly from the geometric viewpoint of proportions. The existence of irrational numbers stems from this source, as do the Euclidean algorithm and the Bezout equation. Modular arithmetic was known in the East, e.g. the Chinese remainder theorem. With the shift of emphasis from numbers to functions came the study of zeros of functions, starting with the zeros of polynomials. The analogies between the ring of integers and the ring of polynomials became apparent and led to certain abstractions. The introduction of the notions of groups, rings, fields, ideals and modules led to the establishment of abstract algebra.

In this short paper we will discuss the interrelation between factorization theory and linear system theory. This connection has been a first rate example of the cross-fertilization between pure and applied mathematics. Factorization results and related concepts such as coprimeness have had a great impact on the study of linear systems. However, system theory in turn has proved a great source of research problems in the area of factorization theory. Moreover, it is hard to imagine now a study of factorizations without some of the tools provided by system theory. It is natural to seek the underlying reasons behind this symbiotic relation. In my opinion, the reasons are to be found in the multifaceted ways of representing linear systems. Such systems can be described externally by input/output maps, impulse responses, transfer functions or behaviours. But they also have internal descriptions in terms of differential or difference equations, and state space models. The link connecting external and internal descriptions of a linear system is provided by realization theory. Thus external properties of a linear system are reflected by corresponding internal properties of realizations. Coprime factorizations are directly connected to realizations. Other properties related to factorizations have internal characterizations. Thus external properties have their counterpart in internal properties, like signature or Hamiltonian symmetry. Input/output stability is reflected in the stability of generators. The whole area of dynamic stabilization, including robustness issues, is best approached via doubly coprime factorizations.

The structure of the paper is as follows. We begin by studying ideals and invariant subspaces in spaces of functions and their representations. Set operations, which are geometric in nature, are related to the arithmetic of certain factorizations. Model theory, as related to invariant subspaces and as a tool in the study of operators, is briefly described. A special case, where stability is assumed, is studied more closely. This case provides a point of contact between algebraic and analytic models.

We proceed to introduce several types of Hankel operators. The representations of their kernels and images are directly related to certain coprime factorizations, which in turn provide a key to the construction of realizations. A short study of doubly coprime factorizations (DCF) is done on two levels. We also indicate how several other classes of rational functions admit appropriate coprime factorizations. This is based on Fuhrmann and Ober [1993].

Next we describe one of the most widely applicable results on the factorization of rational functions, namely that of Sakhnovich [1976], which was popularized by Bart, Gohberg, Kaashoek and Van Dooren [1979]. This is a state space characterization of the existence of a factorization of a biproper rational function. The assumption is quite restrictive, but it is still the best of its kind currently available.

Inner functions are the basic component in the description of invariant subspaces of Hardy spaces. State space characterizations of inner functions are well known, e.g. Genin et al. [1983], and are given in terms of a particular solution of a homogeneous Riccati equation. We show how polynomial spectral factorization leads to an analogous characterization of inner functions. Factorizations of inner functions are related to general solutions of the same Riccati equation. This leads to the study of spectral factorizations for rational functions, positive on the imaginary axis. Extremal spectral factors are special cases of Wiener-Hopf factorizations. We will use these as a starting point for the parametrization of the set of all minimal spectral factors.

We mention a few results on Wiener-Hopf factorizations and the connection to invertibility of Toeplitz operators. Following that we proceed to introduce controlled and conditioned invariant subspaces, focusing on internally and externally stabilizable and detectable subspaces. We first derive an algebraic characterization of these spaces. Following that, using the fact that stability is built in, we give also an analytic characterization in terms of Hardy spaces and inner functions. These results are taken from Fuhrmann and Gombani [1995].

Due to limitations of space, this short survey is far from exhaustive, with the most notable omission being the approach to optimization problems via spectral and J-spectral factorizations. For some other aspects of factorization theory, in the spirit of this paper, we refer to Fuhrmann [1989].

2. Polynomial and rational models

We begin by introducing polynomial models. In this paper, given a matrix function, it can be viewed alternatively as acting on column vectors by multiplication on the left or on row vectors on the right. We can pass from one case to the other by transposition; however, we will find it useful to distinguish the two cases. Thus, generally, all function spaces as well as operators will carry a sub(super)script, using the letters r or c, depending on whether the space consists of row vectors or column vectors. For operators that act in a self-evident way the subscripts will be dropped.

We will denote by R^m the space of all m-vectors with coordinates in R; the corresponding space of row vectors is similarly defined. Let π_+ and π_- denote the projections of R^m((z^{-1})), the space of truncated Laurent series, onto R^m[z] and z^{-1}R^m[[z^{-1}]], the space of formal power series vanishing at infinity, respectively. Since R^m((z^{-1})) = R^m[z] ⊕ z^{-1}R^m[[z^{-1}]], the maps π_+ and π_- are complementary projections. In R^m[z] we define the shift operator S_+ by (S_+ p)(z) = z p(z), whereas S_- acting in z^{-1}R^m[[z^{-1}]] is defined by S_- h = π_- z h.

Given a nonsingular polynomial matrix D in R^{m×m}[z] we define two projections, π_D in R^m[z] and π^D in z^{-1}R^m[[z^{-1}]], by

    π_D f = D π_-(D^{-1} f)   for f ∈ R^m[z],

and

    π^D h = π_-(D^{-1} π_+(D h))   for h ∈ z^{-1}R^m[[z^{-1}]],

and define two linear subspaces of R^m[z] and z^{-1}R^m[[z^{-1}]] by

    X_D = Im π_D   and   X^D = Im π^D.

An element f of R^m[z] belongs to X_D if and only if π_+ D^{-1} f = 0, i.e. if and only if D^{-1} f is a strictly proper rational vector function. The row version X_D^r is analogously defined; in particular f ∈ X_D^r if and only if f D^{-1} is strictly proper. We refer to X_D and X_D^r as polynomial models, whereas X^D and its row counterpart are referred to as rational models.
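To make the projection π_D concrete in the simplest setting (a sketch, not from the paper; scalar case only, example data assumed): for scalar nonsingular D the model X_D can be identified with R[z]/(D), and π_D f is just the remainder of f on division by D, since π_-(D^{-1}f) = r/D when f = qD + r with deg r < deg D.

```python
import numpy as np

# Sketch: in the scalar case pi_D is polynomial remainder.
# Coefficients are listed from highest to lowest degree (numpy convention).
D = np.array([1.0, 0.0, -1.0])        # D(z) = z^2 - 1
f = np.array([2.0, 3.0, 0.0, 5.0])    # f(z) = 2z^3 + 3z^2 + 5

q, r = np.polydiv(f, D)               # f = q*D + r, with deg r < deg D

# pi_D f = D * pi_-(D^{-1} f) = r, an element of X_D = R[z]/(D)
print("pi_D f =", np.trim_zeros(r, 'f'))   # -> [2. 8.], i.e. pi_D f = 2z + 8
```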

We turn X_D into an R[z]-module by defining

    p · f = π_D(p f)   for p ∈ R[z], f ∈ X_D.

Since Ker π_D = D R^m[z], it follows that X_D is isomorphic to the quotient module R^m[z]/D R^m[z]. Similarly, we introduce a module structure in X^D by

    p · h = π_-(p h)   for p ∈ R[z], h ∈ X^D.

In X_D we will focus on a special map S_D, a generalization of the classical companion matrix, which corresponds to the action of the identity polynomial z, i.e.

    S_D f = π_D(z f)   for f ∈ X_D.

Thus the module structure in X_D is identical to the module structure induced by S_D through p · f = p(S_D) f. With this definition the study of S_D is identical to the study of the module structure of X_D. In particular the invariant subspaces of S_D are just the submodules of X_D, which are characterized next. They are related to factorizations of polynomial matrices. Similarly, the module structure introduced in X^D is the same one induced by the shift map S^D.

Polynomial and rational models are closely related. Thus we have:

Proposition 1. The polynomial model X_D and the rational model X^D are isomorphic, with the isomorphism ρ_D : X_D → X^D given by f ↦ D^{-1} f. Moreover we have

    S^D ρ_D = ρ_D S_D.

The class of polynomial and rational models is rich enough to model, up to similarity, all linear transformations acting in finite dimensional vector spaces. In fact, if A is a square matrix then S_{zI-A} is isomorphic to A. In particular this gives an easy approach to the study of canonical forms for A, via the study of the pencil zI - A.

The following is a key theorem in factorization theory. It connects S_D-invariant subspaces of the polynomial model X_D, or alternatively S^D-invariant subspaces of the rational model X^D, with factorizations of the nonsingular polynomial matrix D. The underlying idea goes back to the work of Brodskii on factorizations of the Livsic characteristic function.

Theorem 2.
1. A subset M of X_D is a submodule, or equivalently an S_D-invariant subspace, if and only if M = D_1 X_{D_2} for some factorization D = D_1 D_2 with D_i ∈ R^{m×m}[z].
2. A subset M of X^D is a submodule, or equivalently an S^D-invariant subspace, if and only if M = X^{D_2} for some factorization D = D_1 D_2 with D_i ∈ R^{m×m}[z].

We summarize now the connection between the geometry of invariant subspaces and the arithmetic of polynomial matrices.

Theorem 3. Let M_i, i = 1, ..., s, be submodules of X_D having the representations M_i = E_i X_{F_i}, corresponding to the factorizations

    D = E_i F_i.

Then the following statements are true.
(i) M_1 ⊂ M_2 if and only if E_1 = E_2 R, i.e. if and only if E_2 is a left factor of E_1.
(ii) The intersection of the M_i has the representation E_∨ X_{F_∨}, with E_∨ the l.c.r.m. of the E_i and F_∨ the g.c.r.d. of the F_i.
(iii) M_1 + ... + M_s has the representation E_∧ X_{F_∧}, with E_∧ the g.c.l.d. of the E_i and F_∧ the l.c.l.m. of the F_i.

Corollary 4. Let D = E_i F_i for i = 1, ..., s. Then
(i) We have X_D = E_1 X_{F_1} + ... + E_s X_{F_s} if and only if the E_i are left coprime.
(ii) We have ∩_{i=1}^{s} E_i X_{F_i} = 0 if and only if the F_i are right coprime.
(iii) The decomposition X_D = E_1 X_{F_1} ⊕ ... ⊕ E_s X_{F_s} is a direct sum if and only if D = E_i F_i for all i, the E_i are left coprime and the F_i are right coprime.

The next result summarizes the relation between factorization and the spectral decomposition of linear maps.

Theorem 5. Let D(z) ∈ R^{n×n}[z] be nonsingular and let d(z) = det D(z) be its characteristic polynomial. Suppose d has a factorization d = e_1 ··· e_s with the e_i pairwise coprime. Then D admits factorizations

    D = D_i E_i,   i = 1, ..., s,

with det D_i = c_i ∏_{j≠i} e_j for nonzero constants c_i. Moreover we have the direct sum decomposition

    X_D = D_1 X_{E_1} ⊕ ··· ⊕ D_s X_{E_s}.

Theorems 3 and 4 connect coprimeness conditions to the geometry of invariant subspaces. The next result connects coprimeness to invertibility properties of intertwining maps.

Theorem 6. Let D and D̄ be nonsingular polynomial matrices. Then
1. A map Z : X_D → X_{D̄} satisfies Z S_D = S_{D̄} Z if and only if there exist polynomial matrices E and Ē for which

    Ē D = D̄ E,        (1)

and in terms of which

    Z f = π_{D̄}(Ē f)   for f ∈ X_D.        (2)

2. The map Z defined by (2) is injective if and only if D and E are right coprime.
3. The map Z defined by (2) is surjective if and only if D̄ and Ē are left coprime.

Assuming the coprimeness conditions of the previous theorem, there exist polynomial matrices satisfying X̄D - ȲE = I and D̄X - ĒY = I, or in matrix form

    ( X̄  -Ȳ ) ( D  Y )   =   ( I  X̄Y - ȲX )
    ( -Ē   D̄ ) ( E  X )       ( 0  I        ).

Multiplying on the left by the inverse of the matrix on the right and modifying accordingly the definition of the matrices X, Y, we get the polynomial doubly coprime factorization (DCF)

    ( X̄  -Ȳ ) ( D  Y )   =   ( I  0 )
    ( -Ē   D̄ ) ( E  X )       ( 0  I ).

Doubly coprime factorizations are extremely useful. We give an example. Reversing the order of multiplication in the previous DCF we obtain, in particular, Y D̄ = D Ȳ, which indicates that the inverse of the map Z of (2) is given by Z^{-1} g = π_D(Y g). That this is indeed true can be directly verified by a simple computation, using the Bezout equations arising from the second DCF. In the sequel we will see how DCF arise in the context of Hardy spaces.

Note that (1) can be written as D̄^{-1} Ē = E D^{-1}. This opens up the possibility of applying this theorem to the analysis of realizations based on polynomial models. This is taken up next.

3. Realization theory

3.1. Shift realizations

A p × m rational matrix function can be considered as a transfer function in two ways. We will be interested in realizations associated with rational functions having the following representation:

    G = V T^{-1} U + W,        (3)

where T, U, V, W are appropriately sized polynomial matrices. Each representation of the form (3) is a basis for a state space realization and a corealization. This is summed up in the following theorem.

Theorem 7. Let G be a proper p × m rational function and assume G = V T^{-1} U + W, where T, U, V, W are appropriately sized polynomial matrices.
1. Define, in the state space X_T, a quadruple of maps A, B, C, D, with A : X_T → X_T, B : R^m → X_T, C : X_T → R^p and D : R^m → R^p, by

    A = S_T,
    B ξ = π_T(U ξ),
    C f = (V T^{-1} f)_{-1},
    D = G(∞).        (4)

Then G = ( A  B ; C  D ). The realization is reachable if and only if T and U are left coprime, and observable if and only if T and V are right coprime.
2. Define, in the row state space X_T^r, a system by

    A = S_T,
    B η = π_T(η V),
    C f = (f T^{-1} U)_{-1},
    D = G(∞).        (5)

Then G = ( A  B ; C  D ). This corealization is reachable if and only if T and V are right coprime, and observable if and only if T and U are left coprime.

Using the realization (4) and the corealization (5), as well as the isomorphism between polynomial and rational models, one can transfer these realizations to the context of rational models. We will find ample use for both representations when studying problems of geometric control theory. It should be noted that the advantage of basing a realization procedure on representations of the form (3), in contrast to one-sided matrix fractions, is that it allows us to study, from a functional point of view, realizations that are neither reachable nor observable.
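A small computational sketch of the scalar case (example data assumed, not from the paper): the classical companion-form realization, which the shift realization generalizes, can be checked numerically against the matrix fraction it represents.

```python
import numpy as np

# Sketch (scalar example, data assumed): the classical companion-form
# realization that the shift realization generalizes.
a = np.array([6.0, 11.0, 6.0])    # d(s) = s^3 + 6 s^2 + 11 s + 6
b = np.array([1.0, 2.0, 0.0])     # strictly proper part (2s + 1)/d(s)
W = 0.5                           # G(inf)

n = len(a)
A = np.zeros((n, n)); A[:-1, 1:] = np.eye(n - 1); A[-1, :] = -a
B = np.zeros((n, 1)); B[-1, 0] = 1.0
C = b.reshape(1, n)

def G_state(s):                   # C (sI - A)^{-1} B + W
    return (C @ np.linalg.solve(s * np.eye(n) - A, B))[0, 0] + W

def G_frac(s):                    # (2s + 1)/d(s) + W
    return np.polyval([2.0, 1.0], s) / np.polyval([1.0, 6.0, 11.0, 6.0], s) + W

for s in (1.0 + 1.0j, -0.3 + 2.0j, 5.0):
    assert np.isclose(G_state(s), G_frac(s))
print("companion-form realization matches the matrix fraction")
```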

Theorem 8. Let

The polynomial coprime factorizations appearing in Theorems 6 and 8 arise most naturally in the study of Hankel operators. In fact, if G is a p × m rational matrix function, the Hankel operator H_G : F^m[z] → z^{-1}F^p[[z^{-1}]] is defined by

be a matrix fraction representation of a proper, p x m rational function . 1. In the state space

Xb

a system is defined by

(12)

(6)

Then G

= (~ I ~

);

=

It is easy to check that S_- H_G = H_G S_+, which shows that Ker H_G is a submodule of F^m[z] which, as a linear space, is of finite codimension, whereas Im H_G is a finite dimensional submodule of z^{-1}F^p[[z^{-1}]]. Thus there exist polynomial matrices D and D̄ for which

this realization is

reachable and it is observable if and only if E and D are right coprime. 2. In the state space X15- a system is defined by

Ker He {

A=S~

D_

= ;r15E~ , -1 Ch = (D h)-l B~

{

(7)

This leads immediately to the coprime factorizations G = D̄^{-1}Ē = E D^{-1}. Thus a link has been established between the study of Hankel operators and that of intertwining maps. The same connection can be made also in the analytic context and is basic to the study of the Nehari problem and the commutant lifting theorem. These are central tools in H∞ control.
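The finite dimensionality of Im H_G can be seen numerically through the standard fact (not stated explicitly here) that the rank of the Hankel matrix of Markov parameters equals the McMillan degree. A small sketch with assumed data:

```python
import numpy as np

# Sketch (assumed example data): the Hankel matrix of Markov parameters of a
# rational G has finite rank equal to the McMillan degree. The realization
# below has an unreachable mode, so the rank (2) is below the state dimension (3).
A = np.diag([-1.0, -2.0, -3.0])
B = np.array([[1.0], [1.0], [0.0]])   # third mode unreachable
C = np.array([[1.0, 1.0, 1.0]])

N = 4
markov = [(C @ np.linalg.matrix_power(A, k) @ B)[0, 0] for k in range(2 * N)]
H = np.array([[markov[i + j] for j in range(N)] for i in range(N)])

print("Hankel rank =", np.linalg.matrix_rank(H), " state dimension =", A.shape[0])
```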

D = G(oo).

Then G = (

~ I ~ ); this realization is ob-

servable and it is reachable if and only if E and D are left coprime. 3. In the state space xf a system is defined by

From any minimal realization we can easily construct a basis for the model space associated with a right coprime factorization. This construction goes back to Hautus and Heymann [1978] and is extremely useful.

A= SD

B~ = ~_D-1~:

{ Ch = (Eh)-l D = G(oo) .

- (~) CfD'.

Then G -

ImHc

(8)

Theorem 9. Let G be a proper rational function of McMillan degree n . Let

this realization is

reachable and it is observable if and only if E and D are right coprime. 4. In the state space Xp a system is defined by

(13)

be a minimal realization . Then C(sI - A)-l

(9)

Defining N(s)

= K(s)B --1-

Then G = (

~ I~

=

l)1 J{(s) for some polynomial matrices K and D.

G= Dx +D

N.

we have ( 14)

): this realization is observable, and it is reachable if and only if Ē and D̄ are left coprime.
5. Under the coprimeness assumptions the realizations (6) and (7) are isomorphic, the isomorphism Z : X_D → X^{D̄} being given by (10). Similarly, the realizations (8) and (9) are isomorphic, the isomorphism being given by (11).

A basis for

X~

and a basis for


is given by the columns k i of K

Xp

is given by the columns of

C(sI - A)-I. In particular

x! = {C(sI -

.4)-I~ I ~

E

R~} .

valued, analytic functions in the open right half plane which satisfy

(15)

111112

Moreover, the matrix representation of the shift realization corresponding to (14) with respect to the basis B {hI .. . . ,h n } is given by (13).


-00

=

as well as the subspace of L² of their boundary values. Similarly we define H²_-. As a consequence of the Fourier-Plancherel and the Paley-Wiener theorems we have the orthogonal direct sum decomposition

As a result we conclude that. given any polynomial matrix Q , D I Q is strictly proper if and only if there exists a constant. matrix J{ for which Q(z) = ~(z)l\·. Similarly P D- I is strictly proper if and only if there exists a constant matrix L for which P(z) L\II(z).

" L"-= H"+ $ H:.

=

We denote by P_+ and P_- the orthogonal projections of L² onto H²_+ and H²_- respectively.

Theorem 10. Let G be a proper rational matrix having the minimal realization G

=sup foo IIf(x + iy)1I 2 dy < x>o

= ( ~ I ~ ).

We will use the same notation for row and column vector spaces. Usually it will be clear from the context which space is considered .

Then 1. If G I is proper and Ker H G1 ~ Ker H G then G 1 admits a realization of the form

Before the introduction of Hankel operators we digress a bit on invariant subspaces of H²_+. Since, with an eye towards continuous time problems, we are using the half planes for our definition of the H² spaces, we do not have the shift operators conveniently at our disposal, and in order to keep the scope of the paper within reasonable limits we avoid introducing the translation semigroups and their Fourier transforms. This forces us into a slight departure from the usual convention.

2. If G I is proper and ImHG 1 C ImHG then G I admits a realization of the form

Corollary. Let G = N D- I have the reachable realization (A : E, C ) and let G' = !vI D- 1 . Then G' has a realization (A , E , Co) for some Co.

The algebra H∞_+ can be made an algebra of operators acting on H²_+ by letting each ψ ∈ H∞_+ induce a multiplication map M_ψ : H²_+ → H²_+ which is defined by

3.2. Hardy spaces

    M_ψ f = ψ f,   for f ∈ H²_+.

The theme of realization theory can be developed also in the analytic context of Hardy spaces. This has some disadvantages, inasmuch as stability is generally assumed, but it has the great advantage of a Hilbert space structure which is ideally suited to the study of optimization problems. Also it has the technical advantage that backward invariant subspaces are, simultaneously, the counterparts of both polynomial and rational models, and thus they provide a more symmetric setting.

(16 )

In algebraic language we have introduced an H∞_+ module structure in H²_+. The adjoint of M_ψ is given by

    M_ψ* f = P_+ ψ* f.        (17)

Both M_ψ and M_ψ* are special cases of Toeplitz operators. A subspace M ⊂ H²_+ is called an invariant subspace if, for each ψ ∈ H∞_+, we have M_ψ M ⊂ M. Similarly, a subspace M ⊂ H²_+ is called a backward invariant subspace if, for each ψ ∈ H∞_+, we have M_ψ* M ⊂ M. Clearly backward invariant subspaces are just orthogonal complements of invariant subspaces. Invariant subspaces have been characterized by Beurling in terms of inner functions. We recall that an m × n matrix function M ∈ H∞_+ is called inner if ||M||_∞ ≤ 1 and its boundary values on the imaginary axis are isometric a.e., i.e. M(it)* M(it) = I.

Let L∞ denote the space of measurable, essentially bounded, matrix valued functions on the imaginary axis. We denote by H∞_+ the Banach space of all bounded, matrix valued, analytic functions in the open right half plane. In order not to encumber the notation, we omit the reference to the size of the matrices as this should always be clear from the context. In the case of square matrices H∞_+ is a Banach algebra. By a theorem of Fatou, these functions have nontangential boundary values a.e. on the imaginary axis. Thus H∞_+ can be identified with a closed subspace of L∞. H²_+ will denote both the Hardy space of vector

Theorem 11. [Beurling-Lax-Halmos] A non-


~rivial subs~ace

Ht

torizations M = AIj Nj . Then VI C V" if and only if Ml = AI2R for some inner function R. Alternatively Then VI C V2 if and only if 1'12 RNl for some inner function R. 3. We have

;vt C is an inva~iant subspace If and only If M = AI H + for some mner function M.

=

It can be shown that a nontrivial subspace J\.1 C Hi is an invariant subspace with a finite dimensional orthogonal complement if and only if ;\.1 M-Hi for some rational inner function M .

=

where 1'1"1,,/ is the greatest common left inner factor of all Mi . 4. vVe have

The availability of Beurling's theorem allows the arithmetization of the geometry of invariant subspaces. Proposition 12. Let Hi be the Hardy space of column vector functions . Then

where M>. is the least common right inner multiple of all AI; . and N>. is the greatest common right inner divisor of all Ni.

1. Let J\.1, N be full invariant subspaces having the representations M QHi and N ~Hi respectively. Then MeN if and only if Q = ST for some inner function T. 2. Let J\.1 i QiHi , i 1, .... s be full invariant subspaces. Then

=

=

;\.11

=

Vie say that inner functions Ni , i = 1, .. . , S are mutually right cop rime if, for each i , Ni is right coprime with the least common left inner multiple of the i'ij . j #- i .

=

+ .. . + ;\.1, =

Proposition 14. The inner functions Ni are mutually right cop rime if and only if for the mner function :'1 defined by

QHi

where Q is the greatest common left mner factor of all Qi . QiHi, i 1, . . . , s be full invariant 3. Let J\.1i subspaces . Then

=

nMi

2 V H +' - ni, =1 H"N+ i

=

we have

= RH~

det X

where R is the least common right inner multiple of all Q i.

= IIt=1 det Ni, =

Corollary 15. Given the factorizations !vI MiNi , we have the algebraic direct sum decomposition

The class of subspaces H(Q), with Q inner, is important for modelling and for the development of realization theory. Since elements ψ ∈ H∞_+ act in H²_+ by multiplication and leave QH²_+ invariant, they induce linear operators acting on H(Q) which are defined by T_{ψ,Q} f = P_{H(Q)} ψ f. The adjoints are given by T_{ψ,Q}* f = P_+ ψ* f. Clearly, if ψ ∈ H∞_+, we have ||T_{ψ,Q}|| ≤ ||ψ||_∞.

if and only if Mi are mutually left coprime and Ni are mutually right coprime . The following theorem contains a continuous time version of the celebrated commutant lifting theorem. In this connection see Sarason [1968] and Sz,-Nagy and Foias [1970] . The invertibility conditions are from Fuhrmann [1968].

=

We note that , for T :S 0, the functions exp-r(s) e--r, are all in H'f . The operators TexPr form a strongly continuous semigroup of operators on

{MHi}.L.

Theorem 16. Given two square inner functions Q and Q in H'f. Then

The results of Proposition 12. can be adapted to the context of the coinvariant subspace Hc(M). This provides the counterpart to Theorems 2. and

l. A map Z : H(Q) H(Q) is an H'f-module homomorphism, i.e. satisfies

3. Theorem 13. Let M be an inner function and Hc(M) = {M Hi}.L . Then

for

H'f

if and only if there exist H, H E H+ satisfy-

l. A subspace V of Hc(AI ) is invariant under all operators Tt!; ,M ifand only if V l"vIIHc(M") , where lvI = M 1 :'v'I" is a factorization of M into the product of inner functions. 2. Let Vi Aifi Hc ( Nd , i 1" . . , S be invariant subspaces of H~U"I) corresponding to the fac-

=

=

1/! E

Ing

HQ=QH

=


(18)

and in terms of which

have the polynomial cop rime factorizations

f

E H(Q).

(19)

2. The map Z defined by ( 19) is injective if and only if Q and H are right coprime. 3. The map Z defined by (19) is surjective if and only if Q and H are left cop rime. 4. The functions H . H can be taken to satisfy

then

and

IIHlloo = IIHll co = 11211· Clearly, equation (18) can be rewritten as 4. With respect to the corealization (5) of !\' in the state space

= Sff

{

~B

"= -sf~s) + liI1lq_oo uf(u)K(s)

{

B

Then we

{

fC

~B

= = =

D

=

if and only if G 1 = G F + H for some F, H E Hf. Theorem 18. Let r(t) be a p x m matrix function, defined on [0 , (0). whose Laplace transform, G, is strictly proper, stable and rational , i.e. in H~. Then

(20)

sf(s ) (21 )

I.

from a different perspective. Let f{1 and K2

=

oo

=

by

Af

- lims _ ~(I\. - 1) lim s _ oo sf( s)

Suppose now that the inner function has a factorization of the form K = f{ 1 !{2 , with f{j inner and f{j ( (0) I . This of course implies the inclusions K1J{2 H~ C KIH~ and , equivalently, {I{lH~.}l. C {K1K2Hi}l. . This can be seen also

1. A time domain minimal realization of G is given in the state space

x = span{r(t + T)~IT ~ O , ~ ERn}

sf ( ~ )

Part 3. of Theorem 18 . sheds light on one of the intersection points between the algebraic and analytic theories . It allows the simultaneous interpretation of the same object both as a rational model space as well as a backward invariant subspace.

ImHG 1 C ImHG

Cf

= - bm._ oo sf(s) .

(..,1./)(s)

[1975,1981].

{

~

6. A frequency domain minimal corealization of K is given in the state space X = .eX by

The following result on range inclusion of Hankel operators is of use. It is taken from Fuhrmann

H~ .

1).

(A" I)(s)

f E H~ .

B~

= U{7r- = ~(f{(s) -

5. The Hilbert space adjoints of the maps (A , B) defined above are, for f E Hr(I{) , given by

Given a function G E LOO the Hankel operator H G : H~ ----> H: is defined by

Let G , G 1 E

we have

(.41)(s) = sf(s) - lim._ oo sf(s)

which provides a factorization, referred to as the Douglas-Shapiro-Shields (DSS) factorization, of the L∞ function G. Not every L∞ function admits such a factorization, and for more on this see Fuhrmann [1981].

Proposition 17. have

X!

f'

= E2D2 -

1

and let E2Dl

1

= D1 -

= E1D 1 1 l-

E 2 . So

K

r ( t)~

f rO) Using H e(I{) = xf, we have the inclusion Xf2 Dl :J X!! 1 • This of course is consistent with the characterization of rational model inclusions given in Theorem 2.

2. Let the space X be defined by (20) and let X = .eX be its Laplace transform . Then Xl. is an invariant subspace of H~ having the representation

for some rational , normalized inner function

f{ . 3. Let the normalized , rational inner function K


4. Transfer function factorization

4.1. Factorization of rational functions

Given minimal realizations (A_i, B_i, C_i), i = 1, 2, of G_i, with δ(G_i) = dim X_i the McMillan degree of G_i, we have a realization of G_2 G_1 in the state space X_1 ⊕ X_2. This realization may not be canonical, but it gives an upper bound on the McMillan degree of a product of rational matrices. Specifically we have the inequality δ(G_2 G_1) ≤ δ(G_1) + δ(G_2).
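A small numerical sketch of this bound (example data assumed, not from the paper), using the standard fact that the McMillan degree equals the rank of the Hankel matrix of Markov parameters: a pole-zero cancellation in the cascade makes the inequality strict.

```python
import numpy as np

# Sketch (assumed data): the series interconnection G2*G1 has a realization on
# X1 (+) X2, so delta(G2 G1) <= delta(G1) + delta(G2); here a pole-zero
# cancellation makes the inequality strict (McMillan degree 1, not 2).
A1, B1, C1, D1 = np.array([[-2.0]]), np.array([[1.0]]), np.array([[-1.0]]), np.array([[1.0]])  # G1 = (s+1)/(s+2)
A2, B2, C2, D2 = np.array([[-3.0]]), np.array([[1.0]]), np.array([[-1.0]]), np.array([[1.0]])  # G2 = (s+2)/(s+3)

# Cascade realization of G2*G1 on the direct sum of the state spaces.
A = np.block([[A1, np.zeros((1, 1))], [B2 @ C1, A2]])
B = np.vstack([B1, B2 @ D1])
C = np.hstack([D2 @ C1, C2])

N = 3
markov = [(C @ np.linalg.matrix_power(A, k) @ B)[0, 0] for k in range(2 * N)]
H = np.array([[markov[i + j] for j in range(N)] for i in range(N)])
print("state dimension:", A.shape[0], " McMillan degree of G2*G1:", np.linalg.matrix_rank(H))
```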

We tackle now the inverse problem: given a transfer function, when can it be factored into the product of two transfer functions? We will assume throughout this section that the transfer function is biproper, that is, proper with a proper inverse, which is equivalent to the constant term being invertible. If the constant term is actually equal to the identity, we will call it a normalized biproper function. In particular the inverse of a normalized biproper function is also normalized biproper.

We pass now to the basic theorem concerning factorization of transfer functions, due to Sakhnovich [1976]; see also Bart, Gohberg, Kaashoek and Van Dooren [1979].

Theorem 19. [Sakhnovich] Let

    G = ( A  B )
        ( C  I )

be a biproper rational function with the realization given in the state space X. Then a necessary and sufficient condition for G to admit a factorization G = G_2 G_1, with G_i normalized biproper rational functions, is that X = M_1 ⊕ M_2 with M_1 an A-invariant subspace and M_2 an A^×-invariant subspace, where A^× is defined by

    A^× = A - BC.

4.2. Factorization of inner functions

We proceed now to study square inner functions and their factorization properties. In this case, given a normalized rational inner function K, we are interested in factorizations of the form K = K_2 K_1, where the K_i are also normalized inner functions. It is a consequence of Beurling's theorem that factorizations of inner functions are related to invariant subspace inclusions. However, here we focus on the induced action in the model space, or backward invariant subspace, H_r(K). To make the connection with the Sakhnovich factorization theorem we need to establish the invariance with respect to a single operator. Contrary to the case of ψ ∈ H∞_+, where the induced map on H_c(K) was defined by T_{ψ,K} f = P_{H_c(K)} ψ f, the "multiplication by s" operator is unbounded in H²_+ and hence we cannot apply the orthogonal projection P_{H_c(K)} to s f(s). We circumvent this difficulty, using the identification H_c(K) = X^D, where K = D̄^{-1} D is a polynomial coprime factorization of K, to define the operator A : H_c(K) → H_c(K) given by A f = s f(s) - lim_{s→∞} s f(s). In fact, it was shown in Fuhrmann [1981] that H_c(K) = X^D is coinvariant, i.e. invariant under all operators M_ψ*, if and only if it is invariant under all the maps π_- p f for an arbitrary polynomial p. Obviously the last condition is equivalent to invariance under the map S f = π_- s f = s f(s) - lim_{s→∞} s f(s). It is certainly not an accident that this map appears as the generator in the shift realization.

Suppose now that the inner function K has the minimal realization (21), in the state space H_r(K) = X^D. Since H_r(K) has a Hilbert space structure, we can compute the Hilbert space adjoints of all the maps appearing in this realization. This was done in Theorem 18. But the innerness of K implies K* = K^{-1}, and hence we get

    (A* f)(s) = -s f(s) + lim_{s→∞} s f(s) · K(s)
              = -(s f(s) - lim_{s→∞} s f(s)) + lim_{s→∞} s f(s) (K(s) - I),

that is, A* corresponds to -(A - CB).

So, if M ⊂ H_r(K) is a backward invariant subspace, it has to be of the form M = H_r(K_1) for a factorization K = K_2 K_1. Clearly we have the orthogonal direct sum decomposition

    H_r(K) = H_r(K_1) ⊕ H_r(K_2) K_1,

and M^⊥ = H_r(K_2) K_1 is an A*-invariant subspace, hence also (A - CB)-invariant. Note that, as we deal here with row vector Hardy spaces, we have A^× = A - CB, differing from the definition of A^× in Sakhnovich's theorem.

Before proceeding with the analysis of a state space characterization of the factorization of inner functions, we digress a bit on how spectral factorization techniques enter into the construction of rational inner functions. Note that if p is a Hurwitz polynomial and p* is defined by

    p*(s) = p(-s̄),

then q(s) = p*(s)/p(s) is an inner function in the right half plane. Thus the pole structure of q determines q uniquely, up to a constant factor of modulus 1. We find it convenient to normalize the inner function by requiring that its value at infinity is equal to 1. There are other inner functions derived in the same way but of lower McMillan degree if we allow pole-zero cancellations. In the multivariable case, the pole structure can be prescribed from either side. Moreover it can be done in two distinct but related ways. We consider the prescription of left poles. One way is to specify a stable, nonsingular, polynomial matrix to act as


left denominator. The other way depends on the fact that, given such a D, there exists a unique, up to isomorphism, observable pair (A, C), with A stable, such that

    X^D = {C(sI - A)^{-1} ξ | ξ ∈ R^n}.

Thus it suffices to prescribe such a pair. Naturally, in the functional form the construction depends on polynomial spectral factorization; see in this connection Callier (1985) and the further references therein. This translates into a homogeneous Riccati equation in the state space setting. This is the content of the next theorem.

Theorem 20.
1. Given an observable pair (A, C), with A stable. Then ( A  B ; C  I ) is a normalized inner function if and only if B = -YC* with Y a nonnegative solution of the homogeneous Riccati equation

    AY + YA* + YC*CY = 0.        (22)

The realization ( A  -YC* ; C  I ) is observable. The reachable subspace is given by Im Y.
2. Let D be a square, nonsingular, stable polynomial matrix and let (A, C) be determined by D as above. Then U = D^{-1} N is an inner function if and only if N is a solution of the spectral factorization problem

    N N* = D D*.

The maximal McMillan degree inner function is obtained if we take N to be the antistable spectral factor.

As a corollary to the previous theorem we obtain the following result, giving a well known characterization of rational inner functions. See Genin, Van Dooren, Kailath, Delosme and Morf (1983), Finesso and Picci (1982) and also Fuhrmann and Ober (1993), where a slightly more general result is derived. Of course one can give a direct proof of this result based on the state space isomorphism theorem. This is the approach taken in the previously mentioned papers.

Theorem 21.
1. A function U is rational and inner in H∞_+ if and only if it has a minimal state space realization of the form

    U = ( A     B )
        ( -B*X  I )

with A stable and X the unique, positive definite solution of the homogeneous ARE

    A*X + XA + XBB*X = 0.

2. A function U is rational and inner in H∞_+ if and only if it has a minimal state space realization of the form

    U = ( A  -YC* )
        ( C   I   )

with A stable and Y the unique, positive definite solution of the homogeneous ARE

    AY + YA* + YC*CY = 0.

Theorem 21 establishes a connection between inner functions and the Riccati equation. On the other hand we know already the connection between factorizations of inner functions and invariant subspaces. The direct link between solutions of Riccati equations and invariant subspaces goes back to the classic paper Willems (1971). In this paper Willems parametrizes the set of solutions of an ARE via the set of invariant subspaces of a certain related operator. For more on this theme, as well as connections to factorization theory, we refer to Finesso and Picci (1982) and Fuhrmann [1985, 1989].
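A small numerical illustration of part 2 of this characterization (a sketch, not from the paper; the matrices A, C below are arbitrary test data): Y > 0 solves AY + YA* + YC*CY = 0 exactly when P = Y^{-1} solves the observability Lyapunov equation A*P + PA + C*C = 0, so one can build U(s) = I - C(sI - A)^{-1}YC* and check that it is all-pass on the imaginary axis.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Test data (assumed, not from the paper): stable A, observable (C, A).
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
C = np.array([[1.0, 0.0]])

# A*P + PA + C*C = 0  <=>  Y = P^{-1} solves the homogeneous ARE
# AY + YA* + YC*CY = 0 of Theorem 21(2).
P = solve_continuous_lyapunov(A.T, -C.T @ C)
Y = np.linalg.inv(P)
B = -Y @ C.T                      # Theorem 20: B = -YC*

def U(s):
    n = A.shape[0]
    return np.eye(C.shape[0]) + C @ np.linalg.solve(s * np.eye(n) - A, B)

# Check that U is all-pass on the imaginary axis: U(iw)* U(iw) = I.
for w in (0.0, 0.3, 2.0, 17.0):
    Uw = U(1j * w)
    assert np.allclose(Uw.conj().T @ Uw, np.eye(1), atol=1e-10)
print("U(s) = I - C(sI-A)^{-1}YC* is inner for this data")
```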

In view of the preceding remarks, it is natural to develop a factorization theory for inner functions that would unify the arithmetic (i.e. factorization), geometric (i.e. invariant subspaces) and dynamic (i.e. Riccati equation oriented) approaches. This leads to the following.

Theorem 22.
1. Let U be a square inner function in the right half plane, normalized so that U(∞) = I. Let

    U = ( A  B )
        ( C  I )

be a minimal realization with A stable and Y_+ the unique, positive definite solution of the homogeneous ARE

    AY + YA* + YC*CY = 0.        (23)

Then there exists a bijective correspondence between
(a) normalized left inner factors of U,
(b) invariant subspaces of A,
(c) nonnegative definite solutions of the homogeneous ARE (23).
The correspondence is given by

    U_1 = ( A  -YC* )
          ( C   I   ),

where Y is a nonnegative definite solution of the homogeneous ARE (23), and V = Im Y ⊂ R^n is A-invariant.
2. Let U be a square inner function in the right half plane, normalized so that U(∞) = I. Let

This result has important applications in stochastic realization theory.

=

4.3. Coprime factorizations over H∞_+

We continue our study by focusing on coprime factorizations over H∞_+. The motivation for this is twofold. On the one hand was the study of analytic, or rather meromorphic, continuation properties in terms of Hankel operator ranges. This was initiated by Douglas, Shapiro and Shields [1971], carried over to the multivariable case in Fuhrmann [1975], and led to the DSS factorizations. The associated DCF are instrumental in the spectral analysis of intertwining maps, see Fuhrmann [1981]. The other driving force was stabilization theory. It turns out that stabilizing a linear system by dynamic feedback is equivalent to the solvability of a Bezout equation for the coprime factors over H∞_+. We refer to Vidyasagar [1985] for an extensive study of the coprime factorization approach.

be a minimal realization with A stable and X the unique, positive definite solution of the homogeneous ARE A*X + XA + XBB*X = 0.

(24)

Then there exists a bijectiye correspondence ,between (a) Normalized right inner factors of U . (b) Nonnegative definite solution of the homogeneous ARE (24). (c) Invariant subspaces of .4 . The correspondence is giYen by

Two elements M, N ∈ H∞_+ are called (strongly) right coprime if there exists an H∞_+ solution to the Bezout equation VM - UN = I. Similarly M, N ∈ H∞_+ are called (strongly) left coprime if there exists an H∞_+ solution to the Bezout equation MV - NU = I. Note that M, N need not have the same size, only the same number of columns. There is also a weaker notion of coprimeness in H∞_+, see Fuhrmann [1981] for the details, but we will not discuss it here as we focus on rational functions and in this case the two notions of coprimeness coincide. In the sequel we will take coprimeness to mean the strong one.

where X_2 is a nonnegative definite solution of the homogeneous ARE (24), and V = Ker X ⊂ R^n is A-invariant.
There is a natural partial ordering of left inner factors of a given inner function. There is another natural partial order in the set of symmetric solutions of the Riccati equation (22). It is not surprising that these partial orders are related through the following theorem. The proof is based on previous results and hence omitted.

Theorem 23.
1. Let U(s) be inner, and let U(s) = I - C(sI - A)^{-1} Y_+ C* with Y_+ > 0 the unique, positive definite solution of the ARE AY

+ YA* + YC*CY = 0.

Let G be a proper rational matrix-valued function. Then the factorization G = NM^{-1} is called a right factorization of G if N, M are stable rational functions and M is invertible with proper inverse. If N, M are right coprime, then the factorization is called a right coprime factorization. Similarly we define a left coprime factorization G = M̃^{-1}Ñ. The two proper stable rational block matrices

(25)

Let U_i(s) = I - C_i(sI - A_i)^{-1} Y_i C_i*, i = 1, 2, be two left factors of U, with Y_i nonnegative definite solutions of the Riccati equation (25). Then U_2 is a left inner factor of U_1 if and only if Y_2 ≤ Y_1.
2. Let U(s) be inner, and let U(s) = I - B* X(sI - A)^{-1} B with X > 0 the unique, positive definite solution of the ARE A*X + XA + XBB*X = 0.

U) (V-N -u) V ' N! ' with M (AI) having a proper inverse, form a doubly coprime factorization of the proper rational function G , if

(26)

1o 0) [

Let U_i(s) = I - B_i* X_i(sI - A_i)^{-1} B_i, i = 1, 2, be two right factors of U, with X_i nonnegative definite solutions of the Riccati equation (26). Then U_2 is a right inner factor of U_1 if and only if X_2 ≤ X_1.

and


Doubly coprime factorizations are related to the Youla-Kucera parametrization of all stabilizing controllers. This accounts for the importance of coprime factorizations over H∞_+.

From a computational point of view, it is desirable to have state space formulas available for the various (normalized) coprime factorizations. A general approach to such a derivation was initiated in Khargonekar and Sontag [1982]. For normalized coprime factorizations the modified formulas were derived by Meyer and Franklin [1987] and Vidyasagar [1988]. A comprehensive study is given in Fuhrmann and Ober [1993] and the following theorem is taken from there. Some simplifying assumptions have been made in order to fit the format of this paper. Spectral factorizations are central to these derivations and are translated into the solution of various Riccati equations. Usually one Riccati equation arises for the left factorization and the other for the right one.
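A sketch of the flavour of such state space formulas (example data and helper function assumed, not from the paper; this is the standard normalized left coprime factorization built from the filter Riccati equation, in the spirit of the theorem below rather than a reproduction of it): with Z solving AZ + ZA* - ZC*CZ + BB* = 0 and H = -ZC*, the factors Ñ(s) = C(sI - A - HC)^{-1}B and M̃(s) = I + C(sI - A - HC)^{-1}H satisfy G = M̃^{-1}Ñ and ÑÑ* + M̃M̃* = I on the imaginary axis.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch (assumed data): normalized left coprime factorization G = M~^{-1} N~
# for a strictly proper G = C(sI-A)^{-1}B, via the filter Riccati equation
#     A Z + Z A' - Z C' C Z + B B' = 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(1))   # dual/filter ARE
H = -Z @ C.T                                             # output injection
AH = A + H @ C

def tf(Amat, Bmat, Cmat, Dmat, s):
    n = Amat.shape[0]
    return Cmat @ np.linalg.solve(s * np.eye(n) - Amat, Bmat) + Dmat

for w in (0.1, 1.0, 10.0):
    s = 1j * w
    G  = tf(A,  B, C, np.zeros((1, 1)), s)
    Nt = tf(AH, B, C, np.zeros((1, 1)), s)
    Mt = tf(AH, H, C, np.eye(1),        s)
    assert np.allclose(np.linalg.solve(Mt, Nt), G)                      # G = M~^{-1} N~
    assert np.allclose(Nt @ Nt.conj().T + Mt @ Mt.conj().T, np.eye(1))  # normalization
print("normalized left coprime factors verified on the imaginary axis")
```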

Now , clearly, every proper rational transfer function G has a coprime factorization over Hf. To see this, assume G = E D- 1 is a polynomial right coprime factorization . Choose a stable , nonsingular polynomial matrix T such that DT- 1 is biproper, with D, T right coprime. This can easily be done using Rosenbrock's theorem. Thus we can write C = (ET-1)(DT-1)-1 = NM- 1, with N, lvI E Hf . This is the approach taken in Fuhrmann and Ober [1993] . Note that in the previous process we had great freedom in the choice of the denominator matrix T. This choice is severely restricted if we make additional requirements on the cop rime factorizations. In fact if C E H':: and we require the denominators to be inner functions, we recover the DSS factorizations. If we make no restriction on G other that that it is proper and rational, but require the cop rime factorizations to be normal-11 ized (NCF), i.e. , with C !'vI .V N j1;I- ,that lvI" lvI + N* N = I and ;\I }vI + 'FiN = I hold, the choice of T becomes essentially unique and has to be obtained via a polynomial spectral factorization procedure . To see this consider the normalized left coprime factorization case . We have N = ET- 1 and lvI = DT- 1 and the normalization condition translates into E" E + D* D = T*T . So T is obtained by polynomial spectral factorization , under the additional requirement that DT- 1 I. The importance of normalized coprime factorizations became clear in the work of McFarlane and Glover [1989] on robustness issues in stabilization theory. For more on this see Ober and Sefton [1991] and Georgiou and Smith [1990] . Other classes of functions that admit specially normalized cop rime factorizations are the class of bounded real functions and the related class of positive real functions . All the normalizations can be given in the form

=

Theorem 24.

.. I rea I'lzatlOn . G= mllllma

N*) J (

~~

)

G E H':: is strictly proper, then J snormalized coprime factorizations are given by

=

(

(

C

1)

_ ( A - ZC* C

-

-C

lvI s ) -

1-

B

0

where X , Z are , respectively, solutions of the homogeneous Riccati equations A* X + XA - XBB* X = 0 AZ + ZA* - ZC*CZ 0

=

2. For any strictly proper C, h-normalized coprime factorizations are given by

= ( M) NL .L

( _ -NL

=I :

For the standard NCF we have J L =

(~ ~) .

For the bounded real case we take

M

(A - BB" X -B* X

C L

B) I

0

) _ ( A - Zc*C -

-C

1

B

0

A* X + XA + c*C - XBB* X = 0 AZ + ZA.* + BB" - ZC*CZ = 0

3. For any strictly proper, bounded real G , J Bnormalized coprime factorizations are given

) , and finally for the positive real

case the metric is chosen to be Jp =

Ns

where X , Z are, respectively, solutions of the Riccati equations

(~ ~).

(~ ~I

A - BB* X -B*X

lvfs )

( N5

where J is an appropriate metric. For H':: functions the metric is the degenerate one J s =

JB =

(~) '""""C'fD .

1. If

=

(M*

Let C be proper and have the

(~ ~) .


by

(

M) (A+B"BB"X N ,\,B

-

C

B

orem 26 is standard, see Gohberg and Feldman [1971]. Some of the results of Theorem 27 can be found in Nikolskii [1985] .

1)

We recall the definition of Toeplitz operators. As we are working with both row and column spaces we must distinguish between two types of Toeplitz operators.

Definition 25. Let G be an m × m matrix function in L∞. We define the Toeplitz operator T_G^c, acting on the column Hardy space, by

where X, Z are . respectively, solutions of the Riccati equations

=

A" X + XA + CC + XBB" X 0 AZ + ZA" + BB" + ZC"CZ = 0

    T_G^c : H²_+ → H²_+,   f ↦ P_+ G f.

4. For any proper positive real G normalized so that D+D" = J , Jp-normalized coprime factorizations are given by

(

M) p

Np

-Np

=

(A - B CC -(C -

B

Similarly, we define the Toeplitz operator T_G^r acting on the row Hardy space by

    T_G^r : H²_+ → H²_+,   f ↦ P_+ f G.

B-

X ) X)

O- C +OB-X

Mp)

In both cases G is called the symbol of the corresponding Toeplitz operator.

=

A-CB;;ZC- \C 1-IBO-_+o zc-ol

(B-/C-))

We say that where X , Z are , respectively, solutions of the Riccati equations

+ X (A - BC) + C· C + X B B" X = 0

is a right Wiener-Hopf factorization if G_-^{±1} ∈ H∞_-, G_+^{±1} ∈ H∞_+, and

(A" - C· B- )X

(A - BC)Z + Z(.4'- - CB·) + BB" + ZC'CZ O.

=

The derivation of doubly coprime factorizations is now easy. For the details we refer to Fuhrmann and Ober [1993] .

with κ_1 ≥ ··· ≥ κ_m. The indices κ_1, ..., κ_m are called the right Wiener-Hopf factorization indices. Left factorizations and left factorization indices are similarly defined.

5. Wiener-Hopf factorizations

It is well known that if G ∈ L∞ is continuous, and in particular if it is rational, then Wiener-Hopf factorizations exist and the factorization indices, though not necessarily the factorizations themselves, are uniquely defined.

In the analysis of stationary stochastic processes the phase function associated with a rational spectral density function is an important tool. The phase function T_0 is defined by T_0 = W_+^{-1} W_-, where W_- is the stable, minimum phase spectral factor and W_+ is the antistable, maximum phase spectral factor. The phase function itself is an all-pass function and the previous factorization is a special case of a right Wiener-Hopf factorization, to be defined next. However, any rational all-pass function also has left and right DSS factorizations. Thus

Theorem 26. Let G ∈ L∞ and let G = G_- Δ_r G_+ be its right Wiener-Hopf factorization. Then
1. The following statements are equivalent.
(a) The Toeplitz operator T_G^c is injective.
(b) The Toeplitz operator T_{Δ_r}^c is injective.
(c) All right Wiener-Hopf factorization indices are nonnegative.
2. The following statements are equivalent.
(a) The Toeplitz operator T_G^c is surjective.
(b) The Toeplitz operator T_{Δ_r}^c is surjective.
(c) All right Wiener-Hopf factorization indices are nonpositive.
3(a) The Toeplitz operator T_G^c is invertible if and only if all right Wiener-Hopf factorization indices are trivial.

with Q_±, K_± all inner functions. The next two theorems analyse this situation and also give characterizations of invertibility based on considerations of geometry as well as on properties of Hankel operators. The content of this section is based on Fuhrmann and Gombani [1995]. The-


(b) The Toeplitz operator T_G^r is invertible if and only if all left Wiener-Hopf factorization indices are trivial.

(f) We have

In the following theorem the inner functions can be taken to arise from a DSS factorization of an all-pass function , namely To Q:IC K+Q+.

(g) We have

=

H!K+

=

+ Hr(Q+) =

H! .

PHr(K+)Hr(Q+) = Hr(K+) .

Theorem 27. Let Q-,Q+,IC,I{+ be inner functions satisfying

(h) We have 3. The following statements are equivalent (a) The Toeplitz operators TQ+K+ = Tk:Q_ and TQ+K+ = Tx: Q_ are both invertible. (b) AIL left and right, Wiener-Hopf factorization indices of Q+K~ = K:Q_ are trivial. (c) We have

with the coprimeness conditions

Hc(IC) n Q_H~

I

{

satisfied. Then 1. The following statements are equivalent

Hc(IC)

{O}

+ Q_H!

(d) We have

(a) The Toeplitz operator TQ+K+ = Tk:Q_ IS injective. (b) The Toeplitz operator Tk+Q+ = Tk:Q_ is surjective. (c) All right Wiener- Hopf factorization indices of Q+K~ = K:Q_ are nonnegative. (d) We have

PHr(K_)Hr(Q-) {

PHc(Q_ )Hc(IC)

(e ) We have

ICH: nQ_H! { ICH:

{O}

+ Q_H!

(e) We have (f) All singular values of the Hankel operators HQc K. are less than 1. + +

(f) We have

ICH!

+ Hc( Q-)

dim Ker TQc

= H!.

K. H Qc K· + +

+ +

= dim{1 I II

III

= 11/11} ·

(g) We have 6. Geometric control theory In the analysis and synthesis of control systems, especially when using geometric methods as in Wonham [1979], it is necessary to extend the notion of an invariant subspace. Thus, given a trans-

(h) All the singular values of the Hankel operator H'k+Q+ = H'k:Q_ are < 1. 2. The following statements are equivalent (a) The Toeplitz operator TQ+K+ = 1'K:Q_ IS injective. (b) The Toeplitz operator TX+Q+ = T x : Q_ IS surjective. (c) All left Wiener- Hopf factorization indices of Q+K~ = K:Q_ are nonnegative. (d) We have

fer function G with realization G = ( ;

I ~ ) in

a state space X, we say that a subspace V ⊂ X is a controlled invariant subspace if, given any initial condition x_0 ∈ V, there exists a control function u for which the solution of ẋ(t) = Ax(t) + Bu(t) remains in V. It is easy to check that V is controlled invariant if and only if there exists a feedback map F such that (A+BF)V ⊂ V. Thus we can introduce the dual notion. We say that a subspace W ⊂ X is conditioned invariant if there exists an output injection map H such that (A + HC)W ⊂ W. The concepts of controlled and conditioned invariant subspaces were first introduced by Basile and Marro [1969]. The following is a standard characterization. For a

Hr(I{+) n H!Q+ = {O} (e) We have


· . G reaIIzatlOn

= (~). ~ 111

is A + HC-invariant and A + HC|_V is stable. A subspace V is outer detectable if there exists an injection map H such that V is A + HC-invariant and A + HC|_{X/V} is stable. Again, the concepts of inner and outer antidetectability are naturally defined.

h testate space

X, a subspace V ⊂ X is controlled invariant if and only if AV ⊂ V + Im B. Similarly, a subspace V ⊂ X is conditioned invariant if and only if A(V ∩ Ker C) ⊂ V. It is of interest to relate the notions of controlled and conditioned invariant subspaces directly to the transfer function itself. This can be done using the shift realizations. The following proposition is taken from Fuhrmann and Willems [1980] and Fuhrmann [1981]. Since invariant subspaces of the shift operator are in correspondence with factorizations, it is only natural to expect that factorizations play a major role in the characterization of controlled and conditioned invariant subspaces. This is indeed the case, although the factorizations leading to the following result have been eliminated from the statement.

A simple application of duality considerations in an inner product space X implies that V ⊂ X is an outer detectable subspace with respect to the pair (C, A) if and only if V^⊥ is inner stabilizable with respect to the pair (A*, C*). With the previous definitions, the characterizations given in Proposition 28 can be made more specific to take into account stabilizability and detectability. To this end we define a submodule M ⊂ F^p[s] to be a stable submodule if M = F^p[s]E and E is a stable polynomial matrix. Analogously we define antistable submodules. In a similar fashion we define stable, finite dimensional submodules of s^{-1}F^m[[s^{-1}]] as those of the form L = X^D, with D a stable polynomial matrix. Antistable submodules require D to be antistable. Clearly stable submodules coincide with finite dimensional coinvariant subspaces of H²_+.

Proposition 28. Let G be a p x m proper rational function having the polynomial coprime factorization G = ED- 1 =""]')1 E . Then 1. With respect to the realization (8) in the state space X?, a subspace V C X? is controlled invariant if and only if V = ir D L for some submodule L C =-1 Fm [[ z-l]]. 2. vVith respect to the realization (7) in the s~a~e spa~e a subspace ~. C X~ is condltlOned invariant If and only If V = x])n M for some submodule JI C FP [z].

The proof of the following proposition is based on results in Fuhrmann and Willems [1980] and Fuhrmann [1981].

Xt,

Proposition 29. Let G be a p x m transfer function having the polynomial coprime factorizations

Given a linear transformation A in X and an A-invariant subspace V, we denote by A|_V the restriction of A to V. By a slight abuse of notation we will denote by A|_{X/V} the induced map, i.e. the map induced by A in the quotient space X/V. This notation extends to conditioned and controlled invariant subspaces. Thus if V ⊂ X is a controlled invariant subspace for the pair (A, B) and if F is a feedback such that (A + BF)V ⊂ V, then we use the notation A+BF|_V and A+BF|_{X/V} for the restricted and induced maps respectively.
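A small computational sketch of these notions (data assumed, not from the paper): controlled invariance of V = Im V₀ can be tested by the rank condition AV ⊂ V + Im B, and a feedback F with (A + BF)V ⊂ V can be read off from a solution of A V₀ = V₀ X + B U.

```python
import numpy as np

# Sketch (assumed data): test AV ⊆ V + Im B and construct a "friend" F with
# (A + BF)V ⊆ V by solving A V0 = V0 X + B U for X, U.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -2.0, 3.0]])
B = np.array([[0.0], [0.0], [1.0]])
V0 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])          # candidate subspace V = span of columns

M = np.hstack([V0, B])               # AV ⊆ V + Im B  <=>  rank [V B] = rank [V B AV]
assert np.linalg.matrix_rank(np.hstack([M, A @ V0])) == np.linalg.matrix_rank(M)

# Solve [V0 B] [X; U] = A V0 (least squares; exact here), then F maps V0*a to -U*a.
XU, *_ = np.linalg.lstsq(M, A @ V0, rcond=None)
X, U = XU[:2, :], XU[2:, :]
F = -U @ np.linalg.pinv(V0)

# Check: (A + BF) maps V into V.
W = (A + B @ F) @ V0
assert np.linalg.matrix_rank(np.hstack([V0, W])) == np.linalg.matrix_rank(V0)
print("V is controlled invariant; a friend F =", F)
```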

Then 1. With respect to the corealization

(27)

in the state space XDr : a subspace V is outer anti detectable if and only if

Following Schumacher [1981], we say that a controlled invariant subspace V for the pair (A, B) is stabilizable, or inner stabilizable, if there exists a feedback F such that V is A+BF-invariant and A + BF|_V is stable. Analogously, we say that a controlled invariant subspace V is outer stabilizable if there exists a feedback F such that V is A + BF-invariant and A + BF|_{X/V} is stable. Similarly we define inner antistabilizable subspaces.

for some antistable polynomial matrix E+ . 2. With respect to the corealization (27) in the state space X Dr ' a subspace V is outer detectable if and only if

for some stable polynomial matrix F _ .

There are natural dual concepts. Thus a conditioned invariant subspace V is inner detectable if there exists an injection map H such that V


5. With respect to the pair A, B, defined as in (30), the subspace V C Hr(K)QII is inner stabilizable if and only if there exists an inner function Q' such that

3. With respect to the corealization

(28)

in the state space X!?' , a subspace V C x!? I is inner stabilizable if and only if there exists a stable polynomial matrix E_ such that

V =.I\

-E

D,

7. Parametrization of the set of minimal spectral factors

Assume we are given a coercive, rational spectral density matrix , i.e. it is positive definite on the 4. With respect to the corealization (28) in the extended imaginary axis . Moreover , for simplicity, state space X!?' , a subspace V C X!? I is inner we assume it is normalized by cI>( (0) = I. Our aim antistabilizable if and only if there exists an is to parametrize the set of all minimal McMilantistable polynomial matrLx E+ such that lan degree spectral factors, that is square rational matrix functions W satisfying WW* = cI>. v=xf+;r!?' · This is best done by considering the four extremal We are ready now to state the analytic counterspectral factors, W _, W+, W _ , W + , the minimum part to Proposition 29 . phase stable, the maximum phase stable, the minimum phase antistable and the maximum phase Proposition 30. Let K be an inner function . antistable factors respectively. This implies the existence offour inner functions Q±, K± for which 1. With respect to the pair A. C, defined in the W+ W_Q+, W+ = W_Q_ , W_ = W_IC and state space Hr(J() by W+ W +K+ . Any minimal stable spectral fac(Af) (s) sf( s ) - lims- oo sf(s) (29 ) tor is of the form W = W _ Q' , where Q' is a left { factor of Q+ and any minimal , minimum phase limsx sf(s) fe spectral factor W is of the form W = W _ K' , the subspace V C Hr(J\-) is outer antidewhere K' is a left factor of le . It can be shown tectable if and only if that any minimal spectral factor W is uniquely determined by such factorizations of K _ and Q+. This is given schematically by the following diagram. for some inner function Q' . 2. Let K , Q" be skewprime inner functions and Q' let KQ" = Q" K+. \Vith respect to the pair W~-'---.... __ W + A. , C , defined as in (29) , the subspace V C Hr(K)Q" is outer detectable if and only if r

-iT r

"

= =

t

t ----·W--

H r (K)Q" n Hr(PI."+) Hr(K)Q" n H:'K+ .

V

+

3. We have ((Hr(K) n H~Q'lQ")l. = [PHr CK )Hr(Q')]Q" (Hr(K )Q" n Hr(J{ +))l.

= PHr(K )Q,, (HrCQ")K+) 4. \Vith the pair A , B , defined in the state space Hr(K) by

Now factorizations of inner functions are in a bijective correspondence with the set of all non~ negative solutions of a corresponding homogeneous Riccati equation . Leaving out all the details , which can be found in Fuhrmann [1995], this leads to the next theorem which describes the parametrization in state space terms.

~ I~ )

(Af) (s )

sf(s) - lim._ oo sf(s) (30 ) Theorem 31. Let W_ = ( be a min- l) imal realization of the stable , minimum phase the subspace V C Hr(K) is inner antistabilizspectral factor of a normalized spectral function able if and only if there exists an inner funccI> . Let X be a nonnegative definite solution of tion Q' such that the Riccati equation

{

~B

~(I\

X(A· - C" B·)


+ (A -

BC)X

+ XC·CX = 0,

and let Z be a nonnegative definite solution of the Riccati equation

327. P.A. Fuhrmann [1981]' "Duality in polynomial models with some applications to geometric control theory," Trans. Aut. Control, AC-26, 284-295. P.A. Fuhrmann [1981)' Linear Operators and Systems in Hilbert Space, McGraw-Hill. P.A. Fuhrmann [1984], "The algebraic Riccati equation - a polynomial approach", Systems and Control Letters, 5, 369-376. P.A. Fuhrmann [1989], "Elements of Factorization Theory", in H. Nijmeijer, J. M. S. Schumacher (eds.), Three Decades of Mathemati-

A·Z+ZA+ZBB·Z=O. Then each of the following realizations A+BB"Z C + B"Z

I

B-(l+XZ1-1X(C"+ZB)) I

and

cal System Theory. A Collection of Surveys at the Occasion of the Fiftieth Birthday of Jan C. Willems. Lecture Notes in Control

gives a complete parametrization of all minimal spectral factors of .

and Information Sciences Vol. 135 . Springer Verlag, Berlin. P.A. Fuhrmann [1994], " A duality theory for robust control and model reduction" , Lin . Alg. Appl., vols. 203-204,471-578. P.A. Fuhrmann [1995], "On the characterization and parametrization of minimal spectral factors", to appear, Journal of Mathematical

The two realizations above are isomorphic and the intertwining isomorphism is given by the map 1+ XZ.
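A small scalar illustration of the objects being parametrized (example density assumed, not from the paper): for the coercive, normalized density Φ(s) = (4 - s²)/(1 - s²), the four extremal minimal spectral factors are obtained by flipping the zero and/or the pole of the minimum phase stable factor, and each satisfies W(s)W(-s) = Φ(s).

```python
import numpy as np

# Sketch (assumed scalar example): the four extremal minimal spectral factors
# of the coercive, normalized density Phi(s) = (4 - s^2)/(1 - s^2).
def Phi(s):
    return (4 - s**2) / (1 - s**2)

factors = {
    "W_-  (stable, minimum phase)":    lambda s: (s + 2) / (s + 1),
    "W_+  (stable, maximum phase)":    lambda s: (2 - s) / (s + 1),
    "W~_- (antistable, minimum phase)": lambda s: (s + 2) / (1 - s),
    "W~_+ (antistable, maximum phase)": lambda s: (2 - s) / (1 - s),
}

for name, W in factors.items():
    for s in (1j * 0.5, 1j * 3.0, 0.7 + 0.2j):
        assert np.isclose(W(s) * W(-s), Phi(s))
    print(name, "satisfies W(s) W(-s) = Phi(s)")
```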

Acknowledgement

The author wishes to gratefully acknowledge the support of GIF under Grant No . I 184.

Systems , Estimation and Control.

P.A. Fuhrmann and A. Gombani [1995], " A study of rectangular spectral factors", preprint. P.A . Fuhrmann and R . Ober [1993], "On coprime factorizations" , the T. Ando Anniversary Volume, in Operator Theory: Advances and Applications, vol. 62, 39-75. Birkhiiuser Verlag. P.A. Fuhrmann and J .C. vVillems [1979], "Factorization indices at infinity for rational matrix functions", Integral Equat . and Oper. Theory , 2, 287-30l. P.A. Fuhrmann and J .C. Willems [1980], " A study of (A,B)-invariant subspaces via polynomial models" , Int. J. Contr. 31 , 467-494. Y. Genin , P. Van Dooren , T . Kailath, J.M. Delosme and M. Morf [1983], "On B-lossless transfer functions and related questions, Lin. Alg. Appl., 50, 251-275 . T.T. Georgiou and M.C. Smith [1990], "Optimal robustness in the gap metric", IEEE Trans. on Auto. Contr., 35 , 673-686. LC. Gohberg and LA . Feldman [1971], Convo-

References H. Bart, 1. Gohberg, M.A . Kaashoek and P. Van Dooren [1979], " Factorizations of transfer functions", . G. Basile and G. Marro [1969], "Controlled and conditioned invariant subspaces in linear system theory" , J. Optim. Th . & Appl., 3, 306315 . F.M. Callier [1985], " On polynomial matrix spectral factorization by symmetric extraction", IEEE Trans. on Auto. Contr., 30,453-464. R .G . Douglas , H.S. Shapiro & A.1. Shields [1971]' "Cyclic vectors and invariant subspaces for the backward shift" , Ann. Inst. Fourier, Grenoble 20,1, 37-76. 1. Finesso and G. Picci [1982]' "A characterization of minimal spectral factors" , IEEE Trans. Autom. Contr. AC-27, 122-127. P.A. Fuhrmann [1968], "On the corona problem and its application to spectral problems in Hilbert space" , Trans. Amer. Math . Soc. , 132, 55-67. P.A. Fuhrmann [1968], " A functional calculus in Hilbert space based on operator valued analytic functions", Israel 1. J\{ath., 6, 267-278 . P.A. Fuhrmann [1976], " Algebraic system theory: An analyst's point of view", 1. Franklin Inst ., 301 , 521-540. P.A. Fuhrmann [19Ti]' "On Hankel operator ranges, meromorphic pseudo-continuation and factorization of operator valued analytic functions", 1. Lon . Math. Soc . (2) 13, 323-

lution Equations and Projection Methods for their Solution , Amer. Math . Soc. Transla-

tions of Mathematical Monographs, vol. 41. M.L .J. Hautus and M. Heymann [1978], "Linear feedback-an algebraic approach", SIAM 1. Control 16, 83-105 . U. Helmke and P.A . Fuhrmann [1989], "Bezoutians" , Lin. Alg. Appl.. P. Khargonekar and E. Sontag [1982]' "On the relation between stable matrix factorizations and regulable realizations of linear systems over rings", IEEE Trans. on Auto. Contr.,


27, 627-638. D. McFarlane and K. Glover [1989], "Robust controller design using normalized coprime factor plant descriptions" . Lecture Notes in Control and Information Sciences, vo!. 10, Springer Verlag. D. Meyer and G. Franklin [1987], "A connection between normalized cop rime factorizations and linear quadratic regulator theory", IEEE Trans. on Auto. Contr. 32 , 227-228 . N.K. Nikolskii [1985], Treatise on the Shift Operator, Springer. R.J. Ober and J .A. Sefton [1991],"Stability of control systems and graphs of linear systerns", Systems and Control Letters, 17, 265280. G. Picci and S. Pinzoni [1994]. "Acausal models of stationary processes" , Lin. Alg. Appl, 205206 , 997-1043. L.A. Sakhnovich [1976], "On the factorization of an operator valued transfer function", Soviet IvIath. Dokl. , 17, 203-207. D. Sarason [1967], " Generalized interpolation in Hoc ", Trans. Amer. Math. Soc. 127, 179203 . J .M. Schumacher [1981]' "Dynamic feedback in finite- and infinite-dimensional linear systerns", Ph .D. thesis , Free rniversity of Amsterdam. B. Sz.-Nagy and C. Foias [1970], Harmonic analysis of Operators on Hilbert Space , North Holland , Amsterdam. M. Vidyasagar [1988]'''Normalized coprime factorizations for non strictly proper systems", Automatica, 85-94.

M. Vidyasagar [1985], Control System Synthesis, MIT Press. J .C. Willems [1971]' "Least squares stationary optimal control and the algebraic Ricatti equation" , Trans . Automat. Contr., 16, 621634 .
