Generalized inversion of finite rank Hankel and Toeplitz operators with rational matrix symbols


Linear Algebra and its Applications 290 (1999) 119–134

Victor M. Adukov

Department of Mathematical Analysis, South-Ural State University, Lenin avenue 76, 454080 Chelyabinsk, Russian Federation

Received 11 February 1998; accepted 28 September 1998

Submitted by P.A. Fuhrmann

Abstract

The goal of the paper is a generalized inversion of finite rank Hankel operators and of Hankel or Toeplitz operators with block matrices having finitely many rows. To attain it, a left coprime fractional factorization of a strictly proper rational matrix function and the Bezout equation are used. Generalized inverses of these operators and generating functions for the inverses are explicitly constructed in terms of the fractional factorization. © 1999 Elsevier Science Inc. All rights reserved.

Keywords: Hankel and Toeplitz operators; Generalized inversion; Coprime fractional factorization

Introduction

In the works [1,2] Fuhrmann has proposed the following inversion method for finite Hankel matrices. Let $H = \|g_{i+j-1}\|_{i,j=1,\ldots,n}$ be an invertible Hankel matrix. By the matrix $H$ and the number $\xi = g_{2n}$ we can uniquely determine the coprime polynomials $p_\xi(z)$ and $q_\xi(z)$ (the polynomial $q_\xi(z)$ is monic) such that $\{g_i\}_{i=1}^{2n}$ are the Markov parameters of the rational function $p_\xi(z)/q_\xi(z)$. Then $H^{-1}$ is the Bezoutian of the coprime polynomials $q_\xi(z)$ and $a(z)$, where $a(z)$ is a solution of the Bezout equation

E-mail: [email protected].


$$a(z)p_\xi(z) + b(z)q_\xi(z) = 1.$$
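To make the inversion scheme concrete, here is a small scalar sanity check. The data are not from the paper: the polynomials $q(z) = z^2 - 3z + 2$, $p(z) = 1$ and the Bezout solution $a(z) = 1$, $b(z) = 0$ are illustrative assumptions.

```python
import numpy as np

# Markov parameters g_i of p(z)/q(z) = 1/(z^2 - 3z + 2):
# 1/q(z) = sum_{i>=1} g_i z^{-i} with g_1 = 0, g_2 = 1 and the
# recurrence g_k = 3 g_{k-1} - 2 g_{k-2} read off from q(z).
g = {1: 0.0, 2: 1.0}
for k in range(3, 5):
    g[k] = 3 * g[k - 1] - 2 * g[k - 2]

# Finite Hankel matrix H = ||g_{i+j-1}||, i, j = 1, 2.
H = np.array([[g[1], g[2]], [g[2], g[3]]])

# With a(z)p(z) + b(z)q(z) = 1 solved by a(z) = 1, b(z) = 0, the
# Bezoutian of q and a is (q(z)a(w) - a(z)q(w))/(z - w) = z + w - 3.
Bez = np.array([[-3.0, 1.0], [1.0, 0.0]])  # coefficients of z^i w^j

# Fuhrmann's result: H^{-1} equals this Bezoutian.
assert np.allclose(np.linalg.inv(H), Bez)
```

The assertion passes, illustrating the statement for this hypothetical scalar example.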

Similar results have also been obtained by Heinig and Jungnickel [3]. In Ref. [4] the method was extended to the case of finite block Hankel matrices.

In the present paper we will show that the fractional representation and the Bezout equation can also be used for a generalized inversion of finite rank Hankel and Toeplitz operators. We consider Hankel operators that are determined by the infinite block Hankel matrices

$$\begin{pmatrix} a_{-1} & a_{-2} & a_{-3} & \cdots \\ a_{-2} & a_{-3} & a_{-4} & \cdots \\ a_{-3} & a_{-4} & a_{-5} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

or the block Hankel matrices with a finite number of rows

$$\begin{pmatrix} a_{-1} & a_{-2} & a_{-3} & \cdots \\ a_{-2} & a_{-3} & a_{-4} & \cdots \\ \vdots & \vdots & \vdots & \\ a_{-n} & a_{-n-1} & a_{-n-2} & \cdots \end{pmatrix}$$

and the Toeplitz operators with the matrices

$$\begin{pmatrix} c_0 & c_{-1} & c_{-2} & \cdots \\ c_1 & c_0 & c_{-1} & \cdots \\ \vdots & \vdots & \vdots & \\ c_{n-1} & c_{n-2} & c_{n-3} & \cdots \end{pmatrix}.$$

We restrict our attention to the operators whose symbols $a(t) = \sum_{j=-\infty}^{-1} a_j t^j$ and $c(t) = \sum_{j=-\infty}^{n-1} c_j t^j$ are rational $p \times q$ matrix functions. We will also assume that the poles of $a(t)$ and $t^{-n}c(t)$ lie in the unit disk. The operators under consideration have finite rank. Hence their images and kernels are complemented subspaces and the operators are generalized invertible. Recall that a linear bounded operator $A$ is called generalized invertible if there exists an operator $A^\dagger$ (a generalized inverse of $A$) such that $AA^\dagger A = A$ (see, e.g., [5]). Our goal is to obtain in explicit form generalized inverses for finite rank Hankel and Toeplitz operators. We will consider a generalized inverse having the additional property $A^\dagger A A^\dagger = A^\dagger$. In matrix theory such generalized inverses are called (1,2)-inverses [6]. It turns out that the matrix fractional representation

$$a(t) = L^{-1}(t)N(t)$$


of the strictly proper rational matrix function $a(t)$ is central to all further developments. Here $L(t)$, $N(t)$ are left coprime matrix polynomials in $t$, and $L(t)$ is a nonsingular $p \times p$ matrix polynomial. This representation is called a left coprime fractional matrix factorization of $a(t)$. The coprimeness condition is equivalent to the solvability of the following Bezout equation

$$L(t)U(t) + N(t)V(t) = I_p.$$

Here $U(t)$ and $V(t)$ are matrix polynomials in $t$. The fractional factorization of a transfer matrix function and the Bezout equation are widely used in solving several important problems of linear system theory (see, e.g., [7–9]). Besides the inversion of finite Hankel matrices, the fractional factorization was applied to a description of the kernel of finite Hankel matrices [3] and of the kernel and image of Hankel operators with rational symbols [10,11]. In this paper we obtain formulas for a generalized inversion of finite rank Hankel and Toeplitz operators with rational matrix symbols in terms of the factor $L(t)$ from the fractional factorization of their symbols and the polynomial $V(t)$ from the solution $(U(t), V(t))$ of the Bezout equation.

1. Notation and definitions

Let $\mathbb{C}^{p\times q}$ be the set of complex $p \times q$ matrices. By $W_{p\times q}$ we denote the set of $p \times q$ matrices with entries in the Wiener algebra $W$. If $a(t) \in W_{p\times q}$, then

$$a(t) = \sum_{k=-\infty}^{\infty} a_k t^k, \qquad a_k \in \mathbb{C}^{p\times q},\ |t| = 1,$$

where $\sum_{k=-\infty}^{\infty} |a_k| < \infty$. Here $|\cdot|$ is any norm in $\mathbb{C}^{p\times q}$. The set $W_{p\times q}$ endowed with the norm $\|a(t)\| = \sum_{k=-\infty}^{\infty} |a_k|$ is a Banach space. We will denote by $W_{p\times q}(\Omega)$, where $\Omega \subset \mathbb{Z}$, the subspace of $W_{p\times q}$ consisting of matrix functions of the form

$$a(t) = \sum_{k\in\Omega} a_k t^k, \qquad |t| = 1.$$

For brevity we will use the notation $W^{\pm}_{p\times q}$ for $W_{p\times q}(\mathbb{Z}_\pm)$, where $\mathbb{Z}_\pm$ is the set of all nonnegative/nonpositive integers. It is obvious that matrix functions in $W^{+}_{p\times q}$ ($W^{-}_{p\times q}$) are analytic in $D_+ = \{z \in \mathbb{C} : |z| < 1\}$ ($D_- = \{z \in \mathbb{C} \cup \{\infty\} : |z| > 1\}$). By $E_{q\times 1}$ we denote any of the following Banach spaces of doubly infinite sequences $\{x_k\}_{k=-\infty}^{\infty}$ ($x_k \in \mathbb{C}^{q\times 1}$): $l^s_{q\times 1}$ ($1 \le s < \infty$), $c_{q\times 1}$, $c^0_{q\times 1}$, $m_{q\times 1}$. We will denote by $E_{q\times 1}(\Omega)$, where $\Omega \subset \mathbb{Z}$, the subspace of $E_{q\times 1}$ consisting of sequences $\{x_k\}_{k=-\infty}^{\infty}$ for which $x_k = 0$ if $k \notin \Omega$. It is evident that


$$E_{q\times 1} = E_{q\times 1}(\mathbb{Z}_+) \dotplus E_{q\times 1}(\mathbb{Z}^*_-).$$

Here $\mathbb{Z}^*_-$ is the set of all negative integers. Let $Q_+$ be the projector from $E_{q\times 1}$ onto $E_{q\times 1}(\mathbb{Z}_+)$ along $E_{q\times 1}(\mathbb{Z}^*_-)$ and let $Q_-$ be the complementary projector. Similarly, we denote by $P_+$ the projector from $E_{p\times 1}$ onto $E_{p\times 1}(\mathbb{Z}_+)$ along $E_{p\times 1}(\mathbb{Z}^*_-)$ and by $P_-$ the complementary projector. If $a(t) = \sum_{k=-\infty}^{\infty} a_k t^k \in W_{p\times q}$, then we will use the notation $\tilde a$ for the convolution operator acting from $E_{q\times 1}$ into $E_{p\times 1}$ according to the formula

$$(\tilde a x)_i = \sum_{j=-\infty}^{\infty} a_{i-j} x_j, \qquad i \in \mathbb{Z}.$$

A Toeplitz (or a discrete Wiener–Hopf) operator with the symbol $a(t) \in W_{p\times q}$ is defined by the formula

$$T_a = P_+ \tilde a Q_+ | E_{q\times 1}(\mathbb{Z}_+), \tag{1.1}$$

that is,

$$(T_a x)_i = \sum_{j=0}^{\infty} a_{i-j} x_j, \qquad i = 0, 1, 2, \ldots.$$

Hence the operator $T_a$ from $E_{q\times 1}(\mathbb{Z}_+)$ into $E_{p\times 1}(\mathbb{Z}_+)$ is determined by the infinite block Toeplitz matrix

$$\begin{pmatrix} a_0 & a_{-1} & a_{-2} & \cdots \\ a_1 & a_0 & a_{-1} & \cdots \\ a_2 & a_1 & a_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
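As a concrete illustration (a sketch, not taken from the paper), a finite $N \times N$ section of $T_a$ can be assembled directly from the Fourier coefficients $a_k$ of the symbol; the scalar symbol below is an illustrative assumption.

```python
import numpy as np

def toeplitz_section(coeff, N):
    """N x N section of the Toeplitz operator T_a.

    `coeff` maps an integer k to the (here scalar) Fourier
    coefficient a_k of the symbol; entry (i, j) is a_{i-j}.
    """
    return np.array([[coeff(i - j) for j in range(N)] for i in range(N)])

# Example symbol a(t) = t^{-1} + 2 + t (scalar, p = q = 1).
a = {-1: 1.0, 0: 2.0, 1: 1.0}
T = toeplitz_section(lambda k: a.get(k, 0.0), 4)
assert np.allclose(T, T.T)           # symmetric symbol -> symmetric section
assert np.allclose(np.diag(T), 2.0)  # constant diagonal a_0
```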

For Toeplitz operators there is the following property of partial multiplicativity:

$$T_{aa_+} = T_a T_{a_+}, \qquad T_{a_- a} = T_{a_-} T_a. \tag{1.2}$$

Here $a_+(t) \in W^+_{q\times k}$, $a_-(t) \in W^-_{l\times p}$. We will also use the Wiener–Hopf operators with respect to the space $E_{q\times 1}(\mathbb{Z}^*_-)$:

$$T'_a = P_- \tilde a Q_- | E_{q\times 1}(\mathbb{Z}^*_-).$$

For these operators we have

$$T'_{aa_-} = T'_a T'_{a_-}, \qquad T'_{a_+ a} = T'_{a_+} T'_a. \tag{1.3}$$

A Hankel operator with the symbol $a(t) \in W_{p\times q}$ is the operator acting from $E_{q\times 1}(\mathbb{Z}_+)$ into $E_{p\times 1}(\mathbb{Z}^*_-)$ by the formula


$$H_a = P_- \tilde a Q_+ | E_{q\times 1}(\mathbb{Z}_+).$$

Hence

$$(H_a x)_i = \sum_{j=0}^{\infty} a_{i-j} x_j, \qquad i = -1, -2, -3, \ldots,$$

and this operator is determined by the infinite block Hankel matrix

$$\begin{pmatrix} a_{-1} & a_{-2} & a_{-3} & \cdots \\ a_{-2} & a_{-3} & a_{-4} & \cdots \\ a_{-3} & a_{-4} & a_{-5} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \tag{1.4}$$

Also we will use the Hankel operator $H'_a = P_+ \tilde a Q_- | E_{q\times 1}(\mathbb{Z}^*_-)$ determined by the matrix

$$\begin{pmatrix} a_1 & a_2 & a_3 & \cdots \\ a_2 & a_3 & a_4 & \cdots \\ a_3 & a_4 & a_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

2. Generalized inversion of finite rank Hankel operators

In this paper we will assume that the symbol $a(t) = \sum_{k=-\infty}^{-1} a_k t^k$ of the Hankel operator $H_a$ is a strictly proper rational $p \times q$ matrix function having poles in $D_+$ only. In system theory the following matrix fractional representation

$$a(t) = L^{-1}(t)N(t) \tag{2.1}$$

of the strictly proper rational matrix function $a(t)$ plays an important role (see, e.g., [7–9]). Here $L(t)$, $N(t)$ are left coprime matrix polynomials in $t$, and $L(t)$ is a nonsingular $p \times p$ matrix polynomial. This representation is called a left coprime fractional matrix factorization of $a(t)$. The coprimeness condition is equivalent to the solvability of the following Bezout equation

$$L(t)U(t) + N(t)V(t) = I_p. \tag{2.2}$$

Here $U(t)$ and $V(t)$ are matrix polynomials in $t$. (For algorithms of an effective construction of representation (2.1) and effective solving of Eq. (2.2) by elementary row operations see, e.g., [12].) Since the poles of $a(t)$ lie in $D_+$, $\det L(t) \ne 0$ if $|t| \ge 1$. Hence $L^{-1}(t) \in W^-_{p\times p}$.
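For instance, the Laurent coefficients of $a(t) = L^{-1}(t)N(t)$ at infinity can be generated recursively by matching coefficients in $L(t)a(t) = N(t)$. The sketch below uses the scalar data of Example 1 below, $L(t) = (2t-1)^2$, $N(t) = t+1$:

```python
from fractions import Fraction

# L(t) = 4t^2 - 4t + 1, N(t) = t + 1; write a(t) = sum_{n>=1} h_n t^{-n}.
# Matching coefficients in L(t) a(t) = N(t) gives
#   4 h_1 = 1,  4 h_2 - 4 h_1 = 1,  and  4 h_{k+2} - 4 h_{k+1} + h_k = 0.
h = {1: Fraction(1, 4), 2: Fraction(1, 2)}
for k in range(1, 20):
    h[k + 2] = h[k + 1] - h[k] / 4

# Closed form used in Example 1: h_n = (3n - 2) / 2^{n+1}.
assert all(h[n] == Fraction(3 * n - 2, 2 ** (n + 1)) for n in range(1, 22))
```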


We will show that for a generalized inversion of the Hankel operator $H_a$ representation (2.1) plays the role of the Wiener–Hopf factorization.

Theorem 2.1. Let $H_a$ be a Hankel operator with the rational matrix symbol $a(t)$ and $a(t) = L^{-1}(t)N(t)$ its fractional matrix factorization. Let $(U(t), V(t))$ be an arbitrary solution of the Bezout Eq. (2.2). Then the operator $H_a^\dagger = T_V H'_L$ is a generalized inverse of $H_a$. Moreover, $H_a^\dagger H_a H_a^\dagger = H_a^\dagger$.

Proof. It follows from the fractional factorization and the definition of the operator $H_a$ that $H_a = H_{L^{-1}} T_N$. Hence, using the partial multiplicativity (1.2) and the Bezout Eq. (2.2), we have

$$H_a H_a^\dagger H_a = H_{L^{-1}} T_N T_V H'_L H_{L^{-1}} T_N = H_{L^{-1}} H'_L H_{L^{-1}} T_N - H_{L^{-1}} T_L T_U H'_L H_{L^{-1}} T_N.$$

Since

$$H_{L^{-1}} T_L = P_- \tilde L^{-1} P_+ \tilde L P_+ | E_{p\times 1}(\mathbb{Z}_+) = P_- \tilde L^{-1} \tilde L P_+ | E_{p\times 1}(\mathbb{Z}_+) = 0,$$

$$H_{L^{-1}} H'_L H_{L^{-1}} = P_- \tilde L^{-1} P_+ \tilde L P_- \tilde L^{-1} P_+ = P_- \tilde L^{-1} P_+ \tilde L \tilde L^{-1} P_+ - P_- \tilde L^{-1} P_+ \tilde L P_+ \tilde L^{-1} P_+$$
$$= P_- \tilde L^{-1} P_+ - P_- \tilde L^{-1} \tilde L P_+ \tilde L^{-1} P_+ = P_- \tilde L^{-1} P_+ = H_{L^{-1}},$$

we obtain $H_a H_a^\dagger H_a = H_{L^{-1}} T_N = H_a$. In a similar manner we can prove that $H_a^\dagger H_a H_a^\dagger = H_a^\dagger$. □

Remark 2.1. A similar theorem holds for integral Hankel operators with rational matrix symbols. Moreover, if $a(t)$ is an arbitrary strictly proper rational matrix function and


$$a(t) = \sum_{i=-\infty}^{-1} a_i t^i$$

is the Laurent-series expansion of $a(t)$ at infinity, then it is not difficult to see that the infinite block matrix

$$\begin{pmatrix} V_0 & V_1 & V_2 & \cdots \\ 0 & V_0 & V_1 & \cdots \\ 0 & 0 & V_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} L_1 & L_2 & L_3 & \cdots \\ L_2 & L_3 & L_4 & \cdots \\ L_3 & L_4 & L_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

is a generalized inverse of the infinite block Hankel matrix

$$\begin{pmatrix} a_{-1} & a_{-2} & a_{-3} & \cdots \\ a_{-2} & a_{-3} & a_{-4} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

Here $V_i$ and $L_i$ are the coefficients of the matrix polynomials $V(t)$, $L(t)$.

In system theory a description of the kernel and image of a Hankel operator with a rational symbol plays an important role (see, e.g., [13]). It turns out that the spaces $\ker H_a$ and $\operatorname{Im} H_a$ are directly related to the coprime fractional factorization of the symbol $a(t)$ [10,11,13]. Using the generalized inverse $H_a^\dagger$, we can now describe $\ker H_a$, $\operatorname{Im} H_a$ in terms of the factorization of $a(t)$ for the matrix case.

We will need the fractional factorization of $a(t)$ with an additional property. Let $a(t) = L^{-1}(t)N(t)$ be an arbitrary left coprime fractional matrix factorization of $a(t)$. It is well known (see, e.g., [8]) that for the nonsingular $p \times p$ matrix polynomial $L(t)$ there exists a unimodular matrix polynomial $S(t)$ such that $\hat L(t) = S(t)L(t)$ is a row proper matrix polynomial, i.e. the constant $p \times p$ matrix consisting of the coefficients of the highest degrees in each row of $\hat L(t)$ is nonsingular. Hence any left coprime factorization $a(t) = L^{-1}(t)N(t)$ can be reduced to the row proper form. In system theory it is shown that the row degrees $\rho_1, \ldots, \rho_p$ coincide with the observability indices of the system with the transfer matrix function $a(t)$. The sum $\kappa$ of the indices is the McMillan degree of the system (and of the transfer matrix function $a(t)$). We can assume that $\rho_1 \le \cdots \le \rho_p$.
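Row properness is easy to test numerically: the highest-degree coefficient of each row is collected into a constant matrix whose nonsingularity is checked. The following is a sketch; the $2\times 2$ matrix polynomial used below is an illustrative assumption, not from the paper.

```python
import numpy as np

def row_degrees_and_leading(Lcoeffs):
    """Row degrees and leading-row-coefficient matrix of a matrix polynomial.

    `Lcoeffs[k]` is the p x p coefficient of t^k.  Row j of the returned
    matrix holds the coefficients of t^{rho_j} in row j of L(t).
    """
    L = np.asarray(Lcoeffs)          # shape (deg+1, p, p)
    p = L.shape[1]
    rho, lead = [], np.zeros((p, p))
    for j in range(p):
        degs = [k for k in range(L.shape[0]) if np.any(L[k, j, :] != 0)]
        rho.append(max(degs))
        lead[j, :] = L[rho[j], j, :]
    return rho, lead

# L(t) = [[t^2, 1], [0, t]]  (row degrees 2 and 1).
Lc = [np.array([[0, 1], [0, 0]]),   # t^0
      np.array([[0, 0], [0, 1]]),   # t^1
      np.array([[1, 0], [0, 0]])]   # t^2
rho, lead = row_degrees_and_leading(Lc)
assert rho == [2, 1]
assert np.linalg.det(lead) != 0     # L(t) is row proper
```

Here the sum of the row degrees, $2 + 1 = 3$, would be the McMillan degree of the corresponding transfer function.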

Theorem 2.2. Let $a(t) = L^{-1}(t)N(t)$ be an arbitrary left coprime fractional matrix factorization of $a(t)$. The space $\ker H_a$ consists of the vectors of the form

$$u - T_V(I - T_L T_{L^{-1}})T_N u,$$

where $u \in E_{q\times 1}(\mathbb{Z}_+)$. Let, in addition, $L(t)$ be a row proper matrix polynomial. Then the space $\operatorname{Im} H_a$ is $\kappa$-dimensional and the vector functions

$$\{t^{-1}[L_-^{-1}(t)]_j, \ldots, t^{-\rho_j}[L_-^{-1}(t)]_j\}_{j=1}^{p} \tag{2.3}$$

are the generating functions for the elements of a basis for $\operatorname{Im} H_a$ in any space $E_{p\times 1}(\mathbb{Z}^*_-)$. Here $[L_-^{-1}(t)]_j$ is the $j$th column of $L_-^{-1}(t)$, $L_-(t) = d^{-1}(t)L(t)$, $d(t) = \operatorname{diag}[t^{\rho_1}, \ldots, t^{\rho_p}]$, and $\kappa = \sum_{j=1}^{p} \rho_j$ is the McMillan degree of $a(t)$.

Proof. Since $H_a^\dagger$ is a generalized inverse of $H_a$ and $H_a^\dagger H_a H_a^\dagger = H_a^\dagger$, the operator $H_a^\dagger H_a$ is the projector onto $\operatorname{Im} H_a^\dagger$ along $\ker H_a$. It is easily seen that

$$H_a^\dagger H_a = T_V(I - T_L T_{L^{-1}})T_N.$$

Hence $\ker H_a = \operatorname{Im}(I - T_V(I - T_L T_{L^{-1}})T_N)$ and we get the first assertion of the theorem.

The operator $H_a H_a^\dagger$ is the projector onto $\operatorname{Im} H_a$ along $\ker H_a^\dagger$, and

$$H_a H_a^\dagger = P_- \tilde L^{-1} P_+ \tilde L P_- | E_{p\times 1}(\mathbb{Z}^*_-) = I - T'_{L^{-1}} T'_L.$$

Here $T'_a = P_- \tilde a P_- | E_{p\times 1}(\mathbb{Z}^*_-)$ is the Wiener–Hopf operator with respect to the space $E_{p\times 1}(\mathbb{Z}^*_-)$. Hence $\operatorname{Im} H_a = \ker T'_{L^{-1}} T'_L$. Since $L(t) = d(t)L_-(t)$ and $L_-(t)$, $d^{-1}(t)$ are elements of $W^-_{p\times p}$, we have, by Eq. (1.3), $T'_{L^{-1}} T'_L = T'_{L_-^{-1}} T'_{d^{-1}} T'_d T'_{L_-}$. Thus $\ker T'_{L^{-1}} T'_L = T'_{L_-^{-1}} \ker T'_d$. It is easily seen that the vector functions $\{t^{-1}e_j, \ldots, t^{-\rho_j}e_j\}_{j=1}^{p}$, where $\{e_j\}_{j=1}^{p}$ is the standard basis of $\mathbb{C}^{p\times 1}$, are the generating functions for the elements of a basis of the space $\ker T'_d$ in any space $E_{p\times 1}(\mathbb{Z}^*_-)$. It follows from this that the vector functions (2.3) are the generating functions for the elements of a basis of $\ker T'_{L^{-1}} T'_L$. □

Remark 2.2. We can also obtain a generalized inverse of $H_a$ in terms of a right fractional factorization

$$a(t) = M(t)R^{-1}(t)$$

of the symbol $a(t)$. In this case we have a simpler description of the kernel, $\ker H_a = \operatorname{Im} T_R T_{R^{-1}}$, but a more complicated description of the image $\operatorname{Im} H_a$.

Thus the Hankel operator $H_a$ with the rational matrix symbol $a(t)$ is a finite rank operator and its rank coincides with the McMillan degree of $a(t)$. It is


easily seen that if the matrix (1.4) has finite rank then its symbol $a(t)$ is a rational matrix function. Hence the statement of Theorem 2.2 yields the well-known result (a block analog of the Kronecker theorem): the rank of a block Hankel matrix is finite iff the symbol of this matrix is a rational matrix function.

We can use Theorems 2.1 and 2.2 for solving an infinite system of linear equations with a Hankel matrix. The equation $H_a x = y$ has a solution iff the generating function of $y$ is a linear combination of the functions (2.3). If this condition is fulfilled, then $x = T_V H'_L y$ is a solution of the equation. The general solution can be found by the formula (see, e.g., [6])

$$x = T_V H'_L y + u - T_V(I - T_L T_{L^{-1}})T_N u,$$

where $u$ is an arbitrary element of $E_{q\times 1}(\mathbb{Z}_+)$.

Example 1. Let us consider as an example the following system of equations:

x = qFvH'ty + u - qFv(fl - 7LTFLI)YNU, where u ~s an arbitrary etevaent o f E~,:,~(Z+), Ex'atttl~e E L Let as carts?dec as art examy'te ~'rte fo'dtou/mg syszem of eqttaZ?orte l X l -}-

4 ~X 2 -}-

7 X 3 -~-

10 _T2X4 @

4X I ~-

7 X 2 -~

~10X 3 ~-

~13X 4 ~-

7xl +

g10. ~~ 2 +

g13x 3 +

~x4+

10 v _~-,tl Jr-

13 ~ X 2 -it-

1@8X3 -l-

0~

1

m

4r 8~ 3 16 1

m

2@6X4 Jr-

The symbol of the Hankel operator $H_a$ in the left-hand side of the system is $a(t) = \sum_{n=1}^{\infty} \frac{3n-2}{2^{n+1}}\, t^{-n}$. It is easily seen that

$$a(t) = \frac{t+1}{(2t-1)^2}$$

is the fractional factorization of $a(t)$. Hence $L(t) = (2t-1)^2$, $N(t) = t+1$. By the Euclidean algorithm we have $U(t) = \tfrac{1}{9}$, $V(t) = -\tfrac{4}{9}t + \tfrac{8}{9}$. Then

$$H_a^\dagger = T_V H'_L = \frac{1}{9}\begin{pmatrix} 8 & 0 & 0 & \cdots \\ -4 & 8 & 0 & \cdots \\ 0 & -4 & 8 & \cdots \\ \vdots & & \ddots & \end{pmatrix}\begin{pmatrix} -4 & 4 & 0 & \cdots \\ 4 & 0 & 0 & \cdots \\ 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix} = \frac{16}{9}\begin{pmatrix} -2 & 2 & 0 & 0 & \cdots \\ 3 & -1 & 0 & 0 & \cdots \\ -1 & 0 & 0 & 0 & \cdots \\ 0 & 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \end{pmatrix}.$$


Since

$$t^{-1}L_-^{-1}(t) = \sum_{n=0}^{\infty} \frac{n+1}{2^{n+2}}\, t^{-n-1}, \qquad t^{-2}L_-^{-1}(t) = \sum_{n=0}^{\infty} \frac{n+1}{2^{n+2}}\, t^{-n-2},$$

the vectors

$$\left(\tfrac{1}{4}, \tfrac{2}{8}, \tfrac{3}{16}, \tfrac{4}{32}, \ldots\right), \qquad \left(0, \tfrac{1}{4}, \tfrac{2}{8}, \tfrac{3}{16}, \ldots\right)$$

form a basis of $\operatorname{Im} H_a$. Hence the right-hand side $y = \left(0, \tfrac{1}{4}, \tfrac{2}{8}, \tfrac{3}{16}, \ldots\right)$ of the system belongs to $\operatorname{Im} H_a$ and the system is solvable. The vector

$$H_a^\dagger y = \left(\tfrac{8}{9},\ -\tfrac{4}{9},\ 0,\ 0,\ \ldots\right)$$

is a solution of the system. Now the general solution of the system has the following form:

$$x = \left(\tfrac{8}{9} + u_0 + \tfrac{12\alpha - 8\beta}{9},\ \ -\tfrac{4}{9} + u_1 - \tfrac{30\alpha + 4\beta}{9},\ \ u_2 + \tfrac{12\alpha + 4\beta}{9},\ \ u_3,\ u_4,\ \ldots\right).$$

Here $(u_0, u_1, u_2, \ldots)$ is an arbitrary vector in $E(\mathbb{Z}_+)$, $\alpha = \sum_{n=0}^{\infty} n u_n/2^n$ and $\beta = \sum_{n=0}^{\infty} u_n/2^n$.
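The computations of Example 1 are easy to check numerically. The sketch below (an illustration, with the truncation size chosen arbitrarily) truncates the infinite Hankel matrix, confirms that its rank equals the McMillan degree $\kappa = 2$, and verifies both generalized-inverse identities and the particular solution:

```python
import numpy as np

Ntr = 40                                        # truncation size (illustrative)
a = lambda n: (3 * n - 2) / 2.0 ** (n + 1)      # a_{-n}, n >= 1

# Truncated infinite Hankel matrix H[i, j] = a_{-(i+j+1)}.
H = np.array([[a(i + j + 1) for j in range(Ntr)] for i in range(Ntr)])
assert np.linalg.matrix_rank(H, tol=1e-10) == 2  # McMillan degree of a(t)

# Generalized inverse from Theorem 2.1: only a 3 x 2 corner is nonzero.
Hd = np.zeros((Ntr, Ntr))
Hd[:3, :2] = (16.0 / 9.0) * np.array([[-2, 2], [3, -1], [-1, 0]])
assert np.allclose(H @ Hd @ H, H)    # H H† H = H
assert np.allclose(Hd @ H @ Hd, Hd)  # H† H H† = H†  ((1,2)-inverse)

# Particular solution of the system: y_{-n} = (n - 1)/2^n.
y = np.array([(n - 1) / 2.0 ** n for n in range(1, Ntr + 1)])
x = Hd @ y
assert np.allclose(x[:3], [8.0 / 9.0, -4.0 / 9.0, 0.0])
assert np.allclose(H @ x, y)
```

The truncation is exact here because the nonzero corner of $H_a^\dagger$ lies well inside the truncated blocks.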

In conclusion of the section we find the generating matrix function $G(t,s)$ of the matrix $\|g_{ij}\|_{i,j=0}^{\infty}$ of the operator $H_a^\dagger = T_V H'_L$. It is evident that $g_{ij} = 0$ for sufficiently large $i, j$. Hence

$$G(t,s) = \sum_{i,j=0}^{\infty} g_{ij} t^i s^j$$

is a matrix polynomial in $t$, $s$.

Proposition 2.1. The generating matrix function of the generalized inverse $H_a^\dagger$ from Theorem 2.1 is found by the formula

$$G(t,s) = V(t)\,\frac{L(t) - L(s)}{t - s}.$$
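This formula is easy to confirm in the scalar setting of Example 1, where $L(t) = (2t-1)^2$ and $V(t) = \tfrac{8}{9} - \tfrac{4}{9}t$. The sketch below compares the coefficients of $V(t)(L(t)-L(s))/(t-s)$ with the entries of $T_V H'_L$ computed directly:

```python
import numpy as np

Lc = np.array([1.0, -4.0, 4.0])         # L(t) = 1 - 4t + 4t^2
Vc = np.array([8.0 / 9.0, -4.0 / 9.0])  # V(t) = 8/9 - (4/9) t

# (L(t) - L(s)) / (t - s) = sum_{k>=1} L_k sum_{i+j=k-1} t^i s^j.
m = len(Lc) - 1
D = np.zeros((m, m))
for k in range(1, m + 1):
    for i in range(k):
        D[i, k - 1 - i] += Lc[k]

# Multiply by V(t): coefficient array G[i, j] of t^i s^j.
G = np.zeros((m + len(Vc) - 1, m))
for i in range(len(Vc)):
    G[i:i + m, :] += Vc[i] * D

# Entries of T_V H'_L directly: g_{ij} = sum_k V_{i-k} L_{k+j+1}.
g = np.zeros_like(G)
for i in range(g.shape[0]):
    for j in range(g.shape[1]):
        g[i, j] = sum(Vc[i - k] * Lc[k + j + 1]
                      for k in range(len(Lc))
                      if 0 <= i - k < len(Vc) and k + j + 1 <= m)
assert np.allclose(G, g)
```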

Proof. Apply the operator $H_a^\dagger = T_V H'_L$ to the sequence $E = (I_p, sI_p, s^2I_p, \ldots)$. Let $0 < |s| < 1$. Then the sequence belongs to $l^1(\mathbb{Z}^*_-)$ and the symbol (i.e. the Fourier transform) of the sequence $H_a^\dagger E$ coincides with the generating matrix function $G(t,s)$. Let us find the symbol of the sequence $H'_L E$.


Here $L_0, L_1, \ldots$ are the coefficients of the matrix polynomial $L(t)$. Denote by $\lambda$ the degree of $L(t)$. Then the generating function of $H'_L E$ is

$$\frac{L(s) - L_0}{s} + \frac{L(s) - L_0 - L_1 s}{s^2}\, t + \cdots + \frac{L(s) - L_0 - L_1 s - \cdots - L_{\lambda-1}s^{\lambda-1}}{s^{\lambda}}\, t^{\lambda-1}$$
$$= \sum_{k=1}^{\lambda} L_k \sum_{i=0}^{k-1} t^i s^{k-1-i} = \sum_{k=1}^{\lambda} L_k\, \frac{s^k - t^k}{s - t} = \frac{L(s) - L(t)}{s - t}.$$
Since the matrix polynomial $(L(t) - L(s))/(t - s)$ in $t$ belongs to $W^+_{p\times p}$ and $V(t) \in W^+_{q\times p}$, the symbol of $T_V H'_L E$ coincides with $V(t)(L(t) - L(s))/(t - s)$. The condition $0 < |s| < 1$ can be omitted because $(L(t) - L(s))/(t - s)$ is a matrix polynomial in $s$. The proposition is proved. □

3. Generalized inversion of block Toeplitz and Hankel matrices with a finite number of rows

In this section we will consider a generalized inversion of Toeplitz and Hankel operators acting from the space $E_{q\times 1}(\mathbb{Z}_+)$ into the finite-dimensional space $\mathbb{C}^{np\times 1}$. These operators are determined by the block Toeplitz or Hankel matrices

$$T_c = \begin{pmatrix} c_0 & c_{-1} & c_{-2} & \cdots \\ c_1 & c_0 & c_{-1} & \cdots \\ \vdots & \vdots & \vdots & \\ c_{n-1} & c_{n-2} & c_{n-3} & \cdots \end{pmatrix}, \qquad H_a = \begin{pmatrix} a_{-1} & a_{-2} & a_{-3} & \cdots \\ a_{-2} & a_{-3} & a_{-4} & \cdots \\ \vdots & \vdots & \vdots & \\ a_{-n} & a_{-n-1} & a_{-n-2} & \cdots \end{pmatrix} \tag{3.1}$$


having finitely many rows. We will assume that the symbols of these operators, $c(t) = \sum_{k=-\infty}^{n-1} c_k t^k$ and $a(t) = \sum_{k=-\infty}^{-1} a_k t^k$, are rational matrix functions and that $t^{-n}c(t)$, $a(t)$ have poles in $D_+$ only.

Denote by $P_i$ the projector from $E_{p\times 1}(\mathbb{Z}_+)$ onto the first $i$ coordinates. It is easily seen that $P_i = I - T_{t^iI_p} T_{t^{-i}I_p}$. Here $I$ is the identity operator and $I_p$ is the identity $p \times p$ matrix. Obviously, now we have $T_c = P_n T_c$. Similarly, if $P'_i = I - T'_{t^{-i}I_p} T'_{t^iI_p}$ is the projector from $E_{p\times 1}(\mathbb{Z}^*_-)$ onto the first $i$ coordinates, then $H_a = P'_n H_a$.

Let us now show that the fractional factorization of the symbols $c(t)$ and $a(t)$ allows us to obtain a generalized inversion of the operators $T_c$ and $H_a$. Denote by $a(t)$ the strictly proper rational matrix function $t^{-n}c(t)$. Let

$$a(t) = L^{-1}(t)N(t) \tag{3.2}$$

be its left coprime fractional factorization with the row proper matrix polynomial $L(t)$. Let $(U(t), V(t))$ be an arbitrary solution of the Bezout equation

$$L(t)U(t) + N(t)V(t) = I_p.$$

Denote by $\rho_1, \ldots, \rho_p$ the degrees of the rows of $L(t)$ and let

$$d(t) = \operatorname{diag}[t^{\rho_1}, \ldots, t^{\rho_p}], \qquad L_-(t) = d^{-1}(t)L(t).$$

As we showed in Section 2, $L_-(t)$ is an invertible element of the algebra $W^-_{p\times p}$. The fractional factorization (3.2) now gives the following factorization of $c(t)$:

$$c(t) = L_-^{-1}(t)\, t^n d^{-1}(t)\, N(t). \tag{3.3}$$

Theorem 3.1. The operator

$$T^\dagger = T_V T_{t^{-n}d}\, P_n T_{L_-} P_n |\operatorname{Im} P_n \tag{3.4}$$

is a generalized inverse of the operator $T_c = P_n T_c$. Moreover, $T^\dagger T_c T^\dagger = T^\dagger$. If $\rho_j \ge n$ for $j = 1, \ldots, p$, then $T^\dagger$ is a right inverse of $T_c$.

Proof. It follows from Eq. (1.2) that $P_n T_{L_-} P_n = T_{L_-} P_n$. Hence

$$T^\dagger = T_V T_{t^{-n}d} T_{L_-} P_n |\operatorname{Im} P_n.$$

Moreover, in virtue of Eq. (3.3), we have $T_c = P_n T_{L_-^{-1}} T_{t^nd^{-1}} T_N$. Since

$$T_N T_V = T_{NV} = I - T_L T_U,$$

we obtain

$$T_c T^\dagger T_c = P_n T_{L_-^{-1}} T_{t^nd^{-1}} T_{t^{-n}d} T_{L_-} P_n T_c - P_n T_{L_-^{-1}} T_{t^nd^{-1}} T_L T_U T_{t^{-n}d} T_{L_-} P_n T_c.$$

Taking into account the relations

$$P_n T_{L_-^{-1}} T_{t^nd^{-1}} T_L = P_n T_{L_-^{-1} t^n d^{-1} L} = P_n T_{t^n I_p} = 0,$$
$$T_{L_-} P_n T_{L_-^{-1}} = I - T_{t^n L_-} T_{t^{-n} L_-^{-1}},$$
$$T_{t^nd^{-1}} T_{t^{-n}d} T_{t^nd^{-1}} = T_{t^nd^{-1}},$$

we have

$$T_c T^\dagger T_c = P_n T_{L_-^{-1}} T_{t^nd^{-1}} T_N = T_c.$$

Similarly we can prove the relation $T^\dagger T_c T^\dagger = T^\dagger$. If $\rho_j \ge n$ for $j = 1, \ldots, p$, then $t^n d^{-1}(t) \in W^-_{p\times p}$ and it is easily seen that $T_c T^\dagger = I$. The theorem is proved. □
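In the scalar setting of Example 1 with $n = 2$ rows one has $\rho_1 = n = 2$, so by the theorem $T^\dagger$ must be a right inverse of $T_c$. A minimal numerical sketch (the column truncation is exact here because this $T^\dagger$ has only three nonzero rows):

```python
import numpy as np

# c(t) = t^2 a(t): row i of T_c is (c_{i-j})_j = (a_{i-j-2})_j, i = 0, 1.
a = lambda n: (3 * n - 2) / 2.0 ** (n + 1)      # a_{-n}, n >= 1
Tc = np.array([[a(j + 2 - i) for j in range(3)] for i in range(2)])

# T† = T_V T_{L_-} | Im P_2  (here t^{-n} d = 1 since rho_1 = n = 2):
TV = np.array([[8.0, 0, 0], [-4.0, 8.0, 0], [0, -4.0, 8.0]]) / 9.0
TLm = np.array([[4.0, -4.0], [0, 4.0], [0, 0]])  # columns 0, 1 of T_{L_-}
Td = TV @ TLm
assert np.allclose(Tc @ Td, np.eye(2))           # T_c T† = I_2
```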

Obviously, the matrix $J T_c$, where

$$J = \begin{pmatrix} 0 & \cdots & 0 & I_p \\ 0 & \cdots & I_p & 0 \\ \vdots & & & \vdots \\ I_p & \cdots & 0 & 0 \end{pmatrix} \tag{3.5}$$

is the $np \times np$ block matrix with $I_p$ on the block antidiagonal and zeros elsewhere,


coincides with the matrix $H_a$. Hence $T^\dagger J$ is a generalized inverse of $H_a$. Since $J$ is the matrix of the operator $P_n H'_{t^nI_p} P'_n |\operatorname{Im} P'_n$ and

$$T_{L_-} H'_{t^nI_p} = H'_{t^nL_-},$$

we arrive at the following proposition on a generalized inversion of the block Hankel matrix $H_a$ with a finite number of rows.

Proposition 3.1. The operator

$$H_a^\dagger = T_V T_{t^{-n}d}\, P_n H'_{t^nL_-} P'_n |\operatorname{Im} P'_n$$

is a generalized inverse of the operator $H_a = P'_n H_a$. Moreover, $H_a^\dagger H_a H_a^\dagger = H_a^\dagger$. If $\rho_j \ge n$ for $j = 1, \ldots, p$, then $H_a^\dagger$ is a right inverse of $H_a$.

Now we find generating functions for the matrices of the operators $T^\dagger$, $H_a^\dagger$. If $\tau_{ij}$ ($h_{ij}$) are the entries of the matrix $T^\dagger$ ($H_a^\dagger$), then, by definition, the generating matrix function of this matrix is

$$\mathscr{T}(t,s) = \sum_{i=0}^{\infty}\sum_{j=0}^{n-1} \tau_{ij}\, t^i s^{-j} \qquad \left(\mathscr{H}(t,s) = \sum_{i=0}^{\infty}\sum_{j=0}^{n-1} h_{ij}\, t^i s^{j}\right).$$

Let us introduce the operator $B = T_V T_{t^{-n}d} T_{L_-}$ acting from $l^1_{p\times 1}(\mathbb{Z}_+)$ into $l^1_{q\times 1}(\mathbb{Z}_+)$. It is easily seen that $B$ is determined by the infinite block matrix $B = \|b_{ij}\|_{i,j=0}^{\infty}$ ($b_{ij} \in \mathbb{C}^{q\times p}$) having absolutely summable block columns and rows. Hence we can define the generating function of $B$:

$$\mathscr{B}(t,s) = \sum_{i,j=0}^{\infty} b_{ij}\, t^i s^{-j}.$$

Then the generating function for the matrix of the operator $T_V T_{t^{-n}d} T_{L_-} |\operatorname{Im} P_n$ is

$$\mathscr{T}(t,s) = \mathscr{P}_s(-n+1, 0)\,\mathscr{B}(t,s),$$

where $\mathscr{P}_s(m_1, m_2)$ is the projector acting by the rule

$$\sum_{i=0}^{\infty}\sum_{j=-\infty}^{\infty} b_{ij}\, t^i s^j \ \mapsto\ \sum_{i=0}^{\infty}\sum_{j=m_1}^{m_2} b_{ij}\, t^i s^j.$$

Proposition 3.2. The generating matrix function of the generalized inverse $T^\dagger$ from Theorem 3.1 is found by the formula

$$\mathscr{T}(t,s) = \mathscr{P}_s(-n+1, 0)\,\frac{V(t)\, d_0(t,s)\, L_-(s)}{1 - ts^{-1}}, \qquad |t| \le 1.$$

Here $d_0(t,s) = \operatorname{diag}[s^{\rho_1 - n}, \ldots, s^{\rho_a - n}, t^{\rho_{a+1} - n}, \ldots, t^{\rho_p - n}]$


and the integer $a$ is found from the condition $\rho_1 \le \cdots \le \rho_a \le n < \rho_{a+1} \le \cdots \le \rho_p$.

Proof. Apply the operator $B$ to the sequence $E = (I_p, s^{-1}I_p, s^{-2}I_p, \ldots)$. For $|s| > 1$ the sequence belongs to $l^1(\mathbb{Z}_+)$ and has the symbol (the Fourier transform)

$$\sum_{j=0}^{\infty} t^j s^{-j} I_p = \frac{1}{1 - ts^{-1}}\, I_p, \qquad |t| \le 1.$$

The symbol of the sequence $BE$ is the function $\sum_{i,j=0}^{\infty} b_{ij} t^i s^{-j}$, that is, the generating function of $B$. On the other hand, the sequence $T_{L_-}E = (L_-(s), s^{-1}L_-(s), s^{-2}L_-(s), \ldots)$ has the symbol $L_-(s)/(1 - ts^{-1})$. Hence the symbol of the sequence $T_{t^{-n}d} T_{L_-} E$ is $P_+\bigl(t^{-n}d(t)L_-(s)/(1 - ts^{-1})\bigr)$. Since

$$P_+\frac{t^k}{1 - ts^{-1}} = \begin{cases} \dfrac{t^k}{1 - ts^{-1}}, & k \ge 0, \\[2mm] \dfrac{s^k}{1 - ts^{-1}}, & k < 0, \end{cases}$$

we have

$$P_+\frac{t^{-n}d(t)L_-(s)}{1 - ts^{-1}} = \frac{d_0(t,s)L_-(s)}{1 - ts^{-1}}.$$

Thus the symbol of the sequence $T_V T_{t^{-n}d} T_{L_-} E$ is the function

$$\frac{V(t)\, d_0(t,s)\, L_-(s)}{1 - ts^{-1}} = \mathscr{B}(t,s).$$

Hence $\mathscr{B}(t,s)$ is the generating function of the operator $B$:

$$\mathscr{B}(t,s) = \sum_{i,j=0}^{\infty} b_{ij}\, t^i s^{-j}, \qquad |t| \le 1,\ |s| > 1.$$

Then the generating function of the matrix of the operator $T^\dagger$ coincides with the matrix function $\mathscr{P}_s(-n+1, 0)\mathscr{B}(t,s)$. Since this function is a polynomial in $s^{-1}$, we can omit the condition $|s| > 1$. The proposition is proved. □

From this proposition the formula for the generating function of $H_a^\dagger$ follows at once.

Proposition 3.3. The generating matrix function of the generalized inverse $H_a^\dagger$ from Proposition 3.1 is found by the formula


$$\mathscr{H}(t,s) = \mathscr{P}_s(0, n-1)\,\frac{V(t)\,\lambda_0(t,s)\, L(s)}{s - t}, \qquad |t| < 1.$$

Here $\lambda_0(t,s) = \operatorname{diag}[1, \ldots, 1, (ts^{-1})^{\rho_{a+1} - n}, \ldots, (ts^{-1})^{\rho_p - n}]$ and the integer $a$ is found from the condition $\rho_1 \le \cdots \le \rho_a \le n < \rho_{a+1} \le \cdots \le \rho_p$.
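For the scalar data of Example 1 with $n = 2$ ($\rho_1 = 2$, so $d_0 = 1$ and the projector keeps the powers $s^0, s^{-1}$), Proposition 3.2 reduces to $\mathscr{T}(t,s) = 4V(t) + (4t-4)V(t)s^{-1}$. A short numerical sketch comparing this with the entries of $T^\dagger$:

```python
import numpy as np

# Entries of T† = T_V T_{L_-} | Im P_2 for L(t) = (2t-1)^2, V = 8/9 - 4t/9.
TV = np.array([[8.0, 0, 0], [-4.0, 8.0, 0], [0, -4.0, 8.0]]) / 9.0
TLm = np.array([[4.0, -4.0], [0, 4.0], [0, 0]])
tau = TV @ TLm                       # tau[i, j], j = 0, 1

# Prop. 3.2: column j of tau holds the t-coefficients of the s^{-j} part of
# P_s(-1, 0) [ V(t) L_-(s) / (1 - t s^{-1}) ] = 4 V(t) + (4t - 4) V(t) s^{-1}.
V = np.array([8.0 / 9.0, -4.0 / 9.0])            # V(t)
col0 = 4.0 * np.append(V, 0.0)                   # 4 V(t)
col1 = np.convolve([-4.0, 4.0], V)               # (4t - 4) V(t)
assert np.allclose(tau[:, 0], col0)
assert np.allclose(tau[:, 1], col1)
```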

References

[1] P.A. Fuhrmann, On the partial realization problem and the recursive inversion of Hankel and Toeplitz matrices, Contemp. Math. 47 (1985) 149–161.
[2] P.A. Fuhrmann, Remarks on the inversion of Hankel matrices, Linear Algebra Appl. 81 (1986) 89–104.
[3] G. Heinig, U. Jungnickel, Hankel matrices generated by the Markov parameters of rational functions, Linear Algebra Appl. 76 (1986) 121–135.
[4] P.A. Fuhrmann, Block Hankel matrix inversion – the polynomial approach, in: Proceedings of the Amsterdam Workshop on Operator Theory and Applications, Birkhäuser, 1986, pp. 207–230.
[5] I.C. Gohberg, N.Y. Krupnik, Einführung in die Theorie der eindimensionalen singulären Integraloperatoren, Birkhäuser, Basel, Boston, Stuttgart, 1979.
[6] S.L. Campbell, C.D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, 1979.
[7] P.A. Fuhrmann, Algebraic system theory: An analyst's point of view, J. Franklin Inst. 301 (1976) 521–540.
[8] H.H. Rosenbrock, State Space and Multivariable Theory, Wiley, New York, 1970.
[9] W.A. Wolovich, Linear Multivariable Systems, Springer, Berlin, Heidelberg, New York, 1974.
[10] P.A. Fuhrmann, On symmetric rational functions, Linear Algebra Appl. 50 (1983) 167–250.
[11] N.J. Young, The singular-value decomposition of an infinite Hankel matrix, Linear Algebra Appl. 50 (1983) 639–656.
[12] V. Kučera, Discrete Linear Control, Academia, Prague, 1979.
[13] P.A. Fuhrmann, A polynomial approach to Hankel norm and balanced approximations, Linear Algebra Appl. 146 (1991) 133–220.