On the inverse of a special Schur complement

ZhiPing Xiong, Yingying Qin

Department of Mathematics, Wuyi University, Jiangmen 529020, PR China

E-mail addresses: [email protected] (Z. Xiong), [email protected] (Y. Qin)

Applied Mathematics and Computation 218 (2012) 7679–7684, doi:10.1016/j.amc.2012.01.045

The work of the first author was supported by the start-up fund of Wuyi University, Jiangmen 529020, Guangdong, PR China.


Keywords: Schur complement; Generalized Schur complement; Inverse; Generalized inverses; Minimal ranks of generalized Schur complement

Abstract. The inverse of a Schur complement is a very useful tool in many algorithms for the computation of the matrix inverse. In this paper we study the inverse of the special Schur complement CD^{-1}B. We prove that there always exist some X and Y such that XDY is the inverse of CD^{-1}B. Furthermore, using minimal rank properties, we give explicit expressions for X, Y and the inverse of CD^{-1}B. Numerical examples are also given.

1. Introduction

Throughout this paper, C^{m×n} denotes the set of m × n matrices with complex entries and C^m denotes the set of m-dimensional complex vectors. C_r^{m×n} denotes the subset of C^{m×n} consisting of matrices of rank r, and I_k denotes the identity matrix of order k. O_{m×n} is the m × n matrix of all zero entries (if no confusion occurs, we omit the subscript). For a matrix A ∈ C^{m×n}, A^* and r(A) denote the conjugate transpose and the rank of A, respectively. Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}, D ∈ C^{l×k} and let





\[
L=\begin{pmatrix} A & B \\ C & D \end{pmatrix}
\]

be a 2 × 2 block matrix over the field C of complex numbers. If m = n and A is nonsingular, then the Schur complement of A in L is defined to be
\[
S_A = D - CA^{-1}B.
\]
If A is singular or m ≠ n, a generalized Schur complement of A in L is defined to be
\[
S_A = D - CGB,
\]
where G is a generalized inverse of A, that is, a matrix satisfying some of the following four Moore–Penrose conditions [16]:

(1) AGA = A;  (2) GAG = G;  (3) (AG)^* = AG;  (4) (GA)^* = GA.

For a subset g ⊆ {1, 2, 3, 4}, the set of n × m matrices satisfying the Penrose conditions indexed by g is denoted by A{g}. A matrix G ∈ C^{n×m} from A{g} is called a g-inverse of A and is denoted by A^{(g)}. The seven common types of generalized inverses of A are, respectively, the {1}-inverse (inner inverse), the {1,2}-inverse (reflexive inner inverse), the {1,3}-inverse (least squares inner inverse), the {1,4}-inverse (minimum norm inner inverse), the {1,2,3}-inverse, the {1,2,4}-inverse and the {1,2,3,4}-inverse, the last being the Moore–Penrose inverse of A. Any matrix A has a unique Moore–Penrose inverse, denoted by A^†.
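As a quick numerical illustration of these definitions (added here, not part of the original paper), the following NumPy sketch computes the Moore–Penrose inverse of a small random matrix and checks the four Penrose conditions; the matrix size and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))      # an arbitrary 5 x 3 test matrix
G = np.linalg.pinv(A)                # Moore-Penrose inverse A^+

# The four Penrose conditions defining the {1,2,3,4}-inverse.
print(np.allclose(A @ G @ A, A))             # (1) AGA = A
print(np.allclose(G @ A @ G, G))             # (2) GAG = G
print(np.allclose((A @ G).conj().T, A @ G))  # (3) (AG)* = AG
print(np.allclose((G @ A).conj().T, G @ A))  # (4) (GA)* = GA
```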


For simplicity in the later discussion, we adopt the following notation: E_A = I_m − AA^† and F_A = I_n − A^†A stand for the two orthogonal projectors induced by A, and P_A = I_m − AA^{(1)} and S_A = I_n − A^{(1)}A stand for the two oblique projectors induced by A. We refer the reader to [5,21] for basic results on generalized inverses.

The Schur complement was introduced by Schur [17]. It has quite important applications in numerical analysis and applied fields [3,6,8], for example in linear control theory [1], matrix theory [11,13,23], statistics [15], saddle point problems [7], scalar and vector extrapolation methods [2], projection algorithms [4,12] and matrix perturbation analysis [10]. As one of the fundamental research problems in matrix theory, the inverse of a Schur complement is a very useful tool in many algorithms for computing the inverse and the generalized inverses of a matrix [9,14,22]. Moreover, in electrical engineering one often meets the following question: for a special Schur complement A^*X^{-1}A, where A has full column rank, do there exist B and C independent of X such that (A^*X^{-1}A)^{-1} = BXC? To our knowledge, this problem has not yet been discussed in the literature, which leads us to study it.

In this paper we consider the inverse of the special Schur complement CD^{-1}B in the case when it exists. That is, for given D ∈ C_m^{m×m}, C ∈ C_n^{n×m} and B ∈ C_n^{m×n} such that CD^{-1}B is nonsingular, we prove that there always exist some X and Y such that XDY is the inverse of CD^{-1}B, and we give explicit expressions for X, Y and the inverse of CD^{-1}B in terms of generalized inverses of B and C.

For the convenience of the reader, we first give a brief outline of the main tools used in this paper. Recall that the rank of a matrix is the dimension of its row (column) space, and that A = O if and only if r(A) = 0. From this simple fact we see that two matrices A and B of the same size are equal if and only if r(A − B) = 0, and that two sets S1 and S2 consisting of matrices of the same size have a common matrix if and only if

\[
\min_{A\in S_1,\,B\in S_2} r(A-B)=0 .
\]

If formulas for the rank of A − B can be derived, they can be used to characterize the equality A = B, as well as relationships between two matrix sets. In order to use this rank method to derive explicit expressions for the inverse of CD^{-1}B, we need the following lemmas on ranks of matrices.

Lemma 1.1 [20]. Let P(X) = A − BXC be a linear matrix expression over the complex field C, where A ∈ C^{m×n}, B ∈ C^{m×k} and C ∈ C^{l×n} are given and X ∈ C^{k×l} is a variable matrix. Then the minimal and maximal ranks of P(X) with respect to X are
\[
\min_{X} r(A-BXC)=r(A,\;B)+r\begin{pmatrix}A\\ C\end{pmatrix}-r\begin{pmatrix}A & B\\ C & O\end{pmatrix}, \tag{1.1}
\]
\[
\max_{X} r(A-BXC)=\min\left\{\,r(A,\;B),\;r\begin{pmatrix}A\\ C\end{pmatrix}\right\}. \tag{1.2}
\]

The matrices X attaining the minimal rank in (1.1) are exactly the solutions of the following consistent matrix equation:
\[
P_{A_2}BXCS_{A_1}=P_{A_2}AS_{A_1}, \tag{1.3}
\]
where A_1 = P_B A = (I_m − BB^{(1)})A, A_2 = AS_C = A(I_n − C^{(1)}C), P_{A_2} = I_m − A_2A_2^{(1)} and S_{A_1} = I_n − A_1^{(1)}A_1. Through generalized inverses, the general expression of X satisfying (1.1) can be written in the following form:
\[
X=(P_{A_2}B)^{(1)}P_{A_2}AS_{A_1}(CS_{A_1})^{(1)}+U-(P_{A_2}B)^{(1)}P_{A_2}BUCS_{A_1}(CS_{A_1})^{(1)}, \tag{1.4}
\]
where U ∈ C^{k×l} is arbitrary.
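The next sketch is an informal numerical illustration of Lemma 1.1 (added here, with arbitrarily chosen sizes): it evaluates the right-hand sides of (1.1) and (1.2) for one random triple (A, B, C) and checks that the ranks r(A − BXC) observed for random X all lie between the two values, and that a generic X attains the maximum (1.2); attaining the minimum (1.1) requires the special choice of X described by (1.3) and (1.4).

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, l = 6, 5, 2, 3                      # arbitrary dimensions
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, n))

def rank(M):
    return np.linalg.matrix_rank(M)

# Right-hand sides of (1.1) and (1.2).
r_min = rank(np.hstack([A, B])) + rank(np.vstack([A, C])) \
        - rank(np.block([[A, B], [C, np.zeros((l, k))]]))
r_max = min(rank(np.hstack([A, B])), rank(np.vstack([A, C])))

ranks = {rank(A - B @ rng.standard_normal((k, l)) @ C) for _ in range(200)}
print(r_min, r_max, ranks)                   # every observed rank lies in [r_min, r_max]
print(max(ranks) == r_max)                   # a generic X attains the maximum (1.2)
```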

Lemma 1.2 ([18,19]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n} and D ∈ C^{l×k}. Then

\[
\min_{A^{(1)}} r\bigl(D-CA^{(1)}B\bigr)=r(A)+r(C,\;D)+r\begin{pmatrix}B\\ D\end{pmatrix}+r\begin{pmatrix}A & B\\ C & D\end{pmatrix}-r\begin{pmatrix}A & O & B\\ O & C & D\end{pmatrix}-r\begin{pmatrix}A & O\\ O & B\\ C & D\end{pmatrix}. \tag{1.5}
\]
Closed-form expressions of the same type hold for min_{A^{(1,2)}} r(D − CA^{(1,2)}B) (1.6), min_{A^{(1,3)}} r(D − CA^{(1,3)}B) (1.7) and min_{A^{(1,2,3)}} r(D − CA^{(1,2,3)}B) (1.8): each of them is a sum and difference of ranks of block matrices built from A, B, C and D (and from A^*A and A^*B in the cases (1.7) and (1.8)); the explicit formulas are given in [18,19].
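The following sketch (an added illustration with arbitrarily chosen sizes) evaluates the right-hand side of (1.5) for random data and checks two things: that it lower-bounds r(D − CGB) for randomly generated {1}-inverses G of A, and that it reproduces r(D − CA^{-1}B) exactly when A is square and nonsingular, in which case A{1} = {A^{-1}}.

```python
import numpy as np

rng = np.random.default_rng(2)
rank = np.linalg.matrix_rank

def rhs_1_5(A, B, C, D):
    """Evaluate the right-hand side of formula (1.5)."""
    m, n = A.shape
    l, k = D.shape
    return (rank(A) + rank(np.hstack([C, D])) + rank(np.vstack([B, D]))
            + rank(np.block([[A, B], [C, D]]))
            - rank(np.block([[A, np.zeros((m, n)), B],
                             [np.zeros((l, n)), C, D]]))
            - rank(np.block([[A, np.zeros((m, k))],
                             [np.zeros((m, n)), B],
                             [C, D]])))

def random_one_inverse(A, rng):
    """A random {1}-inverse G = A^+ + (I - A^+A)V + W(I - AA^+); AGA = A for any V, W."""
    m, n = A.shape
    Ap = np.linalg.pinv(A)
    V = rng.standard_normal((n, m))
    W = rng.standard_normal((n, m))
    return Ap + (np.eye(n) - Ap @ A) @ V + W @ (np.eye(m) - A @ Ap)

m, n, l, k = 5, 4, 5, 5
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))   # rank-deficient A, so A{1} is large
B = rng.standard_normal((m, k))
C = rng.standard_normal((l, n))
D = rng.standard_normal((l, k))

bound = rhs_1_5(A, B, C, D)
sampled = {rank(D - C @ random_one_inverse(A, rng) @ B) for _ in range(200)}
print(bound, sampled, min(sampled) >= bound)      # (1.5) lower-bounds every sampled rank

# With A square and nonsingular, A{1} = {A^{-1}} and (1.5) collapses to r(D - CA^{-1}B).
A2 = rng.standard_normal((n, n)) + n * np.eye(n)
B2 = rng.standard_normal((n, k))
C2 = rng.standard_normal((l, n))
print(rhs_1_5(A2, B2, C2, D) == rank(D - C2 @ np.linalg.inv(A2) @ B2))
```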

In addition, the following results will be needed in the sequel.

Lemma 1.3 ([5,21]). Let A ∈ C^{m×n}. Then the general expressions of the following types of g-inverses of A can be written as

\[
A\{1,2,3\}=\bigl\{A^{(1,2,3)}:\;A^{(1,2,3)}=A^{\dagger}+F_AVAA^{\dagger},\;V\in C^{n\times m}\bigr\}, \tag{1.9}
\]
\[
A\{1,2,4\}=\bigl\{A^{(1,2,4)}:\;A^{(1,2,4)}=A^{\dagger}+A^{\dagger}AWE_A,\;W\in C^{n\times m}\bigr\}. \tag{1.10}
\]
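To make the parametrizations (1.9) and (1.10) concrete, the sketch below (added for illustration; the test matrix and the choices of V and W are arbitrary) generates one element of A{1,2,3} and one element of A{1,2,4} and verifies the defining Penrose conditions (1), (2), (3) and (1), (2), (4), respectively.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 6, 4
A = rng.standard_normal((m, n))
Ad = np.linalg.pinv(A)                    # A^dagger
FA = np.eye(n) - Ad @ A                   # F_A = I_n - A^dagger A
EA = np.eye(m) - A @ Ad                   # E_A = I_m - A A^dagger

V = rng.standard_normal((n, m))
W = rng.standard_normal((n, m))
G123 = Ad + FA @ V @ A @ Ad               # an element of A{1,2,3} by (1.9)
G124 = Ad + Ad @ A @ W @ EA               # an element of A{1,2,4} by (1.10)

ok = np.allclose
print(ok(A @ G123 @ A, A), ok(G123 @ A @ G123, G123), ok((A @ G123).T, A @ G123))
print(ok(A @ G124 @ A, A), ok(G124 @ A @ G124, G124), ok((G124 @ A).T, G124 @ A))
```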

2. Inverse of the special Schur complement CD^{-1}B

In this section we prove that, for the special Schur complement CD^{-1}B, there always exist X and Y such that XDY = (CD^{-1}B)^{-1}.

Theorem 2.1. Let D ∈ C_m^{m×m}, C ∈ C_n^{n×m}, B ∈ C_n^{m×n} and let CD^{-1}B be nonsingular. Then there always exist X ∈ C^{n×m} and Y ∈ C^{m×n} such that (CD^{-1}B)^{-1} = XDY.

Proof. It is well known that the matrix equation (CD^{-1}B)^{-1} = XDY is consistent for some matrices X and Y if and only if
\[
\min_{Y,X} r\bigl(I_n-CD^{-1}BXDY\bigr)=0. \tag{2.1}
\]

Applying formula (1.1) in Lemma 1.1 to the difference I_n − CD^{-1}BXDY and simplifying by elementary block matrix operations, we have
\[
\begin{aligned}
\min_{X} r\bigl(I_n-CD^{-1}BXDY\bigr)
&=r\bigl(I_n,\;CD^{-1}B\bigr)+r\begin{pmatrix}I_n\\ DY\end{pmatrix}-r\begin{pmatrix}I_n & CD^{-1}B\\ DY & O\end{pmatrix}\\
&=n+n-r\begin{pmatrix}I_n & CD^{-1}B\\ O & DYCD^{-1}B\end{pmatrix}
=n-r\bigl(DYCD^{-1}B\bigr).
\end{aligned} \tag{2.2}
\]

Thus, from (2.2) and formula (1.2) in Lemma 1.1, we have
\[
\min_{Y,X} r\bigl(I_n-CD^{-1}BXDY\bigr)=n-\max_{Y} r\bigl(DYCD^{-1}B\bigr)=n-\min\bigl\{r(D),\,r(CD^{-1}B)\bigr\}=n-r\bigl(CD^{-1}B\bigr)=0. \tag{2.3}
\]
Combining (2.2) with (2.3), we obtain (2.1). That is, there always exist X ∈ C^{n×m} and Y ∈ C^{m×n} such that (CD^{-1}B)^{-1} = XDY. □
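The following sketch illustrates Theorem 2.1 numerically (it is an added illustration and not the construction used in Section 3): for random data and a generic Y, the matrix DY has full column rank, so a matching X can be read off directly; the witness pair below is just one convenient choice.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 5, 3
D = rng.standard_normal((m, m)) + m * np.eye(m)    # nonsingular D
C = rng.standard_normal((n, m))                    # full row rank (generically)
B = rng.standard_normal((m, n))                    # full column rank (generically)

S = C @ np.linalg.inv(D) @ B                       # the special Schur complement CD^{-1}B
Y = rng.standard_normal((m, n))                    # a generic Y; DY then has full column rank
X = np.linalg.inv(S) @ np.linalg.pinv(D @ Y)       # solve CD^{-1}B X (DY) = I_n for X

print(np.allclose(X @ D @ Y, np.linalg.inv(S)))    # XDY = (CD^{-1}B)^{-1}
```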

The following corollary can be viewed as a generalization of Theorem 2.1.

Corollary 2.1. Let D ∈ C^{m×m} be a positive definite Hermitian matrix and B ∈ C_n^{m×n}. Then there always exist X ∈ C^{n×m} and Y ∈ C^{m×n} such that (B^*D^{-1}B)^{-1} = XDY.

By applying the minimal ranks of generalized Schur complements [18,19], we obtain the following results.

Theorem 2.2. Let D ∈ C_m^{m×m}, C ∈ C_n^{n×m}, B ∈ C_n^{m×n} and let CD^{-1}B be nonsingular. Then there always exist C^{(1)} ∈ C{1} and B^{(1)} ∈ B{1} such that (CD^{-1}B)^{-1} = B^{(1)}DC^{(1)}.

Proof. For some B^{(1)} ∈ B{1} and C^{(1)} ∈ C{1}, the matrix equation (CD^{-1}B)^{-1} = B^{(1)}DC^{(1)} is consistent if and only if

\[
\min_{C^{(1)},\,B^{(1)}} r\bigl(I_n-CD^{-1}BB^{(1)}DC^{(1)}\bigr)=0. \tag{2.4}
\]

Applying formula (1.5) of Lemma 1.2 to the difference I_n − CD^{-1}BB^{(1)}DC^{(1)} and simplifying by elementary block matrix operations, we obtain
\[
\begin{aligned}
\min_{B^{(1)}} r\bigl(I_n-CD^{-1}BB^{(1)}DC^{(1)}\bigr)
&=r(B)+r\bigl(CD^{-1}B,\;I_n\bigr)+r\begin{pmatrix}DC^{(1)}\\ I_n\end{pmatrix}+r\begin{pmatrix}B & DC^{(1)}\\ CD^{-1}B & I_n\end{pmatrix}\\
&\qquad-r\begin{pmatrix}B & O & DC^{(1)}\\ O & CD^{-1}B & I_n\end{pmatrix}-r\begin{pmatrix}B & O\\ O & DC^{(1)}\\ CD^{-1}B & I_n\end{pmatrix}\\
&=n+n+n+r\bigl(B,\;DC^{(1)}\bigr)-\bigl(n+r\bigl(B,\;DC^{(1)}\bigr)\bigr)-\bigl(n+r\bigl(CD^{-1}B\bigr)\bigr)\\
&=n-r\bigl(CD^{-1}B\bigr)=0.
\end{aligned} \tag{2.5}
\]


Thus
\[
\min_{C^{(1)},\,B^{(1)}} r\bigl(I_n-CD^{-1}BB^{(1)}DC^{(1)}\bigr)=\min_{B^{(1)}} r\bigl(I_n-CD^{-1}BB^{(1)}DC^{(1)}\bigr)=0. \tag{2.6}
\]

From (2.4), (2.5) and (2.6), we obtain the result of Theorem 2.2. □

A direct consequence of Theorem 2.2 is the following corollary.

Corollary 2.2. Let B ∈ C_n^{m×n} and let D ∈ C^{m×m} be a positive definite Hermitian matrix. Then there always exist B^{(1)} ∈ B{1} and (B^*)^{(1)} ∈ B^*{1} such that
\[
(B^*D^{-1}B)^{-1}=B^{(1)}D(B^*)^{(1)}.
\]

The method and the corresponding result in Theorem 2.2 illustrate an important fact: we can use the minimal ranks of generalized Schur complements to study the inverse of the special Schur complement CD^{-1}B. According to the formulas (1.6), (1.7) and (1.8) in Lemma 1.2, and arguing in the same manner as in Theorem 2.2, we have the following.

Theorem 2.3. Let D ∈ C_m^{m×m}, C ∈ C_n^{n×m}, B ∈ C_n^{m×n} and let CD^{-1}B be nonsingular. Then there always exist C^{(g)} ∈ C{g} and B^{(g)} ∈ B{g} such that
\[
(CD^{-1}B)^{-1}=B^{(g)}DC^{(g)},
\]
where g ∈ {{1}, {1,2}, {1,3}, {1,4}, {1,2,3}, {1,2,4}}.

Corollary 2.3. Let B ∈ C_n^{m×n} and let D ∈ C^{m×m} be a positive definite Hermitian matrix. Then there always exist B^{(g)} ∈ B{g} and (B^*)^{(g)} ∈ B^*{g} such that
\[
(B^*D^{-1}B)^{-1}=B^{(g)}D(B^*)^{(g)},
\]
where g ∈ {{1}, {1,2}, {1,3}, {1,4}, {1,2,3}, {1,2,4}}.

3. The explicit expressions for the inverse of CD^{-1}B

In this section we give explicit expressions for X, Y and (CD^{-1}B)^{-1}; the main result is the following theorem.

Theorem 3.1. Let D ∈ C_m^{m×m}, C ∈ C_n^{n×m}, B ∈ C_n^{m×n} and let CD^{-1}B be nonsingular. Then

\[
(CD^{-1}B)^{-1}=B^{\dagger}D\bigl(C^{\dagger}+(I_m-C^{\dagger}C)\widetilde N^{\dagger}M\bigr), \tag{3.1}
\]
where M = I_n − CD^{-1}BB^†DC^† and Ñ = CD^{-1}BB^†D(I_m − C^†C).

Proof. From the results obtained in Theorem 2.3 and formula (1.9) in Lemma 1.3, we know that there always exist X ∈ B{1,2,3} and Y ∈ C{1,2,3} such that (CD^{-1}B)^{-1} = XDY; that is, there exist W_1 and W_2 such that

\[
(CD^{-1}B)^{-1}=XDY=(B^{\dagger}+F_BW_2BB^{\dagger})D(C^{\dagger}+F_CW_1CC^{\dagger})=B^{\dagger}D\bigl(C^{\dagger}+(I_m-C^{\dagger}C)W_1\bigr), \tag{3.2}
\]
where F_B = I_n − B^†B = O, F_C = I_m − C^†C and CC^† = I_n. According to the equality (3.2), there always exist X = B^† ∈ B{1,2,3} and Y = C^† + (I_m − C^†C)W_1 ∈ C{1,2,3} such that

\[
I_n-CD^{-1}BXDY=I_n-CD^{-1}BB^{\dagger}D\bigl(C^{\dagger}+(I_m-C^{\dagger}C)W_1\bigr)=I_n-CD^{-1}BB^{\dagger}DC^{\dagger}-CD^{-1}BB^{\dagger}D(I_m-C^{\dagger}C)W_1=O; \tag{3.3}
\]
that is, there always exists W_1 such that

\[
\min_{W_1} r\bigl(I_n-CD^{-1}BB^{\dagger}DC^{\dagger}-CD^{-1}BB^{\dagger}D(I_m-C^{\dagger}C)W_1\bigr)=\min_{W_1} r\bigl(M-\widetilde NW_1\bigr)=0. \tag{3.4}
\]

On the other hand,
\[
\begin{aligned}
r\bigl(M,\;\widetilde N\bigr)+r\begin{pmatrix}M\\ I_n\end{pmatrix}-r\begin{pmatrix}M & \widetilde N\\ I_n & O\end{pmatrix}
&=r\bigl(I_n-CD^{-1}BB^{\dagger}DC^{\dagger},\;CD^{-1}BB^{\dagger}D(I_m-C^{\dagger}C)\bigr)+n-n-r\bigl(CD^{-1}BB^{\dagger}D(I_m-C^{\dagger}C)\bigr)\\
&=r\bigl(CD^{-1}BB^{\dagger}D(I_m-C^{\dagger}C)\bigr)+n-n-r\bigl(CD^{-1}BB^{\dagger}D(I_m-C^{\dagger}C)\bigr)=0,
\end{aligned} \tag{3.5}
\]
where the second equality holds because, by (3.3), M = ÑW_1 for some W_1, so that r(M, Ñ) = r(Ñ). Combining (3.4) and (3.5) with formula (1.1) in Lemma 1.1, which gives
\[
\min_{W_1} r\bigl(M-\widetilde NW_1\bigr)=r\bigl(M,\;\widetilde N\bigr)+r\begin{pmatrix}M\\ I_n\end{pmatrix}-r\begin{pmatrix}M & \widetilde N\\ I_n & O\end{pmatrix}, \tag{3.6}
\]
we know that there always exists W_1 ∈ C^{m×n} such that M − ÑW_1 = O.


Thus, according to formulas (1.3) and (1.4) in Lemma 1.1, W_1 is the general solution of the following consistent matrix equation:
\[
\widetilde NW_1=M, \tag{3.7}
\]

and W_1 can be written as W_1 = Ñ^{(1)}M + T − Ñ^{(1)}ÑT, where Ñ^{(1)} ∈ Ñ{1} and T ∈ C^{m×n} are arbitrary. Let T = O and Ñ^{(1)} = Ñ^†; since the inverse of a matrix is unique, B^†D(C^† + (I_m − C^†C)Ñ^†M) is an explicit expression for (CD^{-1}B)^{-1}. □

The following corollary can be viewed as a generalization of Theorem 3.1.

Corollary 3.1. Let B ∈ C_n^{m×n} and let D ∈ C^{m×m} be a positive definite Hermitian matrix. Then
\[
(B^*D^{-1}B)^{-1}=B^{\dagger}D\bigl((B^*)^{\dagger}+(I_m-BB^{\dagger})\widehat N^{\dagger}\widehat M\bigr), \tag{3.8}
\]
where M̂ = I_n − B^*D^{-1}BB^†D(B^*)^† and N̂ = B^*D^{-1}BB^†D(I_m − BB^†).
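Before the worked example, here is a small NumPy check of (3.8) on randomly generated data (an added illustration; the sizes and the positive definite D below are arbitrary choices, not the matrices used in Example 1).

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 6, 3
G = rng.standard_normal((m, m))
D = G @ G.T + m * np.eye(m)          # a positive definite Hermitian D
B = rng.standard_normal((m, n))      # full column rank (generically)

Bd = np.linalg.pinv(B)
Bsd = np.linalg.pinv(B.T)            # (B^*)^dagger for real B
S = B.T @ np.linalg.inv(D) @ B       # the Schur complement B^* D^{-1} B

M_hat = np.eye(n) - S @ Bd @ D @ Bsd
N_hat = S @ Bd @ D @ (np.eye(m) - B @ Bd)

lhs = np.linalg.inv(S)
rhs = Bd @ D @ (Bsd + (np.eye(m) - B @ Bd) @ np.linalg.pinv(N_hat) @ M_hat)
print(np.allclose(lhs, rhs))         # verifies formula (3.8)
```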

Example 1. Let
\[
C=\begin{pmatrix}1 & 0 & 1 & 1\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\end{pmatrix}\in C_3^{3\times 4},
\]
and let D ∈ C_4^{4×4} and B ∈ C_3^{4×3} be 0–1 matrices for which CD^{-1}B is nonsingular. Computing B^†, C^†, Ñ = CD^{-1}BB^†D(I_4 − C^†C) and M = I_3 − CD^{-1}BB^†DC^†, it is easy to verify that
\[
CD^{-1}B\,B^{\dagger}D\bigl(C^{\dagger}+(I_4-C^{\dagger}C)\widetilde N^{\dagger}M\bigr)=I_3
\quad\text{and}\quad
B^{\dagger}D\bigl(C^{\dagger}+(I_4-C^{\dagger}C)\widetilde N^{\dagger}M\bigr)CD^{-1}B=I_3;
\]
that is,
\[
(CD^{-1}B)^{-1}=B^{\dagger}D\bigl(C^{\dagger}+(I_4-C^{\dagger}C)\widetilde N^{\dagger}M\bigr).
\]
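The same verification can be scripted for arbitrary data; the sketch below (an added illustration using random matrices, not the matrices of Example 1) checks formula (3.1) directly.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 5, 3
D = rng.standard_normal((m, m)) + m * np.eye(m)   # nonsingular D
C = rng.standard_normal((n, m))                   # full row rank (generically)
B = rng.standard_normal((m, n))                   # full column rank (generically)

Dinv = np.linalg.inv(D)
S = C @ Dinv @ B                                  # CD^{-1}B, nonsingular for generic data
Bd, Cd = np.linalg.pinv(B), np.linalg.pinv(C)

M = np.eye(n) - S @ Bd @ D @ Cd                   # M  = I_n - CD^{-1}B B^+ D C^+
N = S @ Bd @ D @ (np.eye(m) - Cd @ C)             # N~ = CD^{-1}B B^+ D (I_m - C^+ C)

rhs = Bd @ D @ (Cd + (np.eye(m) - Cd @ C) @ np.linalg.pinv(N) @ M)
print(np.allclose(rhs, np.linalg.inv(S)))         # (CD^{-1}B)^{-1} = B^+ D (C^+ + (I_m - C^+C) N~^+ M)
```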

By the formulas (1.9) and (1.10) in Lemma 1.3, we know that G ∈ A{1,2,4} if and only if G^* ∈ A^*{1,2,3}. Then, from the results obtained in Theorem 3.1 and Corollary 3.1, we get another explicit expression for (CD^{-1}B)^{-1}, which we state without proof.

Theorem 3.2. Let D ∈ C_m^{m×m}, C ∈ C_n^{n×m}, B ∈ C_n^{m×n} and let CD^{-1}B be nonsingular. Then
\[
(CD^{-1}B)^{-1}=\bigl(B^{\dagger}+HR^{\dagger}(I_m-BB^{\dagger})\bigr)DC^{\dagger}, \tag{3.9}
\]
where H = I_n − B^†DC^†CD^{-1}B and R = (I_m − BB^†)DC^†CD^{-1}B.

A direct consequence of Theorem 3.2 is the following corollary.

Corollary 3.2. Let B ∈ C_n^{m×n} and let D ∈ C^{m×m} be a positive definite Hermitian matrix. Then

\[
(B^*D^{-1}B)^{-1}=\bigl(B^{\dagger}+\bar H\bar R^{\dagger}(I_m-BB^{\dagger})\bigr)D(B^*)^{\dagger}, \tag{3.10}
\]
where H̄ = I_n − B^†DBB^†D^{-1}B and R̄ = (I_m − BB^†)DBB^†D^{-1}B.


Example 2. Consider
\[
C=\begin{pmatrix}-1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix}\in C_2^{2\times 3},
\]
together with D ∈ C_3^{3×3} and B ∈ C_2^{3×2} such that CD^{-1}B is nonsingular. By direct computation of B^†, C^†, DC^†, R = (I_3 − BB^†)DC^†CD^{-1}B, R^† and H = I_2 − B^†DC^†CD^{-1}B, it is easy to check that
\[
CD^{-1}B\bigl(B^{\dagger}+HR^{\dagger}(I_3-BB^{\dagger})\bigr)DC^{\dagger}=I_2
\quad\text{and}\quad
\bigl(B^{\dagger}+HR^{\dagger}(I_3-BB^{\dagger})\bigr)DC^{\dagger}\,CD^{-1}B=I_2;
\]
that is,
\[
(CD^{-1}B)^{-1}=\bigl(B^{\dagger}+HR^{\dagger}(I_3-BB^{\dagger})\bigr)DC^{\dagger}.
\]
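As with Example 1, formula (3.9) can be checked mechanically; the sketch below does so for random matrices (an added illustration, not the data of Example 2).

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 5, 3
D = rng.standard_normal((m, m)) + m * np.eye(m)   # nonsingular D
C = rng.standard_normal((n, m))
B = rng.standard_normal((m, n))

Dinv = np.linalg.inv(D)
S = C @ Dinv @ B                                  # CD^{-1}B
Bd, Cd = np.linalg.pinv(B), np.linalg.pinv(C)

H = np.eye(n) - Bd @ D @ Cd @ S                   # H = I_n - B^+ D C^+ CD^{-1}B
R = (np.eye(m) - B @ Bd) @ D @ Cd @ S             # R = (I_m - BB^+) D C^+ CD^{-1}B

rhs = (Bd + H @ np.linalg.pinv(R) @ (np.eye(m) - B @ Bd)) @ D @ Cd
print(np.allclose(rhs, np.linalg.inv(S)))         # verifies formula (3.9)
```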

Acknowledgements

The authors thank the Editor-in-Chief and the anonymous referees for their very detailed comments, which greatly improved the presentation of this paper.

References

[1] T. Ando, Generalized Schur complements, Linear Algebra Appl. 27 (1979) 173–186.
[2] C. Brezinski, Some determinantal identities in a vector space with applications, in: H. Werner, H.J. Bünger (Eds.), Padé Approximation and Its Applications, Bad Honnef 1983, Lecture Notes in Math., vol. 1071, Springer-Verlag, Berlin, 1984.
[3] C. Brezinski, Other manifestations of the Schur complement, Linear Algebra Appl. 111 (1988) 231–247.
[4] C. Brezinski, M.R. Zaglia, A Schur complement approach to a general extrapolation algorithm, Linear Algebra Appl. 368 (2003) 279–301.
[5] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, Wiley-Interscience, 1974; second ed., Springer-Verlag, New York, 2003.
[6] R.W. Cottle, Manifestations of the Schur complement, Linear Algebra Appl. 8 (1974) 189–211.
[7] Z.H. Cao, Constraint Schur complement preconditioners for nonsymmetric saddle point problems, Appl. Numer. Math. 59 (2009) 151–169.
[8] G.H. Golub, C.F. Van Loan, Matrix Computations, third ed., Johns Hopkins University Press, Baltimore, 1996.
[9] L. Guttman, Enlargement methods for computing the inverse matrix, Ann. Math. Statist. 17 (1946) 336–343.
[10] M. Gulliksson, X. Jin, Y. Wei, Perturbation bounds for constrained and weighted least squares problems, Linear Algebra Appl. 349 (2002) 221–232.
[11] C.R. Johnson, Inverse M-matrices, Linear Algebra Appl. 47 (1982) 195–216.
[12] K. Jbilou, A. Messaoudi, Matrix recursive interpolation algorithm for block linear systems: direct methods, Linear Algebra Appl. 294 (1999) 137–154.
[13] J.Z. Liu, F.Z. Zhang, Disc separation of the Schur complement of diagonally dominant matrices and determinantal bounds, SIAM J. Matrix Anal. Appl. 27 (3) (2005) 665–674.
[14] M. Newman, Matrix computations, in: J. Todd (Ed.), Survey of Numerical Analysis, McGraw-Hill, New York, 1962, pp. 222–254.
[15] D.V. Ouellette, Schur complements and statistics, Linear Algebra Appl. 36 (1981) 187–295.
[16] R. Penrose, A generalized inverse for matrices, Proc. Cambridge Philos. Soc. 51 (1955) 406–413.
[17] I. Schur, Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind, J. Reine Angew. Math. 147 (1917) 205–232.
[18] Y. Tian, Upper and lower bounds for ranks of matrix expressions using generalized inverses, Linear Algebra Appl. 355 (2002) 187–214.
[19] Y. Tian, More on maximal and minimal ranks of Schur complements with applications, Appl. Math. Comput. 152 (2004) 675–692.
[20] Y. Tian, S. Cheng, The maximal and minimal ranks of A − BXC with applications, New York J. Math. 9 (2003) 345–362.
[21] G. Wang, Y. Wei, S. Qiao, Generalized Inverses: Theory and Computations, Science Press, Beijing, 2004.
[22] J.R. Westlake, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations, Wiley, New York, 1968.
[23] Z.P. Xiong, B. Zheng, The reverse order laws for {1,2,3}- and {1,2,4}-inverses of a two-matrix product, Appl. Math. Lett. 21 (2008) 649–655.