Symbolic computation of Drazin inverses by specializations


Journal of Computational and Applied Mathematics 301 (2016) 201–212


J. Rafael Sendra a,∗, Juana Sendra b

a Grupo ASYNACS (Ref. CCEE2011/R34), Dpto. de Física y Matemáticas, Universidad de Alcalá, Ap. Correos 20, E-28871 Alcalá de Henares, Madrid, Spain

b Dpto. Matemática Aplicada a las TIC, Research Center on Software Technologies and Multimedia Systems for Sustainability (CITSEM), UPM, Spain

Article history: Received 7 October 2015; received in revised form 29 December 2015.

Keywords: Drazin inverse; analytic perturbation; Gröbner bases; symbolic computation; meromorphic functions; Laurent formal power series

Abstract. In this paper, we show how to reduce the computation of Drazin inverses over certain computable fields to the computation of Drazin inverses of matrices with rational functions as entries. As a consequence, we derive a symbolic algorithm to compute the Drazin inverse of matrices whose entries are elements of a finite transcendental field extension of a computable field. The algorithm is applied to matrices over the field of meromorphic functions, in several complex variables, on a connected domain, and to matrices over the field of Laurent formal power series.

1. Introduction This paper shows how symbolic computation can be applied to compute Drazin inverses of matrices whose coefficients are rational functions of finitely many transcendental elements over the field C of the complex numbers; we recall that an element a is transcendental over a field K if a is not a root of any non-zero univariate polynomial with coefficients in K (see e.g. pg. 114 in [1]). 1.1. The interest of Drazin inverses Motivated by the notion of generalized inverse introduced by Moore and Penrose, Drazin, in [2], introduced the notion of Drazin inverse in the more general context of rings and semigroups. For matrices, the Drazin inverse is defined as follows. Let A be a square matrix over a field; then the Drazin index of A, denoted index(A), is the smallest non-negative integer k such that rank(A^k) = rank(A^(k+1)). In this situation, the Drazin inverse of A is the unique matrix satisfying the following matrix equations:

A^(index(A)+1) · X = A^index(A),   A · X = X · A,   X · A · X = X.   (1)
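These defining conditions are easy to check mechanically. The following sketch (our illustration, assuming Python with sympy; the function names are ours) computes the Drazin index by comparing ranks of successive powers, and tests a candidate X against the three equations in (1):

```python
import sympy as sp

def drazin_index(A):
    # Smallest non-negative k with rank(A**k) == rank(A**(k+1)).
    k, P = 0, sp.eye(A.rows)
    while P.rank() != (P * A).rank():
        k, P = k + 1, P * A
    return k

def satisfies_drazin_equations(A, X):
    # The three matrix equations (1), with k = index(A).
    k = drazin_index(A)
    Z = sp.zeros(A.rows, A.cols)
    return (sp.simplify(A**(k + 1) * X - A**k) == Z
            and sp.simplify(A * X - X * A) == Z
            and sp.simplify(X * A * X - X) == Z)
```

For instance, the idempotent matrix [ 1, 1 ; 0, 0 ] has index 1 and is its own Drazin inverse.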

Many authors have analyzed the properties of Drazin inverses (see e.g. Chapter 4 in [3], Chapter 7 in [4–6]) as well as their applications (see e.g. Chapters 8 and 9 in [7,4,8,9]); particularly interesting are the applications to singular differential equations and difference equations, and to finite Markov chains.



∗ Corresponding author. E-mail addresses: [email protected] (J.R. Sendra), [email protected] (J. Sendra).

http://dx.doi.org/10.1016/j.cam.2016.01.059 0377-0427/© 2016 Elsevier B.V. All rights reserved.


1.2. Computing Drazin inverses: the state of the art An important issue in the topic is the computation of the Drazin inverse (see e.g. [3,4,7,6,10–15]). The problem has been approached mainly for matrices with complex number entries. Nevertheless, in a second stage, different authors have addressed the problem of computing Drazin inverses of matrices whose entries belong to other fields, for instance rational function fields (see [16–19]). Furthermore, symbolic techniques have proven to be suitable tools for this goal. The next challenging step is the computation of Drazin inverses over more general domains, as for instance the field of formal Laurent series (see the connection to the study of analytically perturbed matrices in [7]) or the field of meromorphic functions; in [20] one can read about the difficulties of reasoning with transcendental functions. In this paper, we show how to reduce the symbolic computation of Drazin inverses over such domains to the computation over the field of rational functions, and hence the already available algorithms, in particular the one based on Gröbner bases developed in [19], can be applied. 1.3. Main contributions of the paper In general terms, the main contribution of the paper is the establishment of an algorithmic criterion to symbolically reduce the computation of Drazin inverses of matrices over a field C(t1, . . . , tr), where t1, . . . , tr are transcendental elements over C, to the computation of Drazin inverses of matrices over the field of rational functions C(w1, . . . , wr), where w1, . . . , wr are independent complex variables. More precisely, the main contributions of the paper can be summarized as follows:

• We prove the existence, and the actual computation, of a multivariate polynomial in C[w1, . . . , wr] (that we call the evaluation polynomial, see Definition 7) such that, if it does not vanish at (t1, . . . , tr), the computation of the Drazin inverse over C(t1, . . . , tr) reduces to the computation of the Drazin inverse over C(w1, . . . , wr).
• As an application, we show how to compute the Drazin inverse of matrices whose entries are meromorphic functions or Laurent formal power series.

• Furthermore, we show how to relate the specialization of the Drazin inverse of a matrix with meromorphic function entries to the Drazin inverse of the specialization.

• Also, as a consequence of these ideas, one gets a method to compute the Laurent expansion of the Drazin inverses of analytically perturbed matrices.
1.4. Intuitive idea of the novel proposed solution method Given a matrix A, the idea consists of the following three steps: 1. [Specialization step] we associate to the matrix A a new matrix A∗, whose entries are rational functions in several variables; 2. [Inverse computation step] we compute the Drazin inverse of A∗, for instance using the algorithm in [19]; 3. [Evaluation step] finally, from the Drazin inverse of A∗ we obtain the Drazin inverse of A. More precisely, we consider a computable field K, that is, a field for which all operations can be executed by means of an algorithm (see pg. 16 in [21]). In addition, we consider the chain of fields

K ⊂ K(t1 , . . . , tr ) ⊂ F where K(t1 , . . . , tr ) is the field extension of K via the adjunction of finitely many elements t1 , . . . , tr ∈ F \ K; we recall that the extension of K via the adjunction of {t1 , . . . , tr } is the intersection of all fields containing K and {t1 , . . . , tr } or equivalently the field of all rational combinations of the elements in {t1 , . . . , tr } with the elements in K (see e.g. Section 6.2. in [1]). For instance, we may take

C ⊂ C(sin(z ), cosh(z )) ⊂ Mer(z, Ω ) where Mer(z, Ω ) is the field of meromorphic functions on a connected domain Ω of C, or we may take

C ⊂ C( Σ_{n=−2}^{∞} z^n/(n^4 + 1), Σ_{n=0}^{∞} (n/(n+1)) z^n ) ⊂ C((z)),

where C((z)) is the field of Laurent formal power series. In this situation, a matrix A ∈ Mn×n(K(t1, . . . , tr)), with entries in the intermediate field K(t1, . . . , tr), is given, and we want to compute the Drazin inverse of A. For this purpose, we replace A by a matrix A∗ ∈ Mn×n(K(w1, . . . , wr)) with entries in the field of rational functions K(w1, . . . , wr). Then, the Drazin inverse of A is obtained from the Drazin inverse of A∗ by specializing (w1, . . . , wr) at (t1, . . . , tr). We prove the existence of, and show how to compute, a multivariate polynomial in K[w1, . . . , wr] such that, if it does not vanish at (t1, . . . , tr), the correctness of the method is guaranteed.
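The three steps can be sketched computationally. The following Python/sympy code is our minimal illustration, not the Gröbner-basis algorithm of [19]: it computes the Drazin inverse by a classical characteristic-polynomial route (writing the characteristic polynomial as λ^a g(λ) with g(0) ≠ 0, the polynomial u(λ) = −(g(λ) − g(0))/(g(0) λ) inverts λ modulo g, and D(A) = u(A)^(a+1) A^a), and then specializes; the sample matrix and the numeric specialization point are our choices:

```python
import sympy as sp

def drazin(A):
    # D(A) via the characteristic polynomial chi(l) = l**a * g(l), g(0) != 0:
    # u(l) = -(g(l) - g(0))/(g(0)*l) inverts l modulo g, and D(A) = u(A)**(a+1) * A**a.
    n = A.rows
    lam = sp.Symbol('lam')
    coeffs = A.charpoly(lam).all_coeffs()[::-1]   # ascending powers of lam
    a = 0
    while sp.simplify(coeffs[a]) == 0:            # multiplicity of the root 0
        a += 1
    if a == n:                                    # nilpotent matrix: D(A) = 0
        return sp.zeros(n, n)
    g = coeffs[a:]                                # g(l) = sum g[i]*l**i, g[0] != 0
    U, P = sp.zeros(n, n), sp.eye(n)
    for gi in g[1:]:                              # U = g[1]*I + g[2]*A + ...
        U += gi * P
        P *= A
    U = -U / g[0]
    return sp.simplify(U**(a + 1) * A**a)

# Specialization step: a matrix over C(w1, w2) standing in for the class of A.
w1, w2 = sp.symbols('w1 w2')
Astar = sp.Matrix([[0, w1], [w2, 1]])
# Inverse computation step: Drazin inverse over the rational function field.
Dstar = drazin(Astar)
# Evaluation step: substituting a sample point where the method is valid
# commutes with taking the Drazin inverse.
spec = {w1: 3, w2: 5}
assert sp.simplify(Dstar.subs(spec) - drazin(Astar.subs(spec))) == sp.zeros(2, 2)
```

The formula reduces to the ordinary inverse when A is regular (a = 0) and to the zero matrix when A is nilpotent.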


1.5. Structure of the paper The paper is structured as follows. In Section 2, we fix the notation and introduce the theoretical framework. In Section 3 we present the method itself and prove the main theorems. In Sections 4 and 5 we apply the method to the case of meromorphic functions and to the case of Laurent formal power series, respectively. 2. Theoretical framework In this section, we fix the notation to be used throughout the paper and we introduce the theoretical framework of the paper. Let K be a subfield of a field F, let w = (w1, . . . , wr) be a tuple of variables, and let t = (t1, . . . , tr) ∈ (F \ K)^r. We will assume w.l.o.g. that each ti is transcendental over K; if this is not the case, we can always replace K by K(ti). In addition, we also assume w.l.o.g. that ti ̸= tj if i ̸= j. We assume that t is fixed. For instance, K could be C, F the meromorphic functions on a connected domain of C, and t = (z, sin(z), e^z). We consider the following rings of matrices:

Rw = Mn×n(K(w))  and  Rt = Mn×n(K(t)).

Given A ∈ Rw , we define the denominator of A, denoted as den(A), as the least common multiple of the denominators of all entries, taken in reduced form, of A. And we define the numerator of A, denoted by num(A), as num(A) = den(A) · A. Note that num(A) ∈ Mn×n (K[w]). Now, we introduce the set

Ot = {A(w) ∈ Rw | den(A)(t) ̸= 0}. Ot is a subring of Rw . In this situation, we introduce the ring homomorphism

Φt : Mn×n (K[w]) −→ Mn×n (K[t]) A(w) = (ai,j (w))1≤i,j≤n −→ A(t) = (ai,j (t))1≤i,j≤n . Furthermore, we extend Φt to Ot as follows

Φt : Ot ⊂ Rw −→ Rt,   A(w) −→ (1/den(A)(t)) num(A)(t).

We want to work with the inverse of Φt . However, in general, although Φt is surjective it is not injective (see Example 1). To overcome this difficulty, we consider the quotient set Ot /ker (Φt ); note that ker (Φt ) = {B ∈ Ot | Φt (B) = O}. For A ∈ Ot , we denote its equivalence class as [A]. We recall that

[A] := A + ker (Φt ) = {A + B | B ∈ Ot and Φt (B) = O}.

(2)

Then, instead of working with Φt we work with

Φ t : Ot /ker(Φt) −→ Rt,   [A(w)] −→ A(t).

Now, Φ t is bijective and its inverse is

Ψw : Rt −→ Ot /ker(Φt),   A = (ai,j(t))1≤i,j≤n −→ [(ai,j(w))1≤i,j≤n].

Let us see a simple example. Example 1. Let K = C and let F be the field of all the meromorphic functions in an open connected subset of C. We take t = (sin(z), cos(z)). Now, we consider the matrix

A = [ 0, sin(z) ; cos(z), 1 ] ∈ Rt = M2×2(C(sin(z), cos(z))) ⊂ M2×2(F).

Then,

Φt : Ot −→ Rt,   (ai,j(w1, w2))1≤i,j≤2 −→ (ai,j(sin(z), cos(z)))1≤i,j≤2.

So, ker(Φt) = {(ai,j(w1, w2))1≤i,j≤2 ∈ Ot | (ai,j(sin(z), cos(z)))1≤i,j≤2 = (0)1≤i,j≤2}.


Observe that

[ 0, 0 ; 0, 1 − w1^2 − w2^2 ] ∈ ker(Φt).

Thus, for instance,

B1 = [ 0, w1 ; w2, 1 ],   B2 = [ 0, w1 ; w2, w1^2 + w2^2 ] ∈ Ψw(A).

But, in both cases, Φ t(Ψw(A)) = Φt(B1) = A = Φt(B2).
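The computations of Example 1 can be reproduced directly (a sympy sketch; the library choice is ours):

```python
import sympy as sp

z, w1, w2 = sp.symbols('z w1 w2')
t = {w1: sp.sin(z), w2: sp.cos(z)}   # the specialization (w1, w2) -> (sin z, cos z)

B1 = sp.Matrix([[0, w1], [w2, 1]])
B2 = sp.Matrix([[0, w1], [w2, w1**2 + w2**2]])

# B2 - B1 = [0, 0 ; 0, w1**2 + w2**2 - 1] lies in ker(Phi_t):
assert sp.simplify((B2 - B1).subs(t)) == sp.zeros(2, 2)
# hence Phi_t(B1) = Phi_t(B2) = A:
assert sp.simplify(B1.subs(t) - B2.subs(t)) == sp.zeros(2, 2)
```

The relation w1^2 + w2^2 − 1 generating part of the kernel is exactly the Pythagorean identity satisfied by (sin(z), cos(z)).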



3. Drazin inverses under specializations Given a square matrix A over a field, we denote by D(A) the Drazin inverse of A, and by index(A) the Drazin index of A; usually the Drazin inverse is denoted by A^D or A^#, but here we have chosen D(A) to avoid confusion when the matrix is specialized. We recall that the Drazin index of A is the smallest non-negative integer k such that rank(A^k) = rank(A^(k+1)), and that D(A) is the unique matrix satisfying the matrix equations (1) introduced in Section 1.1. In the following we analyze the behavior of the Drazin inverse of matrices in Rt under specializations. More precisely, if A ∈ Rt, we study the problem of deciding when there exists A∗ ∈ [A] (see formula (2)) such that

Φt(D(A∗)) = D(Φt(A∗)), that is, when

D(A∗)(t) = D(A(t)). In this case, we can compute the Drazin inverse D(A) by specializing D(A∗). In Diagram (3) we illustrate the strategy; the idea consists in proving the equality at the lower-left corner of the diagram.

(3)

This motivates the next definition. Definition 2. We say that A ∈ Rt behaves properly under specialization if there exists A∗ ∈ [A] such that 1. D(A∗) ∈ Ot and 2. Φt(D(A∗)) = D(A). We will say that A specializes properly at (A∗, t). The next result shows that the key is the invariance of the index. Theorem 3. Let A ∈ Rt. If there exists A∗ ∈ [A] such that D(A∗) ∈ Ot and index(A∗) = index(A), then A specializes properly at (A∗, t). Proof. First we observe that, since A∗ ∈ [A], Φt(A∗) = A. Since D(A∗) is the Drazin inverse of A∗, D(A∗) is the unique solution of

A∗^(index(A∗)+1) · X = A∗^index(A∗),   A∗ · X = X · A∗,   X · A∗ · X = X.

Thus

A∗^(index(A∗)+1) · D(A∗) = A∗^index(A∗),   A∗ · D(A∗) = D(A∗) · A∗,   D(A∗) · A∗ · D(A∗) = D(A∗).

By assumption, D(A∗) ∈ Ot, that is, den(D(A∗))(t) ̸= 0; moreover A∗ ∈ Ot. So, we can apply Φt to the equations above; recall that Φt is the extension of a ring homomorphism. Taking into account that Φt(A∗) = A, we get

A^(index(A∗)+1) · Φt(D(A∗)) = A^index(A∗),   A · Φt(D(A∗)) = Φt(D(A∗)) · A,   Φt(D(A∗)) · A · Φt(D(A∗)) = Φt(D(A∗)).


By hypothesis, we have that index(A∗ ) = index(A). So

A^(index(A)+1) · Φt(D(A∗)) = A^index(A),   A · Φt(D(A∗)) = Φt(D(A∗)) · A,   Φt(D(A∗)) · A · Φt(D(A∗)) = Φt(D(A∗)).

Using the uniqueness of the Drazin inverse, it holds that Φt(D(A∗)) = D(A). □



Let us interpret the previous theorem. Let A ∈ Rt . We consider the set

Σ = {A∗ ∈ [A] | D(A∗) ∈ Ot and index(A) = index(A∗)}.

Because of Theorem 3, for every A∗ ∈ Σ it holds that D(A∗) ∈ [D(A)]. So, the general strategy would be as follows. General strategy. Given A = (ai,j(t))1≤i,j≤n:
• Take A∗ = (ai,j(w))1≤i,j≤n ∈ [A].
• Check whether A∗ ∈ Σ. If so, compute D(A∗), and return D(A(t)) = D(A∗)(t).
We leave the case A∗ ̸∈ Σ as future research work. Because of the claim in Theorem 3, we search for sufficient conditions that guarantee that the Drazin index stays invariant. For this purpose, one can use the fact that the index is the multiplicity of the root zero of the minimal polynomial of the matrix (see e.g. [3] or [22]), or the expression of the Drazin inverse based on the characteristic polynomial (see e.g. [23, page 269]). Alternatively, one can find sufficient conditions by analyzing the Gaussian elimination of the matrix. More precisely, reasoning as in Theorem 6 in [19], for A ∈ Rw we proceed as follows. For j ∈ {1, . . . , index(A)}, let Pj A^j = Lj Uj be the PA = LU factorization of A^j, where the Pj are permutation matrices and the diagonal entries of Lj are 1. In the following we introduce the concept of index-polynomial. For this purpose, first we recall the notion of squarefree part of a polynomial. A polynomial f ∈ K[w] is called squarefree if every irreducible factor of f has multiplicity 1. The squarefree part of a polynomial is then the product of all its irreducible factors. It is important to observe that there exist algorithms to compute the squarefree part of a polynomial, essentially requiring a gcd computation (see e.g. Section 4.4. in [21]).

Definition 4. We define the index-polynomial of A ∈ Rw as the squarefree part of the polynomial

Π_{j=1}^{index(A)} den(Lj) den(Uj) Π_{g∈DUj} num(g),

where DUj is the set consisting of the leftmost non-zero element in each row of Uj. We denote the index-polynomial by HA(w). In this situation, we have the next lemma, which analyzes the Drazin index.

Lemma 5. Let A ∈ Rw and let HA(w) be its index-polynomial. If HA(t) ̸= 0, it holds that index(A) = index(Φt(A)). Proof. Since den(Lj)(t) den(Uj)(t) ̸= 0, we have Lj, Uj ∈ Ot. Therefore, Φt(Lj), Φt(Uj) are well defined, and Pj Φt(A)^j = Φt(Lj) Φt(Uj). Furthermore, since num(g)(t) ̸= 0 for all g ∈ DUj, it holds that rank(Φt(Uj)) = rank(Uj). Moreover, since Φt(Lj) is regular, we get that rank(A^j) = rank(Uj) = rank(Φt(Uj)) = rank(Φt(A)^j). Thus, index(A) = index(Φt(A)). □



Corollary 6. Let A ∈ Rt and A∗ ∈ [A]. Let KA∗ (w) be the square-free part of the polynomial HA∗ (w) den(D(A∗ ))(w) where HA∗ (w) is the index-polynomial of A∗ . If KA∗ (t) ̸= 0, then A specializes properly at (A∗ , t). Proof. It follows from Theorem 3 and Lemma 5.



Definition 7. Let A ∈ Rt and A∗ ∈ [A]. We call the polynomial KA∗(w), in Corollary 6, the evaluation polynomial of A∗. In the next corollary, we use the notion of algebraic independence. We recall that a finite collection of elements t is called algebraically independent over K if there is no non-zero polynomial f(w) ∈ K[w] such that f(t) = 0 (see e.g. Section 10.3 in [1]). Corollary 8. Let t be algebraically independent over K. Then, every A ∈ Rt specializes properly at (A∗, t), for every A∗ ∈ [A]. Proof. Since the elements of t are algebraically independent over K, K(t) ̸= 0 for every K ∈ K[w] \ {0}. So, the result follows from Corollary 6. □
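Definition 4 and Lemma 5 can be turned into a small procedure. The sketch below (sympy assumed) uses entrywise denominators instead of the lcm-based den(·) — both have the same squarefree part — and sympy's pivoting may differ from the paper's Pj, so the output should be read as one valid candidate for the index-polynomial:

```python
import sympy as sp

def index_polynomial(A, wvars):
    # Squarefree part of prod_{j=1}^{index(A)} den(Lj) den(Uj) prod_{g in DUj} num(g).
    n = A.rows
    k, P = 0, sp.eye(n)                   # Drazin index via generic ranks over C(w)
    while P.rank() != (P * A).rank():
        k, P = k + 1, P * A
    H, M = sp.Integer(1), sp.eye(n)
    for _ in range(k):                    # j = 1, ..., index(A)
        M = sp.simplify(M * A)            # M = A**j
        L, U, _ = M.LUdecomposition()
        for entry in list(L) + list(U):   # denominators of all entries
            H *= sp.fraction(sp.cancel(entry))[1]
        for i in range(U.rows):           # DUj: leftmost non-zero entry of each row
            for j in range(U.cols):
                if sp.simplify(U[i, j]) != 0:
                    H *= sp.fraction(sp.cancel(U[i, j]))[0]
                    break
    _, factors = sp.factor_list(H, *wvars)
    return sp.Mul(*[f for f, _ in factors])   # squarefree part

w1, w2 = sp.symbols('w1 w2')
```

For the rank-one sample matrix [ w1, w1 ; w2, w2 ] (our choice, of index 1) the procedure returns w1, so any specialization of (w1, w2) avoiding w1 = 0 preserves the index.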


4. Application to meromorphic functions Let K = C and F = Mer(z, Ω) be the field of meromorphic functions, in the complex variables z = (z1, . . . , zℓ), on a connected domain Ω of C^ℓ (see e.g. [24]). We illustrate the computation of Drazin inverses using the ideas developed in this paper. Example 9. We consider the matrix

A = [ 0, 0, e^z ;
      2 cos(z)^2/e^z, 2 cos(z)/e^z, 2/e^z ;
      3 cos(z)^2/e^z, 3 cos(z)/e^z, 6/e^z ],

and we want to compute D(A). We take t = (cos(z ), ez ), and we consider the fields C ⊂ C(t) ⊂ Mer(z , C), as well as the matrix rings Rt := M3×3 (C(t)) and Rw := M3×3 (C(w)), with w = (w1 , w2 ). The input matrix A belongs to Rt . Let



A∗ = [ 0, 0, w2 ;
       2 w1^2/w2, 2 w1/w2, 2/w2 ;
       3 w1^2/w2, 3 w1/w2, 6/w2 ] ∈ [A].

index(A∗) = 1 and the PA = LU decomposition of A∗ is

P = [ 0, 1, 0 ; 1, 0, 0 ; 0, 0, 1 ],   L = [ 1, 0, 0 ; 0, 1, 0 ; 3/2, 3/w2^2, 1 ],   U = [ 2 w1^2/w2, 2 w1/w2, 2/w2 ; 0, 0, w2 ; 0, 0, 0 ].

Therefore, the index-polynomial of A∗ (see Definition 4) is HA∗(w) = w1 w2. Furthermore,

D(A∗) = [ −4 w1^2 w2^3/(3(w1^2 w2^2 − 1)^2), −4 w1 w2^3/(3(w1^2 w2^2 − 1)^2), (3 w1^2 w2^2 + 5) w1 w2^3/(9(w1^2 w2^2 − 1)^2) ;
          (w1^2 w2^2 + 3) w1 w2/(3(w1^2 w2^2 − 1)^2), (w1^2 w2^2 + 3) w2/(3(w1^2 w2^2 − 1)^2), −(5 w1^2 w2^2 + 3) w2/(9(w1^2 w2^2 − 1)^2) ;
          w1 w2/(2(w1^2 w2^2 − 1)), w2/(2(w1^2 w2^2 − 1)), −w2/(3(w1^2 w2^2 − 1)) ].



Thus, the evaluation polynomial KA∗(w) (see Definition 7) is KA∗(w) = w1 w2 (w1 w2 − 1)(w1 w2 + 1). Now, we observe that KA∗(t) = e^z cos(z)(cos(z) e^z − 1)(cos(z) e^z + 1). Substituting z = π in KA∗(t) we get −e^π(e^π + 1)(e^π − 1) ̸= 0. So, KA∗(t) is not identically zero. Therefore, the Drazin inverse D(A) of A is D(A∗)(t), that is

D(A) = [ −4 e^{3z} cos(z)^2/(3(cos(z)^2 e^{2z} − 1)^2), −4 e^{3z} cos(z)/(3(cos(z)^2 e^{2z} − 1)^2), (3 cos(z)^2 e^{2z} + 5) e^{3z} cos(z)/(9(cos(z)^2 e^{2z} − 1)^2) ;
         (cos(z)^2 e^{2z} + 3) e^z cos(z)/(3(cos(z)^2 e^{2z} − 1)^2), (cos(z)^2 e^{2z} + 3) e^z/(3(cos(z)^2 e^{2z} − 1)^2), −(5 cos(z)^2 e^{2z} + 3) e^z/(9(cos(z)^2 e^{2z} − 1)^2) ;
         e^z cos(z)/(2(cos(z)^2 e^{2z} − 1)), e^z/(2(cos(z)^2 e^{2z} − 1)), −e^z/(3(cos(z)^2 e^{2z} − 1)) ].


Example 10. We consider the matrix

A = [ (1/2)ζ(z) + (1/2)Γ(z) + (1/2)Γ(z)^{-1} + 1/2, −(1/2)ζ(z) − (1/2)Γ(z) − (1/2)Γ(z)^{-1}, 1/2 ;
      −(1/2)ζ(z) − (1/2)Γ(z) − (1/2)Γ(z)^{-1} + 1/2, (1/2)ζ(z) + (1/2)Γ(z) + (1/2)Γ(z)^{-1}, 1/2 ;
      −(1/2)ζ(z) − (1/2)Γ(z) − (1/2)Γ(z)^{-1} − 1/2, (1/2)ζ(z) + (1/2)Γ(z) + (1/2)Γ(z)^{-1}, −1/2 ],

where ζ(z) is the Riemann zeta function and Γ(z) is the gamma function. We want to compute D(A). We take t = (ζ(z), Γ(z)), and we consider the fields C ⊂ C(t) ⊂ Mer(z, C), as well as the matrix rings Rt := M3×3(C(t)) and Rw := M3×3(C(w)), with w = (w1, w2). The input matrix A belongs to Rt. Let

A∗ = [ (1/2)w1 + (1/2)w2 + (1/2)w2^{-1} + 1/2, −(1/2)w1 − (1/2)w2 − (1/2)w2^{-1}, 1/2 ;
       −(1/2)w1 − (1/2)w2 − (1/2)w2^{-1} + 1/2, (1/2)w1 + (1/2)w2 + (1/2)w2^{-1}, 1/2 ;
       −(1/2)w1 − (1/2)w2 − (1/2)w2^{-1} − 1/2, (1/2)w1 + (1/2)w2 + (1/2)w2^{-1}, −1/2 ] ∈ [A];

index(A∗) = 2. We take the decompositions P1 A∗ = L1 U1 and P2 A∗^2 = L2 U2 (here P1 = P2 = I3):

L1 = [ 1, 0, 0 ; −(w1 w2 + w2^2 − w2 + 1)/(w1 w2 + w2^2 + w2 + 1), 1, 0 ; −1, 0, 1 ],

U1 = [ (w1 w2 + w2^2 + w2 + 1)/(2 w2), −(w1 w2 + w2^2 + 1)/(2 w2), 1/2 ;
       0, (w1 w2 + w2^2 + 1)/(w1 w2 + w2^2 + w2 + 1), (w1 w2 + w2^2 + 1)/(w1 w2 + w2^2 + w2 + 1) ;
       0, 0, 0 ],

L2 = [ 1, 0, 0 ; −1, 1, 0 ; −1, 0, 1 ],

U2 = [ (w1 w2 + w2^2 + 1)^2/(2 w2^2), −(w1 w2 + w2^2 + 1)^2/(2 w2^2), 0 ; 0, 0, 0 ; 0, 0, 0 ].

Therefore, the index-polynomial of A∗ is

HA∗(w) = (w1 w2 + w2^2 + w2 + 1) w2 (w1 w2 + w2^2 + 1).

Furthermore,

D(A∗) = [ w2/(2(w1 w2 + w2^2 + 1)), −w2/(2(w1 w2 + w2^2 + 1)), 0 ;
          −w2/(2(w1 w2 + w2^2 + 1)), w2/(2(w1 w2 + w2^2 + 1)), 0 ;
          −w2/(2(w1 w2 + w2^2 + 1)), w2/(2(w1 w2 + w2^2 + 1)), 0 ].

Thus, the evaluation polynomial KA∗ is

KA∗(w) = (w1 w2 + w2^2 + w2 + 1) w2 (w1 w2 + w2^2 + 1)(w1 w2 + w2^2 − w2 + 1).

Substituting z = 2 in KA∗(t), and using Γ(2) = 1, we get ζ(2)^3 + 6ζ(2)^2 + 11ζ(2) + 6 ̸= 0. So, KA∗(t) is not identically zero. Therefore, the Drazin inverse D(A) of A is D(A∗)(t), that is



D(A) = [ Γ(z)/(2(ζ(z)Γ(z) + Γ(z)^2 + 1)), −Γ(z)/(2(ζ(z)Γ(z) + Γ(z)^2 + 1)), 0 ;
         −Γ(z)/(2(ζ(z)Γ(z) + Γ(z)^2 + 1)), Γ(z)/(2(ζ(z)Γ(z) + Γ(z)^2 + 1)), 0 ;
         −Γ(z)/(2(ζ(z)Γ(z) + Γ(z)^2 + 1)), Γ(z)/(2(ζ(z)Γ(z) + Γ(z)^2 + 1)), 0 ].


Besides the direct application of the results in the previous sections, we can consider the following additional application. Let t be a tuple of elements of Mer(z, Ω) and A(t) ∈ Mn×n(C(t)). Then, D(A)(t) ∈ Mn×n(C(t)). The functions in t depend on the complex variables z = (z1, . . . , zℓ), and hence we can see A(t) and D(A)(t) as functions in z. To emphasize this fact we write A(t(z)) and D(A)(t(z)). Now, for a particular value z0 ∈ Ω we analyze whether D(A)(t(z0)) is D(A(t(z0))); i.e. whether the evaluation of the Drazin inverse is the Drazin inverse of the evaluation. In Diagram (4) we illustrate the idea.

(4)

Let A∗ ∈ [A], and let KA∗ (w) be the polynomial associated to A∗ via Corollary 6. We assume that KA∗ (t) is not identically zero. Again, KA∗ (t) can be seen as a function in the complex variables z, so we denote it as KA∗ (t(z)). We observe that, since KA∗ (w) is a polynomial in the variables w with complex coefficients, KA∗ (t(z)) ∈ Mer(z, Ω ). We introduce the set

Ω∗ = Ω \ {z0 ∈ Ω | KA∗(t(z0)) = 0} ⊂ C^ℓ.

Then, the following proposition holds. Proposition 11. Let A∗ ∈ [A] be such that KA∗(t) is not zero. Then, for z0 ∈ Ω∗, it holds that D(A)(t(z0)) = D(A(t(z0))). Proof. Since z0 ∈ Ω∗ ⊂ Ω, A(t(z0)) is well defined, and hence D(A(t(z0))) exists. On the other hand, since we have assumed that KA∗(t(z)) is not identically zero, by Corollary 6 we have that D(A)(t(z)) = D(A∗)(t(z)). Moreover, since z0 ∈ Ω∗, KA∗(t(z0)) ̸= 0, and hence den(D(A∗)) does not vanish at t(z0). Thus, D(A∗)(t(z0)) and D(A)(t(z0)) are well defined. Furthermore, since D(A∗) is the Drazin inverse of A∗, it satisfies (1), that is

A∗(w)^(index(A∗(w))+1) · D(A∗)(w) = A∗(w)^index(A∗(w)),   A∗(w) · D(A∗)(w) = D(A∗)(w) · A∗(w),   D(A∗)(w) · A∗(w) · D(A∗)(w) = D(A∗)(w).

Specializing at t(z) we get

A∗(t(z))^(index(A∗(w))+1) · D(A∗)(t(z)) = A∗(t(z))^index(A∗(w)),   A∗(t(z)) · D(A∗)(t(z)) = D(A∗)(t(z)) · A∗(t(z)),   D(A∗)(t(z)) · A∗(t(z)) · D(A∗)(t(z)) = D(A∗)(t(z)).

So, since A∗(t(z)) = A(t(z)), specializing at z0 we get that

A(t(z0))^(index(A∗(w))+1) · D(A)(t(z0)) = A(t(z0))^index(A∗(w)),   A(t(z0)) · D(A)(t(z0)) = D(A)(t(z0)) · A(t(z0)),   D(A)(t(z0)) · A(t(z0)) · D(A)(t(z0)) = D(A)(t(z0)).

Now, let Pj · A∗^j(w) = Lj(w) · Uj(w) be the PA = LU decomposition of A∗^j, for j = 1, . . . , index(A∗). Since KA∗(t(z)) ̸= 0, the index-polynomial of A∗ does not vanish at t(z), and hence the expressions Pj · A∗^j(t(z)) = Lj(t(z)) · Uj(t(z)) are well defined. Furthermore, for z0 ∈ Ω∗ it holds that KA∗(t(z0)) ̸= 0, and hence the expressions Pj · A∗^j(t(z0)) = Lj(t(z0)) · Uj(t(z0)) are also well defined. Therefore, since KA∗(t(z)) KA∗(t(z0)) ̸= 0, it holds that rank(A∗^j(t(z0))) = rank(Uj(t(z0))) = rank(Uj(t(z))) = rank(A∗^j(t(z))). Thus, index(A∗(t(z))) = index(A∗(t(z0))). Now, since A∗(t(z)) = A(t(z)), we have index(A∗(t(z))) = index(A(t(z))) and index(A∗(t(z0))) = index(A(t(z0))). By Lemma 5, index(A∗(w)) = index(A∗(t(z))). So, index(A∗(w)) = index(A(t(z0))). Hence, by the uniqueness of the Drazin inverse, we get that D(A)(t(z0)) = D(A(t(z0))). □

We finish this section with an example. Example 12. Let t(z) = (cosh(z1), sin(z2)) ∈ Mer(z, C^2) and let A be the matrix

A(t(z)) = [ 0, 0, sin(z2) ; cosh(z1), 1, 1 ; cosh(z1), 1, 2 ].


Then we take

A∗(w) = [ 0, 0, w2 ; w1, 1, 1 ; w1, 1, 2 ] ∈ [A].

Then,

D(A∗)(w) = [ −3 w1 w2/(w1 w2 − 1)^2, −3 w2/(w1 w2 − 1)^2, (w1 w2 + 2) w2/(w1 w2 − 1)^2 ;
             (w1 w2 + 2) w1/(w1 w2 − 1)^2, (w1 w2 + 2)/(w1 w2 − 1)^2, (−2 w1 w2 − 1)/(w1 w2 − 1)^2 ;
             w1/(w1 w2 − 1), −1/(w1 w2 − 1), −1/(w1 w2 − 1) ].

In addition, KA∗(w) = w2(w1 w2 − 1), and KA∗(t(z)) = sin(z2)(sin(z2) cosh(z1) − 1), which is not zero since, e.g., KA∗(t(0, 3π/2)) = 2 ̸= 0. Therefore, D(A)(t(z)) = D(A∗)(t(z)), namely

D(A) = [ −3 sin(z2) cosh(z1)/(sin(z2) cosh(z1) − 1)^2, −3 sin(z2)/(sin(z2) cosh(z1) − 1)^2, (sin(z2) cosh(z1) + 2) sin(z2)/(sin(z2) cosh(z1) − 1)^2 ;
         (sin(z2) cosh(z1) + 2) cosh(z1)/(sin(z2) cosh(z1) − 1)^2, (sin(z2) cosh(z1) + 2)/(sin(z2) cosh(z1) − 1)^2, (−2 sin(z2) cosh(z1) − 1)/(sin(z2) cosh(z1) − 1)^2 ;
         cosh(z1)/(sin(z2) cosh(z1) − 1), −1/(sin(z2) cosh(z1) − 1), −1/(sin(z2) cosh(z1) − 1) ].

On the other hand, for every z0 ∈ C^2 such that KA∗(t(z0)) ̸= 0 it holds that D(A)(t(z0)) = D(A(t(z0))). For instance, since KA∗(t(0, 3π/2)) ̸= 0, we get

D(A(t(0, 3π/2))) = D( [ 0, 0, −1 ; 1, 1, 1 ; 1, 1, 2 ] ) = [ 3/4, 3/4, −1/4 ; 1/4, 1/4, 1/4 ; −1/2, −1/2, 1/2 ] = D(A)(t(0, 3π/2)).

5. Application to Laurent formal series In this section, we illustrate how to compute Drazin inverses of matrices whose entries are formal Laurent series. Observe that this contains the case of analytically perturbed matrices (see [7]). So let K = C and let F = C((z)) be the field of formal Laurent series in z with coefficients in C (see e.g. [25, pg. 65]). Let t be a tuple of elements in F \ K, and A ∈ Mn×n(C(t)); then D(A) ∈ Mn×n(C(t)). Applying our ideas, one computes A∗ ∈ [A], as well as the evaluation polynomial KA∗(w), via Corollary 6. Then, if KA∗(t) ̸= 0, it holds that D(A(t)) = D(A∗)(t). Nevertheless, in general, one works with finite truncations. So, we will consider truncations of order m > 0. More precisely, we consider the map

Tm : C((z)) −→ C(z),   Σ_{i=−r}^{∞} ai z^i −→ Σ_{i=−r}^{m−1} ai z^i.

We consider also the natural extension of Tm to C((z))^r and Mn×n(C((z))); i.e. if t = (t1, . . . , tr) ∈ C((z))^r then Tm(t) = (Tm(t1), . . . , Tm(tr)), and, if A = (aij) ∈ Mn×n(C((z))) then Tm(A) = (Tm(aij)). Now, the idea is to analyze when the Drazin inverse of the truncation of A is the truncation of the Drazin inverse evaluated at the truncation of t (see Diagram (5)).

(5)


For t = (t1, . . . , tr) we denote by ord(ti) the order of ti; note that, by assumption, ti ∈ C((z)) \ C, and hence ti ̸= 0. Now, if P ∈ C[w] is such that P(t) ̸= 0, it holds that

Tm(P(t)) = Tm(P(Tℓ(t)))

for every ℓ ≥ m + deg(P) · max_{i∈{1,...,r}}{|ord(ti)|}. Similarly, let P/Q ∈ C(w), with gcd(P, Q) = 1, be such that Q(t) ̸= 0. Let m ≥ ord(Q(t)). If ℓ ≥ m + max{deg(P), deg(Q)} · max_{i∈{1,...,r}}{|ord(ti)|}, then Q(Tℓ(t)) ̸= 0 and

Tm( P(t)/Q(t) ) = Tm( P(Tℓ(t))/Q(Tℓ(t)) ).
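These truncation identities are easy to experiment with. The following sympy sketch (the sample series and the degrees are our choices) checks Tm(P(t)) = Tm(P(Tℓ(t))) for P = w^2 and a series t of order −1, so that ℓ ≥ m + 2 suffices:

```python
import sympy as sp

z, w = sp.symbols('z w')

def T(f, m):
    # T_m: keep the terms a_i * z**i with i < m of the Laurent expansion at z = 0.
    return sp.expand(sp.series(f, z, 0, m).removeO())

t = 1/z + 1/(1 - z)        # a Laurent series with ord(t) = -1
P = w**2                   # deg(P) = 2
m = 3
ell = m + 2 * 1            # ell >= m + deg(P) * |ord(t)|

assert sp.simplify(T(P.subs(w, t), m) - T(P.subs(w, T(t, ell)), m)) == 0
```

Truncating t below this bound can perturb the low-order coefficients of P(t), which is why the degree/order product enters the bound.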

In this situation, let A∗ ∈ [A] and let KA∗(w) be the evaluation polynomial of A∗; see Definition 7. Let us assume that KA∗(t) ̸= 0. Then, D(A)(t) = D(A∗)(t). Note that D(A∗)(w) is a matrix with entries in C(w). Let D(A∗) be expressed as D(A∗) = (aij/bij)1≤i,j≤n with gcd(aij, bij) = 1. Then, we introduce the quantities

ω = max{ max{deg(aij), deg(bij)} | 1 ≤ i, j ≤ n }   and   m̂ = max{ ord(bij(t)) | 1 ≤ i, j ≤ n }.

Taking into account the previous reasoning, one has the following result. Note that it provides a method to compute the Laurent series expansion of perturbed Drazin inverses (see Chapter 3 in [7]).

Proposition 13. Let A∗ ∈ [A] and let KA∗(w) be the evaluation polynomial of A∗. Assume KA∗(t) ̸= 0. If m ≥ m̂ and ℓ ≥ m + ω · max_{i∈{1,...,r}}{|ord(ti)|}, then Tm(D(A(t))) = Tm(D(A∗)(Tℓ(t))). We illustrate these ideas with an example. Example 14. We consider the matrix A ∈ M4×4(C(sin(z), sinh(z), e^z))



A = [ 0, (sin(z) + 2 sinh(z)) sinh(z)/e^z, 2 sin(z), 3 sin(z) + 3 sinh(z) ;
      2 sin(z) sinh(z)/e^z, sin(z) + 2 sinh(z), 3 sin(z) + 3, 6 sin(z) ;
      3 sin(z) sinh(z)/e^z, sin(z) sinh(z)/e^z, 4 sin(z) + 4, 8 sin(z) ;
      4 sin(z) sinh(z)/e^z, sin(z) sinh(z), 2 sin(z) + 5 sinh(z), 12 sin(z) ].

So, we can apply the results in Section 4 to compute D(A). Alternatively, we consider the Laurent expansion of A; say

 

A = A0 + A1 z + A2 z^2 + A3 z^3 + A4 z^4 + O(z^5), where A0, . . . , A4 are explicit constant 4 × 4 matrices,

and we want to compute the Laurent expansion of the Drazin inverse of A, namely D(A). Let us say that we want to compute the order-5 truncation of D(A), i.e. T5(D(A)). So, in the terminology of Proposition 13, m = 5, t = (t1, t2, t3) is the tuple of Laurent expansions of (sin(z), sinh(z), e^z), m̂ = 5 and ℓ ≥ 14. Therefore,

T5 (D(A(t))) = T5 (D(A∗ )(T14 (t))). Now, we determine A∗ ∈ [A] such that KA∗ (sin(z ), sinh(z ), ez ) ̸= 0. We get



0

  w2 w1 2  w ∗ 3 A =  w2 w1 3  w3  w2 w1 4 w3

(w2 + 2 w1 ) w1 w3

w2 + 2 w1

2 w2

2 w2

w2 w1 3 w2 + 3 w3 w2 w1 4 w2 + 4 w3

2 w2 + 5 w1

  

3 w2 + 3 w1  

6 w2

6 w2

8 w2

12 w2

    


as well as D(A*); D(A*) is big and we do not show it here. Finally, we get

T5(D(A(t))) = B0 + B1 z + B2 z^2 + B3 z^3 + B4 z^4 + O(z^5),

where B0, ..., B4 are constant matrices with rational number entries (their explicit entries are omitted here).
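The pipeline of this example can be reproduced in miniature with a computer algebra system. The following SymPy sketch is only illustrative: instead of the Gröbner basis computation of [19], it uses a classical limit representation of the Drazin inverse, D(A) = lim_{x→0} (A^{k+1} + x I)^{-1} A^k with k = index(A) (see e.g. [3,4]), and a toy 2×2 matrix with symbolic entries stands in for A*; the truncation orders are likewise chosen just for illustration.

```python
import sympy as sp

def drazin(A):
    # Drazin index: smallest k with rank(A^k) == rank(A^(k+1))
    k = 0
    while (A**k).rank() != (A**(k + 1)).rank():
        k += 1
    # classical limit representation of the Drazin inverse:
    #   D(A) = lim_{x->0} (A^(k+1) + x*I)^(-1) * A^k
    x = sp.Dummy('x')
    B = (A**(k + 1) + x * sp.eye(A.shape[0])).inv() * A**k
    return B.applyfunc(lambda e: sp.limit(sp.cancel(e), x, 0))

# toy stand-in for A*: a 2x2 matrix with entries in C(w1, w2)
w1, w2, z = sp.symbols('w1 w2 z')
Astar = sp.Matrix([[w1, w2], [0, 0]])
D = drazin(Astar)  # equals [[1/w1, w2/w1**2], [0, 0]] after simplification

# specialize through truncated series, as in T_m(D(A(t))) = T_m(D(A*)(T_l(t)))
trunc = {w1: sp.sin(z).series(z, 0, 8).removeO(),
         w2: sp.exp(z).series(z, 0, 8).removeO()}
T3 = D.applyfunc(lambda e: sp.series(e.subs(trunc), z, 0, 3))
```

Since the entries of D involve 1/w1 and w1 is specialized to sin(z), which vanishes at z = 0, the resulting expansions are genuine Laurent series; truncating the input series at a sufficiently high order (here 8) does not affect the output truncation at order 3.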

Remark 1. We observe that a similar reasoning can be done for Puiseux series (see e.g. [25, pg. 65]) or for Gevrey asymptotic power series (see e.g. Chapter 4 in [26]).

Acknowledgments

Authors supported by the Spanish Ministerio de Economía y Competitividad, and by the European Regional Development Fund (ERDF), under the Project MTM2014-54141-P. The authors are members of the Research Group ASYNACS (Ref. CCEE2011/R34).

References

[1] B.L. Van der Waerden, Algebra, Vol. I, seventeenth ed., Springer Verlag, 1991.
[2] M.P. Drazin, Pseudo-inverses in associative rings and semigroups, Amer. Math. Monthly 65 (1958) 506–514.
[3] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, second ed., Springer-Verlag, Berlin, 2003.
[4] S.L. Campbell, C.D. Meyer, Generalized Inverses of Linear Transformations, Classics in Applied Mathematics, SIAM, 2009.
[5] H. Diao, Y. Wei, S. Qiao, Displacement rank of the Drazin inverse, J. Comput. Appl. Math. 167 (2004) 147–161.
[6] J. Ljubisavljević, D.S. Cvetković-Ilić, Additive results for the Drazin inverse of block matrices and applications, J. Comput. Appl. Math. 235 (2011) 3683–3690.
[7] K.E. Avrachenkov, J.A. Filar, P.G. Howlett, Analytic Perturbation Theory and Its Applications, SIAM, 2013.
[8] S.L. Campbell, C.D. Meyer, N.J. Rose, Applications of the Drazin inverse to linear systems of differential equations with singular constant coefficients, SIAM J. Appl. Math. 31 (1976) 411–425.
[9] X. Zhang, G. Chen, The computation of Drazin inverse and its application in Markov chains, Appl. Math. Comput. 183 (2006) 292–300.
[10] S. Miljković, M. Miladinović, P.S. Stanimirović, Y. Wei, Gradient methods for computing the Drazin-inverse solution, J. Comput. Appl. Math. 253 (2013) 255–263.
[11] P.S. Stanimirović, D.S. Cvetković-Ilić, Successive matrix squaring algorithm for computing outer inverses, Appl. Math. Comput. 203 (2008) 19–29.
[12] P.S. Stanimirović, I. Živković, Y. Wei, Recurrent neural network for computing the Drazin inverse, IEEE Trans. Neural Netw. Learn. Syst. 26 (11) (2015) 2830–2843.

[13] P.S. Stanimirović, I. Živković, Y. Wei, Recurrent neural network approach based on the integral representation of the Drazin inverse, Neural Comput. 27 (10) (2015) 2107–2131.
[14] Y. Wei, Successive matrix squaring algorithm for computing the Drazin inverse, Appl. Math. Comput. 108 (2000) 67–75.
[15] Y. Wei, H. Wu, The representation and approximation for Drazin inverse, J. Comput. Appl. Math. 126 (2000) 417–423.
[16] F. Bu, Y. Wei, The algorithm for computing the Drazin inverses of two-variable polynomial matrices, Appl. Math. Comput. 147 (2004) 805–836.
[17] J. Ji, A finite algorithm for the Drazin inverse of a polynomial matrix, Appl. Math. Comput. 130 (2002) 243–251.
[18] P.S. Stanimirović, M.B. Tasić, K.M. Vu, Extensions of Faddeev's algorithms to polynomial matrices, Appl. Math. Comput. 214 (2009) 246–258.
[19] J. Sendra, J.R. Sendra, Gröbner basis computation of Drazin inverses with multivariate rational function entries, Appl. Math. Comput. 259 (2015) 450–459.
[20] R.J. Bradford, R.M. Corless, J.M. Davenport, D.J. Jeffrey, S.M. Watt, Reasoning about the elementary functions of complex analysis, Lecture Notes in Comput. Sci. 1930 (2000) 115–126.
[21] F. Winkler, Polynomial Algorithms in Computer Algebra, Springer-Verlag, Wien, New York, 1996.
[22] F.R. Gantmacher, The Theory of Matrices, Volume I, Taylor & Francis, 1959.
[23] S. Barnett, Matrices: Methods and Applications, Oxford Applied Mathematics and Computing Science Series, 1990.
[24] L. Kaup, B. Kaup, Holomorphic Functions of Several Variables: An Introduction to the Fundamental Theory, Walter de Gruyter, Berlin, New York, 1983.
[25] S. Basu, R. Pollack, M.F. Roy, Algorithms in Real Algebraic Geometry, Springer-Verlag, New York, 2006.
[26] W. Balser, Formal Power Series and Linear Systems of Meromorphic Ordinary Differential Equations, Springer-Verlag, New York, 2000.