On explicit formulas of the principal matrix pth root by polynomial decompositions

Applied Mathematics and Computation 242 (2014) 435–443


On explicit formulas of the principal matrix pth root by polynomial decompositions

J. Abderramán Marrero (a), R. Ben Taher (b), Y. El Khatabi (b), M. Rachidi (b,*)

(a) Department of Mathematics Applied to Information Technologies, (ETSIT-UPM) Telecommunication Engineering School, Technical University of Madrid, Avda Complutense s/n, Ciudad Universitaria, 28040 Madrid, Spain
(b) Equipe of DEFA, Department of Mathematics and Informatics, Faculty of Sciences, University Moulay Ismail, B.P. 4010 Beni M'hamed, Meknes, Morocco

Abstract. We present some explicit formulas for calculating the principal pth root of a square matrix. The main tools are based on various polynomial decompositions of the principal matrix pth root and on well-known properties of linear recursive sequences. © 2014 Elsevier Inc. All rights reserved.

Keywords: Matrix pth root; Principal matrix pth root; Polynomial decomposition

1. Introduction

In the last decades the matrix pth root has been studied by many authors, since it appears in various theoretical and applied fields of science. Indeed, the matrix pth root is involved in many technical problems arising in areas such as control, geography, finance, medical image analysis and health care; see e.g. [3,10,15,17]. For a real or complex square matrix A of order r (r ≥ 2), every matrix X satisfying the equation X^p = A is a pth root of A. When A has no eigenvalues on R^− (the closed negative real axis), there exists a unique matrix X such that X^p = A whose eigenvalues lie in the sector {z ∈ C : −π/p < arg(z) < π/p}, where arg(z) denotes the argument of z (see for instance [12] and the references therein). In this case the matrix X is called the principal pth root of A, and it is denoted X = A^{1/p}.

The computation of the pth root (p ≥ 2) of a square matrix is not an easy task. Many theoretical and numerical approaches for its computation have been proposed; see e.g. [4,12,13,16] and the references therein. Various results on the computation of the principal matrix pth root with the aid of polynomial decompositions have also been presented in [4]. The methods for reaching those results were inspired by [9]; more precisely, they were based on the scalar power polynomial decomposition A^{1/p} = Σ_{k=0}^{m−1} X_k^{(p)} A^k, where m ≤ r is the degree of the minimal polynomial of A. The computation of explicit formulas for the scalars X_k^{(p)} then requires solving a system of linear equations in the unknowns X_k^{(p)}; see [4,9].

The aim here is to give some explicit formulas for the principal pth root of a matrix A. Our development consists of two alternative approaches, depending on whether the technique employed arises from [6] or from [7]. More precisely, in the first approach the matrix A^{1/p} is expressed as a linear combination of the power polynomial basis I_r, A, ..., A^{r−1}, whose coefficients depend on those of the characteristic polynomial of the matrix B = I_r − A. The second approach is based on the scalar polynomial decomposition of A^{1/p} in a basis A_0 = P_0(A), A_1 = P_1(A), ..., A_{m−1} = P_{m−1}(A), where the P_j(z) are suitably determined polynomials and m is the degree of the minimal polynomial of A. Moreover, connections between the two approaches are given.

The outline of our investigation is as follows. In Section 2 we consider the combinatorial expression of A^{1/p} using the power polynomial decomposition; explicit and analytic formulas are established, and various examples are given. Section 3 is devoted to other polynomial decompositions of A^{1/p}, where some explicit expressions are presented; we apply the preceding formulas to the study of A^{1/3} for some 4 × 4 matrices. Concluding remarks and a perspective are considered in Section 4.

Unless otherwise stated, throughout the text p ≥ 2 denotes a positive integer and ‖·‖ a consistent matrix norm. As usual, the polynomials P_A(z) and M_A(z) represent the characteristic and the minimal polynomial of the matrix A, respectively. In addition, Sp(A) represents the spectrum (set of eigenvalues) of A, and J_A(λ) is the Jordan block associated with the eigenvalue λ. Finally, all pth roots of complex numbers appearing in the examples refer to their principal pth roots.

* Corresponding author.
E-mail addresses: [email protected] (J. Abderramán Marrero), [email protected] (R. Ben Taher), [email protected] (Y. El Khatabi), [email protected] (M. Rachidi).
http://dx.doi.org/10.1016/j.amc.2014.05.110

2. Power decomposition of A^{1/p} and its computation

Let B be in M_r(C) (r ≥ 2) such that P_B(z) = z^r − a_0 z^{r−1} − ⋯ − a_{r−1}, with a_{r−1} ≠ 0. By the Cayley–Hamilton theorem, the positive integer powers B^n satisfy the rth-order linear recurrence relation B^{n+1} = a_0 B^n + a_1 B^{n−1} + ⋯ + a_{r−1} B^{n−r+1}, for n ≥ r − 1 (see [1,7,8,18]). Thus, the powers B^n (n ≥ r) are given by

  B^n = Σ_{k=0}^{r−1} ( Σ_{j=0}^{k} a_{r−k+j−1} ρ(n−j, r) ) B^k,   (1)

for n ≥ r, where ρ(n, r) is the combinatorial expression

  ρ(n, r) = Σ_{k_0+2k_1+⋯+r k_{r−1} = n−r} ( (k_0 + ⋯ + k_{r−1})! / (k_0! k_1! ⋯ k_{r−1}!) ) a_0^{k_0} a_1^{k_1} ⋯ a_{r−1}^{k_{r−1}},   (2)

with ρ(r, r) = 1 and ρ(n, r) = 0 for n ≤ r − 1 (see e.g. [14]). For studying the combinatorial expression of the principal matrix pth root, we consider the power series (1 − z)^{1/p} = Σ_{n=0}^{∞} b_n z^n, where |z| < 1, b_0 = 1 and

  b_n = ( (−1)^n / n! ) Π_{i=0}^{n−1} (1/p − i) < 0,  for n ≥ 1.   (3)
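Formula (1) is easy to check numerically, since ρ(·, r) can be generated by the same linear recurrence as the powers B^n, seeded with ρ(r, r) = 1 and ρ(n, r) = 0 for n < r. A minimal sketch of this check (the 3 × 3 matrix B below is an arbitrary illustrative choice, not taken from the paper):

```python
import numpy as np

# Arbitrary invertible 3x3 example matrix B (illustrative choice only).
B = np.array([[0.2, 0.1, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.4, 0.1]])
r = B.shape[0]

# Characteristic polynomial written as z^r - a0 z^{r-1} - ... - a_{r-1}:
# numpy.poly returns the monic coefficients [1, c1, ..., cr] of
# z^r + c1 z^{r-1} + ... + cr, so a_i = -c_{i+1}.
a = -np.poly(B)[1:]

n = 9
# rho(m, r) via the shared recurrence: rho[r] = 1, rho[m] = 0 for m < r.
rho = np.zeros(n + 1)
rho[r] = 1.0
for m in range(r, n):
    rho[m + 1] = sum(a[i] * rho[m - i] for i in range(r))

# Formula (1): B^n = sum_k ( sum_{j<=k} a_{r-k+j-1} rho(n-j, r) ) B^k.
Bn = sum(
    sum(a[r - k + j - 1] * rho[n - j] for j in range(k + 1))
    * np.linalg.matrix_power(B, k)
    for k in range(r)
)

assert np.allclose(Bn, np.linalg.matrix_power(B, n))
```

The r-term combination on the right reproduces the full matrix power, which is exactly the collapse of the power series used throughout this section.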

Under the assumption ‖B‖ < 1, the preceding power series can be applied to define the principal matrix pth root of A = I_r − B as (I_r − B)^{1/p} = Σ_{k=0}^{∞} b_k B^k. Using (1) and (2) we derive B^n = Σ_{k=0}^{r−1} ρ_k(n) B^k for n ≥ r, with ρ_k(n) = Σ_{j=0}^{k} a_{r−k+j−1} ρ(n−j, r). As a result, we obtain (I_r − B)^{1/p} = Σ_{k=0}^{r−1} b_k B^k + Σ_{n=r}^{∞} b_n ( Σ_{k=0}^{r−1} ρ_k(n) B^k ). Therefore, a direct computation implies the following lemma.

Lemma 2.1. Let p ≥ 2 be an integer and B ∈ M_r(C) such that P_B(z) = z^r − a_0 z^{r−1} − ⋯ − a_{r−1} (a_{r−1} ≠ 0). Assume ‖B‖ < 1 and Sp(I_r − B) ⊂ C ∖ R^−. Then (I_r − B)^{1/p} = Σ_{k=0}^{r−1} X_k^{(p)} B^k, where X_k^{(p)} = b_k + Σ_{n=r}^{∞} b_n Σ_{j=0}^{k} a_{r−k+j−1} ρ(n−j, r), and the coefficients ρ(n, r) and b_n are as in (2) and (3).

Lemma 2.1 is useful for formulating a combinatorial expansion of the principal pth root of the matrix A = I_r − B.

Theorem 2.2. Let p ≥ 2 be an integer and A ∈ M_r(C) such that P_B(z) = z^r − a_0 z^{r−1} − ⋯ − a_{r−1} (a_{r−1} ≠ 0), where B = I_r − A. Suppose ‖I_r − A‖ < 1 and Sp(A) ⊂ C ∖ R^−. Then A^{1/p} = Σ_{k=0}^{r−1} X_k^{(p)} (I_r − A)^k = Σ_{s=0}^{r−1} X̃_s^{(p)} A^s, where X_k^{(p)} = b_k + Σ_{n=r}^{∞} b_n Σ_{j=0}^{k} a_{r−k+j−1} ρ(n−j, r) and X̃_s^{(p)} = (−1)^s Σ_{k=s}^{r−1} C(k, s) X_k^{(p)}. The coefficients ρ(n, r) and b_n are as in (2) and (3).

Proof. It is a direct consequence of Lemma 2.1, applied to the matrix B = I_r − A. □

Remark 2.3. Let A, S ∈ M_r(C) be two similar matrices, such that one of them, say A, satisfies the conditions of Theorem 2.2. Since the matrices A and S have the same characteristic polynomial, A^{1/p} and S^{1/p} have the same decomposition as given in Theorem 2.2.
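Theorem 2.2 rests on the convergent binomial series applied to B = I_r − A. The following sketch illustrates this foundation numerically; the matrix A and the truncation order N are arbitrary illustrative assumptions, and truncating X ≈ Σ_{n=0}^{N} b_n (I_r − A)^n already yields X^p ≈ A whenever ‖I_r − A‖ < 1:

```python
import numpy as np

p = 3
# Illustrative matrix with ||I - A|| < 1 and spectrum off the closed
# negative real axis (eigenvalues 0.9 and 0.6).
A = np.array([[0.8, 0.1],
              [0.2, 0.7]])
B = np.eye(2) - A

# Coefficients b_n of (1 - z)^{1/p}: b_0 = 1 and, by (3),
# b_n = b_{n-1} * (-1) * (1/p - (n-1)) / n.
N = 60
b = [1.0]
for n in range(1, N + 1):
    b.append(b[-1] * (-1.0) * (1.0 / p - (n - 1)) / n)

# Truncated series for the principal pth root of A = I - B.
X = sum(b[n] * np.linalg.matrix_power(B, n) for n in range(N + 1))

assert np.allclose(np.linalg.matrix_power(X, p), A, atol=1e-8)
```

The point of Lemma 2.1 is that this infinite (here truncated) sum can be collapsed exactly onto the r matrices I_r, B, ..., B^{r−1}.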

For reaching a concrete practical formula for A^{1/p}, we can express the coefficients ρ(n, r) in terms of the eigenvalues of B, in a similar way as in [2,6,7]. Let B be in M_r(C) and first suppose that its eigenvalues λ_1, ..., λ_r are all simple. Then, for every n ≥ r, ρ(n, r) can be expressed as

  ρ(n, r) = λ_1^{n−1}/P′_B(λ_1) + λ_2^{n−1}/P′_B(λ_2) + ⋯ + λ_r^{n−1}/P′_B(λ_r),

where P_B is the characteristic polynomial of B (see [2,6,7] for more details). We recall that the preceding formula is derived from Corollary 3.2 of [2] (or Proposition 3 of [6]), by taking m_i = 1 (1 ≤ i ≤ r) and noting that P′_B(λ_i) = Π_{j=1, j≠i}^{r} (λ_i − λ_j). The same formula is obtained in [7] by solving a linear system of Vandermonde type. Let A ∈ M_r(C) satisfy the conditions of Theorem 2.2, and suppose that the eigenvalues λ_1, ..., λ_r of the matrix B = I_r − A are simple. A straightforward computation yields A^{1/p} = Σ_{k=0}^{r−1} X_k^{(p)} (I_r − A)^k, where

  X_k^{(p)} = b_k − Σ_{i=1}^{r} Σ_{j=0}^{k} a_{r−k+j−1} R_j(λ_i) / ( λ_i^{j+1} P′_B(λ_i) ) + Σ_{i=1}^{r} Σ_{j=0}^{k} ( a_{r−k+j−1} / ( λ_i^{j+1} P′_B(λ_i) ) ) (1 − λ_i)^{1/p},


with R_j(z) = Σ_{n=0}^{r+j−1} b_n z^n. Using Theorem 2.1 of [2], or Proposition 3 of [6], we obtain the following consequence.

Corollary 2.4. Let A ∈ M_r(C) satisfy the conditions of Theorem 2.2, such that the eigenvalues λ_1, ..., λ_r of B = I_r − A are simple. Then A^{1/p} = Σ_{k=0}^{r−1} X_k^{(p)} (I_r − A)^k, where

  X_k^{(p)} = b_k + Σ_{j=0}^{k} a_{r−k+j−1} Σ_{i=1}^{r} ( 1 / ( λ_i^{j+1} Π_{ℓ=1, ℓ≠i}^{r} (λ_i − λ_ℓ) ) ) [ (1 − λ_i)^{1/p} − R_j(λ_i) ],

with R_j(z) = Σ_{n=0}^{r+j−1} b_n z^n.

Corollary 2.4 furnishes a practical method for computing the principal pth root of a matrix.

Example 2.5. Let us compute A^{1/2} for the matrix A = [[2/3, −1/6], [1/3, 1/6]]. The eigenvalues of I_2 − A are λ_1 = 1/2 and λ_2 = 2/3. We observe that A satisfies the conditions of Theorem 2.2. Thus, we have A^{1/2} = X_0^{(2)} I_2 + X_1^{(2)} (I_2 − A), where X_0^{(2)} and X_1^{(2)} are given as in Corollary 2.4. Since R_0(z) = 1 − z/2, R_1(z) = 1 − z/2 − z²/8, and a_0 = 7/6, a_1 = −1/3, a direct computation implies that X_0^{(2)} = 2√2 − √3 and X_1^{(2)} = 2√3 − 3√2. Therefore, the principal square root of A is given by

  A^{1/2} = [[ √2 − √3/3, −√2/2 + √3/3 ], [ √2 − 2√3/3, −√2/2 + 2√3/3 ]].
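The coefficients of Example 2.5 can be verified in a few lines; this is only a numerical confirmation of the corollary on the example's data:

```python
import numpy as np

A = np.array([[2/3, -1/6],
              [1/3,  1/6]])
s2, s3 = np.sqrt(2.0), np.sqrt(3.0)

# Coefficients X_0^{(2)}, X_1^{(2)} from Example 2.5.
X0 = 2*s2 - s3
X1 = 2*s3 - 3*s2

X = X0*np.eye(2) + X1*(np.eye(2) - A)

assert np.allclose(X @ X, A)
# Principal root: the eigenvalues of X lie in the open right half-plane.
assert np.all(np.linalg.eigvals(X).real > 0)
```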

In view of Theorem 2.2, we can also exhibit an explicit formula for A^{1/p} in the case where B = I_r − A admits a unique eigenvalue λ of multiplicity r. Indeed, for P_B(z) = (z − λ)^r the expression of ρ(n, r) established in [6] reduces to ρ(n, r) = C(n−1, r−1) λ^{n−r} for n ≥ r, while ρ(m, r) = 0 for m < r. Since X_k^{(p)} = b_k + Σ_{n=r}^{∞} b_n Σ_{j=0}^{k} a_{r−k+j−1} ρ(n−j, r), collecting the series into a derivative of the generating function yields the following corollary.

Corollary 2.6. Let A ∈ M_r(C) satisfy the conditions of Theorem 2.2, such that B = I_r − A admits a unique eigenvalue λ of multiplicity r. Then A^{1/p} = Σ_{k=0}^{r−1} X_k^{(p)} (I_r − A)^k, where

  X_k^{(p)} = b_k + Σ_{j=0}^{k} ( a_{r−k+j−1} / (r−1)! ) [ d^{r−1}/dz^{r−1} ( (1/z^{j+1}) ( (1 − z)^{1/p} − R_j(z) ) ) ]_{z=λ},

with R_j(z) = Σ_{n=0}^{r+j−1} b_n z^n.

In fact, Corollaries 2.4 and 2.6 can be generalized to the case when the eigenvalues of A (or B) are not simple. Indeed, let A ∈ M_r(C) satisfy the conditions of Theorem 2.2 and suppose that its eigenvalues are not simple. Consider the matrix function f(t) = (I_r − tB)^{1/p}, where B = I_r − A and t is a real parameter such that ‖tB‖ < 1. Clearly, for t = 1 we get f(1) = (I_r − B)^{1/p} = A^{1/p}. The power series expansion of (I_r − tB)^{1/p} takes the form (I_r − tB)^{1/p} = Σ_{k=0}^{r−1} X_k^{(p)}(t) B^k, where

  X_k^{(p)}(t) = b_k t^k + Σ_{n=r}^{∞} b_n ( Σ_{j=0}^{k} a_{r−k+j−1} ρ(n−j, r) ) t^n,

with ρ(n, r) and b_n given by (2) and (3). Therefore X_k^{(p)}(t) = b_k t^k + Σ_{j=0}^{k} a_{r−k+j−1} Φ_j(t), with Φ_j(t) = Σ_{n=r+j}^{∞} b_n ρ(n−j, r) t^n. As in Corollary 2.4, an explicit expression of (I_r − tB)^{1/p} can be obtained once the Φ_j(t) are expressed in terms of the eigenvalues of B. Indeed, let λ_1, ..., λ_s be the eigenvalues of B of multiplicities m_1, ..., m_s, respectively. Then Corollary 3.2 of [2] shows that

  ρ(n, r) = Σ_{i=1}^{s} Σ_{k_1+k_2+⋯+k_s = m_i−1} ( (−1)^{m_i−1−k_i} / (k_1! k_2! ⋯ k_s!) ) ( (n−1)! / (n−1−k_i)! ) w_{k_i,n}(λ_i) λ_i^{n−1−k_i},   (4)

where w_{k_i,n}(λ_i) = Π_{j=1, j≠i}^{s} (m_j+k_j−1)! / ( (m_j−1)! (λ_i − λ_j)^{m_j+k_j} ) (see also Proposition 3 of [6]). For practical reasons, Expression (4) can be written in the form

  ρ(n, r) = Σ_{i=1}^{s} Σ_{k_1+k_2+⋯+k_s = m_i−1} ( W_i(k_1, ..., k_s; λ_1, ..., λ_s) / λ_i^{k_i+1} ) ( Π_{q=1}^{k_i} (n − q) ) λ_i^n,   (5)

where W_i(k_1, ..., k_s; λ_1, ..., λ_s) = ( (−1)^{m_i−1−k_i} / (k_1! k_2! ⋯ k_s!) ) Π_{j=1, j≠i}^{s} (m_j+k_j−1)! / ( (m_j−1)! (λ_i − λ_j)^{m_j+k_j} ). Now, following (5), the functions Φ_j(t) (0 ≤ j ≤ r−1) take the form

  Φ_j(t) = Σ_{i=1}^{s} Σ_{k_1+k_2+⋯+k_s = m_i−1} D_{i,j}(t) [ d^{k_i}/dt^{k_i} ( (1/t^{j+1}) ( (1 − tλ_i)^{1/p} − R_j(tλ_i) ) ) ],   (6)

where D_{i,j}(t) = D_{i,j}(k_1, ..., k_s; λ_1, ..., λ_s; t) = t^{j+k_i+1} W_i(k_1, ..., k_s; λ_1, ..., λ_s) / λ_i^{k_i+1}, and R_j(z) = Σ_{n=0}^{r+j−1} b_n z^n. Moreover, the functions Φ_j(t) can be given in an operational form. More precisely, consider the operator D_{i,j} = Σ_{k_1+k_2+⋯+k_s = m_i−1} D_{i,j}(t) d^{k_i}/dt^{k_i}. We have

  Φ_j(t) = Σ_{i=1}^{s} D_{i,j} [ (1/t^{j+1}) ( (1 − tλ_i)^{1/p} − R_j(tλ_i) ) ].   (7)

In summary, we have the following result.

Theorem 2.7. Let p ≥ 2 be an integer and A ∈ M_r(C) such that B = I_r − A has characteristic polynomial P_B(z) = z^r − a_0 z^{r−1} − ⋯ − a_{r−1} (with a_{r−1} ≠ 0). Assume that ‖I_r − A‖ < 1 and Sp(A) ⊂ C ∖ R^−, and let λ_1, ..., λ_s be the (not necessarily simple) eigenvalues of B. Then X_k^{(p)}(t) = b_k t^k + Σ_{j=0}^{k} a_{r−k+j−1} Φ_j(t), where the Φ_j(t) are given by (6) or (7). In particular, the principal matrix pth root of A is A^{1/p} = Σ_{k=0}^{r−1} X_k^{(p)} (I_r − A)^k = Σ_{k=0}^{r−1} X̂_k^{(p)} A^k, with X_k^{(p)} = X_k^{(p)}(1) = b_k + Σ_{j=0}^{k} a_{r−k+j−1} Φ_j(1) and X̂_k^{(p)} = (−1)^k Σ_{j=k}^{r−1} C(j, k) X_j^{(p)}.

The formulas of Corollaries 2.4 and 2.6 and Theorem 2.7 are not known in the literature. Note that Theorem 2.7 has been obtained from formula (4) for ρ(n, r), given in Corollary 3.2 of [2]. Meanwhile, the formula for ρ(n, r) given in Proposition 3 of [6] allows us to obtain another expression of the functions Φ_j(t) in (6) and (7); hence, we can derive an equivalent formula for A^{1/p}. Moreover, the scalars X_k^{(p)} (0 ≤ k ≤ r−1) of Theorem 2.7 may be obtained in the same way as in Corollary 2.6. We elucidate Theorem 2.7 with the following examples.

Example 2.8. Let us compute A^{1/3} for the matrix A = [[9/4, 3/2], [−3/2, −3/4]]. We have A = P · J_A · P^{−1}, where J_A = J_2(3/4) = [[3/4, 1], [0, 3/4]] and P is an invertible matrix. Observe that A satisfies the conditions of Theorem 2.7.

Then, using Theorem 2.7 (here s = 1, m_1 = 2 and B = I_2 − A has the unique eigenvalue 1/4), we obtain A^{1/3} = X_0^{(3)} I_2 + X_1^{(3)} (I_2 − A), with

  X_k^{(3)} = b_k + 16 Σ_{j=0}^{k} a_{1−k+j} [ d/dt ( (1/t^{j+1}) ( (1 − t/4)^{1/3} − R_j(t/4) ) ) ]_{t=1},  k = 0, 1.

Observe that the characteristic polynomial of B = I_2 − A is P_B(z) = z² − (1/2)z + 1/16, thus a_0 = 1/2 and a_1 = −1/16. On the other hand, since R_0(z) = 1 − z/3 and R_1(z) = 1 − z/3 − z²/9, we have R_0(1/4) = 11/12 and R_1(1/4) = 131/144. Therefore, we deduce

  X_0^{(3)} = (3/4)^{1/3} + (1/12)(3/4)^{−2/3},  X_1^{(3)} = −(1/3)(3/4)^{−2/3}.

Consequently, we obtain

  A^{1/3} = [[ (3/4)^{1/3} + (1/2)(3/4)^{−2/3}, (1/2)(3/4)^{−2/3} ], [ −(1/2)(3/4)^{−2/3}, (3/4)^{1/3} − (1/2)(3/4)^{−2/3} ]].
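The coefficients of Example 2.8 can be checked directly against the defining equation X³ = A:

```python
import numpy as np

A = np.array([[ 9/4,  3/2],
              [-3/2, -3/4]])

t = (3/4)**(1/3)       # lambda^{1/3} for the double eigenvalue 3/4 of A
u = (3/4)**(-2/3)

# X_0^{(3)}, X_1^{(3)} from Example 2.8.
X0 = t + u/12
X1 = -u/3

X = X0*np.eye(2) + X1*(np.eye(2) - A)

assert np.allclose(X @ X @ X, A)
```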

Instead of Theorem 2.7, we could also have used Corollary 2.6 in the preceding example.

Example 2.9. Let us compute the square root of a 3 × 3 matrix A that is similar to

  J_A = [[1/2, 1, 0], [0, 1/2, 0], [0, 0, 3/2]],

and which satisfies the conditions of Theorem 2.7. Using the formula for ρ(n, r) from [2] (Example 4.2, p. 418), and applying Remark 2.3 and Theorem 2.2, we obtain A^{1/2} = X_0^{(2)} I_3 + X_1^{(2)} (I_3 − A) + X_2^{(2)} (I_3 − A)². The characteristic polynomial of B = I_3 − A is P_B(z) = z³ − (1/2)z² − (1/4)z + 1/8, therefore a_0 = 1/2, a_1 = 1/4 and a_2 = −1/8, and the eigenvalues of B are 1/2, of multiplicity 2, and −1/2. Recall that b_0 = 1, b_1 = −1/2, b_2 = −1/8, b_3 = −1/16 and b_4 = −5/128. A direct computation then gives

  X_0^{(2)} = √2/2 + √6/8,  X_1^{(2)} = (√2 − √6)/2,  X_2^{(2)} = −√2 + √6/2,

and the principal square root of A is A^{1/2} = X_0^{(2)} I_3 + X_1^{(2)} (I_3 − A) + X_2^{(2)} (I_3 − A)².
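By Remark 2.3 the coefficients of Example 2.9 depend only on the characteristic polynomial, so they can be checked on the Jordan form J_A itself:

```python
import numpy as np

# Jordan form from Example 2.9: eigenvalue 1/2 in a 2x2 block, plus 3/2.
J = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 0.0],
              [0.0, 0.0, 1.5]])

s2, s6 = np.sqrt(2.0), np.sqrt(6.0)
# X_0^{(2)}, X_1^{(2)}, X_2^{(2)} from Example 2.9.
X0 = s2/2 + s6/8
X1 = (s2 - s6)/2
X2 = -s2 + s6/2

I = np.eye(3)
S = X0*I + X1*(I - J) + X2*(I - J) @ (I - J)

assert np.allclose(S @ S, J)
```

The same three scalars applied to any matrix similar to J_A produce its principal square root.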


We conclude this section with some useful remarks. In particular, the next section is concerned with the following remark.

Remark 2.10. Lemma 2.1, Theorem 2.2 and Corollary 2.4 remain valid if we replace the characteristic polynomial by the minimal polynomial, or by any other annihilating polynomial Q(z) = z^s − a_0 z^{s−1} − ⋯ − a_{s−1} (s < r) of the matrix B = I_r − A; see, e.g., [5,7,8].

Remark 2.11. The polynomial decomposition of A^{1/p} given in Theorem 2.7 has been studied in [4], where the scalars X̂_k^{(p)} are derived from the resolution of a system of linear equations. Meanwhile, Theorem 2.7 shows that the scalars X̂_k^{(p)} can be obtained without solving any linear system. In a similar way, the scalars X̂_k^{(p)} can be derived using the method of [9], with X̂_k^{(p)} = X̂_k^{(p)}(1), where the explicit formula for the functions X̂_k^{(p)}(t) follows from the resolution of a linear system of equations (see [4]).

3. Computing the principal matrix pth root by polynomial decompositions

3.1. Polynomial decompositions of the principal matrix pth root

We are interested in studying the principal matrix pth root by using the so-called (scalar) polynomial decompositions, which can be defined as follows. Let A ∈ M_r(C) with M_A(z) = Π_{j=1}^{s} (z − μ_j)^{m_j} (m = Σ_{j=1}^{s} m_j ≤ r), satisfying the conditions of Theorem 2.2. For a given integer p ≥ 2, combining Theorem 2.2 and Remark 2.10 shows that there exist scalars X̂_k^{(p)} (0 ≤ k ≤ m−1) such that A^{1/p} = Σ_{k=0}^{m−1} X̂_k^{(p)} A^k. The preceding scalar power polynomial decomposition may be generalized by considering a set of matrices A_0, A_1, ..., A_{m−1} generating the same vector space as A^0 = I_r, A, ..., A^{m−1}. Thus, there exist scalars C_k^{(p)} (0 ≤ k ≤ m−1) such that A^{1/p} = Σ_{k=0}^{m−1} C_k^{(p)} A_k. In particular, if A_k = P_k(A) (0 ≤ k ≤ m−1), where the P_k(z) are polynomials, the decomposition A^{1/p} = Σ_{k=0}^{m−1} C_k^{(p)} P_k(A) is called a scalar polynomial decomposition of A^{1/p}. In practice, such decompositions are related to the eigenvalues of the matrix A.

Let B ∈ M_r(C) be such that M_B(z) = Π_{j=1}^{s} (z − μ_j)^{m_j} (m = Σ_{j=1}^{s} m_j ≤ r), ‖B‖ < 1 and Sp(I_r − B) ⊂ C ∖ R^−. Since (I_r − B)^{1/p} = Σ_{k=0}^{∞} b_k B^k, the Cayley–Hamilton theorem implies that there exist m scalars X̂_k^{(p)} such that (I_r − B)^{1/p} = Σ_{k=0}^{m−1} X̂_k^{(p)} B^k. Furthermore, since the matrices B_0 = I_r, B_1 = B − μ_1 I_r, ..., B_j = (B − μ_j I_r)^j, ..., B_{m−1} = (B − μ_{m−1} I_r)^{m−1} generate the same vector space as B^0 = I_r, B, ..., B^{m−1}, we deduce a formula of the type (I_r − B)^{1/p} = Σ_{k=0}^{m−1} X_k^{(p)} (B − μ_k I_r)^k. The aim in what follows is to express the X_k^{(p)}.

The Jordan canonical form of the matrix B is J_B = M_1(μ_1) ⊕ M_2(μ_2) ⊕ ⋯ ⊕ M_s(μ_s), where M_j(μ_j) is of order r_j (1 ≤ j ≤ s), with r = r_1 + ⋯ + r_s. Each jth block M_j(μ_j) (1 ≤ j ≤ s) can be written as M_j(μ_j) = J_{j_1}(μ_j) ⊕ ⋯ ⊕ J_{j_k}(μ_j), where the block J_{j_1}(μ_j) is of order m_j and the remaining blocks are of order ≤ m_j. For every j (1 ≤ j ≤ s) we set

  (M_j(μ_j) − μ_j I_r)_B = Θ_{r_1} ⊕ ⋯ ⊕ Θ_{r_{j−1}} ⊕ (M_j(μ_j) − μ_j I_{r_j}) ⊕ Θ_{r_{j+1}} ⊕ ⋯ ⊕ Θ_{r_s},

where Θ_{r_d} denotes the zero block of order r_d. It was established in Theorem 2 of [6] that the powers of this matrix are given by

  (M_j(μ_j) − μ_j I_r)_B^k = ( Π_{d=1, d≠j}^{s} (B − μ_d I_r)^{m_d} / (μ_j − μ_d)^{m_d} ) Σ_{i=0}^{m_j−k−1} β_{i,j} (B − μ_j I_r)^{k+i},

where β_{0,j} = 1 and

  β_{i,j} = −(1/β̄_{0,j}) Σ_{k=1}^{i} β̄_{k,j} β_{i−k,j},  with  β̄_{i,j} = Σ_{S_{i,j}} Π_{d=1, d≠j}^{s} C(m_d, h_d) (μ_j − μ_d)^{m_d−h_d},   (8)

where β̄_{0,j} = Π_{d=1, d≠j}^{s} (μ_j − μ_d)^{m_d}, β̄_{i,j} = 0 for i > m_1 + ⋯ + m_{j−1} + m_{j+1} + ⋯ + m_s, and the sum runs over the set S_{i,j} of tuples (h_1, ..., h_{j−1}, h_{j+1}, ..., h_s) satisfying h_1 + ⋯ + h_{j−1} + h_{j+1} + ⋯ + h_s = i with h_d ≤ m_d. Then, we obtain the following formula

  (I_r − J_B)^{1/p} = Σ_{j=1}^{s} Σ_{k=0}^{m_j−1} F_{j,k}^{(p)} ( Π_{d=1, d≠j}^{s} (J_B − μ_d I_r)^{m_d} / (μ_j − μ_d)^{m_d} ) Σ_{i=0}^{m_j−k−1} β_{i,j} (J_B − μ_j I_r)^{k+i},   (9)

where the β_{i,j} are given by (8) and

  F_{j,k}^{(p)} = ( (−1)^k / k! ) ( Π_{ℓ=0}^{k−1} (1/p − ℓ) ) (1 − μ_j)^{1/p − k}  if k ≥ 1,  F_{j,0}^{(p)} = (1 − μ_j)^{1/p}.   (10)


Since J_B and B are similar, there exists a non-singular matrix Z ∈ M_r(C) such that (I_r − B)^{1/p} = Z (I_r − J_B)^{1/p} Z^{−1}. Thus, with the aid of Eq. (9), we obtain the following lemma.

Lemma 3.1. Let p ≥ 2 be an integer and B ∈ M_r(C). Assume that Sp(I_r − B) ⊂ C ∖ R^−, ‖B‖ < 1 and M_B(z) = Π_{j=1}^{s} (z − μ_j)^{m_j}. Then we have the polynomial decomposition

  (I_r − B)^{1/p} = Σ_{j=1}^{s} Σ_{k=0}^{m_j−1} F_{j,k}^{(p)} Σ_{i=0}^{m_j−k−1} β_{i,j} (B − μ_j I_r)^{k+i} Π_{d=1, d≠j}^{s} (B − μ_d I_r)^{m_d} / (μ_j − μ_d)^{m_d},

where the F_{j,k}^{(p)} and β_{i,j} are given by (10) and (8).

Now consider the principal matrix pth root A^{1/p} of A ∈ M_r(C) with M_A(z) = Π_{j=1}^{s} (z − λ_j)^{m_j}. For practical reasons, the analogues of (8) are given by α_{0,j} = 1 and

  α_{i,j} = −(1/ᾱ_{0,j}) Σ_{k=1}^{i} ᾱ_{k,j} α_{i−k,j},  with  ᾱ_{i,j} = Σ_{S_{i,j}} Π_{d=1, d≠j}^{s} C(m_d, h_d) (λ_d − λ_j)^{m_d−h_d},   (11)

where ᾱ_{0,j} = Π_{d=1, d≠j}^{s} (λ_d − λ_j)^{m_d}, ᾱ_{i,j} = 0 for i > m_1 + ⋯ + m_{j−1} + m_{j+1} + ⋯ + m_s, and the sum runs over the set S_{i,j} of tuples (h_1, ..., h_{j−1}, h_{j+1}, ..., h_s) satisfying h_1 + ⋯ + h_{j−1} + h_{j+1} + ⋯ + h_s = i with h_d ≤ m_d. In a similar way, the analogue of (10) is

  D_{j,k}^{(p)} = ( (−1)^k / k! ) ( Π_{ℓ=0}^{k−1} (1/p − ℓ) ) λ_j^{1/p − k}  if k ≥ 1,  D_{j,0}^{(p)} = λ_j^{1/p}.   (12)

In fact, substituting 1 − μ_k by λ_k (1 ≤ k ≤ s) in Expressions (8) and (10) permits us to recover (11) and (12).

Theorem 3.2. Let p ≥ 2 be any integer and A ∈ M_r(C) satisfying the conditions Sp(A) ⊂ C ∖ R^−, ‖I_r − A‖ < 1, and M_A(z) = Π_{k=1}^{s} (z − λ_k)^{m_k} (Σ_{k=1}^{s} m_k = m ≤ r). Then

  A^{1/p} = Σ_{j=1}^{s} Σ_{k=0}^{m_j−1} D_{j,k}^{(p)} Σ_{i=0}^{m_j−k−1} α_{i,j} (λ_j I_r − A)^{k+i} Π_{d=1, d≠j}^{s} (λ_d I_r − A)^{m_d} / (λ_d − λ_j)^{m_d},   (13)

where the α_{i,j} and D_{j,k}^{(p)} are given by (11) and (12).

Proof. Setting B = I_r − A, the given assumptions show that B satisfies the conditions of Lemma 3.1; in other words, Sp(I_r − B) ⊂ C ∖ R^−, ‖B‖ < 1 and M_B(z) = Π_{j=1}^{s} (z − μ_j)^{m_j} (with μ_j = 1 − λ_j). Therefore, we have

  (I_r − B)^{1/p} = Σ_{j=1}^{s} Σ_{k=0}^{m_j−1} F_{j,k}^{(p)} Σ_{i=0}^{m_j−k−1} β_{i,j} (B − μ_j I_r)^{k+i} Π_{d=1, d≠j}^{s} (B − μ_d I_r)^{m_d} / (μ_j − μ_d)^{m_d},

where the F_{j,k}^{(p)} and β_{i,j} are given by (10) and (8). Substituting B by I_r − A and μ_j by 1 − λ_j (j = 1, ..., s) in the previous formula yields the desired result. □

A consequence of Remark 2.3 and Theorem 3.2 is the following corollary.

Corollary 3.3. Let A, S ∈ M_r(C) be two similar matrices, such that one of them satisfies the conditions of Theorem 3.2. Then the matrices A^{1/p} and S^{1/p} have the same explicit formula (13).

The following numerical example illustrates Theorem 3.2 and Corollary 3.3.

Example 3.4. Let us compute A^{1/2} for a 3 × 3 matrix A whose eigenvalues are λ_1 = 1/2, of multiplicity 2, and λ_2 = 1/4, with M_A(z) = (z − 1/2)²(z − 1/4). Observing that A satisfies the conditions of Theorem 3.2, formula (13) gives

  A^{1/2} = Σ_{j=1}^{2} Σ_{k=0}^{m_j−1} D_{j,k}^{(2)} Σ_{i=0}^{m_j−k−1} α_{i,j} (λ_j I_3 − A)^{i+k} Π_{d=1, d≠j}^{2} (λ_d I_3 − A)^{m_d} / (λ_d − λ_j)^{m_d}
        = 2√2 [ I_3 + 3((1/2) I_3 − A) ] (A − (1/4) I_3) + 8 ((1/2) I_3 − A)²,

which is the principal square root of A.
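Since the closed form of Example 3.4 is the Hermite interpolation of √z at the eigenvalue data {1/2 (double), 1/4}, it holds for any matrix with minimal polynomial (z − 1/2)²(z − 1/4); the Jordan-type matrix below is an illustrative stand-in with that spectrum:

```python
import numpy as np

# Illustrative matrix with minimal polynomial (z - 1/2)^2 (z - 1/4).
A = np.array([[0.5, 1.0, 0.00],
              [0.0, 0.5, 0.00],
              [0.0, 0.0, 0.25]])
I = np.eye(3)
s2 = np.sqrt(2.0)

M = 0.5*I - A          # (1/2) I - A
S = 2*s2 * (I + 3*M) @ (A - 0.25*I) + 8 * M @ M

assert np.allclose(S @ S, A)
```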

3.2. Some practical consequences

With the aid of Theorem 3.2 we can formulate practical results, useful for studying some special cases. The first consequence is the following.


Proposition 3.5. Let A ∈ M_r(C) satisfy the conditions of Theorem 3.2.
(1) Suppose that A admits a unique eigenvalue λ (of multiplicity m > 1). Then A^{1/p} = Σ_{k=0}^{m−1} D_{λ,k}^{(p)} (λ I_r − A)^k, where the D_{λ,k}^{(p)} are given by (12).
(2) Suppose that A admits pairwise distinct eigenvalues, with M_A(z) = Π_{j=1}^{r} (z − λ_j). Then A^{1/p} = Σ_{j=1}^{r} λ_j^{1/p} Π_{d=1, d≠j}^{r} (λ_d I_r − A) / (λ_d − λ_j).

Proof. Assertion (1) is obtained as a direct consequence of Theorem 3.2 by setting s = 1 and m_1 = m. Assertion (2) follows from Theorem 3.2 by taking s = r and m_j = 1 for j = 1, ..., r. □

It is easy to show that part (2) of Proposition 3.5 is nothing else but the usual polynomial interpolation of f(A) for f(z) = z^{1/p}.
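Part (2) of Proposition 3.5 is straightforward to implement; the sketch below assumes, for simplicity, a matrix with distinct positive real eigenvalues (the 2 × 2 symmetric matrix is an illustrative choice):

```python
import numpy as np

def principal_root_distinct(A, p):
    """Principal pth root via Proposition 3.5(2): Lagrange interpolation
    of z^{1/p} at pairwise distinct eigenvalues (assumed positive real here)."""
    lam = np.linalg.eigvals(A).real
    I = np.eye(A.shape[0])
    X = np.zeros_like(A, dtype=float)
    for j, lj in enumerate(lam):
        term = lj ** (1.0 / p) * I
        for d, ld in enumerate(lam):
            if d != j:
                term = term @ ((ld * I - A) / (ld - lj))
        X += term
    return X

# Illustrative symmetric matrix with distinct eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
X = principal_root_distinct(A, 3)

assert np.allclose(np.linalg.matrix_power(X, 3), A)
```

Each summand is the Lagrange basis polynomial at λ_j, evaluated at A and weighted by λ_j^{1/p}.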

In order to better clarify the practical role of Theorem 3.2, assume now that M_A(z) = (z − λ_1)^{m_1} (z − λ_2)^{m_2}. In this case the recursion (11) can be solved in closed form; we obtain

  α_{i,1} = (−1)^i Π_{j=0}^{i−1} (m_2 + j) / ( i! (λ_2 − λ_1)^i )  and  α_{i,2} = (−1)^i Π_{j=0}^{i−1} (m_1 + j) / ( i! (λ_1 − λ_2)^i ).   (14)

Therefore, we can formulate another practical consequence of Theorem 3.2.

Theorem 3.6. Let A ∈ M_r(C) satisfy the conditions of Theorem 3.2.
(1) Suppose that A has two distinct eigenvalues, with M_A(z) = (z − λ_1)^{m_1} (z − λ_2)^{m_2}. Then

  A^{1/p} = ( (λ_2 I_r − A)^{m_2} / (λ_2 − λ_1)^{m_2} ) Σ_{k=0}^{m_1−1} D_{1,k}^{(p)} Σ_{i=0}^{m_1−k−1} α_{i,1} (λ_1 I_r − A)^{k+i} + ( (λ_1 I_r − A)^{m_1} / (λ_1 − λ_2)^{m_1} ) Σ_{k=0}^{m_2−1} D_{2,k}^{(p)} Σ_{i=0}^{m_2−k−1} α_{i,2} (λ_2 I_r − A)^{k+i}.

(2) Suppose that M_A(z) = (z − λ)^{m_λ} Π_{j=1}^{s} (z − λ_j). Then

  A^{1/p} = Σ_{j=1}^{s} λ_j^{1/p} ( (λ I_r − A)^{m_λ} / (λ − λ_j)^{m_λ} ) Π_{d=1, d≠j}^{s} (λ_d I_r − A)/(λ_d − λ_j) + ( Π_{d=1}^{s} (λ_d I_r − A)/(λ_d − λ) ) Σ_{k=0}^{m_λ−1} D_{λ,k}^{(p)} Σ_{i=0}^{m_λ−k−1} α_{i,λ} (λ I_r − A)^{k+i},

where the D^{(p)}'s are given by (12), and the α's by (14) in case (1) and by (11) in case (2).

Proof. Assertion (1) follows from Theorem 3.2 and formula (14), and (2) is a particular case of Theorem 3.2. □

3.3. Application to some special cases

The preceding results can be exploited in order to establish the explicit form of the principal matrix 3rd root of some 4 × 4 matrices. We start with the case when the matrix A ∈ M_4(C) has a single eigenvalue λ; then M_A(z) = (z − λ), (z − λ)², (z − λ)³ or (z − λ)⁴. Assume that A satisfies the conditions of Theorem 3.2 and that M_A(z) = (z − λ)³. Then we have

  A^{1/3} = λ^{1/3} I_4 − (λ^{−2/3}/3) (λ I_4 − A) − (λ^{−5/3}/9) (λ I_4 − A)².   (15)

Similar formulas can be obtained in the other cases. We present the following numerical situation to exemplify formula (15).

Example 3.7. Consider a 4 × 4 matrix A whose unique eigenvalue is 1/2, with M_A(z) = (z − 1/2)³. Observe that A satisfies the conditions of Theorem 3.2. Hence, applying formula (15), we obtain the principal 3rd root

  A^{1/3} = a I_4 + b ((1/2) I_4 − A) + c ((1/2) I_4 − A)²,

where a = (1/2)^{1/3}, b = −(1/3)(1/2)^{−2/3} and c = −(1/9)(1/2)^{−5/3}.
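Formula (15) is the second-order Taylor expansion of z^{1/3} at λ, so it is exact for any matrix annihilated by (z − λ)³; the matrix below is an illustrative choice with that property:

```python
import numpy as np

lam = 0.5
# Illustrative 4x4 matrix with unique eigenvalue 1/2 and
# minimal polynomial (z - 1/2)^3.
A = np.array([[0.5, 1.0, 0.0, 0.0],
              [0.0, 0.5, 1.0, 0.0],
              [0.0, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.5]])
I = np.eye(4)
N = lam*I - A          # nilpotent of index 3 here

X = lam**(1/3)*I - (lam**(-2/3)/3)*N - (lam**(-5/3)/9)*(N @ N)

assert np.allclose(X @ X @ X, A)
```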

When the matrix A ∈ M_4(C) has two distinct eigenvalues λ and μ, we have M_A(z) = (z − λ)(z − μ), (z − λ)²(z − μ), (z − λ)³(z − μ) or (z − λ)²(z − μ)². Suppose that A satisfies the conditions of Theorem 3.2 and that M_A(z) = (z − λ)²(z − μ); then we have


  A^{1/3} = μ^{1/3} (λ I_4 − A)² / (λ − μ)² + λ^{1/3} [ I_4 + (λ I_4 − A)/(λ − μ) − (λ^{−1}/3)(λ I_4 − A) ] (μ I_4 − A)/(μ − λ).   (16)

Similar formulas can be obtained in the other cases. The following numerical example showcases Expression (16).

Example 3.8. Consider a 4 × 4 matrix A with eigenvalues 1/3, of multiplicity 2, and 1/4, such that M_A(z) = (z − 1/3)²(z − 1/4). Observe that A satisfies the conditions of Theorem 3.2; therefore Expression (16) implies that

  A^{1/3} = a ((1/3) I_4 − A)² + [ b ( I_4 + 12((1/3) I_4 − A) ) − c ((1/3) I_4 − A) ] (A − (1/4) I_4),

where a = 144 (1/4)^{1/3}, b = 12 (1/3)^{1/3} and c = 4 (1/3)^{−2/3}.
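As with Example 3.4, the expression of Example 3.8 is exact for every matrix with minimal polynomial (z − 1/3)²(z − 1/4); the block-diagonal matrix below is an illustrative stand-in:

```python
import numpy as np

# Illustrative 4x4 matrix with minimal polynomial (z - 1/3)^2 (z - 1/4).
A = np.array([[1/3, 1.0, 0.0, 0.0],
              [0.0, 1/3, 0.0, 0.0],
              [0.0, 0.0, 1/4, 0.0],
              [0.0, 0.0, 0.0, 1/3]])
I = np.eye(4)
M = I/3 - A            # (1/3) I - A

a = 144*(1/4)**(1/3)
b = 12*(1/3)**(1/3)
c = 4*(1/3)**(-2/3)

X = a*(M @ M) + (b*(I + 12*M) - c*M) @ (A - I/4)

assert np.allclose(X @ X @ X, A)
```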

Assume now that A ∈ M_4(C) has three distinct eigenvalues λ, μ and ν; then M_A(z) = (z − λ)(z − μ)(z − ν) or M_A(z) = (z − λ)²(z − μ)(z − ν). Suppose that A satisfies the conditions of Theorem 3.2 and that M_A(z) = (z − λ)²(z − μ)(z − ν). Then we have

  A^{1/3} = μ^{1/3} (ν I_4 − A)(λ I_4 − A)² / ( (ν − μ)(λ − μ)² ) + ν^{1/3} (μ I_4 − A)(λ I_4 − A)² / ( (μ − ν)(λ − ν)² ) + [ λ^{1/3} ( I_4 + ( (2λ − (μ + ν)) / ((λ − μ)(λ − ν)) ) (λ I_4 − A) ) − (λ^{−2/3}/3)(λ I_4 − A) ] (μ I_4 − A)(ν I_4 − A) / ( (λ − μ)(λ − ν) ).   (17)

Example 3.9. Let us compute the principal 3rd root of a 4 × 4 matrix A with eigenvalues 1/2, 1/3 and 2/3, whose minimal polynomial has simple roots, M_A(z) = (z − 1/2)(z − 1/3)(z − 2/3). Observe that A satisfies the conditions of Theorem 3.2; the interpolation formula then gives

  A^{1/3} = 36a (A − (1/3) I_4)((2/3) I_4 − A) + 18b ((1/2) I_4 − A)((2/3) I_4 − A) + 18c ((1/2) I_4 − A)((1/3) I_4 − A),

where a = (1/2)^{1/3}, b = (1/3)^{1/3} and c = (2/3)^{1/3}.
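The three-term expression of Example 3.9 can be checked on any matrix with those eigenvalues and a minimal polynomial with simple roots; the diagonal matrix below is an illustrative choice:

```python
import numpy as np

# Illustrative diagonalizable 4x4 matrix with eigenvalues 1/2, 1/2, 1/3, 2/3.
A = np.diag([1/2, 1/2, 1/3, 2/3])
I = np.eye(4)

a = (1/2)**(1/3)
b = (1/3)**(1/3)
c = (2/3)**(1/3)

X = (36*a*(A - I/3) @ (2*I/3 - A)
     + 18*b*(I/2 - A) @ (2*I/3 - A)
     + 18*c*(I/2 - A) @ (I/3 - A))

assert np.allclose(X @ X @ X, A)
```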

Finally, if A ∈ M_4(C) admits four distinct eigenvalues and satisfies the conditions of Theorem 3.2, the principal matrix 3rd root of A is derived directly from assertion (2) of Proposition 3.5.

4. Concluding remarks and perspective

The first approach rests primarily upon the techniques of [6,7], while the second approach relies on properties from [6]. We notice, however, that some results of Section 2 can be linked with those of Section 3. Indeed, let A ∈ M_r(C) satisfy the conditions of Theorem 2.7, where the eigenvalues λ_1, ..., λ_s of A are not simple. Then A^{1/p} = Σ_{k=0}^{r−1} X̃_k^{(p)} A^k, and by appending Proposition 3 of [6] to Theorem 2.7 the scalars X̃_k^{(p)} can be made explicit; the resulting expressions involve, as in (6), derivatives of t^{−j} ( (1 − tλ_h)^{1/p} − R_j(tλ_h) ) evaluated at t = 1, together with the coefficients α_{i,j} given by (11).

In Theorems 2.2 and 2.7, the principal matrix pth root A^{1/p} of a matrix A has been studied with the aid of the matrix B = I_r − A and its eigenvalues. Meanwhile, we can also exploit the eigenvalues of A itself. Indeed, under the hypothesis ‖I_r − A‖ < 1, it follows that A^{1/p} = Σ_{n=0}^{∞} b_n (I_r − A)^n = Σ_{n=0}^{∞} b_n Σ_{j=0}^{n} (−1)^j C(n, j) A^j. Therefore, if we suppose Sp(A) ⊂ C ∖ R^−, we obtain the decomposition A^{1/p} = Σ_{s=0}^{r−1} X̃_s A^s, with X̃_k = c_k + Σ_{n=r}^{∞} c_n Σ_{j=0}^{k} a_{r−k+j−1} ρ(n−j, r), where c_k = (−1)^k Σ_{q=k}^{∞} C(q, k) b_q, the a_i here denote the coefficients of the characteristic polynomial of A, and ρ(n, r) and b_n are given by (2) and (3). However, nontrivial computations are required to obtain equivalent results in this setting.

A potential variant for accomplishing explicit formulas of the principal pth root of an n × n matrix A is the introduction of the method used in [1] for calculating the principal logarithm of a matrix. This is based on the Binet formula for the solutions of the generalized Fibonacci sequences; see for instance [11].

Acknowledgments

The authors sincerely thank the reviewers for valuable comments and suggestions.

References

[1] J. Abderramán Marrero, R. Ben Taher, M. Rachidi, On explicit formulas for the principal matrix logarithm, Appl. Math. Comput. 220 (2013) 142–148.
[2] D. Aiat Hadj Ahmed, A. Bentaleb, M. Rachidi, F. Zitan, Powers of matrices by density and divided differences, Int. J. Algebra 3 (2009) 407–422.
[3] V. Arsigny, X. Pennec, N. Ayache, Polyrigid and polyaffine transformations: a novel geometrical tool to deal with non-rigid deformations, application to the registration of histological slices, Med. Image Anal. 9 (2005) 507–523.
[4] R. Ben Taher, Y. El Khatabi, M. Rachidi, On the polynomial decompositions of the principal matrix p-th root and applications, Int. J. Contemp. Math. Sci. 9 (3) (2014) 141–152.
[5] R. Ben Taher, M. Rachidi, On the matrix powers and exponential by r-generalized Fibonacci sequences methods: the companion matrix case, Linear Algebra Appl. 370 (2003) 341–353.
[6] R. Ben Taher, M. Rachidi, Some explicit formulas for the polynomial decomposition of the matrix exponential and applications, Linear Algebra Appl. 350 (2002) 171–184.
[7] R. Ben Taher, M. Rachidi, Linear recurrence relations in the algebra of matrices and applications, Linear Algebra Appl. 330 (2001) 15–24.
[8] R. Ben Taher, M. Mouline, M. Rachidi, Fibonacci–Horner decomposition of the matrix exponential and the fundamental solution, Electron. J. Linear Algebra 15 (2006) 178–190.
[9] H.W. Cheng, S.S.-T. Yau, On more explicit formulas for the matrix exponential, Linear Algebra Appl. 262 (1997) 131–163.
[10] E. Cinquemani, A. Milias-Argeitis, J. Lygeros, Local identification of piece-wise deterministic models of genetic networks, in: R. Majumdar, P. Tabuada (Eds.), Hybrid Systems: Computation and Control, Lecture Notes in Comput. Sci., vol. 5469, Springer, New York, 2009, pp. 105–119.
[11] F. Dubeau, W. Motta, M. Rachidi, O. Saeki, On weighted r-generalized Fibonacci sequences, Fibonacci Quart. 35 (1997) 102–110.
[12] N.J. Higham, Functions of Matrices: Theory and Computation, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2008.
[13] B. Iannazzo, On the Newton method for the matrix p-th root, SIAM J. Matrix Anal. Appl. 28 (2) (2006) 503–523.
[14] M. Mouline, M. Rachidi, Application of Markov chains properties to r-generalized Fibonacci sequences, Fibonacci Quart. 37 (1999) 34–38.
[15] J.L. Silvan-Cardenas, Sub-pixel remote sensing for mapping and modelling invasive tamarix: a case study in West Texas, 1993–2005, Theses and Dissertations – Geography, Paper 27, Texas State University, San Marcos, 2009.
[16] M.I. Smith, A Schur algorithm for computing matrix p-th roots, SIAM J. Matrix Anal. Appl. 24 (4) (2003) 971–989.
[17] Y.-W. Tai, P. Tan, M.S. Brown, Richardson–Lucy deblurring for scenes under a projective motion path, IEEE Trans. Pattern Anal. Mach. Intell. 33 (2011) 1603–1618.
[18] L. Verde-Star, Functions of matrices, Linear Algebra Appl. 406 (2005) 285–300.