A modified algorithm for the Perron root of a nonnegative matrix

Applied Mathematics and Computation 217 (2011) 4453–4458
Chengming Wen, Ting-Zhu Huang (School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, PR China)

Abstract: An algorithm of diagonal transformation for the Perron root of nonnegative matrices was proposed by Duan and Zhang [F. Duan, K. Zhang, An algorithm of diagonal transformation for Perron root of nonnegative irreducible matrices, Appl. Math. Comput. 175 (2006) 762–772]. The method applies to all nonnegative irreducible matrices. In this paper, an improved algorithm based on this method is proposed. The new algorithm inherits all the above-mentioned advantages of the original algorithm and has higher efficiency. Numerical tests confirm that the efficiency of the new algorithm is improved greatly. © 2010 Elsevier Inc. All rights reserved.

Keywords: Nonnegative irreducible matrix; Perron root; Diagonal transformation

1. Introduction

The problem of computing the Perron root of a nonnegative irreducible matrix is of interest in the theory of nonnegative matrices and in other applied fields, and several algorithms have been proposed, cf. [1–6]. However, most algorithms for the Perron root suffer from two limitations. First, their scope is limited: some algorithms apply only to special classes of matrices. Second, the result may not reach the required precision.

An algorithm of diagonal transformation for the Perron root was proposed by Duan and Zhang [1]. The method applies to all nonnegative irreducible matrices, is convenient to compute with, and attains any prescribed precision. In this paper, a new improved algorithm based on the diagonal-transformation algorithm is proposed. The new algorithm inherits all the above-mentioned advantages of the original one and has higher efficiency; numerical tests show that the efficiency is improved greatly.

Without loss of generality, we assume throughout this paper that A is an n × n nonnegative irreducible matrix. For such a matrix A = (a_ij)_{n×n}, let ρ(A) denote the spectral radius of A, namely the Perron root of A. Let ⟨n⟩ = {1, 2, …, n} and

    r_i(A) = Σ_{j=1}^{n} a_ij,   i ∈ ⟨n⟩.                                  (1)

r_i(A) is called the ith row sum of A. It is well known that the Perron root ρ(A) satisfies the Frobenius inequality

    r_min ≤ ρ(A) ≤ r_max,                                                   (2)

where r_min = min_{i∈⟨n⟩} r_i and r_max = max_{i∈⟨n⟩} r_i are the minimum and maximum row sums of A, respectively.
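As a quick numerical illustration of (2), the following Python sketch checks the inequality on a hypothetical 2 × 2 example whose Perron root is known in closed form:

```python
from math import sqrt

# Hypothetical example: A = [[1, 1], [1, 2]] is nonnegative and irreducible.
# Its characteristic polynomial is x^2 - 3x + 1, so rho(A) = (3 + sqrt(5)) / 2.
A = [[1.0, 1.0], [1.0, 2.0]]
r = [sum(row) for row in A]        # row sums r_i(A), as in (1)
rho = (3.0 + sqrt(5.0)) / 2.0      # Perron root of A
```

Here r_min = 2 and r_max = 3, and indeed 2 ≤ ρ(A) ≈ 2.618 ≤ 3.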

Footnote: Supported by NSFC (60973015) and Sichuan Province Sci. & Tech. Research Project (2009GZ0004).
Corresponding author: Ting-Zhu Huang. E-mail addresses: [email protected] (C. Wen); [email protected], [email protected] (T.-Z. Huang).
0096-3003/$ - see front matter © 2010 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2010.10.048


For a nonnegative irreducible matrix A = A_0 = (a^0_ij)_{n×n}, compute its row sums

    r^0_i = Σ_{j=1}^{n} a^0_ij,   i ∈ ⟨n⟩,

to obtain the minimum and maximum row sums. In [1], the authors set

    D_0 = diag(r^0_1, r^0_2, …, r^0_n)

and let

    A_{k+1} = D_k^{-1} A_k D_k = (a^{k+1}_ij)_{n×n},   k = 0, 1, 2, …,

to obtain the sequence of similar matrices {A_k}, the sequence of minimum row sums {r^k_min}, and the sequence of maximum row sums {r^k_max}. The algorithm of diagonal transformation for the Perron root proposed by Duan and Zhang [1] is stated below as Algorithm 1.

Algorithm 1 [1]:

Step 0: Given a nonnegative irreducible matrix B = (b_ij)_{n×n} and ε > 0. Let k = 0.
Step 1: Let A = A_0 = I + B = (a^0_ij)_{n×n}.
Step 2: Compute

    r^k_i = Σ_{j=1}^{n} a^k_ij,   r^k_min = min_{1≤i≤n} r^k_i,   r^k_max = max_{1≤i≤n} r^k_i.

If (r^k_max − r^k_min) < ε, go to Step 5.
Step 3: Compute D_k = diag(r^k_1, r^k_2, …, r^k_n).
Step 4: Update: let A_{k+1} = D_k^{-1} A_k D_k. Set k = k + 1. Go back to Step 2.
Step 5: Let ρ(B) = ½(r^k_min + r^k_max) − 1. Stop.

To achieve higher efficiency, our main concern is how to improve Algorithm 1 and optimize its running efficiency. In this paper, by analyzing the iterative steps of Algorithm 1, we propose a simple method that improves its running efficiency. The new method inherits all the advantages of Algorithm 1: it applies to all nonnegative irreducible matrices and is easy to compute with. Numerical tests will show that it improves the efficiency greatly.

2. Discussion about improving Algorithm 1

In Algorithm 1, the iterative steps are Step 3 and Step 4. An obvious improvement is to reduce the number of iterations while still guaranteeing the required precision of the result. We first discuss how to reduce the number of iterations from two perspectives.

Perspective 1. To ensure that the diagonal elements of the matrix are positive, let, in Step 1,

    A = A_0 = λI + B = (a^0_ij)_{n×n},   λ > 0.

Algorithm 1 takes λ = 1 by default. In fact, λ = 1 is sometimes not the best value: for some matrices, another positive value of λ reduces the number of iterations. The following example illustrates this.

Example 1. Consider the following 3-by-3 matrix [1]:

    A = [ 0 1 0
          0 0 2
          3 0 0 ].

The only change made is the value of λ. The results are given in Table 1.

Table 1. Detailed information of Example 1.

    ε        Iterations (λ = 1)   Iterations (λ = 1.6)   ρ(A)
    10^-5    21                   18                     1.81712
    10^-10   41                   35                     1.8171205928
    10^-15   62                   52                     1.817120592832139
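The comparison in Table 1 can be reproduced with a short Python sketch of Algorithm 1 that includes the shift parameter λ (the function name is ours; the two products with the diagonal matrices are carried out as row and column scalings, which is exactly what multiplying by D_k^{-1} and D_k amounts to):

```python
def alg1_with_shift(B, lam=1.0, eps=1e-5, max_iter=100000):
    """Algorithm 1 with Step 1 generalized to A0 = lam*I + B (Perspective 1).
    Returns the estimated Perron root of B and the number of iterations used."""
    n = len(B)
    A = [[B[i][j] + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    for it in range(max_iter):
        r = [sum(row) for row in A]                  # Step 2: row sums
        if max(r) - min(r) < eps:                    # Frobenius bounds agree
            return 0.5 * (min(r) + max(r)) - lam, it
        # Steps 3-4: A <- D^{-1} A D, i.e. divide row i by r_i, scale column j by r_j
        DinvA = [[A[i][j] / r[i] for j in range(n)] for i in range(n)]
        A = [[DinvA[i][j] * r[j] for j in range(n)] for i in range(n)]
    raise RuntimeError("did not converge")

B = [[0.0, 1.0, 0.0], [0.0, 0.0, 2.0], [3.0, 0.0, 0.0]]  # the matrix of Example 1
rho1, it1 = alg1_with_shift(B, lam=1.0, eps=1e-5)
rho2, it2 = alg1_with_shift(B, lam=1.6, eps=1e-5)
```

On this example the paper reports 21 iterations for λ = 1 and 18 for λ = 1.6 at ε = 10^-5; the counts produced by this sketch may differ by one depending on where the stopping test is counted, but the shift λ = 1.6 does converge in fewer iterations.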

From this example we can see that the number of iterations decreases somewhat when λ = 1.6. This shows that the value of λ affects the number of iterations. How, then, to choose the best value of λ? It depends directly on the matrix at hand, and it is difficult to find a function that yields the best value of λ for all nonnegative matrices. This problem is left for interested researchers.

Perspective 2. In Step 2, the stopping condition is (r^k_max − r^k_min) < ε; that is, the Frobenius inequality is used to judge whether the gap between the upper and lower bounds of the Perron root has reached the required precision. In fact, when the matrix is positive (i.e., all elements of A are positive), we can apply the sharper Brauer inequality. Modify Step 2 of Algorithm 1 as follows:

Step 2: Compute

    r^k_i = Σ_{j=1}^{n} a^k_ij,   r^k_min = min_{1≤i≤n} r^k_i,   r^k_max = max_{1≤i≤n} r^k_i,

    p_k = s + m(h − 1),   q_k = S − m(1 − 1/g),

where s = r^k_min(A), S = r^k_max(A), m = min_{i,j∈⟨n⟩} a^k_ij, and

    g = [S − 2m + √(S² − 4m(S − s))] / (2(s − m)),
    h = [2m − s + √(s² + 4m(S − s))] / (2m).

If (q_k − p_k) < ε, go to Step 5.
Step 5: Let ρ(B) = ½(q_k + p_k) − 1. Stop.

The two modifications above actually improve the running efficiency only slightly. On the one hand, in Perspective 1 it is difficult to determine the best value of the parameter λ. On the other hand, Perspective 2 applies only to positive matrices and merely refines the termination test for the required accuracy; at best it saves a little time, and at worst it does nothing for the efficiency. So these two ways of revising the algorithm are not satisfactory.

Both perspectives aim at reducing the number of iterations. In the next section we improve Algorithm 1 from a different angle: not by reducing the number of iterations, but by eliminating redundant operations. In that way, the efficiency is improved greatly.
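The Brauer-type bounds of the modified Step 2 can be checked numerically. The sketch below (a helper of our own, using the formulas for g and h as reconstructed above) evaluates p and q for a hypothetical positive matrix whose Perron root is known exactly:

```python
from math import sqrt

def brauer_bounds(A):
    """Bounds p <= rho(A) <= q for a positive matrix A (modified Step 2):
    s, S are the minimum/maximum row sums and m is the smallest entry of A."""
    r = [sum(row) for row in A]
    s, S = min(r), max(r)
    m = min(min(row) for row in A)
    if S == s:
        return s, s                      # equal row sums: rho(A) = s exactly
    g = (S - 2.0 * m + sqrt(S * S - 4.0 * m * (S - s))) / (2.0 * (s - m))
    h = (2.0 * m - s + sqrt(s * s + 4.0 * m * (S - s))) / (2.0 * m)
    return s + m * (h - 1.0), S - m * (1.0 - 1.0 / g)

# Hypothetical test matrix: rho([[1, 9], [1, 1]]) = 1 + sqrt(9) = 4 exactly.
p, q = brauer_bounds([[1.0, 9.0], [1.0, 1.0]])
```

Here s = 2, S = 10, m = 1, and the bounds come out as p = 4 and q ≈ 9.123: the lower bound is attained, and both are strictly sharper than the Frobenius bounds 2 and 10.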

3. Improved algorithm

In the last section we attempted to obtain higher operational efficiency by reducing the number of iterations, but the effect was not ideal. We now improve Algorithm 1 from another viewpoint: instead of reducing the number of iterations, we eliminate redundant operations to obtain higher running efficiency. A novel algorithm based on Algorithm 1 is then proposed. With all the merits of Algorithm 1 and a shorter process, the new algorithm is suitable for all nonnegative irreducible matrices, can compute the Perron root to arbitrary accuracy, and is easy to implement as a computer program. Moreover, its efficiency is greatly improved in comparison with Algorithm 1.

For a large matrix A, the dominant cost is the formula A_{k+1} = D_k^{-1} A_k D_k in Step 4 of Algorithm 1, which performs two matrix multiplications. The pseudocode of matrix multiplication is:

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++) {
            B[i][j] = 0;
            for (k = 0; k < n; k++)
                B[i][j] += D[i][k] * A[k][j];
        }

Obviously, matrix multiplication takes O(n³) time, so each iteration of Algorithm 1 costs O(n³). When the matrix size n is large, the computation becomes rather inefficient; moreover, each iteration performs two matrix multiplications in Step 4, which significantly reduces the running efficiency. We had better avoid the matrix multiplication altogether.

In the following we examine the essence of the operations in Steps 3 and 4 of Algorithm 1. Set

    D_k = diag(r^k_1, r^k_2, …, r^k_n)

and then let A_{k+1} = D_k^{-1} A_k D_k. The operation proceeds as follows:

    A_{k+1} = diag(1/r^k_1, …, 1/r^k_n) · (a^k_ij)_{n×n} · diag(r^k_1, …, r^k_n)
            = ( (r^k_j / r^k_i) · a^k_ij )_{n×n}.

We can see that the elements of the updated matrix A_{k+1} obtained by the iterative operation satisfy the formula

    a^{k+1}_ij = (r^k_j / r^k_i) · a^k_ij.

The row sum vector of A_k is denoted by

    r^k = (r^k_1, r^k_2, …, r^k_n).

Therefore, every element of A_{k+1} can be computed directly by the formula

    a^{k+1}_ij = (r^k_j / r^k_i) · a^k_ij,   i, j ∈ ⟨n⟩.
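This identity is easy to verify numerically. The following sketch (helper names are ours) applies one iteration both ways, as an explicit D_k^{-1} A_k D_k product and via the elementwise formula, and compares the results:

```python
def similarity_step(A):
    """One step of Algorithm 1: D^{-1} A D with D = diag(row sums),
    carried out as explicit O(n^3) matrix multiplications."""
    n = len(A)
    r = [sum(row) for row in A]
    D = [[r[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    Dinv = [[1.0 / r[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    return mul(mul(Dinv, A), D)

def elementwise_step(A):
    """The same step computed directly in O(n^2): a_ij <- (r_j / r_i) * a_ij."""
    n = len(A)
    r = [sum(row) for row in A]
    return [[A[i][j] * r[j] / r[i] for j in range(n)] for i in range(n)]
```

Both functions produce the same matrix up to rounding, which is the whole point of the improvement below: the expensive products can be skipped.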

This means that the updated matrix after each iteration can be computed directly from the row sum vector of the current matrix. In this way, we eliminate the redundant calculation and improve the efficiency of the algorithm greatly. We can therefore design the following new algorithm based on the row sum vector of the matrix. Its operational essence is the same as that of Algorithm 1, but its efficiency is much higher. We denote the improved new algorithm as Algorithm 2.

Algorithm 2:

Step 0: Given a nonnegative irreducible matrix B = (b_ij)_{n×n} and ε > 0. Let k = 0.
Step 1: Let A = A_0 = λI + B = (a^0_ij)_{n×n}, with λ = 1.
Step 2: Compute

    r^k = (r^k_1, r^k_2, …, r^k_n),

where r^k_i = Σ_{j=1}^{n} a^k_ij, i ∈ ⟨n⟩, and r^k_min = min_{1≤i≤n} r^k_i, r^k_max = max_{1≤i≤n} r^k_i. If (r^k_max − r^k_min) < ε, go to Step 4.
Step 3: Compute

    a^{k+1}_ij = (r^k_j / r^k_i) · a^k_ij,   i, j ∈ ⟨n⟩.

Set k = k + 1. Go back to Step 2.
Step 4: Let ρ(B) = ½(r^k_min + r^k_max) − λ. Stop.
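Algorithm 2 can be sketched in Python as follows (the function name is ours; the update is the O(n²) elementwise formula of Step 3):

```python
def perron_root_alg2(B, eps=1e-10, lam=1.0, max_iter=100000):
    """Algorithm 2: diagonal-transformation iteration driven by the row sum
    vector only. Returns an estimate of rho(B) with error below eps / 2."""
    n = len(B)
    # Step 1: A0 = lam*I + B (lam = 1 by default, as in the algorithm)
    A = [[B[i][j] + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    for _ in range(max_iter):
        r = [sum(row) for row in A]                # Step 2: row sum vector r^k
        if max(r) - min(r) < eps:
            return 0.5 * (min(r) + max(r)) - lam   # Step 4: midpoint, minus shift
        # Step 3: a_ij <- (r_j / r_i) * a_ij, O(n^2) per iteration
        A = [[A[i][j] * r[j] / r[i] for j in range(n)] for i in range(n)]
    raise RuntimeError("did not converge")
```

On the 3-by-3 matrix of Example 1 this returns 6^{1/3} ≈ 1.8171206 to within the tolerance, and with ε = 0.001 on the 8-by-8 matrix of Example 2 it terminates after a handful of iterations, consistent with the worked example later in this section.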

We can see that the iterative operation is the computation of A_{k+1} by the formula a^{k+1}_ij = (r^k_j / r^k_i) a^k_ij, i, j ∈ ⟨n⟩. The pseudocode of the iterative operation is:

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++) {
            B[i][j] = r[j] * A[i][j] / r[i];
        }

Obviously, this step takes only O(n²) time, whereas the previous iterative operation took O(n³) time because of the matrix multiplications. The cost per iteration therefore decreases from O(n³) in Algorithm 1 to O(n²) in Algorithm 2, and the efficiency of Algorithm 2 improves greatly.

4. Numerical experiments

The following example shows how to apply Algorithm 2 to obtain an accurate Perron root of a nonnegative matrix.

Example 2. Consider the following 8-by-8 nonnegative matrix (see [4], Example 4); the required precision is 0.001 (ε = 0.001). Use Algorithm 2 to compute the Perron root.

    B = [ 8 6 3 5 7 0 7 1
          0 7 3 8 5 6 4 1
          1 2 6 1 3 8 8 7
          2 8 4 0 7 7 8 2
          2 4 6 2 5 7 6 5
          4 1 0 4 8 4 8 2
          3 1 6 6 4 5 5 0
          0 1 1 6 7 0 3 4 ].

First, with ε = 0.001, we have

    A = A_0 = I + B = [ 9 6 3 5 7 0 7 1
                        0 8 3 8 5 6 4 1
                        1 2 7 1 3 8 8 7
                        2 8 4 1 7 7 8 2
                        2 4 6 2 6 7 6 5
                        4 1 0 4 8 5 8 2
                        3 1 6 6 4 5 6 0
                        0 1 1 6 7 0 3 5 ].

For k = 0, compute the row sum vector and obtain

    r^0 = (r^0_1, r^0_2, …, r^0_8) = (38, 35, 37, 39, 38, 32, 31, 23),
    r^0_min = 23.0000,   r^0_max = 39.0000.

Because (r^0_max − r^0_min) = 39 − 23 = 16 > ε, we perform the iterative operation. Applying the formula a^1_ij = (r^0_j / r^0_i) a^0_ij, i, j ∈ ⟨8⟩, we have

    A_1 = [ 9.0000  5.5263  2.9211  5.1316  7.0000  0.0000  5.7105  0.6053
            0.0000  8.0000  3.1714  8.9143  5.4286  5.4857  3.5429  0.6571
            1.0270  1.8919  7.0000  1.0541  3.0811  6.9189  6.7027  4.3514
            1.9487  7.1795  3.7949  1.0000  6.8205  5.7436  6.3590  1.1795
            2.0000  3.6842  5.8421  2.0526  6.0000  5.8947  4.8947  3.0263
            4.7500  1.0938  0.0000  4.8750  9.5000  5.0000  7.7500  1.4375
            3.6774  1.1290  7.1613  7.5484  4.9032  5.1613  6.0000  0.0000
            0.0000  1.5217  1.6087 10.1739 11.5652  0.0000  4.0435  5.0000 ].

When k = 1, compute the row sum vector and obtain

  r1 ¼ r11 ; r 12 ; . . . ; r 18 ¼ ð35:8947; 35:20; 32:0270; 34:0257; 33:3946; 34:4063; 35:5806; 33:9130Þ; r 1min ¼ 32:0270; r 1max ¼ 35:8947: Because ðr 1max  r1min Þ ¼ 35:8947  32:0270 ¼ 3:8677 > e, then go on implementing the iterative operation, . . . When k ¼ 7; r 7min ¼ 34:2417; r7max ¼ 34:2419. Because



 r7max  r 7min ¼ 34:2419  34:2417 ¼ 0:0002 < e;


Table 2. Comparison of efficiency between Algorithms 1 and 2.

    n     ε        Time, Alg. 1 (s)   Time, Alg. 2 (s)   Iterations   ρ(A)
    20    10^-5    0.158              0.006              8            170.40427
    20    10^-10   0.249              0.007              14           170.4042675054
    100   10^-5    2.022              0.049              10           4093.56047
    100   10^-10   3.309              0.056              15           4093.5604746853
    300   10^-5    26.251             0.307              11           36597.39619
    300   10^-10   38.286             0.380              16           36597.3961862432
    500   10^-5    137.638            0.836              11           101524.01066
    500   10^-10   201.173            1.023              16           101524.0106641804

we have the result

    ρ(B) = ½(r^7_min + r^7_max) − 1 ≈ 33.242.

This agrees with the accurate value of the Perron root given in [4].

Based on Algorithms 1 and 2, two computer programs were developed. For the following matrix, we apply both programs to compute the Perron root and compare their running times when the results reach the same accuracy on the same computer. This displays the great advantage of Algorithm 2 and shows its very high efficiency.

Example 3. Consider the n-by-n positive matrix (see [1] or [3])

    A = [ 1 1 1 … 1   1
          1 2 2 … 2   2
          1 2 3 … 3   3
          ⋮
          1 2 3 … n−1 n−1
          1 2 3 … n−1 n ],

i.e., a_ij = min(i, j).
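Example 3 can be reproduced with a short script. The sketch below (helper names ours) builds the matrix a_ij = min(i, j) and applies the elementwise iteration of Algorithm 2; for n = 20 and ε = 10^-5 the result should agree with the first row of Table 2:

```python
def min_matrix(n):
    """The n-by-n positive matrix of Example 3: a_ij = min(i, j), 1-indexed."""
    return [[float(min(i, j)) for j in range(1, n + 1)] for i in range(1, n + 1)]

def perron_root(B, eps=1e-5, max_iter=100000):
    """Algorithm 2 with the default shift lam = 1."""
    n = len(B)
    A = [[B[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    for _ in range(max_iter):
        r = [sum(row) for row in A]                # row sum vector
        if max(r) - min(r) < eps:
            return 0.5 * (min(r) + max(r)) - 1.0   # midpoint, minus the shift
        A = [[A[i][j] * r[j] / r[i] for j in range(n)] for i in range(n)]
    raise RuntimeError("did not converge")
```

For n = 20 this converges in a handful of iterations to ρ(A) ≈ 170.40427, as in Table 2.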

The computer is configured as follows. Hardware: CPU 1.6 GHz; memory 256 MB. Software: operating system Linux Fedora Core 3; compiler G++ 3.4.2.

From Table 2, Algorithm 2 cuts out a large amount of redundancy, grasps the essence of the problem, and greatly enhances the running efficiency. For example, when the matrix size reaches 500, the running time of Algorithm 1 is nearly 200 times that of Algorithm 2. Obviously, Algorithm 2 runs much faster than Algorithm 1, and the larger the matrix, the greater its advantage. Therefore, with Algorithm 2 a computer can calculate the Perron root of any nonnegative irreducible matrix at high speed, which shows its strong generality and practical value.

In conclusion, Algorithm 2 eliminates redundant operations and greatly improves the running efficiency. With all the merits of the original algorithm and a simple process, Algorithm 2 is suitable for any nonnegative irreducible matrix, can compute the Perron root to arbitrary precision, and is easy to implement as a computer program. Numerical tests confirm that Algorithm 2 runs at very high speed and that its running efficiency is improved greatly.

References

[1] F. Duan, K. Zhang, An algorithm of diagonal transformation for Perron root of nonnegative irreducible matrices, Appl. Math. Comput. 175 (2006) 762–772.
[2] W. Bunse, A class of diagonal transformation methods for the computation of the spectral radius of a nonnegative matrix, SIAM J. Numer. Anal. 18 (4) (1981) 693–704.
[3] L. Lu, Perron complement and Perron root, Linear Algebra Appl. 341 (2002) 239–248.
[4] L. Lu, M.K. Ng, Localization of Perron roots, Linear Algebra Appl. 392 (2004) 103–117.
[5] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[6] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991.