Applied Mathematics and Computation 217 (2011) 7573–7578
On some classes of structured matrices with algebraic trigonometric eigenvalues

L. Gemignani
Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy
Abstract. Diagonal plus semiseparable matrices are constructed, the eigenvalues of which are algebraic numbers expressed by simple closed trigonometric formulas.
Keywords: Rank-structured matrices; Chebyshev polynomials; Eigenvalue computation
1. Introduction

Finding classes of structured matrices with known eigenvalues is important for testing purposes as well as for proving specific properties of the associated sequences of numbers and polynomials. Three well-known collections of test matrices are given in [12,19,13], which also include examples of structured matrices with band and/or displacement structure. The paper [16] describes some elementary applications of matrix eigenvalue theory in computational algebra for deciding whether certain numbers are algebraic or not.

A quite recent advance in numerical linear algebra has been the design of fast numerical methods for the eigenvalue computation of rank-structured matrices. The theory of such matrices originated in the work of Gantmakher and Krein [8] concerning the structure of the inverses of certain tridiagonal matrices. The seminal paper [5] laid the basis of fast eigenvalue algorithms for rank-structured matrices by introducing suitable generalizations of the inverses of banded matrices. An up-to-date survey of these algorithms with several applications can be found in [17]. The structure of diagonal-plus-semiseparable matrices [4,10,15] (dpss for short) is perhaps the simplest generalization of that of the inverses of tridiagonal matrices. A dpss matrix A = (a_{i,j}) ∈ C^{n×n} is such that
$$a_{i,j} = \begin{cases} u_i v_j, & \text{if } i > j,\\ p_i q_j, & \text{if } i < j,\end{cases}$$

with u_i, v_i, p_i, q_i ∈ C. The diagonal entries of A, a_{i,i}, 1 ≤ i ≤ n, are arbitrary. The rank structure of A follows from

$$\max_{1\le k\le n-1} \operatorname{rank} A[k+1:n,\, 1:k] \le 1, \qquad \max_{1\le k\le n-1} \operatorname{rank} A[1:k,\, k+1:n] \le 1.$$
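As an illustration (not from the paper), the following numpy sketch assembles a dpss matrix from randomly chosen generators and checks the two rank conditions above; the helper name dpss, the test dimension and the random data are assumptions of the sketch.

```python
import numpy as np

def dpss(d, u, v, p, q):
    """a[i, j] = u[i] v[j] below, p[i] q[j] above, d[i] on the diagonal."""
    return (np.tril(np.outer(u, v), -1)
            + np.triu(np.outer(p, q), 1)
            + np.diag(d))

rng = np.random.default_rng(0)
n = 8
A = dpss(*(rng.standard_normal(n) for _ in range(5)))

for k in range(1, n):
    assert np.linalg.matrix_rank(A[k:, :k]) <= 1    # block A[k+1:n, 1:k]
    assert np.linalg.matrix_rank(A[:k, k:]) <= 1    # block A[1:k, k+1:n]
```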
In this note we present two classes of dpss matrices whose eigenvalues can be explicitly determined by means of closed trigonometric formulas. Specifically, by following the approach of [7] it is shown that the sequences of characteristic polynomials of these matrices are closely related to Chebyshev orthogonal polynomials. The connection yields simple formulas for the roots of the characteristic polynomials, which are the values of the cotangent function evaluated at certain rational multiples of π. The result also provides an answer, in the spirit of [16], to the issue raised in [1,11] of giving a direct elementary proof of the well-known fact that the values of the cotangent and tangent functions evaluated at rational multiples of π are algebraic numbers.
E-mail address: [email protected]
doi:10.1016/j.amc.2011.02.032
2. Preliminaries

In this section we review the main result in [7], which under some mild additional assumptions exhibits a three-term recurrence relation for the sequence of the characteristic polynomials of a dpss matrix. The result has been generalized in [6], whereas the same approach has been used in [9,14] to obtain sparse and structured representations of banded-plus-semiseparable matrices. The method proposed in [7] uses Neville elimination to transform the initial dpss matrix into banded form by means of a congruence transformation. Neville elimination is a classical elimination technique which, unlike the customary Gaussian method, at each step combines two consecutive equations (rows) to progressively create zeros in the lower left corner of the linear system. In matrix language this means that the Neville method employs bidiagonal matrices in the annihilation scheme and, therefore, it can lead to a representation of the input matrix as a product of bidiagonal matrices. To be specific, let A = (a_{i,j}) ∈ C^{n×n} be a dpss matrix with the entries defined as follows:
$$a_{i,j} = \begin{cases} u_i v_j, & \text{if } i > j,\\ p_i q_j, & \text{if } i < j,\\ d_i, & \text{if } i = j,\end{cases}$$

where u_j ≠ 0 and q_j ≠ 0 for 2 ≤ j ≤ n−1 and d_i ∈ C, 1 ≤ i ≤ n, are arbitrary. Denote by L_u the lower bidiagonal matrix of order n with unit diagonal entries and subdiagonal entries (L_u)_{i+1,i} = α_i, with α_1 arbitrary and α_i := −u_{i+1}/u_i, 2 ≤ i ≤ n−1. It is found that H = (h_{i,j}) = L_u A is upper Hessenberg with possibly nonzero entries given by
$$h_{i,j} = \begin{cases} \alpha_{i-1} d_{i-1} + u_i v_{i-1}, & \text{if } j = i-1,\\ \alpha_{i-1} p_{i-1} q_i + d_i, & \text{if } j = i,\\ (\alpha_{i-1} p_{i-1} + p_i)\, q_j, & \text{if } j > i,\end{cases}$$
where α_0 := 0. It is worth noting that the rank structure in the upper triangular part of A is maintained. Thus let L_q be the lower bidiagonal matrix of order n with unit diagonal entries and subdiagonal entries (L_q)_{i+1,i} = β_i, with β_1 arbitrary and β_i := −q_{i+1}/q_i, 2 ≤ i ≤ n−1. The matrix T = (t_{i,j}) = H L_q^T = L_u A L_q^T is tridiagonal with entries
$$t_{i,j} = \begin{cases} h_{i,i-1}, & \text{if } j = i-1,\\ h_{i,i-1}\,\beta_{i-1} + h_{i,i}, & \text{if } j = i,\\ h_{i,i}\,\beta_i + h_{i,i+1}, & \text{if } j = i+1,\end{cases}$$

where β_0 := 0. Hence it follows that
$$t_{i,j} = \begin{cases} \alpha_{i-1} d_{i-1} + u_i v_{i-1}, & \text{if } j = i-1,\\ \alpha_{i-1}\beta_{i-1} d_{i-1} + d_i + \beta_{i-1} u_i v_{i-1} + \alpha_{i-1} p_{i-1} q_i, & \text{if } j = i,\\ \beta_i d_i + p_i q_{i+1}, & \text{if } j = i+1. \end{cases} \qquad (1)$$
Observe that D = (d_{i,j}) = L_u L_q^T is also tridiagonal with entries

$$d_{i,j} = \begin{cases} \alpha_{i-1}, & \text{if } j = i-1,\\ \alpha_{i-1}\beta_{i-1} + 1, & \text{if } j = i,\\ \beta_i, & \text{if } j = i+1. \end{cases} \qquad (2)$$
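The reduction can be checked numerically. The following sketch is not taken from the paper: it assembles a random dpss matrix, forms the bidiagonal factors L_u and L_q, and verifies that H = L_u A is upper Hessenberg while T = L_u A L_q^T and D = L_u L_q^T are tridiagonal. numpy, the test dimension and the random generators are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
u = np.exp(2j * np.pi * rng.random(n))           # nonzero generators (unimodular for safety)
q = np.exp(2j * np.pi * rng.random(n))
d, v, p = (rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(3))
A = np.tril(np.outer(u, v), -1) + np.triu(np.outer(p, q), 1) + np.diag(d)

# Subdiagonal entries: alpha_1 (resp. beta_1) is free; alpha_i = -u_{i+1}/u_i and
# beta_i = -q_{i+1}/q_i for 2 <= i <= n-1.
alpha = np.concatenate(([1.0], -u[2:] / u[1:-1]))
beta = np.concatenate(([1.0], -q[2:] / q[1:-1]))
Lu = np.eye(n) + np.diag(alpha, -1)
Lq = np.eye(n) + np.diag(beta, -1)

H = Lu @ A
T = H @ Lq.T
D = Lu @ Lq.T

def bandwidth_ok(M, lower, upper):
    """True if all entries outside the (lower, upper) band are numerically zero."""
    i, j = np.indices(M.shape)
    outside = (i - j > lower) | (j - i > upper)
    return np.all(np.abs(M[outside]) < 1e-10 * max(1.0, np.abs(M).max()))

assert bandwidth_ok(H, 1, n)    # H = L_u A is upper Hessenberg
assert bandwidth_ok(T, 1, 1)    # T = L_u A L_q^T is tridiagonal
assert bandwidth_ok(D, 1, 1)    # D = L_u L_q^T is tridiagonal
```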
Let us consider the sequence of the characteristic polynomials ψ_k(λ), 1 ≤ k ≤ n, of the leading principal submatrices A_k := A[1:k, 1:k] of A, that is,

$$\psi_k(\lambda) := \det(\lambda I_k - A_k), \qquad 1 \le k \le n.$$
From

$$\det(\lambda I - A) = \det(\lambda D - T),$$

by using (1) and (2) we obtain that these polynomials satisfy a three-term recurrence relation, namely,
$$\psi_{k+1}(\lambda) = c_k(\lambda)\,\psi_k(\lambda) - \rho_{k-1}(\lambda)\,\psi_{k-1}(\lambda), \qquad 0 \le k \le n-1, \qquad (3)$$

where
$$c_k(\lambda) = \alpha_k\beta_k(\lambda - d_k) + (\lambda - d_{k+1}) - \beta_k u_{k+1} v_k - \alpha_k p_k q_{k+1}, \qquad 0 \le k \le n-1,$$
$$\rho_{k-1}(\lambda) = \left[\alpha_k(\lambda - d_k) - u_{k+1} v_k\right]\left[\beta_k(\lambda - d_k) - p_k q_{k+1}\right], \qquad 1 \le k \le n-1,$$

with ψ_0(λ) := 1 and ψ_{−1}(λ) := 0. In [7] it is shown that the three-term relation (3) can be decoupled into two different recurrence relations which do not require any assumption on the invertibility of the elements u_i and q_i, 2 ≤ i ≤ n−1, and can therefore be evaluated more stably.
The evaluation is at the core of the divide-and-conquer eigenvalue method for real symmetric dpss matrices proposed in [7]. The properties of (3) are also exploited in [6] to devise a Sturm-like bisection method for the numerical computation of the eigenvalues of a real symmetric dpss matrix. In the rest of this paper we consider some specializations of the recurrence (3) for different choices of the parameters u_i, v_i, p_i, q_i, d_i. Under these specializations the polynomials ψ_k(λ) become closely related to Chebyshev orthogonal polynomials, and this connection makes it possible to find closed formulas for their zeros.
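As a sanity check, the recurrence (3) can be compared with the characteristic polynomials it reproduces. The sketch below is not part of the paper: it evaluates (3) at a sample point λ and compares the result with det(λI_k − A_k) for every leading principal submatrix. numpy, the random test data and the free choices α_1 = β_1 = 1 are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 7
u = np.exp(2j * np.pi * rng.random(n))           # nonzero generators
q = np.exp(2j * np.pi * rng.random(n))
d, v, p = (rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(3))
A = np.tril(np.outer(u, v), -1) + np.triu(np.outer(p, q), 1) + np.diag(d)

def sub(x, k):
    """1-based access: sub(d, k) is d_k."""
    return x[k - 1]

# alpha_0 = beta_0 = 0; alpha_1, beta_1 are free; alpha_k = -u_{k+1}/u_k, beta_k = -q_{k+1}/q_k.
alpha = {0: 0.0, 1: 1.0}
beta = {0: 0.0, 1: 1.0}
for k in range(2, n):
    alpha[k] = -sub(u, k + 1) / sub(u, k)
    beta[k] = -sub(q, k + 1) / sub(q, k)

lam = 0.3 + 0.7j                                 # arbitrary evaluation point
psi = {-1: 0.0, 0: 1.0}                          # psi_{-1} = 0, psi_0 = 1
for k in range(n):                               # k = 0, ..., n-1
    if k == 0:
        c, rho = lam - sub(d, 1), 0.0            # c_0 = lambda - d_1
    else:
        c = (alpha[k] * beta[k] * (lam - sub(d, k)) + (lam - sub(d, k + 1))
             - beta[k] * sub(u, k + 1) * sub(v, k) - alpha[k] * sub(p, k) * sub(q, k + 1))
        rho = ((alpha[k] * (lam - sub(d, k)) - sub(u, k + 1) * sub(v, k))
               * (beta[k] * (lam - sub(d, k)) - sub(p, k) * sub(q, k + 1)))
    psi[k + 1] = c * psi[k] - rho * psi[k - 1]

for k in range(1, n + 1):
    det_k = np.linalg.det(lam * np.eye(k) - A[:k, :k])
    assert abs(psi[k] - det_k) < 1e-8 * max(1.0, abs(det_k))
```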
3. A dpss matrix related to first-kind Chebyshev polynomials

The parameters u_j = q_j = 1, 2 ≤ j ≤ n, v_j = \bar{p}_j = i, where i stands for the imaginary unit, and d_j = 0, 1 ≤ j ≤ n, generate the Hermitian dpss matrix A = T_n ∈ C^{n×n}. For n = 5 the matrix is displayed below:
$$T_5 = \begin{bmatrix} 0 & -i & -i & -i & -i \\ i & 0 & -i & -i & -i \\ i & i & 0 & -i & -i \\ i & i & i & 0 & -i \\ i & i & i & i & 0 \end{bmatrix}.$$
For α_1 = β_1 = 1 it is found that the corresponding sequence of polynomials {ψ_k(λ)} satisfies
$$\psi_{k+1}(\lambda) = 2\lambda\,\psi_k(\lambda) - (\lambda^2+1)\,\psi_{k-1}(\lambda), \qquad 1 \le k \le n-1,$$

with the initializations ψ_0(λ) := 1 and ψ_1(λ) := λ. These polynomials are related to the Chebyshev polynomials of the first kind t_k(λ), k ≥ 0, defined by
$$t_0(\lambda) := 1, \quad t_1(\lambda) := \lambda, \quad t_{k+1}(\lambda) := 2\lambda\, t_k(\lambda) - t_{k-1}(\lambda), \quad k \ge 1.$$

By setting λ = cos θ, 0 ≤ θ ≤ π, we obtain the closed form of the Chebyshev polynomials

$$t_k(\cos\theta) = \cos k\theta, \qquad k = 0, 1, \ldots,$$

which gives the well-known expression for the roots ξ_j^{(k)}, 1 ≤ j ≤ k, of t_k(λ):

$$\xi_j^{(k)} = \cos\frac{(2j-1)\pi}{2k}, \qquad 1 \le j \le k,\; k \ge 1. \qquad (4)$$
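A one-line numerical check of (4), not from the paper (numpy and the chosen degree are assumptions): the zeros of t_k computed by numpy coincide with the cosine formula.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

k = 9
roots = np.sort(C.Chebyshev.basis(k).roots())                       # zeros of t_k
formula = np.sort(np.cos((2 * np.arange(1, k + 1) - 1) * np.pi / (2 * k)))
assert np.allclose(roots, formula)
```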
For k ≥ 0 let us denote by f_k(λ) the function

$$f_k(\lambda) := \left(\sqrt{1+\lambda^2}\right)^{k} t_k\!\left(\frac{\lambda}{\sqrt{1+\lambda^2}}\right), \qquad k \ge 0.$$

It follows that f_0(λ) = 1, f_1(λ) = λ and, moreover,

$$f_{k+1}(\lambda) = 2\lambda\, f_k(\lambda) - (1+\lambda^2)\, f_{k-1}(\lambda), \qquad k \ge 1.$$

Hence it is found that f_k(λ) is a polynomial of degree k which coincides with ψ_k(λ). In this way we arrive at the following result for the eigenvalues of T_n, n ≥ 1.
Theorem 1. The eigenvalues λ_j^{(n)}, 1 ≤ j ≤ n, of T_n, n ≥ 1, satisfy

$$\lambda_j^{(n)} = \cot\frac{(2j-1)\pi}{2n} = \frac{\cos\frac{(2j-1)\pi}{2n}}{\sin\frac{(2j-1)\pi}{2n}}, \qquad 1 \le j \le n.$$
For the purpose of testing numerically the performance of fast structured QR algorithms for dpss matrices, it is worth noting that the eigenvalue problem for T_n is perfectly well conditioned and, therefore, it should be solved accurately even for large values of n. In addition, Theorem 1 provides a simple matrix proof of the well-known fact that the numbers λ_j^{(n)}, 1 ≤ j ≤ n, are algebraic. By reversing the coefficients of ψ_n(λ) we obtain that the reciprocals 1/λ_j^{(n)}, when defined, are also algebraic.
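A quick numerical illustration of Theorem 1, not from the paper (numpy and the chosen dimension are assumptions): build T_n and compare its spectrum with the closed cotangent formula.

```python
import numpy as np

n = 8
Tn = 1j * np.tril(np.ones((n, n)), -1) - 1j * np.triu(np.ones((n, n)), 1)
computed = np.linalg.eigvalsh(Tn)                                    # T_n is Hermitian
j = np.arange(1, n + 1)
closed_form = np.sort(1.0 / np.tan((2 * j - 1) * np.pi / (2 * n)))   # cot((2j-1)pi/(2n))
assert np.allclose(np.sort(computed), closed_form)
```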
4. A dpss matrix related to second-kind Chebyshev polynomials

A dpss matrix with ill-conditioned eigenvalues can be obtained by means of the following set of parameters: u_j = 1, q_j = 2(n+1−j)i/(n+1), 2 ≤ j ≤ n; p_j = 1, v_j = −2ij/(n+1), 1 ≤ j ≤ n−1; d_j = i(n−2j+1)/(n+1), 1 ≤ j ≤ n. The resulting dpss matrix A = U_n ∈ C^{n×n} is shown below for n = 5:
$$U_5 = \frac{i}{6}\begin{bmatrix} 4 & 8 & 6 & 4 & 2 \\ -2 & 2 & 6 & 4 & 2 \\ -2 & -4 & 0 & 4 & 2 \\ -2 & -4 & -6 & -2 & 2 \\ -2 & -4 & -6 & -8 & -4 \end{bmatrix}.$$
This matrix is very attractive for testing purposes. On the one hand, it is observed that U_n satisfies

$$U_n^{T} = J_n U_n^{H} J_n,$$

and, equivalently,

$$\overline{U}_n = J_n U_n J_n,$$

where J_n denotes the permutation (reversion) matrix of order n with unit antidiagonal entries. Thus the computation of the eigenvalues of U_n can be a good test problem for fast adaptations of QR-like eigenvalue routines using nonorthogonal transformations, such as the HR and XHR methods [18]. On the other hand, it is easily shown that

$$\operatorname{rank}\left(U_n - U_n^{H}\right) \le 2,$$

and, therefore, the rank structure of U_n is also maintained under the customary QR eigenvalue algorithm. This makes the matrix a good test case for fast adaptations of the QR method, too. For α_1 = −1 and β_1 = −(n−1)/n we obtain that
$$\frac{n-k}{n-k+1}\left(\lambda - i\,\frac{n-2k+1}{n+1}\right) + \lambda - i\,\frac{n-2k-1}{n+1} - \frac{n-k}{n-k+1}\,\frac{2k}{n+1}\, i + \frac{2(n-k)}{n+1}\, i = (\lambda - i)\,\frac{n-k}{n-k+1} + (\lambda + i) = c_k(\lambda), \qquad 1 \le k \le n-1,$$

and

$$\left(\lambda - i\,\frac{n-2k+1}{n+1} - \frac{2k}{n+1}\, i\right)\left(\frac{n-k}{n-k+1}\left(\lambda - i\,\frac{n-2k+1}{n+1}\right) + \frac{2(n-k)}{n+1}\, i\right) = (1+\lambda^2)\,\frac{n-k}{n-k+1} = \rho_{k-1}(\lambda), \qquad 1 \le k \le n-1.$$

The sequence {ψ_k(λ)} of the characteristic polynomials of the leading principal submatrices of U_n satisfies the recurrence
$$\psi_{k+1}(\lambda) = \left(\frac{n-k}{n-k+1}(\lambda - i) + \lambda + i\right)\psi_k(\lambda) - \frac{n-k}{n-k+1}\,(1+\lambda^2)\,\psi_{k-1}(\lambda), \qquad (5)$$

with 1 ≤ k ≤ n−1, ψ_0(λ) := 1 and

$$\psi_1(\lambda) := \lambda - i\,\frac{n-1}{n+1}.$$
These polynomials are related to the second-kind Chebyshev polynomials u_k(λ), k ≥ 0, defined by

$$u_0(\lambda) := 1, \quad u_1(\lambda) := 2\lambda, \quad u_{k+1}(\lambda) := 2\lambda\, u_k(\lambda) - u_{k-1}(\lambda), \quad k \ge 1.$$

To make this connection explicit we consider the auxiliary sequences of polynomials
$$f_k(\lambda) := \left(\sqrt{1+\lambda^2}\right)^{k} t_k\!\left(\frac{\lambda}{\sqrt{1+\lambda^2}}\right), \qquad k \ge 0,$$

and

$$g_k(\lambda) := \left(\sqrt{1+\lambda^2}\right)^{k} u_k\!\left(\frac{\lambda}{\sqrt{1+\lambda^2}}\right), \qquad k \ge 0,$$

where t_k(λ) and u_k(λ) are the Chebyshev polynomials of degree k of the first and the second kind, respectively. The properties of the zeros of certain sequences of polynomials generated from {f_k(λ)} and {g_k(λ)} are thoroughly investigated in [2,3]. One such sequence turns out to be related to the characteristic polynomials of the dpss matrix U_n. Observe that
$$f_{k+1}(\lambda) = 2\lambda\, f_k(\lambda) - (1+\lambda^2)\, f_{k-1}(\lambda), \qquad k \ge 1, \qquad (6)$$

with f_0(λ) = 1, f_1(λ) = λ and, similarly,

$$g_{k+1}(\lambda) = 2\lambda\, g_k(\lambda) - (1+\lambda^2)\, g_{k-1}(\lambda), \qquad k \ge 1, \qquad (7)$$
with g_0(λ) = 1, g_1(λ) = 2λ. From these two recurrences it also follows that, for k ≥ 1,

$$g_{k+1}(\lambda) - f_{k+1}(\lambda) = 2\lambda\,\bigl(g_k(\lambda) - f_k(\lambda)\bigr) - (1+\lambda^2)\,\bigl(g_{k-1}(\lambda) - f_{k-1}(\lambda)\bigr),$$

with g_0(λ) − f_0(λ) = 0 and g_1(λ) − f_1(λ) = λ. Hence, by induction we find that

$$g_{k+1}(\lambda) - f_{k+1}(\lambda) = \lambda\, g_k(\lambda), \qquad k \ge 0. \qquad (8)$$
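Identity (8) is easy to check numerically. The short sketch below, not from the paper (numpy and the sample points are assumptions), runs the recurrences (6) and (7) side by side and verifies g_{k+1}(λ) − f_{k+1}(λ) = λ g_k(λ).

```python
import numpy as np

lam = np.array([-2.0, -0.3, 0.5, 1.7])                      # arbitrary sample points
f_prev, f = np.ones_like(lam), lam.copy()                   # f_0 = 1, f_1 = lambda
g_prev, g = np.ones_like(lam), 2 * lam                      # g_0 = 1, g_1 = 2*lambda
for k in range(1, 20):
    f_prev, f = f, 2 * lam * f - (1 + lam**2) * f_prev      # recurrence (6)
    g_prev, g = g, 2 * lam * g - (1 + lam**2) * g_prev      # recurrence (7)
    assert np.allclose(g - f, lam * g_prev)                 # identity (8)
```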
Let us now introduce the sequence of polynomials defined by

$$(n+1)\, q_k(\lambda) := g_k(\lambda) + (n-k)\, f_k(\lambda) - i\,(n-k)\, g_{k-1}(\lambda), \qquad k \ge 0$$

(with the convention g_{−1}(λ) := 0).
Observe that

$$q_0(\lambda) = 1 = \psi_0(\lambda), \qquad q_1(\lambda) = \lambda - i\,\frac{n-1}{n+1} = \psi_1(\lambda).$$
Indeed, from (6)–(8) it can be shown that {q_k(λ)} satisfies the recurrence (5) and, therefore, we can conclude that

$$\psi_k(\lambda) = q_k(\lambda), \qquad 0 \le k \le n.$$
For λ = cot θ = cos θ / sin θ, 0 < θ < π, by using the well-known trigonometric forms of the Chebyshev polynomials we obtain that, for 0 ≤ k ≤ n,

$$q_k(\lambda) = q_k(\cot\theta) = \frac{(\sin\theta)^{-k}}{n+1}\left(\frac{\sin(k+1)\theta}{\sin\theta} + (n-k)\, e^{-ik\theta}\right).$$
This implies that

$$q_n(\lambda) = q_n(\cot\theta) = \frac{1}{n+1}\,\frac{\sin(n+1)\theta}{(\sin\theta)^{n+1}}, \qquad n \ge 0.$$
These polynomials have also been considered in [11]. Their closed trigonometric form makes it possible to determine the eigenvalues of U_n explicitly.
Theorem 2. The eigenvalues λ_j^{(n)}, 1 ≤ j ≤ n, of U_n, n ≥ 1, satisfy

$$\lambda_j^{(n)} = \cot\frac{j\pi}{n+1} = \frac{\cos\frac{j\pi}{n+1}}{\sin\frac{j\pi}{n+1}}, \qquad 1 \le j \le n.$$
A closed form for the eigenvectors of U_n can also be given. Let

$$q_n(\lambda) = \frac{\lambda^{n+1}-1}{\lambda-1} = 1 + \lambda + \cdots + \lambda^n = \prod_{j=1}^{n}\left(\lambda - \omega_{n+1}^j\right),$$

where ω_{n+1} = cos(2π/(n+1)) + i sin(2π/(n+1)) is a primitive (n+1)st root of unity. If we set

$$q_n^{(k)}(\lambda) = \frac{q_n(\lambda)}{\lambda - \omega_{n+1}^k} = \prod_{j=1,\, j\ne k}^{n}\left(\lambda - \omega_{n+1}^j\right), \qquad 1 \le k \le n,$$

and q_n^{(k)} denotes the coefficient vector of order n associated with q_n^{(k)}(λ), then it holds that

$$q_n^{(k)\,T}\, U_n = \lambda_k^{(n)}\, q_n^{(k)\,T}, \qquad 1 \le k \le n.$$
Since it is well known that the coefficients of the polynomials q_n^{(k)}(λ) grow exponentially as n increases, we remark that U_n can have seriously ill-conditioned eigenvalues. For n = 32 and n = 64 the MatLab¹ routine condeig applied to U_n reports the maximum values c_max = 9.27e+05 and c_max = 1.21e+16, respectively.
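The statements of this section can be explored numerically. The sketch below is not from the paper: it assembles U_n with the parameter choice stated at the beginning of the section, compares the spectrum with Theorem 2, checks the structural properties quoted above, and estimates eigenvalue condition numbers in the spirit of the MatLab routine condeig. numpy and the chosen dimensions are assumptions of the sketch.

```python
import numpy as np

def build_U(n):
    idx = np.arange(1, n + 1)
    d = 1j * (n - 2 * idx + 1) / (n + 1)        # d_j = i (n - 2j + 1)/(n + 1)
    v = -2j * idx / (n + 1)                     # v_j = -2 i j/(n + 1)
    q = 2j * (n + 1 - idx) / (n + 1)            # q_j = 2 (n + 1 - j) i/(n + 1)
    ones = np.ones(n)
    return (np.tril(np.outer(ones, v), -1)      # u_i v_j strictly below the diagonal
            + np.triu(np.outer(ones, q), 1)     # p_i q_j strictly above the diagonal
            + np.diag(d))

n = 16
Un = build_U(n)

# Theorem 2: the spectrum is cot(j*pi/(n+1)), j = 1, ..., n.
target = np.sort(1.0 / np.tan(np.arange(1, n + 1) * np.pi / (n + 1)))
eigvals, V = np.linalg.eig(Un)
assert np.allclose(np.sort(eigvals.real), target) and np.allclose(eigvals.imag, 0, atol=1e-8)

# Structural properties: conj(U_n) = J_n U_n J_n and rank(U_n - U_n^H) <= 2.
J = np.fliplr(np.eye(n))
assert np.allclose(Un.conj(), J @ Un @ J)
assert np.linalg.matrix_rank(Un - Un.conj().T) <= 2

# condeig-style condition numbers: ||w_k|| with w_k^H x_k = 1 and ||x_k|| = 1.
for m in (16, 32):
    _, Vm = np.linalg.eig(build_U(m))
    cond = np.linalg.norm(np.linalg.inv(Vm), axis=1)
    print(m, cond.max())
```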
5. Conclusion

Two classes of dpss matrices whose eigenvalues can be expressed in closed form by means of trigonometric formulas have been exhibited. The characteristic polynomials of these matrices are shown to be related to Chebyshev orthogonal polynomials. The matrices can be useful for testing fast adaptations of customary eigenvalue algorithms for rank-structured matrices as well as for an elementary treatment of some issues in number theory.
¹ MatLab is a registered trademark of The Mathworks, Inc.
Acknowledgements

The author is grateful to an anonymous referee for pointing out Refs. [2,3].

References

[1] D. Cvijovic, J. Klinowski, Algebraic trigonometric numbers, Nieuw Arch. Wisk. (4) 16 (1–2) (1998) 11–14.
[2] D. Cvijovic, J. Klinowski, Polynomials with easily deducible zeros, Atti Sem. Mat. Fis. Univ. Modena 48 (2000) 317–333.
[3] D. Cvijovic, J. Klinowski, An application for the Chebyshev polynomials, Mat. Vesnik 50 (1998) 105–110.
[4] Y. Eidelman, I. Gohberg, Fast inversion algorithms for diagonal plus semiseparable matrices, Integral Equations Operator Theory 27 (2) (1997) 165–183.
[5] Y. Eidelman, I. Gohberg, On a new class of structured matrices, Integral Equations Operator Theory 34 (3) (1999) 293–324.
[6] Y. Eidelman, I. Gohberg, V. Olshevsky, Eigenstructure of order-one-quasiseparable matrices. Three-term and two-term recurrence relations, Linear Algebra Appl. 405 (2005) 1–40.
[7] D. Fasino, L. Gemignani, Direct and inverse eigenvalue problems for diagonal-plus-semiseparable matrices, Numer. Algorithms 34 (2–4) (2003) 313–324. International Conference on Numerical Algorithms, vol. II, Marrakesh, 2001.
[8] F.P. Gantmacher, M.G. Krein, Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems, revised ed., AMS Chelsea Publishing, Providence, RI, 2002. Translation based on the 1941 Russian original, edited and with a preface by Alex Eremenko.
[9] L. Gemignani, Neville elimination for rank-structured matrices, Linear Algebra Appl. 428 (4) (2008) 978–991.
[10] I. Gohberg, T. Kailath, I. Koltracht, Linear complexity algorithms for semiseparable matrices, Integral Equations Operator Theory 8 (6) (1985) 780–804.
[11] R.J. Gregorac, On polynomials with simple trigonometric formulas, Int. J. Math. Sci. (61–64) (2004) 3419–3422.
[12] R.T. Gregory, D.L. Karney, A Collection of Matrices for Testing Computational Algorithms, Robert E. Krieger Publishing Co., Huntington, NY, 1978. Corrected reprint of the 1969 edition.
[13] N.J. Higham, The test matrix toolbox for MatLab, Numerical Analysis Report 276, Manchester Centre for Computational Mathematics, 1995.
[14] C.K. Koh, J. Jain, H. Li, V. Balakrishnan, O(n) algorithms for banded plus semiseparable matrices. Numerical methods for structured matrices and applications, Oper. Theory Adv. Appl. 199 (2010) 347–358.
[15] I. Koltracht, Linear complexity algorithm for semiseparable matrices, Integral Equations Operator Theory 29 (3) (1997) 313–319.
[16] C.K. Li, D. Lutzer, The arithmetic of algebraic numbers: an elementary approach, College Math. J. 35 (4) (2004) 307–309.
[17] R. Vandebril, M. Van Barel, N. Mastronardi, Matrix Computations and Semiseparable Matrices, vol. II: Eigenvalue and Singular Value Methods, Johns Hopkins University Press, Baltimore, MD, 2008.
[18] D.S. Watkins, The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods, SIAM, Philadelphia, PA, 2007.
[19] J.R. Westlake, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations, Robert E. Krieger Publishing Co., Huntington, NY, 1975. Reprint of the 1968 original.