
Computers Math. Applic. Vol. 22, No. 7, pp. 65-67, 1991
Printed in Great Britain. All rights reserved
0097-4943/91 $3.00 + 0.00
Copyright © 1991 Pergamon Press plc

A SIMPLE METHOD TO GENERATE TEST MATRICES OF KNOWN EIGENVALUE OR SINGULAR VALUE SPECTRA

K. J. BUNCH AND R. W. GROW
Microwave Device and Physical Electronics Laboratory, Department of Electrical Engineering, University of Utah, Salt Lake City, Utah 84112

(Received October 1990)

Abstract — A method is presented to generate matrices with selectable eigenvalue or singular value spectra. This method is simple and based on well-known properties of orthogonal polynomials. The main advantage of this method is the ability to selectively vary the eigenvalue and singular value spectra to generate matrices that test the range of near-to-complete degeneracy of these values.

1. INTRODUCTION

Typically, in testing matrix routines for eigenvalue/singular value decomposition or equation solving, one turns to a collection of test matrices such as that compiled by Gregory and Karney [1]. Unfortunately, each test case typically involves the tedious entry of data. This paper presents a method to automatically generate matrices with known eigenvalue or singular value spectra. The method is based on well-known properties of orthogonal polynomials.

Consider the eigenvalue decomposition of an n × n real symmetric matrix M [2-4] given by

    M = V Λ V^t,    (1)

where Λ is an n × n diagonal matrix of real eigenvalues, and V is an n × n matrix of normalized eigenvectors of the matrix M. The matrix V is orthogonal [3, Chapter 8], since

    V^t V = I,    (2)

where t denotes the transpose of the matrix, and I denotes the identity matrix. A similar decomposition exists for an m × n matrix [3, Chapter 9]:

    M = U Σ V^t,    (3)

where U and V are m × m and n × n orthogonal matrices, respectively, and Σ is an m × n diagonal matrix of singular values. Equations (1) and (3) show that if the orthogonal matrices U and V can be generated, then test matrices with known eigenvalue or singular value spectra are simple to create. Note that the equations

    M = Σ_{i=1}^{n} λ_i v_i v_i^t,    (4)

    M = Σ_{i=1}^{n} σ_i u_i v_i^t,    (5)

allow the matrix M to be created without having to devote extra storage to the orthogonal matrices U and V; u_i and v_i are the column vectors of the orthogonal matrices.
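To illustrate Eq. (4), the following NumPy sketch accumulates M one rank-one term at a time. The names `lams`, `Q`, and `M` are ours, and the orthonormal columns here come from a QR factorization rather than from the quadrature construction of Section 2:

```python
import numpy as np

# Sketch of Eq. (4): M = sum_i lambda_i v_i v_i^t, accumulated one
# rank-one outer product at a time, so the full matrix of eigenvectors
# never needs dedicated storage.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthonormal columns v_i
lams = np.array([10.0, 1.0, 0.1, 0.01])           # selected eigenvalues
M = np.zeros((4, 4))
for lam, v in zip(lams, Q.T):                     # v runs over the columns of Q
    M += lam * np.outer(v, v)                     # rank-one update

print(np.sort(np.linalg.eigvalsh(M)))             # recovers the chosen spectrum
```

Equation (5) works the same way, with two orthonormal families u_i, v_i and singular values σ_i in place of the eigenpairs.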

2. CREATING ACCURATE ORTHOGONAL MATRICES

Consider Gaussian quadrature integration [5-7]:

    ∫_{-1}^{1} f(x) dx ≈ Σ_{i=1}^{m} W_i f(x_i).    (6)

W_i and x_i are the weights and abscissa locations for the m-point Gaussian integration [5]. This integration formula for m points is exact (theoretically) for all polynomials of degree 2m − 1 or less [8]. Thus, the integral given by

    ∫_{-1}^{1} √((2n+1)/2) L_n(x) √((2m+1)/2) L_m(x) dx = 1 if m = n, 0 if m ≠ n,    (7)

(with L_n(x) the Legendre polynomial) evaluates exactly theoretically, or close to machine accuracy, using Gaussian integration. An orthogonal matrix U can be created by the numerical integration of Eq. (7):

    U = [ √W_1 f_1(x_1)   ...   √W_1 f_n(x_1) ]
        [       ⋮                      ⋮      ]
        [ √W_m f_1(x_m)   ...   √W_m f_n(x_m) ],    (8)

where

    f_n(x) = √((2n+1)/2) L_n(x),    (9)

and W_1, ..., W_m are the weighting values for the Gaussian quadrature [5].
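A minimal NumPy sketch of Eqs. (8) and (9), assuming NumPy's Gauss-Legendre helpers `leggauss` and `Legendre.basis`; the function name `orthogonal_matrix` is ours:

```python
import numpy as np

def orthogonal_matrix(m):
    """Build the m x m matrix of Eq. (8): U[i, j] = sqrt(W_i) * f_j(x_i)."""
    x, w = np.polynomial.legendre.leggauss(m)      # abscissas x_i, weights W_i
    U = np.empty((m, m))
    for j in range(m):
        # Eq. (9): f_j(x) = sqrt((2j+1)/2) * L_j(x)
        L_j = np.polynomial.legendre.Legendre.basis(j)
        U[:, j] = np.sqrt(w) * np.sqrt((2 * j + 1) / 2) * L_j(x)
    return U

U = orthogonal_matrix(4)
print(np.abs(U.T @ U - np.eye(4)).max())  # orthogonality error at roundoff level
```

Because the column dot products are exactly the quadratures of Eq. (7), the departure from orthogonality is limited only by rounding error.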

Taking the dot product of any two column vectors of U is equivalent to numerically integrating the Legendre polynomials, as in Eq. (7). The high accuracy of m-point Gaussian quadrature with polynomials of degree ≤ 2m − 1 implies that U is orthogonal to a high degree of accuracy. The authors have found that orthogonal matrices with errors < 1 × 10⁻¹² can be created using this method and double precision arithmetic.

Consider a 4 × 4 matrix example. The abscissa and weight values for a 4-point Gaussian integration are given by [5]:

    ±x_1 = 0.3399810436,    W_1 = 0.6521451549,
    ±x_2 = 0.8611363116,    W_2 = 0.3478548451.

The Legendre polynomials and the corresponding normalized functions of Eq. (9) are given by

    P_0 = 1,              f_0 = 1/√2,
    P_1 = x,              f_1 = √(3/2) x,
    P_2 = (3x² − 1)/2,    f_2 = √(5/2) (3x² − 1)/2,
    P_3 = (5x³ − 3x)/2,   f_3 = √(7/2) (5x³ − 3x)/2.
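As a check on the tabulated functions, one can evaluate the entries √W_i f_j(x_i) directly; this is a sketch, and the row ordering below (±x_1 first, then ±x_2) follows the matrix printed in Eq. (10):

```python
import numpy as np

# Tabulated 4-point Gauss-Legendre abscissas and weights from the text.
x = np.array([0.3399810436, -0.3399810436, 0.8611363116, -0.8611363116])
w = np.array([0.6521451549, 0.6521451549, 0.3478548451, 0.3478548451])

# Normalized Legendre functions f_0 .. f_3 listed above.
f = [lambda t: np.full_like(t, 1 / np.sqrt(2)),
     lambda t: np.sqrt(3 / 2) * t,
     lambda t: np.sqrt(5 / 2) * (3 * t**2 - 1) / 2,
     lambda t: np.sqrt(7 / 2) * (5 * t**3 - 3 * t) / 2]

# Eq. (8): column j holds sqrt(W_i) * f_j(x_i).
U = np.column_stack([np.sqrt(w) * fj(x) for fj in f])
print(U)
```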


The resulting matrix (8) is given by

    U = [  0.571027651    0.336257878   −0.417046068   −0.622037490 ]
        [  0.571027651   −0.336257878   −0.417046068    0.622037490 ]
        [  0.417046068    0.622037490    0.571027651    0.336257878 ]
        [  0.417046068   −0.622037490    0.571027651   −0.336257878 ].    (10)

It can be verified that this matrix is orthogonal to within an error of 6 × 10⁻⁹. A matrix with the selected eigenvalues a, b, c, d is then created by forming the product

    M = U diag(a, b, c, d) U^t.    (11)
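Putting the pieces together, a sketch of the full recipe of Eq. (11); the variable names and the NumPy quadrature helpers are our assumptions, not the paper's notation:

```python
import numpy as np

# Build the 4 x 4 orthogonal U of Eq. (8) from the quadrature rule,
# then form Eq. (11) with the chosen eigenvalues a, b, c, d = 1, 2, 3, 4.
x, w = np.polynomial.legendre.leggauss(4)
U = np.column_stack([np.sqrt(w) * np.sqrt((2 * j + 1) / 2)
                     * np.polynomial.legendre.Legendre.basis(j)(x)
                     for j in range(4)])
M = U @ np.diag([1.0, 2.0, 3.0, 4.0]) @ U.T

print(np.sort(np.linalg.eigvalsh(M)))  # recovers 1, 2, 3, 4 up to roundoff
```

Clustering the chosen eigenvalues (for example 1, 1 + 1e-12, 2, 3) produces the near-degenerate test cases mentioned in the abstract.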

The right eigenvectors are simply the column vectors of the matrix U. The extension to creating a matrix with known singular values is obvious. A complete spectrum of test matrices can be automatically generated by varying the integration order (corresponding to the matrix size) and the eigenvalues a, b, c, etc. Although this paper has shown how to create orthogonal matrices using Legendre polynomials and Gaussian integration, other schemes are possible (for example, Chebyshev polynomials [8] and Newton-Cotes formulas [5]).

3. CONCLUSIONS

A simple method to generate a complete range of test matrices with selected eigenvalue or singular value spectra has been presented. This method is based on sampling orthogonal functions over their interval of orthogonality. It can be completely automated to generate a wide range of test matrices.

REFERENCES

1. R.T. Gregory and D.L. Karney, A Collection of Matrices for Testing Computational Algorithms, Wiley-Interscience, New York, (1969).
2. L.W. Johnson and R.D. Riess, Numerical Analysis, Chapter 3, Addison-Wesley, Reading, Massachusetts, (1977).
3. B. Noble and J.W. Daniel, Applied Linear Algebra, Prentice-Hall, Englewood Cliffs, New Jersey, (1977).
4. A. Ralston and P. Rabinowitz, A First Course in Numerical Analysis, Chapters 4, 10, McGraw-Hill, New York, (1978).
5. M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, 9th Edition, Dover Publications, New York, pp. 887, 917-924, (1972).
6. G. Dahlquist and Å. Björck, Numerical Methods, Chapter 2, Prentice-Hall, Englewood Cliffs, New Jersey, (1974).
7. C.E. Pearson, Numerical Methods in Engineering and Science, Chapter 7, Van Nostrand Reinhold, New York, (1988).
8. L.C. Andrews, Special Functions for Engineers and Applied Mathematicians, Chapter 4, Macmillan, New York, (1985).