
SCIENTIFIC COMMUNICATIONS

ON THE ORDER OF ELIMINATION OF UNKNOWNS*

V. V. VOEVODIN

MOSCOW

(Received 22 January 1966)

SUPPOSE that we are solving a system of linear algebraic equations. We shall assume that one of the variants of the method of elimination is used which is based on the use of orthogonal transformations [1]. We shall denote by $A_0$ the augmented matrix of the system. The real numerical process leads to the construction of matrices $A_1, A_2, \ldots, A_N$, interconnected by the following recurrence relations:

$$A_1 = R_1 A_0 + F_1, \quad A_2 = R_2 A_1 + F_2, \quad \ldots, \quad A_N = R_N A_{N-1} + F_N. \tag{1}$$

Here $R_k$ is an exact orthogonal matrix calculated from the matrix $A_{k-1}$, $F_k$ is the matrix of the errors introduced in the $k$-th step of the process due to the inexact calculation of $R_k$ and the inexact multiplying out of the matrices, and $A_N$ is a matrix with zero subdiagonal elements. To evaluate the error of the solution of the original problem it is necessary to evaluate some norm (e.g. the Euclidean) of the matrix

$$\Delta A_N = A_N - R_N R_{N-1} \ldots R_1 A_0.$$

In [2] a practically unimprovable majorizing evaluation was obtained for the cyclic order of elimination of the unknowns. In the present paper evaluations will be given which correspond to another order of elimination. These evaluations are better than those of [2].

* Zh. vychisl. Mat. mat. Fiz. 6, 4, 758-760, 1966.


We assume that the subdiagonal elements of the matrix $A_0$ are eliminated by means of the rotation matrices $T_{ij}$. Generally, in multiplication by $T_{ij}$ the element in the position $(i, j)$ is eliminated. We shall waive this requirement and consider that any element from the $i$-th or $j$-th row can be eliminated. We shall divide the elimination of the elements into cycles, which we define as follows:

(1) The cycle consists of a finite number of successive multiplications by rotation matrices;

(2) At each multiplication by a rotation matrix all previously eliminated elements are retained and, in addition, one more element is eliminated, which is in the column with the least possible number;

(3) No row of the matrix is transformed more than once in the course of a cycle;

(4) In the course of each cycle the maximum possible number of elements is eliminated.

We now explain the above with the example of a matrix of order 6. We multiply successively by rotation matrices, and here the eliminated elements are (cycles are separated by a semicolon)

(2,1), (4,1), (6,1); (3,1), (6,2); (5,1), (4,2); (3,2), (4,3); (5,2), (6,3); (5,3), (6,4); (5,4); (6,5).
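An ordering of this kind can also be generated mechanically. The sketch below is not from the paper: it is a minimal greedy construction that respects rules (1)-(3) and uses a greedy stand-in for rule (4), so the order it prints need not coincide with the example above. It records, for each row, how many subdiagonal elements have already been eliminated (with this greedy choice they always occupy the leading columns) and pairs rows with identical zero patterns, which is exactly the condition under which a rotation destroys no previously created zero. The routine name cycle_ordering is, of course, illustrative.

```python
# A sketch, not from the paper: greedy generation of a cycle ordering
# satisfying rules (1)-(3); rule (4) is only approximated greedily.
# Positions are 1-based (row, column), as in the text.

def cycle_ordering(n):
    """Return a list of cycles; each entry (i, p, j) means the rotation of
    rows i and p that eliminates the element in position (i, j)."""
    # k[r] = number of subdiagonal elements of row r already eliminated;
    # with the greedy choice below they always occupy columns 1..k[r].
    k = {r: 0 for r in range(1, n + 1)}
    remaining = n * (n - 1) // 2            # subdiagonal elements still to go
    cycles = []
    while remaining:
        used, cycle = set(), []
        while True:
            best = None
            for i in range(2, n + 1):       # candidate target rows
                if i in used or k[i] == i - 1:
                    continue                # row busy in this cycle, or done
                j = k[i] + 1                # least uneliminated column of row i
                # A pivot row p preserves all existing zeros iff its zero
                # pattern coincides with that of row i: p > k[i], k[p] == k[i].
                pivots = [p for p in range(1, n + 1)
                          if p != i and p not in used
                          and p > k[i] and k[p] == k[i]]
                if pivots and (best is None or j < best[2]):
                    best = (i, pivots[0], j)    # rule (2): least column first
            if best is None:
                break                       # no admissible rotation: cycle ends
            i, p, j = best
            cycle.append(best)
            used.update((i, p))             # rule (3): each row at most once
            k[i] += 1
            remaining -= 1
        assert cycle, "no admissible rotation found"
        cycles.append(cycle)
    return cycles

if __name__ == "__main__":
    cycles = cycle_ordering(6)
    for c in cycles:
        print("; ".join(f"({i},{j})" for i, p, j in c))
    print(len(cycles), "cycles, bound 2(n-1) =", 2 * (6 - 1))
```

For $n = 6$ this greedy variant also finishes in 8 cycles, within the bound $2(n - 1) = 10$ established in the next paragraph.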

By mathematical induction we can prove that the number of cycles necessary to eliminate all the subdiagonal elements of a matrix which has $n$ rows does not exceed $2(n - 1)$. It is interesting to note that some rotation matrices can be used several times and some not even once.

A basic operation of this method is the pre-multiplication of a second-order vector by the matrix $T_{ij}$. The result of the operation will be

$$b = T_{ij} a + f, \tag{2}$$

where $f$ is the error vector, similar to the error matrix $F_k$ of (1). We shall not dwell in detail on the evaluation of $f$, since it depends to a great extent both on the method of calculating the matrix $T_{ij}$ and on the method of performing the arithmetic operations on the calculating machine. We note only that for floating point calculations

$$\|f\|_E \le c\,2^{-t}\,\|a\|_E, \tag{3}$$

and for fixed point calculations

$$\|f\|_E \le d\,2^{-t}, \tag{4}$$

if $\|a\|_E \le 1$. Here $t$ is the order of the machine, and $c$ and $d$ are constants which do not depend on $t$ or $a$. By the data of [2] we have $c \le 6$ when all operations are performed with $t$-order accuracy, and $d \le 2.5$ when using a calculation of scalar products with doubled accuracy.
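A rough numerical check of (2) and (3) can be made by carrying out the basic operation in single precision and comparing it with the exact rotation determined by the same data. The sketch below is not from the paper; it takes IEEE single precision as a stand-in for a machine of order $t = 24$ and uses the value $c = 6$ quoted above.

```python
# A sketch, not from the paper: empirical check of estimate (3) for the basic
# operation (2), with IEEE single precision standing in for t = 24.
import numpy as np

rng = np.random.default_rng(0)
t, c = 24, 6.0
worst = 0.0
for _ in range(10_000):
    a = rng.standard_normal(2).astype(np.float32)        # the 2-vector of (2)
    # Single-precision rotation annihilating the second component of a.
    r = np.hypot(a[0], a[1])
    cs, sn = a[0] / r, a[1] / r
    b = np.array([cs * a[0] + sn * a[1],
                  -sn * a[0] + cs * a[1]], dtype=np.float32)
    # Reference: the exact rotation determined by a, evaluated in double
    # precision (its own rounding error, of order 2**-53, is negligible here).
    a64 = a.astype(np.float64)
    r64 = np.hypot(a64[0], a64[1])
    cs64, sn64 = a64[0] / r64, a64[1] / r64
    b_exact = np.array([cs64 * a64[0] + sn64 * a64[1],
                        -sn64 * a64[0] + cs64 * a64[1]])
    f = b.astype(np.float64) - b_exact                    # error vector f of (2)
    worst = max(worst, np.linalg.norm(f) / np.linalg.norm(a64))
print(f"max ||f||/||a|| = {worst:.2e},  c*2**-t = {c * 2.0 ** -t:.2e}")
```

On such a run the observed ratio comes out well below $c\,2^{-t}$, in line with the remark that $c \le 6$ is a majorizing constant.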

Let $R_k$ be the transformation matrix corresponding to the $k$-th cycle and $F_k$ the corresponding error matrix; then from (1) we obtain

$$\|\Delta A_N\|_E \le \|F_1\|_E + \|F_2\|_E + \ldots + \|F_{2(n-1)}\|_E. \tag{5}$$

As follows from (2) and (3),

$$\|b\|_E \le (1 + c\,2^{-t})\,\|a\|_E,$$

and so, since the rotations of one cycle transform disjoint pairs of rows, $\|F_k\|_E \le c\,2^{-t}(1 + c\,2^{-t})^{k-1}\|A_0\|_E$. Finally,

$$\|\Delta A_N\|_E \le c\,2^{-t}\,[1 + (1 + c\,2^{-t}) + \ldots + (1 + c\,2^{-t})^{2n-3}]\,\|A_0\|_E \le 2c\,2^{-t}(n - 1)(1 + c\,2^{-t})^{2n-3}\,\|A_0\|_E. \tag{6}$$

This evaluation is approximately $n/2$ times better than the corresponding evaluation (24.1) of [2].
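For orientation, estimate (6) can be set against the error actually observed when the whole cycle-ordered elimination is performed in single precision. The sketch below is not from the paper; it reuses the illustrative cycle_ordering routine from the earlier sketch, and it approximates the exact product $R_N \ldots R_1 A_0$ by replaying the computed rotation coefficients in double precision, so the measured quantity is itself only an approximation to $\|\Delta A_N\|_E$.

```python
# A sketch, not from the paper: cycle-ordered elimination in single precision
# versus a double-precision replay of the same rotations, compared with (6).
import numpy as np

def cycle_givens_error(A0, t=24, c=6.0):
    n = A0.shape[0]
    A = A0.astype(np.float32)     # working matrix, t ~ 24 binary digits
    E = A0.astype(np.float64)     # replay of the same rotation coefficients
    for cycle in cycle_ordering(n):            # from the earlier sketch
        for i, p, j in cycle:                  # rotate rows i, p; zero (i, j)
            r = np.hypot(A[p - 1, j - 1], A[i - 1, j - 1])
            cs, sn = A[p - 1, j - 1] / r, A[i - 1, j - 1] / r
            for M, cc, ss in ((A, cs, sn), (E, float(cs), float(sn))):
                rp = cc * M[p - 1] + ss * M[i - 1]
                ri = -ss * M[p - 1] + cc * M[i - 1]
                M[p - 1], M[i - 1] = rp, ri
    err = np.linalg.norm(A.astype(np.float64) - E, "fro")
    bound = (2 * c * 2.0 ** -t * (n - 1)
             * (1 + c * 2.0 ** -t) ** (2 * n - 3) * np.linalg.norm(A0, "fro"))
    return err, bound

rng = np.random.default_rng(1)
A0 = rng.standard_normal((30, 31))    # e.g. an augmented 30 x 31 system
err, bound = cycle_givens_error(A0)
print(f"observed ~ {err:.2e},  estimate (6) ~ {bound:.2e}")
```

In such an experiment the observed quantity comes out well below the estimate, as one expects from a majorizing bound.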

The evaluation for the fixed point case is obtained similarly. We shall denote by $l_k$ the number of elements which are eliminated in the course of the $k$-th cycle, and let $v_k$ be a fixed column of the matrix $A_k$. We have

$$\|v_k\|_E \le \|v_{k-1}\|_E + d\,2^{-t}\sqrt{l_k},$$

only if $\|v_{k-1}\|_E \le 1$. To satisfy the condition $\|v_k\|_E \le 1$ for all $k = 0, 1, \ldots, 2(n - 1)$ we require that

$$\|v_0\|_E \le 1 - 2d\,2^{-t}(n - 1)\sqrt{n/2}.$$

We assume that the matrix $A_0$ has $m$ columns; then

$$\|F_k\|_E \le d\,2^{-t}\sqrt{m\,l_k},$$

and, also considering (5) and the fact that $l_k \le n/2$,

$$\|\Delta A_N\|_E \le 2d\,2^{-t}(n - 1)\sqrt{mn/2}. \tag{7}$$
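For orientation only: with the assumed values $t = 35$ binary digits, $d = 2.5$ (the constant quoted above), $n = 100$ and $m = 101$, estimate (7) and the restriction on $\|v_0\|_E$, as reconstructed here, give

$$\|\Delta A_N\|_E \le 2 \cdot 2.5 \cdot 2^{-35} \cdot 99 \sqrt{\tfrac{100 \cdot 101}{2}} \approx 1.0 \cdot 10^{-6}, \qquad \|v_0\|_E \le 1 - 2 \cdot 2.5 \cdot 2^{-35} \cdot 99 \sqrt{50} \approx 1 - 1.0 \cdot 10^{-7},$$

so the restriction imposed on the initial column norms is very mild.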

This evaluation is better than the corresponding evaluation (33.3) of [2] by approximately $n$ times if the original matrices are of the same norm.

Notes. 1. The formulae so obtained can be used to evaluate the transformation error of each separate column $g$ of the matrix $A_0$, if in (6) we replace $\|A_0\|_E$ by $\|g\|_E$ and in (7) assume that $m = 1$.

2. The process described enables us to obtain a more exact expansion of an arbitrary matrix into the product of an orthogonal and a triangular matrix, and also to perform more accurate multiplication of an arbitrary matrix by a sequence of rotation matrices. This makes it possible to realise more accurately certain methods of solving the complete eigenvalue problem (the method of one-sided rotations, the orthogonal power method, Jacobi's method, etc.).

3. We now consider any method of elimination which reduces to pre-multiplication of the original matrix by a sequence of rotation or reflection matrices. Suppose that the computation of the columns in each such multiplication is carried out as accurately as we please. Under the condition that the coefficients of the transformed matrix are kept to $t$ orders, at least one final rounding off is inevitable in each such computation. We can show that the unimprovable majorizing evaluations of these errors alone coincide in order of magnitude, on the class of matrices, with evaluations (6) and (7) respectively. Thus the evaluations obtained here cannot be essentially improved. In this connection we note that evaluation (6) is the same as the evaluation of [2] obtained for the reflection method with the calculation of the scalar products carried out with doubled accuracy; in the derivation of (6) the use of such an operation is not assumed.

Translated by H.F. Cleaves

REFERENCES

1. FADDEYEV, D.K. and FADDEYEVA, V.N. Computational Methods of Linear Algebra. San Francisco, Freeman, 1963.

2. WILKINSON, J.H. The Algebraic Eigenvalue Problem. Oxford, Clarendon Press, 1965.