0041-5553/78/0601-0017$07.50/0
U.S.S.R. Comput. Maths Math. Phys. Vol. 18, pp. 17-24. © Pergamon Press Ltd. 1979. Printed in Great Britain.
CORRECTION OF THE RESULTS OF FACTORIZATION WHEN MODIFYING A NON-DEGENERATE SQUARE MATRIX*

N. D. SERGEEV

Leningrad

(Received 3 November 1976; revised 14 July 1977)

MATRIX expressions are given for transforming the triangular factors of a non-degenerate square matrix, in the context of the matrix modification (element variation) scheme used in practice. The efficiency of the correction is estimated by comparison with the formation and factorization of the modified matrix.

1. Introduction

A typical feature in a variety of applications (e.g., [1-5]) is a multistep computational process, at every step of which the modified matrix A_m of a linear system of algebraic equations is used, obtained from the matrix A of the previous step in accordance with the expression
A_m = A + XSY,     (1.1)

where X and Y are rectangular matrices, and S is a square matrix.
Noting that the order of S is usually much less than the order of A, and that, at the first step, the system of equations is solved by Gauss's or Cholesky's method, with decomposition of the initial matrix into triangular factors, it is advisable, at each of the steps of re-evaluation, to replace the formation and complete factorization of the modified matrix by correction (re-evaluation) of the results of factorization employed at the previous step.

Note. Triangular factors of the non-degenerate matrix A_m, with a fixed position of the rows and columns, exist only under the condition

M_m^(k) ≠ 0,   k = 1, 2, ..., n,     (1.2)

where the M_m^(k) are the successive principal minors of A_m. However, non-degeneracy of A_m is sufficient for correction of the triangular factors; it is then performed in the context of permutation of the columns (see Section 3, Paragraph 6). An expression for the economical re-evaluation of an inverse matrix in the case of the modification scheme (1.1) is given in [6].

*Zh. vychisl. Mat. mat. Fiz. 18, 3, 546-553, 1978.
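The economical re-evaluation of the inverse referred to in [6] is, in modern terminology, the Sherman-Morrison-Woodbury identity for the scheme (1.1); only an h-th order system is inverted instead of the full n-th order modified matrix. A minimal numerical sketch (Python/NumPy, with illustrative random data, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 8, 2                                       # order of A, order of the small modification S

A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned illustrative matrix
X = rng.standard_normal((n, h))
S = rng.standard_normal((h, h)) + h * np.eye(h)
Y = rng.standard_normal((h, n))

A_inv = np.linalg.inv(A)

# Woodbury identity: (A + X S Y)^{-1} = A^{-1} - A^{-1} X (S^{-1} + Y A^{-1} X)^{-1} Y A^{-1};
# only the h-by-h "capacitance" matrix K is inverted.
K = np.linalg.inv(S) + Y @ A_inv @ X
Am_inv = A_inv - A_inv @ X @ np.linalg.inv(K) @ Y @ A_inv

assert np.allclose(Am_inv, np.linalg.inv(A + X @ S @ Y))
```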
2. Transformation of the triangular factors when modifying the matrix

Theorem. Given a factorization of the n × n matrix A = PQ, where P is a left, and Q a right, triangular matrix, with ones on the principal diagonal of Q. Then a factorization of the same type, corresponding to condition (1.2), of the non-degenerate matrix A_m = A + XSY = P_m Q_m, where X is an n × h matrix, Y an h × n matrix, and S a square non-degenerate matrix of order h, h ≤ n, is given by the following expressions:

P_m = P + P(P^{-1}X)_d ω{[S^{-1}]_n + (E - ω^{-1})Z(E - ω^{-1})^t}^{-1}(YQ^{-1})_d,     (2.1)

Q_m = Q + (P^{-1}X)_d([S^{-1}]_n + Z)^{-1}(ω^t - E)(YQ^{-1})_d Q,     (2.2)
Z = [YQ^{-1}(P^{-1}X)_d ω^t]_d = [ω(YQ^{-1})_d P^{-1}X]_d.     (2.3)
Here [S^{-1}]_n is a quasi-diagonal matrix with n identical square blocks S^{-1}, and ω is a left quasi-triangular matrix, all the non-zero blocks of which are unit matrices of order h. For instance, with n = 4 and h = 1,

        |1 0 0 0|                   |0 0 0 0|
    ω = |1 1 0 0|,    E - ω^{-1} =  |1 0 0 0|     (2.4)
        |1 1 1 0|                   |0 1 0 0|
        |1 1 1 1|                   |0 0 1 0|

The subscript d in (2.1)-(2.3) denotes successive decomposition of the relevant matrix into blocks and formation from them of a quasi-diagonal matrix. Corresponding to n × h, h × n, hn × hn matrices we have 1 × h, h × 1, h × h blocks respectively.

Proof. We can show from (2.1)-(2.3) that P_m, Q_m are triangular matrices of the same type as P, Q. For a non-degenerate matrix A_m such a decomposition is unique. Let us show that, by using (2.1) and (2.2), we can obtain P_m Q_m = A_m. It follows from (2.3), in the light of the block structure of the matrices ω and E - ω^{-1}, similar to the element-by-element structure (2.4), that

(YQ^{-1})_d (P^{-1}X)_d = Z - (E - ω^{-1})Z(E - ω^{-1})^t.
Hence

P_m Q_m = PQ + P(P^{-1}X)_d B (YQ^{-1})_d Q,     (2.5)

B = ωC + ([S^{-1}]_n + Z)^{-1}(ω^t - E) + ωC[Z - (E - ω^{-1})Z(E - ω^{-1})^t]([S^{-1}]_n + Z)^{-1}(ω^t - E),     (2.6)

C = {[S^{-1}]_n + (E - ω^{-1})Z(E - ω^{-1})^t}^{-1}.     (2.7)
On substituting into (2.6), as the first and second additive terms, the zero matrix (ω + ω^t - E)[S]_n - (ω + ω^t - E)[S]_n, and taking ωC outside the brackets in the terms of the resulting expression (from the second to the fifth), we have

B = (ω + ω^t - E)[S]_n + ωC[D + F([S^{-1}]_n + Z)^{-1}(ω^t - E)],     (2.8)

D = -C^{-1}ω^{-1}(ω + ω^t - E)[S]_n + E
  = -[S^{-1}]_n ω^{-1}(ω + ω^t - E)[S]_n - (E - ω^{-1})Z{(E - ω^{-1})^t(E - ω^{-1}) - (ω^{-1} - E)^t ω^{-1}ω^t}[S]_n + E
  = -[S^{-1}]_n ω^{-1}(ω^t - E)[S]_n,     (2.9)

F = {[S^{-1}]_n + (E - ω^{-1})Z(E - ω^{-1})^t}ω^{-1} + Z - (E - ω^{-1})Z(E - ω^{-1})^t
  = [S^{-1}]_n ω^{-1} - (E - ω^{-1})Z(E - ω^{-1})^t(E - ω^{-1}) + Z
  = [S^{-1}]_n ω^{-1} + ω^{-1}Z.     (2.10)

It can easily be seen that the braces in (2.9) contain the difference between two identical matrices, each obtained from the unit matrix by replacement of the last diagonal block of order h by a zero block. The replacement of the matrix (E - ω^{-1})^t(E - ω^{-1}) in (2.10) by a unit matrix does not affect the result, if we note the subsequent left multiplication of the corresponding term by E - ω^{-1}. Noting that the matrices ω^{-1} and ω^t - E are commutative with respect to [S^{-1}]_n, we obtain, instead of (2.9), (2.10),

D = -ω^{-1}(ω^t - E),     F = ω^{-1}([S^{-1}]_n + Z).     (2.11)

From (2.8) and (2.11) we have

B = (ω + ω^t - E)[S]_n + ωC[-ω^{-1}(ω^t - E) + ω^{-1}([S^{-1}]_n + Z)([S^{-1}]_n + Z)^{-1}(ω^t - E)] = (ω + ω^t - E)[S]_n.
In the matrix ω + ω^t - E, all the successive blocks of order h are unit blocks, while in the matrix B they are equal to S. Hence B = eSe^t, where e is an hn × h quasi-column, successively composed of unit blocks of order h. Returning to (2.5), we obtain

P_m Q_m = PQ + P[(P^{-1}X)_d e] S [e^t (YQ^{-1})_d] Q = A + XSY = A_m,

which completes the proof. Obviously, the theorem remains true for matrices with complex elements.

3. Generalizations and particular cases

1. Let A = LU, where L is a left triangular matrix with unit principal diagonal, and U a right triangular matrix. From (1.1) we have A_m^t = A^t + Y^t S^t X^t = U_m^t L_m^t, where, by our theorem, U_m^t is given by (2.1), and L_m^t by (2.2), in the light of (2.3), using the matrices
ut, L’, Y’, X’, 8s’ instead of P, Q, X, Y, S respectively. Here and throughout what follows, the sign ( )’ for matrices with complex elements denotes tr~sposition and replacement of all elements by the complex conjugates. On performing transposition (and complex conjugation) of the resulting matrix expressions, we get
L_m = L + L(L^{-1}X)_d(ω - E)([S^{-1}]_n + Z)^{-1}(YU^{-1})_d,     (3.1)

U_m = U + (L^{-1}X)_d{[S^{-1}]_n + (E - ω^{-1})Z(E - ω^{-1})^t}^{-1}ω^t(YU^{-1})_d U,     (3.2)

Z = [YU^{-1}(L^{-1}X)_d ω^t]_d = [ω(YU^{-1})_d L^{-1}X]_d.     (3.3)
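For h = 1 the correction of the factors L, U described here can be realized by the elementwise rank-one updating recurrence of Bennett [7], to which the matrix expressions (3.1)-(3.3) are equivalent in exact arithmetic. The following Python/NumPy sketch (illustrative only; no pivoting, and condition (1.2) is assumed to hold) implements that elementwise form rather than the matrix expressions verbatim:

```python
import numpy as np

def ldu(A):
    """Plain LDU factorization without pivoting: A = L diag(d) U (illustrative helper)."""
    n = A.shape[0]
    L, U, d = np.eye(n), np.eye(n), np.zeros(n)
    M = A.astype(float).copy()
    for k in range(n):
        d[k] = M[k, k]
        L[k+1:, k] = M[k+1:, k] / d[k]
        U[k, k+1:] = M[k, k+1:] / d[k]
        M[k+1:, k+1:] -= np.outer(L[k+1:, k], U[k, k+1:]) * d[k]   # Schur complement
    return L, d, U

def ldu_rank1_update(L, d, U, x, y, s=1.0):
    """Correct the factors of A = L diag(d) U to those of A + s*x*y^t (after [7])."""
    L, U = L.copy(), U.copy()
    d, x, y = d.copy(), x.astype(float).copy(), y.astype(float).copy()
    a = float(s)
    for k in range(len(d)):
        dk = d[k] + a * x[k] * y[k]          # updated diagonal element
        p, q = x[k], y[k]
        x[k+1:] -= p * L[k+1:, k]            # eliminate component k from the column
        y[k+1:] -= q * U[k, k+1:]
        L[k+1:, k] += (a * q / dk) * x[k+1:] # correct column k of L and row k of U
        U[k, k+1:] += (a * p / dk) * y[k+1:]
        a *= d[k] / dk
        d[k] = dk
    return L, d, U

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # keeps all principal minors non-zero
x, y = rng.standard_normal(n), rng.standard_normal(n)

L, d, U = ldu(A)
L1, d1, U1 = ldu_rank1_update(L, d, U, x, y, s=0.5)
assert np.allclose(L1 @ np.diag(d1) @ U1, A + 0.5 * np.outer(x, y))
```

The cost of one such correction is O(n²), against O(n³) for refactorizing A_m from scratch, which is the efficiency effect estimated in Section 4.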
2. Let A = VDW, where V is a left and W a right triangular matrix with unit principal diagonal, and D is a diagonal matrix; similarly, A_m = A + XSY = V_m D_m W_m. Using the obvious equations

V = L,   U = DW,   W = Q,   P = VD,     (3.4)

we obtain from (3.1), (2.2) and (2.3) or (3.3)*:
V_m = V + V(V^{-1}X)_d(ω - E)([S^{-1}]_n + Z)^{-1}(YW^{-1}D^{-1})_d,     (3.5)

W_m = W + (D^{-1}V^{-1}X)_d([S^{-1}]_n + Z)^{-1}(ω^t - E)(YW^{-1})_d W,     (3.6)

D_m = D + (V^{-1}X)_d C (YW^{-1}D^{-1})_d,     (3.7)

Z = [YW^{-1}D^{-1}(V^{-1}X)_d ω^t]_d = [ω(YW^{-1})_d D^{-1}V^{-1}X]_d.     (3.8)

When deriving (3.7) from (2.1), we use the fact that the principal quasi-diagonals of the matrices P(P^{-1}X)_d ω and (V^{-1}X)_d are identical.
3. If A and A_m in (1.1) are symmetric or Hermitian matrices, then Y = X^t and S is a symmetric (Hermitian) matrix. In this case Q = D^{-1}P^t, L = U^t D^{-1}, W = V^t, and the use of expressions (2.2), (3.1), and (3.6), as also the storage of the corresponding matrices, becomes superfluous; the elements of D are the same as the elements of the principal diagonal of P or of U.

4. Let A and A_m be positive definite symmetric or Hermitian matrices, and let A = GG^t, A_m = A + XSX^t = G_m G_m^t; G, G_m are left triangular matrices. All the elements of D (see Para. 2) are positive numbers. Denoting by ( )^{1/2} and ( )^{-1/2} the operations of taking the square root of all elements of a diagonal matrix, and the same operation with subsequent inversion of the matrix, we find, in the light of Para. 2 and Section 2, that G_m = V_m D_m^{1/2} = P_m D_m^{-1/2}. Using (2.1) and (3.7), and also (2.3) or (3.8), with the obvious equations

P = GD^{1/2},   Q = W = D^{-1/2}G^t,   V = GD^{-1/2},     (3.9)

*One of the possible versions of the algorithm following (3.5), (3.6) for h = 1 is described in [7].
and recalling that the second term of (3.7) is a diagonal matrix, we obtain

G_m = [G + G(G^{-1}X)_d ω{[S^{-1}]_n + (E - ω^{-1})Z(E - ω^{-1})^t}^{-1}(G^{-1}X)_d^t]     (3.10)
      × [E + (G^{-1}X)_d{[S^{-1}]_n + (E - ω^{-1})Z(E - ω^{-1})^t}^{-1}(G^{-1}X)_d^t]^{-1/2}.     (3.11)
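For h = 1, S = 1, Y = X^t, the re-evaluation of the Cholesky factor G can likewise be sketched by the classical rank-one updating recurrence (equivalent to (3.10), (3.11) in exact arithmetic, though not those matrix expressions verbatim). A Python/NumPy sketch with illustrative random data:

```python
import numpy as np

def chol_rank1_update(G, x):
    """Given left triangular G with G @ G.T = A, return G_m with
    G_m @ G_m.T = A + x x^t (the case h = 1, S = 1, Y = X^t).

    Classical rank-one Cholesky updating, sketched here as an
    equivalent of expressions (3.10)-(3.11) for this special case."""
    G = G.astype(float).copy()
    w = x.astype(float).copy()
    n = G.shape[0]
    for k in range(n):
        r = np.hypot(G[k, k], w[k])              # rotated diagonal element, stays positive
        c, s = r / G[k, k], w[k] / G[k, k]
        G[k, k] = r
        G[k+1:, k] = (G[k+1:, k] + s * w[k+1:]) / c
        w[k+1:] = c * w[k+1:] - s * G[k+1:, k]   # uses the already-updated column
    return G

rng = np.random.default_rng(4)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                      # positive definite test matrix
x = rng.standard_normal(n)

G = np.linalg.cholesky(A)                        # left triangular factor
Gm = chol_rank1_update(G, x)
assert np.allclose(Gm @ Gm.T, A + np.outer(x, x))
```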
5. With the aid of (3.4) we can show that the matrices (2.3), (3.3) and (3.8) are identical, while for positive definite A and A_m, in the light of (3.9), they are identical with the matrix Z in (3.10), (3.11). From this, using (3.4), we can show that the diagonal matrices (V^{-1}X)_d C(YW^{-1}D^{-1})_d, D(P^{-1}X)_d C(YQ^{-1})_d, D(L^{-1}X)_d C(YU^{-1})_d are identical, and, when (3.9) holds, that they are identical with the matrix D(G^{-1}X)_d C(G^{-1}X)_d^t, where C is given by (2.7). In view of this, we have, using (3.7),

D_m = D + D(P^{-1}X)_d C(YQ^{-1})_d = D + D(L^{-1}X)_d C(YU^{-1})_d,     (3.12)

and for positive definite A and A_m,

D_m = D + D(G^{-1}X)_d C(G^{-1}X)_d^t.     (3.13)
Hence, knowing the factors of the factorization of the matrix A and the elements of its modification in accordance with (1.1), we can easily evaluate the determinant of the modified matrix A_m as the product of the elements of the diagonal matrix given by expressions (3.7), (3.12), or (3.13).

6. The above expressions are obviously applicable for the factorization of matrices which differ slightly from the given factorized matrix (a difference in one element, or in the elements of a single row or single column, etc.). When columns a_i, a_j change places, the correction is performed with X_1 = a_j - a_i, Y_1 = (y_k), y_i = 1, y_j = -1, y_k = 0 for k ≠ i, j. If condition (1.2) is not satisfied, but the matrix A_m is non-degenerate, the correction has to be supplemented by introducing additional columns into X and the corresponding rows into Y. The correction is similarly supplemented, if necessary, when we use at each step alternately a column of one triangular factor of the initial matrix and a row of the other.

7. If A_m is obtained from A by bordering,

A_m = |A  s|
      |r  a|,     (3.14)

where r and s are rectangular and a is square of small order, it can easily be seen that the factorization of A_m is given by the following expressions (or by similar expressions, for different types of factorization):

P_m = |P        0|     Q_m = |Q  P^{-1}s|
      |rQ^{-1}  p|,          |0        q|,     pq = a - (rQ^{-1})(P^{-1}s);

the factorization of the matrix a - (rQ^{-1})(P^{-1}s) is performed in accordance with the scheme for the factorization of A.
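The determinant evaluation noted in Para. 5 corresponds to what is now called the matrix determinant lemma: beyond the already-factorized A (whose determinant is the product of the diagonal elements of D, or of P, or of U), only determinants of order h are formed. A numerical sketch with illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 7, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
X = rng.standard_normal((n, h))
S = rng.standard_normal((h, h)) + h * np.eye(h)
Y = rng.standard_normal((h, n))

# Matrix determinant lemma: det(A + X S Y) = det(A) det(S) det(S^{-1} + Y A^{-1} X);
# only h-th order determinants appear beyond the already-known det(A).
lhs = np.linalg.det(A + X @ S @ Y)
rhs = (np.linalg.det(A) * np.linalg.det(S)
       * np.linalg.det(np.linalg.inv(S) + Y @ np.linalg.inv(A) @ X))
assert np.isclose(lhs, rhs)
```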
8. Let A_m = η^t A η = η^t FTη, where F and T are the triangular factors of the matrix A, and η is a rectangular matrix of ones and zeros, there being just one 1 in each column and in each non-zero row. The operation η^t A η amounts to truncation of A, i.e., to selection from A of a number of rows equal to the number of columns in η, followed by selection from these rows of the columns having the same order numbers as the selected rows.
Recalling that A_m is unchanged by any permutation of columns in η^t F with simultaneous permutation in the same order of rows in Tη, we can form triangular matrices F_1 and T_1 of the same type as F and T, and obtain a scheme of modification of the factorized matrix A_1 = F_1 T_1, corresponding to (1.1):

A_m = (η^t F)(Tη) = F_1 T_1 + XSY.     (3.15)

Similarly, for A = VDW, on also permuting the elements of the diagonal matrix D in the same order as the columns in η^t V and the rows in Wη, we have

A_m = (η^t V) D (Wη) = V_1 D_1 W_1 + XSY.     (3.16)
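The truncation η^t A η of Para. 8 is simply the selection of a principal submatrix; a small Python/NumPy sketch (the selected indices are an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n))

# eta: one 1 in each column and in each non-zero row, selecting rows/columns 0, 2, 5.
idx = [0, 2, 5]
eta = np.zeros((n, len(idx)))
eta[idx, range(len(idx))] = 1.0

# The truncation eta^t A eta is exactly the principal submatrix on those indices.
assert np.allclose(eta.T @ A @ eta, A[np.ix_(idx, idx)])
```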
Using the above re-evaluation expressions, we obtain the corresponding triangular factors of the truncated matrix A_m. The efficiency of the schemes (3.14) and (3.15), (3.16) is obvious if the orders of the matrices a and S are small compared with the order of the initial matrix.

4. Features and efficiency of the computational scheme

The operations P^{-1}X and YQ^{-1} in (2.1)-(2.3) imply the solution of systems of equations with the triangular matrices P or Q^t and the quasi-columns X or Y^t of unattached terms. Using the structure of the matrices (P^{-1}X)_d, ω, ω^t - E, (YQ^{-1})_d, we can easily see that the left quasi-triangular matrix P(P^{-1}X)_d ω in (2.1) (or, in (2.2), the right quasi-triangular matrix (ω^t - E)(YQ^{-1})_d Q with zero principal quasi-diagonal) is formed in the process of solving the above-mentioned systems of equations, and is represented by the quasi-columns (transposed quasi-columns) of unattached terms obtained before each successive step of such solution, starting with the first (with the second) step. The matrices Z and [S^{-1}]_n + Z in (2.3), (2.2), and the matrix (2.7) appearing in (2.1), are quasi-diagonal with square blocks. The corresponding matrices in (3.1)-(3.3), (3.5)-(3.8), and (3.10)-(3.11) are formed in exactly the same way. If h is much smaller than n, analysis of our expressions shows that, when using them, the labour is considerably reduced as compared with the formation and complete factorization of the modified matrix A_m. This effect depends on the fact that equations with known triangular matrices and with a quasi-diagonal matrix are solved, and on the fact that the matrix A_m itself is not thereby composed.
We shall not dwell on methods of putting into algorithmic form the computational procedures connected with the realization of the above matrix transformations, and merely confine ourselves to the main details and the efficiency of the computational scheme (2.1), (2.3) in the case of symmetric matrices A and A_m for h = 1, S = 1. When calculating the number of arithmetic operations, we assume that the matrix A (of order n), and the matrices X, Y = X^t, A_m, and P (the last, on the principal diagonal and below it) do not contain zero elements. The matrix operations and the corresponding numbers of arithmetic operations in computation according to the scheme (2.3), (2.1) are shown in Table 1. Here, operation 1 is the solution of the system of equations Pξ = X with the left triangular matrix P; operation 2 (Para. 3, Section 3) costs nothing: the elements of y = YQ^{-1} are the same as the diagonal elements of the matrix P(P^{-1}X)_d ω; in operation 3 we multiply the row y by the diagonal matrix x_d = (P^{-1}X)_d and successively add the elements of the resulting row; in operation 4, the matrix (E - ω^{-1})Z(E - ω^{-1})^t is obtained, as may be seen from (2.4), from the quasi-diagonal matrix Z by a shift of all blocks of the principal quasi-diagonal along it by one block position downwards and rightwards, with annihilation of the last block and with the appearance of a zero block in the place of the first; operation 5 is the solution of the system of equations with a diagonal matrix of coefficients and with the same matrices of unattached terms and result; operation 6 is the multiplication of the left triangular matrix P x_d ω by the diagonal matrix u_d (as explained above, P x_d ω is formed when performing operation 1 without extra cost); and operation 7 is the summation of two left triangular matrices.

TABLE 1
Number of arithmetic operations

Matrix operations (1-7) | Multiplicative | Additive
Total:                  | n² + 3n - 3    | n² + n - 1
TABLE 2
Number of arithmetic operations

Matrix operations | Multiplicative     | Additive
XX^t              | 0.5n(n + 1)        | -
A_m = A + XX^t    | -                  | 0.5n(n + 1)
A_m = P_m Q_m     | (n³ + 3n² - 4n)/6  | (n³ - n)/6
Total:            | (n³ + 6n² - n)/6   | (n³ + 3n² + 2n)/6
In Table 2 we show the matrix operations and the corresponding numbers of arithmetic operations when composing the symmetric matrix A_m and factorizing it in accordance with the Gauss method of single division; we take into account the fact that, at the k-th step of this computational scheme, we perform division of k - 1 auxiliary elements of the k-th row by a diagonal element and 0.5k(k - 1) multiplications of the results obtained by auxiliary elements of the k-th column, and the same number of subtractions. If we note that the computing time for one multiplicative operation is twice that for an additive operation, we can see by comparing the totals that the scheme of Table 1 is less laborious than that of Table 2 if n ≥ 4; with n = 10 the labour ratio is 1:1.88, and with n = 100 it is 1:16.9.
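The totals of Table 2 can be checked by direct summation of the per-step counts just quoted; a short Python sketch:

```python
# Per-step costs of the single-division Gauss scheme, as described above:
# at step k, (k - 1) divisions and 0.5*k*(k - 1) multiplications, with the
# same number of subtractions; forming X X^t and A + X X^t each cost
# 0.5*n*(n + 1) operations.  The sums reproduce the Table 2 totals.
def table2_totals(n):
    mult = n * (n + 1) // 2 + sum((k - 1) + k * (k - 1) // 2 for k in range(1, n + 1))
    add = n * (n + 1) // 2 + sum(k * (k - 1) // 2 for k in range(1, n + 1))
    return mult, add

for n in (3, 10, 100):
    mult, add = table2_totals(n)
    assert mult == (n**3 + 6 * n**2 - n) // 6
    assert add == (n**3 + 3 * n**2 + 2 * n) // 6
```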
Translated by D. E. Brown

REFERENCES

1. ARGYRIS, J., Matrix theory of statics of structures, in: Modern methods of computing complex statically indeterminate systems (Sovremennye metody rascheta slozhnykh staticheski neopredelennykh sistem), Sudpromgiz, Leningrad, pp. 372-396, 1961.

2. FILIN, A. P., Matrices in the statics of rod systems (Matritsy v statike sterzhnevykh sistem), Stroiizdat, Moscow-Leningrad, 1966.

3. SERGEEV, N. D., On the computation of statically indeterminate systems when successively modified by a multi-step process, Stroit. mekhan. i raschet sooruzhenii, No. 4, 26-31, 1976.

4. MEL'NIKOV, N. A., Electrical networks and systems (Elektricheskie seti i sistemy), Energiya, Moscow, 1969.

5. TINNEY, W. F., and WALKER, J. W., Direct solutions of sparse network equations by optimally ordered triangular factorization, Proc. IEEE, 55, 1801-1809, 1967.

6. HOUSEHOLDER, A. S., Principles of numerical analysis, McGraw-Hill, 1953.

7. BENNETT, J. M., Triangular factors of modified matrices, Numer. Math., 7, No. 3, 217-221, 1965.
U.S.S.R. Comput. Maths Math. Phys. Vol. 18, pp. 24-33. © Pergamon Press Ltd. 1979. Printed in Great Britain.

A CASE OF ASYMPTOTIC SUMMATION IN A TWO-DIMENSIONAL DOMAIN*

V. A. KARATYGIN, V. A. ROZOV and N. O. SOKOLOVA

Leningrad

(Received 7 March 1977)

ASYMPTOTIC expressions are obtained for double sums with rapidly oscillating summands, suitable in the case of a sufficiently slow variation of the amplitude and phase increments of the summands over the range of summation. A very general type of computational algorithm is obtained by applying a Poisson transformation to the multiple sums, followed by application of the stationary phase method and investigation of the convergence of certain types of double trigonometric series. Computational expressions are given for the case of summation in a rectangular domain.

*Zh. vychisl. Mat. mat. Fiz. 18, 3, 554-562, 1978.