U.S.S.R. Comput. Maths. Math. Phys. Vol. 20, No. 3, pp. 3-14, 1980. Printed in Great Britain

0041-5553/80/030003-12$07.50/0 © 1981 Pergamon Press Ltd.

REQUIREMENTS ON COMPUTING PROGRAMS OF LINEAR ALGEBRA*

I. N. MOLCHANOV

Kiev

(Received 18 April 1979; revised 18 December 1979)

THE PRACTICAL difficulties of solving problems in linear algebra are discussed, along with the requirements to be met by numerical methods and computing programs in numerical software.

The present paper was read at a working conference, "Performance evaluation of numerical software", IFIP WG 2.5, Baden, Austria, 11-15 December 1978.

Problems of linear algebra arise in many fields of science and technology. Typical problems are considered in [1], while a fairly complete bibliography of numerical methods of algebra may be found in [2]. Methods for the numerical solution of linear algebra problems are surveyed, up to 1975, in [3]. The construction of libraries and packages of programs of numerical analysis, including linear algebra, is analyzed in [4].

The difficulties of computer solution are due, on the one hand, to the non-classical mathematical statement obtained when practical problems are described, and on the other, to under-valuation of the role and importance of the computer realization of linear algebra algorithms. These are the reasons for the present interest in evaluating the reliability of computer solutions of applied problems. As a rule, the solution of an applied problem starts with the creation of acceptable mathematical models. The mathematical modelling of applied problems is discussed in [5]. When describing an applied problem, it is very rarely that we encounter systems

$$Ax = b \qquad\qquad (1)$$

or

$$Ax = \lambda x \qquad\qquad (2)$$

with exact initial data. Most typically, we are given the approximate equations

$$\bar A x = \bar b, \qquad\qquad (3)$$

$$\bar A v = \bar\lambda v, \qquad\qquad (4)$$

with an indication of the error in the initial data:

*Zh. vychisl. Mat. mat. Fiz., 20, 3, 550-561, 1980.


$$\|A - \bar A\| = \|\Delta A\| \le \varepsilon_A, \qquad \|b - \bar b\| = \|\Delta b\| \le \varepsilon_b. \qquad\qquad (5)$$

In short, the applied problem is described by an entire class of equations. As the formal solution of problem (3), (5) we can take any element which converts Eq. (3), with $A'$ and $b'$ satisfying inequalities (5), into an identity. Since this class of formally acceptable solutions can be quite large, we need to define more precisely what we mean by a solution of this type of problem. Similar questions arise in processing observations. Note also that the question of the exact solution of individually specified systems (1), (2) needs to be discussed. Depending on the statement of the applied problem, we need to define each time what we mean by the mathematical solution of the class of equations. It should be recalled that, in the mathematical solution, an hereditary error is present, dependent both on the properties of the matrix and on the error in the initial data.

The solution of equations, obtained by some numerical method by computer, will be called a computer solution. Our evaluation of the reliability of a solution will include both a definition of what we mean by 'solution', and an estimate of the hereditary error and the error of computer realization (the proximity of the computer and mathematical solutions). Before dealing with these topics, we have to consider whether the mathematical problem has a classical or generalized solution, whether a unique classical or generalized solution can be isolated, and the stability of this solution with respect to the input data [6, 7]. For certain problems of linear algebra, the hereditary error can be estimated in terms of the conditionality numbers of the problem.
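The dependence of the hereditary error on the properties of the matrix is easy to demonstrate numerically. The following sketch (not part of the original paper; the matrix and right-hand sides are illustrative) solves a system whose determinant is $-1$, yet whose solution changes in its leading digits when the right-hand side changes in the fifth digit:

```python
import numpy as np

# The hereditary error depends strongly on the matrix: here det A = -1,
# yet a fifth-digit change in the right-hand side changes the solution
# in its leading digits.  Matrix and data are illustrative.
A = np.array([[100.0, 99.0],
              [99.0,  98.0]])
b1 = np.array([199.0, 197.0])
b2 = np.array([198.99, 197.01])   # perturbed in the fifth digit

x1 = np.linalg.solve(A, b1)       # (1, 1)
x2 = np.linalg.solve(A, b2)       # roughly (2.97, -0.99)
cond = np.linalg.cond(A, np.inf)  # ||A||_inf * ||A^{-1}||_inf, about 4e4
```

The large conditionality number of $A$, not the size of the perturbation, is what drives the loss of accuracy.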

2. Classification of applied problems of linear algebra

In order to obtain a reliable solution of a problem, it is essential that the problem be correctly specified. Determination of whether the problem of solving a system of linear algebraic equations is correctly or incorrectly posed implies finding the mathematical solution, defining the method of solution, and taking account of errors both in the initial data and in the computer realization. If the undisturbed system, i.e. the system with exact initial data, has a non-unique solution, the problem is ill-posed. Linear systems with square matrices, degenerate within the range of accuracy of specifying the coefficients, are also ill-posed problems [8, 9]. The conditions

IIA-‘AAll<

or

IIA-‘IIllAAII
are violated for the matrices of such systems. Take an example. Let system (3) be 25x,-36x,=

1,

16x,-23x,=-l,

(6)


For this system,

$$A^{-1} = \begin{pmatrix} -23 & 36 \\ -16 & 25 \end{pmatrix},$$

i.e. $\|A^{-1}\|_\infty = 59$, while it is known that $\|\Delta A\|_\infty \le 0.02$ and $\|\Delta b\|_\infty \le 0.01$. Hence $\|\Delta A\|_\infty \|A^{-1}\|_\infty = 1.18$, i.e. condition (6) is violated. It is easily shown that the unique exact solution of this disturbed system is $x_1 = -59$, $x_2 = -41$. But variation of the coefficients within the range of accuracy of their specification can lead to the disturbed systems

$$25.01x_1 - 35.99x_2 = 1, \qquad 15.99x_1 - 23.01x_2 = -1;$$

$$25.01x_1 - 36x_2 = 1, \qquad 15.99x_1 - 23.01x_2 = -1.$$

The first is incompatible, while the second has the unique solution $x_1 = -369.04315$, $x_2 = -256.41025$.
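These figures are easy to verify numerically; the following sketch (numpy; not part of the original paper) reproduces the condition numbers and the perturbed solution quoted for this example:

```python
import numpy as np

# Numerical check of the example: condition (6) is violated and
# neighbouring systems have drastically different solutions.
A = np.array([[25.0, -36.0],
              [16.0, -23.0]])
b = np.array([1.0, -1.0])

Ainv = np.linalg.inv(A)                      # det A = 1
x = Ainv @ b                                 # exact solution (-59, -41)

norm_Ainv = np.abs(Ainv).sum(axis=1).max()   # ||A^{-1}||_inf = 59
violation = 0.02 * norm_Ainv                 # ||dA||*||A^{-1}|| = 1.18 > 1

# The second disturbed system from the text:
A2 = np.array([[25.01, -36.0],
               [15.99, -23.01]])
x2 = np.linalg.solve(A2, b)                  # about (-369.043, -256.410)
```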

When considering an ill-posed system, the generalized solution (in some sense or other, see [8-11]) may be found. For systems with rectangular or square matrices, for which conditions (6) are violated, the generalized solution is understood to be the solution of the system

$$A^T A x = A^T b \qquad\qquad (7)$$

or the vector $x$ that satisfies the condition

$$\|b - Ax\| \to \min. \qquad\qquad (8)$$
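A generalized solution in the sense of (7), (8) can be computed through the singular-number expansion; when the minimum-Euclidean-norm representative is selected, a unique solution is isolated. A minimal numpy sketch (the rank-deficient system is illustrative, not from the paper):

```python
import numpy as np

# Illustrative rank-deficient system: the second row is twice the
# first, so rank A = 1 < min(m, n), and (7)-(8) have many solutions.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 4.0])

# Via the singular-number expansion (pseudo-inverse): any x with
# x1 + x2 = 2 solves the system; the least-norm one is (1, 1).
x_N = np.linalg.pinv(A) @ b

# lstsq also returns the minimum-norm least-squares solution.
x_ls, residuals, rank, sigma = np.linalg.lstsq(A, b, rcond=None)
```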

If the rectangular matrix $A$ is of incomplete rank, i.e. the rank of $A$ is less than $\min(m, n)$, where $m$ is the number of rows and $n$ the number of columns, then, by imposing extra restrictions on the solution of problems (7), (8), e.g. considering the generalized solution that has least Euclidean norm, we can isolate a unique solution: in the present case, the so-called normal solution $x_N$. To solve ill-posed systems, special methods are used to find the suitably defined generalized solutions. If the undisturbed system of linear equations has a unique solution in the neighbourhood marked out by inequalities (5), and the disturbed system has a solution which is unique, the problem is correctly posed. We shall now assume that conditions (6) hold for problems (3), (5). The hereditary error in the unique classical mathematical solution depends both on the errors in the initial data, and on the properties of the system matrices. For instance, the systems

differ from each other only in the fifth significant digit. The solution of the first is $x_1 = 17$, $x_2 = 0$, and that of the second is $x_1 = 2$, $x_2 = 3$. The determinant of the system is unity. When conditions (6) hold for problems (3), (5), we have the inequalities


$$\frac{\|x - \bar x\|}{\|x\|} \le \frac{\|A\|\,\|A^{-1}\|}{1 - \|\Delta A\|\,\|A^{-1}\|} \left[ \frac{\|\Delta A\|}{\|A\|} + \frac{\|\Delta b\|}{\|b\|} \right], \qquad\qquad (9)$$

$$\frac{\|x - \bar x\|}{\|\bar x\|} \le \frac{\|A\|\,\|A^{-1}\|}{1 - \|\Delta b\|/\|b\|} \left[ \frac{\|\Delta A\|}{\|A\|} + \frac{\|\Delta b\|}{\|b\|} \right] \qquad\qquad (10)$$

under the natural assumption that $\|\Delta b\|/\|b\| < 1$. Both these estimates are unimprovable in the entire class of non-degenerate matrices [12]. It is obvious from estimates (9) and (10) that the stability of the solution to changes in the initial data depends largely on the number $\operatorname{cond} A = \|A\|\,\|A^{-1}\|$, known as the conditionality number of matrix $A$. The number

$$m = \operatorname{cond} A \left[ \frac{\|\Delta A\|}{\|A\|} + \frac{\|\Delta b\|}{\|b\|} \right]$$

may be called the conditionality number of the system of linear algebraic equations. In actual problems, it is sensible to consider the systems for which $m$ is much less than unity, e.g. $m < 0.05$. In short, correctly posed problems of solving systems of linear equations may be well conditioned or poorly conditioned. By the solution of a well conditioned system we mean the unique classical solution with an estimate of the hereditary error. Solutions of poorly conditioned systems are unstable with respect to small variations of the initial data, but have a stable projection onto the subspace formed by the eigenvectors of matrix $A^T A$ corresponding to the "large" eigenvalues. If the system describing the applied problem is poorly conditioned and it cannot be restated in such a way as to obtain a well conditioned system (e.g. by changing to the study of different parameters of the physical process), then, by the solution of the ill-conditioned system, we may sometimes mean the construction of the stable projection just mentioned [3]. A similar problem of estimating the hereditary error arises when finding the eigenvalues and eigenvectors of a matrix. To measure the sensitivity of an individual eigenvalue $\lambda$ to variations of the initial data $\Delta A$, conditionality numbers of matrix $A$ with respect to its eigenvalues, denoted by $\operatorname{cond}\lambda$, have been introduced. In the case of a simple eigenvalue,

$$\operatorname{cond}\lambda = \frac{1}{|y^T x|},$$

where $x$ and $y$ are respectively the eigenvectors of $A$ and $A^T$ corresponding to the eigenvalue $\lambda$, and $\|x\|_2 = \|y\|_2 = 1$. The variation $\Delta\lambda = \bar\lambda - \lambda$ may be characterized both in terms of $A$ and $\Delta A$, and also in terms of the spectral projector $P_\lambda$, as follows: for a simple eigenvalue

$$\operatorname{cond}\lambda = \sup \frac{|\Delta\lambda|}{\|\Delta A\|}, \qquad P_\lambda = \frac{x y^T}{y^T x},$$

where sup is taken with respect to all non-zero increments $\Delta A$; for an $m$-tuple eigenvalue $\lambda$,


$$\operatorname{cond}\lambda = \sup \frac{|\Delta\lambda|}{\|\Delta A\|^{1/m}}, \qquad P_\lambda = X (Y^T X)^{-1} Y^T,$$

where sup is taken with respect to all disturbances $\Delta A$ for which $\lambda$ retains its multiplicity, and the columns of $X$ and $Y$ form a basis of the invariant subspaces for $\lambda$. If the eigenvalues of $A$ are distinct, we have (see [13])

$$|\Delta\lambda_i| \le \operatorname{cond}\lambda_i\,\|\Delta A\|, \qquad \frac{\|\bar x_i - x_i\|}{\|x_i\|} \le \sum_{\substack{j=1 \\ j \ne i}}^{n} \frac{\operatorname{cond}\lambda_j}{|\lambda_i - \lambda_j|}\,\|\Delta A\|.$$

The disturbances $\Delta A$ may destroy the multiplicity of the eigenvalues, and a "cluster" of close eigenvalues may be formed. This situation is revealed by the smallness of $\|P_\lambda\|$. To sum up, when solving a linear system describing an applied problem, it is advisable to determine whether the problem is correctly or incorrectly posed, see whether a correctly posed problem is well or ill conditioned, refine the definition of the required solution and, on the basis of the new definition, construct a method of solution, meantime collecting a posteriori information such that the reliability of the solution can be determined inexpensively as compared with the cost of actually solving the given problem.
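For a simple eigenvalue, the conditionality number can be computed directly from the unit right and left eigenvectors as $\operatorname{cond}\lambda = 1/|y^T x|$. A minimal numpy sketch (the non-normal matrix is illustrative, not from the paper):

```python
import numpy as np

# cond(lambda) = 1/|y^T x| for a simple eigenvalue, with x and y the
# unit right and left eigenvectors.  Illustrative non-normal matrix
# whose eigenvalues are sensitive to perturbations:
A = np.array([[1.0, 100.0],
              [0.0,   2.0]])

lam, X = np.linalg.eig(A)     # right eigenvectors
lamT, Y = np.linalg.eig(A.T)  # eigenvectors of A^T = left eigenvectors of A

conds = []
for i in range(len(lam)):
    x = X[:, i] / np.linalg.norm(X[:, i])
    j = int(np.argmin(np.abs(lamT - lam[i])))   # match left/right pairs
    y = Y[:, j] / np.linalg.norm(Y[:, j])
    conds.append(1.0 / abs(y @ x))              # both are about 100 here
```

Although the eigenvalues 1 and 2 are well separated, both conditionality numbers are about 100, so perturbations of order $\|\Delta A\|$ may move each eigenvalue by roughly $100\,\|\Delta A\|$.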

3. Computer realization of numerical methods

The topics of main interest here are how the computer solution is obtained, how to ensure that the computer and mathematical solutions are close, and how to estimate this closeness. Numerical methods for solving problems of linear algebra are usually divided into direct and iterative methods. After feeding matrix $A$ and vector $b$ of system (3) into the computer, and converting them from decimal to binary form, we obtain $\tilde A$ and $\tilde b$. The computer problem is

$$\tilde A x = \tilde b, \qquad\qquad (11)$$

where the matrix and right-hand side are stored in binary form in the computer. The solutions of the computer and mathematical problems may be different. The over-all effect of rounding errors in direct methods can be regarded as the appropriate equivalent disturbance of the initial data [14, 15]. Hence the computer-evaluated solution of system (3) is exact for some disturbed system, e.g.

$$(A + dA)\,x^{(1)} = b + db,$$


and is an approximate solution of system (3), different from the mathematical one. Here, $dA$ and $db$ are the corresponding equivalent disturbances. If $\|dA\|\,\|A^{-1}\| < 1$ and $\|db\|/\|b\| < 1$, we have

$$\frac{\|x^{(1)} - x\|}{\|x^{(1)}\|} \le \frac{\|A\|\,\|A^{-1}\|}{1 - \|db\|/\|b\|} \left[ \frac{\|dA\|}{\|A\|} + \frac{\|db\|}{\|b\|} \right], \qquad\qquad (12)$$

or

$$\frac{\|x^{(1)} - x\|}{\|x\|} \le \frac{\|A\|\,\|A^{-1}\|}{1 - \|dA\|\,\|A^{-1}\|} \left[ \frac{\|dA\|}{\|A\|} + \frac{\|db\|}{\|b\|} \right]. \qquad\qquad (13)$$

Majorant and probability estimates of some direct methods for solving linear systems are given in [14-16], etc. Notice that the length of the mantissa of the machine word appears as a parameter in these estimates. In accordance with these estimates, and estimates (12), (13), we can talk of good or bad computer stipulation of the system, depending on the numerical stability of the computer solution to over-all rounding errors. But the concept of good or bad computer stipulation is closely linked with the scope of the specific computer. The same system may be classified as badly computer stipulated for one computer, and well stipulated for another. For instance, in Table 1 we compare the mathematical solution of the system

(14)

with the computer solutions obtained by different direct methods on different machines with unitary (u.m.w.) and binary (b.m.w.) machine word. The computer stability of the solution [17, 18] may be improved by good scaling, i.e. conversion of the initial system into an equivalent system with a well conditioned matrix. When analyzing the computer realizability of algorithms, we also have to estimate the required number of arithmetic operations and the required memory, i.e., all in all, the problem-solving time.
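The effect of good scaling on the conditionality number is easy to see numerically. The sketch below (illustrative matrix, not system (14)) applies a simple row equilibration:

```python
import numpy as np

# Scaling converts the system into an equivalent one with a much
# better conditioned matrix.  Illustrative badly balanced matrix:
A = np.array([[1.0e6, 2.0e6],
              [3.0,   4.0]])

# Divide each row by its largest entry (a simple equilibration);
# the system D A x = D b is equivalent to A x = b.
D = np.diag(1.0 / np.abs(A).max(axis=1))
A_scaled = D @ A

cond_before = np.linalg.cond(A)        # of order 10^6
cond_after = np.linalg.cond(A_scaled)  # of order 10
```

Choosing optimal scalings is the subject of [17, 18]; the one-sided equilibration here is only the simplest possibility.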

TABLE 1. The mathematical solution of system (14) compared with the computer solutions obtained by the $LL^T$- and $LU$-expansion methods on the ES, MIR and BESM-6 computers with different machine-word lengths.


TABLE 2. The exact normal solution of system (16), $x_1 = 0.9794638929$, $x_2 = -4.227168422$, $x_3 = 6.161331610$, compared with the computer solutions obtained by Tikhonov regularization, the minfit procedure, and Gauss's method with the principal element chosen from the entire matrix.

Information about the proximity of the computer solution $x^{(1)}$ to the mathematical solution $x$ of system (3) can be obtained by means of the iterative process [14]

$$r^{(s)} = b - A x^{(s)}, \qquad QP\,\delta^{(s)} = r^{(s)}, \qquad x^{(s+1)} = x^{(s)} + \delta^{(s)}, \qquad s = 1, 2, \ldots. \qquad\qquad (15)$$

The corrections $\delta^{(s)}$, $s = 1, 2, \ldots$, are found by utilizing the expansion of matrix $A$ into the product of two matrices $Q$ and $P$. The discrepancies $r^{(s)}$ in (15) have to be evaluated with double accuracy. Convergence of the iterative process with respect to the corrections shows that the computer solution is close to the mathematical solution. If there is no convergence, or it is excessively slow, the length of the computer word has to be increased. If process (15) is convergent, we have the following approximate estimate for the conditionality number of matrix $A$:

$$\operatorname{cond} A \approx \frac{\|\delta^{(1)}\|}{\varepsilon\,\|x^{(1)}\|},$$

where $\varepsilon$ is the greatest number for which $1.0 + \varepsilon = 1.0$ holds in the floating-point arithmetic of the given computer (see [1]). Using the approximate value of $\operatorname{cond} A$, we can estimate from (9) and (10) the hereditary error of the solution. Ill-posed problems can be solved by using Tikhonov regularization [8, 9, 19], or by expanding matrix $A$ with respect to singular numbers [20, 21], e.g. the minfit procedure (see [10]). Both these methods enable an approximation to the normal solution to be obtained. In Table 2 we compare the solutions of the system with degenerate matrix

$$2x_1 - x_2 + \sqrt{2}\,x_3 = 5 + 7\sqrt{2}, \qquad 3x_1 + 2x_2 - 3x_3 = -24, \qquad 3x_1 + \sqrt{2}\,x_2 - \frac{15}{7}\,x_3 = -12 - 3\sqrt{2}, \qquad\qquad (16)$$

obtained by these methods, with the exact normal solution of the problem, and the solution obtained by Gauss's method with the principal element chosen from the entire matrix.
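Both routes to the normal solution can be sketched with numpy: Tikhonov regularization with a small parameter $\alpha$ approaches the minimum-norm solution given by the singular-number expansion (pseudo-inverse). The degenerate system below is illustrative, not system (16):

```python
import numpy as np

# Tikhonov regularization, (A^T A + alpha*I) x = A^T b, and the
# singular-number expansion both approximate the normal solution.
# Illustrative rank-1 system: both equations say x1 + 2*x2 = 1.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([1.0, 2.0])

x_svd = np.linalg.pinv(A) @ b     # normal solution, here (0.2, 0.4)

alpha = 1.0e-6                    # small regularization parameter
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ b)
```

Note that forming $A^T A$ explicitly squares the conditionality number, which is precisely why the text below remarks that Tikhonov regularization needs a long machine word.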

Notice that, in practical problems, Tikhonov regularization operates roughly three times faster than the singular number expansion method. But a computer with a long machine word is needed for its realization, since otherwise, when matrix $A$ is multiplied by matrix $A^T$, a product $A^T A$ may be obtained on the machine which differs from the true mathematical product of the matrices. If the matrix of system (3) is symmetric and positive semi-definite, matrix spectrum displacement [11] may be used to solve the system. This method enables a generalized solution of system (3) to be obtained. For solving linear systems with high-order matrices having some special feature (ribbon structure, or easily programmed generators), iterative methods can sometimes be useful. By considering together the memory volume required, the number of arithmetic operations, the required length of machine word, and the problem-solving time on the specific computer, we can decide whether a direct or an iterative method is desirable for solving a concrete system. The theory of iterative methods, and some aspects of their practical realization, are discussed in [6, 22, 23].

In some cases a theoretically convergent iterative process, when realized on a computer, may yield a machine solution which is not the mathematical solution. This may be due to a number of reasons; let us consider some of them. Notice that, instead of system (3), solution of problem (11) is realized in the computer. The special features of the machine arithmetic may lead to situations in which orders disappear, or small quantities are replaced by machine zero, etc. The conditions for terminating the iterations may not be adequately justified. At each step of the process rounding occurs, and the course of the theoretical iterative process becomes distorted. For instance, investigation of the explicit one-step iterative method, realized according to the relations

$$r^{(k)} = b - A x^{(k)}, \qquad x^{(k+1)} = x^{(k)} + \tau r^{(k)}, \qquad \tau = \frac{2}{\delta + \Delta}, \qquad\qquad (17)$$

and used for solving system (3) with a symmetric and positive definite matrix, under conditions when estimates of the spectrum $\delta \le \lambda_i \le \Delta$, $i = 1, 2, \ldots, n$, are known, leads, from the point of view of machine realization, to the estimate

$$\|\bar x^{(k)} - x^{(k)}\| \le \frac{1 - \rho^k}{1 - \rho}\,\theta,$$

where $\bar x^{(k)}$ is the machine solution obtained after the $k$-th iteration, $\rho = \|E - \tau A\|$, $x^{(0)}$ is the initial approximation to the solution, $\theta = \max_i \|\xi_i\|$, and $\xi_i$ is the rounding error appearing when the $i$-th step of the process is realized. The computer-evaluated vector $\bar x^{(k)}$ is the $k$-th approximation for the iterative scheme (17) with disturbed right-hand side, the equivalent disturbance $db_k$ being determined by the over-all rounding error of all $k$ steps. The conditions for terminating the iterations are theoretically linked with the properties of the system matrix, the method of solution, and the length of the machine word. For instance, for process (17), satisfaction of the conditions

$$\max_i \frac{|x_i^{(k+1)} - x_i^{(k)}|}{|x_i^{(k)}|} \le \frac{\lambda_n \varepsilon}{\lambda_1}, \qquad x_j \ne 0, \quad x_j^{(k)} \ne 0,$$

guarantees

$$\max_i \frac{|x_i - x_i^{(k+1)}|}{|x_i|} \le \varepsilon,$$



where $\varepsilon$ is an arbitrarily small preassigned number, and $\lambda_n$ is the minimum eigenvalue of $A$. To estimate the hereditary error of the mathematical solution, we can use a posteriori information obtained during the iterative process. For instance, if, during process (17), the machine solution proves to be reasonably close to the mathematical solution, then the approximate conditionality number $\operatorname{cond} A$ can be found from the a posteriori relation

$$\operatorname{cond} A \approx -\frac{2}{\ln \rho},$$
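Process (17), together with an a posteriori estimate of $\operatorname{cond} A$ from the contraction factor $\rho$, can be sketched as follows (matrix, right-hand side and spectrum bounds are illustrative, not from the paper):

```python
import numpy as np

# Sketch of the explicit one-step process (17) for a symmetric positive
# definite matrix with known spectrum bounds delta <= lambda_i <= Delta,
# plus an a posteriori conditionality estimate from the contraction.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # eigenvalues (7 +- sqrt(5))/2
b = np.array([1.0, 2.0])
delta, Delta = 2.0, 5.0           # outer bounds on the spectrum
tau = 2.0 / (delta + Delta)       # the parameter of (17)

x = np.zeros(2)
for k in range(200):
    r = b - A @ x                 # discrepancy r^(k)
    x = x + tau * r               # x^(k+1) = x^(k) + tau * r^(k)

rho = np.linalg.norm(np.eye(2) - tau * A, 2)  # contraction factor
cond_est = -2.0 / np.log(rho)     # rough a posteriori conditionality
```

Here the iteration converges to the mathematical solution $(1/11,\ 7/11)$, and the estimate $-2/\ln\rho$ is of the same order as the true spectral conditionality number of $A$.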

The demands made by some eigenvalue-problem-solving algorithms on the memory and the number of arithmetic operations, and also majorant and probability characteristics of the proximity of the machine to the mathematical solutions, may be found e.g. in [15, 16]. Before applying a method for finding eigenvalues and eigenvectors, it is useful to scale the initial matrix. After using the program for solving the problem with the scaled matrix, we have to employ a special procedure whereby the effects of scaling are eliminated [10].


Various devices can be used to estimate the reliability of the machine solution. For instance, a program is described in [26] whereby the conditionality numbers can be found on the basis of the QR algorithm, while by-passing the construction of the orthogonal matrix converting $A$ to an upper triangular matrix $T$. The number $q$ of correct decimal places in the computed eigenvalue $\lambda$ is given by the relation

$$q = -\log_{10}\left(\theta \operatorname{cond}\lambda\,\|A\|_E\right),$$

where $\theta$ is an upper bound of $\|dA\|_E / \|A\|_E$.

For the practical estimation of the result, it is desirable to solve the eigenvalue problem twice, for different $\varepsilon$, where $\varepsilon$ is the quantity appearing in the conditions for terminating the iterative process. The number of coincident digits in the machine solutions can be taken as the number of guaranteed places for the given computer representation of numbers. Let us state the conditions under which the machine solution of the problem can be taken as the mathematical solution: 1) the existence of a classical or generalized solution of the machine problem; 2) isolation, during the computer realization of the algorithm, of a unique classical or generalized solution; 3) the computer stability of this isolated solution; 4) good computer conditionality of the computer problem; 5) the correspondence of the solution algorithm to the specification of the machine problem; and 6) satisfactorily based conditions for terminating the computation.

4. Conclusion

A lot of work has been done in recent years in various countries on estimating the reliability of solutions obtained by computer. Notable examples are interval arithmetic, see e.g. [27], and significant digit arithmetic, see e.g. [28]. A posteriori information obtained during the course of solution, as well as a priori information, may be used for estimating solution reliability, see e.g. [29]. One device for solving applied problems while simultaneously estimating the reliability of the result is the new type of program complex, see e.g. [30], which jointly specifies the machine problem, constructs the algorithm for its solution, computes the solution, and estimates its reliability.

The serious difficulties of solving the linear algebra problems that describe applied problems place demands both on numerical methods and on computing programs of linear algebra. These demands include: 1) both computation of the solution and provision of facilities for monitoring the closeness of the machine to the mathematical solution, together with estimation of the hereditary error of the solution; 2) computer-orientation of methods, i.e. maximum account should be taken of the mathematical and technical potential of the computer; 3) economy in the number of arithmetic operations, and in the volume of the working memory, during machine realization of algorithms; 4) problem-orientation of methods, i.e. a whole class of problems should be solvable; and 5) reasonably simple realization.


REFERENCES

1. FORSYTHE, G. E., and MOLER, C. B., Computer solution of linear algebraic systems, Prentice-Hall, New York, 1967.
2. FADDEEVA, V. N., et al., Computing methods of linear algebra. Bibliographic guide (Vychislitel'nye metody lineinoi algebry. Bibliograficheskii ukazatel'), 1828-1974, VTs SO Akad. Nauk SSSR, Novosibirsk, 1976.
3. FADDEEV, D. K., and FADDEEVA, V. N., Computing methods of linear algebra, in: Computing methods of linear algebra. Parallel computations (Vychisl. metody lineinoi algebry. Parallel'nye vychisleniya), Nauka, Leningrad, 1975.
4. ARUSHANYAN, O. B., Some modern concepts in the construction of libraries of numerical analysis, Vestn. MGU, Ser. vychisl. matem. i kibernetiki, No. 1, 58-72, 1977.
5. TIKHONOV, A. N., Mathematical models, BSE, Vol. 15, Sov. entsiklopediya, Moscow, 1974.
6. SAMARSKII, A. A., Introduction to the theory of difference schemes (Vvedenie v teoriyu raznostnykh skhem), Nauka, Moscow, 1971.
7. TIKHONOV, A. N., On the stability of converse problems, Dokl. Akad. Nauk SSSR, 39, No. 5, 195-198, 1944.
8. TIKHONOV, A. N., On the stability of algorithms for solving degenerate systems of linear algebraic equations, Zh. vychisl. Mat. mat. Fiz., 5, No. 4, 718-722, 1965.
9. TIKHONOV, A. N., and ARSENIN, V. Ya., Methods for solving ill-posed problems (Metody resheniya nekorrektnykh zadach), Nauka, Moscow, 1974.
10. WILKINSON, J. H., and REINSCH, C., Handbook for automatic computation. Linear algebra, Springer, Berlin, 1971.
11. MOLCHANOV, I. N., and NIKOLENKO, L. D., On an approach to integrating boundary problems with a non-unique solution, Inform. Proc. Letters, 1, 168-172, 1972.
12. GLUSHKOV, V. M., MOLCHANOV, I. N., and NIKOLENKO, L. D., On the choice of programs for solving systems of linear algebraic equations on MIR series computers, Kibernetika, No. 6, 1-6, 1968.
13. FADDEEV, D. K., and FADDEEVA, V. N., Computational methods of linear algebra (Vychislitel'nye metody lineinoi algebry), Fizmatgiz, Moscow, 1960.
14. WILKINSON, J. H., Rounding errors in algebraic processes, H. M. Stat. Off., London, 1964.
15. WILKINSON, J. H., The algebraic eigenvalue problem, Clarendon Press, Oxford, 1965.
16. VOEVODIN, V. V., Rounding errors and the stability of direct methods of linear algebra (Oshibki okrugleniya i ustoichivost' pryamykh metodov lineinoi algebry), VTs MGU, Moscow, 1969.
17. FORSYTHE, G. E., and STRAUS, E. G., On best conditioned matrices, Proc. Internat. Congr. Math., Amsterdam, Vol. 2, 102-104, 1954.
18. BAUER, F. L., Optimally scaled matrices, Numer. Math., 5, No. 1, 73-87, 1963.
19. MOROZOV, V. A., Estimation of the accuracy of the solution of ill-posed problems and the solution of systems of linear algebraic equations, Zh. vychisl. Mat. mat. Fiz., 17, No. 6, 1341-1343, 1977.
20. GOLUB, G., and KAHAN, W., Calculating the singular values and pseudo-inverse of a matrix, SIAM J. Numer. Analysis, Ser. B, 2, 205-224, 1965.
21. GOLUB, G., and KAHAN, W., Least squares, singular values and matrix approximations, Aplikace mat., 13, 44-51, 1968.
22. MARCHUK, G. I., Methods of computational mathematics (Metody vychislitel'noi matematiki), Nauka, Moscow, 1977.
23. SAMARSKII, A. A., and NIKOLAEV, E. S., Methods for solving mesh equations (Metody resheniya setochnykh uravnenii), Nauka, Moscow, 1978.
24. MOLCHANOV, I. N., NIKOLENKO, L. D., and YAKOVLEV, M. F., On the solution of a class of systems of linear algebraic equations with degenerate matrices, in: Computational methods of linear algebra (Vychisl. metody lineinoi algebry), VTs SO Akad. Nauk SSSR, Novosibirsk, 97-109, 1977.
25. KUZNETSOV, Yu. A., Iterative methods for solving incompatible systems of linear algebraic equations, in: Some problems of computing and applied mathematics (Nekotorye probl. vychisl. i prikl. matem.), Nauka, Novosibirsk, 199-208, 1975.
26. CHAN, S. P., FELDMAN, R., and PARLETT, B. N., Algorithm 517. A program for computing the condition numbers of matrix eigenvalues without computing eigenvectors, ACM Trans. Math. Software, 3, No. 2, 186-203, 1977.
27. MOORE, R. E., Interval analysis, Prentice-Hall, New York, 1966.
28. METROPOLIS, N. C., and ASHENHURST, R. L., Significant digit computer arithmetic, IRE Trans. Electron. Comput., EC-7, No. 4, 265-267, 1958.
29. BAUER, F. L., Genauigkeitsfragen bei der Lösung linearer Gleichungssysteme, Z. angew. Math. Mech., 46, No. 7, 409-421, 1966.
30. MOLTSCHANOW, I. N., Über Programmpakete zur Lösung wissenschaftlich-technischer Aufgaben, Wiss. Z. Techn. Hochschule Otto von Guericke, Magdeburg, 21, No. 2, 275-285, 1977.

Translated by D. E. Brown

U.S.S.R. Comput. Maths. Math. Phys. Vol. 20, No. 3, pp. 14-24, 1980. Printed in Great Britain

0041-5553/80/030014-11$07.50/0 © 1981 Pergamon Press Ltd.

AN ALGORITHM FOR ANALYZING THE INDEPENDENCE OF INEQUALITIES IN A LINEAR SYSTEM*

V. A. BUSHENKOV and A. V. LOTOV

Moscow

(Received 18 December 1978; revised 30 November 1979)

AN ALGORITHM is described whereby all the inequalities that do not affect the set of solutions can be eliminated from a system of linear inequalities.

Introduction

Versions of an algorithm for solving the following problems are outlined below.

Problem I. In the finite system of linear inequalities

$$(a_i, x) \le b_i, \qquad i = 1, 2, \ldots, N, \qquad\qquad (1)$$

where $x \in R^n$, $a_i$ are given rows, and $b_i$ are given numbers, to isolate an equivalent subsystem (i.e. one that has the same set of solutions) which contains no dependent inequalities (i.e. those which are consequences of the other inequalities).

Problem II. For each $i_0 = 1, 2, \ldots, N$, regarding $b_{i_0}$ as a parameter, while the remaining $b_i$, $i \ne i_0$, are given numbers, to find the number $b^*_{i_0}$ such that the $i_0$-th inequality with $b_{i_0} < b^*_{i_0}$ is not a consequence of the rest, while the $i_0$-th inequality with $b_{i_0} > b^*_{i_0}$ is dependent.

*Zh. vychisl. Mat. mat. Fiz., 20, 3, 562-572, 1980.
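Problem I amounts to checking, for each inequality, whether the maximum of $(a_i, x)$ over the feasible set of the remaining inequalities exceeds $b_i$. A minimal numpy sketch for the plane, assuming the reduced feasible set is a bounded non-empty polygon so that the maximum is attained at a vertex (the example system is illustrative, not the authors' algorithm):

```python
import numpy as np
from itertools import combinations

def redundant(A, b, i, tol=1e-9):
    """Is inequality i of A x <= b a consequence of the others?

    Sketch for R^2: assumes the feasible set of the remaining
    inequalities is a non-empty bounded polygon, so max (a_i, x) over
    it is attained at a vertex, i.e. at the intersection of two of
    the remaining constraint lines.
    """
    rest = [k for k in range(len(b)) if k != i]
    best = -np.inf
    for p, q in combinations(rest, 2):
        M = np.array([A[p], A[q]], dtype=float)
        if abs(np.linalg.det(M)) < tol:
            continue                            # parallel constraints
        v = np.linalg.solve(M, np.array([b[p], b[q]], dtype=float))
        if all(A[k] @ v <= b[k] + tol for k in rest):
            best = max(best, float(A[i] @ v))   # feasible vertex
    return best <= b[i] + tol

# Unit square plus the dependent inequality x + y <= 3.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0, 3.0])
```

In higher dimensions the same test is performed by solving a linear programming problem per inequality rather than by vertex enumeration.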