U.S.S.R. Comput. Maths Math. Phys. Vol. 17, pp. 12-24. © Pergamon Press Ltd. 1978. Printed in Great Britain.
METHODS FOR SOLVING MONOTONIC VARIATIONAL INEQUALITIES, BASED ON THE PRINCIPLE OF ITERATIVE REGULARIZATION*

A. B. BAKUSHINSKII

Moscow

(Received 7 June 1976)

A PRINCIPLE of iterative regularization is described, whereby many methods for solving monotonic variational inequalities may be modified in such a way as to widen considerably the scope of the standard methods. A class of modified methods is considered in detail.
1

Many important problems of non-linear functional analysis and mathematical economics, such as the problem of solving a non-linear operator equation, the problem of finding the extremum point of a functional in a closed convex set Q, or the problem of finding the equilibrium point in an n-person game, etc., can be stated in a unified way, as follows: to find a point $x_0 \in Q$ such that

$$(F(x_0),\ x_0 - z) \le 0 \quad \forall z \in Q. \tag{1}$$

Here $F(\cdot)$ is in general a point-set mapping from the Banach space B into its adjoint B*, Q is a closed convex set in B, and the parentheses indicate the value of the relevant functional on an element. The form of the operator F, and the choice of Q and B, depend on the concrete statement of the problem. For simplicity, we shall assume that F is defined everywhere in Q. The abstract problem (1) is referred to as the problem of solving a variational inequality.

At the present time, the inequalities (1) are mainly investigated with monotonic operators F in Q, i.e. such that

$$(F(x_1) - F(x_2),\ x_1 - x_2) \ge 0 \quad \forall x_1, x_2 \in Q. \tag{2}$$

A fairly large class of problems, of undoubted practical interest, can be described by means of inequalities (1) with monotonic operators F. Examples are problems of minimizing a convex functional $\Phi$ in a convex closed set $Q \subseteq B$, or the problem of finding the saddle-point of a convex-concave functional $L(x, y)$ in a convex set of the type $Q_1 \times Q_2 = Q \subseteq B_1 \times B_2$.

*Zh. vychisl. Mat. mat. Fiz., 17, 6, 1350-1362, 1977.
A large number of examples of concrete variational inequalities with monotonic operators is also presented by the modern theory of partial differential equations (see e.g. [1]). When developing general approximate methods for solving problem (1), condition (2) is usually strengthened; in particular, it is required that the condition for strong monotonicity [2] be satisfied, e.g. in the form

$$(F(x_1) - F(x_2),\ x_1 - x_2) \ge c\,\|x_1 - x_2\|^2, \qquad c > 0, \quad \forall x_1, x_2 \in Q. \tag{3}$$
In many cases, unfortunately, condition (3) is too strong. For instance, it is not satisfied by the variational inequality corresponding to the problem of finding the saddle-point of a matrix game. Monotonic, but not strongly monotonic, variational inequalities appear e.g. when reducing a problem of mathematical programming to the saddle-point problem for a Lagrange function, or when solving so-called "ill-posed" extremal problems, etc.
Until recently, theoretically justified approximate methods only existed for individual special classes of inequalities (1) satisfying the general condition (2). At the same time, every new problem demanded in essence a new approach, which took account of its specific features (see e.g. [3]). The most important requirement was usually that B be finite-dimensional and Q bounded. More recently, a general method (a method of modifying monotonic mappings) was described in [4], whereby it was possible to construct iterative sequences convergent to the solution of (1), provided the solution exists. It is only demanded of the operator F in (1) that it satisfy the condition (2). Unfortunately, the method described in [4] is not universal, in the sense that the sequences generated by it are strongly convergent only in the case of finite-dimensional space B.

In our view, it is of interest to develop general numerical methods for solving problem (1) that do not require any strengthening of condition (2) and are strongly convergent in infinite-dimensional space, provided only that a solution of (1) exists. A general approach was described in [5], suitable for obtaining strongly convergent iterative methods for solving (1) with a minimal monotonicity requirement (2). Unlike the method used in [4], this approach is more universal, since it does not demand that the space in which the operator in (1) acts be finite-dimensional, and it generates a greater variety of specific methods. Further analysis of this approach makes it possible to state a principle which might be termed the "principle of iterative regularization."

A stimulus to stating and investigating this principle in detail was provided by [6], in which algorithms closely similar to some of those discussed below are found to work extremely effectively, though without theoretical justification. The present paper is a development of [5]. In it we state the general principle of iterative regularization, whereby we can in essence modify any general iterative algorithm for solving problem (1) under condition (3) (we shall refer to this henceforth as the base algorithm) into a strongly convergent algorithm for solving (1) under condition (2) only. Here, we are generally outside the conditions under which existence and uniqueness theorems hold, so that we have to require a priori that a solution of problem (1) exists. We shall describe our general principle in more detail for a narrower class of base algorithms, and in particular, we shall give a detailed proof of some theorems of [5]. One of the algorithms we investigated (the regularized Newton's method) is similar to some algorithms in [6], but here it is given a complete theoretical justification. To avoid unnecessary complications, we shall assume throughout that the space B is a real Hilbert space, though many of our results also hold in the case of Banach spaces.
2

We shall state the principle of iterative regularization in general terms. Along with (1), we consider the auxiliary variational inequality

$$(F(x) + \varepsilon M(x),\ x - z) \le 0 \quad \forall z \in Q. \tag{4}$$

In (4), ε > 0, and the operator M is strongly monotonic in the sense of (3), is defined in B, and acts in B (since it has been assumed that B is Hilbert). An example of such an operator is M(x) = x, in which case c = 1. The inequality (4) may be termed the regularized inequality. The added component εM is entirely analogous to Tikhonov's stabilizing component [7] in his theory of ill-posed extremal problems.

If we only assume the existence of a solution of problem (1), then, with wide assumptions about F and M (e.g. it is sufficient to require the semi-continuity [2] of F and M in Q, i.e. the continuity of F and M under operation from the strong topology of B into a weak topology), it can be claimed that, given any ε > 0, a unique solution $x_\varepsilon$ of (4) exists, and moreover, $x_\varepsilon$ has a strong limit in the norm of B as ε → 0. This limit is the unique solution $\bar{x}$ of the variational inequality

$$(M(\bar{x}),\ \bar{x} - z) \le 0 \quad \forall z \in Q_1,$$

where $Q_1$ is the set of solutions of problem (1). (If $Q_1$ is not empty, it is usually convex and closed.) In particular, if M(x) = x, this limit is the solution of (1) of minimal norm. Henceforth, we shall call the element $x_\varepsilon$, if it exists, the Browder-Tikhonov (B.-T.) approximation for solutions of inequality (1). An exact statement of the claim about the convergence of the B.-T. approximations, and the relevant proofs, may be found in [5, 8]. Later, we shall need one simple lemma about the closeness of the solutions of variational inequalities, and in particular of B.-T. approximations, for different ε [5].

Lemma 1
Let the operators F and $\bar{F}$ be strongly monotonic with constants c and $\bar{c}$ respectively in (3). If the solutions of the inequalities (1) with these operators exist ($x_0$ and $\bar{x}_0$), then we have the estimates

$$\|x_0 - \bar{x}_0\| \le c^{-1}\,\|F(\bar{x}_0) - \bar{F}(\bar{x}_0)\| \quad \text{or} \quad \|x_0 - \bar{x}_0\| \le \bar{c}^{-1}\,\|F(x_0) - \bar{F}(x_0)\|.$$

Let us prove e.g. the first inequality. By (3) and (1),

$$c\,\|x_0 - \bar{x}_0\|^2 \le (F(x_0) - F(\bar{x}_0),\ x_0 - \bar{x}_0) \le (F(x_0),\ x_0 - \bar{x}_0) + (\bar{F}(\bar{x}_0) - F(\bar{x}_0),\ x_0 - \bar{x}_0) \le \|\bar{F}(\bar{x}_0) - F(\bar{x}_0)\|\,\|x_0 - \bar{x}_0\|.$$

The first inequality is proved. The proof of the second is similar.

Corollary. Let M(x) = x and let the B.-T. approximations exist for $\varepsilon_1 > \varepsilon_2 > 0$. Then

$$\|x_{\varepsilon_1} - x_{\varepsilon_2}\| \le \frac{\varepsilon_1 - \varepsilon_2}{\varepsilon_1}\,\|y\|, \tag{5}$$

where y is the solution of (1) with minimal norm. Inequality (5) is a particular case of our Lemma 1. It has to be borne in mind that, following from the results of [8], we have $\|x_\varepsilon\| \le \|y\|$.
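As a purely illustrative sketch (not part of the original paper), the behaviour of the B.-T. approximations can be observed numerically in a finite-dimensional special case: take Q = B = R³, M(x) = x, and the monotone (but not strongly monotonic) operator F(x) = Aᵀ(Ax − c) with a rank-deficient matrix A, so that (4) reduces to the linear equation (AᵀA + εI)x = Aᵀc. The matrix A and vector c below are arbitrary demonstration data.

```python
import numpy as np

# A monotone but not strongly monotone operator: F(x) = A.T @ (A @ x - c),
# with A rank-deficient, so problem (1) with Q = R^3 has a whole affine set of solutions.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0]])          # rank 1; the third coordinate does not enter at all
c = np.array([1.0, 2.0])

def browder_tikhonov(eps):
    # With M(x) = x and Q = R^3, inequality (4) reduces to F(x) + eps * x = 0.
    return np.linalg.solve(A.T @ A + eps * np.eye(3), A.T @ c)

y = np.linalg.pinv(A) @ c                 # minimal-norm solution of (1)
for eps in [1.0, 1e-1, 1e-2, 1e-4]:
    x_eps = browder_tikhonov(eps)
    print(eps, np.linalg.norm(x_eps - y))  # the distance shrinks as eps -> 0
```

The printed distances decrease, in agreement with the convergence of $x_\varepsilon$ to the minimal-norm solution y.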
Assume that inequality (4) can be solved, for some fixed ε > 0, by applying any suitable iterative method developed for solving inequalities with strongly monotonic operators. We can attempt, in the following way, to obtain an iterative sequence convergent directly to the solution of inequality (1). Let $x_0 \in Q$ and $\varepsilon_0$ be fixed. Consider inequality (4) with ε = ε₀ and let us perform a step of the iterative method designed to solve (4) with a strongly monotonic operator. We obtain a point $x_1 \in Q$. We then choose ε₁ (generally, it is natural to choose ε₁ < ε₀) and with the pair ε₁, x₁ we perform the same operation. We obtain x₂, etc. The general procedure described will be called "iterative regularization." It turns out that, if we start from a suitable base iterative method for solving the regularized inequality (4), we can designate a priori a sequence $\varepsilon_n \to 0$ as $n \to \infty$, such that the procedure of iterative regularization is strongly convergent to the solution of problem (1), provided that a solution exists.
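In code, the procedure just described is simply a loop that performs one step of the chosen base method on the regularized inequality (4) with a slowly decreasing ε_n. The following Python skeleton is only a sketch: base_step, alpha and eps are placeholders to be supplied for a concrete base method and parameter schedules.

```python
import numpy as np

def iterative_regularization(base_step, x0, alpha, eps, n_steps):
    """General scheme: at step n, perform ONE step of the chosen base method
    applied to the regularized inequality (4) with epsilon = eps(n)."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_steps):
        x = base_step(x, alpha(n), eps(n))   # base-method step for (4) at level eps(n)
    return x
```

With base_step taken as one projected-gradient step for (4), this skeleton becomes the process (12) studied below.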
the convergence of the iterative regularization
procedure is as follows. Assume that the base iterative method has been chosen. The study of the convergence condition
of iterative methods for solving inequalities
(1) under the strong monotonicity
(3) is usually made with the aid of the so-called Lyapunov function.
of this concept can be found e.g. in [3]. We shall merely mention
An exact definition
that e.g., the function
i.e., the distance from the running point to the solution, may be used as the Lyapunov If inequality
(1) is generated by the problem of minimizing f(z)--min
the functional
function.
j(x), the function
f(r). *=Y
is often taken as the Lyapunov
function.
Let U(e,, x) be the Lyapunov function,
usually employed to study the convergence of the
chosen base iterative method for solving inequality (4) with E = E,. We shall assume that z..>) =O. Next, we can write the obvious inequality .!I(
En+!,
5,+,) GU( E,,,
5,+,) + 1U(E”+iYGtH) -U(% x,+4
I; ( E ,,,
I*
(6)
If the base method also works in the infinite-dimensional case, then it is usually possible, from the proof of its convergence, to write directly the connection between $U(\varepsilon_n, x_{n+1})$ and $U(\varepsilon_n, x_n)$ in the form

$$U(\varepsilon_n, x_{n+1}) \le \chi\bigl(U(\varepsilon_n, x_n),\ \alpha_n,\ \varepsilon_n\bigr). \tag{7}$$

(This is the basic relation used to prove the convergence of the base method.) If the second term on the right-hand side of (6) can be estimated in terms of the quantities $\varepsilon_n$, $\varepsilon_{n+1}$ and the a priori specified sequences, then, by substituting (7) into (6), we obtain for $U(\varepsilon_n, x_n) = U_n$

$$U_{n+1} \le \chi(U_n,\ \alpha_n,\ \varepsilon_n) + \chi_1(\varepsilon_n,\ \varepsilon_{n+1}). \tag{8}$$
We shall also assume that the Lyapunov function $U_n$ is strongly positive, i.e.

$$U(\varepsilon_n, x) \ge c(\varepsilon_n)\,\gamma(\|x - x_{\varepsilon_n}\|). \tag{9}$$

Here, $\gamma(t) \to 0$ if and only if $t \to 0$. It is usually possible to estimate from (8) the rate of convergence of $U_n$ to zero as $\varepsilon_n \to 0$. If it turns out that

$$\lim_{n\to\infty} \frac{U_n}{c(\varepsilon_n)} = 0$$

and the theorem holds on convergence of the B.-T. approximations, then it will follow from this that

$$\lim_{n\to\infty} \|x_n - y\| = 0.$$

(If M(x) = x, then y is the solution of (1) with minimal norm.) This implies the strong convergence of the concrete iterative regularization procedure. The fundamental problem is to find sequences $\varepsilon_n$, $\varepsilon_n \to 0$, such that $U_n = o(c(\varepsilon_n))$.

The scheme described enables us to modify, in an entirely unified way, the various base iterative methods both to solve general inequalities with strongly monotonic operators, and for narrower classes of such inequalities. It is interesting that the iterative regularization procedure proves useful, not only in the general infinite-dimensional case, where it is essential for constructing any sort of strongly convergent iterative methods for solving problem (1), but also in cases when, due to the specific features of inequality (1), the procedure can be made to embrace a given method for solving strongly monotonic inequalities. For instance, the familiar Frank-Wolfe method for minimizing convex functions in the finite-dimensional case, in the absence of strict convexity (absence of strict monotonicity of the gradient of the functional), is convergent only to the set of minimum points of the functional, thus creating the usual difficulties in the way of possible strong oscillations of the terms of the minimizing sequence. On applying the iterative regularization procedure to the Frank-Wolfe method and selecting suitable $\varepsilon_n$, we can obtain a regularized Frank-Wolfe method, strongly convergent (even in the infinite-dimensional case) to the solution of the minimization problem having minimal norm.

3

Following the scheme described above, let us consider in more detail the iterative regularization of a sufficiently general class of base iterative methods. The base method of passing from $x_n$ to $x_{n+1}$ with the aid of inequality (4) amounts to the following. We write inequality (4) with $\varepsilon = \varepsilon_n$ in the equivalent form

$$\bigl(x - [\,x - \alpha_n F(x, \varepsilon_n)\,],\ x - z\bigr) \le 0, \qquad \alpha_n > 0, \qquad F(x, \varepsilon_n) = F(x) + \varepsilon_n M(x),$$

or

$$\bigl(x - \Phi(x, \alpha_n, \varepsilon_n),\ x - z\bigr) \le 0, \qquad \Phi(x, \alpha_n, \varepsilon_n) = x - \alpha_n F(x, \varepsilon_n). \tag{10}$$

We now approximate $\Phi(x, \alpha_n, \varepsilon_n)$ in the neighbourhood of the point $x_n$ by the segment of the Taylor series consisting of $p + 1$, $p \ge 0$, terms, $\Phi_p(x, x_n, \alpha_n, \varepsilon_n)$.
The solution of the inequality

$$\bigl(x - \Phi_p(x, x_n, \alpha_n, \varepsilon_n),\ x - z\bigr) \le 0 \quad \forall z \in Q \tag{11}$$

(which exists and is unique in the concrete cases discussed below) is taken as the next approximation $x_{n+1}$. The realizability of the process can be guaranteed only for p = 0, 1, even for monotonic operators F. Hence we shall only consider these two cases in more detail.

Case 1. $p = 0$, $\Phi_0(x, x_n, \alpha_n, \varepsilon_n) = \Phi(x_n, \alpha_n, \varepsilon_n)$. Since, by hypothesis, B is a Hilbert space, the solution of (11) can be written in the "explicit" form

$$x_{n+1} = P_Q\bigl(x_n - \alpha_n\,(F(x_n) + \varepsilon_n M(x_n))\bigr). \tag{12}$$

Here, $P_Q$ denotes the operator of projection onto Q. Notice that the process (12) is meaningful also for point-set mappings F. Depending on the properties assumed for F and M, we can state a variety of propositions about the convergence of the sequence (12) to the above-mentioned point. These propositions are proved according to the scheme of Section 2. We can use here the elementary Lyapunov function $U(\varepsilon, x) = \|x - x_\varepsilon\|$, which is extremely convenient for the proofs. The sufficient conditions for convergence of the sequence (12) are given by Theorems 1-4 (see [5]).
Let the following conditions hold:

$$\alpha_n, \varepsilon_n > 0, \qquad \lim_{n\to\infty} \varepsilon_n = 0, \qquad \sum_{n=1}^{\infty} \alpha_n \varepsilon_n = \infty, \qquad \lim_{n\to\infty} \frac{\varepsilon_n - \varepsilon_{n+1}}{\alpha_n \varepsilon_n^2} = 0; \tag{13}$$

$$\lim_{n\to\infty} \frac{\alpha_n}{\varepsilon_n} = 0; \tag{14}$$

$$\|F(x)\| \le L\,(1 + \|x\|), \tag{15}$$

where L is a positive constant.

Theorem 1
Let F be a maximal monotonic operator (in general, a point-set operator), let $D_F \supseteq Q$, int $Q \ne \emptyset$, let M(x) = x, and let the set of solutions of the inequality (1) be non-empty. If conditions (13)-(15) hold, then, given any initial $x_0 \in Q$, we have

$$\lim_{n\to\infty} \|x_n - y\| = 0. \tag{16}$$
Theorem 2

If F satisfies everywhere in Q the Lipschitz condition

$$\|F(x) - F(z)\| \le L\,\|x - z\|,$$

condition (13) holds, and

$$\limsup_{n\to\infty} \frac{\alpha_n (1 + \varepsilon_n)^2}{\varepsilon_n} < \frac{2}{L^2}, \tag{14a}$$

then relation (16) holds.

Theorem 3

If F is potential, $\|F'(x)\| \le N$, condition (13) holds, and

$$\limsup_{n\to\infty}\ (1 + \varepsilon_n)\,\alpha_n < \frac{2}{N}, \tag{14b}$$

then relation (16) holds.

Theorem 4

If F(x) = x − T(x), where T is a non-expanding operator in Q, condition (13) holds, and

$$\liminf_{n\to\infty}\ (1 - \alpha_n - \alpha_n \varepsilon_n) \ge 0, \tag{14c}$$

then relation (16) holds.

As an example we shall prove Theorem 1. The other proofs are similar except for technical details.
We first mention that, if F is a maximal monotonic operator and a pair $(z, v)$, $z, v \in B$, is such that $(F(x) - v,\ x - z) \ge 0$ for any $x \in D_F$, it follows that $z \in D_F$ and $v \in F(z)$. Since it is not assumed that F is a strictly pointwise operator, inequality (15) can be understood in the sense that it is satisfied for at least one element of $F(x)$. When forming expression (12), we can take as $F(x_n)$ any representative of this set that satisfies (15).

Let us now turn directly to the proof of Theorem 1. By the conditions on the operators F and M, the B.-T. approximations exist (and are unique) for any ε > 0 [5, 8]. Using (5) we have

$$\bigl|\,U(\varepsilon_{n+1}, x) - U(\varepsilon_n, x)\,\bigr| \le \|x_{\varepsilon_n} - x_{\varepsilon_{n+1}}\| \le \|y\|\,\gamma_{n+1}, \qquad \gamma_{n+1} = \frac{\varepsilon_n - \varepsilon_{n+1}}{\varepsilon_n}.$$

It remains to discover the concrete form of the inequality (7). Recalling that the projection operator in (12) is non-expanding, we obtain, after standard transformations and in the light of the strong monotonicity of the operator F(x, ε):
$$U^2(\varepsilon_n, x_{n+1}) \le U^2(\varepsilon_n, x_n)\,(1 - 2\alpha_n \varepsilon_n) + \alpha_n^2\,\bigl(\|F(x_n)\| + \varepsilon_n \|x_n\|\bigr)^2.$$

In view of (15) and the fact that $\|x_{\varepsilon_n}\|$ is uniformly bounded,

$$\bigl(\|F(x_n)\| + \varepsilon_n \|x_n\|\bigr)^2 \le c_1 + c_2\,\|x_n - x_{\varepsilon_n}\|^2,$$

where $c_1$, $c_2$ are constants. In this concrete case the difference inequality (8) has the form

$$U_{n+1} \le \bigl[\,U_n^2\,(1 - 2\alpha_n \varepsilon_n + c_2 \alpha_n^2) + c_1 \alpha_n^2\,\bigr]^{1/2} + \|y\|\,\gamma_{n+1}. \tag{17}$$

In order to examine the behaviour of the solutions of (17) we square both its sides and use an elementary inequality of the type $(a + b)^2 \le (1 + t)\,a^2 + (1 + t^{-1})\,b^2$, $t > 0$. We finally obtain a difference inequality for $U_n^2$. Since, in the present case, $c(\varepsilon_n) \equiv 1$ in (9), for convergence of the sequence $x_n$ it is sufficient to show that $\lim_{n\to\infty} U_n = 0$. But this follows from conditions (13) and (14) in the light of the lemma proved in [9]. The class of sequences satisfying (13) and (14) is quite wide. For instance, it includes sequences of the type

$$\alpha_n = (1 + n)^{-1/2}, \qquad \varepsilon_n = (1 + n)^{-1/4}.$$
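As an illustrative sketch (not taken from the paper), process (12) with these sequences can be coded directly; the operator F, the set Q and the data below are arbitrary choices for demonstration, with M(x) = x.

```python
import numpy as np

def process_12(F, M, proj_Q, x0, n_steps,
               alpha=lambda n: (1.0 + n) ** -0.5,
               eps=lambda n: (1.0 + n) ** -0.25):
    """Process (12): x_{n+1} = P_Q( x_n - alpha_n * (F(x_n) + eps_n * M(x_n)) )."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_steps):
        x = proj_Q(x - alpha(n) * (F(x) + eps(n) * M(x)))
    return x

# Illustrative data: F = gradient of (x1 + x2 - 3)^2 / 2 (monotone, not strongly monotone),
# Q = the box [0, 2]^2; the solution set of (1) is the segment x1 + x2 = 3 inside the box.
F = lambda x: np.array([x[0] + x[1] - 3.0, x[0] + x[1] - 3.0])
proj_box = lambda x: np.clip(x, 0.0, 2.0)
x = process_12(F, M=lambda x: x, proj_Q=proj_box, x0=np.zeros(2), n_steps=100000)
print(x)   # slowly approaches the minimal-norm solution (1.5, 1.5)
```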
Let us describe a concrete example, showing the increased scope of the regularized gradient method (12) as compared with the unregularized (base, $\varepsilon_n \equiv 0$) method, even in finite-dimensional space. Consider in the two-dimensional Hilbert space $E_2$ the operator

$$F(x) = (x_2,\ -x_1), \qquad x = (x_1, x_2), \qquad Q = E_2,$$

where F is monotonic but not strongly monotonic (the left-hand side of (3) is identically zero). The corresponding variational inequality defines the unique saddle-point (0, 0) of the functional $x_1 x_2$ in the entire space $E_2$. Simple arguments show that, in this case, no matter what the choice of $\alpha_n$, the unregularized process (12) ($\varepsilon_n \equiv 0$) is not convergent to the point (0, 0) (if $x_0 \ne (0, 0)$). Moreover, the iterative sequence $x_n$ is not even a "solving" sequence for the inequality (1), i.e. the relation

$$\limsup_{n\to\infty}\ (F(x_n),\ x_n - z) \le 0 \quad \forall z \in E_2$$

is not satisfied. The regularized process (12) is naturally convergent, e.g. under the conditions of Theorem 1.
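The divergence of the base process and the convergence of the regularized one are easy to reproduce numerically; the sketch below (illustrative only) uses the schedules $\alpha_n = (1+n)^{-1/2}$, $\varepsilon_n = (1+n)^{-1/4}$ mentioned above.

```python
import numpy as np

def F(x):                      # rotation operator: monotone, but not strongly monotone
    return np.array([x[1], -x[0]])

def run(n_steps, regularized):
    x = np.array([1.0, 0.0])
    for n in range(n_steps):
        alpha = (1.0 + n) ** -0.5
        eps = (1.0 + n) ** -0.25 if regularized else 0.0
        x = x - alpha * (F(x) + eps * x)   # Q = E2, so the projection is the identity
    return np.linalg.norm(x)

print(run(20000, regularized=False))  # the norm grows: the base process does not converge
print(run(20000, regularized=True))   # the norm tends to 0: the regularized process converges
```

The norm of the iterates grows without bound in the first run and tends to zero in the second, in agreement with the discussion above.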
Notice that Theorems 1-4 can be generalized in various directions. For instance, it can be assumed that an error is made when evaluating the operator $F(x_n)$, and that $F(x_n) + h_n$ is actually evaluated. If the quantity $\|h_n\|$ decreases in a consistent way with $\alpha_n$ and $\varepsilon_n$, this does not affect the convergence of sequence (12). Suitable consistence conditions are given in [5]. The proof of the thus modified Theorem 1 is just the same as for the case $h_n \equiv 0$.

A more important point is that condition (15) and the analogous conditions of Theorems 2-4 can be weakened. In fact, it is sufficient to require that such conditions are satisfied in any set of the type $Q \cap \{x :\ \|x\| \le r\}$, $r > 0$, while the corresponding constants can depend on r. Then, the sequence (12) is not in general convergent for any initial approximation $x_0 \in Q$; it is only convergent in the case $x_0 \in Q \cap \{\|x - y\| \le r_0\}$. The choice of $r_0$ is fairly arbitrary, though the fixing of it imposes some supplementary constraints on the choice of $\alpha_n$ and $\varepsilon_n$. The modified Theorems 1-4 can be proved in the same way as described above. As a preliminary, it is shown that, for any n, $x_n \in Q \cap \{\|x - y\| \le r_0\}$ (the boundedness lemma). The corresponding basic theorem can then be used, with the maximal constants of this set in conditions of type (15). For instance, to modify Theorem 1 we use the following boundedness lemma.
Lemma 2

Let a condition of type (15) be satisfied in the form $\|F(x)\| \le L(\|x\|)$, where $L(\cdot)$ is a non-decreasing function, and let $\|x_0 - y\| \le r_0$, $r_0 \ge \|y\|$. If

$$0 < \alpha_n < \frac{2\,\varepsilon_n\,(1 - 1/\sqrt{2})\,r_0^2}{\varepsilon_n^2\,(1 - 1/\sqrt{2})^2\,r_0^2 + 4\,L^2(r_0 + \|y\|)},$$

then $\|x_n - y\| \le r_0$ for any $n \ge 1$.
In view of this lemma and the proof of Theorem 1, the process (12) is convergent for any initial approximation $x_0 \in Q$ and any growth of $\|F(x)\|$, provided that the choice of $\alpha_n$ is consistent (in the sense of Lemma 2) with the choice of the initial approximation. Lemma 2 is easily proved by induction; we shall omit the details.
There is a close connection between monotonic variational inequalities and non-expanding mappings. In fact, assume e.g. that F is maximally monotonic, $D_F \supseteq Q$, int $Q \ne \emptyset$; then, given any $z \in B$, we define the mapping $\mathrm{pr}(z)$ as the unique solution $s$ of the variational inequality

$$(F(s) + s - z,\ s - w) \le 0 \quad \forall w \in Q.$$

From the general existence theorems for the solutions of variational inequalities (see e.g. [5]), it follows that $\mathrm{pr}(z)$ exists and is unique. It follows from Lemma 1 that $\mathrm{pr}(z)$ is a non-expanding mapping. It is easily shown that the problem of solving the equation

$$\tilde{F}(z) = z - \mathrm{pr}(z) = 0$$

is equivalent to solving the variational inequality (1). The operator $\tilde{F}(z)$ is an example of a modified monotonic mapping [4]. We know [4, 10] that the sequence

$$z_{n+1} = (1 - \tau_n)\,z_n + \tau_n\,\mathrm{pr}(z_n), \qquad 0 < \tau_n \le 1,$$

is weakly convergent (in the infinite-dimensional case) to the solution of (1). (In [10] the general infinite-dimensional case is considered and $\tau_n \equiv 1$; in [4], the case of finite-dimensional B is considered.)

As applied to the operator $\tilde{F}(z)$, Theorem 4 enables us to assert e.g. the following. The sequence

$$z_{n+1} = \frac{\mathrm{pr}(z_n)}{1 + (1 + n)^{-1/2}}, \qquad z_0 \in B,$$

is strongly convergent to the solution of inequality (1) with minimal norm.
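As an illustration (a sketch, not from the paper), for the operator of the example above and Q = B = R², the mapping pr(z) is obtained by solving the linear system F(s) + s = z, and the strongly convergent iteration just stated can be run directly.

```python
import numpy as np

G = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                        # the rotation operator of the E2 example
pr = lambda z: np.linalg.solve(np.eye(2) + G, z)   # pr(z): solution of F(s) + s - z = 0

z = np.array([5.0, -3.0])
for n in range(200):
    z = pr(z) / (1.0 + (1.0 + n) ** -0.5)          # the regularized iteration of Theorem 4
print(z)                                            # approaches (0, 0), the minimal-norm solution
```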
Case 2. $p = 1$, $\Phi_1(x, x_n, \alpha_n, \varepsilon_n) = x - \alpha_n F(x_n, \varepsilon_n) - \alpha_n F'(x_n, \varepsilon_n)\,(x - x_n)$. On performing obvious transformations, we find that $x_{n+1}$ is found by solving the variational inequality

$$\bigl(F(x_n, \varepsilon_n) + F'(x_n, \varepsilon_n)\,(x_{n+1} - x_n),\ x_{n+1} - z\bigr) \le 0 \quad \forall z \in Q, \tag{18}$$

where $x_0 \in Q$ is given. Methods (18) and (12) are respectively analogues of the classical Newton's method and the method of simple iteration for the solution of equations.
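To make (18) concrete, here is a minimal sketch (not the authors' implementation), assuming Q = B = R² and M(x) = x, so that each step of (18) reduces to one linear solve with the matrix F'(x_n) + ε_n I. The operator below, the gradient of (x₁+x₂)⁴/4, is an illustrative choice: it is monotone, its Jacobian is everywhere singular (so the unregularized Newton step is not even defined), and its solution set is the whole line x₁ + x₂ = 0; the schedule for ε_n is also only an illustrative choice.

```python
import numpy as np

def F(x):
    s = (x[0] + x[1]) ** 3
    return np.array([s, s])                # gradient of (x1 + x2)^4 / 4: monotone, singular Jacobian

def dF(x):
    d = 3.0 * (x[0] + x[1]) ** 2
    return np.array([[d, d], [d, d]])

x = np.array([2.0, 1.0])
eps0, c = 1.0, 0.1
for n in range(200):
    eps = 1.0 / (1.0 / eps0 + c * n)        # a slowly decreasing regularization parameter
    J = dF(x) + eps * np.eye(2)             # with M(x) = x and Q = R^2, (18) is a linear solve
    x = x + np.linalg.solve(J, -(F(x) + eps * x))
print(x)                                    # approaches (0, 0), the solution of minimal norm
```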
Let us assume that

$$\|F''(x)\| \le N(\|x\|), \tag{19}$$

where N(s) is any positive non-decreasing function. We have the a priori estimate

$$\|y\| \le d. \tag{20}$$

In addition, let

$$1 \ge \varepsilon_n > \varepsilon_{n+1} > 0, \quad \lim_{n\to\infty} \varepsilon_n = 0, \quad \frac{\varepsilon_n}{\varepsilon_{n+1}} \le \rho, \quad \frac{N(3d)\,\|x_0 - x_{\varepsilon_0}\|}{2\varepsilon_0} \le q < \min(1,\ \rho^{-1}), \quad \frac{\varepsilon_n - \varepsilon_{n+1}}{\varepsilon_{n+1}} \le \frac{2\,(1 - \rho q)\,q\,\varepsilon_{n+1}}{N(3d)\,d}. \tag{21}$$
Theorem 5

Let B be Hilbert, M(x) = x, and F(x) a twice Gateaux-differentiable monotonic operator, and let the numerical function $(F''(x + th)h,\ h)$ be integrable with respect to t in (0, 1) for any $x,\ x + h \in Q$. When conditions (19)-(21) are satisfied, the sequence $x_n$ defined by the inequality (18) is convergent with respect to the norm of B to y.
Proof. By Lemma 1 we have

$$\|x_{\varepsilon_n} - y\| \le \|y\| \le d, \qquad \text{so that} \qquad \|x_n - y\| \le \|x_n - x_{\varepsilon_n}\| + d \le 3d \quad \text{whenever } U_n \le 2d.$$

Let us obtain a difference inequality analogous to (8). As the Lyapunov function we take $U(\varepsilon_n, x) = \|x - x_{\varepsilon_n}\|$. We have, as above,

$$\chi_1(\varepsilon_n, \varepsilon_{n+1}) \le \|y\|\,\gamma_{n+1}.$$

In the light of our assumption, inequalities of the type (7) can easily be obtained by applying Lemma 1 to inequalities (18) and (4) respectively. Using Taylor expansion, we find that

$$U(\varepsilon_n, x_{n+1}) \le \frac{N(3d)}{2\varepsilon_n}\,U^2(\varepsilon_n, x_n).$$

Finally,

$$U_{n+1} \le \frac{N(3d)}{2\varepsilon_n}\,U_n^2 + \|y\|\,\gamma_{n+1}. \tag{22}$$

The further arguments are based on induction. Using relation (21), it follows from (22), first, that

$$\frac{N(3d)}{2\varepsilon_{n+1}}\,U_{n+1} \le q.$$

In turn, it follows that

$$U_{n+1} \le 2d \qquad \text{and} \qquad \|x_{n+1} - y\| \le 3d. \tag{23}$$

Hence the inequality (23) holds for all n. Theorem 5 is proved.
is provided by the
sequence an= ( i/ao+cn) -I, where c > 0 is a sufficiently small number, dependent on p and d. Method (18) is more complicated to realize than is method (12). But its use is justified by the wider scope in the choice of
{E”}
and by the existence of the following estimate, which does
not in general hold for method (12):
where C is a constant, dependent on the initial approximation. Practical examples, in which method (18), or the closely similar methods described in (61, is used, have shown that it is rapidly convergent.
A detailed analysis of method (18) and in particular,
mentioned,
can be found in [ 1 I].
a derivation
of the estimate just
4

Let us dwell briefly on the application of the general algorithms described above to the problem of finding the minimum of a functional f in a closed convex set Q and to the problem of finding the saddle-points of a convex-concave functional. Whatever the specification of the set Q in which the finite convex functional (or the convex-concave function) is given, there exists, for $D(f) \supseteq Q$, $D(f) = \{x:\ f(x) < +\infty\}$, an infinity of operators, monotonic and semi-continuous in Q, by means of which such problems can be reduced to the general form (1) (this is the operator grad f for the minimization problem, or the operator $\{\mathrm{grad}_x f(x, y),\ -\mathrm{grad}_y f(x, y)\}$ for the saddle-point problem). In general, among those operators there is one which is maximally monotonic. One of these operators is the same as the subdifferential $\partial f$ for $x \in Q$, or the same as $\{\partial_x f,\ -\partial_y f\}$, $x, y \in Q$, in the case of the saddle-point problem [12]. Having chosen an operator appropriate to the problem, we can write a process of the type (12) or (18) for seeking the extremal points or saddle-points, which is convergent without any supplementary conditions of the type (3), e.g. for a matrix game.

The method of specifying the set Q usually considered is in the form of a system of inequalities

$$Q_2 = \{x:\ f_i(x) \le 0,\ i = 1, 2, \ldots, n\}, \qquad Q = Q_1 \cap Q_2, \tag{24}$$

where $Q_1$ is a closed convex set. The problem of minimizing the functional $f_0$ in $Q_1 \cap Q_2$ is a problem of mathematical programming. The set $Q_1$ is usually the entire space B or a simple subset of it, and int $Q_2 \ne \emptyset$. The most interesting applications of the above theory, and in particular of the algorithms (12) and (18), are to be found in the following methods for reducing problem (24) to the variational inequality (1).

Method 1. Let $f_i$, $i = 1, 2, \ldots, n$, be downwards convex finite functionals, and let $\bigcap_i D(f_i) \supseteq Q_1$. It is natural to assume that $Q \ne \emptyset$; it is then convex and closed.
Consider the functional

$$\Phi(x) = \sum_{i=1}^{n} \{f_i(x)\}_+^{\,k}, \qquad k \ge 1, \qquad \{t\}_+ = \max(0, t).$$

The set of minimum points of the functional $\Phi$ in $Q_1$ is the same as Q. In view of what was said above, the problem of minimizing $\Phi$ in $Q_1$ is equivalent to solving the variational inequality

$$(\partial\Phi(x_0),\ x_0 - z) \le 0 \quad \forall z \in Q_1. \tag{25}$$

Now let $f_0(x)$ have a strongly monotonic subgradient (e.g. $f_0(x)$ is uniformly convex). Then, on applying process (12) or (18) (if $k \ge 3$) with $M(x) = \partial f_0(x)$ to (25), we obtain an iterative sequence convergent to the solution of problem (24). The regularized inequality (25) in this case defines the minimum point of the penalty functional for problem (24), while assertions about the convergence of the B.-T. approximations are assertions about the convergence of the method of penalty functions. We employed method (18) in this version to find the point of a polyhedron closest to zero. Model computations confirm that, given a suitable choice of the parameters $\varepsilon_n$, it is rapidly convergent.
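A sketch of Method 1 in the simplest setting (illustrative data, not the authors' experiment): the polyhedron $\{x \in R^2:\ x_1 + x_2 \ge 2,\ x \ge 0\}$, $f_0(x) = \tfrac12\|x\|^2$ so that M(x) = x, and, instead of (18), the simpler process (12) with the penalty exponent k = 2.

```python
import numpy as np

# Polyhedron {x: a_i . x <= b_i}:  x1 + x2 >= 2, x1 >= 0, x2 >= 0, i.e. f_i(x) = a_i . x - b_i <= 0.
A = np.array([[-1.0, -1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([-2.0, 0.0, 0.0])
k = 2                                              # penalty exponent (k >= 1)

def grad_Phi(x):                                   # gradient of Phi(x) = sum max(0, f_i(x))^k
    viol = np.maximum(0.0, A @ x - b)
    return A.T @ (k * viol ** (k - 1))

x = np.zeros(2)
for n in range(100000):
    alpha, eps = (1.0 + n) ** -0.5, (1.0 + n) ** -0.25
    x = x - alpha * (grad_Phi(x) + eps * x)        # process (12) with M(x) = grad f0(x) = x, Q1 = R^2
print(x)                                           # slowly approaches (1, 1), the closest point to zero
```

The iterates slowly approach (1, 1), the point of the polyhedron closest to zero.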
Method 2. When certain regularity conditions are satisfied, problem (24) is equivalent to the problem of finding the saddle-points of a Lagrange functional

$$L(x, \lambda) = f_0(x) + \sum_{i=1}^{n} \lambda_i f_i(x), \qquad x \in Q_1, \quad \lambda_i \ge 0.$$

If the same assumptions about the $f_i$ as in Method 1 are satisfied, then the variational inequality (1) with the operator $F = \{\partial_x L,\ -\partial_\lambda L\}$ can be used to find the solution of the mathematical programming problem (24).
Process (12) can here be regarded as a generalized Uzawa process for solving problem (24). The Uzawa process is obtained from (12) if $\varepsilon_n \equiv 0$ [13]. The example of Section 3 shows that the Uzawa process is in general divergent. It is shown in [14] that, for a linear programming problem, the Lagrange function can be modified in such a way that the set of its saddle-points remains as before, yet the Uzawa process becomes convergent in the context of the new Lagrange function. However, no simple algorithm is known for constructing the modified Lagrange function for the general problem (24). Process (12), with $\varepsilon_n \ne 0$, can be regarded as a combination of the Uzawa gradient method and iterative modification of the Lagrange function. At each step of the process, the first component of the saddle-point of the modified Lagrange function is not the same as the solution of problem (24); they become the same only in the limit. At every step, the process of iterations (12) is much simpler than the process of Uzawa iterations with "regular" modification of the Lagrange function.
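A sketch of the regularized Uzawa process (12) of Method 2 on a small linear programme (illustrative data only): minimize $x_1 + x_2$ subject to $x_1 + x_2 \ge 2$, $x \ge 0$, with M the identity on the pair $(x, \lambda)$.

```python
import numpy as np

# LP: minimize x1 + x2 subject to x1 + x2 >= 2, x >= 0.
# L(x, lam) = x1 + x2 + lam * (2 - x1 - x2);  F = (grad_x L, -grad_lam L).
def F(x, lam):
    gx = np.array([1.0 - lam, 1.0 - lam])
    glam = -(2.0 - x[0] - x[1])
    return gx, glam

x, lam = np.zeros(2), 0.0
for n in range(200000):
    alpha, eps = (1.0 + n) ** -0.5, (1.0 + n) ** -0.25
    gx, glam = F(x, lam)
    x = np.maximum(0.0, x - alpha * (gx + eps * x))        # projection onto Q1 = {x >= 0}
    lam = max(0.0, lam - alpha * (glam + eps * lam))       # projection onto {lam >= 0}
print(x, lam)   # tends to the minimal-norm saddle point x = (1, 1), lam = 1
```

With $\varepsilon_n \equiv 0$ the loop reduces to the ordinary Uzawa process, which for such bilinear saddle problems need not converge (cf. the example of Section 3); with the decreasing $\varepsilon_n$ the pair $(x, \lambda)$ tends to the minimal-norm saddle point $x = (1, 1)$, $\lambda = 1$.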
Translated by D. E. Brown
REFERENCES

1. LIONS, J.-L., Non-homogeneous boundary value problems and applications, Springer, 1972.

2. VAINBERG, M. M., The variational method and the method of monotonic operators (Variatsionnyi metod i metod monotonnykh operatorov), Nauka, Moscow, 1972.

3. VOLKONSKII, V. A., et al., Iterative methods in the theory of games and programming (Iterativnye metody v teorii igr i programmirovanii), Nauka, Moscow, 1974.

4. GOL'SHTEIN, E. G., Method of modification of monotonic mappings, Ekonomika matem. metody, 11, No. 6, 1144-1159, 1975.

5. BAKUSHINSKII, A. B., and POLYAK, B. T., On the solution of variational inequalities, Dokl. Akad. Nauk SSSR, 219, No. 5, 1038-1041, 1974.

6. TIKHONOV, A. N., and GLASKO, V. B., Application of the method of regularization to non-linear problems, Zh. vychisl. Mat. mat. Fiz., 6, No. 3, 463-473, 1966.

7. TIKHONOV, A. N., and ARSENIN, V. Ya., Methods for solving ill-posed problems (Metody resheniya nekorrektnykh zadach), Nauka, Moscow, 1974.

8. BROWDER, F., Existence and approximation of solutions of nonlinear variational inequalities, Proc. Nat. Acad. Sci. USA, 56, No. 4, 1080-1086, 1966.

9. BAKUSHINSKII, A. B., and APARTSIN, A. S., Methods of the stochastic approximation type for solving linear ill-posed problems, Sibirskii matem. Zh., 16, No. 1, 12-16, 1975.

10. MARTINET, B., Régularisation d'inéquations variationnelles par approximations successives, Rev. franç. automat. inform. rech. opérat., 4, No. R-3, 154-159, 1970.

11. BAKUSHINSKII, A. B., A regularizing algorithm based on the Newton-Kantorovich method for solving variational inequalities, Zh. vychisl. Mat. mat. Fiz., 16, No. 6, 1397-1404, 1976.

12. ROCKAFELLAR, R. T., Convex analysis, Princeton U.P., 1970.

13. ARROW, K. J., et al., Studies in linear and non-linear programming, Stanford U.P., 1958.

14. GOL'SHTEIN, E. G., Generalized gradient method for finding saddle-points, Ekonomika matem. metody, 8, No. 4, 569-579, 1972.