
U.S.S.R. Comput. Maths. Math. Phys., Vol. 29, No. 2, pp. 7-15, 1989. Printed in Great Britain. 0041-5553/89 $10.00+0.00 © 1990 Pergamon Press plc

A GLOBAL MINIMIZATION ALGORITHM WITH PARALLEL ITERATIONS*

YA.D. SERGEEV and R.G. STRONGIN

An algorithm is proposed for minimizing multi-extremum functions in which the function is evaluated simultaneously at several points (on several concurrently running processors) in each iteration. Conditions are established under which the proposed concurrent method does not perform redundant computations compared with the efficient purely sequential method on which the concurrent scheme is based. The algorithm is generalized to the multidimensional case. Some applications of the method are described.

The advent of multiprocessor computers and local-area networks opens up new possibilities for the development of function optimization methods in which the function being optimized can be evaluated in each iteration concurrently at several points of the domain of definition (by using several processors). Parallel computations may also be used to accelerate the analysis of the mathematical model of the object being optimized, i.e. to speed up the computations required to evaluate the function being optimized at a single point. However, the organization of this acceleration has its own specific features for each particular class of models [1], whereas the development of principles for the parallel selection of iteration points applies to all optimization algorithms.

The present study considers the decision rules for selecting the trial points (i.e. the points for evaluating the function to be optimized) comprising a single iteration of the algorithm that seeks the absolute minimum of a function in a hyperinterval. Such iterations, consisting of groups of trials, are called parallel iterations. Each trial included in a parallel iteration may be realized on a separate processor, using the same shared or copied program. For many problems (in particular, for the computer-aided optimal design of complex technical systems), the time to evaluate the function at a trial point is much longer than the fetch time (i.e. the time to pass the coordinates of the trial point to the processor and to return the value of the function computed at that point) and the time to compute the coordinates of the group of points for the next parallel iteration (i.e. the characteristic time of the decision rule). In these problems, the efficiency of the parallel algorithm can be assessed in terms of the total number of trials required to solve the problem and in terms of the total solution time (i.e. by two criteria). It is desirable to ensure that the mesh generated in the search domain by the trial points of the parallel algorithm has the same (or almost the same) density as the mesh of the efficient sequential algorithm, i.e. that the introduction of parallelism does not produce redundant trials. Given this condition, it is advisable to use the largest possible number of processors in order to accelerate the solution of the problem. These considerations are the basis for the development of the parallel methods described below, which generalize the efficient sequential schemes [2, 3] to the multiprocessor case.

Section 1 describes the decision rules of the parallel algorithms for the unconstrained one-dimensional problem and provides sufficient conditions under which all the limiting points of the minimizing sequence are global minimum points of the function. In Section 2 we introduce formal redundancy characteristics of the parallel scheme compared with the purely sequential scheme and derive some bounds on this redundancy. In Section 3 the parallel algorithm is generalized to the multidimensional case. Our technique of modifying the sequential methods of [2, 3] to the case of parallel processing can be similarly used to modify other sequential schemes, such as those from [4-8]. We merely note that since the efficiency of sequential schemes (convergence only to global optimal points, low density of trials far from the optimum, etc.) is achieved at the cost of using the results of all the previous iterations when deciding on the next iteration, the possibility of non-redundant parallelizing of such schemes characterizes the inherent degree of parallelism of the multi-extremum problem.

1. A Parallel Algorithm for Unconstrained One-Dimensional Problems

The choice of trial points for minimizing the real function φ(x), x ∈ [a, b], by the sequential global search algorithm of [2, 3] is determined by the following rules. The first evaluation of the function is performed at an arbitrary interior point x¹ ∈ (a, b), and every subsequent (k+1)-th evaluation, k ≥ 1, is performed at the point

x^{k+1} = S(t),   (1.1)

*Zh. vychisl. Mat. mat. Fiz., 29, 3, 332-345, 1989.


where

S(t) = (x_t + x_{t-1})/2 − (z_t − z_{t-1})/(2rμ),   (1.2)

inside the interval (x_{t-1}, x_t) corresponding to the maximum characteristic R(t), i.e.

R(t) = max{ R(i) : 1 ≤ i ≤ k+1 },   (1.3)

where

R(i) = (x_i − x_{i-1}) + (z_i − z_{i-1})² / (r²μ²(x_i − x_{i-1})) − 2(z_i + z_{i-1})/(rμ),  1 < i < k+1,   (1.4)

R(1) = 2(x_1 − x_0) − 4z_1/(rμ),   R(k+1) = 2(x_{k+1} − x_k) − 4z_k/(rμ),   (1.5)

μ = max{ |z_i − z_{i-1}| / (x_i − x_{i-1}) :  1 < i ≤ k }.   (1.6)

If μ in (1.6) is undefined or μ = 0, then we take μ = 1. Here x_i, 1 ≤ i ≤ k, are the previous trial points x¹, ..., x^k indexed in order of increasing coordinates, i.e.

a = x_0 < x_1 < ... < x_k < x_{k+1} = b,   (1.7)

and z_i = φ(x_i), 1 ≤ i ≤ k, are the values of the function being minimized at these points. The points x_0 = a and x_{k+1} = b are added to the series (1.7) in order to ensure uniform description of all the subintervals (x_{i-1}, x_i), 1 ≤ i ≤ k+1, of the interval (a, b) (the values z_0 and z_{k+1} are undefined). The coefficient r in (1.2), (1.4), (1.5) is a parameter of the numerical method [2].

The convergence of this sequential algorithm is characterized by the following theorem from [2] (the rate-of-convergence bounds derived in [2] show that the algorithm is definitely superior to enumerative search schemes).

Theorem 1. Let x̄ be a limiting point of the sequence {x^k} generated during the minimization of the Lipschitzian function (with constant K) φ(x), x ∈ [a, b]. Then the following assertions are true:
1) convergence to the interior point x̄ ∈ (a, b) is two-sided;
2) the point x̄ is locally optimal if the function has a finite number of local extrema;
3)

if in addition to x̄ there exists another limiting point x', then φ(x̄) = φ(x');

4) z^k = φ(x^k) ≥ φ(x̄) for any k ≥ 1;

5)

if, at some step, μ from (1.6) satisfies the condition

rμ > 2K,   (1.8)

then the set of limiting points of the sequence {x^k} is identical with the set of global minimum points of the function φ(x).

The decision rules (1.1)-(1.6) are based on a stochastic model (see [2]) according to which (x_{t-1}, x_t) can be interpreted as the interval most likely containing a global minimum point (here (1.1) is the maximum-likelihood estimate of this point). In the framework of this stochastic model, the sequence of indices t_j, 1 ≤ j ≤ k+1, from the descending series of characteristics (1.4), (1.5),

R(t_1) ≥ R(t_2) ≥ ... ≥ R(t_{k+1}),   (1.9)

orders the subintervals (x_{t_j−1}, x_{t_j}) by decreasing probability of localization of the global minimum point in these subintervals.

These considerations suggest a technique for modifying the sequential algorithm to the case when p = p(l) > 1 processors are available to execute the (l+1)-th iteration (step), ensuring concurrent processing of p trials. We naturally assume that these p parallel trials should be realized in the subintervals corresponding to the first p terms of the series (1.9). As a result, we obtain the following algorithm with parallel iterations.

The first-step trials are executed at p = p(1) ≥ 1 arbitrary interior points of the interval (a, b). The trial points corresponding to any subsequent (l+1)-th iteration are chosen by the following decision rule:
1) represent the previous trial points x¹, ..., x^k, where

k = k(l) = p(1) + ... + p(l),   (1.10)

in the form of the ordered series (1.7);
2) determine μ from (1.6), compute the characteristics (1.4), (1.5), and order them in the form of the series (1.9);
3) execute the current p trials of the (l+1)-th step at the points x^{k+1}, ..., x^{k+p}, where

x^{k+j} = S(t_j),   (1.11)

t_j, 1 ≤ j ≤ p, are the first p indices from the series (1.9), and the function S is from (1.2). We take

p = p(l+1) = min{ k(l)+1, q },  l ≥ 1,   (1.12)

where q is a given constant, q ≥ 1. The trials are stopped when


x_t − x_{t-1} ≤ ε,   (1.13)

where t = t_1 from (1.9) and ε is a given nonnegative number (the accuracy). There is of course no need to perform all these computations from scratch in each step, and many quantities may be saved from step to step. Without going into computational details, we will now compare the efficiency of the sequential and parallel algorithms. Below, we also write the characteristics (1.4) in the form

R(i) = Δ_i (1 − δ_i/r)² − 4m_i/(rμ),   (1.14)

using the notation

Δ_i = x_i − x_{i-1},   δ_i = |z_i − z_{i-1}|/(μΔ_i),   m_i = min{z_{i-1}, z_i}.   (1.15)
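Under our reading of the reconstructed rules (1.1)-(1.9) and (1.11), one iteration of the decision rule can be sketched in code as follows. This is only a sketch, not the authors' implementation: the bisection of the two boundary subintervals (where one function value is undefined) and the fixed default for r are our assumptions, and with p = 1 the selection reduces to the sequential rule (1.1)-(1.3).

```python
def parallel_iteration(xs, zs, a, b, p=1, r=3.0):
    """Return up to p new trial points, one per best-ranked subinterval.

    xs: sorted interior trial points x_1 < ... < x_k in (a, b), k >= 1;
    zs: their function values z_i = phi(x_i).
    A sketch of rules (1.1)-(1.9), (1.11), not the authors' code.
    """
    k = len(xs)
    # (1.6): adaptive estimate of the Lipschitz constant from observed slopes.
    slopes = [abs(zs[i] - zs[i - 1]) / (xs[i] - xs[i - 1]) for i in range(1, k)]
    mu = max(slopes) if slopes and max(slopes) > 0 else 1.0
    candidates = []
    for i in range(k + 1):  # the k+1 subintervals of (a, b), per (1.7)
        x_lo = a if i == 0 else xs[i - 1]
        x_hi = b if i == k else xs[i]
        d = x_hi - x_lo
        if i == 0:    # (1.5): left boundary interval, z_0 undefined -> bisect
            R, s = 2.0 * d - 4.0 * zs[0] / (r * mu), (x_lo + x_hi) / 2.0
        elif i == k:  # (1.5): right boundary interval, z_{k+1} undefined -> bisect
            R, s = 2.0 * d - 4.0 * zs[-1] / (r * mu), (x_lo + x_hi) / 2.0
        else:         # (1.4) for the characteristic, (1.2) for the trial point
            dz = zs[i] - zs[i - 1]
            R = d + dz * dz / (r * r * mu * mu * d) - 2.0 * (zs[i] + zs[i - 1]) / (r * mu)
            s = (x_lo + x_hi) / 2.0 - dz / (2.0 * r * mu)
        candidates.append((R, s))
    candidates.sort(key=lambda c: -c[0])   # the descending series (1.9)
    return [s for _, s in candidates[:p]]  # first p terms -> p concurrent trials
```

Each returned point lies strictly inside its subinterval (the shift in (1.2) is at most d/(2r) in magnitude, since μ majorizes the slope over the interval), so the new trials never coincide with old ones.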

Theorem 2. Assume that in the conditions (1.12) and (1.13) we respectively have q ≥ 2 and ε = 0. Then the assertions of Theorem 1 hold for the trial sequence generated by the parallel algorithm.

Proof. 1. Suppose that the first assertion is false for a limiting point x̄, i.e. there exists an index j = j(k) such that x_j = x̄ and there is no subsequence that converges to the point x̄ from the left (the case without right convergence is examined similarly). Then two cases are possible: either there exists a number c such that for k > max(c, d) the trials do not fall in the interval (x_{j-1}, x_j) corresponding to the characteristic R(j(k)) from (1.4), or for any k > c the trials do not fall in the interval (x_0, x_1) = (a, x̄) corresponding to the characteristic from (1.5). In the first case,

R(j) = Δ_j(1 + α²/r² − 2α/r) − 4φ(x̄)/(rμ),   α = [z' − φ(x̄)]/(μΔ_j),

where z' is the function value at the left endpoint x_{j-1}, and, since |α| ≤ 1, it follows that

R(j) ≥ Δ_j(1 − r⁻¹)² − 4φ(x̄)/(rμ).   (1.16)

This inequality holds for the second case also. Similarly, for the interval (x_j, x_{j+1}) = (x̄, x_{j+1}) (we denote its number by t = j(k) + 1) we have R(t) = Δ_t(1 + β²/r² − 2β/r) − 4φ(x̄)/(rμ) with β = [z_{j+1} − φ(x̄)]/(μΔ_t), whence it follows that R(t) ≤ 4Δ_t − 4φ(x̄)/(rμ). Then, using (1.16), we obtain

[R(j) + 4φ(x̄)/(rμ)] / [R(t) + 4φ(x̄)/(rμ)] ≥ Δ_j(1 − r⁻¹)² / (4Δ_t).

Since Δ_t → 0 as k → ∞, then R(j) > R(t) for sufficiently large k, so that the assumption of no trials in the subinterval (x_{j-1}, x_j) leads to a contradiction with the decision rule based on the series (1.9).

2. The proof of the second and third assertions is a verbatim repetition of the corresponding argument from [2, p. 85].

3. Assume that the fourth assertion is false, i.e. the iteration d produced the value

z^d = φ(x^d) < φ(x̄).   (1.17)

Let j = j(k) be the index of the point x^d in the series (1.7) corresponding to the iteration k ≥ d, i.e. x_j = x^d, and consider the characteristic of the interval (x_{j-1}, x_j). When (1.4) is satisfied, using (1.14) and (1.17) we obtain

R(j(k)) ≥ −4φ(x^d)/(rμ) > −4φ(x̄)/(rμ).   (1.18)

The inequality (1.18) also holds for the case (1.5). Since φ(x)

is a Lipschitzian function, using the first assertion of the theorem we obtain that

R(t(k)) → −4φ(x̄)/(rμ)  for k → ∞,   (1.19)

where t = t(k) is the number of the interval that contains the point x̄ after the k-th trial. Comparison of the bounds (1.18) and (1.19) shows that the point x^d = x_j also should be a limiting point, because R(j(k)) > R(t(k)) for sufficiently large k. By (1.17), the point x^d has a neighbourhood W such that φ(x) < φ(x̄), x ∈ W, and this neighbourhood (for sufficiently large k)


contains intervals (x_{i-1}, x_i), each satisfying the condition

max{z_{i-1}, z_i} ≤ φ(x^d) + KΔ_i,

and, since inequality (1.8) holds, for sufficiently large k we obtain, using (1.14), the bound

R(i(k)) > −4φ(x^d)/(rμ),   (1.20)

which is also true for the cases i = 1 and i = k+1, when the characteristics are estimated by (1.5). Comparing (1.19), (1.20) and using the relationship φ(x^d) < φ(x̄), we obtain the inequality R(j(k)) > R(t(k)), whence it follows that x^d is a limiting point.

2. The Conditions for Non-Redundant Parallelism

According to (1.11), the proposed algorithm with p(l) > 1 is no longer a purely sequential method. But by (1.12) it is not fully parallel either. The level of parallelism is determined by the conditions (1.10), (1.12) and the function p(l). The efficiency of this parallelism can be estimated from the following considerations.

Let {x^k} and {y^m} be sequences of trial points generated respectively by the sequential and the parallel algorithm for the same function φ(x), x ∈ [a, b], with ε = 0 in the stopping rule (1.13) (infinite search). If the points of these sequences coincide, i.e.

{x^k} = {y^m},   (2.1)

the parallel algorithm performs trials at the same points as the purely sequential algorithm. This condition does not require that the sequences coincide element by element (i.e. it is not required that x^i = y^i, i ≥ 1).

Definition. When the condition (2.1) is satisfied, the parallelism is called non-redundant. If {x^k} ≠ {y^m}, but there exists a region W ⊂ (a, b) such that

{x^k} ∩ W = {y^m} ∩ W,   (2.2)

and the difference (a, b)\W contains only a finite number of points of the sequences {x^k} and {y^m}, the parallelism is called asymptotically non-redundant.

If the region W in (2.2) is some neighbourhood of the absolute minimum point, then asymptotic non-redundancy characterizes the efficiency of parallelism in the last phase of the search process. In order to characterize the redundancy of the parallel scheme in the initial search phase, we introduce the redundancy factor

t(m) = l(m)/m,   (2.3)

where

l(m) = card{ y^i : 1 ≤ i ≤ m, y^i ∉ {x^k} }   (2.4)

is the number of redundant points generated by the parallel scheme in m trials. This definition assumes the inclusion {x^k} ⊂ {y^m}. Case (2.1) clearly corresponds to t(m) = 0, m ≥ 1.
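As a concrete reading of (2.3) and (2.4), the redundancy factor can be computed by comparing the two trial sequences directly. This is a sketch; the tolerance-based point matching is our addition, since floating-point trial coordinates rarely coincide exactly.

```python
def redundancy_factor(x_seq, y_par, m, tol=1e-12):
    """t(m) = l(m)/m per (2.3): the fraction of the first m parallel trial
    points y^i that do not occur among the sequential trial points (2.4)."""
    l_m = sum(1 for y in y_par[:m]
              if not any(abs(y - x) <= tol for x in x_seq))
    return l_m / m
```

For example, if the parallel method produced one point (out of its first four) that the sequential method never visits, the factor is 1/4.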
In the theoretical analysis that follows, we assume that the function being minimized φ(x), x ∈ [a, b], is Lipschitzian with constant K, and that the parallel algorithm satisfies the conditions

p(1) = 1,   y¹ = x¹,   (2.5)

i.e. the initial trial points are the same for the parallel and the sequential methods. In (1.2), (1.4) and (1.5), μ is a constant majorizing K, and the choice of the parameter r ensures the sufficient condition of convergence (1.8), i.e.

μ ≥ K,   r > 2.   (2.6)

Theorem 3 (non-redundancy conditions for pair trials). Let q = 2 in (1.12). Then the following assertions hold: 1) if x*, x** are global minimum points of the function φ(x) and x* ≠ x**, then the parallelism is non-redundant; 2) if the global minimum point is unique and y¹ = (a + b)/2, then

t(m) ≤ E(m/6)/m ≤ 0.17,  m > 1,   (2.7)

where E(u) denotes the integer part of u.

Proof. 1. If after the m-th trial of the parallel method and the k-th trial of the sequential method we have the equality

(y_{t-1}, y_t) = (x_{j-1}, x_j),  t = t(m),  j = j(k),   (2.8)


and the current trials at the points y^{m+1} and x^{k+1} fall within these respective intervals, then

R(t(m)) = R(j(k)),   (2.9)

and by (1.1), (1.2) and (1.11)

y^{m+1} = x^{k+1},   (2.10)

because the conditions of the theorem and (2.6) assume constant μ and r. Condition (1.8) and Theorems 1 and 2 imply that the only limiting points of the sequences {x^k} and {y^m} are the absolute minimum points of the function φ(x), and thus the set of limiting points of the sequence {y^m} coincides with the set of limiting points of the sequence {x^k}.

Let the first k points of the sequence {x^k} be ordered by the scheme (1.7) and let j = j(k) be the number of the interval [x_{j-1}, x_j] containing some global minimum point x* of the function φ(x). By condition (1.8), the characteristic of this interval satisfies the bound (1.20), and, since x* is a limiting point, by (1.19)

R(j(k)) → −4φ(x*)/(rμ)   (2.11)

for k → ∞.

Hence, applying rule (1.3), we conclude that any interval (x_{i-1}, x_i), i = i(k), whose characteristic satisfies condition (1.20) contains at least one point of the sequence {x^k}. Conversely, any interval (x_{i-1}, x_i) such that

R(i(k)) < −4φ(x*)/(rμ)   (2.12)

does not contain any points of the sequence {x^k}. By (2.5) we have for m = k = 1

(y_0, y_1) = (x_0, x_1),   (y_1, y_2) = (x_1, x_2),

and so, by (2.8)-(2.10) and decision rule (1.11) based on the series (1.9), we obtain that

{x^k} ⊂ {y^m}.   (2.13)

The inclusion (2.13) makes it possible to estimate redundancy in terms of the ratio (2.3). Each redundant trial generated by rule (1.1) and counted in (2.4) corresponds to an interval (y_{i-1}, y_i) satisfying inequality (2.12).

2. By the conditions of the first assertion of the theorem, at each step k ≥ 1 there exist at least two intervals of the form

[x_{t-1}, x_t] ∋ x*,   [x_{l-1}, x_l] ∋ x**,

whose characteristics satisfy inequality (1.20) and, therefore, the choice of trial points by the rule (1.11) does not produce redundant points for q = 2.

3. Let us prove the second assertion. Return to the interval [x_{j-1}, x_j], j = j(k), containing the global minimum point x*. The current trial at the point x = x^{k+1}, falling in this interval, generates two new subintervals

[x_{j-1}, x],  [x, x_j],   (2.14)

one of which contains x*. Assume that this is the first of the two intervals, i.e.

x* ∈ [x_{j-1}, x].   (2.15)

We note that such a pair of intervals always exists for k = m = 1. From (2.15) we conclude that the inequality (1.20) holds for the interval [x_{j-1}, x]. We will show that this inequality also holds for the interval [x, x_j]. In order to estimate the characteristic R* of this interval, we use the formula (1.14), setting (see Fig. 1 and (1.15))

Δ* = x_j − x,   δ* = |z_j − z|/(μΔ*),   m* = min{z, z_j},

where z = φ(x^{k+1}). We start by estimating m*. Let z ≤ z_j. Then, by (1.2), (2.6) and (2.15),

m* ≤ z ≤ φ(x*) + K(x − x*) ≤ φ(x*) + KΔ_j.   (2.16)

For z > z_j, consider two cases, x* > x and x* ≤ x; in either case a similar argument again leads to the estimate (2.16).

Fig. 1.

Substituting the estimate (2.16) into formula (1.14) for the interval [x, x_j], we obtain after transformations

R* ≥ Δ*(1 − 4r⁻¹ − r⁻²) − 4φ(x*)/(rμ),

whence it follows that under the conditions of the theorem

R* > −4φ(x*)/(rμ).   (2.17)

The inequality (2.17) holds also when x_j = b, because in this case, by the Lipschitzian property of φ(x), we obtain z ≤ φ(x*) + KΔ*, which after substitution into (1.5) gives R* ≥ 2Δ*[1 − 2K/(rμ)] − 4φ(x*)/(rμ). Finally, (2.17) also holds for k = 1, because in this case z_1 = z¹ and by (1.5) R(1) = R(2), with one of the intervals [x_0, x_1], [x_1, x_2] containing the point x*.

Thus, if after k trials we find a pair of intervals of the form (2.14), then the simultaneous choice of two trials by the rules (1.9), (1.11) will not produce redundant points. If one of the trials in this pair falls within the interval (2.15), then a new pair of the form (2.14) is obtained. Therefore, the only source of redundant trials is the case when the current pair of trials contains a point from [x, x_j] but does not contain a point from the interval (2.15). In this case, the next pair may place one point in the interval (2.15) and one point in some interval (x_{i-1}, x_i) whose characteristic satisfies the condition (2.12). The latter point is a redundant trial. Outside the interval (2.15) there are no other intervals where inequality (1.20) holds, and therefore the following points y^m ∈ {x^k} may be located only in the interval (2.15), which already contains one trial point. Thus, we again obtain a pair of intervals of the form (2.14). This means that every three iterations with pair trials may contain at most one redundant trial, which proves the bound (2.7).

Theorem 4 (conditions of asymptotic non-redundancy). Let x* ∈ (a, b) be the unique absolute minimum point of the function and assume that there exists a neighbourhood W of this point such that

φ(x) = |x − x*|,  x ∈ W.   (2.18)

Then the sequence of trials generated by the parallel algorithm for q = 2, r > 2 and μ = K is asymptotically non-redundant.
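Returning to the bound (2.7) of Theorem 3, its numerical content can be checked directly: E(m/6)/m attains its maximum 1/6 ≈ 0.167 at multiples of 6, which is where the constant 0.17 comes from (a quick arithmetic check of ours, not part of the original proof).

```python
# E(u) is the integer part of u; verify E(m/6)/m <= 1/6 < 0.17 over a range of m.
ratios = [(m // 6) / m for m in range(1, 1001)]
assert max(ratios) == 1 / 6          # attained at m = 6, 12, 18, ...
assert all(t < 0.17 for t in ratios)
```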

Remark 2. The function φ(x), possessing some smoothness properties in the neighbourhood of the optimum, may be adaptively reduced to the form (2.18) during optimization (see [2]). The choice of the constant μ = K is of interest because adaptive estimation of μ by (1.6) also leads to this value, by (2.18) and by two-sided convergence to the point x*.

Proof. 1. Since x* is the unique limiting point of the sequences {x^k} and {y^m}, the number of elements of these sequences lying outside W is finite. At some step k the neighbourhood W includes a pair of intervals of the form (2.14) considered in the proof of Theorem 3. Here

0 < R(x_{j-1}, x) = R(x, x_j),

where the left inequality follows from (2.15), (1.20) and the equality φ(x*) = 0.

2. The current trial at the point u of the interval (2.15) again generates a pair of intervals of the form (2.14) with positive characteristics. If we assume that some trial is performed at the point u of the interval [x, x_j], where the function is linear, then by (1.2) and (1.4) we obtain the equality R(x, u) = R(u, x_j). Thus, a trial inside a linearity interval of the function generates two subintervals with equal characteristics.

3. Assume that in some step l there exists a number j = j(m) such that x* ∈ (y_{j-1}, y_j) ⊂ W, and the parallel algorithm in step (l+1) executed a redundant trial (note that for x* = y^n there can be no redundant trials in steps l > n/2, because two intervals containing x* exist). Then the first trial point y^{m+1} of this iteration is contained in the interval (y_{j-1}, y_j) and therefore produces a pair of the form (2.14), which contains the points u ∈ (y_{j-1}, y^{m+1}) and v ∈ (y^{m+1}, y_j) of the next, (l+2)-th iteration, because the characteristics of all other intervals are not positive. As a result, we obtain four intervals

(y_{j-1}, u),  (u, y^{m+1}),  (y^{m+1}, v),  (v, y_j),   (2.19)


two of which have positive characteristics and two have equal characteristics. For three of these intervals, a subsequent trial falling within one of them produces two new subintervals with equal characteristics (because the function is linear in three intervals from (2.19)). The trial in the remaining fourth interval, containing the point x*, produces two subintervals similar to the pair (2.14) with positive characteristics. Thus, after any (l+s)-th iteration, s ≥ 2, we have an even number of intervals with positive characteristics, which rules out the possibility of redundant trials.
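The paper's premise that each trial of a parallel iteration runs on its own processor can be sketched with a worker pool (our illustration, not the authors' setup; threads stand in for processors here, and a process pool would be the natural choice for an expensive, CPU-bound φ):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel_iteration(phi, points):
    """Evaluate the trial points of one parallel iteration concurrently and
    return (point, value) pairs in the original order (Executor.map
    preserves input order)."""
    with ThreadPoolExecutor(max_workers=max(1, len(points))) as pool:
        values = list(pool.map(phi, points))
    return list(zip(points, values))
```

This matches the cost model stated in the introduction: when one evaluation of φ dominates the fetch time and the decision-rule time, dispatching the p trials of a step to p workers gives close to a factor-of-p speedup per iteration.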


Fig. 2.

Numerical examples. As an illustration, Fig. 2 presents the results of minimizing a one-dimensional multi-extremum function by the sequential (q = 1) and parallel (q = 3) algorithms with r = 1.4 and ε/(b − a) = 10⁻³, where ε is the accuracy from the stopping rule (1.13); μ was estimated adaptively from (1.6). The function to be minimized is plotted in Fig. 2b (the smooth oscillating curve). The straight-line segments in Fig. 2c successively (from top to bottom) join the points corresponding to the pairs (x^k, k), (x^{k+1}, k+1), where x^k is the coordinate and k is the number of the iteration in the sequential method (we show the images of all the 42 trials executed before the algorithm stopped on condition (1.13)). Figure 2a is the image of the first 42 trials executed during 15 iterations of the parallel method (i.e. three processors produced almost a factor-of-three speedup). The points joined by broken lines correspond to triples of parallel trials. One trial is performed in the first step, two in the second step (see (1.12) and (2.5)). The sloping sections join (from top to bottom) the first trial points of every two iterations (in order of increasing iteration numbers from 1 to l + 1). The series of vertical bars in Fig. 2b identifies the points x_i, 1 ≤ i ≤ k, from (1.7) for the sequential method. The shaded bar represents a group of densely placed trials. The underlined bars are the trial points of the sequential method where no trials were executed by the parallel algorithm. The bars located above the entire series correspond to the trial points of the parallel method without matching trial points in the sequential method.

3. Generalizations to the Multidimensional Case

The proposed method can be generalized to minimization of a multi-extremum Lipschitzian function φ(y) in the hyperinterval

D = { y ∈ R^N : a_i ≤ y_i ≤ b_i,  1 ≤ i ≤ N }   (3.1)

by reduction of this N-dimensional problem to an equivalent one-dimensional problem [2]. The reduction technique uses a continuous single-valued map y(x) of the interval [0, 1] of the real line onto the region (3.1), i.e. a Peano curve map. By the continuity of y(x) and the Lipschitzian property of φ(y) we have the equality

min{ φ(y) : y ∈ D } = min{ φ(y(x)) : x ∈ [0, 1] },   (3.2)

where the one-dimensional function φ(y(x)) satisfies in [0, 1] the uniform Hölder inequality with exponent N⁻¹ and coefficient 4K√N, where K is the Lipschitz constant of the function

Table 1. (Minimization of the two-dimensional test function of [10] for various q: trial count m(q), iteration count l(q), speedup s and redundancy t; the scanned values are not reproducible.)

Table 2. (Combinations of outer and inner parallelism q_1, q_2 for the nested scheme (3.3): total trials m and solution time l in iterations; the scanned values are not reproducible.)

φ(y), y ∈ D (the value of the coefficient corresponds to the specific map y(x) for which the computational scheme is developed in [2]). The one-dimensional problem from the right-hand side of (3.2) can be solved by the parallel algorithm of Section 1, replacing the distances Δ_i in (1.4)-(1.6) with their N⁻¹-th powers Δ_i^{1/N} and the ratio |z_t − z_{t-1}|/μ in (1.2) with its N-th power. This modification, following [2], will be called a generalized algorithm. An analogue of Theorem 2 holds for the generalized algorithm. Program implementation of generalized algorithms is considered in [9].

Numerical examples. Table 1 presents the minimization of the two-dimensional test function from [10] on the unit square D. We used the generalized parallel algorithm for various q from (1.12) in combination with the reduction scheme (3.2). As y(x) we used the Peano-like piecewise-linear map from [2], which approximates the Peano curve with an accuracy of 2⁻¹² in each coordinate. In the algorithm we specified r = 1.7 and an accuracy of 10⁻³ in x from [0, 1] for the stopping rule (1.13). In Table 1, m = m(q) and l = l(q) are respectively the number of trials and the number of iterations executed by the q-processor parallel method until stopping; s = l(1)/l(q) is the acceleration achieved by the parallel method relative to the sequential method; t = max{0, [m(q) − m(1)]/m(q)} is the redundancy of the parallel method. The constant level lines of the minimized function in the region D are shown in Fig. 3; the points are the trials for the case q = 3.

Another dimension-reducing technique relies on the multistep minimization scheme of nested one-dimensional subproblems

min{ φ(y) : y ∈ D } = min over y_1 ... min over y_N of φ(y_1, ..., y_N),   (3.3)

each of which is Lipschitzian with a constant K common to all subproblems. Any of the nested one-dimensional problems can be parallelized by the algorithm of Section 1. For instance, for N = 2 the application of the parallel algorithm with q_1 trials in one iteration to solve the outer one-dimensional problem of minimization over y_1 produces q_1 inner one-dimensional problems of minimization over y_2, each of which may be solved by the same parallel algorithm with q_2 trials in one iteration; this results in the parallel execution of q_1·q_2 trials. As an illustration, Table 2 presents different combinations of levels q_1 and q_2 of outer and inner parallelism for minimizing the test function from [10] by the scheme (3.3). All the combinations satisfy the condition q_1·q_2 = q, where q is the total number of available processors. Note that the termination (by stopping rule (1.13)) of some inner subproblem of minimization over y_2 releases q_2 processors (some other subproblems of minimization over y_2 may still not be completed). These processors may be utilized to solve a new inner subproblem, chosen by analyzing the current series of characteristics (1.9) of the outer problem of minimization over y_1. Some processors may be temporarily waiting in view of condition (1.12), which implies that only one processor is used in the 1st iteration (see (2.5)), only two processors are used in the 2nd iteration, and so on, until iteration q_2, starting with which all the q_2 processors are used (until the algorithm stops). Similarly, parallelization of the outer subproblem first generates one inner subproblem, then two inner subproblems, and so on, until q_1 subproblems in each step, which also causes temporary waiting of some of the q = q_1·q_2 processors (when q_1 and q_2 are constant). The results in Table 2 correspond to an accuracy of 10⁻² in each coordinate (i.e. the same stopping accuracy for outer and inner problems).
Here, m is the total number of trials executed during the solution, and l is the time taken to solve the problem (3.3) (in iterations).
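The nesting structure of the scheme (3.3) can be sketched for N = 2 as follows (a Python sketch of ours; `solve_1d` stands for any one-dimensional global minimizer, which in the paper is the parallel algorithm of Section 1, while the crude grid search in the usage example below is only a stand-in):

```python
def nested_minimize(phi, bounds, solve_1d):
    """Two-level instance of the multistep scheme (3.3) for N = 2: the outer
    problem minimizes over y1 the optimal value of the inner problem over y2."""
    (a1, b1), (a2, b2) = bounds

    def outer(y1):
        # inner one-dimensional subproblem: min over y2 of phi(y1, y2)
        y2 = solve_1d(lambda t: phi(y1, t), a2, b2)
        return phi(y1, y2)

    y1_star = solve_1d(outer, a1, b1)
    y2_star = solve_1d(lambda t: phi(y1_star, t), a2, b2)
    return y1_star, y2_star
```

For instance, with a crude grid search as `solve_1d`, minimizing φ(y1, y2) = (y1 − 0.25)² + (y2 − 0.75)² on the unit square returns a point near (0.25, 0.75); every evaluation of `outer` launches one inner subproblem, which is exactly where the inner level of parallelism q_2 would apply.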

REFERENCES

1. VOEVODIN V.V., Mathematical Models and Methods in Parallel Processes, Nauka, Moscow, 1986.
2. STRONGIN R.G., Numerical Methods in Multi-Extremal Problems. Information-Statistical Algorithms, Nauka, Moscow, 1978.
3. GRISHAGIN V.A. and STRONGIN R.G., Optimization of multi-extremal functions subject to monotonically unimodal constraints, Izv. Akad. Nauk SSSR, Tekh. Kibern., 4, 203-208, 1984.
4. SUKHAREV A.G., Optimal Extremum Seeking Procedures, Izd. MGU, Moscow, 1975.
5. STRONGIN R.G. and MARKIN D.L., Minimization of multi-extremum functions subject to nonconvex constraints, Kibernetika, 4, 64-69, 1986.
6. MARKIN D.L. and STRONGIN R.G., A method for solving multi-extremum problems with nonconvex constraints using prior information about optimum estimates, Zh. vychisl. Mat. mat. Fiz., 27, 1, 52-62, 1987.
7. PINTER J., Global optimization algorithms: an axiomatic approach, in: 30 Internat. Wiss. Koll. TH Ilmenau 1985, Vortragsreihe "Math. Optimierung - Theorie und Anwendungen", 117-120.
8. EVTUSHENKO YU.G. and POTAPOV M.A., Methods for the numerical solution of multicriterion problems, Dokl. Akad. Nauk SSSR, 291, 6, 25-29, 1986.
9. STRONGIN R.G. and GERGEL' V.P., On the computer implementation of a multidimensional generalized global search algorithm, Voprosy Kibernetiki, 45, 59-66, 1978.
10. GRISHAGIN V.A., Operating characteristics of some global search algorithms, in: Topics in Random Search. Adaptation Problems in Technical Systems, Zinatne, Riga, 198-206, 1978.

Translated by Z.L.

U.S.S.R. Comput. Maths. Math. Phys., Vol. 29, No. 2, pp. 15-21, 1989. Printed in Great Britain. 0041-5553/89 $10.00+0.00 © 1990 Pergamon Press plc

STUDY AND NUMERICAL SOLUTION OF FINITE DIFFERENCE APPROXIMATIONS OF DISTRIBUTED-SYSTEM CONTROL PROBLEMS*

K.R. AIDA-ZADE

An approach based on a finite difference approximation is considered for obtaining computational expressions for the numerical solution of optimal control problems for systems with distributed parameters. The approach is illustrated by the study of particular problems, for one of which the results of a numerical solution are quoted.

Optimal control problems for systems with distributed parameters present great difficulties, particularly because the boundary value problems for the partial differential equations that describe the operation of the object are themselves very difficult to solve even when present-day computers are used. The most powerful method for solving such problems is to use the necessary conditions in the form of a maximum principle; in many problems, however, there are theoretical difficulties in obtaining these conditions [1, 2]. While attention has been paid in recent years to direct methods of solving optimal control problems [3, 4], the resulting optimization problems have high dimensionality, so that it becomes difficult in principle to apply to them directly the numerical methods of mathematical programming (m.p.).

In this paper we reduce optimal control problems for distributed systems to finite difference problems, and give computational expressions such that efficient first-order m.p. methods can be used. It should specially be noted that the approximations of the direct and conjugate problems satisfy certain matching conditions as a natural result of our approach (a similar situation is familiar in control problems for lumped systems [4, 5]). Our studies are on the one hand an extension of the results of [2] to the case of distributed systems, and on the other hand are allied to problems concerned with the differentiation of finite-dimensional functions under constraints of the equation type (see e.g. [6, 7]).

1. Formulation of the Problem and its Finite-Dimensional Approximation

Let the operation of the controlled object be described by a non-linear system of partial differential equations of the evolution type:

∂u^i/∂t = f^i(∂u¹/∂x, ..., ∂^p u^n/∂x^p, v¹, ..., v^m, u¹, ..., u^n, x, t),  i = 1, 2, ..., n,   (1)

*Zh. vychisl. Mat. mat. Fiz., 29, 3, 346-354, 1989.