Automatica, Vol. 1, pp. 111-121. Pergamon Press, 1963. Printed in Great Britain.

RANDOM SEARCH TECHNIQUES FOR OPTIMIZATION PROBLEMS

DEAN C. KARNOPP

Massachusetts Institute of Technology, Cambridge, Mass.

Summary--The attributes of flexibility and efficiency attainable with random search techniques are discussed, as well as a scope of application ranging from uses involving high speed general purpose computers to extremely simple adaptive devices.

1. INTRODUCTION

ONLY IN recent years has serious study been given to the concept of applying random search techniques to general engineering problems. The applications range from simple devices, hardly worthy of the name computer, which use random methods to avoid complex logical structures, to techniques for sophisticated general purpose computers which use random searching in the interest of efficiency [1-13]. It is the purpose of this paper to indicate that there is an entire spectrum of applications in which random searching methods offer great and novel advantages over more traditional methods.

The particular type of problem to be discussed here will be called an optimization problem and is characterized by a search, conducted in a multidimensional space,

    x = (x_1, x_2, ..., x_n)    (1)

for a set of values of the x parameters which yield an absolute extreme (maximum or minimum) of a criterion or reward function,

    C(x) = C(x_1, x_2, ..., x_n)    (2)

In fact, either the location in the x space which yields the extreme of C, say C_max, or the value of C_max itself may be of primary concern [11] (Section 2.2.2). Such problems occur in a variety of contexts, as this short list of examples will demonstrate.

1. Extremal or adaptive control devices or automatic optimizers are often constructed as maximum seeking devices [7-9, 11-15].

[1] W. R. ASHBY, Design for a Brain, 2nd Ed. John Wiley, New York (1960).
[2] H. S. AVERETTE, Mech. Engng. Department Bachelor's Thesis, M.I.T. (1962).
[3] S. H. BROOKS, Oper. Res. J. 6, 244 (1958).
[4] D. T. CAMPBELL, Self Organizing Systems (Eds. M. C. YOVITS and S. CAMERON). Pergamon Press, London (1960).
[5] V. K. CHICHINADZE, Proc. 1st Congr. Int. Fed. Automat. Contr., Moscow (1960); Butterworths, London (1960).
[6] R. F. FAVREAU and R. FRANKS, Proc. of the 2nd Int. Analog Computation Meeting, Presses Academiques Europeennes, Brussels (1959).
[7] A. A. FEL'DBAUM, Avtomat. i Telemek. 19 (8), 731 (1958).
[8] A. A. FEL'DBAUM, Vychislitel'nye Ustroistva v Avtomaticheskikh Sistemakh, Moscow (1959).
[9] R. HERSCHEL, Electron. Rechenanlagen 3, 30 (1961).
[10] R. HOOKE and T. A. JEEVES, Assoc. Comp. Mach. J. 8, 212 (1961).
[11] D. C. KARNOPP, Mech. Engng. Dept. Doctoral Thesis, M.I.T. (1961).
[12] L. A. RASTRIGIN, Avtomat. i Telemek. 21 (9), 1264 (1960).
[13] I. M. VITENBERG, Voprosy Teorii Matematicheskikh Mashin, Moscow, Sbornik 1, 149 (1958).
[14] C. S. DRAPER and Y. T. LI, Principles of Optimalizing Control Systems and an Application to the Internal Combustion Engine, Amer. Soc. Mech. Engrs. (1951).
[15] E. MISHKIN and L. BRAUN, Jr., Eds., Adaptive Control Systems, pp. 108-110. McGraw-Hill, New York (1961).


2. In almost every conceivable design or synthesis problem, the parameters of a system are varied in order to attain near optimum performance. In some cases the terms "parameter" and "function" must be allowed to encompass non-numerical quantities or concepts. However, insofar as the design is based on rational grounds, the logical structure of the problem is the same as for the mathematical problem.

3. Problems of mathematical programming are optimization problems in which the emphasis is placed on constraints which the x parameters must satisfy as well as on the criterion function. Linear and, recently, quadratic programming problems have received much study.

4. Many standard computer techniques can be cast into the form of optimization problems. For example, some boundary value problems can be solved as initial value problems in which some of the initial conditions are varied during repeated solutions until the difference between the values of the solution at the boundary and the desired boundary conditions approaches a minimum of zero.

2. PROBLEM FORMULATION

Despite the ubiquitous nature of optimization problems, seemingly small differences in the formulation of the problems often drastically affect the appropriate solution techniques. Areas of study such as the Design of Experiments, Search Theory in Operations Research, Decision Theory, Linear Programming, Root Finding Methods and even Game Theory often treat special forms of optimization problems, but the techniques vary considerably. A few of the differences which can exist between problems are discussed below.

FIG. 1. (a) x_1 takes on five discrete levels, x_2 varies continuously between the limits g_1(x_1, x_2) <= 0, g_2(x_1, x_2) <= 0. (b) The search is to be restricted to the heavily shaded region satisfying g_1(x_1, x_2) <= 0, g_2(x_1, x_2) <= 0. The larger simply bounded space shown lightly shaded may be used in the search if the search scheme is arranged so that trial points falling outside of the heavily shaded region are recognized and discarded.

1. Parameter space

Some of the important aspects of the parameter space x concern the number n of parameters, the nature of the parameters--continuous or discrete valued--the nature of the bounds on the space to be searched, and any restrictions on successive points in x for trial evaluations of C. In Fig. 1 some examples of two-dimensional spaces are shown.

2. Nature of the criterion function

Perhaps the most commonly assumed characteristic for the criterion function of eqn. (2) is the property of continuity with respect to continuously variable parameters. For many computational schemes, of course, mathematical continuity itself may not be sufficient, and smoothness with respect to small but finite increments in x is the property actually desired. This property is affected by the scaling of the function. Most deterministic maximum seeking methods, such as gradient or local curve fitting methods, are based on concepts of differential calculus and hence on the property of continuity.


Another crucial distinction exists between monomodal and multimodal functions. In reality, most deterministic techniques are refinement techniques rather than search techniques, since small step methods can, in themselves, find only the top of a local peak. Hence such methods must be combined with some sort of true search procedure if a search for the absolute maximum of a multimodal function is required. When a true search is necessary, the dimension of the problem, n, assumes great significance [11, 16].

In those cases in which the value of C_max is known a priori, the formulation of a criterion for terminating the search is much simplified. This is analogous to the formulation of a stop rule in decision theory [11, 17].

Finally, in cases such as adaptive control, the function being searched may vary spontaneously as the search progresses [1, 8, 11, 15]. In such cases particularly, there is a trading of value between search time and accuracy, such that an approximate solution to the optimization problem quickly found may be more valuable than a highly accurate solution found only after an extended period of time.

3. Operations involved in the evaluation of the criterion function

The criterion function may be a relatively simple algebraic function which can be quickly evaluated by a computer. In other cases, time consuming operations are required for the evaluation of C. For a nonlinear control problem one may have to find solutions to complicated differential equations and perform operations on these solutions in order to evaluate C for each choice of the x parameters. In physical experiments long periods are often required before the effect of a change in parameters can be measured. When there is excessive noise in trial evaluations of the function, as in medical or agricultural experiments, repeated trials at the same point in x are necessary. In such cases only very simple spaces can be searched [18].

The random search techniques to be illustrated here will be predicated on a criterion function which is not time varying, which can be evaluated accurately, and which requires a true search to find an absolute rather than merely a relative extreme.

3. GENERAL RANDOM SEARCH TECHNIQUES

Before discussing random search techniques, it is appropriate to note that the class of random search techniques includes the class of deterministic search techniques as a special case. Thus, refinement techniques can be considered methods for utilizing information about past trial evaluations of C to choose the next point in x for a trial. Actually there are many ways to introduce various degrees of determinism into a random search by varying the probability distribution associated with trial points in x, and the purely deterministic schemes are merely extreme examples of this procedure.

1. Pure random searches

Some of the aspects of random methods can be studied by restricting the discussion temporarily to pure random searches; i.e., searches in which the trial points in x are chosen by means of a random process which does not change from trial to trial. Such a method is the antithesis of a purely deterministic search.

First of all, a mechanism must be established for choosing the trial points in the x space. The random process can be defined by a probability density function, p(x), expressing the probability of choosing any particular point in x. In the digital computer field the problem

[16] R. BELLMAN, Dynamic Programming, pp. ix, 6. Princeton University Press, Princeton (1957).
[17] A. WALD, Statistical Decision Functions. John Wiley, New York (1950).
[18] W. G. COCHRAN and G. M. COX, Experimental Designs, 2nd Ed. John Wiley, New York (1957).


of generating random distributions has received much study and there are a number of standard techniques and programs available. These techniques are generally based on a pseudo-random number generator which establishes a string of numbers in the interval (0, 1). This string of values can then be scaled and transformed in order to produce an approximation to any desired distribution. In the case of an analog computer or device, the random numbers can be obtained by sampling independent noise sources or even by sampling wave forms generated by independent oscillators. The oscillators and the sampling process must clearly be as nearly asynchronous as possible. We have found this to be perfectly practical for two parameter problems, although the difficulty of detecting hidden periodicities in multiparameter problems would suggest that the use of true noise sources would be a safer source of random numbers [11].

Given a scan distribution p(x), the distribution of values of C(x) obtained when the trials in x are conducted according to p can be imagined. The operation of finding F(C), the probability that on any trial the value of the function will be less than or equal to C, given p(x) and C(x), is as follows:

    F(C) = ∫∫ ... ∫ p(x_1, x_2, ..., x_n) dx_1 dx_2 ... dx_n    (3)

where the integration extends over all x such that the value of the function <= C.
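The scaling and transforming of a uniform pseudo-random string mentioned above can be sketched by inverse-transform sampling. The exponential target density and the parameter lam below are illustrative choices, not from the paper.

```python
import math
import random

# Transform uniform pseudo-random numbers on (0, 1) into samples from a
# desired scan density by inverting its cumulative distribution. Here the
# target is an exponential density p(x) = lam * exp(-lam * x), whose
# inverse CDF is -ln(1 - u) / lam.
def exponential_sample(u, lam=2.0):
    return -math.log(1.0 - u) / lam

random.seed(0)
samples = [exponential_sample(random.random()) for _ in range(100_000)]
mean = sum(samples) / len(samples)   # should approach 1/lam = 0.5
```

The same recipe applies to any density whose cumulative distribution can be inverted, which is why a single uniform generator suffices for a wide variety of scan distributions p(x).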

F(C) can be defined for arbitrary C and p but in general cannot be known a priori.

A simple and powerful pure random search technique is to conduct a sequence of trials according to p and to store only the largest value of C found on all previous trials, C*, and the location in x which produced this value, x*. After each trial a single comparison is made between the current value of C and C*. The distribution function for C* after m trials, F_m(C*), is simply given by the expression

    F_m(C*) = (F(C*))^m    (4)

FIG. 2. (a) Typical scan density function, p(x). (b) Resulting distribution functions, F_m(C*).

In Fig. 2 a typical p(x) and resulting F_m(C*) are shown. From considerations of F_m(C*) the convergence of C* to C_max can be studied [11, pp. 53-61]. Clearly, as far as the convergence of C* is concerned, only the shape of F(C) is important. Considerations such as the number of dimensions of the search space n or the peakedness or flatness of C(x) enter the problem by affecting F(C). For example, simple second order peaks yield F's which result in fast convergence of C* when n is a small number, but when n is increased, the F's change their shape and convergence is slower. See Fig. 3.

FIG. 3. A study of the shapes of F(C) for parabolic peaks in n dimensions with a pure, equiprobable random scan p(x).

A peculiarity of C* convergence may be stated as follows: let C* be the highest value of C found during m_1 trials conducted according to a pure random scan and let P be the probability of finding one or more values of C which exceed C* during m_2 succeeding trials. Then

    P = m_2 / (m_2 + m_1)    (5)

The importance of eqn. (5) lies in the fact that the distribution function F(C) does not appear [2, 19]. Thus, there are some aspects of pure random searches, such as the mean number of trials between improvements in C*, which do not vary in the slightest from problem to problem.

The convergence of x* to the location which produces C_max is somewhat more difficult to discuss than the convergence of C* to C_max. As shown in Fig. 4, even for continuous C these two aspects of convergence need not be at all strongly related until the search process has approached the location and value of C_max very closely. One statement that can be made is that the error between x* and the actual location of C_max will be of the order of the trial spacing in the vicinity of x* or greater. If the function is smooth and if sufficient trials have been performed to locate the absolute peak, then this lower bound on the location error will actually be attained.

[19] E. J. GUMBEL, National Bureau of Standards Applied Mathematics Series 33, p. 9 (1954).
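The C*-storing search and the distribution-free property of eqn. (5) can be checked numerically. The quadratic criterion and uniform scan below are arbitrary stand-ins; any criterion with a continuous F(C) should give the same estimate.

```python
import random

# Pure random search: trials drawn from a fixed p(x); only the best
# value C* and its location x* are stored.
def pure_random_search(C, sample_x, trials, rng):
    x_best, c_best = None, float("-inf")
    for _ in range(trials):
        x = sample_x(rng)
        c = C(x)
        if c > c_best:            # single comparison per trial
            x_best, c_best = x, c
    return x_best, c_best

# Empirical check of eqn. (5): the chance that m2 further trials improve
# on the best of m1 earlier trials should be m2 / (m1 + m2), independent
# of the particular criterion function searched.
rng = random.Random(42)
C = lambda x: -(x - 0.3) ** 2           # arbitrary smooth criterion
draw = lambda r: r.uniform(-1.0, 1.0)
m1, m2, improved = 30, 10, 0
for _ in range(20_000):
    _, c1 = pure_random_search(C, draw, m1, rng)
    _, c2 = pure_random_search(C, draw, m2, rng)
    improved += c2 > c1
p_est = improved / 20_000               # eqn (5) predicts 10/40 = 0.25
```

Replacing the criterion or the scan distribution leaves p_est essentially unchanged, which is precisely the point of eqn. (5).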


A pure random search is a rational procedure in many cases. For example, no matter how discontinuous C may be, an F(C) exists and the scheme will converge, in C* at least, as discussed above. Also, in the beginning of a search over a continuous C a pure random search is a simple and effective way to find the absolute peak, after which any of numerous refinement techniques may be applied.

FIG. 4. (a) A good approximation to the value of C_max but a poor approximation to its location in x. (b) A good approximate location for C_max but a poor approximation to its value.

In searching a complex criterion function defined on a space of high dimension, a great many unsuccessful trials must be made. This "curse of dimensionality" [16] is a fundamental problem in a great many fields and cannot always be circumvented. Indeed, the extremely rapid development of many fields following a long sought, successful pioneering effort testifies to the fact that the finding of a good approximate solution to a problem is usually much more difficult than refining that solution.

2. General random methods

There are many ways to introduce an element of determinism into a random search. As a simple example, consider a narrow range scan density function p(x, x*) which is always centered on the current x*. Assuming C to be smooth and C* not to be a relative maximum, there will be points in the vicinity of x* which will yield a somewhat higher value of C than the current C*. In fact, if p(x, x*) covers such a small area in x that C approximates a hyperplane in this region, then the probability of bettering C* on any trial approaches 50 per cent. These frequent, small changes in C* and x* result in a drunken walk roughly along the steepest ascent path to the nearest relative maximum. Thus, without the necessity of computing approximations to partial derivatives, the process behaves like a gradient method of maximization. The random process has the advantages of simplicity, insensitivity to discontinuities in slope of C or in C itself, and an easily adjustable parameter, the range of x, which can be adjusted as the process proceeds in order to accelerate the rate of convergence. In Section 5.3 a program for a self-adjusting scan of this sort is discussed which has proved remarkable both for its simplicity and its effectiveness in attaining near optimum convergence rates on a variety of problems.

Efficient schemes can be devised in which a narrow, moving mean scan is combined with a wide, pure random scan in such a way that in the beginning of the search the emphasis is on the wide range search and later the emphasis shifts to the refinement process. It can be demonstrated that the improvement per trial can be optimized by a process such as this [11]. It is an exercise of the imagination to devise schemes for using essentially local information, such as the current x* in the above example, for speeding convergence to a relative maximum. What is important is that good results can be achieved by very simple means.
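The narrow range, moving mean scan described above can be sketched as follows; the test function, scan range, and trial count are illustrative assumptions, not values from the paper.

```python
import random

# A narrow-range, moving-mean scan: each trial is drawn from a small
# neighbourhood of the current best point x*, and x* moves whenever a
# trial betters C*. No derivatives are computed, yet the walk drifts
# roughly along the steepest ascent path.
def moving_mean_scan(C, x0, scan_range=0.2, trials=2000, rng=None):
    rng = rng or random.Random(0)
    x_best = list(x0)
    c_best = C(x_best)
    for _ in range(trials):
        x = [xi + rng.uniform(-scan_range / 2, scan_range / 2)
             for xi in x_best]          # trial point near the current x*
        c = C(x)
        if c > c_best:                  # accept only improvements
            x_best, c_best = x, c
    return x_best, c_best

# Climb a smooth two-parameter peak with maximum at (1, -2).
C = lambda x: -((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)
x_star, c_star = moving_mean_scan(C, [0.0, 0.0])
```

Because only improving trials move x*, discontinuities in C or its slope do not disturb the scheme, in contrast to gradient methods.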


Of even greater interest is the possibility, unique to true search procedures, of utilizing global information obtained during a wide range search. For example, during a pure random search it is possible, using extremely limited storage facilities, to construct a histogram which is an approximation to F(C) or to the corresponding density function,

    f(C) = dF(C)/dC    (6)

if the derivative exists. (See Fig. 5.)

FIG. 5. Histograms plotting N, the number of trials yielding C values within certain bracketed ranges, vs. C for (a) a relatively flat C(x) and (b) a sharply peaked C(x).

Such histograms can be regarded as efficient means for condensing data obtained on many trial evaluations of a multidimensional function. Not only does the histogram provide a measure of the range of C, and hence some estimate of the losses associated with a failure to achieve the optimum, but also it aids in the recognition of a lucky strike in which a value of C is recorded which is unusually high relative to the average. This will usually mean that a refinement technique may be profitably instigated.

Another type of global information concerns the "average wavelength" of multiply peaked functions. This may be estimated using space correlation techniques [11]. Also, techniques can be developed for detecting ridges or plateaus in C. In any case, global information obtained from a pure random search helps to settle the vexing and often crucial question concerning the decision to shift emphasis from true searching to the relatively rapid final refinement.

4. ADVANTAGES OF RANDOM SEARCH TECHNIQUES

The following is a brief listing of some of the advantages associated with random search techniques.
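The limited-storage histogram can be sketched with a handful of fixed counters; the bin edges and the sharply peaked criterion below are illustrative choices.

```python
import math
import random

# Build a fixed-storage histogram of trial C values during a pure random
# search -- an approximation to the density f(C) of eqn. (6) using only
# a small, fixed number of counters.
def histogram_of_C(C, sample_x, trials, c_lo, c_hi, bins, rng):
    counts = [0] * bins
    width = (c_hi - c_lo) / bins
    for _ in range(trials):
        c = C(sample_x(rng))
        k = int((c - c_lo) / width)
        k = min(max(k, 0), bins - 1)    # clamp out-of-range values
        counts[k] += 1
    return counts

rng = random.Random(7)
C = lambda x: math.exp(-25.0 * x * x)   # sharply peaked criterion
draw = lambda r: r.uniform(-1.0, 1.0)
counts = histogram_of_C(C, draw, 10_000, 0.0, 1.0, 10, rng)
# Most trials fall in the lowest-C bin; values near C_max are rare,
# so a trial landing in the top bin registers as a "lucky strike".
```

The storage cost is the bin count alone, regardless of the dimension of x or the number of trials condensed.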

1. Ease of programming

Good results can be attained with simple programs on general purpose computers. Other devices with primitive logical structures can be made into automatic optimizers using random methods. This has been done on an analog computer, for example [6, 11, pp. 143-148].

2. Inexpensive realization

The random variables needed for random methods usually need only be pseudo-random and can thus be generated in inexpensive ways. Also, many of the methods need only very simple storage and comparison facilities. Thus an inexpensive device which can evaluate C rapidly can often compete favorably with much more complex devices. When differential equations must be solved to evaluate C, for example, an analog computer using random search techniques can compare favorably with a very expensive digital machine.

3. Insensitivity to type of criterion function

Since random methods need not rely on continuity of C, the emphasis on artful contrivance of criteria for analytical or computational convenience is reduced.


4. Efficiency

Many times a highly random search scheme will be found to be more efficient than a highly deterministic one. In many cases time spent in elaborate schemes for deciding which point to try next, in zoning off relative maxima, etc. is better spent in extra evaluations of C. A demonstration of this appears in [11, p. 63]. Further, if one considers the simplicity and universality of the random methods and the inherently short set up time in the definition of efficiency, then the disparity between random and deterministic schemes can be even greater.

5. Flexibility

Even simple random methods possess the flexibility to vary from pure random to highly deterministic depending upon the problem at hand and the stage of the search. Thus if good behavior is not attained on the first try, it is a simple matter to vary the strategy by a simple variation of the program of the search.

6. Information provided and used

Random methods are capable of providing information about the function being searched and using this information to guide the search. While this information may be of the local type most often used in deterministic schemes, it also may concern the overall pattern of the function. Up to the present, the use of such information has been limited to functions which could be graphed, as by contour plots, and perceived by the human eye. The value of such information has been recognized for some time [20, 21], but until recently there have been few suggestions for attempting to find this type of information in the case of multidimensional functions [11].

5. AN EXAMPLE PROBLEM

The example below will serve to demonstrate that even in the case of simple systems with analytically tractable criterion functions true searches may be required, and that the random techniques discussed above provide simple and effective means for accomplishing such searches.

1. Problem statement

The system shown in Fig. 6 is the same one used by Truxal to point out the difficulties associated with multiparameter system design [15], except that we shall consider multiple inputs, N_1, N_2, N_3, and outputs, C_1 and C_2. The optimization problem will involve adjusting the active element gains x_1 and x_2 so as to minimize a measure of the noise at C_1 and C_2 due to possible noise inputs at N_1, N_2 or N_3.

FIG. 6. Block diagram for example problem.

[20] D. R. HARTREE, Numerical Analysis, p. 209. Clarendon Press, London (1952).
[21] F. A. WILLERS, Practical Analysis, pp. 212, 225. Dover Publications, New York (1948).


While there are six transfer functions involved, they all can be expressed in terms of the primary transfer functions and one derived function H(s):

    C_1/N_1 (s) = x_1 G_1 H;   C_1/N_2 (s) = x_1 H;   C_1/N_3 (s) = H
    C_2/N_1 (s) = x_1 x_2 G_1 H;   C_2/N_2 (s) = x_1 x_2 H;   C_2/N_3 (s) = x_2 H

where

    H(s) = G_2 / (1 + x_1 G_2 + x_2 G_2 + x_1 x_2 G_1 G_2)
         = (s^3 + s^2 + 3) / (s^3 + (1 + x_1 + x_2) s^2 + (1 + x_1 + x_2) s + 0.5625 + 3 x_1 + 3 x_2 + 2 x_1 x_2)

For illustrative purposes, we will choose the simple case outlined below:

1. N_1, N_2 and N_3 will represent possible injections of white noise of unit power spectral density.

2. The over all criterion will be based on the mean square value of the noise appearing at C_1 or C_2 due to noise injected at any one of the three possible inputs. In particular, the criterion will weight equally and additively the mean square of the noise component at each C_i due to each N_j.

FIG. 7. Contour plots of C(x_1, x_2) = constant for example problem.


Thus the criterion function to be minimized, C, is given by the expression

    C(x_1, x_2) = Σ_{i=1}^{2} Σ_{j=1}^{3} I_ij    (7)

where I_ij represents the mean square noise at C_i due to noise at N_j. The values of I_ij can be found from a table of integrals, as given, for example, in [22].

2. Characteristics of the problem

It is interesting to note that each of the sub criteria, I_ij, can be made to assume an absolute minimum of zero by setting the parameters x_1 and x_2 to zero or infinite values. The problem at hand is typical of real optimization problems in that the subcriteria have relatively trivial solutions for the optimum parameters which are clearly contradictory. Thus the over all criterion must express a trading between antagonistic goals.

The C(x_1, x_2) for the case at hand has been computed using a deterministic raster scan and a contour plot is shown in Fig. 7. Some features of this function are noted below:

1. Although C(x_1, x_2) is nearly symmetric in x_1 and x_2, it is not exactly so, and the minimum in the vicinity of x_1 = 0, x_2 = 2.75 is the absolute minimum.

2. In the region shown, two stability limits appear, one circular and the other hyperbolic.

3. The shaded regions represent values of x_1, x_2 for which the system is unstable, and it must be recognized that the formulae used to evaluate C are not meaningful for unstable systems. Indeed, if this restriction is not taken into account, values for x_1, x_2 can be found for which C approaches negative infinity.

The shaded regions are typical of regions in the search space which must be carefully dealt with in order to avoid spurious results. As indicated above, there are many ways to handle such regions, either by performing tests after choosing trial x_1, x_2 values in order to decide whether the trial lies in a forbidden region, or by modifying the criterion function itself so that the search scheme will automatically shun the regions.

The point of this example is that even in what appears to be a simple situation involving a linear system and a simple type of criterion, a fairly complicated type of true search is required if one is to have any confidence that the optimum parameters have been found.
In more realistic cases in which complicated systems with possibly discontinuous criteria may be involved and in which more than two parameters are to be searched, true search techniques such as those discussed above are the only rational methods for attacking the optimization problem.
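The first of the two strategies for forbidden regions, testing trial points and discarding those that fail, can be sketched as below. The feasibility test, search region, and quadratic criterion are stand-in assumptions for illustration, not the actual stability boundaries or noise criterion of the example.

```python
import random

# Random search for a minimum in which trial points falling in a
# forbidden region (e.g. an unstable zone of the parameter plane) are
# recognized and discarded before the criterion is ever evaluated.
def search_with_rejection(C, feasible, trials, rng):
    best_x, best_c = None, float("inf")
    done = 0
    while done < trials:
        x = (rng.uniform(0.0, 4.0), rng.uniform(0.0, 4.0))
        if not feasible(x):
            continue                     # recognized and discarded
        done += 1
        c = C(x)
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c

feasible = lambda x: x[0] + x[1] > 1.0   # stand-in stability boundary
C = lambda x: x[0] ** 2 + (x[1] - 2.75) ** 2
rng = random.Random(3)
x_star, c_star = search_with_rejection(C, feasible, 3000, rng)
```

The alternative strategy would fold a large penalty into C for infeasible points, at the cost of evaluating the criterion where its formulae may be meaningless.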

3. Solution methods

The first step in the attack of a problem is usually a pure random search. The simplest p(x), namely p(x) = constant over a rectangular region in x, can be used in the first exploratory stages. With the aid of a histogram routine, one can often avoid mistakes in searching an inappropriate region in the parameter space and get a good idea of the scale of variations of C with a modest number of trials.

Although the pure random search will eventually reach the absolute extreme of C, one will wish to switch to a refinement scheme at some stage in order to find some relative extreme to good accuracy in the total allowable time. If there is no knowledge about the number of relative extremes, then the pure random search should consume most of the allotted time in an attempt to locate the absolute peak. If, on the other hand, one were able to make an informed guess that the problem has, as in the example, only two extremes, it would be wise to locate one of these using a rapid refinement

[22] W. W. SIEFERT and C. W. STEEG, Jr., Eds., Control Systems Engineering, p. 951. McGraw-Hill, New York (1960).


scheme and then to search for the other. Thus, common sense and the state of one's partial knowledge about the problem at hand combine to enable one to solve a variety of problems which seem theoretically almost unsolvable.

A number of negative results, such as that implied by eqn. (5) and those discussed in the fundamental work of KIEFER [23], make it impossible to give firm rational rules for switching from true search techniques to refinement techniques. In engineering practice this dilemma is resolved by considerations of the economic worth of the solution compared to the cost of extended searching.

Once the decision to embark on a refinement scheme has been made, the search problem is much simplified. The literature abounds with so-called optimum methods for finding relative extremes of certain classes of functions. We give below a simple and universal scheme based on random searching methods which has proved quite effective [2]. A narrow range, moving mean scan, p(x, x*), is centered on x* and the range of possible trial values of each parameter is constrained as follows:

    x_i* - r/2 < x_i < x_i* + r/2    (8)

Generally, the scan range, r, should be just large enough so that the scan just overlaps the position of the relative maximum [11]. If r is too small, the probability of finding a trial C which betters C* approaches 0.5 but the improvement in value is very small. On the other hand, if r is too large the improvements in value are generally larger but are very infrequent. Thus a simple rule for adjusting r is as follows:

    If an improvement in C* occurs in two or less trials, increase r by some factor.
    If no improvement in C* occurs for, say, three trials, decrease r by some factor.    (9)

Using rule (9), the scan range cannot be very far from the optimum for long. Of course there is some judgment involved in picking the numbers and factors in rule (9), and these parameters should be varied from problem to problem, but the scheme is not sensitive to these choices for most problems because of the self-correcting nature of the rule. Further examples and discussion of this scheme can be found in [2].

Depending on the state of one's knowledge about the function, problems such as that represented by Fig. 7 can be solved using efficient random search methods for both search and refinement. The simplicity and speed of such programs is often dramatic when compared with the traditional alternatives.

[23] J. KIEFER, J. Soc. Industr. Appl. Math. 5, 105 (1957).
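A minimal sketch of the moving mean scan of eqn. (8) governed by adjustment rule (9) follows. The growth and shrinkage factors, trial counts, and test function are example choices; as noted above, the scheme is not sensitive to them.

```python
import random

# Moving-mean scan of eqn. (8) with the self-adjusting range rule (9):
# widen r after quick improvements, narrow it after a run of failures.
def self_adjusting_scan(C, x0, r0=1.0, trials=3000, rng=None):
    rng = rng or random.Random(0)
    x_best, c_best, r = list(x0), C(x0), r0
    since_improvement = 0
    for _ in range(trials):
        x = [xi + rng.uniform(-r / 2, r / 2) for xi in x_best]  # eqn (8)
        c = C(x)
        if c < c_best:                      # minimization
            x_best, c_best = x, c
            if since_improvement <= 1:      # improved within two trials
                r *= 1.5                    # rule (9): widen the scan
            since_improvement = 0
        else:
            since_improvement += 1
            if since_improvement >= 3:      # three trials, no improvement
                r *= 0.7                    # rule (9): narrow the scan
                since_improvement = 0
    return x_best, c_best

# Refine toward the minimum of a smooth bowl at (0, 2.75), mimicking
# the location of the absolute minimum in the example problem.
C = lambda x: x[0] ** 2 + (x[1] - 2.75) ** 2
x_star, c_star = self_adjusting_scan(C, [2.0, 0.0])
```

Because r grows and shrinks geometrically, a poor initial choice of r0 is corrected within a few dozen trials, which is the self-correcting behavior claimed for the rule.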

Résumé--The attributes of flexibility and efficiency which can be assured by random search techniques are discussed, as well as a range of applications extending from the use of very fast general purpose computers to that of extremely simple self-adaptive devices.

Zusammenfassung--The characteristics of the flexibility and performance attainable with random search methods are discussed, as is the range of application, which extends from the use of very fast general purpose computers to extremely simple adaptive arrangements.

Abstract--The attributes of flexibility and usefulness which can be secured by random search techniques are considered, as well as a series of applications ranging from the use of fast general purpose computing devices to extremely simple self-adjusting systems.