Operations Research Letters 8 (1989) 1-9, North-Holland, February 1989

TECHNIQUES FOR SIMULATION RESPONSE OPTIMIZATION *

Sheldon H. JACOBSON, Department of Operations Research, Weatherhead School of Management, Case Western Reserve University, Cleveland, OH 44106, USA

Lee W. SCHRUBEN, S.O.R.I.E., Cornell University, Ithaca, NY 14853, USA

Received September 1987
Revised October 1988

Techniques for discrete event simulation optimization are classified into four groups: path search methods, pattern search methods, random methods, and integral methods. Each class is described and a survey of recent literature is presented.

simulation * design of experiments * optimization

Introduction

This paper is a summary of research and applications in simulation optimization over the past 15 years. Its purpose is to give a quick tour of the literature. Here we classify the different approaches to searching for an optimum into four basic categories: path search methods, pattern search methods, random methods, and integral methods. Path search methods have been the most widely studied approach for solving the simulation optimization problem, and hence dominate the discussion and references in this paper. Although no general method has been devised which works well on an arbitrary simulation, there has been a considerable amount of work in the area. Over the years a number of other review papers have been written on this subject [26,27,34,56,74,75,83], as well as a comparison of optimization algorithms on stochastic functions [4]. Glynn [34] gives an excellent discussion of several general strategies for stochastic optimization, paying particular attention to the convergence rates of different approaches.

* This research was partially supported by AT&T Information Systems and the US Air Force Systems Command, Rome Air Development Center, Griffiss Air Force Base, Rome, NY, by subcontract from Syracuse University, Syracuse, NY.

The problem is to optimize a simulation's expected response E(F(X)) over a region S ⊂ R^p, with respect to an ordered p-tuple of input factor settings X = (X_1, X_2, ..., X_p) ∈ S. The difficulty is that the output F(X) is a random variable equal to a constant plus a random variable representing the noise of the system; i.e. F(X) = E(F(X)) + e, where e is a random variable with E(e) = 0 and V(e) < +∞. (Note that additive error is only one possible model.) Although we want to optimize E(F(X)), only F(X) can be observed. This puts the simulation optimization problem in the class of stochastic optimization problems, which are known to be difficult to solve. This paper concentrates on techniques for finding local optima for discrete-event simulations. Optimization studies particular to continuous simulation [11,74] and global optimization [12,38,58] are not discussed extensively.
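
To make the difficulty concrete, here is a minimal sketch in Python; the quadratic `sim_response` is a hypothetical stand-in for a real simulation run. A single observation of F(X) can misorder two factor settings, so comparisons must in practice be based on averages over replications.

    import random

    def sim_response(x):
        # Hypothetical simulation: true response E(F(x)) = -(x - 2)**2,
        # observed with additive noise e, where E(e) = 0.
        return -(x - 2.0) ** 2 + random.gauss(0.0, 1.0)

    random.seed(1)
    # A single observation of F may misorder the settings x = 1.9 and x = 1.0,
    # even though E(F(1.9)) > E(F(1.0)) ...
    print(sim_response(1.9), sim_response(1.0))
    # ... whereas averages over n replications order them correctly.
    n = 1000
    for x in (1.9, 1.0):
        print(x, sum(sim_response(x) for _ in range(n)) / n)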

Path search methods

Path search methods involve estimating a direction to move from a current factor setting to an improved point in the feasible factor set. Typically only local information is used.
Once a direction of movement is found, a distance to move in that direction is determined. The most common direction of movement is the gradient. A widely studied approach in this class of techniques is Response Surface Methodology (RSM) (see [15,41] or [62] for general reviews). RSM attempts to fit a polynomial response surface model locally to a set of sample observations in a particular region of S. There are many types of experimental designs specifically tailored to choosing these points [20]. One hopes that the shape of the response function is captured by the polynomial fit to the data. The polynomial is usually of first or second order.

Conventional RSM involves fitting a linear regression model around a current factor setting, and using this linear model to estimate an improving direction from the current point. Suppose k factor settings are needed for the experimental design used to fit the model. Let Y = Xβ + e, where X is a k by (p + 1) matrix whose rows are the factor settings around the current point; β is a (p + 1)-dimensional vector of unknown coefficients for the linear model; Y is a k-dimensional vector of response measurements corresponding to the k factor settings in X; and e is the random noise of the system, which we will assume has mean zero. The ordinary least squares estimate of β is β̂ = (X'X)^{-1} X'Y. So Ŷ = Xβ̂, the current RSM model for E(F(X)), defines a linear response surface, from which an optimal direction (e.g. the gradient) is easily obtained. One then moves in this direction a distance which optimizes the response function (line search). This is called Phase I of RSM. This procedure is repeated until the linear model stops providing a sufficiently good fit (e.g. the gradient of the linear response surface stops improving the response function value). Phase II is then implemented. Here a quadratic model is fit to the response:

    Y_h = β_0 + Σ_{i=1}^{p} X_{hi} β_i + Σ_{i=1}^{p} Σ_{j=i}^{p} X_{hi} X_{hj} β_{ij} + e_h,   h = 1, 2, ..., k,

where X_{hi} is a component of X. One then uses this fit to obtain a direction, the gradient, which will improve the response. A procedure similar (or identical) to that used in Phase I is performed in Phase II, so that an improved factor setting is obtained. When the magnitude of the gradient becomes sufficiently close to zero, the procedure stops.
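
As an illustration, here is a minimal sketch of one Phase I iteration; `simulate` is a hypothetical black box returning one noisy observation of F(X), and the 2^p factorial design, fixed step sizes, and crude stopping test are illustrative choices rather than the conventional RSM prescriptions.

    import itertools
    import numpy as np

    def rsm_phase1_step(simulate, x0, delta=0.1, step=0.05, n_steps=10):
        """One Phase I iteration (maximization): fit Y = X beta + e by
        ordinary least squares over a 2**p factorial design around x0,
        then line-search along the estimated gradient."""
        x0 = np.asarray(x0, dtype=float)
        p = len(x0)
        # Design points: the 2**p corners of a cube of half-width delta.
        corners = np.array(list(itertools.product([-1.0, 1.0], repeat=p)))
        points = x0 + delta * corners                        # k by p settings
        X = np.hstack([np.ones((len(points), 1)), points])   # k by (p + 1)
        Y = np.array([simulate(pt) for pt in points])        # k responses
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # OLS estimate of beta
        g = beta[1:]                                   # estimated gradient
        # Crude line search: step along g while the response improves.
        best_x, best_y = x0, simulate(x0)
        for i in range(1, n_steps + 1):
            x = x0 + step * i * g
            y = simulate(x)
            if y <= best_y:
                break
            best_x, best_y = x, y
        return best_x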


An advantage of this method is that it is founded on a statistical theory that is easy to understand. The method is also easy to implement [86]. The major disadvantage of RSM is that a large number of simulation runs may be needed.

A significant amount of research has been done on RSM. Smith has used ideas involving first order polynomial fits (Phase I) and second order polynomial fits (Phase II), as described above [54,82-86]. He has also studied screening of unimportant factors, so that the number of variables in the polynomial model is reduced [87]. Daugherty and Turnquist have applied RSM subject to constraints based on the cost of running the simulation [19]. They have also used spline functions to create artificial experimental points [18]; this enabled them to obtain a reasonably good fit of the model using fewer simulation runs, and hence to make more efficient use of the data.

Heller and Staats [40] used RSM, in conjunction with gradient-based optimization techniques, to solve problems subject to costs and constraints. They called the direction obtained a direction of 'Cheapest Ascent'. They modified Zoutendijk's Method of Feasible Directions [96] for this approach. Zoutendijk's method is a gradient-based optimization algorithm for constrained optimization problems. It involves creating a linear program, with the feasible region defined by first derivative linear approximations of all the binding constraints. One then solves the LP to obtain a direction of ascent, which is used to move to an improved factor setting. Zoutendijk's method is a relaxation method, and convergence may be slow, depending on the geometry of the feasible region [53].

Mihram [57] used RSM with simplex experimental designs [13]. A simplex experimental design requires p + 1 points where X ∈ R^p (hence the name simplex). These p + 1 points are all the same distance from the current point; moreover, the distance between any two of these p + 1 points is the same. Biles studied multiple response surface fitting [8,9] and multiple objective optimization [6,7,10]. He also did some work in first and second order response surface fitting, with direct search along the direction of the gradient, and gradient projection methodology [53].


Gradient projection methods project the gradient of the objective function onto the boundary of the feasible region, yielding a feasible improving direction which is not necessarily optimal but is easy to obtain.

Cooley and Houck [17] looked at the use of common and antithetic pseudorandom number streams as a means of reducing the variance of an estimated response surface. They demonstrated RSM, with this variance reduction modification, on an inventory system example. Safizadeh and Thornton [76] extended this work by considering alternative ways of using the pseudorandom number streams. Tew and Wilson [92] incorporated the Schruben-Margolin [80] variance reduction design in metamodel estimation for a simulation response.

Eldridge [23] used a 5-phase procedure: (1) screening; (2) grouping the variables into subspaces which can be evaluated individually; (3) optimizing over each of these subspaces; (4) fitting an approximating function over each subspace; (5) identifying the global solution over the entire space. A novel aspect of Eldridge's paper (the techniques he used were all standard) was the use of a random factorial design to better identify the optima of a multimodal response surface. This design is a combination of a complete factorial design and a random balance design [77]. It was used, in the second phase, to model the response as a set of unimodal surfaces, thereby simplifying the problem and enabling one to solve each unimodal surface individually.

Montgomery et al. looked at RSM by considering different experimental designs suitable for fitting second order models to simulation responses [61]. In another paper they investigated screening methods which reduce the number of factors [59]. In addition, they looked at multivariate RSM using the Geoffrion-Dyer Interactive Vector Maximal Algorithm [60]. This algorithm is a multicriteria optimization procedure, which they adapted for use in multiple response surface optimization. It involves a concave utility function which gives a measure of the relative weighting between the response surfaces. Using gradient information from the utility function and the response surfaces, an approximation to the tradeoff weights between the response surfaces is determined. The algorithm then uses these weights to move toward an improved solution.
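
The common random numbers idea of Cooley and Houck can be seen in a minimal sketch: when two factor settings are driven by the same random number stream (here, the same seed), the noise common to both runs cancels in their estimated difference. The `sim_response` function below is hypothetical, with purely additive noise, so the cancellation in this toy case is exact.

    import random

    def sim_response(x, seed):
        # Hypothetical simulation driven by an explicit random number stream.
        rng = random.Random(seed)
        noise = sum(rng.gauss(0.0, 1.0) for _ in range(100))
        return -(x - 2.0) ** 2 + noise

    x1, x2, n = 1.5, 1.6, 200
    # Independent streams vs. common streams for the difference estimate.
    indep = [sim_response(x1, 2 * i) - sim_response(x2, 2 * i + 1) for i in range(n)]
    common = [sim_response(x1, i) - sim_response(x2, i) for i in range(n)]

    def sample_var(d):
        m = sum(d) / len(d)
        return sum((v - m) ** 2 for v in d) / (len(d) - 1)

    # The additive noise cancels exactly under common streams here.
    print(sample_var(indep), sample_var(common))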


All of the above references involve modifying RSM to obtain better fits to the response function of the simulation, so as to obtain gradient estimates that will most likely point in a response-improving direction. Work has also been done on experimental designs to obtain better fits, as described, for example, in [16,59,61]. Other reviews [26,27] contrast RSM with other methods. Smith conducted a comparison study [83] among seven different techniques. It was found that for a unimodal response function, RSM with a 2^k factorial design and Random Search yielded the largest relative gain of the response function toward its optimum. Single Factor or Univariate Search yielded the smallest relative gain.

Stochastic Quasi-Gradient methods use finite-difference Monte Carlo estimates of the gradients, applied to standard gradient-based optimization algorithms. Ermoliev [24] discussed these methods and their applications. Stochastic Approximation (SA) [50,71] is another path search method. Using the notation defined previously, let X_n be the factor setting for the n-th iteration of the algorithm, where the optimal factor setting is X*. If g(X_n) denotes an estimate of the gradient of E(F(·)) at X_n, then SA chooses the next factor setting

    X_{n+1} = X_n + a_n g(X_n),

where a_n is a scalar step size which satisfies the convergence criteria given by Dvoretsky [22]. Dvoretsky generalized the criteria given by Kiefer and Wolfowitz and by Robbins and Monro. The Kiefer-Wolfowitz (K-W) algorithm defines g to be a two-sided gradient estimate of F. We assume the variance of F(X) is finite for all X ∈ S, and E(F(X)) is unimodal. The K-W algorithm is roughly as follows. Given two sequences a_n and γ_n satisfying the convergence criteria

    Σ a_n = +∞,   lim γ_n = 0,   Σ (a_n / γ_n)^2 < +∞,

let

    X_{n+1} = X_n + (a_n / (2 γ_n)) (F(X_n + γ_n) - F(X_n - γ_n)).

Then X_n converges in mean square and with probability 1 to a local maximum of E(F(·)). Even if the convergence criteria are satisfied, experience has shown that convergence may be very slow and require a large number of simulation runs. A general summary of such results, including convergence rates, can be found in [1,93]. Azadivar used one-sided [2] and two-sided [3] gradient estimates in applying SA (in [3], only the sign of the gradient estimate was used).
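
As a concrete illustration, the following minimal sketch implements the K-W recursion above for p = 1 (maximization), with the illustrative choices a_n = a/n and γ_n = c/n^{1/4}, which satisfy the three convergence criteria; `sim_response` is a hypothetical noisy response.

    import random

    def sim_response(x):
        # Hypothetical noisy response with its maximum at x = 2.
        return -(x - 2.0) ** 2 + random.gauss(0.0, 0.5)

    def kiefer_wolfowitz(x0, n_iter=5000, a=0.5, c=0.5):
        x = x0
        for n in range(1, n_iter + 1):
            a_n = a / n            # sum of a_n diverges
            g_n = c / n ** 0.25    # g_n -> 0 and sum (a_n/g_n)**2 < infinity
            # Two-sided finite-difference gradient estimate.
            grad = (sim_response(x + g_n) - sim_response(x - g_n)) / (2.0 * g_n)
            x = x + a_n * grad
        return x

    random.seed(3)
    print(kiefer_wolfowitz(0.0))   # converges toward the maximizer x = 2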


Kushner and Gavin [51] developed a generalized SA method by searching for, as opposed to using predetermined, step lengths to move in the direction of the gradient estimate. Pflug [69] used two-sided finite-difference gradient approximations with SA; he called his approach the 'Stochastic Quasi-gradient method'. Ruppert et al. [73] applied SA to a Monte Carlo simulation of a fish harvesting model.

Much recent work has been concerned with estimating the response gradient. Ho and his colleagues have developed a technique called Perturbation Analysis (PA) [37,42-46,89-91,94]. PA uses a basic idea from calculus, the chain rule, to obtain a gradient estimate using only one run of a simulation, for any p ≥ 1. If w is the simulation response (e.g. the average waiting time in an M/M/1 queue), s is the measure of interest (e.g. the service time of a customer), and X is the factor of interest (e.g. the service rate), then we have ∂w/∂X = (∂w/∂s)(∂s/∂X). They assume that ∂s/∂X is known and can be evaluated for a given s and X, using the distribution function F(s, X). They make the basic assumption that the sequence of events remains the same for both X and X + ΔX, where ΔX is an infinitesimal perturbation. If this assumption holds, the effect of ΔX can be measured as it propagates through the simulation. This effect is tracked in accumulators for each of the indices of the factor X (e.g. ∂s/∂X, evaluated at s and X, is added to the appropriate accumulator whenever a customer is served in the current busy period). It is important to note that one must be able to calculate the derivative of the measure of interest, so that the accumulator can correctly sum the effect of the perturbation on the given response.

PA has many desirable properties. For simple simulations it can take a small amount of work to implement the algorithm; conceptually, one need only add a few lines to a computer program for each factor (to update the accumulator so perturbation effects are measured, and to reset the accumulator to zero when each perturbation has died out). As the simulation models become more complex, PA can become less straightforward to implement. Note also that, unlike RSM and other noninvasive methods, PA forces the programmer to explicitly change the simulation code. On the other hand, PA requires only one simulation run, while RSM may require many.
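
A minimal sketch of the accumulator idea for a single-server FIFO queue follows, differentiating the mean waiting time with respect to a scale parameter theta of the service times; the derivative accumulator follows the Lindley recursion and is reset to zero whenever a busy period ends, in the spirit described above. This is an illustration of infinitesimal PA on one simple model, not Ho et al.'s general implementation.

    import random

    def ipa_single_server(theta, lam=0.5, n_customers=100_000, seed=7):
        """One run of an M/M/1-type queue with service times S = theta * Exp(1).
        Returns (mean wait, IPA estimate of d(mean wait)/d(theta)) using
        the Lindley recursion W' = max(0, W + S - A)."""
        rng = random.Random(seed)
        w, dw = 0.0, 0.0            # waiting time and derivative accumulator
        sum_w = sum_dw = 0.0
        for _ in range(n_customers):
            x = rng.expovariate(1.0)       # unit exponential
            s = theta * x                  # service time; ds/dtheta = x
            a = rng.expovariate(lam)       # interarrival time
            if w + s - a > 0.0:
                w, dw = w + s - a, dw + x  # perturbation propagates
            else:
                w, dw = 0.0, 0.0           # busy period ends: reset accumulator
            sum_w += w
            sum_dw += dw
        return sum_w / n_customers, sum_dw / n_customers

    # At theta = 1, lam = 0.5 the analytic mean wait lam*theta**2/(1 - lam*theta)
    # equals 1, and its derivative with respect to theta equals 3.
    print(ipa_single_server(1.0))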


An important analysis of PA was done by Heidelberger [39], where necessary and sufficient conditions for PA to produce strongly consistent estimates were given for regenerative processes. Heidelberger also identified and quantified a potential weakness of PA: the assumption that the order of events does not change when a factor is perturbed by an infinitesimal amount cannot be justified in general. His work explains why PA can give very poor gradient estimates in certain simulation models.

Glynn and Sanders [36] proposed a gradient estimation technique using likelihood ratios and Monte Carlo estimation procedures. Variations of the likelihood ratio method can be found in Glynn [33], Reiman and Weiss [70], and Rubinstein [72]. A summary of the Likelihood Ratio method is presented in [35]. Fox [31] estimated gradients for transient Markov chains. He also gave a complexity analysis of the estimation procedure [30]. Simon [81] introduced a method of applying light-traffic queueing theory to estimate system sensitivities. In recent work using the frequency domain experimental approach of Schruben and Cogliano [79], Jacobson and Schruben [48,49] have developed estimators for the gradient direction as well as a Newton step size in this direction. By inducing specific patterns (sums of sinusoidal oscillations) in the inputs and analyzing the simulation output power spectrum, they were able to obtain good sensitivity estimates for a set of simulation models where PA performed poorly.

As described above, gradient estimation has been the major focus of path search methods. In general, estimating gradient directions can be both difficult and expensive using the techniques presented. This is why very little work has gone into estimating Hessians or developing quasi-Newton methods to solve the simulation optimization problem. Zazanis and Suri [94] used PA to estimate the Hessian of a G/G/1 queue from a single sample path. Such Hessian estimates could be used in a quasi-Newton algorithm. If an efficient technique for reducing the noise effect on the Hessian matrix estimate and its inverse is developed, more work will be done with quasi-Newton methods for solving the simulation optimization problem.
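
A minimal sketch of the likelihood ratio (score function) idea just mentioned, for a single parameter: under standard regularity conditions d/dθ E_θ[F] = E_θ[F · d log f_θ/dθ], so the gradient can be estimated from one set of runs by weighting each observed response with the score of the random variates that drove it. The response below (the maximum of the draws) is a hypothetical stand-in for a simulation output.

    import random

    def lr_gradient(theta, n_runs=200_000, n_draws=5, seed=11):
        """Estimate d/dtheta E[F] when each run is driven by n_draws
        exponential(theta) variates (density f(s) = theta*exp(-theta*s),
        score d log f / d theta = 1/theta - s) and F is the run response."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_runs):
            s = [rng.expovariate(theta) for _ in range(n_draws)]
            response = max(s)                        # hypothetical response F
            score = sum(1.0 / theta - si for si in s)
            total += response * score                # F times the score
        return total / n_runs

    # E[max of 5 unit exponentials] = 1 + 1/2 + ... + 1/5, so the true
    # gradient at theta = 1 is -(1 + 1/2 + 1/3 + 1/4 + 1/5) ~ -2.283.
    print(lr_gradient(1.0))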


Pattern search methods

Pattern search methods are those which require no gradient estimates or randomization procedures, but rather use some characteristic or pattern of the observations to obtain an improved point. Gradient estimation methods typically assume, at least implicitly, that the inputs and the output are continuous. This assumption is not always necessary in pattern search methods.

The most common technique in this class is the Hooke and Jeeves method (HJ) [47]. HJ is based on the idea that if a direction has produced a favorable change in the response value, then one should continue to move in that direction. It uses the pattern from which previous improving changes have been made to obtain better factor settings and, eventually, the optimal setting. The method starts by selecting a set of incremental values for each factor. Starting at an initial base point, it checks whether an incremental change in the first variable yields an improved response value. The resulting improved setting becomes the new intermediate base point. This is repeated for each of the p variables until a new setting is obtained, with the intermediate base points acting like the initial base point as described above for the first variable. The method then moves directly from the initial base point in the direction towards, and through, the new setting. This procedure is continued until improving changes can no longer be made with the given incremental values. The incremental values are then decreased and the procedure is repeated from the beginning. When the incremental values reach a prespecified tolerance, the procedure is terminated and the current factor settings are reported as the solution.

Nozari and Morris [64,65] have applied HJ in conjunction with the Dudewicz-Dalal (D-D) Selection and Ranking procedure [21]. The D-D method is a two-stage sampling procedure used to identify, with specified probability, the best factor setting among a set of k factor settings. The first stage involves running the simulation for each of the factor settings to be compared. Since D-D requires independent normal observations, the simulation runs must be long enough to ensure approximate independence and normality among the batch means. The second stage of D-D efficiently takes an additional sample of runs in order to determine a search direction; this direction is subsequently used in the HJ algorithm.
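
A minimal sketch of a simplified HJ variant (maximization) follows; `simulate` is a hypothetical noisy response, and a practical implementation would replicate runs at each setting (for example via the D-D procedure above) before comparing factor settings.

    def hooke_jeeves(simulate, x0, incr=1.0, tol=1e-2, shrink=0.5, max_iter=1000):
        """Simplified Hooke and Jeeves pattern search (maximization)."""
        base, f_base = list(x0), simulate(x0)
        for _ in range(max_iter):
            if incr <= tol:
                break
            # Exploratory moves: perturb each variable in turn.
            x, fx = list(base), f_base
            for i in range(len(x)):
                for d in (incr, -incr):
                    trial = list(x)
                    trial[i] += d
                    f_trial = simulate(trial)
                    if f_trial > fx:
                        x, fx = trial, f_trial
                        break
            if fx > f_base:
                # Pattern move: from the old base through the new setting.
                pattern = [2.0 * xn - xb for xn, xb in zip(x, base)]
                f_pat = simulate(pattern)
                if f_pat > fx:
                    x, fx = pattern, f_pat
                base, f_base = x, fx
            else:
                incr *= shrink   # no improvement: reduce the increments
        return base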


Pegden and Gately also applied HJ to problems written in SLAM [68] and GASP IV [67].

Other pattern search methods include the Simplex method [63,66,88]. The Simplex method starts with a set of p + 1 factor settings in R^p. Then, by comparing their response values, it eliminates the factor setting with the smallest function value (in the case of maximization) and replaces it by a new factor setting, determined by the centroid of the p remaining factor settings and the eliminated factor setting. The resulting simplex either grows or shrinks, depending on the response value at the new factor setting. The procedure is repeated until no further improvement can be made by eliminating the smallest-valued point and the resulting final simplex is small. Meier [55] looked at applying the simplex method to simulation studies. Lefkowitz and Schriber [52] have also investigated a Univariate Search method, applied to simulation programs written in GPSS. Recently, Schruben [78] proposed using frequency domain experiments embedded within RSM experiments, demonstrating how one might obtain useful information about the shape of the local response surface from the output power spectrum.
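
A minimal sketch of the basic simplex move (maximization): drop the worst of the p + 1 settings, reflect it through the centroid of the rest, and shrink the simplex toward the best point when reflection fails. The expansion and contraction bookkeeping of the full Nelder-Mead procedure is omitted, and `simulate` is again a hypothetical response.

    def simplex_search(simulate, points, n_iter=100):
        """points: list of p + 1 factor settings (each a list of length p)."""
        vals = [simulate(x) for x in points]
        for _ in range(n_iter):
            worst = min(range(len(points)), key=lambda i: vals[i])
            rest = [x for i, x in enumerate(points) if i != worst]
            centroid = [sum(c) / len(rest) for c in zip(*rest)]
            # Reflect the worst point through the centroid of the others.
            reflect = [2.0 * c - w for c, w in zip(centroid, points[worst])]
            f_ref = simulate(reflect)
            if f_ref > vals[worst]:
                points[worst], vals[worst] = reflect, f_ref
            else:
                # Shrink the simplex toward the current best point.
                best = max(range(len(points)), key=lambda i: vals[i])
                for i in range(len(points)):
                    if i != best:
                        points[i] = [(a + b) / 2.0
                                     for a, b in zip(points[i], points[best])]
                        vals[i] = simulate(points[i])
        return max(zip(vals, points))[1]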

Random methods

Random methods are those which use a random approach to select factor settings, with the hope of obtaining an improving and, eventually, optimal response. The major problem with these methods is that they are slow to converge (if they converge at all) to an optimum. Previous information is typically not used at each iteration; therefore, the methods require a large number of simulation runs. The reliability of the solution can be expressed only in terms of probabilities, and these probability statements are in turn based on rather restrictive assumptions. Very little has been done with these methods on the simulation optimization problem. Garcia-Diaz et al. [32] used the Out-of-Kilter Algorithm [28] with a Monte Carlo sampling procedure applied to a specific simulation model. Smith looked at random search as a possible method [82,83,86], and Farrell mentioned random methods in his review and comparison papers [26,27]. Fox [29] presented ideas on using quasirandom numbers as a possible alternative to pseudorandom number streams.


Quasirandom numbers try to minimize the discrepancy of a sample, defined as follows: given a set of points x^1, x^2, ..., x^N ∈ I^s = [0,1]^s and a subset G ⊂ I^s, define the counting function S_N(G) as the number of points x^i ∈ G. For each x = (x_1, x_2, ..., x_s) ∈ I^s, let G_x be the rectangular s-dimensional region G_x = [0, x_1) × [0, x_2) × ... × [0, x_s), with volume x_1 x_2 ... x_s. The discrepancy of the points x^1, x^2, ..., x^N is defined as

    D*(x^1, x^2, ..., x^N) = sup_{x ∈ I^s} | S_N(G_x) - N x_1 x_2 ... x_s |.
Fox conjectured that the smallest possible discrepancy is O(log^p N), where N is the number of points generated and p is the dimension of each point. Bratley and Fox [14] compared different quasirandom generators and demonstrated generators which yielded number streams with the minimum discrepancy possible. This idea, in conjunction with random methods, holds promise for the simulation optimization problem; however, at present it has not been fully developed or tested in a stochastic optimization environment.
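
The definition above translates directly into code. The sketch below approximates D* by taking the maximum over a finite grid of corner points x (so it is only a lower bound on the true supremum), and compares a pseudorandom sample against a two-dimensional Hammersley point set, a standard low-discrepancy construction used here as an illustrative stand-in for the Sobol generators discussed by Bratley and Fox.

    import itertools
    import random

    def count_in_box(points, x):
        """The counting function S_N(G_x): points in [0,x_1) x ... x [0,x_s)."""
        return sum(all(pi < xi for pi, xi in zip(p, x)) for p in points)

    def star_discrepancy(points, grid=10):
        """Approximate D* by maximizing |S_N(G_x) - N vol(G_x)| over a
        finite grid of corners x; a lower bound on the true supremum."""
        s, n, worst = len(points[0]), len(points), 0.0
        for x in itertools.product([i / grid for i in range(1, grid + 1)], repeat=s):
            vol = 1.0
            for xi in x:
                vol *= xi
            worst = max(worst, abs(count_in_box(points, x) - n * vol))
        return worst

    def van_der_corput(i, base=2):
        v, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            v += (i % base) / denom
            i //= base
        return v

    random.seed(5)
    rand_pts = [(random.random(), random.random()) for _ in range(64)]
    hamm_pts = [(i / 64.0, van_der_corput(i)) for i in range(64)]  # low-discrepancy
    # The Hammersley set typically shows a smaller discrepancy.
    print(star_discrepancy(rand_pts), star_discrepancy(hamm_pts))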

Integral methods

Evtushenko [25] gave an algorithm for Lipschitz continuous functions which introduced the so-called integral optimization approach. This technique, unlike the three discussed so far, is designed specifically for global rather than local optimization. His algorithm is based on space covering; i.e. one tries to eliminate regions where the optimum cannot occur. When all regions where an improved point might exist have been eliminated, the algorithm stops. Regions are eliminated based on the number of function evaluations and the best value of the function obtained so far. This algorithm has not been successfully implemented in a stochastic environment, since obtaining the necessary Lipschitz constant is difficult if not impossible.

Zheng [95] outlines an integral approach to the global optimization problem. Zheng tries to minimize the volume of the region bounded by the function he is trying to maximize and a plane whose level is set by the current largest known value of the function. The implementation of his algorithm uses Monte Carlo integration to estimate this volume and the largest value of the function. Zheng's integral optimization approach does not require the Lipschitz constant, but at the expense of slow convergence and considerable work required at each iteration.
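
A loose sketch of the level-set idea behind Zheng's approach, for a one-dimensional deterministic function: the level is repeatedly raised to the best value found, and Monte Carlo integration estimates the measure of the region still lying above the level. The uniform sampler and stopping threshold are illustrative assumptions, not Zheng's actual algorithm.

    import random

    def integral_search(f, lo, hi, batch=10_000, max_iter=50, seed=9):
        """Raise a level set toward the global maximum of f on [lo, hi]:
        each iteration estimates, by Monte Carlo integration, the measure
        of the region above the current level, then raises the level to
        the best value seen; stop when that measure is negligible."""
        rng = random.Random(seed)
        level, best_x = float("-inf"), None
        for _ in range(max_iter):
            xs = [rng.uniform(lo, hi) for _ in range(batch)]
            ys = [f(x) for x in xs]
            # Fraction of samples above the current level estimates the
            # measure of {x : f(x) > level} divided by (hi - lo).
            frac_above = sum(y > level for y in ys) / batch
            top = max(ys)
            if top > level:
                level, best_x = top, xs[ys.index(top)]
            if frac_above < 1e-3:
                break
        return best_x, level

    # Deterministic test function with its global maximum at x = 2.
    print(integral_search(lambda x: -(x - 2.0) ** 2, -10.0, 10.0))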

Conclusions

Simulation response optimization techniques can be classified into four basic categories: path search methods, pattern search methods, random methods, and integral methods. Although there has been a significant amount of research in the area, no general approach has been developed into an efficient and practical algorithm. Barton [5] gives testing strategies for comparing simulation optimization techniques, with emphasis on the choice of test functions and the random variability of these functions. The literature clearly points out that particular algorithms tend to produce consistently good results for certain types of simulations. For example, simulations which have an elementary queueing structure seem ideally suited for Perturbation Analysis, while a simulation which appears to have interactions between the factors, or a more clearly defined polynomial relation between the factor settings and the response, may respond better to RSM or frequency domain methods. Efficient use of computer (and analyst) time and robustness are the main considerations in choosing a method.

Each method discussed in this paper has advantages and disadvantages. Most methods rely on very basic local optimization ideas (e.g. gradient estimates); in a stochastic environment it is difficult to use higher-level optimization techniques. Until new ideas are brought into this area, these methods will continue to be applied. However, with growing interest in this field, it seems likely that new approaches will be discovered and developed which take advantage of more sophisticated statistical and optimization concepts. The advent of high-speed parallel/distributed simulations, along with new ideas in computer graphics, projection geometry, and factor screening, should also lead to successful interactive simulation model optimization approaches.

Acknowledgement

The authors would like to thank the referees for their helpful comments. In addition, the authors wish to thank Prof. Russell Barton of Cornell for several suggestions.

References

[1] A.E. Albert and L.A. Gardner, Jr., Stochastic Approximation and Nonlinear Regression, Research Monograph No. 42, M.I.T. Press, Cambridge, MA, 1967.
[2] F. Azadivar, "A simulation optimization approach to optimum storage and retrieval policies in an automated warehousing system", Proceedings of the 1984 Winter Simulation Conference, 207-214, 1984.
[3] F. Azadivar and J.J. Talavage, "Optimization of stochastic simulation models", Mathematics and Computers in Simulation 22, 231-241 (1980).
[4] R. Barton, "Minimization algorithms for functions with random noise", American Journal of Mathematical and Management Sciences 4, 109-138 (1984).
[5] R. Barton, "Testing strategies for simulation optimization", Technical Report No. 754, S.O.R.I.E., Cornell University, Ithaca, NY, 1987.
[6] W.E. Biles, "A gradient regression search procedure for simulation experimentation", Proceedings of the 1974 Winter Simulation Conference, 491-497, 1974.
[7] W.E. Biles, "Optimization of multiple-objective computer simulations: A non-linear goal programming approach", Proceedings of the 1977 AIDS Conference, Chicago, 1977.
[8] W.E. Biles, "Strategies for optimization of multiple response simulation models", Proceedings of the 1977 Winter Simulation Conference, 134-142, 1977.
[9] W.E. Biles, "Optimization of multiple response simulation models", University of Notre Dame, South Bend, IN, 1978.
[10] W.E. Biles and J.J. Swain, "Mathematical programming and the optimization of computer simulations", Mathematical Programming Studies 11, 189-207 (1979).
[11] L.G. Birta, "A parameter optimization module for CSSL-based simulation software", Simulation 28, 113-123 (1977).
[12] C.G.E. Boender, A.H.G. Rinnooy Kan, L. Stougie and G.T. Timmer, "A stochastic method for global optimization", Mathematical Programming 22, 125-140 (1982).
[13] G.E.P. Box and D.W. Behnken, "Simplex-sum designs: A class of second order rotatable designs derivable from those of first order", Annals of Mathematical Statistics 31, 838-864 (1960).
[14] P. Bratley and B.L. Fox, "Implementing Sobol's quasirandom sequence generator", Technical Report, University of Montreal, Montreal, Quebec, Canada, 1986.


[15] H.J. Brightman, "Optimization through experimentation: Applying response surface methods", Decision Sciences 9, 481-495 (1978).
[16] D.S. Burdick and T.H. Naylor, "Design of computer simulation experiments for industrial systems", Communications of the Association for Computing Machinery 9, 329-338 (1966).
[17] B.J. Cooley and E.C. Houck, "A variance-reduction strategy for RSM simulation studies", Decision Sciences 13, 303-321 (1982).
[18] A.F. Daugherty and M.A. Turnquist, "Simulation optimization using response surfaces based on spline approximations", Proceedings of the 1978 Winter Simulation Conference, Vol. 1, 182-193, 1978.
[19] A.F. Daugherty and M.A. Turnquist, "Budget constrained optimization of simulation models via estimation of their response surfaces", Operations Research 29, 485-500 (1981).
[20] O.L. Davies (ed.), The Design and Analysis of Industrial Experiments, Longman Group Limited, London and New York, 1978.
[21] E.J. Dudewicz and S.R. Dalal, "Allocation of measurements in ranking and selection with unequal variances", Sankhya B 37, 28-78 (1975).
[22] A. Dvoretsky, "On stochastic approximation", Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, 39-59, 1956.
[23] D.L. Eldridge, "Experimental optimization of statistical simulation", Proceedings of the 1974 Winter Simulation Conference, 503-510, 1974.
[24] Y. Ermoliev, "Stochastic quasi-gradient methods and their application to system optimization", Stochastics 9, 1-36.
[25] Y.P. Evtushenko, "Numerical methods for finding global extrema of a non-uniform mesh", USSR Computational Mathematics and Mathematical Physics 11, 1390-1403 (1971).
[26] W. Farrell, C. McCall and E. Russell, "Optimization techniques for computerized simulation models", Report to Office of Naval Research, NTIS AD-A011 844/8G1, 1975.
[27] W. Farrell, "Literature review and bibliography of simulation optimization", Proceedings of the 1977 Winter Simulation Conference, 117-124, 1977.
[28] L.R. Ford, Jr. and D.R. Fulkerson, Flows in Networks, Princeton University Press, Princeton, NJ, 1962.
[29] B.L. Fox, "Implementation and relative efficiency of quasirandom sequence generators", Working Paper, University of Montreal, 1984.
[30] B.L. Fox, "Complexity of gradient estimation for transient Markov chains", Technical Report, S.O.R.I.E., Cornell University, Ithaca, NY, 1987.
[31] B.L. Fox, "Gradient computation for transient Markov chains", Technical Report, S.O.R.I.E., Cornell University, Ithaca, NY, 1987.
[32] A. Garcia-Diaz, G.L. Hogg, D.T. Phillips and E.J. Worrell, "Combined simulation and network analysis of a production distribution system", Simulation 40, 59-66 (1983).
[33] P.W. Glynn, "Sensitivity analysis for stationary probabilities of a Markov chain", Proceedings of the Fourth Army Conference on Applied Mathematics and Computing, 1986.


[34] P.W. Glynn, "Optimization of stochastic systems", Proceedings of the 1986 Winter Simulation Conference, 356-365, 1986.
[35] P.W. Glynn, "Likelihood ratio gradient estimation: An overview", Proceedings of the 1987 Winter Simulation Conference, 366-375, 1987.
[36] P.W. Glynn and J.L. Sanders, "Monte Carlo optimization of stochastic systems: Two new approaches", Proceedings of the ASME Computers in Engineering Conference, 1986.
[37] W.B. Gong and Y.-C. Ho, "Smoothed (conditional) perturbation analysis of discrete event dynamical systems", IEEE Transactions on Automatic Control 32, 858-866 (1987).
[38] L. de Haan, "Estimation of the minimum of a function using order statistics", Journal of the American Statistical Association 76, 467-469 (1981).
[39] P. Heidelberger, "Limitations of infinitesimal perturbation analysis", IBM Research Report RC 11891, IBM Watson Research Labs, Yorktown Heights, NY, 1986.
[40] N.B. Heller and G.E. Staats, "Response surface optimization when experimental factors are subject to costs and constraints", Technometrics 15, 113-123 (1973).
[41] W.J. Hill and W.G. Hunter, "A review of response surface methodology: A literature survey", Technometrics 8, 571-590 (1966).
[42] Y.-C. Ho, "Perturbation analysis methods for discrete event dynamical systems and simulations", Proceedings of the 1984 Winter Simulation Conference, 171-173.
[43] Y.-C. Ho, "Perturbation analysis: Theoretical, experimental and implementational aspects", Proceedings of the 1987 Winter Simulation Conference, 376-377.
[44] Y.-C. Ho, "Performance evaluation and perturbation analysis of discrete event dynamic systems: Perspectives and open problems", IEEE Transactions on Automatic Control 32, 563-572 (1987).
[45] Y.-C. Ho and X. Cao, "Perturbation analysis and optimization of queueing networks", Journal of Optimization Theory and Applications 40, 559-582 (1983).
[46] Y.-C. Ho and S. Li, "Extensions of infinitesimal perturbation analysis", IEEE Transactions on Automatic Control 33, 427-438 (1988).
[47] R. Hooke and T.A. Jeeves, "A direct search solution of numerical and statistical problems", Journal of the Association for Computing Machinery 8, 212-229 (1961).
[48] S.H. Jacobson, "Discrete-event computer simulation response optimization in the frequency domain", PhD Dissertation, S.O.R.I.E., Cornell University, Ithaca, NY, 1988.
[49] S.H. Jacobson and L.W. Schruben, "Gradient estimation of stochastic dynamical system responses in the frequency domain", Technical Report, S.O.R.I.E., Cornell University, Ithaca, NY, 1988.
[50] J. Kiefer and J. Wolfowitz, "Stochastic approximation of the maximum of a regression function", Annals of Mathematical Statistics 23, 462-466 (1952).
[51] H.J. Kushner and T. Gavin, "A versatile method for the Monte Carlo optimization of stochastic systems", International Journal of Control 15, 963-975 (1973).
[52] R.M. Lefkowitz and T.J. Schriber, "Use of an external optimization algorithm with a GPSS model", Proceedings of the 1971 Winter Simulation Conference, 162-171.
[53] D.G. Luenberger, Introduction to Linear and Nonlinear Programming, Addison-Wesley, Reading, MA, 1973.


[54] J. Mandelbaum, F.L. Jorgenson, D.E. Smith and C.E. Storck, "Man-machine role identification in seeking improved solutions to large-scale computer simulation problems", Proceedings of the 31st Military Operations Research Symposium, 1973.
[55] R.C. Meier, "The application of optimum-seeking techniques to simulation studies: A preliminary evaluation", Journal of Financial and Quantitative Analysis 2, 31-51 (1967).
[56] M. Meketon, "Optimization in simulation: A survey of recent results", Proceedings of the 1987 Winter Simulation Conference, 58-67.
[57] G.A. Mihram, "An efficient procedure for locating the optimal simulation response", Proceedings of the 4th Conference on the Application of Simulation, 154-161, 1970.
[58] H.M. Monroe and R.L. Sielken, "Confidence limits for global optima based on heuristic solutions to difficult optimization problems", American Journal of Mathematical and Management Sciences 4, 139-168 (1984).
[59] D.C. Montgomery, "Methods for factor screening in computer simulation experiments", Technical Report, Georgia Institute of Technology, 1979.
[60] D.C. Montgomery and V.M. Bettencourt, Jr., "Multiple response surface methods in computer simulation", Simulation 29, 113-121 (1977).
[61] D.C. Montgomery and D.M. Evans, Jr., "Second order response surface designs in computer simulation", Simulation 25, 169-178 (1975).
[62] R.H. Myers, Response Surface Methodology, Allyn-Bacon, Boston, 1971.
[63] J.A. Nelder and R. Mead, "A simplex method for function minimization", Computer Journal 7, 308-313 (1965).
[64] A. Nozari and J.S. Morris, "An optimization procedure for simulation models", Working Paper, College of Business and Economics, University of Idaho, Moscow, ID, 1983.
[65] A. Nozari and J.S. Morris, "Application of an optimization procedure to steady-state simulation", Proceedings of the 1984 Winter Simulation Conference, 217-219.
[66] D.M. Olsson and L.S. Nelson, "The Nelder-Mead simplex procedure for function minimization", Technometrics 17, 45-51 (1975).
[67] C.D. Pegden and M.P. Gately, "Decision optimization for GASP IV simulation models", Proceedings of the 1977 Winter Simulation Conference, 125-133.
[68] C.D. Pegden and M.P. Gately, "A decision-optimization module for SLAM", Simulation 34, 18-25 (1980).
[69] G.Ch. Pflug, "Optimizing simulated systems", Simuletters 15, 6-9 (1984).
[70] M.I. Reiman and A. Weiss, "Sensitivity analysis via likelihood ratios", Proceedings of the 1986 Winter Simulation Conference, 285-289.
[71] H. Robbins and S. Monro, "A stochastic approximation method", Annals of Mathematical Statistics 22, 400-407 (1951).
[72] R.Y. Rubinstein, "Modified importance sampling for performance evaluation and sensitivity analysis of computer simulation models", Technical Report, School of Engineering and Applied Science, The George Washington University, 1988.
[73] D. Ruppert, R.L. Reish, R.B. Deriso and R.J. Carroll, "Optimization using stochastic approximation and Monte Carlo simulation (with application to harvesting of Atlantic menhaden)", Biometrics 40, 535-545 (1984).
[74] J.S. Rustagi, "Optimizing methods in simulation", Technical Report, Department of Statistics, Ohio State University, 1981.
[75] M.H. Safizadeh, "Optimization in simulation: Current issues and the future outlook", Naval Research Logistics Quarterly, forthcoming.
[76] M.H. Safizadeh and B.M. Thornton, "An alternative variance-reduction strategy for RSM simulation studies", Decision Sciences 13, 322-330 (1982).
[77] F.E. Satterthwaite, "Random balance experimentation", Technometrics 1, 111-137 (1959).
[78] L.W. Schruben, "Simulation optimization using frequency domain methods", Proceedings of the 1986 Winter Simulation Conference, 366-369.
[79] L.W. Schruben and V. Cogliano, "An experimental procedure for simulation response surface model identification", Communications of the Association for Computing Machinery 30, 716-730 (1987).
[80] L.W. Schruben and B.H. Margolin, "Pseudorandom number assignment in statistically designed simulation and distribution sampling experiments", Journal of the American Statistical Association 73, 504-520 (1978).
[81] B. Simon, "Light traffic perturbations of queueing systems", Technical Report, University of Colorado, Denver, CO, 1988.
[82] D.E. Smith, "Studies with a prototype optimizer for use in computer simulation", Proceedings of the 17th Conference on the Design of Experiments in Army Research, Development and Testing, 1972.
[83] D.E. Smith, "An empirical investigation of optimum seeking in computer simulation", Operations Research 21, 475-497 (1973).
[84] D.E. Smith, "Requirements of an optimizer for computer simulation", Naval Research Logistics Quarterly 20, 161-179 (1973).


[85] D.E. Smith, "Automatic optimum-seeking program for digital simulation", Simulation 27, 27-32 (1976).
[86] D.E. Smith, "Optimization of a computer simulation response", Technical Report 106-3, Penn. State Univ., presented at the ORSA/TIMS Joint National Meeting, 1976.
[87] D.E. Smith and C.A. Mauro, "Factor screening in computer simulation", Simulation 38, 49-54 (1982).
[88] W. Spendley, G.R. Hext and F.R. Himsworth, "Sequential application of simplex designs in optimization and evolutionary operations", Technometrics 4, 441-461 (1962).
[89] R. Suri, "Infinitesimal perturbation analysis for general discrete event systems", Journal of the Association for Computing Machinery 34, 686-717 (1987).
[90] R. Suri and Y.T. Leung, "Single run optimization for closed loop assembly systems", Proceedings of the 1987 Winter Simulation Conference, 738-748.
[91] R. Suri and M.A. Zazanis, "Perturbation analysis gives strongly consistent estimates for the M/G/1 queue", Management Science 34, 39-64 (1988).
[92] J.D. Tew and J.R. Wilson, "Metamodel estimation using integrated correlation methods", Proceedings of the 1987 Winter Simulation Conference, 409-418.
[93] D.J. Wilde, Optimum Seeking Methods, Prentice-Hall, Englewood Cliffs, NJ, 1964.
[94] M.A. Zazanis and R. Suri, "Estimating second derivatives of performance measures for G/G/1 queues from a single sample path", Working Paper, Harvard University, Division of Applied Sciences, 1985.
[95] Q. Zheng, "Theory and methods for global optimization: An integral approach", Optimization Days Meeting, Montreal, Canada, 1986.
[96] G. Zoutendijk, Methods of Feasible Directions, Elsevier Science Publishers, Amsterdam, 1960.