European Journal of Operational Research 72 (1994) 135-145, North-Holland
A methodology for solving multi-objective simulation-optimization problems

Radi Teleb, Department of Mathematics, University of Wisconsin-Stout, Menomonie, WI 54751, USA
Farhad Azadivar, Department of Industrial Engineering, Kansas State University, Manhattan, KS 66506, USA

Received April 1990; revised January 1992
Abstract: For many practical and industrial optimization problems where some or all of the system components are stochastic, the objective functions cannot be represented analytically. Due to the difficulties involved in analytical expression, simulation may be the most effective means of studying these complex systems. Furthermore, many of these problems are characterized by the presence of multiple and conflicting objectives. The goal of this paper is to introduce a new methodology, through an interactive algorithm, for solving such multi-objective simulation-optimization problems.

Keywords: Simulation; Optimization; Multi-criteria; Stochastic
1. Introduction

In the modelling process of many complex systems, there is no neat analytical formulation available for the system under investigation. Furthermore, even if an analytical formulation exists, it is usually nonlinear and stochastic. Therefore, modelling by computer simulation is one of the most effective means of studying such complex systems. Moreover, real-life problems in engineering design, manufacturing, planning, and the medical and social sectors are characterized by the presence of several conflicting objectives or criteria. In most cases, some or all of the objective functions and constraints are implicit functions of the decision variables, which can be evaluated only through computer simulation.

For instance, consider the design of a flexible manufacturing system (FMS). For such a system, one might be interested in maximizing machine utilizations while keeping the work-in-process at a minimum. These are often two conflicting objectives. Obviously, these measures of performance of an FMS will depend on many design parameters of the system, such as the number of machines in each work station, the distance between each pair of stations, the number of buffer spaces provided in front of each station, and the number of transporters. In addition, the performance is affected by other uncontrollable and often probabilistic variables such as processing times and quality characteristics. Complex interrelations among these factors often dictate using a simulation model for evaluation of performance measures such as utilizations and work-in-process. Furthermore, the multicriteria optimization of such a system may be complicated by a set of constraints on the decision variables or other performance measures. For instance, it might be required

Correspondence to: Farhad Azadivar, Department of Industrial Engineering, Kansas State University, Manhattan, KS 66506, USA.
0377-2217/94/$07.00 © 1994 Elsevier Science B.V. All rights reserved. SSDI 0377-2217(92)00161-4
R. Teleb, F. Azadivar / Multi-objective simulation-optimization problems
that the system throughput be above a specified limit. The multicriteria optimization of this problem could then be summarized as the simultaneous maximization of machine utilizations and minimization of the work-in-process while making sure the system throughput stays above a certain limit. In this particular case all three measures of performance of the system (two objective functions and one constraint) are stochastic and implicit functions of the decision variables, and can be evaluated efficiently only by computer simulation. There are a large number of such problems in engineering, manufacturing, and service systems. This work is an attempt to provide a heuristic method that can be applied, under certain conditions, to such problems to obtain reasonable results.

The mathematical formulation of the problem is presented in Section 2. Section 3 gives a brief review of related work. The methodology is presented, through concepts, definitions, and an algorithm, in Section 4. Examples of applications are given in Section 5. Section 6 presents conclusions.
2. Mathematical formulation of the problem

The problem in its general form can be formulated as

Maximize (Minimize)  F(x) = E[Z(x)]
subject to           x ∈ D,                    (1)
where F(x) is a (k × 1) vector-valued function of k functions f_j(x), j = 1, ..., k, whose analytical forms are unknown and whose values can only be estimated by the response vector Z(x) through computer simulation; D is the feasible region, which may be defined by a set of constraints g_i(x) ≤ 0, i = 1, ..., m (some or all of the constraints could also be stochastic and be estimated by the responses obtained from computer simulation); and x is an (n × 1) vector of controllable (decision) variables. For instance, in the example presented above, F(x) consists of two elements f_1(x) and f_2(x). These are estimated by Z_1(x) and Z_2(x), the computed values of the average utilization and the average work-in-process, respectively, obtained from the computer simulation model. Furthermore, D can be defined by another stochastic function w_1(x), the estimated throughput of the system using the same computer simulation model. The feasible region D can further be restricted by adding other stochastic constraints or deterministic functions such as upper and lower limits on the decision variables. Stochastic constraints create a major problem in optimizing such systems, as they present fuzzy boundaries for the feasible region. In order to make these constraints manageable, a chance-constraint type of approach has been adopted in this work. Since not all constraints can be satisfied all the time, in this approach the decision maker is asked to express his or her risk tolerance in terms of the probability of constraint violation that can still be tolerated. Letting γ represent such a probability, a stochastic constraint can be converted into a deterministic one by the following transformation. Let w_1(x) ≥ b_1 represent a constraint in which w_1(x) is a response of the simulation model. In order for the risk of violating the constraint not to exceed a certain value γ, we must have
P[w_1(x) < b_1] ≤ γ.

This condition can be enforced by replacing the stochastic constraint with w_1^L(x) ≥ b_1, where w_1^L(x) is the lower limit of the one-sided 100(1 − γ)% confidence interval on w_1(x). This is in fact a deterministic constraint and guarantees that the probability of violation will not exceed γ. Obviously, for less-than-or-equal constraints the upper limit of the confidence interval should be used.
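As a concrete illustration (not from the paper; the function names are our own), the conversion can be sketched by estimating the one-sided confidence limit of w_1(x) from N independent simulation replications with a t-based bound and checking the resulting deterministic surrogate:

```python
import numpy as np
from scipy import stats

def lower_confidence_limit(samples, gamma):
    """One-sided lower 100*(1-gamma)% confidence limit on the mean response,
    computed from N independent simulation observations of w1(x)."""
    n = len(samples)
    mean = np.mean(samples)
    se = np.std(samples, ddof=1) / np.sqrt(n)
    # t-quantile for a one-sided interval at risk level gamma
    return mean - stats.t.ppf(1.0 - gamma, df=n - 1) * se

def constraint_satisfied(samples, b1, gamma):
    """Deterministic surrogate for the chance constraint P[w1(x) < b1] <= gamma."""
    return lower_confidence_limit(samples, gamma) >= b1
```

A point x is then kept in the working feasible region only if `constraint_satisfied` holds for its simulated throughput samples.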
Using the above transformation, the general form of the problem formulation becomes

optimize    F(x) = E[Z(x)]
subject to  x ∈ D,

where D is the feasible region satisfying constraints that were deterministic originally, for example,

g_l(x) ≤ 0,  l = 1, ..., s,

or converted stochastic constraints such as

w_i^L(x) ≥ b_i,  i = 1, ..., m.
3. Review of related work
A comprehensive review of related work is presented in [14]; a brief summary will suffice here. In the past twenty years there has been a growing interest in research on multi-criteria decision making. However, due to the theoretical difficulties associated with stochastic multi-objective optimization, only limited work has been done in this particular area. For instance, the probabilistic trade-off development (PROTRADE) method for solving a special class of stochastic multi-objective programming problems is developed in [6]. This method was designed for problems with known linear objective functions whose coefficients are normally distributed with known means and variances. It also assumes complete knowledge of the preference structure in the form of a multi-attribute utility function. In [8] a stochastic goal programming approach is presented that assumes linearity and knowledge of a required level of achievement for each objective. This approach restricts the solutions to analytically expressed stochastic multi-objective linear programming problems. Another stochastic multi-objective programming approach is given in [10]. This method assumes the existence of a threshold value for the objective functions and maximizes the probability that the objective functions are at least equal to this value. All the objective functions and constraints are analytically represented, and the importance of each objective function is known. Only a few attempts have been made to solve multi-objective simulation optimization problems; see [2,4,11,13]. In [2] several approaches for combining mathematical programming techniques with computer simulation are mentioned, including an approach for multicriteria optimization. In [11], the utility function approach is applied to solve the problem as a deterministic scalar optimization problem.
The main disadvantage here is that the stochastic nature of the problem is ignored. A modified pattern search within the framework of goal programming is used in [4]. The main drawbacks of this method are neglecting the stochastic nature of the simulation response, imposing integrality restrictions on the decision variables, and assuming the existence of a known preemptive priority of the goals. In [13], a satisficing solution for the simulation model is found through a lexicographic goal programming framework by optimizing the goals one at a time, in descending order of importance, using a response surface methodology. The main restrictions associated with [13] are the assumption of preemptive priority of the goals, the neglect of correlated errors, and the inefficiency of the analysis of variance resulting from the highly skewed distribution of the system responses. A review of the literature suggests that no work has been done on combining multivariate statistical analysis, computer simulation, and multi-objective optimization. Here, we present a stochastic multi-objective multidimensional modified complex search for a stochastic compromise solution.
4. The methodology The objective of the present methodology is to introduce a new approach for solving a stochastic multi-objective problem in which the objective functions and some of the constraints can only be evaluated through computer simulation. It is also assumed that the decision maker is unable to provide sufficient 'a priori' preference information due to the complexity of the problem. In the remainder of this section, we will explain the basic concepts, definitions, theorems, and the procedure of the algorithm.
4.1. Concepts and definitions

The methodology is a stochastic multi-objective direct search for a stochastic compromise solution at a specified level α, and it is based on the constrained scalar simplex search method [14]. The main idea of the constrained scalar simplex search (complex search) is to compare the values of the objective function at q vertices, n + 1 ≤ q ≤ 2n (where n is the dimension of the vector of decision variables), of a general simplex (a complex in the case of complex search) and to move gradually towards the optimal point. To achieve this, in each trial the worst point in the complex is replaced by a new and better point through three operations: reflection, expansion, and contraction. In reflection, after dropping the worst point, the new vertex is obtained by connecting the old point to the centroid of the remaining vertices and extending the line of connection by the distance between the old point and the centroid. In expansion, the new point is obtained by the same process except that the distance of the new vertex from the centroid exceeds that of the old point by a specified factor larger than one. In contraction, the same process is applied with a factor less than one. This procedure is repeated until the complex shrinks into its centroid. The choice of reflection, expansion or contraction depends on the outcome of each comparison. For instance, when a reflection results in a point that falls outside the feasible region, the point is brought back into the feasible region by a contraction. Note that Box's complex method [3] requires convexity of the feasible region. In the case of stochastic constraints this property cannot be guaranteed. The assumption in this paper is that after converting the stochastic constraints, the resulting feasible region will be convex.
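The reflection, expansion, and contraction moves over the centroid can be sketched as follows (an illustrative fragment, not from the paper; the `feasible` predicate and function name are our own assumptions):

```python
import numpy as np

def replace_worst(vertices, worst_idx, factor, feasible):
    """Candidate point to replace the worst vertex of a complex.

    factor = 1.0 gives reflection, factor > 1.0 expansion, and
    factor < 1.0 contraction, all along the line from the worst
    vertex through the centroid of the remaining vertices.
    """
    vertices = np.asarray(vertices, dtype=float)
    others = np.delete(vertices, worst_idx, axis=0)
    centroid = others.mean(axis=0)
    candidate = centroid + factor * (centroid - vertices[worst_idx])
    # An infeasible candidate is pulled back toward the centroid by
    # repeated contraction (assumes the centroid itself is feasible,
    # which holds for a convex feasible region).
    while not feasible(candidate):
        factor *= 0.5
        candidate = centroid + factor * (centroid - vertices[worst_idx])
    return candidate
```

With `factor = 1.0` this reproduces a plain reflection; the halving loop implements the "bring infeasible points back by contraction" rule described above.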
Even if the region is not convex, the algorithm may still provide a reasonable feasible solution, because constraints are always checked, but the results will not be as reliable. The worst point in the scalar deterministic simplex search is defined as the vertex with the worst objective function value in the simplex, i.e., the minimum for a maximization problem and the maximum for a minimization problem. One of the important tasks in this work, however, is to define the worst point for the stochastic multi-objective search. The concept of the ideal and anti-ideal solution is defined in deterministic multi-objective optimization with respect to the compromise solution as presented in [15]. For stochastic problems it is necessary to define the expected ideal and anti-ideal vectors of mean responses. We assume that y^i, the vector of average responses at vertex i of the complex, is normally distributed with unknown mean vector μ_i and unknown variance-covariance matrix Σ. Since responses obtained from simulation are often the average of a large number of observations, the normality assumption is not unreasonable. To gain confidence that the responses are in fact normally distributed, more cautious users of this procedure are advised to conduct a normality test. One such test is provided in [7]. The principle of this test is to perform a χ² test on the squared distance d² defined by
d_j² = (Y_(j) − y^i)^T S^{-1} (Y_(j) − y^i),  j = 1, ..., N,

where Y_(1), Y_(2), ..., Y_(N) are the vectors of responses at each of N independent simulation runs at vertex i; y^i is the vector of average responses as defined above; and S is the estimate of the variance-covariance matrix Σ. The following definitions are required to introduce the new concepts. Without loss of generality it is assumed that the optimization problem is a maximization.
Definition. The current estimated ideal (k × 1) vector of mean responses for the complex r is defined as V^1, whose components v_j^1 are given by

v_j^1 = max_t y_j^t,  t = 1, 2, ..., T,  j = 1, 2, ..., k,

where y_j^t is the average response of the j-th criterion at vertex t, and T is the total number of vertices evaluated up to the construction of the complex r.

Definition. The current expected anti-ideal (k × 1) vector of mean responses for the complex r is defined as V^2, whose components v_j^2 are given by

v_j^2 = min_t y_j^t,  t = 1, 2, ..., T,  j = 1, 2, ..., k.
An appropriate definition for the worst point in a stochastic multi-objective setting is as follows.

Definition. Let μ_i be the vector of mean values of the objective function vector at vertex i. A point x^w in the decision space is said to be a statistically multi-objective worst point at level α if the result of a multivariate test in the objective function space about the mean response μ_i is to reject H_0: μ_i = V^1 at level α only for i = w, and to fail to reject it at all the other vertices i, i = 1, 2, ..., q, i ≠ w.

The worst point according to the above definition can be found by examining each point i and finding the maximum significance level at which H_0: μ_i = V^1 is rejected. Let this level be α_i for i = 1, 2, ..., q, and let α_w be the minimum over all the α_i. The statistically multi-objective worst point can then be characterized by the following definition.

Definition. A point x^w in the decision space is said to be the statistically multi-objective worst point at level α_w in the complex at hand if

α_w = min_i α_i,  i = 1, ..., q,

where α_i is the observed significance level corresponding to the average response y^i. Specifically, α_i is the probability of rejecting the null hypothesis (H_0: μ_i = V^1) while H_0 is true (type-I error). Now, assuming that y^i is the average response vector of a random sample of N normally distributed vectors with estimated variance-covariance matrix S, one obtains the following results.

1. For any point i of the complex, the probability α_i is

α_i = P[(y^i − V^1)^T S^{-1} (y^i − V^1) > (k(N − 1)/(N(N − k))) F_{k,N−k}(α_i)],    (2)

where F_{k,N−k}(α_i) is the upper (100 α_i)-th percentile of an F-distribution with degrees of freedom k and N − k, and N(y^i − V^1)^T S^{-1}(y^i − V^1) is the squared statistical distance known as the Mahalanobis distance. This statistic is the well-known Hotelling's T², which, after scaling by (N − k)/(k(N − 1)), is F-distributed with k and N − k degrees of freedom [7]. As mentioned earlier, responses are assumed to be normally distributed; if there is evidence of a significant deviation from normality, the significance levels obtained may not be correct.

2. One can easily show that the following problems are equivalent:

(a) min_i α_i,  i = 1, ..., q;
(b) max_i T_i²,  i = 1, ..., q;
(c) max_i (y^i − V^1)^T S^{-1} (y^i − V^1),  i = 1, ..., q.    (3)
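The equivalence in (3) suggests a direct computation: for each vertex, form Hotelling's T² with respect to V^1, convert it to an observed significance level via the F-distribution, and take the vertex with the smallest level as the worst point. A sketch (our own naming, assuming a common covariance estimate S is used for all vertices):

```python
import numpy as np
from scipy import stats

def significance_levels(ybar, S, V1, N):
    """Observed significance level alpha_i of H0: mu_i = V1 at each vertex.

    ybar : (q, k) average response vectors, one row per vertex
    S    : (k, k) estimated variance-covariance matrix
    V1   : (k,)   current ideal vector
    N    : number of simulation replications per vertex
    """
    q, k = ybar.shape
    Sinv = np.linalg.inv(S)
    alphas = np.empty(q)
    for i in range(q):
        d = ybar[i] - V1
        t2 = N * d @ Sinv @ d                    # Hotelling's T^2
        f_stat = t2 * (N - k) / (k * (N - 1))    # ~ F(k, N-k) under H0
        alphas[i] = stats.f.sf(f_stat, k, N - k) # observed level
    return alphas

def worst_vertex(ybar, S, V1, N):
    """Statistically multi-objective worst point: the vertex with the
    smallest alpha_i, i.e. the farthest from the ideal in Mahalanobis
    distance (the equivalence (a) <=> (c) above)."""
    return int(np.argmin(significance_levels(ybar, S, V1, N)))
```

The best compromise point of (4)-(5) is obtained by the symmetric call `argmax` on the same significance levels.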
3. At the final complex a statistically best compromise solution at level α may be defined as follows.

Definition. A point x^p is said to be the statistically best compromise point at level α_p if in the last complex the mean response vector y^p is such that

α_p = max_i α_i,  i = 1, 2, ..., q,    (4)

where p is the index of the optimal solution. By the same argument as in the case of the worst point, one can find x^p by solving

min_i (y^i − V^1)^T S^{-1} (y^i − V^1)    (5)
s.t.  y^i ∈ {y^1, ..., y^q}.
4. A simultaneous confidence interval for all the objective function components f_i(x^p), i = 1, ..., k, is

(y_i^p − δ, y_i^p + δ)    (6)

with probability 1 − α, where δ is given by

δ = sqrt( (k(N − 1)/(N(N − k))) F_{k,N−k}(α) e_i^T S e_i ),    (7)

and e_i is the unit vector in direction i. In other words, f_i exceeds the lower limit of the confidence interval with probability of at least 1 − α/2. A narrower simultaneous confidence interval, however, is obtained through the Bonferroni method for multiple comparisons; this method is adopted in this paper. The Bonferroni interval is given by

y_j^p ± t_{N−1}(α/2k) sqrt(e_j^T S e_j / N),  j = 1, 2, ..., k.    (8)
A detailed discussion and description of simultaneous confidence intervals can be found in [7]. Since this algorithm is based on a heuristic approach, three criteria are presented to the user for identifying the worst point. If one criterion does not result in a satisfactory solution, one of the other two can be employed. These criteria are defined as follows.

Criterion 1. F1-criterion (the farthest from the estimate of the expected ideal vector V^1). This is the criterion used so far; it defines the worst point as the point l at which the null hypothesis H_0: μ_l = V^1 is rejected at level α, while for all the other complex points i, i ≠ l, i = 1, 2, ..., q, H_0 cannot be rejected at this level.

Criterion 2. F2-criterion (the closest to the expected anti-ideal vector V^2). Under this criterion, the worst point is the point stochastically closest to the expected anti-ideal vector V^2; that is, the point l at which the null hypothesis H_0: μ_l = V^2 cannot be rejected at level α, while at all the other vertices i, i ≠ l, i = 1, 2, ..., q, H_0 is rejected at this level.

Criterion 3. F1-F2 criterion (the farthest from V^1 and the closest to V^2). This criterion applies both the F1 criterion and the F2 criterion to every point in the complex to select the worst point.
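The Bonferroni intervals in (8) are straightforward to compute; the following sketch (our own naming) uses the diagonal of S, since e_j^T S e_j = S_jj:

```python
import numpy as np
from scipy import stats

def bonferroni_intervals(ybar_p, S, N, alpha):
    """Bonferroni simultaneous 100*(1-alpha)% intervals, as in (8):
    ybar_j +/- t_{N-1}(alpha/(2k)) * sqrt(S_jj / N) for each response j.

    ybar_p : (k,)  average response vector at the compromise point
    S      : (k, k) estimated variance-covariance matrix
    N      : number of replications behind each average
    Returns a (k, 2) array of (lower, upper) limits.
    """
    k = len(ybar_p)
    # Split the overall risk alpha equally over the k two-sided intervals
    half = stats.t.ppf(1.0 - alpha / (2 * k), df=N - 1) * np.sqrt(np.diag(S) / N)
    return np.column_stack([ybar_p - half, ybar_p + half])
```

For a one-sided statement on a single component (e.g. "utilization is at least ..."), the corresponding single limit is reported, as in the FMS example of Section 5.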
4.2. Algorithm

The algorithm is divided into four main phases: initialization, simulation, optimization, and decision.
Phase I: Initialization
Step 1. Select the number of simulation experiments that you want to run.
Step 2. Select the criterion for selecting the statistical compromise solution and the required level of significance, α.
Step 3. Select N, the number of independent simulation trials at each vertex of the complex.
Step 4. Select weights if you have 'a priori' information about the preference of the objective functions.
Step 5. Choose a pseudorandom seed to randomly select an initial q and points x^1, x^2, ..., x^q such that x^i ∈ D, i = 1, ..., q.

Phase II: Simulation (function evaluation)
Step 6. Conduct N simulation trials at vertex i of the complex (repeat Step 6 for all q points of the complex the first time only) and estimate the average of the N runs.
Step 7. Compute the vector of average responses y^i, i = 1, 2, ..., q, with y^i = (y_1^i, y_2^i, ..., y_k^i)^T, and compute S_i, the estimate of the variance-covariance matrix of the average response vector y^i, and its inverse S_i^{-1}.
Step 8. Initialize or update the current ideal and anti-ideal (k × 1) vectors V^1 and V^2, respectively, in the response space. V^1 and V^2 are the vectors whose components are the maximum and minimum values of the average responses among the complex points evaluated up to this point.

Phase III: Optimization (multi-objective complex search)
Step 9. Find the stochastically worst point in the current complex and replace it with a new feasible point through reflection, expansion or contraction to develop a new complex. The feasibility of each new point is checked against each constraint. As stated earlier, constraints may be in analytical form or be responses obtained from simulation. Infeasible points are brought back into the feasible region by contraction.
Step 10. Check the convergence of the test statistic T² as a measure of the convergence of the average responses. Check the differences in the coordinates of the complex points on hand, and check whether the number of trials used is within the specified limit. If any of these stopping criteria is satisfied, proceed to Step 11; otherwise, return to Step 6.

Phase IV: Interactive (decision) mode
Step 11. Find a statistical compromise solution point x^p as defined in (5), based on the N × q vectors of responses. Display that solution to the decision maker. The decision maker can either accept the solution or, if not satisfied with the values of certain responses at a specific level, select another criterion and return to Step 2.

Stopping criteria. The algorithm stops as soon as one of the following conditions is satisfied:
1. The total number of simulation runs reaches the specified limit.
2. The differences among the complex coordinates are smaller than a value defined by the user in terms of the accuracy requirements on the decision variables.
3. The difference between consecutive values of the multivariate test statistic of the vector-valued function is below a specified value.

A simplified flowchart of the algorithm is given in Figure 1.
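The four phases can be strung together in a toy end-to-end driver; the sketch below is our own simplification (all helper signatures are hypothetical, every vertex is re-simulated in each iteration, only reflection with contraction fallback is used, and the covariance is estimated from a single vertex's trials rather than per-vertex):

```python
import numpy as np

def compromise_search(simulate, feasible, X0, N=10, max_iters=30, seed=0):
    """Toy sketch of Phases I-IV for a maximization problem.

    simulate(x, N, rng) -> (N, k) array of simulated responses at point x
    feasible(x)         -> bool (deterministic plus converted constraints)
    X0                  -> initial complex: a list of q feasible points
    """
    rng = np.random.default_rng(seed)
    X = np.array(X0, dtype=float)                      # Phase I

    def evaluate(X):
        # Phase II: N trials per vertex, averaged; S from one vertex's
        # trials (a simplification of the per-vertex estimates S_i)
        trials = [simulate(x, N, rng) for x in X]
        Y = np.array([t.mean(axis=0) for t in trials])
        Sinv = np.linalg.pinv(np.cov(trials[0], rowvar=False))
        V1 = Y.max(axis=0)                             # current ideal
        d = np.array([N * (y - V1) @ Sinv @ (y - V1) for y in Y])
        return d

    for _ in range(max_iters):                         # Phase III
        d = evaluate(X)
        w = int(np.argmax(d))                          # stochastically worst
        centroid = np.delete(X, w, axis=0).mean(axis=0)
        step = 1.0
        cand = centroid + step * (centroid - X[w])     # reflection
        while not feasible(cand):                      # contract if infeasible
            step *= 0.5
            cand = centroid + step * (centroid - X[w])
        X[w] = cand

    d = evaluate(X)                                    # Phase IV
    return X[int(np.argmin(d))]                        # closest to the ideal
```

A fixed iteration budget stands in for the three stopping criteria listed above; the interactive criterion switch of Step 11 is omitted.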
5. Examples of applications

In this section the results of applying the algorithm to a physical and a numerical problem are presented. The physical example consists of formulating and solving the problem of finding the optimum number of buffer spaces in a class of flexible manufacturing systems (FMSs). In the numerical example a vector of objective functions is defined in terms of three noise-corrupted analytical functions. The example of the physical system demonstrates the modeling aspects and limiting conditions. The
Figure 1. The flowchart for the algorithm.
numerical example allows evaluation of the level of achievement of the goals, as in this case the ideal solution vector is known.
5.1. Application to a physical system

Consider the following class of FMSs as presented in reference [1]. The system consists of a set of work stations, each with several parallel machines. There is a central storage area for the system, and there is a local buffer storage for each work station. The material handling system transfers jobs from the central storage area to the work stations by a set of carts. The finished parts are taken away from the output side of the work stations by a return conveyor. In reference [1], this problem is formulated as follows. Assume that x is the (n × 1) vector of the system decision variables x_1, x_2, ..., x_n. These variables correspond to the number of buffer spaces for each machine and the total number of carts. Let y_1(x) be the objective function representing the expected or mean in-process inventory, and let y_2(x) be the mean utilization of the work stations. Consider the constraint set D defined by
D = {x | g_j(x) ≤ b_j, j = 1, ..., k, x ∈ R^n}.

The problem is formulated as:

Minimize    y_1(x) = E[f_1(x)]
subject to  y_2(x) = E[f_2(x)] ≥ f_2^0,
            x ∈ D,

where f_1(x) and f_2(x) are the observed responses from the simulation model corresponding to the average in-process inventory and machine utilization, and f_2^0 is a minimum level of utilization to be achieved.
A multi-objective formulation of the problem is:

Maximize    F(x) = [−y_1(x), y_2(x)]^T
subject to  x ∈ D.
To be more specific, the problem is formulated for an FMS that consists of three work stations, each with two parallel machines. Decision variables x_1, x_2, x_3 correspond to the number of buffer spaces in work stations 1, 2, and 3 respectively, and x_4 is the total number of carts. The problem can be written as

max   [−y_1, y_2]^T
s.t.  1 ≤ x_i,  i = 1, ..., 4.

The service times of stations 1 and 2 are mixtures of two distributions, with θ_1 = 0.9 (station 1), and μ_1 = 1, μ_2 = 0.25, θ_1 = 0.7 (station 2), where θ_1 represents the weight of distribution 1. The service time of station 3 is Erlang E_2 with mean 1.9. The arrival of external jobs is Poisson with mean rate 3. The probability of routing internal jobs to each work station is 1/3. The probability of recirculating a completed job to the central storage is 0.1.

To apply the algorithm to this problem, the objective functions must be normally distributed for all values of x. This is not necessarily true in this case. However, since observations on the average work-in-process and utilizations are made through computer simulation, and each observation is the average of a large number of observations, the normality assumption does not seem drastically violated. The algorithm was applied to this problem using a SLAM II [12] simulation model of the FMS. The optimum results obtained by the algorithm are as follows. The number of buffer spaces in work station 1 = 6. The number of buffer spaces in work station 3 = 3. The total number of carts = 8. Overall, 450 simulation runs were made in a total of 45 steps. The expected in-process inventory = 6.876. The expected machine utilization = 0.867. The 90% simultaneous one-sided confidence intervals for the objective functions are: average in-process inventory at most 6.9728; average machine utilization at least 0.8647. From these results it is not clear how well the compromise solution compares with the achievable values of the objective functions, because the ideal values are not known. In the following example, the performance of the algorithm can be tested because the expected ideal solutions are available.
5.2. Application to a numerical example

Consider the following stochastic vector optimization problem:

max   y(x) = E[f(x)]
s.t.  x ∈ D,

where

y(x) = [x_2 − 2x_1, 3x_1 − x_2, x_1 x_2]^T

and f(x) = y(x) + e, with e ~ N(0, 4I), where I denotes the identity matrix. This problem is generated by adding random noise to each objective function of a deterministic problem. The noise samples are generated from an independently normally distributed vector. For the sake of simplicity, the random (3 × 1) error vector e is not made a function of the decision variables x. In the deterministic case, the following are the individual maxima and minima of each objective function, along with the corresponding values of the decision variables:
V^1 = max F(x) = [1.0, 15.0, 12.0]^T,  attained at (0, 1), (5, 0), and (4, 3), respectively,

and

V^2 = min F(x) = [−10.0, −1.0, 0.0]^T,  attained at (5, 0), (0, 1), and (s, 0), s ∈ [0, 5], respectively.
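For readers who wish to reproduce the example, one stochastic observation of the objective vector can be simulated as below. This is our own sketch: the second and third deterministic components appear in the source, while the first is inferred from the tabulated ideal and anti-ideal values (it matches both 1.0 at (0, 1) and −10.0 at (5, 0)).

```python
import numpy as np

def f_noisy(x, rng):
    """One stochastic observation f(x) = y(x) + e of the example's
    objective vector, with e ~ N(0, 4I) (i.e. standard deviation 2
    per component, independently drawn on every call)."""
    x1, x2 = x
    # Deterministic part; the first component is inferred, not quoted
    y = np.array([x2 - 2.0 * x1, 3.0 * x1 - x2, x1 * x2])
    return y + rng.normal(0.0, 2.0, size=3)   # variance 4 => std 2
```

Averaging many such observations at a design point recovers the deterministic values, which is what the algorithm's vertex averages y^i estimate with N replications.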
The deterministic case shows that the objective functions are conflicting. A compromise solution may be defined as the solution that minimizes a distance measure from the ideal vector V^1 (minimizing a regret function representing the regret or loss from not attaining the ideal vector V^1). The concept of the compromise solution in the deterministic case is developed in [15]. We, however, use the concept of the stochastic compromise solution at a specified significance level, as introduced earlier in this paper. The stopping criteria specified in this example are:
1. The maximum number of simulation runs allowed is 500.
2. The difference between the corresponding components of the objective function vector in two consecutive complexes is at most 0.01.
3. The accuracy requirement for the coordinates of the complex is 0.001.
The results obtained by applying the algorithm to this problem are as follows. A total of 465 simulation runs was used in 93 trials. The resulting vector of optimum values for the decision variables is x = [3.708, 2.01]^T. The vector of the average compromise solution for the objective functions is [−5.381, 9.105, 7.751]^T. The simultaneous 80% confidence intervals on the objective functions include

7.3091 ≤ y_2 ≤ 10.9002,
6.0136 ≤ y_3 ≤ 12.0218.
Comparison of these results with the ideal maximum for each objective function indicates that a compromise solution has been obtained. This solution maximizes the probability of achieving a global optimum for each objective function. This maximum probability is, of course, a function of the variability induced on the objective functions.

6. Conclusions
This work presents a new approach for multicriteria optimization of systems that are evaluated through computer simulation. The methodology suggests a new way of defining a compromise solution in terms of the maximum likelihood that a solution will contain the ideal solution consisting of the optimum values of all objectives. This methodology can be applied to problems where the objective functions and the stochastic constraints are distributed reasonably close to a normal distribution. Although average responses obtained from simulation often satisfy normality conditions, the user needs to be aware of this assumption. It is not claimed that the complex search used here is suitable for all problems at all times, nor is it implied that the optimality definition used here satisfies the utility functions of all decision makers. The method is presented as one possible way of performing multiple-criteria optimization of complex stochastic systems. An automated algorithm is developed that can be employed when linked with a computer simulation program. This algorithm allows an interactive approach in which the user can intervene in the findings of the algorithm. Although the algorithm can potentially be applied to any system with stochastic multicriteria responses, it is most suitable for systems modelled by computer simulation. This is because of the ease of running simulation experiments instantaneously and obtaining enough samples of the responses on line to perform the required statistical analysis. The paper also demonstrated the application of the algorithm to two problems.

References

[1] Azadivar, F., and Lee, Y.H., "Optimum number of buffer spaces in flexible manufacturing systems", in: K.E. Stecke and R. Suri (eds.), Proceedings of the Second ORSA/TIMS Special Interest Conference on Flexible Manufacturing Systems: Operations Research Models and Applications, Ann Arbor, MI, August 12-15, 1986, Elsevier Science Publishers, Amsterdam, 181-189.
[2] Biles, W.E., and Swain, J.J., "Mathematical programming and the optimization of computer simulations", Mathematical Programming Study 11 (1979) 189-207.
[3] Box, M.J., "A new method for constrained optimization and a comparison with other methods", Computer Journal 8 (1965) 42-52.
[4] Clayton, E.R., Weber, E., and Taylor, B.W., III, "A goal programming approach to the optimization of multi-response simulation models", IIE Transactions 14/4 (1982) 282-287.
[5] Fishman, G.S., Principles of Discrete Event Simulation, John Wiley, New York, 1978.
[6] Goicoechea, A., Duckstein, L., and Fogel, M.M., "Multiple objective under uncertainty: An illustrative application of PROTRADE", Water Resources Research 15/2 (1979) 203-210.
[7] Johnson, R.A., and Wichern, D.W., Applied Multivariate Statistical Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1988.
[8] Kall, P., "Stochastic programming", European Journal of Operational Research 10/2 (1982) 125-130.
[9] Law, A.M., and Kelton, W.D., Simulation Modeling and Analysis, 2nd ed., McGraw-Hill, New York, 1991.
[10] Leclercq, J.P., "Stochastic programming: An interactive multi-criteria approach", European Journal of Operational Research 10/1 (1982) 33-41.
[11] Montgomery, D.C., and Bettencourt, V.M., "Multiple response surface methods in computer simulation", Simulation 29/4 (1977) 113-121.
[12] Pritsker, A.A.B., Introduction to Simulation and SLAM II, 3rd ed., John Wiley, New York, 1986.
[13] Rees, L., Clayton, E.R., and Taylor, B.W., III, "Solving multiple response simulation models using modified response surface methodology within a lexicographic goal programming framework", IIE Transactions 17/1 (1984) 47-57.
[14] Teleb, R.A., "An algorithm for solving stochastic multi-objective optimization problems by simulation", Ph.D. Dissertation, University of Illinois at Chicago, IL, 1990.
[15] Zeleny, M., Multiple Criteria Decision Making, McGraw-Hill, New York, 1982.