
Simulation Modelling Practice and Theory 11 (2003) 57–74 www.elsevier.com/locate/simpat

Optimal computing budget allocation for Monte Carlo simulation with application to product design ☆

Chun-Hung Chen a,*, Karen Donohue b, Enver Yücesan c, Jianwu Lin d

a Department of Systems Engineering and Operations Research, George Mason University, 4400 University Drive, MS 4A6, Fairfax, VA 22030, USA
b Department of Operations Management Science, University of Minnesota, Minneapolis, MN 55455, USA
c Technology Management Area, INSEAD, Fontainebleau 77305, France
d Department of Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA

Received 1 September 2001; received in revised form 1 April 2002

Abstract

Ordinal optimization has emerged as an efficient technique for simulation and optimization, converging exponentially in many cases. In this paper, we present a new computing budget allocation approach that further enhances the efficiency of ordinal optimization. Our approach intelligently determines the best allocation of simulation trials or samples necessary to maximize the probability of identifying the optimal ordinal solution. We illustrate the approach's benefits and ease of use by applying it to two electronic circuit design problems. Numerical results indicate the approach yields significant savings in computation time above and beyond the use of ordinal optimization. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Monte Carlo simulation; Intelligent simulation; Manufacturing design; Yield analysis; Computing budget allocation

☆ This work has been supported in part by NSF under grants DMI-9732173, DMI-0002900, DMI-0049062, by Sandia National Laboratories under contract BD-0618, and by the George Mason University Research Foundation.
* Corresponding author. Tel.: +1-703-993-3572; fax: +1-703-993-1521. E-mail addresses: [email protected] (C.-H. Chen), [email protected] (K. Donohue), [email protected] (E. Yücesan), [email protected] (J. Lin).
1569-190X/03/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved. doi:10.1016/S1569-190X(02)00095-3


1. Introduction

In developing a new product or manufacturing process, designers often break the system down into components and examine how each component should be designed to support a set of performance criteria for the overall system. This is often carried out by first defining a set of tolerance levels for each component and then identifying component designs that best meet these targets. Defining appropriate tolerance levels can be difficult if the value (e.g., dimensions, frequency, cycle time) of a component is highly variable for a given design or correlated with the designs of other components.

Another method, which is commonly used in designing electronic circuits, is to evaluate the interaction of alternative component designs directly, with the objective of identifying the design combination that achieves the highest expected performance. This approach is commonly referred to as 'Statistical Design'. The method involves running a series of experiments or simulations to model the performance behavior of each design. The definition of a good design can be quite general in this framework; for example, performance objectives may include minimizing a product's output deviation relative to a target value or range, or maximizing product yield relative to a set of firm performance criteria. The advantage of the Statistical Design method is that it evaluates component alternatives in terms of their entire performance distribution and captures possible performance dependencies among components. This gives a more accurate characterization of a design's performance and helps identify a more robust design.

However, the Statistical Design method has the drawback that its computational cost grows quickly with the number of designs and the desired accuracy of the performance estimation. For example, estimating yield for a given design requires performing a series of simulation trials in which random values are assigned to the design's statistical variables and the system performance targets are checked against each trial. The number of trials needed to guarantee a good estimate is often quite large due to the slow convergence rate of Monte Carlo simulation. In general, the rate of convergence is at best $O(1/\sqrt{N})$, where $N$ is the number of simulation trials [12,16]. If $N$ and the number of design alternatives, $k$, are both large, the total number of simulation trials, $kN$, may be prohibitively high. In such a case, the designer must either limit the number of design alternatives considered or settle for a less accurate yield comparison across designs.

In this paper we introduce a design optimization scheme which utilizes Statistical Design in a more efficient manner than traditional Monte Carlo approaches. This allows a designer, with a fixed computing budget, to compare a larger set of designs with no loss in solution accuracy. The scheme utilizes ordinal optimization to significantly reduce the number of simulation trials needed to accurately compare designs. Ordinal optimization is appropriate in this context since we are interested in identifying the design with the highest performance, rather than an accurate estimate of performance for all designs. Dai [10] shows that the convergence rate for ordinal optimization can be exponential, which is much faster than the $O(1/\sqrt{N})$ convergence rate of a value estimate. The convergence rate increases significantly under ordinal comparison since an inferior design can be disregarded when the value estimate


is still poor. This is valuable both when selecting among a finite set of design alternatives [15,18] and when searching over a continuous design space [1,2,6,14,20].

Our scheme further improves the performance of ordinal optimization by intelligently determining the number of simulation trials to perform across different designs as the algorithm unfolds. Intuitively, to ensure a high ordinal comparison probability, a larger portion of the computing budget should be allocated to those designs that are critical in the process of identifying the best design. On the other hand, limited computational effort should be expended on non-critical designs that have little effect on determining the optimal solution. Overall simulation efficiency is improved as less computational effort is spent on simulating non-critical designs and more is spent on critical designs. Ideally, one would like to allocate simulation trials to designs in a way that maximizes the probability of selecting the best design within a given computing budget. We present an optimal computing budget allocation (OCBA) technique to accomplish this goal.

Previous researchers have examined various approaches for efficiently allocating a fixed computing budget across solution alternatives. Chen [3] formulates the allocation of computational effort as a nonlinear optimization problem. Chen et al. [5] apply the steepest-ascent method to solve the budget allocation problem; its major drawback is that extra computation cost is incurred to iteratively search for a solution to the allocation problem, and this extra cost can be significant if the number of iterations is large. Chen et al. [7] introduce a greedy heuristic that iteratively determines which design appears most promising for further simulation based on a simple metric; however, the budget allocation selected by the greedy heuristic is not necessarily optimal. For the special case where the simulation costs of all designs are equal, Chen et al. [8] offer an asymptotic solution to the budget allocation problem.

In this paper, we develop an asymptotically optimal approach for solving the budget allocation problem. This is accomplished by replacing the objective function with an approximation that can be solved analytically. The computation cost required to solve the approximate allocation problem is significantly lower than for previous approaches (it is essentially negligible). Our approach also allows a more general formulation of the allocation problem than previous approaches by relaxing the restriction that the cost of simulation is identical for all designs.

In sum, this paper offers both a theoretical and a practical contribution. At a theoretical level, we introduce a new OCBA solution technique that is asymptotically optimal and more general than previous approaches; the technique could be applied within any ordinal optimization procedure. At a practical level, we introduce a new approach for efficiently incorporating Statistical Design evaluation techniques within a design optimization framework. We offer two circuit design examples to illustrate the benefits this approach offers over traditional ordinal optimization methods.

The paper continues in Section 2 with a formulation of the optimal computing budget allocation problem and a discussion of the major solution issues. We also provide background on the Bayesian model on which our solution technique is based.


Section 3 presents our asymptotic solution technique. We illustrate the performance of this technique, as well as the overall optimization scheme, in Section 4, through two circuit design problems. Section 5 provides final conclusions.
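To make the convergence-rate discussion above concrete, the following Python sketch estimates the yield of a single hypothetical design by plain Monte Carlo sampling. The normal output model, the specification window, and the sample sizes are illustrative assumptions rather than data from the paper; the printed standard error shrinks roughly tenfold for every hundredfold increase in N, the $O(1/\sqrt{N})$ behavior noted in the Introduction.

import numpy as np

# Illustrative only: the "design" produces a normally distributed output with
# assumed mean 10 and standard deviation 2, and a trial counts as a success
# (X_ij = 1) when the output falls inside the spec window (8, 12).
rng = np.random.default_rng(1)
for n in (100, 10_000, 1_000_000):
    output = rng.normal(10.0, 2.0, size=n)
    hits = ((output > 8.0) & (output < 12.0)).astype(float)
    yield_estimate = hits.mean()
    std_error = hits.std(ddof=1) / np.sqrt(n)   # shrinks like O(1/sqrt(N))
    print(n, round(yield_estimate, 4), round(std_error, 5))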

2. Notation and problem formulation

In this section, we introduce the necessary notation and formulate the OCBA problem. Because our solution technique is quite general, we define the notation in general terms rather than specifically in the context of a product design problem.

k: the total number of alternative designs,
$c_i$: the computation cost per simulation run for design i,
$X_{ij}$: the result of the jth i.i.d. simulation run from design i,
$X_i$: the vector representing all simulation output for design i, $X_i = \{X_{ij} : j = 1, 2, \ldots, N_i\}$,
$N_i$: the number of simulation runs for design i,
$\bar{\mu}_i$: the sample mean performance measure for design i, $\bar{\mu}_i = \frac{1}{N_i}\sum_{j=1}^{N_i} X_{ij}$,
$\mu_i$: the mean performance measure, $\mu_i = E[X_{ij}]$,
$\sigma_i^2$: the variance for design i,
b: the design having the largest sample mean performance measure, i.e., $\bar{\mu}_b \geq \max_i \bar{\mu}_i$,
s: the design having the second largest sample mean performance measure, i.e., $\bar{\mu}_b \geq \bar{\mu}_s \geq \bar{\mu}_i$ for all $i \neq b, s$,
$\delta_{j,i} \equiv \bar{\mu}_j - \bar{\mu}_i$.

The definition of $X_{ij}$ will vary depending on the application. If we are interested in maximizing product yield, then $X_{ij}$ equals either 1 or 0, depending on whether or not the output of the jth run for design i satisfies the system performance specifications; consequently, $\mu_i = E[X_{ij}]$ represents the yield of design alternative i. If we are instead interested in minimizing the mean square error of performance relative to a target value g, then $X_{ij}$ represents a performance value and $\mu_i = E[(X_{ij} - g)^2]$. The goal in either case is to find the design with the best expected performance; without loss of generality, we state the problem as finding the design i that maximizes $\mu_i$. Our design performance estimate $\bar{\mu}_i$ is a good approximation for $\mu_i$ when $N_i$ is large since, by the strong law of large numbers, $P\{\lim_{N_i \to \infty} \bar{\mu}_i = \mu_i\} = 1$.

We formulate our design selection problem as an ordinal optimization problem. As mentioned earlier, ordinal optimization is appropriate in this context since we are interested in identifying the design with the highest expected performance, rather than an accurate estimate of performance for all designs. In general, ordinal optimization is concerned with maximizing the probability of selecting the best design; we refer to this measure as the correct selection probability, P{CS}. While $\bar{\mu}_i$ converges to $\mu_i$ slowly, the P{CS} of an ordinal comparison converges to 1.0 exponentially fast. The goal of the OCBA technique is to take advantage of this exponential convergence and maximize P{CS} subject to a given computing budget.


More precisely, we wish to choose $N_1, N_2, \ldots, N_k$ such that P{CS} is maximized, subject to a limited computing budget T. That is,

\[
\max_{N_1,\ldots,N_k} \; P\{\mathrm{CS}\} \quad \text{s.t.} \quad \sum_{i=1}^{k} c_i N_i = T, \qquad N_i \in \mathbb{N},\; i = 1,\ldots,k. \tag{I}
\]

Here $\mathbb{N}$ is the set of non-negative integers and $c_1 N_1 + c_2 N_2 + \cdots + c_k N_k$ denotes the total computational cost.

To solve problem (I), we must be able to estimate P{CS}. There exists a large literature on assessing P{CS} based on classical statistical models; Goldsman and Nelson [13] provide an excellent survey of the available approaches to multiple comparisons. However, most of these approaches are only suitable for problems with a small number of designs. Recently, Chen [4] introduced an estimation technique that quantifies the confidence level P{CS} for ordinal comparison when the number of designs is large. This technique has the added benefit of providing sensitivity information that is useful in solving problem (I). We incorporate this technique within our OCBA solution approach.

Chen's [4] technique is based on the following Bayesian model. Assume that the simulation output $X_{ij}$ has a normal distribution with mean $\mu_i$ and variance $\sigma_i^2$. After the simulation is performed, a posterior distribution of $\mu_i$, $p(\mu_i \mid X_i)$, can be constructed based on two pieces of information: (i) prior knowledge of the system's performance, and (ii) the current simulation output. Let b denote the design with the largest sample mean, based on the simulation output. If we select this design, the probability that we have selected the best design is

\[
P\{\mathrm{CS}\} = P\{\text{design } b \text{ is actually the best design}\} = P\{\mu_b > \mu_i,\; i \neq b \mid X_i,\; i = 1, 2, \ldots, k\}. \tag{1}
\]

To simplify the notation, we rewrite Eq. (1) as $P\{\tilde{\mu}_b > \tilde{\mu}_i,\; i \neq b\}$, where $\tilde{\mu}_i$ denotes the random variable whose probability distribution is the posterior distribution for the performance of design i. Assume that the unknown mean $\mu_i$ has a conjugate normal prior distribution, and consider non-informative prior distributions; this implies that no prior knowledge is available about the performance of any design alternative before conducting the simulation. In that case, DeGroot [11] shows that the posterior distribution of $\mu_i$ is

\[
\tilde{\mu}_i \sim N\!\left(\frac{1}{N_i}\sum_{j=1}^{N_i} X_{ij},\; \frac{\sigma_i^2}{N_i}\right).
\]

After the simulation is performed, $\bar{\mu}_i$ can be calculated, $\sigma_i^2$ can be approximated by the sample variance, and P{CS} can then be estimated using a Monte Carlo simulation over the posterior distributions. The problem with this approach is that such a Monte Carlo simulation can be quite time-consuming. Recall that we are interested in estimating P{CS} as a means of solving (I); since the purpose of solving (I) is to improve simulation efficiency, we need a relatively fast and inexpensive way of performing this estimation.
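For illustration, the sketch below implements the Monte Carlo estimation of P{CS} just described: it draws repeatedly from the normal posteriors $\tilde{\mu}_i \sim N(\bar{\mu}_i, \sigma_i^2/N_i)$ and counts how often the observed best design b also produces the largest draw. The function name and the example statistics are ours, and sample variances stand in for the unknown $\sigma_i^2$; this is exactly the brute-force estimator that the allocation procedure below avoids for cost reasons.

import numpy as np

def pcs_posterior_mc(sample_means, sample_vars, runs, draws=100_000, seed=0):
    # Draw from each posterior mu~_i ~ N(sample mean, sigma_i^2 / N_i) and
    # report the fraction of draws in which design b is also the best.
    rng = np.random.default_rng(seed)
    means = np.asarray(sample_means, dtype=float)
    stds = np.sqrt(np.asarray(sample_vars, dtype=float) / np.asarray(runs, dtype=float))
    b = int(np.argmax(means))
    posterior_draws = rng.normal(means, stds, size=(draws, len(means)))
    return float(np.mean(np.argmax(posterior_draws, axis=1) == b))

# Example with three designs after a pilot of 50 runs each (numbers are illustrative):
print(pcs_posterior_mc([0.82, 0.80, 0.74], [0.15, 0.16, 0.19], [50, 50, 50]))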


Efficiency is more crucial than estimation accuracy in this setting. For this reason, we use the following, more efficient, procedure for approximating P{CS}.

Let $Y_i$, $i = 1, \ldots, k$, be random variables. According to the Bonferroni inequality, $P\{\bigcap_{i=1}^{k} (Y_i > 0)\} \geq 1 - \sum_{i=1}^{k} [1 - P\{Y_i > 0\}]$. We replace $Y_i$ by $(\tilde{\mu}_b - \tilde{\mu}_i)$ to obtain a lower bound for the probability of correct selection. That is,

\[
P\{\mathrm{CS}\} = P\!\left\{\bigcap_{i=1, i \neq b}^{k} (\tilde{\mu}_b - \tilde{\mu}_i > 0)\right\}
\geq 1 - \sum_{i=1, i \neq b}^{k} \left[1 - P\{\tilde{\mu}_b - \tilde{\mu}_i > 0\}\right]
= 1 - \sum_{i=1, i \neq b}^{k} P\{\tilde{\mu}_b < \tilde{\mu}_i\} \equiv \mathrm{APCS}.
\]

We refer to this lower bound on the correct selection probability as the approximate probability of correct selection (APCS). APCS can be computed very easily and quickly; no extra Monte Carlo simulation is needed. Numerical testing in Chen [4] shows that this measure provides a reasonably good approximation to P{CS}. We therefore use APCS to approximate P{CS} as the simulation experiment proceeds. The OCBA problem can now be restated as problem (II):

\[
\max_{N_1,\ldots,N_k} \; 1 - \sum_{i=1, i \neq b}^{k} P\{\tilde{\mu}_b < \tilde{\mu}_i\} \tag{II}
\]
\[
\text{s.t.} \quad \sum_{i=1}^{k} c_i N_i = T, \tag{2}
\]
\[
\qquad N_i \geq 0, \quad \text{for } i = 1, \ldots, k. \tag{3}
\]
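A minimal sketch of how APCS can be evaluated directly from the collected statistics: under the normal posteriors above, $\tilde{\mu}_b - \tilde{\mu}_i$ is normal with mean $\delta_{b,i}$ and variance $\sigma_b^2/N_b + \sigma_i^2/N_i$, so each term $P\{\tilde{\mu}_b < \tilde{\mu}_i\}$ is a single normal CDF evaluation. The function below is our own illustration, with sample variances substituted for $\sigma_i^2$.

import numpy as np
from scipy.stats import norm

def apcs(sample_means, sample_vars, runs):
    # APCS = 1 - sum_{i != b} P{mu~_b < mu~_i}
    #      = 1 - sum_{i != b} Phi(-delta_{b,i} / sqrt(s_b^2/N_b + s_i^2/N_i))
    means = np.asarray(sample_means, dtype=float)
    variances = np.asarray(sample_vars, dtype=float)
    n = np.asarray(runs, dtype=float)
    b = int(np.argmax(means))
    others = np.arange(len(means)) != b
    delta = means[b] - means[others]
    spread = np.sqrt(variances[b] / n[b] + variances[others] / n[others])
    return 1.0 - float(np.sum(norm.cdf(-delta / spread)))

# Same illustrative statistics as before; APCS is a conservative lower bound on P{CS}.
print(apcs([0.82, 0.80, 0.74], [0.15, 0.16, 0.19], [50, 50, 50]))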

In the next section we present an asymptotic allocation rule with respect to the number of simulation runs, $N_i$, which suggests an effective method for solving this problem.

3. An asymptotic allocation rule

To simplify the analysis of problem (II), we initially ignore the non-negativity constraints (3); we will show later that our solution technique always provides positive values for $N_i$, $i = 1, \ldots, k$. We also treat $N_i$ as a continuous variable, which is a reasonable approximation for large T. We denote this simplified version of the problem as problem (III).

The major issue in solving problem (III) is the estimation of the gradient of $\sum_{i=1, i \neq b}^{k} P\{\tilde{\mu}_b < \tilde{\mu}_i\}$. This is difficult because there is no closed-form expression for $\sum_{i=1, i \neq b}^{k} P\{\tilde{\mu}_b < \tilde{\mu}_i\}$. However, we can estimate the gradient of this measure with the help of Chernoff bounds, as stated in Lemma 1.

Lemma 1. Suppose the random variable Y is distributed according to $N\!\left(\delta_{b,i},\; \frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)$, where $\delta_{b,i} = (\bar{\mu}_b - \bar{\mu}_i) > 0$. Then

\[
P\{Y < 0\} \leq \exp\!\left(-\frac{\delta_{b,i}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)}\right).
\]

Proof. See Ross [19]. □

Using the bound in Lemma 1, we construct the following estimator:

\[
\mathrm{APCS} = 1 - \sum_{i=1, i \neq b}^{k} P\{\tilde{\mu}_b - \tilde{\mu}_i < 0\}
\;\geq\; 1 - \sum_{i=1, i \neq b}^{k} \exp\!\left(-\frac{\delta_{b,i}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)}\right) \;\equiv\; \mathrm{EAPCS},
\]

where EAPCS denotes the estimated approximate probability of correct selection. Note that as $N_b$ and $N_i \to \infty$, $\exp\!\left(-\frac{\delta_{b,i}^2}{2}\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)^{-1}\right) \to 0$ and $P\{\lim_{N_i \to \infty} \mathrm{EAPCS} = \mathrm{APCS}, \text{ for } i = 1, 2, \ldots, b, \ldots, k\} = 1$.

Substituting this EAPCS measure into the objective function of problem (III) yields an approximation that is significantly easier to solve. Restating this approximation as a minimization problem further simplifies the formulation:

\[
\min_{N_1,\ldots,N_k} \; \sum_{i=1, i \neq b}^{k} \exp\!\left(-\frac{\delta_{b,i}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)}\right)
\quad \text{s.t.} \quad \sum_{i=1}^{k} c_i N_i = T. \tag{4}
\]

Let F be the Lagrangian relaxation of (4):

\[
F = \left[1 - \sum_{i=1, i \neq b}^{k} \exp\!\left(-\frac{\delta_{b,i}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)}\right)\right]
- \lambda\left(\sum_{i=1}^{k} c_i N_i - T\right).
\]

The Karush–Kuhn–Tucker (KKT) conditions of this problem can be stated as follows:

\[
\frac{\partial F}{\partial N_i} = \exp\!\left(-\frac{\delta_{b,i}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)}\right)
\frac{\delta_{b,i}^2\,\sigma_i^2/N_i^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)^2} - \lambda c_i = 0,
\quad \text{for } i = 1, 2, \ldots, k,\; i \neq b, \tag{5}
\]

and

\[
\frac{\partial F}{\partial N_b} = \sum_{i=1, i \neq b}^{k} \exp\!\left(-\frac{\delta_{b,i}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)}\right)
\frac{\delta_{b,i}^2\,\sigma_b^2/N_b^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)^2} - \lambda c_b = 0, \tag{6}
\]

\[
\lambda\left(\sum_{i=1}^{k} c_i N_i - T\right) = 0, \quad \text{and} \quad \lambda \geq 0.
\]
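A quick numerical sanity check of the Chernoff bound used in Lemma 1 and in EAPCS, for one illustrative choice of $\delta_{b,i}$ and of the posterior spread (the numbers are ours, not from the paper):

import numpy as np
from scipy.stats import norm

# For Y ~ N(delta, spread^2) with delta > 0, Lemma 1 gives
#   P{Y < 0} <= exp(-delta^2 / (2 * spread^2)).
delta, spread = 1.5, 1.0
exact = norm.cdf(-delta / spread)                # P{Y < 0} = Phi(-delta/spread), about 0.067
bound = np.exp(-delta ** 2 / (2 * spread ** 2))  # about 0.325, an upper bound as claimed
print(exact, bound)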

We now examine the relationship between $N_i$ and $N_s$ for $i = 1, 2, \ldots, k$ and $i \neq b, s$, where b and s are the designs with the largest and the second largest sample means, respectively. Consider Eq. (5) for the cases of designs i and s:


\[
c_i \exp\!\left(-\frac{\delta_{b,i}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)}\right)
\frac{\delta_{b,i}^2\,\sigma_i^2/N_i^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)^2}
= c_s \exp\!\left(-\frac{\delta_{b,s}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_s^2}{N_s}\right)}\right)
\frac{\delta_{b,s}^2\,\sigma_s^2/N_s^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_s^2}{N_s}\right)^2}.
\]

To maximize P{CS}, $N_b$ should be increased relative to $N_i$ for $i = 1, 2, \ldots, k$, $i \neq b$ (this is indeed what we observe in the actual simulation experiments). Therefore, we assume that $N_b \gg N_i$, which enables us to simplify the above equation to

\[
\exp\!\left(-\frac{\delta_{b,i}^2}{2\,\sigma_i^2/N_i}\right)
\frac{\delta_{b,i}^2\,\sigma_i^2/N_i^2}{2\left(\sigma_i^2/N_i\right)^2}
\approx \frac{c_s}{c_i}\exp\!\left(-\frac{\delta_{b,s}^2}{2\,\sigma_s^2/N_s}\right)
\frac{\delta_{b,s}^2\,\sigma_s^2/N_s^2}{2\left(\sigma_s^2/N_s\right)^2}.
\]

Then,

\[
\exp\!\left(-\frac{\delta_{b,i}^2}{2\,\sigma_i^2/N_i}\right)\frac{\delta_{b,i}^2}{\sigma_i^2}
\approx \frac{c_s}{c_i}\exp\!\left(-\frac{\delta_{b,s}^2}{2\,\sigma_s^2/N_s}\right)\frac{\delta_{b,s}^2}{\sigma_s^2}.
\]

On rearranging terms, the above equation becomes

\[
\exp\!\left(-\frac{1}{2}\left(\frac{\delta_{b,s}^2}{\sigma_s^2/N_s} - \frac{\delta_{b,i}^2}{\sigma_i^2/N_i}\right)\right)
\approx \frac{c_s\,\delta_{b,s}^2\,\sigma_i^2}{c_i\,\sigma_s^2\,\delta_{b,i}^2}.
\]

Taking the natural log on both sides, we have

\[
\frac{\delta_{b,s}^2}{\sigma_s^2} N_s - \frac{\delta_{b,i}^2}{\sigma_i^2} N_i
\approx 2 \log\!\left(\frac{c_s\,\delta_{b,s}^2\,\sigma_i^2}{c_i\,\sigma_s^2\,\delta_{b,i}^2}\right).
\]

This suggests the following relationship between $N_i$ and $N_s$:

\[
N_i \approx \frac{\sigma_i^2}{\delta_{b,i}^2}\left(\frac{\delta_{b,s}^2}{\sigma_s^2} N_s
- 2 \log\!\left(\frac{c_s\,\delta_{b,s}^2\,\sigma_i^2}{c_i\,\sigma_s^2\,\delta_{b,i}^2}\right)\right)
\quad \text{for } i = 1, 2, \ldots, k,\; i \neq b, s. \tag{7}
\]

We further investigate the relationship between $N_b$ and $N_s$. From Eq. (5) and Eq. (6),

\[
c_b \exp\!\left(-\frac{\delta_{b,s}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_s^2}{N_s}\right)}\right)
\frac{\delta_{b,s}^2\,\sigma_s^2/N_s^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_s^2}{N_s}\right)^2}
= c_s \sum_{i=1, i \neq b}^{k}
\exp\!\left(-\frac{\delta_{b,i}^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)}\right)
\frac{\delta_{b,i}^2\,\sigma_b^2/N_b^2}{2\left(\frac{\sigma_b^2}{N_b} + \frac{\sigma_i^2}{N_i}\right)^2}. \tag{8}
\]

Again, we can simplify this equation using the assumption that $N_b \gg N_i$ for $i = 1, 2, \ldots, k$, $i \neq b$:

\[
c_b \exp\!\left(-\frac{\delta_{b,s}^2}{2\,\sigma_s^2/N_s}\right)
\frac{\delta_{b,s}^2\,\sigma_s^2/N_s^2}{2\left(\sigma_s^2/N_s\right)^2}
\approx c_s \sum_{i=1, i \neq b}^{k}
\left[\exp\!\left(-\frac{\delta_{b,i}^2}{2\,\sigma_i^2/N_i}\right)
\frac{\delta_{b,i}^2\,\sigma_b^2/N_b^2}{2\left(\sigma_i^2/N_i\right)^2}\right]. \tag{9}
\]

The relationship between $N_b$ and $N_s$ can be obtained by plugging Eq. (7) into Eq. (9). To simplify the derivation, we consider the asymptotic behavior of Eq. (7): $N_i$ and $N_s$ increase as the simulation proceeds, so on the right-hand side of Eq. (7) the term involving $N_s$ dominates the log term. Namely,

\[
N_i \approx \frac{\sigma_i^2\,\delta_{b,s}^2}{\delta_{b,i}^2\,\sigma_s^2}\, N_s
\quad \text{as } N_i, N_s \to \infty. \tag{10}
\]

Using the result in Eq. (10), the right-hand side of Eq. (9) becomes

\[
c_s \exp\!\left(-\frac{\delta_{b,s}^2}{2\,\sigma_s^2/N_s}\right)
\frac{\delta_{b,s}^4}{2\left(\sigma_s^2/N_s\right)^2}\,\frac{\sigma_b^2}{N_b^2}
\sum_{i=1, i \neq b}^{k} \frac{1}{\delta_{b,i}^2}.
\]

Furthermore, cancelling $\exp\!\left(-\frac{\delta_{b,s}^2}{2\,\sigma_s^2/N_s}\right)$ and $\frac{\delta_{b,s}^2}{\sigma_s^2/N_s}$ on both sides, Eq. (9) can be simplified as follows:

\[
N_b \approx \frac{c_s\,\sigma_b}{c_b\,\sigma_s}
\left[\sum_{i=1, i \neq b}^{k} \frac{\delta_{b,s}^2}{\delta_{b,i}^2}\right]^{1/2} N_s. \tag{11}
\]

Note that all the $N_i$'s have the same sign from Eq. (7) and Eq. (11). Since $\sum_{i=1}^{k} c_i N_i = T$ and $T > 0$, it follows that $N_i > 0$ for $i = 1, 2, \ldots, k$. In conclusion, if a solution satisfies Eq. (7) and Eq. (11), then the KKT conditions must hold; by the KKT sufficient condition, this solution is a local optimum of problem (III). On the other hand, when T is large, the asymptotic results obtained above imply that the number of simulation runs for each design will be large, so that the objective function of problem (III) approaches the objective function of problem (II) (i.e., EAPCS approaches APCS). This implies the following result.

Theorem 1. Given a total number of simulation runs T to be allocated to k competing designs whose performance is depicted by random variables with means $\mu_1, \mu_2, \ldots, \mu_k$ and finite variances $\sigma_1^2, \sigma_2^2, \ldots, \sigma_k^2$, respectively, the APCS can be asymptotically maximized when

1. $N_b \to \dfrac{c_s\,\sigma_b}{c_b\,\sigma_s}\left[\displaystyle\sum_{i=1, i \neq b}^{k} \frac{\delta_{b,s}^2}{\delta_{b,i}^2}\right]^{1/2} N_s$,


2. $N_i \to \dfrac{\sigma_i^2}{\delta_{b,i}^2}\left(\dfrac{\delta_{b,s}^2}{\sigma_s^2} N_s - 2\log\!\left(\dfrac{c_s\,\delta_{b,s}^2\,\sigma_i^2}{c_i\,\sigma_s^2\,\delta_{b,i}^2}\right)\right)$, for $i = 1, \ldots, k$ and $i \neq s, b$,

where $N_i$ is the number of replications allocated to design i, $\delta_{b,i} = \bar{\mu}_b - \bar{\mu}_i$, and $\bar{\mu}_b \geq \bar{\mu}_s \geq \bar{\mu}_i$ for all $i \neq b, s$.

Remark 1. In the case of k = 2 and b = 1, Theorem 1 yields

\[
\frac{N_1}{N_2} = \frac{c_2\,\sigma_1}{c_1\,\sigma_2}\left[\frac{\delta_{1,2}^2}{\delta_{1,2}^2}\right]^{1/2} = \frac{c_2\,\sigma_1}{c_1\,\sigma_2}.
\]

This result is identical to the well-known optimal allocation solution for k = 2.

Using the results of Theorem 1, we now present a cost-effective sequential approach for selecting the best design from k alternatives with a given computing budget.

A sequential algorithm for optimal computing budget allocation

Step 0. Perform $n_0$ simulation replications for each of the k designs; set $N_1^l = N_2^l = \cdots = N_k^l = n_0$, $B = k n_0$, and $l = 0$.
Step 1. Compute $\bar{\mu}_i$ and $\delta_{b,i}$, and estimate $\sigma_i^2$ using the sample variance

\[
S_i^2 = \frac{1}{N_i^l - 1}\sum_{j=1}^{N_i^l} \left(X_{ij} - \bar{\mu}_i\right)^2,
\]

after conducting $N_i^l$ replications for design i, $i = 1, \ldots, k$.
Step 2. If $B > T$, stop. Otherwise, go to Step 3.
Step 3. Set $B \leftarrow B + \Delta$ and calculate the target allocation $N_i^{l+1}$ for all i using Theorem 1.
Step 4. Compute $(N_i^{l+1} - N_i^l)$ and find the set $S(m) \equiv \{i : (N_i^{l+1} - N_i^l) \text{ is among the } m \text{ highest}\}$.
Step 5. Perform an additional $\Delta/m$ simulation runs for each design $i \in S(m)$; set $N_i^{l+1} = N_i^l + \Delta/m$ for $i \in S(m)$, and $N_i^{l+1} = N_i^l$ otherwise.
Step 6. Set $l \leftarrow l + 1$ and go to Step 1.
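The following Python sketch puts Theorem 1 and the sequential procedure together. It is a simplified illustration under our own assumptions: the log term in condition 2 of Theorem 1 is dropped (i.e., Eq. (10) is used for the non-best designs), ties are not handled specially, and simulate(i, n) is a user-supplied stand-in for running n simulation trials of design i. It is a sketch, not the authors' implementation.

import numpy as np

def theorem1_targets(means, variances, costs, budget):
    # Target allocation N_1..N_k with Sum_i c_i N_i = budget, using the ratios of
    # Theorem 1 (Eq. (10) for i != b, and Eq. (11) for the observed best design b).
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    costs = np.asarray(costs, dtype=float)
    k = len(means)
    b = int(np.argmax(means))
    s = int(np.argsort(means)[-2])                      # second-largest sample mean
    delta = means[b] - means                            # delta_{b,i}; zero for i == b
    ratio = np.ones(k)                                  # N_i relative to N_s
    for i in range(k):
        if i != b:
            ratio[i] = (variances[i] / variances[s]) * (delta[s] / max(delta[i], 1e-12)) ** 2
    ratio[b] = (costs[s] * np.sqrt(variances[b])) / (costs[b] * np.sqrt(variances[s])) \
        * np.sqrt(np.sum(delta[s] ** 2 / np.maximum(np.delete(delta, b), 1e-12) ** 2))
    return budget * ratio / np.dot(costs, ratio)        # scale so Sum_i c_i N_i = budget

def sequential_ocba(simulate, k, costs, T, n0=10, delta_budget=20, m=5):
    # Steps 0-6 of the sequential algorithm above.  `simulate(i, n)` is assumed
    # to return n i.i.d. outputs X_ij for design i.
    samples = [list(simulate(i, n0)) for i in range(k)]           # Step 0
    N = np.full(k, float(n0))
    B = float(np.dot(costs, N))                                   # equals k*n0 when all costs are 1
    while B <= T:                                                 # Step 2
        means = [np.mean(x) for x in samples]                     # Step 1
        variances = [np.var(x, ddof=1) + 1e-12 for x in samples]
        B += delta_budget                                         # Step 3
        targets = theorem1_targets(means, variances, costs, B)
        increase = targets - N                                    # Step 4
        chosen = np.argsort(increase)[-m:]
        extra = max(int(round(delta_budget / m)), 1)
        for i in chosen:                                          # Step 5
            samples[i].extend(simulate(i, extra))
            N[i] += extra
    return int(np.argmax([np.mean(x) for x in samples])), N       # report design b

Note that Step 5 is taken literally from the algorithm statement (Δ/m additional runs per selected design); with unequal per-run costs one could instead spend Δ/m of budget on each selected design.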

The algorithm can be summarized as follows. In the first stage, we run $n_0$ simulation trials for every design to obtain some initial information about each design's performance. At this and each subsequent stage, we compute each design's sample mean and variance from the data collected up to that point. Based on this simulation output, an incremental computing budget $\Delta$ is allocated using Theorem 1; this allocation dictates the number of trials assigned to each design in the next stage. Ideally, each stage brings us closer to the optimal solution. The procedure continues until the total budget T is exhausted.

4. Applications and numerical testing

To illustrate how our algorithm can be used to solve a range of optimization problems, we offer two diverse circuit design examples. The first example identifies


the optimal design of a double-T filter circuit, where the design's performance is defined in terms of its deviation from a fixed performance target. The second example identifies the optimal design of a 2-to-1 impedance matching transformer with a passband of one octave; here performance is defined in terms of the design's expected yield. In both examples, our algorithm significantly reduces the amount of computing time necessary to select the best design, compared with ordinary ordinal optimization.

4.1. A double-T filter design problem

A double-T filter is widely used as a basic bandstop-filtering unit in circuit design [9]. The double-T filter has three capacitors, $C_{11}$, $C_{12}$, and $C_2$, and three resistors, $R_{11}$, $R_{12}$, and $R_2$, as shown in Fig. 1. Double-T filters are designed to meet a specified stop-frequency, say $f_g$. A filter's stop-frequency is a simple function of the resistor and capacitor values:

\[
f_s = \left(x_{R_{11}}\, x_{R_{12}}\, x_{R_2}^2\, x_{C_{11}}\, x_{C_{12}}\, x_{C_2}^2\right)^{-1/4}, \tag{12}
\]

where $x_i$ denotes the value of item i (i.e., its resistance or capacitance). When the values of $x_i$ (for $i = R_{11}, R_{12}, R_2, C_{11}, C_{12}, C_2$) are constant, designing a double-T filter is straightforward: one simply selects resistors and capacitors whose values satisfy $f_s = f_g$. Unfortunately, slight variations in the manufacturing process and in the handling of components cause these values to vary across components for a stated component design. In this environment, $x_i$ is a random variable and the filter's stop-frequency is a function of random variables. This implies that the stop-frequency may deviate from its target; the extent of this deviation depends on the value distributions of the resistors and capacitors. Our objective in this environment is to select a design which minimizes this expected deviation. We use the mean square error statistic, $E[(f_s - f_g)^2]$, to measure expected deviation, although other metrics are possible.

Now consider the following numerical example. Tables 1 and 2 list the characteristics of ten resistors and ten capacitors that can be used in the design of the double-T filter, with $f_g = 50$. We limit our analysis to symmetric designs (i.e., $C_{11} = C_{12}$ and $R_{11} = R_{12}$) since they are the most common in industry. There are $10^4$ possible symmetric design combinations.

Fig. 1. The double-T filter considered in example 1.
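The sketch below evaluates Eq. (12) with the component values drawn as independent normal random variables and estimates $E[(f_s - f_g)^2]$ for one design by straightforward Monte Carlo. The unit convention (resistances in ohms, capacitances in farads, i.e. the table entries scaled from kΩ and µF) is our assumption; it makes the centered designs of Table 3 hit $f_s = 50$ exactly at the component means. The example design and trial count are illustrative.

import numpy as np

def stop_frequency(r11, r12, r2, c11, c12, c2):
    # Eq. (12): f_s = (x_R11 * x_R12 * x_R2^2 * x_C11 * x_C12 * x_C2^2)^(-1/4)
    return (r11 * r12 * r2 ** 2 * c11 * c12 * c2 ** 2) ** (-0.25)

def mse_vs_target(means, stds, f_g=50.0, trials=100_000, seed=0):
    # Monte Carlo estimate of E[(f_s - f_g)^2] for one design; `means` and `stds`
    # list (R11, R12, R2, C11, C12, C2) in ohms and farads.
    rng = np.random.default_rng(seed)
    draws = rng.normal(means, stds, size=(trials, 6))
    f_s = stop_frequency(*draws.T)
    return float(np.mean((f_s - f_g) ** 2))

# Design 3 of Table 3: R11 = R12 = R2 = R1 (2 kOhm, sd 0.15), C11 = C12 = C2 = C4 (10 uF, sd 1.0).
means = [2e3, 2e3, 2e3, 10e-6, 10e-6, 10e-6]
stds = [150.0, 150.0, 150.0, 1e-6, 1e-6, 1e-6]
print(mse_vs_target(means, stds))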


Table 1
Ten choices of resistors

Choices              R1    R2    R3    R4    R5    R6    R7    R8    R9    R10
Resistance (kΩ)      2     4     6     9     12    15    18    21    24    30
Standard deviation   0.15  0.25  0.4   0.4   1.0   1.0   1.0   2.0   2.0   2.0

Table 2
Ten choices of capacitors

Choices              C1    C2    C3    C4    C5    C6    C7    C8    C9    C10
Capacitance (µF)     1     2     5     10    20    30    40    50    75    125
Standard deviation   0.1   0.15  0.4   1.0   1.0   2.0   2.0   2.0   2.0   2.0

Among these 10,000 combinations, 14 designs are centered on the target stop-frequency; in other words, 14 designs satisfy

\[
f_s = \left(E[x_{R_{11}}]\, E[x_{R_{12}}]\, E[x_{R_2}]^2\, E[x_{C_{11}}]\, E[x_{C_{12}}]\, E[x_{C_2}]^2\right)^{-1/4} = 50.
\]
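Under the same unit convention as the previous sketch (table values scaled to ohms and farads, an assumption on our part), the centering condition can be checked by brute force over all $10^4$ symmetric combinations of the Table 1 and Table 2 components; the enumeration below recovers the 14 centered designs.

from itertools import product

resistances = [2, 4, 6, 9, 12, 15, 18, 21, 24, 30]       # kOhm, Table 1 means
capacitances = [1, 2, 5, 10, 20, 30, 40, 50, 75, 125]    # uF, Table 2 means

centered = []
for r1, r2, c1, c2 in product(resistances, resistances, capacitances, capacitances):
    # Symmetric design: R11 = R12 = r1, R2 = r2, C11 = C12 = c1, C2 = c2.
    fs = ((r1 * 1e3) ** 2 * (r2 * 1e3) ** 2 * (c1 * 1e-6) ** 2 * (c2 * 1e-6) ** 2) ** (-0.25)
    if abs(fs - 50.0) < 1e-6:
        centered.append((r1, r2, c1, c2))

print(len(centered))   # 14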

These designs are listed in Table 3, and we focus on them in our first experiment. To test the ability of our algorithm to choose the best design, we first simulated all designs using traditional Monte Carlo simulation to identify which design is truly the best (i.e., which offers the lowest mean square error). Design 7 turned out to be the best, with a mean square error of 4.875. Next we ran our algorithm under various computing budget constraints. In each case, we performed 10,000 independent experiments to estimate P{CS}, where P{CS} is computed as the ratio of the number of times our algorithm successfully finds the true best design to the total number of experiments. The results are listed in Tables 4 and 5.

Table 4 highlights the accuracy gained by adding the OCBA technique to an ordinal optimization (OO) routine. The second column lists the probability of correct selection when the computing budget is distributed equally among all designs. The third column lists this same probability when the computing budget is dynamically allocated using the OCBA technique. With a computing budget of only 420 trials, the combined OCBA + OO routine provides a probability of correct selection of nearly 91%.

Table 3
Fourteen "centered" filter designs

Components       Designs
                 1    2    3    4    5    6    7    8    9    10   11   12   13   14
R11 and R12      R1   R1   R1   R1   R1   R1   R1   R1   R1   R2   R2   R2   R2   R2
R2               R1   R1   R1   R1   R1   R2   R2   R2   R2   R1   R1   R1   R1   R2
C11 and C12      C2   C3   C4   C5   C8   C1   C8   C4   C3   C1   C3   C4   C8   C3
C2               C8   C5   C4   C3   C2   C8   C1   C3   C4   C8   C4   C3   C1   C3


Table 4
The impact of using OCBA on P{CS} for 14 filter designs

T       P{CS} OO alone (%)    P{CS} OCBA + OO (%)
420     79.2                  90.9
560     84.9                  95.1
770     89.7                  98.0
784     90.1                  98.2
1176    95.0                  99.5
1806    98.0                  100
2870    99.5                  100

Table 5
Time savings factors using OCBA for 14 filter designs

P{CS} (%)    Trials (T) needed, OO alone    Trials (T) needed, OCBA + OO    Time savings using OCBA (%)
90           784                            420                             46.4
95           1176                           560                             52.3
98           1806                           770                             57.3
99.5         2870                           1176                            59.0

This is an improvement of 14.7% over OO alone. Table 5 illustrates the time savings achieved by adding OCBA: the number of required simulation trials in this example is reduced by at least 46% for probability levels of 90% or higher, and the savings factor is as high as 59% when P{CS} = 99.5%. Note that the savings factor increases with P{CS}, since the larger number of simulation trials gives the algorithm more flexibility in allocating the budget. By the same reasoning, one might expect the time savings to also increase with the problem size (i.e., the number of designs considered). We test this conjecture in our next experiment.

Now suppose that, rather than limiting our design choices to "centered" designs, we also consider designs slightly off-center. In particular, we consider designs centered within ±2.5% of $f_g$. This criterion yields 51 designs, using the components available in Tables 1 and 2. We perform the same series of simulations described in our previous experiment to determine the optimal design.

Expanding our design space to include non-centered designs turned out to be a good idea. An off-centered design, consisting of three R4-type resistors, two C3-type capacitors, and one C1-type capacitor, dominates the performance of all centered designs. The mean square error of this design is 3.962. If we had relied simply on deterministic analysis, we would never have uncovered this solution.

We performed the same series of simulations described in our previous experiment to determine the relative performance of ordinal optimization with and without


Table 6
The impact of using OCBA on P{CS} for 51 filter designs

T       P{CS} OO alone (%)    P{CS} OCBA + OO (%)
1071    64.7                  90.3
1326    71.8                  95.4
1632    77.5                  98.0
2091    83.6                  99.5
2856    90.1                  99.9
4131    95.1                  100
6018    98.0                  100
9588    99.5                  100

Table 7
Time savings factors using OCBA for 51 filter designs

P{CS} (%)    Trials (T) needed, OO alone    Trials (T) needed, OCBA + OO    Time savings using OCBA (%)
90           2856                           1071                            61.6
95           4131                           1326                            66.9
98           6018                           1632                            72.3
99.5         9588                           2091                            77.9

OCBA. Tables 6 and 7 highlight the results. Both the improvement in P{CS} and the reduction in computing time are greater than in our previous experiment. For example, OCBA + OO now takes 1071 trials to provide a P{CS} of roughly 90% (compared to 420 trials needed for the smaller problem), but the improvement relative to OO alone is now 39.5% (compared to 14.7%). The time savings now run as high as 77.9%, compared with a high of 59% for the smaller problem. These results are encouraging, since the number of designs considered in some industrial applications can run into the hundreds or thousands; they suggest that the benefits of OCBA over ordinary ordinal optimization will be significant in such settings.

4.2. Impedance matching transformer design

The second circuit design problem involves a 2-to-1 impedance matching transformer. The purpose of the transformer is to provide a balance between an end-user resistor and an input resistor by supporting the necessary input/output current ratio. The transformer is designed to maintain this balance when subjected to a specified range of frequencies, known as the transformer's passband. A 2-to-1 impedance matching transformer is designed to match resistors where the end-user to input resistance ratio is 2-to-1.

In designing such a circuit, the designer wishes to choose components $C_1$, $C_2$, $L_1$, and $L_2$ (see Fig. 2) that maximize the likelihood that the resistance is matched within the stated passband. The circuit's ability to match a specified resistance ratio is measured by the circuit input reflection coefficient, Ref, where a smaller Ref indicates better performance.


This coefficient is a complicated function of $C_1$, $C_2$, $L_1$, $L_2$, and $R_1$, which is beyond our discussion here (see [17] for more information). The coefficient is often expressed in dB, defined as $\mathrm{dB} \equiv 20 \log \mathrm{Ref}$.

With this background, let us examine the following numerical example. Consider a 2-to-1 impedance matching transformer with a passband range of 200–400 MHz, an end-user resistor of 100 Ω, and an input resistor of 50 Ω. We wish to choose components $L_1$, $L_2$, $C_1$, and $C_2$ that maximize the design's expected yield relative to a Ref threshold of −10.5 dB.

Like our first example, this problem is easy to solve if we ignore component variability. For example, suppose we focus on mean component values (i.e., average inductances and capacitances) and assume no performance variation. Fig. 3 illustrates the performance of one design where $\mu_{L_1} = \mu_{L_2} = 20$, $\mu_{C_1} = \mu_{C_2} = 5$, and $\sigma_{L_1} = \sigma_{L_2} = \sigma_{C_1} = \sigma_{C_2} = 0$. Here the circuit was subjected to frequencies ranging from 100 to 500 MHz so as to view the out-of-band response as well as the passband response. The design passes the Ref threshold throughout its passband, so its expected yield is 100%.

Now consider the actual performance of this same design when we factor in component variability. When the component values are random variables, a design's average yield must be evaluated through a series of simulation trials, where the output of each trial is a graph similar to that of Fig. 3. The average yield is estimated as the number of trials passing the Ref threshold divided by the total number of trials. In this design example, the actual variation of each component is normally distributed with standard deviations of $\sigma_{L_1} = \sigma_{L_2} = 2.0$, $\sigma_{C_1} = 1.0$, and $\sigma_{C_2} = 0.5$.

Fig. 2. The impedance matching transformer considered in example 2.

Fig. 3. The frequency response of a transformer with no component variation.


Simulation reveals that the design's average yield is actually 88.6%, an unsatisfactory performance level.

When the performance of a candidate design is unsatisfactory, designers often identify new design candidates by simply adjusting the mean values of the current design's components by some small fraction. For example, by adjusting the mean value of each of our design components by three factors, −20%, 0%, and +20%, we create a total of $3^4 = 81$ candidate designs, each with the same variance characteristics (i.e., $\sigma_{L_1} = \sigma_{L_2} = 2.0$, $\sigma_{C_1} = 1.0$, and $\sigma_{C_2} = 0.5$). Our combined OCBA + OO algorithm can then be used to select the best design from this set of candidates.

To test the benefit of the OCBA technique for this problem, we perform the same set of simulation experiments used in the double-T filter examples. Tables 8 and 9 illustrate the performance of the combined OCBA + OO algorithm relative to ordinary ordinal optimization. The results are once again quite impressive. With the total number of simulation trials as low as 972, P{CS} is already 90.3%, an accuracy increase of 58.4% over ordinal optimization alone. The OCBA technique also increases the time savings factor to as much as 92%, and the time savings continue to increase with the desired P{CS}, as in the previous examples. It is interesting to note that the optimal solution in this case gives an average yield of 99.5%, a vast improvement over our initial candidate design. This design is characterized by $\mu_{L_1} = 16$, $\mu_{L_2} = 24$, $\mu_{C_1} = 5$, and $\mu_{C_2} = 4$.
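The yield evaluation and the 81-design candidate set can be organized as in the sketch below. The reflection coefficient itself is, as noted above, a complicated function of the components and the excitation frequency, so reflection_db is only a placeholder to be replaced by a circuit model or simulator; the trial count and frequency grid are our own choices.

import numpy as np
from itertools import product

def reflection_db(L1, L2, C1, C2, freq_mhz):
    # Placeholder: return the input reflection coefficient in dB for the given
    # component values at one frequency (e.g., computed by a circuit simulator).
    raise NotImplementedError("supply a circuit model here")

def estimate_yield(mu, sigma, trials=1_000, threshold_db=-10.5, seed=0):
    # Fraction of Monte Carlo trials whose worst in-band reflection stays at or
    # below the -10.5 dB threshold over the 200-400 MHz passband.
    rng = np.random.default_rng(seed)
    freqs = np.linspace(200.0, 400.0, 41)
    passes = 0
    for _ in range(trials):
        L1, L2, C1, C2 = rng.normal(mu, sigma)
        worst = max(reflection_db(L1, L2, C1, C2, f) for f in freqs)
        passes += worst <= threshold_db
    return passes / trials

# The 81 candidate designs: each nominal value shifted by -20%, 0%, or +20%.
nominal = np.array([20.0, 20.0, 5.0, 5.0])      # (L1, L2, C1, C2) means of the initial design
sigma = np.array([2.0, 2.0, 1.0, 0.5])
candidates = [nominal * np.array(shift) for shift in product((0.8, 1.0, 1.2), repeat=4)]
print(len(candidates))                           # 81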

Table 8
The impact of using OCBA on P{CS} for the transformer design problem

T        P{CS} OO alone (%)    P{CS} OCBA + OO (%)
972      57.0                  90.3
1134     58.9                  95.4
1458     62.0                  98.2
2025     67.6                  99.5
8100     90.1                  100
11907    95.0                  100
17415    98.0                  100
26163    99.5                  100

Table 9
Time savings factors using OCBA for the transformer design problem

P{CS} (%)    Trials (T) needed, OO alone    Trials (T) needed, OCBA + OO    Time savings using OCBA (%)
90           8100                           972                             87.5
95           11907                          1134                            89.8
98           17415                          1458                            91.2
99.5         26163                          2025                            92.0


5. Conclusions

In this paper, we introduced an OCBA technique for efficiently solving a general design selection problem. The technique is based on an asymptotic allocation rule developed in the paper. We applied this technique to a series of circuit design problems to show how it can be effectively incorporated into a design optimization study. Numerical results suggest that the technique further enhances the simulation efficiency of ordinal optimization. When the technique is combined with ordinal optimization, it offers significant computational savings over the traditional value optimization methods used in Statistical Design. The numerical examples indicate that the time savings can be as much as 92% when a designer desires a high P{CS}, and the increase in accuracy can be as much as 58.4% when the available computing budget is small. We anticipate that both of these factors will become even larger as the number of design alternatives grows, since more designs provide more flexibility in allocating the computing budget.

References

[1] C.G. Cassandras, G. Bao, A stochastic comparison algorithm for continuous optimization with estimations, Proceedings of the 33rd IEEE Conference on Decision and Control, December 1994, pp. 676–677.
[2] C.G. Cassandras, V. Julka, Descent algorithms for discrete resource allocation problems, Proceedings of the 33rd IEEE Conference on Decision and Control, December 1994, pp. 676–677.
[3] C.H. Chen, An effective approach to smartly allocate computing budget for discrete event simulation, Proceedings of the 34th IEEE Conference on Decision and Control, 1995, pp. 2598–2605.
[4] C.H. Chen, A lower bound for the correct subset-selection probability and its application to discrete event system simulations, IEEE Transactions on Automatic Control 41 (8) (1996) 1227–1231.
[5] H.C. Chen, C.H. Chen, L. Dai, E. Yücesan, New development of optimal computing budget allocation for discrete event simulation, Proceedings of the 1997 Winter Simulation Conference, 1997, pp. 334–341.
[6] C.H. Chen, V. Kumar, Y.C. Luo, Motion planning of walking robots using ordinal optimization, IEEE Robotics and Automation Magazine 80 (June) (1998) 22–32.
[7] C.H. Chen, S.D. Wu, L. Dai, Ordinal comparison of heuristic algorithms using stochastic optimization, IEEE Transactions on Robotics and Automation 15 (1) (1999) 44–56.
[8] C.H. Chen, J. Lin, E. Yücesan, S.E. Chick, Simulation budget allocation for further enhancing the efficiency of ordinal optimization, Journal of Discrete Event Dynamic Systems: Theory and Applications 10 (July) (2000) 251–270.
[9] E. Christian, Filter Design Tables and Graphs, Wiley, New York, 1966.
[10] L. Dai, Convergence properties of ordinal comparison in the simulation of discrete event dynamic systems, Journal of Optimization Theory and Applications 91 (2) (1996) 363–388.
[11] M.H. DeGroot, Optimal Statistical Decisions, McGraw-Hill, New York, 1970.
[12] V. Fabian, Stochastic approximation, in: J.S. Rustagi (Ed.), Optimization Methods in Statistics, Academic Press, New York, 1971.
[13] G. Goldsman, B.L. Nelson, Ranking, selection, and multiple comparison in computer simulation, Proceedings of the 1994 Winter Simulation Conference, 1994, pp. 192–199.
[14] W.B. Gong, Y.C. Ho, W. Zhai, Stochastic comparison algorithm for discrete optimization with estimations, Discrete Event Dynamic Systems: Theory and Applications (1995) 231–237.
[15] Y.C. Ho, R.S. Sreenivas, P. Vakili, Ordinal optimization of DEDS, Journal of Discrete Event Dynamic Systems 2 (2) (1992) 61–88.


[16] H.J. Kushner, D.S. Clark, Stochastic Approximation for Constrained and Unconstrained Systems, Springer-Verlag, New York, 1978.
[17] J. Ogrodzki, Circuit Simulation Methods and Algorithms, CRC Press, Boca Raton, FL, 1994.
[18] N.T. Patsis, C.H. Chen, M.E. Larson, SIMD parallel discrete event dynamic system simulation, IEEE Transactions on Control Systems Technology 5 (3) (1997) 30–41.
[19] S. Ross, A First Course in Probability, Prentice Hall, New York, 1994.
[20] D. Yan, H. Mukai, Optimization algorithm with probabilistic estimation, Journal of Optimization Theory and Applications 79 (1993) 345–371.