
Microelectron. Reliab., Vol. 37, No. 3, pp. 391-395, 1997. Copyright © 1996 Elsevier Science Ltd. Printed in Great Britain. All rights reserved. 0026-2714/97 $17.00 + .00

Pergamon PII: S0026-2714(96)00042-X

GOAL-SEEKING PROBLEM IN DISCRETE EVENT SYSTEMS SIMULATION

H. ARSHAM
Information and Quantitative Sciences, Robert G. Merrick School of Business, University of Baltimore, 1420 North Charles Street, Baltimore, MD 21201, U.S.A.

(Received for publication 30 January 1996)

Abstract--For most complex stochastic systems, such as microelectronic systems, the Mean Time To Failure (MTTF) is not available in analytical form. We resort to Monte-Carlo Simulation (MCS) to estimate the MTTF function for specific values of the underlying density function parameter(s). MCS models, although simpler than the real-world system, are still a very complex way of relating the input parameter(s) to the MTTF. This study develops a polynomial model to be used as an auxiliary to a MCS model. The estimated metamodel is a Taylor expansion of the MTTF function in the neighborhood of the nominal value of the parameter(s). The Score Function method estimates the needed sensitivities (i.e. gradient, Hessian, etc.) of the MTTF function with respect to the input parameter in a single simulation run. The explicit use of this metamodel is the target-setting problem in Taguchi's product design concept: given a desired target MTTF value, find the input parameter(s). A stochastic approximation algorithm of the Robbins-Monro type uses a linear metamodel to estimate the necessary controllable input parameter within a desired accuracy. The potential effectiveness is demonstrated by simulating a reliability system with a known analytical solution. Copyright © 1996 Elsevier Science Ltd.

1. INTRODUCTION

Let

J(v) = E{Z[Y(v)]}                (1)

be the Mean Time To Failure (MTTF) of a stochastic system, e.g. a microelectronic system, where Z[Y(v)] is the random Time To Failure (TTF), f(y; v) is the underlying density function (d.f.) of the TTF and v is the shape or scale parameter of the d.f. Assume that the explicit form of f(y; v) is known but that of Z[Y(v)] is either unknown, or too complicated to calculate analytically. It can only be evaluated through simulation. In reliability systems Monte-Carlo simulation is usually needed to estimate J(v) for a given value v = v_0. By the law of large numbers

Ĵ(v_0) = (1/n) Σ_{i=1}^{n} Z_i(y_i)                (2)

converges to the true value, where y_i, i = 1, 2, ..., n, are independent identically distributed random vector realizations and n is the number of independent replications. The numerical result based on equation (2) is only a point estimate for J(v) at v = v_0. Simulation models such as equation (2), although simpler than the real-world system, are still a very complex way of relating the input (v) to the output J(v). Sometimes a simpler (analytic) model may be used as an auxiliary to the simulation model. This auxiliary model is often referred to as a metamodel (see e.g. Friedman [1], and references therein). The explicit use of a metamodel in post-simulation analysis has many benefits, some of which are model simplification, enhanced exploration of system behavior, optimization, model interpretation, and generalization to models of the same type. In addition, we can solve "what-if" problems (see e.g. [2-4]) without the need for additional simulation runs. The explicit use of a linear metamodel is the parameter selection in Taguchi's product design: given a desired output value, find the prerequisite input parameter (v), see e.g. [5]. An indirect approach to solving this goal-seeking problem is inverse interpolation based on regression methods. In this treatment we estimate the MTTF by simulating the system for many different values of v = v_0 and then approximate the MTTF response surface function J(v) by using a "goodness-of-fit" criterion. Finally the fitted function, also referred to as a "metamodel", is used to interpolate for the unknown parameter. This approach is tedious, time consuming and costly; moreover, in the presence of a noisy (random) environment the fitted model would be quite "shaky", usually having unstable coefficients. The technique proposed in this paper is based on the use of Taylor's expansion of J(v) in the neighborhood of the nominal input value v = v_0. Without loss of generality we consider the case where v is a scalar. The following polynomial model can be used as an auxiliary model:

J(v) = J(v_0) + δv·J'(v_0) + (δv)²·J''(v_0)/2 + ...                (3)

where δv = v − v_0 and the primes denote derivatives. This metamodel approximates J(v) for small δv. To estimate J(v) in the neighborhood of v_0 by a linear function, we need to estimate the nominal MTTF based on equation (2) and its first derivative. Traditionally this derivative is estimated by crude Monte Carlo, i.e. finite differencing, which requires rerunning the simulation model. Methods which yield enhanced efficiency and accuracy in estimating the derivative at little additional cost are of great value. Perturbation Analysis (PA) and the Score Function (SF) approach are the two major methods for estimating the performance measure and its derivative while observing only a single sample path of the underlying system, see e.g. Refs [6, 7]. Our focus is the SF, also known as the Likelihood Ratio (LR), method. The basic idea of SF (LR) is that the derivative of the performance function, J'(v), is expressed as an expectation with respect to the same distribution as the performance measure itself. The Taguchi method is essentially a framework for arriving at a target value for product, process and service attributes through a set of experiments which include Monte-Carlo experiments. To solve the product design problem, we restrict our model to the first-order expansion. From equation (3) we have

δv = [J(v) − J(v_0)]/J'(v_0)                (4)

provided J'(v_0) does not vanish in the set V. For a given J(v) the estimated δv is

δv̂ = [Ĵ(v) − Ĵ(v_0)]/Ĵ'(v_0).                (5)
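As a small illustration of equation (5), the following Python sketch (added here for illustration only; the function name is ours) computes the estimated parameter change from a nominal estimate, its estimated derivative, and a desired target value. The numbers in the usage comment are the nominal estimates reported later in Section 6.

```python
def estimated_parameter_change(j_hat_v0, j_prime_hat_v0, j_target):
    """First-order goal-seeking step of equation (5):
    delta_v_hat = [J(target) - J_hat(v0)] / J'_hat(v0)."""
    if j_prime_hat_v0 == 0.0:
        raise ValueError("J'(v0) must not vanish (see equation (4)).")
    return (j_target - j_hat_v0) / j_prime_hat_v0

# Using the nominal estimates of Section 6: J_hat(0.5) = 1.5024,
# J'_hat(0.5) = -3.0933, and a target MTTF of 2.
delta_v_hat = estimated_parameter_change(1.5024, -3.0933, 2.0)
print(delta_v_hat)   # about -0.16, i.e. v should be decreased from 0.5
```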

The next section contains the sensitivity (i.e. gradient, Hessian, etc.) estimates to be used in a polynomial metamodel of type (3) for the MTTF function. Section 3 deals with Taguchi's target setting. This is followed by the construction of an accuracy measure for these estimates in Section 4. Section 5 develops an iterative solution algorithm for the parameter selection problem. An illustrative numerical example is presented in Section 6. Finally, Section 7 provides some concluding remarks and ideas for further research and extensions.

2. A METAMODEL FOR MTTF BASED ON A SINGLE RUN

Let

J(v) = E{Z[Y(v)]} = ∫ Z[Y(v)]·f(y; v) dy                (6)

be the MTTF performance measure, where f is the p.d.f., Y is a random vector distributed as f(y; v) and v is an adjustable input parameter. Then

J'(v) = ∫ [Z·f]' dy = ∫ Z(y(v))·f'(y; v) dy                (7)

where the prime (') denotes the derivative with respect to v. Note that, despite the fact that y depends on v, only the function Z·f is subject to differentiation with respect to v (see e.g. Rief [9]). From equation (7) it follows that

J'(v) = ∫ Z(y)·S·f(y; v) dy = E[Z(y)·S]                (8)

where S = f'(y; v)/f(y; v) is the Score Function; differentiation is with respect to v. This is subject to the assumptions that the differentiation and integration operators are interchangeable, that f'(y; v) exists, and that f(y; v) is positive for all v ∈ V, where V is an open interval. A necessary and sufficient condition for the interchangeability used above is that there must be no discontinuity in the distribution whose position depends on the parameter v [6]. Similarly, the second derivative is

J''(v) = ∫ [Z(y)·S'·f(y; v) + Z(y)·S·f'(y; v)] dy = E[Z(y)·H]

where H = S' + S². In the multidimensional case the gradient of J(v) could be obtained in a straightforward manner by generalizing these results (Arsham et al. [6]). Estimates obtained by using these results are unbiased and consistent. An application of these results is given by

Ĵ'(v_0) = Σ_{i=1}^{n} Z_i[y_i(v_0)]·S[y_i(v_0)]/n                (9)

Ĵ''(v_0) = Σ_{i=1}^{n} Z_i[y_i(v_0)]·H[y_i(v_0)]/n                (10)

where

S[y_i(v_0)] = f'(y_i; v)/f(y_i; v)                (11)

and

H[y_i(v_0)] = f''(y_i; v)/f(y_i; v)                (12)

evaluated at v = v_0, and the y_i are the same n independent replications used in equation (2) for estimating the nominal performance J(v_0). The estimated derivative based on eqns (9) and (10) converges to the true derivative in the sense of the mean squared error [6].
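The single-run estimators (2) and (9)-(12) are straightforward to implement once Z_i, S and H have been recorded for each replication. A minimal Python sketch follows (an illustrative reconstruction, not the paper's code; the one-component exponential model in the usage comment is an assumption made only to check the estimates against known values).

```python
import numpy as np

def single_run_estimates(z, s, h):
    """Single-run Score Function estimators of eqns (2), (9) and (10):
    z[i] = Z_i[y_i(v0)]  (simulated performance, e.g. time to failure),
    s[i] = S[y_i(v0)]   = f'(y_i; v0)/f(y_i; v0),
    h[i] = H[y_i(v0)]   = f''(y_i; v0)/f(y_i; v0) = S' + S**2.
    All three are computed from the SAME n replications of the nominal system."""
    z, s, h = map(np.asarray, (z, s, h))
    j_hat = z.mean()               # eqn (2):  nominal MTTF estimate
    j_prime_hat = (z * s).mean()   # eqn (9):  first-derivative estimate
    j_double_hat = (z * h).mean()  # eqn (10): second-derivative estimate
    return j_hat, j_prime_hat, j_double_hat

# Toy check: one exponential component with rate v0, so Z = Y,
# S = 1/v0 - y and H = y**2 - 2*y/v0 (assumed toy model, not from the paper).
rng = np.random.default_rng(1)
v0, n = 0.5, 100_000
y = rng.exponential(1.0 / v0, n)
j, jp, jpp = single_run_estimates(y, 1.0 / v0 - y, y**2 - 2 * y / v0)
# Expected values: j ~ 1/v0 = 2, jp ~ -1/v0**2 = -4, jpp ~ 2/v0**3 = 16
```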

3. TAGUCHI'S TARGET SETTING PROBLEM

The Taguchi method [8] has revolutionized product, process and service design. Essentially this method is a framework for arriving at a target value for product, process and service attributes through a set of experiments which include Monte-Carlo experiments. This section focuses on parameter selection. A random quality loss function L(Z_i) for a given system can be expanded in the neighborhood of the target value z as follows:

L(Z_i) = L(z) + (Z_i − z)·L'(z) + (Z_i − z)²·L''(z)/2 + ...                (13)

It can be shown that L(Z_i) converges in mean squared error if |Z_i − z| < 1 and the derivatives are finite. Since the optimal loss is zero at z, this reduces to the following quadratic approximation:

L(Z_i) = K(Z_i − z)²                (14)

where K is some constant which can be determined in terms of the customers' tolerance limit z − δ, such that the product performs unsatisfactorily when Z_i slips below this limit. Given that the cost to the customer is A dollars, K = A/δ². Without loss of generality, for simplicity let K = 1. The goal of parameter design in Taguchi's quality philosophy [8] is to choose the setting of the design parameter v that minimizes the average loss, i.e. the risk function. The risk function R(τ) is the expected value of the loss function, which can be shown to be

R(τ) = E{L(Z_i)} = (J − τ)² + Var(Z_i).                (15)

As in Taguchi's quality philosophy, this risk function measures the average loss due to product performance, which is proportional to the square of the deviation of the MTTF from its target τ. The non-adjustable variational noise, i.e. Var(Z_i | v) = Var(Z_i), is a measure of variation among products. However, the role of product design is to reduce the (J − τ)² part of the risk, which is our interest in this paper. Note that all estimates involved in computing δv based on (5), i.e. in

δv̂ = [Ĵ(v) − Ĵ(v_0)]/Ĵ'(v_0)                (16)

are computed simultaneously from a single simulation run of the nominal system (v = v_0). This was achieved by transforming all of the probability space to the nominal one using the Radon-Nikodym theorem [10]. Note that to estimate the derivative we do not need to rerun the simulation; it only adds a moderate computational cost to the base simulation.

4. ACCURACY OF THE ESTIMATE

Clearly, in Taguchi's product design problem the controllable input parameter is random, while the output is fixed and given as a target value. Upon estimating the input parameter, we must provide a measure, such as a confidence interval, to reflect the precision of the estimate. To construct a confidence interval for δv using the estimator in eqn (16), let

A_i = J(v) − Z_i[y_i(v_0)]                (17)

B_i = Z_i[y_i(v_0)]·S[y_i(v_0)]                (18)

and denote Ā = Σ A_i/n, B̄ = Σ B_i/n,

S11 = Σ (A_i − Ā)²/(n − 1),
S22 = Σ (B_i − B̄)²/(n − 1),
S12 = Σ (A_i − Ā)(B_i − B̄)/(n − 1)

and

S² = S11 − 2δv̂·S12 + (δv̂)²·S22.                (19)

An exact 100(1 − α)% confidence interval for δv is given by

P[ √n·|δv̂ − δv| ≤ t_{n−1, 1−α/2}·S/|B̄| ] ≥ 1 − α                (20)

where t_{n−1, 1−α/2} is the 100(1 − α/2) percentile of Student's t-distribution with (n − 1) degrees of freedom (d.f.).
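A short Python sketch of this ratio-estimator confidence interval is given below. It follows eqns (16)-(20) as reconstructed above, so it should be read as an illustration under that reading rather than as the paper's own code; the function and variable names are ours.

```python
import numpy as np
from scipy.stats import t

def delta_v_confidence_interval(z, s, j_target, alpha=0.10):
    """Confidence interval for delta-v following eqns (16)-(20):
    A_i = J(v) - Z_i, B_i = Z_i*S_i, delta_v_hat = A_bar/B_bar,
    S^2 = S11 - 2*dv*S12 + dv^2*S22, half-width = t * S / (|B_bar|*sqrt(n))."""
    z, s = np.asarray(z), np.asarray(s)
    n = len(z)
    a = j_target - z                 # eqn (17)
    b = z * s                        # eqn (18)
    a_bar, b_bar = a.mean(), b.mean()
    dv_hat = a_bar / b_bar           # eqn (16)
    s11 = a.var(ddof=1)
    s22 = b.var(ddof=1)
    s12 = np.cov(a, b, ddof=1)[0, 1]
    s2 = s11 - 2.0 * dv_hat * s12 + dv_hat**2 * s22            # eqn (19)
    half = t.ppf(1 - alpha / 2, n - 1) * np.sqrt(s2) / (abs(b_bar) * np.sqrt(n))  # eqn (20)
    return dv_hat, dv_hat - half, dv_hat + half
```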

5. A RECURSIVE SOLUTION ALGORITHM

The solution to the inverse problem is a solution of the stochastic equation J(v) = τ, which we assume lies in some bounded open interval V. The problem is to solve this stochastic equation by a suitable experimental design which ensures convergence as δv approaches zero. The following is a Robbins-Monro (R-M) type of algorithm for δv. This algorithm involves placing experiment j + 1 according to the outcome of experiment j immediately preceding it. That is,

v_{j+1} = v_j + d_j·[τ − Ĵ(v_j)]/Ĵ'(v_j)

where d_j is any sequence of positive numbers satisfying the conditions

Σ_{j=1}^{∞} d_j = ∞  and  Σ_{j=1}^{∞} d_j² < ∞.

The first condition is necessary for δv to converge to zero, while the second condition asymptotically damps the effect of the (simulation) random errors. These conditions are satisfied, for example, by the harmonic sequence d_j = 1/j. With this choice the rate of reduction of d_j is very high initially, but the steps may become very small as we approach the root. We will use the sequence d_j = 9/(9 + j), see e.g. [11]. Since the adjustments are made in proportion to the most recent value, we must be sure that the results remain finite. This requires that J'(v) does not vanish for v ∈ V, V an open interval. To prevent excessive overcorrection we assume further that the solution lies in some finite interval V. Under these not unreasonable conditions this algorithm converges in mean square; moreover, the convergence is almost sure. For some generalizations, studies concerning the speed of convergence, and acceleration techniques, see Ref. [11] and the references therein. Finally, as in Newton's root-finding method, it is impossible to assert that the method converges for just any initial value v = v_0, even though J'(v) may satisfy the Lipschitz condition over V. Indeed, if the initial value v_0 is sufficiently close to the solution, which is usually the case, then this algorithm requires only a few iterations to obtain a solution with very high accuracy.


Algorithm

0. Inputs
τ = desired output (target value), j = iteration number, v_j = controllable input parameter v, n = sample size, U = desired upper limit for the absolute increment u = v_{j+1} − v_j, and α = a desired significance level.

1. Initialization
Set j = 1 and v_j = v_0.

2. Estimations
Estimate J(v_j) using equation (2) and J'(v_j) using equation (9).

3. Computations
Set u = 9[τ − Ĵ(v_j)]/[(9 + j)·Ĵ'(v_j)].
If |u| < U, construct the 100(1 − α)% confidence interval for v using equation (20) and stop.
Otherwise, set v_{j+1} = v_j + u, set j ← j + 1, reset the seeds of the random number generators to their initial values, and go to step 2.

Note that by resetting the seeds to their initial values we are using the Common Random Variate approach as a Variance Reduction Technique (VRT).
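A minimal Python sketch of this recursive procedure is given below for illustration. The `estimate` argument is a placeholder for a user-supplied single-run estimator (e.g. the score-function estimators of eqns (2) and (9)); the names and the stopping safeguard `max_iter` are ours.

```python
import numpy as np

def goal_seek(j_target, v0, estimate, U=1e-4, max_iter=100, seed=12345):
    """Sketch of the Robbins-Monro type algorithm of Section 5.
    `estimate(v, rng)` must return (J_hat(v), J_prime_hat(v)) from a single
    simulation run. Resetting the seed each iteration plays the role of the
    Common Random Variate technique mentioned in the algorithm."""
    v = v0
    for j in range(1, max_iter + 1):
        rng = np.random.default_rng(seed)   # reset seeds: common random variates
        j_hat, j_prime_hat = estimate(v, rng)
        u = 9.0 * (j_target - j_hat) / ((9.0 + j) * j_prime_hat)  # d_j = 9/(9+j)
        if abs(u) < U:
            return v
        v = v + u
    return v
```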

6. GOAL SEEKING PROBLEM IN A RELIABILITY MODEL

Consider the coherent reliability structure shown in Fig. 1, with homogeneous (i.e. manufactured by an identical process) components having independent random life-times Y1, Y2, Y3 and Y4 which are distributed exponentially with rate v = v_0 = 0.5. The system lifetime is

L(Y1, Y2, Y3, Y4; v_0) = max{min(Y3, Y4), min(Y1, Y2)}.

It is readily seen that the theoretical expected life-time of this system is J(v_0) = 3/(4v_0). Now we apply our results to compute the value of v needed to obtain a particular value for J(v), say J(v) = 2. For this reliability system the underlying probability density function is (by independence):

f(y; v) = v⁴·exp(−v Σ y_i), i = 1, 2, 3, 4.                (21)

[Fig. 1. A simple reliability system.]

The score function is

S[y(v)] = f'(y; v)/f(y; v) = 4/v − Σ y_i, i = 1, 2, 3, 4,

and

H[y(v)] = f''(y; v)/f(y; v) = [v²(Σ y_i)² − 8v(Σ y_i) + 12]/v², i = 1, 2, 3, 4.

The estimated average life-time and its derivatives for the nominal system (v = v_0 = 0.5), based on eqns (2), (9) and (10), are

Ĵ(v_0) = Σ_j max[min(Y_{3,j}, Y_{4,j}), min(Y_{1,j}, Y_{2,j})]/n,

Ĵ'(v_0) = Σ_j max[min(Y_{3,j}, Y_{4,j}), min(Y_{1,j}, Y_{2,j})]·S[Y_{·,j}]/n

and

Ĵ''(v_0) = Σ_j max[min(Y_{3,j}, Y_{4,j}), min(Y_{1,j}, Y_{2,j})]·H[Y_{·,j}]/n,

respectively, where Y_{i,j} is the jth observation for the ith component (i = 1, 2, 3, 4). We have performed a Monte-Carlo experiment for this system by generating n = 10,000 independent replications, using SIMSCRIPT II.5 random number streams 1-4 to generate the exponential variates Y1, Y2, Y3, Y4 (on a VAX system), respectively. The estimated performance is Ĵ(0.5) = 1.5024, with a standard error of 0.0348. The first and second derivative estimates are −3.0933 and 12.1177, with standard errors of 0.1126 and 1.3321, respectively. The response surface approximation in the neighborhood of v = 0.5 is:

J(v) = 1.5024 + (v − 0.5)(−3.0933) + (v − 0.5)²(12.1177)/2 + ...
     = 6.0589v² − 9.1522v + 4.5638.

A numerical comparison based on this metamodel is given in Table 1. Now assume that the manufacturer wants to improve the average life-time of the system to J(v) = τ = 2. To achieve this goal we set v_0 = 0.5 and U = 0.0001 in the proposed algorithm. The numerical results are tabulated in Table 2.

Table 1. A second-order polynomial approximation and direct simulation

  v      Analytic   Simul.    Metamodel   Abs. error (%)
  0.40   1.8750     1.8780    1.8723      0.14
  0.42   1.7857     1.7885    1.7887      0.17
  0.44   1.7045     1.7072    1.7098      0.31
  0.46   1.6304     1.6330    1.6359      0.33
  0.48   1.5625     1.5650    1.5667      0.27
  0.50   1.5000     1.5024    1.5024      0.16
  0.52   1.4423     1.4446    1.4430      0.05
  0.54   1.3889     1.3911    1.3884      0.04
  0.56   1.3393     1.3414    1.3386      0.05
  0.58   1.2931     1.2951    1.2937      0.05
  0.60   1.2500     1.2520    1.2537      0.29
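The analytic and metamodel columns of Table 1 come from two closed-form expressions, so they are easy to re-evaluate. A minimal sketch (for illustration only, using the coefficients quoted in the text):

```python
# Closed forms behind Table 1: exact MTTF J(v) = 3/(4v) and the fitted
# second-order metamodel around v0 = 0.5 quoted above.
def analytic_mttf(v):
    return 3.0 / (4.0 * v)

def metamodel_mttf(v):
    return 1.5024 - 3.0933 * (v - 0.5) + (12.1177 / 2.0) * (v - 0.5) ** 2

for v in [0.40, 0.44, 0.48, 0.52, 0.56, 0.60]:
    exact, approx = analytic_mttf(v), metamodel_mttf(v)
    print(f"v={v:.2f}  analytic={exact:.4f}  metamodel={approx:.4f}  "
          f"abs. error={100 * abs(approx - exact) / exact:.2f}%")
```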

Table 2. Iterative decision parameter estimate for the reliability system shown in Fig. 1

  (1)   (2)      (3)      (4)       (5)       (6)
  1     0.5000   1.5024   -2.9598   -0.1513   0.3487
  2     0.3487   2.1544   -6.0862   +0.0208   0.3694
  3     0.3694   2.0333   -5.4217   +0.0046   0.3740
  4     0.3740   2.0083   -5.2888   +0.0011   0.3751
  5     0.3751   2.0025   -5.2583   +0.0003   0.3754
  6     0.3754   2.0009   -5.2498   +0.0001   0.3755
  7     0.3755   2.0003   -5.2471   +0.0000   0.3756

(1) Iteration number j, (2) fixed input v_j, (3) estimated MTTF, (4) estimated derivative, (5) change in v_j, (6) new input parameter v_{j+1}.

The estimated input parameter needed to achieve the output J(v) = τ = 2 is 0.3756. A 90% confidence interval based on this estimate using equation (20) is:

P[0.3739 ≤ v ≤ 0.3773] ≥ 0.90.

Comparing the theoretical value v = 0.3750, obtained from J(v) = 3/(4v) = 2, with our computational value suggests that the results based on the proposed algorithm are quite satisfactory. In fact, running this system with v = 0.3756 and n = 10,000, we obtained an estimated MTTF Ĵ(v) = 2.0000. Hence the discrepancy in the estimated input parameter produced by this algorithm must be considered a pure random error, which can be reduced, e.g. by increasing n.
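For completeness, a compact sketch of this experiment is given below (an illustration only: it uses NumPy in place of SIMSCRIPT II.5, the variable names are ours, and the iterates will differ slightly from Table 2 because a different random number generator is used).

```python
import numpy as np

def simulate_nominal(v, n, rng):
    """One simulation run of the Fig. 1 system: lifetime = max(min(Y3,Y4), min(Y1,Y2))
    with Y1..Y4 i.i.d. exponential with rate v. Returns the score-function estimates
    of J(v) and J'(v) (eqns (2) and (9)); here S = 4/v - sum(y_i), as in the text."""
    y = rng.exponential(1.0 / v, size=(n, 4))        # columns correspond to Y1..Y4
    z = np.maximum(np.minimum(y[:, 2], y[:, 3]),
                   np.minimum(y[:, 0], y[:, 1]))     # system lifetimes Z_j
    s = 4.0 / v - y.sum(axis=1)                      # score of each replication
    return z.mean(), (z * s).mean()                  # J_hat, J'_hat

# Goal seeking: find v giving MTTF = 2, starting from v0 = 0.5 (cf. Table 2).
target, v, n, U = 2.0, 0.5, 10_000, 1e-4
for j in range(1, 50):
    rng = np.random.default_rng(0)                   # reset seeds: common random variates
    j_hat, j_prime_hat = simulate_nominal(v, n, rng)
    u = 9.0 * (target - j_hat) / ((9.0 + j) * j_prime_hat)
    if abs(u) < U:
        break
    v += u
print(v)   # should be close to the analytic solution 3/(4*2) = 0.375
```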

7. CONCLUSIONS

Almost all Monte-Carlo computations, such as the MTTF, can be formulated as estimating an expected value of system performance which is a function of an input parameter of the underlying probability density function. In the usual Monte-Carlo simulation this input parameter must be known in advance in order to estimate the output of the system. This paper considers the goal-seeking problem: what input value is needed to achieve a particular value of the output? An efficient algorithm based on a first-order Taylor expansion of the output function is proposed. The estimated derivative used in the Taylor expansion is based on a single-run method. That is, given an initial value for the input parameter, the algorithm computes and refines an estimate of the input parameter needed to achieve a desirable output value. The efficiency of the proposed method is tested on a simple reliability network with satisfactory results. In the course of future research, we expect:


(i) To study the effect of other variance reduction techniques (VRT). The Common Random Variate approach as a VRT is already embedded in the algorithm.
(ii) To extend our methodology to goal-seeking problems with two or more unknown parameters by considering two or more relevant outputs (to ensure uniqueness). By this generalization we could construct a linear system of stochastic equations to be solved simultaneously by a multidimensional version of the proposed algorithm. The simulation design is more involved for problems with more than a few parameters [12, 13].

Acknowledgement--This work is supported by The National Science Foundation Grant CCR-9505732.

REFERENCES

1. Friedman, L. W., The Simulation Metamodel. Kluwer Academic, Norwell, MA, 1996.
2. Adlakha, V. and Arsham, H., A simulation technique for estimation in perturbed stochastic activity networks. Simulation, 1992, 58, 258-267.
3. Arsham, H. and Kahn, A., What-if analysis in computer simulation models: a comparative survey with some extensions. Math. Comput. Modelling, 1990, 13, 101-106.
4. Arsham, H., Perturbation analysis in discrete-event simulation. Int. J. Modelling Simul., 1991, 11, 21-28.
5. Arsham, H., On the inverse problem in Monte-Carlo experiments. Inverse Prob., 1989, 5, 927-934.
6. Arsham, H., Feuerverger, A., McLeish, D. L., Kreimer, J. and Rubinstein, R. Y., Sensitivity analysis and the 'what-if' problem in simulation analysis. Math. Comput. Modelling, 1989, 12, 193-219.
7. Rubinstein, R. Y., Monte Carlo Optimization, Simulation and Sensitivity of Queueing Networks. John Wiley, New York, 1986.
8. Taguchi, G., Experimental design for product design. In Statistical Design and Analysis of Industrial Experiments, ed. S. Ghosh, pp. 1-33. Marcel Dekker, New York, 1990.
9. Rief, H., Monte Carlo uncertainty analysis. In CRC Handbook of Uncertainty Analysis, ed. Y. Ronen, 1988.
10. Arsham, H., Kreimer, J. and Rubinstein, R., Application of the Radon-Nikodym theorem for simulation of queueing systems. In Discrete Event Simulation and Operations Research, pp. 95-99. SCS Publication, Belgium, 1987.
11. Goldstein, L., On the choice of step size in the Robbins-Monro procedure. Statist. Probability Lett., 1988, 6, 299-303.
12. Wild, R. and Pignatiello, J. Jr, An experimental design strategy for designing robust systems using discrete-event simulation. Simulation, 1991, 57, 358-368.
13. Bechhofer, R., Santner, T. and Goldsman, D., Design and Analysis of Experiments for Statistical Selection, Screening, and Multiple Comparisons. John Wiley, New York, 1995.