Monte Carlo Simulation

Krishnamurty Muralidhar, University of Kentucky

I. DEFINITION AND INTRODUCTION
II. POTENTIAL APPLICATIONS OF MCM
III. BOOTSTRAPPING
IV. CONCLUSIONS

GLOSSARY

ABC classification An inventory classification system that identifies the groups of items that account for the bulk of the total inventory's value.
bootstrapping A statistical procedure that uses resampling to construct an empirical sampling distribution of a statistic.
inventory The stock of an item or resource used by an organization.
Monte Carlo methods Techniques that involve the use of random numbers.
simulation The reproduction of a real-world process or system.

MONTE CARLO METHODS are a class of simulation techniques that offer the ability to analyze and understand complex problems. Their potential applications include decision making under uncertainty, inventory analysis, and statistical analysis. The application of Monte Carlo methods in practice has nevertheless been rather limited. This paper outlines possible reasons for this limited application and provides illustrations of situations where Monte Carlo methods can be applied effectively. The paper also discusses the bootstrap method, a relatively new statistical resampling technique that relies extensively on Monte Carlo methods. Finally, the paper discusses the impact that recent advances in information technology, together with changes in the paradigm used to teach quantitative methods, can have on the use of Monte Carlo methods.

I. DEFINITION AND INTRODUCTION

Simulation models can generally be classified into one of three major types: continuous event simulation, discrete event simulation, and Monte Carlo simulation or Monte Carlo methods (MCM). In simple terms, MCM may be any procedure that uses randomly generated numbers to solve a problem. By this definition, any simulation model can be considered as using MCM. Law and Kelton use the term Monte Carlo method for a procedure that uses randomly generated variables to solve problems in which the passage of time is of no consequence (i.e., the problem is static in nature). Pritsker (1995) defines the aim of discrete and continuous event simulation as reproducing the activities of the entities in a system over time in order to better understand the behavior and performance of the system. We use both of these definitions to classify MCM problems as those simulation problems that do not involve the passage of time and/or do not involve entities and activities. The earliest use of MCM can be traced to mathematicians in the 17th and 18th centuries who were interested in investigating games of chance. Later developments included the use of MCM for solving complex differential equations and combinatorial problems. Monte Carlo methods are often applied in situations where the objective is to evaluate the integral of a function over a given region. In many instances, the properties of such functions are complex and/or unknown, and hence it may be difficult to develop closed-form solutions. Numerical techniques other than MCM may also be of little use because of the complexity of the function being evaluated. In such cases, MCM offers a simple yet effective tool for solving the problem.

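As a concrete, minimal illustration of the integral-estimation use just described (not part of the original article), the following Python sketch estimates a one-dimensional integral by averaging the integrand at uniformly sampled points. The integrand and interval are arbitrary choices made here for illustration.

```python
import math
import random

def mc_integrate(f, a, b, n=100_000, seed=42):
    """Estimate the integral of f over [a, b] by Monte Carlo sampling.

    The estimate is (b - a) times the average of f at n points drawn
    uniformly from [a, b]; its standard error shrinks as 1/sqrt(n).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

if __name__ == "__main__":
    # An integrand with no convenient closed-form antiderivative.
    f = lambda x: math.exp(-x * x) * math.sin(x) ** 2
    print(mc_integrate(f, 0.0, 2.0))
```

The same averaging argument extends directly to higher-dimensional regions, which is where MCM tends to be most attractive relative to grid-based numerical integration.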

Problems of the type described above are found in practically every scientific discipline. It is not surprising, then, that MCM is used effectively in many scientific disciplines, including chemistry, engineering, operations research, physics, and statistics. Of these disciplines, the two that are most relevant in an organizational decision-making context are operations research and statistics. In operations research, MCM is used mainly in analyzing queuing systems (discrete event simulation). Monte Carlo methods are also used for some optimization problems; however, a discussion of this topic is beyond the scope of this paper. In statistics, as in other disciplines, MCM is used to solve problems that are not analytically tractable. It is also used to understand the behavior of statistical techniques when the assumptions underlying those techniques are violated, and to determine the power of statistical tests for which analytical derivation is not possible. Recently developed resampling techniques also use MCM. Thus, MCM plays a large role in evaluating statistical techniques.

Monte Carlo methods have been and continue to be used extensively as analytical tools in statistical research. Even a cursory glance at the leading journals in statistics (such as the Journal of the American Statistical Association, Annals of Statistics, and Communications in Statistics) reveals a large number of papers that use MCM. Quantitative journals in business (such as Management Science and Decision Sciences) as well as journals in operations management (such as the Journal of Operations Management and the International Journal of Production Research) also contain numerous papers that use MCM as a tool for analysis.

Given the wide acceptance of MCM by researchers in both statistics and business, one would expect MCM to be used extensively in an organizational decision-making context as well. This, however, does not seem to be the case. An analysis of practitioner-oriented journals (such as Interfaces) reveals few papers that use MCM in an organizational decision-making context. There are several possible reasons for this situation. One possible explanation is that, unlike other statistical techniques, MCM does not receive extensive coverage in courses that teach basic quantitative methods. Extensive coverage of MCM is typically provided only in graduate and doctoral level classes and in some specialized undergraduate classes; in a general quantitative methods class, coverage of MCM is very limited.

Even in classes on simulation, while MCM is often mentioned as a potential tool, more emphasis is placed on the development of discrete event simulation models than on MCM models. A second possible reason is that, in the past, implementing MCM required extensive programming effort, specialized software, and the use of mainframe computers.

Thus, while it is clear that MCM has the potential to be an extremely effective analytical tool for organizational decision making, it has not yet reached that potential. More importantly, unless more attention is paid to informing and educating decision makers about MCM, it will not realize its full potential. Hence, the objective of this study is to illustrate the applicability of MCM in an organizational decision-making context. The next section illustrates different scenarios that lend themselves to MCM applications. The third section describes bootstrapping (a resampling technique that relies heavily on the concepts of MCM) and its application in organizational decision making. The final section provides the conclusions.

II. POTENTIAL APPLICATIONS OF MCM

As observed earlier, MCM has the potential to be an effective analytical tool in organizations. In this section, we identify some examples of problems that lend themselves to such analysis. The examples provided in this section are by no means intended to be an exhaustive list of potential applications of MCM, only a small selection of such applications.

A. Decision Making under Uncertainty

Monte Carlo methods can be used effectively in any situation where there is inherent uncertainty and the problem is complex enough that an analytical derivation is extremely difficult or even impossible. For the purposes of illustration, consider the news vendor problem. In its simplest manifestation, the problem can be described as follows. The objective is to determine the number of newspapers to purchase so as to maximize expected profit. The decision is based on a historical, simple, discrete demand distribution for newspapers, the purchase price, and the selling price of the newspaper. In the simple case, the number of potential alternatives being evaluated (the number of newspapers to purchase) is small. The profit function is also a simple linear function of the number of newspapers sold, the number of newspapers unsold, the purchase price, and the selling price.

In this simple case, an analytical solution can be easily derived. The news vendor problem is often used to illustrate the basic concepts of expected values, probabilities, and decision making under uncertainty. In its simplest form, however, the news vendor problem has few direct analogies in an organizational context. In an organizational context, problems typically involve one or more of the following:

1. A large number of potential alternatives
2. A complex demand distribution
3. A complex pay-off function
4. Non-mutually exclusive alternatives
5. Multiple, related alternatives for related products
6. Decisions to be made at multiple stages

When a real-life problem includes one or more of the above, an analytical solution may not be available. Consider, for the sake of illustration, the pay-off function. In the simple news vendor problem, the pay-off function is linear and is computed as (newspapers sold × profit per unit) − (newspapers unsold × loss per unit), where the profit and loss per unit are simple functions of the cost, the selling price, and some simple penalty for unsatisfied demand. Let us consider the last factor, namely, the penalty for unsatisfied demand. An organization may wish to evaluate, when demand exceeds supply, the impact of a policy of providing unsatisfied customers with a coupon toward their next purchase. In such a case, it would be inappropriate to treat the face value of the coupon as the cost, since it is unknown whether the customer will actually use the coupon. Marketing research indicates that whether a coupon is redeemed may be affected by the coupon proneness of the customer, the attractiveness of the coupon, coupon expiration dates, and so on. Models of coupon redemption are, in most cases, complex and not easily tractable analytically. In situations such as these, the only feasible analytical tool may be MCM. The organization could use MCM effectively to analyze the impact of this policy and could also easily conduct sensitivity ("what-if") analyses to study how changes in the characteristics of the customers and/or coupons affect the solution.

It is easy to visualize other complexities in the problem as well. For instance, the pay-off in the simple problem was simply the profit resulting from the sale of the newspapers. In real-life situations, decisions such as these may result in revenue flows over periods of time, and the organization may then wish to evaluate not just the profit but the net present value of these revenue flows or the internal rate of return.

Unless these revenue flows have well-defined mathematical properties, it may not be possible to identify the best alternative analytically. Similarly, a complex demand distribution may also prevent the analytical identification of the best alternative. Considering a family of substitutable products complicates the problem further, as does the case where competing products are considered. Finally, decision makers may also want to consider decision criteria other than profit maximization. Thus, although the simple news vendor problem may not have direct use in an organizational context, adding even small complexities to the simple model makes it far more relevant. When such complexities are added, analytical derivations may not always be possible, and MCM may present the only viable method of analysis. Evans and Olson provide an excellent example of the application of MCM to the evaluation of a new energy service offered by the Cinergy Corporation. Their example illustrates the effectiveness of MCM as an analytical tool for evaluating alternatives and/or for performing sensitivity analysis for such problems.
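To make the discussion concrete, the following minimal Python sketch (not taken from the article, whose computations were done in Excel and Visual Basic) simulates a news-vendor-style decision with one of the complexities described above: a coupon offered for each unit of unmet demand that is redeemed only with some probability. The demand distribution, prices, coupon value, and redemption probability are hypothetical.

```python
import random

# Hypothetical parameters, for illustration only.
COST, PRICE = 0.50, 1.00                 # purchase cost and selling price per paper
COUPON_VALUE, REDEEM_PROB = 0.25, 0.60   # coupon offered per unit of unmet demand
DEMAND = [(40, 0.2), (50, 0.3), (60, 0.3), (70, 0.2)]   # (demand level, probability)

def sample_demand(rng):
    """Draw one demand value from the discrete distribution above."""
    u, cumulative = rng.random(), 0.0
    for level, prob in DEMAND:
        cumulative += prob
        if u <= cumulative:
            return level
    return DEMAND[-1][0]

def expected_profit(order_qty, n=20_000, seed=1):
    """Estimate expected profit for a given order quantity by simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        demand = sample_demand(rng)
        sold = min(order_qty, demand)
        unmet = max(demand - order_qty, 0)
        # The coupon cost is incurred only when a customer redeems the coupon.
        coupon_cost = sum(COUPON_VALUE for _ in range(unmet)
                          if rng.random() < REDEEM_PROB)
        total += sold * PRICE - order_qty * COST - coupon_cost
    return total / n

best = max(range(40, 71, 5), key=expected_profit)
print("best order quantity:", best, "estimated profit:", round(expected_profit(best), 2))
```

Sensitivity ("what-if") analysis then amounts simply to rerunning the simulation with different coupon values or redemption probabilities.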

B. Inventory Analysis

Elements of statistical analysis also play a crucial role in inventory analysis. Organizations often implement policies and procedures that reduce the time, effort, and expense required to monitor and control low-value, high-volume inventory items (often referred to as type "C" items). The EOQ model (or a variation of it) is often used to determine the order quantity for such items. Continuous review is used in conjunction with the EOQ model, and a fixed quantity is ordered as soon as a pre-specified reorder point is reached. In some cases, the organization may instead implement a periodic review system in which the product is automatically reordered at fixed intervals, with the quantity ordered varying according to the inventory on hand at the time of reordering. A comprehensive discussion of the EOQ model and of periodic and continuous review systems can be found in any introductory operations management textbook.

Monte Carlo methods are often used by researchers to analyze the impact of different statistical assumptions on inventory control. A casual perusal of the leading journals that publish research in operations management reveals a large number of papers using MCM as the analytical tool for research in inventory control.

While some papers have addressed the use of MCM in practice, these articles tend to provide suggestions for the appropriate use of MCM rather than describe its actual use.

Monte Carlo methods can be used effectively to understand and implement inventory control because of the inherent nature of the problem. Statistical distributions play a crucial role in the inventory control of type C items in practice. The simple EOQ model assumes that both the demand and the lead time are deterministic. It is unlikely that this assumption will be satisfied in real-life applications. Hence, the normal distribution is often used to describe the demand and/or lead-time distributions, and it is then possible to derive the reorder point and order quantity analytically. Just as with the news vendor problem, however, the inventory analysis quickly becomes complicated when some of the simple assumptions are relaxed. In practice, inventory problems rarely conform to these simple assumptions. The distribution of demand and/or lead time could follow distributions other than the normal. This in itself should not be a problem as long as the distribution being evaluated has clear mathematical properties (such as the gamma, beta, or log-normal), in which case closed-form solutions are still possible. Even in these cases, however, MCM can be used as a sensitivity analysis tool to evaluate the impact of selecting a specific distribution for demand and/or lead time.

Another practical aspect of inventory analysis is the common "bundling" of products. In many instances, organizations do not order items on an individual basis but order several items from the same supplier at the same time. Organizations may bundle orders for several reasons, including price breaks, reduction of paperwork and ordering costs, and potential shipping advantages. Bundling also implies that, for some products, it is necessary to use order quantities and/or reorder points that are not "optimal." When such complexities are included in the simple inventory model, MCM provides a simple and effective tool for analyzing a complex situation. Such analysis, in addition to providing estimates of costs, can also provide valuable insights into the process, which can then be used to modify policies and procedures if the situation warrants.
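As a hypothetical sketch of this kind of analysis (again in Python rather than the Excel and Visual Basic used elsewhere in the article), the following code simulates a continuous-review policy when daily demand follows a skewed gamma distribution instead of the normal distribution. The lead time, demand parameters, and policy values are assumptions made for illustration.

```python
import random

def simulate_policy(reorder_point, order_qty, n_days=50_000, seed=7):
    """Estimate the average on-hand inventory and the fraction of demand lost
    under a continuous-review (reorder point, order quantity) policy."""
    rng = random.Random(seed)
    lead_time = 4                       # days from placing to receiving an order (assumed)
    on_hand = float(order_qty)
    pipeline = []                       # arrival days of outstanding orders
    lost = total_demand = inventory_sum = 0.0
    for day in range(n_days):
        # Receive any order scheduled to arrive today.
        on_hand += order_qty * sum(1 for d in pipeline if d == day)
        pipeline = [d for d in pipeline if d > day]
        # Skewed (gamma) daily demand in place of the usual normal assumption.
        demand = rng.gammavariate(2.0, 5.0)
        total_demand += demand
        served = min(on_hand, demand)
        lost += demand - served
        on_hand -= served
        # Reorder when the inventory position falls to the reorder point.
        position = on_hand + order_qty * len(pipeline)
        if position <= reorder_point:
            pipeline.append(day + lead_time)
        inventory_sum += on_hand
    return inventory_sum / n_days, lost / total_demand

avg_on_hand, lost_fraction = simulate_policy(reorder_point=50, order_qty=120)
print(f"average on-hand: {avg_on_hand:.1f}, fraction of demand lost: {lost_fraction:.3%}")
```

Running the same simulation for several reorder points, or for several products ordered jointly to capture bundling, yields the kind of cost and service estimates described above.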

C. Statistical Analysis

Organizations have recently shown a renewed interest in the use of statistical analysis for improving operations.

Initiatives such as Six Sigma promote the use of statistical analysis for the "near elimination of defects from every product." A recent article in the American Statistician states: "Statistical and related methods are being used extensively by Six Sigma companies, and by literally thousands of practitioners systematically using statistics on focused projects yielding significant financial results."

It is perhaps in this context that the application of MCM has the greatest potential. From the statement above, it is clear that organizations are using statistical procedures. These procedures are usually based on assumptions about the population from which the sample is drawn. Some of these procedures are robust (i.e., they are not sensitive to deviations from the underlying assumptions), while others are not. In many cases, the extent to which a given statistical procedure is robust can be found in textbooks, but not necessarily for all procedures and for all types of violations of assumptions. In cases where such information is not available, MCM can be used effectively to determine the extent to which a statistical procedure is robust to violations of its assumptions.

For the purposes of illustration, consider the following scenario. Assume that an organization is attempting to determine whether the variance in processing time of an older machine is greater than that of a newly installed machine. The result of this test will be used to decide whether the older machine needs to be modified or replaced. Modifying or replacing the older machine is an expensive proposition, and hence the organization would like to be sure that there is a definite need for this action before it is taken. The null and alternative hypotheses in this case can be stated as

$$H_0\colon \sigma_1^2 = \sigma_2^2 \qquad H_a\colon \sigma_1^2 > \sigma_2^2$$

where $\sigma_1^2$ and $\sigma_2^2$ are the variances in processing times of machine 1 (the older machine) and machine 2 (the new machine), respectively. Further, let the specified level of significance be $\alpha$. This specification implies that, under the null hypothesis (namely, that the variances are equal), the probability of incorrectly concluding that the variances of the two populations are different (the type I error of the hypothesis test) is approximately $\alpha$. By specifying a small value for $\alpha$ (usually 0.01, 0.05, or 0.10), the organization minimizes the probability that the null hypothesis will be incorrectly rejected (and hence also minimizes the unnecessary cost of modifying the machine when it is not necessary).


Assume that the organization has conducted an experiment and observed 50 processing times for each of the machines ($n_1 = n_2 = 50$). A descriptive analysis of the sample data has indicated that the samples, while similar in shape, are heavily skewed. A frequency distribution of the sample data for one of the machines is provided in Fig. 1.

The statistical procedure most often used to compare the equality of variances is a simple F test. Using the 50 observations for each machine, the sample variances in processing times for machine 1 ($s_1^2$) and machine 2 ($s_2^2$) are computed. The test statistic is the ratio of the sample variance of the older machine to that of the new machine:

$$R = \frac{s_1^2}{s_2^2}$$

The null hypothesis is rejected if the value of $R$ is greater than $F_{\alpha,\nu_1,\nu_2}$, the critical value from an F distribution with significance level $\alpha$, numerator degrees of freedom $\nu_1 = n_1 - 1$, and denominator degrees of freedom $\nu_2 = n_2 - 1$.

One of the key assumptions of this test is that the populations from which the samples are drawn are normally distributed. Earlier studies have shown that the test is very sensitive to the normality assumption: when the samples are drawn from skewed populations, even if the significance level is specified as $\alpha$, the actual level of type I error differs from the specified level. Thus, for some distributions, when the null hypothesis is rejected, the probability that this decision is made in error may be much higher than acceptable, which in turn could result in unnecessary expense in modifying the machine. Given that the processing-time data are known to be heavily skewed, the organization would obviously like to know whether this has an impact on the decision at hand (namely, whether the variances are equal).
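The test itself is simple to compute. The following sketch uses Python with NumPy and SciPy (rather than the Excel functions the article mentions later) and applies the F test to two hypothetical samples of processing times.

```python
import numpy as np
from scipy import stats

def f_test_variances(old_times, new_times, alpha=0.05):
    """One-sided F test of H0: equal variances against Ha: var(old) > var(new)."""
    s1_sq = np.var(old_times, ddof=1)        # sample variance, machine 1 (old)
    s2_sq = np.var(new_times, ddof=1)        # sample variance, machine 2 (new)
    R = s1_sq / s2_sq                        # test statistic
    df1, df2 = len(old_times) - 1, len(new_times) - 1
    p_value = stats.f.sf(R, df1, df2)        # P(F >= R) under H0
    critical = stats.f.ppf(1 - alpha, df1, df2)
    return R, p_value, bool(R > critical)

# Hypothetical, heavily skewed processing-time samples (for illustration only).
rng = np.random.default_rng(0)
old = rng.gamma(shape=1.0, scale=1.0, size=50)
new = rng.gamma(shape=1.0, scale=1.0, size=50)
print(f_test_variances(old, new))
```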

While earlier studies provide the general conclusion that the F test is sensitive to skewness, it is unlikely that results for every specific case are readily available. In such cases, MCM can be used as a tool to investigate the impact of skewness on the F test described above. A simple experiment using MCM is described below.

The objective of the experiment is to investigate the sensitivity of the F test for the comparison of variances when samples are drawn from a population whose characteristics are similar to those observed in the sample data. The frequency distribution provided in Fig. 1 shows that the sample data have an (approximately) exponential distribution. Hence, the organization would like to determine, for samples drawn from an exponential distribution under the null hypothesis (that is, with both samples drawn from the same population), the actual level of type I error that results when the specified level of significance is $\alpha$. The organization would also like to investigate whether using larger samples would alleviate the problem.

In the simulation experiment, the population was specified using the gamma distribution with the shape parameter set to 1.0. For the sake of simplicity, and since both samples are drawn from the same population, the scale parameter was also set to 1.0 without loss of generality. Using the GammaInv function in Microsoft Excel, $n$ observations were generated to represent the sample for machine 1 and another $n$ observations for machine 2. The variances of the two sets of observations and the test statistic (the ratio of the sample variance of machine 1 to the sample variance of machine 2) were computed. The p value of the test was computed using the FDist function in Excel, and the null hypothesis was rejected if the p value was less than the specified $\alpha$. The entire experiment was repeated 10,000 times.

Figure 1 Frequency distribution of processing time.
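The experiment described above is straightforward to reproduce. The sketch below follows the same design (an exponential population, i.e., a gamma distribution with shape 1.0, and 10,000 replications) but uses Python with NumPy and SciPy in place of the GammaInv and FDist functions in Excel, so the exact numbers will differ slightly from those reported in Table I.

```python
import numpy as np
from scipy import stats

def observed_type1_error(n, alphas=(0.01, 0.05, 0.10), reps=10_000, seed=123):
    """Fraction of replications in which the F test rejects H0 when both
    samples come from the same exponential (gamma, shape = 1.0) population."""
    rng = np.random.default_rng(seed)
    rejections = np.zeros(len(alphas))
    for _ in range(reps):
        x1 = rng.gamma(shape=1.0, scale=1.0, size=n)
        x2 = rng.gamma(shape=1.0, scale=1.0, size=n)
        R = np.var(x1, ddof=1) / np.var(x2, ddof=1)
        p_value = stats.f.sf(R, n - 1, n - 1)       # one-sided p value
        rejections += (p_value < np.array(alphas))  # count rejections at each alpha
    return rejections / reps

for n in (50, 100, 150, 200):
    print(n, observed_type1_error(n))
```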


At the end of the 10,000 replications, the percentage of cases resulting in rejection of the null hypothesis was recorded for three levels of significance ($\alpha$ = 0.01, 0.05, and 0.10). The entire experiment was then repeated with the sample size varied from 50 to 200 in increments of 50. The results of this experiment are provided in Table I.

A test is considered robust if, when its assumptions are violated, its performance remains essentially the same as when the assumptions are satisfied. One critical aspect of any statistical test is that, under the null hypothesis, the observed level of type I error (the percentage of rejections when the null hypothesis is true) must be approximately the same as the specified level of significance ($\alpha$). The results in Table I indicate that for small sample sizes the F test is clearly not robust when samples are drawn from an exponential population. For a sample size of 50, the observed probability of rejecting the null hypothesis when it is true (the type I error) is much higher than the specified level of significance. The same trend continues for sample sizes of 100 and 150. For a sample size of 200, however, the observed and specified significance levels are approximately the same.

The results in Table I provide valuable information to the organization. First and foremost, they indicate that, for the specific case being considered (an exponential population and a sample size of 50), a rejection of the null hypothesis carries a much larger risk of being a false alarm than the nominal $\alpha$ suggests. In other words, if the organization undertakes modifications or repairs to the old machine because the F test indicated that its variance is higher, there is a large probability (as high as about 25%) that such modifications were unnecessary. The results in Table I also suggest that the best option under these conditions would be to increase the sample size from 50 to 200. When the sample size is 200, the observed and specified significance levels are approximately the same, and hence the results of the F test can be relied upon.

It is important to note that all computations in the above experiment were performed on a personal computer using only Excel functions and Visual Basic as the programming language. The entire experiment required 4,500 seconds to complete. Note that the number of replications was specified as 10,000; reducing the number of replications would result in a corresponding (linear) reduction in computation time, and in many cases even 1,000 or 2,000 replications may be adequate, while increasing the number of replications beyond 10,000 rarely produces any improvement in accuracy. Hence, the same results could probably have been achieved with even less computation time.

The situation presented above arises commonly when using statistical procedures. Real-life situations rarely conform to the strict assumptions underlying many statistical procedures. In many cases, while information about the general behavior of a procedure under violations of its assumptions is available, information about its behavior in a specific case may not be. In such cases, MCM offers a simple, effective, and efficient tool either to generate new information for the specific case or to verify and validate existing information. In the following section, we describe a new MCM-based statistical procedure that provides decision makers with the ability to perform even more advanced statistical analysis.

Table I  Specified versus Actual Significance Levels of the F Test

Sample size    Specified level (%)    Observed level (%)
50             1.00                   10.64
               5.00                   18.68
               10.00                  24.64
100            1.00                    4.23
               5.00                   11.18
               10.00                  17.61
150            1.00                    1.74
               5.00                    7.09
               10.00                  12.80
200            1.00                    0.86
               5.00                    4.45
               10.00                   9.30

III. BOOTSTRAPPING

Bootstrapping is a relatively new, computer-intensive statistical methodology introduced by Efron. The bootstrap method replaces complex analytical procedures with computer-intensive empirical analysis. It relies heavily on MCM in that several random resamples are drawn from a given original sample. The bootstrap method has been shown to be an effective technique in situations where it is necessary to determine the sampling distribution of a (usually complex) statistic with an unknown probability distribution using only the data in a single sample.

The bootstrap method has been applied effectively in a variety of situations; Efron and Tibshirani provide a comprehensive discussion of the method.

The bootstrap methodology can be described in simple terms as a resampling procedure that is used to construct an empirical distribution of a sample statistic. The empirical distribution (often referred to as the bootstrap distribution) can be used in the same way as the theoretical sampling distribution. To illustrate the bootstrap method, assume that a sample $X = \{X_1, X_2, \ldots, X_n\}$ of size $n$ has been collected. Further assume that the parameter of interest is $\theta$, and let $\hat{\theta}$ represent the sample statistic used to estimate $\theta$.

In the bootstrap method, a simple random sample of size $n$ is drawn, with replacement, from $X$. From this bootstrap sample, the sample statistic $\hat{\theta}_1^B$ is computed. The process of generating a bootstrap sample and computing the sample statistic is repeated a large number of times (say $m$), resulting in the collection $\{\hat{\theta}_1^B, \hat{\theta}_2^B, \ldots, \hat{\theta}_m^B\}$. This collection allows the construction of the bootstrap distribution of $\hat{\theta}$, which can be used to estimate the standard error, construct confidence intervals, or perform hypothesis tests. Note that the bootstrap method is free of the parametric assumptions common in traditional statistical techniques. The only assumption underlying the bootstrap method is that the sample is representative of the population, a basic assumption that underlies any statistical technique. However, since the bootstrap method relies heavily on the original sample, it may not be effective for small sample sizes. The bootstrap method is most effective in cases where the sampling distribution of the statistic is so complex that it cannot be derived analytically, and/or where the sampling distribution can be derived only under strict parametric assumptions and cannot be generalized when those assumptions are not satisfied.
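In code, the procedure takes only a few lines. The following minimal sketch (not from the article) constructs the bootstrap distribution of a statistic, here the sample median, chosen simply because its standard error is awkward to derive analytically, and uses it for a standard error and a percentile confidence interval. The data are hypothetical.

```python
import numpy as np

def bootstrap_distribution(sample, statistic, m=5_000, seed=42):
    """Return m bootstrap replications of `statistic`, each computed on a
    resample (with replacement, same size n) of `sample`."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    n = len(sample)
    return np.array([statistic(rng.choice(sample, size=n, replace=True))
                     for _ in range(m)])

# Hypothetical skewed data, for illustration only.
rng = np.random.default_rng(0)
data = rng.gamma(shape=1.0, scale=1.0, size=50)

boot = bootstrap_distribution(data, np.median)
print("bootstrap standard error:", boot.std(ddof=1))
print("95% percentile interval:", np.percentile(boot, [2.5, 97.5]))
```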

Consider the example of the comparison of the variances in processing times for the two machines described earlier. Even though the performance of the F test for a sample size of 200 was adequate, the decision maker may still be concerned about the violation of the normality assumption and its impact on the test. In this case, the decision maker may consider the bootstrap method as a viable alternative for testing whether the variances in processing time of the two machines are different. As before, let $s_1^2$ and $s_2^2$ represent the sample variances for the old and new machines, respectively; the statistic of interest is the ratio of the two variances, namely $R$.

In the traditional F test, when the samples are drawn from normal populations, under the null hypothesis that the variances are equal (i.e., $R = 1$), the sampling distribution of $R$ is an F distribution with $\nu_1$ and $\nu_2$ degrees of freedom. This allows a hypothesis test to be performed using the F distribution. However, when the samples are drawn from other distributions (such as the exponential), it is not always possible to determine the sampling distribution of $R$ analytically. In these cases, the bootstrap method can be used to construct the bootstrap distribution of $R$.

For the purposes of this illustration, two samples of 50 observations each were generated for the old and new machine processing times from a gamma distribution with shape parameter 1.0 and scale parameter 1.0. Since the samples were generated from the same population, the correct decision is not to reject the null hypothesis. Using the original sample collected for the old machine ($n = 50$), a resample of size 50 was drawn, with replacement, from the original sample, and the variance of this bootstrap sample, $[s_1^2]_1^B$, was computed. The process was repeated for the new machine to obtain $[s_2^2]_1^B$. The ratio of the variance of the old machine to the variance of the new machine was then computed as

$$R_1^B = \frac{[s_1^2]_1^B}{[s_2^2]_1^B}$$

The entire process of selecting resamples, computing variances, and computing the ratio was repeated 5,000 times. Using the collection $\{R_1^B, R_2^B, \ldots, R_{5000}^B\}$, a frequency distribution was constructed (Fig. 2). This frequency distribution (the bootstrap distribution of $R$) is an estimate of the true sampling distribution of $R$. As indicated earlier, the bootstrap distribution can be used in place of the sampling distribution of $R$ to estimate the standard error, construct confidence intervals, or perform hypothesis tests. In this case, we are interested in performing a hypothesis test.

Figure 2 also indicates the location of the null hypothesis value $R = 1.0$. Assume that the specified level of significance is $\alpha$. Let the area of the curve to the left of 1.0, as a proportion of the total area, be represented by $p$. The proportion $p$ represents the percentage of bootstrap samples in which $R$ was less than 1.0; conversely, $(1 - p)$ represents the proportion of cases in which $R$ was greater than 1.0. If the value of $p$ is very small, then the variance of machine 1 was less than that of machine 2 in only a small proportion of the bootstrap samples.


Figure 2 Bootstrap distribution of R.

Specifically, if $p < \alpha$, then fewer than an $\alpha$ proportion of the bootstrap samples provided support for the null hypothesis, and hence the null hypothesis can be rejected. In this example, the frequency distribution of $R$ (also provided in Fig. 2) shows that $p \approx 0.40$. This implies that in approximately 40% of the bootstrap samples the ratio of the two variances was less than 1.0. Since $p$ is larger than the usually specified levels of $\alpha$ (0.01, 0.05, or 0.10), the null hypothesis cannot be rejected. Thus, the data in this case do not support the statement that the variance of machine 1 is greater than the variance of machine 2.

The above illustration shows the ability of the bootstrap method to provide a distribution-free means of performing a hypothesis test regarding the variances of the two machines. As with the earlier example, all computations were performed using only Excel functions and Visual Basic. The total computation time to perform the hypothesis test using the bootstrap method was approximately 50 seconds.
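A sketch of the bootstrap test just described, written in Python rather than the Excel and Visual Basic used by the author: each machine's sample is resampled with replacement, the variance ratio is formed 5,000 times, and p is taken as the proportion of bootstrap ratios below 1.0. The samples here are generated from the same gamma population, mirroring the article's setup.

```python
import numpy as np

def bootstrap_variance_ratio_test(old_times, new_times, m=5_000, seed=99):
    """Bootstrap test of whether var(old) exceeds var(new). Returns the
    estimated p (proportion of bootstrap ratios R below 1.0) and the ratios."""
    rng = np.random.default_rng(seed)
    old_times, new_times = np.asarray(old_times), np.asarray(new_times)
    ratios = np.empty(m)
    for b in range(m):
        resample_old = rng.choice(old_times, size=len(old_times), replace=True)
        resample_new = rng.choice(new_times, size=len(new_times), replace=True)
        ratios[b] = np.var(resample_old, ddof=1) / np.var(resample_new, ddof=1)
    return float(np.mean(ratios < 1.0)), ratios

# Samples drawn from the same skewed population, as in the article's illustration.
rng = np.random.default_rng(0)
old = rng.gamma(shape=1.0, scale=1.0, size=50)
new = rng.gamma(shape=1.0, scale=1.0, size=50)
p, _ = bootstrap_variance_ratio_test(old, new)
print("estimated p:", p, "-> reject H0" if p < 0.05 else "-> do not reject H0")
```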

IV. CONCLUSIONS

Just a few years ago, the experiments described above could have been performed only with a mainframe computer and specialized software. Recent advances in technology have made these requirements unnecessary.

All computations in this paper were performed on a personal computer (a Dell Dimension with a 450-MHz processor). The only software used was Excel, with Visual Basic as the programming language. While it is true that some specialized knowledge (specifically, programming in Visual Basic) is still needed, the fact that this software is available on even the most basic computer makes it easier for decision makers to use MCM. It is also important to note that the times reported in this study were not CPU times but actual elapsed times.

Educators are also adapting to the advances in information technology to enhance the capabilities of their students. There has been a recent trend toward teaching quantitative methods courses using spreadsheets. The objective of this approach is to treat students (decision makers) as "end users" rather than, as in the traditional approach, as "informed consumers of MS consulting." To achieve this objective, the approach focuses on the use and application of statistical and management science techniques in spreadsheets, reducing the focus on the numerical aspects of the techniques that is often found in the traditional method of teaching such courses. With the greater acceptance of this approach, recent textbooks also present the relevant material in a spreadsheet environment.

The major impact of this trend has been that techniques and procedures, such as MCM, that were until recently considered too advanced and/or too complicated to be covered in introductory classes are now receiving coverage. Recent textbooks on quantitative methods provide coverage of MCM, and several Excel add-in packages (such as @Risk and Crystal Ball) have been developed for implementing MCM.

In conclusion, MCM remains a simple yet powerful alternative for analyzing complex decision-making problems involving uncertainty. However, the application of MCM in practice does not seem to match its potential. This can be attributed mainly to the fact that, in the past, applying MCM required special technology and software, and that a lack of coverage of the topic in quantitative methods courses meant that only students who specialized in quantitative methods were likely to use MCM. These problems have been alleviated by recent advances in technology and a shift in the paradigm for teaching quantitative methods. With these changes, it is likely that the application of MCM in practice will finally reach the potential it offers for decision makers.

SEE ALSO THE FOLLOWING ARTICLES

Continuous System Simulation • Decision Support Systems • Discrete Event Simulation • Executive Information Systems • Game Theory • Optimization Models • Simulation Languages • Strategic Planning, Use of Information Systems for

BIBLIOGRAPHY

Efron, B. (1979). Bootstrap methods: Another look at the jackknife. Annals of Statistics, Vol. 7, No. 1, 1–26.
Efron, B., and Tibshirani, R. (1993). An introduction to the bootstrap. New York: Chapman & Hall.
Erkut, E. (1998). How to "Excel" in teaching MS. OR/MS Today, Vol. 25, No. 5, 40–43.
Evans, J. R., and Olson, D. L. (1999). Statistics, data analysis and decision modeling. New Jersey: Prentice-Hall.
Fishman, G. S. (1996). Monte Carlo: Concepts, algorithms, and applications. New York: Springer-Verlag.
Law, A. M., and Kelton, W. D. (1982). Simulation modeling and analysis. New York: McGraw-Hill.
Markland, R. E., Vickery, S. K., and Davis, R. A. (1998). Operations management: Concepts in manufacturing and services, 2nd Edition. Mason, Ohio: South-Western College Publishing.
Moore, D. S., and McCabe, G. P. (1993). Introduction to the practice of statistics. New York: W. H. Freeman and Company.
Pritsker, A. B. (1995). Introduction to simulation and SLAM II. New York: John Wiley & Sons.
Williams, G. J., Hill, W. J., Hoerl, R. W., and Zinkgraf, A. (1999). The impact of six sigma improvement—A glimpse into the future of statistics. American Statistician, Vol. 53, No. 3, 208–215.