
Computers ind. Engng Vol. 22, No. 2, pp. 195-209, 1992. Printed in Great Britain. All rights reserved.

0360-8352/92 $3.00 + 0.00. Copyright © 1992 Pergamon Press plc

USING THE TAGUCHI PARADIGM FOR MANUFACTURING SYSTEM DESIGN USING SIMULATION EXPERIMENTS

R. J. MAYER and P. C. BENJAMIN, Department of Industrial Engineering, Texas A&M University, College Station, TX 77843-3131, U.S.A.

(Received for publication 18 June 1991)

1. INTRODUCTION

1.1. Purpose and outline of the paper

The purpose of this paper is to describe and demonstrate a method to design robust manufacturing systems using computer simulation experiments. The method developed uses the concepts and techniques introduced by Genichi Taguchi and modifies them for simulation based design studies. A background for this research is given in Section 1.2. In Section 1.3, the motivation for this work is outlined. Section 1.4 explains the relationship of this work to a closely related area, the use of experiments to design products. In Section 2, an outline of previous research work in the area is given. A brief description of the Taguchi method as applied to the design of systems is presented in Section 3. Section 4 discusses some of the limitations of the Taguchi approach for simulation based design studies and suggests a few modifications to overcome these drawbacks. An example from the manufacturing domain is presented in Section 5 to illustrate the method described. A few details of the software and hardware used to perform the simulation and statistical analyses are presented in Section 6. In Section 7, a few areas for further research are outlined, and Section 8 provides a brief summary of the paper.

1.2. Background

With the increase in complexity of man-made systems, computer simulation has become a widely used and often indispensable decision support tool. Moreover, the rapid proliferation of sophisticated computer resources in recent times has enhanced the attractiveness of computer simulation as a problem solving and design tool. The popularity of simulation amongst competing quantitative tools can probably be attributed to the fact that it is both simple and intuitively appealing. It facilitates experimentation with real world systems which would otherwise be either impossible or not cost effective. Moreover, according to Jay Forrester [1], ". . . by using simulation, a great deal more is learned, because the experimental conditions are fully known, controllable, and reproducible, so that changes in systems behavior can be traced directly to the causes."

Though simulation has been used for a variety of purposes, we restrict ourselves in this research to the application of simulation for the purpose of system design. By "system", we mean, in the words of Shannon [2], ". . . a group or set of objects united by some form of regular interaction or interdependence to perform a specified function."

We are thus not restricting ourselves to any specific system at the outset, though we demonstrate the concepts developed on a specific domain area. By "design", we refer to one of two kinds of activities--improvement in the design of an existing system, or the design of a new system. When used for the purpose of system design, simulation studies are often goal oriented. Simulation analysts are usually interested in finding the settings of model parameters which would enable the model to meet certain desired performance criteria. Since the behavior of a simulation model is usually stochastic in nature, the search through the parameter space for improved solutions is seldom straightforward. Owing to the complex structure of real life models there is

usually no explicit functional relationship between the model performance metrics and the independent variables of the model. The areas of study which investigate the search for improved model performance include simulation optimization, goal driven simulation, and sensitivity analysis. Broadly, the efforts of this research seek to formulate an effective methodology for goal, or target driven, simulations for the purpose of system design.

Quality control the world over has been revolutionized in the last three decades by the ideas of the Japanese engineer, Genichi Taguchi. The methods he developed in the late 50's adopted a uniquely different philosophy for approaching quality. He introduced and vitalized cost consciousness in quality improvement efforts, and modified the way in which the statistical principles of experimental design and analysis were used so as to make them more understandable to practicing industry personnel. His approach emphasized the need to build quality into the product at the design stage itself, rather than expending most effort on control of quality by corrective feedback action at the production stage. Hence his methods focus on off-line rather than on-line quality control.

1.3. Motivation

Central to Taguchi's approach to "quality by design" is this simple principle: instead of trying to eliminate or reduce the causes of product performance variability, adjust the design of the product so that it is insensitive to the effects of uncontrolled (noise) variation. This intuitively appealing though simple concept has been exploited by Japanese companies in the past three decades with telling effect; they have delivered products at quality levels that are both superior and consistent. It is our contention that the strategy he adopted to achieve what he calls "Robust Designs" of products, or designs which are less sensitive to noise, can be adapted and used advantageously for the design of the heterogeneous, complex man-machine systems that manufacture the products.

In the past, the application of simulation studies to the design of systems has generally focussed on the goal of achieving a desired value of a given "criterion function" (see Shannon [2]), or a yardstick of performance. Such "measures" generally refer to nominal or average values (along with a confidence bandwidth for stochastic models). It is our observation that little attention has been given to the variability of these performance measures that occurs in practice when system parameters deviate from their designed settings. As a consequence, the managers of such systems often spend considerable time on "firefighting" efforts to offset the effects of such performance variation. Though one may argue that such is the nature of business (irrespective of the efficacy of planning and design), we contend that there is a case for designing-in robustness. The primary motivation for this research is the perceived need for developing methodologies which consider system performance variability as a significant design criterion. Such robust design methodologies would therefore attempt to create designs that are resilient to the contingent variability of the system's operating conditions.

1.4. Product design vs system design

To put this research in perspective, it is interesting to compare the process for product design by physical experimentation with that for system design by computer simulation:

• In product design, we seek to improve or establish the design of a product using physical prototypes. In simulation based system design, we attempt to establish or improve the design of a system (manufacturing system, service facility, etc.), using mathematical/logical/statistical models.
• In product design, we conduct physical experiments to generate the information needed to perform the design analysis. In simulation based system design, the information needed for the analysis is generated by executing simulation "experiments" on a computer.
• In product design, the focus is on physical products which are generally produced in mass quantities. In simulation based system design, on the other hand, the focus of the design is often large scale heterogeneous systems which are almost never "mass produced."
• In product design, the measures of performance are the desired functional characteristics of the product. In simulation based system design, the performance measures are the "model performance measures" for the system being designed.

The analogy between the design development cycles for these two processes is shown in Fig. 1.

[Figure: the traditional product design cycle, supported by the Taguchi method, shown side by side with the simulation based system design cycle, supported by the modified Taguchi method.]

Fig. 1. Design cycles for product and system design.

It is seen from the above comparison that at an abstract level, the design processes in the two different contexts are essentially the same. Hence what we attempt in this research is essentially a transfer of concepts from the design process in one setting to another.

2. STATE-OF-THE-ART IN RELATED AREAS

The methodology presented in this paper is an attempt to synergize work in two vastly different areas of Industrial Engineering: optimum seeking methods in simulation for design, and the use of Taguchi's methods in quality control. We therefore group our survey of previous work under two major headings, and outline these in Sections 2.1 and 2.2. Moreover, within the area of optimum seeking methods for simulation, our work relates to methods which use statistically planned experiments. We thus focus our attention on optimization techniques used for simulation which employ statistically planned experiments.

2.1. Simulation optimization using statistically planned experiments

Response surface methodology (RSM), developed by Box and Wilson [3], is a set of techniques used for process exploration and optimization using statistically planned experiments. RSM uses the data generated by a sequence of planned experiments to fit a polynomial regression function to approximate the performance measure (response). The gradients of this polynomial are used to guide the search towards the optimum. The technique has been refined over the years, and more recent descriptions of RSM are presented in Box and Draper [4], and Myers et al. [5].

The early applications of RSM were in conjunction with physical experiments. Since the 70's, RSM has been widely used for optimization by simulation. Smith [6] made an empirical study to compare the performance of RSM and its variants with other simulation optimization methods. Montgomery and Bettencourt [7], and Biles and Ozmen [8], give applications of RSM to computer simulation experiments, with special emphasis on optimization with multiple objective functions.

A recent and exciting area in simulation optimization research is the use of frequency domain experiments (FDE). Although they use statistically planned experiments to a very limited degree, they are similar in some respects to RSM. FDE were introduced by Schruben and Cogliano [9], and are used both for sensitivity analysis of simulation models and for the identification of a meta-model of the response surface. The method establishes the strength of relationships between the output and the independent variables of the model by tracking the variation in the performance measure in response to sinusoidally oscillating inputs. This method appears to offer promise for use as a front end to response surface tools.

Neither RSM nor FDE provide explicit strategies for searching for robust designs. The former is an optimization tool and the latter is a factor screening technique.
Both methods aid in moving towards improved values of the nominal performance measure but do not attempt to find solutions which are insensitive to noise variation. This work is thus an attempt to fill the perceived void in terms of simulation analysis research focussed on designing systems which are insensitive to noise variation.
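To make the RSM search loop concrete, the following sketch fits a first-order model to a 2² factorial around the current design point and steps along the estimated gradient. The response function, noise level, and step sizes are illustrative assumptions, standing in for a real simulation model and a real study.

```python
import numpy as np

def noisy_response(x1, x2, rng):
    # Hypothetical stochastic response with its optimum near (2, -1);
    # in practice this would be one run of the simulation model.
    return -(x1 - 2.0) ** 2 - (x2 + 1.0) ** 2 + rng.normal(0.0, 0.1)

def rsm_step(center, half_width, rng):
    """One RSM iteration: run a 2^2 factorial around `center`, fit a
    first-order model by least squares, and return the fitted gradient
    (the direction of steepest ascent, in coded units)."""
    pts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
    X = np.column_stack([np.ones(4), pts])      # model: b0 + b1*x1 + b2*x2
    y = np.array([noisy_response(*(center + half_width * p), rng) for p in pts])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

rng = np.random.default_rng(0)
center = np.array([0.0, 0.0])
for _ in range(20):
    center = center + 0.3 * rsm_step(center, half_width=0.5, rng=rng)
print(center)  # drifts towards the optimum near (2, -1)
```

A full RSM study would also check the fitted model for adequacy and switch to a second-order design near the optimum; the sketch shows only the steepest-ascent phase.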


2.2. Research in the theory and application of the Taguchi method

Although the Taguchi method was developed in the late 50's, it was only two decades later that it gained widespread application in the United States. The original work of Taguchi is documented in his own two-volume book [10]. Other books which present the Taguchi method in considerable detail are those of Taguchi and Wu [11], and Ross [12]. Kackar [13] presents a very elegant and precise account of the Taguchi method. He modifies some of the terminology to make it more understandable. Hunter [14] compares and contrasts the Taguchi method with the statistical techniques of experimental design.

Over the past few years, Taguchi's methods have been subject to considerable critical review. Gunter [15] has criticized the use of Orthogonal Arrays by Taguchi, in view of the fact that many of the interaction effects are ignored. Leon et al. [16] have investigated the validity of using the signal-to-noise ratio as the performance metric in the quest for robust designs. Box [17] provides an excellent critique on the methods of Taguchi, and recommends the separate analysis of location and dispersion effects rather than the use of the signal-to-noise ratio.

There is no evidence in the published literature of work dealing with the design of complex heterogeneous systems using the strategies introduced by Taguchi in conjunction with computer simulation experiments. However, numerical methods have been used to simulate the behavior of physical devices on the computer. According to Kackar [13] and Hunter [14], such numerical simulation experiments can be used in conjunction with the Taguchi method to arrive at more robust product designs. Nazeret and Klinger [18] and Pao et al. [19] have used the Taguchi technique to optimize the design of computer systems. In these applications, however, physical experimentation was carried out on a "product"--the computer system itself, and not on a model of the system.
Though it is apparent from the above that this research charts new areas in the field of system design methodologies, one may contend that post-optimality sensitivity analysis studies pursue similar goals. The main disadvantage of sensitivity analysis studies, however, is that they are almost exclusively done in a one-factor-at-a-time manner (see Pignatiello [20]). The strategies we adapt from Taguchi in this research, on the other hand, enable the simultaneous variation of two or more factors--improving the effectiveness of the optimization procedure. Moreover, noise sensitivity information is directly factored into the objective function--an improvement over conventional sensitivity analysis strategies.

3. THE TAGUCHI METHOD OF DESIGN FOR SIMULATION EXPERIMENTS

In this section, we present a broad overview of the Taguchi approach. In Section 3.1, we discuss the main features of the Taguchi paradigm, with emphasis on the philosophy that motivates his method and the novel concepts of robust design. Since our objective is to demonstrate the Taguchi technique in a different setting--that of system design using simulation experiments--we modify the interpretation of his approach as we find appropriate. A brief description of the mechanics of the Taguchi method in the context of system design is given in Section 3.2.

3.1. Distinguishing features

To initiate our description and give a flavor for the concepts that follow, we present an example from Byrne and Taguchi [21]. In 1953, a tile manufacturing company in Japan was having a major problem with its new $2 million kiln. The problem was the wide variation in the size of the tiles which were baked inside the kiln. Those which were stacked further inside the kiln had dimensions very different from those which were located near the kiln wall. The cause of this variation was the temperature gradient from the center to the wall of the kiln. It was obvious to the engineers of the company that the temperature variability was a "noise" to their design. One approach to solving the problem would have been to eliminate the cause of the tile size variation. To do this, the company would have had to redesign the kiln--a countermeasure which would cost about half a million dollars. Owing to budgetary restrictions, this was not a feasible alternative for the company at the time. Hence a different approach was formulated to solve the problem.


The process experts at the company studied the problem more closely and identified seven major controllable (design) factors which would affect the size of the tile: content of limestone, fineness of additive, content of agalmatolite, type of agalmatolite, raw material quantity, content of waste return, and content of feldspar. In addition to the conditioning temperature, two other uncontrollable (noise) factors were identified: conditioning time and conditioning relative humidity. After performing several experiments with the kiln using orthogonal designs, it was discovered that the content of limestone had the most significant effect on the sensitivity of the tile size to temperature variation. It was found that by increasing limestone content from 1% to 5%, and by choosing better levels for the other factors, the percentage of size defects could be reduced from 30% to less than 1%. Moreover, the process engineers found that by changing the size of the mold (called an adjustment factor), the tile quality could be further improved without increasing its sensitivity to temperature fluctuations.

3.1.1. Off-line quality control. Off-line quality control refers to the collection of efforts directed at designing quality into the product by conducting experiments to scientifically determine the best settings of the manufacturing process and the variable features of the product. The philosophy behind this approach is that if quality is designed upfront into the process, then the quantum of effort needed for inspection and corrective action at production would be minimal. Taguchi outlined a three-step methodology to implement off-line quality control: system design, parameter design, and tolerance design. System design refers to the activities needed to formulate an initial prototype design. This involves, among others, decisions about choosing the independent and dependent variables of the design, the ranges of values for the chosen parameters, etc.
Parameter design, the technique used in the tile company example, refers to the investigation carried out to choose the settings of the product and process parameters which reach the target performance while keeping sensitivity to noise minimal. This involves planning and conducting fractional factorial experiments to generate observations which are then analyzed using statistical methods such as ANOVA. Tolerance design refers to the methods aimed at determining the tolerances on the parameter settings that would minimize the cost of the product from both long and short term perspectives. In this paper, we will restrict our attention to system design and parameter design only.

3.1.2. Loss functions and performance statistics. Central to Taguchi's approach to quality is his attempt to quantify the "losses" which occur because of poor quality, using "loss functions". He considered the loss incurred by society because of a product performing poorly to be equivalent to the cost that the manufacturer of the product must incur for inferior quality. Moreover, he reasoned that such a loss function must be quadratic in nature, with losses increasing in proportion to the square of the deviation of the performance from the target. In order to measure and factor the loss function into the actual design process, Taguchi introduced a class of metrics called signal-to-noise ratios. In its simplest form, the signal-to-noise ratio is proportional to the inverse of the coefficient of variation of the product's performance with respect to noise. A more detailed description of loss functions and signal-to-noise ratios is given in the Appendix.

3.1.3. The robustness design principle.
A key feature which distinguishes the Taguchi approach is what we call the robustness design principle: instead of trying to reduce or eliminate the causes of product performance variability, adjust the design of the product so that it is insensitive to the effects of the uncontrolled (noise) variation. To use this principle, the space of parameters is decomposed into two major segments: the "controllables" or the design factors, and the "uncontrollables" or the noise factors. In the tile company example, for instance, one of the design factors is the content of limestone, and one of the noise factors is the conditioning time. Of course, the appropriateness of the Taguchi method for any design problem hinges on the existence of such a decomposition.

3.1.4. Decomposition of the optimization process. The tile company example illustrates another important strategy which is unique to the method: decompose the optimization process into two steps. First, optimize to reduce performance sensitivity to noise. Then, tune performance towards target by adjusting factors that do not contribute substantially to noise sensitivity. For this strategy to work, however, there must exist adjustment factors: factors which have little effect on the noise


sensitivity, but have a significant effect on the performance characteristic of the product. Though Taguchi refers to this approach as an "optimization" strategy, it is really a heuristic procedure to improve the performance.

3.1.5. Use of orthogonal arrays. Taguchi recommends the use of Orthogonal Arrays for the construction of the experimental design matrices. Orthogonal Arrays, introduced by Rao [22], are generalized Graeco-Latin squares, which have the pairwise balancing property that every setting of a parameter of the design occurs with every setting of each of the other parameters the same number of times (Kackar [13]). Consequently, any two columns of an orthogonal array form a complete two factor factorial design. Since the use of orthogonal arrays assumes the existence of an underlying additive model for the observed response, the validity of their use in the presence of substantial interactions in the model is questionable. We will discuss this issue further, and suggest a solution, in Section 4.1.
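The pairwise balancing property is easy to verify mechanically. The sketch below hard-codes the standard L9(3^4) orthogonal array (nine runs, four three-level factors) and checks that every pair of columns contains each ordered pair of levels equally often; the helper function name is ours.

```python
from collections import Counter
from itertools import combinations

# The standard L9(3^4) orthogonal array: 9 runs, 4 three-level factors.
L9 = [
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
]

def is_pairwise_balanced(array):
    """Check the orthogonality property: for every pair of columns,
    each ordered pair of levels occurs the same number of times."""
    n_cols = len(array[0])
    for c1, c2 in combinations(range(n_cols), 2):
        counts = Counter((row[c1], row[c2]) for row in array)
        if len(set(counts.values())) != 1:
            return False
    return True

print(is_pairwise_balanced(L9))  # True: any two columns form a full 3x3 factorial
```

Note that 9 runs here replace the 3^4 = 81 runs of the full factorial, which is precisely why the additive-model assumption becomes critical.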

3.2. Using the Taguchi method for simulation based system design

The Taguchi method of parameter design (for simulation studies) consists of the following steps.

3.2.1. Identify factors and specify targets. For the system under study, identify the relevant performance measure, design factors and noise factors. A performance measure is a carefully chosen metric whose measured value characterizes some aspect of the system performance. For the purpose of this study it is assumed that there exists a goal, or a target value of system performance, which the designer hopes can be achieved as an outcome of the analyses. The set of independent variables of the model of the system is divided into two disjoint subsets: the design factors and the noise factors.* The design factors are the independent variables of the model whose values are (presumably) within the control of the designer. The noise factors, on the other hand, are those variables of the model which cannot be manipulated by the system designer, i.e. the "uncontrollables" of the design (although any parameter of the model can be "modified" theoretically, we are interested only in those changes which are possible in the corresponding real system which the model purports to represent). At this stage, the target value for the performance measure is set and the ranges of permissible values of the design and noise variables are specified.

3.2.2. Formulate the experimental design matrices. To facilitate the study of the model response to variation in the parameters of the model, and to measure the sensitivity of the model performance to noise, the sequence of experiments is planned in a systematic manner. In statistically planned experiments, the "plan" of the experiment is specified through "design matrices" (see Box et al. [23]). The entries of these matrices unambiguously specify the settings of the parameters (the experimental conditions) for each of the experiments to be run.
The matrix of the experimental design is made up of two components. The design matrix is a matrix of design factors; in it are stored the various level combinations of the design factors. The noise matrix is a matrix of level combinations of the noise factors. The design and noise matrices of a typical design are shown in Fig. 2 (adapted from Kackar [13]). Suppose that the design matrix has m rows and that there are n rows in the noise matrix. For each of the experimental conditions specified by a row i of the design matrix, the experiment is replicated n times to yield measurements of the performance characteristic Yi1, Yi2, . . . , Yin, in order to assess the sensitivity of the performance characteristic to controlled noise variation. This procedure is repeated for each of the m rows of the design matrix, making a total of m × n experimental conditions. It is seen that this strategy of experimentation is really a "sensitivity analysis", where we try to estimate the sensitivity of performance to artificially introduced (though scientifically planned) perturbations in the noise factors. The observations recorded by these experiments are then used to compute the performance statistics (the signal-to-noise ratios).

As discussed in Section 3.1.5, orthogonal arrays are used to construct the experimental matrices. The design matrix is usually a three-or-more-level design, and the noise matrices typically have two or more levels for each factor. The choice of how many levels to use for each factor is determined from prior knowledge of the anticipated behavior of the performance characteristic. If any non-linearity is expected in the response behavior, the use of three or more levels is recommended.

*We use the terminology of Kackar [13], who simplified the original terminology of Taguchi.
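The crossing of the two matrices, and the computation of one S/N value per design row, can be sketched as follows. The tiny design and noise matrices and the stand-in model are illustrative assumptions (here m = n = 4); the S/N form used is the common nominal-the-best statistic, one of several variants Taguchi proposed.

```python
import math

def signal_to_noise(ys):
    """Nominal-the-best S/N statistic, 10*log10(ybar^2 / s^2) -- one
    common form of Taguchi's performance statistic (cf. the Appendix)."""
    n = len(ys)
    ybar = sum(ys) / n
    s2 = sum((y - ybar) ** 2 for y in ys) / (n - 1)
    return 10.0 * math.log10(ybar ** 2 / s2)

def run_model(design_levels, noise_levels):
    # Stand-in for one simulation run: the mean response depends on the
    # design settings, while the first design factor also damps the
    # effect of the noise factors (so it controls noise sensitivity).
    base = 10.0 + 2.0 * design_levels[0] + design_levels[1]
    disturbance = 0.5 * (noise_levels[0] + noise_levels[1]) / design_levels[0]
    return base + disturbance

design_matrix = [(1, 1), (1, 2), (2, 1), (2, 2)]     # m = 4 design rows
noise_matrix = [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # n = 4 noise rows

# Cross the two matrices: every design row is run under every noise row
# (m x n = 16 runs), giving one S/N value per design row.
for d in design_matrix:
    ys = [run_model(d, z) for z in noise_matrix]
    print(d, round(signal_to_noise(ys), 2))
# Rows with the first design factor at level 2 show the higher S/N,
# i.e. they are the less noise-sensitive designs.
```

In an actual study the loop body would launch simulation runs rather than evaluate a closed-form function, and the S/N values would then feed the ANOVA of step 3.2.4.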

[Figure: a 9-row design matrix of three-level factors crossed with a 4-column noise matrix; each design row i yields the responses Yi1, Yi2, Yi3, Yi4.]

(NOTE: 1. In this figure, m = 9 and n = 4, making a total of 36 experimental conditions. 2. The numbers 1, 2, and 3 refer to the levels of the factors.)

Fig. 2. Typical design matrices.

3.2.3. Conduct the experiments (simulation runs) and compute the performance statistics. As explained in Section 3.2.2, a total of m × n experiments are conducted and the results are compiled. Of course, these experiments can be replicated further to enable more precise estimation of the experimental error, though Taguchi does not emphasize this. The performance statistics of interest, the signal-to-noise ratios (S/N), are now computed for each of the m rows of the design matrix.

3.2.4. Find parameter settings to maximize S/N. ANOVA is performed using the signal-to-noise ratio as the response. At this stage, factors which have a significant effect on S/N are identified. These factors are now adjusted and set at the levels which would maximize S/N. The levels of those factors are then treated as fixed, and hence will not be considered for further adjustments to improve performance.

3.2.5. Tune performance to target. The set of design factors which significantly influence the model performance measure is identified by performing ANOVA with the model performance measure as the response. Among the factors which have a negligible effect on S/N, those which have a significant effect on the model performance measure are identified. These factors, called adjustment factors, are now the basis for further performance gain while retaining the robustness of the design. The model performance is "tuned" to bring the value of its performance measure closer to target by properly setting the adjustment factors. Though Taguchi refers to this process as "optimization", it is really a heuristic method of performance enhancement.

3.2.6. Perform confirmation experiments. To confirm that the chosen settings of the model indeed yield the desired behavior, confirmation experiments are run at the new parameter settings. If the model performs as predicted, the chosen design is accepted as adequate, and the analysis stops.
If not, a new design cycle will have to be initiated, since this would indicate that some of the assumptions made during the analysis are not valid (for example, ignoring interaction effects).

4. LIMITATIONS OF THE TAGUCHI PARADIGM FOR SIMULATION ANALYSIS AND SUGGESTED CHANGES

From the description of the main concepts and techniques of the Taguchi approach given in Sections 3.1 and 3.2, it is evident that the Taguchi method needs to be modified. These modifications, in addition to overcoming some of the limitations of the Taguchi approach, render the method suitable for application to a new domain--manufacturing system design using computer simulation experiments. In this section, we briefly describe the changes that we propose to the Taguchi method.


4.1. Use of orthogonal arrays

Since the Taguchi method has primarily been used in the quality control domain, the use of orthogonal arrays may be mandatory there to reduce experimentation, because of the large number of variables and the high cost of running experiments. However, statisticians have critiqued his approach on this count, since interaction effects are usually ignored (see Gunter [15]). In simulation studies, however, the cost of extensive experimentation is often not a severe limitation. Hence we suggest the use of appropriate fractional factorial designs of Resolution IV or V, so that the designs are not severely biased.
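As an illustration of such a design, the sketch below constructs a 2^(4-1) fractional factorial of Resolution IV from the generator D = ABC (defining relation I = ABCD), under which main effects are aliased only with three-factor interactions, never with two-factor interactions.

```python
from itertools import product

# A 2^(4-1) fractional factorial of Resolution IV: take the full 2^3
# factorial in A, B, C and generate D = A*B*C, so the defining relation
# is I = ABCD.
runs = [(a, b, c, a * b * c) for a, b, c in product([-1, 1], repeat=3)]
for run in runs:
    print(run)

# Sanity checks: 8 runs, A*B*C*D = +1 everywhere, and every column is
# balanced between its low (-1) and high (+1) levels.
print(all(a * b * c * d == 1 for a, b, c, d in runs))       # True
print(all(sum(r[i] for r in runs) == 0 for i in range(4)))  # True
```

Eight runs thus estimate four main effects clear of two-factor interactions, against sixteen runs for the full 2^4 factorial; a Resolution V design would additionally keep two-factor interactions clear of one another.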

4.2. When adjustment factors cannot be found

The Taguchi method presupposes the existence of adjustment factors--factors which significantly affect the model performance measure, but have a negligible effect on the signal-to-noise ratio. What needs to be done in the absence of such factors is not specified in his method. In the attempt to design noise insensitive manufacturing systems, it is possible to encounter models where no adjustment factors can be identified by the Taguchi method. The relevance of this issue is demonstrated by example in Benjamin and Mayer [24].

We suggest a heuristic ranking procedure to overcome this problem. Using the results of the ANOVA on S/N, rank the design factors in descending order of effect (using mean square values). Working down this list one factor at a time, identify a subset of factors which cumulatively contribute 70% or more of the total mean square value (excluding the error mean square value). The subset of factors so generated consists of those factors that are likely to have a substantial influence on the sensitivity of performance to noise. Next, the levels of the factors from this subset are adjusted so that the value of S/N is maximized. These factors are not considered for further adjustment in the subsequent steps of the analysis. Next, consider the results of the ANOVA on the performance measures. Examining only those design factors which remain after the previous pruning, choose the factors which contribute significantly to the model performance measure as the "adjustment factors".

4.3. Tuning performance towards target

Once the adjustment factors are identified, Taguchi recommends that these factors be set at levels which are likely to shift performance towards target. To do this, he once again considers only main effects and suggests the use of a graphical representation to aid the analysis. If the underlying model is not additive, however, this procedure may not be valid. We suggest a modification to his procedure: after performing ANOVA on the model performance, conduct an adequacy-of-fit check for the (assumed) first order model. If the result of this test indicates that the first order model is adequate, the regular Taguchi method is used. If not, we suggest that the optimization from this stage onwards be done using the second order techniques of RSM. Although this may necessitate more experimentation, the search proceeds in those regions of the parameter space where the performance remains "robust". This is true because we have already designed for noise sensitivity by adjusting the factors which affect the signal-to-noise ratio.

5. A MANUFACTURING JOBSHOP EXAMPLE

To demonstrate how robust systems can be designed using the modified Taguchi approach, we show the working of the method on a constructed example from the manufacturing jobshop domain. Consider a metal-cutting jobshop which partially processes different types of gear housings. Figure 3 shows the main elements of the jobshop along with a possible process flow route. The factory has two groups of different kinds of milling machines, a set of boring machines, two groups of drilling machines, and an inspection workstation. Currently the shop makes three types of gear housings, each with a different process flow route and different manufacturing times. Each part made is inspected before being sent out. Many of the parts found defective can be rectified by rework, while some are scrapped. The process flow routes and the operation times of the three types of parts are given in Table 1. The quality and supply schedule of the input castings is highly unreliable, and consequently the enterprise cannot consistently adhere to its customers' delivery requirements. All attempts to

[Fig. 3. The manufacturing jobshop example: parts flow through the MILL1, MILL2, BORE, DRILL1, and DRILL2 stations to INSPECT, from which they leave as good parts, return for rework, or are scrapped.]

improve the reliability of supply have not been successful. At this stage, the manager decides to hire a simulation consultant to study the system and suggest ways of improving its design so that production commitments are more consistently met. Since the timely supply of type 1 gear housings is most critical from the customer's viewpoint, it is decided that the goal of the simulation study is to find the best way of reducing the average time in system of housings of type 1. The elements of the jobshop which can possibly be adjusted are the number of milling machines of type 2, the number of boring machines, the inspection time, and the scheduling rule. We now illustrate the steps of the (modified) Taguchi method for this example.

5.1. Factors identification and target setting

As a first step in the simulation study, the following design factors are chosen:

• Number of type 2 milling machines (X1)
• Number of boring machines (X2)
• Inspection time (X3)
• Scheduling rule (X4)

These are the variables of the model that can be changed by the manager (in the corresponding real system, and at some cost). Two noise factors are identified:

• The inter-arrival time of input castings (N1)
• The rework rate after inspection (N2) (this factor is influenced by input quality)

These factors are highly variable and are beyond the factory manager's control. The signal-to-noise ratio (S/N) used in this example is S/N = 20 log(ȳ/s), where ȳ is the mean time in system and s is the standard deviation of the time in system under the induced noise variation (see the Appendix for a description of signal-to-noise ratios).

Table 1. Process flow details for the jobshop

  Job type 1                  Job type 2                  Job type 3
  Operation        Time       Operation        Time       Operation        Time
  Mill 1           (15 1)     Mill 1           (12 1)     Mill 2           (24 2)
  Mill 2           (24 2)     Mill 2           (16 1)     Bore             (27 3)
  Bore             (25 2)     Bore             (18 2)     Drill 1          (12 1)
  Drill 1          (30 2.5)   Drill 1          (24 2)     Drill 2          (15 2)
  Inspect          7          Inspect          7          Inspect          7
  Mill 1 (rework)  (10 1)     Mill 1 (rework)  (8 1)      Mill 2 (rework)  (16 1)
  Mill 2 (rework)  (15 1)     Mill 2 (rework)  (10 1.5)

Note: (1) The figures in parentheses are the mean and the standard deviation, respectively, of normal random variables. (2) The boldface figures indicate constant values. (3) All time values are in minutes.
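Given the replicated time-in-system observations at one experimental condition, the S/N metric defined above can be computed directly. A stdlib-only sketch, taking the logarithm as base 10 (the conventional choice for this decibel-style metric); the observations are invented:

```python
import math
import statistics

def signal_to_noise(observations):
    """S/N = 20 * log10(ybar / s): mean over sample standard deviation.

    Higher values indicate performance that is less sensitive to the
    induced noise variation.
    """
    ybar = statistics.mean(observations)
    s = statistics.stdev(observations)  # sample standard deviation
    return 20 * math.log10(ybar / s)

# Hypothetical times in system (minutes) for one design/noise condition:
times = [190.2, 198.7, 185.4, 201.3, 192.8,
         188.1, 196.5, 194.0, 189.9, 195.6]
sn = signal_to_noise(times)
```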

[Fig. 4. The experimental design matrices. The design matrix (X1 X2 X3 X4) contains the eight runs
  1 1 1 1,  2 1 1 2,  1 2 1 2,  2 2 1 1,  1 1 2 2,  2 1 2 1,  1 2 2 1,  2 2 2 2;
the noise matrix (N1 N2) is the full two-level factorial 1 1, 1 2, 2 1, 2 2. Note: 1 and 2 refer to the two levels.]

Table 2. The ranges of values of the factors

  Factor   Low value   High value
  X1       2           8
  X2       2           8
  X3       4           10
  X4       FIFO        SPT
  N1       5           15
  N2       0.05        0.25

Note: (1) All measures are deterministic, with time values in minutes. (2) FIFO refers to the first-in first-out rule, and SPT to the shortest processing time rule.

The performance measure of the model is chosen to be the time in system of gear housings of type 1. The goal of the analysis is to reduce the performance measure from its current average value of 194.23 min (with a design configuration of X1 = 3, X2 = 3, X3 = 7, X4 = FIFO) to a target average value of 100 min, while reducing its sensitivity to noise.

5.2. Planning the experiment

5.2.1. Construct the experimental design matrices. The design and noise matrices are constructed. Since there are 4 design factors, we chose a two-level 2^(4-1) Resolution IV fractional factorial design for the design matrix. Since there are only two noise factors, we use a full factorial two-level design for the noise matrix. Two-level designs are chosen since model behavior is not expected to exhibit significant curvature. The experimental design matrices are shown in Fig. 4, and the values of the factors at their high and low levels are given in Table 2.

5.2.2. Tactical simulation experiment design issues. Since we need to obtain reasonable confidence intervals for the performance measure, we must choose an appropriate simulation run length and an adequate number of runs. We treat this study as a terminating simulation, since we are interested in measuring performance over a limited period of 1200 min of operation. To choose the number of runs, we adopt the fixed-sample-size approach, and find that 10 replications of the simulation are needed at each of the experimental conditions (the relative precision achieved is less than 0.05). Thus a total of 320 simulation runs are performed for the initial stage of the analysis.

5.3. ANOVA results on S/N and mean time in system

The ANOVA results with S/N as the response are presented in Table 3. It is observed from the results of the analysis that all the main effects, except that of X2, are statistically significant at a level less than 0.01.
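The crossed-array layout described in Section 5.2.1 can be generated programmatically. The sketch below uses the generator X4 = X1·X2·X3 (defining relation I = X1X2X3X4, which is consistent with the eight runs listed in Fig. 4), crossed with the full-factorial noise matrix; levels are coded −1/+1 rather than 1/2:

```python
from itertools import product

# Inner (design) array: 2^(4-1) fraction with generator X4 = X1*X2*X3,
# i.e. defining relation I = X1*X2*X3*X4 (Resolution IV).
design_matrix = [(x1, x2, x3, x1 * x2 * x3)
                 for x1, x2, x3 in product((-1, 1), repeat=3)]

# Outer (noise) array: full two-level factorial in N1 and N2.
noise_matrix = list(product((-1, 1), repeat=2))

# Every design point is run against every noise condition, and each
# cell is replicated (10 replications in the study above).
replications = 10
total_runs = len(design_matrix) * len(noise_matrix) * replications
```

With 8 design points, 4 noise conditions, and 10 replications, this reproduces the 320 runs of the initial stage.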
To investigate the existence of adjustment factors, we now perform ANOVA using the mean of the performance measure (i.e. the mean time in system) as the response of interest. The results are given in Table 4, and indicate that the main effects of all the factors are highly significant (at a level of significance less than 0.01). Moreover, the R-square value of 0.81 for the ANOVA indicates that the (assumed) first-order model is probably adequate. From the two ANOVA results, we conclude that X2 is the only adjustment factor for this design; X2 has a significant effect on the mean time in system, but does not have a significant effect on S/N. We are thus in a position to refine the design and complete the analysis using the Taguchi approach.

Table 3. Results of ANOVA on S/N

  Source   M.S. Value   F-Value*   Prob > F
  X1       5963         31         0.0001
  X2       18           0.09       0.7622
  X3       2297         12         0.0008
  X4       4164         22         0.0001
  Error    190          --         --

*The F-value of the test.

Table 4. Results of ANOVA on the mean time in system

  Source   M.S. Value   F-Value   Prob > F
  X1       81314        38        0.0001
  X2       75086        36        0.0001
  X3       62436        30        0.0001
  X4       450033       213       0.0001
  Error    2115         --        --

Table 5. The mean values of S/N at different factor settings

  Factor   Mean S/N at low   Mean S/N at high
  X1       28.52             45.79
  X3       42.51             31.80
  X4       29.94             44.37

Table 6. Time in system at different settings of the adjustment factor

  Factor   Mean time in sys. at low   Mean time in sys. at high
  X2       244.23                     182.96

5.4. Optimization using the Taguchi heuristic

The two-step heuristic approach to "optimize" the performance suggested by Taguchi is now demonstrated. We first adjust the factors which have a significant effect on S/N only. From the results shown in Table 3, we see that the levels of the factors X1, X3, and X4 need to be set at their best values. The mean values of S/N at the low level and at the high level of these factors are given in Table 5. Since we are interested in maximizing S/N, we set the values of these design factors as follows:

X1: number of type 2 milling machines = 8 (high).
X3: inspection time (min) = 4 (low).
X4: scheduling rule = SPT (high).

We now need to set the value of the adjustment factor, X2. To do this, we study the values of the mean time in system at the boundary levels of X2, which are shown in Table 6. We set X2 at its high value of 8, in order to obtain a lower value of the mean time in system (which we are interested in reducing). We observe in this example that an adjustment factor does, in fact, exist. However, the existence of adjustment factors is not guaranteed in general for manufacturing system design problems. A modification of the Taguchi procedure to deal with design problems in which adjustment factors are nonexistent is described in Section 4.2, and demonstrated by example in Benjamin and Mayer [24].
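The two-step selection just described can be expressed as a small routine: first set each S/N-significant factor to the level with the larger mean S/N, then set the adjustment factor to the level that moves mean performance toward the target (here, a smaller time in system). The numeric values are those of Tables 5 and 6:

```python
def choose_levels(sn_by_level, adj_by_level, minimize=True):
    """Step 1: for each factor affecting S/N, pick the level that
    maximizes mean S/N.  Step 2: pick the adjustment-factor level that
    best shifts mean performance toward target."""
    settings = {f: max(levels, key=levels.get)
                for f, levels in sn_by_level.items()}
    pick = min if minimize else max
    settings.update({f: pick(levels, key=levels.get)
                     for f, levels in adj_by_level.items()})
    return settings

# Mean S/N at each level (Table 5) and mean time in system at each
# level of the adjustment factor X2 (Table 6):
sn_means = {"X1": {"low": 28.52, "high": 45.79},
            "X3": {"low": 42.51, "high": 31.80},
            "X4": {"low": 29.94, "high": 44.37}}
adj_means = {"X2": {"low": 244.23, "high": 182.96}}
best = choose_levels(sn_means, adj_means)
```

This reproduces the settings chosen above: X1 high, X3 low, X4 high, and X2 high.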

5.5. Perform confirmation experiments

In the Taguchi method of parameter design, the last step is to perform additional experiments with the factors set at their "best" levels, in order to confirm that the new design actually performs better, as predicted. The levels of the four design factors are adjusted as determined in Section 5.4, and 10 more replications of the simulation are conducted. The mean time in system obtained with the final design is 113.69 ± 0.97 min (95% confidence interval). At the initial configuration of the design (X1 = 3, X2 = 3, X3 = 7, and X4 = FIFO), the mean time in system obtained by simulation is 194.23 ± 11.99 min (95% confidence interval). Hence a 41% reduction in the time in system of the model is achieved as an outcome of the analysis. A paired-t test is done to compare the two mean times in system, before and after the analysis. The result of the test indicates that there is sufficient evidence to support the hypothesis of a significant decrease in the average time in system as a result of the analysis (1% significance level).

Recall that the main goal of this study was to generate a system design which is robust to noise variation. Since the signal-to-noise ratio was used as the metric to guide the search for better designs in our example, the resulting design is likely to be a robust one. We notice that the confidence interval bandwidth of the time in system was reduced considerably as a result of the design modifications. At the current state of the practice in system design, however, design analysts do not explicitly factor in robustness as a design criterion, and there are no standard yardsticks to assess the goodness of robust designs. This is thus an area with potential for more research, and is discussed further in Section 7.4.

6. IMPLEMENTATION DETAILS

The simulation language used to conduct the experiments was OBSIM, an object-oriented simulation language designed and built at the Knowledge Based Systems Laboratory at Texas A&M University. OBSIM is implemented in LISP, and runs on the SYMBOLICS 3600 series of


workstations*. In its current implementation, OBSIM is designed to model and simulate a wide class of discrete manufacturing systems. The primitives provided by OBSIM include standard constructs needed to model manufacturing systems, such as QUEUE, SEIZE, DELAY, RELEASE, BATCH, SPLIT, ASSEMBLY, and BREAKDOWN. The object-oriented data structure of the language permits the user to combine and modify these blocks in order to construct specialized models. OBSIM also provides a separate experimental frame and a model frame, so that the experimental conditions can be varied independently of the model. An intelligent user interface has been built at the front end of OBSIM to facilitate the rapid construction of models. The interface provides extensive viewing and editing capabilities, supporting a combination of graphical and textual input. The user first creates the 'objects' of the model and defines their properties; model construction is completed by specifying the interrelationships between these objects. The software allows the hierarchical decomposition of a model at varying levels of detail, facilitating the construction and extension of complex models in a natural way. OBSIM's object-oriented data structure gives the user greater modeling flexibility than traditional simulation languages. The ease of model refinement afforded by an object-oriented data structure makes the construction of complex and non-standard models relatively simple; the user can customize models to meet special requirements without having to do extensive programming. Moreover, the modularity and re-use of code supported by the object-oriented programming paradigm facilitate the maintenance and extensibility of the language. These features make the language suitable for building large simulation models, which evolve and change over extended periods of time. Detailed descriptions of OBSIM and its software environment are given in Mayer [25] and Lin [26].
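OBSIM itself is LISP-based and its actual API is not reproduced here. Purely as an illustration of the design idea of keeping the experimental frame separate from the model frame — so that parameters can be reset across an experimental plan without touching the model — here is a Python sketch in which every class and parameter name is hypothetical:

```python
class ModelFrame:
    """Holds the structural description of the jobshop model."""
    def __init__(self, **parameters):
        self.parameters = dict(parameters)

    def run(self, run_length):
        # Stand-in for the actual discrete-event simulation run;
        # here it just echoes the configuration it was run with.
        return {"run_length": run_length, **self.parameters}

class ExperimentalFrame:
    """Holds the experimental conditions, independent of the model."""
    def __init__(self, design_points, run_length):
        self.design_points = design_points
        self.run_length = run_length

    def execute(self, model):
        results = []
        for point in self.design_points:
            model.parameters.update(point)  # reset parameters only
            results.append(model.run(self.run_length))
        return results

# Hypothetical baseline model and a two-point experimental plan:
model = ModelFrame(n_mill2=3, n_bore=3, inspect_time=7, rule="FIFO")
frame = ExperimentalFrame(
    design_points=[{"n_mill2": 2, "n_bore": 2},
                   {"n_mill2": 8, "n_bore": 8}],
    run_length=1200)
outputs = frame.execute(model)
```

The separation means the experimental plan can be swapped or extended without editing the model definition — the property the authors exploit below to automate runs across the factorial plan.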
Since the simulation model had to be repeatedly modified according to a fractional factorial experimental plan, a LISP program was written to automate the resetting of model parameters. The separation of the experimental frame from the model frame supported by OBSIM made this task less difficult. The statistical analysis of output was done with the Statistical Analysis System (SAS) on an IBM 3090-300E mainframe computer. SAS supports a variety of statistical analysis procedures; for the purpose of this research, only the GLM and RSREG procedures (see Freund and Littell [27]) were used to perform the analysis of variance on the simulation output.

7. FUTURE RESEARCH POTENTIAL

The research described in this paper is rather exploratory in nature. Consequently, one of the main benefits of the study has been the identification of several avenues for further investigation. We have identified the following areas with potential for more research.

7.1. Characterization of problems

We demonstrated the use of the Taguchi approach for system design by simulation on an example from the Discrete Part Manufacturing Domain. Whether this domain is really appropriate is an open question. One of the problems we encountered in applying the Taguchi method to the manufacturing domain was the existence problem (see Section 4.2). There is thus a need to classify problems based on their potential to benefit from the use of the Taguchi approach. To facilitate such a classification, research needs to be done to identify the characteristics of a system that make the application of a robust design strategy appropriate and productive.

7.2. Performance metric selection

To achieve the goal of minimizing performance variability, Taguchi suggested the use of the signal-to-noise ratio (S/N) as the metric for estimating noise sensitivity. He justified its use by arguing that it would ensure that noise sensitivity is minimized while performance is simultaneously kept on target. Leon et al. [16] have questioned the appropriateness of the S/N metric under certain conditions, and developed a class of metrics called PERMIAs (Performance Measures Independent of Adjustment); PERMIAs can be used for a broader range of problems than S/N. Box [17] has also suggested alternative approaches, such as studying the mean and the variance of the response separately, and using the logarithm of the data instead of S/N. In view of the statistical problems associated with the use of S/N, we perceive a need for more research to develop alternative performance metrics which would aid in the search for robust designs.

*Symbolics workstations are powerful 'LISP machines': computers designed to run LISP code efficiently, and used mainly for advanced software development and rapid prototyping.
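Box's suggestion of examining location and dispersion separately, rather than folding them into a single S/N statistic, can be illustrated in a few lines; ln(s²) is a commonly used dispersion measure in this line of work, and the observations below are invented:

```python
import math
import statistics

def location_dispersion(observations):
    """Return (mean, ln of sample variance) for one design point, so
    that location and dispersion can be analyzed as two separate
    responses instead of one combined S/N ratio."""
    return (statistics.mean(observations),
            math.log(statistics.variance(observations)))

# Hypothetical replicated responses at one design point:
ys = [103.1, 98.7, 101.4, 96.9, 100.2, 99.5, 102.3, 97.8]
mean, log_var = location_dispersion(ys)
```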

7.3. Development of an optimization procedure

Since the Taguchi method is a performance improvement method rather than an optimum-seeking procedure, the technique could be modified into an optimizer for simulations. The main benefit of developing such a method is that it would broaden the scope of applicability of the technique. Moreover, it would facilitate comparison of the method with other established optimization procedures used with simulation, such as RSM. As with other optimization methods, such a technique would need to be adapted to different settings, such as constrained optimization, multiple-criteria optimization, discrete optimization, and so on.

7.4. Comparison with other methods

It is our observation that there are no well-established methods of system design which explicitly consider robustness as a significant design criterion. Consequently, it is difficult to make meaningful comparisons with other techniques. Further research would, however, help set the stage for such comparisons. Firstly, suitable yardsticks to measure robustness need to be established. Secondly, as mentioned in Section 7.3, the Taguchi method has to be modified into an optimization procedure. The preliminary results of recent research that we have undertaken (see Benjamin and Mayer [28]) indicate that the Taguchi paradigm offers considerable promise as a system design methodology. In the above-referenced work, we have addressed the issues described in Sections 7.2 and 7.3 in developing a new method for robust system design.

8. SUMMARY

In this research, the use of a methodology for the design of robust manufacturing systems using computer simulation experiments was demonstrated. An overview of relevant research in related areas was given. A brief description of the Taguchi approach as applied to simulation based design studies was presented. In view of some of the limitations of the Taguchi method, a few modifications were suggested to improve it. The concepts and methods described were illustrated through an example from the Discrete Part Manufacturing Domain. A few avenues for potential future research were outlined. It is our belief that the concepts presented here can have a significant impact on large scale system design and on problem solving involving design modifications.

REFERENCES

1. J. W. Forrester. Industrial Dynamics. The MIT Press, Cambridge, MA (1961).
2. R. E. Shannon. Systems Simulation. Prentice-Hall, Englewood Cliffs, NJ (1975).
3. G. E. P. Box and K. B. Wilson. On the experimental attainment of optimal conditions. J. R. Stat. Soc. Series B 13, 1-45 (1951).
4. G. E. P. Box and N. R. Draper. Empirical Model Building and Response Surfaces. Wiley, New York, NY (1987).
5. R. H. Myers, A. I. Khuri and W. H. Carter. Response surface methodology: 1966-1988. Technometrics 31, 137-157 (1989).
6. D. E. Smith. An empirical investigation of optimum-seeking in the computer simulation environment. Opns Res. 21, 475-497 (1973).
7. D. C. Montgomery and V. M. Bettencourt. Multiple response surface methods in computer simulation. Simulation 29, 113-121 (1977).
8. W. E. Biles and H. T. Ozmen. Optimization of simulation responses in a multicomputing environment. Proc. 1987 Winter Simulation Conference, pp. 402-408 (1987).
9. L. W. Schruben and V. J. Cogliano. An experimental procedure for simulation response surface model identification. Commun. ACM 30, 716-730 (1987).
10. G. Taguchi. System of Experimental Design: Engineering Methods to Optimize Quality and Minimize Costs, Vols 1 and 2.
American Supplier Institute, Dearborn, MI (1987).


11. G. Taguchi and Y. Wu. Introduction to Off-line Quality Control. Central Japan Quality Control Association, Nagoya, Japan (1985).
12. P. J. Ross. Taguchi Techniques for Quality Engineering. McGraw-Hill, New York, NY (1988).
13. R. N. Kackar. Off-line quality control, parameter design and the Taguchi method. J. Qual. Technol. 17, 176-188 (1985).
14. J. S. Hunter. Statistical design applied to product design. J. Qual. Technol. 17, 266-285 (1985).
15. B. Gunter. A perspective on the Taguchi methods. Qual. Prog., 44-52 (June 1987).
16. R. V. Leon, A. C. Shoemaker and R. N. Kackar. Performance measures independent of adjustment: an explanation and extension of Taguchi's signal to noise ratios. Technometrics 29, 253-285 (1987).
17. G. E. P. Box. Signal to noise ratios, performance criteria and transformations. Technometrics 30, 1-17 (1988).
18. W. A. Nazeret and W. Klinger. Tuning computer systems for maximum performance: a statistical approach. In Quality Control, Robust Design and the Taguchi Method (Edited by K. Dehnad), pp. 175-185. Wadsworth & Brooks, Pacific Grove, CA (1989).
19. T. W. Pao, M. S. Phadke and C. S. Sherrerd. Computer response time optimization using orthogonal array experiments. IEEE Int. Communications Conf., Chicago, IL, Conf. Record 2, 890-895 (1985).
20. J. J. Pignatiello, Jr. An overview of the strategy and tactics of Taguchi. IIE Trans. 20, 247-254 (1988).
21. D. M. Byrne and S. Taguchi. The Taguchi approach to parameter design. Qual. Prog., 19-26 (December 1987).
22. C. R. Rao. Factorial experiments derivable from combinatorial arrangements of arrays. J. R. Stat. Soc., Suppl. 9, 128-139 (1947).
23. G. E. P. Box, W. G. Hunter and J. S. Hunter. Statistics for Experimenters. Wiley, New York, NY (1978).
24. P. C. Benjamin and R. J. Mayer. Using the Taguchi paradigm for target driven simulations. Working Paper No. INEN/KBS/WP/01/05-09, Dept of Industrial Engineering, Texas A&M University, College Station, TX (1990).
25. R. J. Mayer.
Cognitive skills in modeling and simulation. Ph.D. dissertation, Dept of Industrial Engineering, Texas A&M University, College Station, TX (1988).
26. Min-Jin Lin. Automatic simulation model design from a situation theory based manufacturing system description. Ph.D. dissertation, Dept of Industrial Engineering, Texas A&M University, College Station, TX (1990).
27. R. J. Freund and R. C. Littell. SAS for Linear Models: A Guide to the ANOVA and GLM Procedures. SAS Institute, Cary, NC (1981).
28. P. C. Benjamin and R. J. Mayer. Towards a new method for the design of robust systems using simulation. Working Paper No. INEN/OR/WP/08/11-90, Dept of Industrial Engineering, Texas A&M University, College Station, TX (1990).

APPENDIX

Loss Functions and Performance Metrics

Let Y be the measured value of the product's performance characteristic, and suppose we assign it a target (ideal) value T. Deviations of Y from T are what cause "loss" to the manufacturer. This loss, perceived by Taguchi as a loss to society, can be represented as a quadratic loss function,

    L(Y) = K × (Y − T)²    (1)

where K is a constant which can be determined from knowledge of the customer's tolerance specifications. A typical loss function is illustrated in Fig. A1. Since the measured performance is really a random variable, we need to consider the expected value of the loss function. If the set of independent variables of the design is denoted by X, and assuming for convenience that K is 1, the expected loss is given by

    E[L(Y)] = E[Y(X) − T]²    (2)

Since, in general, both the mean, μ = E[Y(X)], and the variance, σ² = E[Y(X) − μ]², depend on X, and denoting the expected loss by F(X), we can rewrite (2) as follows:

    E[L(Y)] = E[Y(X) − T]² = F(X)
            = E{[Y(X)]² + T² − 2Y(X)T}
            = E{[Y(X)]²} + T² − 2T·E[Y(X)]

[Fig. A1. A typical loss function: the loss is zero at the target T and rises quadratically as Y deviates from it.]

            = μ²(X) + σ²(X) + T² − 2Tμ(X)
            = σ²(X) + [μ(X) − T]².

Hence

    F(X) = σ²(X) + [μ(X) − T]²    (3)

Since F(X) is unknown, it needs to be estimated. To do this, we need what Kackar [13] calls a performance statistic. Taguchi introduced more than 60 such performance statistics, called signal-to-noise ratios. In its simplest form, the signal-to-noise ratio is proportional to the inverse of the coefficient of variation. When the objective of the design is to reach the nominal value of a performance characteristic, the signal-to-noise ratio (S/N) takes the following form:

    S/N = 20 log(Ȳ/S)    (4)

where Ȳ is the observed mean performance and S is the observed standard deviation. Leon et al. [16] have shown that, under certain conditions, minimizing (3) is equivalent to maximizing (4); in other words, minimizing the loss function is equivalent to maximizing the signal-to-noise ratio.
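The decomposition in equation (3) — expected loss equals variance plus squared bias from target — can be checked numerically on simulated data. A sketch with arbitrary target and distribution parameters:

```python
import random
import statistics

random.seed(1)
T = 100.0  # target value of the performance characteristic

# Simulated observations of Y(X): mean 104 (bias 4 from target),
# standard deviation 5, so the expected loss should be near
# sigma^2 + bias^2 = 25 + 16 = 41.
ys = [random.gauss(104.0, 5.0) for _ in range(200_000)]

# Monte Carlo estimate of the expected loss E[(Y - T)^2] ...
expected_loss = statistics.fmean((y - T) ** 2 for y in ys)

# ... and the decomposition sigma^2(X) + [mu(X) - T]^2 of equation (3),
# computed from the same sample (the identity holds exactly when the
# population variance of the sample is used).
mu = statistics.fmean(ys)
decomposed = statistics.pvariance(ys) + (mu - T) ** 2
```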