Copyright © IFAC Information Control Problems in Manufacturing Technology, Toronto, Canada, 1992
A FACTOR-IMPACT-DRIVEN GRAPHICAL ANALYSIS APPROACH FOR OUTPUT ANALYSIS OF SIMULATION EXPERIMENTS

A. E.-J. Lee* and K. G. Main**

*Alcoa Technical Center, 100 Technical Drive, ATC-B-MMS, Alcoa Center, PA 15069, USA
**Alcoa Tennessee Operations, Alcoa, Tennessee 37701, USA
Abstract. The standard statistical output analysis for simulation studies involving more than two factors often produces conclusions that are counter-intuitive. Presenting such conclusions merely from a statistical perspective can make it difficult to convince an audience that has little statistical background. This paper introduces the factor-impact-driven graphical analysis (FIDGA) approach to support statistically-derived conclusions. In this approach, charts are plotted for output variables against the serial numbers of the experiments, which are systematically sequenced according to the relative impact of the factors. This sequence is obtained by first performing an F-test for each factor, then ranking each factor in order of relative importance, and finally using this ranking to perform a multi-level sortation on the serial numbers. Finally, this paper presents the output analysis performed on a simulation study at an Alcoa plant. In this study, the FIDGA approach produced a final analysis that not only supported the conclusions reached by the statistical analysis but was also better able to explain them in terms of system processes. This graphical approach was successful in convincing skeptical team members of some counter-intuitive conclusions.
Key words. Simulation, Output Analysis, Factor-Impact-Driven Graphical Approach, Graphical Analysis Approach, Statistical Analysis

I. INTRODUCTION

In the manufacturing community, computer simulation is becoming more accepted as a powerful modeling tool for evaluating the design of new manufacturing facilities, for investigating the operational aspects of existing plants, and for testing control strategies for automated manufacturing systems. However, the benefits of a simulation study depend on careful attention to performing the simulation methodology correctly [see Law and McComas (1990)]. In particular, the analysis phase of the simulation study often requires the use of advanced statistical techniques such as experimental design, analysis of variance (ANOVA), and statistical empirical modeling. This is particularly true when analyzing a simulation study that evaluates the impact of more than two improvement factors.

A common challenge to the analyst employing these advanced statistical techniques relates to the later effort of presenting the statistically-derived conclusions to an audience that has little or no statistical background. This can become very difficult if the statistical analysis reveals any counter-intuitive conclusions, because the audience will usually insist on supporting evidence from a non-statistical perspective.

This paper describes the factor-impact-driven graphical analysis (FIDGA) approach developed to present the results of a recent simulation study in a process-oriented perspective that supported the statistically-derived conclusions and convinced some skeptical team members of the validity of the findings. The specific simulation study undertaken was one of two such studies recently conducted at Aluminum Company of America (Alcoa) Tennessee Operations' automated manufacturing facilities. The simulation models were developed using the AutoMod II simulation software package from AutoSimulations, Inc.
In all, these simulation studies helped Alcoa avoid spending $8 million for proposed capital investment. This paper will first outline the FIDGA approach (Section II), then describe the manufacturing process involved, the factors investigated, and the output responses used (Section III). Finally, Section IV will illustrate, in detail, the application of the graphical analysis approach for this simulation study.
II. THE FIDGA APPROACH

Terminology. To aid in the discussion, this section first introduces some relevant terms. In a simulation study, a factor refers to a variable entity being investigated and may be qualitative (if it takes on non-numerical values, such as the use of different operational strategies) or quantitative (if it takes on numerical
values such as the number of machines used). A level or level setting of a factor corresponds to one of several possible values being investigated for the factor. The simulation study determines the different factors, and the different level settings of each factor, to investigate. In the context of the simulation study, an experiment involves running the simulation model with each factor set at one of its level settings. Each experiment is associated with a serial number for identification, and collects quantitative information, or output responses, necessary for analysis and process understanding. In particular, the quantitative information specifically related to the objectives of the simulation study is referred to as the performance measures. A data set comprises the set of data for one particular output response from a number of experiments.

Since each data set will be analyzed to study the impact of the factors, determining the number and combination of experiments to use is important, especially if the analyst plans to run only a subset of all the possible experiments. In this regard, statistical techniques such as experimental design, analysis of variance (ANOVA), F-tests, and empirical statistical modeling can help provide the tools for an effective simulation analysis. The details of these techniques may be found in many excellent texts [Box, Hunter and Hunter (1978); Banks and Carson (1984); Law and Kelton (1990)]. Other useful references relating to the simulation methodology or to specific simulation issues for manufacturing systems may be found in articles by Law (1986) and Law and McComas (1989), besides the texts by Banks and Carson (1984) and Law and Kelton (1990).

Factor-Impact-Driven Graphical Analysis. This subsection first outlines the statistical analysis and then integrates the discussion of the FIDGA approach. Figure 1 presents a schematic of the discussion to follow.
Once the computer model development phase of a simulation study is completed, verified, and validated, the model becomes a tool with which to analyze the impact of the selected factors. Since the model includes stochastic characteristics representative of the real system, a comprehensive analysis of the factors requires a careful design of experiments and subsequent statistical analysis. The technique of experimental design provides a means to minimize the number of experiments
Fig. 1. A Schematic of the Factor-Impact-Driven Graphical Approach.
that need to be run while still capturing the factor-response correlations, without resorting to running the entire set of experiments corresponding to all possible combinations of factor levels. Therefore, the first step of the statistical analysis is the generation of a subset of experiments through experimental design.
To aid the factor-impact-driven graphical analysis later, the final significance value of each remaining term in the final model is tabulated into a column (referred to as a significance column) and, for reference purposes, the R-square value is also included. Since there are usually several performance measures used in a simulation study, and therefore as many empirical mathematical models (one derived for each), there will also be as many significance columns constructed. Augmenting all the significance columns forms the significance grid, as shown in Figure 1.
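In code, the grid construction just described might be sketched as follows; the factor terms, performance-measure names, and significance values here are hypothetical placeholders, not values from the study:

```python
# Build a significance grid by augmenting per-measure significance
# columns; an absent entry marks a term dropped from that final model.
def build_significance_grid(columns):
    """columns: {measure_name: {term: significance}} -> {term: row}."""
    terms = sorted({t for col in columns.values() for t in col})
    return {
        term: {pm: col.get(term) for pm, col in columns.items()}
        for term in terms
    }

# Hypothetical significance columns for two performance measures.
cols = {
    "PM1": {"F1": 0.001, "F2": 0.040, "F1*F2": 0.300},
    "PM2": {"F1": 0.020, "F2": 0.700},  # F1*F2 was dropped from PM2's model
}
grid = build_significance_grid(cols)
print(grid["F1*F2"])   # -> {'PM1': 0.3, 'PM2': None}
```

The `None` cells play the role of the empty cells in the grid: terms eliminated during model refinement for that particular performance measure.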
Next, the selected experiments are run using the computer model, and two output data sets are collected for each experiment: one data set corresponds to the performance measures and the other to other process-related data.
The ANOVA process ends once all the final individual mathematical models for each performance measure have been generated. These models may then be used for (1) predicting responses using other factor level combinations (usually those not included by the design of experiments) and (2) suggesting the optimal factor level combination. The significance levels of the remaining terms in each model may also be used to identify the factors that have greater impact on the performance measures.
At the end of conducting all the experiments, an overall test of significance (or ANOVA) is performed on the data sets corresponding to the performance measures. For each performance measure, the ANOVA performs an F-test on a factor to compute the F-ratio, a ratio of two variance estimates. As a guide, if the differences in the level settings of a factor do not have an impact on the performance measure, then this ratio will be very close to the value 1.00. If the F-ratio is much larger than 1.00, it is possible to use tables to evaluate the probability that this occurs entirely by chance. This probability value corresponds to the significance level of the F-ratio. When the significance level is sufficiently low, usually p ≤ 0.05, the F-test indicates that the differences in the level settings of the factor do have an impact on the performance measure and therefore are statistically significant.
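As a minimal sketch of this step (not the RS/1 package actually used in the study), the table lookup for the F-ratio's significance level corresponds to the F-distribution's survival function; the degrees of freedom below are illustrative only:

```python
from scipy import stats

# Probability that an F-ratio at least this large arises entirely by
# chance, i.e., the significance level of the F-test.
def f_significance(f_ratio, df_factor, df_error):
    return stats.f.sf(f_ratio, df_factor, df_error)

# Illustrative degrees of freedom: a 3-level factor (df = 2) and
# 20 error degrees of freedom.
p_near_one = f_significance(1.0, 2, 20)    # F near 1: no apparent impact
p_large = f_significance(10.0, 2, 20)      # F >> 1: likely a real impact
print(p_near_one, p_large)
```

An F-ratio near 1.00 yields a large significance level, while a much larger F-ratio yields one below the usual 0.05 cutoff.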
In general, the statistical analysis activities of the analysis phase of the simulation study end here. However, in most cases, the analyst would attempt to plot charts from the data sets to present the results derived from the above statistical analysis. The analyst usually faces the uphill challenge of identifying the types of charts to use and deciding how to arrange the data sets in each chart so as to graphically reflect the statistically-derived conclusions.
The ANOVA process is an iterative one which involves such steps as data checking and data transformation. In essence, it is trying to fit the empirical data set into a mathematical model consisting of an equation that attempts to correlate the performance measure response (on the left side of the equation) with individual factors and interactions of different factors (on the right side of the equation). Each item on the right side of the equation is referred to as a term in the mathematical model. The ANOVA step computes the significance level for each term. The magnitude of the significance level reflects the relative impact/contribution of the term to the mathematical model. Therefore, those terms whose significance levels are very high have little impact/contribution to the mathematical model and, thus, should be eliminated from the model. As the ANOVA step is repeated, a new set of significance values is obtained for the remaining terms of the refined (reduced) mathematical model. After performing a series of ANOVA steps and term eliminations, corresponding to the model refinement process, a final model is accepted as an empirical representation of the correlations between the response and the factors.
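The refinement loop above can be sketched as a generic backward-elimination routine; this is an illustrative simplification (least-squares fits with per-term t-tests), not the exact RS/1 procedure the authors used:

```python
import numpy as np
from scipy import stats

def backward_eliminate(X, y, names, alpha=0.05):
    """Repeatedly refit y ~ X[:, kept] by least squares and drop the
    term with the highest significance level, until every surviving
    term is significant at level alpha. Returns surviving term names."""
    kept = list(range(X.shape[1]))
    while len(kept) > 1:
        Xk = X[:, kept]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        dof = len(y) - len(kept)
        sigma2 = (resid @ resid) / dof
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xk.T @ Xk)))
        p = 2 * stats.t.sf(np.abs(beta / se), dof)  # per-term p-values
        worst = int(np.argmax(p))
        if p[worst] <= alpha:
            break                  # every remaining term contributes
        kept.pop(worst)            # eliminate the least useful term
    return [names[i] for i in kept]

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=200), rng.normal(size=200)
y = 3.0 * x1 + rng.normal(scale=0.1, size=200)  # x2 carries no signal
X = np.column_stack([x1, x2])
print(backward_eliminate(X, y, ["x1", "x2"]))   # x2 is typically dropped
```

The surviving terms correspond to the remaining terms of the final model; their significance levels would then populate the significance column for that performance measure.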
The factor-impact-driven graphical analysis approach is an attempt to provide a systematic means to arrange the data sets in each chart so as to graphically reflect the impact of the factors and thus support the statistically-derived conclusions. Briefly, the FIDGA approach involves two major steps. First, the individual factors are ranked according to each one's overall impact on all the performance measures. Next, this ranking is used to rearrange the set of experiments by means of a multi-level sortation, thus defining a factor-impact-driven sequence with which each data set may be rearranged for plotting in each chart.

The significance grid constructed from the previous statistical analysis phase provides the means to determine the factor ranking. Either by visual examination (for a small-sized grid) or by some form of weighting function among the performance measures, each factor is scored based on its individual significance levels obtained for each performance measure. If a particular performance measure is recognized to be the most important one, then the ranking of the factors should be based on the relative significance levels achieved for this primary performance measure. Any ties among the factors may then be broken by evaluating their relative significance levels with respect to the next most important performance measure, and so on. Proceeding this way until all the factors are evaluated, a ranked listing of the factors may be constructed.
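This ranking rule, with its tie-breaking on successively less important performance measures, can be sketched as follows; the factor names and significance values are hypothetical, and a missing entry stands for a term dropped from a final model:

```python
# Rank factors by significance level on the primary performance measure,
# breaking ties with the next measure in importance, and so on.
def rank_factors(grid, measures_by_importance, dropped=1.0):
    """grid: {factor: {measure: significance or None}}; lower ranks first.
    A dropped term (None) is treated as having no impact at all."""
    def key(factor):
        row = grid[factor]
        return tuple(
            row[m] if row.get(m) is not None else dropped
            for m in measures_by_importance
        )
    return sorted(grid, key=key)

grid = {
    "F1": {"PM1": 0.0001, "PM2": 0.20},
    "F2": {"PM1": 0.0300, "PM2": 0.01},
    "F3": {"PM1": 0.0300, "PM2": 0.50},  # tied with F2 on PM1
    "F4": {"PM1": None,   "PM2": None},  # dropped from both models
}
print(rank_factors(grid, ["PM1", "PM2"]))   # -> ['F1', 'F2', 'F3', 'F4']
```

The tie between F2 and F3 on the primary measure is broken by their significance levels on the secondary measure, exactly as described above.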
A parameter, called the R-square value, is associated with each mathematical model obtained during the model refinement process. The R-square value measures how well each model fits the empirical data set: the closer the value is to 1, the better the model is said to fit the data set. During the iterative model refinement process, the R-square value of the model should be increasing; the final model therefore has the highest R-square value.
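As a small illustration of this goodness-of-fit measure, under the usual definition of one minus the residual sum of squares over the total sum of squares:

```python
import numpy as np

def r_square(y, y_fit):
    """Fraction of the response variance explained by the model."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
print(r_square(y, y))                        # perfect fit -> 1.0
print(r_square(y, np.full(4, np.mean(y))))   # mean-only model -> 0.0
```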
The next step of generating the factor-impact-driven sequence follows easily by using the ranked listing obtained above to perform a multi-level sortation on the experimental serial numbers. This generated sequence first sorts the experiments according to the different levels associated with the factor appearing first on the ranked factor listing. Since this factor has the highest ranking, plotting the data set corresponding to the most important performance measure should graphically highlight the effects of each of the different levels associated with the factor. Next, the subgroup of experiments corresponding to one level value of this factor should be sorted again, but this time by the different level settings of the next most highly ranked factor. (The same sortation is performed for the other subgroups of experiments.) The effect of this, so far, two-level sortation helps to capture the interaction effect between these two factors. This multi-level sortation process should continue until all the factors have been involved or until it is clear that the next sortation will be inconsequential or impossible.

Plotting the data sets by rearranging them according to this factor-impact-driven sequence will produce more effective graphical charts than merely plotting charts based on a random sequence (such as according to increasing experiment serial numbers). Also, other appropriate process-related data sets may be augmented to each performance measure chart in order to help provide a process-related perspective on the experimental runs and also support the statistically-derived conclusions.

The next section will describe the manufacturing facility and process involved, the factors investigated, and the performance measures and output responses collected in a recent simulation study at Alcoa. This is then followed by an illustration of how the FIDGA approach was applied toward that study, which helped convince some skeptical members of the statistically-derived conclusions.

III. THE SIMULATION STUDY

The Manufacturing Process. Alcoa Tennessee Operations manufactures coils of aluminum sheets. The simulation study in this paper involved three production centers of the manufacturing facility linked by an automated guided vehicle (AGV) path system. These production centers include the hot line (HL), the batched anneal furnaces (BAF), and the continuous cold mill (CCM). The manufacturing process is a sequential one that begins with rolled coils leaving the hot line (the upstream production center) to be transported to the furnaces for annealing. Annealed coils, after a brief cooling period outside the furnaces, are next transported to a large coil storage (CS) area for further cooling. After cooling, coils are sent to the downstream production center at the continuous cold mill. Figure 2 presents a schematic layout of the manufacturing facility.

Fig. 2. A simple schematic layout of the manufacturing facility.

The objective of the simulation study was to investigate the potential impact of 5 improvement factors on the anneal area with respect to the performance measures to be described later. The first factor, #AGV, concerns the number of AGVs needed to transport the coils among these production centers. The study investigated 3 different level settings at 7, 11, and 16.

The second factor investigated, #TS, concerns the total number of tray storage (TS) locations in front of the furnace. Each TS location serves to hold a tray on which a batch of coils could be built before entering an anneal furnace. The level settings investigated are 11 and 22.

A third factor investigated, Path, concerns the AGV transportation path for a hot line coil to the furnace. The study investigated two possible schemes: an indirect path (which sends the coil first to the large CS area and, later, to a TS location at the furnace area) and a direct path (which sends the coil directly to a TS location at the furnace area; when no TS location is available, it takes the indirect path). The next factor, Thermo, concerns inserting thermocouples on the coils before annealing. One scheme involved moving a batch of coils to a station to insert thermocouples before entering a furnace. After the batch finished annealing, the coils are moved to another station for thermocouple removal. The alternative scheme simply skips the thermocouple instrumentation. Finally, the fifth factor, #BAF, concerns the total number of furnaces needed to meet the production needs. The level settings investigated are 6 and 8.

The five factors and their respective level settings are summarized below:

TABLE 1. Factors and their level settings investigated

Factor   Description
#AGV     The number of AGVs to use (7, 11, 16)
#TS      The number of TS locations (11 and 22)
Path     Options for the AGV transportation path for coils produced from the hot line (indirect: send hot line coil first to CS; direct: send hot line coil directly to TS)
Thermo   Options to insert/remove thermocouples (0: No; 1: Yes)
#BAF     The number of anneal furnaces to use (6 and 8)

Performance Measures. To evaluate the impact of these factors, statistics of three performance measures related to the batched anneal furnaces were collected. These are: a) the cumulative number of coils annealed (ANL_P); b) the Between-Batch-Time (BBT); and c) the Anneal Productivity Time (APT). Briefly, BBT corresponds to the time lapse between the instant an annealed batch leaves a furnace and the instant a new batch is put into it. The APT corresponds to the time a batch of coils remains in the furnace until it is completely annealed; this measure is important because the time required for annealing is dependent on the temperature of the coils before annealing. The objective of the simulation study is to examine the anneal area to determine the factors, or combination of factors, that will decrease the BBT and the APT while meeting the target production at the anneal area (ANL_P). Therefore, the statistics collected for these two performance measures are their respective medians and interquartile ranges. Technically, the median refers to the point at which any data point in the distribution has as good a chance of being less than this value as it does of being greater. The interquartile range, on the other hand, is the difference between the first and the third quartiles of the distribution. Essentially, the median locates the central point of the distribution while the interquartile range measures its spread or variability. The median was chosen over the mean to minimize the effects of outliers in the data set.

With respect to their relative importance, the performance objectives may be summarized as follows:
• Primary Objective: Reach the target cumulative number of coils annealed (ANL_P)
• First Secondary Objective: Minimize the median of Between-Batch-Time (BBT_M)
• Second Secondary Objective: Minimize the median of Anneal Productivity Time (APT_M)

Additional output data were also obtained to track the performance of different areas and the material handling equipment in the production system. These data sets provided a
more global perspective of the entire production process and a means for each experimental run to be validated.

Design of Experiments and Statistical Analysis. An experimental design was used to determine the number and combination of experiments to run. In the simulation context, experimental design provides a way of deciding beforehand which particular combinations of factor level settings to simulate so that the desired information can be obtained at minimal cost, i.e., with the minimum number of simulation runs. This was necessary because each experimental run takes about 6 hours to execute on the Silicon Graphics 4D/20G workstation. In addition, it also requires about an hour for the output data to be entered into StatGraphics, a PC-based statistical software package, to generate the performance measures. Using RS/1, a statistical package on the VAX minicomputer, the experimental design yielded a set of 23 experiments. This compares favorably with the full factorial design that requires 48 experiments (since there are four factors with two levels and one factor with three levels). The RS/1 statistical package was also used for the later statistical analysis.
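The full-factorial count mentioned above (48 = 3 × 2 × 2 × 2 × 2) is easy to verify by enumerating all factor-level combinations; the fractional design that selected 23 of these runs came from RS/1 and is not reproduced here:

```python
from itertools import product

# Factor levels from the study: one 3-level factor, four 2-level factors.
levels = {
    "#AGV":   [7, 11, 16],
    "#TS":    [11, 22],
    "Path":   ["indirect", "direct"],
    "Thermo": [0, 1],
    "#BAF":   [6, 8],
}
full_factorial = [dict(zip(levels, combo))
                  for combo in product(*levels.values())]
print(len(full_factorial))   # 48 possible experiments
```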
IV. DISCUSSION OF RESULTS

This section illustrates how the FIDGA approach was applied in the analysis of this simulation study. Note that the statistical procedures for conducting ANOVA, F-tests, and empirical modeling were performed with the help of RS/1 and are not discussed here. The discussion therefore begins with the assumption that all these statistical procedures have been completed.

Ranking the Factors with respect to Response Impact. Recall from Section II that the qualitative analysis first constructs a significance grid and performs ANOVA on the terms associated with each empirical model. Since there are 3 performance measures, the significance grid for this simulation study comprised 3 significance columns. Table 2 shows the final significance grid, with the bottom row showing the respective R-square values. Since these R-square values are relatively high, each empirical model may be said to satisfactorily represent the data set of its respective performance measure. The rest of the rows in Table 2 contain the values of the significance levels of the F-tests performed on each term of the final empirical models. Note that those cell entries whose values reflect significance levels of less than 0.05 have been highlighted in bold; those cells which are empty correspond to terms dropped from the final empirical models.

TABLE 2. Significance Grid for the 3 Performance Measures.
(Rows: the factor terms Thermo, Path, #TS, #AGV, #BAF and their two-factor interactions; columns: ANL_P, BBT_M, APT_M; the bottom row gives the R-square values and the rightmost column the resulting factor ranks.)
TABLE 3. The 23 experiments rearranged in FIDGA Sequence.

New Ex. #  Orig. Ex. #  #BAF  Thermo  #AGV  Path      #TS
A          2            6     0       7     indirect  11
B          7            6     0       7     direct    22
C          12           6     0       11    indirect  22
D          11           6     0       11    direct    11
E          18           6     0       16    indirect  22
F          22           6     0       16    direct    11
G          4            6     1       7     indirect  22
H          3            6     1       7     direct    11
I          10           6     1       11    indirect  11
J          9            6     1       11    direct    22
K          17           6     1       16    indirect  11
L          6            8     0       7     indirect  22
M          8            8     0       7     direct    11
N          13           8     0       11    indirect  11
O          16           8     0       11    direct    22
P          19           8     0       16    indirect  11
Q          21           8     0       16    direct    22
R          1            8     1       7     indirect  11
S          5            8     1       7     direct    22
T          15           8     1       11    indirect  22
U          14           8     1       11    direct    11
V          20           8     1       16    indirect  22
W          23           8     1       16    direct    11
Since the number of performance measures in this simulation study is small, the ranked list of the individual factors may be determined easily by a visual examination of Table 2. Since ANL_P is the primary performance measure, the factors are first evaluated with respect to their impact on ANL_P. Examining column 1 quickly reveals that #BAF is the factor with the most significance, followed by a tie between Thermo and #AGV; Path is the fourth most significant factor, while #TS appears to have no significance at all. To break the tie for second place, the factors Thermo and #AGV are next evaluated with respect to the next most important performance measure, BBT_M. Consequently, the final ranked list results as follows (see the rightmost column of Table 2):

1. #BAF
2. Thermo
3. #AGV
4. Path
5. #TS

(Before proceeding further, it is worthwhile to note that the ANOVA procedure revealed that the significance levels achieved for the factor #TS (corresponding to the number of tray storage locations) were so high that its terms were eliminated from the final empirical mathematical models for all 3 performance measures. This implied that this factor had no impact on the performance measures at all. Unfortunately, this conclusion was counter-intuitive to several of the team members who requested the simulation study. It was apparent that convincing the team members of these conclusions would require more than just presenting the numerical results of statistical terms churned out from a "black box" that several members did not even have faith in! This therefore challenged the authors to explore a graphical analysis approach that might help support the "black box" analysis, resulting in the FIDGA approach developed here.)

Applying the FIDGA Approach. The ranked list of the factors was next used to generate a factor-impact-driven sequence on the experiment serial numbers via a multi-level sortation. Thus, using the most important factor in the ranked list, #BAF, the 23 experiments were first sorted by its 2 level settings of 6 and 8, resulting in the 2 subgroups (A, B, ..., K) and (L, M, ..., W) in Table 3, respectively. Each of these 2 subgroups was further sorted by the level settings of the next most important factor, Thermo. This resulted in 4 subgroups, (A, B, ..., F), (G, H, ..., K), (L, M, ..., Q) and (R, S, ..., W), each of which was again sorted by the third most important factor, #AGV. After repeating this multi-level sortation with the fourth most important factor, Path, it became clear that no more sortation was possible. Thus, the final sequence of the experiment serial numbers was as shown in Table 3.

More specifically, assigning new experiment numbers to the final sequence in Table 3 results in experiments A to W in column 1. Thus, experiments A to K correspond to those that use 6 furnaces and experiments L to W to those that use 8 furnaces. Within these subsets, experiments A to F and L to Q do not use thermocouples while experiments G to K and R to W do. This generated sequence on the experiments will be used as the horizontal axis for all subsequent graphical plots in the quantitative analysis and will always run from A to W in alphabetical order.
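The multi-level sortation amounts to ordering experiments by a tuple of their level settings taken in ranked-factor order; a minimal sketch, using a few experiments from Table 3 rather than the full set of 23:

```python
# Multi-level sortation: order experiments by their level settings,
# factor by factor, in ranked order of impact.
def fidga_sequence(experiments, ranked_factors, level_order):
    """experiments: {serial: {factor: level}}; returns sorted serials."""
    def key(serial):
        levels = experiments[serial]
        return tuple(level_order[f].index(levels[f]) for f in ranked_factors)
    return sorted(experiments, key=key)

# A few of the study's experiments (settings as in Table 3).
exps = {
    1:  {"#BAF": 8, "Thermo": 1, "#AGV": 7,  "Path": "indirect"},
    2:  {"#BAF": 6, "Thermo": 0, "#AGV": 7,  "Path": "indirect"},
    8:  {"#BAF": 8, "Thermo": 0, "#AGV": 7,  "Path": "direct"},
    18: {"#BAF": 6, "Thermo": 0, "#AGV": 16, "Path": "indirect"},
}
order = {"#BAF": [6, 8], "Thermo": [0, 1],
         "#AGV": [7, 11, 16], "Path": ["indirect", "direct"]}
ranked = ["#BAF", "Thermo", "#AGV", "Path"]
print(fidga_sequence(exps, ranked, order))   # -> [2, 18, 8, 1]
```

Because the sort key is built in ranked-factor order, ties on the most important factor are resolved by the next one down, which is exactly the subgroup-by-subgroup sortation described above.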
Charts were plotted to examine the impact of the factors on the primary performance measure, the target cumulative number of coils annealed (Chart 1), and on the other secondary performance measures (Chart 3). In addition, another chart was plotted to help provide evidence to support the counter-intuitive notion that the factor #TS was not significant (Chart 2). Below are brief descriptions of each chart:
responsible for the lower ANL_P values obtained. From the manufacturing process perspective, the insufficient number of AGVs probably restricted the flow of the coils among the production centers.
Chart 1: Plots the production ratio for the cumulative number of coils produced in the Anneal (ANL_P), the Hot line (I-IL_P), and the Continuous Cold Mill (CCM_P) with respect to the target production (fargecP). The vertical axis corresponds to the ratio of the number of coils produced to the target production during the 9-week period, while the horizontal axis refers to the new experiment serial numbers.
Note, also, that both M and S had considerably larger ANL_P values than the other 2 experiments. A closer examination revealed that both experiments M and S had Path = direct whereas both experiments L and R had Path = indirect. Thus, the interacting effect of having 7 AGVs and using the indirect path further limited the transportation capability of the AGV system from meeting the capacity needed to transport coils to the anneal furnaces.
Chart 2: Plots the maximum and average number of trays with unannealed coils (MAX_BLD, AYE_BLD) and the maximum and average number of non-empty trays (MAX_USE, AYE_USE) at the TS locations. The vertical axis corresponds to the number of trays, while the horizontal axis refers to the respective experiment numbers. Note that these are, again, process-related output data that are selected to help understand the process that occurred at the TS locations during the simulation trials.
Finally, to support the conclusion that having 8 furnaces would provide sufficient anneal capacity, observe that, contrary to the case with the rest of the 23 experiments, the experiments with 8 furnaces (excluding those 4 exceptions) produced more annealed coils than the cumulative number of coils produced from the hot line. Oearly, this must have occurred because buffer stocks of unannealed coils exist in the CS area and must have been transported to the anneal area to supplement the supply of coils from the hot line when the production at the anneal furnaces outpaced the hot line's.
Chart 3: Plots time ratio for the Medians and Interquartile ranges of the time (in minutes) taken for BBT (BBT_M, BBT_I) and for the anneal productivity (APT_M, APT_I) with respect to the largest time value encountered among the data values. The interquartile ranges were included to help examine the variability of these secondary performance measures as well.
In summary, the analysis on Chart 1 suggested the need for more than 6 furnaces to meet the targeted production level and favored
Chart1 : Target Production (Target]) and #Coils Produced in Anneal (ANL_P), Hotline (HL_P), and CCM(CCM])
using direct AGV path and more than 7 AGVs. As an aside, it is true that an analyst could have plotted a chart by sorting the experiments with respect to the levels of the most important factor and then proceeded to identify the anomalies and detecting the causes for them. In fact, the chart would be somewhat similar to Chart 1 (but, most probably, with the experiments within each of the two major subgroups in different orders). However, as could be seen in Chart 1, the FIDGA approach provided a more systematic and effective graphical charting capability to quicldy highlight the causes for the anomalies.
1.10
p r 1.05
o d 1.00 u c 0.95 t
~ 0.90
n 0.85
R
a 0.80
:
Chart2: Max. and Ave. HTrays with unannealed Coils (MAX_BLD and AVE_BLD) and Max. and Ave. HNon-empty Trays (MAX_USE and AVE_USE) at Tray Storage Locations
0.75~;:::::~~~~~::f
o
0.70 -H-+-t-T-i-+-1-T-i-+-1-T-i-+-1-T-i-+-1--t-1H
24.00
ABCDEFGHIJKLMNOPQRSTUVW New Experiment Number
~
20.00
m b
Chart 1 was used to investigate the impact of the factors on the primary performance measure related to the cumulative number of annealed coils (ANL_P). To assist process understanding, Chart 1 included 2 other relevant process-related data sets corresponding to the cumulative number of coils produced at the upstream production center at the hot line (HL_P) and the other to the cumulative number of coils produced at the downstream production center at the CCM (CCM_P).
e 16.00
r 0
f
T
8.00
Y
4.00
•
Observe that, for all 23 experiments, the ANL_P and CCM_P values were very close to each other, suggesting that the anneal area production performance influenced the downstream production performance at the CCM. Observe, further, that the values for experiments A to K were consistently less than those for experiments L to W. In addition, the cumulative number of coils produced at the upstream production center at the hot line (HL_P) was consistently close to the target production (Target_P) for all 23 experiments, suggesting that experiments A to K lacked the furnace capacity to anneal all the hot line coils.

To confirm this, note that experiments A to K were those that used only 6 furnaces while experiments L to W were those that used 8 furnaces. With the exception of experiments L, M, R and S, the ANL_P value exceeded the target production (Target_P). This therefore suggested that having 8 furnaces could provide sufficient furnace capacity. However, it was necessary to understand the anomaly associated with the 4 exception experiments. Observe that the level settings for these exception experiments were, respectively:

L = {#BAF=8, Thermo=0, #AGV=7, Path=indirect, #TS=22}
M = {#BAF=8, Thermo=0, #AGV=7, Path=direct, #TS=11}
R = {#BAF=8, Thermo=1, #AGV=7, Path=indirect, #TS=11}
S = {#BAF=8, Thermo=1, #AGV=7, Path=direct, #TS=22}

Since these were the only experiments that had the minimum number of AGVs, it was clear that this level setting must be the cause of the anomaly.

Next, Chart 2 provided a helpful graphical chart to support the statistically-derived conclusion that the factor #TS was not significant. To several team members at Alcoa Tennessee Operations, this result was counter-intuitive because plant floor experience frequently witnessed swamping of unannealed coils at the 11 TS locations. It appeared that doubling the number of TS locations should allow more coils in front of the anneal furnaces and so increase the anneal production.

Using the same FIDGA experiment sequence for the horizontal axis, Chart 2 plotted 4 data sets collected for examining the condition of the TS locations in the 23 experiments. In particular, MAX_USE and AVE_USE referred to the maximum and average number of non-empty trays used at the TS locations, respectively. By examining the generated experiment sequence in Table 3, observe that the level settings for the number of TS locations had been somewhat randomly distributed after the multi-level sortation. However, despite this, Chart 2 still exhibited some distinct patterns. More specifically, experiments with only 6 furnaces (A, B, ..., K) generally required more TS locations than those with 8 furnaces (L, M, ..., W). Excluding experiments L, M, R and S for the reasons already discussed in Chart 1, observe that the maximum number of non-empty trays was about 11 for the experiments with 8 furnaces regardless of the number of TS locations used. On the other hand, the
corresponding values for those with only 6 furnaces were very close to the number of TS locations used in the experiment. In addition, note that the AVE_USE values for experiments with 6 furnaces were again very close to their respective MAX_USE values, whereas the AVE_USE values of those experiments with 8 furnaces were about constant at 6 and were not dependent on their respective MAX_USE values or the total number of TS locations in the experiments. In other words, with 6 furnaces, all the available TS locations were fully occupied; on the other hand, with 8 furnaces, there were still some TS locations available regardless of whether 11 or 22 locations were used. This suggested that most of the TS locations must have been used to hold coils waiting to be annealed because there was not enough furnace capacity when only 6 furnaces were used. Conversely, because having 8 furnaces would provide enough furnace capacity, coils did not have to wait on the TS locations for a long time -- resulting in fewer than the current number of 11 TS locations being required! To support this claim, note the plots of MAX_BLD and AVE_BLD, which depicted the maximum and average number of unannealed coils at the TS locations. As expected, the MAX_BLD and AVE_BLD values for those experiments with 6 furnaces were very close to each other and were relatively high (about 6 for those with 11 TS locations and 16 for those with 22 TS locations). As for those experiments having 8 furnaces, an average of only about 2 TS locations was needed regardless of the number of TS locations available. This, therefore, confirmed and supported the result of the earlier statistical analysis that doubling the number of TS locations was not significant. Clearly, the FIDGA experiment sequence helped provide these insights about the situation of the tray locations more effectively and systematically than the plotting of any simple chart could.
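Statistics such as MAX_USE and AVE_USE are typically derived from a time-stamped occupancy trace of the simulation. The following sketch shows one way to compute a maximum and a time-weighted average from such a trace; the log format and numbers are invented for illustration and are not the study's data.

```python
# Sketch: deriving MAX_USE- and AVE_USE-style statistics from a simulation's
# occupancy log. The log format and values here are invented for illustration.

# (time, number of non-empty trays) samples; occupancy is piecewise constant
# between events, as in a typical discrete-event trace.
log = [(0.0, 0), (2.0, 3), (5.0, 7), (9.0, 4), (12.0, 4)]
horizon = 12.0  # end of the observation period

max_use = max(n for _, n in log)

# Time-weighted average: each occupancy level persists until the next event.
weighted = 0.0
for (t0, n), (t1, _) in zip(log, log[1:] + [(horizon, 0)]):
    weighted += n * (t1 - t0)
ave_use = weighted / horizon

print(max_use, ave_use)
```

A time-weighted average is used rather than a plain mean of the samples because event times are unevenly spaced, so each observed level must be weighted by how long it persisted.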
The analysis in Chart 2 also showed that several insights into the production process, as affected by different combinations of the investigated factors, could be examined effectively by collecting relevant process-related output data and applying the FIDGA experiment sequence to it. Thus, the FIDGA approach can provide the analyst the ability not only to understand which of the factors are more important and why (achievable through the ANOVA procedures) but also to plot any set of output data that may add to understanding the effect of the factors on the various processes in the manufacturing facilities.
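In practice, once the FIDGA sequence has been derived, applying it to any additional output data set amounts to a simple reordering before charting. A minimal sketch, with a hypothetical sequence and invented data values:

```python
# Sketch: reusing a previously derived FIDGA sequence for a new data set.
# The sequence and the per-experiment values below are hypothetical.
fidga_order = [4, 2, 1, 3]  # experiment serial numbers in FIDGA order

# e.g., some process-related output collected per experiment serial number
ccm_p = {1: 410, 2: 355, 3: 432, 4: 349}

# Reorder the output data along the FIDGA sequence before charting, so the
# values align with the relabeled experiments A, B, C, ... on the axis.
series = [ccm_p[s] for s in fidga_order]
print(series)
```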
• #BAF > 6;
• Thermo = 0;
• 7 < #AGV ≤ 16;
• Path = direct; and
• #TS = 11.

In the recommendations, only 3 of the 5 factors had been determined at fixed level settings while the other 2 needed further experimentation. A more detailed study was conducted and the FIDGA approach again applied -- resulting in the final settings as above, but with #BAF = 7 and #AGV = 11.

V. CONCLUSIONS

This paper described a factor-impact-driven graphical analysis (FIDGA) approach for simulation experiment analysis to support statistically-derived analysis. This approach extends the analysis activities of statistical techniques such as experimental design, ANOVA, F-tests, and empirical statistical modeling to include (a) ranking the factors being investigated according to their relative significance with respect to the impact they have on the set of performance measures and (b) systematically generating a factor-impact-driven sequence on the experiment serial numbers by a multi-level sortation based on the factor ranking. This FIDGA experiment sequence then served to provide the analyst an effective framework with which to plot graphical charts -- both relating to the performance measures and to process-related variables. This FIDGA approach provided a more effective and thorough means of explaining and supporting statistically-derived results from a process-oriented perspective to an audience with little or no statistical background. The approach was illustrated in this paper by the analysis of a recent simulation study conducted at Alcoa Tennessee Operations. The analysis helped to convince skeptical team members of some statistically-derived conclusions, and provided clearer process understanding regarding the impact that each factor had on the manufacturing facility. In all, the results of this simulation study and another helped Alcoa avoid a capital investment of $8 million.

Acknowledgements

The authors wish to thank fellow Alcoans M.R. Emptage, B.A. Kissick, M.D. Waltz, and P.
A. Zalevsky for their insights and assistance provided at various stages of the simulation study.

REFERENCES

Banks, J. and J.S. Carson (1984). Discrete-Event System Simulation. Prentice-Hall, Englewood Cliffs, NJ.
[Chart 3: Medians and Interquartile Ranges of Between-Batch Time (BBT_M and BBT_I) and Anneal Productivity Time (APT_M and APT_I); horizontal axis: New Experiment Number, A to W.]
Box, G.E.P., W.G. Hunter and J.S. Hunter (1978). Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building. John Wiley & Sons, NY.
Law, A.M. and W.D. Kelton (1991). Simulation Modeling and Analysis. McGraw-Hill, NY.
Law, A.M. (1986). "Introduction to Simulation: A Powerful Tool for Analyzing Complex Manufacturing Systems," Industrial Engineering, May, pp. 46-63.
Law, A.M. and M.G. McComas (1989). "Pitfalls to Avoid in the Simulation of Manufacturing Systems," Industrial Engineering, May, pp. 28-31.
Law, A.M. and M.G. McComas (1990). "Secrets of Successful Simulation Studies," Industrial Engineering, May, pp. 47-72.
Finally, Chart 3 was used to investigate the impact of the factors on the remaining secondary performance measures. In Chart 3, observe that the anneal productivity median was, in general, less for those experiments with 8 furnaces than for those with 6 furnaces -- although the BBT_M was higher. More importantly, observe that, among the experiments with 8 furnaces, the last 6 experiments (R, S, ..., W) showed more erratic behavior and had higher values than the 6 preceding them (L, M, ..., Q). This was attributable to the factor Thermo being set at 1. It was found that the first 6 experiments had level settings of Thermo = 0 and they appeared to have less variability (i.e., smaller interquartile ranges). Thus, in view of this, it was recommended to set Thermo to 0. In summary, the FIDGA approach applied to this simulation study analyzed the 23 experiments more thoroughly and supported the statistical conclusions, leading to the recommendations listed above.
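The medians and interquartile ranges plotted in Chart 3 (BBT_M/BBT_I and APT_M/APT_I) can be computed per experiment as sketched below. The sample values are invented, and Python's `statistics.quantiles` is used here merely as one convenient way to obtain the quartiles; it is not the method the original study used.

```python
# Sketch: computing a median and interquartile range (IQR) per experiment,
# in the style of BBT_M/BBT_I and APT_M/APT_I. Sample values are invented.
import statistics

# Hypothetical between-batch times (hours) observed in one experiment.
bbt_samples = [1.2, 0.9, 1.5, 2.1, 1.1, 1.3, 1.8, 0.8]

bbt_m = statistics.median(bbt_samples)

# quantiles(..., n=4) returns the three quartile cut points Q1, Q2, Q3.
q1, _, q3 = statistics.quantiles(bbt_samples, n=4)
bbt_i = q3 - q1  # IQR: spread of the middle 50% of the observations

print(bbt_m, bbt_i)
```

Using the median and IQR rather than the mean and variance makes the chart robust to the occasional extreme batch time, which matches the erratic-behavior comparison made above.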