Designing machine operating strategy with simulated annealing and Monte Carlo simulation

Journal of the Franklin Institute 343 (2006) 372–388

Raid Al-Aomar
Industrial Engineering, Jordan University of Science and Technology, Irbid-22110, Jordan
Received 8 February 2006; accepted 8 February 2006

Abstract

This paper describes a simulation-based parameter design (PD) approach for optimizing machine operating strategy under stochastic running conditions. The approach presents a Taguchi-based definition of the PD problem in which control factors include machine operating hours, operating pattern, scheduled shutdowns, maintenance level, and product changeovers. Random factors include the machine random variables (RVs) of cycle time (CT), time between failures (TBF), time to repair (TTR), and defect rate (DR). Machine performance, a complicated function of control and random factors, is defined in terms of net productivity (NP) based on three key performance indicators: gross throughput (GT), reliability rate (RR), and quality rate (QR). The resulting problem definition presents both modeling and optimization difficulties. Modeling complications arise from the sensitivity of machine RVs to different settings of the operating parameters and from the difficulty of estimating machine performance in terms of NP under stochastic running conditions. Optimization complications arise from the limited capability of mathematical modeling and experimental design in tackling the resulting large combinatorial optimization problem. To tackle these difficulties, the proposed approach combines empirical modeling with Monte Carlo simulation (MCS) to model the sensitive factor interdependencies and to estimate NP under stochastic running conditions. For combinatorial optimization, the approach utilizes a simulated annealing (SA) heuristic to solve the defined PD problem and to provide optimal or near-optimal settings of the machine operating parameters. The procedure and its potential benefits are illustrated through a case study example.
© 2006 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.

Keywords: Parameter design; Stochastic modeling; Simulated annealing; Monte Carlo simulation

Tel.: +962 2 720 1000; fax: +962 2 709 5018.

E-mail address: [email protected].

0016-0032/$30.00 © 2006 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
doi:10.1016/j.jfranklin.2006.02.019


1. Introduction

Because of reduced business safety margins and growing competition in the marketplace, companies are increasingly focused on boosting the performance of their production systems while maintaining low operating cost and high product quality. A production system is often defined as a group of machines that are logically organized to facilitate the production of a certain product or product mix [1]. As value-adding units, machines therefore play a critical role in determining the overall performance of any production system. Metal removal machines, e.g. turning, milling, and drilling machines, are of particular importance since they commonly carry out core manufacturing processes in a wide range of production systems [2].

In conjunction with other running conditions, a machine's performance is highly dependent on its operating strategy in terms of the selected operating parameters, e.g. operating pattern, maintenance plan, and product changeovers. As macro-level operational decisions, operating parameters are complementary to the micro-level machining parameters, e.g. cutting speed, depth of cut, and feed rate [3]. Since the two design levels are typically interdependent, both are critical to a machine's key performance indicators, i.e. throughput, reliability, and production quality. To improve these metrics, such parameters are often set based on a complex combination of process capability, product specifications, and production schedule. Although their impact on machine performance is widely recognized, little research is directed at designing machine operating parameters, and no literature is dedicated to a complete analysis of them. Many machine performance studies are still limited to analyzing the impact of machining parameters [4–6].
A large body of this research aims at providing mathematical and heuristic methods for setting machining parameters so that a certain metric of machining economics is optimized [7–10]. Another considerable body of research has been directed at methods of analyzing and improving machine performance by optimizing both running conditions and process configuration [11,12]. Research that analyzes the operational aspects of a single machine has not highlighted the link between operating parameters and machine performance. Instead, it has been limited to applying various production scheduling algorithms so that the machine setup cost or job makespan is minimized [13–16]. Examples of literature in which machine operating parameters are partially analyzed include analysis of operating parameters for a high-speed milling machine [17], integration of planning and controlling manufacturing processes [18], automated planning and optimization of machining processes [19], optimum maintenance planning [20], and a decision support system for operational decisions [21].

Several difficulties have shifted the focus of researchers away from the problem of designing machine operating parameters. Firstly, it is often difficult to link machine operating parameters to specific machine performance measures such as productivity and reliability under stochastic running conditions. Secondly, operating parameters are typically interrelated with multiple process random variables (RVs) such as cycle time (CT), operational failures, and production defect rate, and representing the sensitivity of the stochastic processes associated with machine RVs is often too complicated for analytical modeling. Thirdly, it is usually unrealistic to rely on deterministic mathematical optimization methods, and ineffective to use factorial design, in solving stochastic, complex combinatorial optimization problems with a large solution space. Because of such difficulties,


in practice machine operating parameters are often set arbitrarily and changed frequently by trial and error, which understandably results in various operational problems, wasted effort, and costly changes.

This paper provides a definition of and a solution approach to the problem of designing machine operating parameters. The focus is on tackling the three limitations outlined in the review of related literature. The multiple impacts of operating parameters on machine performance in terms of throughput, reliability, and quality are combined into a unified machine net productivity (NP) measure. This measure is often critical to accurate capacity and production planning at both the machine and production system levels. Based on an influence diagram, the interrelations among operating parameters and RVs are modeled empirically using multiple linear regression. Monte Carlo simulation (MCS) is used to propagate the randomness of machine RVs into the NP estimate. Finally, the approach utilizes an efficient combinatorial optimization method based on simulated annealing (SA) to search the often large and complex domain of the parameter design (PD) problem and to provide optimal or near-optimal settings of the machine operating parameters.

The remainder of this paper is organized as follows: in Section 2, the underlying PD problem is formalized based on Taguchi's P-Diagram. In Section 3, the details of the proposed solution approach are discussed. In Section 4, a case study example is presented. Finally, in Section 5, concluding remarks are summarized.

2. Formalizing the PD problem

This section defines the problem of designing machine operating strategy based on Taguchi's PD approach [22]. Using a P-Diagram, Taguchi's approach categorizes the factors impacting machine performance (Y) as control factors (C-factors) and noise (random) factors (X-factors), as shown in Fig. 1.
The proposed PD problem definition involves, but is not limited to, five machine operating parameters defined in terms of a set of control factors C = {C1, C2, C3, C4, C5}, where:

C1: Machine operating pattern (distribution of work breaks per shift).
C2: Machine operating hours per shift (hours of machine operation per shift).
C3: Number of changeovers per shift (number of times the product type is switched per shift).

Fig. 1. P-Diagram of the PD problem: C-factors and X-factors act on the machine (M) in production to produce the response Y.


C4: Number of scheduled shutdowns per year (the machine is stopped for regular maintenance).
C5: Machine maintenance level (the type of required repair: low, medium, or high).

The problem definition also involves, but is not limited to, four machine random factors. These factors are defined in terms of a set of RVs X = {X1, X2, X3, X4}, where:

X1: Machine CT: time required to process one unit on the machine.
X2: Machine time between failures (TBF): machine operating time until the next failure.
X3: Machine time to repair (TTR): time required to prepare, repair, and test the failed machine.
X4: Machine defect rate (DR): the machine's percentage of defects due to scrap and quality rejects.

Due to variations in X-factors, the P-Diagram response (Y) in terms of the machine's NP exhibits stochastic behavior. Several sources can contribute to the randomness encompassed in X-factors; both mathematical and empirical models can therefore be used to model the impacts of variation sources on machine performance. Randomness in CT, for instance, comes from operator variability and changes in running conditions. In machining operations, the total per-part CT is determined from the machining time (tm) and the times for loading (tl) and unloading (tu) the part; that is:

CT = tl + tm + tu.  (1)

While operator variability results in variation in both tl and tu, machining time tm is a function of the machining parameters, i.e. cutting speed (N), feed rate (f), and depth of cut (d). For the lathe machine, tm in minutes for a part of length (L), initial diameter (D0), and final diameter (Df) can be estimated in multi-pass turning as follows [23]:

tm = (L / (N f)) ⌈(D0 − Df) / (2d)⌉,  (2)

where ⌈x⌉ rounds x up to the next integer. Hence, variability in CT can be related to variations in operator efficiency as well as in the micro-level machining parameters N, f, and d. Similarly, randomness in TBF and TTR comes from stochastic failures and operation interruptions caused by tool wear, tool change, tool breakage, and machine failures. Finally, randomness in DR can be related to process variation in terms of material properties and working conditions.

By definition, a stochastic process is a sequence of RVs that change value discretely or continuously over time [24]. Based on the assumption that the future values of the RVs are independent of their past given the present, a Markov chain X = {Xn; n = 0, 1, …} is used to represent a discrete stochastic process, while a Markov process X = {Xt; t ≥ 0} is used to represent a continuous stochastic process. Thus, the defined X-factors are continuous RVs that can be modeled using Markovian stochastic processes. Probability models are often used to assign probability to random occurrences in a stochastic process. In general, a probability space is defined in terms of a sample space (Ω), a collection of events (E) from the sample space, and a probability law (distribution) p. In the defined PD problem, the probability space is the state space of the stochastic process, comprising the ranges of the process RVs (X-factors), while the
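To make Eqs. (1)–(2) concrete, the following sketch (not part of the paper; the function names and sample units are illustrative assumptions) computes the machining and cycle times:

```python
import math

def machining_time(L, D0, Df, N, f, d):
    """Multi-pass turning time, Eq. (2): tm = (L / (N f)) * ceil((D0 - Df) / (2 d)).

    L: part length (mm), D0/Df: initial/final diameter (mm), N: spindle speed
    (rev/min), f: feed (mm/rev), d: depth of cut per pass (mm). Returns minutes.
    """
    passes = math.ceil((D0 - Df) / (2 * d))  # number of passes, rounded up
    return (L / (N * f)) * passes

def cycle_time(tl, tm, tu):
    """Per-part cycle time, Eq. (1): loading + machining + unloading."""
    return tl + tm + tu
```

For example, a 300 mm part turned from 50 mm down to 40 mm diameter at N = 500 rev/min, f = 0.25 mm/rev, and d = 2 mm needs ⌈10/4⌉ = 3 passes, so tm = 2.4 × 3 = 7.2 min.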


Fig. 2. Influence diagram of the PD problem: C-factors drive changes in X-factors through Δx = f(C) (with sensitivity error ε), and X-factors determine the response through Y = f(X).

collection of events includes the occurrence of throughput changes, machine failures, and product defects. Since an RV is a function that assigns a real number to each outcome in a sample space, the occurrence of these events is governed by the probability distributions. As discussed in [25], such distributions are developed by fitting the historical data of each RV to a standard or an empirical probability distribution.

Based on the defined C-factors and X-factors, the problem definition outlines the interrelations of C-factors with X-factors, where C-factors impact machine performance by influencing the occurrences of events related to the machine RVs. An influence diagram, shown in Fig. 2, depicts the interrelations among C-factors, X-factors, and NP in the defined PD problem. The relationship between X-factors and C-factors is first defined in terms of an empirical model Δx = f(C) that estimates the sensitivity of the X-factors to changes in the C-factors. A subsequent stochastic model Y = f(X) is then used to propagate the randomness in the X-factors (RVs) into the machine performance in terms of NP. Randomness in the X-factors influences the accuracy of the Y = f(X) model and the amount of sensitivity error (ε) in the empirical model Δx = f(C). Solving the defined PD problem therefore means determining the optimum levels of the C-factors (machine operating parameters) so that the process response Y is maximized under the randomness encompassed in the X-factors. This requires a unified definition of machine performance in terms of NP, an accurate representation of the Δx = f(C) sensitivity model, stochastic modeling of the Y = f(X) relationship, and an effective combinatorial optimization engine for searching the complex and large domain of the PD problem.

3. Solution approach

Fig. 3 depicts a prototype of the proposed solution approach to the defined PD problem.
Available data on the defined C-factors and X-factors are first used to develop empirical regression models Δx = f(C), which predict changes in the X-factors in terms of Δx. Using the Δx values, a MCS model is then utilized to estimate the machine performance (Y) under the stochastic changes in the X-factors. MCS-based estimates of machine performance guide the SA search towards an optimal operating PD. Upon search termination, the optimum levels of the C-factors are produced by the SA search engine. The proposed approach can be summarized in a five-step procedure as follows:

Step 1: Develop a P-Diagram: C-factors, X-factors, and response Y in terms of NP.
Step 2: Develop an empirical model of the Δx = f(C) sensitivity relationship.

Fig. 3. Prototype of the solution approach: the empirical regression model Δx = f(C) feeds the stochastic MCS model Y = f(X); the SA search (optimization model) uses the MCS performance estimates (Y) to produce the optimum C-factors.

Step 3: Develop a MCS model of the Y = f(X) stochastic relationship.
Step 4: Set the parameters of the SA search engine to the specifics of the defined PD problem.
Step 5: Run the simulation-based SA search and obtain an optimal solution.

3.1. Machine performance as NP

This paper defines machine performance in terms of the machine's standalone NP. Machine NP is a unified performance measure that is commonly used in process design and capacity planning [26]. The machine NP is determined by combining measures of machine gross throughput (GT), reliability rate (RR), and quality rate (QR). Machine GT, i.e. the production capacity under ideal conditions, is a key productivity measure that is determined from the machine CT. Such a throughput level cannot be sustained in practice under actual operating conditions, which result in a variable CT. Hence, a mean CT (MCT) in seconds is used to estimate the average machine GT in units per hour (UPH) as follows:

GT = 3600 / MCT.  (3)

In addition to CT variability, machine performance is impacted by frequent machine failures and varying repair times. The average machine RR, i.e. the percentage of time the machine is available for operation, is determined from the machine mean time between failures (MTBF) and mean time to repair (MTTR) as follows:

RR = MTBF / (MTBF + MTTR).  (4)

Finally, machine performance is a function of production quality, which is expressed in terms of the machine QR. Using the machine's mean DR (MDR), QR is determined as follows:

QR = 100% − MDR.  (5)

Therefore, machine NP combines the machine's GT, RR, and QR in an overarching performance measure that represents the deterioration in machine GT due to the machine

ARTICLE IN PRESS 378

R. Al-Aomar / Journal of the Franklin Institute 343 (2006) 372–388

RR and DR. Analytically, NP is defined as follows:

NP = GT × RR × QR.  (6)
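Eqs. (3)–(6) compose directly; the following sketch (an illustration, not from the paper) computes NP from the means of the X-factors:

```python
def net_productivity(mct, mtbf, mttr, mdr):
    """Machine NP per Eqs. (3)-(6).

    mct: mean cycle time (s); mtbf, mttr: mean time between failures / to
    repair (same time unit); mdr: mean defect rate as a fraction.
    Returns NP in units per hour (UPH).
    """
    gt = 3600.0 / mct          # Eq. (3): gross throughput (UPH)
    rr = mtbf / (mtbf + mttr)  # Eq. (4): reliability rate
    qr = 1.0 - mdr             # Eq. (5): quality rate
    return gt * rr * qr        # Eq. (6): net productivity
```

With the initial values later listed in Table 4 (MCT = 120 s, MTBF = 240 min, MTTR = 30 min, MDR = 10%), this gives 30 × (240/270) × 0.9 = 24 UPH.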

In reality, however, estimating NP analytically is often not attainable due to stochastic machine running conditions. Hence, a combination of empirical modeling and MCS is adopted for a realistic estimation of NP. It is also worth mentioning that machine NP may not be the only criterion for selecting machine operating parameters; other budgetary, regulatory, and environmental restrictions can be considered when making operational decisions.

3.2. Empirical model for Δx = f(C)

A multiple linear regression method is used to model the sensitivities (Δx) of the means of the process random factors (X-factors) to changes in the levels of the C-factors. The proposed approach assumes that sufficient benchmark or historical data on machine performance are available for developing the regression models. The approach formalizes the available data using a fractional factorial experimental design based on three levels of the C-factors (Low: L, Medium: M, and High: H). The model response is measured at each level combination in terms of the means of the X-factors (MCT, MTBF, MTTR, and MDR). To predict changes in these means in terms of Δx, a multiple linear regression model is developed for each response mean as a function of the C-factors using the following generic approximation [25]:

Y = b0 + b1x1 + b2x2 + ⋯ + bkxk.  (7)
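A least-squares fit of Eq. (7) can be sketched as follows. This is a generic illustration using NumPy, not the paper's software; X holds the C-factor settings and y one response mean such as MTTR:

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least-squares fit of Eq. (7): y = b0 + b1 x1 + ... + bk xk.

    X: (n, k) array of factor settings; y: (n,) response means.
    Returns the coefficients [b0, b1, ..., bk] and the R^2 value.
    """
    A = np.column_stack([np.ones(len(X)), X])   # prepend the intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
    return beta, r2
```

Interaction terms such as C1C2 are handled by appending the product column to X before fitting; R2adj follows from the standard adjustment 1 − (1 − R2)(n − 1)/(n − k − 1).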

This model can also approximate nonlinear relationships (quadratic and higher-order polynomial terms) as well as factor interactions. The coefficient of determination R2 and the adjusted coefficient of determination (R2adj) are used to check model adequacy when adding nonlinear or interaction terms. The R2 value always increases when a new term is added to the regression model, while the value of R2adj increases only when the addition of a new term reduces the mean square error [27].

3.3. MCS model for Y = f(X)

An accurate estimation of the objective function (machine performance in terms of NP) based on the stochastic relationship Y = f(X) is critical to the successful application of the proposed approach. Based on Formula (6), Y can be defined analytically using the means of the X-factors as follows:

Y = (3600 / x̄1) × (x̄2 / (x̄2 + x̄3)) × (100% − x̄4).  (8)

When modeling machine performance as a stochastic process, however, the response is often too complicated for analytical modeling; in such cases, simulation is an appropriate tool. Modeling randomness (uncertainty propagation), which essentially turns a deterministic model into a stochastic one, is the main objective of MCS. This better mimics realistic system behavior and determines how random variation in the model inputs affects the model response. Fig. 4 demonstrates the concept of randomness propagation with MCS.

Fig. 4. Randomness propagation with MCS: random inputs x1, x2, x3 are passed through the deterministic model f(X), producing a stochastic response Y.
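The propagation shown in Fig. 4 can be sketched as a small Monte Carlo estimate of Eq. (8). This is an assumption-laden illustration, not the paper's model: it uses the distributions later listed in Table 4 (Normal CT, exponential TBF drawn via the inverse method, Erlang TTR, binomial DR), NumPy samplers, and a guard against non-physical CT samples:

```python
import numpy as np

def mcs_net_productivity(mct=120.0, mtbf=240.0, mttr=30.0, mdr=0.10,
                         n=1000, seed=0):
    """Monte Carlo estimate of mean NP (Eq. 8) under X-factor randomness."""
    rng = np.random.default_rng(seed)
    ct = np.clip(rng.normal(mct, 30.0, n), 1.0, None)  # X1: CT ~ N(MCT, 30), guarded > 0
    u = rng.random(n)
    tbf = -mtbf * np.log(1.0 - u)                      # X2: TBF ~ Exp(MTBF), inverse method
    ttr = rng.gamma(3, mttr / 3.0, n)                  # X3: TTR ~ Erlang(3) with mean MTTR
    dr = rng.binomial(1000, mdr, n) / 1000.0           # X4: DR ~ B(p, n=1000) as a fraction
    samples = (3600.0 / ct) * (tbf / (tbf + ttr)) * (1.0 - dr)
    return float(samples.mean())
```

Averaging over several replications with different seeds, as done in the case study of Section 4, reduces the variance of the estimate.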

MCS is a computer-aided statistical sampling method that is typically used to evaluate a deterministic model iteratively, using sets of random numbers as inputs [28]. Further descriptions of MCS and its applications can be found in [29,30]. MCS is applicable to a wide variety of problems in which statistical sampling is employed to approximate solutions to quantitative problems. To this end, MCS depends heavily on random number generation (RNG) to model the stochastic nature of the system being studied. Using RNG, inputs are randomly generated from probability distributions to simulate the process of sampling from the corresponding actual populations. As discussed in [31], uniform RNG between 0 and 1 is commonly achieved using the congruential method, while successive independent random samples from theoretical probability distributions are generated using the inverse method. At the end of the simulation, the collection, or ensemble, of randomly chosen points provides information about the behavior of the model response, and statistical analyses are conducted to formalize that information.

3.4. SA algorithm

In this paper, SA is utilized to seek optimal operating parameters by searching the domain of the defined PD problem, i.e. the practically feasible ranges of the machine operating parameters. SA is an optimization method inspired by the structural properties of materials (mainly metals) undergoing the annealing process, in which materials are melted down and then cooled slowly in a controlled manner [32]. This process resembles the SA search in seeking global optima while avoiding entrapment at local optima. Initially, SA was applied to combinatorial optimization problems using a deterministic closed-form definition of the objective function. Later, researchers successfully applied SA to stochastic optimization problems [33–35].
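The congruential and inverse methods cited above from [31] can be sketched in a few lines. The LCG constants below are a common textbook choice, an assumption rather than the paper's generator:

```python
import math

def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """Linear congruential generator yielding uniforms in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def exp_inverse(u, mean):
    """Inverse method for the exponential distribution: x = -mean * ln(1 - u)."""
    return -mean * math.log(1.0 - u)

# Draw 1000 TBF samples with mean 240 from the uniform stream
uniforms = lcg(seed=42)
tbf_samples = [exp_inverse(next(uniforms), mean=240.0) for _ in range(1000)]
```

The same pattern applies to any distribution whose cumulative distribution function can be inverted in closed form.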
In recent research, SA has become a popular global search engine for solving problems where mathematical programming formulations become intractable, including various combinatorial optimization problems in circuit design, scheduling, path generation, and many others. Further descriptions of SA and its applications can be found in [36,37].

Fig. 5 illustrates the movement mechanism of the SA search. The algorithm starts by setting the SA control parameters: initial temperature (T), cooling parameter (α), number of T decrement steps (S), and the maximum number of iterations (n) at each T step. At the initial T, an initial solution is randomly generated from the feasible space and compared to a candidate solution from its neighborhood. If its performance is better, the candidate solution replaces the initial solution, and the process is repeated for a certain

Fig. 5. SA search algorithm: set the SA parameters (Ts, α, n, S); initialize a solution; generate a candidate solution; estimate the objective values of the current and candidate solutions with MCS; accept or reject by observing ΔE; after n iterations, reduce T using α; terminate after S temperature steps and report the final solution.


number of iterations before reducing the temperature and checking the termination condition.

The temperature T, which is modulated by a predetermined cooling schedule, controls the degree of randomness present in the search. Determining the initial T value is problem-specific and depends on the scaling of the objective function. Generally speaking, T must be high enough to allow the search to move to almost any neighborhood state in order to avoid being trapped in local optima. The search seeks convergence to the local optima toward the end of the computation, when the temperature T is nearly zero. The cooling parameter α ∈ [0, 1] controls the rate at which the temperature is reduced; large values (typically between 0.70 and 0.99) produce better results through slow cooling schedules. Longer temperature steps (a large number of iterations n) also produce a slower cooling rate at a fixed α by allowing the system to stabilize at each temperature step. The combination of the cooling rate (α) and the length of the temperature step (cooling time) in terms of n establishes the SA cooling schedule. This schedule is highly problem-dependent and has a considerable impact on the quality of the solution found: slow SA cooling results in longer computation time, while rapid SA cooling increases the chance of trapping the search in local optima.

After setting the SA parameters, an initial solution is generated randomly and used as the first current solution. The function value of the initial solution, as well as of each candidate solution, is evaluated using the MCS-based estimation of NP. When employed with SA search, a modification to the simple Monte Carlo method is made so that a new point in the search space is sampled by making a slight change to the current point, and unrealistic samples are not placed into the ensemble [38].
This modified procedure, known as Metropolis MCS or Metropolis annealing, was used by Kirkpatrick in 1983 to find the lowest-energy (most stable) orientation of a system [39]. In the proposed approach, the performance value (NP) measured using MCS is analogous to the energy (E) value in thermodynamics, where higher NP (lower E) implies a state closer to thermal equilibrium. Thus, the change in energy (ΔE = New E − Current E) is measured at each evaluated solution, and the solution is accepted if ΔE < 0 (the candidate solution results in lower E). Otherwise, the Boltzmann acceptance criterion is employed: a random value R ∈ [0, 1] is generated and compared to the Boltzmann probability of acceptance p(ΔE). If R ≤ p(ΔE), the candidate solution is accepted and the current solution is updated; otherwise, another candidate solution is generated. By using this acceptance probability one can prove that, for a sufficiently large ensemble, the average E over the ensemble equals the Boltzmann average of E as determined by the Boltzmann distribution law [40]. Given the current temperature T, p(ΔE) is defined as follows:

p(ΔE) = exp(−[ΔE]+ / T),  (9)

where [ΔE]+ = max(ΔE, 0). The search continues in this inner loop until reaching the maximum step iterations (n), at which point T is decremented according to the cooling parameter (α). Reducing T reduces the Boltzmann probability of accepting worse solutions (those with higher energy E). Different methods can be used for reducing T in the cooling process [41]. A commonly used simple approach is the geometric method, where:

T_{i+1} = α T_i.  (10)
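The inner and outer loops of Fig. 5, with the Boltzmann acceptance of Eq. (9) and the geometric cooling of Eq. (10), can be sketched as below. The candidate levels and default parameters follow Table 1 and the case study of Section 4; the `objective` callable stands in for the MCS-based NP estimate, and everything else is an illustrative assumption rather than the paper's implementation:

```python
import math
import random

# Candidate levels of the five C-factors (per Table 1 of the case study)
LEVELS = [
    [2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0],  # C1: hours between breaks
    [4, 5, 6, 7, 8, 9, 10, 11, 12],       # C2: operating hours per shift
    [1, 2, 3, 4, 5, 6],                   # C3: changeovers per shift
    [3, 4, 5, 6, 7, 8, 9, 10, 11, 12],    # C4: scheduled shutdowns per year
    [1, 2, 3, 4, 5],                      # C5: maintenance level
]

def anneal(objective, t0=100.0, alpha=0.91, steps=50, iters=100, seed=1):
    """Maximize `objective` over the C-factor grid via SA (energy E = -objective)."""
    rng = random.Random(seed)
    current = [rng.choice(lv) for lv in LEVELS]
    best, best_val = current[:], objective(current)
    t = t0
    for _ in range(steps):                        # outer loop: S temperature steps
        for _ in range(iters):                    # inner loop: n iterations per step
            cand = current[:]
            i = rng.randrange(len(LEVELS))        # neighbor: re-sample one factor
            cand[i] = rng.choice(LEVELS[i])
            de = objective(current) - objective(cand)   # energy change, E = -NP
            if de < 0 or rng.random() <= math.exp(-de / t):
                current = cand                    # Boltzmann acceptance, Eq. (9)
                val = objective(current)
                if val > best_val:
                    best, best_val = current[:], val
        t *= alpha                                # geometric cooling, Eq. (10)
    return best, best_val
```

With a stochastic MCS objective, each call re-estimates NP, so caching or averaging several replications per evaluation may be warranted in practice.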


As the temperature drops, the process of finding neighboring solutions and accepting them as current solutions, when the acceptance criterion is met, is repeated until termination. The algorithm is set to terminate at a subzero final T, after a certain number of T steps (S), or when no improvement occurs within one T step.

4. Case study

This case study involves designing the operating parameters of a lathe machine through the application of the proposed five-step procedure described in Section 3. First, the PD problem is defined, based on the P-Diagram shown in Fig. 1, in terms of C-factors, X-factors, and response (Y). Second, the model response is assessed using the NP definition in Formula (6). As discussed in Section 2, the problem definition includes five C-factors (C1: operating pattern in terms of working hours between breaks, C2: operating hours per shift, C3: changeovers per shift, C4: scheduled shutdowns per year, and C5: maintenance level) and four X-factors (X1: CT, X2: TBF, X3: TTR, and X4: DR). The levels of the defined C-factors are summarized in Table 1.

Third, the sensitivity of the X-factors (in terms of Δx) to changes in the C-factor levels is determined through multiple linear regression (empirical models). To this end, historical machine performance data (in terms of MCT, MTTR, MTBF, and MDR) are collected at different levels of the C-factors and formalized using a fractional factorial design. The defined PD problem in this case yields 18,900 potential C-factor combinations, which makes it extremely difficult, inefficient, and time-consuming to set up a full factorial design of all factor levels [42]. Hence, a three-level fractional factorial design is set up using the available performance data; the three levels used are shown in Table 2. Based on [27,43], an L27 fractional factorial design is developed (a Taguchi orthogonal array of k^(n−p) designs, in this case 3^(5−2) = 27 designs). Table A1 in the Appendix summarizes the results of the L27 experimental design.
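The size of the design space quoted in the case study follows directly from the level counts in Table 1; a quick check (illustrative only, using the standard library):

```python
from math import prod

levels = {
    "C1": [2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0],  # hours between breaks
    "C2": list(range(4, 13)),                   # operating hours per shift
    "C3": list(range(1, 7)),                    # changeovers per shift
    "C4": list(range(3, 13)),                   # scheduled shutdowns per year
    "C5": list(range(1, 6)),                    # maintenance level
}
full_factorial = prod(len(v) for v in levels.values())  # 7*9*6*10*5 = 18,900 runs
l27_runs = 3 ** (5 - 2)                                 # fractional design: 27 runs
```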
The experimental design results are used in the regression analysis. Nonlinearity and interactions in each model are also checked. To this end, main effect plots are used to determine the factors with the largest impact on the X-factor means. Based on the main effect plots shown in Fig. 6, C5 has the largest main effect on MTTR, C4 on MTBF, C1 on MDR, and C2 on MCT. Nonlinearity is checked using a quadratic approximation

Table 1
Levels of C-factors

C1 levels: 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0
C2 levels: 4, 5, 6, 7, 8, 9, 10, 11, 12
C3 levels: 1, 2, 3, 4, 5, 6
C4 levels: 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
C5 levels: 1, 2, 3, 4, 5

Table 2
Three-level factorial design

Parameter   L: Low (0)         M: Medium (1)      H: High (2)
C1          2.0 h              3.5 h              5.0 h
C2          4 h                8 h                12 h
C3          1 changeover       3 changeovers      6 changeovers
C4          3 shutdowns        6 shutdowns        12 shutdowns
C5          Repair method 1    Repair method 3    Repair method 5

Fig. 6. Plots of main factorial effects: main effects of C1–C5 on the means of MTTR, MTBF, MDR, and MCT.

Base regression models:
MTTR = 13.5 + 0.333 C1 − 0.097 C2 + 0.307 C3 − 0.111 C4 + 6.03 C5
MTBF = 275 − 0.667 C1 − 0.222 C2 − 0.325 C3 − 4.63 C4 − 0.139 C5
MDR = 4.60 + 1.67 C1 + 0.0139 C2 − 0.0058 C3 + 0.0582 C4 + 0.0833 C5
MCT = 193 − 0.73 C1 − 7.26 C2 + 0.162 C3 + 0.076 C4 − 0.752 C5

Enhanced regression models:
MTTR = 17.9 − 1.18 C1 − 0.758 C2 + 0.307 C3 − 0.111 C4 + 6.31 C5 + 0.189 C1C2
MDR = 4.97 + 1.73 C1 − 0.061 C2 − 0.006 C3 + 0.058 C4 − 0.117 C5 + 0.025 C2C5
MCT = 212 − 8.47 C1 − 6.29 C2 + 0.162 C3 + 0.076 C4 − 9.79 C5 + 2.58 C1C5

Fig. 7. Base and enhanced regression models.

of the factor with the highest main effect. It was found that the nonlinearity impact is negligible; hence no nonlinear term was added to the regression models. Interaction plots showed that the following major interactions exist among the C-factors: a C1–C2 interaction in MTTR, no noticeable interaction in MTBF, a C2–C5 interaction in MDR, and a C1–C5 interaction in MCT. The base empirical models are therefore


enhanced by adding the main interaction terms. Fig. 7 shows the base and enhanced regression models. Model adequacy is checked using the coefficient of determination R2 and the adjusted coefficient of determination (R2adj). As shown in Table 3, the values of both R2 and R2adj are close to unity and improved when the models were enhanced with interaction terms. Table 3 also shows the factors with the main effects and main interactions.

Fourth, a MCS model is set up to estimate the machine NP at each combination of C-factors. The MCS estimation is based on the concept of randomness propagation shown in Fig. 4. To propagate the randomness encompassed in the X-factors into the model response (Y), a large number of trials are sampled from the corresponding probability distributions at each response evaluation. The congruential method is used for RNG, and the inverse method is used for sampling from the corresponding probability distributions. The MCS model is set to run five replications, each of 1000 data points (sample size), using different random stream generators to assure sampling independence; the combination of a large sample size and varying random streams is aimed at obtaining a steady-state response evaluation. The probability distributions used for the X-factors in the MCS model, along with their parameters and initial values, are presented in Table 4. Statistical analyses of historical data were used to determine appropriate probability distributions for the RVs. To verify the accuracy of MCS sampling from the selected probability distributions, the sampled (observed) data from each X-factor distribution were graphed, and the Chi-square test was used to compare the obtained distribution to that expected from the selected probability distribution. The tested hypotheses on MCS accuracy were accepted at a confidence level of 95%.

Finally, the SA engine is set to search the PD problem domain (18,900 potential combinations) using the MCS-based estimate of machine NP as the objective function.
The SA search parameters are adapted to suit the specifics of the defined PD problem. Accordingly, the initial (starting) temperature T is set to 100, the number of iterations performed at each T step is set to 100, the cooling rate (α) is set to 0.91, and the final temperature is reached after 50 T steps (5000 total iterations). Fig. 8 shows the progress of the SA search towards convergence using the best 100 of the 5000 evaluated iterations. The best solutions found range from an average machine NP of 27.50 UPH to an average NP of 32.38 UPH. Compared to an exhaustive search, which would require running all 18,900 combinations, the SA search is quite effective: it arrived at the optimal solution with a computation time of 10 min and 46 s using a 1.8 GHz processor and 128 MB RAM.
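The schedule above (T₀ = 100, 100 iterations per temperature step, geometric cooling with α = 0.91, 50 steps for 5000 evaluations) can be sketched as below. The objective `f` is a hypothetical stand-in for the MCS-based NP estimate, and the neighborhood move (re-drawing one factor level) is an assumption, since the paper does not specify its move operator.

```python
# Sketch of the SA schedule: geometric cooling, Metropolis acceptance.
import math
import random

LEVELS = [3, 3, 3, 3, 3]             # C1..C5, three coded levels each

def f(x):
    """Hypothetical stand-in objective; higher is better, like NP."""
    return -sum((xi - 1) ** 2 for xi in x)

def neighbor(x, rng):
    y = list(x)
    i = rng.randrange(len(y))
    y[i] = rng.randrange(LEVELS[i])  # re-draw one factor level
    return y

def simulated_annealing(t0=100.0, alpha=0.91, steps=50, iters=100, seed=7):
    rng = random.Random(seed)
    x = [rng.randrange(k) for k in LEVELS]
    best, best_val = x, f(x)
    t = t0
    for _ in range(steps):           # 50 temperature steps
        for _ in range(iters):       # 100 iterations at each T step
            y = neighbor(x, rng)
            delta = f(y) - f(x)
            # Metropolis rule: take improvements, accept worse moves
            # with probability exp(delta / T).
            if delta >= 0 or rng.random() < math.exp(delta / t):
                x = y
                if f(x) > best_val:
                    best, best_val = x, f(x)
        t *= alpha                   # geometric cooling
    return best, best_val

best, val = simulated_annealing()
print(best, val)
```

On this toy objective the search settles on the middle level of every factor; in the paper the same loop would instead call the MCS model for each candidate PD.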

Table 3
Adequacy check of regression models

Means of     Base models            Main      Main           Enhanced models
X-factor     R² (%)    R²adj (%)    effect    interaction    R² (%)    R²adj (%)
MTTR         96.1      95.2         C5        C1–C2          96.5      95.4
MTBF         96.5      95.7         C4        None           N/A       N/A
MDR          93.7      92.2         C1        C2–C5          93.9      92.3
MCT          92.0      90.1         C2        C1–C5          94.7      93.2


Table 4
Distributions of X-factors in MCS

X-factor     Distribution    Parameter(s)      X0
X1: CT       Normal          N(MCT, 30)        120 s
X2: TBF      Exponential     Exp(MTBF)         240 min
X3: TTR      Erlang          Erl(MTTR, 3)      30 min
X4: DR       Binomial        B(p, n = 1000)    10%
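Drawing the four X-factors of Table 4 at their initial values can be sketched with Python's standard library (the paper used its own congruential generator with inverse-transform sampling; the stdlib draws are substituted here for brevity). The Erlang draw uses the fact that an Erlang with shape k and mean m is a Gamma(k, m/k).

```python
# Sample the Table 4 distributions at their initial values X0.
import random

rng = random.Random(42)

def draw_ct():    # CT ~ Normal(MCT = 120 s, sigma = 30 s)
    return rng.normalvariate(120.0, 30.0)

def draw_tbf():   # TBF ~ Exponential(mean MTBF = 240 min)
    return rng.expovariate(1.0 / 240.0)

def draw_ttr():   # TTR ~ Erlang(mean MTTR = 30 min, shape 3) = Gamma(3, 10)
    return rng.gammavariate(3.0, 30.0 / 3.0)

def draw_dr():    # defect count ~ Binomial(n = 1000, p = 0.10)
    return sum(rng.random() < 0.10 for _ in range(1000))

ct = [draw_ct() for _ in range(1000)]
tbf = [draw_tbf() for _ in range(1000)]
ttr = [draw_ttr() for _ in range(1000)]
dr = [draw_dr() for _ in range(200)]

avg = lambda xs: sum(xs) / len(xs)
print("CT %.1f  TBF %.1f  TTR %.1f  DR %.1f"
      % (avg(ct), avg(tbf), avg(ttr), avg(dr)))
```

The sample means should sit near 120 s, 240 min, 30 min, and 100 defects per 1000 units (10%), matching the X0 column of Table 4.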

[Fig. 8. SA search results: average machine NP (UPH) over the best 100 runs, ranging from about 27.5 to 32.38 UPH.]

The final (near-optimal) solution reached by the SA search translates into the following machine operating PD: 3.5 h between breaks, 6 h of daily operation, 7 scheduled maintenances per year, three daily changeovers, and performing maintenance with the level-2 repair method. This parameter setting resulted in an average overall machine NP of 32.38 UPH. The search was repeated with a modified terminating condition, so that the program stops upon achieving no NP improvement in the 100 iterations run at any one T step. It was found that the program terminates earlier, after running 10 T steps (a total of 1000 iterations). The same solution (PD) is obtained, but with significantly less run time (2 min and 14 s) using the same computer.

5. Conclusion

This paper presented an approach for designing machine operating parameters under stochastic running conditions. The approach utilized Taguchi's P-Diagram to formulate the PD problem, and combined empirical modeling with stochastic MCS to provide a realistic estimation of machine performance under stochastic running conditions. NP was used as an overall machine performance measure comprising three key performance indicators: GT, RR, and QR. An SA optimizer was utilized to solve the resulting combinatorial optimization PD problem, using the simulation-based estimation of machine NP as the objective function. The approach's functionality and benefits were illustrated using a case study of designing operating parameters for a lathe machine.


Five machine operating parameters were designed (operating pattern, operating hours, shutdown schedule, changeovers, and maintenance level) under four machine RVs: CT, TBF, TTR, and percent defects. Results showed that SA computations were effective in obtaining a near-optimal PD using accurate performance estimation through the combined empirical-MCS model. The application of the approach can be extended to a wide range of PD problems in machining centers and job shop operations.

Acknowledgments

The author gratefully acknowledges the referees, who helped to clarify and improve the paper's presentation.

Appendix A

Table A1 summarizes the results of the L27 experimental design.

Table A1
Results of L27 experimental design

Design   C1   C2   C3   C4   C5   MDR   MTBF   MTTR   MCT
  1       0    0    0    0    0     9    255     22   168.20
  2       0    0    1    1    0     8    242     21   172.91
  3       0    0    2    2    0     9    220     20   169.80
  4       0    1    0    1    2     8    239     42   118.23
  5       0    1    1    2    2     9    215     43   120.05
  6       0    1    2    0    2     9    250     44   122.30
  7       0    2    0    2    1     9    213     30   105.23
  8       0    2    1    0    1     9    257     31   106.24
  9       0    2    2    1    1     9    248     29   107.40
 10       1    0    0    1    1    11    246     28   159.28
 11       1    0    1    2    1    12    220     32   162.85
 12       1    0    2    0    1    10    253     39   160.50
 13       1    1    0    2    0    11    215     19   120.67
 14       1    1    1    0    0    10    257     21   122.50
 15       1    1    2    1    0    11    242     20   123.75
 16       1    2    0    0    2    11    262     45   107.34
 17       1    2    1    1    2    12    240     47   108.46
 18       1    2    2    2    2    11    210     46   105.24
 19       2    0    0    2    2    14    213     44   165.27
 20       2    0    1    0    2    13    255     45   161.30
 21       2    0    2    1    2    14    243     47   159.90
 22       2    1    0    0    1    14    260     31   122.70
 23       2    1    1    1    1    15    241     29   119.48
 24       2    1    2    2    1    14    208     32   124.65
 25       2    2    0    1    0    13    237     22   103.86
 26       2    2    1    2    0    14    212     21   107.93
 27       2    2    2    0    0    13    252     20   105.55

