Manufacturing lead time estimation using data mining


European Journal of Operational Research 173 (2006) 683–700 www.elsevier.com/locate/ejor

Production, Manufacturing and Logistics

Atakan Öztürk, Sinan Kayalıgil, Nur E. Özdemirel *

Industrial Engineering Department, Middle East Technical University, 06531 Ankara, Turkey

Received 14 April 2004; accepted 4 March 2005; available online 16 June 2005

* Corresponding author. Tel.: +90 312 210 4788; fax: +90 312 210 1268. E-mail addresses: [email protected] (A. Öztürk), [email protected] (S. Kayalıgil), [email protected] (N.E. Özdemirel).

Abstract

We explore use of data mining for lead time estimation in make-to-order manufacturing. The regression tree approach is chosen as the specific data mining method. Training and test data are generated from variations of a job shop simulation model. Starting with a large set of job and shop attributes, a reasonably small subset is selected based on their contribution to estimation performance. Data mining with the selected attributes is compared with linear regression and three other lead time estimation methods from the literature. Empirical results indicate that our data mining approach coupled with the attribute selection scheme outperforms these methods.

© 2005 Elsevier B.V. All rights reserved.

Keywords: Production; Lead time estimation; Knowledge-based systems; Data mining; Regression trees

1. Introduction

The three means of job shop control in a make-to-order environment are manufacturing lead time (LT) estimation, order review and release, and job flow control through dispatching. These constitute the instruments for effective resource management with customer service quality and cost concerns. The objective is to realise a competitive delivery schedule while running the shop floor activities smoothly, with both less congestion and less idleness, by means of better synchronisation.

LT estimation is critical as it exclusively affects customer relations and shop floor management practices. Due date quoting, which means commitment to meeting customer orders on time, is a direct outcome of LT estimation. Short lead times improve a manufacturer's image and future sales potential. However, not only short but also accurate and precise lead time estimates are desirable.

Developing methods to infer LT from real shop data constitutes a classical challenge in the shop management literature, as LT is affected by many factors such as process needs, batching practices, shop congestion and dispatching rules.

doi:10.1016/j.ejor.2005.03.015


It is well known that a much larger portion of LT is spent waiting in queues or in transit than in actual processing. This makes LT hard to predict: although actual process needs can be predefined with sufficient accuracy, the delays to be faced are contingent upon shop status in real time, which is hard to infer in advance. Estimation gets even harder with longer lead times, because uncertainty increases as the variance in job mixes and quantities accumulates over time.

Analytical, experimental and heuristic methods have been proposed, given the multi-faceted nature of the problem. The prevailing uncertainty and the dependence on various factors are addressed in almost all methods. However, associations among factors, conditional presence of relations, and clustering of lead times in response classes may as well be explored. Hence, knowledge based techniques can also assist in LT estimation. These techniques are suitable for representing complex and situation specific interactions in the form of structured rules. One source of such knowledge is applying data mining (DM) to data gathered from shop operation. This is the motivating idea behind this work.

DM essentially encompasses information gathering by discovering the patterns hidden in available data. Given that a homogeneous control policy (in terms of batching, order release and dispatching) is applied in a particular shop, it is possible that flow times of different jobs follow some typical patterns. Exploring these patterns calls for an approach based on extracting knowledge from past data.

We seek to analyse the applicability and merit of DM as a knowledge extraction tool in LT estimation. Our purpose is to demonstrate that, compared to other methods in the literature, DM is a competitive method for the difficult managerial task of LT estimation. Most traditional methods assume models in pre-specified factors (i.e. job and shop attributes) and are based on estimating the necessary model parameters. Unlike these methods, DM is exploratory both in selecting the significant factors and in expressing the LT in terms of these factors. This is a natural result of the knowledge extraction capability of DM.

We take the regression tree approach as the specific DM method. In expressing the induced knowledge, we aim at exploring the use of a wide range of factors, later reduced to a parsimonious set. This way, many potential associations and interdependencies among these factors are accounted for and tested for their significance. We use the data generated by a job shop simulation model as input for DM; this gives us access to an unlimited source of data in a controlled environment.

We review LT estimation and DM methods in the next two sections. Section 4 is devoted to a description of the simulated manufacturing environment. We present the specific DM approach adopted for LT estimation in Section 5; it is here that we focus on attribute selection for this specific problem. Section 6 summarises our experimental results, followed by our conclusions in Section 7.

2. Review of lead time estimation methods

Lead time estimation has often been seen as critical in planning shop operations. In an early work reviewing the roles of and the ways to handle lead times, Tatsiopoulos and Kingsman (1983) emphasise that manufacturing lead times lie "at the heart of production scheduling". Due date determination has been addressed both as a decision making problem with various tradeoffs (Siedmann and Smith, 1981) and as an estimation problem. In the former approach a computational method is sought to establish a policy for due date assignment; in the latter the purpose is to infer an expected duration under a given set of operating conditions. Cheng and Gupta (1989), in their often cited survey, investigate scheduling issues involving due date determination. They address linkages among due dates, dispatching rules and job completion times in static and dynamic shops. LT is also regarded as an essential element in negotiating with the customer (Moodie, 1999). Kingsman and Mercer (1997) treat pricing and LT quotation effects jointly to infer the probability of winning tenders.


LT is essentially made up of manufacturing flow time, sometimes inflated by factors like allowances for material acquisition and the timing of order release to the shop floor. Hence flow time is often the major part of LT, and the two terms are used interchangeably.

Job and shop characteristics are the two sources of factors affecting flow times. Earlier studies took job related parameters (such as total work content and number of operations) into account in rules like CON, TWK, SLK and NOP (Smith and Siedman, 1983). Shop related measures, such as total shop load at the time of order arrival or shop congestion on an order's prospective route, have been considered in later rules; JIQ (Eilon and Chowdhury, 1976) and JIS (Weeks, 1979) are two well known examples. Using job and shop status data in combination has proven to be more effective (Weeks, 1979; Bertrand, 1983). Experiments have shown that shop related measures specific to an order are more effective than aggregate shop congestion indicators in estimating flow times (Ragatz and Mabert, 1984).

LT estimation based on dynamic aspects, such as operating conditions at the time of order arrival, has been emphasised more recently (Veral, 2001). This work makes use of queuing theoretic findings and assumes that utilisation rate and process time are the factors defining the flow time; the proposed model is quadratic in both factors. Higher order terms also appear in another approach (Ruben and Mahmoodi, 2000).

Vig and Dooley (1991) base their OFS (operation flow time sampling) and COPS (congestion and operation flow time sampling) rules on the fact that flow times are correlated. A sample of recently completed orders is used to infer the ongoing average flow time per operation, reflecting near term shop congestion. Their results support the previous findings about the superiority of using shop conditions. However, significant differences between the rules are found depending on shop balance.
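To make the flavour of these rule families concrete, here is a minimal sketch of three of them. The functional forms follow the rule descriptions above; all coefficients are hypothetical placeholders that would in practice be fitted to historical shop data.

```python
def lt_twk(total_work, k=3.0):
    """TWK: lead time proportional to the job's total work content."""
    return k * total_work

def lt_nop(num_operations, c=480.0):
    """NOP: lead time proportional to the number of operations."""
    return c * num_operations

def lt_jiq(total_work, jobs_in_queues, a=1.0, b=120.0):
    """JIQ: work content plus a shop congestion term
    (jobs waiting in the queues on the order's route)."""
    return a * total_work + b * jobs_in_queues

print(lt_twk(400.0), lt_nop(4), lt_jiq(400.0, 5))   # 1200.0 1920.0 1000.0
```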


Probability distribution based approaches explicitly consider the impact of variance, either of the workload (e.g. JIQ) or of the error in estimation. Enns (1995) refers to an earlier work (Bertrand, 1983) making use of the normal distribution in quoting LT estimates subject to uncertainty. Hopp and Sturgis (2000) treat flow time as a normally distributed random variable and base LT estimation on a service level requirement.

The effect of dispatching rules on the performance of due date estimation has been investigated since the early work of Eilon and Chowdhury (1976), who demonstrate that the interaction between LT estimation and the dispatching rule has a significant impact on due date related performance measures. Dispatching rule effects have been addressed in later studies as well (Vig and Dooley, 1991; Enns, 1995; Sabuncuoglu and Hommertzheim, 1995; Sabuncuoglu and Comlekci, 2002). Flow time estimation is found harder under FCFS than with due date dependent dispatching rules like EDD.

LT estimation methods are generally in the form of estimating the parameters of a pre-assumed mathematical model, often realised by regression applied to flow time data generated by a shop simulation model. Analytical methods involving optimisation of LT based objectives are rare (Cheng and Gupta, 1989), as such results are difficult to generalise. As the estimation method moves from "generic" to "specific", the prediction quality improves. Two distinct accounts (Ragatz and Mabert, 1984; Sabuncuoglu and Comlekci, 2002) concur that one generic form of LT estimation does not necessarily perform well for all shops, nor even for all jobs in a given shop.

Comparisons among alternative LT estimation methods are based upon the resulting lateness and tardiness related measures, whose means and standard deviations are regarded as critical. Mean absolute error (Enns, 1995) and mean square error are sometimes used to measure estimation accuracy.

3. Overview of data mining

DM is typically used for generating information (through some form of learning) by discovering the patterns hidden in available data. The data used for DM are usually in tabular form, where rows are transactions or instances, and columns are attributes.


One basic factor that classifies DM methods is the type of learning, i.e. whether it is supervised or unsupervised. The purpose of supervised learning is to discover the relationship between the attributes and a response variable. In unsupervised learning a response variable is not identified, and the aim is to explore the associations among the attributes.

Fig. 1 depicts a schematic outlook of DM methods, which can be classified under four major headings: (1) decision and regression tree construction, (2) rule induction, (3) associative modelling, and (4) clustering. Except for clustering, all are supervised learning methods. The first two mainly aim at prediction, and the last two deal with discovering the associations among the attributes. Because our purpose is prediction, we briefly review the first two methods in this section. The interested reader may refer to Agrawal et al. (1993) and Srikant and Agrawal (1997) for associative modelling; clustering is described by Michaud (1997) and Zait and Messatfa (1997). We also discuss attribute selection and discretisation, which are critical elements of some DM applications, and conclude this section with a brief account of DM applications in manufacturing.

Fig. 1. A classification of data mining work: supervised learning covers tree construction (decision trees for categoric responses — CART, C4.5/ID3 — and regression trees for continuous responses — CART, M5) and rule induction (CN2, ITRULE); unsupervised learning covers associative modelling and clustering; attribute selection and discretisation support all of these.

3.1. Decision and regression tree construction

A decision tree is a structure that branches the data in a top-down fashion. If the response variable is discrete (also called class, ordinal or categorical), as in the case of being ill or not, then a decision tree is constructed. Trials are made to divide the transactions into nodes with a total impurity (in terms of dissimilar attributes) less than that of the parent node. Each division is based on a single attribute, selected to bring the largest reduction in impurity. However, large decision trees are difficult to interpret; therefore, they are simplified and decision rules (also called production rules) are derived from the trees. A decision rule is of the form "if <conjunction>, then class = C". The conjunction is composed of conditions defined on one or more attributes, and the resulting class is the conclusion part of the rule.

Decision tree based algorithms are mainly built upon two major works, CART (classification and regression trees) and C4.5. CART (Breiman et al., 1984) uses binary trees, where exactly two new nodes are created at each branching. The impurity measure used is the Gini index, which
takes a smaller value as the probability of finding a class at that node gets larger. This probability is estimated by the fraction of transactions belonging to the class. C4.5, introduced by Quinlan (1993), is based on an earlier algorithm by the same author, ID3. The impurity measure used by C4.5 is entropy.

For continuous response values, which lead to regression tree development, the basic principles of tree growth are the same. However, while decision trees can exploit the frequency of a class at a node as the basis for impurity, variance figures must be employed in regression trees. The basic methods in this area are CART and M5. The regression tree construction procedure of CART is fundamentally the same as its decision tree development; the total impurity measure of an offspring node is the weighted sum of the variances of the transactions in that node (Markham et al., 1998). The M5 regression tree algorithm (Quinlan, 1992) is similar to CART with two basic differences: the impurity measure in M5 is based on the sample standard deviation instead of the sample variance, and instead of predicting a fixed value at a terminal node, the training transactions falling into a node are used for fitting a linear regression model for the node.

3.2. Rule induction

Rule induction is the process of constructing rules directly from data rather than deriving them from an explicit decision tree. Conjunctions of the rules are created and the set of "best" rules is found. Two rule induction methods, CN2 and ITRULE, are described in this section.

CN2 was proposed by Clark and Niblett (1989). It follows a top-down approach. The algorithm starts with an empty set of "complexes". A complex is a logical statement of the form "attribute X < (≥) x AND attribute Y < (≥) y AND ...". In each iteration, more specialised complexes are tested for significance and are added to the "qualified" complex set. The significance of a candidate rule is tested using a likelihood ratio statistic (Apte and Weiss, 1997).

ITRULE is a top-down procedure proposed by Smyth and Goodman (1992). Rules are again in
the form of "if <condition y> then <class x>". The algorithm proceeds using the J-measure, J(x|y), which in a way represents the power of class x for the transactions satisfying y. At first, K rules are constructed, each conditioned on a single attribute. Rule construction proceeds using depth-first search on a tree, with the objective of maximising the smallest J-measure among the K rules.

3.3. Attribute selection and discretisation

Attribute selection aims at finding subsets of attributes that are at least as strong as the whole set in terms of predictive power; the motivation is thus to eliminate irrelevant attributes (Kononenko and Hong, 1997). Discretisation is the process of converting continuous attributes to discrete ones without significant loss of information. Dougherty et al. (1995) categorise discretisation algorithms as: (1) supervised versus unsupervised, depending on whether or not labelled classes are present, (2) local versus global, meaning individual ranges or joint regions are considered for attribute values, and (3) static versus dynamic, as to whether or not interdependencies between attributes are accounted for.

Although attribute selection and discretisation are different tasks, they are often tackled jointly, since the attributes that are discretised are also tested for their predictive power (Kononenko and Hong, 1997). Liu and Setiono (1997) have developed the Chi2 algorithm, which partitions continuous data points into buckets to be merged later on. In each iteration the merging operation is validated by a significance test based on the Chi-square distribution. Attributes that are found irrelevant to the target class turn out to have a single discretised category.

The contextual merit approach (Hong, 1997) sorts attributes according to their prediction performance in the presence of other attributes; this feature emphasises the "contextual" nature of the algorithm. The smaller the number of attributes used in separating transactions, the larger will be their marginal discriminative power or merit. Attributes are first sorted by their merit, and those few having the least merit are eliminated to discover the potential power of the remaining attributes.


Using contextual merit, Hong (1997) also proposes a discretisation algorithm.

3.4. Data mining applications in manufacturing

Apte et al. (1993), Markham et al. (1998), and Koonce and Tsai (2000) present DM applications in manufacturing contexts. These studies address classification of defects in manufacturing, rule induction to determine the number of kanbans, and DM applied to discover patterns in shop schedules, respectively. Although current applications are limited, Kingsman et al. (1996) specifically underline the potential for DM in make-to-order settings, hinting that decision rules induced by DM can be of use.
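Before moving on, a minimal sketch may help fix the tree-growing idea of Section 3.1. The code below scores candidate binary splits by the weighted variance of the offspring nodes, in the spirit of CART's regression trees. It is an illustration under simplified assumptions (a single numeric attribute, exhaustive threshold search), not the actual CART or M5 implementation.

```python
import numpy as np

def split_impurity(x, y, threshold):
    """CART-style impurity of a binary split: the weighted sum of the
    response variances in the two offspring nodes."""
    left, right = y[x <= threshold], y[x > threshold]
    if len(left) == 0 or len(right) == 0:
        return float("inf")
    n = len(y)
    return (len(left) / n) * left.var() + (len(right) / n) * right.var()

def best_split(x, y):
    """Search the midpoints between consecutive attribute values for the
    threshold giving the largest variance reduction."""
    values = np.unique(x)
    thresholds = (values[:-1] + values[1:]) / 2.0
    impurity, threshold = min((split_impurity(x, y, t), t) for t in thresholds)
    return threshold, y.var() - impurity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1000, 500)                   # e.g. a load attribute
    y = np.where(x < 370, 900.0, 1400.0) + rng.normal(0, 50, 500)
    print(best_split(x, y))                         # threshold near 370
```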

4. Manufacturing environment

A hypothetical manufacturing environment is created through computer simulation to generate the controlled data needed in this study. These data are afterwards input to the DM tools and to alternative LT estimation methods, and their outcomes are compared.

Three distinct job shops (identified as SHOP-V, SHOP-A and SHOP-I), previously proposed in a similar context (Ruben and Mahmoodi, 2000), are defined to represent various make-to-order environments. All parts are forced to visit the first machine in SHOP-V, whereas only the last machine is necessarily common to all the routes in SHOP-A.

Fig. 2 depicts part routes in SHOP-V and SHOP-A. These two shop types are preferred in order to have well defined bottleneck stages. In SHOP-V the bottleneck is closest to the order release point, hence inferences on LT made at the time of release have a high likelihood to hold. Contrary to SHOP-V, the bottleneck is the furthest stage in SHOP-A, hence LT is much harder to predict. SHOP-I, on the other hand, is a balanced system, which does not impose any restrictions on part routes.

The following is a summary of shop and order characteristics common to all three shops:

• There are six machines in each shop, as this number is found "adequate to represent the complex structure of a job shop" (Ramasesh, 1990).
• Ten part types are produced in a shop instance, each having its own route.
• The number of machines on a part's route is discrete normal with mean 4 and standard deviation 2, truncated so that this number is between 2 and 6. Once the number of machines on a route is generated, and the first (last) machine on the route is fixed in SHOP-V (SHOP-A), the remaining machines are assigned at random, provided that a machine is visited at most once.
• Orders arrive in homogeneous batches with exponentially distributed interarrival times. Part type (a predetermined routing) is randomly assigned upon an order's arrival.


Fig. 2. Part flow in (a) SHOP-V and (b) SHOP-A.


• Batch sizes are distributed uniformly between 1 and 10 to create orders of varying work content.
• Orders are released to the shop as soon as they arrive, since our aim is to compare LT estimation methods rather than to test any particular order review and release policy.
• The FCFS rule is used for dispatching, as LT estimation is found harder under this rule.
• For processing of an order on a machine, a sequence independent setup time per batch and a run time per piece are generated from the lognormal distribution (Dudley, 1963; Law and Kelton, 2000). The lognormal density represents the right-skewed behaviour of process times and is considered appropriate for modelling randomness arising as the product of a number of component processes (Banks et al., 2001). The mean and standard deviation of run time per piece are 10 and 2 minutes, respectively. The parameters for setup time per batch are 55 minutes (mean run time per piece multiplied by the average batch size) and 11 minutes. Although not constant, setup and process times are assumed to be known upon arrival.
• Delays caused by part transfers between machines are ignored.
• Batches are not split.
• No pre-emption is allowed, and machines are continuously available.

Ramasesh (1990) suggests that machine utilisation rates must be in the range 85–94% for effective analysis. Moreover, the LT estimation literature (e.g. Sabuncuoglu and Hommertzheim, 1995) suggests that LT estimation gets much harder at higher congestion levels. To achieve this with the shop parameters given above, various mean order interarrival times are tried in pilot runs. Using these pilot test results, the mean order interarrival times are set at 125, 125 and 100 minutes for SHOP-V, SHOP-A and SHOP-I, respectively. With these parameters, the bottleneck machine in each shop type is found to have an average utilisation of 88%. Besides these, a mean interarrival time of 125 minutes is also considered for SHOP-I to test a balanced system with a lower utilisation rate.
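As an illustration of how such a shop can be turned into data for mining, the sketch below generates orders and pushes them through a six-machine shop. It is deliberately simplified and hypothetical in several respects: routes are drawn fresh for every order instead of coming from ten predefined part types, each machine serves orders in shop-arrival order (an approximation of per-queue FCFS), and only a few of the attributes used later for DM are computed.

```python
import math
import random

random.seed(42)
M = 6                     # machines per shop
MEAN_IA = 125.0           # mean order interarrival time in minutes

def lognormal(mean, sd):
    """Lognormal variate parameterised by its target mean and std dev."""
    sigma2 = math.log(1.0 + (sd * sd) / (mean * mean))
    return random.lognormvariate(math.log(mean) - sigma2 / 2.0,
                                 math.sqrt(sigma2))

def simulate(num_orders=1000):
    """Generate (attributes, realised flow time) transactions.
    Simplification: machines serve orders in shop-arrival order,
    which only approximates true per-queue FCFS."""
    free_at = [0.0] * M            # time each machine next becomes idle
    completions, transactions, t = [], [], 0.0
    for _ in range(num_orders):
        t += random.expovariate(1.0 / MEAN_IA)       # exponential arrivals
        batch = random.randint(1, 10)                # uniform batch size
        n_ops = min(6, max(2, round(random.gauss(4, 2))))  # truncated normal
        route = random.sample(range(M), n_ops)       # no machine revisited
        attrs = {"BatchSize": batch, "NumOpr": n_ops,
                 "NoInSystem": sum(c > t for c in completions)}
        done, tot_proc = t, 0.0
        for m in route:
            work = lognormal(55.0, 11.0)             # setup per batch
            work += sum(lognormal(10.0, 2.0) for _ in range(batch))
            done = max(done, free_at[m]) + work      # wait, then process
            free_at[m] = done
            tot_proc += work
        attrs["TotProcTime"] = tot_proc
        completions.append(done)
        transactions.append((attrs, done - t))       # realised flow time
    return transactions

if __name__ == "__main__":
    print(simulate()[0])
```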


5. Data mining approach

In this section, we first discuss an initial set of attributes that we expect will have potential use in LT estimation. Next, we describe our DM methodology choices based on the characteristics of alternative methods reported in the literature. Performance measures used in evaluating DM and comparing it with other LT estimation techniques are presented later. Finally, we briefly explain our attribute selection approach.

5.1. Initial set of attributes

As DM attributes, we use various statistics collected from the simulated system as orders arrive or depart. A complete list of attributes is given in Table 1. Our aim in starting out with a long list is to include all potentially useful attributes and to identify in the process those that are essential in LT estimation. The motivation here is to explore the potential of as many data items as possible in extracting knowledge for LT estimation.

Attributes 1–5 in Table 1 are static; they are used to capture part type characteristics. Most of the remaining attributes are dynamic, meaning that they reflect the order/shop status at the time a new order arrives. Some attributes (1–12) capture order properties, whereas the others are about shop status. Attributes 7 and 8 reflect the order's work content given its batch size (attribute 6). As one source of variability in the shop is the interarrival times, these are tracked by attribute 9. To monitor the shop's recent completion performance, moving averages of interdeparture times and flow times (attributes 10–12) are kept. Attributes 13–17, which are also found in Veral (2001), constitute a myopic set measuring the current shop load in terms of the number of orders/parts in the system. Machine utilisations (attributes 18–20) are emphasised by Kingsman et al. (1996). Average waiting times in machine queues, relative queue lengths also used by Ooijen and Bertrand (2001), and potential loads of machines (attributes 21, 23 and 25) are longer term measures of the shop's load distribution. PotenTot is the basic "look ahead" attribute, providing a measure of the shop's load in the near future.


Table 1. Initial set of data mining attributes

No.    Attribute                       Description
1      PartType                        Part type of the arriving order, which defines the order's routing
2      NumOpr                          Number of operations in the order's route
3–5    RouteAvg, RouteMin, RouteMax    Average, minimum and maximum number of part types that visit the machines on the order's route
6      BatchSize                       Batch size of the arriving order
7      TWK                             Total expected work content of the order (mean setup time + mean process time × BatchSize, over all machines on the order's route)
8      TotProcTime                     Total actual process time of the order (computed as in TWK using the actual setup and process times generated for the order)
9      MavgArrival                     Moving average of interarrival times of the last nine orders (reflects the recent arrival intensity)
10     MavgDeparture                   Moving average of interdeparture times of the last 30 orders (completion of 30 orders takes about a week)
11     MavgFlowTime                    Moving average of flow times of the last 30 orders having the same part type as the arriving order
12     ExpFlowTime                     Expected flow time of the order, estimated by the average flow time of all orders having the same part type as the arriving order
13     NoInSystem                      Number of orders present in the shop upon the order's arrival
14     NoInRoute                       Number of parts waiting or in process at all the machines on the order's route upon its arrival
15     NoInFirst                       Number of parts waiting or in process at the first machine on the order's route upon its arrival
16     NoInLast                        Number of parts waiting or in process at the last machine on the order's route upon its arrival
17     NoInMaxLoad                     Number of parts waiting or in process at the most loaded machine on the order's route upon its arrival
18–20  UtilAvg, UtilMin, UtilMax       Average, minimum and maximum of the machine utilisations on the order's route upon its arrival
21     TimeInMcTot                     Sum of the average waiting times in queue for all the machines on the part's route upon its arrival
22     TimeInMcMax − TimeInMcMin       Difference between average waiting times at the two machines in the shop having the maximum and minimum waiting times
23     RelQueAvg                       Average of the relative queue lengths of the machines on the order's route upon its arrival (relative queue length of a machine is the actual queue length at the time of arrival divided by the average queue length so far)
24     RelQueMax − RelQueMin           Difference between the maximum and minimum relative queue lengths among the machines on the order's route
25     PotenTot                        Total expected potential load of the machines on the order's route upon its arrival (includes work content of orders that are in the shop and are yet to visit those machines)
26     PotenMax − PotenMin             Difference between the maximum and minimum potential loads among the machines on the order's route

Dispersion is a factor proposed for LT estimation in earlier works (Weeks, 1979; Enns, 1995); attributes 22, 24 and 26 are included as various forms of range or deviation measures.

Throughout the simulation runs, every order arrival is treated as a "transaction" to be fed into the DM analysis.

All attribute values are stored as a new transaction upon arrival of an order. The response variable (realised flow time) for each transaction is also recorded at completion of the order.


5.2. Data mining methodology

We have selected our DM methodology considering three basic criteria: (1) the type of training (supervised or unsupervised), (2) the purpose of DM, which is prediction in our case, and (3) the nature of the attributes and the response variable (categorical or continuous).

The value of the response variable (realised flow time of an order) is recorded for every order departing from the system. Hence, our problem involves analysis with a number of "independent" attributes and a dependent variable whose value is known. (Note that job and shop attributes are not independent in the statistical sense; we use the term only to indicate that flow time will be defined by these attributes.) Therefore, we have chosen supervised learning. In real life systems similar data can be collected with the help of computer integrated shop floor control systems, as Sabuncuoglu and Comlekci (2002) also suggest.

Since our purpose is predicting the LT, as opposed to association of attributes or clustering, we have adopted the tree (and rule) construction approach. Most of our attributes, and in particular the response variable, are continuous in nature. However, since most DM work and software deal with categoric attributes, we considered both discrete and continuous approaches before making a choice.

As the DM tool, we have tried See5 and Cubist, both developed by Quinlan (2003). See5 is the MS Windows implementation of C5.0, an enhanced version of C4.5 (Quinlan, 1993). Cubist, which is based on M5 (Quinlan, 1992), is particularly suitable for continuous attributes as it makes use of regression trees. Both packages generate a decision tree and produce a "ruleset" by simplifying the tree.

In using See5, we discretised the flow time, originally measured in minutes, expressing it in days, where a day is 480 minutes. Using discrete values for the flow time resulted in loss of precision in estimation. For example, two transactions occurring at 479 and 481 minutes, hence separated by only 2 minutes, will be classified one full day apart. Conversely, the realised and predicted values for a transaction can be as disparate as 120 and 950 minutes while the prediction error is taken as only one day. This makes precise prediction and error estimation impossible.
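A two-line illustration of this precision loss (the 480-minute day and both example pairs come from the text above; the flooring rule is our assumption about how the binning was done):

```python
DAY = 480  # one working day in minutes

def day_class(flow_time_minutes):
    # Flooring into whole days is an assumed binning rule.
    return int(flow_time_minutes // DAY)

print(day_class(479), day_class(481))  # 0 1: 2 minutes apart, a day apart
print(day_class(120), day_class(950))  # 0 1: 830 minutes apart, "one day" error
```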


During the pilot test runs with See5, a misclassification rate of 20–30% was encountered. (A transaction having a realised flow time of X days is misclassified if the predicted flow time for the order is different from X days.) Although we tried various options available in See5, we could not reduce the misclassification rate significantly. If, however, a prediction interval of plus or minus one day is deemed acceptable, then the misclassification rate drops below 10%. Note that this implies an absolute prediction error of about 480 minutes.

Cubist, on the other hand, constructs a decision tree by minimising the sample standard deviation in the newly created nodes at each branching. Every new node supplies an additional condition to take part in the resulting rule. Eventually, Cubist applies linear regression to the set of transactions classified in each terminal node; hence the conclusion part of each rule is a distinct linear regression model. A sample of Cubist output is given in Fig. 3. The condition in Rule 1 is that the PotenTot attribute of a transaction is less than 370 minutes. When this holds for a transaction, the accompanying regression model for predicting the flow time is

PredictedFlow = 10.1509 + 0.156 PotenTot + 53 NoInFirst + 0.97 TotProcTime − 15 NoInRoute + 56 RelQueAvg − 14 (RelQueMax − RelQueMin) + 0.2 TimeInMcTot − 0.2 (TimeInMcMax − TimeInMcMin).

We do not propose to use such models built around all 26 attributes; this would increase the variance of the estimates without adding much to the model's predictive power. Instead, we intend to select a relatively small but significant subset, as will be described in Section 5.4, and repeat the Cubist runs with the selected attributes.

In Fig. 3, "Average |error|" and "Relative |error|" shown at the bottom of the Cubist output are the values for the average and relative absolute prediction error.


Fig. 3. Sample Cubist output.

The average absolute difference between realised and predicted flow times is 91 minutes, much less than the 480 minutes found by See5. Relative absolute error is the ratio of this average absolute error to the average absolute error that the overall flow time average would incur, were it used as a naïve estimate. The last measure is the correlation coefficient between realised and predicted values.

We also compared See5 and Cubist in terms of their worst case performance. The difference between realised and predicted flow times could be as large as eight days in See5, whereas it never exceeded two workdays (960 minutes) in Cubist. Hence, we concluded that Cubist is superior to See5 for our problem and used Cubist in the rest of our study.
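To illustrate how such a ruleset is applied, here is a minimal sketch of rule-based prediction. The single rule paraphrases the Fig. 3 sample with only a subset of its regression terms for brevity; the fallback to an overall average for uncovered transactions is our assumption, not Cubist's actual behaviour.

```python
# A Cubist-style ruleset: each rule pairs a condition on the attributes
# with its own linear regression model (intercept + coefficients).
RULES = [
    (lambda a: a["PotenTot"] < 370,            # condition of Rule 1
     10.1509,                                  # intercept
     {"PotenTot": 0.156, "NoInFirst": 53.0,
      "TotProcTime": 0.97, "NoInRoute": -15.0,
      "RelQueAvg": 56.0}),                     # subset of the Fig. 3 terms
]

def predict_flow(attrs, default=1100.0):
    """Return the prediction of the first rule whose condition holds;
    fall back to a default (e.g. the overall average flow time)."""
    for condition, intercept, coefs in RULES:
        if condition(attrs):
            return intercept + sum(c * attrs[k] for k, c in coefs.items())
    return default

print(predict_flow({"PotenTot": 300, "NoInFirst": 2, "TotProcTime": 420,
                    "NoInRoute": 8, "RelQueAvg": 1.1}))
```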

5.3. Performance measures

We use the following performance measures in evaluating the DM approach and comparing it with alternative LT estimation methods (a computation sketch follows the list):

• Average realised flow time per order: used to give an idea about the magnitude of the manufacturing LT; identical in all situations to be compared.
• Average absolute error: average over all transactions of the absolute difference between realised and predicted flow time.
• Coefficient of variation (CV) of absolute error.
• Relative error: ratio of the method's average absolute error to the would-be average absolute error if the overall flow time average were the predictor.
• Mean square error (MSE): average over all transactions of the squared difference between realised and predicted flow time.
• Adjusted R2, where a regression model is used.
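A minimal sketch computing these measures from paired realised/predicted flow times (a direct transcription of the definitions above; variable names are ours):

```python
import statistics

def performance(realised, predicted):
    """Measures listed above (adjusted R2 is omitted, as it only
    applies when a regression model is fitted)."""
    errors = [r - p for r, p in zip(realised, predicted)]
    abs_errors = [abs(e) for e in errors]
    mean_flow = statistics.mean(realised)
    mae = statistics.mean(abs_errors)
    naive_mae = statistics.mean(abs(r - mean_flow) for r in realised)
    return {
        "average_realised_flow_time": mean_flow,
        "average_absolute_error": mae,
        "cv_of_absolute_error": statistics.stdev(abs_errors) / mae,
        "relative_error": mae / naive_mae,
        "mean_square_error": statistics.mean(e * e for e in errors),
        # realised > predicted: the quoted LT would be exceeded (tardy)
        "percent_underestimated": 100.0 * sum(e > 0 for e in errors) / len(errors),
    }

print(performance([1000.0, 1200.0, 900.0], [950.0, 1300.0, 920.0]))
```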


In addition to these performance measures, we examined the percentage of transactions with underestimated (overestimated) flow times, which would yield tardiness (earliness) if the estimated LT were quoted as the due date. The average tardiness and the average earliness are also considered. These measures are mainly used for control purposes: percentages of tardy and early orders should both be close to 50%, and average tardiness and earliness should give some idea about the symmetry of the prediction errors.

5.4. Attribute selection scheme

We started the DM analysis with the initial set of 26 attributes given in Table 1. In fact, most of these attributes are interdependent or correlated, and using all of them would result in redundancies. Besides, data collection in a real shop would be a serious burden with all those attributes. Hence, we need to select a small subset of attributes that has the highest predictive power and that is essential for estimating LT with acceptable accuracy. Using a minimal number of attributes is also in line with the observation made by Ragatz and Mabert (1984) that additional factors in LT estimation yield diminishing returns.

Most attribute selection methods in the DM literature deal with categoric attributes and are not applicable to continuous attributes. Therefore, we have devised an empirical attribute selection (or elimination) procedure. MSE is used as the primary performance measure in attribute elimination, as proposed by Breiman (1996). Our selection procedure assumes that the occurrence of an attribute in Cubist rules is an indicator of its predictive power. To identify the most frequently used attributes, the occurrence of an attribute is checked in the condition and/or the regression parts of individual Cubist rules. The weighted attribute usage ratio for attribute i, WAUR_i, is then defined as

WAUR_i = \frac{\sum_{j=1}^{N} x_{ij} w_j}{\sum_{j=1}^{N} w_j},

where j is the rule index, N is the number of rules in the ruleset, w_j is the number of transactions that satisfy the condition of rule j (available in the Cubist output), and x_{ij} is one if attribute i is used in rule j and zero otherwise.


Attribute selection is performed in successive stages, starting with the set of all candidates and becoming more demanding at each stage. We eliminate attributes gradually as we increase the WAUR threshold in increments of 0.1. For example, at the threshold level of 50%, only those attributes having WAUR values larger than 0.5 are kept, and the remaining attributes are dropped. MSE is monitored as the threshold is increased; it did not grow in the initial stages of attribute removal. The attributes left just before MSE starts increasing steeply are selected. Further details of attribute selection are given in the following section.
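A small sketch of the WAUR computation and one threshold-based elimination stage (the ruleset below is illustrative; in the study, the weights w_j come from the Cubist output):

```python
def waur(ruleset, attribute):
    """Weighted attribute usage ratio. ruleset is a list of
    (w_j, attributes_used_in_rule_j) pairs, where w_j is the number of
    transactions satisfying the condition of rule j."""
    total = sum(w for w, _ in ruleset)
    used = sum(w for w, attrs in ruleset if attribute in attrs)
    return used / total

def keep_attributes(ruleset, candidates, threshold):
    """One elimination stage: keep attributes with WAUR above the threshold."""
    return [a for a in candidates if waur(ruleset, a) > threshold]

ruleset = [(5000, {"PotenTot", "NoInFirst"}),
           (3000, {"PotenTot", "TotProcTime"}),
           (2000, {"RelQueAvg"})]
print(waur(ruleset, "PotenTot"))                                 # 0.8
print(keep_attributes(ruleset, ["PotenTot", "RelQueAvg"], 0.5))  # ['PotenTot']
```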

6. Experimental results

We have experimented with four shop types: SHOP-V, SHOP-A, SHOP-I (100) and SHOP-I (125), where the figures in parentheses for the last two are the mean interarrival times. Ten routing instances are generated for each of the four shop types as described in Section 4. Our DM approach is applied by performing two replications for each shop instance. The first replication is for training, i.e. Cubist is run to construct a ruleset using the transactions generated from this replication. The second replication is used for testing, i.e. the rules generated from the first replication are applied to predict the flow times. Performance measures are based on the test replication results.

Replication length is taken as 10,000 days, and the first 100 days are truncated to eliminate the initial bias. Note that output analysis issues such as checks for normality and independence are irrelevant, since we use simulation not for prediction but for generating transactions for DM. About 38,000 transactions are generated in each replication. A Cubist run for 38,000 transactions takes 30–120 seconds, depending on the number of attributes, on a 1400 MHz personal computer with 256 MB RAM.

First, pilot runs were performed for attribute selection. Then, using the selected attributes, we performed our production runs and compared our approach with other, some more recently proposed, LT estimation methods.


6.1. Attribute selection

We applied the attribute selection scheme described in Section 5.4 to three specific shop instances selected from the 10 generated for each shop type. The first two instances for a shop are chosen from the "average" ones, where typical characteristics such as the number of visitations per part type and per machine are close to their targeted mean levels. In the third instance chosen, these characteristics are far from their mean values, to represent outliers.

Pilot Cubist runs were made for the three instances with all 26 attributes. The rulesets generated contain 25–40 rules in general. Table 2 summarises the performance of the Cubist test runs. SHOP-V seems to be the easiest shop to predict, with the lowest MSE. Average absolute error is 97 minutes (9% of the average realised order flow time), slightly larger than the 91 minutes found in the training run but smaller than the mean interarrival time. MSE statistics for SHOP-A and SHOP-I (125) are twice as large as that of SHOP-V. Average absolute error is 12% and 15% of the average realised flow time for these two shops. SHOP-I (100) proves to be the most difficult to predict, with an average absolute error of 193 minutes (14% of the average realised flow time). Another observation is that flow time is overestimated in more than half (52–53%) of the transactions. This implies that the realised flow time distribution is slightly skewed to the right compared to the predicted flow time distribution. The same pattern is also observed in our production runs, though to a lesser degree.

Results of our attribute selection scheme are summarised in Fig. 4(a) and (b). In these figures, MSE values for SHOP-V and SHOP-A are plotted against increasing WAUR threshold levels for the three instances of each shop (SHOP-I shows similar behaviour and thus is not included). The plots indicate that different instances of the same shop exhibit very consistent behaviour, which justifies attribute selection based on only three shop instances. There are clear cut WAUR levels after which MSE starts a sharp increase: 60% for SHOP-V and 70% for SHOP-A. This means that eliminating attributes up to those levels does not affect the performance of DM significantly.

Attributes selected for the four shop types are shown in Table 3. We have two sets for each shop. The conservative set is taken at the highest WAUR level after which MSE starts increasing; the risky set is obtained at the next level of increase in MSE. The former set guarantees a minimum error performance, the latter minimises the number of attributes and hence the data collection effort. In particular, TotProcTime, NoInFirst and PotenTot are common to all four shops and are found essential regardless of the shop type and the WAUR level. All three are load related measures. TotProcTime is a static job characteristic. NoInFirst and PotenTot are dynamic shop estimates with, in general, a weak association.

Table 2. Pilot DM results for four shops (using all 26 attributes; average of three shop instances for each shop type)

Performance measure         SHOP-V     SHOP-A     SHOP-I (100)  SHOP-I (125)
Average realised flow time  1121.27    1174.36    1393.44       895.00
Average absolute error      96.96      141.29     192.90        133.79
CV of absolute error        0.88       0.93       0.94          0.96
Relative error              0.20       0.28       0.30          0.38
Mean square error           16,777.75  37,324.69  71,701.29     34,853.85
Percent underestimated      47.00      48.00      48.00         47.00
Average underestimation     108.26     157.52     218.75        155.22
Percent overestimated       53.00      52.00      52.00         53.00
Average overestimation      86.92      126.31     169.39        115.03

Fig. 4. MSE values for three instances of (a) SHOP-V and (b) SHOP-A as the WAUR threshold increases (MSE on the vertical axis; WAUR threshold from 0 to 90% on the horizontal axis).

Table 3. Conservative and risky sets of selected attributes (WAUR threshold in parentheses)

Conservative sets:
SHOP-V (60%):        TotProcTime, NoInFirst, PotenTot, NoInRoute, NoInMaxLoad, PotenMax − PotenMin, PartType
SHOP-A (70%):        TotProcTime, NoInFirst, PotenTot, NoInRoute, PotenMax − PotenMin
SHOP-I (100) (70%):  TotProcTime, NoInFirst, PotenTot, NoInRoute, NoInMaxLoad, MavgArrival, NoInSystem, PartType
SHOP-I (125) (80%):  TotProcTime, NoInFirst, PotenTot, RelQueAvg, NoInMaxLoad, MavgArrival, NoInLast

Risky sets:
SHOP-V (90%):        TotProcTime, NoInFirst, PotenTot, NoInRoute
SHOP-A (80%):        TotProcTime, NoInFirst, PotenTot, PotenMax − PotenMin
SHOP-I (100) (80%):  TotProcTime, NoInFirst, PotenTot, NoInSystem, NoInMaxLoad
SHOP-I (125) (90%):  TotProcTime, NoInFirst, PotenTot

6.2. Comparison of data mining approach with other estimation methods

The DM approach is compared with four other methods. The first method is linear regression, where the independent variables are restricted to the attributes in the conservative and risky sets given in Table 3. A third, larger attribute set (generated at the 40% WAUR level) is also used for control purposes; this is done to explore the value added by Cubist's decision tree and the partitioning introduced by the condition parts of the rules.

The second comparison method is a recent proposal due to Ruben and Mahmoodi (2000). We use their combined model, COMB, given as

LT_i = b_0 + b_1 Q_k + b_2 \sum_{j \in R_i,\, j \neq k} Q_j + b_3 \sqrt[4]{\sum_{j \in R_i,\, j \neq k} Q_j Q_k},

where the LT estimate for an arriving order i (LT_i) is given by an expression with three elements: the number of parts waiting or in process at the bottleneck machine k (Q_k), the number of parts waiting in machine queue j on the route R_i of part i (Q_j), and the interaction between the two. These variables, except the interaction term, are among the DM attributes. The bottleneck for SHOP-I is taken as the most loaded machine at the time of the order's arrival.
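A sketch of how the COMB regressors would be assembled and used (the coefficients below are hypothetical placeholders; in the study they are fitted by regression on sampled training transactions):

```python
import numpy as np

def comb_features(other_queues, q_bottleneck):
    """COMB regressors: Q_k, the summed queue lengths of the other
    machines on the route, and the fourth root of their interaction.
    other_queues excludes the bottleneck machine k."""
    q_sum = float(sum(other_queues))
    return np.array([1.0, q_bottleneck, q_sum,
                     (q_sum * q_bottleneck) ** 0.25])

def comb_predict(beta, other_queues, q_bottleneck):
    return float(beta @ comb_features(other_queues, q_bottleneck))

beta = np.array([200.0, 40.0, 25.0, 60.0])  # hypothetical fitted coefficients
print(comb_predict(beta, other_queues=[3, 1, 4], q_bottleneck=5))
```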


The third method is also a recent suggestion, by Hopp and Sturgis (2000). It represents the LT for order i by

LT_i = \mu(n_i) + ff\,[z_\alpha\, \sigma(n_i)],

where \mu(n_i) is the mean flow time and \sigma(n_i) is the estimated standard deviation of flow time with n_i jobs present in the system. ff is called the fudge factor and is used to adjust the LT estimate dynamically to account for a service level. The service level \alpha, which is the percentage of tardy orders tolerated, is set at 50% to make the method comparable with DM; this eliminates the standard deviation component in the main formula, as z_{0.50} = 0. The load dependent mean flow time and its standard deviation are based on predefined critical work-in-process levels; details of estimating \mu(n_i) and \sigma(n_i) can be found in Hopp and Sturgis (2000). Their method is also based on the number of orders in the system, but uses a quadratic regression model in estimating \mu(n_i), as opposed to the multiple linear models used in Cubist. Hence, Cubist's restriction to linear models is evaluated against a quadratic approach. The expected flow time in an empty system, \mu(0), is replaced with the intercept b_0 estimated from the regression equation; otherwise the regression cannot be run, since the realised flow time for some transactions is shorter than \mu(0) due to randomness. Critical work-in-process levels are found using the mean setup and process times of the shop instance. Interdeparture times are estimated from simulation, and turned out to be the same as interarrival times as the shops reach steady state.

The last method is the traditional TWK (total work content). It is chosen since it is widely accepted and is based on the attribute TotProcTime, one of the top three most important attributes found in our DM implementation. In implementing the method, the shop instance dependent coefficient needed for the TWK formula, LT_i = k · TotProcTime_i, is determined so as to attain 50% tardiness on the average.
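One way to implement the 50% tardiness calibration is to take k as the median flow-to-work ratio over the training transactions, so that the quoted LT is exceeded in about half of them. The median choice is our assumption; the paper states only the target, not the calibration procedure.

```python
import statistics

def fit_twk_coefficient(flow_times, tot_proc_times):
    """Median of realised flow / total work ratios: LT_i = k * TotProcTime_i
    is then exceeded by roughly half of the realised flow times."""
    return statistics.median(f / p for f, p in zip(flow_times, tot_proc_times))

flows = [900.0, 1100.0, 1300.0, 1500.0, 800.0]
works = [300.0, 350.0, 420.0, 480.0, 280.0]
k = fit_twk_coefficient(flows, works)
tardy = sum(f > k * w for f, w in zip(flows, works))
print(round(k, 3), tardy / len(flows))   # k ~ 3.1, roughly half tardy
```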

All the models above are first fit to a sample of around 1000 transactions randomly drawn from the training data set (a total of 38,000 transactions on the average) for each shop instance, to assure independence of data points. To validate the sampling, regression runs are also made using the whole training data set; models fit to the sample are found to be as successful as those fit to the whole data set.

All comparison methods are applied to each of the 10 instances of the four shops. The average results are summarised in Table 4. The first observation regarding Cubist is that the absolute, relative and mean square error values are not larger than those in Table 2. Hence, we can conclude that even the risky attribute sets perform almost as well as the whole set of 26 attributes, and the attribute selection scheme has been successful. With conservative attribute sets for the different shop types, average absolute error is 8–15% of the average realised order flow time; the same range is 9–16% with risky attribute sets. MSE values obtained with risky attribute sets are 5–23% larger than those obtained with conservative sets.

The relative behaviour of the shops is the same as in Table 2: SHOP-V is again the easiest to predict and SHOP-I (100) the most difficult. This is expected, because the bottleneck in SHOP-V, an important indicator of the shop's load, is visited early on, which makes prediction more accurate. In SHOP-I, however, routings are random and bottlenecks may come later on the routes.

Linear regression with selected attributes performs 6–15% worse than Cubist with conservative attribute sets, and 4–13% worse with risky ones, in terms of the average absolute error. The difference can be attributed to Cubist's decision tree and its use of a different regression model in each node. In fact, Cubist results with conservative attribute sets are better than all regression results for all four shops; this holds even with risky attributes for the first three shops. Compared to the other three methods, the performance of linear regression with selected attributes is the closest to that realised by Cubist. This is probably because of the relatively linear nature of the flow time data; the regression tree approach would be particularly stronger than straightforward regression when higher order dependencies (nonlinearity) exist in the data, a point also underlined by Apte and Weiss (1997). The relative insensitivity of linear regression to all three attribute sets probably indicates the bottom-line effectiveness of the method, given the choice of the initial attribute set.

Table 4. Comparison of data mining approach with other methods (average of 10 shop instances for each shop type)

SHOP-V (average realised flow time 1064.76)
Method                    Abs error  CV    Rel error  MSE         Adj R2
Cubist, conservative      85.95      0.89  0.18       13,726.94   N/A
Cubist, risky             91.87      0.90  0.19       15,667.35   N/A
Regression, 40% WAUR      98.84      0.87  0.21       17,759.34   0.95
Regression, conservative  99.09      0.87  0.21       17,785.95   0.95
Regression, risky         99.94      0.87  0.21       18,065.86   0.95
Ruben and Mahmoodi        200.38     1.02  0.42       82,168.40   0.75
Hopp and Sturgis          181.72     0.79  0.38       54,404.45   0.44
TWK                       446.07     1.01  0.95       399,893.52  N/A

SHOP-A (average realised flow time 1076.56)
Method                    Abs error  CV    Rel error  MSE         Adj R2
Cubist, conservative      130.12     0.95  0.27       32,623.17   N/A
Cubist, risky             140.84     0.95  0.29       38,250.97   N/A
Regression, 40% WAUR      142.19     0.90  0.29       37,008.03   0.90
Regression, conservative  147.43     0.89  0.31       39,309.67   0.90
Regression, risky         159.96     0.89  0.33       46,163.12   0.88
Ruben and Mahmoodi        195.34     0.92  0.41       70,325.53   0.77
Hopp and Sturgis          199.29     0.86  0.41       69,421.14   0.38
TWK                       453.46     1.00  0.94       410,737.20  N/A

SHOP-I (100) (average realised flow time 1227.89)
Method                    Abs error  CV    Rel error  MSE         Adj R2
Cubist, conservative      170.04     0.96  0.31       58,432.82   N/A
Cubist, risky             173.83     0.96  0.32       61,386.56   N/A
Regression, 40% WAUR      183.02     0.92  0.33       65,321.01   0.88
Regression, conservative  183.76     0.92  0.33       65,986.74   0.88
Regression, risky         184.20     0.93  0.34       66,521.34   0.88
Ruben and Mahmoodi        238.68     0.94  0.45       110,563.55  0.75
Hopp and Sturgis          277.17     0.92  0.51       149,807.15  0.54
TWK                       490.73     1.02  0.87       540,427.67  N/A

SHOP-I (125) (average realised flow time 816.25)
Method                    Abs error  CV    Rel error  MSE         Adj R2
Cubist, conservative      121.39     0.98  0.38       29,880.35   N/A
Cubist, risky             133.71     0.99  0.41       36,691.81   N/A
Regression, 40% WAUR      127.73     0.91  0.40       31,189.53   0.82
Regression, conservative  128.94     0.93  0.40       32,286.42   0.81
Regression, risky         138.85     0.95  0.43       38,456.53   0.78
Ruben and Mahmoodi        195.04     0.86  0.62       67,609.53   0.55
Hopp and Sturgis          196.69     0.82  0.62       66,183.53   0.35
TWK                       260.00     0.97  0.79       147,781.91  N/A


Cubist and linear regression outperform the other methods by a wide margin. Average absolute errors for the different shops are 37–117% larger with the method of Ruben and Mahmoodi, and 41–97% larger with Hopp and Sturgis' approach, relative to the performance of Cubist even with risky attribute sets. However, there is no significant difference in the CV of absolute error among the methods, indicating that the improved accuracy of DM is not attained at the expense of precision. MSE values for these two methods are twice as large as those found with Cubist.

Relative error and adjusted R2 are two measures reflecting how well a model explains the variability in flow time. Relative error remains in the range 0.18–0.43 with Cubist and linear regression, whereas the range is 0.38–0.62 with the other two methods. Adjusted R2 values are between 0.78 and 0.95 with linear regression; they drop to 0.55–0.77 in Ruben and Mahmoodi's and 0.35–0.54 in Hopp and Sturgis' regression models. These results indicate that the attributes selected and the regression tree approach used in DM are more effective in selecting the critical factors and estimating LT than the other models. TWK is always much worse in terms of all performance measures. The largest differences between Cubist and the other methods are observed for SHOP-V.

7. Conclusion

We have demonstrated the use of DM, specifically the regression tree approach, in make-to-order manufacturing LT estimation. Simulating four shop types generates the training and test data for DM. The analysis starts with a large set of attributes reflecting static and dynamic order and shop characteristics. We have developed an empirical scheme to select a reasonably small subset of attributes having relatively high predictive power, and we have compared our DM approach with linear regression and three other LT estimation methods from the literature.

Our attribute selection scheme proves to be effective. Eliminating attributes up to a certain point does not decrease the estimation quality; hence the conservative sets of attributes selected are almost as successful as the full set of attributes, regardless of the shop type. We can conclude that DM was particularly successful in exploring the patterns in the data and determining critical attributes for LT estimation. Some of the selected attributes are common to all shop types. In particular, the total process time of the order, the number of parts waiting or in process at the first machine on the order's route, and the total expected potential load of the machines on the order's route prove to be essential.

Among the three shop types, SHOP-V is the easiest to predict with DM. For this shop, average absolute error is below 9% of the average realised flow time with conservative or risky attribute sets; this figure is 12–16% for the other shops.

Our results indicate that the regression tree approach of DM, coupled with our attribute selection scheme, outperforms the methods compared. Among these, linear regression with selected attributes has the closest estimation quality to DM, while TWK has the worst performance. We can conclude that knowledge-based approaches constitute a viable alternative and can prove more effective in estimating LT than many other models. Instead of postulating a model and estimating its parameters from the data, DM focuses on learning what the data may reveal, with all its peculiarities.

Our study can be complemented by an application in an actual manufacturing environment. Such an application seems feasible given the computer integrated resource planning and shop floor control systems supported by the state-of-the-art data warehousing software available today. A shop floor control system can be adjusted to collect the necessary data, and data warehousing can be used to extract the DM attribute values from these data. This underlines the importance of systematic treatment of historical data, to be used not only for report generation but also for modelling purposes to provide decision support.

DM is known to be effective with large amounts of data. In this study we have used simulation as the data source. In a real life manufacturing system, accumulation of sufficient data for DM may take a long time, during which the environment (part mix, demand, processes and so on) may change.


Therefore, the potential of using DM with a limited amount of data may be worthwhile to explore.

We have observed that attribute selection is an essential part of the DM process. Most attribute selection methods in the DM literature deal with categoric attributes, and they are not directly applicable to continuous attributes such as ours. There seems to be a need to adapt these methods, or to develop new ones, for the selection of continuous attributes.

We expect that nonlinearity in flow times, introduced by batching practices, stoppages caused by machine breakdowns, use of due date based dispatching rules, and time losses caused by changeovers, will increase the performance difference between DM and linear regression in favour of the former. The strength of DM, and in particular of regression trees, can therefore be explored by experimenting with data having higher order dependencies between the DM attributes and the flow time. Artificial neural networks, which have been applied with success to a variety of production planning and control problems, can also be used for LT estimation as an alternative to regression trees.

An important research area in the manufacturing domain would be the use of DM for order review and release: DM can be used to produce sets of rules that guide the decision maker in choosing appropriate release times. Another application area can be the determination of part families and group technology cell configurations.

Acknowledgment

This research is supported by the Turkish Science and Research Council (TUBITAK) under contract MISAG-164.
