An efficient MILP model for the short-term scheduling of single stage batch plants


Computers and Chemical Engineering 30 (2006) 1003–1018

Pedro M. Castro a,b,∗, Ignacio E. Grossmann b

a Departamento de Modelação e Simulação de Processos, INETI, 1649-038 Lisboa, Portugal
b Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA

∗ Corresponding author. Tel.: +351 217162712; fax: +351 217167016. E-mail address: [email protected] (P.M. Castro).

Received 9 March 2005; received in revised form 29 December 2005; accepted 29 December 2005 Available online 5 April 2006

Abstract

This paper presents a multiple time grid continuous-time MILP model for the short-term scheduling of single stage, multiproduct batch plants. It can handle both release and due dates and the objective can be either the minimization of total cost or total earliness. This formulation is compared to other mixed-integer linear programming approaches that have appeared in the literature, to a constraint programming model, and to a hybrid mixed-integer linear/constraint programming algorithm. The results show that the proposed formulation is significantly more efficient than the MILP and CP models and comparable to the hybrid model. For one large instance, both methods exceeded the time limit but the hybrid method failed to find a feasible solution. The results also show that a discrete-time formulation performs very efficiently even when a large number of time intervals are used.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Mixed-integer programming; Constraint programming; Hybrid strategy; Resource task network; Parallel machine scheduling

Nomenclature

c_{i,m}        cost of processing order i on machine m
d_i            due date of order i
DD_i           delivery date of order i
H              time horizon
I/i, i′        process orders
I_m            orders to be processed on machine m
I_{t,m}        orders that can start at time point t on machine m
M/m            process equipment units (machines)
M_i            machines that can process order i
N_{i,m,t}      binary variable that assigns the start of order i on machine m to time point t
N̄_{i,m,t,t′}   binary variable that assigns the end of order i, processed on machine m, which began at t, to event point t′
p_{i,m}        processing time of order i on machine m
r_i            release date of order i
R_{m,t}        excess amount of machine m at time point t
Δt             number of event points allowed between the beginning and end of a processing task
T_t            absolute time of event point t
T_{t,m}        absolute time of event point t on machine m
T/t, t′, t″    points of the time grid
T_{i,m}        time points where order i can start to be processed on machine m

Greek letters
δ              duration of each time interval on the discrete-time grid
τ_{i,m}        processing time of order i on machine m as an integer multiple of δ

1. Introduction

Scheduling is concerned with the allocation of resources over time so as to execute the processing tasks required to manufacture a given set of products (Pinedo, 2001). Depending on the amount of resources and time available, finding a feasible or an optimal schedule with respect to a certain objective can be trivial or very complex. The simplest scheduling problem is the single machine sequencing problem. If more than one machine exists, the solution involves not only the sequencing of tasks or orders, but also their assignment to a particular machine. The machines can be arranged in parallel, in series, or in a more complex structure where a given machine can be used for tasks belonging to different stages of the process. In multistage problems we need to account for the flow of material between stages to ensure that a given order only starts being processed in a stage after it has gone through the previous one, which adds more complexity to the problem. In the limit, orders do not have an identity throughout the process and the




materials that make the orders must be considered instead. In that case, order sequencing becomes undefined and different types of variables must be used to model this more general problem. General mathematical formulations for scheduling are either based on the state task network (Kondili, Pantelides, & Sargent, 1993) or resource task network (Pantelides, 1994) process representations. STN or RTN-based formulations can be applied to any network topology featuring operations spanning from batch to continuous, consisting of fixed or variable duration tasks, with the major difference being that the STN treats equipment resources implicitly, while the RTN treats them explicitly. Earlier formulations use a discrete representation of time and may involve several thousand binary variables in order to handle a sufficiently fine discretization that closely matches the exact problem data. As a consequence, when solving large problems, problem data is usually rounded to maintain problem tractability. Overall, discrete-time mixed-integer linear programming formulations are usually very tight and perform well for a variety of objective functions, even makespan minimization (Maravelias and Grossmann, 2003b). Continuous-time formulations began to appear in the last decade, and have received much attention recently. Recent examples of STN-based formulations are the work of Giannelos and Georgiadis (2002a, 2002b), Maravelias and Grossmann (2003a) and Janak, Lin, and Floudas (2004), while for the RTN we have the work of Castro, Barbosa-Póvoa, Matos, and Novais (2004) and Castro, Barbosa-Póvoa, and Novais (2005). In continuous-time formulations one needs to specify the number of points that compose the time grid(s), depending on whether the formulation uses a single time grid (Maravelias and Grossmann, 2003a; Castro et al., 2004, 2005), or one for each equipment resource (Giannelos and Georgiadis, 2002a, 2002b; Janak et al., 2004). Single or multistage, multiproduct plants have special characteristics that allow for a different type of mathematical programming approach, where some of the model variables are more closely related to the real world decisions (e.g. assign order i to machine m, make order i before i′). Examples of sequential MILP short-term scheduling models can be found in Pinto and Grossmann (1995), Méndez and Cerdá (2000, 2002), Jain and Grossmann (2001), and Harjunkoski and Grossmann (2002). When compared to the general continuous-time formulations, the more recent sequential models do not need to specify the number of event points, meaning that they only need to be solved once. Another important advantage is that heuristic rules can be used to pre-order items in order to decrease the

complexity of the problems and allow larger problems to be solved. Further details on the above mentioned and other available mathematical programming approaches for short-term scheduling can be found in the recent review papers of Floudas and Lin (2004) and Méndez, Cerdá, Grossmann, Harjunkoski, and Fahl (2006). Constraint programming (CP) is another solution approach that can be used for solving some classes of scheduling problems (Baptiste, Le Pape, & Nuijten, 2001). CP is particularly effective for solving feasibility problems and seems to be better suited than traditional MILP approaches for special types of discrete optimization problems where finding a feasible solution is difficult. The lack of an obvious relaxation, however, makes CP worse for loosely constrained problems, where the focus is on finding the optimal solution among many feasible ones and proving optimality. Overall, CP and MILP have complementary strengths that can be combined into hybrid algorithms, yielding considerable computational improvements when compared to the standalone approaches. Examples of these are the work of Jain and Grossmann (2001) for single-stage, Harjunkoski and Grossmann (2002) for multistage multiproduct plants, and Maravelias and Grossmann (2004a, 2004b) for multipurpose plants. This paper presents a new RTN-based continuous-time MILP model for the short-term scheduling of single stage multiproduct plants with parallel units or machines. It is based on the general formulation of Castro et al. (2004), but has one important difference: a different time grid is used for each machine of the process instead of a single time grid for all events taking place. New constraints are presented that allow the consideration of both release and due dates in a general way. The new formulation is shown to perform much better than other continuous-time MILP formulations and standalone CP models, and is comparable to the hybrid MILP/CP algorithm of Maravelias and Grossmann (2004b) on a set of example problems where the objective is the minimization of total cost, or the minimization of total earliness. The rest of the paper is structured as follows. Section 2 gives the problem definition. The next two sections present the RTN-based approaches following a historical perspective. Section 3 presents the simplified versions (for the single stage parallel machine problem) of the general, single time grid, discrete and continuous-time formulations. The new multiple time grid continuous-time formulation is then given in Section 4. The main characteristics of other approaches that have been used to solve this specific type of problem are given in Section 5, while the results for the test problems are presented in Section 6. Finally, the conclusions are given in Section 7.

2. Problem definition

In this paper, the short-term scheduling problem of single stage multiproduct batch plants with parallel units or machines is considered. A set I of product orders is to be processed on a set M of dissimilar parallel machines, where any given machine m can process all orders belonging to set Im. The processing time of order i on machine m is assumed to be known (p_{i,m}), as


Fig. 1. Uniform time grid for discrete-time formulation.

well as its release date r_i and due date d_i, which are treated as hard constraints. It is also assumed that if setup times exist, they are not sequence dependent, so that they can be incorporated into the processing time. Two alternative objectives are considered: (i) minimization of the total cost, where the processing cost of order i on unit m is given by c_{i,m}; (ii) minimization of total earliness.

3. General RTN formulations adapted to the single stage case

This section presents the simplified versions of the resource task network-based formulations, both the discrete-time formulation of Pantelides (1994) and the continuous-time formulation of Castro et al. (2004), for the special case of single stage multiproduct plants without common resources and sequence dependent changeovers. Both formulations use a uniform time grid to keep track of all events taking place and similar sets of variables and constraints. However, due to the different representation of time, the resulting mixed-integer linear programs have different size and complexity.

3.1. Discrete-time formulation (F1)

In discrete-time formulations, the time horizon is divided into |T| − 1 time slots of equal duration, δ. This parameter is of great importance since all processing data must be converted to the time interval basis. If the processing times take arbitrary values, a very small value of δ, and consequently a large number of time intervals, may be required to consider the exact problem data. If the resulting mathematical problem becomes too difficult to solve, one can use a larger value of δ and round the duration of the tasks (τ_{i,m}) up to the next integer multiple of δ. The drawback is that suboptimal or infeasible solutions may result, since the problem being considered is an approximation of the real one. The discrete-time grid is shown in Fig. 1, where the first and last time points match the minimum release date and the maximum due date, respectively.
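As an illustration of this discretization step, the short sketch below (plain Python, hypothetical data; the names p, r, d and delta mirror p_{i,m}, r_i, d_i and δ above) rounds the processing times up to integer multiples of δ and computes the resulting number of time points.

```python
import math

delta = 1.0                                   # interval length, same time unit as the data
p = {("I1", "M1"): 3.4, ("I2", "M1"): 2.0}    # processing times p[i, m] (hypothetical)
r = {"I1": 0.0, "I2": 2.0}                    # release dates
d = {"I1": 10.0, "I2": 12.0}                  # due dates

# tau[i, m]: task duration rounded up to the next integer multiple of delta
tau = {im: math.ceil(p_im / delta) for im, p_im in p.items()}

# |T|: the grid spans from the minimum release date to the maximum due date
num_points = int(math.ceil((max(d.values()) - min(r.values())) / delta)) + 1
print(tau, num_points)    # {('I1', 'M1'): 4, ('I2', 'M1'): 2} 13
```

Rounding up (rather than to the nearest multiple) is what guarantees, as discussed in Section 6.2.3, that a feasible approximate schedule is also feasible for the original data.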


The discrete-time formulation uses only two sets of variables and constraints plus the objective function. The processing of order i on machine m starting at time point t is identified through the binary variable N_{i,m,t}, while the availability of a given machine at the same time point is given by the excess resource variable R_{m,t} (equal to one if the machine is available, zero otherwise). It is worth mentioning that the excess resource balance (Eq. (1)) ensures that the excess resource variables R_{m,t} can only take 0 or 1 values, so they can be defined as continuous variables. Due to the release and due dates, each order can only start on a subset of the total number of time points on the grid, |T|. Furthermore, since the approximated processing times (τ_{i,m}) on the various machines can be different, the number of possible starting points will also be machine dependent. The set of orders that can start at time point t on machine m is specified in I_{t,m} and its consideration significantly reduces the number of binary variables in the model. The other required subsets are given in the Nomenclature. The constraint in Eq. (1) is the excess resource balance, which is a typical multiperiod balance, where the availability of machine m at time point t is equal to that at the previous time point, minus one if there is a task starting at t, plus one if there is a task ending at t (starting at t − τ_{i,m}). Notice that the 1 represents the initial resource availability, only used at the first time point. The constraint in Eq. (2) is simpler, stating that all orders must be processed exactly once:

$$R_{m,t} = \left(1|_{t=1} + R_{m,t-1}|_{t \neq 1}\right) - \sum_{i \in I_{t,m}} N_{i,m,t} + \sum_{i \in I_{t-\tau_{i,m},m}} N_{i,m,t-\tau_{i,m}}, \quad \forall m \in M,\ t \in T \qquad (1)$$

$$\sum_{m \in M_i} \sum_{t \in T_{i,m}} N_{i,m,t} = 1, \quad \forall i \in I \qquad (2)$$

Two different objective functions will be considered. The first, minimization of total cost, is given in Eq. (3), while the minimization of total earliness is achieved through Eq. (4). In Eq. (4), the second term represents the time corresponding to the first time point, while the third term represents the ending times of all processing tasks (number of time intervals spanned by all tasks multiplied by the duration of each time interval):

$$\min \sum_{i \in I} \sum_{m \in M_i} \sum_{t \in T_{i,m}} N_{i,m,t}\, c_{i,m} \qquad (3)$$

$$\min Z = \sum_{i \in I} \left( d_i - \min_{i' \in I} r_{i'} \right) - \sum_{i \in I} \sum_{m \in M_i} \sum_{t \in T_{i,m}} N_{i,m,t}\,(t + \tau_{i,m} - 1)\,\delta \qquad (4)$$
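For concreteness, a minimal sketch of formulation F1 with the total cost objective is given below. It assumes the open-source PuLP modelling library and a toy one-machine, two-order instance; the data and the helper set It_m (orders allowed to start at each point, standing in for I_{t,m}) are hypothetical and not taken from the paper.

```python
import pulp

# Toy instance (hypothetical): 2 orders, 1 machine, delta = 1, |T| = 7
T = range(1, 8)                                   # time points 1..7
I, M = ["I1", "I2"], ["M1"]
tau = {("I1", "M1"): 2, ("I2", "M1"): 3}          # rounded durations, in intervals
cost = {("I1", "M1"): 5, ("I2", "M1"): 3}
# orders allowed to start at (t, m); here, every start that fits the horizon
It_m = {(t, m): [i for i in I if t + tau[i, m] <= len(T)] for t in T for m in M}

prob = pulp.LpProblem("F1_total_cost", pulp.LpMinimize)
N = pulp.LpVariable.dicts("N", [(i, m, t) for m in M for t in T
                                for i in It_m[t, m]], cat="Binary")
R = pulp.LpVariable.dicts("R", [(m, t) for m in M for t in T], 0, 1)  # continuous, per Eq. (1)

# Eq. (3): total processing cost
prob += pulp.lpSum(cost[i, m] * N[i, m, t] for (i, m, t) in N)

# Eq. (1): excess resource balance for every machine and time point
for m in M:
    for t in T:
        prev = 1 if t == 1 else R[m, t - 1]
        starts = pulp.lpSum(N[i, m, t] for i in It_m[t, m])
        ends = pulp.lpSum(N[i, m, t - tau[i, m]] for i in I
                          if t - tau[i, m] >= 1 and i in It_m[t - tau[i, m], m])
        prob += R[m, t] == prev - starts + ends

# Eq. (2): every order is processed exactly once
for i in I:
    prob += pulp.lpSum(N[i2, m, t] for (i2, m, t) in N if i2 == i) == 1

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

The sketch omits the release/due date filtering of I_{t,m} and the earliness objective; both follow directly from Eqs. (2) and (4).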

3.2. Continuous-time formulation (F2)

The number of time intervals can be reduced if a continuous representation of time is used instead. In such a case, the time points usually have at least one task starting or ending, and are therefore often called event points (when more time points are used than those required to find the optimal solution, there can be points where no task is starting or ending; these are, however, easily identified since they will have the same absolute time as other event points). The absolute time of all time points that compose the time grid, and hence the duration of all time intervals, is known only after solving the model. The lower bound on the absolute time of the first time point is equal to the minimum release date and the upper bound on the absolute time of the last time point is equal to the maximum due date. The continuous-time grid is shown in Fig. 2.


Fig. 2. Uniform time grid for continuous-time formulation.

The continuous-time formulation presented below, although a simplified version of the general formulation of Castro et al. (2004), has the novelty of being able to deal with release and due dates. It uses more sets of variables and constraints than its discrete-time counterpart because the time corresponding to each event point (T_t) is unknown. The other important difference is that we do not know a priori how many time intervals a particular task will span. Thus, both the starting and ending event points of the task must be considered, which means that the binary variables will have two time indices instead of one: N̄_{i,m,t,t′}. Regarding this set of variables, it will be assumed that each task can only span a limited number of time intervals: t′ ≤ t + Δt. If the value of Δt is lower than |T| − 1, then it may act as a hidden constraint by preventing us from finding the true optimal solution for a given value of |T|. The tradeoff is that lower values of Δt and |T| substantially reduce the computational effort. For this reason, one should start with relatively low values of Δt and |T| and keep solving the problem until no improvement is found in the objective function for a single increase of |T| and Δt (further details can be found in Castro et al., 2005). At that point it is assumed that the solution is the global optimum. The first two constraints (Eqs. (5) and (6)) of the MILP model are equivalent to Eqs. (1) and (2), but now one more summation is required to find the exact event point where the task ends (if started at t), or starts (if it ends at t). Note also that all orders can start at any time point but the last. This is because we do not know for sure which variables can be eliminated from the formulation without compromising the optimal solution, even though it is expected that orders with earlier due dates will end at lower event points. The next set of constraints relates the absolute time of two time points to the processing time of the task occurring between those two points. Although it can be written either for every order i or for every machine m, the latter option is preferred (Eq. (7)) since it leads to better performance. To ensure that the release dates are respected and that the due dates are met, Eqs. (8) and (9) are required. While the former states that if order i starts at time point t the absolute time of point t cannot be lower than its release date, the latter is formulated as a big-M constraint (active if there is an order being processed on machine m that actually ends at t; otherwise the constraint is relaxed to Eq. (11)). Finally, Eqs. (10) and (11) place appropriate lower and upper bounds on the timing variables:

$$R_{m,t} = \left(1|_{t=1} + R_{m,t-1}|_{t \neq 1}\right) - \sum_{i \in I_m} \sum_{\substack{t' \in T \\ t < t' \leq t+\Delta t}} \bar{N}_{i,m,t,t'} + \sum_{i \in I_m} \sum_{\substack{t' \in T \\ t-\Delta t \leq t' < t}} \bar{N}_{i,m,t',t}, \quad \forall m \in M,\ t \in T \qquad (5)$$

$$\sum_{m \in M_i} \sum_{t \in T} \sum_{\substack{t' \in T \\ t < t' \leq t+\Delta t}} \bar{N}_{i,m,t,t'} = 1, \quad \forall i \in I \qquad (6)$$

$$T_{t'} - T_t \geq \sum_{i \in I_m} \bar{N}_{i,m,t,t'}\, p_{i,m}, \quad \forall m \in M,\ t, t' \in T,\ t < t' \leq t + \Delta t \qquad (7)$$

$$T_t \geq \sum_{i \in I_m} \sum_{\substack{t' \in T \\ t < t' \leq t+\Delta t}} \bar{N}_{i,m,t,t'}\, r_i, \quad \forall m \in M,\ t \in T,\ t \neq |T| \qquad (8)$$

$$T_t \leq \sum_{i \in I_m} \sum_{\substack{t' \in T \\ t-\Delta t \leq t' < t}} \bar{N}_{i,m,t',t}\, d_i + H\left(1 - \sum_{i \in I_m} \sum_{\substack{t' \in T \\ t-\Delta t \leq t' < t}} \bar{N}_{i,m,t',t}\right), \quad \forall m \in M,\ t \in T,\ t \neq 1 \qquad (9)$$

$$T_t \geq \min_{i \in I} r_i, \quad \forall t \in T \qquad (10)$$

$$T_t \leq H = \max_{i \in I} d_i, \quad \forall t \in T \qquad (11)$$
The above set of constraints, together with Eq. (14), defines the total cost minimization problem. For the minimization of total earliness (Eq. (15)), one additional set of positive continuous variables DD_i, the delivery date of order i, and two more sets of big-M constraints must be defined. Note that in Eq. (12) the delivery date of order i must not be lower than the absolute time of event point t if the order ends at or after t. Similarly, in Eq. (13) the delivery date must not be higher than T_t if the order ends at or before t:

$$DD_i \geq T_t - H\left(1 - \sum_{m \in M_i} \sum_{\substack{t'' \in T \\ t'' \geq t}} \sum_{\substack{t' \in T \\ t''-\Delta t \leq t' < t''}} \bar{N}_{i,m,t',t''}\right), \quad \forall i \in I,\ t \in T,\ t \neq 1 \qquad (12)$$

$$DD_i \leq T_t + d_i\left(1 - \sum_{m \in M_i} \sum_{\substack{t'' \in T \\ t'' \leq t}} \sum_{\substack{t' \in T \\ t''-\Delta t \leq t' < t''}} \bar{N}_{i,m,t',t''}\right), \quad \forall i \in I,\ t \in T,\ t \neq 1 \qquad (13)$$

$$\min \sum_{i \in I} \sum_{m \in M_i} \sum_{t \in T} \sum_{\substack{t' \in T \\ t < t' \leq t+\Delta t}} \bar{N}_{i,m,t,t'}\, c_{i,m} \qquad (14)$$

$$\min \sum_{i \in I} (d_i - DD_i) \qquad (15)$$
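To see how this big-M pair works, take a hypothetical order i that ends exactly at event point 3, so that the inner summations equal 1 for t ≤ 3 in Eq. (12) and for t ≥ 3 in Eq. (13), and 0 otherwise. The two constraints then reduce to

$$DD_i \geq T_t \ \ (t = 2, 3), \qquad DD_i \leq T_t \ \ (t = 3, 4, \ldots, |T|),$$

and, since the event times are non-decreasing, they pin the delivery date to DD_i = T_3; for the remaining combinations the right-hand sides relax to T_t − H and T_t + d_i, which are never binding.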


4. New multiple time grid, continuous-time formulation (F3)

The idea of using multiple time grids, one for each machine, instead of a single time grid is not a new one. Multiple time grid formulations have been used to solve the short-term scheduling problem of multipurpose plants (see the recent papers of Giannelos and Georgiadis, 2002a, 2002b; Janak et al., 2004). The problem in multipurpose plants is that features like the transfer of material between units of consecutive stages and/or the existence of resources that are shared by several tasks occurring simultaneously in different equipment units require the use of complex constraints, which in some cases have been shown not to be entirely general, meaning that they can lead to infeasible solutions. Overall, multiple time grid continuous-time formulations have so far not proved to be more efficient than their uniform time grid counterparts. In cases where common resources like utilities are not an issue and machines belonging to the same processing stage are independent, the use of multiple time grids is potentially attractive. In the proposed formulation, each machine uses a time grid similar to the one given in Fig. 2. All |M| time grids are totally independent, meaning that no relation is assumed between time points of different grids. Nevertheless, the time grids have two things in common: (i) the same number of time points (it is straightforward to use a different number of time points on each machine, but this would lead to the extra problem of choosing how many time points to use on a particular machine); (ii) the same lower and upper bounds on the first and last time points, the minimum release date and the maximum due date, respectively. The advantage of considering |M| independent time grids is that we can assume, without loss of generality, that all processing tasks last exactly one time interval. Thus, it is sufficient to consider a single time index that identifies the time point at which the order starts to be processed. The execution of order i on machine m at time point t is identified through the binary variable N_{i,m,t}, just like in the discrete-time formulation, whereas the continuous variables that represent the time corresponding to the time points now have two indices: T_{t,m}. Variables R_{m,t} have the same meaning as in the two previous models, but based on computational experience it is better to consider them as binary variables. It is important to note that if a task starts at t it does not necessarily end at t + 1; it can end somewhere in between the two time points, more specifically at time T_{t,m} + p_{i,m} (see orders I1 and I2 in Fig. 3). Naturally, it is also possible that the optimal solution features orders ending exactly at the subsequent time point (see orders I3, I4 and I5 in Fig. 3). These extra degrees of freedom ensure that the number of time points required on time grid m is equal to the number of orders assigned to that machine plus one, i.e. when solving the single machine problem (see Section 6.2.3) the number of event points that ensures feasibility is the same number that ensures global optimality. For the multiple machine problem the procedure to find the global optimum solution is similar to that described for the uniform time grid continuous-time formulation, but now the minimum

Fig. 3. Possible solution from multiple time grid continuous-time formulation (|I| = 5, |M| = 3, |T| = 3).

number of time points that can lead to feasibility is given by rounding up the expression |I|/|M| + 1 to the next integer. Whenever the number of orders assigned to a machine is lower than |T| − 1, there is at least one time point whose absolute time is free to vary. For example in Fig. 3, for machine M3, the absolute time of the third time point is set to its upper bound (H) but it could as easily be set to T_{2,3}, if the objective function does not depend on these variables (e.g. total cost minimization). When minimizing total earliness, the objective function ensures that the absolute time of all time points that are not starting points of tasks are equal to their upper bound (H), which is particularly useful since we know a priori how many time points will be in that situation (|M| × (|T| − 1) − |I|). As shown below, the model constraints of the multiple time grid formulation are very similar to the constraints of their uniform grid counterpart, with the great difference that no additional sets of variables and constraints are required for the minimization of total earliness. There are, however, a couple of differences worth highlighting. The first is that in Eq. (20) the focus is on the starting point of the task and not on its ending point (see Eq. (9)). It states that if order i is executed on machine m at time point t, then the sum of the absolute time of point t (of grid m) plus the duration of the processing task (p_{i,m}) must not exceed the order's due date. The second difference is that the objective of total earliness minimization, Eq. (24), is now defined with a completely different expression, where the second term accounts for the delivery dates of all orders and the third subtracts the absolute time of all time points at which no task starts:

$$R_{m,t} = \left(1|_{t=1} + R_{m,t-1}|_{t \neq 1}\right) - \sum_{i \in I_m} N_{i,m,t} + \sum_{i \in I_m} N_{i,m,t-1}, \quad \forall m \in M,\ t \in T \qquad (16)$$

$$\sum_{m \in M_i} \sum_{t \in T} N_{i,m,t} = 1, \quad \forall i \in I \qquad (17)$$

$$T_{t+1,m} - T_{t,m} \geq \sum_{i \in I_m} N_{i,m,t}\, p_{i,m}, \quad \forall m \in M,\ t \in T,\ t \neq |T| \qquad (18)$$

$$T_{t,m} \geq \sum_{i \in I_m} N_{i,m,t}\, r_i, \quad \forall m \in M,\ t \in T,\ t \neq |T| \qquad (19)$$

$$T_{t,m} \leq \sum_{i \in I_m} N_{i,m,t}\,(d_i - p_{i,m}) + H\left(1 - \sum_{i \in I_m} N_{i,m,t}\right), \quad \forall m \in M,\ t \in T,\ t \neq |T| \qquad (20)$$

$$T_{t,m} \geq \min_{i \in I} r_i, \quad \forall m \in M,\ t \in T \qquad (21)$$

$$T_{t,m} \leq H = \max_{i \in I} d_i, \quad \forall m \in M,\ t \in T \qquad (22)$$

$$\min \sum_{i \in I} \sum_{m \in M_i} \sum_{t \in T} N_{i,m,t}\, c_{i,m} \qquad (23)$$

$$\min \sum_{i \in I} d_i - \sum_{\substack{t \in T \\ t \neq |T|}} \sum_{m \in M} \left( T_{t,m} + \sum_{i \in I_m} N_{i,m,t}\, p_{i,m} \right) - H\left[\,|I| - |M|(|T| - 1)\,\right] \qquad (24)$$
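The sketch below illustrates formulation F3 under the same hedges as before: it uses PuLP, invents a tiny two-machine instance, and only writes the total cost variant (Eqs. (17)–(23)); the grid size follows the ceil(|I|/|M|) + 1 rule quoted above, and the single-interval capacity condition is written directly as "at most one start per point", which is what the resource balance Eq. (16) enforces when every task spans one interval.

```python
import math
import pulp

# Hypothetical instance: 3 orders, 2 machines (machine-independent data for brevity)
I, M = ["I1", "I2", "I3"], ["M1", "M2"]
p = {(i, m): dur for i, dur in zip(I, (4, 3, 5)) for m in M}    # processing times
c = {(i, m): ct for i, ct in zip(I, (2, 1, 3)) for m in M}      # processing costs
r = {"I1": 0, "I2": 1, "I3": 0}                                  # release dates
d = {"I1": 9, "I2": 10, "I3": 12}                                # due dates
H = max(d.values())
nT = math.ceil(len(I) / len(M)) + 1           # minimum grid size that can be feasible
T = range(1, nT + 1)

prob = pulp.LpProblem("F3_total_cost", pulp.LpMinimize)
N = pulp.LpVariable.dicts("N", [(i, m, t) for i in I for m in M for t in T if t < nT],
                          cat="Binary")
Tt = pulp.LpVariable.dicts("T", [(t, m) for t in T for m in M],
                           lowBound=min(r.values()), upBound=H)   # Eqs. (21)-(22)

prob += pulp.lpSum(c[i, m] * N[i, m, t] for (i, m, t) in N)       # Eq. (23)

for i in I:                                                        # Eq. (17)
    prob += pulp.lpSum(N[i, m, t] for m in M for t in T if t < nT) == 1

for m in M:
    for t in T:
        if t < nT:
            starts = pulp.lpSum(N[i, m, t] for i in I)
            prob += starts <= 1                  # at most one start per point (role of Eq. (16))
            prob += Tt[t + 1, m] - Tt[t, m] >= pulp.lpSum(
                p[i, m] * N[i, m, t] for i in I)                   # Eq. (18)
            prob += Tt[t, m] >= pulp.lpSum(r[i] * N[i, m, t] for i in I)          # Eq. (19)
            prob += Tt[t, m] <= pulp.lpSum(
                (d[i] - p[i, m]) * N[i, m, t] for i in I) + H * (1 - starts)      # Eq. (20)

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```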

This new multiple time grid formulation considers that batch sizes are not problem variables and that unit capacities are not required as problem data. As a consequence, the processing times are constant but unit dependent. However, these are not actual limitations of the model, since appropriate continuous variables, to model the amount of material associated with the execution of each order, and appropriate constraints, to ensure that the equipment capacities are not exceeded, can easily be added (see the single time grid continuous-time formulation from which this new formulation originates, Castro et al., 2004). These will also allow handling batch size dependent processing times. Another issue that can be mistaken for a limitation of the model is the fact that any order can only be processed once throughout the time horizon. This simply means that we may need to use several orders to account for different batches of the same product (or even the same batch sizes with different due dates), and consequently more variables and constraints. However, we believe that this option leads to a more efficient model than the one that would be derived from allowing an order to be processed more than once throughout the time horizon. The real limitation of the model is that it cannot handle limited supplies of manufacturing resources other than equipment units. Nevertheless, if a given resource is only produced and consumed once over the time horizon, e.g. material resources associated with a given stage of production, additional timing variables can be employed to ensure that a given material resource can only be consumed after being produced. This technique has been used to extend the model to multistage, multiproduct short-term scheduling problems (Castro and Grossmann, 2005). It is also worth mentioning that the same paper also considers the objective of makespan minimization, another important objective function in short-term scheduling problems. Preliminary results have shown that it is also possible to extend the formulation to efficiently handle sequence-dependent changeover times, thus relaxing one of the most important assumptions made here. This will be the subject of a future paper.

5. Other approaches

The single stage parallel scheduling problem can also be solved by other approaches. The three alternative models tested are essentially those described in Jain and Grossmann (2001). Thus, instead of showing the detailed models we will focus on the main characteristics of each method and highlight its differences and similarities to the formulations shown in Sections 3 and 4. Nevertheless, the changes made to the original models, to implement recent developments or to adapt them to the objective of total earliness minimization, will also be mentioned.

5.1. MILP model with sequencing variables (F4)

Not all continuous-time formulations need to consider one or more time grids explicitly. If one considers binary assignment variables x_{i,m} and sequencing variables y_{i,i′}, no time indices are required to generate a MILP that can solve the problem at hand. In this way, only one problem needs to be solved to find the optimal solution, instead of a few problems (in the search for the adequate number of event points, |T|, see Sections 3.2 and 4). Other advantages include the possibility of enforcing or forbidding certain product sequences and of handling sequence-dependent changeovers in a simple way. Enforcing and/or forbidding certain product sequences means considering fewer binary sequencing variables, while sequence dependent changeovers are treated with exactly the same constraints (one more term is added to the timing constraints relating the starting times of orders i and i′). These two problem characteristics, which are very common in practice, can be difficult to implement in the RTN-based formulations and usually involve adding more variables and constraints, thus increasing their complexity. The original model of Jain and Grossmann (2001) has logic cuts (Eqs. (22) and (23) of their paper) that involve a large number of constraints (for example, their Eq. (23) uses four indices: i, i′, m, m′). We found that removing these from the formulation generally yields better computational performance for total cost minimization, while for total earliness minimization the results were not conclusive. For the latter objective, the model can be adapted by changing Eq. (17) of their original work into an equality, replacing d_i by DD_i and then using Eq. (15) of this paper as the objective function. A sketch of how assignment, sequencing and timing variables interact in this type of model is given below.
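The fragment below is a generic sketch of an assignment/sequencing MILP of this kind, written with PuLP for a hypothetical two-order, one-machine case; it is not the exact model of Jain and Grossmann (2001), only an illustration of how x_{i,m}, y_{i,i′} and big-M timing constraints fit together.

```python
import pulp

I, M = ["I1", "I2"], ["M1"]
p = {("I1", "M1"): 4, ("I2", "M1"): 3}
c = {("I1", "M1"): 2, ("I2", "M1"): 1}
r = {"I1": 0, "I2": 1}
d = {"I1": 9, "I2": 10}
H = max(d.values())                                 # big-M value

prob = pulp.LpProblem("F4_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, m) for i in I for m in M], cat="Binary")
y = pulp.LpVariable.dicts("y", [("I1", "I2")], cat="Binary")   # 1 if I1 precedes I2
ts = pulp.LpVariable.dicts("ts", I, lowBound=0)                # order start times

prob += pulp.lpSum(c[i, m] * x[i, m] for i in I for m in M)    # total cost

for i in I:
    prob += pulp.lpSum(x[i, m] for m in M) == 1                # assign each order once
    prob += ts[i] >= r[i]                                      # release date
    for m in M:
        prob += ts[i] + p[i, m] * x[i, m] <= d[i]              # due date if assigned to m

# sequencing on the shared machine: either I1 finishes before I2 starts, or vice versa;
# the constraint is only active when both orders are assigned to the machine
m = "M1"
prob += ts["I2"] >= ts["I1"] + p["I1", m] - H * (3 - y["I1", "I2"] - x["I1", m] - x["I2", m])
prob += ts["I1"] >= ts["I2"] + p["I2", m] - H * (2 + y["I1", "I2"] - x["I1", m] - x["I2", m])

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

A changeover term would simply be added to the p[i, m] contribution in the two sequencing constraints, which is the simplicity the text refers to.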

5.2. Constraint programming model (F5)

The same scheduling problem can also be modeled using constraint programming (CP). CP models, in contrast to MILP models, are highly dependent on the CP package used to model the problem. In this paper we use ILOG's OPL modeling language (Van Hentenryck, 1999), which has a set of constructs especially designed for scheduling problems. The basic OPL modeling framework is similar to the resource task network in how it looks at the problem: a set of activities (tasks) that need to be performed using a certain set of resources, where the equipment resources (the machines) are defined as unary resources. CP models can be viewed as discrete-time models with intervals of one time unit length, since all variables of a given activity (start, duration and end, with start + duration = end) are integer variables. The CP model used was that of Jain and Grossmann (2001), where the objective for total earliness minimization can be defined by Eq. (25):

$$\min \sum_{i \in I} (d_i - i.\mathrm{end}) \qquad (25)$$

where i.end denotes the end-time variable of the activity associated with order i.

5.3. Hybrid MILP/CP model (F6)

The strengths of both models can be combined by using a simplified version of the MILP for the assignments, and then solving |M| single machine problems with CP to sequence the orders that were assigned to each machine. When minimizing the total cost (defined in the MILP by $\sum_{i}\sum_{m} x_{i,m} c_{i,m}$), this decomposition strategy has the advantage of always leading to the global optimum solution, since the objective function only depends on the assignment variables and the simplified MILP is a less constrained model than the full MILP. The only drawback is that the assignments may be infeasible on one or more machines, but that can be overcome simply by adding integer cuts to the MILP and iterating until all CP sequencing problems are feasible. While the same decomposition strategy can be used when minimizing earliness, or generally when considering an objective that is a function of the sequencing variables, the performance of this decomposition is likely to worsen, as the CP then has to solve an optimization problem rather than a feasibility problem. Following the work of Jain and Grossmann (2001), Bockmayr and Pisaruk (2003) generalized the integer cuts and used them in a branch and cut framework, while Sadykov and Wolsey (in press) proposed a tighter formulation for the MILP and explored several integrated schemes. Also, Maravelias and Grossmann (2004b) have proposed a pre-processing algorithm that generates knapsack constraints or cover cuts for certain subsets of orders that can be added to the cut pool of the MILP a priori. The pre-processing algorithm was shown to reduce the computational effort by one order of magnitude on the set of instances studied by Jain and Grossmann (2001), and so this work uses the knapsack constraints proposed by Maravelias and Grossmann (2004b) for the single stage plant. The overall iterative scheme is sketched below.
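A plain-Python skeleton of the decomposition loop follows; the assignment MILP and the single machine sequencing check are passed in as callables (solve_assignment_milp and sequence_with_cp are hypothetical names), since the point here is only the cut-and-iterate control flow described above.

```python
def hybrid_milp_cp(orders, machines, solve_assignment_milp, sequence_with_cp,
                   max_iters=100):
    """Cut-and-iterate loop of the hybrid strategy: assignment MILP ->
    per-machine CP sequencing check -> integer cuts.  Both solvers are stubs."""
    cuts = []                                    # integer cuts accumulated so far
    for _ in range(max_iters):
        # 1) Solve the assignment-only MILP with all cuts (plus any a priori
        #    knapsack/cover cuts); returns {machine: set of orders} or None.
        assignment = solve_assignment_milp(orders, machines, cuts)
        if assignment is None:
            return None                          # no assignment satisfies the cuts
        # 2) For each machine, ask CP whether its orders can be sequenced within
        #    their release and due dates (a pure feasibility question).
        infeasible = [m for m in machines if not sequence_with_cp(m, assignment[m])]
        if not infeasible:
            return assignment                    # all subproblems feasible: optimal for total cost
        # 3) Forbid each infeasible (machine, order-subset) combination and iterate.
        cuts.extend((m, frozenset(assignment[m])) for m in infeasible)
    return None
```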

6. Computational results

In this section, the performance of the mathematical formulations is illustrated through the solution of a large set of benchmark problems. These are numbered from 1 to 13, and identified through two additional characters where the last identifies the objective function being considered (e.g. P1C stands for problem 1, total cost minimization, while P11E stands for problem 11, minimization of total earliness). Problems P1–P6 correspond to the single stage example problems 3.1–5.2 of Harjunkoski and Grossmann (2002). The data for problems P7–P10 is based on the values given in Table 1, where P7–P8 comprise the first 25 orders and P9–P10 the full set of orders. In P8, the processing times were increased by 5% and rounded to the closest integer value, while in P9 they were decreased by 20% (and rounded).

Table 1
Data for problems P7–P10

Order   Dates (day)        p_{i,m} (day)/c_{i,m}
        r_i     d_i        M1      M2      M3      M4      M5
I1      66      110        30/5    53/2    39/4    36/4    56/1
I2      40      188        30/5    50/2    50/2    57/1    43/3
I3      65      163        35/4    45/3    42/3    45/3    41/3
I4      28      137        46/3    40/3    38/4    41/3    31/5
I5      36      221        46/3    34/4    50/2    55/1    57/1
I6      10      126        47/3    49/2    31/5    29/5    54/2
I7      56      286        38/4    55/1    33/4    34/4    51/2
I8      50      237        29/5    57/1    42/3    42/3    49/2
I9      20      254        48/2    54/2    50/2    38/4    40/3
I10     26      119        32/5    39/4    50/2    42/3    39/4
I11     46      229        33/4    48/2    49/2    53/2    34/4
I12     78      189        31/5    57/1    49/2    52/2    42/3
I13     88      159        53/2    40/3    42/3    44/3    36/4
I14     53      219        28/5    55/1    29/5    28/5    57/1
I15     46      281        51/2    58/1    33/4    53/2    40/3
I16     95      269        43/3    57/1    32/5    39/4    44/3
I17     94      200        38/4    39/4    45/3    37/4    49/2
I18     12      258        55/1    34/4    58/1    56/1    40/3
I19     72      142        54/2    53/2    49/2    44/3    38/4
I20     12      184        28/5    57/1    38/4    43/3    51/2
I21     66      294        33/4    55/1    36/4    43/3    48/2
I22     29      184        54/2    58/1    49/2    47/2    31/5
I23     2       295        48/2    54/2    49/2    33/4    31/5
I24     99      156        50/2    29/5    37/4    40/3    45/3
I25     81      142        43/3    53/2    41/3    33/4    38/4
I26     3       270        42/3    50/2    33/4    52/2    37/4
I27     45      277        41/3    54/1    57/1    43/3    49/2
I28     2       134        49/2    50/2    40/3    37/4    45/3
I29     16      170        54/2    37/4    48/2    48/2    43/3
I30     75      157        57/1    43/3    57/1    52/2    37/4

The data for problems P11–P13 was taken from Méndez and Cerdá (2003) and is given in Table 2, where P11 considers the first 12 orders, P12 the first 29 orders and P13 the complete set of 40 orders. All MILP models were solved to optimality (1E-6 relative tolerance) or to a maximum resource limit of 1 h of computational time (due to the large number of problems being solved, 3600 CPUs was used as the time limit), on a Pentium-4 2.8 GHz machine, running the commercial solver GAMS/CPLEX 9.0. The CP and hybrid MILP/CP models were implemented and solved in ILOG's OPL Studio 3.7, on the same machine. The results in terms of computational effort are given in Table 3 for total cost minimization and in Table 4 for total earliness minimization. More detailed computational statistics are given in Tables 5–12 to make it easier for other researchers to reproduce the results and hence validate their models.

6.1. Minimization of total cost

As can be seen from Table 3, the discrete-time formulation (F1) is the most efficient formulation overall because it is the most consistent, even though other formulations perform better in 7 of the 10 example problems.


Table 2 Data for problems P11E–P13E Order

di (day)

pi,m (day)

Order

M1 I1 I2 I3 I4 I5 I6 I7 I8 I9 I10 I11 I12 I13 I14 I15 I16 I17 I18 I19 I20

15 30 22 25 20 30 21 26 30 29 30 21 30 25 24 30 30 30 13 19

M2

M3

1.718 1.680 1.787 1.564 0.736 5.443 5.045

3.025 1.500 1.869 1.457 3.925 6.971

7.000

11.430 2.812 5.180 1.430 4.654 1.604 3.305 2.604

1.074

di (day)

pi,m (day)

M4 1.424 1.019 1.048 2.373 1.247 3.430 3.444 1.670 2.689 3.230 5.830 6.946 1.757 3.215 1.013 3.266 2.917 1.830

M1 I21 I22 I23 I24 I25 I26 I27 I28 I29 I30 I31 I32 I33 I34 I35 I36 I37 I38 I39 I40

30 20 12 30 17 20 11 30 25 26 22 18 15 10 10 14 24 16 22 23

M2

7.497

M3 3.614 0.864 3.624 2.667 3.448

6.132 4.004 6.590 5.680

1.744 2.698

M4

4.230 5.132 1.987 4.167 3.465 4.516 2.384 1.593 3.884

2.322 3.445 3.660

2.658 2.780 2.433 2.320

2.194

2.545 2.210

2.080 2.065

Table 3
Overview of computational performance (CPUs) for total cost minimization (F1: discrete-time MILP; F2–F4: continuous-time MILP; F5: CP; F6: hybrid MILP/CP)

Problem                          F1       F2         F3        F4         F5         F6
P1C (12 orders, 3 machines)      18.9     3600 b,c   6.18      15.2       0.98       0.44
P2C (12 orders, 3 machines)      5.67     0.13       0.04      0.05       0.13       0.03
P3C (15 orders, 5 machines)      13.9     –          3.82      33.3       97.3       0.63
P4C (15 orders, 5 machines)      5.48     –          0.14      5.32       6.53       0.45
P5C (20 orders, 5 machines)      7.25     –          443       421        3600 b,d   0.99
P6C (20 orders, 5 machines)      35.6     –          0.74      3600 b,d   2326       0.77
P7C (25 orders, 5 machines)      2.47     –          220       3600 b,d   518        172
P8C (25 orders, 5 machines)      4.70     –          29.1      3600 b,d   4133       968
P9C (30 orders, 5 machines)      13.5     –          13.6      107.8      183        46.8
P10C (30 orders, 5 machines)     27.2     –          3600 b    3600 b,d   3600 b,d   3600 b,c

a Problem/model. b Maximum resource limit. c No solution found. d Suboptimal solution returned.

Despite the large number of time points used, ranging from 294 to 381 (δ was set to 1 in order to consider the exact problem data), the most difficult instance (P6C) requires only 35.6 s to solve. The MILPs resulting from F1 are the largest of all four MILP formulations considered, but are also those that exhibit the lowest integrality gaps. Another interesting result is that the optimal solution is usually found in the first nodes of the search tree and that few nodes are required to prove optimality. The results also show that the effect of the number of time points on the computational effort is at least as important as the effect of the number of orders and machines, since P1C, which features 12 orders on 3 machines and 381 time points, takes more time to solve than P9C, which features 30 orders on 5 machines and 294 time points. The discrete-time formulation is also the only one that can solve P10C to global optimality (see Table 8). The optimal solution is shown in Fig. 4,

Fig. 4. Optimal solution for problem P10C.


Table 4
Overview of computational performance (CPUs) for total earliness minimization (F1: discrete-time MILP; F2–F4: continuous-time MILP; F5: CP)

Problem                          F1           F2       F3         F4         F5
P1E (12 orders, 3 machines)      5.61         –        0.37       1663       4.56
P2E (12 orders, 3 machines)      1.56         –        8.66       95.0       19.0
P3E (15 orders, 5 machines)      1.21         –        7.30       3600 b,c   563 d,c
P4E (15 orders, 5 machines)      0.70         –        3600 b     3226       1467
P5E (20 orders, 5 machines)      10.2         –        1232       –          –
P6E (20 orders, 5 machines)      1.23         –        3600 b     –          –
P7E (25 orders, 5 machines)      0.98         –        –          –          –
P8E (25 orders, 5 machines)      0.78         –        –          –          –
P9E (30 orders, 5 machines)      1.63         –        –          –          –
P10E (30 orders, 5 machines)     2.09         –        –          –          –
P11E (12 orders, 4 machines)     7.35         5256     0.38       0.30       300 e
P12E (29 orders, 4 machines)     300 e        –        209        3600 b,c   3600 b,c,e
P13E (40 orders, 4 machines)     3600 b,f,e   –        3600 b,f   –          –

Note: P1E–P10E have integer processing times while P11E–P13E have continuous times.
a Problem/model. b Maximum resource limit. c Suboptimal solution returned. d Solver terminated due to unknown error. e Approximated solution. f Best solution found; may or may not be the optimal solution.

Table 5 Computational statistics for problems P1C–P2C Problem model

|T| Discrete variables Single variables Constraints RMIP Obj CPU Nodes Choice points Major iterations

P1C

P2C

F1

F2

F3

F4

F5

381 6948 8092 1156 101.86 104 18.9 153

12 1620 1669 250 84.67 – 3600a 864752

6 198 217 76 98.10 104 6.18 14443

168 181 358 97.89 104 15.2 10469

F6

72 84 104 0.98

104 0.44

F2

F3

F4

F5

381 8914 10058 1156 84.92 85 5.67 7

9 540 577 133 84.54 85 0.13 0

9 315 343 112 84.9 85 0.04 0

168 181 358 84.88 85 0.05 0

F6

72 84 85 0.13

3593

85 0.03

818

Total cuts a

F1

3

1

109

31

Resource limit exceeded (best possible solution = 88.01, total tree size = 590 MB).

Table 6 Computational statistics for problems P3C–P4C Problem model

|T| Discrete variables Single variables Constraints RMIP Obj CPU Nodes Choice points Major iterations Total cuts

P3C

P4C

F1

F3

F4

371 13320 15176 1871 116 116 13.9 15

6 405 436 121 113.03 116 3.82 6745

285 301 771 112.73 116 33.3 10356

F5

F6

90 135 116 97.3

116 0.63

424969

F1

F3

F4

371 16950 18806 1871 104 105 5.48 1

8 565 606 161 104 105 0.14 18

285 301 771 103.78 105 5.32 2704

F5

F6

90 135 105 6.53

105 0.45

22791 2 302

2 174


Table 7 Computational statistics for problems P7C–P8C Problem model

|T| Discrete variables Single variables Constraints RMIP Obj CPU Nodes Choice points Major iterations

P7C

P8C

F1

F3

F4

F5

294 13548 15019 1496 50.25 51 2.47 0

6 655 686 131 45.21 51 220 172156

725 751 2156 46.72 54 3600a 398140

F6

150 225 51 518

51 172

a

F3

F4

F5

294 13273 14744 1496 53.71 54 4.70 0

6 655 686 131 48.13 54 29.1 19527

725 751 2156 46.98 60 3600b 487058

F6

150 225 54 4133

829542

54 968

6538160

Total cuts b

F1

50

53

933

954

Resource limit exceeded (best possible solution = 47.13, total tree size = 176 MB). Resource limit exceeded (best possible solution = 47.26, total tree size = 240 MB).

Table 8 Computational statistics for problems P9C–P10C Problem model

|T| Discrete variables Single variables Constraints RMIP Obj CPU Nodes Choice points Major iterations

P9C

P10C

F1

F3

F4

294 18063 19534 1501 53 53 13.5 8

7 935 971 156 50 53 13.6 8379

1020 1051 3111 51.44 53 107.8 11470

F5

F6

180 270 53 183

53 46.8

b c d

F3

F4

294 16753 18224 1501 73.85 75 27.2 3

7 935 971 156 69.72 75 3600a 1836931

1020 1051 3111 59.62 98 3600b 68467

F5

F6

180 270 83 107c

228701

– 3600d

259260

Total cuts a

F1

12

1

1150

1163

Resource limit exceeded (best possible solution = 73.33, total tree size = 228 MB). Resource limit exceeded (best possible solution = 61.55, total tree size = 57 MB). Computational time required to find the solution reported (solver terminated at 3600 CPUs). Resource limit exceeded for first MILP (integer solution = 75, best possible solution = 67.15).

Table 9 Computational statistics for problems P3E–P4E Problem model

|T| Discrete variables Single variables Constraints RMIP Obj CPU Nodes Choice points a b c

P3E

P4E

F1

F3

F4

371 13320 15176 1871 424 424 1.21 0

6 405 436 121 154.86 424 7.30 7758

285 316 2976 0 489 3600b 363445

F5

75 135 444a 563 3006143

Solver terminated due to unknown error. Resource limit exceeded (best possible solution = 0, total tree size = 144 MB). Resource limit exceeded (best possible solution = 0, total tree size = 223 MB).

F1

F3

F4

371 16950 18806 1871 24 24 0.7 0

5 325 351 101 0 24 3600c 2685905

285 316 2976 0 24 3226 649506

F5

75 135 24 1467 7441021


Table 10 Computational statistics for problems P5E–P10E Problem model

P5E

|T| Discrete variables Single variables Constraints RMIP Obj CPU Nodes a

P6E

F1

F3

F1

F3

381 18790 20696 1926 782 787 10.2 13

8 740 781 166 471.10 787 1232 753565

381 23320 25226 1926 84 84 1.23 0

6 530 561 126 0 84 3600a 1926341

P7E (F1)

P8E (F1)

P9E (F1)

P10E (F1)

294 13548 15019 1496 51 51 0.98 0

294 13273 14744 1496 85 85 0.78 0

294 18063 19534 1501 81 81 1.63 0

294 18224 16753 1501 298 298 2.09 0

Resource limit exceeded (best possible solution = 0, total tree size = 436 MB).

Table 11 Computational statistics for problems P11E–P13E Problem model

P11E

|T| Discrete variables Single variables Constraints RMIP Obj CPU Nodes Choice points a b c d

P12E

F1

F2

F3

F4

3001 53955 66000 12017 1.03 1.03 7.35 0

10 875 938 445 0 1.026 5256 1508390

5 120 141 65 0 1.026 0.38 470

157 182 512 0 1.026 0.30 22

F5

60 84 – 1.03 300 – 20931

P13E

F1

F3

F4

F5

601 23190 25595 2434 60.183 61.35 300 892

11 614 659 154 37.12 59.896 209 183206

869 928 2845 0 64.441 3600a 550451

145 203 – 68.05 3355b

F1

F3

301 14189 15394 1245 125.49 133.9 3600c 29878

14 1018 1075 201 85.29 126.927 3600d 1257613

12935204

Resource limit exceeded (best possible solution = 23.137, total tree size = 263 MB). Computational time required to find the solution reported (solver terminated at 3600 CPUs). Resource limit exceeded (best possible solution = 130.47, total tree size = 371 MB). Resource limit exceeded (best possible solution = 112.14, total tree size = 365 MB).

where it can be seen that the 30 orders have been equally divided between the five machines. The constraint programming model (F5), like the discrete-time formulation, is also limited to integer data. This is the only resemblance to F1, since CP uses far fewer variables and constraints and its performance is highly dependent on the problem size. As the size increases, so does the number of choice points and the computational effort. While problems P1C/P2C are solved in less than one second, P3C/P4C, P7C and P9C take several seconds to solve, and P6C and P8C are solved to optimality in roughly 1 h; for P5C/P10C, however, the optimal solution cannot be found in 1 h of computational time (Harjunkoski

and Grossmann, 2002, report 189,244 s to find the optimum of P5C). CP models have the advantage of finding reasonably good solutions very fast and improving from there, although more computational time does not necessarily mean better solutions, e.g. the reported solutions for P5C and P10C were obtained in just 57 and 107 s but no better solutions were found up to 1 h of computational time. The results in Table 5 show that the uniform time grid continuous-time formulation (F2) has the worst performance of all models tested. Interestingly, it has completely distinct performances for problems P1C and P2C. While the former is intractable, the latter is solved in just 0.13 s. This difference

Table 12 Computational statistics for single machine problems (problem P13E) Model

|T|

Discrete variables

Single variables

F1 + F3 M1 M2 M3 M4 F1 + F4 M1 M2 M3 M4

– 8 10 14 12

– 49 81 169 121

– 66 102 198 146

– 30 38 54 46

– – – –

49 81 169 121

64 100 196 144

100 164 340 244

Constraints

RMIP

MIP

CPUs

Nodes

– 23.07 12.36 31.93 24.21

126.949 30.849 21.616 42.880 31.604 126.949 30.849 21.616 42.880 31.604

3601 0.13 0.13 0.26 0.14 3612 0.24 0.14 8.8 0.23

– 4 9 69 50 – 28 88 34145 612

0 0 0 0


in behaviour is better understood by recalling that the computational effort increases substantially with an increase in the number of event points (|T|) and/or the maximum number of time intervals that a particular task can span (t). While P1C requires |T| = 12 and t = 5 to find the global optimum (this can be confirmed by using the solution from F3 to fix the binary variables of model F2), P2C requires only |T| = 9 and t = 2. Since an increase in the number of orders leads to an increase in the number of event points required to solve the problem, there is no point in solving the other problems with model F2. When going from the uniform to the multiple time grid continuous-time formulation (F3) the size of the resulting MILPs is significantly reduced, the integrality gap becomes lower and, above all, the computational effort is greatly reduced (P1C is solved in 6.18 s). The other advantage becomes apparent when comparing the results between problems P1C and P3C (see Tables 5 and 6). Although the size roughly doubled, due to the fact that P3C considers three more orders and two more machines, the number of orders per machine decreased, which has a direct influence on the number of required time points and hence on the computational effort. As a consequence, problem P3C took less time to solve by (F3) than P1C, while for the continuous-time formulation with sequencing variables (F4) the computational time increased significantly. Furthermore, the MILPs resulting from (F3) have slightly lower integrality gaps than those resulting from (F4), require a larger number of binary variables, but are generally solved in significantly less time. The multiple time grid formulation (F3) also scales up better with problem size as seen with problems P6C/P7C/P8C, which were solved in less than 4 min, whereas the continuous time formulation with sequencing variables (F4) could not even find the optimal solution in 1 h. Problem P10C is the only one that cannot be solved to optimality by F3, even though the optimal solution was found in 1 h of computational time. The surprising result here is that the proposed MILP (F3) has a performance comparable to the hybrid model (F6). The hybrid MILP/CP model (F6) has the best performance for four of the first six problems (P4C and P6C are solved faster by the multiple continuous-time formulation, F3). The cuts proposed by Maravelias and Grossmann (2004b) make the hybrid model very efficient, since few assignment problems (the number of major iterations) need to be solved (the maximum, three iterations, was found for problem P1C). Without the knapsack constraints of Maravelias and Grossmann (2004b), P1C requires a total of 27 major iterations and 42 integer cuts to find the optimal solution in 9.0 CPUs (instead of 0.44 s, see Table 5), while P5C requires 22 major iterations and 38 integer cuts for a total computational effort of 18.0 CPUs, instead of 0.99 s. The number of major iterations of the hybrid model and also the computational effort per iteration are expected to increase with the problem size, which can eventually lead to a decrease in performance, when compared to the multiple time grid continuous-time formulation (F3). It is thus not surprising that problem P8C (53 iterations and 18.3 s/iteration, see Table 7) requires 968 s versus 29.1 s, while P9C (12 iterations and 3.9 s/iteration) is solved more than three times slower. What is surprising is that P10C is intractable with the hybrid MILP/CP

since the solver cannot even solve the first MILP in 1 h. The reason for this is explained next. The MILP with assignment and sequencing variables is intractable (see results for model F4) mainly due to its very large integrality gap. The simplified MILP generated from the hybrid model suffers from a similar problem and, even though the integrality gap is lower (the solution of the relaxed problem equals 64.4) and a feasible solution is found, the solver is still far from proving optimality after 1 h of computational time. Interestingly, the best solution found has the same objective as the global optimum solution, the difference being that for the simplified MILP we are not even sure that the solution is feasible. It is now clear that the decision of dividing the problem into two, an assignment problem and a set of sequencing feasibility problems, is sound but has two important limitations. The first was already known: this decomposition strategy is only effective when the objective function depends solely on the assignment variables. The second is that several assignment problems may need to be solved to find the optimal solution. If the resulting MILPs have a small size and a low integrality gap, this approach works fine, but if at least one of these conditions fails, then we have a problem, as seen for P10C. Since the hybrid model only produces a feasible solution when it finds the optimal one, at termination we do not obtain a feasible solution to the problem if the search is not completed. Thus, replacing the pure MILP or constraint programming models with the hybrid approach of Jain and Grossmann (2001) is not a good idea for large problems.

6.2. Minimization of total earliness

The use of multiple time grids and the fact that each order only lasts one time interval allows the use of a very efficient objective function (see Eq. (24)) for total earliness minimization. This is one of the most important features of the new formulation (F3), and one of the reasons why it compares favourably to the other approaches, most notably in problems P12E–P13E. More specifically, the new formulation is the only continuous-time formulation, of the three considered here, for which the solution of the relaxed problem (LP) may be different from zero (see Tables 9–12).

6.2.1. Problems P1E–P10E

The first 10 problems comprise the exact same problems solved in Section 6.1, but now the objective is the minimization of total earliness. The results given in Table 4, when compared to those of Table 3, clearly show that it is generally much more difficult to solve the earliness minimization problem than the cost minimization problem. The exception to the rule is the discrete-time formulation (F1), which solves all (except P5E) earliness minimization problems faster. This behaviour can be explained by the fact that now the time point at which each order is executed also affects the objective function, meaning fewer degenerate solutions and better performance. Again, it is clear that the number of time intervals plays an important role in the computational effort, since its order of magnitude remained more or less the same, i.e. the increase in the number of orders and machines was somewhat compensated by


the need for fewer time intervals when going from P1E–P6E to P7E–P10E. We will return to this discussion in Section 6.2.2. The multiple time grid continuous-time formulation (F3) outperforms both the continuous-time formulation with sequencing variables (F4) and the constraint programming formulation (F5) in P1E–P3E but not in P4E. It is even faster than the discrete-time formulation (F1) for problem P1E. When compared to F4, the multiple time grid formulation always provides tighter relaxations when the solution of the relaxed MILP is different from zero. In such cases, F3 is much more efficient; otherwise it seems that as the problem size increases (e.g. P4E) the continuous-time formulation with sequencing variables (F4) does a better job at reducing the gap (after 1 h of computational time, the integrality gap with F3 was the same as the initial gap). Nevertheless, F3 could always find the optimal solution rather fast. However, the number of time points required to find the global optimal solution is always an issue, since this has to be determined by trial and error. Thus, if optimality cannot be proved in a reasonable time we do not know if a sufficient number of time points has been used, unless of course we know the optimal solution from another method, e.g. the discrete-time formulation. In this respect, it is interesting to see that the number of time points required can vary from one objective function to the other (e.g. P2C and P6C require 9 time points, while P2E and P6E need only 6; P5E requires 8 event points and P5C only 7).

6.2.2. Problems P11E–P13E

The last three problems were taken from Méndez and Cerdá (2003) and go from 12 orders to 29 orders to a 40 order problem on 4 machines. There are three main differences from the first set. The first two are: (i) no release dates; (ii) a particular order can only be processed on a subset of the available machines. Both decrease the complexity of the problem, and this is the reason why we go up to a 40 order problem when the maximum number of orders in P1E–P10E was just 30. The third difference is that now the processing times have four significant digits, which forces an approximate rounding of the problem data to maintain problem tractability in the discrete-time formulation. The discrete-time formulation (F1) requires 30,001 time points to consider the exact problem data (30/0.001 + 1), which is obviously too large. Thus, one needs to round the problem data as described in Section 3.1. The smaller the interval length (δ), the more difficult it is to solve a particular problem. Thus, an increase in problem complexity (e.g. an increase in the number of orders) can be compensated by an increase in interval length, in order to maintain problem tractability. For this set of problems we have chosen δ = 0.01, 0.05 and 0.1 days for problems P11E, P12E and P13E, which correspond to 3001, 601 and 301 time points, respectively. Despite the decrease in problem size, the computational effort increases significantly from P11E (7.35 s) to P12E (300 s) to P13E (3600 s), with the latter representing the maximum resource limit (integer solution within 2.63% of the optimum). The solutions from the discrete-time model (F1) are good upper bounds on the true optimal solution, with the quality of the approximation generally decreasing with a reduction in the number of time intervals (1.03 versus 1.026, for P11E; 61.35 versus 59.896, for P12E, see Table 11).


The constraint programming model (F5), like the discrete-time formulation (F1), uses integer data for the processing times. This means that we need to multiply the values in Table 2 by 1000 before solving the problem in ILOG (the objective function is then divided by 1000 to recover the total earliness). Problem P11E is solved in 3266 s of CPU time, which is already too much. To overcome this problem we can use a smaller basis, by multiplying the problem data by 100 (20 for P12E and 10 for P13E) and then rounding to the next integer. Doing this ensures that the problem data are exactly the same as those considered in the discrete-time formulation (F1), thus allowing for a more valid comparison. The computational statistics in Table 11 clearly show that the CP model (F5) performs worse than model (F1).
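The data scaling just described can be summarized by the small sketch below. The basis values (1000 for the exact data, and 100, 20 and 10 for P11E–P13E) are the ones used in the text; the durations are invented, and no actual OPL/ILOG model is shown, only the conversion of the data to integer units and back.

```python
import math

durations = [1.342, 2.718, 0.960]     # invented processing times in days

def to_cp_units(durations, basis):
    """Convert fractional durations (days) into the integer units required by the
    CP model: multiply by the basis and round up to the next integer."""
    return [math.ceil(p * basis - 1e-9) for p in durations]

for basis in (1000, 100, 20, 10):     # 1000 keeps the exact data; smaller bases coarsen it
    print(basis, to_cp_units(durations, basis))

# An objective value returned in these integer units is divided by the basis
# afterwards to recover the total earliness in days.
```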

The uniform time grid continuous-time formulation (F2) can find the global optimal solution rather quickly, but for P11E, due to the 100% relative integrality gap, it is very difficult to prove optimality (about 1.5 h of computational time). The other disadvantage is that a few problems needed to be solved to find the minimum number of event points (10) and the minimum value of Δt (5) needed to reach the optimal solution. Thus, as for the minimization of total cost, the uniform time grid continuous-time formulation performs poorly and is only useful for very small problems.

The multiple time grid continuous-time formulation (F3) is the best performer, since it is able to find the global optimal solution for P11E in roughly the same time as F4 and it can find the best solutions for problems P12E and P13E. The solution with an earliness of 59.896 days found for P12E is probably the global optimal solution, since no improvement in the objective function was found when the number of time points was increased from 11 to 12 (CPU = 258 s). Problem P13E requires more time points to find the global optimal solution. For 12 time points, we found an optimal solution of 130.135 days in 3430 s, which improved to 129.074 with a single increase in the number of time points. Although optimality could not be proved (the solver terminated with a best possible solution equal to 119.81), this is probably the optimal solution for 13 time points. However, it is not the global optimal solution, since a better solution can be found for 14 time points (126.927).

The optimal solution for problem P12E, which has a total earliness of 59.896 days, 2.481 days (almost 4%) better than the best solution reported in the literature (Méndez and Cerdá, 2003), is given in Fig. 5. The first thing to note is that there is no limiting machine throughout the time horizon, since all machines have waiting periods after they start to be used (M1 stops 0.955 days between orders 1 and 7 and 0.104 days between 7 and 14; M2 stops 1.026 days between orders 19 and 12 and 2.014 days between 12 and 11; M3 stops 0.664 days between orders 23 and 25; M4 stops 0.206 days between orders 27 and 29 and 0.065 days between orders 3 and 28). Overall, M3 and M4 can be viewed as the more limiting machines, since they stop for less time.

Fig. 5. Optimal solution for problem P12E.

The best solution found for problem P13E has a total earliness of 126.927 days and is shown in Fig. 6. It is 4.4% better than the solution reported by Méndez and Cerdá (2003), which has a total earliness of 132.727. When compared to Fig. 5, the machines have fewer waiting periods (M1 stops for 0.026 days between orders 38 and 39; M2 stops for 1.871 days between orders 33 and 37 and 0.014 days between orders 40 and 11; and M3 and M4 never stop) and maintain most of the assignments for the first 29 orders. Of the 40 orders, only 7 are delivered exactly at their due dates (orders 2, 9, 16, 18, 33, 38 and 40).

Fig. 6. Best solution found for problem P13E.

6.2.3. An alternative strategy for large problems

The results of the previous section have shown that the size of the problems that can be solved efficiently by continuous-time or constraint programming models is smaller for earliness minimization than for cost minimization. The discrete-time formulation is very efficient, but has one important disadvantage: due to the discretization of the time horizon, a very large number of time intervals may be required to consider the exact problem data, inevitably leading to problem intractability. However, one can always consider an approximation of the problem by rounding the problem data to integer multiples of the interval length (δ). Rounding up the data ensures that if the approximate problem is feasible, so is the real problem. Furthermore, the optimal solution of the former will be a very good upper bound on the optimal solution of the latter, with better approximations resulting from lower rounding errors.
The optimal solution from the discrete-time formulation (approximate problem) can then be used to find a very good solution to the exact scheduling problem by using one of the two continuous-time formulations. The proposed strategy is the following (a small sketch of the single machine step appears at the end of this section):

(i) solve the approximate problem with the discrete-time formulation (F1) to find the assignments of orders to machines. Although it is desirable to solve the problem to optimality, a feasible solution is enough to proceed to the next step;

(ii) solve a simplified version of the continuous-time model with multiple time grids (F3) or with sequencing variables (F4), by considering in sets Im and Mi only the orders assigned to machine m and the machine assigned to order i, or only the variables xi,m with i ∈ Im, respectively. Note that once the assignments are fixed, each machine is totally independent from the others, so it is better to solve |M| single machine problems instead of a more complex parallel machine problem. For model (F3), the number of event points for solving the single machine problem for machine m is given by |T| = |Im| + 1;

(iii) if the complexity of one or more single machine problems in the previous step is still too large to handle, reduce it further by also fixing the event point at which each order is processed (for model F3, the first order allocated to a particular machine starts at the first event point, the second order at the second, and so on), or by fixing the sequencing variables yi,i′ (F4). In this case, there are fewer degrees of freedom and hence the exact schedule corresponding to the near-optimal solution found in the first step is generated.

The proposed strategy was applied to problems P12E and P13E. For P12E the assignments of the approximate problem (solution of F1) were in fact the optimal assignments, since the single machine problems generated solutions of 19.232, 3.665, 16.907 and 20.092 for machines M1–M4, for a total earliness of 59.896 days, the global optimum found by the multiple time grid formulation (F3) (see Table 11). For P13E the assignments of F1 are not optimal (when compared to Fig. 6, order 6 is assigned to M1 while order 28 goes to M4), since the solution of the single machine problems gives a total earliness of 126.949 days (see Table 12), a value that is slightly higher than the best solution of F3, 126.927 days. The results in Table 12 also show that both continuous-time formulations solve the four single machine problems very efficiently, with the multiple time grid formulation (F3) showing a better performance for machine M3, which has more orders assigned to it.

An alternative way to decrease the complexity of the single stage parallel machine problem is through preordering. Pinto and Grossmann (1995) and Ierapetritou, Hené, and Floudas (1999) used the earliest due date (EDD) method to find solutions of 82.202 and 94.814 days for problem P12E, respectively. Méndez and Cerdá (2003) used the increasing slack times (MST) rule to simplify their MILP formulation and found a better solution (62.377 days), which was further improved by using a very efficient rescheduling MILP to re-sequence orders assigned to a particular machine. However, this is still 4% worse than the global optimal solution found by the multiple time grid continuous-time formulation and by the proposed decomposition strategy, which is shown in Fig. 5. The reason why the heuristics fail can be seen, for example, in order 13, which is only the 19th order according to the EDD rule and the 15th order according to the MST rule, but is the 4th order to be completed in the optimal schedule.
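The single machine step of the strategy (steps (ii) and (iii) above) can be illustrated with the short Python sketch below. It is not the MILP of model (F3): it assumes that the sequence on the machine is already fixed, treats due dates as hard deadlines, and obtains the completion times that minimize total earliness with a backward pass that schedules every order as late as possible. All order data are invented.

```python
# One machine, orders already in processing sequence (as in step iii):
# (name, processing time, release date, due date), all in days, invented values.
sequence = [("o1", 1.3, 0.0, 4.0), ("o2", 2.1, 0.0, 5.0), ("o3", 0.9, 0.0, 5.5)]

def latest_completions(sequence):
    """Backward pass: each order finishes at its due date or just before the next
    order starts, whichever is earlier. With a fixed sequence and hard due dates,
    these latest completions minimize the total earliness on the machine."""
    completions = [0.0] * len(sequence)
    next_start = float("inf")
    for k in range(len(sequence) - 1, -1, -1):
        name, p, r, d = sequence[k]
        completions[k] = min(d, next_start)
        start = completions[k] - p
        if start < r:          # even the latest possible start violates the release date
            raise ValueError(f"order {name} cannot meet its due date in this sequence")
        next_start = start
    return completions

completions = latest_completions(sequence)
total_earliness = sum(d - c for (_, _, _, d), c in zip(sequence, completions))
print([round(c, 3) for c in completions], round(total_earliness, 3))
```

In the parallel machine case this computation is simply repeated independently for each of the |M| machines once the assignments (and sequences) taken from the discrete-time solution are fixed.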


7. Conclusions

This paper has presented a new continuous-time formulation for the short-term scheduling of single-stage parallel machine plants. It can be viewed as an improvement over the general continuous-time formulation for multipurpose plants of Castro et al. (2004), which proved to be quite limited for the specific type of problem considered. Multiple time grids are used, one for each equipment resource, instead of a single time grid, to take advantage of the absence of shared resources such as manpower, utilities and intermediate materials. This allows us to consider each machine independently, which in turn makes it possible to restrict the duration of all batch tasks to a single time interval. The advantage is that both the number of event points required to find the optimal solution and the integrality gap, two commonly used performance indicators for MILPs and for continuous-time formulations, are greatly reduced. Another novel feature presented in this paper, concerning the continuous-time formulations, is the introduction of hard constraints to model release and due dates.

The other goal of the paper has been to provide a critical review of other existing approaches that are able to solve this type of scheduling problem, based on their performance on a total of 23 examples. The results were very conclusive and allowed us to arrive at the following conclusions.

The discrete-time RTN formulation (Pantelides, 1994) was shown to be the best formulation. It has a very consistent performance for different problem sizes and for the two different objective functions tested. As is well known, its most important limitation is that it cannot handle variable duration tasks. Another disadvantage is that it may need a large number of time intervals to consider the exact problem data. However, fewer time intervals can always be used if the problem data are rounded to the next integer multiple of the interval length. In that case, the solution obtained will be an upper bound on the true global optimum. This important feature of the discrete-time formulation has been used to propose a strategy for total earliness minimization. It consists of finding the assignments of orders to machines by solving the parallel machine single stage scheduling problem with the discrete-time formulation, followed by the solution of |M| single machine problems with one of the two most efficient continuous-time formulations. This optimization strategy was shown to be better than preordering heuristics.

The proposed multiple time grid continuous-time formulation is second only to the discrete-time formulation in terms of computational performance, both for the minimization of total cost and of total earliness. Even when optimality could not be guaranteed, the multiple time grid continuous-time formulation could always find the global optimum (or the best known solution) within one hour of computational time. In particular, it found the global optimal solution of problem P12E and the best known solution for P13E, which are roughly 4% better than the best ones reported in the literature.

The continuous-time formulation with sequencing variables and the constraint programming model do not perform as well, but they could always find good solutions for the cost minimization problems, when not the optimum, within one hour.
The constraint programming model has the disadvantage of dealing with integer data, like the discrete-time formulation, which means that sometimes, in order to maintain problem tractability, it is limited to solving approximate versions of the problems. The continuous-time formulation with sequencing variables has the advantage of being easily adapted to deal with sequence-dependent changeovers, unlike RTN-based formulations, which require additional complex sets of variables and constraints.

Finally, the hybrid MILP/CP approach of Jain and Grossmann (2001), modified by Maravelias and Grossmann (2004b), had the best performance of all six cost minimization models tested for small to medium sized problems. However, it failed to find a solution for one large instance, since the assignment part of the hybrid model (the simplified MILP) became intractable, like the complete MILP from which it originates. Another disadvantage of the hybrid approach is that the only feasible solution it generates is the global optimum, so there are no intermediate feasible solutions.

Acknowledgments

The authors gratefully acknowledge financial support from Fundação Calouste Gulbenkian and from the Centre for Advanced Process Decision-making at Carnegie Mellon.

References

Baptiste, P., Le Pape, C., & Nuijten, W. (2001). Constraint-based scheduling: Applying constraint programming to scheduling problems. Kluwer Academic Publishers.
Bockmayr, A., & Pisaruk, N. (2003). Detecting infeasibility and generating cuts for MIP using CP. Proceedings CPAIOR'03, 24.
Castro, P. M., Barbosa-Póvoa, A. P., Matos, H. A., & Novais, A. Q. (2004). Simple continuous-time formulation for short-term scheduling of batch and continuous processes. Industrial & Engineering Chemistry Research, 43, 105.
Castro, P. M., Barbosa-Póvoa, A. P., & Novais, A. Q. (2005). Simultaneous design and scheduling of multipurpose plants using RTN-based continuous-time formulations. Industrial & Engineering Chemistry Research, 44, 343.
Castro, P. M., & Grossmann, I. E. (2005). New continuous-time MILP model for the short-term scheduling of multistage batch plants. Industrial & Engineering Chemistry Research, 44, 9175.
Floudas, C., & Lin, X. (2004). Continuous-time versus discrete-time approaches for scheduling of chemical processes: A review. Computers & Chemical Engineering, 28, 2109.
Giannelos, N. F., & Georgiadis, M. C. (2002a). A novel event-driven formulation for short-term scheduling of multipurpose continuous processes. Industrial & Engineering Chemistry Research, 41, 2431.
Giannelos, N. F., & Georgiadis, M. C. (2002b). A simple continuous-time formulation for short-term scheduling of multipurpose batch processes. Industrial & Engineering Chemistry Research, 41, 2178.
Harjunkoski, I., & Grossmann, I. E. (2002). Decomposition techniques for multistage scheduling problems using mixed-integer and constraint programming methods. Computers & Chemical Engineering, 26, 1533.
Ierapetritou, M. G., Hené, T. S., & Floudas, C. A. (1999). Effective continuous-time formulation for short-term scheduling. 3. Multiple intermediate due dates. Industrial & Engineering Chemistry Research, 38, 3446.
Jain, V., & Grossmann, I. E. (2001). Algorithms for hybrid MILP/CP models for a class of optimization problems. INFORMS Journal on Computing, 13(4), 258.
Janak, S. L., Lin, X., & Floudas, C. A. (2004). Enhanced continuous-time unit-specific event-based formulation for short-term scheduling of multipurpose batch processes: Resource constraints and mixed storage policies. Industrial & Engineering Chemistry Research, 43, 2516.
Kondili, E., Pantelides, C. C., & Sargent, R. (1993). A general algorithm for short-term scheduling of batch operations. I. MILP formulation. Computers & Chemical Engineering, 17, 211.
Maravelias, C. T., & Grossmann, I. E. (2003a). New general continuous-time state-task network formulation for short-term scheduling of multipurpose batch plants. Industrial & Engineering Chemistry Research, 42, 3056.
Maravelias, C. T., & Grossmann, I. E. (2003b). Minimization of the makespan with a discrete-time state-task network formulation. Industrial & Engineering Chemistry Research, 42, 6252.
Maravelias, C. T., & Grossmann, I. E. (2004a). A hybrid MILP/CP decomposition approach for the continuous time scheduling of multipurpose batch plants. Computers & Chemical Engineering, 28, 1921.
Maravelias, C. T., & Grossmann, I. E. (2004b). Using MILP and CP for the scheduling of batch chemical processes. Proceedings CPAIOR'04, 1.
Méndez, C., & Cerdá, J. (2000). Optimal scheduling of a resource-constrained multiproduct batch plant supplying intermediates to nearby end-product facilities. Computers & Chemical Engineering, 24, 369.
Méndez, C., & Cerdá, J. (2002). An efficient MILP continuous-time formulation for short-term scheduling of multiproduct continuous facilities. Computers & Chemical Engineering, 26, 687.
Méndez, C., & Cerdá, J. (2003). Dynamic scheduling in multiproduct batch plants. Computers & Chemical Engineering, 27, 1247.
Méndez, C. A., Cerdá, J., Grossmann, I. E., Harjunkoski, I., & Fahl, M. (2006). State-of-the-art review of optimization methods for short-term scheduling of batch processes. Computers & Chemical Engineering, in press.
Pantelides, C. C. (1994). Unified frameworks for optimal process planning and scheduling. In Proceedings of the second conference on foundations of computer-aided process operations (p. 253). New York: Cache Publications.
Pinedo, M. (2001). Scheduling: Theory, algorithms and systems. Prentice-Hall.
Pinto, J. M., & Grossmann, I. E. (1995). A continuous time mixed integer linear programming model for short term scheduling of multistage batch plants. Industrial & Engineering Chemistry Research, 34, 3037.
Sadykov, R., & Wolsey, L. (in press). Integer programming and constraint programming in solving a multi-machine assignment scheduling problem with deadlines and release dates. INFORMS Journal on Computing.
Van Hentenryck, P. (1999). The OPL optimization programming language. Cambridge, MA: MIT Press.