Automation in Construction 27 (2012) 89–98
Hybrid meta-heuristic methods for the multi-resource leveling problem with activity splitting

Hadeel Alsayegh, Moncer Hariga

Engineering Systems Management Graduate Program, College of Engineering, American University of Sharjah, P O Box 26666, Sharjah, United Arab Emirates
Article history: Accepted 16 April 2012; available online 13 June 2012.

Keywords: Resource leveling; Activity splitting; Meta-heuristics; Particle swarm optimization; Simulated annealing
Abstract

In this paper, we consider the multi-resource leveling problem with the objective of minimizing the total costs resulting from variations in resource utilization and from splitting non-critical activities. We propose hybrid meta-heuristic methods which combine particle swarm optimization (PSO) and simulated annealing (SA) search procedures to generate near-optimal project schedules in less computational time than the exact optimization procedure. The PSO algorithms are based on different update mechanisms for the particles' velocities and positions. The cost and computation time performances of the combined PSO/SA search procedures are evaluated using a set of benchmark problems. Based on the results of the computational experiments, we suggest one of the proposed heuristic procedures for solving the multi-resource leveling problem with activity splitting.

© 2012 Elsevier B.V. All rights reserved.
1. Introduction

Resource leveling is a resource management technique that minimizes the total deviations of resource requirements over a fixed project duration. Most resource leveling models proposed in the literature assume that activities may not be split, meaning that once an activity starts, it continues until the work is finished. In practice, however, it may be cost effective to interrupt an activity in order to release its resources and assign them to other activities (Karaa and Nasr [7]). By allowing activities to be interrupted, the resulting resource leveling optimization model becomes mathematically more complex, as it introduces additional decision variables and constraints (Hariga and El-Sayegh [4]). Furthermore, additional costs should be included in the objective function, such as resource- and activity-dependent costs, which are incurred as a result of splitting activities. A resource-dependent cost involves the cost of acquiring or releasing a resource in a given period. Activity-dependent costs, on the other hand, are related to stopping and restarting the activity: upon splitting an activity, the learning process of the resources is affected, and it takes some time for the resources to regain the learning level attained just before the split. For more details about these two types of costs, interested readers can refer to Hariga and El-Sayegh [4]. Although extensive research has been carried out on resource leveling without splitting, very little work addresses resource leveling with activity splitting.
Hariga and El-Sayegh [4] developed a mathematical model with a cost-based objective function, rather than the traditional utilization-based function, for the multi-resource leveling problem with activity splitting. They formulated the problem as a mixed integer linear program. Although their model is guaranteed to generate optimal project schedules, it requires substantial computational effort when the number of non-critical activities is large. Therefore, the purpose of this paper is to propose a cost- and time-efficient heuristic procedure for the resource leveling problem with activity splitting.

The remainder of the paper is organized as follows. In Section 2, we review the literature relevant to the resource leveling problem with activity splitting. In Section 3, we discuss the proposed PSO–SA search procedures. In Section 4, we present the experimental framework and the computational results. Finally, in Section 5 we conclude the paper.

2. Literature review

Both resource allocation and resource leveling problems have been challenging topics of extensive research in the project management area. Several optimization techniques have been used to solve such problems; they can be classified as exact procedures, heuristic procedures, and meta-heuristic procedures. In the following, we review the research works most relevant to the problem addressed in this paper.

Harris [5] was the first to develop a simple heuristic procedure, called the Minimum Moment algorithm, for the resource leveling problem without activity splitting. Later, Hiyassat [6] modified Harris's procedure by considering the value of the activity's free float and its resource rate in the selection criterion. Easa [2] presented the first optimization model for the resource leveling problem.
The objective of Easa's model is to minimize the deviations between the actual and desirable resource rates. Ramlogan and Goulter [15] proposed a mixed integer model to level resources for project scheduling. Bandelloni et al. [1] followed a dynamic programming approach to level resources. Mattila and Abraham [14] formulated an integer linear programming model to smooth resource usage for linear projects, which are characterized by a set of activities that are repeated in different locations. Son and Mattila's paper [17] is one of the earliest in which activity splitting is allowed. Recently, Hariga and El-Sayegh [4] formulated the problem as a mixed binary–integer program to minimize the costs associated with splitting non-critical activities.

In addition to the above exact formulations for the resource leveling problem, several authors proposed meta-heuristic procedures to generate near-optimal schedules. Senouci and Eldin [16] proposed a genetic algorithm (GA) based model for resource scheduling that performs resource leveling and resource allocation simultaneously. Son and Skibniewski [18] developed a resource leveling technique, based on a local optimizer procedure and a hybrid procedure, with the objective of minimizing the difference between the required resources and the desired resource profile. In a recent paper, Liao et al. [13] provided a comprehensive review of previous research using meta-heuristics to address project management problems. Leu et al. [12] proposed a GA-based optimization system to minimize the weighted total deviations of resource requirements. Leu and Hung [11] solved the stochastic resource leveling problem using a combined simulation–GA model. A GA-based system was also used by Georgy [3] to perform resource leveling for linear projects.

As can be noticed from this review of approximate solution methods, none of the cited papers utilizes PSO-based meta-heuristic procedures. Therefore, to the best of our knowledge, our paper is the first to propose an integrated meta-heuristic search procedure combining PSO with simulated annealing for the multi-resource leveling problem with activity splitting (MRLP-AS), and to consider a cost-based rather than a utilization-based objective function.

3. PSO–SA method for MRLP-AS

In this section, we first review the concepts and techniques underlying different versions of the PSO procedure. We then show how PSO is implemented for the resource leveling problem with activity splitting. Stand-alone and combined PSO and SA search procedures are also presented in this section.

3.1. Review of PSO search procedures

Particle Swarm Optimization, as developed by Kennedy and Eberhart [9], is based on the social behavior of animals and insects such as bird flocks and fish schools. Swarms, or groups, of these animals and insects tend to self-organize into optimal spatial patterns. Their behaviors, such as speed and direction, are determined through the exchange of information between individuals, called particles. Each particle is represented by its position in an N-dimensional space and its velocity, where the velocity corresponds to the speed and direction at which the particle is moving. In the classical PSO method, the particles' positions and velocities are represented by real numbers. Each particle, from a population of P particles in the swarm, is initialized with a random position and velocity.
Next, the PSO procedure searches iteratively for the best (or near-best) position by updating each particle's velocity and position using its own previous best position (cognitive learning) and the best position found by all particles (social learning). The local and global best positions are determined by evaluating each particle's fitness value. The search continues until convergence, which is attained either when the allowed maximum number of iterations, K, is exceeded or when a relatively steady position is reached.
The particle's position and velocity are denoted by Xi(k) and Vi(k), for i = 1, 2, …, P and k = 1, 2, …, K. The N-dimensional position of the ith particle at the kth iteration is represented by

Xi(k) = [xi1(k), xi2(k), …, xiN(k)]

where xij(k) represents the jth coordinate of the ith particle, for j = 1, 2, …, N. Similarly, the velocity of the ith particle at the kth iteration is represented by

Vi(k) = [vi1(k), vi2(k), …, viN(k)].

The velocity and position of the ith particle at the kth iteration are updated using the following two equations, respectively:

Vi(k) = wVi(k−1) + c1r1[XiL − Xi(k−1)] + c2r2[XG − Xi(k−1)]
Xi(k) = Vi(k) + Xi(k−1)    (1)
where XiL is the local best position of the ith particle found after (k − 1) iterations, and XG is the global best position among all particles in the swarm visited so far. w is the inertia weight used to reduce the impact of previous velocities on the current velocity so that it does not go out of control. c1 and c2 are two positive parameters representing the cognitive and social learning factors, respectively: if c1 is large, the particles tend to move toward their own local best, whereas if c2 is large, the particles tend to move toward the best global position known so far. r1 and r2 are random numbers between 0 and 1. The velocity of any particle is restricted to the interval [Vmin, Vmax]: if the new velocity is smaller than Vmin, it is set to Vmin, and if it is larger than Vmax, it is set to Vmax.

Kennedy and Eberhart [8] modified their PSO method to handle binary variables. In this case, the velocity of a particle no longer represents a speed but rather either the probability of a position changing its value to one or the probability of a position being 0; thus, the velocity values are restricted to the interval [0, 1]. In discrete PSO, the particle's velocity is updated in the same way as in continuous PSO; however, a normalization function is used to transform the real numbers to binary numbers. This is done using the following sigmoid function:

vij(k) = sig(vij(k)) = 1 / (1 + e^(−vij(k)))    (2)
The velocity is then used to update the position of the particle using the following equation:

xij(k+1) = 1 if rij < sig(vij(k+1)), and 0 otherwise    (3)

where rij is a random number between 0 and 1.
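To make these update rules concrete, the following is a minimal Python sketch of one binary PSO step following Eqs. (1)–(3); the function name, default parameter values, and velocity bounds are illustrative choices of ours, not settings from the paper.

```python
import math
import random

def binary_pso_step(x, v, x_local_best, x_global_best,
                    w=0.5, c1=0.3, c2=0.3, v_min=-4.0, v_max=4.0):
    """One velocity/position update for a binary particle, following Eqs. (1)-(3)."""
    new_x, new_v = [], []
    for xj, vj, lj, gj in zip(x, v, x_local_best, x_global_best):
        r1, r2 = random.random(), random.random()
        vj = w * vj + c1 * r1 * (lj - xj) + c2 * r2 * (gj - xj)  # Eq. (1)
        vj = max(v_min, min(v_max, vj))                          # clamp to [Vmin, Vmax]
        prob_one = 1.0 / (1.0 + math.exp(-vj))                   # Eq. (2): sigmoid normalization
        xj = 1 if random.random() < prob_one else 0              # Eq. (3)
        new_x.append(xj)
        new_v.append(vj)
    return new_x, new_v

# toy usage on a 6-bit particle
x = [0, 1, 0, 0, 1, 1]
v = [0.0] * 6
x, v = binary_pso_step(x, v, x_local_best=[1, 1, 0, 0, 1, 0],
                       x_global_best=[1, 0, 0, 1, 1, 0])
print(x)
```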
Yang et al. [19] presented another version of discrete PSO, which is based on quantum theory. In this approach, the quantum particle position, Xi(k), consists of bits, where each bit holds the value 0 or 1. A quantum velocity vector, Vi(k), holds the components vij, each representing the probability that the jth bit of the ith particle is 0.
The quantum discrete PSO uses the following equations to update the velocity of the particle and to obtain a new binary position (Yang et al. [19]):

ViL(k) = αXiL(k) + β(1 − XiL(k))    (4)

ViG(k) = αXG(k) + β(1 − XG(k))    (5)

Vi(k) = wVi(k−1) + c1ViL(k) + c2ViG(k)    (6)

Xi(k) = 1 if rand < Vi(k), and 0 otherwise    (7)

where α and β are control parameters which indicate the control degree of the velocity, with α + β = 1 and 0 ≤ α, β ≤ 1; w, c1, and c2 represent the inertia weight and the cognitive and social learning factors, respectively, with w + c1 + c2 = 1, w ≥ 0, c1 ≤ 1, and c2 ≤ 1; and rand is a random number generated between 0 and 1.
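Analogously, the following is a minimal sketch of the quantum discrete PSO update of Eqs. (4)–(7); the parameter values are illustrative and merely satisfy the stated constraints α + β = 1 and w + c1 + c2 = 1.

```python
import random

def quantum_pso_step(x_local_best, x_global_best, v_prev,
                     alpha=0.7, w=0.4, c1=0.3, c2=0.3):
    """One quantum discrete PSO update, following Eqs. (4)-(7)."""
    beta = 1.0 - alpha                                  # alpha + beta = 1
    new_x, new_v = [], []
    for lj, gj, vj in zip(x_local_best, x_global_best, v_prev):
        v_local = alpha * lj + beta * (1 - lj)          # Eq. (4)
        v_global = alpha * gj + beta * (1 - gj)         # Eq. (5)
        vj = w * vj + c1 * v_local + c2 * v_global      # Eq. (6), w + c1 + c2 = 1
        new_x.append(1 if random.random() < vj else 0)  # Eq. (7)
        new_v.append(vj)
    return new_x, new_v

# toy usage on a 5-bit particle
print(quantum_pso_step([1, 0, 1, 1, 0], [1, 1, 0, 1, 0], [0.5] * 5))
```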
3.2. PSO implementation to MRLP-AS

In this paper, a particle within the PSO population denotes a feasible schedule for a project having n activities and R resource types. A particle represents only the set of non-critical activities, since the critical activities are not changed when leveling the resources. Each PSO particle consists of a number of bits, each representing the value of a binary variable, ytj, of a given non-critical activity; the binary variable indicates whether activity j is active (ytj = 1) or idle (ytj = 0) in time period t. The following row vector represents the position of the ith particle at the kth iteration for a project consisting of 7 non-critical activities:

Xi(k) = [(1, 0); (0, 1, 1, 1, 1, 1, 0, 0, 0, 0); (0, 0, 0, 0, 0, 0, 1, 1, 1, 0); (1, 1, 1, 1, 1, 1, 0, 1, 1, 1); (0, 0, 0, 0, 1); (0, 0, 1, 0, 0, 0); (0, 1, 0, 0, 0, 0)]
Note that the dimension, or total number of bits, of Xi(k) is given by

Σ_{j=1}^{nn} (LFj − ESj + 1)

where ESj and LFj are the early start and late finish times of activity j, and nn is the number of non-critical activities. In addition, the number of ones for each non-critical activity must equal its duration. For example, the fourth activity of the above particle has a duration of 9 and is split only once, at the seventh period after its start.
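For illustration, the encoding and the two conditions just described can be expressed as follows; this is a sketch using the example particle above, with duration values implied by the number of ones in each group (not project data from the paper).

```python
# Bit-string encoding of a particle: one group of y_tj bits per non-critical activity.
particle = [(1, 0),
            (0, 1, 1, 1, 1, 1, 0, 0, 0, 0),
            (0, 0, 0, 0, 0, 0, 1, 1, 1, 0),
            (1, 1, 1, 1, 1, 1, 0, 1, 1, 1),
            (0, 0, 0, 0, 1),
            (0, 0, 1, 0, 0, 0),
            (0, 1, 0, 0, 0, 0)]

durations = [1, 5, 3, 9, 1, 1, 1]   # T_j: required number of active periods per activity

# dimension of X_i(k): sum over activities of the window length (LF_j - ES_j + 1)
dimension = sum(len(group) for group in particle)

# duration condition: each activity's bit group must contain exactly T_j ones
duration_ok = all(sum(group) == t_j for group, t_j in zip(particle, durations))

print(dimension, duration_ok)   # 49 bits, True for this example
```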
Fig. 1 below shows the pseudo-code for the PSO algorithm, which consists of two main phases: initialization and search for the best particle. In the initialization phase, P particles with feasible schedules are first generated. The first particle represents the project schedule resulting from the CPM calculations, i.e., a feasible schedule without splitting. The positions of the remaining (P − 1) particles are randomly generated using the algorithm described in Section 3.2.1, which ensures that the particles are feasible. The initial velocities, Vij(0), are created by randomly generating numbers between 0 and 1. In the second phase, the particles' positions are assessed. First, the fitness, F(Xi(k)), of each particle i at the kth iteration is computed as the sum of the costs resulting from the fluctuations in resource usage over the project life and the total costs of splitting non-critical activities (Hariga and El-Sayegh [4]). Next, F(Xi(k)) is compared with the fitness of the best local position of particle i, F(XiL). If F(Xi(k)) is a better fit than F(XiL), then XiL is set to Xi(k) and F(XiL) is set to F(Xi(k)).
Similarly, F(Xi(k)) is compared with the fitness of the global best position, F(XG). If F(Xi(k)) has a better cost performance than the global best position, then XG is set to Xi(k), and the cost-improvement counter, kimp, is reset to zero since a new global best has been found; otherwise, kimp is incremented. Finally, the velocity, Vi(k), and the position, Xi(k), of the ith particle are updated.

3.2.1. Generation of the initial particle positions

The initial particles of a population are generated such that each particle represents a feasible project schedule. A project schedule is feasible if it meets the duration constraints and the network logic constraints. Fig. 2 gives an algorithmic description of the generation of an initial feasible particle. The first two steps of the algorithm ensure that the network logic constraints of the schedule are satisfied, and the last step ensures that the duration constraint is satisfied.

3.2.2. Transformation of a particle's position into a feasible particle's position

Once a particle's position is updated, the new position may no longer represent a feasible schedule. Therefore, the algorithm below is applied to transform the updated particle's position into one representing a feasible project schedule (Fig. 3). Upon completing these steps for all the non-critical activities, a feasible particle is obtained. Note that if activity j has no predecessors, steps 2–5 are not required.

3.3. PSO stand-alone heuristic procedures

In this paper, six different PSO procedures are proposed, each using a different mechanism to update a particle's velocity and/or position. Several experiments were conducted for each heuristic procedure, each with a different parameter setting (i.e., number of particles, maximum number of iterations, and maximum number of steady iterations). Based on these experiments, it is observed that PSO has a better chance of hitting the optimal solution with a large number of particles; however, this comes at the expense of time efficiency. Therefore, the size of the particle population should balance the cost and time efficiencies of the PSO algorithm. In this study, the number of particles is problem dependent and is set equal to twice the number of binary variables, contrary to the PSO literature, where the number of particles is usually fixed regardless of the size of the problem. The number of binary variables is equal to Σ_{j=1}^{nn} (LFj − ESj + 1), which is very small compared to the total number of possible solutions, Π_{j=1}^{nn} C(LFj − ESj + 1, Tj), where C(·,·) denotes the binomial coefficient. In the following subsections, a description of each heuristic procedure, including its velocity and position update mechanism, is presented.

3.3.1. PSO procedure 1

This heuristic procedure is based on the continuous PSO model presented by Kennedy and Eberhart [9], where the new velocity of the ith particle, Vi(k), restricted to [0, 1], is obtained using Eq. (1), and the particle's position is updated using

xij(k) = 0 if vij(k) < 0.5, and 1 otherwise    (8)

Note that in this procedure the velocity is defined as the probability that the particle holds the value 0 or 1; it is assumed that there is a 50–50% chance for the particle to hold the value 0 or 1.
Initialization:
    Generate P feasible particles
    Set K = maximum iteration counter
    Set maxKimp = maximum number of iterations with no cost improvement
    Set iteration counter k = 1
    Set iteration improvement counter kimp = 0
Searching for best particle:
    Do while (k < K and kimp < maxKimp)
        For each particle i:
            Compute fitness F(Xi(k))
            If F(Xi(k)) < F(XiL), then set XiL = Xi(k)
            If F(Xi(k)) < F(XG), then set XG = Xi(k) and kimp = 0; otherwise set kimp = kimp + 1
            Update velocity Vi(k) and position Xi(k)
        k = k + 1
    Return XG

Fig. 1. Pseudo-code for PSO algorithm.
3.3.2. PSO procedure 2

This heuristic procedure is also based on the continuous PSO model presented by Kennedy and Eberhart [9]. However, the velocity is defined as the probability that the particle changes its value. The following mechanism is used to update the particle's position:

xij(k) = xij(k−1) if vij(k) < 0.5, and 1 − xij(k−1) otherwise    (9)
3.3.3. PSO procedure 3

This procedure is based on the quantum discrete PSO algorithm, where the position of the particle contains only binary values. However, in this procedure the velocity is defined as the probability that the particle holds the value 0 or 1, with a 50–50% chance assumed for each value. The new velocity of the ith particle, Vi(k), is obtained using Eqs. (4) to (6), where the control parameters are assumed to be random with their sum equal to one. The new position of the particle is determined by Eq. (8).

3.3.4. PSO procedure 4

This procedure also relies on the quantum discrete PSO algorithm and therefore uses the same set of equations to update the velocity.
However, the velocity is defined as the probability that the particle changes its value, and the new position of the particle is determined by Eq. (9).

3.3.5. PSO procedure 5

This procedure modifies the velocity-update mechanism of PSO procedure 3 by using two different random control parameters without restricting their sum to be equal to one. The procedure uses Eq. (8) to find the new particle position.

3.3.6. PSO procedure 6

This procedure uses the same mechanism as PSO procedure 5 to update the particle's velocity and relies on Eq. (9) to change the particle's position.

3.4. Simulated annealing procedure

The Particle Swarm Optimization heuristic has many advantages, including its simplicity of coding, its ease of implementation with few parameters to adjust, its consistent performance, and its local and global search abilities. However, one drawback of PSO is the possibility of being trapped in local optima.
For each non-critical activity, j, within particle, i, the following steps are taken:
Step 1: If activity j does not have any non-critical activity as a predecessor:
    - Generate a discrete random number between ESj and ESj + TFj − 1, where TFj is the total float of activity j.
    - Set the starting time of activity j, Sj, to the generated random number.
    - Set ytj = 0 for t = ESj, ESj + 1, …, Sj − 1.
    - Go to step 3.
Step 2: If activity j has some non-critical activities as predecessors:
    - Find the finishing times of the non-critical predecessors, which have already been determined, and set F to the largest finishing time.
    - Set Sj = F + 1.
    - Set ytj = 0 for all t < Sj.
    - Go to step 3.
Step 3:
    - Generate Tj distinct discrete random numbers between Sj and LFj, where Tj is the duration of activity j.
    - Set the cells (ytj) corresponding to these random numbers to 1 and the remaining ytj values to 0.

Fig. 2. Algorithm for the generation of a feasible particle position.
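A minimal Python sketch of the Fig. 2 procedure is given below; the data structure, field names, and the two-activity input are hypothetical, and the activities are assumed to be listed so that non-critical predecessors appear before their successors.

```python
import random

def generate_feasible_position(activities):
    """Sketch of the Fig. 2 procedure: build one feasible initial particle position.

    `activities` maps an activity name to its ES, LF, TF (total float), T (duration)
    and `preds` (names of non-critical predecessors)."""
    position, finish = {}, {}
    for name, a in activities.items():
        if not a["preds"]:
            # step 1: random start between ES_j and ES_j + TF_j - 1 (TF_j >= 1 here)
            start = random.randint(a["ES"], a["ES"] + a["TF"] - 1)
        else:
            # step 2: start one period after the latest non-critical predecessor finish
            start = max(finish[p] for p in a["preds"]) + 1
        # step 3: place T_j ones at randomly chosen periods between S_j and LF_j
        active = sorted(random.sample(range(start, a["LF"] + 1), a["T"]))
        position[name] = [1 if t in active else 0 for t in range(a["ES"], a["LF"] + 1)]
        finish[name] = active[-1]
    return position

# toy two-activity project (hypothetical data): A2 has A1 as its non-critical predecessor
acts = {"A1": {"ES": 3, "LF": 8, "TF": 2, "T": 2, "preds": []},
        "A2": {"ES": 6, "LF": 16, "TF": 4, "T": 5, "preds": ["A1"]}}
print(generate_feasible_position(acts))
```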
For each non-critical activity, j, in the ith particle:
1. If activity j does not have any non-critical activities as predecessors, go to step 6; otherwise continue to step 2.
2. Find the starting time of activity j, Sj, from the particle's current ytj values.
3. Calculate the finishing times of activity j's predecessors and set F to the latest finishing time.
4. If F > Sj, then set Sj = F + 1.
5. For all t < Sj, set ytj = 0.
6. Count the number of ytj values equal to 1 between Sj and LFj. If the sum of the ones is less than the activity's duration, Tj, continue to step 7; otherwise go to step 12.
7. For t ≥ Sj, count the number of ytj values equal to zero, say z, and assign each an equal probability, 1/z.
8. Randomly generate a number, r, between 0 and 1.
9. If (u − 1)/z < r ≤ u/z, where u = 1, 2, …, z, then set the uth bit with a value of 0 to 1.
10. Increment the count of ones by 1.
11. Repeat steps 8–10 until the sum of the ytj values equals the activity's duration, Tj, then skip to step 17.
12. For t ≥ Sj, count the number of ytj values equal to one, say z, and assign each an equal probability, 1/z.
13. Randomly generate a number, r, between 0 and 1.
14. If (u − 1)/z < r ≤ u/z, where u = 1, 2, …, z, then set the uth bit with a value of 1 to 0.
15. Decrement the count of ones by 1.
16. Repeat steps 13–15 until the sum of the ytj values equals the activity's duration, Tj.
17. Stop.

Fig. 3. Algorithm for the transformation of a particle into a feasible particle.
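The duration-repair portion of the Fig. 3 algorithm (steps 6–16) can be sketched as follows; the start-time and predecessor adjustments of steps 1–5 are omitted, and random bits are selected directly rather than through the cumulative-probability test of steps 7–9 and 12–14.

```python
import random

def repair_duration(bits, duration):
    """Duration repair for one activity's y_tj bits (cf. Fig. 3, steps 6-16):
    flip randomly chosen bits until the group contains exactly `duration` ones."""
    bits = list(bits)
    while sum(bits) < duration:                      # too few ones: steps 7-11
        j = random.choice([i for i, b in enumerate(bits) if b == 0])
        bits[j] = 1
    while sum(bits) > duration:                      # too many ones: steps 12-16
        j = random.choice([i for i, b in enumerate(bits) if b == 1])
        bits[j] = 0
    return bits

print(repair_duration([1, 0, 0, 1, 1, 1, 1], duration=3))   # removes two ones at random
```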
Therefore, in this paper we propose to combine PSO with Simulated Annealing (SA) to overcome this deficiency. SA is a probabilistic search meta-heuristic for locating a good solution to a global optimization problem with multiple local optima. The concept of simulated annealing is based on the analogy between the simulation of the annealing of solids and the problem of solving large combinatorial optimization problems (Laarhoven and Aarts [10]). Annealing refers to the process in which the particles of a solid become randomly arranged once the solid turns to liquid at high temperature: as the temperature rises, the particles of the solid move around each other faster and form new arrangements. One of the main advantages of simulated annealing is its ability to find good solutions without being trapped in a local optimum.

The simulated annealing search procedure consists of two phases: initialization and search for the best neighboring solution. The initialization phase begins by setting an initial solution as the best solution found so far, sbest; the initial solution is usually the best solution generated by another search procedure. Also, the initial temperature, temp0, is set to a fixed or calculated value. The temperature, temp, is first set to the initial temperature and then decreases at a rate λ as the procedure iterates in the next phase. In the second phase, the procedure iterates until it finds a candidate solution. During each iteration, a neighboring solution, s′, is generated from the neighborhood of s. Next, the fitness of the neighboring solution, F(s′), is computed and compared with the fitness of the current solution, F(s); their difference is denoted by Δ. The neighboring solution is considered a candidate for sbest if it is either a better solution than the one found so far or if the probability of accepting a worse solution is high. The simulated annealing process is described in Fig. 4.
In this paper, the SA algorithm is slightly modified in order to increase the chances of reaching a near-optimal solution in less time. At each change in temperature, 10 neighboring solutions are generated, as opposed to the classical algorithm in which only one neighbor is generated. Next, the fitness of each neighboring solution is calculated, and the neighboring solution with the best fit (s′) is compared with the best solution found so far (s). Note that each neighboring solution represents a feasible project schedule, generated using the neighborhood selection algorithm and the control parameter settings described below.

A neighboring solution is generated by swapping a random pair of ytj values having different values for each of the non-critical activities within the particle, as sketched below. Prior to determining the pair to be swapped, the ytj values of the non-critical activity must satisfy the duration and network logic constraints: the sum of the ytj values of the activity equals its duration, and the ytj values of the activity are active only between its calculated start and finish times. For each feasible non-critical activity, two discrete random numbers between its starting and finishing times, say u and v, are generated. If the values of yuj and yvj are different, they are swapped; if the two values are equal, two new random numbers are generated until they correspond to different ytj values. This process is repeated for all the non-critical activities within a particle, and thus a new neighboring solution is generated.

The SA algorithm has several parameters; the main three are the initial temperature, temp0, the final temperature, tempf, and the cooling parameter, λ. At each iteration of the algorithm, the temperature is decreased at a constant rate λ, which is between 0 and 1. The smaller the value of λ, the slower the algorithm reaches the final temperature, which increases the chances of finding a better solution; however, a slow search increases the computational time. It is therefore very important to choose the parameter settings wisely. In this paper, it is decided to vary the initial temperature depending on the problem's characteristics.
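A minimal sketch of the swap-based neighborhood move described above is given here; it picks one active and one idle period directly instead of re-drawing random pairs until their values differ, and it does not re-check the network logic constraints.

```python
import random

def swap_neighbor(schedule):
    """Generate a neighboring schedule: for each activity's y_tj bits, swap one
    randomly chosen active period (1) with one randomly chosen idle period (0)."""
    neighbor = []
    for bits in schedule:
        bits = list(bits)
        ones = [t for t, b in enumerate(bits) if b == 1]
        zeros = [t for t, b in enumerate(bits) if b == 0]
        if ones and zeros:                       # a swap needs both values present
            u, v = random.choice(ones), random.choice(zeros)
            bits[u], bits[v] = bits[v], bits[u]  # duration (number of ones) is preserved
        neighbor.append(bits)
    return neighbor

schedule = [[0, 1, 1, 1, 0, 0], [1, 0, 0, 1]]    # two activities' y_tj bits (toy data)
print(swap_neighbor(schedule))
```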
In general, a neighboring solution at the nth iteration is accepted if it is a better fit (lower cost) than the best solution generated so far, or if it is near the best solution with a certain probability, e^(−Δ/tempn), where tempn represents the temperature after n iterations and is equal to λ^n temp0. Let Pmax denote the maximum probability of accepting a neighboring solution. The initial temperature is then determined from the acceptance probability as follows:

Pmax = e^(−Δ/tempn), or Pmax = e^(−Δ/(λ^n temp0)).

Therefore,

temp0 = −Δ / (λ^n ln(Pmax)) > −Δ / (λ ln(Pmax)).

In our computational study, the initial temperature, temp0, is calculated given that a neighboring solution is accepted with a maximum probability, Pmax, of 80% when it increases the cost by no more than 20%; thus, Δ = 0.2 F(s). Furthermore, since there is a trade-off between the computational time and the quality of the solution, the initial temperature is set to

temp0 = −0.2 F(s) / (0.9 ln(0.8))

to reduce the computation time.

3.5. Combined PSO and SA search procedure

The combined PSO/SA search procedure is composed of three stages. In the first stage, only the PSO search procedure is performed, with the number of particles equal to twice the number of ytj values. The local best position of each particle and the global best position over all particles are updated during each run. The PSO search procedure stops when it either reaches the maximum number of iterations or becomes stuck in a local optimum.

In the next stage, the local best particle positions obtained from the previous stage are input into the SA search procedure. For each local best particle, 10 feasible neighboring solutions are generated, and the neighboring solution with the least cost is compared with the best local solution of that particle. The selected neighboring solution is accepted as a good solution if it reduces the cost of the particle or if its cost is no more than 20% greater than that of the best SA position. Once the SA procedure ends by reaching the final temperature, the PSO procedure is run again, but only if SA has produced a project schedule with better cost performance; in that case, the PSO search procedure is performed one more time with the SA output as initial particles. This procedure (PSO–SA–PSO) is repeated 10 times, and the best global position is returned as the least-cost project schedule for the multi-resource leveling problem with activity splitting. All the steps of the combined PSO–SA procedure are summarized in the flowchart shown in Fig. 5.

3.6. Illustrative example

In this section, we illustrate the combined PSO–SA procedures with a project having 14 activities and 6 different resource types. Fig. 6 shows the project network for this example. After performing the CPM calculations, the ES, EF, LS, and LF times and the slack of each activity are determined; they are shown in Table 1. The non-critical activities are identified as B, D, E, F, H, I, J, K, and M. The project's duration is 18 time periods, and the number of binary variables for the MRLP-AS problem to be solved is 130. The CPM results are taken as input to the PSO–SA solution procedure, along with the resource requirements (R1, R2, …, R6), the resource acquiring and releasing costs, and the splitting costs (CS). Table 1 shows the data for the illustrative example; the acquiring and releasing costs are set to $1 per unit for each of the six resource types.

Using Hariga and El-Sayegh's optimization model, the optimal solution has a total cost of 188 and a computational time of 10,688 s. Table 2 summarizes the results in terms of cost, computational time, and the percentage difference between the optimal solution and the generated solution for each of the six heuristic procedures; the same table also reports the results when the PSO procedures are run alone. One can see from this table that combining PSO with SA reduces the total project cost, but at the expense of computational time. The PSO/SA procedure 3 generated a near-optimal solution with a 1.06% cost deviation in 407 s, compared to 10,688 s for the exact procedure.
Initialization:
    Get initial solution, s
    Set sbest = s
    Set Temp0 = initial temperature
    Set Tempf = final temperature
    Set Temp = Temp0
Search for best neighboring solution:
    Do while Temp > Tempf
        Generate a neighboring solution, s′
        Compute Δ = Fitness(s′) − Fitness(s)
        If Δ ≤ 0, then set sbest = s′
        Else if Rand < exp(−Δ/Temp), then set sbest = s′
        Set Temp = λ · Temp

Fig. 4. Pseudo-code for the SA algorithm.
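The acceptance test and the initial-temperature rule derived in Section 3.4 can be sketched as follows; the cost value in the usage line is a toy number, not data from the paper.

```python
import math
import random

def initial_temperature(f_s, p_max=0.8, lam=0.9):
    """temp_0 = -0.2*F(s) / (lambda * ln(P_max)), as derived in Section 3.4."""
    return -0.2 * f_s / (lam * math.log(p_max))

def accept(f_neighbor, f_best, temp):
    """Accept an improving neighbor, or a worse one with probability e^(-Delta/temp)."""
    delta = f_neighbor - f_best
    return delta < 0 or random.random() < math.exp(-delta / temp)

temp0 = initial_temperature(f_s=200.0)        # toy cost value of the current schedule
print(round(temp0, 1), accept(210.0, 200.0, temp0))
```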
4. Computational experiments

In this section, we present the results of an extensive computational study carried out to investigate the cost and time performance of the hybrid PSO/SA heuristic procedures. Each of the six heuristic procedures is numerically assessed using a set of benchmark problems.

4.1. Benchmark problems

As there are no benchmark problems available in the project scheduling literature for this problem, a set of test instances was developed to assess the solution quality of the proposed meta-heuristic methods. The test problems are generated by varying the number of non-critical activities, nn, and the number of resource types, R.
[Flowchart omitted: the combined procedure generates P feasible particles, runs the PSO search (fitness evaluation, local and global best updates, velocity and position updates) until k ≥ K or kimp ≥ maxKimp, then applies the SA search to the local best particles before returning XG.]

Fig. 5. Combined PSO/SA search procedure.
[Activity-on-node network diagram showing, for each activity, its ES, EF, LS, LF, duration, and slack.]

Fig. 6. Project network for the illustrative example.
For each combination of nn ∈ {2, 4, 7, 8, 9, 10} and R ∈ {2, 4, 6}, 10 instances with different problem parameters (i.e., varying resource utilization rates and costs) are created. The resulting numbers of binary variables for the tested problems were {20, 98, 114, 130, 178}. The benchmark problems used in our computational study will be made available to interested researchers upon request. Each of the 180 problem instances is set up in Microsoft Excel with the cost optimization model of Hariga and El-Sayegh [4] and solved using What's Best 9 from LINDO Systems. Each of the proposed six heuristic procedures is assessed using these 180 test problems. All heuristic procedures are programmed in Java and run on an HP Pavilion Notebook PC with a 2.13 GHz processor and 3.0 GB of RAM. The same set of initial PSO particles, generated by setting the seed of the random number generator to 123456789, is used for all of the procedures.

4.2. Parameter configurations

For each heuristic procedure, c1 and c2 are initially varied between 0.05 and 0.95 with an increment of 0.05 (that is, c1 = 0.05 and c2 = 0.05, c1 = 0.05 and c2 = 0.10, …, c1 = 0.95 and c2 = 0.95). Upon running the six procedures with the different values of c1 and c2, it is noted that better results are obtained when the values of c1 and c2 are equal and lie in the range [0.20, 0.50].
Table 1. Project data for the illustrative example.

Activity   Duration   ES   LF   Slack   R1   R2   R3   R4   R5   R6   CS
A          5          1    5    0       2    3    3    5    1    5    0
B          1          6    9    3       4    5    6    2    8    2    2
C          4          6    9    0       2    6    5    3    8    5    0
D          5          6    16   6       1    8    1    3    8    5    4
E          1          7    13   6       4    9    5    2    8    2    6
F          4          7    13   3       6    2    8    3    8    3    8
G          5          10   14   0       4    3    5    5    5    4    0
H          1          10   14   4       5    6    2    2    5    2    2
I          1          11   17   6       2    4    4    3    4    3    4
J          4          8    17   6       2    1    6    1    7    8    6
K          4          11   17   3       3    2    3    2    3    5    8
L          3          15   17   0       1    3    6    3    5    2    0
M          3          11   17   4       2    5    5    3    5    2    2
N          1          18   18   0       3    4    3    3    1    5    0
In other words, when the particles move with equal probability toward the local and global best solutions, each heuristic procedure generates better results. Therefore, it was decided to analyze the results of the procedures when the values of c1 and c2 are equal and vary in the interval [0.25, 0.45] with an increment of 0.05. The maximum number of iterations is set to 400, and the maximum number of iterations with no cost improvement is set to 100 (i.e., the global best position does not change over the last 100 iterations). The cooling parameter λ is set to 0.90, and the final temperature used in the SA procedure is set to 0.01.
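For reference, the parameter settings reported in this subsection can be collected in a single configuration structure (a convenience of ours; the paper does not define such an object):

```python
# Settings used in the computational study, gathered for reference.
pso_sa_settings = {
    "particles": "2 x number of binary variables",   # problem dependent
    "c1_c2_values": [0.25, 0.30, 0.35, 0.40, 0.45],  # equal c1 = c2, step 0.05
    "max_iterations": 400,                           # K
    "max_steady_iterations": 100,                    # maxKimp
    "cooling_parameter": 0.90,                       # lambda
    "final_temperature": 0.01,                       # Temp_f
    "neighbors_per_temperature": 10,
    "max_acceptance_probability": 0.80,              # P_max, used to set temp_0
}
print(pso_sa_settings["max_iterations"])
```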
4.3. Computational results

The performance measures used to assess each heuristic procedure are the percentage deviation of its cost from the optimal solution and its CPU runtime. The cost performance of each heuristic procedure is computed as the difference between the total cost of the project schedule generated by the heuristic procedure and the total cost of the optimal project schedule obtained by solving Hariga and El-Sayegh's exact model. Each heuristic procedure's cost performance is then assessed using the number of problems resulting in a 0% cost deviation and the number of problems having a cost deviation of at most 2%, 5%, and 10%. Moreover, for each variation of c1 and c2, the minimum, maximum, and average cost percentage differences over all procedures are calculated.
Table 2. Summary of the PSO/SA procedure results.

              PSO results                         PSO–SA results
Procedure     Cost   Time (s)   % difference      Cost   Time (s)   % difference
Procedure 1   204    140        8.51%             200    353        6.38%
Procedure 2   268    108        42.55%            200    348        6.38%
Procedure 3   198    49         5.32%             190    407        1.06%
Procedure 4   202    97         7.45%             202    691        7.45%
Procedure 5   212    4          12.77%            204    271        8.51%
Procedure 6   202    136        7.45%             202    414        7.45%

(Time = computational time in seconds; % difference = percentage deviation from the optimal cost of 188.)
Table 3 reports the results of the six combined PSO/SA procedures. PSO/SA procedure 3 gave the best cost performance: it outperformed the other procedures with an average cost deviation of 4.23%, and the costs of 147 of the 180 tested problems were within 10% of the optimal ones. It can also be noticed that all six heuristic procedures resulted in an average cost deviation of less than 10%.

For each heuristic procedure, the time required to generate a near-optimal project schedule was recorded. The average runtimes over the 180 tested problems for the exact procedure (using the Hariga and El-Sayegh model) and for PSO/SA heuristic procedure 3 were 1809 and 281 s, respectively; the maximum computational times for the exact and heuristic procedures were 30,725 and 653 s, respectively. Moreover, we observed during the numerical experiments that the computational advantage of the proposed heuristic procedure over the exact procedure is more pronounced for large-sized problems. Table 4 summarizes the variation of the CPU time of PSO–SA procedure 3, with c1 = c2 = 0.25, as a function of the problem size (number of binary variables and number of resource types). It is clear from this table that for large problems the proposed procedure is much faster than the optimal procedure. For example, the average computational time for a problem with 130 binary variables and 6 resource types is 433 s, whereas the optimal procedure has an average computational time of 12,868 s.

To further illustrate the computational time efficiency of the proposed heuristic procedure and to show the extent of the time savings for larger problems, two large problems with 25 activities, of which 15 are non-critical, were created. Each of these two problems, having 320 binary variables, was solved using the exact and the proposed heuristic procedures. With the exact procedure, the What's Best solver was interrupted after running for more than 24 h; with PSO/SA procedure 3, solutions were generated in 1380 and 1512 s, respectively. This shows that the proposed heuristic procedures are more computationally efficient for large-sized project schedules.

In conclusion, even with the implementation of Simulated Annealing along with the PSO procedure, the computational time is significantly less than that of the Hariga and El-Sayegh [4] optimization model for large problems. Moreover, the computational time of PSO/SA can be further reduced by carefully designing a time-efficient mechanism for updating the particles' positions that does not affect the feasibility of the particles. In fact, through experimentation with the PSO/SA search procedures, we noticed that much of the computation time is spent making particles feasible, since the feasibility algorithm is run for each particle after its position is updated. For example, when solving the problem with 25 activities, the heuristic procedure spent 57% of its time updating particles' positions, since 75% of the updated positions had to be converted to feasible project schedules.
Table 3. Cost performance of the six combined PSO/SA heuristic procedures.

                        % Cost deviation           Number of problems with percent cost deviation
PSO/SA     c1, c2       Min    Average    Max      = 0     ≤ 2     ≤ 5     ≤ 10
1          0.25         0      6.85       23.53    67      77      89      115
2          0.25         0      9.67       47.06    46      55      71      95
3          0.25         0      4.23       18.18    87      95      113     147
4          0.30         0      7.72       24.24    56      60      72      107
5          0.30         0      8.23       58.33    50      59      73      105
6          0.40         0      8.16       33.33    53      62      71      105
Table 4. Cost and computational performance of PSO–SA procedure 3.

Problem size                                   Average CPU time (s)
(number of binary variables, resources)        Procedure 3      Exact procedure
(130, 2)                                       370              356
(130, 4)                                       407              6465
(130, 6)                                       433              12,868
(178, 2)                                       522              1381
(178, 4)                                       589              5414
(178, 6)                                       609              5783
5. Conclusions

Resource leveling is a technique to smooth resource profiles by minimizing the fluctuations in resource requirements over the project life. A recent research work proposed an exact method to level the costs of resource usage while allowing activity splitting; such an optimization-based method may not be time-efficient for large-sized projects. The contribution of this paper to the resource leveling literature is therefore twofold. First, it proposes meta-heuristic methods capable of generating near-optimal, cost-leveled schedules for large-sized projects. Second, it opens up new avenues for research on the development of other heuristic procedures for the problem addressed in this paper.

We designed six hybrid PSO/SA search procedures, each with a different mechanism for updating the particles' velocities and positions. The cost and time performances of the six heuristic procedures were assessed using 180 generated benchmark problems. For each heuristic procedure, the minimum and maximum cost deviations from the optimal solution were calculated along with the average cost deviation. PSO/SA procedure 3 achieved the best results, with an average cost difference of 4.23%; furthermore, 81.67% of the tested problems had a percentage difference of less than or equal to 10%. As for the computation time, the heuristic procedures generated solutions in much less time than the exact solution method.

Improving the computation time of the hybrid PSO/SA is an interesting extension of the work in this paper. Indeed, future research may investigate the design of a new mechanism for updating particle positions to reduce the computational time of the meta-heuristics. Finally, one more topic for future research is to compare the proposed PSO/SA search procedures with other meta-heuristics, such as genetic or Tabu search procedures.

Acknowledgments

The authors are indebted to the reviewers for their insightful comments and helpful suggestions, which have significantly improved the content and quality of the paper.

References

[1] M. Bandelloni, M. Tucci, R. Rinaldi, Optimal resource leveling using non-serial dynamic programming, European Journal of Operational Research 78 (1994) 162–177.
[2] S. Easa, Resource leveling in construction by optimization, Journal of Construction Engineering and Management 115 (1989) 302–316.
[3] M. Georgy, Evolutionary resource scheduler for linear projects, Automation in Construction 17 (2008) 573–583.
[4] M. Hariga, S. El-Sayegh, Cost optimization for the multi-resource leveling problem with allowed activity splitting, Journal of Construction Engineering and Management 137 (2011) 56–64.
[5] R. Harris, Precedence and Arrow Networking Techniques for Construction, John Wiley & Sons, New York, 1978.
[6] M. Hiyassat, Modification of minimum moment approach in resource leveling, Journal of Construction Engineering and Management 126 (2000) 278–284.
[7] F. Karaa, A. Nasr, Resource management in construction, Journal of Construction Engineering 112 (1986) 346–357.
[8] J. Kennedy, R. Eberhart, A discrete binary version of the particle swarm algorithm, IEEE International Conference on Systems, Man, and Cybernetics 5 (1997) 4104–4108.
[9] J. Kennedy, R. Eberhart, Particle swarm optimization, IEEE International Conference on Neural Networks 4 (1995) 1942–1948.
[10] P. Laarhoven, E. Aarts, Simulated Annealing: Theory and Applications, Kluwer Academic Publishers, Dordrecht, 1987.
[11] S. Leu, T. Hung, An optimal construction resource leveling scheduling simulation model, Canadian Journal of Civil Engineering 29 (2002) 267–275.
[12] S. Leu, C. Yang, J. Huang, Resource leveling in construction by genetic algorithm-based optimization and its decision support system application, Automation in Construction 10 (2000) 27–41.
[13] T. Liao, J. Egbelu, R. Sarker, S. Leu, Metaheuristics for project and construction management — a state of the art review, Automation in Construction 20 (2011) 491–505.
[14] K. Mattila, D. Abraham, Resource leveling of linear schedules using integer linear programming, Journal of Construction Engineering and Management 124 (1998) 232–244.
[15] R. Ramlogan, I. Goulter, Mixed integer model for resource allocation in project management, Engineering Optimization 15 (1989) 97–111.
[16] A. Senouci, N. Eldin, Use of genetic algorithms in resource scheduling of construction projects, Journal of Construction Engineering and Management 130 (2004) 869–877.
[17] J. Son, K. Mattila, Binary resource leveling model: activity splitting allowed, Journal of Construction Engineering and Management 130 (2004) 887–894.
[18] J. Son, M. Skibniewski, Multiheuristic approach for resource leveling problem in construction engineering: hybrid approach, Journal of Construction Engineering and Management 125 (1999) 23–31.
[19] S. Yang, M. Wang, L. Jiao, A quantum particle swarm optimization, Proceedings of the Congress on Evolutionary Computation, IEEE Press, New Jersey, 2004, pp. 320–324.