A comparative study of two hybrid grouping evolutionary techniques for the capacitated P-median problem


I. Landa-Torres (a), J. Del Ser (a), S. Salcedo-Sanz (b), S. Gil-Lopez (a), J.A. Portilla-Figueras (b), O. Alonso-Garrido (b)

(a) TECNALIA-TELECOM, Zamudio, Bizkaia, Spain
(b) Department of Signal Theory and Communications, Universidad de Alcalá, 28871 Alcalá, Madrid, Spain

Available online 12 November 2011

Abstract

This paper addresses the application of two different grouping-based algorithms to the so-called capacitated P-median problem (CPMP). The CPMP is an NP-complete problem, well known in the operations research field, arising from a wide spectrum of applications in diverse areas such as telecommunications, manufacturing and industrial engineering. The CPMP has previously been tackled with distinct algorithmic approaches, among which we focus on evolutionary computation techniques. The work presented herein elaborates on such evolutionary computation algorithms when applied to the CPMP by evaluating the performance of a novel grouping genetic algorithm (GGA) and a novel grouping harmony search (GHS) approach. Both GGA and GHS are hybridized with a specially tailored local search procedure that enhances the overall performance of the algorithm in the particular CPMP scenario under consideration. This manuscript delves into the main characteristics of the proposed GGA and GHS schemes by thoroughly describing the grouping encoding procedure, the evolutionary operators (GGA) and the improvisation process (GHS), the aforementioned local search procedure, and a repairing technique that accounts for the feasibility of the solutions iteratively produced by both algorithms. The performance of the proposed algorithms is compared with that of several existing evolutionary-based algorithms on CPMP instances of varying size, based on which it is concluded that GGA and GHS outperform all other approaches published so far in the literature, especially as the size of the CPMP instance increases. The experimental section of the paper also evaluates the benefits of the grouping encoding itself, as well as the differences in behavior between the GGA and GHS due to the underlying meta-heuristic.

Keywords: Capacitated P-median problem; Genetic algorithms; Harmony search; Grouping approaches; Hybrid algorithms

1. Introduction

The capacitated P-median problem (CPMP) is an NP-complete problem which hinges on partitioning a set of N nodes into M disjoint clusters, each represented by a certain node that is designated as its concentrator. The N − M nodes that are not chosen as concentrators are referred to as terminals. The partitioning of the initial N nodes must be performed in such a way that a measure of the total distance between the terminals and their corresponding concentrators is minimized. In addition, a capacity constraint imposed on the concentrators must be met in order to obtain feasible solutions to the problem [1–4]. The related literature has coined several different names for the CPMP, such as the capacitated warehouse location problem or the capacitated clustering problem.

A direct application of the CPMP arises in the context of communication network deployment, where a set of terminals in the network must be assigned to a corresponding concentrator so as to compose access networks that satisfy the rate requirements of such terminals [5]. In this context, most of the efforts so far have focused on the topological design of communication networks (e.g. wireless sensor networks (WSN), backbone networks or mobile networks [6–8]), since many of the processes involved in such networks can be approached as a CPMP, e.g. server load balancing in backbone links, and clustering and/or data aggregation in WSNs. This problem also stems from applications in other fields, such as the design of distribution networks [9], vehicle routing [10], political district organization [11], the selection of facilities for a university's admission examination [2] and the design of sales force territories [12], among others.

To efficiently tackle this NP-complete problem, a large number of algorithms can be found in the literature, e.g. see [1,2,13–15,3,16–19] and references therein. Among such previous approaches, evolutionary-based algorithms have been intensively investigated [2,5,20,21]. These algorithms usually perform well in discrete and extremely constrained optimization problems such as the CPMP considered here, and consequently they are generally shown to outperform other algorithmic approaches in terms of complexity.


Despite this extensive previous work on evolutionary computation for the CPMP, the performance of grouping-based heuristic algorithms has not been discussed for this specific application. In this context, the most widely used grouping approach in the literature is the grouping genetic algorithm (GGA), which is essentially a class of evolutionary algorithms specially tailored for grouping problems, i.e. problems where a number of items must be assigned to a set of predefined groups. The GGA was first proposed by Falkenauer [22], who realized that traditional genetic algorithms suffer severe drawbacks when applied to grouping problems, as exemplified by the increase in search space size incurred by the standard binary encoding in this kind of problem. Based on this rationale, several applications emerging from different fields have benefited from the application of the GGA to the corresponding optimization problem, such as assignment problems [23], manufacturing [24,25], telecommunications [26,27] and industrial engineering [28].

The aim of this work is twofold. First, we propose a novel grouping heuristic based on the harmony search (HS) approach, which will hereafter be denoted as the Grouping Harmony Search (GHS) algorithm. The HS algorithm was first introduced by Geem et al. in [29], and is inspired by the behavior of a music orchestra in the process of music composition. Since this landmark work, several optimization problems have exploited the excellent performance of HS in scenarios of significantly high complexity, such as water pipeline design [30], multicast routing [31], multiuser detection [32,33] and distributed radio resource allocation [34]. This paper presents the first grouping version of HS, applied in this case to the CPMP. Second, this manuscript aims to exhaustively compare the aforementioned GHS grouping approach with (1) a novel hybrid GGA which shares its encoding and local search method with the GHS; and (2) alternative existing evolutionary-based techniques for the CPMP. As the results will clearly show, the proposed GGA and GHS grouping approaches are, to the authors' knowledge, the best evolutionary techniques for the CPMP reported so far in the literature.

The remainder of the paper is structured as follows: Section 2 poses the CPMP from a mathematical standpoint and recapitulates previous scientific contributions and the state of the art on this specific problem. Next, Section 3 comprises the technical content of this contribution, where the proposed grouping algorithms are comprehensively described. Special emphasis is placed on key features of these algorithms such as the encoding method, the evolutionary operators (GGA) or the improvisation process (GHS), and the local search procedure. Section 4 presents the experimental part of the paper, where the performance of the proposed grouping approaches is assessed on CPMP instances of diverse size. Section 5 concludes the paper by drawing some final remarks.

2. Background: problem definition and previous approaches

This section is divided into two subsections. The first summarizes two different (equivalent) definitions of the CPMP. The second reviews the previous approaches to this problem that can be found in the literature.

2.1. The capacitated P-median problem

The CPMP can be formulated by resorting to set theory. Let G(V, E) be an undirected graph, where V represents the set of vertices and E the set of edges, in such a way that each edge e_ij stands for the distance between nodes i and j. The objective of the P-median problem is to find a subset of vertices V_p ⊆ V (referred to as the median set) with cardinality p, such that: (a) the sum of distances between each vertex in {V \ V_p} (the demand set) and its nearest vertex in V_p is minimized; and (b) all the vertices in the demand set are assigned to a vertex in the median set in such a way that the capacity of each vertex in V_p is not exceeded.

An alternative definition is due to Revelle and Swain [35]. Let T = {1, ..., N} denote the set of indices corresponding to the N initially deployed nodes (terminals), and let M be the number of concentrators to be appointed among the N terminals. We define a binary vector y such that y_i = 1 (i ∈ {1, ..., N}) means that node i is selected as a concentrator, and y_i = 0 otherwise. Let C = {i ∈ {1, ..., N} : y_i = 1} denote the set of nodes appointed as concentrators; clearly, |C| = M. We further define a 1 × N capacity vector c, whose element c_i (i ∈ {1, ..., N}) stands for the total capacity of node i when it is selected as a concentrator in the network, i.e. when y_i = 1. Similarly, a 1 × N weight vector w = {w_1, ..., w_N} accounts for the weight demanded by node i when it acts as a terminal in the system, i.e. when y_i = 0. An N × M matrix D can also be constructed to account for the distances between nodes and concentrators, i.e. each element d_ij represents the Euclidean distance from node i to concentrator j, with j ∈ {1, ..., M}. Finally, let X be an N × M binary matrix, where x_ij = 1 means that terminal i is assigned to concentrator j, and x_ij = 0 otherwise. In this latter definition, an injective mapping L : {1, ..., M} → {1, ..., N} is needed to map linear indices m ∈ {1, ..., M} to node indices L(m) ∈ {1, ..., N}; e.g. L(4) = 23 denotes that the 4th concentrator is node number 23 (although the mapping can often be understood from the context, it is required for the sake of integrity in the problem statement). Obviously, L(m) equals n for some m if and only if y_n = 1. Therefore, given a vector y one can arbitrarily sort the selected concentrators and construct the mapping L(·) accordingly, but this mapping must be set before arranging the distance and assignment matrices D and X. With the above definitions, the CPMP can then be formulated as

\[
\min_{\mathbf{y},\,\mathbf{X}} \left( \sum_{i=1}^{N} \sum_{m=1}^{M} d_{im}\, x_{im} \right) \tag{1}
\]

subject to

\[
x_{im} \in \{0,1\}, \quad i = 1,\ldots,N, \; m = 1,\ldots,M, \tag{2}
\]
\[
y_i \in \{0,1\}, \quad i = 1,\ldots,N, \tag{3}
\]
\[
\sum_{i=1}^{N} w_i\, x_{im} \le c_{L(m)}, \quad m = 1,\ldots,M, \tag{4}
\]
\[
\sum_{i=1}^{N} y_i = M. \tag{5}
\]
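For readers who prefer code to notation, the following Python sketch (ours, not code from the paper) evaluates the objective (1) and checks constraints (4) and (5) for a candidate pair (y, X); the array names mirror the symbols defined above, and 0-based node indexing is assumed.

```python
import numpy as np

def evaluate_cpmp(D, X, y, w, c, L):
    """Objective (1) and feasibility of constraints (4)-(5) for a candidate (y, X).

    D: (N, M) node-to-concentrator distances; X: (N, M) binary assignments;
    y: (N,) binary concentrator indicators; w: (N,) weights; c: (N,) capacities;
    L: length-M sequence with L[m] = node index (0-based) of the m-th concentrator.
    """
    cost = float(np.sum(D * X))                              # objective (1)
    loads = X.T @ w                                          # demand aggregated per concentrator
    capacity_ok = bool(np.all(loads <= c[np.asarray(L)]))    # constraint (4)
    count_ok = int(y.sum()) == X.shape[1]                    # constraint (5)
    return cost, capacity_ok and count_ok
```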

2.2. Previous work

A large number of exact, heuristic and meta-heuristic algorithms have so far been derived for solving the CPMP, including evolutionary computing-based techniques. Recent contributions on the CPMP span very distinct approaches: exact algorithms such as those proposed by Ceselli et al. in [15,36,37]; heuristics such as the simulated annealing and tabu search approaches by Osman and Christofides in [1]; a bionomic approach by Maniezzo et al. in [13], and a later work by the same authors in which a set partitioning algorithm was proposed [14]. In a similar line, Lorena and Senne [3] recently presented a so-called column generation algorithm. Scatter search approaches have also been applied to this problem, such as the works by Scheuerer and Wendolsky in [16] and Díaz and Fernández in [17], the latter incorporating path relinking to further improve the search capabilities of the algorithm.


Finally, a guided construction search heuristic was introduced by Osman and Ahmadi in [4]. As previously mentioned, evolutionary computing-based algorithms have also been applied to the CPMP by virtue of their balanced tradeoff between search performance and computational complexity. In this research line, Lorena and Furtado [21] proposed a Constructive Genetic Algorithm (CGA) for grouping problems, which can also be adapted to the CPMP paradigm herein considered. Likewise, Santos-Correa et al. [2] proposed an evolutionary algorithm with novel operators specifically designed for the CPMP scenario. In a more recent approach, El-Alfy [5] proposed a genetic algorithm for the switch location problem, a variant of the CPMP arising from the design of dense communication networks. Finally, a recent paper by Salcedo-Sanz et al. [20] also tackled this problem with hybrid evolutionary approaches.


3. The proposed hybrid grouping techniques

In this section we present the hybrid grouping algorithms proposed in this manuscript, namely a Grouping Genetic Algorithm (GGA) and a Grouping Harmony Search (GHS) approach. Both algorithms share the same encoding procedure, local search and repair heuristics; however, they rely on very different concepts in what relates to their operational procedures. In the following subsections we describe the common and the specific parts of both grouping algorithms.

3.1. Problem encoding

The proposed GGA and GHS approaches use the standard grouping encoding proposed by Falkenauer in [22]. The encoding is carried out by splitting each candidate individual into two parts: first, an assignment part, which encodes the items and the group each of them is assigned to; and second, a group part, which defines the groups that must be taken into account for this candidate individual. In problems where the number of groups is not set a priori, one deals with a variable-length encoding: the group part varies from one candidate individual to another. In the specific case of the CPMP, each individual is thus divided into two subparts, one representing the nodes appointed as concentrators, and the other determining the assignment of every node in the network to a concentrator. Specifically, an individual s has the form s = [s_x | s_y], where s_y is an integer vector containing the indices of the M nodes designated as concentrators in the network (i.e. s_y = [L(1), L(2), ..., L(M)] for the particular candidate individual under consideration), and s_x is the assignment part. Each element of the assignment part, s_{x_i} ∈ {1, ..., M} for i = 1, ..., N, stands for the assignment of node i to the s_{x_i}-th concentrator. Note that the nodes appointed as concentrators are assigned to themselves in the assignment part, i.e. s_{x_i} = m if L(m) = i.

A small example may shed light on the proposed encoding for the CPMP. Let us consider the CPMP instance depicted in Fig. 1(a), and the following solution, for which we further assume that the capacity constraint of the problem is fulfilled:

s = [1 1 1 1 2 2 3 2 3 3 | 2 5 10].

This solution indicates that nodes 2, 5 and 10 are appointed as concentrators of the network, i.e. s_y = [2 5 10]. The assignment of the terminal nodes to the different concentrators is given by the first part of s, namely s_x = [1 1 1 1 2 2 3 2 3 3]. As such, terminal 7 has an assignment value of s_{x_7} = 3, which means that its concentrator is the one occupying the third position of s_y (node 10). The final assignment obtained with this solution is displayed in Fig. 1(b). Note that nodes 2, 5 and 10 are assigned to values 1, 2 and 3, respectively, since L(1) = 2, L(2) = 5 and L(3) = 10.

Fig. 1. Example of a CPMP instance: (a) initial nodes; (b) final optimal solution.
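As an illustration of how such an individual can be decoded, the short Python sketch below (an assumption-laden example, not the authors' code) computes the total terminal-to-concentrator distance of the solution above; the dictionary `coords` of node positions is hypothetical.

```python
import math

def decode_and_cost(sx, sy, coords):
    """Decode an individual s = [sx | sy] and return its total assignment distance.

    sx[i-1] is the (1-based) group index of node i; sy[m-1] = L(m) is the node
    acting as the m-th concentrator; coords maps node number -> (x, y) position.
    """
    total = 0.0
    for node, group in enumerate(sx, start=1):
        concentrator = sy[group - 1]          # concentrator node L(group)
        total += math.dist(coords[node], coords[concentrator])
    return total

# The solution of the example above: nodes 2, 5 and 10 act as concentrators.
sx = [1, 1, 1, 1, 2, 2, 3, 2, 3, 3]
sy = [2, 5, 10]
```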

3.2. Proposed grouping genetic algorithm

As previously mentioned in Section 1, the Grouping Genetic Algorithm (GGA) can be regarded as a class of evolutionary algorithms specially designed to handle grouping problems, i.e. problems in which a number of items must be assigned to a set of predefined groups. It is based on (1) the grouping encoding described above and (2) a number of operators, which are described next.

3.2.1. Selection operator

The selection operator adopted in this paper is a ranking-based method based on the one utilized in [2], and can be cast as the selection formula

\[
\mathrm{Select}(R) = \left\{ r_j \in R \;\middle|\; j = \left\lfloor \frac{1 + \sqrt{1 + 4\,\mathrm{rnd}\,(L^2 + L)}}{2} \right\rfloor \right\}, \tag{6}
\]


where R is a list containing L individual solutions sorted in increasing order of their fitness values, rnd is a random number drawn from a uniform distribution over the interval [0, 1], and ⌊a⌋ stands for the largest integer less than or equal to a (floor operator). Note that Eq. (6) returns the position in the list R of the individual to be selected. Also observe that the expression is designed to encourage the selection of those individuals with a lower objective function value. Finally, the selection operator doubles the size of the population, i.e. if we have x individuals, after the selection procedure we will have 2x individuals, which are then passed on to the crossover operator.
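A minimal sketch of this rank-based selection is given below. The assumption that the best individuals occupy the highest positions of the sorted list is our reading of Eq. (6) together with the comment above, not an explicit statement in the paper.

```python
import math
import random

def rank_select(population):
    """Rank-based selection following Eq. (6).

    `population` is assumed to be sorted so that better individuals sit at
    higher indices, which Eq. (6) then favours.
    """
    L = len(population)
    rnd = random.random()                                   # uniform in [0, 1)
    j = math.floor((1.0 + math.sqrt(1.0 + 4.0 * rnd * (L * L + L))) / 2.0)
    j = min(j, L)                                           # guard the j = L + 1 corner case
    return population[j - 1]                                # 1-based rank -> 0-based index
```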


3.2.2. Crossover operator

The crossover operator implemented in our GGA is a modified version of the original crossover introduced by Falkenauer in [22]. The process of applying this operator to obtain a new individual, or offspring, can be summarized in the following steps:

1. Two individuals are selected; one will be referred to as the father and the other as the mother. A single offspring individual will be generated from these two individuals.
2. The offspring individual is set entirely equal to the mother.
3. Two crossing points are selected in the groups part of the father.
4. The elements of the father belonging to the selected groups are inserted into the offspring.
5. The optimal set of concentrators for the new offspring is searched for.

The search for the optimal set of concentrators, executed at the end of the crossover process, is quite simple: for each group in the s_x part of the computed offspring, we calculate, for every node in the group, the sum of distances between that node and all the others in the same group, and we select as the new concentrator the node with the smallest total accumulated distance to the rest of the nodes. Note that the capacity of the selected node may not suffice to support all the other nodes; in this case, the node is discarded as a concentrator, and the node with the second smallest accumulated distance is considered. If no node in the group fulfills the capacity constraint, the node with the smallest accumulated distance to its neighboring counterparts is selected and the individual is labeled as unfeasible. Consequently, the unfeasible individual will be repaired at a later stage of the algorithm.

Let us illustrate the complete crossover process with a brief example contextualized in a network with N = 10 nodes and M = 3 concentrators. Assume that the two individuals selected as the father and the mother are

father = [1 1 3 2 2 1 3 2 3 1 | 2 4 9],
mother = [2 2 3 3 3 1 2 1 1 1 | 3 1 4].

We first set the offspring equal to the mother individual:

offspring = [2 2 3 3 3 1 2 1 1 1 | - - -]

and, once two integer numbers are uniformly picked from the range {1, 2, ..., M}, say 1 and 2, we mark the terminals assigned to the 1st and 2nd concentrators in the father individual. Finally, these values are inserted into the offspring:

offspring = [1 1 3 2 2 1 2 2 1 1 | - - -],

which is the result of the first part of the crossover procedure. In order to complete the offspring, the best set of M = 3 concentrators must be derived. This process is clarified in Fig. 2(a), which depicts the groups formed by the offspring in the current example. Let the vector of weights for the nodes in the example be given by w = {1, 2, 1, 3, 1, 1, 1, 2, 2, 1}, and the set of capacities when the nodes act as concentrators by c = {4, 6, 5, 6, 4, 4, 3, 5, 4, 5}. Note that with these weights and capacities, only the group formed by terminal 3 on its own satisfies the capacity constraint given in expression (4). The other groups are unable to fulfill the capacity constraint no matter which node is appointed as concentrator (Fig. 2(b)). As explained before, when dealing with such unfeasible groupings the node with the minimum accumulated distance to all other nodes in the group is chosen as concentrator (marked in black in the plot), and its repairing is deferred to a later step of the algorithm.

Fig. 2. Example of the terminal assignment once the groups are formed in the GGA or GHS individuals: (a) groups formed (in a given individual); (b) final solution (unfeasible due to capacity constraints).
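The following Python sketch mirrors steps 1-5 under several simplifying assumptions (two distinct crossing groups, 1-based node numbering, dictionary-based distance/weight/capacity lookups with zero self-distances); it illustrates the procedure described above rather than reproducing the authors' implementation.

```python
import random

def grouping_crossover(father, mother, dist, w, c):
    """Sketch of the grouping crossover of Section 3.2.2.

    father/mother are (sx, sy) pairs: sx holds 1-based group indices for nodes
    1..N, sy lists the concentrator node of each group. dist[a][b], w[n] and
    c[n] are distance, weight and capacity lookups keyed by node number.
    """
    fx, fy = father
    mx, my = mother
    offspring_sx = list(mx)                               # step 2: copy the mother
    g1, g2 = random.sample(range(1, len(fy) + 1), 2)      # step 3: two crossing groups
    for idx, grp in enumerate(fx):                        # step 4: insert father's items
        if grp in (g1, g2):
            offspring_sx[idx] = grp
    # Step 5: re-derive each group's concentrator as the member with the smallest
    # accumulated distance to its group mates whose capacity can host the group.
    offspring_sy, feasible = [], True
    for m in range(1, len(fy) + 1):
        members = [idx + 1 for idx, grp in enumerate(offspring_sx) if grp == m]
        if not members:                                   # group emptied by the insertion
            offspring_sy.append(my[m - 1])
            feasible = False
            continue
        ranked = sorted(members, key=lambda n: sum(dist[n][o] for o in members))
        load = sum(w[o] for o in members)
        choice = next((n for n in ranked if load <= c[n]), None)
        if choice is None:                                # no member can host the group
            choice, feasible = ranked[0], False           # keep best node, mark unfeasible
        offspring_sy.append(choice)
    return (offspring_sx, offspring_sy), feasible
```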

3.2.3. Mutation operator

A good mutation operator is important in order to avoid getting trapped in local optima. In this paper we implement two different mutation operators, which act on the s_x and s_y parts of each individual. Both operators are applied with a very small probability, i.e. only about 1% of the individuals are mutated at each generation.

- Mutation operator I: the first mutation operator acts on the s_y part of the individual. It consists of randomly choosing a terminal node and a concentrator node and interchanging their corresponding roles, i.e. the terminal node becomes a concentrator and the concentrator becomes a terminal. An example of this procedure is shown next. Consider the following individual in an N = 10, M = 3 CPMP instance:

s = [1 1 2 3 3 1 1 2 3 3 | 1 8 10].

Let node 10 (concentrator) and node 7 (terminal) be selected by this mutation operator. Consequently, the mutated individual will be given by

s = [1 1 2 3 3 1 3 2 3 3 | 1 8 7].

- Mutation operator II: this second procedure modifies the s_x part of each individual. It is based on swapping the assignments of two nodes chosen at random. However, both nodes must be assigned to different concentrators, i.e. if nodes k and w are chosen, then s_{x_k} ≠ s_{x_w}. For the sake of clarity, let us consider the following example:

s = [1 1 2 3 3 1 1 2 3 3 | 1 8 10].

We randomly choose node 2 for the swapping. Note that node 2 is assigned to concentrator number 1; therefore, no other node assigned to this concentrator is eligible for mutation operator II. If we then randomly choose node 3, the mutated individual will be given by

s = [1 2 1 3 3 1 1 2 3 3 | 1 8 10].
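Both mutation operators admit a compact sketch. The snippet below reproduces the two examples above when the same nodes are chosen; the convention that the demoted concentrator simply remains in its former group is our inference from the example rather than an explicit rule in the text.

```python
import random

def mutate_I(sx, sy):
    """Mutation operator I: swap the roles of a random concentrator and terminal."""
    m = random.randrange(len(sy))                      # position of the concentrator in sy
    terminals = [i + 1 for i in range(len(sx)) if (i + 1) not in sy]
    t = random.choice(terminals)                       # terminal promoted to concentrator
    old = sy[m]
    sy[m] = t
    sx[t - 1] = m + 1                                  # new concentrator assigned to itself
    sx[old - 1] = m + 1                                # demoted node stays in the same group
    return sx, sy

def mutate_II(sx):
    """Mutation operator II: swap the assignments of two nodes in different groups."""
    k = random.randrange(len(sx))
    candidates = [i for i in range(len(sx)) if sx[i] != sx[k]]
    if candidates:
        j = random.choice(candidates)
        sx[k], sx[j] = sx[j], sx[k]
    return sx
```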

3.3. Proposed grouping harmony search algorithm

As outlined in the Introduction, harmony search (HS) is an iterative optimization algorithm that mimics the coordination of the musicians in an orchestra when jointly seeking the best harmony under an esthetic metric. HS works with a set of j possible problem solutions or harmonies, commonly denoted as the Harmony Memory (HM), which are evaluated at each iteration. The HM is updated with any of the j newly improvised harmonies at a given iteration provided that it sounds better (under a certain fitness) than any of the j harmonies from the previous iteration. This procedure is repeated for a fixed number of iterations. Based on this rationale, HS can be regarded as a population-based heuristic. For conformity with the notation in the related literature, we will hereafter refer to a possible candidate individual s as a harmony, whereas a note will stand for any of the entries of the corresponding s_x. Therefore, each harmony comprises N notes denoting the concentrator index to which each terminal in the network is assigned.

3.3.1. Improvisation process

The harmony improvisation process of the original HS algorithm is controlled by two parameters. The first one, the harmony memory considering rate (HMCR), establishes the probability that the new value for a note inside a given harmony is taken from the values of the same note in any of the other j − 1 harmonies in the Harmony Memory. Otherwise, it is randomly drawn from the corresponding alphabet {1, ..., M}. The second parameter, the pitch adjusting rate (PAR), imposes a fine adjustment based on the vocabulary for the note. In this proposal, the parameters HMCR and PAR that drive the convergence behavior of the algorithm have been specially tailored for the resolution of the CPMP in order to attain a more effective information exchange between the individuals stored in the HM. This proposal exploits information on the proximity between nodes when performing new assignments. Furthermore, notice that in the present scheme the algorithm improvises the harmonies by updating only the s_x part of each solution. Once the s_x part is obtained, the s_y part of the solution is built deterministically by appointing as concentrators the nodes with the minimum sum of distances to the rest of the terminals included in the assignment part s_x. Hence, the proposed method not only adapts the harmony search algorithm to the problem at hand, but also works rather differently from the GGA also contributed in this paper, since in the GGA counterpart the selection, crossover and mutation

processes operate on both the s_x and s_y parts. Let us elaborate further on the HMCR and PAR processes presented in this work.

3.3.2. Harmony memory considering rate

The harmony memory considering rate is characterized by the probabilistic parameter HMCR ∈ [0, 1], and is sequentially applied to each note of every harmony in the HM. HMCR establishes the probability that the following four steps are applied to a given note:

1. All the concentrators to which the same note is assigned in the other j − 1 harmonies included in the Harmony Memory are arranged in a candidate concentrator list.
2. One concentrator is chosen randomly from the candidate list obtained in step 1.
3. The chosen candidate and the concentrators of the current harmony (except the note's current concentrator) are sorted in a list based on the distances between them, in increasing order.
4. The note (i.e. node) on which the HMCR is applied is connected to the concentrator located in the first position of the list constructed in step 3.

3.3.3. Pitch adjusting rate

The pitch adjusting rate, driven by the probability PAR ∈ [0, 1], performs subtle adjustments in the harmonies and is also applied to every note of the complete set of j harmonies. In order to change the groups defined in the s_x part, this procedure disconnects the node on which the PAR is being performed from its current concentrator, and then assigns such a node to the nearest concentrator included in the present harmony. We also include the mutation operator II described in Section 3.2.3, so that the s_x part of each harmony is further modified.
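A sketch of one improvisation step, covering both the HMCR steps above and the PAR adjustment, is given below. The harmonies are represented at node level (node to concentrator node) for readability, the tie-breaking of step 3 by the distance from the node being improvised is our reading of the text, and the deterministic rebuilding of s_y after all notes have been improvised is omitted.

```python
import random

def improvise_note(i, harmony, memory, dist, hmcr, par):
    """One improvisation step for node i of a harmony (sketch of Sections 3.3.2-3.3.3).

    `harmony` and every entry of `memory` are dicts mapping node -> concentrator node;
    dist[a][b] is the distance between nodes a and b.
    """
    current = harmony[i]
    if random.random() < hmcr:
        # Steps 1-2: candidate concentrators of note i in the other harmonies.
        candidates = [h[i] for h in memory if h is not harmony]
        chosen = random.choice(candidates)
        # Steps 3-4: rank the chosen candidate together with this harmony's other
        # concentrators by their distance to node i, and connect i to the closest.
        options = {chosen} | {c for c in harmony.values() if c != current}
        harmony[i] = min(options, key=lambda c: dist[i][c])
    if random.random() < par:
        # Pitch adjustment: reconnect i to its nearest concentrator in this harmony.
        options = {c for c in harmony.values() if c != current}
        if options:
            harmony[i] = min(options, key=lambda c: dist[i][c])
    return harmony
```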

3.4. Repairing of individuals

As shown previously, the capacity limit imposed at each concentrator may produce unfeasible individuals (correspondingly, harmonies) in the population (correspondingly, the HM) of the GGA (correspondingly, the GHS). A repair procedure to fix such individuals is consequently applied in both the GGA and GHS approaches, and is executed for each group not satisfying the corresponding capacity constraint. For every such group, an Assignment Priority List (APL) is computed: for each node, the APL stores, in increasing order, the difference of distances between the current concentrator and the closest alternative concentrator with some spare capacity. The repair procedure operates by selecting the nodes in the order given by the APL. If the capacity margin of the tentative concentrator suffices to allocate the weight of the node, the node is reassigned accordingly; otherwise, the node with the second smallest distance to its alternative concentrator in the APL is chosen, and the process is repeated. The repair procedure stops when all the groups meet the capacity constraint of the CPMP, or when all the nodes in the APL have been unsuccessfully checked. In the latter case, the local search procedure (described later) will try to repair the individual and, if it is still unfeasible, the corresponding fitness value will be penalized. This repairing process can be summarized as follows:

- Locate the groups (i.e. concentrators) that do not fulfill the capacity constraint.
- Let concentrator j represent one of such groups. For each terminal i assigned to j, calculate its distance to the closest alternative concentrator k with non-zero capacity margin: APL(i) = |d_{i,j} − d_{i,k}|.
- Rank the terminals in the group represented by concentrator j based on APL(i). Following this ranking, try to reallocate terminals to their closest alternative concentrator.


Table 1
Main characteristics of the CPMP instances tackled in the experimental section of the paper.

Instance   N     M    Grid        Max. generations
1          10    3    400 × 400   200
2          20    3    400 × 400   200
3          40    4    400 × 400   200
4          50    4    400 × 400   200
5          60    5    400 × 400   200
6          80    8    400 × 400   200
7          90    9    400 × 400   200
8          100   10   400 × 400   200
9          110   10   400 × 400   200
10         150   15   400 × 400   200

As an example, let us repair the individual s = [1 1 3 2 2 1 2 2 1 1 | 2 4 3], which was previously used to illustrate the crossover procedure and is depicted in Fig. 2(b). The weights and capacities for this case are those given in Section 3.2.2. Note that only concentrator node 3 fulfils the capacity constraint (c_3 = 5 and w_3 = 1), as opposed to concentrator node 2 (c_2 = 6, w_1 + w_2 + w_6 + w_9 + w_10 = 7) and concentrator node 4 (c_4 = 6 and w_4 + w_5 + w_7 + w_8 = 7). The repair operator applied to this individual reassigns terminals 5 (from concentrator 4 to 3) and 6 (from concentrator 2 to 3), rendering the feasible individual s = [1 1 3 2 3 3 2 2 1 1 | 2 4 3], which is plotted in Fig. 3.
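A simplified Python rendering of this repair step is shown below. Unlike the description above it recomputes the priority list after every reallocation, and the dictionary-based representation (`assign`, `w`, `c`, `dist`) is an assumption made for the sake of a self-contained example.

```python
def repair(assign, w, c, dist):
    """APL-style repair of an unfeasible grouping (a simplified sketch).

    assign maps each node to its concentrator node (concentrators map to themselves);
    w and c are weight/capacity dicts; dist[a][b] is the distance between nodes a and b.
    """
    def load(j):
        return sum(w[i] for i, conc in assign.items() if conc == j)

    concentrators = set(assign.values())
    for j in list(concentrators):
        while load(j) > c[j]:
            # Assignment Priority List: extra distance paid by each terminal of j
            # when moved to its closest alternative concentrator with spare capacity.
            apl = []
            for i, conc in assign.items():
                if conc != j or i == j:
                    continue
                alternatives = [k for k in concentrators
                                if k != j and load(k) + w[i] <= c[k]]
                if alternatives:
                    k = min(alternatives, key=lambda k: dist[i][k])
                    apl.append((abs(dist[i][j] - dist[i][k]), i, k))
            if not apl:
                break              # leave the group to the local search or the penalty
            _, i, k = min(apl)     # smallest distance penalty first
            assign[i] = k          # reallocate terminal i to concentrator k
    return assign
```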

3.5. Local search

In order to further enhance the quality of the individuals in the population (equivalently, the HM), and also as a final opportunity to fix unfeasible individuals, we implement an additional local search technique in both the GGA and GHS approaches. The local search is only applied to the s_x part, and is initialized by generating a random permutation p, which sets the order in which the terminals are explored for better local assignments. Let us assume that the first terminal to be analyzed is terminal i, which is currently assigned to concentrator index s_{x_i} (correspondingly, to concentrator node L(s_{x_i})). First, this terminal is disconnected from concentrator s_{x_i}, and the corresponding spare capacity is updated to c_{L(s_{x_i})} − w_i. Then, the set of possible concentrators with enough capacity margin to allocate i is obtained; it is important to note that concentrator s_{x_i}, to which node i was assigned, is not taken into account. Terminal i is then assigned to the closest concentrator in this set. If there is no concentrator with enough capacity margin to reassign i, we focus on the closest concentrator disregarding any capacity issues (again, concentrator s_{x_i} is not taken into account). Let such nearest concentrator be s_{x_i,min} ∈ {1, ..., s_{x_i} − 1, s_{x_i} + 1, ..., M}. For this concentrator we calculate the distances from all of its assigned terminals to s_{x_i}, as well as the distance from i to s_{x_i,min}. If there is a terminal i' assigned to s_{x_i,min} such that swapping i and i' reduces the total distance, i.e. if

\[
d_{i',s_{x_i}} + d_{i,s_{x_i,\min}} < d_{i',s_{x_i,\min}} + d_{i,s_{x_i}}, \tag{7}
\]

and w_i ≤ w_{i'}, then we assign terminal i' to s_{x_i} and terminal i to s_{x_i,min}. If several terminals satisfy the conditions to be swapped with i, we swap the one producing the largest reduction (d_{i',s_{x_i,min}} + d_{i,s_{x_i}}) − (d_{i',s_{x_i}} + d_{i,s_{x_i,min}}). Likewise, if we do not find any terminal to swap, we apply a similar approach, but now trying to exchange terminal i from group s_{x_i} with two terminals i'_1 and i'_2 from s_{x_i,min}. In this case the conditions are

\[
\bigl( d_{i'_1,s_{x_i}} + d_{i'_2,s_{x_i}} + d_{i,s_{x_i,\min}} \bigr) < \bigl( d_{i'_1,s_{x_i,\min}} + d_{i'_2,s_{x_i,\min}} + d_{i,s_{x_i}} \bigr) \tag{8}
\]

and w_i ≤ w_{i'_1} + w_{i'_2}.
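The first phase of this local search (greedy reassignment of each terminal to its closest concentrator with spare capacity) can be sketched as follows; the swap moves governed by conditions (7) and (8) are left out for brevity, and the rule of accepting only distance-reducing moves is a conservative assumption on our part.

```python
import random

def local_search_pass(assign, w, c, dist):
    """First phase of the local search of Section 3.5: greedy reassignment only.

    assign, w, c and dist follow the same conventions as in the repair sketch.
    """
    def load(j):
        return sum(w[i] for i, conc in assign.items() if conc == j)

    concentrators = set(assign.values())
    order = [i for i in assign if i not in concentrators]
    random.shuffle(order)                              # random permutation of the terminals
    for i in order:
        current = assign[i]
        feasible = [j for j in concentrators
                    if j != current and load(j) + w[i] <= c[j]]
        if feasible:
            best = min(feasible, key=lambda j: dist[i][j])
            if dist[i][best] < dist[i][current]:       # accept only improving moves
                assign[i] = best
    return assign
```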

Fig. 3. Example of the repairing procedure implemented in the GGA and GHS approaches using the individual of Fig. 2.

4. Computational experiments and results

In order to evaluate the performance of the proposed hybrid grouping techniques when applied to the CPMP, we have carried out systematic Monte Carlo experiments and comparisons with existing evolutionary-based approaches using randomly generated network deployments. Specifically, we compare the proposed hybrid grouping algorithms with two different genetic algorithms described in [20]: the first is hybridized with a greedy approach, whereas the second is hybridized with a linear programming heuristic. Comparisons with the genetic algorithm described in [2] and with the constructive genetic algorithm described in [21] are also presented.

In the first round of simulations, we have randomly generated 10 CPMP instances under the following model: we define a 400 × 400 grid over which we randomly spread the N nodes of the network. Each node is assigned a weight for the case in which it acts as a terminal in the network, drawn from a Gaussian distribution with mean 10 and variance 5, i.e. w_i ~ N(10, 5). For the case in which the node acts as a concentrator, its capacity is set to

\[
c_i = \frac{1.1 \left( \sum_{n=1}^{N} w_n \right)}{M}. \tag{9}
\]

Note that with this definition the sum of all weights is strictly smaller than the sum of all capacities, i.e. there is always a feasible solution to the problem. Table 1 summarizes the main characteristics of the generated instances. The complexity of an instance (the size of the underlying search space) is proportional to its size.

Both proposed hybrid grouping algorithms and the rest of the approaches under comparison have been run 20 times over each generated network instance, and the best, mean and standard deviation values of each simulation have been registered. Notice that all the compared techniques are evolutionary population-based algorithms. Based on this observation, fairness in the comparison is guaranteed by setting the number of fitness evaluations per execution equal to 10^4 in all the compared approaches (this count includes all the operators and local search procedures involved in each simulated algorithm). As for the GHS, however, the number of metric evaluations is decreased down to 4 × 10^3 by imposing a lower population size, the reason being that, as will be discussed later, GHS yields results similar to those of its GGA counterpart at a significantly reduced computational cost.
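The experimental setup lends itself to a compact generator. The sketch below follows the description above, with the caveat that N(10, 5) is interpreted as variance 5 (standard deviation sqrt(5)) and that the function name and return layout are our own.

```python
import numpy as np

def generate_instance(N, M, grid=400, seed=None):
    """Random CPMP instance following the experimental setup of Section 4."""
    rng = np.random.default_rng(seed)
    coords = rng.uniform(0, grid, size=(N, 2))       # nodes spread over a grid x grid square
    w = rng.normal(10.0, np.sqrt(5.0), size=N)       # weights: mean 10, variance 5
    c = np.full(N, 1.1 * w.sum() / M)                # capacities according to Eq. (9)
    D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return coords, w, c, D

coords, w, c, D = generate_instance(N=50, M=4, seed=0)
```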


Table 2
Comparison of the results obtained by the proposed hybrid GGA and GHS and by other existing evolutionary-based algorithms, in best/mean/std format. I stands for the CPMP instance number and LB for the lower bound obtained assuming infinite capacity in the concentrators.

I    GGA proposed    GHS proposed    GA+Greedy_Exp [20]   GA+LP [20]      GA in [2]       CGA [21]        LB
1    1014/1014/0     1014/1014/0     1014/1014/0          1014/1014/0     1014/1014/0     1014/1014/0     901
2    1419/1419/0     1419/1419/0     1419/1419/0          1419/1419/0     1419/1419/0     1419/1419/0     1395
3    2455/2455/0     2455/2455/0     2455/2460/5          2455/2455/0     2455/2463/6     2455/2455/0     2403
4    3712/3712/0     3712/3712/0     3755/3763/11         3712/3717/5     3719/3731/19    3719/3717/15    3684
5    3800/3800/0     3800/3800/0     3841/3877/30         3800/3816/17    3800/3812/12    3841/3856/18    3706
6    3806/3806/0     3806/3807/1     3819/3842/27         3806/3831/26    3860/3874/29    3819/3822/4     3713
7    3792/3795/5     3792/3795/10    3891/3931/38         3792/3837/39    3792/3845/66    3883/3914/21    3684
8    4455/4475/13    4455/4508/47    4455/4464/13         4532/4637/90    4452/4508/29    4452/4567/75    4359
9    4724/4726/4     4724/4728/4     4772/4835/55         4727/4768/53    4744/4794/53    4776/4785/25    4563
10   5059/5077/15    5059/5080/19    5258/5414/106        5059/5210/114   5210/5383/81    5216/5296/57    4823

Table 3
Comparison of the results obtained by the proposed hybrid GGA and GHS algorithms in the second set of experiments, in best/mean/std format.

Instance   N     M    Grid        Max. generations   GGA             GHS
11         200   15   400 × 400   200                7054/7058/10    7054/7057/14
12         250   25   400 × 400   200                6377/6386/6     6377/6384/8
13         300   25   400 × 400   200                7555/7561/9     7549/7560/10
14         400   40   400 × 400   200                7925/7958/22    7917/7947/21
15         500   50   400 × 400   200                8645/8688/24    8643/8688/20

The probabilities of individual crossover and mutation in the GGA are fixed to P_c = 0.6 and P_m = 0.01 in all the compared genetically inspired algorithms, whereas for the GHS the PAR is kept fixed to PAR = 0.1. For the HMCR, a linear progression along iterations is introduced: denoting the number of iterations of the GHS algorithm by \mathcal{I}, the value of the HMCR parameter at iteration t is computed as

\[
\mathrm{HMCR}(t) = \mathrm{HMCR}_I + \frac{\left(\mathrm{HMCR}_F - \mathrm{HMCR}_I\right)(t-1)}{\mathcal{I} - 1}, \tag{10}
\]

where HMCR_I represents the value of the HMCR at the beginning of the iterative process and HMCR_F is its value in the last iteration.
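Eq. (10) is a plain linear interpolation between the initial and final HMCR values; a small helper makes this explicit (parameter names are ours).

```python
def hmcr_schedule(t, n_iter, hmcr_i=0.1, hmcr_f=0.03):
    """Linear HMCR progression of Eq. (10); t runs from 1 to n_iter."""
    return hmcr_i + (hmcr_f - hmcr_i) * (t - 1) / (n_iter - 1)

# Example: the HMCR decays linearly from 0.1 down to 0.03 over 200 iterations.
values = [hmcr_schedule(t, 200) for t in range(1, 201)]
```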

Table 4
Comparison of the real computation times of the GGA and GHS, in seconds.

Instance   GGA      GHS
11         107.76   51.69
12         150.93   73.69
13         181.01   88.22
14         290.21   139.67
15         404.32   192.22

The initial and final values of the HMCR are set to {HMCR_I, HMCR_F} = {0.1, 0.03} based on a previous simulation-based optimization study (not shown for the sake of space). When describing the probabilities of the operators of both algorithms, it is worth mentioning that those corresponding to the GHS algorithm, PAR and HMCR(t), always refer to a probability applied to every note (not to every harmony). Thereby, although the HMCR and PAR values are smaller than those of the GGA operators, this does not mean that they act fewer times.

Table 2 shows the obtained results (values rounded down, in best/mean/std format). A lower bound considering infinite capacity at the concentrators is also included for reference purposes. This table evidences the good performance of the proposed grouping approaches when compared to other meta-heuristics dealing with the CPMP. In the instances of smallest size, all the compared algorithms obtain the same best solution in all the runs. However, in the middle-size instances the proposed grouping approaches clearly outperform the GA + greedy [20], the constructive genetic algorithm [21], the GA + LP and the GA in [2]. In the largest instances (i.e. from 12 to 15), the gains in statistical performance of the proposed grouping approaches are significant. Indeed, observe that the GA + LP is not able to obtain feasible solutions for the largest networks. In these instances, a slightly better average performance of the GHS approach is obtained with respect to the GGA. Also note that the lower bound assuming infinite capacity at the concentrators is not far from the results obtained by the proposed hybrid grouping algorithms, from which it can be concluded that the performance of our grouping approaches, even in large instances, is very good.

In order to carry out a more comprehensive study of the performance of both grouping algorithms proposed in this work, we have launched a second set of experiments over networks of larger size. The obtained results, listed in Table 3, show that the behavior of the two grouping algorithms is very similar, though the GHS provides slight improvements in the best metric in instances 13–15 and in the average metric in instances 11–15. We have also analyzed the computation time of both grouping approaches in these five CPMP instances. Table 4 shows the real computation time of the GGA and GHS algorithms in these instances (all the experiments were run on an Intel Xeon E5540 processor at 2.53 GHz). Note that the computation time of the GHS approach is much lower than that of the GGA: the GHS converges to a good solution in less than 1 min for the smallest instance, whereas the GGA takes more than 2 min. For the largest instance, the GHS takes about 3 min to converge, as opposed to the GGA, which on average obtains a final solution in about 6 min. Despite this difference in computation time, it can be concluded that both approaches converge in a reasonable time taking into account the size of the simulated instances. This conclusion is further reinforced if one notes that the deployment of networks is in general an off-line process, so the computation time is not a critical issue in this application.




Fig. 4. Performance of the GGA and GHS algorithms in 20 different runs for CPMP instance number 14 (400 nodes, 40 concentrators).

Fig. 4 shows a comparison of the GGA and GHS performance in CPMP instance number 14 (N = 400 nodes, M = 40 concentrators). Specifically, we have depicted the different objective function values obtained in the 20 experiments launched for each grouping algorithm, along with the average and standard deviation in thick and dashed lines, respectively. It is clear that in this specific instance the GHS performs slightly better than the GGA. Finally, Fig. 5(a) and (b) depict two examples of the solutions found for network instance 14 by the GGA and the GHS, respectively.

Fig. 5. Example of the solutions found by the proposed GGA and GHS in CPMP instance number 14 (400 nodes, 40 concentrators): (a) best solution found by the GGA; (b) best solution found by the GHS.

5. Conclusions

In this paper we have studied the performance of two different hybrid grouping algorithms when applied to the capacitated P-median problem (CPMP). This class of algorithms is specially tailored for problems where groups of items must be assigned to certain sets. The CPMP falls within this problem category, but its stringent set of constraints makes the application of grouping approaches difficult. In this paper we have presented two novel heuristic techniques for solving this problem: a Grouping Genetic Algorithm (GGA) and a Grouping Harmony Search (GHS) approach. A grouping encoding procedure for the CPMP, common to both algorithms, has been presented, and several modifications of the traditional operators of the GGA and the GHS have been designed by exploiting the intrinsic characteristics of the problem at hand. In addition, we have described a local search method that improves the search performance of the proposed algorithms. We have assessed the performance of our grouping algorithms by addressing CPMP instances of very diverse size, and by comparing the obtained results with those of existing evolutionary-based approaches. In all the performed experiments, the proposed grouping approaches are shown to significantly outperform such previous evolutionary techniques. Grouping encodings are, therefore, proven to be an advantageous option to tackle this problem in future applications. We have also verified that the use of harmony search as a meta-heuristic approach can be beneficial, since HS obtains similar or better results than the GA in all the CPMP problems tackled in this research.


Acknowledgments

This work has been partially supported by the Spanish Ministry of Science and Innovation under project number ECO2010-22065-C03-02, by Comunidad de Madrid and Universidad de Alcalá through project number CCG10-UAH/TIC-5955, and by the Spanish Ministry of Science and Innovation through the CONSOLIDER-INGENIO 2010 program (CSD2008-00010).

References

[1] Osman I, Christofides N. Capacitated clustering problems by hybrid simulated annealing and tabu search. International Transactions in Operational Research 1994;1(3):317–36.
[2] Santos-Correa E, Steiner MT, Freitas AA, Carnieri C. A genetic algorithm for solving a capacitated P-median problem. Numerical Algorithms 2004;35(4):373–88.
[3] Lorena LN, Senne E. A column generation approach to capacitated P-median problems. Computers & Operations Research 2004;31(6):863–76.
[4] Osman I, Ahmadi S. Guided construction search metaheuristics for the capacitated P-median problem with single source constraint. Journal of the Operational Research Society 2006;18:339–48.
[5] El-Alfy ES. Applications of genetic algorithms to optimal multilevel design of MPLS-based networks. Computer Communications 2007;30:2010–20.
[6] Riis M. Deployment of mobile switching centers in a telecommunications network: a stochastic programming approach. Telecommunication Systems 2004;26(1):93–109.
[7] Juttner A, Orban A, Fiala Z. Two new algorithms for UMTS access network topology design. European Journal of Operational Research 2005;164:456–74.
[8] Pomerleau Y, Chamberland S, Pesant G. A constraint programming approach for the design problem of cellular wireless networks. In: Proceedings of the IEEE Canadian conference on electrical and computer engineering, Canada; 2003. p. 881–4.
[9] Fleszar K, Hindi KS. An effective VNS for the capacitated P-median problem. European Journal of Operational Research 2008;191:612–22.
[10] Koskosidis Y, Powell WB. Clustering algorithms for consolidation of customer orders into vehicle shipments. Transportation Research B 1992;26:365–79.
[11] Bozkaya B, Erkut E, Laporte G. A tabu search heuristic and adaptive memory procedure for political districting. European Journal of Operational Research 2003;144(1):12–26.
[12] Mulvey JM, Beck MP. Solving capacitated clustering problems. European Journal of Operational Research 1984;18:317–36.
[13] Maniezzo V, Mingozzi A, Baldacci R. A bionomic approach to the capacitated p-median problem. Journal of Heuristics 1998;4:263–80.
[14] Baldacci R, Hadjiconstantinou E, Maniezzo V, Mingozzi A. A new method for solving capacitated location problems based on a set partitioning approach. Computers & Operations Research 2002;29(4):365–86.
[15] Ceselli A. Two exact algorithms for the capacitated p-median problem. 4OR: A Quarterly Journal of Operations Research 2003;1(4):319–40.
[16] Scheuerer S, Wendolsky R. A scatter search heuristic for the capacitated clustering problem. European Journal of Operational Research 2006;169(2):533–47.
[17] Díaz JA, Fernández E. Hybrid scatter search and path relinking for the capacitated p-median problem. European Journal of Operational Research 2006;169(2):570–85.

[18] Ahmadi S, Osman I. Density based problem space search for the capacitated clustering p-median problem. Annals of Operations Research 2004;131:21–43.
[19] Prins C, Prodhon C, Calvo RW. Solving the capacitated location-routing problem by a GRASP complemented by a learning process and a path relinking. 4OR: A Quarterly Journal of Operations Research 2006;4(3):221–38.
[20] Salcedo-Sanz S, Portilla-Figueras JA, Ortiz-García EG, Pérez-Bellido AM, Thraves C, Fernandez-Anta A, et al. Optimal switch location in mobile communication networks using hybrid genetic algorithms. Applied Soft Computing 2008;8(4):1486–97.
[21] Lorena LA, Furtado JC. Constructive genetic algorithm for clustering problems. Evolutionary Computation 2001;9(3):309–27.
[22] Falkenauer E. The grouping genetic algorithm: widening the scope of the GAs. Belgian Journal of Operations Research, Statistics and Computer Science 1992;33:79–102.
[23] Agustín-Blas LE, Salcedo-Sanz S, Ortiz-García EG, Portilla-Figueras A, Pérez-Bellido AM. A hybrid grouping genetic algorithm for assigning students to preferred laboratory groups. Expert Systems with Applications 2009;36:7234–41.
[24] Brown EC, Sumichrast R. CF-GGA: a grouping genetic algorithm for the cell formation problem. International Journal of Production Research 2001;36:3651–69.
[25] James TL, Brown EC, Keeling KB. A hybrid grouping genetic algorithm for the cell formation problem. Computers & Operations Research 2007;34:2059–79.
[26] James T, Vroblefski M, Nottingham Q. A hybrid grouping genetic algorithm for the registration area planning problem. Computer Communications 2007;30(10):2180–90.
[27] Brown EC, Vroblefski M. A grouping genetic algorithm for the microcell sectorization problem. Engineering Applications of Artificial Intelligence 2004;17(6):589–98.
[28] Hung C, Sumichrast RT, Brown EC. CPGEA: a grouping genetic algorithm for material cutting plan generation. Computers & Industrial Engineering 2003;44(4):651–72.
[29] Geem ZW, Kim JH, Loganathan GV. A new heuristic optimization algorithm: harmony search. Simulation 2001;76(2):60–8.
[30] Geem ZW. Optimal cost design of water distribution networks using harmony search. Engineering Optimization 2006;38(3):259–77.
[31] Forsati R, Haghighat AT, Mahdavi M. Harmony search based algorithms for bandwidth-delay-constrained least-cost multicast routing. Computer Communications 2008;31(10):2505–19.
[32] Gil-Lopez S, Del Ser J, Olabarrieta I. A novel heuristic algorithm for multiuser detection in synchronous CDMA wireless sensor networks. In: IEEE international conference on ultra modern communications; 2009. p. 1–6.
[33] Zhang R, Hanzo L. Iterative multiuser detection and channel decoding for DS-CDMA using harmony search. IEEE Signal Processing Letters 2009;16(10):917–20.
[34] Del Ser J, Matinmikko M, Gil-Lopez S, Mustonen M. A novel harmony search based spectrum allocation technique for cognitive radio networks. In: IEEE international symposium on wireless communication systems; September 2010.
[35] Revelle C, Swain R. Central facilities location. Geographical Analysis 1970;2:30–42.
[36] Ceselli A, Righini G. A branch-and-price algorithm for the capacitated p-median problem. Networks 2005;45(3):125–42.
[37] Ceselli A, Liberatore F, Righini G. A computational evaluation of a general branch-and-price framework for capacitated network location problems. Annals of Operations Research 2009;167:209–51.