An aggregate label setting policy for the multi-objective shortest path problem

European Journal of Operational Research 207 (2010) 1489–1496

Manuel Iori (a,*), Silvano Martello (b), Daniele Pretolani (a)

(a) Dipartimento di Scienze e Metodi dell'Ingegneria, Università di Modena e Reggio Emilia, Via Amendola 2, I-42122 Reggio Emilia, Italy
(b) Dipartimento di Elettronica, Informatica e Sistemistica, Università di Bologna, Viale Risorgimento 2, I-40136 Bologna, Italy

Article history: Received 6 November 2009; Accepted 23 June 2010; Available online 6 July 2010.

Keywords: Multi-objective; Shortest path; Label setting

Abstract: We consider label setting algorithms for the multi-objective shortest path problem with any number of sum and bottleneck objectives. We propose a weighted sum aggregate ordering of the labels, specifically tailored to combine sum and bottleneck objectives. We show that the aggregate order leads to a consistent reduction of solution times (up to two-thirds) with respect to the classical lexicographic order. © 2010 Elsevier B.V. All rights reserved.

1. Introduction

Multi-objective shortest path problems consist in finding a set of paths that minimizes a number of objective functions. The objectives to be minimized commonly include the sum of the costs and/or the maximum (bottleneck) cost along the path. As observed by Hansen [4], one could alternatively and equivalently consider a bottleneck objective asking to maximize the minimum cost in the path.

Such problems have considerable practical relevance, as they appear in a number of real world applications. We refer for example to the transportation of hazardous material (see, e.g., the recent special issue edited by Erkut [1]), in which the traveled distance is not the only objective: other costs (probability of accidents, population density, and so on) have a relevant impact. There is a huge literature on multi-objective problems arising in transportation: the reader is referred to the recent survey by Jozefowiez et al. [5] for an exhaustive review. In these applications, the quality of the roads (highways, local routes, and so on) or the risk of accident can be seen as bottleneck objectives; see, e.g., de Lima Pinto et al. [6] and de Lima Pinto and Pascoal [7]. Gandibleux et al. [2] discuss an application to Internet traffic routing in which sum and bottleneck objectives are used to construct short routings that also prevent network congestion. In this context, link capacity (bandwidth) arises as a bottleneck objective.

Formally, we are given a directed graph G = (V, A), defined by a set of vertices V = {1, 2, ..., n} and a set A of m arcs (i, j) (i, j ∈ V). Each arc (i, j) ∈ A has k associated non-negative costs c_1(i, j), c_2(i, j), ..., c_k(i, j), and there are k different objective functions, one per cost

type. A prefixed vertex r ∈ V is called the source. Given a vertex i ∈ V and a path P_q(i) (q = 1, 2, ...) from r to i, we define the corresponding label of vertex i as the k-tuple ℓ_q(i) = [f_1^q, f_2^q, ..., f_k^q] that gives the k objective function values of path P_q(i). A path P_q(i) weakly dominates another path P_r(i) if

$$f_h^q \le f_h^r \quad \text{for } h \in \{1, 2, \ldots, k\}, \tag{1}$$

while P_q(i) dominates P_r(i) if (1) holds and

$$f_h^q < f_h^r \quad \text{for at least one } h \in \{1, 2, \ldots, k\}. \tag{2}$$
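For concreteness, the two tests translate directly into code. The following Python fragment is a minimal sketch of conditions (1) and (2); the function names are ours, and a label is simply a sequence of the k objective values:

```python
from typing import Sequence

def weakly_dominates(f_q: Sequence[float], f_r: Sequence[float]) -> bool:
    # Condition (1): every objective value of P_q(i) is no worse.
    return all(q <= r for q, r in zip(f_q, f_r))

def dominates(f_q: Sequence[float], f_r: Sequence[float]) -> bool:
    # Conditions (1) and (2): weak dominance plus a strict improvement.
    return weakly_dominates(f_q, f_r) and any(q < r for q, r in zip(f_q, f_r))
```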

A path is said to be non-dominated if no other path dominates it. The set of all non-dominated paths is called the maximal complete set (see Hansen [4]). Note that the maximal complete set may contain paths P_q(i), P_r(i) such that the corresponding k-tuples have f_h^q = f_h^r for h ∈ {1, 2, ..., k} (equivalent paths). In contrast, the minimal complete set is a subset of the maximal complete set that contains only one path from any set of equivalent paths.

The Multi-Objective Shortest Path Problem (MOSPP) considered in this paper is to find the maximal complete set of paths from r to any other vertex i ∈ V, where the objective functions include both sum and bottleneck criteria. The problem is known to be NP-hard, even in the special case where the number of objectives is limited to two (known in the literature as the Bi-Objective Shortest Path Problem).

Most approaches to the exact solution of multi-objective shortest path problems fall into two main categories, derived from the corresponding classification for the (single objective) shortest path problem, namely label setting and label correcting algorithms. The bi-objective shortest path problem has received considerable attention in the literature since the seminal papers by Vincke [15] (label correcting) and Hansen [4] (label setting). The special case in which one of the two objectives is a bottleneck was considered by Hansen [4] and Martins [9].


The reader is referred to the papers by Skriver [13] and Raith and Ehrgott [11] for extensive surveys on the bi-objective problem.

The general case of MOSPP has received far less attention. The first exact algorithm was proposed by Martins [8] for the case where all objectives are sums. This algorithm is based on a label setting approach, obtained by generalizing the algorithm proposed by Hansen [4] for the bi-objective case. Label correcting methods were later developed by Guerriero and Musmanno [3], again for the case where all objectives are sums. Their computational results indicate no clear superiority of one labeling approach over the other. Gandibleux et al. [2] gave a twofold contribution to the subject: first, they generalized the algorithm by Martins [8] to the case in which one of the objectives is a bottleneck; second, they presented detailed computational results for a particular class of randomly generated instances, which provide a good simulation of real world computer communication issues, where bottleneck objectives are of concern.

The label setting algorithms for MOSPP cited so far adopt a lexicographic ordering of the labels. However, other orderings can be adopted, in particular those based on an aggregate function, i.e., a weighted sum of the label values. Aggregate orders have been extensively used for the bi-objective case; see again Raith and Ehrgott [11] for details. For the general case of MOSPP, aggregate orders were suggested as a possible algorithmic enhancement by Tung and Chew [14], but were then substantially disregarded. Sastry et al. [12] used such an order within a heuristic algorithm for the case where all objectives are sums. Recently, Paixão and Santos [10] presented a detailed computational study including several kinds of ordering. Their results too are limited to the case where all objectives are sums, and show that aggregate orders give consistently better results than lexicographic ones. In addition, they confirm that there is no clear winner between label setting and label correcting methods.

In this paper we address the general MOSPP with any number of sum and bottleneck objectives, and devise a label setting algorithm based on a tailored aggregate order. Computational results show that, with a suitable choice of the aggregate function, the aggregate order is up to three times faster than the lexicographic one.

The rest of the paper is organized as follows. In Section 2 we describe (classical and recent) lexicographic label setting algorithms for MOSPP, while in Section 3 we describe our aggregate version. In Section 4 we describe the benchmarks and the experimental tests performed to choose the aggregate function. The aggregate and lexicographic versions are computationally compared in Section 5. Some conclusions follow in Section 6.

2. Label setting MOSPP algorithms

As in the classical Dijkstra approach, label setting algorithms extend paths from the source to the rest of the network by labeling the vertices. Recall that each vertex i has a number of labels ℓ_q(i) (q = 1, 2, ...), each corresponding to a path P_q(i) from r to i and consisting of a k-tuple [f_1^q, f_2^q, ..., f_k^q] containing the k objective function values of path P_q(i). For a given s (s ≤ k), assume that f_1^q, f_2^q, ..., f_s^q are sum objectives, and f_{s+1}^q, f_{s+2}^q, ..., f_k^q are bottleneck objectives. In the following we denote by P_q(i) ∘ (i, j) the path obtained by adding the arc (i, j) to path P_q(i). Initially, only one label is defined, namely ℓ_1(r) = [0, 0, ..., 0], and marked as temporary.

Let us consider first the case in which all objectives are sums (i.e., s = k). In the classical implementations, each iteration of a label setting algorithm consists of three main steps (a code sketch of this basic scheme is given at the end of the section):

1. selection: select a temporary label ℓ_q(i) that is lexicographically minimal among all temporary labels, and mark it as permanent;
2. propagation: for each arc (i, j) ∈ A, create a new label, say ℓ̄(j), consisting of the k costs corresponding to path P_q(i) ∘ (i, j);
3. dominance check: if ℓ̄(j) is dominated by one of the (permanent or temporary) labels of vertex j, then delete ℓ̄(j) and go to the next iteration. Otherwise add a new label, say ℓ_r(j) = ℓ̄(j), to the labels of vertex j, and delete any temporary label of j that is dominated by ℓ_r(j).

The algorithm terminates when no further temporary label exists, with all permanent labels giving the non-dominated paths from vertex r to all the vertices of V. In order to reconstruct the paths, for each label ℓ_r(j) generated in Step 3 the algorithm maintains a pointer to its predecessor label, i.e., ℓ_q(i).

The correctness of the algorithm follows from the observation that, at the current iteration, the lexicographically minimal temporary label is not dominated by any other label. As the arc costs are non-negative, such a label will never be dominated, at any future iteration, by another temporary label, hence it can be marked as permanent. It is clear that any other selection criterion that ensures the same property (such as the one introduced in the next section) leads to a correct algorithm.

Coming to the case considered in this paper, in which the objectives include bottlenecks (i.e., s < k), the algorithm outlined above would not produce the maximal complete set, as observed by Gandibleux et al. [2]. Consider, e.g., the situation shown in Fig. 1, where it is assumed that k = 2, the first objective being a sum and the second a bottleneck. Consider the iteration in which the temporary labels are ℓ_1(i_1) and ℓ_1(i_2): the algorithm would select the lexicographically smallest label ℓ_1(i_1) and propagate it along arc (i_1, j_1), obtaining ℓ_1(j_1). At the next iteration ℓ_1(i_2) would be selected, and ℓ̄(j_1) = [l + c, b + 1] would be created and then dropped, as dominated by ℓ_1(j_1). At the following iteration vertex j_2 would receive the label ℓ_1(j_2) = [l + 2c, b + 2]. In this way, however, only the path going through i_1, j_1, j_2 would be stored, while the equivalent path going through i_2, j_1, j_2 would be lost. Hence the algorithm would not produce the maximal complete set.

To avoid this problem, Gandibleux et al. [2] proposed to maintain those labels that are equivalent in the sum objectives but dominated in the bottleneck ones (like ℓ̄(j_1) = [l + c, b + 1] in the example), marking them as "hidden". A hidden label ℓ_q(i) does not by itself represent a non-dominated path, but it is propagated like the others, as it could produce non-dominated paths at subsequent iterations. By using the above correction, and adopting the classical lexicographic order for the label selection at Step 1, Gandibleux, Beugnies and Randriamasy [2] obtained an algorithm for MOSPP that was successfully tested on random instances with up to three objectives in total, and up to one bottleneck. In the next section we show how such an algorithm can be modified to achieve better performance.
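As an illustration of the basic scheme above, the following Python sketch implements the three steps with a heap ordered lexicographically (tuples compare lexicographically in Python). It is our own simplified reading, not the authors' C++ code: it prunes with weak dominance, so equivalent labels are discarded (for s = k it computes a minimal rather than a maximal complete set), and the hidden-label bookkeeping and predecessor pointers are omitted.

```python
import heapq

def label_setting(n, arcs, r, s):
    # n: vertices are 1..n; arcs: dict (i, j) -> k-tuple of non-negative
    # costs, the first s being sum objectives, the rest bottlenecks; r: source.
    k = len(next(iter(arcs.values())))
    out = {}                                    # adjacency lists
    for (i, j) in arcs:
        out.setdefault(i, []).append(j)

    def dominated(lab, pool):                   # weak dominance against a pool
        return any(all(p <= l for p, l in zip(other, lab)) for other in pool)

    perm = {i: [] for i in range(1, n + 1)}
    temp = [((0,) * k, r)]                      # heap of (label, vertex)
    while temp:
        lab, i = heapq.heappop(temp)            # 1. selection (lex-minimal)
        if dominated(lab, perm[i]):
            continue                            # superseded while waiting: drop
        perm[i].append(lab)                     # mark permanent
        for j in out.get(i, ()):                # 2. propagation
            c = arcs[(i, j)]
            new = tuple(lab[h] + c[h] if h < s else max(lab[h], c[h])
                        for h in range(k))
            if not dominated(new, perm[j]):     # 3. dominance check
                heapq.heappush(temp, (new, j))
    return perm
```

For instance, with n = 3, s = 1 and arcs = {(1, 2): (3, 5), (2, 3): (4, 2), (1, 3): (9, 1)} (first cost a sum, second a bottleneck), the call label_setting(3, arcs, 1, 1) leaves two permanent labels at vertex 3, (7, 5) and (9, 1), since neither dominates the other.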

Fig. 1. Propagation of hidden labels.


3. An enhanced algorithm for bottleneck objectives

We first observe that it is not necessary to propagate the hidden labels. Indeed, the equivalent paths can be found in the backtracking phase, provided the hidden labels are stored at the vertices where they have been detected. In the example of Fig. 1, label ℓ_2(j_2) (as well as all labels that would possibly be obtained by its propagation) would disappear, but, when backtracking from ℓ_1(j_2) to ℓ_1(j_1), one could realize that another non-dominated path exists, given by the hidden label ℓ̄(j_1). Using quite simple data structures, each hidden label can be stored and later retrieved (in the backtracking phase) in constant time.

The above consideration, however, cannot be expected to lead to remarkable practical improvements. Indeed, our computational experiments (see Section 5) show that, on the graphs we tested, the number of hidden labels is very small (less than 1% of the total number of labels), so the reduction of the CPU time taken by the algorithm is quite irrelevant.

In contrast, a relevant performance improvement can be obtained by replacing the lexicographic ordering of the labels with an aggregate ordering. Basically, this replacement consists in: (i) adding to each vertex label ℓ_q(i) = [f_1^q, f_2^q, ..., f_k^q] an additional aggregate value, say f_{k+1}^q, defined by a linear combination of the objective function values f_1^q, f_2^q, ..., f_k^q of path P_q(i); and (ii) selecting, at each iteration, a temporary label for which f_{k+1}^q is minimum.

It is easy to see that, as long as the linear combination f_{k+1}^q uses positive weights, the aggregate ordering leads to a correct algorithm. Indeed: (i) at the current iteration, the selected label is not dominated by any other label, and (ii) as the arc costs are non-negative, such a label will never be dominated, at any future iteration, by another temporary label.

In order to obtain a convenient linear combination of the objective function values, two facts must be taken into consideration. First, arc costs corresponding to different objectives may range in different intervals, thus the objective function values may differ even by orders of magnitude. Second, bottleneck objectives (that are determined by a single arc cost) are expected to be significantly smaller than sum objectives, even if arc costs range in the same interval. In the choice of the weights of the linear combination, our goal was to give each objective the same "importance", i.e., the same expected impact on the aggregate value. To this aim define, for each objective h (h = 1, 2, ..., k), the average arc cost:

$$\bar{c}_h = \sum_{(i,j) \in A} c_h(i,j) / m. \tag{3}$$

The aggregate value is then obtained by a linear combination of the normalized objective function values, i.e.,

$$f_{k+1}^q = \alpha \sum_{h=1}^{s} \frac{f_h^q}{\bar{c}_h} + \beta \sum_{h=s+1}^{k} \frac{f_h^q}{\bar{c}_h}, \tag{4}$$

where α (resp. β) is a positive weight assigned to the sum (resp. bottleneck) objective functions. Choosing a weight β > α allows giving bottleneck and sum objectives a similar importance. Good values for α and β were obtained through extensive computational experiments, as shown in the next section.
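In code, the aggregate value follows directly from (3) and (4). A minimal Python sketch (names and defaults are ours; the default (alpha, beta) = (1, 10) anticipates the choice made in Section 4):

```python
def average_costs(arcs, k):
    # Equation (3): componentwise mean arc cost over the m arcs.
    m = len(arcs)
    return [sum(cost[h] for cost in arcs.values()) / m for h in range(k)]

def aggregate_key(label, c_bar, s, alpha=1.0, beta=10.0):
    # Equation (4): weighted sum of the normalized objective values.
    k = len(label)
    return (alpha * sum(label[h] / c_bar[h] for h in range(s)) +
            beta * sum(label[h] / c_bar[h] for h in range(s, k)))
```

In the label setting loop sketched in Section 2, it then suffices to push (aggregate_key(new, c_bar, s), new, j) onto the heap, so that the selection step extracts a temporary label whose f_{k+1}^q value is minimum.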


4. Benchmark and parameter setting

For our experiments we used as a basis the class of random instances introduced in Gandibleux et al. [2], which is the only benchmark specifically addressing bottleneck objectives proposed so far in the literature. For different numbers of vertices n, the graphs are generated, according to different percentage densities d = m/(n(n − 1)), as follows: a rooted tree is first generated, and then random arcs are added until the desired density is reached (see the code sketch at the end of this section). The following data sets are considered:

- n ∈ {50, 100, 200};
- d ∈ {5%, 10%, 20%}.

For each of the nine resulting pairs (n, d), three cost types C are tested. Remember that the sum objective functions are numbered h = 1, 2, ..., s and the bottleneck objective functions h = s + 1, s + 2, ..., k. The costs are uniformly randomly generated in different ranges according to the cost type, as follows:

C = 1: c_h(i, j) ∈ [1, 100] for all h;
C = 2: c_h(i, j) ∈ [1, 1000] for all h;
C = 3: c_h(i, j) ∈ [1, 255] for h = 1, c_h(i, j) ∈ [1, 50000] for h = 2, c_h(i, j) ∈ [1, 10^6] for h ≥ 3.

In the following we denote the number of bottleneck objective functions by b (= k − s). For each triple (n, d, C) and pair (s, b) we generated 10 instances. Algorithms were coded in C++ and compiled with gcc 4.3.2 without optimization options. Computational experiments were performed on a 3 GHz Pentium IV PC with 2 GB RAM, running the SuSE Linux 11.1 operating system.

We first performed a series of tests on a subset of instances of limited difficulty, to identify good values for the weights α and β. In particular, our goal was to understand how the choice of the pair (α, β) could be affected by the parameters describing a set of instances, i.e., n, d, C and the pair (s, b). In order to give a better picture of the behavior of the CPU times, in the following we report the results for a set of pairs (α, β) ranging between (64, 1) and (1, 64), even if the best results were obtained by pairs (1, β) with β ranging between 4 and 16.

In the first test we evaluated the algorithm's behavior when varying the cost type C, choosing n = 200, d = 5% and s = b = 2. Fig. 2 summarizes the average CPU times needed to produce the maximal complete set. The results indicate that the best pair (α, β) lies around (1, 8)–(1, 10) and, more importantly, show that the best pair is practically independent of the cost type. The same behavior was observed for the higher densities (with higher CPU times, as will be seen in the following) and when varying the number of vertices n. For this reason, we set C = 3 and n = 200 in the remaining tests.

We then evaluated the effect of varying the density d, keeping n = 200, s = b = 2 and C = 3. The results, depicted in Fig. 3, show, as could be expected, a sharp increase of the CPU times when increasing the density (note the different scale adopted for the ordinate). Moreover, the results suggest that the weight β should be slightly decreased when increasing the density, e.g., the optimal pair is around (1, 10) for d = 5% and around (1, 6) for d = 20%. On the other hand, CPU times seem rather stable around the optimal pair; see, e.g., the pairs between (1, 4) and (1, 10) for d = 20%.

Figs. 4 and 5 show the behavior when varying the number of sum and bottleneck objectives, respectively. It is worth observing that an increase in the number of bottleneck objectives produces much harder instances, whereas for the sum objectives the effect is less dramatic. The results suggest that the weight β should increase with b and, to a minor extent, with s.
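For concreteness, the following Python sketch shows one way to realize the generation scheme described at the beginning of this section. It is our illustrative reading (names and details are ours; the exact procedure of Gandibleux et al. [2] may differ, e.g., in how the rooted tree is built); here d is passed as a fraction, e.g., 0.05 for 5%:

```python
import random

def generate_instance(n, d, cost_type, s, b):
    # A rooted tree is generated first, then random arcs are added
    # until the desired density d = m / (n(n - 1)) is reached.
    k = s + b
    m_target = round(d * n * (n - 1))
    arcs = set()
    for j in range(2, n + 1):                      # tree rooted at vertex 1:
        arcs.add((random.randint(1, j - 1), j))    # enter j from an earlier vertex
    while len(arcs) < m_target:
        i, j = random.sample(range(1, n + 1), 2)   # random extra arc, i != j
        arcs.add((i, j))

    def upper(h):                                  # cost ranges (h 0-indexed)
        if cost_type == 1:
            return 100
        if cost_type == 2:
            return 1000
        return (255, 50000)[h] if h < 2 else 10 ** 6   # cost type 3

    return {a: tuple(random.randint(1, upper(h)) for h in range(k))
            for a in arcs}
```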


Fig. 2. CPU seconds as a function of different (α, β) weight pairs, for C ∈ {1, 2, 3}, with n = 200, d = 5%, s = b = 2.

Fig. 3. CPU seconds as a function of different (α, β) weight pairs, for d ∈ {5%, 10%, 20%}, with n = 200, C = 3, s = b = 2.

Fig. 4. CPU seconds as a function of different (α, β) weight pairs, for s ∈ {1, 2, 3}, with n = 200, C = 3, d = 5%, b = 2.

Also in this case, however, CPU times are rather stable for (α, β) pairs close to the optimal one.

Summing up, the best choice for the pair (α, β) depends, in different ways, on the parameters d, s and b. However, CPU times are rather stable for small variations around the optimal pair. Therefore, when comparing the aggregate to the lexicographic

ordering, it seems reasonable to choose a single pair for all the instances in our benchmark set. In this case, the pair (α, β) = (1, 10) appears to be, on average, the most appropriate choice. The results reported in the next section were obtained with this pair. We remark that we also tried alternative settings of the weights, which took into account the parameters d, s and b. In particular, keeping α = 1 fixed, we tried setting β = 4(s + b) − 7 and β = 15 − d/20.


Fig. 5. CPU seconds as a function of different (α, β) weight pairs, for b ∈ {1, 2, 3}, with n = 200, C = 3, d = 5%, s = 2.

In both cases, however, the resulting CPU times are, on average, almost identical to those obtained with β = 10.

5. Computational comparisons

In this section we report the results of an experimental comparison of the lexicographic and aggregate algorithms on the randomly generated instances described in the previous section. We considered seven pairs (s, b), with s, b ∈ {1, 2, 3} and k = s + b ∈ {3, 4, 5}. Recall that the results in Gandibleux et al. [2] were limited to the pairs (s, b) with b ∈ {0, 1} and k ∈ {2, 3}. For each pair (s, b) we considered n ∈ {50, 100, 200}, d ∈ {5%, 10%, 20%}, and C ∈ {1, 2, 3}, and we generated 10 instances per combination, obtaining a total of 1890 instances. Recall that in the aggregating function (4) we set (α, β) = (1, 10).

The cases (s, b) ∈ {(2, 1), (1, 2), (3, 1)} were very easy to solve for both algorithms: the lexicographic version took on average less than two seconds, and the aggregate version less than one second. For this reason we do not give detailed tables for such cases, but we provide a summary of the results at the end of the section.

In Table 1 we give the results obtained for the case (s, b) = (2, 2). The first three columns give the values of the parameters that define the instances. All the entries in the next columns give average values (possibly scaled) computed over the 10 instances generated for each triple (n, d, C). Column |PS| gives the size of the Pareto set, i.e., the maximal complete set. The next two groups of four columns report the results obtained by the two algorithms: sec is the average CPU time in seconds; %rate is the average percentage of temporary labels that become permanent; temp (resp. perm) is the average number of comparisons between the generated labels and the temporary (resp. permanent) labels, expressed in thousands. The same notation is adopted for Tables 2–5.

Table 1. Lexicographic (Lex) and aggregate (Agg) algorithms for the case s = 2, b = 2 (temp and perm: label comparisons, in thousands).

n     d    C    |PS|       Lex sec  Lex %rate  Lex temp  Lex perm    Agg sec  Agg %rate  Agg temp  Agg perm
50    5    1    415        <0.1     98.2       1         3           <0.1     97.9       1         3
50    5    2    387        <0.1     98.1       1         3           <0.1     98.8       1         2
50    5    3    339        <0.1     98.6       <1        2           <0.1     98.7       1         2
50    10   1    1414       <0.1     95.0       7         55          <0.1     96.3       10        33
50    10   2    1485       <0.1     94.9       9         62          <0.1     96.5       12        38
50    10   3    1515       <0.1     95.2       8         67          <0.1     95.9       12        39
50    20   1    3572       0.1      91.4       52        521         0.1      94.6       68        249
50    20   2    3892       0.1      90.8       61        621         0.1      94.3       75        288
50    20   3    4052       0.1      92.0       58        612         0.1      94.8       78        292
100   5    1    4626       0.1      95.1       31        285         0.1      96.9       45        175
100   5    2    5239       0.1      96.0       37        365         0.1      97.4       55        230
100   5    3    4682       0.1      95.6       32        289         0.1      97.3       47        173
100   10   1    12,283     0.6      90.6       233       2871        0.5      95.1       309       1365
100   10   2    13,568     0.8      90.5       270       3751        0.6      95.1       392       1655
100   10   3    12,275     0.6      91.3       227       2738        0.5      95.2       316       1274
100   20   1    19,816     2.2      87.0       704       11,429      1.3      92.6       795       4659
100   20   2    25,810     3.9      86.4       1093      20,228      2.0      93.9       1297      7181
100   20   3    27,625     4.1      87.2       1153      21,090      2.3      93.9       1399      8502
200   5    1    33,146     2.9      90.7       689       10,122      2.0      95.4       998       4527
200   5    2    36,923     3.5      91.1       854       12,351      2.4      96.1       1223      5486
200   5    3    40,506     4.3      90.2       997       15,851      3.0      95.8       1453      7326
200   10   1    57,883     11.6     86.3       2537      48,932      6.0      93.3       3136      18,070
200   10   2    78,882     21.0     86.3       4076      89,579      10.2     94.8       5081      32,336
200   10   3    82,539     22.0     87.0       4345      93,489      11.4     94.5       5606      36,578
200   20   1    96,662     53.5     80.8       8442      240,566     19.6     91.3       8372      75,548
200   20   2    127,056    88.0     82.1       12,178    390,728     33.5     92.8       12,510    125,455
200   20   3    128,425    80.8     83.2       12,010    359,676     33.3     92.6       12,690    123,617


The results presented in Table 1 show that these instances are very easy to solve for n ≤ 100, for which the two approaches are basically equivalent, although the aggregate approach is slightly faster for n = 100 and d = 20. For n = 200 the instances are somewhat harder, and the advantage given by the aggregate approach is more evident, especially for d ≥ 10. The lexicographic approach performs a slightly smaller number of comparisons with the temporary labels, but a dramatically higher number of comparisons with the permanent ones.

Table 2 refers to the case (s, b) = (3, 2). The results are similar to those obtained for the previous case. However, an increase in the number of sum objectives makes the difficult instances (n = 200, d ≥ 10) 4–5 times more difficult in terms of CPU time. This corresponds to a considerable increase in the size of the Pareto set. The improvement given by the aggregate approach for such instances is around 50% for d = 10, and around 60% for d = 20. The average percentage of temporary labels that become permanent is higher for the aggregate approach, especially for the difficult instances. Also note that the behavior of the two approaches in terms of number of label comparisons is similar to the one observed for the previous table, but the differences are here more evident.

In Table 3 we report on the experiments for the case (s, b) = (1, 3). The results show that increasing the number of bottleneck objectives to three (even with a small number of sum objectives) leads to instances that are considerably more difficult to solve. While instances with n = 50 remain very easy, instances with n = 100 and d = 20 turn out to be already relatively difficult. The improvement given by the aggregate approach for the difficult instances ranges between 40% and 60%. The previous considerations concerning the labels apply to this case too.

The trend observed for Tables 1–3 is confirmed by the results for the case (s, b) = (2, 3). Table 4 exhibits a dramatic increase of the size of the Pareto set, and of the average CPU time needed to solve difficult instances. For the most difficult instances (n = 200, d = 20) the CPU times of the aggregate approach are about one-third of those of the lexicographic approach.

Limited experiments on the case (s, b) = (3, 3), not reported here, showed that the CPU time taken by such instances is about 2 or 3 times the one required for the case (s, b) = (2, 3). To give a few examples, instances with n = 100, d = 20 and instances with n = 200, d = 5 took around 4 CPU minutes with the lexicographic version and 2 minutes with the aggregate version, while instances with n = 200, d = 20 needed around 5 hours with the lexicographic version and 2 hours with the aggregate one.

The results of Tables 1–4 are summarized in Table 5, which also includes the easier cases (s, b) ∈ {(2, 1), (1, 2), (3, 1)}. The entries here are average values computed over the 270 instances generated for each case, and the cases are ordered by increasing value of b, then of s, which corresponds to increasing difficulty. Column hidden gives the average number of hidden labels, while the remaining columns give the same information as the homologous columns of the previous tables. The last line provides the overall average values. Note that hidden labels do not have a significant impact on the performance. For the most critical cases, i.e., when s = 1, the number of hidden labels is around 1% of the total size of the Pareto set, while for the cases where s > 1 this number is negligible.

Tables 1–5 allow us to draw some remarks on our label setting algorithms. Observe that the aggregate version gives a quite similar reduction (more than two-thirds in the best cases) both in the CPU times and in the number of comparisons to permanent labels, even if the latter is slightly more relevant. Due to the huge number of label comparisons performed by both algorithms, it is conceivable that the CPU time reduction obtained by the aggregate version is explained by the corresponding reduction in the number of label comparisons. Note that in our implementation, and for both algorithms, new labels are compared to permanent labels following a temporal order, i.e., in the order in which the labels became permanent, which is clearly different for the lexicographic and the aggregate version. The aggregate order turns out to be much more effective, since rejected new labels tend to be dominated by "older" permanent labels, i.e., labels with a smaller aggregate weight. On the other hand, the aggregate version tends to maintain longer lists of temporary labels, and therefore performs a larger number of comparisons to temporary labels. However, this fact has a much smaller impact on the performance, since the number of comparisons to permanent labels is usually much larger (about one order of magnitude) than the number of comparisons to temporary labels.

Table 2. Lexicographic (Lex) and aggregate (Agg) algorithms for the case s = 3, b = 2 (temp and perm in thousands).

n     d    C    |PS|       Lex sec  Lex %rate  Lex temp  Lex perm    Agg sec  Agg %rate  Agg temp  Agg perm
50    5    1    479        <0.1     98.4       1         4           <0.1     99.2       1         3
50    5    2    401        <0.1     98.8       1         3           <0.1     99.1       1         2
50    5    3    419        <0.1     98.8       1         3           <0.1     99.6       1         3
50    10   1    2122       <0.1     96.7       16        115         <0.1     98.6       24        72
50    10   2    2297       <0.1     97.4       21        126         <0.1     98.8       30        76
50    10   3    2247       <0.1     96.9       18        123         <0.1     98.7       27        73
50    20   1    5503       0.2      93.5       122       1031        0.1      97.4       162       465
50    20   2    6862       0.3      94.2       167       1558        0.2      98.0       235       707
50    20   3    6823       0.3      93.6       165       1632        0.2      97.8       243       677
100   5    1    6967       0.2      97.0       69        575         0.2      98.9       104       359
100   5    2    7032       0.2      96.8       76        585         0.2      98.8       109       348
100   5    3    7222       0.2      97.2       86        670         0.2      98.9       125       416
100   10   1    17,008     1.1      94.1       476       4872        0.8      97.9       702       2044
100   10   2    23,894     2.1      94.0       772       9218        1.4      98.3       1184      4157
100   10   3    22,377     1.9      93.8       750       8447        1.2      98.2       1176      3349
100   20   1    39,091     7.7      90.0       2551      37,263      3.7      96.8       3425      12,191
100   20   2    48,140     11.9     90.7       3528      56,547      5.7      97.4       4835      18,623
100   20   3    49,815     12.6     90.2       3630      59,988      6.1      97.4       4963      20,343
200   5    1    57,167     7.4      93.3       2063      25,886      4.7      98.4       3218      11,225
200   5    2    61,905     9.0      93.1       2482      32,932      5.6      98.5       3814      13,470
200   5    3    64,969     9.3      93.5       2632      33,567      6.0      98.5       4113      14,443
200   10   1    129,975    50.6     89.9       11,862    205,348     21.5     97.4       17,324    65,296
200   10   2    147,223    62.0     90.4       13,835    255,513     29.3     97.7       19,821    88,995
200   10   3    159,586    73.8     90.4       15,997    305,792     33.6     97.7       23,665    100,837
200   20   1    224,913    257.3    85.3       40,654    1,099,910   72.8     96.1       48,968    269,911
200   20   2    292,668    387.4    86.6       59,820    1,647,311   124.9    96.8       76,876    425,396
200   20   3    324,290    456.2    87.0       71,177    1,935,181   152.0    96.9       92,507    514,946

Table 3. Lexicographic (Lex) and aggregate (Agg) algorithms for the case s = 1, b = 3 (temp and perm in thousands).

n     d    C    |PS|       Lex sec  Lex %rate  Lex temp  Lex perm    Agg sec  Agg %rate  Agg temp  Agg perm
50    5    1    526        <0.1     98.5       1         6           <0.1     96.7       1         5
50    5    2    375        <0.1     98.3       <1        3           <0.1     96.9       1         2
50    5    3    596        <0.1     97.5       1         7           <0.1     96.7       1         6
50    10   1    2714       0.1      95.1       21        195         <0.1     93.1       30        128
50    10   2    2277       <0.1     95.4       16        147         <0.1     93.8       23        95
50    10   3    2446       <0.1     94.9       17        163         <0.1     94.5       23        109
50    20   1    6133       0.2      92.1       116       1430        0.2      91.1       156       732
50    20   2    6904       0.3      91.5       139       1894        0.2      91.6       188       970
50    20   3    9293       0.5      91.5       233       3258        0.4      92.2       321       1610
100   5    1    9762       0.3      95.2       102       1234        0.3      94.3       156       766
100   5    2    12,055     0.5      95.4       147       1887        0.4      95.2       217       1278
100   5    3    9335       0.3      95.4       101       1144        0.3      94.4       143       808
100   10   1    20,461     1.7      90.3       508       7759        1.1      91.3       675       4096
100   10   2    30,785     3.7      92.1       993       17,206      2.4      93.1       1405      9025
100   10   3    27,695     2.9      91.6       852       13,698      2.0      92.7       1205      7323
100   20   1    47,913     12.7     86.4       2841      62,215      6.2      89.7       3497      26,477
100   20   2    68,126     24.2     88.5       4667      117,730     12.8     91.5       6020      52,620
100   20   3    64,834     21.9     89.0       4441      106,475     11.6     91.7       5731      46,961
200   5    1    77,432     14.2     90.5       2900      55,946      8.7      92.4       4169      29,768
200   5    2    102,773    22.1     92.2       4206      89,139      14.5     94.1       6217      48,726
200   5    3    103,203    21.6     93.0       4131      86,770      14.2     94.3       6296      46,594
200   10   1    150,137    66.8     88.6       10,243    282,049     30.8     91.6       13,620    124,940
200   10   2    206,049    125.5    87.7       18,164    542,763     64.0     92.1       24,255    244,950
200   10   3    229,100    141.4    89.9       19,675    608,461     75.2     93.2       27,156    283,378
200   20   1    278,751    379.3    82.7       42,386    1,593,312   127.9    89.1       45,561    599,122
200   20   2    401,531    748.9    84.0       71,682    3,307,992   291.1    90.9       81,925    1,205,060
200   20   3    423,427    780.7    85.3       74,131    3,392,441   317.2    91.4       88,502    1,274,974

Table 4. Lexicographic (Lex) and aggregate (Agg) algorithms for the case s = 2, b = 3 (temp and perm in thousands).

n     d    C    |PS|         Lex sec  Lex %rate  Lex temp  Lex perm     Agg sec  Agg %rate  Agg temp  Agg perm
50    5    1    641          <0.1     98.9       1         8            <0.1     98.8       2         7
50    5    2    615          <0.1     98.6       1         7            <0.1     98.7       2         6
50    5    3    730          <0.1     99.0       2         11           <0.1     99.0       2         9
50    10   1    4517         0.1      97.3       56        512          0.1      98.0       84        324
50    10   2    4405         0.1      96.9       61        545          0.1      97.9       91        337
50    10   3    4832         0.1      97.5       68        570          0.1      97.8       100       389
50    20   1    12,114       0.9      95.2       421       4626         0.6      96.8       630       2222
50    20   2    15,880       1.6      94.8       707       8174         1.0      97.1       1093      3798
50    20   3    15,389       1.5      93.8       697       7860         1.1      97.1       981       3672
100   5    1    13,567       0.6      96.7       214       2309         0.5      98.1       316       1551
100   5    2    19,995       1.2      97.2       429       4599         1.0      98.5       650       3076
100   5    3    18,178       1.0      97.2       361       3858         0.8      98.4       538       2536
100   10   1    51,412       8.7      95.3       2621      38,476       5.4      97.8       4050      19,646
100   10   2    64,628       13.6     95.3       4000      60,318       8.8      98.1       6331      31,057
100   10   3    60,092       11.5     95.2       3736      50,593       7.8      97.9       5630      27,316
100   20   1    112,480      63.1     91.6       14,215    284,980      25.9     96.6       20,644    103,620
100   20   2    165,241      134.6    91.9       27,839    597,822      60.1     97.1       41,736    222,662
100   20   3    152,868      114.7    92.1       24,433    509,024      52.7     97.0       36,488    192,628
200   5    1    169,332      55.0     94.6       12,405    217,118      30.6     97.9       19,786    108,528
200   5    2    230,882      97.6     95.2       20,843    394,386      62.5     98.4       32,435    216,011
200   5    3    216,410      85.0     95.2       18,517    342,096      53.3     98.4       29,484    177,299
200   10   1    389,051      391.7    92.0       67,119    1,632,776    149.5    97.1       104,158   611,199
200   10   2    530,627      696.9    92.7       109,671   2,918,830    327.2    97.8       167,558   1,197,585
200   10   3    552,012      735.0    92.9       119,158   3,072,673    359.8    97.9       179,296   1,289,790
200   20   1    723,565      2196.3   87.6       278,857   9,343,281    579.1    96.0       374,289   2,612,366
200   20   2    1,148,497    5135.0   89.1       564,107   21,892,195   1687.6   97.0       786,184   6,514,477
200   20   3    1,085,723    4180.6   90.3       481,874   17,861,727   1578.8   97.2       657,347   5,905,223

Table 5. Lexicographic (Lex) and aggregate (Agg) algorithms: summary. Entries are averages over the 270 instances of each case (temp and perm in thousands).

s     b    |PS|       hidden   Lex sec  Lex %rate  Lex temp  Lex perm     Agg sec  Agg %rate  Agg temp  Agg perm
2     1    4837       1        0.3      85.0       88        1569         0.2      89.8       72        733
3     1    11,949     <1       2.0      89.7       445       8988         1.0      95.6       456       3021
1     2    10,107     173      1.2      85.8       223       5587         0.8      86.9       202       2711
2     2    30,556     3        11.1     90.8       1856      49,122       4.8      95.3       2073      16,856
3     2    63,385     <1       50.1     93.4       8629      212,007      17.4     98.1       11,395    58,090
1     3    84,986     1056     87.8     91.6       9730      381,308      36.4     93.0       11,759    148,538
2     3    213,473    15       515.8    94.6       64,904    2,194,421    185.0    97.7       91,478    712,864
aver.      59,899     178      95.5     90.1       12,268    407,572      35.1     93.8       16,776    134,687

6. Conclusions

In this paper we considered label setting algorithms for MOSPP with one or more bottleneck objectives. In particular, we investigated an aggregate label setting policy and compared it to the usual lexicographic policy. We pointed out that the choice of the aggregating function is a crucial aspect, since sum and bottleneck objectives need to be assigned different weights, even if the range of the arc costs is the same or is normalized. Our computational analysis showed that a proper choice of the aggregating function leads to a relevant improvement of the algorithm's performance. Moreover, this improvement can be related to the different order in which labels are generated by the two policies.

On the methodological side, our paper provides two contributions. The first concerns the use of weighted aggregate functions with the aim of normalizing the importance of the objectives. The second is the relation between the performance of the algorithm and the order in which label comparisons are performed. To the best of our knowledge, these two aspects are not addressed in the existing literature on the general case of MOSPP. We believe that these contributions may be useful for the development of faster labeling algorithms for MOSPP, also in the case where only sum objectives are present, or when the minimal (instead of the maximal) complete set has to be found.

Acknowledgement

The authors would like to thank the Ministero dell'Istruzione, dell'Università e della Ricerca, Italy, for its support.

References

[1] E. Erkut, Editorial: Introduction to the special issue, Computers & Operations Research 34 (2007) 1241–1242.
[2] X. Gandibleux, F. Beugnies, S. Randriamasy, Martins' algorithm revisited for multi-objective shortest path problems with a MaxMin cost function, 4OR 4 (2006) 47–59.
[3] F. Guerriero, R. Musmanno, Label correcting methods to solve multicriteria shortest path problems, Journal of Optimization Theory and Applications 111 (2001) 589–613.
[4] P. Hansen, Bicriterion path problems, in: G. Fandel, T. Gal (Eds.), Multiple Criteria Decision Making Theory and Application, Lecture Notes in Economics and Mathematical Systems, vol. 177, Springer, 1980, pp. 109–127.
[5] N. Jozefowiez, F. Semet, E.-G. Talbi, Multi-objective vehicle routing problems, European Journal of Operational Research 189 (2008) 293–309.
[6] L. de Lima Pinto, C.T. Bornstein, N. Maculan, The tricriterion shortest path problem with at least two bottleneck objective functions, European Journal of Operational Research 198 (2009) 387–391.
[7] L. de Lima Pinto, M.M.B. Pascoal, On algorithms for the tricriteria shortest path problem with two bottleneck objective functions, Computers & Operations Research 37 (2010) 1774–1779.
[8] E.Q.V. Martins, On a multicriteria shortest path problem, European Journal of Operational Research 16 (1984) 236–245.
[9] E.Q.V. Martins, On a special class of bicriterion path problems, European Journal of Operational Research 17 (1984) 85–94.
[10] J.P. Paixão, J.L. Santos, Labelling methods for the general case of the multiobjective shortest path problem – a computational study, Technical Report 7-2009, Centro de Investigação Operacional, University of Coimbra, 2009.
[11] A. Raith, M. Ehrgott, A comparison of solution strategies for biobjective shortest path problems, Computers & Operations Research 36 (2009) 1299–1331.
[12] V.N. Sastry, T.N. Janakiraman, I. Ismail Mohideen, New polynomial time algorithms to compute a set of Pareto optimal paths for multi-objective shortest path problems, International Journal of Computer Mathematics 82 (2005) 289–300.
[13] A.J.V. Skriver, A classification of bicriterion shortest path (BSP) algorithms, Asia Pacific Journal of Operational Research 17 (2000) 192–212.
[14] C.T. Tung, K.L. Chew, A multicriteria Pareto-optimal path algorithm, European Journal of Operational Research 62 (1992) 203–209.
[15] P. Vincke, Problèmes multicritères, Cahiers du Centre d'Études de Recherche Opérationnelle 16 (1974) 425–439.