A multi-start variable neighborhood search for solving the single path multicommodity flow problem


Applied Mathematics and Computation 251 (2015) 132–142


H. Masri a,*, S. Krichen a, A. Guitouni b

a LARODEC Laboratory, Institut Supérieur de Gestion de Tunis, University of Tunis, Rue de la liberté, 2000 Bardo, Tunisia
b Peter B. Gustavson School of Business, University of Victoria, Victoria, British Columbia, Canada


Keywords: Multicommodity flow problem; Routing problem; Variable neighborhood search

Abstract

In this paper, we propose a new global routing algorithm supporting advance reservation. A predefined number of messages are to be routed in a capacitated network including a set of nodes that can be producers and/or consumers of information. We assume that the same information can be held by different sources. We first propose a nonlinear mathematical formulation for this problem by extending the single path multicommodity flow formulation. To solve such an NP-hard optimization problem, we develop a multi-start variable neighborhood search method (MVNS). The results of extensive computational experiments across a variety of networks from the literature are reported. For small and medium scale instances, the results are compared with the optimal solution generated by LINGO in terms of time and optimality. For large size instances, a comparison to a state-of-the-art ant colony system approach is performed. The obtained results show that the MVNS algorithm is computationally effective and provides high-quality solutions.

© 2014 Elsevier Inc. All rights reserved.

1. Introduction

In telecommunication networks, an efficient routing algorithm should find an optimal path for message transmission so as to satisfy the quality of service (QoS) required by end users. Given limited capacity, an efficient resource reservation mechanism needs to be defined in order to meet these QoS requirements. In general, two types of network resource reservation can be distinguished [17]. The first type is immediate reservation, which leads to dynamic routing algorithms. The second type is in-advance reservation, which allows a routing plan to be computed and then followed in later stages. This second type enables a global routing algorithm that assumes a priori knowledge of the network structure and of the different requests. Such a design is prominent in applications where voluminous datasets must be transferred across the network. A potential application is grid computing, which introduces new ways of sharing resources across geographically separated sites. Typical computations on grids generally lead to large data transfers, which would overload the network unless advance reservations are made. Another application supporting advance reservation is the routing of video data in a teleconference where different entities share the network resources. It is reasonable to assume that, in such applications, users schedule their requests ahead of time so that the routing optimizer can efficiently manage the transmission.

* Corresponding author. E-mail addresses: [email protected] (H. Masri), [email protected] (S. Krichen), [email protected] (A. Guitouni).
http://dx.doi.org/10.1016/j.amc.2014.10.123
© 2014 Elsevier Inc. All rights reserved.


Different centralized routing algorithms assuming advance reservation have been designed in the literature. In the context of multiple pairs of source–destination nodes, the routing problem is modeled as a multicommodity flow problem (MCFP) [13], where a commodity corresponds to a telecommunication traffic demand between two nodes. The basic MCFP can be described as follows [13]: given a set of commodities and an underlying network structure consisting of a number of nodes and capacitated arcs, the routing algorithm should find an optimal routing policy that transfers the commodities through the network at minimum cost without violating the capacity limits [2]. For a comprehensive survey of MCFPs and their solution approaches, the reader can refer to [1,2,13,16]. Although the basic MCFP is polynomially solvable, adding specific constraints renders the problem NP-hard. Holmberg and Yuan [12] extended the basic MCFP to include side constraints on paths. They explained that telecommunication applications often impose additional delay or reliability requirements on the paths used for routing; these requirements are modeled as side constraints. The problem is solved to optimality using a column-generation method. Furthermore, if only a single path is allowed for the flow of each commodity, the problem becomes NP-hard. This variant is denoted the unsplittable MCFP or the single path MCFP. Such a model is suitable for circuit-switched networks and certain optical networks where a flow cannot be bifurcated. For instance, in optical networks employing wavelength division multiplexing (WDM), to send data from one access node to another, one needs to establish a single route, also called a light path, between the source and destination nodes and to allocate a free wavelength on all of the links of the path [6]. The first mathematical formulation of the single path multicommodity flow problem was introduced by Barnhart et al. [5]. Different solution approaches have been developed for the single path MCFP. Barnhart et al. [4] proposed an exact branch-and-price-and-cut algorithm. However, due to its NP-hardness, the problem is generally solved using approximation algorithms [7,3]. More recently, a metaheuristic approach based on the ant colony method was developed [6].

In this paper, we study the single path multicommodity flow problem with bandwidth allocation. The problem consists of sending various messages from a set of sources to different destinations through a capacitated network where each arc is characterized by a capacity and a transmission delay. A node in the network can be an information producer (source) and/or an information consumer (destination), or simply a relay node. The same information can be held by different sources. We assume the existence of a centralized routing coordinator managing all information exchange requests, and that the data transfer sizes are known beforehand. Since bandwidth is a valuable and scarce resource, efficient bandwidth management is a key objective of the routing problem. Hence, solving the routing problem consists of generating a single path for each flow together with a dedicated bandwidth value along that path. We propose to model this problem as a multi-source single path multicommodity flow problem (MMCF), generalizing the formulation of the single path multicommodity problem [4].
The proposed formulation handles the assumption of multiple sources for each request as well as the bandwidth allocation mechanism. Given the complexity of the MMCF, we propose to solve it using a multi-start variable neighborhood search (MVNS) method. VNS is a metaheuristic technique [15] that has quickly gained widespread success, as it generates promising results for numerous optimization problems [11]. Its basic idea is a systematic change of neighborhood within an iterative local search. Brimberg et al. [8] proposed a hybrid combination of VNS and random multi-start local search, and demonstrated the effectiveness of such a procedure on various combinatorial problems. We use MVNS to solve the MMCF problem. The adopted neighborhood structure consists of changing one or k paths of the current solution. At each iteration, a submodule optimizing the bandwidth allocation is triggered in order to assign a dedicated bandwidth to each path. This submodule solves a single-objective nonlinear program with the LINGO API. As a local search method, a greedy exploration procedure is proposed. The results of extensive computational experiments across a variety of networks are reported.

The contributions of this paper are twofold:
1. The mathematical formulation of the MMCF, extending the single path multicommodity flow model by including the multi-source assumption and the bandwidth allocation management.
2. The adaptation and implementation of the VNS metaheuristic.

Using different test problems arising from the telecommunications industry [9,6,12], the experimental results show that the MVNS obtains promising solutions in reasonable CPU time. This is confirmed by a comparison with the exact nonlinear solver LINGO for small sized problems, and by an empirical comparison with a state-of-the-art ant colony system (ACS) metaheuristic [6] for large instances.

The remainder of the paper is organized as follows. Section 2 describes the MMCF problem and states its formulation as a nonlinear optimization problem. Section 3 presents the MVNS solution approach. Section 4 provides experimental results demonstrating the efficiency of the MVNS method. Section 5 concludes the paper.

2. Problem description

The MMCF problem can be modeled by a directed graph G = (N, M), where N is a set of nodes and M a set of arcs. Each arc m is characterized by a limited capacity c_m, which denotes the maximum number of data units that can be transmitted per unit of time, and a lead time l_m, which represents the time required to send data through the arc. A set of messages is to be transmitted across the network.


A node can be an information provider (source) and/or a consumer requiring an information item (destination), or a neutral relay node. Each information item i has a predetermined size s_i. Since the same information can be requested by different nodes, a flow is described by a pair (destination node d, information i). The words request and flow are used interchangeably throughout this paper. As a solution, we specify for each flow the assigned source node and the generated single path. Furthermore, due to the limited capacity of the links in the network, and in order to avoid overloading the network resources, the router needs to manage the bandwidth allocation. Hence, a fixed bandwidth value is assigned to each flow; this value implicitly defines the transmission rate of its source node. Therefore, solving the MMCF consists of defining, for each request:

• the assigned source node (a node offering the requested message);
• the path (a succession of arcs relating the source and destination nodes);
• the bandwidth allocated along the path.

We state, in what follows, the mathematical formulation of the MMCF, which consists of minimizing the overall delay in a communication network while fulfilling the source assignment, single path and bandwidth constraints. The delay of a path can be expressed in terms of the size of the message, the capacity assigned to the path, and the sum of the lead times. We model the MMCF as a mixed integer nonlinear program with the following notation (a sketch of the corresponding data structures in code is given after the list):

N : the set of nodes {n_1, ..., n_|N|} of the graph
M : the set of arcs {m_1, ..., m_|M|} of the graph
I : the set of information items {i_1, ..., i_|I|} to be shared
c_m : the capacity of arc m
l_m : the lead time of arc m
s_i : the size of information i
A_{|N|x|M|}, a_{nm} : node–arc incidence matrix; a_{nm} = 1 if arc m has node n as source, -1 if arc m is incident to node n, 0 otherwise
IC_{|N|x|I|}, ic_{ni} : information consumer matrix; ic_{ni} = 1 if node n requires information i, 0 otherwise
IP_{|N|x|I|}, ip_{ni} : information producer matrix; ip_{ni} = 1 if node n is a provider of information i, 0 otherwise
x_{sdi} : binary variable equal to 1 if s is the assigned source to satisfy the receiver d requiring information i, 0 otherwise
P_{di} : a vector specifying the arcs composing the generated path satisfying the destination–information pair (d, i)
p_m^{di} : binary variable equal to 1 if arc m lies on the path used to satisfy (d, i), 0 otherwise
y_{di} : the capacity (bandwidth) allocated to the destination–information pair (d, i)
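To make the notation concrete, the following Java sketch shows one possible in-memory representation of an MMCF instance. It is an illustration only: the class and field names (MMCFInstance, Arc, and so on) are ours and do not come from the authors' implementation.

// Hypothetical data structures for an MMCF instance (names are ours, not the authors').
import java.util.List;

public final class MMCFInstance {
    public final int nbNodes;          // |N|
    public final List<Arc> arcs;       // the arc set M, with capacities and lead times
    public final double[] size;        // size[i] = s_i, size of information i
    public final boolean[][] consumes; // consumes[n][i] is true iff ic_{ni} = 1
    public final boolean[][] produces; // produces[n][i] is true iff ip_{ni} = 1

    public MMCFInstance(int nbNodes, List<Arc> arcs, double[] size,
                        boolean[][] consumes, boolean[][] produces) {
        this.nbNodes = nbNodes;
        this.arcs = arcs;
        this.size = size;
        this.consumes = consumes;
        this.produces = produces;
    }

    /** Directed arc m = (tail, head) with capacity c_m and lead time l_m. */
    public static final class Arc {
        public final int tail, head;
        public final double capacity, leadTime;

        public Arc(int tail, int head, double capacity, double leadTime) {
            this.tail = tail;
            this.head = head;
            this.capacity = capacity;
            this.leadTime = leadTime;
        }

        /** Entry a_{nm} of the node-arc incidence matrix. */
        public int incidence(int n) {
            if (n == tail) return 1;   // the arc leaves node n
            if (n == head) return -1;  // the arc enters node n
            return 0;
        }
    }
}

With this layout, the matrices A, IC and IP of the model are recovered from Arc.incidence, consumes and produces, respectively.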

The objective: minimize the overall time to satisfy all the requests. The arrival time of a message i at a destination d depends on the size of the message, the lead times of the arcs composing the path, and the value of the dedicated bandwidth along the path. It can be decomposed into a delivery time, \sum_{d}\sum_{i:\,ic_{di}=1}\sum_{s} (s_i / y_{di})\, x_{sdi}, and a propagation time, \sum_{d}\sum_{i:\,ic_{di}=1}\sum_{m} p_m^{di}\, l_m:

$$\min\; Z(W) = \sum_{d}\;\sum_{i:\,ic_{di}=1}\left[\sum_{s} \frac{s_i}{y_{di}}\, x_{sdi} + \sum_{m} p_m^{di}\, l_m\right] \tag{1}$$
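To illustrate the objective, consider a single request (the numbers here are ours, purely for illustration): a flow of size s_i = 10 Mbits assigned a bandwidth y_{di} = 2 Mbits per unit of time and routed over a path whose two arcs have lead times 1 and 2. Its contribution to Z(W) is

$$\frac{s_i}{y_{di}} + \sum_{m} p_m^{di}\, l_m = \frac{10}{2} + (1 + 2) = 8 .$$

Doubling the allocated bandwidth would halve the delivery term but leave the propagation term unchanged; this is the trade-off arbitrated by the bandwidth allocation constraints below.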

System constraints:

1. Source assignment: each destination node–information pair (d, i), such that ic_{di} = 1, should be assigned to one source node:

$$\sum_{s} x_{sdi} = 1, \qquad d = 1,\ldots,N,\; i = 1,\ldots,I,\; ic_{di} = 1 \tag{2}$$

A pair (d, i) can be assigned to a source node s only if ip_{si} = 1:

$$x_{sdi} \le ip_{si}, \qquad d = 1,\ldots,N,\; i = 1,\ldots,I,\; s = 1,\ldots,N \tag{3}$$


A pair (d, i) can be assigned to a source node s only if ic_{di} = 1:

$$x_{sdi} \le ic_{di}, \qquad d = 1,\ldots,N,\; i = 1,\ldots,I,\; s = 1,\ldots,N \tag{4}$$

2. Single path: given a flow (d, i) such that ic_{di} = 1, only one arc of the generated path should be connected to the assigned source node s:

$$\sum_{s}\sum_{m} x_{sdi}\, a_{sm}\, p_m^{di} = 1, \qquad d = 1,\ldots,N,\; i = 1,\ldots,I,\; ic_{di} = 1 \tag{5}$$

The generated path of the flow (d, i) should include one arc incident to the destination node d:

$$\sum_{m} a_{dm}\, p_m^{di} = -1, \qquad d = 1,\ldots,N,\; i = 1,\ldots,I,\; ic_{di} = 1 \tag{6}$$

Each node in the generated path has to be connected to two arcs, except for the destination and the source nodes:

$$(1 - x_{ndi}) \sum_{m} a_{nm}\, p_m^{di} = 0, \qquad n = 1,\ldots,N,\; n \neq d,\; d = 1,\ldots,N,\; i = 1,\ldots,I,\; ic_{di} = 1 \tag{7}$$

3. Bandwidth allocation: for each arc m, the sum of the bandwidths allocated to the paths crossing this arc should be less than or equal to its capacity:

$$\sum_{d}\sum_{i} y_{di}\, p_m^{di} \le c_m, \qquad m = 1,\ldots,M \tag{8}$$

$$y_{di} > 0, \qquad d = 1,\ldots,N,\; i = 1,\ldots,I,\; ic_{di} = 1 \tag{9}$$

4. Continuous and binary requirements:

$$W = (x_{sdi},\, p_m^{di},\, y_{di}); \quad x_{sdi},\, p_m^{di} \in \{0,1\},\; y_{di} \ge 0, \qquad m = 1,\ldots,M,\; d = 1,\ldots,N,\; s = 1,\ldots,N,\; i = 1,\ldots,I \tag{10}$$

Hence |W| = |N| x |M| x |I| + |N| x |N| x |I| + |N| x |I|.

3. MVNS algorithm for solving the MMCF

In this section, we first describe the general scheme of the VNS, followed by its adaptation to solve the MMCF problem.

3.1. Description of the VNS

Variable neighborhood search is a metaheuristic technique [15] that has quickly gained widespread success, and a large number of successful applications have been reported [11]. Let N = {N^(1), N^(2), ..., N^(max)} denote a finite set of neighborhoods, where N_k(s) is the set of solutions in the k-th neighborhood of solution s. Most local search methods use only one type of neighborhood (i.e. max = 1), whereas the VNS [15] tries to avoid being trapped in local minima by using more than one neighborhood. The basic steps of the algorithm are as follows (a code sketch of this generic loop is given at the end of this subsection):

1. Initialization
   • Set the neighborhood structures N_k.
   • Determine an initial solution s.
   • Set k = 1.
2. Repeat
   • Shaking: generate a random solution s' in N_k(s).
   • Local search: apply some local search method with s' as the initial solution; let s'' be the local optimum so obtained.
   • Move or not: if s'' is better than the incumbent s, then set s = s'' and continue the search with k = 1; else set k = k + 1.

The basic VNS consists of both a stochastic component (the randomized selection of a neighbor in the shaking phase) and a deterministic component (the application of an iterative improvement procedure in each iteration). The solution obtained by the local search is compared with the incumbent and is accepted as the new starting point if an improvement was made; otherwise it is rejected. The iterative process is repeated until the stopping criteria are met. The stopping criteria can be a maximum number of iterations, a maximum CPU time or a maximum number of iterations without improvement.
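As an illustration of this generic scheme, the following Java sketch implements the basic VNS loop described above. It is a minimal sketch under our own naming assumptions (the solution type S and the shake, localSearch and objective functions are placeholders, not the authors' code); the MMCF-specific shaking, local search and bandwidth allocation are described in Section 3.2.

// Minimal sketch of the basic VNS loop (illustrative only; names are ours).
import java.util.function.BiFunction;
import java.util.function.ToDoubleFunction;
import java.util.function.UnaryOperator;

public final class BasicVNS<S> {
    private final BiFunction<S, Integer, S> shake;  // random neighbor in N_k(s)
    private final UnaryOperator<S> localSearch;     // iterative improvement procedure
    private final ToDoubleFunction<S> objective;    // objective value, to be minimized
    private final int kMax;

    public BasicVNS(BiFunction<S, Integer, S> shake, UnaryOperator<S> localSearch,
                    ToDoubleFunction<S> objective, int kMax) {
        this.shake = shake;
        this.localSearch = localSearch;
        this.objective = objective;
        this.kMax = kMax;
    }

    /** Runs the VNS from an initial solution for a fixed number of iterations. */
    public S run(S initial, int maxIterations) {
        S best = initial;
        int k = 1;
        for (int it = 0; it < maxIterations; it++) {
            S shaken = shake.apply(best, k);         // shaking in the k-th neighborhood
            S improved = localSearch.apply(shaken);  // local search from the shaken point
            if (objective.applyAsDouble(improved) < objective.applyAsDouble(best)) {
                best = improved;                     // move and restart with k = 1
                k = 1;
            } else {
                k = (k < kMax) ? k + 1 : 1;          // systematically change the neighborhood
            }
        }
        return best;
    }
}

A multi-start variant simply calls run several times from different initial solutions and keeps the best result, which is the strategy adopted in the MVNS.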


3.2. MVNS adaptation to solve the MMCF

The MMCF is NP-hard, since it extends the single path multicommodity flow problem, which is known to be NP-hard [5]. Therefore, we propose to solve the MMCF problem using the VNS metaheuristic [11]. The first step towards an implementation of the VNS is to define a representation of a feasible solution adapted to the MMCF. The choice of the coding is essential for the success of the heuristic: it should be designed so that it leads to an efficient implementation of the neighborhood search of the VNS. Fig. 1 depicts the structure of a solution. A solution associates, to each request, a path and a dedicated bandwidth. As an input to the algorithm, we assume that a set of paths is generated for each (destination, information) pair using the k-shortest paths algorithm of Jimenez and Marzal [10]. Therefore, for each flow, we first determine the set of possible source nodes offering the requested message; then the k shortest paths relating the destination to one of the possible sources are generated. As a result, each request has a list ListPaths containing plausible paths relating the destination node to one of the source nodes offering the requested message. To adapt the metaheuristic to the MMCF, we need to describe the initialization method, the shake procedure and the local search method.

1. Initialization: the initial solution s assigns a path to each request using roulette wheel selection. Indeed, for each pair (d, i) with ic_{di} = 1, a path is selected from the list ListPaths. The probability distribution is computed according to the total transmission times of the paths.
2. Neighborhood: a neighborhood N_k consists of changing k paths of the current solution. These paths are randomly selected from ListPaths.
3. Local search method: the algorithm applies a local search method starting from a solution s' to find its neighbor s''. One of the paths of the current solution s' is reconstructed using a greedy procedure. A request (d, i) is randomly selected and its path P_{di} is regenerated, while keeping the same paths for the other requests. The main idea is to search for a new path with minimum delay while considering the updated arc capacities. We propose a probabilistic reverse path construction strategy: we start from the destination node and move backward until reaching an adequate source. The choice of a neighbor depends on local information d_{kj} associated with an edge (k, j). This value is expressed in terms of the lead time and the capacity of the link; these two factors influence the total delay of the path. The two measures are standardized before calculating d_{kj}:

$$d_{kj} = \left(\frac{1}{l_{kj}} + \frac{1}{c_{kj}}\right)^{-1} \tag{11}$$

While constructing a path, to move from a current node k to another node j, a pseudo-random selection rule is applied. A random number q in [0, 1] is generated. If q <= 0.5, the best edge is chosen according to the value of d_{kj}:

$$p_{kj} = \begin{cases} 1 & \text{if } j = \arg\max_{h \in G(k)} d_{kh} \\ 0 & \text{otherwise} \end{cases} \tag{12}$$

where G(k) defines the neighborhood of node k. Otherwise, the next node is chosen using the roulette wheel selection procedure of evolutionary computation. The probability distribution is:

$$p_{kj} = \frac{d_{kj}}{\sum_{h \in G(k)} d_{kh}}, \qquad \forall j \in G(k) \tag{13}$$
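To make the construction rule concrete, the following Java sketch implements the pseudo-random choice of the next node according to (11)–(13). It is an illustrative sketch with our own type and field names (Edge, leadTime, capacity); it is not the authors' implementation.

// Illustrative next-node selection following (11)-(13); names are ours.
import java.util.List;
import java.util.Random;

public final class NextNodeSelector {
    private final Random random = new Random();

    /** Candidate edge (k, j) with lead time l_kj and remaining capacity c_kj. */
    public static final class Edge {
        public final int to;
        public final double leadTime, capacity;

        public Edge(int to, double leadTime, double capacity) {
            this.to = to;
            this.leadTime = leadTime;
            this.capacity = capacity;
        }
    }

    /** Local information d_kj as in Eq. (11). */
    private static double d(Edge e) {
        return 1.0 / (1.0 / e.leadTime + 1.0 / e.capacity);
    }

    /** Chooses the next node among the neighbors G(k) of the current node (assumed non-empty). */
    public int select(List<Edge> neighbors) {
        if (random.nextDouble() <= 0.5) {
            // Exploitation: argmax of d_kj, Eq. (12).
            Edge best = neighbors.get(0);
            for (Edge e : neighbors) {
                if (d(e) > d(best)) best = e;
            }
            return best.to;
        }
        // Exploration: roulette wheel with the probabilities of Eq. (13).
        double total = 0.0;
        for (Edge e : neighbors) total += d(e);
        double r = random.nextDouble() * total;
        double cumulative = 0.0;
        for (Edge e : neighbors) {
            cumulative += d(e);
            if (r <= cumulative) return e.to;
        }
        return neighbors.get(neighbors.size() - 1).to; // numerical safety
    }
}

With probability 0.5 the rule exploits the locally best value d_{kj}; otherwise it explores via the roulette wheel, mirroring the pseudo-random proportional rule used in ant colony systems.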

Fig. 1. A solution coding for the MMCF.
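As an illustration of this solution coding, the following Java sketch associates each request (d, i) with a path chosen from its ListPaths and a dedicated bandwidth y_{di}. The class and field names are ours and purely illustrative.

// Illustrative solution coding for the MMCF (names are ours, not the authors').
import java.util.List;

public final class MMCFSolution {
    /** One routed request: the flow (d, i), its chosen path and its bandwidth y_di. */
    public static final class RoutedRequest {
        public final int destination;        // d
        public final int information;        // i
        public final List<Integer> pathArcs; // indices of the arcs forming P_di
        public double bandwidth;             // y_di, set by the bandwidth-allocation submodule

        public RoutedRequest(int destination, int information, List<Integer> pathArcs) {
            this.destination = destination;
            this.information = information;
            this.pathArcs = pathArcs;
        }
    }

    /** One entry per request (d, i) with ic_di = 1. */
    public final List<RoutedRequest> requests;

    /** Total delay Z(W) of the solution, computed after bandwidth allocation. */
    public double time = Double.POSITIVE_INFINITY;

    public MMCFSolution(List<RoutedRequest> requests) {
        this.requests = requests;
    }
}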


Function LocalSearch(Solution s')
  s'' = s'
  choose randomly a request (d, i) such that ic_{di} = 1
  S_i = {n in N | ip_{ni} = 1}
  compute the updated capacities c of all arcs of s''
  P_{di} = {}
  k = d
  repeat
    j = next node selected according to (11)-(13)
    if j not in P_{di} then
      s''.P_{di} = s''.P_{di} U (k, j)
    end if
    k = j
  until (j in S_i)
  return s''

In order to explore a wider zone of the solution space, we propose a multi-start implementation. This procedure reduces the risk of getting trapped in a local minimum [8], as it reduces the impact of the initial solution on the algorithm's performance. The flowchart of the MVNS algorithm is depicted in Fig. 2. The algorithm starts by generating an initial solution s that sets a path for each request. A submodule for bandwidth allocation is then used to assign a dedicated bandwidth y_{di} to each generated path. This bandwidth-allocation() function optimizes the bandwidth sharing by solving the following nonlinear program (14):

$$\begin{aligned}
\min\;& \sum_{d}\sum_{i}\sum_{s} \frac{s_i}{y_{di}}\, x_{sdi} \\
\text{s.t.}\;& \sum_{d}\sum_{i} y_{di}\, p_m^{di} \le c_m, && m = 1,\ldots,M \\
& y_{di} > 0, && d = 1,\ldots,N,\; i = 1,\ldots,I
\end{aligned} \tag{14}$$

Fig. 2. Flowchart of the MVNS algorithm for solving the MMCF.
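For intuition (this worked example is ours and not part of the original model), suppose two flows of sizes s_1 = 4 and s_2 = 1 have fixed paths that share a single arc of capacity c = 3 and use no other saturated arc. Problem (14) then reduces to minimizing 4/y_1 + 1/y_2 subject to y_1 + y_2 <= 3, whose optimum allocates the capacity proportionally to the square roots of the sizes:

$$y_i = c\,\frac{\sqrt{s_i}}{\sqrt{s_1} + \sqrt{s_2}} \;\Longrightarrow\; y_1 = 2,\; y_2 = 1, \qquad \frac{4}{2} + \frac{1}{1} = 3 \;<\; \frac{4}{1.5} + \frac{1}{1.5} \approx 3.33,$$

where the right-hand value corresponds to an equal split of the capacity. Solving (14) with a nonlinear solver generalizes this trade-off to arbitrary sets of overlapping paths.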

In model (14), x_{sdi} and p_m^{di} are inputs: x_{sdi} equals 1 if the destination–information pair (d, i) is assigned to the source node s and 0 otherwise, and p_m^{di} equals 1 if the edge m belongs to the path of the flow (d, i). As a result of this subproblem, a bandwidth value y_{di} is assigned to each path; this value defines the transmission rate of its source. After solving subproblem (14) and obtaining the y_{di} values, the initial solution s is completely constructed and can be evaluated by computing the total delay (i.e. s.time corresponds to the overall transmission time). This solution evolves during an iterative process in which increasingly distant neighborhoods of the current incumbent are explored. The bandwidth-allocation() function is triggered each time a new solution is generated (i.e. after the initialization, the shaking phase or the local search procedure). The parameter k, which controls the shake procedure, is initialized to 1. If the solution s'' generated by the local search is better than the incumbent, s'' becomes the starting point for the next iteration; otherwise, the neighborhood structure is systematically changed and k is incremented. This local procedure stops if the algorithm is unable to find a better solution than s after |I|/2 iterations, where |I| is the number of messages to be transmitted. The parameters of the algorithm are the number of starts multiStartNbr and the stopping criteria.

4. Experimental results

We performed extensive computational experiments on different problem sizes in order to validate the performance of the proposed MVNS algorithm. The algorithm is coded in Java, and the LINGO API is used to solve the nonlinear subproblem of bandwidth allocation. A Core 2 Duo 2 GHz laptop was used for these experiments. The experimental design addresses two problem classes:

1. Small sized problems: we compare the MVNS method with the LINGO 14.0 solver [14]. The instances are generated based on a real network, the US National Science Foundation (NSF) network [12,6], containing 14 nodes and 42 directed arcs. The comparison between the two methods is based on the speed of solving the instances as well as the quality of the solutions.
2. Large sized problems: we empirically compare the performance of the MVNS and the state-of-the-art ACS method [6] on large instances. The ACS is the most recent metaheuristic adapted to solve the single path MCFP; given its good performance [6], we benchmark against this method. The same bandwidth allocation submodule is embedded in both MVNS and ACS. To check their effectiveness, we randomly generate two sets of 18 instances, each applied to a different network structure. The first is a planar real network, the British Synchronous Digital Hierarchy (SDH) network [9,6], containing 30 nodes and 55 bidirectional links (110 directed arcs). The second is a grid network with 100 nodes and 360 arcs. The incentive behind the choice of two different network structures is to test the proposed method on a network that mimics the typical structure of a real telecommunications network, and on a more intractable network where there is usually a very large number of possible paths between two nodes.

The simulation parameter that varies is the number of messages to be shared. For each message we generate a random number of requests and offers. The size of an information item is uniformly distributed in [1, 10] Mbits.
Each instance is solved with 10 independent runs. For each request, the size of ListPaths is experimentally set to 10 and the multi-start number is set to 5.

4.1. Small sized instances: a comparison with LINGO

The first testbed is obtained by randomly generating 15 instances with an increasing number of messages (see Table 1). This data set is limited to relatively small sized problems: the number of decision variables ranges in [1576, 15960] and the number of constraints varies in [880, 12313]. For each instance solved by the MVNS algorithm, we report in Table 1 the following measures: the problem description (number of messages |I|, number of offers |O|, number of requests |R|, number of variables Var and of constraints Cons), the value of the best found solution (Best), the value of the worst solution (Worst), and the average value of the solutions (Avg) over 10 runs. The CPU time (CPU) taken to find the best solutions is also reported. Moreover, LINGO 14.0 is used to solve these instances optimally using the mathematical formulation (1)–(10); the optimal solution Obj and the corresponding CPU time (CPU) are reported in Table 1. The best reported values are highlighted in boldface. Finally, GAP is stated based on the objective values of the optimal solutions generated by LINGO and is computed according to (15). We choose a maximum of 200 iterations as the termination criterion, set experimentally; the algorithm also stops if there is no improvement after 50 iterations.

$$GAP = 100 \times \frac{Best - Obj}{Best} \tag{15}$$

Table 1
Experimental results: small sized instances.

Pb | |I| x |O| x |R| | Var   | Cons  | MVNS Best | MVNS Worst | MVNS Avg    | MVNS CPU (s) | LINGO Obj | LINGO CPU (s) | Gap (%)
1  | 2 x 2 x 3       | 1576  | 880   | 85.66     | 85.66      | 85.66       | 1.8          | 85.66     | 2             | 0
2  | 4 x 5 x 4       | 3192  | 1679  | 65.33     | 65.33      | 65.33       | 2.3          | 65.33     | 25            | 0
3  | 6 x 8 x 6       | 4788  | 2514  | 111.39    | 111.39     | 111.39      | 2.54         | 111.39    | 9             | 0
4  | 8 x 13 x 8      | 6384  | 3315  | 157.1     | 165.6      | 163.4       | 4.07         | 168.1     | 22            | better than LINGO (7)
5  | 10 x 10 x 3     | 7890  | 4133  | 188       | 195.24     | 193.1       | 4.63         | 188       | 279           | 0
6  | 12 x 15 x 12    | 9576  | 4951  | 242.26    | 252.67     | 250.14      | 7.12         | 241       | 388           | 0.52
7  | 14 x 18 x 14    | 11172 | 5768  | 290.7     | 301.47     | 300.32      | 6.24         | 286.93    | 704           | 1.31
8  | 16 x 16 x 16    | 12768 | 6587  | 360.78    | 371.55     | 364.85      | 5.86         | 362.88    | 2351          | better than LINGO (0.58)
9  | 18 x 27 x 18    | 14364 | 7405  | 439.02    | 447.28     | 442.7       | 7.81         | 435.02    | 5470          | 0.9
10 | 20 x 27 x 20    | 15960 | 8223  | 483.4     | 485.24     | 484.7       | 7.21         | 481.7     | 13508         | 0.35
11 | 22 x 29 x 22    | 17556 | 9041  | 626.7     | 638.27     | 634.7       | 8.73         | 622.3     | 22579         | 0.71
12 | 24 x 31 x 24    | 19152 | 9859  | 626.62    | 635.6      | 629.3       | 8.14         | N/A       | N/A           | N/A
13 | 26 x 31 x 26    | 20748 | 10677 | 714.9     | 721.5      | 717.84      | 8.48         | N/A       | N/A           | N/A
14 | 28 x 29 x 28    | 22344 | 11495 | 836.05    | 841.56     | 838.41      | 10.02        | N/A       | N/A           | N/A
15 | 30 x 31 x 30    | 23940 | 12313 | 940.18    | 947.58     | 944.2       | 13.4         | N/A       | N/A           | N/A
Average | -           | 12760.6 | 6589.3 | 411.206 | 417.7293333 | 415.0693333 | 6.556666667  | -         | 4121.54       | 0.42

Based on the results of Table 1, one can notice that:

• For all small sized instances, LINGO reports that the generated solution is a local optimum.
• The computational difficulty increases significantly with the network size and the number of messages to be routed. For instance, LINGO takes more than 6 h to solve problem instance 11, while the MVNS finds a near optimal solution in only 8.73 s, with a gap of only 0.71%. Fig. 3 shows the variation of the CPU time versus the quality of the generated solution for both methods. We notice that both methods have similar performance regarding the quality of solutions; however, the CPU time of the MVNS grows polynomially with the instance size, while the computational requirements of LINGO seem to increase exponentially with the problem size.
• For instances 1, 2, 3 and 5, the MVNS method finds the optimal solutions. It even outperforms the exact solver on two instances (4 and 8). Hence, for 40% of the instances, the MVNS generated excellent solutions.
• LINGO does not converge for the last four instances and ends with an "out-of-memory" error, while the MVNS continues to generate solutions within a very limited CPU time.
• The multi-start implementation gives the MVNS method the ability to explore a wider space of feasible solutions. Fig. 4 shows the behavior of the MVNS over the iterations for problem instance 7. We notice that the algorithm converges quickly: it generally stops before reaching the 200 iterations fixed in our settings, finding no further improvement after an average of 50 iterations, for all starts.
• By analyzing the gap values, we notice that the average gap over all instances is about 0.42%, which is remarkably small. We should mention that this value is computed without considering the two instances where the MVNS gives better results than LINGO; if we included these values, the average gap would be negative, meaning that, on average, the MVNS outperforms the LINGO solver.

Fig. 3. The MVNS vs LINGO.


Fig. 4. The MVNS algorithm run (problem instance 7).

4.2. Large sized instances

To check the effectiveness of the MVNS, we use two networks: the SDH network and a grid network. For each network, the number of messages in the problem instances varies in the interval [50, 900]. The empirical study is conducted to compare the performance of the proposed MVNS with the state-of-the-art ACS method. For each instance, we report in Tables 2 and 3 the following measures:

• Best: the value of the best found solution;
• Std: the standard deviation over 10 trials;
• CPU: the CPU time in seconds.

The stopping criterion of both metaheuristics is the number of iterations without improvement, set to 50. The experimental results in Tables 2 and 3 show that:

• For large instances, LINGO fails to converge to a feasible solution; it terminates with an "out-of-memory" error. This confirms that the MMCF is computationally intractable for exact algorithms.
• The MVNS obtains better results than the ACS in terms of solution quality. In fact, the MVNS outperformed the ACS on all instances except the SDH6 problem.

Table 2
Experimental results: SDH network.

Pb    | |I| x |O| x |R| | MVNS Best | MVNS Std | MVNS CPU (s) | ACS Best | ACS Std | ACS CPU (s)
SDH1  | 50 x 67 x 75    | 8262.2    | 16.57    | 30.1         | 8361.3   | 18.3    | 100.7
SDH2  | 100 x 150 x 153 | 8643.7    | 0        | 63.8         | 8666.7   | 1.5     | 120.6
SDH3  | 150 x 238 x 230 | 14003.9   | 13       | 51.1         | 14362.4  | 12.3    | 170.2
SDH4  | 200 x 313 x 314 | 18533.2   | 5.8      | 171.2        | 22348    | 11.8    | 230.1
SDH5  | 250 x 373 x 295 | 20958.9   | 20.7     | 148.9        | 21438    | 10.3    | 304.7
SDH6  | 300 x 452 x 350 | 25981.7   | 15.6     | 232.9        | 25936    | 13.4    | 330.7
SDH7  | 350 x 602 x 446 | 29651.3   | 8.1      | 391.7        | 30751.4  | 9.8     | 405.3
SDH8  | 400 x 597 x 447 | 34364.8   | 10.7     | 336.9        | 35152    | 13.8    | 420.8
SDH9  | 450 x 669 x 498 | 35610.9   | 27.6     | 427.6        | 35948    | 14.9    | 518.6
SDH10 | 500 x 745 x 548 | 36869.4   | 24.7     | 502.5        | 40863    | 13.5    | 563.2
SDH11 | 550 x 670 x 569 | 41245.9   | 22.1     | 536.1        | 41564.2  | 7.6     | 588.6
SDH12 | 600 x 697 x 712 | 41526.8   | 25.4     | 568.3        | 42864.5  | 12.5    | 596.1
SDH13 | 650 x 703 x 679 | 43892.7   | 28.3     | 622.4        | 44947.1  | 19.3    | 678.2
SDH14 | 700 x 794 x 764 | 44561.1   | 20.7     | 719.2        | 45966.6  | 13.2    | 701.4
SDH15 | 750 x 822 x 787 | 47351.6   | 29.4     | 684.1        | 48002.9  | 16.5    | 754.2
SDH16 | 800 x 884 x 865 | 46357     | 31.2     | 722.9        | 48978.2  | 17      | 786.1
SDH17 | 850 x 923 x 907 | 48526.3   | 27.4     | 845.3        | 52244.6  | 22.3    | 845.3
SDH18 | 900 x 968 x 918 | 50132.5   | 26.8     | 865.7        | 53855.3  | 14.8    | 850.7
Average | -             | 33137.4   | 19.6     | 440          | 34569.4  | 13.4    | 498

The best reported values are highlighted in boldface.

Table 3
Experimental results: grid network.

Pb | |I| x |O| x |R| | MVNS Best | MVNS Std | MVNS CPU (s) | ACS Best | ACS Std | ACS CPU (s)
1  | 50 x 67 x 75    | 11980.2   | 10.2     | 60.5         | 12625.6  | 12.4    | 177.2
2  | 100 x 150 x 153 | 13138.4   | 16.0     | 145.5        | 13866.7  | 17.3    | 197.8
3  | 150 x 238 x 230 | 21005.9   | 11.8     | 123.2        | 24272.5  | 8.4     | 258.7
4  | 200 x 313 x 314 | 27429.1   | 18.4     | 431.4        | 37097.7  | 13.4    | 388.9
5  | 250 x 373 x 295 | 33115.1   | 7.6      | 363.3        | 38374.0  | 16.7    | 603.3
6  | 300 x 452 x 350 | 37933.3   | 15.6     | 593.9        | 43313.1  | 22.6    | 661.4
7  | 350 x 602 x 446 | 37953.7   | 17.6     | 991.0        | 54430.0  | 24.8    | 640.4
8  | 400 x 597 x 447 | 48265.4   | 11.7     | 869.2        | 60813.0  | 29.4    | 631.2
9  | 450 x 669 x 498 | 45225.8   | 20.3     | 1090.4       | 54281.5  | 19.6    | 933.5
10 | 500 x 745 x 548 | 44980.7   | 28.6     | 1281.4       | 69058.5  | 20.1    | 1205.2
11 | 550 x 670 x 569 | 54444.6   | 22.4     | 1351.0       | 72737.4  | 25.4    | 1389.1
12 | 600 x 697 x 712 | 61044.4   | 25.4     | 1363.9       | 61296.2  | 18.3    | 1454.5
13 | 650 x 703 x 679 | 52671.2   | 31.4     | 1506.2       | 65622.8  | 27.3    | 1410.7
14 | 700 x 794 x 764 | 53918.9   | 16.7     | 1869.9       | 68490.2  | 21.4    | 1451.9
15 | 750 x 822 x 787 | 57295.4   | 22.9     | 1929.2       | 71524.3  | 36.5    | 1583.8
16 | 800 x 884 x 865 | 66290.5   | 31.1     | 1959.1       | 72487.7  | 31.2    | 1611.5
17 | 850 x 923 x 907 | 75215.8   | 26.4     | 2417.6       | 86203.6  | 34.6    | 1716.0
18 | 900 x 968 x 918 | 68180.2   | 29.9     | 2372.0       | 89399.8  | 30.7    | 1718.4
Average | -           | 45004.9   | 20.2     | 1151.0       | 55327.5  | 22.8    | 1001.9

The best reported values are highlighted in boldface.

Fig. 5. CPU time for solving the SDH and grid instances.

Fig. 6. Best objective values for the SDH and grid instances.


• Fig. 6 depicts the best objective values generated by both algorithms on the SDH and grid instances reported in Tables 2 and 3, respectively. We can clearly notice that the performance difference between the MVNS and the ACS in approaching the optimal solution is magnified as the instance size increases. Overall, the gap between the results of the two algorithms is approximately 4% for the SDH network and 20% for the grid network. This particularly underlines the advantage of the MVNS in tackling large real-world instances.
• Fig. 5 shows the CPU times of the MVNS and the ACS. This figure reveals two results. For the SDH network, the MVNS is able to solve the instances in reasonable CPU time: the average CPU time is about 440 s, while the ACS is more time consuming, requiring about 498 s to converge to a lower quality solution. However, as the problem size increases, this advantage is reversed on the grid instances, where the ACS offers a slightly better computational performance than the MVNS: the average CPU time on the grid instances is 1151.0 s for the MVNS and 1001.9 s for the ACS.
• The standard deviation values show that the MVNS is a stable and robust method. On average over the two networks, both algorithms have almost the same values: the Std of the MVNS is 19.9 and that of the ACS is 18.13.

5. Conclusion

In this paper, we presented a new routing algorithm modeled as a multi-source multicommodity flow problem. To cope with the complexity of the MMCF, we proposed an adaptation of the MVNS metaheuristic. The efficiency of the algorithm was validated by computational experiments on different instances. A comparison with LINGO was conducted for small sized instances; this first set of experiments shows that only very small instances can be handled by exact solvers based on mathematical programming tools (i.e. LINGO). For large instances, an empirical comparison with a state-of-the-art ACS metaheuristic was performed. The experimental results demonstrate that the MVNS is more efficient than the ACS for solving the MMCF, as it offers better objective values. The difference between the two metaheuristics is more noticeable for large instances, which shows that the MVNS performs consistently well on high dimensional problems such as those frequently encountered in real-world applications. However, both algorithms have comparable computational expenses. As a future direction, we are considering developing a distributed routing protocol that approximates the behavior of a globally aware router.

References

[1] R.K. Ahuja, T.L. Magnanti, J.B. Orlin, Network Flows: Theory, Algorithms, and Applications, Prentice-Hall, Englewood Cliffs, NJ, 1993.
[2] A.A. Assad, Multicommodity network flows – a survey, Networks 8 (1978) 37–91.
[3] G. Baier, E. Köhler, M. Skutella, On the k-splittable flow problem, in: 10th Annual European Symposium on Algorithms, 2002, pp. 101–113.
[4] C. Barnhart, C.A. Hane, P.H. Vance, Using branch-and-price-and-cut to solve origin–destination integer multicommodity flow problems, Oper. Res. 48 (2) (2000) 318–326.
[5] C. Barnhart, C.A. Hane, P.H. Vance, Integer multicommodity flow problems, in: P.M. Pardalos, D.W. Hearn, W.W. Hager (Eds.), Network Optimization, Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, Berlin, Germany, 1996, pp. 17–31.
[6] C. Barnhart, C.A. Hane, P.H. Vance, An ant colony optimization metaheuristic for single-path multicommodity network flow problems, J. Oper. Res. Soc. 61 (9) (2010) 1340–1355.
[7] A. Bley, Approximability of unsplittable shortest path routing problems, Networks 54 (1) (2009) 23–46.
[8] J. Brimberg, P. Hansen, N. Mladenović, Attraction probabilities in variable neighborhood search, 4OR 8 (2) (2010) 181–194.
[9] M. Dorigo, T. Stutzle, Ant Colony Optimization, The MIT Press, Massachusetts, 2004.
[10] V.M. Jimenez, A. Marzal, Computing the k shortest paths: a new algorithm and experimental comparison, Lect. Notes Comput. Sci. 1668 (1999) 15–29.
[11] P. Hansen, N. Mladenović, Variable neighbourhood search: principles and applications, Eur. J. Oper. Res. 130 (2001) 449–467.
[12] K. Holmberg, D. Yuan, A multicommodity network-flow problem with side constraints on paths solved by column generation, INFORMS J. Comput. 15 (1) (2003) 42–57.
[13] J. Kennington, A survey of linear cost multicommodity network flows, Oper. Res. 26 (1978) 209–236.
[14] Lindo Systems Inc., LINGO: the modeling language and optimizer, user's handbook, Lindo Systems Inc., Chicago, IL, 2000.
[15] N. Mladenović, P. Hansen, Variable neighborhood search, Comput. Oper. Res. 24 (11) (1997) 1097–1100.
[16] P.A. Ouorou, P. Mahey, J. Vial, A survey of algorithms for convex multicommodity flow problems, Manage. Sci. 46 (1) (2000) 126–147.
[17] E.M. Varvarigos, V. Sourlas, K. Christodoulopoulos, Routing and scheduling connections in networks that support advance reservations, Comput. Networks 52 (2008) 2988–3006.