
European Journal of Operational Research 171 (2006) 787–796 www.elsevier.com/locate/ejor

A heuristic approach for combined equipment-planning and routing in multi-layer SDH/WDM networks

Holger Höller *, Stefan Voß

Institut für Wirtschaftsinformatik, Universität Hamburg, Von-Melle-Park 5, 20146 Hamburg, Germany

Available online 14 October 2004

Abstract

The paper deals with a multi-layer network design problem for a high-speed telecommunication network based on Synchronous Digital Hierarchy (SDH) and Wavelength Division Multiplex (WDM) technology. The network has to carry a given set of demands with the objective of minimizing the investment in equipment. The different layers are the fiber layer, 2.5 Gbit/s systems, 10 Gbit/s systems and WDM systems. Several variations of the problem, including path-protected demands and specific types of cross-connect equipment, are considered. The problem is described as a mixed integer linear programming model and some results for small networks are presented. Two greedy heuristics, a random start heuristic and a GRASP-like approach, are implemented to solve large real-world problems.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Optical network design; Protection planning; Integer programming; GRASP; SDH; WDM

1. Introduction

The ever-growing demand for bandwidth creates a constant need for the planning of new telecommunication networks and for the expansion of existing systems. Most state-of-the-art high-speed transmission technologies are based on fiber lines, and most of the large carriers operate their own fiber network.

* Corresponding author. E-mail addresses: [email protected] (H. Höller), [email protected] (S. Voß).

The structure of the fiber network, however, does not change very quickly, because the construction of new lines is a very expensive undertaking, especially in densely populated areas. The fiber graph itself can thus be seen as a more or less static infrastructure. The further development of the transmission technologies, however, constantly increases the amount of traffic that can be carried over this infrastructure. Most long-haul and metro high-speed networks today are based on Synchronous Digital Hierarchy (SDH), or its American equivalent Synchronous Optical Network (SONET), and Dense Wavelength Division Multiplex (DWDM).



SDH allows the transmission, multiplexing and protection of standardized data units ranging from 2 Mbit/s up to 40 Gbit/s. It is a bi-directional circuit-switching technology with optical transmission over the fiber lines and electrical multiplexing in the nodes. Current large networks often use only 2.5 Gbit/s and 10 Gbit/s as transmission speeds. Switching and multiplexing of the demands by the cross-connects in the nodes is usually performed at the VC4 level (155 Mbit/s) or above. Customer demands currently seldom exceed 2.5 Gbit/s, though 10 Gbit/s demands are expected to become more important in the near future.

SDH provides different protection schemes. Perhaps the most important one is the widely deployed 1 + 1 path protection: each demand is carried over two node-disjoint paths from the source to the sink, and if the primary path is interrupted, the system instantly (<50 ms) switches to the backup path. This mechanism offers a very high degree of failure protection at the cost of high resource consumption, but it is the first choice for crucial demands where an interruption would have severe consequences. In large networks 1 + 1 path protection is sometimes used for the majority of demands because it is still less expensive than the consequences of, e.g., a 24-hour breakdown caused by the cutting of a 400 Gbit/s DWDM link during construction works.

DWDM enables even higher transmission speeds than SDH by multiplexing light of several different wavelengths onto one fiber. Common systems used in current networks multiplex, e.g., 80 × 2.5 Gbit/s or 40 × 10 Gbit/s. However, DWDM only establishes a point-to-point connection, and the signals have to be de-multiplexed and fed through SDH equipment in the nodes in order to access the data units. The often-studied routing and wavelength assignment (RWA) problem does not arise in this context, since the wavelengths are terminated in every node. Though all-optical switches, which are the technological background of the RWA problem, do exist, most major companies have halted their development due to the downturn of the telecommunications sector in recent years, and few carriers are willing to consider investments in this new and very expensive technology.

Thus, networks with electrical switching fabrics, as studied in this paper, remain the major planning topic for telecommunication providers today and in the near future. Nevertheless, the practical relevance of the RWA problem is likely to rise in the long run. DWDM is traditionally a long-haul technology, but due to the increasing bandwidth demand it is now entering the metro market more and more. For technical details about SDH/SONET and DWDM technology the reader is referred to, e.g., [8]. Good treatments of various combinatorial optimization problems in the context of telecommunication network design and, especially, WDM can be found in [2,19]. Recent overviews on survivability in WDM networks can be found in [9] or [14]. For papers on the RWA topic refer to, e.g., [1] or [12].

In this paper we develop two heuristics for a specific multi-layer SDH/WDM network design problem. Section 2 introduces the hardware components as well as some additional issues arising in real-world network design problems. Section 3 describes the multi-layer design problem as a mixed integer model. In Sections 4 and 5 we discuss two different though closely related heuristic approaches for the described network planning problem. Section 6 presents computational results obtained for various test networks and Section 7 summarizes the results.

2. Equipment specifications

The broad spectrum of existing SDH and DWDM technology cannot be captured exhaustively in an exact model. The models presented here therefore rely on a specific set of common equipment with certain user-adjustable parameters. Each demand starts and ends at a cross-connect in a node. This cross-connect has a specific number of ports connected by a switching matrix and is capable of multiplexing demands to a higher bandwidth and vice versa. Different sizes of cross-connect equipment may be specified in order to benefit from economies of scale. If path protection is used, it is always implemented end-to-end and each cross-connect has to be capable of supporting it.


Each connection on a link needs specific transmission technology depending on the required bandwidth. A single SDH link needs at least two port cards and possibly additional equipment, depending on the distance covered, such as amplifiers and regenerators. The costs of these systems depend on the transmission speed (2.5 Gbit/s or 10 Gbit/s). A DWDM link is somewhat more complex: it needs at least a multiplexer at each end, which has a certain capacity (number of channels) and causes fixed costs, and each channel that is actually used also needs two port cards. In principle, any number of used channels from zero up to the maximum capacity is possible; in practice, very small numbers will seldom occur, since discrete SDH systems would be cheaper in that case. The amplification is not performed per channel but for the whole system, so the distance-dependent costs are fixed with respect to the number of channels.

Even this reduced set of equipment leads to a very complex multi-layer (fiber, SDH 2.5 Gbit/s, SDH 10 Gbit/s and DWDM) planning problem. The basic trade-off is between a very dense mesh with many lowly utilized links and little transit traffic (many direct links) and a sparse mesh with some highly utilized links and much transit traffic (more detouring and grooming). The latter becomes more attractive the larger the economies of scale of the equipment are; a rule of thumb is: "double costs for quadruple capacity". The approach presented here tries to obtain good solutions with as few simplifications of the underlying real-world network design problem as possible. Therefore, the equipment is not only modeled with the help of approximate cost functions; explicit real-world prices for each component, down to single port cards, are integrated into the models and algorithms.
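To make the cost structure of the two link alternatives concrete, the following minimal sketch compares a link equipped with discrete SDH systems against one equipped with DWDM. All prices and capacities are made-up placeholders (the paper's real price data are not reproduced here); the sketch only illustrates the economies-of-scale effect described above.

```python
# Illustrative comparison of the two ways to equip one link, using made-up prices.
import math

# Hypothetical price and capacity assumptions (not taken from the paper).
SDH_PORT_CARDS = 20.0   # two port cards per discrete 2.5 Gbit/s SDH system
SDH_LINE_EQUIP = 15.0   # amplifiers/regenerators per SDH system on this span
WDM_MUX_FIXED = 300.0   # terminal multiplexers at both ends of a DWDM system
WDM_LINE_EQUIP = 60.0   # amplification for the whole DWDM system on this span
WDM_CHANNEL = 25.0      # transponders and port cards per used channel
WDM_CAPACITY = 40       # channels per DWDM system

def sdh_cost(channels: int) -> float:
    """Cost of carrying `channels` x 2.5 Gbit/s as discrete SDH systems."""
    return channels * (SDH_PORT_CARDS + SDH_LINE_EQUIP)

def dwdm_cost(channels: int) -> float:
    """Cost of carrying the same traffic over DWDM systems on the link."""
    systems = math.ceil(channels / WDM_CAPACITY)
    return systems * (WDM_MUX_FIXED + WDM_LINE_EQUIP) + channels * WDM_CHANNEL

for n in (1, 4, 10, 40):
    print(n, sdh_cost(n), dwdm_cost(n))
# With these numbers, discrete SDH wins for a handful of channels, while DWDM
# becomes cheaper once the fixed multiplexer cost is spread over many channels.
```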

3. The mixed integer model


The problem is to decide which combination of equipment and static routing is able to carry the given demands at the lowest cost. The demand matrix, the fiber graph and the locations of the nodes are given, yet not all of them have to be part of the final solution: some fiber lines may be left unused; some nodes without source traffic may only serve as grooming and transit nodes or be left out entirely. The capacities of the modeled transmission systems are stepwise, according to the functionality of equipment currently in use.

The basic model was inspired by a formulation by Melian et al. [11]. However, the restriction of the routing to k paths per demand determined in advance is dropped here and some additional features are added, which is in line with another work of the same authors [10]. Other articles dealing with related approaches are, e.g., [7,4], while only the latter refers to the complexity of such types of problems. Developing a model for the problem described in the previous section may be performed along the lines of classical network flow problems as they are commonly discussed in the literature on combinatorial optimization in telecommunications; see, e.g., [18] for such a model regarding the well-known Steiner problem in graphs.

The basic model only considers one type of demand (2.5 Gbit/s) and no further traffic grooming is possible. All cross-connects have the same number of ports. Demands between a pair of nodes must not be split up onto separate paths. The given infrastructure is composed of a set of nodes $N$ and a set of undirected edges $E$ connecting these nodes. A set $E'$ containing all arcs of a complete graph with node set $N$ is derived from it. Furthermore, a set of demands $D$ is given which contains the number of 2.5 Gbit/s (VC4-16c) units for each single demand $d_{st}$ from node $s$ to node $t$.

The cost input consists of the following data:¹

$C^W_e$  cost of a WDM connection on edge $e$ (multiplexers, amplifiers and regenerators);
$C^F_e$  cost of a 2.5 Gbit/s connection on edge $e$ (port cards, amplifiers, regenerators);
$C^{OS}$  cost of a cross-connect;

¹ The cost values given in our model do not explicitly refer to edge lengths (or line lengths). This is possible because we relate the infrastructure costs to specific edges; that is, if such costs depend on the length of an edge, this is accounted for in the respective cost values for that edge.


$C^C$  cost of a WDM channel (a pair of transponders and port cards).

The capacities of the systems are defined as follows:

$M^W$  capacity of a WDM system (number of wavelengths);
$M^{OS}$  capacity of a cross-connect (number of ports).

Decision variables:

$f_e$  number of 2.5 Gbit/s SDH systems on edge $e$;
$w_e$  number of WDM systems on edge $e$;
$v_e$  number of 2.5 Gbit/s channels used in the WDM systems on edge $e$;
$y^S_k$  number of cross-connects used in node $k$;
$z^{st}_{ij}$  1 if demand $(s,t)$ is routed along arc $(i,j)$, 0 otherwise.

Objective function:

$$\text{Minimize } \sum_{e \in E} \left( C^F_e f_e + C^W_e w_e + C^C v_e \right) + \sum_{k \in N} C^{OS} y^S_k,$$

subject to:

$$\sum_{h \in N} z^{st}_{hi} - \sum_{j \in N} z^{st}_{ij} = \begin{cases} -1 & i = s \\ 0 & \forall i \neq s,t \\ +1 & i = t \end{cases} \qquad \forall (s,t) \in D, \quad (1)$$

$$\sum_{(s,t) \in D} d_{st} \left( z^{st}_{ij} + z^{st}_{ji} \right) \le v_e + f_e \qquad \forall e \in E \text{ with } i \text{ and } j \text{ adjacent to } e, \quad (2)$$

$$v_e \le M^W w_e \qquad \forall e \in E, \quad (3)$$

$$\sum_{e \text{ adjacent to } k} (v_e + f_e) \le M^{OS} y^S_k \qquad \forall k \in N, \quad (4)$$

$$y^S_k \ge 0 \text{ and integer} \qquad \forall k \in N, \quad (5)$$

$$f_e, w_e, v_e \ge 0 \text{ and integer} \qquad \forall e \in E, \quad (6)$$

$$z^{st}_{ij} \in \{0,1\} \qquad \forall (i,j) \in E', \ (s,t) \in D. \quad (7)$$

Constraints (1) guarantee flow conservation: the origin and the destination node of each demand both have one adjacent arc that is used, all other nodes have either none or two. Constraints (2) ensure that the demands carried by each edge do not exceed the total capacity of the installed transmission technology. Constraints (3) match the number of used WDM channels with the maximum capacity of the installed WDM systems. Constraints (4) adjust the capacity of the cross-connects of each node to the capacity of its adjacent edges.

Several extensions to this model can be made in order to allow for additional constraints arising in real-world network planning. 1 + 1 protection planning (two node-disjoint paths for every demand) can easily be integrated with the help of one new constraint (8) and the replacement of $-1$ and $+1$ in constraints (1) with $-2$ and $+2$. This ensures that the source node has exactly two outgoing arcs, each node on the way has at most one incoming arc, and the destination node has exactly two incoming arcs. Thus each demand is routed on two paths and these paths share no node except for the source and the sink:

$$\sum_{h \in N} z^{st}_{hi} \le 1 \qquad \forall i \neq s,t, \ \forall (s,t) \in D. \quad (8)$$
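As an illustration of how the basic model (objective, constraints (1)-(4) and integrality) can be written down, here is a minimal sketch in Python using the PuLP modeling library on a tiny toy instance. All data values are hypothetical, the arc set is restricted to the directed versions of the fiber edges for simplicity, and this is a sketch rather than the authors' CPLEX implementation.

```python
# Minimal sketch of the basic single-layer model on a toy instance (hypothetical data).
import pulp

N = ["a", "b", "c", "d"]                                   # nodes
E = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]       # undirected fiber edges
A = E + [(j, i) for (i, j) in E]                           # directed arcs derived from E
D = {("a", "c"): 3, ("b", "d"): 2}                         # demand in 2.5 Gbit/s (VC4-16c) units

CF = {e: 100 for e in E}   # cost of a discrete 2.5 Gbit/s SDH connection on e
CW = {e: 400 for e in E}   # fixed cost of a WDM system on e
CC = 40                    # cost of one used WDM channel (transponders, port cards)
COS = 250                  # cost of one cross-connect
MW, MOS = 40, 64           # WDM channels per system, ports per cross-connect

m = pulp.LpProblem("sdh_wdm_basic", pulp.LpMinimize)
f = {e: pulp.LpVariable(f"f_{e}", lowBound=0, cat="Integer") for e in E}
w = {e: pulp.LpVariable(f"w_{e}", lowBound=0, cat="Integer") for e in E}
v = {e: pulp.LpVariable(f"v_{e}", lowBound=0, cat="Integer") for e in E}
y = {k: pulp.LpVariable(f"y_{k}", lowBound=0, cat="Integer") for k in N}
z = {(s, t, i, j): pulp.LpVariable(f"z_{s}{t}_{i}{j}", cat="Binary")
     for (s, t) in D for (i, j) in A}

# Objective: link equipment plus node equipment.
m += (pulp.lpSum(CF[e] * f[e] + CW[e] * w[e] + CC * v[e] for e in E)
      + pulp.lpSum(COS * y[k] for k in N))

# (1) Flow conservation: one unsplit path per demand.
for (s, t) in D:
    for i in N:
        rhs = -1 if i == s else (1 if i == t else 0)
        m += (pulp.lpSum(z[s, t, h, i] for (h, j) in A if j == i)
              - pulp.lpSum(z[s, t, i, j] for (h, j) in A if h == i)) == rhs

# (2) Edge capacity: demand units fit into SDH systems plus used WDM channels.
for (i, j) in E:
    m += (pulp.lpSum(d * (z[s, t, i, j] + z[s, t, j, i]) for (s, t), d in D.items())
          <= v[(i, j)] + f[(i, j)])

# (3) Used WDM channels are limited by the installed WDM systems.
for e in E:
    m += v[e] <= MW * w[e]

# (4) Cross-connect ports at a node cover the capacity of its adjacent edges.
for k in N:
    m += pulp.lpSum(v[e] + f[e] for e in E if k in e) <= MOS * y[k]

m.solve()  # PuLP's bundled CBC solver is assumed to be available
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```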

Traffic grooming inside the cross-connects can be integrated by adding factors to the variables $v_e$ and $f_e$ in constraints (2) and supplying a corresponding demand matrix. Different sizes of cross-connect equipment can be modeled by adding further $C^{Ox}$, $M^{Ox}$ and $y^x_n$ expressions. The basic model only supports 2.5 Gbit/s systems for SDH as well as for WDM channels, but it is not difficult to extend the model to 10 Gbit/s transmission technology. The new layer is implemented analogously to the 2.5 Gbit/s layer, with additional cost data for the links $C^{F'}_e$ and the number of 10 Gbit/s systems on the links $f'_e$. It makes sense in this context to also change the capacity of the WDM systems to 10 Gbit/s per channel, which can easily be achieved by introducing a constant factor in constraints (2).

The model described above is NP-hard. This can easily be seen by observing that it subsumes the well-known Steiner problem in graphs. Even the basic model cannot be solved to optimality


for large networks. Computations with CPLEX 8.1 on a 2.4 GHz Pentium 4 PC with 512 MB RAM have shown that the limit is approximately 20 nodes. More detailed computational results are presented in [6]. In the context of this paper, the exact solutions obtained for small problem instances will serve as a benchmark for the solution quality of the heuristics.

As an additional remark it should be noted that relaxing the integer variables does not help much. Relaxing the $z^{st}_{ij}$ has little effect, since most of the $z^{st}_{ij}$ remain integer anyway, and the computation time is not reduced significantly. Relaxing $w_e$ and $y^S_k$ does speed up the solver, but not enough to find solutions for large problem instances; moreover, the objective value achieved lies too far below that of the integer solution to serve as a useful lower bound. The integer capacities of the WDM systems and the cross-connect sizes are key components of the model, and their relaxation severely changes the structure of the problem.

4. Random start heuristic

The random start heuristic is a slightly improved version of the heuristic presented in [5]. It is capable of combined equipment and routing planning for multi-layer SDH/WDM networks with more than a thousand nodes on a standard PC. The input data and the considered constraints are roughly the same as for the linear model described in Section 3, so the computational results for small problem instances can be compared directly. The heuristic includes all additional features described in Section 3, such as 1 + 1 protection planning, traffic grooming and different types of SDH and WDM equipment. Furthermore, it offers limited support for fiber loop-through in the nodes, and a simple graphical output of the resulting network is generated.

The first step is the construction of a feasible initial solution. This can be achieved with the help of Dijkstra's algorithm [3] on the given fiber graph, with edge weights equal to the geographic length in kilometers. Suurballe and others [16,17,2] have extended Dijkstra's algorithm to node-disjoint paths; therefore, it is also suitable for 1 + 1 protection planning. Fig. 1 shows the flow chart of the heuristic. After the initial solution has been obtained, the equipment needed is assigned and the resulting costs are computed. This takes place on the basis of the cost and capacity data as described for the respective equations of the integer model.

[Fig. 1. Flow chart of the heuristic: initial solution via distance-based shortest-path routing; calculation of the resulting network costs; sorting of the demands (random order, or greedy sorting with random list permutations); for all bundles of demands, rerouting if favourable according to the cost function and recalculation of the resulting network costs; repeat for n iterations as long as cost savings are achieved; save the best overall solution obtained so far; output of the results.]


The initial solution tends to have many lowly utilized direct links. It serves as a first reference design and as a basis for further iterative improvement. The aim is to exploit the economies of scale inherent in the transmission technology by means of traffic grooming: transmission systems with a higher capacity usually have a lower cost per bit carried. Thus it may be advantageous, especially in the long-haul segment, to route some demands on detours via transit nodes, so that they can be bundled with others to operate high-speed links at full capacity.

To take advantage of these effects, all demands, combined in bundles sharing the same source node, are rerouted one after another based on the initial solution. This time, however, the shortest-path algorithm runs on a new metric: not the distances but the link costs divided by the link load determine the weights of the edges. The link costs are modeled by a declining cost function, derived from the stepwise equipment costs, that reflects the economies of scale; for SDH equipment this leads roughly to a square-root function. (The costs of the nodes are distributed among their adjacent edges.) Thus links with a high load attract more traffic. After each rerouting, the equipment is assigned to the current intermediate solution and the resulting network costs are computed. This enables an integrated multi-layer planning at a very low level: a single bundle of demands. If the rerouting proves advantageous, the new solution is kept in memory, and the process continues until no more improvement can be achieved. The algorithm has then found a local optimum; at this point, further grooming would increase costs because the costs for the additional transit capacities would outweigh the gains from the economies of scale on the links.
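The following small sketch illustrates one possible form of this rerouting metric: a smoothed, roughly square-root link cost divided by the link load. The smoothing function and all constants are assumptions made for illustration only; the paper derives the actual curve from its stepped equipment costs, and the heuristics themselves are implemented in C++.

```python
# Sketch of the rerouting metric: edge weight = smoothed link cost / link load.
import math

def smoothed_link_cost(load_gbps: float, base_cost: float = 1.0) -> float:
    """Declining per-capacity cost curve, roughly a square root of the load,
    standing in for the stepped SDH/DWDM equipment costs (assumed shape)."""
    return base_cost * math.sqrt(max(load_gbps, 2.5))

def rerouting_weight(current_load_gbps: float, bundle_gbps: float) -> float:
    """Weight used by the shortest-path run when rerouting one bundle:
    cost of the edge after adding the bundle, divided by its new load,
    so that highly loaded edges attract additional traffic."""
    new_load = current_load_gbps + bundle_gbps
    return smoothed_link_cost(new_load) / new_load

# A heavily loaded edge gets a smaller weight than a lightly loaded one:
print(rerouting_weight(2.5, 2.5))    # roughly 0.45
print(rerouting_weight(100.0, 2.5))  # roughly 0.10
```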

Since the rerouting is a sequential process, the order in which the demands are rerouted has an important impact on the solution obtained. To address this sequencing problem, the process described above is repeated several times for randomly ordered demands (the left box in step three of Fig. 1). It is also possible to define criteria for ordering the demands so as to obtain good solutions. This idea will be picked up again in the GRASP approach, but so far we do not know of any scheme that dominates the solution quality of the pure random approach.

Another issue in connection with the sequential rerouting is the fact that the capacities of the systems come in discrete steps; e.g., a WDM terminal has a maximum capacity of 40 channels. In order to use 41 channels, a whole new system would have to be set up, which would seldom seem favorable for a single demand. Other demands routed later on might use the same system and thus make it profitable, but the algorithm does not know that in advance. This effect would lead to unnecessary and expensive detouring. The problem can be eased, however, if the stepped cost function of the equipment is smoothed for the rerouting. Of course, for the evaluation of the intermediate and final results the original equipment costs are used.

The result of this planning process is the routing of all demands and a detailed list of equipment with its respective costs. For each node and each link, the number and type of port cards, WDM terminals, transponders, amplifiers, regenerators and cross-connects is specified, and the overall network costs are given. To visualize the structure of the resulting network, an output file compatible with Graphviz (http://www.research.att.com/~north/graphviz/) can be generated. Fig. 2 shows an example of a computed network structure.

Fig. 2. Fiber graph of a computed network.


5. GRASP-like heuristic

The bigger the network, the less likely it becomes that a pure random start heuristic will find a good solution or even a global optimum in a reasonable amount of time, due to the exponential increase in the number of possible solutions. To overcome this problem, a GRASP-like heuristic has been developed to guide the algorithm to promising parts of the solution space. GRASP (greedy randomized adaptive search procedure) is a metaheuristic that is typically composed of four components: a greedy construction phase, a probabilistic component, a local search procedure, and an adaptive mechanism that modifies the greedy construction after each iteration. A more detailed introduction to GRASP and various modifications can be found in [13]; Resende [15] presents a general bibliography of GRASP. These two sources offer good pointers to further literature on GRASP heuristics for telecommunication problems.

The flow chart of the GRASP heuristic is also shown in Fig. 1. The basic outline is very similar to the pure random start heuristic. The first two steps, the generation of a feasible initial solution and the computation of the resulting costs, remain unchanged. This time, however, the GRASP module is used in step three. The respective costs of each bundle of demands are calculated and the bundles are sorted in descending order of these costs, which determines the order of the rerouting. In order to add a non-deterministic component, the elements at the beginning of the candidate list are randomly permuted. The ratio between the greediness and the randomness of this procedure can be adjusted over a wide range; Section 6 discusses the effects of this ratio. So the rerouting does not take place in a purely random order but follows a randomized greedy scheme. This sorting scheme is motivated by the idea that it may be advantageous to route the large demands first. Large demands are (not per bit, but overall) very expensive. Thus it is usually more beneficial to route these demands as directly as possible and let smaller demands do the detouring instead of the other way round.


Of course, this basic idea can be implemented in several different ways, and other approaches may prove to perform better in the end, but the approach described here seems to be a promising starting point for further testing. The basic rerouting process and the evaluation of the intermediate results are the same as for the random start heuristic. After all demands have been rerouted in the predetermined order, a new candidate list is calculated and the rerouting process is repeated. This adaptive procedure continues until no further improvement can be achieved. The solution belonging to this local optimum is stored and the algorithm may continue with a new search. The initial solution and the first candidate list will always be the same, but the random permutations will then lead to a different part of the solution space. Since the heuristic is guided to promising parts by the sorting of the demands, it often finds good solutions faster than the pure random approach. Only if the latter is significantly restricted by the available run time, however, may the GRASP approach lead to better solutions in the end. The results are presented in the same way as described in Section 4. The GRASP heuristic in fact uses the same C++ program-code framework as the random start heuristic.
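A minimal sketch of such a randomized greedy ordering is given below, in Python rather than the authors' C++ framework. Sorting bundles by descending cost and shuffling only the head of the candidate list is one plausible reading of the procedure; the exact meaning of the greediness parameter used in the paper (the settings "g 2", "g 4", "g 10" in Section 6) is not specified here, so the head_size parameter below is a hypothetical stand-in.

```python
# Sketch of a randomized greedy ordering of demand bundles (illustrative only).
import random

def grasp_order(bundle_costs: dict, head_size: int, rng: random.Random) -> list:
    """Sort demand bundles by descending cost, then randomly permute the
    first head_size candidates so that the rerouting order is greedy but
    not fixed from one iteration to the next."""
    ordered = sorted(bundle_costs, key=bundle_costs.get, reverse=True)
    head = ordered[:head_size]
    rng.shuffle(head)          # non-deterministic component
    return head + ordered[head_size:]

# Hypothetical bundle costs taken from an intermediate solution.
costs = {"bundle_a": 900.0, "bundle_b": 400.0, "bundle_c": 350.0, "bundle_d": 120.0}
print(grasp_order(costs, head_size=2, rng=random.Random(42)))
```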

6. Computational results

Several tests with artificial networks as well as with real-world problem instances have been performed in order to examine the solution quality of the algorithms. For small networks with up to 20 nodes, the exact solution of the linear model via CPLEX 8.1 can serve as a benchmark; for the larger networks, only relative improvements can be given so far. Table 1 shows the comparative results of the algorithms for different networks. The numbers given represent normalized overall network costs. The CPLEX values are proven optimal solutions, with the exception of network number 5. The heuristics both performed 2000 runs. The results for the small problem instances with up to 20 nodes were all reached after fewer than 200 runs and then remained static. For the 111-node problem instances, improvements were observed until the end, though at very long and irregular intervals.


Table 1
Comparative results for different networks

No.  Nodes/edges/demands            CPLEX 8.1   Shortest path   Random start (2000 runs)   GRASP (2000 runs)
1    11/16/19                       1983        2079            1989                       1989
2    11/16/19                       10 442      10 577          10 509                     10 509
3    11/16/19, 1+1 protection       4082        4346            4172                       4172
4    11/16/19, VC4 grooming         1231        1347            1244                       1244
5    20/33/84                       9426        9728            9604                       9604
6    111/160/243                    -           117 336         108 661                    108 661
7    111/160/243, 1+1 protection    -           333 909         313 303                    313 145
8    321/full mesh/1896             -           52 371          51 290 (200 runs)          51 298 (200 runs)

Networks 1-5 are randomly generated test instances. Networks 6 and 7 represent real-world data of a nationwide network with large demands; the 111-node network with 1 + 1 protection is of particular relevance since it represents a typical class of today's network planning problems. Network 8 consists of real node and demand data as well, but in this case the fiber graph is not restricted: the heuristics determine the routing and the required equipment on the basis of a full mesh and delete the unused edges in the end. This eventually leads to about 1872 edges out of the 51 360 that are theoretically possible. The number of chosen edges is not much lower than the number of demands, which reflects the fact that a direct connection is the best choice in most cases if sufficient fibers exist. However, if construction costs for these fibers were incurred (which can be modeled by higher edge costs), the resulting mesh would be much sparser.

The heuristic algorithm is quite fast, so run time is generally not a key issue for reasonable network sizes in the context of long-term planning. However, speed does matter if the network planner wishes to test a large number of different scenarios, e.g., to evaluate the effects of certain changes in the demand matrix or the fiber graph. In such a process, where quick results are more important than the last 0.5% of improvement in the objective function, the quick descent of the GRASP algorithm can prove very helpful. For the real-world problem instance with 111 nodes and 160 edges, one run can be performed in less than one second on a Pentium 4 with 2.4 GHz.

One run for the large 321-node network takes about one minute. The computation times for the smaller networks were all in the range of a few seconds for the whole 2000 runs. CPLEX computation times range from a few seconds for the 11-node networks up to 3 hours for the 20-node network. In the latter case, the calculation was aborted due to a memory limit of 400 MB, so the given objective is not provably optimal.

The results show that both heuristics obtain good solutions within a few percent of the optimal solution for small networks. For larger networks, the absolute performance cannot be judged, but both heuristics achieve a significant improvement compared to the shortest-path solution. The shortest-path solution is obtained by performing just the first two steps of the algorithm shown in Fig. 1; each demand is thus routed along the shortest path with respect to the lengths of the traversed fibers in kilometers. The random start heuristic and the GRASP heuristic perform almost alike for all problem instances; the small differences for networks 7 and 8 are below 0.1%.

Fig. 3 shows a comparison of the random start and the GRASP heuristic on the example of network number 5. The graph shows the best solution of both algorithms over 100 runs. In order to obtain a smooth graph undisturbed by the randomness of the algorithms, the values are averaged over 100 independent restarts of the whole planning process. The GRASP approach starts in a better part of the solution space; its initial descent is significantly steeper than that of the random start approach. In the end, both algorithms converge to the same value. This typical behavior can be observed more or less for all of the tested networks.

[Fig. 3. Comparison of GRASP and random start: best solution costs over the number of runs for network number 5.]

In general, the advantage of the GRASP approach is bigger for smaller networks, while both curves get closer together for larger networks.

As mentioned in Section 5, the behavior of the GRASP algorithm can be adjusted via the ratio between greediness and randomness. Fig. 4 shows the effects of this parameter, again for network number 5. A strongly greedy behavior, as shown in graph "g 10", leads to a very steep descent in the beginning, but the focus is too narrow to achieve a good result in the long term. A stronger random component, as shown in graph "g 2", leads to a milder descent in the beginning but to a better result in the end; setting "g 4" lies somewhere in between. The parameter setting used for "g 2" (which is also the one used in Fig. 3 and for all GRASP computations in Table 1) turned out to be a good choice for all test networks.


It always reaches a final result that is competitive with the random start approach and in most cases shows a much steeper descent in the beginning. Thus the adjustment of this ratio seems to be comparatively independent of the structure of the fiber graph and of the demand matrix.

The behavior of the two heuristics leads to the conclusion that the quality of the local search procedure determines the results in most cases. The fact that only bundles of demands can be rerouted leads to a comparatively small number of local minima with large regions of attraction. Probably only a rerouting strategy operating on single demands would allow the heuristics to reach the solutions achieved by CPLEX. Both heuristics are able to find locally optimal solutions for all problem instances. The GRASP approach is faster in obtaining good solutions but has no advantage if a large number of runs can be performed; in these circumstances it does not yield a better overall solution than the random start heuristic.

Allowing fiber loop-through in the nodes reduces the overall network costs and is implemented for both heuristic approaches. However, it is currently not possible in the linear model and is thus left out of the given results for reasons of comparability. As a consequence, the costs for the protected networks are overestimated in comparison to the equivalent unprotected ones, since 1 + 1 protection leads to a much higher volume of transit traffic.

[Fig. 4. Effects of the ratio between randomness and greediness: costs over the number of runs for the settings "g 2", "g 4" and "g 10" on network number 5.]

7. Conclusions

We have presented two greedy heuristics for a multi-layer SDH/WDM network planning problem, one pure random start approach and one GRASP-like heuristic. Both approaches are able to perform combined routing and equipment planning for large networks with up to 1000 nodes in a reasonable amount of time. The solution quality is near optimal for small problem instances and significantly better than a shortest-path routing for large problem instances. The planning process includes explicit equipment specifications and costs and can be adjusted to several different kinds of hardware.


This enables a network planner to compute different scenarios, e.g., with equipment from different manufacturers or with different demand prognoses, which, combined with his own planning skills, can eventually lead to a suitable design for real-world network planning problems in an SDH/WDM environment. However, the GRASP approach in particular still has potential for further improvement in combination with a refined local search. The current solution therefore provides a promising basis.

References

[1] S. Baroni, P. Bayvel, R.J. Gibbens, S.K. Korotky, Analysis and design of resilient multifiber wavelength-routed optical transport networks, Journal of Lightwave Technology 17 (5) (1999) 743-758.
[2] R. Bhandari, Survivable Networks: Algorithms for Diverse Routing, Kluwer Academic Publishers, Boston, 1999.
[3] E.W. Dijkstra, A note on two problems in connexion with graphs, Numerische Mathematik 1 (1959) 269-271.
[4] W.D. Grover, J. Doucette, Topological design of survivable mesh-based transport networks, Annals of Operations Research 106 (2001) 79-125.
[5] H. Höller, S. Voß, M. Fricke, S. Neidlinger, Schichtenübergreifende Netzplanung [Cross-layer network planning], Proceedings 4. ITG-Fachtagung Photonische Netze, Leipzig, VDE Verlag, Berlin, 2003, pp. 21-28.
[6] H. Höller, S. Voß, A mixed integer linear programming model for multilayer SDH/WDM networks, working paper, Universität Hamburg, 2003.
[7] J. Kennington, K. Lewis, E. Olinick, A. Ortynski, G. Spiride, Robust solutions for the DWDM routing and provisioning problem: Models and algorithms, Optical Networks Magazine 4 (2) (2003) 74-84.

[8] B. Lee, W. Kim, Integrated Broadband Networks, Artech House, Boston, 2002.
[9] G. Maier, A. Pattavina, S. De Patre, M. Martinelli, Optical network survivability: Protection techniques in the WDM layer, Photonic Network Communications 4 (2002) 251-269.
[10] B. Melian, M. Laguna, J.A. Moreno-Perez, Minimizing the cost of placing and sizing wavelength division multiplexing and optical cross-connect equipment in a telecommunications network. Available from , state: 14.5.2004, 2003.
[11] B. Melian, M. Laguna, J.A. Moreno-Perez, Capacity expansion of fiber optic networks with WDM systems: Problem formulation and comparative analysis, Computers & Operations Research 31 (3) (2004) 461-472.
[12] T.F. Noronha, C.C. Ribeiro, Routing and wavelength assignment by partitioning colouring, European Journal of Operational Research 171 (3) (2006) 797-810.
[13] L.S. Pitsoulis, M.G.C. Resende, Greedy randomized adaptive search procedures. Available from , state: 14.5.2004, 2001.
[14] S. Ramamurthy, L. Sahasrabuddhe, B. Mukherjee, Survivable WDM mesh networks, Journal of Lightwave Technology 21 (4) (2003) 870-883.
[15] M.G.C. Resende, A bibliography of GRASP. Available from , state: 14.5.2004, 2001.
[16] J.W. Suurballe, Disjoint paths in a network, Networks 4 (1974) 125-145.
[17] J.W. Suurballe, R.E. Tarjan, A quick method for finding shortest pairs of disjoint paths, Networks 14 (1984) 325-336.
[18] R.T. Wong, A dual ascent approach for Steiner tree problems on a directed graph, Mathematical Programming 28 (1984) 271-287.
[19] H. Zang, WDM Mesh Networks, Kluwer Academic Publishers, Boston, 2003.