A unified framework for approximation of general telecommunication networks


European Journal of Operational Research 163 (2005) 482–502 www.elsevier.com/locate/dsw

O.R. Applications

Mark Vroblefski a, R. Ramesh b,*, Stanley Zionts b

a Department of Business Information Technology, Virginia Polytechnic Institute and State University, 1007 Pamplin Hall (0235), Blacksburg, VA 24061, USA
b Department of Management Science and Systems, School of Management, State University of New York at Buffalo, Jacobs Management Center, Buffalo, NY 14260-4000, USA

Received 25 February 2003; accepted 14 October 2003
Available online 31 December 2003

Abstract

In this paper, we develop a unified framework for approximation of the performance of general telecommunication networks based on a decomposition strategy. The method is an extension of the work presented in [15]. The algorithms assume finite buffer space at each switch, state-dependent arrival rates of data packets, and general service time distribution at the switches. Two methods of buffer space allocation at the switches, and two congestion control mechanisms are modeled. The proposed algorithms have been extensively tested against simulation values. The results show that the proposed framework yields robust, reliable and accurate estimates of network performance measures, such as throughput, number of packets in the system, and switch and link utilization. The computation time required is minimal. The unified framework presents a useful set of tools for telecommunication network designers in evaluating numerous network designs.
© 2003 Elsevier B.V. All rights reserved.

Keywords: Telecommunications; Queueing; Network approximations

1. Introduction

The flow of traffic through a telecommunication network can be modeled as a network of queues. However, the analysis of such systems is often difficult because of many intrinsic complexities, including a finite storage capacity at each switching node, general service time distributions, and the large number of possible system states. Because exact solutions for such systems are lacking, there has been considerable effort to develop techniques to approximate their performance.

In this paper we propose an extension of the work in [15]. Vroblefski et al. [15] develop approximation strategies for open and closed queueing systems in a serial manufacturing situation under four types of

Corresponding author. Tel.: +1-716-6453258; fax: +1-716-6456117. E-mail addresses: [email protected] (M. Vroblefski), [email protected]ffalo.edu (R. Ramesh), [email protected]ffalo.edu (S. Zionts).

0377-2217/$ - see front matter © 2003 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2003.10.027


blocking: manufacturing, communication, minimal, and general. In this research, we extend this work to the approximation of telecommunication networks. The proposed algorithms are based on a unified model for any general network. We use a graph-theoretic representation of a telecommunication network, where a node represents a switching/routing device and an arc represents a link or channel between two devices. Data packets may enter the network at any switching node. We assume full duplex links between nodes, but this assumption can be easily relaxed because each one-way, directed link is modeled as a separate server. We also assume general service time distributions at the server nodes and general transmission time distributions at the links.

Two types of buffer space manipulation at the network switching nodes are considered: (1) each node has a fixed buffer pool that can be used for either incoming data packets or processed packets awaiting transmission, and (2) each node has a fixed buffer pool for either incoming or processed packets, but only a portion of the total buffer space can be used for processed packets. Approximation algorithms for network congestion control mechanisms such as the leaky bucket scheme and isarithmic systems are also developed.

Vroblefski et al. [15] review some of the relevant literature pertaining to the manufacturing environment. There have been several attempts to approximate telecommunication networks. In [6], Kuehn develops a decomposition approximation technique for a general open queueing network where customers can enter the network at any node; he analyzes the subsystems individually by assuming renewal arrival and departure processes. In [16,17], Whitt presents the Queueing Network Analyzer (QNA), a software package that approximates the congestion measures of any general queueing network, together with experimental results.
The nodes are analyzed as GI/G/m queues, using the first two moments of the interarrival and service times to characterize the distributions. Whitt and Kuehn assume unlimited capacity at each node. Hasslinger [4] extends previous decomposition methods to include more general representations of traffic; the method in [4] uses discrete-time semi-Markov processes (SMP) to characterize traffic with short-term and long-term autocorrelations. A series of papers uses a similar decomposition approach, including [9], which restricts the analysis to servers in tandem, and [5,8]. Most of the approximation methods are decomposition techniques. In contrast to the proposed approximations, each of these papers assumes state-independent arrival rates and/or infinite buffer space at the switching nodes. This is where the proposed approximations generalize and extend the work previously carried out.

Vroblefski et al. [15] proposed a unified framework for Blocked Open Queueing Networks (BOQN) and Blocked Closed Queueing Networks (BCQN) for serial systems that builds on and generalizes the decomposition strategy developed in [3]. These works employ a kanban-system characterization of queueing networks. The generalized framework of [15] can be summarized as follows. First, Vroblefski et al. [15] generalize the decomposition strategy in [3] to approximate kanban systems under various blocking protocols, yielding a framework for the analysis of BOQN systems in general. Next, they extend this framework to BCQN systems by embedding an analysis of the corresponding BOQN system within a Non-Blocked Closed Queueing Network (NBCQN) analysis of the pallets in circulation. The strategy in this case is to realize a BOQN with flow characteristics equivalent to those of the BCQN. As a result, they obtain a unified framework for the analysis of BOQN as well as BCQN systems under any blocking mechanism.
We extend the above decomposition strategy to approximate the performance of telecommunication networks. This application of the strategy differs from the serial manufacturing environment in several ways. First, we cannot assume instantaneous transfer between nodes, and therefore model each full duplex link as a pair of servers. The distribution of packet sizes dictates the transmission time distribution, and thus the service time distributions of the servers that represent the links. Secondly, in a general network there are several arrival rates of data packets to a node (from links entering the node and from packets entering the network) and several possible destinations after a packet is processed at a node, whereas in a serial system there is one input stream and one output stream of customers at each service node. Thirdly, we have to consider packets that leave the network immediately after processing at a node, and the corresponding kanbans that are returned to their respective bulletin boards immediately after the packets are processed.

While the use of kanbans is accepted in manufacturing systems, it cannot be directly applied to telecommunication networks. We refer to kanbans throughout our discussion of telecommunication networks purely as an abstraction to simulate the finite buffer size at each switching node; each kanban represents storage space for a single packet.

The proposed unified framework for telecommunication networks, along with that already developed in [15], presents a highly useful set of analysis tools for queueing system designers of both manufacturing and telecommunication networks in evaluating performance under numerous design alternatives. This paper also illustrates the potential of this strategy in a wide variety of applications. We show that the complete framework yields robust, reliable, and accurate estimates of system performance, such as data packet throughput, Work-In-Process (WIP) inventory of data packets, and average utilization of switches and channels, over a wide range of system and network configurations. The algorithms have proven to be a very efficient alternative to simulation in network design.

The organization of the paper is as follows. Section 2 introduces some basic notation and develops the proposed framework for application to telecommunication networks. Section 3 presents the computational results, and Section 4 discusses conclusions and directions for future research.

2. The proposed framework

To begin with, we present a taxonomy that defines the different network features that need to be considered when designing telecommunication networks, and how they affect the approximation techniques. The taxonomy is summarized in Table 1. Next, we present the constructs and algorithms involved in approximating telecommunication networks.

2.1. Telecommunication network taxonomy

One of the major determinants of network performance is network topology. The quality of the applied congestion control mechanisms or the optimal buffer pool size of the switching nodes, for instance, depends on the network topology [12]. Even if the transport system is tailored to the network topology, the possibility that the configuration will change in the future cannot be excluded. Therefore the analyst should select some typical topologies and test the effect of different configurations on the behavior of the transport system. The topologies considered are outlined in Fig. 1: the nodes represent switching nodes, and each arc or line connecting nodes represents a full duplex channel. As stated earlier, the full duplex assumption is easily relaxed. The topologies considered include the chain, ring, star, lattice, and any other general network.

The end-to-end and network congestion control mechanisms also affect network performance. These mechanisms are used to control the number of customers or data packets allowed in the network at one time. We consider and model two such mechanisms: (1) the leaky bucket scheme, and (2) isarithmic flow control. The leaky bucket scheme employs a limited buffer pool (bucket) that contains tokens. As data packets enter the system they must acquire a token; if no tokens are available in the bucket, packets seeking access to the network are discarded. The bucket is replenished by a fixed arrival rate of tokens; see [13].
As in the leaky bucket scheme, isarithmic flow control uses tokens or 'permits' to control the acceptance of packets into the network; see [12]. In this flow control mechanism there is a limited number of permits, and therefore a limited number of packets allowed in the network at one time. As traffic reaches its destination, permits are released to allow the acceptance of new traffic. This network flow control mechanism is a BCQN, and an analysis similar to that for BCQNs presented in [15] can be used.


Table 1
Summary of telecommunications taxonomy

Factor                                            Aspects of approximation technique affected
(1) Topology: chain, ring, star,                  Sequence of subsystem analyses and how
    lattice, general                              visit ratios are found
(2) Congestion control schemes: leaky             External arrival rates
    bucket scheme, isarithmic scheme
(3) Node flow management: minimal blocking,       Node analysis
    generalized minimal blocking
(4) Transmission time across link: fixed          Service time distribution of servers
    (ATM), variable (Frame relay)                 that represent links
(5) Switching techniques: virtual circuits,       How visit ratios are found
    datagrams

Another important issue that has to be considered in telecommunication network design is the flow management at each network switching node. We consider two possible node flow management schemes: (1) each node has a finite buffer pool that can be used either for packets awaiting service or for packets that have already been processed and are awaiting transmission (Minimal Blocking Node Flow Control), and (2) each node has a finite buffer pool for packets awaiting processing or processed packets awaiting transmission, except that only a portion of the total buffer space can be used for processed packets. We refer to the latter as Generalized Minimal Blocking Node Flow Control.

Depending on the type of network involved, the transmission time across a channel may be either fixed, as in ATM networks, or variable, as in Frame Relay networks. The transmission time distribution is modeled as the service time distribution of the servers representing the channels or links.

Lastly, the type of switching technique used throughout the network must be considered. This is modeled through the visit ratios, which are defined shortly. The two switching techniques considered are virtual circuits and datagrams. A virtual circuit is a path from source to destination that a string of packets follows; the datagram switching technique allows each packet to find its own route to the destination. An example of computing visit ratios is given below. The visit ratios are inputs to the proposed set of approximation algorithms.

2.1.1. Visit ratio example

The network under consideration in this example is a four-node chain network (Fig. 1(a)). We define an adjacency matrix X, where each term x_ij equals 1 if there is a direct link from node i to node j, and 0 otherwise. In the full duplex case x_ij = x_ji, but in general this need not hold.
We also define λ_ij as the state-independent arrival rate of packets seeking entrance into the network at node i with a destination of node j, and λ_0i as the arrival rate of packets entering the network at node i:

  λ_0i = Σ_{j=1, j≠i}^{N} λ_ij.   (1)
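As a quick illustration of Eq. (1), the external arrival rate at node i is simply a row sum of the origin-destination rates over all destinations j ≠ i. The sketch below is ours; the 4×4 rate matrix is an illustrative example, not data from the paper.

```python
def external_arrival_rates(lam):
    """Eq. (1): lam[i][j] is the rate of packets entering at node i with
    destination j; the external rate at i sums over all j != i."""
    n = len(lam)
    return [sum(lam[i][j] for j in range(n) if j != i) for i in range(n)]

# illustrative origin-destination rate matrix (zero diagonal)
lam = [
    [0.0, 0.2, 0.1, 0.1],
    [0.3, 0.0, 0.2, 0.1],
    [0.1, 0.2, 0.0, 0.3],
    [0.2, 0.1, 0.3, 0.0],
]
lam0 = external_arrival_rates(lam)  # lam0[i] corresponds to lambda_0,i+1
```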


[Figure 1 shows the five example topologies: (a) a four-node chain, (b) a six-node ring, (c) a nine-node star with node 9 as the hub, (d) a nine-node lattice, and (e) a seven-node general network. Nodes are switching nodes; each line connecting nodes is a full duplex channel.]

Fig. 1. Network topologies considered.

The visit ratios are defined as P_i and P_ij. We define P_ij as the probability that a packet remaining in the network after completing service at node i will next visit node j, i ≠ j. Obviously, P_13 = P_14 = P_24 = P_31 = P_42 = P_41 = 0, because there are no direct links between these pairs of nodes, and P_12 = P_43 = 1, because all packets that remain in the system after processing at nodes 1 and 4 must next visit nodes 2 and 3, respectively. P_ij can also be defined as the arrival rate of packets into the network that use link (i,j), divided by the arrival rate of packets into the network that use any link out of node i. Therefore the remaining P_ij's are as follows:

  P_21 = (λ_21 + λ_31 + λ_41) / (λ_21 + λ_31 + λ_41 + λ_23 + λ_24 + λ_13 + λ_14),

  P_23 = (λ_23 + λ_24 + λ_13 + λ_14) / (λ_21 + λ_31 + λ_41 + λ_23 + λ_24 + λ_13 + λ_14),

  P_32 = (λ_32 + λ_31 + λ_42 + λ_41) / (λ_32 + λ_31 + λ_42 + λ_41 + λ_34 + λ_24 + λ_14),

  P_34 = (λ_34 + λ_24 + λ_14) / (λ_32 + λ_31 + λ_42 + λ_41 + λ_34 + λ_24 + λ_14).


We define P_i as the probability that a packet that enters node i will leave the network immediately after processing at node i. By an analysis similar to that for the P_ij's, we obtain the following values for the P_i's:

  P_1 = (λ_41 + λ_31 + λ_21) / (λ_41 + λ_31 + λ_21 + λ_01),

  P_2 = (λ_12 + λ_32 + λ_42) / (λ_12 + λ_32 + λ_42 + λ_02 + λ_13 + λ_14 + λ_31 + λ_41),

  P_3 = (λ_13 + λ_23 + λ_43) / (λ_13 + λ_23 + λ_43 + λ_03 + λ_14 + λ_24 + λ_42 + λ_41),

  P_4 = (λ_14 + λ_24 + λ_34) / (λ_14 + λ_24 + λ_34 + λ_04).
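The chain-network computations above can be automated: for each directed link, enumerate the origin-destination rates whose unique chain route traverses it, and form the ratios exactly as in the expressions for P_21 through P_34. The sketch below (function name and indexing are ours; nodes are 0-indexed) handles a chain of any length.

```python
def chain_visit_ratios(lam):
    """Visit ratios P[(i, j)] for an n-node chain (0-indexed nodes).
    lam[s][d] is the rate of packets entering at node s with destination d;
    in a chain, each origin-destination pair has a unique path."""
    n = len(lam)

    def flow(i, j):
        # total rate routed over directed link (i, j)
        if j == i + 1:   # forward link: origins 0..i, destinations j..n-1
            return sum(lam[s][d] for s in range(i + 1) for d in range(j, n))
        if j == i - 1:   # backward link: origins i..n-1, destinations 0..j
            return sum(lam[s][d] for s in range(i, n) for d in range(j + 1))
        return 0.0

    P = {}
    for i in range(n):
        out = [j for j in (i - 1, i + 1) if 0 <= j < n]
        total = sum(flow(i, j) for j in out)
        for j in out:
            P[(i, j)] = flow(i, j) / total if total else 0.0
    return P

# every off-diagonal rate set to 1 in a 4-node chain
lam = [[0.0 if s == d else 1.0 for d in range(4)] for s in range(4)]
P = chain_visit_ratios(lam)
```

With all rates equal to 1, P[(1, 0)] evaluates to 3/7, matching the expression for P_21 above (numerator λ_21 + λ_31 + λ_41, denominator adding λ_23 + λ_24 + λ_13 + λ_14).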

An approach similar to the above example can be used for more complex networks. When virtual circuits are employed, a method similar to the one above can be used to calculate the visit ratios. When a connectionless scheme is used, a resource allocation integer programming (IP) model can be developed to disperse traffic across the entire network; from the IP model, the average rate of packets sent on the various possible routes can be found, and subsequently the visit ratios can be calculated. We now discuss the models for the different node flow mechanisms.

2.2. Switching node and link analyses

The first issue to consider is the sequence in which subsystems will be analyzed. Here a subsystem is either a switching node or a server representing a one-way link or channel. Vroblefski et al. [15] consider queues in tandem, where the obvious order of subsystem analysis is to begin at the first node, make a forward pass to the last node, and then make a backward pass to return to the first node. In this section we consider the several network configurations shown in Fig. 1, and define a logical sequence of subsystem analyses for each topology as follows.

When considering a chain topology, the same sequence as with tandem queues still applies. As stated earlier, in telecommunication networks we also model the links as servers, where the service time of each such server corresponds to the transmission time across the link. If we denote each switching node as i = 1, 2, …, N and each link as (i,j), where i is the source node and j is the destination node, the order of the subsystem analyses for the network in Fig. 1(a) would be: 1, (1,2), 2, …, (3,4), 4, (4,3), 3, …, 2, (2,1).

If the network under consideration is a ring topology, the sequence is similar to that of the chain configuration: the forward pass consists of a full clockwise sweep and the backward pass of a full counterclockwise sweep. For the network in Fig. 1(b), the sequence for the subsystem analyses would be: 1, (1,2), 2, …, 6, (6,1), 1, (1,6), 6, (6,5), …, 2, (2,1).

The star topology is slightly more complicated. We propose the following subsystem analysis sequence: all the clients, all the links from clients to the hub, the hub, all the links from the hub to the clients. The sequence for the network in Fig. 1(c) would be: 1, 2, …, 8, (1,9), (2,9), …, (8,9), 9, (9,1), (9,2), …, (9,8).

For the lattice and general network configurations we propose a sequence that corresponds to a breadth-first search. For Fig. 1(d), the forward pass would be: 1, (1,2), (1,3), 2, 3, (2,4), (2,5), (3,5), (3,6), 4, 5, 6, (4,7), (5,7), (5,8), (6,8), 7, 8, (7,9), (8,9). The backward pass would be similar, but would begin with node 9 with each link taken in the opposite direction.
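The breadth-first ordering for the lattice can be generated mechanically: list each BFS level's nodes, then the links from that level into the next. The helper below is ours, not from the paper; it reproduces the forward pass for Fig. 1(d), with the final node 9 appended at the end (the paper folds node 9 into the start of the backward pass).

```python
from collections import deque

def bfs_forward_pass(adj, root):
    """Forward-pass subsystem sequence: per BFS level, the level's nodes,
    then the directed links from that level to the next level."""
    level = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in sorted(adj[u]):
            if v not in level:
                level[v] = level[u] + 1
                q.append(v)
    seq = []
    for d in range(max(level.values()) + 1):
        nodes = sorted(n for n in level if level[n] == d)
        seq.extend(nodes)
        seq.extend(sorted((u, v) for u in nodes for v in adj[u]
                          if level[v] == d + 1))
    return seq

# adjacency of the lattice in Fig. 1(d)
adj = {1: [2, 3], 2: [1, 4, 5], 3: [1, 5, 6], 4: [2, 7], 5: [2, 3, 7, 8],
       6: [3, 8], 7: [4, 5, 9], 8: [5, 6, 9], 9: [7, 8]}
seq = bfs_forward_pass(adj, 1)
```

The backward pass is obtained the same way, rooted at node 9 with each link reversed.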


2.2.1. Minimal blocking node flow control

Without loss of generality, the switching node and link subsystem analyses for the Minimal Blocking Node Flow Control are illustrated in Figs. 2 and 3, respectively. Both follow the minimal blocking analysis presented in [3]. Each switching node has a fixed input/output buffer pool of size k_i. We assume that packets are not transmitted unless there is space available in the buffer pool of the destination node (communication blocking); this assumption holds because congestion was not prohibitive in the networks that were tested. The number of kanbans, k_i − 1, at each switching node is a result of the above approximation scheme: the remaining kanban and buffer space are transferred back to the links entering the node to ensure that packets are sent only when space is available. The reasoning behind this is reviewed in [15]; for more details and a proof of correctness, refer to [10,11].

In the decomposition strategy, the entire network is broken into subsystems, with each subsystem consisting of the synchronization station preceding a service point, the real input queue (RIQ) at the service point, and the following synchronization station. A synchronization station is a virtual station positioned between adjoining service points, intended to capture the coordination between the two servers in the transfer of packets from one to the other. Analyzing each subsystem independently, the entire network is approximated using a forward-backward iterative passing scheme among the subsystems (see Section 2.2).

The individual subsystems are also analyzed with an iterative process. The service rates at each station within a subsystem, the upstream arrival rates (the λ_i^u's and λ_ij^u's for the node and link analyses, respectively), and the downstream arrival rates (the λ_i^d's and λ_ij^d's, respectively) are first initialized. Using the algorithm in [2], the subsystem is solved as an NBCQN, and the state-dependent arrival rates at each station are found. A station is either a synchronization station or the real input queue and server. Given the state-dependent arrival rates, each individual station is solved to find the throughputs: the synchronization stations are solved using a simple Markov chain model, while the service station and real input queue throughputs are found using the algorithm in [7] for a λ(n)/C2/1/N queue. The state-dependent service rates are then set equal to the throughputs. These steps are repeated until the arrival rates converge; for details on the procedure refer to [15].

In the serial systems studied in [15], the upstream and downstream arrival rates are updated from the previous subsystem's analysis and the subsequent subsystem's analysis, respectively. As mentioned earlier,

[Figure 2 depicts the minimal blocking node analysis for node i: synchronization station SS_I^i (arrival rates λ_i^u(n_i^u) and λ_I^i(n_I^i)) precedes the real input queue RIQ_i (rate λ_RIQ^i(n_RIQ^i)), followed by synchronization station SS_III^i (rate λ_III^i(n_III^i), downstream rate λ_i^d(n_i^d)). k_i − 1 kanbans circulate; packets exit the network with probability P_i, returning kanbans at rate λ_{I,RIQ}^i(n_I^i), and continue with probability 1 − P_i.]

Fig. 2. Minimal blocking node analysis.

[Figure 3 depicts the minimal blocking link analysis for link (i,j): synchronization station SS_I^{ij} (rates λ_ij^u(n_ij^u), λ_I^{ij}(n_I^{ij})) precedes RIQ_ij with a single buffer space, followed by SS_III^{ij} (rates λ_III^{ij}(n_III^{ij}), λ_ij^d(n_ij^d)); k_i − 1 and k_j − 1 kanbans appear at the upstream and downstream ends, respectively.]

Fig. 3. Minimal blocking link analysis.

in the general network topology there are several arrival rates of data packets to a node, and several possible destinations after a packet is processed at a node. Packets that leave the network immediately after processing at a node, and the kanbans that are returned to their respective bulletin boards immediately afterwards, must also be considered. Therefore the upstream and downstream arrival rates, and the arrival rates of kanbans to their bulletin boards, must be adjusted.

We now define the upstream and downstream arrival rates in the switching node analysis. We define λ_i^u(n_i^u) as the arrival rate of packets into node i given that there are n_i^u packets waiting. This includes packets arriving at node i from outside the network and packets already in the network arriving through other links. Therefore, the total arrival rate of packets into node i is:

  λ_i^u(n_i^u) = λ_0i + Σ_{∀ j: x_ji = 1} λ_III^{ji}(n_III^{ji}).   (2)

The λ_III^{ji}(n_III^{ji}) are the state-dependent arrival rates of packets into synchronization station SS_III^{ji} in the link analyses; SS_III^{ji} corresponds to the synchronization station SS_I^i in the node analyses.

We define λ_i^d(n_i^d) as the arrival rate of kanbans from the links exiting node i given that there are n_i^d kanbans currently in the queue. These rates are the sum of the arrival rates of kanbans to the bulletin boards of all links leaving node i. Hence, we have:

  λ_i^d(n_i^d) = Σ_{∀ j: x_ij = 1} λ_I^{ij}(n_I^{ij}).   (3)

The λ_I^{ij}(n_I^{ij}) are the state-dependent arrival rates of kanbans into synchronization station SS_I^{ij} in the link analyses; SS_I^{ij} corresponds to the synchronization station SS_III^i in the node analyses.

We now consider the return of kanbans to the bulletin board resulting from packets that leave the network immediately after processing at node i, λ_{I,RIQ}^i(n_I^i). We approximate these arrival rates by assuming a constant arrival rate at station RIQ_i, regardless of the number of kanbans at SS_I^i and SS_III^i, so that the rates are state-independent. By definition, P_i is the probability that an arrival at RIQ_i will leave the network after processing. Therefore we have:

  λ_{I,RIQ}^i(n_I^i) = [ Σ_{x=0}^{k_i − 1} λ_RIQ^i(x) · P(n_RIQ^i = x) ] · P_i,  ∀ n_I^i.   (4)
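Eq. (4) is simply an expectation over the RIQ occupancy distribution, scaled by the exit probability P_i. A minimal sketch (names ours):

```python
def kanban_return_rate(lam_riq, p_riq, P_i):
    """Eq. (4): rate at which kanbans return directly to the bulletin board
    because packets exit the network right after processing at node i.
    lam_riq[x] = lambda_RIQ(x), p_riq[x] = P(n_RIQ = x), P_i = exit
    probability; the result is the same for every n_I (state-independent)."""
    return P_i * sum(l * p for l, p in zip(lam_riq, p_riq))
```

For example, with rates [2.0, 1.0], occupancy probabilities [0.5, 0.5] and P_i = 0.4, the expected rate is 0.4 × 1.5 = 0.6.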


The analysis of the links is the same as minimal blocking in [3], where the buffer size is 1. Fig. 3 illustrates the NBCQN for link (i,j). The synchronization station SS_I^{ij} maps onto synchronization station SS_III^i, and SS_III^{ij} maps onto SS_I^j. The upstream arrival rates, λ_ij^u(n_ij^u), are defined as the state-dependent arrival rates of packets that need to be transmitted across link (i,j). Therefore it is necessary to weight the arrival rates of the packets exiting node i at synchronization station SS_III^i by the probability that a packet leaving node i is destined for node j, P_ij:

  λ_ij^u(n_ij^u) = λ_III^i(n_III^i) · P_ij.   (5)

This gives the arrival rate, to the synchronization station, of packets that must next visit node j; SS_III^{ij} coordinates packets from node i that next visit node j.

Now consider the downstream arrival rates for the link analyses. At synchronization station SS_III^{ij}, we consider only packets arriving at node j from node i. But there may be other arrivals at node j, from outside the network and from other links feeding into node j, that contend for the kanbans arriving at the bulletin board of node j. Therefore it is necessary to weight the arrival rate of kanbans to the bulletin board at node j, which implies a fair allocation of the kanbans at the bulletin board of node j to packets transmitted on link (i,j). The downstream arrival rates are therefore:

  λ_ij^d(n_ij^d) = [ λ_I^j(n_I^j) + λ_{I,RIQ}^j(n_I^j) ] · λ_III^{ij}(n_III^{ij}) / ( λ_0j + Σ_{∀ l: x_lj = 1} λ_III^{lj}(n_III^{lj}) ).   (6)
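Eqs. (2)-(6) are recomputed repeatedly until successive rate values agree to within ε; both the overall framework and the inner subsystem analyses terminate through this same pattern. A generic sketch of such a fixed-point loop (the function name, tolerance handling, and the toy update below are ours):

```python
def fixed_point(update, x0, eps=1e-6, max_iter=1000):
    """Iterate x <- update(x) until the largest componentwise change is
    below eps, mirroring the convergence tests in the decomposition."""
    x = list(x0)
    for _ in range(max_iter):
        x_new = update(x)
        if max(abs(a - b) for a, b in zip(x, x_new)) < eps:
            return x_new
        x = x_new
    return x

# toy contraction with fixed point x = 2, standing in for the rate updates
rates = fixed_point(lambda x: [0.5 * v + 1.0 for v in x], [0.0])
```

In the framework, `update` would recompute the λ^u and λ^d vectors from the most recent λ_I, λ_RIQ and λ_III values via Eqs. (2), (3), (5) and (6).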

Using the previous sequences for the appropriate topologies, we present the overall approximation framework as follows.

Algorithm: Framework_TC_Networks
Step 0: {Initialization} Find all visit ratios P_i and P_ij, i = 1, …, N, ∀ j such that x_ij = 1. Fix all arrival rate parameters (the λ_I's, λ_RIQ's and λ_III's) to some initial value; the switching node service rate is recommended.
Step 1: {Subsystem Analyses} Depending on the topology and sequencing method used, solve each subsystem, recalculating the λ^u's and λ^d's using the most recent values of the λ_I's, λ_RIQ's and λ_III's and Eqs. (2), (3), (5) and (6).
Step 2: {Convergence Test} Check whether the recomputed λ^d and λ^u values are within ε of their old values. If so, stop; else return to Step 1.

The independent analyses of the subsystems (nodes and links) discussed above correspond to Step 1 of the framework. Modeling each subsystem as an NBCQN with circulating kanbans, the overall strategy of these analyses is as follows.

Algorithm: Subsystem_Analysis
Step 0: {Initialization} Consider subsystem i (or (i,j)). Assume the unknown state-dependent service rates (μ) at each station of the subsystem (SS_I, RIQ_i, SS_III).
Step 1: {NBCQN Modeling} Modeling the circulation of kanbans within the subsystem as an NBCQN, determine the state-dependent arrival rates (λ) at each station using the algorithm in [2] (also see [1]).
Step 2: {Throughput Determination} If the subsystem is a switching node, adjust the λ_III^i(n_III^i) by multiplying them by (1 − P_i), and calculate λ_{I,RIQ}^i(n_I^i) using Eq. (4). For each station in the subsystem, determine the state-dependent throughput rates (ν), given the arrival rates (λ). In this case, employ Markov analyses for the synchronization stations and the algorithm in [7] for the real input queue. The individual stationwise analysis requires that the underlying subsystem be modeled as an NBCQN.
Step 3: {Service Rate Determination} Set the service rates (μ) of each station to their corresponding throughput rates (ν).
Step 4: {Convergence Test} Check whether the recomputed μ values are within ε of their old values. If so, stop; else return to Step 1.

2.2.2. Generalized minimal blocking node flow control

Without loss of generality, the switching node and link subsystem analyses for the generalized minimal blocking node flow control are illustrated in Figs. 4 and 5, respectively. The link analysis is identical to that for minimal blocking node flow control, except that the buffer space for incoming packets is b_i rather than k_i. The number of virtual kanbans at subsystem i corresponds to the number of outgoing packets allowed at cell i, b_i; these are posted at a virtual bulletin board positioned between the main bulletin board and the service station. The node analysis corresponds to the general blocking analysis presented in [15]. Here we use Eqs. (2)-(6) as in the case of minimal blocking node flow control, and calculate λ_{VB,RIQ}^i(n_VB^i) similarly to λ_{I,RIQ}^i(n_I^i) in Eq. (4). The generalized minimal blocking node flow control can model any general blocking scheme at a switching node in a chain or ring topology (see [10,11]). For any other topology this does not hold, and the model is then simply a generalization of the minimal blocking node flow control.

2.3. Network congestion control mechanisms

Now we develop algorithms for the two network flow control mechanisms previously discussed: (1) the leaky bucket scheme, and (2) isarithmic congestion control.
The leaky bucket scheme is often used in QoS (Quality of Service) specifications, for example in the Logical Link Control and Adaptation Protocol

[Figure 4 depicts the generalized minimal blocking node analysis for node i: synchronization station SS_I^i (rates λ_i^u(n_i^u), λ_I^i(n_I^i)) is followed by a virtual synchronization station VSS_i (rates λ_VQ^i(n_VQ^i), ν_I^i(n_I^i), with k_i − b_i − 1 spaces), the real input queue RIQ_i (rate λ_RIQ^i(n_RIQ^i)), and SS_III^i (rates λ_III^i(n_III^i), λ_i^d(n_i^d)). b_i virtual kanbans circulate through a virtual bulletin board (rates λ_VB^i(n_VB^i), λ_{VB,RIQ}^i(n_VB^i)) and k_i − 1 kanbans through the main bulletin board (rate λ_{I,RIQ}^i(n_I^i)); packets exit with probability P_i and continue with probability 1 − P_i.]

Fig. 4. Generalized minimal blocking node analysis.


[Figure 5 depicts the generalized minimal blocking link analysis for link (i,j): synchronization station SS_I^{ij} (rates λ_ij^u(n_ij^u), λ_I^{ij}(n_I^{ij}), with b_i spaces upstream) precedes RIQ_ij with a single buffer space, followed by SS_III^{ij} (rates λ_III^{ij}(n_III^{ij}), λ_ij^d(n_ij^d)), with k_j − 1 kanbans downstream.]

Fig. 5. Generalized minimal blocking link analysis.

(L2CAP) [14]. The isarithmic congestion control mechanism is included for comparison, and it illustrates the flexibility of the decomposition approach.

2.3.1. Leaky bucket scheme

The leaky bucket network flow control mechanism uses tokens to restrict the flow of packets into a network. Each packet entering the network must acquire a token before it is permitted entrance. Available tokens are stored in a reservoir, or 'bucket', of size K_T, and fresh tokens flow into the bucket at a constant rate λ_T. After a packet leaves the network, its token is discarded. We propose a simple algorithm to convert the actual external arrival rates into the network, the λ_0i's, into effective arrival rates, the λ*_0i's, which are then used as inputs to the network. The model is illustrated in Fig. 6. A synchronization station coordinates the arrival rate of tokens and the external arrival rates of data packets. The Markov chain for the synchronization station is solved and the throughput determined; the throughput is then apportioned according to each node's share of the total external arrival rate. The algorithm is as follows.

Algorithm: Leaky_Bucket
Step 1: {Solve Markov Chain}

[Fig. 6. Leaky bucket scheme for network congestion control.]


Solve the Markov chain for the synchronization station in Fig. 6, using arrival rates λ_0 and λ_T, where λ_0 = Σ_{i=1}^{N} λ_{0i}, to find ν_0, the synchronization station's throughput.

Step 2: {Find the λ*_{0i}'s} Find λ*_{0i}, i = 1, ..., N, where λ*_{0i} = ν_0 · λ_{0i}/λ_0.

2.3.2. Isarithmic network congestion control

The isarithmic network flow control mechanism uses 'permits' to regulate the flow of traffic into a network. Similar to the leaky bucket scheme, each packet must acquire a permit to be allowed into the network. As a packet exits the network, its permit is returned so that another packet may enter [13]. We assume that a fixed number of permits, K_I, circulate throughout the network. The isarithmic flow control is illustrated in Fig. 7. We assume that K_I permits are available and that the arrival rate of permits back to the bulletin board is state-independent, as opposed to the BCQN analysis presented in [15]. If we set K_T = K_I and λ_T = Σ_{i=1}^{N} λ_{i0}, where λ_{i0} is the departure rate of packets from node i, the solution of the Markov chain in the leaky bucket algorithm yields the effective arrival rates to each node from outside the network. We follow an approach similar to that for BCQNs in [15]. The algorithm follows.

Algorithm: Isarithmic Flow Control

Step 0: {Initialization} Fix the λ_{i0}'s to some initial values.

Step 1: {Leaky Bucket} Find the effective arrival rates λ*_{0i} using the leaky bucket algorithm, where K_T = K_I and λ_T = Σ_{i=1}^{N} λ_{i0}.

Step 2: {Solve the Network} Solve the network using the λ*_{0i}'s and the appropriate algorithms in Section 2.2 to find new λ_{i0} values.

Step 3: {Convergence Test} Check whether the recomputed λ_{i0} values are within ε of their old values. If they are within the chosen interval, then stop; else return to Step 1.
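Steps 1-2 of the leaky bucket algorithm can be sketched as follows, under the simplifying assumption that the token pool behaves as a birth-death chain in which a packet arriving to an empty bucket is blocked. This is a simplification of the synchronization-station Markov chain actually solved in the paper, used here only to make the weighting in Step 2 concrete.

```python
def leaky_bucket_effective_rates(lam0, K_T, lam_T):
    """Steps 1-2 of the leaky bucket algorithm (simplified).

    Token pool modeled as a birth-death chain on {0, ..., K_T}:
    tokens arrive at rate lam_T (birth), and each external packet
    arrival (total rate lam_0 = sum(lam0)) consumes one token (death).
    A packet finding no token is assumed blocked -- a simplification
    of the synchronization-station Markov chain of the paper.
    """
    lam_total = sum(lam0)
    rho = lam_T / lam_total
    # Unnormalized stationary probabilities p_n proportional to rho**n.
    weights = [rho ** n for n in range(K_T + 1)]
    p0 = weights[0] / sum(weights)       # probability the bucket is empty
    nu0 = lam_total * (1 - p0)           # synchronization throughput (Step 1)
    # Step 2: weight by each node's share of the external arrivals.
    return [nu0 * l / lam_total for l in lam0]
```

For λ_{0} = λ_T and K_T = 1, half of the offered load is admitted, and each node's effective rate is its proportional share of ν_0.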

3. Computational results

Extensive computational studies on the proposed algorithms for approximating performance measures of telecommunication networks have been carried out. The purpose of these experiments was to evaluate

[Fig. 7. Isarithmic network congestion control.]


the computational effort and the quality of the approximated performance measures of the proposed algorithms, using network simulations as benchmarks. The average number of packets in the system (WIP), the average packet throughput, node utilization, and link utilization are used for this parametric assessment. The algorithms have been programmed in Fortran and implemented on a SUN/UNIX server with a 248 MHz SUNW UltraSPARC-II CPU. The simulation models have been developed in the AweSim/SLAM simulation language and implemented on a 500 MHz IBM/PC. The algorithms on the UNIX server are fully portable to the PC environment. We organize our results by the type of network congestion control used. We tested systems with no network congestion control, systems incorporating the leaky bucket scheme, and systems using the isarithmic network congestion control mechanism.

3.1. Study of systems with no congestion control

The experimental studies on systems lacking congestion control are based on a full factorial design involving the following parameters and their associated levels:

1. Network topology: Chain, Ring, Star, and Lattice
2. Number of switching nodes, N: {5, 7, 10} ({6, 9, 12} for the Lattice structure)
3. Buffer size at each switching node, k: {5, 10, 15, 20}
4. Output buffer size at each node, b_i: four levels depending on k
5. Coefficient of variation of the node and link service time distributions, CV² ∈ {0.5, 1, 2}
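The size of this design can be checked by enumerating the cross product of the levels above (4 topologies × 3 network sizes × 4 buffer sizes × 4 output-buffer levels × 3 CV² values). The placeholder level labels below are illustrative only:

```python
import itertools

topologies = ["Chain", "Ring", "Star", "Lattice"]
sizes = [1, 2, 3]          # three N levels per topology ({5,7,10}, or {6,9,12} for Lattice)
buffer_k = [5, 10, 15, 20]
buffer_b = [1, 2, 3, 4]    # four output-buffer levels, chosen relative to k
cv2 = [0.5, 1, 2]

design = list(itertools.product(topologies, sizes, buffer_k, buffer_b, cv2))
print(len(design))  # 576 runs, matching the count reported in the text
```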

The above experimental design includes a total of 576 runs: 144 under minimal blocking node flow control and 432 under the generalized minimal blocking node flow mechanism. The λ_{ij}'s, the arrival rates of packets that enter the network at node i and are bound for node j, are set for each topology and network size. The mean service times for switching nodes and links are 0.1 and 0.5, respectively. The mean service rate for the hub switching node (node 5 in the 5-node star, and node 10 in the 10-node star) in the star topology is one third that of the other nodes. Packets were routed to their destinations along the path with the fewest hops, and the visit ratios were calculated accordingly. Simulations of the networks under all these configurations have been carried out for the processing and transmission of 100,000 packets after a warmup of 10,000 initial packets. The simulations yielded steady performance characteristics under these run-length and warmup conditions. The simulations required between 10 minutes and more than an hour of CPU time on the PC, depending on the system parameters. By comparison, the average CPU time required by the approximation algorithms on the slower machine, the UNIX server, has been 0.23 CPU seconds, with a maximum of 0.57 CPU seconds. All algorithms converged in the trials conducted with ε = 0.001. The chosen value ε = 0.001 is extremely small and signifies stable convergence conditions. Any further reduction in ε is unlikely to alter the convergence values significantly. Increasing ε may not help either: it may degrade the quality of the estimates without yielding any computational advantage, since the computational load of the algorithms with ε = 0.001 is already minimal.
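The fewest-hop routing and the resulting visit ratios can be sketched as follows; the adjacency-list format and the `visit_ratios` helper are illustrative choices, not the paper's (Fortran) implementation:

```python
import collections

def shortest_hop_path(adj, src, dst):
    """Fewest-hop path from src to dst via breadth-first search."""
    prev = {src: None}
    queue = collections.deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

def visit_ratios(adj, demands):
    """Fraction of total external traffic passing through each node,
    with every origin-destination flow routed on its fewest-hop path.

    demands: {(origin, destination): arrival rate lambda_ij}
    """
    total = sum(demands.values())
    through = collections.Counter()
    for (i, j), rate in demands.items():
        for node in shortest_hop_path(adj, i, j):
            through[node] += rate
    return {n: r / total for n, r in through.items()}
```

On a 3-node chain with unit traffic from node 1 to node 3, for example, every node lies on the single fewest-hop path and has visit ratio 1.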
The accuracy of the approximation for each of the four performance measures mentioned earlier in this section is represented by the percent difference (% diff) between the approximation results and the simulation, written as follows:

% diff = [(Perf. measure from approximation - Perf. measure from simulation) / (Perf. measure from simulation)] × 100.
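This sign convention (negative when the approximation underestimates the simulated value) is captured in one line:

```python
def pct_diff(approx, sim):
    """Percent difference of the approximation relative to the simulation."""
    return (approx - sim) / sim * 100.0
```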

The results of these studies are summarized for representative configurations in Table 2. Table 2 also includes the confidence intervals for the simulation runs, given in percentages. The confidence intervals are used to determine the statistical precision of the simulation results: each represents the range of values within which the true average lies with a probability of 95%. Similar results have been obtained in the rest of the trials.

Table 2
Results for no network congestion control (% difference = approximation vs. simulation; CI = 95% simulation confidence interval, %)

Chain topology
N  k  b  | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
5  10  4  | -1.42 (0.80) | 0.40 (1.86) | -0.70 (1.80) | 0.28 (1.87)
5  10  6  | -2.38 (0.72) | 0.22 (0.57) | -1.32 (0.26) | 0.10 (1.38)
5  10  8  | -2.71 (1.08) | 0.05 (1.34) | -0.92 (0.63) | -0.11 (1.50)
5  10  10 | -2.83 (1.14) | -0.04 (0.43) | -1.03 (0.39) | -0.20 (1.99)
5  20  5  | 0.78 (0.41) | 0.42 (1.77) | 0.05 (1.73) | 0.29 (2.08)
5  20  10 | -3.87 (1.02) | -0.55 (0.71) | -1.04 (1.12) | -0.64 (1.77)
5  20  15 | -4.80 (0.54) | -1.19 (1.03) | -1.85 (1.07) | -1.43 (1.31)
5  20  20 | -4.80 (1.04) | -1.19 (0.36) | -1.85 (0.38) | -1.43 (1.71)
10 10  4  | -5.26 (1.12) | 1.31 (1.00) | -1.56 (1.74) | -0.48 (2.11)
10 10  6  | -4.76 (1.06) | -1.26 (1.34) | -2.10 (0.67) | -1.83 (1.66)
10 10  8  | -4.88 (0.69) | -2.46 (0.70) | -4.70 (1.70) | -4.51 (1.80)
10 10  10 | -5.57 (1.44) | -2.46 (1.38) | -4.95 (1.40) | -4.78 (1.55)
10 20  5  | -7.61 (1.04) | 1.23 (1.32) | -0.52 (1.30) | -0.03 (2.07)
10 20  10 | -7.88 (0.83) | -0.52 (0.24) | -2.28 (1.39) | -2.56 (1.54)
10 20  15 | -8.15 (0.79) | -0.74 (1.57) | -2.43 (0.20) | -3.31 (1.22)
10 20  20 | -8.42 (0.91) | -0.07 (2.13) | -2.18 (1.52) | -3.78 (1.72)

Ring topology
N  k  b  | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
5  10  4  | -2.04 (1.02) | 0.03 (1.56) | -0.42 (0.55) | -0.05 (1.96)
5  10  6  | -0.65 (1.28) | 0.47 (0.26) | 0.04 (0.63) | 0.61 (1.68)
5  10  8  | -0.70 (1.35) | 0.47 (0.86) | -0.04 (1.75) | 0.12 (1.56)
5  10  10 | -2.27 (1.37) | 0.47 (0.64) | -0.36 (1.68) | 1.10 (1.27)
5  20  5  | -1.77 (0.59) | 0.01 (0.89) | -0.06 (1.62) | 0.42 (1.22)
5  20  10 | -1.52 (1.50) | -0.29 (0.31) | 0.23 (1.74) | 0.53 (1.61)
5  20  15 | -1.07 (0.94) | 0.47 (0.60) | -0.15 (0.99) | -0.28 (1.41)
5  20  20 | -1.71 (0.74) | 0.21 (1.21) | -0.74 (1.76) | -1.84 (1.57)
10 10  4  | -4.35 (0.33) | 0.07 (0.36) | 0.45 (1.81) | -4.58 (1.67)
10 10  6  | -3.34 (1.40) | 0.82 (0.44) | 0.70 (1.84) | -4.75 (2.17)
10 10  8  | -3.68 (1.39) | 1.27 (1.14) | 0.40 (1.83) | -4.75 (1.30)
10 10  10 | -3.92 (0.35) | 0.37 (1.86) | -2.10 (0.81) | -4.75 (1.92)
10 20  5  | -6.87 (0.40) | -0.99 (1.82) | 0.12 (0.52) | -6.17 (1.87)
10 20  10 | -4.28 (1.43) | 0.36 (1.41) | 1.40 (1.30) | -4.69 (1.63)
10 20  15 | -5.33 (0.31) | -0.54 (0.79) | -0.31 (0.53) | -4.22 (1.71)
10 20  20 | -7.57 (1.33) | 1.04 (1.82) | 1.53 (1.22) | -4.92 (2.18)

Star topology
N  k  b  | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
5  10  4  | -0.28 (0.94) | -0.70 (1.14) | -1.41 (1.09) | -2.45 (1.56)
5  10  6  | 0.63 (0.77) | 0.35 (2.15) | 0.41 (1.07) | -1.28 (1.25)
5  10  8  | -1.23 (1.06) | -1.33 (1.56) | -2.48 (0.44) | -3.67 (1.83)
5  10  10 | -1.39 (1.12) | -1.45 (1.50) | -2.40 (1.15) | -3.67 (1.49)
5  20  5  | -1.97 (1.14) | 0.05 (0.81) | -0.77 (1.25) | -0.85 (2.01)
5  20  10 | -2.52 (0.49) | -0.47 (0.46) | -1.63 (0.24) | -1.09 (1.47)
5  20  15 | -2.40 (1.45) | -0.11 (0.80) | -1.30 (0.67) | -0.86 (2.03)
5  20  20 | -2.56 (0.91) | -0.47 (0.93) | -1.03 (0.61) | -0.86 (2.06)
10 10  4  | -2.63 (0.76) | -0.18 (0.66) | -0.09 (1.79) | -0.56 (2.19)
10 10  6  | -4.12 (1.11) | -0.44 (1.62) | -0.70 (0.30) | -3.93 (1.75)
10 10  8  | -2.06 (0.56) | 0.08 (0.89) | 0.37 (0.80) | -4.92 (1.81)
10 10  10 | -3.84 (1.10) | 0.21 (2.00) | 0.76 (1.09) | -4.59 (2.12)
10 20  5  | -5.70 (0.45) | 0.38 (0.31) | 0.12 (0.83) | -0.51 (1.74)
10 20  10 | -3.43 (1.32) | 0.90 (1.99) | 0.77 (0.56) | -2.33 (1.79)
10 20  15 | -2.10 (0.41) | 0.64 (0.56) | 0.95 (1.05) | -4.83 (1.81)
10 20  20 | -4.42 (0.88) | 0.90 (0.74) | 0.62 (1.15) | -5.16 (1.22)

Lattice topology
N  k  b  | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
6  10  4  | -5.89 (1.18) | 0.00 (0.67) | -1.39 (1.44) | -0.71 (1.54)
6  10  6  | -4.90 (0.75) | 0.83 (1.30) | -0.28 (0.49) | -0.18 (1.52)
6  10  8  | -5.50 (0.82) | -0.04 (0.74) | -1.28 (0.75) | -0.96 (1.31)
6  10  10 | -5.51 (1.13) | 0.38 (1.83) | -1.14 (0.59) | -1.07 (1.58)
6  20  5  | -14.39 (0.44) | -1.22 (0.51) | -1.62 (1.03) | -1.13 (1.80)
6  20  10 | -14.12 (0.40) | -1.85 (0.34) | -1.34 (1.25) | -1.27 (1.84)
6  20  15 | -15.00 (0.31) | -0.59 (0.72) | -1.55 (0.73) | -0.77 (1.43)
6  20  20 | -14.20 (0.42) | 0.17 (1.58) | -1.34 (0.96) | -0.36 (1.79)
9  10  4  | -2.18 (0.38) | 2.32 (1.78) | 2.45 (1.82) | 1.28 (1.98)
9  10  6  | -2.52 (1.41) | 1.79 (0.61) | 1.71 (1.00) | 0.80 (2.03)
9  10  8  | -4.28 (0.90) | 1.79 (0.43) | 2.77 (0.23) | 0.80 (1.34)
9  10  10 | -2.92 (1.17) | 1.79 (1.65) | 2.22 (0.67) | 0.03 (1.63)
9  20  5  | -12.60 (0.38) | 0.94 (2.07) | 2.26 (1.61) | -1.61 (2.06)
9  20  10 | -13.09 (1.34) | 1.74 (1.04) | 1.94 (0.86) | -1.81 (1.91)
9  20  15 | -11.18 (0.47) | 2.53 (0.61) | 3.29 (0.75) | -2.62 (1.82)
9  20  20 | -12.98 (0.75) | 1.74 (1.85) | 2.33 (0.29) | -2.44 (1.24)

We extensively tested the approximation schemes for network topologies that commonly occur in practice. The results shown in Table 2 are representative of all values of N, k, and CV² tested. From this study, we conclude that: (i) the approximations work well for both types of node flow control; (ii) the approximations work well across all of the topologies tested; and (iii) even though the approximations for networks require more effort than those for serial systems (see [15]), the required computational effort is still minute compared with simulation.

The approximations work very well when the system has reasonable congestion and all or most of the buffer space is utilized at some point in time. This occurs in our study when k ≤ 10. For k > 10, the approximations tend to underestimate the average number of packets in the system and, to a lesser degree, the average node utilization. This is due to the modeling constructs for the return of kanbans to their corresponding bulletin boards after a packet leaves the network. The approximation averages the arrival rates of these kanbans back to the appropriate bulletin boards and, furthermore, assumes they are state-independent. By spreading the arrival rates equally over all possible states (especially when some states rarely occur, i.e., when some buffers are rarely used), the total arrival rate of kanbans to their bulletin board can be somewhat underestimated. As a result, the estimate of the number of packets allowed to wait at a node can be affected. Also, when the number of kanbans in a BCQN is small (as in the case of modeling the links), especially when k = 1, the assumption that the arrival processes are state-dependent Markovian processes does not hold. Except in these few special cases, the approximations perform very well in the general class of networks tested.

In this analysis, we have chosen to compare the performance of the proposed algorithms with results of simulations incorporating actual system parameters. Hence the deviations of the approximations from the simulation results are the real deviations from actual system performance. Given the performance results summarized in Table 2, we also conclude that the approximation framework yields reliable and accurate results at almost negligible computational effort for practical systems of reasonable size. Although it would be illustrative to compare the proposed approach with other approximations in the literature, we are not aware of any approach for approximating telecommunication networks with finite buffer space; most of the literature on telecommunication network approximation focuses on unrestricted buffer space configurations.

3.2. Study of systems with leaky bucket congestion control

The same experimental design as for the systems with no network flow control is used in this case as well, with the addition of the following two parameters: the maximum number of unused tokens allowed in the bucket at once (K_T), and the arrival rate of tokens into the bucket (λ_T). Each parameter is varied at two levels in each instance of the other parameter combinations. These levels yield different flow rates within a network, as determined by K_T and λ_T. The results of the leaky bucket studies are summarized for representative configurations in Table 3. Similar results have been obtained in the rest of the trials. The leaky bucket results are comparable to those of systems with no congestion control. As in the systems with no congestion control, the average number of packets in the system and the average node utilization tend to be underestimated when traffic in the system is light.
Again, the average CPU time for the leaky bucket approximation is minimal compared to that of simulation.

3.3. Study of systems with isarithmic congestion control

The same experimental design as for the systems with no congestion control is used in this case as well, with the addition of one parameter, the number of permits (K_I), which limits the number of packets allowed in the network at one time. This parameter is varied at two levels in each instance of the other parameter combinations. The results of the isarithmic studies are summarized for representative configurations in Table 4. Similar results have been obtained in the rest of the trials. The isarithmic results are comparable to those of systems with no congestion control, or systems that use the leaky bucket scheme. The average CPU time for an isarithmic approximation is 1.76 CPU seconds. There is a slight increase in processing time due to the additional loop in the isarithmic approximation, which updates the arrival rates of permits freed by packets exiting the network and encapsulates the algorithm for a system without congestion control. Compared to the simulation runs, the isarithmic approximation run time is still minimal.

The average performance of the approximation for the average number of packets in the network (WIP), the average throughput (THPT), the average link utilization (Link UTIL), and the average node utilization (Node UTIL) is given in Table 5 for the three types of network flow control and the two types of node flow control tested. The figures in Table 5 are the average deviations from the simulation results, computed by averaging the absolute values of each performance parameter's deviation from the simulation over all runs studied. The approximation does very well, except when measuring WIP in systems with minimal congestion, as discussed earlier.
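The outer loop of the isarithmic approximation, in which the no-congestion-control algorithm is encapsulated, can be sketched as a skeleton. Both `toy_effective` and `toy_network` below are toy stand-ins for the paper's Markov-chain and decomposition solvers, used only to make the skeleton runnable:

```python
def isarithmic_fixed_point(lam0, K_I, effective_rates, solve_network,
                           eps=1e-3, max_iter=500):
    """Steps 0-3: iterate until the departure rates stabilize within eps."""
    lam_out = list(lam0)                                   # Step 0: initial guess
    for _ in range(max_iter):
        eff = effective_rates(lam0, K_I, sum(lam_out))     # Step 1: leaky bucket, K_T = K_I
        new_out = solve_network(eff)                       # Step 2: network solve
        if max(abs(a - b) for a, b in zip(new_out, lam_out)) < eps:
            return new_out                                 # Step 3: converged
        lam_out = new_out
    return lam_out

# Toy stand-ins for illustration only:
def toy_effective(lam0, K_I, lam_T):
    total = sum(lam0)
    nu0 = min(total, lam_T + 0.1 * K_I)    # crude admission throughput
    return [nu0 * l / total for l in lam0]

def toy_network(eff):
    return [0.9 * e for e in eff]          # assume 10% of admitted packets blocked

rates = isarithmic_fixed_point([0.6, 1.4], 1, toy_effective, toy_network)
```

With these stand-ins the update is a contraction, so the loop settles to a fixed point well within the iteration budget.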


Table 3
Results for leaky bucket network congestion control (% difference = approximation vs. simulation; CI = 95% simulation confidence interval, %)

Chain topology
N  k  b  K_T λ_T | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
5  10  4  5  1.5 | -5.82 (2.55) | -0.28 (0.58) | 2.32 (1.81) | 1.88 (0.82)
5  10  4  10 1.5 | -5.55 (1.38) | 0.01 (2.36) | 3.05 (1.85) | 1.88 (3.37)
5  10  4  5  1.8 | -5.28 (2.12) | 0.50 (2.41) | 0.19 (3.52) | 1.08 (2.47)
5  10  4  10 1.8 | -5.82 (1.54) | -0.29 (2.64) | -0.84 (2.71) | -0.26 (1.05)
5  10  10 5  1.5 | -5.48 (2.45) | -0.44 (3.72) | 2.03 (1.49) | 1.58 (1.13)
5  10  10 10 1.5 | -5.89 (3.19) | -0.67 (4.23) | 2.54 (3.67) | 0.48 (1.62)
5  10  10 5  1.8 | -8.26 (0.95) | -0.78 (2.70) | -1.46 (0.64) | -1.11 (4.35)
5  10  10 10 1.8 | -6.05 (3.09) | 0.16 (3.35) | -0.77 (0.97) | -0.24 (1.25)
10 20  5  5  1.5 | -10.02 (1.30) | 1.70 (0.73) | -1.18 (2.32) | -0.91 (3.45)
10 20  5  10 1.5 | -9.76 (2.08) | 0.14 (1.25) | -1.18 (1.65) | -1.08 (0.59)
10 20  5  5  1.8 | -10.25 (3.00) | -0.44 (3.56) | -1.96 (2.97) | -1.93 (1.71)
10 20  5  10 1.8 | -9.65 (1.35) | -0.60 (2.60) | -0.86 (3.60) | -0.70 (3.39)
10 20  20 5  1.5 | -8.51 (2.46) | -0.41 (2.40) | -1.21 (1.29) | -0.91 (1.81)
10 20  20 10 1.5 | -8.28 (1.54) | 0.73 (1.78) | 0.13 (3.57) | -0.20 (4.17)
10 20  20 5  1.8 | -12.40 (2.04) | -0.55 (3.39) | -3.16 (1.37) | -2.54 (0.60)
10 20  20 10 1.8 | -14.28 (1.61) | -1.08 (2.61) | -2.81 (2.89) | -2.61 (3.55)

Ring topology
N  k  b  K_T λ_T | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
5  10  4  5  1.5 | -6.05 (3.01) | 0.23 (3.51) | 0.15 (3.10) | 0.47 (2.01)
5  10  4  10 1.5 | -5.91 (2.71) | 0.55 (2.46) | 0.27 (3.41) | 1.05 (1.31)
5  10  4  5  1.8 | -2.17 (1.99) | -0.22 (0.97) | -0.42 (2.78) | -0.78 (0.62)
5  10  4  10 1.8 | -2.10 (2.29) | -0.16 (2.17) | -0.36 (2.56) | -0.72 (0.97)
5  10  10 5  1.5 | -6.98 (0.39) | -0.11 (4.01) | -0.18 (3.49) | 0.16 (3.73)
5  10  10 10 1.5 | -7.29 (2.58) | -0.41 (2.64) | -0.63 (1.68) | -0.11 (0.66)
5  10  10 5  1.8 | -0.56 (0.89) | 0.73 (3.82) | 0.70 (2.42) | 0.36 (2.95)
5  10  10 10 1.8 | -0.48 (3.24) | 0.79 (4.01) | 0.74 (3.02) | 0.41 (3.60)
10 20  5  5  2.1 | -10.55 (1.38) | -0.81 (4.28) | 0.04 (1.26) | -0.52 (2.41)
10 20  5  10 2.1 | -9.73 (1.99) | -0.34 (3.08) | 0.72 (1.42) | 0.13 (4.28)
10 20  5  5  2.4 | -4.79 (0.80) | -0.50 (0.66) | 0.57 (2.42) | -0.48 (3.16)
10 20  5  10 2.4 | -3.75 (3.18) | -0.44 (2.54) | 1.42 (3.13) | 0.61 (3.70)
10 20  20 5  2.1 | -10.29 (0.39) | -0.61 (4.12) | -0.04 (3.62) | -0.90 (4.05)
10 20  20 10 2.1 | -8.70 (3.12) | 0.27 (2.82) | 0.79 (0.83) | 0.53 (2.74)
10 20  20 5  2.4 | -3.69 (2.32) | 0.40 (0.99) | 1.73 (3.36) | 0.85 (4.05)
10 20  20 10 2.4 | -3.26 (0.33) | 0.45 (3.59) | 2.49 (0.92) | 0.86 (1.26)

Star topology
N  k  b  K_T λ_T | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
5  10  4  5  1.1 | -7.48 (0.38) | -0.90 (3.20) | -1.43 (3.50) | -2.57 (2.75)
5  10  4  10 1.1 | -8.74 (1.27) | -1.74 (2.19) | -2.35 (2.09) | -2.44 (0.83)
5  10  4  5  1.3 | 0.93 (1.01) | -0.83 (3.74) | -1.74 (2.50) | -2.08 (2.72)
5  10  4  10 1.3 | 1.15 (2.69) | -0.66 (4.53) | -1.58 (2.48) | -2.47 (2.14)
5  10  10 5  1.1 | -7.11 (1.75) | -1.67 (2.73) | -2.31 (1.77) | -2.57 (2.22)
5  10  10 10 1.1 | -6.80 (3.24) | -0.20 (4.24) | -1.38 (3.71) | -0.15 (1.00)
5  10  10 5  1.3 | 2.47 (0.46) | 0.12 (3.32) | -0.91 (2.69) | -0.68 (0.53)
5  10  10 10 1.3 | 2.70 (2.27) | 0.29 (1.00) | -0.75 (1.08) | -1.08 (1.09)
10 20  5  5  2.4 | -8.40 (0.38) | -1.16 (3.46) | -1.21 (2.38) | -1.24 (4.31)
10 20  5  10 2.4 | -8.40 (2.18) | -0.99 (1.23) | -1.04 (0.47) | -1.06 (0.51)
10 20  5  5  2.7 | -8.47 (1.49) | -1.00 (3.10) | -1.65 (1.96) | -1.40 (1.46)
10 20  5  10 2.7 | -6.60 (0.47) | -0.97 (2.39) | -1.32 (3.54) | -1.61 (2.34)
10 20  20 5  2.4 | -8.72 (2.21) | -0.92 (2.76) | -1.05 (3.08) | -1.42 (1.38)
10 20  20 10 2.4 | -7.28 (1.99) | -0.03 (1.30) | -0.26 (0.34) | -0.36 (3.18)
10 20  20 5  2.7 | -7.60 (3.27) | -0.75 (3.54) | -1.38 (2.92) | -1.24 (1.75)
10 20  20 10 2.7 | -6.36 (0.87) | -0.71 (4.04) | -1.06 (1.69) | -1.28 (0.98)

Lattice topology
N  k  b  K_T λ_T | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
6  10  4  5  4.0 | -10.46 (1.47) | -0.31 (1.29) | -2.38 (0.93) | -1.37 (3.16)
6  10  4  10 4.0 | -7.36 (1.82) | -0.63 (0.44) | -1.83 (3.10) | -1.81 (1.00)
6  10  4  5  4.3 | -7.92 (2.33) | -0.87 (4.16) | -2.04 (2.91) | -1.42 (1.76)
6  10  4  10 4.3 | -8.18 (0.37) | -0.75 (2.91) | -2.24 (0.78) | -1.41 (3.83)
6  10  10 5  4.0 | -10.81 (2.88) | -1.11 (1.18) | -2.70 (1.85) | -2.81 (4.36)
6  10  10 10 4.0 | -8.52 (2.68) | 0.17 (3.68) | -1.34 (1.39) | -1.14 (2.84)
6  10  10 5  4.3 | -6.21 (3.12) | -0.46 (4.37) | -1.33 (1.20) | -1.20 (1.10)
6  10  10 10 4.3 | -7.05 (2.02) | -0.33 (2.42) | -1.29 (1.34) | -0.88 (1.67)
9  20  5  5  7.6 | -4.43 (1.58) | -0.25 (3.99) | 1.34 (1.25) | -1.91 (3.64)
9  20  5  10 7.6 | -4.10 (0.59) | -1.00 (2.12) | -2.59 (2.76) | -3.40 (1.10)
9  20  5  5  7.9 | -5.71 (1.20) | -0.67 (2.02) | -1.08 (3.38) | -0.85 (2.51)
9  20  5  10 7.9 | -4.55 (3.11) | -1.44 (3.97) | -0.92 (2.46) | -1.87 (0.90)
9  20  20 5  7.6 | -7.31 (2.07) | -0.43 (2.24) | -0.18 (3.52) | -4.04 (3.78)
9  20  20 10 7.6 | -6.91 (1.52) | 0.33 (0.58) | 0.18 (2.05) | -4.23 (4.16)
9  20  20 5  7.9 | -7.48 (1.64) | 0.05 (3.15) | 0.98 (1.11) | -2.38 (3.40)
9  20  20 10 7.9 | -6.92 (1.80) | -0.72 (3.82) | -0.18 (2.22) | -2.88 (1.87)

4. Conclusion

In this paper, we extended a decomposition scheme for approximating the performance of serial manufacturing systems to telecommunication networks. The intrinsic complexities of such systems have attracted considerable attention in the literature on both approximation methods and simulation techniques. Much of the approximation literature concentrates on state-independent arrival and service rates and infinite buffer space at the switching nodes. The approximation technique developed in this paper considers state-dependent arrival and service rates and finite buffer space, allowing a more detailed description of realistically constrained networks. Vroblefski et al. [15] present a unified framework for open and closed serial queueing systems under any general blocking protocol. We extend their work to any general network and allow input and output at any node. We also generalize their work by modeling the transportation between nodes as a server with a general service time distribution. Using the same approximation framework, we develop two node flow control mechanisms. We illustrate the robustness of this method by also developing approximations for two network congestion control schemes. The proposed algorithms have been extensively tested. Our results indicate that the proposed network approximations yield robust, reliable, and accurate estimates of system characteristics such as throughput, WIP, average node utilization, and average link utilization over a wide range of system configurations.


Table 4
Results for isarithmic network congestion control (% difference = approximation vs. simulation; CI = 95% simulation confidence interval, %; node UTIL columns reproduced in the order extracted)

Chain topology
N  k  b  K_I | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
5  10  4  5  | -0.85 (0.77) | -0.10 (4.63) | -0.78 (3.10) | -2.59 (-0.08)
5  10  4  10 | -0.78 (2.03) | 0.75 (3.89) | 0.50 (1.14) | 1.55 (0.32)
5  10  6  5  | -1.83 (0.78) | 0.21 (1.76) | -0.75 (2.45) | -2.90 (-0.32)
5  10  6  10 | -2.28 (1.49) | -0.30 (0.58) | -0.81 (2.09) | -5.22 (-0.72)
5  10  8  5  | -1.71 (2.40) | 0.13 (4.06) | -0.60 (1.15) | 2.28 (0.40)
5  10  8  10 | -0.99 (1.17) | 0.30 (2.92) | -0.60 (1.19) | -1.64 (0.61)
5  10  10 5  | -1.70 (1.22) | 0.13 (1.54) | -0.61 (3.52) | 2.65 (0.41)
5  10  10 10 | -0.99 (0.75) | 0.30 (4.07) | -0.60 (2.02) | 3.26 (0.61)
10 20  5  5  | -7.85 (1.45) | -0.24 (3.33) | -0.77 (2.99) | -6.15 (-4.02)
10 20  5  10 | -14.41 (3.15) | 0.29 (2.90) | -2.13 (1.74) | -2.23 (-4.93)
10 20  10 5  | -11.78 (2.04) | 0.15 (1.18) | -1.93 (2.11) | 5.32 (-4.21)
10 20  10 10 | -11.48 (1.96) | 0.71 (1.34) | -0.55 (1.46) | -6.36 (-5.38)
10 20  15 5  | -11.19 (1.25) | -0.47 (4.28) | -2.00 (3.69) | -1.88 (-6.44)
10 20  15 10 | -11.02 (1.60) | 0.79 (0.82) | -0.79 (2.67) | 0.05 (-6.62)
10 20  20 5  | -12.92 (2.55) | -0.52 (4.05) | -1.92 (2.23) | -4.01 (-6.27)
10 20  20 10 | -12.76 (0.32) | 1.36 (0.91) | -0.63 (0.88) | 3.13 (-6.09)

Ring topology
N  k  b  K_I | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
5  10  4  5  | -0.91 (0.63) | -0.21 (2.98) | -0.52 (3.47) | 3.51 (2.34)
5  10  4  10 | -3.89 (2.54) | 1.73 (2.35) | -2.44 (3.32) | 5.17 (-0.79)
5  10  6  5  | -2.80 (0.68) | 0.41 (1.38) | -0.13 (1.08) | 2.40 (-1.66)
5  10  6  10 | -2.12 (3.06) | -0.14 (0.99) | 0.55 (0.47) | 3.84 (0.00)
5  10  8  5  | -0.39 (3.07) | -2.57 (3.85) | -1.55 (2.51) | 1.79 (-3.32)
5  10  8  10 | -2.13 (0.91) | -2.26 (3.29) | -1.48 (2.23) | 4.32 (0.67)
5  10  10 5  | -1.19 (2.69) | -1.99 (0.82) | -2.38 (1.90) | 0.43 (-2.98)
5  10  10 10 | -2.69 (2.38) | -2.33 (1.89) | -2.08 (1.81) | -4.81 (-2.40)
10 20  5  5  | -4.00 (2.31) | 1.35 (3.30) | 1.00 (3.76) | -2.75 (-1.16)
10 20  5  10 | -5.04 (0.38) | 0.49 (3.13) | 0.69 (1.79) | 4.12 (-1.73)
10 20  10 5  | -2.69 (0.94) | -3.09 (4.17) | 0.75 (1.39) | 5.23 (-1.85)
10 20  10 10 | -5.51 (1.29) | 1.26 (1.79) | 0.41 (0.87) | -0.29 (-0.85)
10 20  15 5  | -2.66 (1.24) | -0.69 (3.40) | 1.25 (2.65) | -2.81 (-1.50)
10 20  15 10 | -5.87 (1.10) | -1.55 (2.55) | 0.46 (3.48) | 2.79 (-2.30)
10 20  20 5  | -2.85 (1.99) | -1.83 (1.09) | 0.82 (0.30) | 2.31 (-0.62)
10 20  20 10 | -5.67 (2.65) | -3.42 (0.86) | 0.12 (1.22) | -1.34 (-2.20)

Star topology
N  k  b  K_I | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
5  10  4  5  | 2.56 (1.04) | -1.59 (2.26) | -1.70 (2.70) | -5.74 (-2.87)
5  10  4  10 | -0.59 (2.75) | -3.08 (1.36) | -3.80 (0.60) | 0.81 (-4.90)
5  10  6  5  | 2.56 (2.28) | -1.59 (3.48) | -1.70 (2.85) | 0.46 (-2.87)
5  10  6  10 | -1.20 (2.39) | -2.97 (3.22) | -3.88 (0.47) | -3.16 (-4.90)
5  10  8  5  | -1.69 (1.83) | -1.59 (4.39) | -1.70 (1.06) | 4.57 (-2.87)
5  10  8  10 | 0.38 (1.64) | -0.97 (2.52) | -1.76 (2.43) | -5.71 (-1.81)
5  10  10 5  | -1.69 (2.84) | -1.59 (1.14) | -1.70 (0.63) | 3.82 (-2.87)
5  10  10 10 | 0.38 (0.43) | -0.97 (2.68) | -1.76 (0.65) | 5.17 (-1.81)
10 20  5  5  | -3.27 (2.12) | 1.16 (1.60) | 0.81 (3.31) | 1.40 (0.63)
10 20  5  10 | -3.99 (1.37) | -1.73 (2.13) | -1.90 (1.49) | 1.37 (-2.04)
10 20  10 5  | -5.12 (2.40) | 0.64 (2.30) | 0.81 (1.42) | -5.04 (0.63)
10 20  10 10 | -6.19 (2.93) | -1.47 (1.32) | -3.02 (3.32) | -6.26 (-3.79)
10 20  15 5  | -4.40 (3.00) | 0.38 (2.15) | -0.91 (2.62) | 3.33 (-0.20)
10 20  15 10 | -6.19 (1.16) | -2.74 (3.71) | -2.47 (1.45) | 2.67 (-4.10)
10 20  20 5  | -3.98 (1.35) | 2.46 (3.36) | 0.81 (3.64) | -0.41 (0.63)
10 20  20 10 | -6.19 (1.67) | -2.74 (2.60) | -3.57 (2.67) | 4.24 (-4.41)

Lattice topology
N  k  b  K_I | WIP % diff (CI) | THPT % diff (CI) | Link UTIL % diff (CI) | Node UTIL % diff (CI)
6  10  4  5  | -3.94 (2.45) | 0.00 (4.52) | -0.85 (3.77) | -4.30 (-0.76)
6  10  4  10 | -6.86 (3.20) | -2.46 (1.12) | -1.60 (0.35) | -6.27 (-1.08)
6  10  6  5  | -3.85 (0.96) | 0.42 (3.04) | -0.73 (2.21) | -3.54 (-0.87)
6  10  6  10 | -5.50 (2.36) | -2.46 (3.69) | -1.48 (0.51) | 2.64 (-1.30)
6  10  8  5  | -5.94 (2.20) | -0.42 (1.20) | -2.22 (0.89) | 4.88 (-2.77)
6  10  8  10 | -4.60 (3.26) | -1.64 (1.95) | 0.00 (1.30) | 3.93 (-2.14)
6  10  10 5  | -3.94 (2.12) | 0.00 (3.16) | -0.85 (3.62) | 2.66 (-0.76)
6  10  10 10 | -5.42 (0.52) | -2.05 (4.45) | -0.48 (0.36) | -6.07 (-0.44)
9  20  5  5  | -10.58 (2.56) | 3.66 (2.12) | 2.91 (2.25) | 4.36 (-1.53)
9  20  5  10 | -9.11 (0.80) | 0.88 (4.29) | 0.79 (1.30) | 5.94 (-0.88)
9  20  10 5  | -10.60 (2.69) | 3.66 (1.58) | 2.87 (0.67) | 0.51 (-1.92)
9  20  10 10 | -9.94 (2.79) | -2.25 (2.79) | -2.72 (1.62) | -0.54 (-4.13)
9  20  15 5  | -10.52 (0.85) | 3.14 (0.56) | 2.91 (1.81) | -3.14 (-2.70)
9  20  15 10 | -9.43 (1.16) | -0.69 (2.52) | -1.40 (1.30) | 3.53 (-2.69)
9  20  20 10 | -9.68 (2.02) | -0.69 (0.44) | -1.45 (3.36) | 1.94 (-2.50)
Table 5
Average performance of approximations (average absolute % deviation from simulation)

Network flow control | Node flow control | WIP | THPT | Link UTIL | Node UTIL
None | Minimal blocking | 4.72 | 0.775 | 1.26 | 1.97
None | Generalized minimal | 4.85 | 0.792 | 1.35 | 2.13
Leaky bucket | Minimal blocking | 6.45 | 0.587 | 1.24 | 2.23
Leaky bucket | Generalized minimal | 6.73 | 0.599 | 1.33 | 2.36
Isarithmic | Minimal blocking | 7.05 | 1.31 | 1.36 | 3.22
Isarithmic | Generalized minimal | 7.17 | 1.44 | 1.57 | 3.39
Furthermore, the computational effort involved is minimal. The performance of the algorithms indicates the attractiveness of the proposed approximation methods for the analysis of complex telecommunication networks, especially compared with the popular and widely used simulation alternative. The proposed network extensions, along with the unified framework presented in [15], constitute a set of highly useful analysis tools for queueing system designers, whether their systems are kanban based or not. A major concern in the design of telecommunication systems is ensuring the smooth flow of data packets through a network by avoiding catastrophic congestion. This can be achieved by varying the buffer space at switching nodes and/or incorporating some network congestion control method. While simulation is mostly


used for this purpose, it is often expensive and hence restricts the number of design alternatives a designer may wish to evaluate. With other design issues such as topology and media access, network designers face an exponentially increasing array of design options, underscoring the need for quick and accurate assessments of telecommunication network configurations. The proposed approximation framework is developed specifically for this purpose. There are several possible avenues of future research. When the traffic in a system is extremely sparse, the approximations tend to underestimate the average WIP and average node utilization. The reasons for this are twofold: (1) large state-dependent service rates arise when some buffer space is rarely used, and (2) we assume the arrival rates of kanbans to their bulletin boards are state-independent. The development of new approximation modeling constructs for systems with such characteristics is an important area for future research. Next, the above analyses can be extended to allow for data with different priorities. For instance, voice and video data that are very sensitive to latency are usually given higher priority than data from applications that do not require real-time delivery, such as e-mail. With the explosion of real-time applications, including collaborative software, and changes in protocols that now provide guarantees for such data, this extension is vital. Many other flow control mechanisms and network intricacies can be modeled and added to the set of tools developed here. Finally, the current subsystem analyses can be extended to allow any general blocking node flow control regardless of the network topology. While these research extensions are theoretically attractive, they are also practical and crucial in modeling complex telecommunication networks. We are currently investigating some of these ideas.

References

[1] S.C. Bruell, G. Balbo, Computational Algorithms for Closed Queueing Networks, Elsevier North-Holland, Amsterdam, 1980.
[2] J.P. Buzen, Computational algorithms for closed queueing networks with exponential servers, Communications of the ACM 16 (9) (1973) 527-531.
[3] M. Di Mascolo, Y. Frein, Y. Dallery, An analytic method for performance evaluation of kanban controlled production systems, Operations Research 44 (1996) 50-64.
[4] G. Hasslinger, Towards an analytical tool for performance modelling of ATM networks by decomposition, in: Computer Performance Evaluation: Modelling Techniques and Tools, 9th International Conference, 1997, pp. 83-96.
[5] G. Hasslinger, E.S. Rieger, Analysis of open discrete time queueing networks: A refined decomposition approach, Journal of the Operational Research Society 47 (1996) 640-653.
[6] P.J. Kuehn, Approximate analysis of general queueing networks by decomposition, IEEE Transactions on Communications COM-27 (1) (1979) 113-126.
[7] R. Marie, Calculating equilibrium probabilities for λ(n)/C_k/1/N queues, Performance Evaluation Review 9 (1980) 117-125.
[8] R. Melen, J.S. Turner, Nonblocking multirate distribution networks, IEEE Transactions on Communications 41 (2) (1993) 362-369.
[9] B. Pourbabai, Tandem behavior of a telecommunication system with repeated calls: A Markovian case with buffers, Journal of the Operational Research Society 40 (7) (1989) 671-680.
[10] R. Ramesh, S.Y. Prasad, M. Thirumurthy, Flow control in kanban multicell manufacturing: I. A structured analysis of general blocking design space, International Journal of Production Research 35 (8) (1997) 2327-2342.
[11] R. Ramesh, S.Y. Prasad, M. Thirumurthy, Flow control in kanban multicell manufacturing: II. Design of control systems and experimental results, International Journal of Production Research 35 (9) (1997) 2413-2427.
[12] S. Shoemaker, Computer Networks and Simulation II, North-Holland Publishing Company, 1982.
[13] W. Stallings, Data and Computer Communications, fifth ed., Prentice-Hall, 1997.
[14] W. Stallings, Wireless Communications and Networks, Prentice-Hall, 2002.
[15] M. Vroblefski, R. Ramesh, S. Zionts, General open and closed queueing networks with blocking: A unified framework for approximation, INFORMS Journal on Computing 12 (4) (2000) 299-316.
[16] W. Whitt, The queueing network analyzer, Bell System Technical Journal 62 (9) (1983) 2779-2815.
[17] W. Whitt, Performance of the queueing network analyzer, Bell System Technical Journal 62 (9) (1983) 2817-2843.