Efficient simulation of QoS in ATM switches using connection traffic descriptors


Performance Evaluation 38 (1999) 105–132

Efficient simulation of QoS in ATM switches using connection traffic descriptors 夽

Ahmet A. Akyamaç a,1,∗, James A. Freebersyser b,2, J. Keith Townsend a

a Center for Advanced Computing and Communication, Department of Electrical and Computer Engineering, North Carolina State University, Box 7914, Raleigh, NC 27695-7914, USA
b Office of Naval Research, 800 N. Quincy St., Arlington, VA 22217-5660, USA

Received 4 August 1998; received in revised form 29 March 1999

Abstract

High speed networks using asynchronous transfer mode (ATM) will be able to carry a broad range of traffic classes and will be required to provide QoS measures, such as the cell loss and cell delay probabilities, to many of these traffic classes. The design and testing of ATM networks and the algorithms that perform connection admission control is difficult due to the rare event nature associated with QoS measures, and the unwieldiness of matching statistical models of the broad range of traffic classes entering the network to the connection traffic descriptors used by the connection admission control algorithms. In this paper, as an alternative to using statistical traffic models, we describe the traffic entering the network by the connection traffic descriptors standardized by the ATM Forum and used by the connection admission control algorithms. We present a Monte Carlo simulation model for estimating the cell loss and cell delay probabilities using a multinomial formulation to remove the correlation associated with estimating bursty events. We develop importance sampling techniques to increase the efficiency of the simulation for ATM networks with heterogeneous input traffic classes, namely constant bit rate and variable bit rate traffic. For the experimental examples considered here, the improvement in simulation efficiency compared to conventional Monte Carlo simulation is inversely proportional to the probability estimate. The efficient simulation methods developed here are suitable for the design and testing of the switches and connection admission control algorithms planned for use in ATM networks. ©1999 Elsevier Science B.V. All rights reserved.

Keywords: ATM; Efficient simulation; Rare event; Quality of service; Cell loss; Delay

夽 Portions of this paper were presented at the IEEE Global Telecommunications Conference, GLOBECOM '97 and '96, and the IEEE International Conference on Communications, ICC '96.
∗ Corresponding author. Tel.: +1-919-513-2010; fax: +1-919-515-2285. E-mail address: [email protected] (A. A. Akyamaç).
1 Supported by the Center for Advanced Computing and Communication, North Carolina State University, Raleigh, NC.
2 Supported by the US Air Force Palace Knight program, Air Force Research Laboratory, Rome, NY.

0166-5316/99/$ – see front matter ©1999 Elsevier Science B.V. All rights reserved. PII: S0166-5316(99)00040-1


1. Introduction

High speed networks using asynchronous transfer mode (ATM) will be able to carry a broad range of traffic classes and will also have to provide a quality of service (QoS) measure to many of the traffic classes. Typical QoS measures are cell loss probability (CLP) and cell transfer delay (CTD), which includes queueing delays and service times. One of the many problems facing the designers of ATM networks is how to guarantee that each customer will receive the QoS negotiated when the connection is established. Simple, approximate connection admission control (CAC) algorithms must be used in actual ATM networks to perform, in real time, the negotiation and decision functions. However, some method of testing the design of ATM switches and the CAC algorithms is necessary to verify that the switch parameters and resulting CAC decisions are correct, or at least conservative, since the network must guarantee the QoS. Closed form solutions for QoS measures such as the cell loss and cell delay probabilities are often unavailable by analytical methods for realistic network conditions because of the high dimensionality that results from the complexity of the ATM switches, in particular, the large buffer sizes, and the variety of multimedia traffic expected in ATM networks. Monte Carlo (MC) simulation can be used to obtain accurate estimates of performance measures of complex ATM networks, but is too slow to be used for CAC in real time. Even in nonreal time environments such as CAC algorithm testing, MC simulation can be too slow for estimating the rare events associated with the QoS measures in ATM networks. For example, the CLP and the probability that the cell delay exceeds a given high threshold (delay threshold probability) are expected to be in the range 10^−9 to 10^−12. Importance sampling (IS) is a technique that, under the proper conditions, can speed up simulations involving rare events of network (queueing) systems (see, for example, [1–5]).
By carefully choosing the modification or bias of the underlying probability measures, large speed-up factors in simulation run time can be obtained. IS allows prior knowledge of the system to be used to speed up the simulation. Most commonly, statistical traffic models have been used in the analysis and simulation of ATM switches and networks. However, it is difficult to establish a relationship between the parameters of the statistical models of the ATM traffic and the connection traffic descriptors used by the CAC algorithm [6]. Rather than using statistical models to characterize the input traffic, we use the connection traffic descriptors standardized by the ATM Forum [7]. This approach has been called the operational approach in [8]. An ad hoc IS method using the operational approach to simulate an N*D/D/1 system was presented in [9]. Several studies have attempted to determine the cell loss probability based on various traffic descriptors [10–16]. All of the analysis based studies for determining the cell loss probability using the operational approach have drawbacks: either low cell loss probabilities were not obtained, or restrictions were placed on the system parameters in order to arrive at approximations or upper bounds of the cell loss probability, and only [12] considered heterogeneous connection traffic descriptors. In [17], a simulation method based on the operational approach was presented, but it did not provide any results for low cell loss probabilities. Traffic descriptors were also used in [18] for cell delay probabilities, but low probabilities were not obtained. The ATM Forum traffic descriptors are used for the VBR sources to generate cell delay results in [19], where the service time is assumed to be equal to the transmission time of a cell. An IS method using the operational approach was presented in [20], which used fixed MPEG-encoded film sequences sorted by decreasing frame size.
In [21,22], we developed and demonstrated efficient rare event simulation techniques using the operational approach for estimating the cell loss probability and delay threshold probability, respectively, for single-stage ATM switches with homogeneous input traffic. These techniques used IS as a means of efficient simulation.


In this paper, we present an efficient simulation model for estimating the cell loss and cell delay probabilities resulting from a single-stage ATM switch with heterogeneous input traffic, namely constant bit rate (CBR) and variable bit rate (VBR) traffic. Since the probabilities involved are very low, we use IS as a means of generating efficient simulations. Our model places no restriction on the input traffic other than that it must be usage parameter control (UPC) [7] compliant, nor on the service rate other than that it must be able to accommodate the incoming traffic. This paper is organized as follows. In Section 2, we give a description of the slotted-time system model and the ATM Forum standardized connection traffic descriptors used to describe the input traffic in our simulations. In Section 3, we demonstrate that an exhaustive solution to finding the delay threshold probability quickly becomes intractable for realistic systems. We then describe an MC method for estimating the cell loss and cell delay probabilities using a multinomial formulation to remove the correlation associated with estimating bursty events. The IS method we use to speed up our simulations is described in Section 4. We present experimental results in Section 5 and observe that the improvement we obtain using IS instead of conventional MC is inversely proportional to the probability being estimated. We give conclusions in Section 6.

2. System description

The model we use for the ATM switch is shown in Fig. 1. The switch has N_P input ports and N_P output ports, and we assume the number of connections is N_C = N_P.
We characterize a connection requesting admission into the ATM switch by the following connection traffic descriptors standardized by the ATM Forum User-Network Interface (UNI) Specification [7]: λ̂, the peak cell rate (PCR) in bits per second (bps); λ̄, the sustained (average) cell rate (SCR) in bps; and B̂, the maximum burst length in cells at the peak cell rate. The (λ̂, λ̄, B̂)-triplet contains optional connection traffic descriptors which can be used to derive the remaining required connection traffic descriptors, specifically the cell delay variation and burst tolerance, that define the traffic contract at the output of the UPC device at the UNI [7]. Each connection with a given QoS, which may be an individual source or multiplexed sources, negotiates with the CAC algorithm for admission into the network using the (λ̂, λ̄, B̂)-triplet to describe the connection traffic.
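Such a triplet is naturally carried as a small record; a minimal sketch (the class and field names are ours, not from the UNI specification):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrafficDescriptor:
    """ATM Forum UNI connection traffic descriptor triplet."""
    pcr: float  # peak cell rate (PCR), bps
    scr: float  # sustained (average) cell rate (SCR), bps
    mbs: int    # maximum burst length at the peak rate, cells

    def __post_init__(self):
        # SCR can never exceed PCR, and a burst is at least one cell.
        assert 0 < self.scr <= self.pcr and self.mbs >= 1

cbr = TrafficDescriptor(pcr=2e6, scr=2e6, mbs=1)  # CBR: PCR = SCR, B = 1
vbr = TrafficDescriptor(pcr=8e6, scr=2e6, mbs=4)  # a VBR connection
```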

Fig. 1. ATM switch model.


We assume there are three types of traffic that can enter the switch: CBR traffic with a triplet (λ̂_CBR, λ̄_CBR, B̂_CBR), where λ̂_CBR = λ̄_CBR and B̂_CBR = 1, and VBR traffic with a triplet (λ̂_VBR, λ̄_VBR, B̂_VBR). We assume, in the case of multiple classes of VBR traffic, that there is a dominant VBR class with a triplet (λ̂_DOM, λ̄_DOM, B̂_DOM) such that either λ̂_DOM > λ̂, or B̂_DOM > B̂ if λ̂_DOM = λ̂, for the other VBR classes. If there are multiple VBR classes for which λ̂ = λ̂_DOM and B̂ = B̂_DOM, the dominant class is the one for which λ̄_DOM > λ̄ of the other VBR classes. The other CBR and VBR classes are referred to as the nondominant classes. We also assume that λ̂_DOM = k λ̂_CBR and λ̂_DOM = l λ̂_VBR, where k, l ∈ Z+, k ≥ 2 and l ≥ 1. For the nondominant VBR classes for which λ̂ = λ̂_DOM and B̂ = B̂_DOM, we assume that λ̄_DOM = m λ̄, where m ∈ Z+, m ≥ 2. We assume that the dominant class is a VBR class, because if a CBR class were dominant, peak cell rate allocation would be sufficient, since most of the traffic load would be the CBR class. The combination of traffic classes considered will be either heterogeneous CBR/VBR or heterogeneous VBR/VBR. It is also an implicit assumption (due to the rare nature of the important events) that nondominant traffic alone (in the absence of dominant VBR traffic) cannot cause an important event. Each connection is routed to one of the output ports through a nonblocking shared bus. We model the long-term routing behavior by assuming that the output port is chosen with uniform probability over the N_P available output ports. Hence, we analyze a single tagged output buffer to represent the performance of all output buffers. The output buffers have a finite size of K cells, a service rate of µ̂ Mbps, and use a first-in first-out (FIFO) queueing discipline. The bus operates at a speed of N_P λ̂_DOM. We assume the routing is instantaneous in that it has negligible effect on the cell's end-to-end delay.
The CAC algorithm must use knowledge of the (λ̂, λ̄, B̂)-triplets of the new connection and the existing N_C connections, as well as the switch parameters (µ̂, K), to decide if a new connection should be allowed into the network at the requested QoS. We consider the worst case UPC-compliant VBR traffic to be the periodic greedy ON/OFF traffic pattern. 3 In the ON state, the source generates traffic at a peak rate of λ̂_VBR for a burst duration of B̂_VBR, and the OFF state is required to average out to the mean rate λ̄_VBR. The negotiated (λ̂, λ̄, B̂)-triplet is what the connection is paying for; therefore the network must guarantee the QoS assuming traffic at the worst-case level. Thus, the key feature of this simulation model is that the traffic source is derived from the standardized connection traffic descriptors planned for use in ATM networks [7], rather than a stochastic model for the traffic source. For our simulations, we use a slotted time model resulting from normalizing with respect to the peak rate of the dominant VBR sources, λ̂_DOM. Hence, one slot corresponds to the transmission of a 53 byte ATM cell at a rate of λ̂_DOM. With this normalization, the equivalent arrival rate of the dominant VBR sources is 1 cell/slot, whereas that of the other VBR sources is λ̂_VBR/λ̂_DOM = 1/l cells/slot and that of the CBR sources is λ̂_CBR/λ̂_DOM = 1/k cells/slot, where l and k were defined above. Both the VBR and the CBR sources are periodic, with the corresponding periods measured in slots:

T_DOM = λ̂_DOM B̂_DOM / λ̄_DOM,   T_VBR = λ̂_DOM B̂_VBR / λ̄_VBR,   T_CBR = λ̂_DOM B̂_CBR / λ̄_CBR = k λ̂_CBR / λ̄_CBR = k.   (1)
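Under this normalization the periods in Eq. (1) follow directly from the triplets; a small sketch using the Fig. 2 example values (rates in cells/s):

```python
def period_in_slots(pcr_dom, scr, mbs):
    # T = (PCR_DOM * B) / SCR, measured in slots of duration 1 / PCR_DOM.
    # Note the source's own PCR does not enter T, only the dominant PCR does.
    t = pcr_dom * mbs / scr
    assert t == int(t), "descriptors are assumed to yield an integer period"
    return int(t)

# Fig. 2 example: dominant VBR (8, 2, 4), nondominant VBR (4, 2, 2), CBR (2, 2, 1)
t_dom = period_in_slots(8, 2, 4)   # 8 * 4 / 2 = 16 slots
t_vbr = period_in_slots(8, 2, 2)   # 8 * 2 / 2 = 8 slots
t_cbr = period_in_slots(8, 2, 1)   # 8 * 1 / 2 = 4 slots = k
```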

3 There is some discussion (see [14,19,23] and the references therein) regarding the worst-case UPC compliant traffic pattern. In our analysis in [24], we argue that the ON/OFF pattern considered here is the worst-case in the operating region of interest and there is no compelling evidence to the contrary with the exception of conditions which are not representative of realistic operating conditions such as only two connections.


Fig. 2. Examples of traffic generated by one CBR and two VBR sources.

Fig. 3. Arrival slot and service slot relationship.

Since all VBR and CBR sources are periodic, the entire input traffic has a super-period of T = LCM(T_DOM, T_VBR, T_CBR), where LCM(·) is the least common multiple operator. We assume 4 that T_DOM = T. After normalization, the equivalent service rate is given by µ = µ̂/λ̂_DOM cells/slot. An example of the resulting slotted-time pattern is plotted in Fig. 2 for a dominant VBR source (solid) with λ̂_DOM = 8 cells/s, λ̄_DOM = 2 cells/s, B̂_DOM = 4 cells, a nondominant VBR source (gray) with λ̂_VBR = 4 cells/s, λ̄_VBR = 2 cells/s, B̂_VBR = 2 cells, and a CBR source (plaid) for which λ̄_CBR = λ̂_CBR = 2 cells/s and B̂_CBR = 1 cell, where all connections start at slot 0. The randomness in the simulation model is found in the connection starting-slot positions. Each of the N_DOM dominant VBR connections can be characterized by a discrete-time random variable, uniform on [0, T_DOM − 1], representing the starting-slot of the connection. Similarly, each of the N_VBR nondominant VBR connections can be characterized by a discrete-time random variable, uniform on [0, T_VBR − 1], and each of the N_CBR nondominant CBR connections can be characterized by a discrete-time random variable, uniform on [0, T_CBR − 1]. Here, N_C = N_DOM + N_CBR + N_VBR. We represent the starting-slots as an N_C-dimensional vector v, which can be partitioned into three vectors v_DOM, v_VBR and v_CBR representing the starting-slots of the dominant VBR and nondominant VBR and CBR connections, respectively, as follows:

v = (v_DOM | v_VBR | v_CBR).   (2)
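Since the only randomness is in the starting slots, drawing one sample of v is just a handful of uniform draws; a sketch with illustrative connection counts:

```python
import random
from math import lcm

def draw_starting_slots(n_dom, n_vbr, n_cbr, t_dom, t_vbr, t_cbr, rng=random):
    """Draw v = (v_DOM | v_VBR | v_CBR); one dominant connection is pinned to slot 0."""
    v_dom = [0] + [rng.randrange(t_dom) for _ in range(n_dom - 1)]
    v_vbr = [rng.randrange(t_vbr) for _ in range(n_vbr)]
    v_cbr = [rng.randrange(t_cbr) for _ in range(n_cbr)]
    return v_dom + v_vbr + v_cbr

t_dom, t_vbr, t_cbr = 16, 8, 4            # periods from the Fig. 2 example
super_period = lcm(t_dom, t_vbr, t_cbr)   # T = LCM(T_DOM, T_VBR, T_CBR)
v = draw_starting_slots(3, 2, 2, t_dom, t_vbr, t_cbr)
```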

For simplicity, we fix one dominant VBR connection to start at slot 0. Hence, the first component of v is always 0. Note that in v, we will have either CBR/VBR or VBR/VBR entries. Each arrival slot is composed of N_C service slots as in Fig. 3. If at an arrival slot the number of cell arrivals is N < N_C, we assume these cells occupy the first N service slots. In each service slot, one cell can be loaded into the output buffer and µ/N_C cells can be serviced. A cell is lost if the buffer is full upon its arrival. For a cell that is not lost, the delay threshold τ represents a fraction of the buffer length such that K_m = τK. If the queue length is greater than K_m upon a cell arrival, the cell is considered to have exceeded the delay threshold τ. For delay, the buffer is assumed to be of infinite length and the buffer size specification K is used in determining the delay thresholds. The system model behaves deterministically once the starting-slot positions of the N_C connections are selected. We need only observe the buffer size for at most three super-periods to determine whether or not steady state has been reached. The onset of steady state in the system occurs when the buffer size behavior and cell arrival pattern are repeated over consecutive super-periods. Once steady state has been established, occurrences of cell loss and cell delay events can be examined for the selected starting-slot vector. Note that there is a minimum number of connections N_Cmin such that, if there are fewer than N_Cmin connections, an important event such as a cell loss or a cell exceeding the delay threshold cannot occur [22,25]. The value of N_Cmin depends on the important event considered and on the types and number of connections and will be derived in subsequent sections. There is also a maximum number of connections N_Cg such that the server can effectively accommodate the incoming traffic. The service rate and mean rates of the connections must satisfy the stability constraint

∑_{i=1}^{N_C} λ̄_i < µ̂.   (3)

4 If there are other nondominant VBR classes with λ̂ = λ̂_DOM and B̂ = B̂_DOM for which the period T = mT_DOM, m ∈ Z+, m ≥ 2, these classes are treated as dominant VBR sources with T = T_DOM for the purpose of IS simulation. As described in Section 4.4, this does not affect the outcome of the simulation.

The value of N_Cg will be derived in terms of the system parameters.
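Once the starting-slot vector is fixed, the model is deterministic, so a single run is a slot-by-slot sweep of the tagged output buffer. The sketch below is a deliberately simplified version (integer service rate per slot, bursts emitted back-to-back, and the service-slot substructure of Fig. 3 glossed over), not the paper's exact model:

```python
def arrivals_per_slot(v, periods, bursts, t_super):
    """Cells arriving in each slot of one super-period (simplified:
    each connection emits its burst back-to-back once per period)."""
    a = [0] * t_super
    for start, period, burst in zip(v, periods, bursts):
        for rep in range(t_super // period):
            for b in range(burst):
                a[(start + rep * period + b) % t_super] += 1
    return a

def count_losses(a, mu, K, t_super, warmup=3):
    """Feed the arrival pattern into a K-cell FIFO buffer served at mu
    cells/slot; count losses in the last (steady-state) super-period."""
    q, lost = 0, 0
    for _ in range(warmup + 1):
        lost = 0
        for slot in range(t_super):
            for _ in range(a[slot]):
                if q < K:
                    q += 1
                else:
                    lost += 1   # buffer full upon arrival
            q = max(0, q - mu)  # serve mu cells this slot
    return lost

# Two aligned dominant VBR sources (burst 4, period 16), mu = 1, K = 2.
a = arrivals_per_slot([0, 0], [16, 16], [4, 4], 16)
lost = count_losses(a, mu=1, K=2, t_super=16)
```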

3. Simulation framework

3.1. Closed form analysis

Let M denote the maximum number of important events that can occur for a given starting-slot vector v. For the case of cell loss, M = L_max, where L_max is the maximum number of lost cells, and for the case of cell delay, M = D_max, where D_max is the maximum number of cells that exceed the delay threshold. We have shown in [22] (for the case of cell delay) and [25] (for the case of cell loss) that the worst case congestion occurs when all connections start in the same slot. This result is also proven in [26]. Since one VBR connection is fixed to start at slot 0, the worst case congestion occurs for the aligned-at-zero (AAZ) case. Hence, M can be found by running the AAZ vector given by v = (0, ..., 0 | 0, ..., 0 | 0, ..., 0). This is equivalent to a single simulation run and adds negligible overhead to the simulation. Let V be the set of all connection starting-slot vectors and n_j be the number of connection starting-slot vectors that map to exactly j important events in steady state. Here, |V| = (T_DOM)^(N_DOM−1) · (T_VBR)^(N_VBR) · (T_CBR)^(N_CBR), where |·| denotes the cardinality of the set. Then, the probability of j simultaneous important events in steady state is given by p_j = n_j/|V| for j = 0, ..., M. Note that ∑_{j=0}^{M} p_j = 1. The average number of important events in steady state, n̄, can then be found as n̄ = ∑_{j=0}^{M} j p_j. This can correspond to either the average number of cell losses or the average number of cells that exceed the threshold in steady state. Hence, the important event probability is given by

P = (∑_{j=0}^{M} j p_j) / N_cells = ∑_{j=0}^{M} (j/N_cells) p_j,   (4)


where N_cells is the total number of cells that arrive in a steady-state period,

N_cells = N_DOM B̂_DOM + N_VBR · (T/T_VBR) · B̂_VBR + N_CBR · (T/T_CBR).   (5)
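The cardinality |V| defined above grows geometrically in the number of connections, which is what rules out the exhaustive solution; a quick computation (the connection counts and periods are illustrative):

```python
def state_space_size(t_dom, n_dom, t_vbr, n_vbr, t_cbr, n_cbr):
    # |V| = T_DOM^(N_DOM - 1) * T_VBR^N_VBR * T_CBR^N_CBR
    return t_dom ** (n_dom - 1) * t_vbr ** n_vbr * t_cbr ** n_cbr

# A toy mix is already large; a modest realistic one is out of reach:
small = state_space_size(16, 3, 8, 2, 4, 2)        # 16^2 * 8^2 * 4^2 = 262144
large = state_space_size(96, 20, 48, 10, 12, 10)   # ~10^65 vectors
```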

In Eq. (4), P can represent P_CL, the cell loss probability, or P_TH, the delay threshold probability, and p_j can represent p_CLj, the probability of j simultaneous cell losses, or p_THj, the probability that j cells exceed the delay threshold. In this formulation, P is expressed as a weighted sum of multinomial probabilities. Thus, we have a multiple bin probability structure composed of bins numbered 0 to M, where bin j corresponds to j simultaneous important events. To obtain P, we have to run all possible vectors in V and collect the probabilities p_j. This exhaustive solution has a complexity of O((T_DOM)^(N_DOM−1) · (T_VBR)^(N_VBR) · (T_CBR)^(N_CBR)) and quickly becomes intractable for realistic systems. Thus, we use simulation to estimate the cell loss and cell delay probabilities.

3.2. Monte Carlo simulation

The conventional MC simulation approach is to estimate P by randomly selecting N_MC connection starting-slot vectors from V and checking for an important event (i.e., cell loss or cell delay) in each arrival slot for each connection once steady state has been reached. Thus, we would form an estimate P̂ of P as follows:

P̂ = (1/N_MC) ∑_{i=1}^{N_MC} ∑_{j=0}^{T} ∑_{n=1}^{N_C} I(i, j, n) / N_cells,   (6)

where I(·) is a binary indicator function of the important event:

I(i, j, n) = 1 if an event occurred for cell n at slot j for trial i, 0 otherwise.   (7)

With conventional MC simulation, we are estimating the numerator in Eq. (4), since the denominator is the number of cell arrivals per period in steady state. By estimating P we are implicitly estimating the multinomial probabilities p_j as in Section 3.1. One drawback to the conventional approach to MC simulation is that the binomial random variable represented by the binary indicator function in Eq. (6) is correlated because the cell loss or cell delay events occur in a bursty fashion [27]. Measuring this correlation is difficult when rare events are involved, which makes deriving accurate confidence intervals for the estimates problematic. The correlation problem associated with the binary indicator function is solved by the multinomial formulation introduced in Section 3.1 because the selection of each connection starting-slot vector from V is done independently. Using the multinomial formulation, we form an unbiased estimate P̂ of P as follows:

P̂ = ∑_{j=0}^{M} (j/N_cells) p̂_j,   (8)

where p̂_j are estimates of the individual probabilities p_j, j = 0, ..., M. The estimate p̂_j is formed by running N_MC simulation runs, with one connection starting-slot vector being drawn uniformly from the sample space V for each run, as follows:

p̂_j = (1/N_MC) ∑_{s=1}^{N_MC} I_j(s),   (9)


where I_j(s) is the multinomial indicator function for j simultaneous important events in steady state for vector s:

I_j(s) = 1 if j simultaneous important events occurred for vector s, 0 otherwise.   (10)

The multinomial formulation using the multinomial indicator function I_j maps each of the N_MC starting-slot vectors drawn independently from V to j simultaneous important events (j cell losses or j cells that exceed the delay threshold) in a steady-state super-period for all N_C connections. Due to the rare probabilities, if j > 0 cells do not cause important events, then, with probability close to 1, no important events will occur. Thus, we consider the bins to be independent, since bin 0 does not contribute to the estimates. This multiple bin structure also allows us to generate proper confidence intervals for our estimates. The variance of the estimator P̂ is given by

σ²(P̂) = ∑_{j=0}^{M} (j/N_cells)² p_j(1 − p_j)/N_MC.   (11)

Since we do not know the probabilities p_j a priori, we estimate this variance as follows:

σ̂²(P̂) = ∑_{j=0}^{M} (j/N_cells)² σ̂²(p̂_j),   (12)

where σ̂²(p̂_j) are the estimates of the estimator variances for the individual bins, using the unbiased variance estimator of the mean of N_MC Bernoulli samples for each bin j:

σ̂²(p̂_j) = (1/(N_MC(N_MC − 1))) ∑_{s=1}^{N_MC} (I_j(s) − p̂_j)².   (13)
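Eqs. (8), (9), (12) and (13) translate almost line-for-line into code. In the sketch below, the vector-drawing and event-counting routines are toy stand-ins (a real implementation would run the slotted-time model of Section 2); only the estimator structure follows the equations:

```python
import random

def mc_multinomial_estimate(draw_vector, count_events, n_mc, m_max, n_cells):
    """Estimate P and its variance via the multinomial formulation."""
    # Indicator samples I_j(s) for each bin j = 0..m_max.
    ind = [[0] * n_mc for _ in range(m_max + 1)]
    for s in range(n_mc):
        j = count_events(draw_vector())
        ind[min(j, m_max)][s] = 1
    p_hat = [sum(row) / n_mc for row in ind]                          # Eq. (9)
    var_hat = [sum((x - p) ** 2 for x in row) / (n_mc * (n_mc - 1))
               for row, p in zip(ind, p_hat)]                         # Eq. (13)
    est = sum(j * p / n_cells for j, p in enumerate(p_hat))           # Eq. (8)
    var = sum((j / n_cells) ** 2 * v for j, v in enumerate(var_hat))  # Eq. (12)
    return est, var

# Toy usage: a hypothetical rule mapping bunched starting slots to events.
rng = random.Random(1)
draw = lambda: [0] + [rng.randrange(16) for _ in range(3)]
events = lambda v: max(0, sum(s < 4 for s in v) - 2)
p_est, p_var = mc_multinomial_estimate(draw, events, 5000, 3, 32)
```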

In [28], it was determined that the confidence interval of a weighted sum of a multinomial distribution follows a χ² distribution. We state the theorem from [28] with the original notation slightly altered for our application.

Theorem 1 ([28]). As N_MC → ∞, the lower limit of the probability that P satisfies

P̂ − Z σ̂(P̂) ≤ P ≤ P̂ + Z σ̂(P̂)   (14)

is at least 1 − α, where Z is the positive square root of the upper (1 − α)th percentage point of the χ² distribution with M degrees of freedom. We use this result to generate confidence intervals.

4. Importance sampling method

For the QoS associated with ATM networks, the range of interest of the cell loss and cell delay probabilities is 10^−9 and below. In order to obtain sufficiently accurate estimates in this range, 10^11 samples or more of an MC simulation are necessary. Thus, MC simulation quickly becomes intractable at such low probabilities. The basic notion behind IS is to modify or bias the original probability density


function (PDF) in the MC simulation to a new, biased PDF. When used properly, IS reduces the variance of the new estimate for a given number of simulation runs, or equivalently, reduces the number of simulation runs required to achieve a given variance of the estimate. The key to using IS is knowing how to bias the original PDF. A necessary condition for a good biasing scheme is that the number of events of interest that occur must be increased and the variance of the estimate of the event of interest decreased under the biased distribution. We use a priori information about how the event of interest occurs to arrive at the biased joint PDF by reducing the support of the unbiased joint PDF.

4.1. Theory

The joint PDF f_V(v) of the connection starting-slots can be expressed in terms of the individual traffic classes as f_{V_DOM,V_VBR,V_CBR}(v_DOM, v_VBR, v_CBR). We use IS to modify or bias this initial PDF to a new PDF f*_{V_DOM,V_VBR,V_CBR}(v*_DOM, v*_VBR, v*_CBR), where P̂* is the new estimate of the important event of interest, and V_DOM, V_VBR and V_CBR refer to the sets of all possible connection starting-slot vectors v_DOM, v_VBR and v_CBR for the different types of traffic sources. IS requires that the biased PDF f*_{V_DOM,V_VBR,V_CBR}(v*_DOM, v*_VBR, v*_CBR) > 0 whenever the original PDF f_{V_DOM,V_VBR,V_CBR}(v*_DOM, v*_VBR, v*_CBR) > 0 for v* ∈ V′, where the set V′ is the important region, a subset of V such that every vector v* = (v*_DOM | v*_VBR | v*_CBR) in V′ results in at least one important event, i.e., one cell lost or one cell that exceeds the threshold. To keep P̂* an unbiased estimate of P, each important event (identified by cells lost or cells that exceed the delay threshold) must be appropriately weighted or unbiased. The weight used to unbias the IS estimate is given by

w(v*_DOM, v*_VBR, v*_CBR) = f_{V_DOM,V_VBR,V_CBR}(v*_DOM, v*_VBR, v*_CBR) / f*_{V_DOM,V_VBR,V_CBR}(v*_DOM, v*_VBR, v*_CBR).   (15)

The use of the multiple bin structure results in V = V_0 ∪ V_1 ∪ · · · ∪ V_M and V′ = V_1 ∪ · · · ∪ V_M, where the set V_j contains only those vectors that result in j important events, i.e., j cells lost or j cells that exceed the delay threshold in steady state. For realistic ATM systems of interest, |V_0| ≫ |V_j| for j = 1, 2, ..., M. MC simulation becomes intractable since it samples from the entire set V, including the set V_0, which is composed of the vectors that cause no important events. Ideally with IS, we would like to sample exclusively from the set V′, but the set partitions V_j are not known a priori. Instead, we sample from a set V* = V*_1 ∪ · · · ∪ V*_{M*}, where for simulation improvement we must have |V*_j| ≪ |V| and |V*| ≪ |V|. However, the sets V*_j may no longer be disjoint, may even have intersections with V_0, and it is possible that M* ≠ M. Note that V′ ⊆ V* ⊂ V. Also, unlike in MC simulation, where we run a total of N_MC trials for all bins, the number of runs per bin can vary in IS simulation. MC simulation forms the estimate P̂ by drawing vectors over the entire space V. Thus, for MC simulation, f_V(v*) = 1/|V|. Our IS algorithm biases the PDF f_V(v*) to f*_V(v*) for each starting-slot vector v* and estimates the multinomial probability p_j by independently drawing N_j starting-slot vectors using the biased PDF as follows:

p̂*_j = (1/N_j) ∑_{s=1}^{N_j} I*_j(s) w*_j(s),   (16)


where I*_j(s) is the indicator function of an important event under the new PDF f*_V(v*) and w*_j(s) is the vector-dependent weight function described above. For run s, the weight for vector v*(s) is given by w*_j(s) = 1/(|V| f*_V(v*(s))). Thus, as in the MC simulation case from Section 3.2, we form the unbiased estimate P̂* as a linear combination of the individual estimates p̂*_j:

P̂* = ∑_{j=0}^{M} (j/N_cells) p̂*_j.   (17)

The unbiased estimate of the variance of P̂* is given by

σ̂²(P̂*) = ∑_{j=0}^{M} (j/N_cells)² σ̂²(p̂*_j),   (18)

where the individual variances σ̂²(p̂*_j) are estimated using the unbiased variance estimator of the mean of N_j Bernoulli samples for bin j as follows:

σ̂²(p̂*_j) = (1/(N_j(N_j − 1))) ∑_{s=1}^{N_j} (I*_j(s) w*_j(s) − p̂*_j)².   (19)
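The IS estimator of Eqs. (16)–(19) mirrors the MC estimator, with each bin-j indicator multiplied by its weight and a per-bin run count N_j. A structural sketch with stand-in sampling, counting and weight routines (the toy values below are ours, purely for illustration):

```python
def is_estimate(draw_biased, count_events, weight, runs_per_bin, m_max, n_cells):
    """IS estimate P-hat* and its variance; weight(v) returns f(v)/f*(v)."""
    p_bins, var_bins = [], []
    for j in range(m_max + 1):
        samples = []
        for _ in range(runs_per_bin):
            v = draw_biased(j)
            # Sample is I*_j(s) * w*_j(s): the weight if bin j was hit, else 0.
            samples.append(weight(v) if count_events(v) == j else 0.0)
        n = len(samples)
        p_j = sum(samples) / n                                        # Eq. (16)
        var_j = sum((x - p_j) ** 2 for x in samples) / (n * (n - 1))  # Eq. (19)
        p_bins.append(p_j)
        var_bins.append(var_j)
    p_star = sum(j * p / n_cells for j, p in enumerate(p_bins))            # Eq. (17)
    var_star = sum((j / n_cells) ** 2 * v for j, v in enumerate(var_bins)) # Eq. (18)
    return p_star, var_star

# Deterministic toy: every biased draw for bin j lands in bin j with weight 0.5.
p_star, var_star = is_estimate(draw_biased=lambda j: [j],
                               count_events=lambda v: v[0],
                               weight=lambda v: 0.5,
                               runs_per_bin=4, m_max=2, n_cells=10)
```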

4.2. Overview of the IS method

Important events (cell losses or delays) occur when the connections start closely enough together that the server cannot keep up with the arriving cells over a time t < T, and either buffer overflow results in a cell loss or increasing buffer occupancy causes high delays to occur. For the minimum number of connections that causes an important event, N_Cmin, we can determine an interval of ϕ arrival slots in which all the connections must start their cell bursts in order for an important event to occur. If any connection starts its cell burst more than ϕ arrival slots away from the beginning of a cell burst of any other connection, then an important event can never occur [22,25]. The multinomial indicator function I_j(·) is always zero for j > 0 outside this ϕ-length arrival slot interval, and the support of the uniform PDF that originally covered the entire interval T can be reduced. Thus, the ϕ-distances define the set V* described in Section 4.1. The ϕ-distances are different for each type of important event. We describe the c-distances for cell loss in Section 4.4 and the d-distances for cell delay in Section 4.5. We solve the problems of cell loss and cell delay threshold probability calculation with three algorithms. The first algorithm calculates and generates the ϕ-distances associated with the connection starting-slots that identify the set V* in the implementation of IS. The second algorithm, called the interval reduction method, samples a connection starting-slot vector and generates an IS weight given a set of distances. The third algorithm, called the distance-shrinking algorithm, extends the distance calculation algorithm to subsequent bins. The functions and relations of these three IS algorithms are shown in Fig. 4. The distance calculation algorithm and the distance-shrinking algorithm involve pre-simulation calculations and pre-simulation runs, respectively, and add negligible overhead to the overall simulation.
This overhead is discussed in Section 4.7. The simulation runs are represented by the dashed box in Fig. 4. The simulation runs accumulate the necessary statistics for the sampled vector. The interval reduction algorithm, using the results of the

A. A. Akyamaç et al. / Performance Evaluation 38 (1999) 105–132

115

Fig. 4. Relation between the three IS algorithms and the simulation runs.

pre-simulation algorithms, samples a vector and generates an IS weight. The interval reduction process adds minimal overhead to the simulation, which is discussed in Section 4.7. In reference to Fig. 4, the interval reduction method is described in Section 4.3, the distance calculations are discussed in Section 4.4 for cell loss and Section 4.5 for cell delay, and the distance-shrinking algorithm is presented for both cell loss and cell delay in Section 4.6.

4.3. Interval reduction

The method of interval reduction guides the selection of the connection starting-slots so that an important event occurs, while reducing the support over which the next connection starting-slot can be randomly drawn. The overall IS biasing method provides a means of biasing a discrete uniform PDF by analytically defining a region V* ⊂ V such that the important region V^0 ⊆ V*. To the knowledge of the authors, there have been few other IS simulation approaches used to bias a discrete uniform PDF. An ad hoc method based on the geometric distribution was used in [9] for N*D/D/1 systems. An approach similar in nature to the interval reduction method was used in [20], which used fixed MPEG-encoded film sequences sorted by decreasing frame size, where the starting-slots of the sequences were selected according to the load presented by each sequence in order to cause an overload situation. The interval reduction method is a more general approach and, coupled with the distance calculations (described in Sections 4.4 and 4.5), covers all important event categories. The distance-shrinking method (described in Section 4.6) further enhances the efficiency of the interval reduction method. Given the distance calculations, the interval reduction algorithm efficiently samples a vector from the vector space generated by the distance calculations and also calculates an IS weight for the sampled vector.

The interval reduction method can be used with the c-distances (from Section 4.4), the d-distances (from Section 4.5), and the distances output by the distance-shrinking method (from Section 4.6). We demonstrate the method of interval reduction for a number of connections N_C > N_Cmin using the example shown in Fig. 5. For this example, we consider cell loss as the important event, hence the interval reduction method uses the c-distances. There are N_C = 4 connections with a period of T = 12. We assume that, using the distance calculations in Section 4.4, it is determined that at least three connections are required for cell loss. Furthermore, it is determined that cell loss can occur


Fig. 5. Example of the method of interval reduction for NC > NCmin .

as a result of three connections that arrive within one slot of each other, or four connections that arrive within three slots of each other. If the connections are more than three slots apart, cell loss cannot occur. Thus, T = 12, N_Cmin = 3, N_C = 4, c_0 = 1 and c_1 = 3, where c_0 and c_1 are the c-distances described in Section 4.4. The first connection (black square) is fixed to start at the beginning of the period. For each subsequent connection i, 1 ≤ i ≤ N_Cmin + r − 1, let t̂_i denote the total support in slots from which the starting-slot for connection i is sampled. The first r = N_C − N_Cmin = 1 connection(s) are randomly drawn over the entire period t̂_1 = T with a weight of w*(v*) = 1. We then examine the different cases that occur depending on where the subsequent connections are randomly placed.

Case 1. If connection 2 (gray square) is within c_1 slots of connection 1, then a cell loss can occur as a result of the interaction between all N_C connections, or there could be a cell loss resulting from the interaction between N_Cmin connections involving either of the first two connections. For a cell loss to occur, the


support of connection 3 must be restricted to t̂_2 = c_1 + 1 + 2c_0 slots rather than the entire period of T slots. The weight up to this point is w*(v*) = 1 · (c_1 + 1 + 2c_0)/T.

Case 2. If connection 2 is not within c_1 slots of the first connection, then a cell loss can only result from the interaction between N_Cmin connections involving either of the first two connections. For a cell loss to occur, the support of connection 3 must be restricted to t̂_2 = 2(2c_0 + 1) slots rather than the entire period of T slots. The weight up to this point is w*(v*) = 1 · 2(2c_0 + 1)/T.

Case 3. If connection 2 is within c_0 slots of the first connection, then we do not yet have to restrict the support of connection 3 for a cell loss to occur. We have the potential for a natural cell loss, i.e., the possibility exists that a cell loss occurs naturally based on the random draws without any restriction of the support of the PDF. The support of connection 3 is the entire period of t̂_2 = T slots. The weight up to this point is w*(v*) = 1 · 1.

Case 4. If connection 3 (plaid square) is within c_0 slots of either connection 1 or connection 2, and within c_1 slots of both connections 1 and 2, then a cell loss can occur as a result of the interaction between N_Cmin connections involving either of the first two connections or the interaction between N_C connections. For a cell loss to occur, the support of connection 4 must be restricted to t̂_3 = c_1 + 1 slots rather than the entire period of T slots. The weight is w*(v*) = 1 · (c_1 + 1 + 2c_0)/T · (c_1 + 1)/T.

Case 5. If connection 3 (plaid square) is within c_0 slots of either connection 1 or connection 2, but not within c_1 slots of both connections 1 and 2, then a cell loss can occur only as a result of the interaction between N_Cmin connections involving either of the first two connections. For a cell loss to occur, the support of connection 4 must be restricted to t̂_3 = c_0 + 1 slots rather than the entire period of T slots. The weight is w*(v*) = 1 · (c_1 + 1 + 2c_0)/T · (c_0 + 1)/T.

Case 6. Since connection 2 was not within c_1 slots of the first connection, a cell loss cannot occur as a result of the interaction between N_C connections. Thus, connection 3 will be within c_0 slots of either connection 1 or 2, and a cell loss can occur only as a result of the interaction between N_Cmin connections. For this case, the support of connection 4 must be restricted to t̂_3 = 2c_0 + 1 slots rather than the entire period of T slots. The weight is w*(v*) = 1 · 2(2c_0 + 1)/T · (2c_0 + 1)/T.

Case 7. This case is very similar to Case 6, except that the support of connection 4 must be restricted to t̂_3 = c_0 + 1 slots rather than the entire period of T slots. The weight is w*(v*) = 1 · 2(2c_0 + 1)/T · (c_0 + 1)/T.

Case 8. If connection 3 is not within c_1 slots of both of the first two connections, then a cell loss can occur only as a result of the interaction between the first two connections. For a cell loss to occur, the support of connection 4 must be restricted to t̂_3 = c_0 + 1 slots rather than the entire period of T slots. The weight is w*(v*) = 1 · 1 · (c_0 + 1)/T.

Case 9. If connection 3 is within c_1 slots but not c_0 slots of both of the first two connections, then a cell loss can occur only as a result of the interaction between N_C connections. For a cell loss to occur, the support of connection 4 must be restricted to t̂_3 = c_1 + 1 slots rather than the entire period of T slots. The weight is w*(v*) = 1 · 1 · (c_1 + 1)/T.

Case 10. If connection 3 is within c_0 slots of both of the first two connections, then we do not have to restrict the support over which connection 4 is randomly drawn for a cell loss to occur, thus the support of connection 4 is the entire period of t̂_3 = T slots. The weight for this case is w*(v*) = 1 · 1 · 1 since no restriction was necessary for a cell loss to occur. This is an example of a natural cell loss,


a cell loss that occurs without any restriction of the support over which the connections are randomly drawn.

The algorithm iteratively keeps track of the intervals t̂_i over which the starting-slots are sampled, where t̂_{N_Cmin+r−1} ≤ t̂_{N_Cmin+r−2} ≤ · · · ≤ t̂_2 ≤ t̂_1. This inequality is a result of the reduction in size of the intervals from which starting-slots are sampled. Since MC simulation samples from the entire period, the weight for each sample is t̂_i/T. The overall weight is multiplicative. Thus, the IS weight for vector v* is calculated as

$w^*(v^*) = \frac{1}{T^{N_C-1}} \prod_{k=1}^{N_C-1} \hat{t}_k.$    (20)
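The sampling-and-weighting loop implied by the case analysis and Eq. (20) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `support_fn` is a hypothetical helper standing in for the c-/d-distance rules of Sections 4.4 and 4.5, which here is replaced by a toy rule loosely mirroring the Fig. 5 numbers.

```python
import random

def sample_with_interval_reduction(T, num_extra_draws, support_fn, rng=random):
    """Sample connection starting-slots with interval reduction.

    The first connection is fixed at slot 0. Each subsequent starting-slot
    is drawn uniformly from the restricted support returned by support_fn,
    and the IS weight of Eq. (20), w*(v*) = prod_k t_hat_k / T**(NC - 1),
    is accumulated one factor t_hat_i / T at a time.
    """
    slots = [0]
    weight = 1.0
    for _ in range(num_extra_draws):
        support = support_fn(slots)   # admissible slots; len(support) = t_hat_i
        weight *= len(support) / T
        slots.append(rng.choice(support))
    return slots, weight

# Toy rule (hypothetical): the first extra draw covers the full period
# T = 12, the next one a window of c_1 + 1 = 4 slots.
def toy_support(slots):
    return list(range(12)) if len(slots) == 1 else list(range(4))

slots, w = sample_with_interval_reduction(12, 2, toy_support)
print(slots, w)   # the weight is deterministic here: (12/12) * (4/12) = 1/3
```

Whatever the support rule, the weight depends only on the sizes of the supports actually used, which is what makes the estimator unbiased over V*.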

4.4. Distance calculation for cell loss

We estimate cell loss probabilities for systems with combined CBR/VBR traffic and VBR/VBR traffic. We bias only the dominant VBR sources. The distance calculation algorithm identifies the set V_1* from which the VBR starting-slot vector v_1 is to be sampled and thus corresponds to bin 1 in the multinomial formulation. As in Section 2, let N_Cmin = N_C0 be the minimum number of connections required to cause a cell loss. From [25], N_C0 is calculated as follows:

$N_{C_0} = \lceil \mu + K/\hat{B}_{DOM} \rceil.$    (21)

Let r = N_DOM − N_C0 denote the number of extra VBR connections present. Define c_i, 0 ≤ i ≤ r, as the distance in normalized arrival slots between the cell bursts starting the farthest apart that still causes a cell loss involving the interaction of N_C0 + i connections. Interaction between connections is defined to occur when the buffer is nonempty between the starts of adjacent connections. We refer to the distances c_i as the c-distances. The c-distances depend on the service rate and the number of connections. A detailed derivation of the c-distances can be found in [21,25]. We first summarize the results for combined CBR/VBR traffic in Table 1, where the c'_i are intermediate variables that represent the c-distances for homogeneous dominant VBR sources in the absence of background traffic. Next, we summarize the results for VBR/VBR traffic in Table 2. Note that for the VBR/VBR case where µ ≥ 1.0, the c-distances are determined assuming that all VBR connections have the same triplet as the dominant VBR class. Also, for both µ < 1.0 and µ ≥ 1.0, if there are nondominant VBR classes for which λ̂ = λ̂_DOM and B̂ = B̂_DOM, they are assumed to have the same triplet as the dominant VBR class while determining the c-distances. These assumptions do not affect the accuracy of the simulation results in any way since the

Table 1
c-distances for cell loss probability for CBR/VBR traffic

Server                   | c'_0 (see below for c_i)              | c'_i, 1 ≤ i ≤ r
µ ≥ 1.0, N_C0 − 1 < µ    | max{⌊B̂_DOM − K/(N_C0 − µ)⌋, 0} + 1   | c'_0 + i·B̂_DOM
µ ≥ 1.0, N_C0 − 1 ≥ µ    | max{⌊B̂_DOM(N_C0 − µ) − K⌋, 0} + 1    | c'_0 + i·B̂_DOM
µ < 1.0                  | ⌊B̂_DOM(1/µ − 1)⌋ + 1                 | ⌊B̂_DOM((i + 1)/µ − 1)⌋ + 1

Distance: c_i = c'_i + ⌈N_CBR(1 + ⌈∑_{i=1}^{α} c'_i/(λ̂_DOM/λ̂_CBR)^i⌉)/min(µ, 1)⌉, where α is the smallest integer s.t. c_i/(λ̂_DOM/λ̂_CBR)^α < 1
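As a concrete reading of Eq. (21) and of the µ < 1.0 row of Table 1, the sketch below computes N_C0 and the intermediate c'-distances. The ceiling in Eq. (21) is an assumption reconstructed from context (N_C0 must be an integer), and the parameter values in the example are illustrative only.

```python
import math

def min_connections_for_loss(mu, K, B_dom):
    # Eq. (21) as reconstructed (ceiling assumed): the smallest number of
    # simultaneous bursts of B_dom cells that can overflow a buffer of K
    # cells against a drain rate of mu cells per slot.
    return math.ceil(mu + K / B_dom)

def c_prime_distances(mu, B_dom, r):
    # Table 1, mu < 1.0 row: intermediate c'-distances for homogeneous
    # dominant VBR sources with no background traffic,
    # c'_i = floor(B_dom * ((i + 1)/mu - 1)) + 1 for 0 <= i <= r.
    return [math.floor(B_dom * ((i + 1) / mu - 1)) + 1 for i in range(r + 1)]

print(min_connections_for_loss(2.0, 10, 5))   # ceil(2 + 2) = 4
print(c_prime_distances(0.5, 10, 1))          # [11, 31]
```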


Table 2
c-distances for cell loss probability for VBR/VBR traffic

Server                   | c_0                                                    | c_i, 1 ≤ i ≤ r
µ ≥ 1.0, N_C0 − 1 < µ    | max{⌊B̂_DOM − K/(N_C0 − µ)⌋, 0} + 1                    | c_0 + (i + N_VBR)·B̂_DOM
µ ≥ 1.0, N_C0 − 1 ≥ µ    | max{⌊B̂_DOM(N_C0 − µ) − K⌋, 0} + 1                     | c_0 + (i + N_VBR)·B̂_DOM
µ < 1.0                  | ⌊N_DOM·B̂_DOM/µ + (N_C − N_DOM)·B̂_VBR − B̂_DOM⌋ + 1   | c_0 + i·B̂_DOM

simulations are performed using the original triplets. Rather, they are required to guarantee that the space V* contains all the important events such that V^0 ⊆ V*, as described in Section 4.1. Note from Eq. (3) that the number of connections is limited by the stability condition $\sum_{i=1}^{N_C} \bar{\lambda}_i < \hat{\mu}$. Note that given a connection starting-slot x, a c-distance c_i can represent c_i slots either to the left or to the right of slot x, for a total of 2c_i + 1 slots. Thus, the distance calculation algorithm provides speedup in the range of c-distances c_i < T/2 for 0 ≤ i ≤ r. If c_i ≥ T/2 for any i, then the slots represented by c_i cover the entire period T and the distance calculation algorithm degenerates to MC simulation and provides no improvement.

4.5. Distance calculations for cell delay

We estimate cell delay probabilities for systems where the input consists of CBR sources multiplexed with background VBR sources, which constitute the dominant class. As explained in Section 2, given a delay threshold τ corresponding to a queue length of K_m = τK, there is a minimum number of connections necessary, N_Cmin = N_C0(τ), such that if there are fewer than N_C0(τ) connections, no cells will exceed the threshold [22]. Here, N_C0(τ) can correspond to different combinations of the numbers of CBR and VBR sources. Assuming there are N_Cn nondominant CBR connections, we arbitrarily fix the number of dominant VBR connections to the minimum number required, N^p_C0(τ), such that if there are fewer than N^p_C0(τ) dominant VBR connections, no cells will exceed the threshold. In the above, n denotes nondominant and p denotes dominant. We then vary the number of CBR sources and estimate delay threshold probabilities. Let the number of CBR connections be fixed at N_Cn at a threshold of τ. If only VBR sources were present, the minimum number of connections required, N_C0(τ), is given by [22,29]:

$N_{C_0}(\tau) = \left\lfloor \frac{\left(\mu + K_m/\hat{B}_{VBR}\right) + \sqrt{\left(\mu + K_m/\hat{B}_{VBR}\right)^2 - 4\mu/\hat{B}_{VBR}}}{2} \right\rfloor + 1.$    (22)

Hence, N_C0(τ) is an absolute upper bound for N^p_C0(τ). An absolute lower bound for N^p_C0(τ) can be found by assuming that the N_Cn CBR connections are in fact of the same class as the VBR connections. Thus, we have N_C0(τ) − N_Cn ≤ N^p_C0(τ) ≤ N_C0(τ). To find N^p_C0(τ), we run the AAZ case from N_C0(τ) − N_Cn VBR connections to N_C0(τ) VBR connections. Then, N^p_C0(τ) is the minimum number of VBR connections for which at least one cell exceeds the threshold. Note that this stage is equivalent in run-time to at most N_Cn + 1 simulation runs, which represents a negligible simulation overhead.

Given N^p_C0(τ) VBR connections, there is a certain number of CBR connections N^g_Cn such that if there are more than N^g_Cn CBR connections, then all cells will eventually exceed the threshold in steady state.
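Eq. (22) was damaged in extraction, so the sketch below encodes the reconstruction given above: the discriminant term −4µ/B̂_VBR and the floor should both be treated as assumptions, and the parameter values are illustrative only.

```python
import math

def min_vbr_for_delay(mu, K_m, B_vbr):
    # Eq. (22) as reconstructed: minimum number of VBR connections for at
    # least one cell to exceed the delay threshold (queue length K_m)
    # when only VBR sources are present.
    a = mu + K_m / B_vbr
    return math.floor((a + math.sqrt(a * a - 4.0 * mu / B_vbr)) / 2.0) + 1

print(min_vbr_for_delay(2.0, 10, 5))   # a = 4, sqrt(14.4) ~ 3.79 -> 4
```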


Table 3
d-distances for delay threshold probability, d_i, 0 ≤ i ≤ r

Server µ < 1:
$d_i = \left\lceil \frac{\hat{B}_{VBR}\left(N^p_{C_0}(\tau)+i-\mu\right) - K_m + \mu + N_{CBR} + N_{CBR}\,\hat{B}_{VBR}/(\hat{\lambda}_{VBR}/\hat{\lambda}_{CBR})}{\mu - N_{CBR}/(\hat{\lambda}_{VBR}/\hat{\lambda}_{CBR})} \right\rceil - 1$

Server µ ≥ 1:
$d_i = \left\lceil \hat{B}_{VBR}\left(N^p_{C_0}(\tau)+i-\mu\right) - K_m + N_{CBR}\,\frac{\hat{B}_{VBR}}{\hat{\lambda}_{VBR}/\hat{\lambda}_{CBR}} + \max\left\{ \left\lceil (1 + yN_{CBR})\left(1 - \frac{\mu}{N_C}\right) \right\rceil,\; (2 + N_{CBR})\left\lceil \frac{\mu}{N_C} \right\rceil \right\} \right\rceil - 1,$
where y = 1 if $\hat{B}_{VBR} = l\,T_{CBR}$ for some $l \in \mathbb{Z}^+$, and y = 0 otherwise.

From the stability constraint Eq. (3), it is required that

$N^p_{C_0}(\tau)\,\bar{\lambda}_{VBR} + N_{CBR}\,\bar{\lambda}_{CBR} < \hat{\mu} \;\Rightarrow\; N_{CBR} < \frac{\hat{\mu} - N^p_{C_0}(\tau)\,\bar{\lambda}_{VBR}}{\bar{\lambda}_{CBR}}.$    (23)

Thus, N^g_Cn is computed as follows:

$N^g_{C_n} = \left\lfloor \frac{\hat{\mu} - N^p_{C_0}(\tau)\,\bar{\lambda}_{VBR}}{\bar{\lambda}_{CBR}} \right\rfloor,$    (24)
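Eqs. (23) and (24) reduce to a one-line computation. In the example call, the parameter values are system C-style numbers taken from Tables 5 and 6 (µ̂ = 120 Mbps, five VBR-2 sources with λ̄ = 2 Mbps, CBR λ̄ = 10 Mbps):

```python
import math

def max_cbr_connections(mu_hat, n_vbr, lam_bar_vbr, lam_bar_cbr):
    # Eq. (24): the largest CBR count that still satisfies the stability
    # constraint of Eq. (23), given n_vbr dominant VBR connections with
    # mean rate lam_bar_vbr.
    return math.floor((mu_hat - n_vbr * lam_bar_vbr) / lam_bar_cbr)

print(max_cbr_connections(120, 5, 2, 10))   # floor(110 / 10) = 11
```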

where ⌊·⌋ is the floor operator. Simulations should be performed for N^p_C0(τ) VBR connections and CBR connections in the range N_Cn ≤ N_CBR ≤ N^g_Cn. As in the case of cell loss, we bias only the VBR sources. Here, the function of the distance calculation algorithm is to identify the set V_1* from which the VBR starting-slot vector v_1 is to be sampled. Thus, the distance calculations correspond to bin 1 in the multinomial structure. Given a number of VBR and CBR connections, we define a d-distance as the maximum distance in arrival slots between the connection starting-slots of the first and last bursts of the VBR sources such that at least one cell can exceed the threshold for at least one possible CBR vector. Thus, a d-distance is such that, unless all VBR connections start their bursts within d slots of each other, no cells will exceed the threshold, resulting in the multinomial indicator function I_j(·) = 0 for j > 0. The d-distances depend on the service rate µ and the number of connections N_VBR. Let r = N_VBR − N_C0(τ) represent the number of extra VBR connections present above the minimum N_C0(τ) sources required to cause important cell delay events. For N_VBR connections, a cell can exceed the delay threshold as a result of any combination of N_C0(τ) + i VBR bursts arriving within d_i slots of each other for 0 ≤ i ≤ r [30]. A detailed derivation of the d-distances can be found in [30]. Here, we summarize the results in Table 3. Note that for N_VBR = N_C0(τ) + r connections, if for any i, 0 ≤ i ≤ r, the calculation from Table 3 results in d_i ≥ T/2, then the implementation of IS degenerates to a regular MC simulation since the distances will encompass the entire period.

4.6. Distance shrinking

The original distances described in Sections 4.4 and 4.5 are derived by forcing one cell to cause an important event.
In the multiple bin structure, vectors that correspond to more than one important event will invariably have connections confined (in terms of starting-slots) to a distance smaller than the original distances. In fact, in most cases, the vectors produced by the original distances will pertain to bins 0 and 1 (i.e., result in either no cells or one cell causing an important event). Thus, simulating for the higher


Fig. 6. Phase 1 of the distance-shrinking algorithm for cell delay.

bins with the original distances could prove very inefficient, since multiple hits are required in all bins to accurately determine the important event probabilities. It is more efficient to generate distances for each of the bins. For cell loss, all bins have nonzero probability. Furthermore, if a cell loss occurs for a given set of c-distances, at least one extra cell will be lost every time the distance is shrunk by 1/µ if µ < 1 and by 1 if µ ≥ 1 [25]. Thus, we can target S simultaneous cell losses (or bin S) for 1 ≤ S ≤ L_max by shrinking the c-distances in Section 4.4 by S using

$c_i(S) = c_i - \lfloor (S-1)/\min(\mu, 1) \rfloor,$    (25)
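Eq. (25) amounts to the following shrink step (a sketch; `c` holds the c-distances c_0, …, c_r, and the example values are illustrative):

```python
import math

def shrink_c_distances(c, S, mu):
    # Eq. (25): to target bin S (S simultaneous cell losses), shrink
    # every c-distance by floor((S - 1) / min(mu, 1)).
    delta = math.floor((S - 1) / min(mu, 1.0))
    return [c_i - delta for c_i in c]

print(shrink_c_distances([11, 31], 3, 0.5))   # delta = floor(2/0.5) = 4 -> [7, 27]
```

Note that S = 1 leaves the distances unchanged, so bin 1 is always collected with the original c-distances.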

where 0 ≤ i ≤ r. Here, S is referred to as the shrink value. For cell delay, we would ideally choose to identify sets of distances d_i(h) for 0 ≤ i ≤ r and 1 ≤ h ≤ D_max, where h specifies the number of cells that exceed the threshold (the multinomial bin number). Theoretically computing d_i(h) is not possible because a given distance set d_i(·) may correspond to multiple bins; some bins may have identically zero probability; and there is no way of knowing a priori by how much a distance has to be reduced to cause one more cell to exceed the threshold. Thus, we use a two-phase algorithm to experimentally determine the proper shrink values for cell delay.

The two-phase distance-shrinking algorithm uses the original d-distances. Let the d-distances be specified by an ordered set Δ_r = {d_0, d_1, . . . , d_r} for N_C = N_C0 + r connections, where r ≥ 0. Distance shrinking involves identifying the worst case connection starting-slot configuration such that N_C0(τ) connections start within d_0 of each other, N_C0(τ) + 1 connections start within d_1 of each other, etc. The worst case configuration is identified by bursts producing the maximum congestion conforming to the set of distances Δ_r and not to any other set of distances Δ'_r which contains ordered elements d'_i < d_i. Identification of the worst case configurations is described in [30].


Table 4
Table generated by Phase 2 of the distance-shrinking algorithm for cell delay

S      | η_S    | Bin range                       | Lower bin                    | Upper bin
0      | η_0    | 1, . . . , η_0                  | 1                            | η_0
1      | η_1    | η_0 + 1, . . . , η_1            | min(η_0 + 1, D_max)          | min(max(η_0 + 1, η_1), D_max)
2      | η_2    | η_1 + 1, . . . , η_2            | min(η_1 + 1, D_max)          | min(max(η_1 + 1, η_2), D_max)
...    | ...    | ...                             | ...                          | ...
S_max  | D_max  | η_{S_max−1} + 1, . . . , D_max  | min(η_{S_max−1} + 1, D_max)  | D_max
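The bin-range bookkeeping of Table 4 can be sketched as below. This is a hypothetical helper, not the authors' code: `etas[S]` plays the role of η_S, and the last shrink value is assumed to reach D_max.

```python
def bin_ranges(etas, D_max):
    # For each shrink value S, return the (lower, upper) bins whose hits
    # are recorded, corrected for overflow as in Table 4. etas[S] is the
    # largest number of threshold-exceeding cells for the worst case
    # vector at shrink S.
    ranges = []
    prev = 0                      # eta_{S-1}, so S = 0 starts at bin 1
    for S, eta in enumerate(etas):
        lower = min(prev + 1, D_max)
        upper = D_max if S == len(etas) - 1 else min(max(prev + 1, eta), D_max)
        ranges.append((lower, upper))
        prev = eta
    return ranges

print(bin_ranges([2, 4, 6], 6))   # [(1, 2), (3, 4), (5, 6)]
```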

Fig. 7. An example comparing the sets Ω_0, Ω_1, . . . , Ω_{S_max} and the set V.

In Fig. 6, we present the algorithm for the first phase. Here, S denotes the shrink value, η_S denotes the maximum number of cells that exceed the threshold for the worst case configuration conforming to the set Δ_r shrunk by S, and ω_{Δ_r} denotes the worst case vector with elements ω_{Δ_r}(1), ω_{Δ_r}(2), . . . , ω_{Δ_r}(N_C) conforming to the set Δ_r shrunk by S. Note that the shrink values S are, in general, not the same as the h described above. Table 4 shows a typical output of the second phase of the algorithm. This table lists the shrink value S, the number of cells η_S that exceed the threshold for the worst case vector at shrink value S, the bin range to be collected for shrink value S, and the lower and upper bin ranges corrected for overflow conditions.

Let the set of vectors conforming to the ordered distance set generated by shrink value S be denoted by Ω_S for 0 ≤ S ≤ S_max. Then, we have Ω_0 ⊃ Ω_1 ⊃ · · · ⊃ Ω_{S_max}, and Ω_0 ⊂ V since V also contains vectors in bin 0 which are not in Ω_0. An example depicting the above sets is shown in Fig. 7 for |V| = 22, where a number x ∈ {0, 1, 2, 3, 4} indicates a vector that results in x important events. Due to the relationship between the sets Ω_i, the data collection proceeds as follows. For S = 0, hits are recorded for bins 1 to D_max. For S = 1, hits are recorded for bins η_0 + 1 to D_max, etc. Finally, for S = S_max, hits are recorded for bins η_{S_max−1} + 1 to D_max. The shrink value is advanced when at least 10 to 100 hits are recorded for all the bins in the shrink range (for the χ² assumption [28]). Note that for a given shrink S, the bin range for which hits are recorded runs from η_{S−1} + 1 to D_max. This is because the space Ω_S, produced by restricting to the d-distances shrunk by S, no longer contains


all the vectors that produce 1 to η_{S−1} cells that exceed the threshold. If we did collect for bins 1 to η_{S−1} at shrink S, we would be violating the IS condition described in Section 4.1. The data collection for the cell loss case proceeds in the same manner, with the shrink values S described for cell loss earlier. The sets Ω_i correspond to the sets V_i* described in Section 4.1, and Ω_0 corresponds to V*. The set Ω_0 is generated by shrink value S = 0 and hence is defined by the original c-distances for cell loss and d-distances for cell delay.

4.7. Overhead

The three algorithms used to implement the IS method (namely distance calculation, distance shrinking and interval reduction) introduce processing overhead to the simulation. In this section, we approximately quantify this overhead and show that it is negligible compared to the processing required for the simulation runs. We first consider the total number of operations required to simulate a single sampled vector. There are N_C random variable selections (for the connection starting-slots). In the worst case, for each time slot in [0, . . . , T − 1], there are N_C lookups (to determine if a cell has arrived for each connection), N_C additions and comparisons (to determine if an important event has occurred), N_C updates (to accumulate the simulation counters) and N_C subtractions (for the service in the service slots). This represents N_C + 5·N_C·T ≈ 5·N_C·T operations. During a typical simulation, these operations account for most of the processing time. We define a simulation unit (SU) as the processing time required to perform the above operations once a starting-slot vector has been chosen for a given simulation run. Thus, if our three IS algorithms were not used, a simulation consisting of N runs would require a processing time of

$\Upsilon_{NO\text{-}IS} = N,$    (26)

where Υ_NO-IS is the approximate processing time in SUs for a simulation that does not employ the three IS algorithms. The overhead introduced by the three IS algorithms can thus be incorporated as follows:

$\Upsilon_{IS} = \upsilon_{dc} + \upsilon_{ds} + N(1 + \upsilon_{ir}),$    (27)

where Υ_IS is the approximate processing time in SUs for a simulation that employs the three IS algorithms, υ_dc and υ_ds are the fixed overheads in SUs introduced by the distance calculation algorithm (which involves pre-simulation calculations) and the distance-shrinking algorithm (which involves pre-simulation runs), respectively, and υ_ir is the fractional overhead introduced by the interval reduction algorithm (which is run during the simulation while sampling the vectors). The distance calculation algorithm involves the calculations shown in Tables 1 and 3 and, in the worst case, at most N_Cn + 1 simulation runs (see Sections 4.4 and 4.5). The calculations involve negligible processing compared to a simulation run of a single period, thus we can write υ_dc ≤ 1 + N_Cn + 1 SUs. The distance-shrinking algorithm involves the calculations shown in Eq. (25) and Fig. 6 and, in the worst case, D_max + 1 simulation runs. Thus, we can write υ_ds ≤ 1 + D_max + 1 SUs. In a typical IS simulation of a rare event, N ≫ D_max + N_Cn and thus Eq. (27) can be approximated by

$\Upsilon_{IS} \approx N(1 + \upsilon_{ir}).$    (28)

For the systems presented in Section 5, the average total observed run-time on a SUN SPARCstation 4 for the distance calculation algorithm and the distance-shrinking algorithm was below 2 min, whereas the simulation runs accounted for tens of minutes (for very low probabilities) to numerous hours or a few days (for higher probabilities).


The interval reduction method adds additional processing during the selection of the N_C random variables. With N_C = N_C0 + r connections, the first r starting-slots are sampled from the entire period T as before. The remaining connections are sampled based on the positions of the connections that have already been sampled. Thus, if r + i, 0 ≤ i ≤ N_C0 − 1, connections have already been sampled, the sampling space for the subsequent connection is calculated based on the relative starting-slot positions of the r + i sampled connections. For each connection r + i, 0 ≤ i ≤ N_C0 − 1, this requires r operations to identify the slots based on the r + 1 simultaneous important events defined by the distances d_0, . . . , d_r. Once the sampling space is calculated, the selection of the starting-slot of the subsequent connection requires one sampling operation. Thus, the total number of additional calculations required by the interval reduction method for the N_C − r = N_C0 connections is $\sum_{i=0}^{N_{C_0}-1} (r+i)r$. The selection of the N_C random variables then involves

$r + N_{C_0} r^2 + \frac{N_{C_0}(N_{C_0}-1)}{2}\,r + N_{C_0} = N_C + N_{C_0} r^2 + \frac{N_{C_0}(N_{C_0}-1)}{2}\,r$

operations. After the N_C random variables are selected, the simulation of the period requires 5·N_C·T operations as before. Then, υ_ir can be calculated as follows:

$\upsilon_{ir} = \frac{N_C + r N_{C_0}\left(r + N_{C_0}/2 - 1/2\right)}{5 N_C T} \le \frac{N_C + r N_{C_0}(r + N_{C_0})}{5 N_C T} = \frac{1 + r N_{C_0}}{5T}.$    (29)

In a typical IS simulation of a rare event, T ≫ r·N_C0 and thus υ_ir ≈ 0, so we can write

$\Upsilon_{IS} \approx N = \Upsilon_{NO\text{-}IS}.$    (30)
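The overhead expression of Eq. (29) can be checked numerically. The example values are one way to realize the worst case quoted below (T = 250 slots, r·N_C0 = 8), here taken as N_C0 = 8 and r = 1:

```python
def interval_reduction_overhead(N_C0, r, T):
    # Eq. (29): fractional overhead v_ir of the interval reduction
    # sampler (N_C = N_C0 + r), with its bound (1 + r*N_C0)/(5*T).
    N_C = N_C0 + r
    v_ir = (N_C + r * N_C0 * (r + N_C0 / 2.0 - 0.5)) / (5.0 * N_C * T)
    bound = (1.0 + r * N_C0) / (5.0 * T)
    return v_ir, bound

v, b = interval_reduction_overhead(8, 1, 250)   # r * N_C0 = 8, T = 250
print(v, b)   # about 0.004 and 0.0072 -> v_ir is indeed negligible
```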

For the systems presented in Section 5, the worst case overhead occurred when T = 250 slots and r·N_C0 = 8, which satisfies T ≫ r·N_C0.

4.8. Improvement factors

The improvement in simulation efficiency that results from using IS describes the factor by which the variance of an IS estimator is reduced when compared to the variance of the conventional MC estimator for a fixed sample size, or, equivalently, how many fewer steady-state periods must be simulated to obtain a given accuracy. Thus, the improvement can be calculated as follows:

$R_{net} = \left.\frac{N_{MC}}{N_{IS}}\right|_{\sigma^2(\hat{P})=\sigma^2(\hat{P}^*)} = \left.\frac{\sigma^2(\hat{P})}{\sigma^2(\hat{P}^*)}\right|_{N_{MC}=N_{IS}}.$    (31)

Since our IS procedure uses a different number of simulation runs for each bin, we use the latter approach in generating improvement factors. Furthermore, since the exact estimator variances are not available, we generate an estimate of the improvement using the estimates of the estimator variances as follows:

$\hat{R}_{net} = \frac{\hat{\sigma}^2(\hat{P})}{\hat{\sigma}^2(\hat{P}^*)}.$    (32)

As described in Section 4.1, the estimator σ̂²(P̂*) of the IS estimator variance is available from the simulation runs. However, the MC estimator variance is not available. To generate an estimate σ̂²(P̂), we use the bin probabilities obtained from IS simulation. For any bin j, the number of IS simulation runs is


N_j and the probability estimate is p̂*_j. If MC simulation were used for bin j, the estimate formed would involve a sum of Bernoulli trials of the hits on bin j divided by the number of trials. Then, assuming the number of trials was N_j and that the resulting probability estimate is p̂_j = p̂*_j, an estimate of the variance for bin j resulting from MC simulation can be found using p̂*_j(1 − p̂*_j)/N_j. Note that this is not an unbiased estimator. We estimate the improvement as follows:

$\hat{R}_{net} = \frac{\hat{\sigma}^2(\hat{P})}{\hat{\sigma}^2(\hat{P}^*)} = \frac{\sum_{j=0}^{M} j^2\, \hat{p}_j^*(1-\hat{p}_j^*)/N_j}{\sum_{j=0}^{M} j^2\, \hat{\sigma}^2(\hat{p}_j^*)}.$    (33)

Here, M is known, and N_j, p̂*_j and σ̂²(p̂*_j) are outputs of the IS simulation. Note that the denominator in Eq. (33) is the estimated variance resulting from IS simulation. Due to the bias of the estimate of the MC estimator variance, R̂_net is a biased estimator of the actual improvement R_net. In order to quantify this bias, we use the following:

$\tilde{E}\{\hat{R}_{net}\} = \frac{E\{\hat{\sigma}^2(\hat{P})\}}{E\{\hat{\sigma}^2(\hat{P}^*)\}},$    (34)
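Eq. (33) translates directly into code. In this sketch, `p_hat`, `var_hat` and `N` are the per-bin IS outputs p̂*_j, σ̂²(p̂*_j) and N_j indexed by bin j, and the example values are illustrative only:

```python
def estimated_improvement(p_hat, var_hat, N):
    # Eq. (33): estimated speedup as the ratio of the (estimated) MC
    # estimator variance to the IS estimator variance, built from the
    # per-bin IS outputs; bin j = 0 drops out due to the j^2 weighting.
    num = sum(j * j * pj * (1.0 - pj) / Nj
              for j, (pj, Nj) in enumerate(zip(p_hat, N)))
    den = sum(j * j * vj for j, vj in enumerate(var_hat))
    return num / den

print(estimated_improvement([0.9, 0.1], [0.0, 1e-6], [1000, 1000]))   # ~90
```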

where Ẽ{·} indicates that we are approximating the expected value of the quotient by the quotient of the expected values. Note that since σ̂²(P̂*) is an unbiased estimator, E{σ̂²(P̂*)} = σ²(P̂*). Furthermore, we have that

$E\{\hat{\sigma}^2(\hat{P})\} = E\left\{ \sum_{j=0}^{M} \left(\frac{j}{N_{cells}}\right)^2 \frac{\hat{p}_j^*(1-\hat{p}_j^*)}{N_j} \right\} = \sum_{j=0}^{M} \left(\frac{j}{N_{cells}}\right)^2 \frac{E\{\hat{p}_j^*\} - E\{(\hat{p}_j^*)^2\}}{N_j}.$    (35)

Using the facts that E{p̂*_j} = E{p̂_j} = p_j and E{(p̂*_j)²} = σ²(p̂*_j) + (p_j)², and using Eq. (11), Ẽ{R̂_net} can be written as follows:

$\tilde{E}\{\hat{R}_{net}\} = \frac{\sum_{j=0}^{M} (j/N_{cells})^2 \left( p_j - (p_j)^2 - \sigma^2(\hat{p}_j^*) \right)/N_j}{\sigma^2(\hat{P}^*)} = R_{net} - \underbrace{\frac{\sum_{j=0}^{M} (j/N_{cells})^2\, \sigma^2(\hat{p}_j^*)/N_j}{\sigma^2(\hat{P}^*)}}_{b},$    (36)

where b is the bias term. Defining N_min = min_j{N_j} and using Eq. (18), the bias b is limited to

$0 \le b \le \frac{1}{N_{min}}.$    (37)

Thus, we have

$R_{net} - \frac{1}{N_{min}} \le \tilde{E}\{\hat{R}_{net}\} \le R_{net}.$    (38)

In a typical IS simulation, the number of runs per bin is at least 100, thus Nmin > 100 and the bias in the improvement estimator is negligible.


Table 5
Sources for the heterogeneous systems

Source | λ̂ (Mbps) | λ̄ (Mbps) | B̂ (cells)
CBR    | 10        | 10        | 1
VBR-1  | 50        | 5         | 25
VBR-2  | 500       | 2         | 50

Table 6
System parameters and derived parameters

System: no. of sources   | k   | l   | µ̂ (Mbps) | K (cells) | T (slots) | µ (cells/slot)
A: 6 VBR-1, var. CBR     | 5   | n/a | 120       | 200       | 250       | 2.4
B: 4 VBR-2, var. CBR     | 50  | n/a | 120       | 200       | 12,500    | 0.24
C: 5 VBR-2, var. CBR     | 50  | n/a | 120       | 100       | 12,500    | 0.24
D: 5 VBR-2, var. VBR-1   | n/a | 10  | 120       | 200       | 12,500    | 0.24

5. Experimental results

The connection traffic descriptors for the traffic sources are listed in Table 5. We selected the peak cell rate of the CBR class to be comparable to that required by video traffic. The system parameters and derived parameters are listed in Table 6.

We generated cell loss probability results for systems C (CBR/VBR) and D (VBR/VBR). We fixed the number of dominant VBR sources to the minimum required to cause cell loss in the absence of background traffic. We first considered system C with CBR/VBR traffic; this corresponds to N_C0 = 5 VBR-2 sources. We subsequently varied the number of CBR sources. We generated cell loss probability results for aggregate traffic, individual VBR-2 traffic and individual CBR traffic with random buffer filling and with priority buffer filling, where each class in turn is given priority in entering the buffer. The improvement factors and confidence intervals were generated for the aggregate probabilities. The stopping conditions were set at 10–20 cell losses. The points at the lower probabilities were obtained by simulation using IS, and the points at the higher probabilities were obtained by conventional MC simulation. The simulation time varied from tens of minutes (for the lowest probabilities) to numerous hours or a few days (for the higher probabilities). The results are plotted in Fig. 8, where the aggregate cell loss probability is plotted as an increasing solid curve, and the cell loss probabilities for the individual CBR and VBR-2 classes are plotted as dash–dotted and dashed curves, respectively. The improvement factors (or simulation speedup) resulting from using IS rather than MC simulation are plotted as a decreasing solid curve for the points for which IS was used. The circular markers represent random buffer filling and the triangular markers represent priority buffer filling for the CBR sources. The 95% confidence intervals are also plotted for the aggregate cell loss probability curve.
The simulation time increased as the probability increased, due to the increase in the number of bins L_max. Furthermore, as the probability increases, the improvement provided by the IS method decreases. For the points obtained using IS, it is observed from Fig. 8 that the improvement in simulation efficiency is inversely proportional to the probability being estimated. In Fig. 8, the cell loss probability initially decreases (as the number of CBR sources is changed from 0 to 1) because the total number of cell arrivals increases while the number of cell losses does not increase accordingly. However, the cell loss probabilities of the individual classes continue to increase monotonically.

Fig. 8. Aggregate and individual cell loss probabilities for system C, 5 VBR-2 sources, number of CBR sources varying.

The difference in burstiness for the CBR/VBR case results in the cell loss probability of the CBR traffic in Fig. 8 being one or two orders of magnitude below that of the VBR-2 traffic. There is little difference in the cell loss probability between the random buffer filling scheme and the priority buffer filling scheme.

Next, we considered system D with VBR/VBR traffic. The number of dominant VBR-2 sources was fixed at five and the number of VBR-1 sources was varied. We generated cell loss probability results for the aggregate traffic, the individual VBR-2 traffic and the individual VBR-1 traffic under both random buffer filling and priority buffer filling. The results are plotted in Fig. 9, which has the same style as Fig. 8; the cell loss probabilities for the individual VBR-1 and VBR-2 classes are plotted as dash-dotted and dashed curves, respectively. As was the case for system C, the cell loss probability initially decreases (as the number of VBR-1 sources is changed from 0 to 1) because the total number of cell arrivals increases while the number of cell losses does not increase accordingly, whereas the cell loss probabilities of the individual classes continue to increase monotonically. The cell loss probability of the VBR-1 traffic is roughly one order of magnitude below that of the VBR-2 traffic. Also, as observed for system C, there is little difference in the cell loss probability between the random and priority buffer filling schemes. As seen in Fig. 8, the improvement in simulation efficiency is inversely proportional to the probability being estimated.

Fig. 9. Aggregate and individual cell loss probabilities for system D, 5 VBR-2 sources, number of VBR-1 sources varying.

We next generated delay threshold probability results for systems A (CBR/VBR) and B (CBR/VBR). We considered CBR/VBR traffic and assumed the CBR connections to occupy the first lines in the switch; in effect, this gives the CBR sources line priority. The simulations were run by fixing the number of VBR sources at the minimum number required for dispersion when multiplexed with a single CBR source. This corresponds to N^p_C0(τ) VBR sources for N_Cn = 1 CBR source, as described in Section 4.5. We then varied the number of CBR sources. We generated delay threshold probability results for the aggregate traffic and the individual CBR and VBR traffic. The improvement factors and confidence intervals are reported for the aggregate probabilities. The stopping conditions were set at 100 hits per bin. As in the cell loss simulations, the simulation time varied from tens of minutes (for the lowest probabilities) to several hours or a few days (for the higher probabilities). Note again that more simulation runs are required as D_max increases due to the stopping criterion of 100 hits per bin. In the cell delay simulations, all points on the curves were obtained using IS.

We first considered system A with two threshold levels, τ = 40% and τ = 50%, corresponding to queue lengths of 80 and 100 cells, respectively. The number of VBR-1 sources was fixed at six and the number of CBR sources was varied. In Fig. 10, the aggregate delay threshold probabilities are plotted as increasing solid curves, and the delay threshold probabilities for the individual CBR and VBR-1 classes are plotted as dash-dotted and dashed curves, respectively. The improvement factors (or simulation speedup) resulting from using IS rather than conventional MC simulation are plotted as decreasing solid curves. The threshold levels τ = 40% and τ = 50% are indicated by circular and triangular markers, respectively. The 95% confidence intervals are depicted with the same style markers as the pertaining delay threshold probability curve. For six VBR-1 sources, one CBR source was sufficient for cells to exceed the threshold of τ = 40%; for τ = 50%, at least two CBR sources were necessary. As expected, the delay threshold probability increases when the number of CBR sources is increased and decreases when the delay threshold is increased.

Fig. 10. Aggregate and individual delay threshold probabilities for system A, six VBR-1 sources, number of CBR sources varying.

The delay threshold probability for the CBR traffic is always lower than that of the aggregate traffic, and vice versa for the VBR-1 traffic. This is due to the fact that the CBR sources have line priority. The aggregate and individual probabilities converge as more CBR sources are added, since the increased average queue length diminishes the effect of line priority. As seen in Fig. 10, the improvement factor is inversely proportional to the probability being estimated. The improvement factor approaches unity as the delay threshold probability exceeds 10^-5. MC simulation is generally feasible for these probabilities, but will typically result in wider confidence intervals for equal run sizes, since MC simulation does not incorporate the distance shrinking algorithm which targets individual bins.

Next, we considered system B with two threshold levels, τ = 85% and τ = 95%, corresponding to queue lengths of 170 and 190 cells, respectively. The number of VBR-2 sources was fixed at four and the number of CBR sources was varied. The results are plotted in Fig. 11, which is in the same style as Fig. 10; the circular markers represent τ = 85% and the triangular markers represent τ = 95%. For four VBR-2 sources, one CBR source was sufficient for cells to exceed the threshold of τ = 85%; however, at least two CBR sources were necessary for cells to exceed the threshold of τ = 95%. Similar behavior in terms of delay threshold probabilities and improvement factors was observed for system B, where the improvement factors were again inversely proportional to the probability being estimated. As before, the delay threshold probability for the CBR traffic is always lower than that for the aggregate traffic, and vice versa for the VBR-2 traffic. However, due to the large period of system B, the addition of CBR sources does not have as large an effect on the delay threshold probabilities.
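For a concrete picture of the delay threshold quantity itself, the following sketch estimates, by plain MC over random phase sets, the probability that a cell's queueing delay exceeds a threshold in a discrete-time FIFO multiplexer fed by superposed periodic (CBR-like) streams. This is an assumed toy model, not the paper's switch or its IS scheme; all parameter values are illustrative:

```python
import random

# Assumed model: N periodic sources, each emitting one cell every T slots at a
# random phase, feed a FIFO queue served at one cell per slot. A cell's
# queueing delay (in slots) equals the queue length it sees on arrival.
random.seed(2)
N, T, threshold = 12, 16, 8      # sources, common period in slots, delay threshold
runs = 2000                      # independent random phase sets

exceed = total = 0
for _ in range(runs):
    phases = [random.randrange(T) for _ in range(N)]
    arrivals = [0] * T           # arrivals per slot: superposition of the streams
    for ph in phases:
        arrivals[ph] += 1
    q = 0
    # run a few periods so the queue settles into its periodic regime
    for cycle in range(4):
        for slot in range(T):
            for _ in range(arrivals[slot]):
                if cycle == 3:   # measure only in the final (settled) period
                    total += 1
                    if q > threshold:
                        exceed += 1
                q += 1           # cell joins the queue
            if q > 0:
                q -= 1           # serve one cell per slot

print(f"P(delay > {threshold} slots) ~ {exceed / total:.3e}")
```

Because the offered load is N/T < 1, each phase set yields a stable periodic queue pattern; the estimate averages the exceedance fraction over phase sets, which is the regime where plain MC becomes impractical as the threshold grows and motivates the IS approach used in this section.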

130

A. A. Akyamaç et al. / Performance Evaluation 38 (1999) 105–132

Fig. 11. Aggregate and individual delay threshold probability for system B, four VBR-2 sources, number of CBR sources varying.

6. Conclusion

In this paper, we considered the problem of estimating rare cell loss and cell delay probabilities in ATM networks with heterogeneous input traffic consisting of CBR and VBR sources. As an alternative to the stochastic approach, we used the operational approach, describing the traffic entering the network by the ATM Forum standardized connection traffic descriptors, which are also used by the CAC algorithms. We used a multinomial formulation which effectively removed the correlation associated with estimating bursty events in an MC simulation. We developed efficient simulation techniques based on IS to estimate the cell loss and cell delay probabilities. The IS techniques bias a discrete uniform PDF by generating sets that include the important region. For the experimental systems considered, we observed that the improvement in simulation efficiency over conventional MC simulation was inversely proportional to the probability being estimated.

Acknowledgements

The authors would like to thank Brad Makrucki for his helpful feedback.





Ahmet A. Akyamaç received his B.S. degree with thesis from Bilkent University, Ankara, Turkey, in 1993, and his M.S. degree with thesis from North Carolina State University, Raleigh, NC, in 1995, both in Electrical Engineering. He is currently a Ph.D. candidate in Electrical Engineering at North Carolina State University. His research interests include the design, modeling, analysis and performance evaluation of telecommunication systems and networks, and accelerated rare event simulation techniques. He is a member of Eta Kappa Nu and Phi Kappa Phi, and a student member of the IEEE and the IEEE Communications Society.

James A. Freebersyser is the program officer in communications and networks at the Office of Naval Research in Arlington, VA. Previously, he was the program manager of mobile, wireless communications and networks at the Army Research Office in Research Triangle Park, NC. He received his degrees in Electrical Engineering from Duke University (B.S.) in May, 1988, the University of Virginia (M.S.) in January, 1990, and North Carolina State University (Ph.D.) in December, 1995. His research interests are the computer-aided modeling, simulation and analysis of communications systems, including both high-speed networks and mobile wireless networks.

J. Keith Townsend received the B.S. degree in Electrical Engineering from the University of Missouri, Rolla, in 1981, and the M.S. and Ph.D. degrees in Electrical Engineering in 1984 and 1988, respectively, from the University of Kansas. Before graduate study he was an Engineer in the Avionics Design Group at Bell Helicopter Textron, Ft. Worth, TX. While at the University of Kansas, his research activities included digital image processing, communications system modeling and analysis, and digital signal processing. He joined the faculty of the Electrical and Computer Engineering Department at North Carolina State University in July 1988, where he is now an Associate Professor. His current research interests include wireless communication systems and techniques for simulation of rare events in communication links and networks. Dr. Townsend is a member of Sigma Xi, Eta Kappa Nu, Tau Beta Pi, Phi Kappa Phi, and the IEEE. He has been a Guest Editor for the IEEE Journal on Selected Areas in Communications, and is currently an Editor for the IEEE Transactions on Communications.