Optical Switching and Networking 2 (2005) 230–238 www.elsevier.com/locate/osn
Impact of burst aggregation time on performance in optical burst switching networks

Sireen Malik*, Ulrich Killat

Department of Communication Networks, Hamburg University of Technology, Schwarzenbergstrasse 95 (IV D), 21073 Hamburg, Germany

Received 5 October 2005; received in revised form 13 December 2005; accepted 27 January 2006. Available online 3 March 2006.

doi:10.1016/j.osn.2006.01.002
Abstract

In this paper, we present a performance analysis of Just-In-Time Optical Burst Switching (JIT-OBS) under realistic web traffic. We demonstrate and explain the impact of Burst Aggregation Time (BAT) on performance under a short-lived TCP flows model, considering both the data and control planes. Our results show that BAT affects TCP performance through packet losses and packet delay in the data plane, and that the network has an optimal point of operation that depends on the BAT. We also model the impact of BAT on congestion in the control plane.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Burst aggregation; JIT-OBS switching; TCP over OBS
* Corresponding author. Tel.: +49 40 42878 3443; fax: +49 40 42878 2941. E-mail address: [email protected] (S. Malik).

1. Introduction

The Internet keeps growing faster and bigger. The two biggest factors responsible for its rapid growth are the ever-increasing number of users and new bandwidth-hungry applications such as Peer-to-Peer (P2P) file sharing. To meet this challenge, one evolution track is the transport of IP over Wavelength Division Multiplexing (WDM). However, in the absence of optical processing and buffering (which at present does not go beyond bulky and unstable Fiber Delay Lines, FDL), the use of optical layers is limited to static, circuit-switched types of transmission. Three frameworks form the bulk of present and future research in the area of dynamic optical transmission: Optical Label Switching (OLS, for
example GMPLS [1]), Optical Burst Switching (OBS) and Optical Packet Switching (OPS) [2]. OLS operates at the granularity of a wavelength. OPS, on the other hand, will offer granularity at the packet level; however, in order to reach such a milestone, both optical processing and buffering technologies have to mature. In the transition phase, OBS has emerged as a solution. OBS has an electrical control plane, but compensates for this by bundling a number of IP packets into one large data burst, which is sent in the optical plane. A control packet is sent prior to the transmission of the data burst to reserve an optical circuit for a specific time duration. Once the circuit is reserved, the data burst is transmitted transparently, and the circuit is later torn down. Thus, OBS is a hybrid of OLS and OPS. The dominant wavelength reservation scheme in OBS is the Tell-and-Go (TAG) scheme [3]. Various proposals under TAG can be bunched into two groups: Immediate Reservation (IR) and Delayed Reservation
Fig. 1. OBS switch. IP traffic enters the switch from the Add port and is aggregated in the Aggregation Unit (AU). On completion of burst aggregation, AU sends a request to the Reservation Unit (RU) for wavelength reservation. The burst is then switched transparently through the network. At the destination, the burst is de-aggregated and IP packets leave the switch from the Drop port.
(DR). As the name suggests, the IR scheme considers reservation requests immediately and includes paradigms like Just-in-Time OBS (JIT-OBS) [4]. On the other hand, DR schemes allow a delay between the request and the actual reservation; the dominant paradigms in DR are Just-Enough-Time OBS (JET-OBS) [5], Horizon [6] and Assured Horizon [7]. Whereas there are pros and cons related to each scheme, in [8] the authors point out that, despite the conventional wisdom that JET and Horizon yield higher network utilization, under reasonable assumptions about current and future developments in the optical and electrical planes, the simplicity of JIT-OBS should outweigh the benefits of JET-OBS and Horizon. We have extended the Ptolemy simulator [9] to perform OBS network simulations [10]. While most performance analyses of OBS assume Pareto-distributed burst sizes arriving at Poissonian intervals at the OBS node, we departed from this technique and developed a framework in which no assumptions are made about the IP and burst traffic. The burst aggregation process is controlled by the parameter BAT. The HTTP–TCP source of [11] is used in this trial; it generates very realistic traffic. We believe this design methodology is closer to reality, allowing a more precise performance analysis of OBS networks. We present the impact of Burst Aggregation Time (BAT) on performance. Section 2 gives a brief introduction to OBS and JIT-OBS, Section 3 gives details of the model, and Section 4 gives the simulation setup and results.
2. Optical burst switching

The main characteristics of OBS are out-of-band signaling with an electronic control plane, data transported in the optical domain in the shape of variable-length bursts, and one-pass reservation, or TAG, with optional buffering [12]. Fig. 1 shows an OBS switch. IP traffic enters the Aggregation Unit (AU) via the Add port. Here the traffic is de-multiplexed on the basis of the destination addresses and pushed into Destination Based Queues (DBQ); Fig. 2 gives a visual demonstration of the concept. IP packets in a DBQ are aggregated into data bursts. There are two intuitive ways to aggregate bursts: either define a fixed Burst Size (BS) or define a Burst Aggregation Time (BAT). In this paper, we consider BAT as the aggregation strategy, as it is more in the spirit of OBS to have variable size bursts. BAT is a timer-based mechanism in which IP packets are assembled for a duration of time, at the expiry of which a Burst Header Control (BHC) packet is sent to reserve wavelength(s) along the total path. The data burst goes through electro-optical conversion and is sent an offset time, T_offset, after the control packet. The burst is switched transparently in the data plane and de-aggregated into IP packets at the egress node. The general burst aggregation algorithm is given as Algorithm 1, where Max Burst Length is set to a very large value so that only time-outs determine the size of bursts. The BHC packet is sent on a dedicated channel to the Reservation Unit (RU). If the RU can make the
Fig. 2. Destination Based Queues. Traffic entering from the Add port is demultiplexed on the basis of the destinations. IP packets are sent to the DBQs, where burst aggregation takes place.
reservation, it not only forwards the BHC to the next node but also informs the AU to which wavelength the burst should be sent. If RU cannot provide a reservation to the request, the burst is dropped. The reservation, or the scheduling, algorithm in our study is JIT, which will be explained in more detail in the next sub-section. The BHC travels on a dedicated wavelength to the next (core) node. At the core node, BHC is converted back into electrical form and reservation decisions are made. Just like the ingress, the core node’s RU reserves a certain wavelength, expecting that the burst will arrive after an off-set time. If there is no wavelength available, no reservation is made, which results in the dropping of the relevant burst. OBS is notorious for high loss probabilities. The most common solutions to counter losses in networks are: Wavelength Converters (WC), deflection routing, optical buffering in the shape of FDLs [12], etc. We considered full wavelength conversion for this study.
Algorithm 1 (Burst Aggregation Algorithm).

Start timer
Add IP packet to the burst
Update burst length
If (burst length == Max Burst Length) or timer expires {
    send control packet
    send burst after offset time
    reset timer
}

2.1. Just-In-Time OBS

The JIT-OBS scheduling is explained in [8] as: "an output wavelength is reserved for a burst immediately after the arrival of the corresponding setup message; if a wavelength cannot be reserved at that time, then the setup message is rejected and the corresponding burst is dropped". Fig. 3 explains the idea. Here, the setup message is the BHC packet, T_setup is the time required by the switch to process the BHC, T_cp is the mean queuing delay in the control plane, and T_oxc is the amount of time it takes the optical cross-connect to set up a connection from an input port to an output port. Say that, at the ingress at time t, the AU sends a BHC message to the RU. The RU has processed the message by time t + T_setup + T_cp, after which the wavelength reservation is initiated and completed by t + T_setup + T_cp + T_oxc. The lower limit of the offset time can be given as:

T_offset ≥ k ∗ T_setup + T_oxc + T_cp,   (1)

where k is the number of nodes on the path. The last term in (1) indicates the total control plane queuing delay; we will come up with a precise expression for T_cp in the next section. It is to be noted here that the burst does not arrive until time t + T_offset, therefore the wavelength remains idle for a time equal to (T_offset − T_setup − T_oxc − T_cp). Clearly, the choice of T_offset impacts the performance of the network: if it is too large, the wavelengths will be reserved needlessly, leading to lower utilization. Thus it is meaningful to remain as close as possible to the minimum offset time.
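Under the paper's setting (Max Burst Length effectively infinite, so only the timer fires), the timer-based aggregation of Algorithm 1 can be sketched as a small executable model. This is our own illustration, not the paper's Ptolemy implementation; all names and numbers are assumptions:

```python
class DestinationBasedQueue:
    """Toy model of one DBQ running the timer-based aggregation of
    Algorithm 1 (illustrative sketch; names and values are ours)."""

    def __init__(self, bat, max_burst_length=float("inf")):
        self.bat = bat                       # Burst Aggregation Time (s)
        self.max_burst_length = max_burst_length
        self.packets = []                    # sizes of queued IP packets (bytes)
        self.burst_length = 0                # current burst length (bytes)
        self.timer = 0.0                     # time elapsed since timer start (s)

    def add_packet(self, size, dt):
        """Queue one IP packet; dt is the time since the previous arrival.
        Returns a completed burst (list of packet sizes) or None."""
        self.timer += dt
        self.packets.append(size)
        self.burst_length += size
        if self.burst_length >= self.max_burst_length or self.timer >= self.bat:
            burst = self.packets             # a BHC would now be sent, and the
            self.packets = []                # burst itself after T_offset
            self.burst_length = 0
            self.timer = 0.0
            return burst
        return None

dbq = DestinationBasedQueue(bat=0.004)       # BAT = 4 ms
bursts = [b for b in (dbq.add_packet(1460, 0.001) for _ in range(8)) if b]
# One 1460 B packet per millisecond: the timer expires after every 4th packet.
```

The model only advances the timer on packet arrivals, which is enough to illustrate how BAT alone bounds the aggregation delay of each packet by [0, BAT].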
Fig. 3. OBS signaling. A control packet (BHC) is sent to the RU after the completion of the burst. The burst is sent after a time T_offset. Each switch on the path takes T_setup to process the BHC, plus a possible queuing time T_cp if there is congestion in the control plane. T_oxc is the time taken by the switch to set up a path between the input and output ports.
3. Impact of burst aggregation time We will discuss the impact of BAT on both planes of the OBS switch. We first discuss its impact on congestion in the control plane, and then come up with a model for the data plane by integrating the impact of BAT on losses and delay.
3.1. Impact of BAT on control plane

As BHC packets are sent on completion of burst aggregation, a low BAT will result in small bursts, causing an increase in the traffic of control packets. Thus, a small BAT could cause congestion in the control plane [13]. The additional queuing delays could lead to a situation in which bursts arrive before their BHC packets have been processed, in which case they are lost. This in turn would lead to higher burst losses.

Eq. (1) gives the lower bound for T_offset. Intuitively, a large BAT will result in a less congested control plane. However, the assembly time must not be so high as to cause large delays. Therefore, the choice of BAT is a design decision essentially tied to QoS constraints. Considering that the QoS constraints are more important, a solution would be to choose an offset time close to the lower limit that does not cause losses. To formalize its value, the only parameter to be calculated is the stochastic component of (1), i.e., the term T_cp; T_setup and T_oxc are hardware dependent. The BHC packets have a fixed size. Assuming a deterministic service time and Poissonian arrivals, an M/D/1 model is used to find the average queuing delay. The arrival rate of control packets, λ_cp, at the control plane is Y/BAT, where Y is the total number of Destination Based Queues (DBQ) sinking their traffic on the control plane. The mean queuing delay for an M/D/1 system is given as:

T_cp = ρ_cp / (2 ∗ (1 − ρ_cp) ∗ μ_cp),   (2)

where ρ_cp = λ_cp/μ_cp is the utilization of the control plane and μ_cp is the service, or processing, rate of the control plane. We believe that the inclusion of the stochastic component T_cp in the calculation of T_offset gives a more complete picture of the control plane. For a more detailed insight into the impact of T_offset on performance, we refer to [7], where it is used as a QoS differentiation parameter.

3.2. Impact of BAT on Round Trip Time (RTT)

The HTTP–TCP source used in this trial is modeled as an on–off source based on the assumption that, since most web-pages are small [11], they can be transported in the slow-start phase of TCP. These flows are termed short-lived. Moreover, the model is based on the assumption that there are no losses in the transport of short-lived flows. We adapt this model to the OBS network.

We briefly discuss the traffic source model. HTTP 1.1 is considered at the session level because of its prevalence. It sends web-pages through a persistent connection: a single connection is used to transport all the files, or includes, in a web-page. The usage of parallel connections is not modeled in
this work. We consider TCP-Reno, as it is one of the most dominant flavors at the transport layer. TCP first establishes a connection between the web-server and the web-client with a three-way handshake and then transmits the actual data. Starting from one packet, TCP keeps doubling the number of packets in a window until it reaches its Slow Start Threshold (SSthresh), after which it switches to the Congestion Avoidance (CA) phase. Given that there are no packet losses, the connection will open up to the Maximum Congestion Window (MaxCWND) and then maintain this window till the complete file is transmitted. The time between the transmission of two consecutive windows is the Round Trip Time (RTT). If there are packet losses, TCP will switch to other modes such as Fast Retransmit and Recovery (FRR) or Exponential Back-off (Exp-BO). We refrain from a lengthy discussion of TCP and refer to [14] for a deeper understanding. The aggregate throughput from M HTTP–TCP sources is given as [11]:

TP = M ∗ F / (N_nl ∗ RTT_prop + T_off),   (3)
where N_nl is the average number of round-trips TCP takes to transport a web-page of average size F under the assumption of no losses in the slow-start phase. T_off is the average off-time, or user think-time. RTT_prop is the total two-way propagation delay. The average on-time, T_on, is N_nl ∗ RTT_prop. In the case of OBS networks, however, there is an additional delay due to the burst aggregation process. The delay experienced by each segment due to aggregation is bounded by [0, BAT]. Since TCP is a closed-loop protocol, burst assembly will take place at both edges of the network. We simply assume that each segment sees 2 ∗ BAT extra delay due to aggregation, and that the transmission and queuing delays are negligible:

RTT = RTT_prop + 2 ∗ BAT.   (4)
For OBS networks, (3) then transforms to: F Nnl ∗ RTT prop + Toff Nnl ∗ RTT prop + Toff ∗ Nnl ∗ RTT + Toff F . TP = M ∗ Nnl ∗ RTT + Toff
TP = M ∗
(5) (6)
Eq. (6) indicates a monotone function; however, we will show in the next sub-section that TCP plays a complex role in determining the overall performance.
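The combined effect of (4) and (6) can be checked numerically. The sketch below uses hypothetical values for M, F, N_nl, RTT_prop and T_off; it only illustrates that (6) by itself decreases monotonically as BAT grows:

```python
def throughput_no_loss(m, f_bits, n_nl, rtt_prop, bat, t_off):
    """Aggregate throughput of Eq. (6) with the RTT dilation of Eq. (4)."""
    rtt = rtt_prop + 2 * bat                  # Eq. (4): 2*BAT extra delay
    return m * f_bits / (n_nl * rtt + t_off)  # Eq. (6)

# Hypothetical inputs: 190 sources, 60 kB pages, 5 slow-start rounds,
# 10 ms propagation RTT, 2 s mean think-time.
tp = [throughput_no_loss(190, 60e3 * 8, 5, 0.01, bat, 2.0)
      for bat in (0.001, 0.010, 0.040)]
assert tp[0] > tp[1] > tp[2]                  # monotone decrease with BAT
```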
3.3. Impact of BAT on losses

The dilation of RTT because of BAT results in reduced offered traffic. The offered traffic, based on (6), then contends for the finite number of wavelengths. If all the wavelengths are busy at the request time, the incoming burst(s) will be lost. We now establish the impact of losses on the offered traffic. As in [8], the output port is modeled as an M/M/K/K system, where K is the number of wavelengths. The burst loss probability, p_b, is then calculated through the Erlang-B formula, given as:

p_b = (ρ^m / m!) / (Σ_{i=0}^{m} ρ^i / i!),   (7)
where ρ is the traffic intensity and m is the number of servers, in this case the number of wavelengths. Because a data burst is generated when BAT expires, we simply define the arrival rate of data bursts from each DBQ as λ_dbq = 1/BAT. Then the aggregate arrival rate from W DBQs is λ = W/BAT. The service time is 1/μ = Z/C + T_offset, where Z is the mean burst size, C is the capacity of each wavelength, and T_offset is defined above. Considering that the IP traffic arrives at each DBQ and is aggregated for a time BAT, we simply define Z = BAT ∗ OT, where OT is the arrival rate of the offered traffic at each DBQ. The offered traffic can be calculated on the basis of (6). For example, in the topology of Fig. 4 there are two destinations, so there will be two DBQs at each edge; each DBQ then receives half of the offered traffic given by (6).

Now we calculate the burst and packet loss probabilities. Assuming that the burst losses are independent events, the probability of K_i successes before a loss is given as:

P(K_i = k) = (1 − p_b)^k ∗ p_b,   (8)

where k = 0, 1, 2, .... To calculate the packet loss probability, p, from the burst loss probability, we calculate the mean number of successfully delivered packets as:

E[γ] = Z ∗ Σ_{k=0}^{∞} (1 − p_b)^k p_b k = Z / p_b.   (9)

Then the packet loss probability is given as:

p = p_b / Z.   (10)

(Note: the mean of the geometric distribution in (9) has to be normalized with Σ_{k=1}^{∞} (1 − p_b)^k p_b = 1 − p_b.)
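The loss chain of this subsection, from BAT to burst size to Erlang-B blocking to per-packet loss, can be sketched as follows. The Erlang-B recursion replaces the direct sum of (7) for numerical stability; all traffic values are hypothetical, not the paper's:

```python
from math import factorial

def erlang_b(rho, m):
    """Blocking probability of Eq. (7), via the standard stable recursion
    B(rho, k) = rho*B(rho, k-1) / (k + rho*B(rho, k-1))."""
    b = 1.0
    for k in range(1, m + 1):
        b = rho * b / (k + rho * b)
    return b

def packet_loss(bat, ot_pkts, w, m, c_pkts, t_offset):
    """Chain BAT -> burst size -> load -> burst loss -> packet loss,
    following Eqs. (7) and (10). All numeric inputs are hypothetical."""
    z = bat * ot_pkts                    # mean burst size Z = BAT * OT (packets)
    lam = w / bat                        # aggregate burst arrival rate (bursts/s)
    service = z / c_pkts + t_offset      # mean holding time 1/mu (s)
    rho = lam * service                  # offered load (Erlangs)
    pb = erlang_b(rho, m)                # burst loss probability, Eq. (7)
    return pb / z                        # packet loss probability, Eq. (10)

# Cross-check the recursion against the direct formula of Eq. (7):
rho, m = 2.0, 4
direct = (rho**m / factorial(m)) / sum(rho**i / factorial(i) for i in range(m + 1))
assert abs(erlang_b(rho, m) - direct) < 1e-12

# Larger BAT -> larger bursts -> smaller per-packet loss probability:
assert packet_loss(10e-3, 3000, 2, 4, 8000, 32e-6) < packet_loss(1e-3, 3000, 2, 4, 8000, 32e-6)
```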
Fig. 4. Simulation topology.
Now recall that (6) is based on the assumption of no packet losses. This has to be modified. We focus our attention on the short-lived flows, in contrast to the long flows model used in [15], and determine the new mean on-time, T_on,new, in the presence of losses. The mean on-time of a short-lived TCP transmission is given as:

T_on,new = T_conn + T_ss + T_loss + T_rest,   (11)

where T_conn is the mean connection opening time, T_ss is the mean time spent in the slow-start phase, T_loss is the mean time spent to recover after a loss is detected, and T_rest is the mean time required to send the rest of the packets in the file.

The mean connection opening time is given as [16]:

T_conn = RTT + T_s ∗ ((1 − p)/(1 − 2p) − 1),   (12)

where T_s is the initial Time-Out (TO). For simplicity, it is assumed that TOs are rare and that TCP gets a chance to enter the slow-start phase; in the case of two or more TOs, TCP would switch to the congestion avoidance phase. The initial slow-start phase opens as described above and ends with the detection of a loss with probability 1 − (1 − p)^d, where d is the number of segments contained in a web-page. The mean number of successfully transmitted packets in the initial slow-start phase before a loss is detected is:

E[Y^ss] = (1 − (1 − p)^d)(1 − p) / p.   (13)

The mean time spent in the initial slow-start phase is then given as:

T_ss = RTT ∗ log_2(E[Y^ss] + 1).   (14)

The mean window at the end of the slow-start phase after a loss is detected is given as [17]:

E[W^ss] = ((1 − (1 − p)^d)(1 − p) + 2p) / (p ∗ g^2),   (15)

where g is a constant. The loss that ends the slow-start phase can be detected through a Retransmission Time-Out (RTO) or by Triple Duplicate acknowledgments (TD). Suppose that the congestion window is W^ss = q and, out of the q sent packets, h packets are acknowledged; in the next round W^ss = q + h and 2q packets are sent. If more than three packets from these 2q packets are acknowledged, the connection will see a TD loss detection, otherwise a TO will take place. The probability that the slow-start phase will end with RTO loss detection is given as [17]:

Q^ss = min(1, 2/E[W^ss]),   (16)

and 1 − Q^ss gives the probability that the slow-start phase will end with a TD loss detection. The mean time lost due to packet loss is given as:

T_loss = (1 − (1 − p)^d) ∗ (Q^ss ∗ E[Z^TO] + (1 − Q^ss) ∗ E[n]),   (17)
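As a numerical sketch, the slow-start terms (12) to (16) can be evaluated for hypothetical inputs (the equation forms follow the text; p, d, RTT and T_s below are made-up values, and g = 1.62 as in Section 4):

```python
from math import log2

def slow_start_terms(p, d, rtt, t_s, g=1.62):
    """Evaluate Eqs. (12) to (16) for one short-lived flow.
    p: packet loss probability, d: segments per page, rtt: round-trip
    time (s), t_s: initial time-out (s), g: slow-start growth constant."""
    t_conn = rtt + t_s * ((1 - p) / (1 - 2 * p) - 1)              # Eq. (12)
    e_y_ss = (1 - (1 - p) ** d) * (1 - p) / p                     # Eq. (13)
    t_ss = rtt * log2(e_y_ss + 1)                                 # Eq. (14)
    e_w_ss = ((1 - (1 - p) ** d) * (1 - p) + 2 * p) / (p * g**2)  # Eq. (15)
    q_ss = min(1.0, 2 / e_w_ss)                                   # Eq. (16)
    return t_conn, e_y_ss, t_ss, e_w_ss, q_ss

# Hypothetical inputs: 1% packet loss, d = 42 segments, 20 ms RTT, T_s = 1 s.
t_conn, e_y_ss, t_ss, e_w_ss, q_ss = slow_start_terms(0.01, 42, 0.02, 1.0)
assert t_conn > 0.02 and t_ss > 0 and 0 < q_ss <= 1
```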
where E[Z^TO] is the mean time lost because of a TO, and E[n] is the mean time lost due to Fast Retransmit and Recovery (FRR). They are given as [16]:

E[Z^TO] = T0 ∗ f(p) / (1 − p), with f(p) = 1 + p + 2p^2 + 4p^3 + 8p^4 + 16p^5 + 32p^6,   (18)

where T0 is the duration of the first TO and, from [17],

E[n] = RTT ∗ (2 − (1 − p)^(W^ss − 3) − (1 − p)^(W^ss)) / (1 − (1 − p)^(W^ss)),   (19)

where W^ss, the congestion window, is given as:

W^ss = min(MaxCWND, (E[Y^ss] + 2)/g^2).   (20)

The mean time required to send the rest of the packets in the file is based on the extended steady-state rate, rate_ca, the rate achieved by the connection in the congestion avoidance phase [17]:

rate_ca = ((1 − p)/p + (E[W^TD] − 1)(1 − p)) / ((log_{g2}(E[W^TD]/(2 ∗ g2)) + Q^TD(E[W^TD]) ∗ (E[W^TD]/2 + 2)) ∗ RTT + Q^TD(E[W^TD]) ∗ f(p) ∗ T0/(1 − p)),   (21)

where E[W^TD] is the average congestion window size given in (23) and g2 is a constant. It is to be noted that we have used the equation for E[W^TD] < MaxCWND, which is the interesting case; for the other case, we refer to [17]. Moreover, the equation is adapted to the fact that, in our implementation, there is one ACK for every received segment. Q^TD is the probability of loss detection by a TO:

Q^TD = min(1, 3/E[W^TD]),   (22)

where the average congestion window is given as:

E[W^TD] = −2(1 − 2p)/(3p) + sqrt(((2 − 4p)/(3p))^2 + 4(p + 2(1 − p/2))/(3p)).   (23)

Finally, the mean time to send the rest of the packets in the file is given as:

T_rest = (d − E[Y^ss]) / rate_ca.   (24)

Putting everything together, the aggregate traffic from M sources is then given by transforming (6) as:

TP = M ∗ F/(N_nl ∗ RTT + T_off) ∗ (N_nl ∗ RTT + T_off)/(T_on,new + T_off)   (25)
   = M ∗ F/(T_on,new + T_off).   (26)
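The loss-recovery terms (18) and (19) can likewise be sketched. The inputs are hypothetical, and f(p) is the usual time-out duration polynomial appearing in (18):

```python
def loss_recovery_times(p, w_ss, rtt, t0):
    """Evaluate Eqs. (18) and (19): mean time lost to a time-out, E[Z_TO],
    and to Fast Retransmit and Recovery, E[n]. Inputs are hypothetical."""
    f_p = 1 + p + 2*p**2 + 4*p**3 + 8*p**4 + 16*p**5 + 32*p**6
    e_z_to = t0 * f_p / (1 - p)                                   # Eq. (18)
    q = 1 - p
    e_n = rtt * (2 - q**(w_ss - 3) - q**w_ss) / (1 - q**w_ss)     # Eq. (19)
    return e_z_to, e_n

# Hypothetical: 1% loss, slow-start window of 12 segments, 20 ms RTT, T0 = 3 s.
e_z_to, e_n = loss_recovery_times(p=0.01, w_ss=12, rtt=0.02, t0=3.0)
assert e_z_to > 3.0        # a time-out always costs at least T0
assert 0 < e_n < 1.0       # FRR costs only a few RTTs here
```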
4. Simulation setup and results

We have extended the Ptolemy simulator to perform OBS network simulations. The bottleneck topology of Fig. 4 was used for the simulations; the topology-relevant settings are given in the figure. We followed [18] as a behavioral model of web traffic and assumed that the average web-page size is F = 60 kB. Then, for a Maximum Segment Size (MSS) of 1460 B, an average web-page translates to a total of d = 42 packets. The following settings are also relevant for the simulation setup: T_s = 1 s; T0 = 3 s; g = X_{1,2} = 1.62 and g2 = C_{1,2} in [17]. The choices of network capacity and offered traffic are scaled-down versions of real values of several Gbps; however, we are interested in showing a phenomenon that would still be relevant for more representative values. For example, the same results can be seen in [19], but without an explanation of the observations.

In this trial, OBS has full wavelength conversion capability, Y = 4 (traffic arrives from two DBQs on the client side and two DBQs on the server side) and W = 2. It is important to note that there are, in fact, five wavelengths available, but one is used as a control channel and the remaining four as data channels; only the number of data channels is shown in the figure. The offered traffic based on (3) at each DBQ is 35 Mbps. Thus, the total offered traffic is approximately 2 ∗ 35 Mbps = 70 Mbps (approximately 190 HTTP–TCP sources). It is to be kept in mind that this is the maximum traffic we can expect, assuming there are no losses and BAT = 0.

We first determine the T_offset value by calculating the stochastic component of (1). The BAT is to be varied from 1 ms to 40 ms. For the smallest BAT = 1 ms, T_cp = 2 ns, as visible in Fig. 5 for λ_cp = 4/(1 ms). From [8], we set T_setup = 1 µs and T_oxc = 20 µs for the control plane. We set T_offset = 32 µs. This setting removes the chance of congestion in the control plane.
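As a cross-check of the control-plane numbers, evaluating Eq. (2) with λ_cp = Y/BAT = 4/(1 ms) = 4000 packets/s and, as an assumption on our part, a BHC service rate of μ_cp = 1/T_setup = 10^6 packets/s reproduces a delay of about 2 ns:

```python
def mdl_delay(lam, mu):
    """Mean M/D/1 queuing delay, Eq. (2): rho / (2 * (1 - rho) * mu)."""
    rho = lam / mu                     # control plane utilization
    assert rho < 1, "control plane overloaded"
    return rho / (2 * (1 - rho) * mu)

# lam_cp = Y/BAT = 4 / 1 ms = 4000 BHC packets/s for the smallest BAT;
# mu_cp = 1/T_setup = 1e6 packets/s is our assumed service rate.
t_cp = mdl_delay(lam=4000.0, mu=1e6)
assert 1.9e-9 < t_cp < 2.1e-9          # about 2 ns, matching the quoted value
```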
It is also important to note that, for the calculation of the mean burst size, the arrival rate OT is calculated through (6). The calculation of p_b and p is then a straightforward task.
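A back-of-the-envelope check of the setup numbers, assuming the paper's F = 60 kB means 60,000 bytes:

```python
from math import ceil

F = 60_000                 # average web-page size (bytes); 60 kB assumed decimal
MSS = 1460                 # maximum segment size (bytes)
d = ceil(F / MSS)          # segments per average page
assert d == 42             # matches d = 42 quoted in the text

per_dbq = 35e6             # offered traffic per DBQ (bit/s), from the text
total = 2 * per_dbq        # two DBQs per edge
assert total == 70e6       # approximately 70 Mbps total, as stated
```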
Fig. 5. Queuing delay in the control plane based on M/D/1 system. The maximum queuing delay is for the smallest BAT.
Fig. 6. Effect of BAT on goodput. The goodput achieves its optimal value for BAT ≈ RTT/2. For smaller BATs the performance is limited by TCP, and for higher values the additional delays due to the burst aggregation come into play.
The BAT is increased from 1 ms to 40 ms and the goodput is measured. Fig. 6 gives the results. It is interesting to see that, for a small BAT, the performance is at its worst, as TCP pulls the traffic down because of its response to losses. As BAT becomes larger, the burst size becomes larger, giving the interesting insight, through (10), that the packet loss probability is reduced, encouraging TCP to operate at a better rate. This improvement caused by bigger bursts remains valid until BAT ≈ RTT_prop/2, after which the monotonicity embedded in the dilation of RTT becomes dominant, leading to decreased traffic. The optimum for both the analytical model and the simulation occurs at BAT ≈ RTT_prop/2. The trial shows a good match between the model and simulation results.
5. Conclusion

In this paper we have shown and explained the impact of burst aggregation time on performance by incorporating the latency of short-lived TCP flows under losses and the dilation effect caused by burst aggregation for both data and acknowledgment traffic. We have shown that an OBS network carrying short-lived web flows has an optimal point of operation that depends on the BAT.
Acknowledgement The research reported in this paper falls under the MultiTeraNet (PLANIP) project funded by the Ministry of Education and Research, Germany.
References

[1] A. Smith, Generalized MPLS signalling functional description, IETF Draft, October 2000.
[2] L. Xu, H.G. Perros, G. Rouskas, Techniques for optical packet switching and optical burst switching, IEEE Communications Magazine, January 2001.
[3] G.C. Hudek, D.J. Muder, Signalling analysis for a multi-switch all-optical network, in: IEEE ICC, 1995.
[4] J. Wei, J. Pastor, R. Ramamurthy, Y. Tsai, Just-in-time optical burst switching for multiwavelength networks, in: IFIP Broadband Communications, November 1999.
[5] M. Yoo, C. Qiao, Just-Enough-Time (JET): A high speed protocol for bursty traffic in optical networks, in: IEEE LEOS, August 1997.
[6] J. Turner, Terabit burst switching, Journal of High Speed Networks 8 (1) (1999) 3–16.
[7] K. Dolzer, Assured Horizon: an efficient framework for service differentiation in optical burst switch networks, in: Opticomm, 2002.
[8] J. Teng, G.N. Rouskas, A comparison of the JIT, JET, and Horizon wavelength reservation schemes on a single OBS node, in: First International Workshop on Optical Burst Switching, October 2003.
[9] Ptolemy simulator: http://ptolemy.eecs.berkeley.edu.
[10] S. Ahmad, S. Malik, Implementation of optical burst switching framework in Ptolemy simulator, in: IEEE E-Tech, 2004.
[11] K. Below, U. Killat, Reducing the complexity of realistic large scale internet simulations, in: IEEE GLOBECOM, 2003.
[12] C.M. Gauger, H. Buchta, E. Patzak, J. Saniter, Performance meets technology: an integrated evaluation of OBS nodes with FDL buffers, in: First International Workshop on Optical Burst Switching, WOBS 2003, October 2003.
[13] J. White, M. Zukerman, H.L. Vu, A framework for optical burst switching network design, IEEE Communications Letters 6 (6) (2002).
[14] W. Stevens, TCP/IP Illustrated, Vol. 1: The Protocols, Addison-Wesley, 1994.
[15] A. Detti, M. Listanti, Impact of segments aggregation on TCP Reno flows in optical burst switching networks, in: IEEE INFOCOM, 2002.
[16] N. Cardwell, S. Savage, T. Anderson, Modeling TCP latency, in: IEEE INFOCOM, 2000.
[17] D. Zheng, G. Lazarou, R. Hu, A stochastic model for short-lived TCP flows, in: IEEE ICC, 2003.
[18] H.-K. Choi, J.O. Limb, A behavioral model of web traffic, in: 7th Annual International Conference on Network Protocols, Toronto, Canada, 1999.
[19] X. Cao, J. Li, Y. Chen, C. Qiao, Assembling TCP/IP packets in optical burst switched networks, in: IEEE GLOBECOM, 2002.
in: First International Workshop on Optical Burst Switching, October 2003. Ptolemy simulator: http://ptolemy.eecs.berkeley.edu. S. Ahmad, S. Malik, Implementation of optical burst switching framework in ptolemy simulator, in: IEEE E-Tech, 2004. K. Below, U. Killat, Reducing the complexity of realistic large scale internet simulations, in: IEEE Global Communications Conference, GLOBECOM, 2003. C.M. Gauger, H. Buchta, E. Patzak, J. Saniter, Performance meets technology—an integrated evaluation of OBS nodes with FDL buffers, in: Proceedings of the First International Workshop on Optical Burst Switching, WOBS 2003, October 2003. J. White, M. Zukerman, H.L. Vu, A framework for optical burst switching network design, IEEE Communications Letters 6 (6) (2002). W. Stevens, TCP/IP Illustrated, Vol. 1 The Protocols, AddisonWesley, 1994. A. Detti, M. Listanti, Impact of segments aggregation on TCP reno flows in optical burst switching networks, in: IEEE INFOCOM, 2002. N. Cardwell, S. Savage, T. Anderson, Modeling TCP latency, in: IEEE INFOCOM, 2000. D. Zheng, G. Lazarou, R. Hu, A stochastic model for short-lived TCP flows, in: IEEE ICC, 2003. H.-K. Choi, J.O. Limb, A behavioral model of web traffic, in: 7th Annual International Conference on Network Protocols, Toronto, Canada, 1999. X. Cao, J. Li, Y. Chen, C. Qiao, Assembling TCP/IP packets in optical burst switched networks, in: Globecomm, 2002.