Optical Switching and Networking 10 (2013) 246–257
A channel reuse strategy with adaptive channel allocation for all-optical WDM networks

P.A. Baziana*, I.E. Pountourakis

School of Electrical and Computer Engineering, Department of Communications, Electronic & Information Engineering, National Technical University of Athens, 157 73 Zografou, Athens, Greece

* Corresponding author. E-mail: [email protected] (P.A. Baziana).
Article history: Received 4 May 2012; Received in revised form 22 February 2013; Accepted 25 February 2013; Available online 14 March 2013.

Abstract

Two fundamental problems of WDM networks are: (1) the high rate of control packet loss and (2) the high propagation delay paid for each (re)transmission. In this paper, we minimize the randomness with which stations access the control architecture by introducing a collision-free access scheme. We propose a synchronous protocol according to which, at the end of the propagation delay, each station applies a distributed algorithm for packet transmission that follows the data channel collision and receiver collision avoidance algorithms. We introduce two data transmission stages per cycle, separated by one packet transmission time. At the end of the first stage all data channels are free and can be reused by the remaining data packets during the second stage. The proposed protocol ensures totally collision-free operation. Its main advantage is that the data channel reuse strategy applied during the second stage gives the packets rejected during the first stage an enhanced transmission probability: they may retry transmission within the same cycle, without the control packet re-coordination that would add further propagation delay. Thus, a large number of data packets, possibly more than the number of data channels, can be transmitted per cycle, providing throughput improvement and delay reduction compared with other studies.

Keywords: Wavelength division multiplexing (WDM); Multi-channel control architecture (MCA); Performance improvement; Propagation delay latency; Receiver collisions
1. Introduction

In the literature, diverse access protocols suitable for Wavelength Division Multiplexing (WDM) networks of passive star topology have been proposed. Some of these WDM Access (WDMA) protocols are based on the random access method. The contention nature of such protocols essentially restricts the achievable performance and sets an upper bound on the maximum bandwidth utilization. The most important parameters that reduce the network performance are: (1) the control channel collisions, (2) the data channel collisions, and (3) the destination conflicts.
Recent studies in the literature propose the passive star topology as the most suitable one for WDM networks, especially at the local area scale, and introduce effective WDMA protocols to improve the performance measures. Thus, an access strategy that optimizes performance by minimizing the message delay in a WDM network of passive star topology is studied in [1]. Also, scheduling algorithms based on clustering techniques that aim to enhance performance in passive star WDM networks are presented in [2]. Moreover, a quality of service (QoS) prediction framework that accommodates applications with various QoS requirements in WDM networks of passive star topology is proposed in [3]. A determinant parameter that plays a key role in the performance evaluation is the round trip propagation
delay latency, as compared with the data packet transmission time. Although its effect is critical, only a few studies consider its influence on the network efficiency. For example, in [4] the propagation delay is measured between the stations and the hub, and this value is considered in the transmission schedule of a passive star network. In addition, for WDMA protocols with no pre-transmission coordination the propagation delay latency plays no role prior to the transmission phase, since there is no control information exchange [5]. On the contrary, for WDMA protocols with pre-transmission coordination schemes the propagation delay latency critically affects the data packet transmission scheduling at the control coordination phase. Thus, a collision-free pre-transmission coordination protocol is presented in [6] to predict the transmission requests and to eliminate the scheduling delay time in WDM passive star architectures. Also, in [7] the effect of the propagation delay on the performance of two reservation based access protocols is studied and the average packet delay is estimated.

Some WDMA protocols for the passive star topology based on pre-transmission coordination use a single separate common shared channel to exchange control information, while the remaining channels are used as data channels [8–11]. Since the maximum control information processing rate is limited by the electronic interface of the station, a fundamental problem arises: the inability of a station to receive and process all control packets that are transmitted over the single control WDM channel. This fact causes the electronic processing bottleneck [12]. This major obstacle also gives rise to the need to develop efficient network architectures and to adopt effective access techniques that overcome the problem.

Nevertheless, the critical problem that many WDMA protocols with pre-transmission coordination face is that of low bandwidth utilization and limited performance. Although they aim to reduce packet loss by exploiting the control information and adopting appropriate collision-free access schemes for the data channels, they usually restrict the obtained throughput to a rate much lower than that of the fiber. This is due to two combined reasons: (1) The control packet transmissions compete to gain access over the control channel(s) and they face control channel collisions [13,14]. In this way, the obtained throughput is limited by the percentage of collided control packets. (2) In addition, a number of data channels remain unused during a cycle period in order to avoid data channel and receiver collisions, although there may exist available data channels for transmission [10,13–20]. In this way, such protocols achieve low exploitation of the data channel bandwidth and essentially restrict the system performance. Thus, a fundamental objective arises: the need for efficient WDMA protocols to access the control and data channels, effectively facing the control and data channel collisions and the destination conflicts, while optimally exploiting the fiber bandwidth.

In this study, we adopt the Multi-channel Control Architecture (MCA) [13,14], which employs a number of control channels for control information exchange.
Specifically, the MCA utilizes a number v of control channels for the control packet transmission, while the remaining channels are used for the data packet transmission. In other words, when a station attempts transmission it randomly selects one of the v control channels of the MCA for its control packet. In this way, significantly less control information processing overhead is introduced compared with the single control channel case, and the electronic processing bottleneck problem is effectively faced.

Additionally, we adopt a "tell and wait" algorithm to access the MCA and the data channels, given that the round trip propagation delay is longer than the data packet transmission time. This ensures that the round trip propagation delay can be exploited as an acknowledgment period, after the end of which the data packet transmissions can be scheduled totally free from both control and data channel collisions and destination conflicts [13]. In particular, the proposed access algorithm totally avoids control packet collisions at the pre-transmission phase by assigning a specific control mini-slot to each station for transmission. In this way, there is no control packet loss, and the control information of all stations can be processed in a decentralized way to arrange the data channel access, avoiding both data channel and receiver collisions.

Additionally, in contrast to other studies like [13–20], where a number of data channels remain unused during a cycle period in order to avoid data channel and receiver collisions although there may exist available data channels for transmission, the proposed access scheme introduces a data channel reuse strategy within a cycle period and efficiently utilizes all the data channels for transmission, without leaving any of them unused, while still taking into consideration the data channel and receiver collision avoidance. In this way, it gives the opportunity to many data packets (more than the number of data channels) to gain access over the data channels in two collision-free transmission stages. In other words, the proposed access scheme avoids the access randomness on the control and data channels that other WDMA protocols like [13–20] introduce, and it applies an efficient data channel reuse access strategy that maximizes the probability of a successful data packet transmission. Moreover, the proposed protocol allows the data packets to retry transmission in the same cycle period without requiring control packet re-coordination, which in other protocols, for example [13–20], would increase the total delay experienced. Thus, the proposed access protocol, exploiting the data channel reuse strategy in conjunction with the MCA and the propagation delay latency consideration, achieves not only significant throughput enhancement but also essential delay reduction.

In contrast to the studies [13,14], in this paper we assume that the number of control channels in the MCA is less than the number of data channels. This provides a less costly implementation, since the required number of fixed tuned receivers per station is smaller. Moreover, it enhances the benefits provided by the data channel reuse
strategy, since only a small part of the total fiber wavelengths is used as control channels in the MCA while the remaining channels serve as data channels, essentially increasing the data packet transmission probability.

The main novelty of this study is the introduction of two transmission stages per cycle, in conjunction with suitable transmission algorithms that determine the collision-free data packet transmission. In other words, the proposed protocol schedules the collision-free data packet transmission during the first transmission stage according to access algorithms that totally avoid both data channel and receiver collisions, while it exploits the exchanged control information to arrange additional packet transmissions during the second transmission stage of the same cycle, based on extra access algorithms that introduce the data channel reuse and ensure receiver conflict avoidance, without requiring control information re-coordination. This constitutes the innovation of our study.

Our investigation is organized as follows: The network model and the assumptions are described in Section 2. In Section 3, the performance evaluation is studied and the benefits of the data channel reuse strategy are examined, for diverse numbers of control channels, data channels and nodes. Finally, the concluding remarks are outlined in Section 4.
2. Network model

We assume a passive star network, as Fig. 1 shows. The system uses v + N (v < N) wavelengths λc1, …, λcv, λd1, …, λdN to serve a finite number M of stations. The multi-channel system at wavelengths λc1, …, λcv forms the MCA and operates as the control multi-channel system, while the remaining N channels at wavelengths λd1, …, λdN constitute the data multi-channel system. The proposed MCA network model is described as [CC]^v TT [FR]^v [TR]. This means that there are v control channels and each station has a tunable transmitter that can be tuned to any wavelength in the set λc1, …, λcv, λd1, …, λdN. The outgoing traffic of a station is connected to an input of the passive star coupler. Each station uses v fixed tuned receivers, one for each control channel, and one tunable receiver that can be tuned to any of the data channels λd1, …, λdN. The incoming traffic to a station is split into v + 1 portions by a 1 × (v + 1) WDM splitter, as Fig. 1 shows.

The transmission time of a fixed size control packet is used as the time unit (control mini-slot), and the data packet transmission time normalized in time units is L (data slot). The control packet includes information about: (1) the source address, (2) the destination address, (3) the packet priority, and (4) the data wavelength λk, belonging to the set λd1, …, λdN, that has been chosen for the data packet transmission. The normalized round trip propagation time between any station and the star coupler hub, and on to any other station, is equal to R data slots (R · L time units) for all stations.

Both the control and the data channels use the same time reference, which we call a cycle. We define as cycle the time interval that includes: (a) the control packet transmission phase, which provides collision-free control packet transmission, plus (b) the normalized round trip propagation time R · L, plus (c) the data packet transmission phase, which offers the possibility of transmitting two contiguous data packets over each data channel.
The cycle time structure is presented in Fig. 2. Specifically, we assume that the control packet transmission phase lasts for K = div(M/v) + 1 time units. In this way, the control packet transmission phase on each MCA channel is divided into K time units, each of which is called a control mini-slot. Thus, at least M control mini-slots are defined over the whole MCA area, as Fig. 2 shows. Each station is assigned a specific control mini-slot (one of these mini-slots) in which to transmit its control packet. In this way, the control packet transmissions are totally collision-free.
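As an illustration of this fixed assignment, the following minimal Python sketch maps each station index to a control channel and a mini-slot within that channel. The specific round-robin mapping (station i → channel i mod v, mini-slot i div v) is an assumption made here for clarity; the paper only requires that every station own a unique mini-slot.

```python
def control_minislot_assignment(station_id: int, M: int, v: int):
    """Assign a (control channel, mini-slot) pair to a station.

    A hypothetical static mapping: stations are spread round-robin over the
    v MCA channels, so every station owns a unique mini-slot and the control
    phase is collision-free. K = div(M/v) + 1 mini-slots per channel suffice,
    since v * K >= M.
    """
    K = M // v + 1                 # mini-slots per control channel, Eq. (2)
    channel = station_id % v       # which MCA control channel to tune to
    minislot = station_id // v     # which mini-slot within that channel
    assert minislot < K
    return channel, minislot

# Example: M = 50 stations, v = 10 control channels -> K = 6 mini-slots
print(control_minislot_assignment(37, M=50, v=10))   # (7, 3)
```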
Fig. 1. Passive star multi-wavelength architecture.
Fig. 2. Cycle structure.
After the end of the control packet transmission phase, the outcome of the control information is known to all stations R · L time units later (acknowledgment time). Then, each station runs in a decentralized way the transmission decision algorithm (described in the Transmission Mode below) in order to transmit during the data packet transmission phase, in either the first or the second transmission stage. So, the cycle duration is defined as:

C = K + (R + 2) · L time units    (1)

where:

K = div(M/v) + 1 time units    (2)

We assume a common clock for all stations. The time axis is divided into contiguous cycles of equal length and the stations are synchronized for transmission on the control and data channels during a cycle. At any point in time each station is able to transmit at a given wavelength λT and simultaneously to receive at a wavelength λR. Finally, for the tunable transceivers we assume negligible tuning times and very large tuning ranges.

We assume that each station is equipped with a transmitter buffer with capacity of one data packet. If the buffer is empty the station is said to be free, otherwise it is backlogged. Packets are generated independently at each station following a geometric distribution, i.e. a packet is generated in each cycle with probability p. A backlogged station retransmits the unsuccessfully transmitted packet following a geometric distribution with probability p1 and defers the transmission by one cycle with probability (1 − p1). If a station is backlogged and generates a new packet, the new packet is lost. A free station whose transmission is canceled due to data channel or receiver collisions during a cycle becomes backlogged in the next cycle. A backlogged station becomes free again in the next cycle if it manages to gain access over a data channel (without being rejected due to data channel collisions) and is not aborted due to receiver collisions. In the following, the transmission and reception modes are presented. Also, in order to obtain a more realistic performance study, the receiver collision effect is considered.
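As a quick worked instance of (1) and (2), taking for illustration the parameter values used later in the evaluation (M = 50 stations, v = 10 control channels, L = 10 time units and R = 5 data slots):

K = div(50/10) + 1 = 6 time units,  C = K + (R + 2) · L = 6 + 7 · 10 = 76 time units,

i.e. the control phase occupies 6 mini-slots per MCA channel and each cycle spans 76 time units for this configuration.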
2.1. Transmission mode

At the beginning of each cycle, if a station has a data packet to transmit to another station, it first chooses randomly a data channel for the packet transmission. Then, it transmits a control packet over the control mini-slot assigned to it, in order to inform the other stations. As previously described, the control packets are transmitted without collisions and their information is known to all stations R · L time units after the end of the control packet transmission phase. At that time, every station is aware of the data channel claims of all stations, as well as their data packet priorities and destinations. Also at that time, all stations attempting transmission run, one after the other, the following access algorithms in order to decide on transmission in either the first or the second transmission stage. Algorithms 1 and 2 concern the collision-free transmission over the first transmission stage, while Algorithms 3 and 4 concern the collision-free transmission over the second transmission stage. Algorithms 2, 3 and 4 together constitute the data channel reuse strategy.

Algorithm 1. Is executed by all stations that have transmitted a control packet at the beginning of the cycle (Condition 1).
If two or more stations have selected the same data channel (of the first transmission stage) for transmission, only one of them gains access to the selected data channel, according to a defined arbitration rule (such as priority), in order to avoid data channel collisions. This algorithm is referred to as the data channel collision avoidance (DCCA) algorithm. In addition, if two or more stations have gained access to different data channels but have selected the same destination, only one of them is finally allowed to transmit over its selected data channel, according to the priority criteria, in order to avoid receiver collisions. This algorithm is called the receiver collision avoidance (RCA) algorithm. The data packet transmission is performed during the first transmission stage.

Algorithm 2. Data channel reuse: Is executed by the stations that have not gained access to the data channels in Algorithm 1 due to the data channel collision avoidance, and whose destinations differ from those of the data packets finally transmitted in Algorithm 1 (Condition 2). After running Algorithm 1, some data channels of the first transmission stage may remain unused. Thus, each station that satisfies Condition 2 is assigned an unused data channel of the first transmission stage for transmission (we may consider the correspondence of the station number to the first unused data channel number). Additionally, in order to avoid receiver conflicts, the RCA algorithm is performed: it is examined whether two or more of these stations have selected the same destination, in which case only one of them gains the right to transmit over the assigned data channel, according to the priority criteria. The data channel assignment (DCA) algorithm is completed either when all unused data channels have been assigned, or when all stations of Algorithm 2 have been served. The data packet transmission is performed during the first transmission stage.
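Before turning to the second-stage algorithms, the following Python sketch illustrates the first-stage decision logic of Algorithms 1 and 2. It is a minimal illustration written for this description, not the authors' simulator: the Request tuple, the priority convention and the function names are assumptions made here for clarity.

```python
from typing import NamedTuple

class Request(NamedTuple):
    station: int      # source station id
    channel: int      # data channel randomly chosen in the control packet
    dest: int         # destination station id
    priority: int     # higher value wins arbitration (assumed convention)

def rca(reqs):
    """Receiver collision avoidance: keep at most one request per destination."""
    winners, losers, taken = [], [], set()
    for r in sorted(reqs, key=lambda r: r.priority, reverse=True):
        if r.dest in taken:
            losers.append(r)
        else:
            taken.add(r.dest)
            winners.append(r)
    return winners, losers

def first_stage(requests, n_data_channels):
    """Algorithms 1 and 2: DCCA, RCA and first-stage data channel reuse (DCA)."""
    # Algorithm 1 / DCCA: one winner per claimed data channel (priority rule)
    per_channel = {}
    for r in requests:
        per_channel.setdefault(r.channel, []).append(r)
    dcca_winners, dcca_losers = [], []
    for claims in per_channel.values():
        claims.sort(key=lambda r: r.priority, reverse=True)
        dcca_winners.append(claims[0])
        dcca_losers.extend(claims[1:])

    # Algorithm 1 / RCA: one packet per destination among the DCCA winners
    scheduled, alg1_rca_losers = rca(dcca_winners)

    # Algorithm 2 / reuse: DCCA losers whose destination is not already served
    # (Condition 2) are mapped onto the data channels left unused above (DCA)
    unused = [c for c in range(n_data_channels)
              if c not in {r.channel for r in scheduled}]
    cond2 = [r for r in dcca_losers
             if r.dest not in {x.dest for x in scheduled}]
    alg2_winners, alg2_rca_losers = rca(cond2)
    for r, c in zip(alg2_winners, unused):        # stops when channels run out
        scheduled.append(r._replace(channel=c))

    # 'scheduled' transmit in the first stage; the two rejected sets are the
    # candidates for Algorithms 3 and 4 (second transmission stage)
    return scheduled, alg1_rca_losers, alg2_rca_losers
```

Algorithms 3 and 4, described next, reapply the same RCA arbitration to the two rejected sets returned above, this time over the data channels of the second transmission stage.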
Algorithm 3. Data channel reuse: Is executed by the stations that have not gained access to the data channels in Algorithm 1 due to receiver collisions (Condition 3). The data packet transmission is performed during the second transmission stage. Specifically, each station that satisfies Condition 3 is assigned a data channel of the second transmission stage of the cycle. Moreover, in order to avoid receiver conflicts, the RCA algorithm is applied: it is examined whether two or more of these stations have selected the same destination, in which case only one of them gains the right to transmit over the assigned data channel, according to the priority criteria.

Algorithm 4. Data channel reuse: Is executed by the stations that have not gained access to the data channels in Algorithm 2 due to receiver collisions, and whose destinations differ from those of the data packets finally transmitted in Algorithm 3 (Condition 4). The data packet transmission is performed during the second transmission stage, since some data channels may remain unused after running Algorithm 3. Specifically, each station that satisfies Condition 4 is assigned an unused data channel of the second transmission stage of the cycle. Moreover, in order to avoid receiver conflicts, the RCA algorithm is applied: it is examined whether two or more of these stations have selected the same destination, in which case only one of them gains the right to transmit over the assigned data channel, according to the priority criteria.

The transmission mode flow chart that explains the four transmission algorithms is given in Fig. 3. Moreover, for clarity, Figs. 4 and 5 provide the simulation diagrams of the actions performed by a station to run the DCCA algorithm with priority criteria and the RCA algorithm with priority criteria, respectively. In Figs. 4 and 5 we define as S the variable that represents the number of stations that execute each algorithm according to the corresponding condition (1, 2, 3, or 4).

2.2. Reception mode

After the data packet transmission, the destination waits R · L time units and then tunes its tunable receiver to the data channel specified by the above algorithms for the data packet reception.

3. Performance evaluation

In this section we present the numerical results of the proposed data channel reuse protocol. The numerical results are provided by a dedicated network simulator that was developed in the C programming language to simulate the proposed protocol. The simulator implements an extensive discrete-event simulation model and reproduces the actions performed by all stations during a number Nc of successive cycles, as Figs. 3–5 illustrate. The number Nc of successive cycles is chosen so that the derived results are obtained under steady state conditions. In our simulation, the simulator runs for Nc = 30,000 successive cycles. It is worth noting that the results could also be extracted for Nc = 15,000 successive cycles, in which case the performance measure values deviate on average by 7% from the previous case. Also, for each case study, the number of simulated packets during a cycle depends on the number of stations as well as on the birth probability p and the retransmission probability p1. On average, and assuming that p1 = 0.3, the following numerical results show that during each cycle more than 80% of the stations either transmit or retransmit and contribute to the performance measures under high new packet generation conditions (p = 0.9). The exact wall-clock time of each simulation run is not a meaningful metric, since it strictly depends on the programming environment as well as on the computer hardware. The results take the receiver collision phenomenon into account, for a more realistic study. In the presented numerical evaluations, we assume that the data packet length is L = 10 time units.
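To make the simulation procedure concrete, the following rough Python sketch shows the per-cycle discrete-event loop just described (geometric packet generation with probability p, retransmission of backlogged stations with probability p1, and a per-cycle scheduler). It is an illustrative reconstruction, not the authors' C simulator; the stand-in scheduler below implements only Algorithm 1, whereas the full protocol would also run Algorithms 2–4.

```python
import random

def algorithm1_only(requests, n_channels):
    """Stand-in scheduler: DCCA + RCA (Algorithm 1 only), first claim wins.
    The full protocol also runs Algorithms 2-4 (data channel reuse)."""
    winners_per_channel = {}
    for s, ch, dest in requests:                    # DCCA: one claim per channel
        winners_per_channel.setdefault(ch, (s, dest))
    sent, taken_dests = [], set()
    for s, dest in winners_per_channel.values():    # RCA: one packet per receiver
        if dest not in taken_dests:
            taken_dests.add(dest)
            sent.append(s)
    return sent

def simulate(M, N, p, p1, scheduler=algorithm1_only, n_cycles=30_000, seed=1):
    """Per-cycle discrete-event skeleton; returns mean packets delivered per cycle."""
    rng = random.Random(seed)
    backlogged = set()           # stations still holding an undelivered packet
    delivered = 0
    for _ in range(n_cycles):
        # free stations generate a packet w.p. p, backlogged ones retry w.p. p1
        attempts = []
        for s in range(M):
            if s in backlogged:
                if rng.random() < p1:
                    attempts.append(s)
            elif rng.random() < p:
                backlogged.add(s)
                attempts.append(s)
        # each attempting station announces a random data channel and destination
        requests = [(s, rng.randrange(N), rng.randrange(M)) for s in attempts]
        # after the acknowledgment time, all stations run the same scheduler
        sent = scheduler(requests, N)
        for s in sent:
            backlogged.discard(s)   # served stations become free next cycle
        delivered += len(sent)
    return delivered / n_cycles

# e.g. simulate(M=50, N=30, p=0.9, p1=0.3)
```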
Fig. 3. Protocol flow chart.
In the following Figs. 6 and 7, the dependence of the proposed protocol performance on the number v of control channels is studied. Specifically, Fig. 6 presents the throughput per data channel Sd,rc versus the birth probability p, for M = 50, N = 30, R = 5 and v = 5, 10, 15 control channels. As observed, for a given value of R and fixed values of M, N and p, Sd,rc rises as v increases. This is due to the fact that the cycle duration C decreases as v increases, as Eq. (1) denotes. In other words, as v increases, the time reference (cycle) over which the successful data packet transmissions are counted becomes shorter. Since Sd,rc is inversely proportional to C, it is evident that Sd,rc increases with v. This can be noticed, for example, for p = 0.8, M = 50, N = 30 and R = 5, where Sd,rc is: 0.16 for v = 5, 0.168 for v = 10, and almost 0.18 for v = 15. As an immediate result, as v rises the performance improves: the system reaches higher Sd,rc values and lower delay Drc values, since the cycle duration decreases and the bandwidth utilization is enhanced. This is validated in Fig. 7, which presents the Drc curves versus Sd,rc, for M = 50, N = 30, R = 5, and v = 5, 10, 15 control channels. For example, for Sd,rc = 0.14 it is: Drc = 99 for v = 5, Drc = 90 for v = 10, and Drc = 86 for v = 15.

On the contrary, the variation of the number of data channels N causes the reverse behavior. It is obvious that
as N increases, the probability of a successful data packet transmission over the data multi-channel system increases too; since this traffic is spread over more channels, the throughput per data channel Sd,rc decreases as N increases. Also, for this reason, the probability of a data packet being rejected at the destination due to receiver collisions increases, which further reduces Sd,rc as N increases. This behavior is noticed in Fig. 8, which presents the throughput per data channel Sd,rc versus the birth probability p, for M = 50, v = 10, R = 5 and N = 30, 40, 50 data channels. The above results are observed, for example, for p = 0.9, where Sd,rc is: 0.17 for N = 30, 0.13 for N = 40, and 0.1 for N = 50. The above remarks are confirmed in Fig. 9, which depicts the Drc curves versus Sd,rc, for M = 50, v = 10, R = 5, and N = 30, 40, 50 data channels. Also, it is worth mentioning that as N increases the number of backlogged stations increases too, due to the rise of receiver collisions. This causes the significant increase of the delay Drc with N, as Fig. 9 shows.

The performance as a function of the node population is studied in Fig. 10, which presents the Sd,rc curves versus the birth probability p, for v = 10, N = 30, R = 5, and M = 50, 75, 100 stations. It is observed that as M increases, Sd,rc increases too. This is because, as M increases for fixed N, v, R and p, the offered load to the system increases.
Fig. 4. Simulation diagram of the actions performed by a station to run the DCCA algorithm.
Also, it is noticeable that for small populations (M = 50) the system has not reached saturation yet and is able to serve all the incoming traffic. This is noticed, for example, for M = 50, where the Sd,rc curve is almost linear for all values of p. On the contrary, for large populations the protocol reaches congestion and provides its maximum value of Sd,rc. This is observed, for example, for M = 100, where the Sd,rc curve is not linear, reaches its maximum value at p = 0.8, and almost maintains it for higher values of p.

Finally, the effect of the propagation delay latency R on the system performance is studied in Figs. 11 and 12. Specifically, Fig. 11 shows the Sd,rc curves versus the birth probability p, for M = 50, v = 10, N = 30, R = 0, 5, 10. It is observed that Sd,rc is a decreasing function of R. This is understood, since the cycle duration C is proportional to R, as Eq. (1) denotes. So, it is evident that Sd,rc decreases as R increases. For example, for p = 0.9, Sd,rc is: 0.47 for R = 0, 0.17 for R = 5, and 0.1 for R = 10. Also, as an immediate result, for the same value of Sd,rc the delay Drc increases as R rises. This is validated in Fig. 12, which presents the delay Drc curves versus the
throughput per data channel Sd,rc, for M = 50, v = 10, N = 30, R = 0, 5, 10. For example, for Sd,rc = 0.1 it is: Drc = 160 for R = 10, Drc = 80 for R = 5, and Drc = 27 for R = 0. The last two figures representatively depict the significant effect of R on the system performance and expose the inaccurate performance estimation of studies that ignore its influence.

3.1. Protocol performance comparison

In this sub-section, we study the advantages of the proposed access protocol and, in particular, we investigate the benefits provided by the proposed data channel reuse strategy. In order to obtain a quantitative basis for the estimation of the proposed protocol performance, we compare its performance measures with those of a related access scheme which adopts the same collision-free control packet coordination scheme but applies only Algorithm 1 for data packet transmission during the first transmission stage. This means that it does not exploit the data channel reuse strategy and simply uses a single data slot per cycle for data packet transmission.
Fig. 5. Simulation diagram of the actions performed by a station to run the RCA algorithm.
In this way, the comparison representatively describes the performance enhancement obtained by the proposed channel reuse strategy, as shown in the figures. It is evident that the time reference definition, i.e. the cycle duration, differs between the two compared protocols. Specifically, the cycle duration Cs of the protocol without the data channel reuse strategy is smaller than that of the proposed one, i.e.

Cs = K + (R + 1) · L time units    (3)
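Comparing (3) with (1), the two cycle definitions differ by exactly one data slot,

C − Cs = [K + (R + 2) · L] − [K + (R + 1) · L] = L,

i.e. the proposed protocol pays one extra data slot per cycle in exchange for the second, reuse-based transmission stage.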
Since C ≠ Cs, the performance comparison should not consider the measures obtained per cycle; instead, it assumes a specific network configuration. Thus, the number of stations is assumed to be M = 50. The number of WDM data channels is the variable N, while the number of WDM control channels in the MCA is the variable v. Each channel is assumed to operate at a transmission rate of Rw = 1 Gbit/s. Since the vast majority of Local Area Networks (LANs) are Ethernet based, we assume that the data packets are fixed-sized, with size equal to the Ethernet Maximum Transfer Unit (MTU) frame size, i.e. Z = 12,000 bits (1500 bytes). Thus, the data packet transmission time is T = Z/Rw = 12 μs. Since the control packet carries the source address, the destination address, the priority and the selected data channel number, its size is 2x + 1 + y bits, where 2^(x−1) < M ≤ 2^x and 2^(y−1) < N ≤ 2^y. The transmission time of each control packet is tr = (2x + 1 + y)/Rw. Assuming that the propagation delay latency R is 5 times the data packet transmission time (R = 5 data slots), the propagation delay time is Prop = 5 · T = 60 μs. Finally, the cycle duration is given by (1) for the proposed protocol with the data channel reuse strategy, and by (3) for the protocol without the data channel reuse strategy.
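For example, for the configuration examined in Fig. 13 (M = 50, N = 30), these relations give x = 6 and y = 5, so under the stated assumptions the control packet occupies

2x + 1 + y = 12 + 1 + 5 = 18 bits,  tr = 18 bits / 1 Gbit/s = 18 ns,

while T = 12 μs and Prop = 60 μs, as above.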
Fig. 6. Throughput per data channel Sd,rc versus birth probability p, for M = 50, N = 30, R = 5 and v = 5, 10, 15.

Fig. 7. Delay Drc versus throughput per data channel Sd,rc, for M = 50, N = 30, R = 5, and v = 5, 10, 15.

Fig. 8. Throughput per data channel Sd,rc versus birth probability p, for M = 50, v = 10, R = 5, and N = 30, 40, 50.

Fig. 9. Delay Drc versus throughput per data channel Sd,rc, for M = 50, v = 10, R = 5, and N = 30, 40, 50.

Fig. 10. Throughput per data channel Sd,rc versus birth probability p, for v = 10, N = 30, R = 5, and M = 50, 75, 100.
The advantages provided by the proposed data channel reuse strategy are studied in the following Figs. 13 and 14, where the performance comparison between the two protocols is investigated. Specifically, Fig. 13 presents the delay Drc curves versus the throughput per data channel Sd,rc, for M = 50, N = 30, R = 5, and for diverse numbers of MCA channels v = 1, 5, 10, 15, for the proposed protocol with the data channel reuse strategy and for the relative protocol without it. As representatively shown, the proposed protocol provides significant performance improvement for all values of v compared with the protocol without the data channel reuse strategy. In particular, for all values of v and for the same Sd,rc value, the proposed protocol with the data channel reuse strategy provides essentially lower Drc values. For example, for Sd,rc = 5 Mbits/s the Drc experienced: (a) for the proposed protocol with the data channel reuse strategy is 1.5 time units for v = 5, 1.3 time units for v = 10 and 1.2 time units for v = 15, while (b) for the protocol without the data channel reuse strategy it increases to 2.1 time units for v = 5, 1.8 time units for v = 10 and 1.6 time units for v = 15. This means that the proposed protocol with the data channel reuse strategy provides almost 27% delay reduction for all values of v. The performance improvement is particularly noticeable for a system that employs a single control channel, as most protocols in the
Fig. 11. Throughput per data channel Sd,rc versus birth probability p, for M = 50, N = 30, v = 10, and R = 0, 5, 10.

Fig. 12. Delay Drc versus throughput per data channel Sd,rc, for M = 50, N = 30, v = 10, and R = 0, 5, 10.
literature do. As Fig. 13 depicts, in this case (v = 1), for Sd,rc = 5 Mbits/s the proposed protocol that exploits the data channel reuse strategy provides almost 44% Drc reduction compared to the protocol without this strategy. This behavior is explained by the following fact: in the protocol without the data channel reuse strategy, if a station does not succeed in transmitting a data packet after running Algorithm 1, due to either the DCCA or the RCA scheme, it cancels its transmission until the next cycle. This means that it has to re-transmit the control packet during a following cycle and to wait for an additional propagation delay latency before re-trying the data packet transmission, which consequently results in very high delay values. On the contrary, the proposed protocol with the data channel reuse strategy allows the stations to retry a data packet transmission in the same cycle period without requiring control packet re-coordination.
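A rough quantification of this difference follows from the cycle definitions: without reuse, a packet rejected by the RCA scheme waits at least one further cycle Cs before it can even re-coordinate, whereas with reuse it may be served in the second stage of the same cycle, only one data slot later. For the normalized parameters used above (M = 50, v = 10, L = 10, R = 5), the minimum per-retry penalty is therefore roughly

Cs = K + (R + 1) · L = 6 + 6 · 10 = 66 time units without reuse, versus L = 10 time units with reuse.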
Fig. 13. Protocols performance comparison: delay Drc versus throughput per data channel Sd,rc, for M = 50, N = 30, R = 5, and v = 1, 5, 10, 15.

Fig. 14. Protocols performance comparison: delay Drc versus throughput per data channel Sd,rc, for M = 50, v = 10, R = 5, and N = 30, 40, 50.
In this way, it provides an essential total delay reduction, as Fig. 13 depicts. This behavior is more noticeable in the case of a single control channel (v = 1), as previously discussed. Moreover, the protocol without the data channel reuse strategy reaches very low Sd,rc values while providing very high delay. This means that for all values of v it reaches saturation at low offered loads, despite the fact that its cycle duration is smaller than that of the proposed protocol. On the other hand, the proposed protocol with the data channel reuse strategy is able to serve significantly more incoming traffic while keeping essentially low delay values. For example, the maximum throughput obtained is: (a) for the proposed protocol with the data channel reuse strategy, 23 Mbits/s for v = 5, 24 Mbits/s for v = 10 and 24.8 Mbits/s for v = 15, and (b) for the protocol without the data channel reuse strategy, 12.5 Mbits/s for v = 5, 13 Mbits/s for v = 10 and 13.4 Mbits/s for v = 15. In other words, the proposed protocol with the data channel reuse strategy provides almost 85% maximum throughput improvement for all
values of v. Especially, in the case of a single control channel (v = 1), the maximum throughput improvement provided by the data channel reuse strategy reaches 114%. The explanation comes from the fact that the proposed protocol, which exploits the data channel reuse strategy, provides three additional algorithms for possible transmission and supports an effective data channel allocation that maximizes the transmission possibility. In other words, the protocol without the data channel reuse strategy gives access only to those stations which are not aborted by the DCCA and RCA schemes in Algorithm 1. On the contrary, the proposed protocol with the data channel reuse strategy gives additional access possibilities to those stations whose transmission is canceled by the DCCA or RCA schemes in Algorithm 1. In this way, the proposed protocol significantly enhances the data packet transmission probability, providing impressively higher values of Sd,rc and consequently lower values of Drc, especially in the case of a single control channel (v = 1).

It is evident that for the same values of v, M, and R, as the number N rises the proposed protocol with the data channel reuse strategy provides a higher transmission probability. This is due to the fact that as N increases, the three additional transmission algorithms that the protocol supports contribute essentially to the performance improvement. This behavior is observed in Fig. 14, which presents the delay Drc curves versus Sd,rc, for M = 50, v = 10, R = 5, and N = 30, 40, 50 data channels for both protocols. For example, the proposed protocol with the data channel reuse strategy provides a maximum throughput improvement of 84.6% for N = 30, 41.6% for N = 40 and 27.3% for N = 50. Although the cycle duration of the proposed protocol with the data channel reuse strategy is larger than that of the protocol without it, the former provides essentially lower Drc values than the latter. This behavior is an immediate result of the significant transmission probability enhancement and the essential throughput improvement that the former protocol provides. Moreover, it is explained by the fact that the proposed protocol with the data channel reuse strategy allows the transmission of more data packets per cycle, decreasing the total delay, as previously described.

Summarizing, the overall performance improvement is presented in the following Tables 1 and 2, which give the percentage improvement in Sd,rc and Drc provided by the proposed protocol with the data channel reuse strategy compared with the protocol without the data channel reuse strategy, for diverse numbers of v and N, for M = 50, R = 5 and p = 0.8.

Table 1. Performance improvement for N = 30, M = 50, R = 5, p = 0.8.

v      Sd,rc improvement (%)     Drc improvement (%)
1      167                       44
5      100                       118
10     84                        120
15     77.7                      124

Table 2. Performance improvement for v = 10, M = 50, R = 5, p = 0.8.

N      Sd,rc improvement (%)     Drc improvement (%)
30     100                       118
40     70                        93.7
50     53                        80.6
4. Conclusions

In this paper, we study a synchronous transmission WDMA protocol for the passive star topology that manages to essentially improve the bandwidth utilization, as compared with other studies. This is achieved by introducing an efficient data channel reuse strategy and applying two transmission stages in each cycle period. This means that the proposed WDMA protocol provides an enhanced data packet transmission probability, since during the second transmission stage it permits the successful transmission of many of the data packets that are rejected during the first stage. The access scheme adopts a pre-transmission coordination scheme that totally avoids control packet collisions. In this way, after the propagation delay time the control information is available to all stations, which schedule in a decentralized way data transmissions free from data channel and receiver collisions. In contrast to other studies, the proposed WDMA protocol exploits all the data channels for successful transmission and gives the opportunity to a large number of data packets (larger than the number of data channels) to be successfully transmitted during a cycle period. As a result, the proposed WDMA protocol achieves significant performance enhancement and effectively faces the critical problem that other protocols introduce: that of low bandwidth utilization. This result is validated by comparative numerical evaluations which prove that the proposed WDMA protocol with the data channel reuse strategy not only provides significant throughput improvement, but also achieves an essential delay reduction, for various network parameters. As representatively shown, the throughput improvement reaches up to 100% for specific numbers of MCA channels, while the delay improvement reaches almost 124%. Particularly for the single control channel case, the proposed data channel reuse strategy provides 167% throughput improvement under high load conditions. Thus, for a given network topology and offered load, the determination of the number of MCA channels should take into account the required throughput and delay improvement as well as the relative financial cost.

The proposed WDMA protocol requires perfect time synchronization among all stations in order to support the control and data packet transmissions. This requirement implies a more complicated access strategy with processing overhead and could appear as a protocol
disadvantage. For this reason, it is an engineering decision to balance the cost of synchronization against the significant performance enhancement obtained. The presented work constitutes a useful basis for the development and evaluation of new access schemes suitable for similar WDM network architectures. As future work, the study of the number of contiguous transmission stages per cycle, as a function of the network parameters, would be scientifically interesting.

References

[1] Xiaohong Huang, Maode Ma, Optimal scheduling for minimum delay in passive star coupled WDM optical networks, IEEE Transactions on Communications 56 (8) (2008) 1324–1330.
[2] S. Petridou, P. Sarigiannidis, G. Papadimitriou, A. Pomportsis, On the use of clustering algorithms for message scheduling in WDM star networks, IEEE Journal of Lightwave Technology 26 (17) (2008) 2999–3010.
[3] Xiaohong Huang, Maode Ma, A heuristic adaptive QoS prediction scheme in single-hop passive star coupled WDM optical networks, Journal of Network and Computer Applications 34 (2) (2011) 581–588.
[4] E. Modiano, R. Barry, A medium access control protocol for WDM-based LANs and access networks using a master/slave scheduler, IEEE Journal of Lightwave Technology 18 (2000) 461–468.
[5] A. Ganz, Z. Koren, WDM passive star: protocols and performance analysis, Proceedings IEEE INFOCOM (1991) 991–1000.
[6] P.G. Sarigiannidis, G.I. Papadimitriou, A.S. Pomportsis, S-POSA: a high performance scheduling algorithm for WDM star networks, Photonic Network Communications 11 (2006) 211–227.
[7] R. Chipalkatti, Z. Zhang, A.S. Acampora, Protocols for optical star-coupler network using WDM: performance and complexity study, IEEE Journal on Selected Areas in Communications 11 (1993) 579–589.
[8] M.I. Habbab, M. Kavehrad, C.E.W. Sundberg, Protocols for very high-speed optical fiber local area networks using a passive star topology, IEEE Journal of Lightwave Technology LT-5 (1987) 1782–1794.
[9] N. Mehravari, Performance and protocol improvements for very high speed optical fiber local area networks using a passive star topology, IEEE Journal of Lightwave Technology 8 (1990) 520–530.
[10] G.N.M. Sudhakar, N.D. Georganas, M. Kavehrad, Slotted Aloha and reservation Aloha protocols for very high-speed optical fiber local area networks using passive star topology, IEEE Journal of Lightwave Technology 9 (10) (1991) 1411–1422.
[11] J. Lu, L. Kleinrock, Wavelength division multiple access protocol for high-speed local area networks with a passive star topology, Performance Evaluation 16 (1992) 223–239.
[12] P.A. Humblet, R. Ramaswami, K.N. Sivarajan, An efficient communication protocol for high-speed packet switched multichannel networks, IEEE Journal on Selected Areas in Communications 11 (1993) 568–578.
[13] P.A. Baziana, I.E. Pountourakis, Performance optimization with propagation delay analysis in WDM networks, Computer Communications 30 (2007) 3572–3585.
[14] I.E. Pountourakis, P.A. Baziana, A collision-free with propagation latency WDMA protocol analysis, Optical Fiber Technology 13 (2007) 160–169.
[15] I.E. Pountourakis, Performance evaluation with receiver collisions analysis in very high-speed optical fiber local area networks using passive star topology, IEEE Journal of Lightwave Technology 16 (1998) 2303–2310.
[16] I.E. Pountourakis, A. Stavdas, A WDM control architecture and destination conflicts analysis, International Journal of Communication Systems 15 (2002) 161–174.
[17] P.A. Baziana, I.E. Pountourakis, A dynamic procedure with propagation delay analysis for performance optimization in WDM networks, WSEAS Transactions on Computer Research 1 (2006) 1–6.
[18] I.E. Pountourakis, P.A. Baziana, Markovian receiver collision analysis of high-speed multi-channel networks, Mathematical Methods in the Applied Sciences 29 (2006) 575–593.
[19] I.E. Pountourakis, P.A. Baziana, Multi-channel multi-access protocols with receiver collision Markovian analysis, WSEAS Transactions on Communications 4 (2005) 564–569.
[20] I.E. Pountourakis, P.A. Baziana, G. Panagiotopoulos, Propagation delay and receiver collision analysis in WDMA protocols, in: 5th International Symposium on Communication Systems, Networks and Digital Signal Processing, Patras, Greece, 2006, pp. 120–124.