Performance comparison of scheduling algorithms in network mobility environment


Computer Communications 31 (2008) 1727–1738 www.elsevier.com/locate/comcom

Yaning Wang *, Linghang Fan, Dan He, Rahim Tafazolli

Centre for Communication System Research, University of Surrey, Guildford, UK

Received 22 February 2007; received in revised form 6 November 2007; accepted 6 November 2007. Available online 4 December 2007.

Abstract

Network mobility (NEMO) supports a network moving as a whole, which may cause the bandwidth on its wireless link to vary with time and location. The quick and frequent bandwidth fluctuation makes resource reservation and admission control poorly scalable and heavy in overhead. A feasible solution to this problem is to use scheduling algorithms that optimise the resource distribution based on the varying available bandwidth. In this paper, the performance of several well-known priority queue (PQ) and fair queue (FQ) scheduling algorithms is compared, and their advantages and disadvantages in the NEMO environment are analysed. Moreover, a novel scheduling algorithm, named adaptive rotating priority queue (ARPQ), is proposed to avoid the problems of the existing algorithms. ARPQ operates with a "priority first, fairness second" policy: it guarantees the delay bounds for the flows with higher priorities and maintains reasonable throughput for the flows with lower priorities. The simulation results show that ARPQ outperforms all the existing scheduling algorithms in mobile networks, whose capacities are time-varying and location-dependent. © 2007 Elsevier B.V. All rights reserved.

Keywords: Scheduling algorithms; Network mobility

1. Introduction

In recent years, network mobility (NEMO) [1,3] has received more and more attention. Because of the high mobility of a NEMO subnet, the capacity of the wireless link between the mobile router (MR) of the NEMO subnet and the base station (BS) in the access network varies as the NEMO subnet changes location. Hence, the available radio resources cannot always support all the sessions of all the users in a NEMO subnet. Although admission control and dynamic service level specification (SLS) renegotiation mechanisms [12,13] can be used to minimise the impact of the varying capacity, both of them suffer from the scalability problem caused by the frequent capacity variation. Another feasible solution to this problem is a scheduling algorithm that can dynamically change the weights of

* Corresponding author. Tel.: +44 1483682292. E-mail address: [email protected] (Y. Wang).

0140-3664/$ - see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.comcom.2007.11.017

the different traffic classes to optimise the resource distribution, and this is the focus of this paper. In this paper, we make a comprehensive survey of the existing scheduling algorithms used in wired and wireless networks; these algorithms can be classified as fair queue (FQ) and priority queue (PQ) algorithms. The basic idea of FQ is to distribute the available resources to the different member sessions or classes with a set of pre-defined weights. Most FQ algorithms designed for wireless networks assign weights to the different classes statically [23]. As an improvement on static FQ algorithms, some FQ approaches apply dynamic weight distribution to deal with varying traffic profiles or changing session requirements [15,16]. PQ algorithms schedule the different classes of flows and transmit packets according to their priorities: unlike FQ, PQ processes the packets in the queues with higher priority earlier than those with lower priority. However, neither FQ nor PQ algorithms work properly in the NEMO environment. The existing FQ algorithms, which do not consider the fluctuation of the available capacity,


can cause the static weight-delay problem, in which the sessions of all priorities suffer long delays. The PQ algorithms with static priorities emphasise the importance of the high priority queues and allocate as many resources as possible to high priority traffic. When the resources are used up by the higher priority traffic, the lower priority traffic has to wait. This may cause the starvation problem, in which the low priority queues cannot transfer packets at all. To avoid the problems of the existing FQ and PQ algorithms in the NEMO environment, a novel scheduling algorithm, namely the Adaptive Rotating Priority Queue (ARPQ) [18], is proposed in the second part of this paper. As a priority queuing algorithm, ARPQ differentiates the traffic classes into different priorities. To keep relative fairness for the lower priority classes, ARPQ adaptively changes the weights of the different priority classes with the fluctuation of the available bandwidth, so that a maximum number of higher priority queues can be given their desired bandwidth while a portion of bandwidth is allocated to their lower priority counterparts to share. By this means, ARPQ considers both the performance of the higher priority queues and the fairness of the lower priority ones. ARPQ not only combines the advantages of both PQ and FQ, but also takes the mobile network's characteristics into account. In this way, it can keep the higher priority queues within their tolerable delays while providing much better throughput to the lower priority queues in mobile networks. The performance of the existing scheduling algorithms, i.e., several selected typical FQ and PQ algorithms, and of the newly proposed ARPQ is compared in the NEMO environment via simulation.
The results show that ARPQ has good tolerance to bandwidth fluctuation and guarantees the queuing delay of the high priority queues while ensuring reasonable throughput for the low priority queues. Therefore, the starvation problem and the static weight-delay problem are effectively avoided, and the overall performance of ARPQ exceeds that of all the other algorithms compared. The remainder of the paper is organised as follows: Section 2 analyses the problem of QoS provisioning in the NEMO environment and raises the requirements on NEMO scheduling algorithms. Section 3 introduces several popular FQ and PQ algorithms, analyses their advantages and disadvantages, and presents the novel scheduling algorithm ARPQ. In Section 4, a simulation comparison of three typical scheduling algorithms and ARPQ is made to evaluate their performance. Section 5 concludes the comparison.

2. Problem statements for scheduling algorithms in the NEMO environment

As the fluctuation of the available bandwidth happens on the wireless link between the MR and the BS, the uplink packets, which are transmitted from the mobile nodes

(MN) in the NEMO subnet to the BS in the access network, are heavily affected by the bandwidth fluctuation. Without loss of generality, this paper focuses on uplink scheduling in NEMO-based vehicular networks. As shown in Fig. 1(b), a simple model is used for illustration. In this model, several MNs are connected to the MR via wireless or wired links, and they can set up sessions to corresponding nodes in the internet. All the packets pass through the MR, and the MR then forwards them to the BS. Note that the MR and the BS are connected to each other via a wireless link. Because of the high mobility of a NEMO subnet, the available bandwidth between the BS and the MR is not stable and fluctuates on a short time scale, i.e., it can increase and decrease frequently and/or sharply. Theoretically, this bandwidth fluctuation problem could be solved by renegotiating the dynamic service level specification (SLS) [11–13] based on the varying bandwidth. However, such bandwidth changes can happen within minutes or seconds, much more frequently than handoffs. On such a short time scale, dynamic SLS negotiation cannot work properly because it is too slow and brings large overheads. Another candidate solution is an admission control mechanism located in the MR of the mobile network, which can keep track of the available bandwidth of the mobile network and terminate some sessions if necessary in order to guarantee the normal transmission of the other sessions. However, the lack of resources caused by the bandwidth fluctuation will not last long, and when the bandwidth returns to normal levels, the mobile nodes whose sessions were terminated have to reserve bandwidth from the MR again and restart the data transmission of the terminated sessions. The bandwidth fluctuates so frequently that the time and signalling overhead of the resource reservation would be unaffordable for the mobile network.
The third feasible solution for this problem is to develop a scheduling algorithm that can dynamically change the weights of the different traffic classes, in order to re-allocate

Fig. 1. Comparison of scheduling architectures of wireless networks and NEMO. (a) Legacy wireless network is connected to the access network via a wired link with constant available bandwidth; (b) NEMO is connected to the access network via a wireless link with variable available bandwidth.


the available resource to the existing sessions. From the scheduling point of view, the legacy wireless network and NEMO differ from each other in the overall available bandwidth. A legacy wireless network has varying wireless link bandwidth between the mobile users and the base station (BS), but the overall system capacity is fixed since the BS connects to the access network with a wired link, as shown in Fig. 1(a). In this case, the scheduler's available resource is constant. If we define the available bandwidth of the scheduler as R, we can express R as

R = C,    (1)

where C is a constant. However, in NEMO, there is a hierarchical architecture among the BS, the MR and the mobile users [11]. As shown in Fig. 1(b), the BS regards the MR as a normal mobile user and schedules packets for the MR and the other legacy mobile users, while the MR schedules packets for the mobile users within the NEMO subnet. Since the MR connects to the base station over the wireless link, the manageable resource of the scheduler in NEMO is variable due to the MR's movement and its connection to the BS. In this case, the available bandwidth of the scheduler is a function of time:

R = f(t)    (2)

Therefore, the scheduler in NEMO should adapt to the variation of the network resources and guarantee the QoS for as many high priority flows as possible when the network resources fall to a level too low to support all the active flows. An ideal scheduling algorithm in this scenario should meet the following two criteria: (1) Maximise the number of sessions or classes that are guaranteed their delay bound, in descending order of priority. Although not all the sessions or classes can be guaranteed a bounded delay, the scheduling algorithm should provide bounded delay for as many sessions or classes as possible, and the higher priority sessions or classes should be served preferentially. (2) Maximise the throughput of the sessions or classes that cannot be guaranteed a bounded delay. As admission control is not suitable for the bandwidth fluctuation scenario, the existing sessions will not be terminated in the NEMO environment. Consequently, the scheduler should provide reasonable basic throughput for the sessions that are not guaranteed a bounded delay, in order to keep their connections alive. The affected sessions can then recover to normal throughput quickly as soon as the bandwidth status improves. In the following sections, the existing packet scheduling algorithms will be reviewed and evaluated against these two criteria. The algorithms that can meet both criteria simultaneously will perform better than the others.
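To make the two criteria concrete, a small sketch (illustrative only; the greedy walk over classes and the example rates are our assumptions, not the paper's algorithm) can partition classes into those that fit their full required rate within the current bandwidth R, with the leftover to be shared among the rest:

```python
def guaranteed_classes(R, class_rates):
    """Sketch of criterion (1): walk classes in descending priority
    (class 1 = highest) and count how many can be given their full required
    rate within bandwidth R. The leftover bandwidth is what criterion (2)
    shares among the remaining, non-guaranteed classes."""
    guaranteed = []
    remaining = R
    for cls, rate in sorted(class_rates.items()):
        if rate <= remaining:
            guaranteed.append(cls)
            remaining -= rate
        else:
            break                      # lower priority classes are not guaranteed
    return guaranteed, remaining
```

With R = 1.0 and required rates {1: 0.25, 2: 0.5, 3: 0.5}, classes 1 and 2 are delay-guaranteed and 0.25 of bandwidth remains for class 3's best-effort throughput.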


3. Existing packet scheduling algorithms

Many packet scheduling algorithms have been developed in recent years. Classified into FQ and PQ, these algorithms aim to distribute resources in both wired and wireless networks.

3.1. Fair queuing algorithms

The representative FQ algorithm is WFQ [14], which assigns a weight to each session and approximates a fluid fair model to distribute bandwidth in a packet-based scheduling system. WFQ assumes an error-free network that has both constant input and output bandwidth. In that perfect case, the available bandwidth of the scheduler and the required bandwidth of each session are both constant, so static weight distribution is enough. However, in the wireless network environment, channel errors happen on the link between the base station (where the scheduler is located) and the mobile nodes. Many algorithms consider the channel errors and the wireless link capacity change, and try to provide fair distribution of bandwidth to the sessions through various approaches. IWFQ [8] is designed to approximate the performance of WFQ in wireless networks with channel errors. Working in the base station, which has constant available bandwidth to the access network, IWFQ compensates the bandwidth assigned among the different mobile nodes connecting to the base station over wireless links. It takes a perfect error-free system as the reference and judges a flow to be leading, lagging or in sync if its queue size is smaller than, larger than, or equal to the queue size in the error-free system. Consider two sessions A and B waiting for transmission: if the link status is bad when Session A is transmitting, IWFQ sends Session B but records the lagging size of Session A. When Session A's link becomes good again, IWFQ sends more packets of Session A to compensate for its lag. By this means the impact of channel errors can be minimised and the sessions can be transmitted with a fair distribution of the bandwidth.
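The lead/lag compensation idea behind IWFQ can be sketched as follows (a toy slot-based model with integer lag counters; the real algorithm tracks virtual service against the error-free reference system):

```python
def pick(sessions, link_ok, lag):
    """Choose the session to serve in this slot. `link_ok[s]` says whether
    session s's channel is usable; `lag[s]` counts slots owed to s from
    bad-channel periods. Sessions blocked by a bad channel accumulate lag,
    and the most-lagging session with a good channel is compensated first."""
    for s in sessions:
        if not link_ok[s]:
            lag[s] += 1                      # blocked: record lag to repay later
    good = [s for s in sessions if link_ok[s]]
    if not good:
        return None
    chosen = max(good, key=lambda s: lag[s])  # repay the most-lagging session
    if lag[chosen] > 0:
        lag[chosen] -= 1                      # one slot of compensation
    return chosen
```

When Session A's link is bad, B is served and A's lag grows; once A's link recovers, A is served ahead of B until its lag is repaid.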
CIF-Q [9] also uses a virtual, perfect error-free system as the reference system. A virtual time is kept in the reference system and compared to the real time in the real system, so that the scheduler knows whether a session is leading or lagging and compensates the lagging queues. Both long-term and short-term fairness are well considered and guaranteed in CIF-Q. Different from IWFQ and CIF-Q, SBFA [10] introduces a compensation server with some bandwidth allocated to it. When a link error happens on the link of a transmitting session, the server stores a virtual copy of the packets that should have been transferred. The scheduler serves all the sessions' queues and the server's queue one by one. In the server's queue's turn, the SBFA scheduler finds the virtual copies of the packets blocked by the link error and transmits them with the allocated bandwidth.


TD-FQ [7] points out that the delays of flows are tightly coupled with the weights assigned to them. It considers the different features of the flows, which are classified as real-time and non-real-time. A real-time flow may suffer long delay if a small weight is assigned to it. To solve this problem, TD-FQ adds extra operations on top of CIF-Q: real-time flows are assigned higher priority to reduce their queuing delays, whereas the fairness and bounded delays of the non-real-time flows are still maintained. MR-FQ [6] considers the multi-rate communication capability of wireless networks. In most existing algorithms, a wireless link in error state cannot transmit packets, whereas in MR-FQ a wireless link in a bad state can still transmit packets at a low data rate. MR-FQ adjusts a flow's transmission rate based on its channel condition and lagging degree: the more a flow is lagging, the lower the rate at which it may transmit. CSAP [19] and TBCP [20] aim to provide fair resource allocation in wireless ad-hoc networks. CSAP considers two kinds of flows, with guaranteed and best-effort QoS, and allocates the network resources based on predefined credits of the flows. CSAP shows good performance in providing the minimum required bandwidth to the guaranteed flows and a fair share of the bandwidth to the best-effort flows. TBCP proposes a scheduling mechanism that can work in multi-hop wireless networks and keep the fairness of all the flows by compensating the flows on error-prone channels. Both CSAP and TBCP consider the multi-hop feature of ad-hoc networks and provide fairness for the flows throughout the network. The fair queuing algorithms introduced above all work in the base station and aim to adapt to wireless link errors, keeping a static weight distribution in channel-error networks and guaranteeing the queuing delay of the sessions.
None of them considers that the varying available bandwidth may drop below the required level, in which case an absolutely fair distribution of the scarce bandwidth may mean that none of the competing sessions can be guaranteed its queuing delay. We call this possible problem the static weight-delay problem. In the next section, the FQ algorithms will be simulated in the NEMO environment to investigate the existence of the static weight-delay problem. Some other approaches have been proposed to realise adaptive weight distribution for fair queues. AWFQ [16] considers the changing network requirements caused by terminating or newly initiated flows. It couples the weight of each flow with the length of its queue. If the requirement of a flow increases, the length of the flow's queue increases as well; AWFQ consequently leans the resource assignment towards the long-queue flow to balance the delays among the different queues. This algorithm can adapt to the change of the flows' requirements and effectively reduce the difference in delay between the high-weight queues and the low-weight queues.
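The queue-length-coupled weight adaptation described for AWFQ might be sketched like this (the blending factor `alpha` and the normalisation step are illustrative assumptions, not the published update rule):

```python
def awfq_weights(queue_lens, base_weights, alpha=0.5):
    """AWFQ-style adaptation sketch: bias each class's weight toward its
    current backlog so that long queues receive more service. `alpha` blends
    the static base weight with the class's share of the total backlog; the
    result is renormalised so the weights sum to one."""
    total_len = sum(queue_lens.values()) or 1
    raw = {c: (1 - alpha) * base_weights[c] + alpha * queue_lens[c] / total_len
           for c in base_weights}
    norm = sum(raw.values())
    return {c: w / norm for c, w in raw.items()}
```

With equal base weights and one backlogged class, the backlogged class's weight grows, shortening its queuing delay at the expense of the idle class.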

DWFQ [15] is another approach that enables WFQ to dynamically change the weights. The authors use a monitoring scheme to record and calculate the average sizes of the different queues, and then dynamically adjust the weight of each queue based on the queue sizes. By keeping the queue sizes small, this algorithm can achieve a small delay for EF services. AWFQ and DWFQ propose different mechanisms to balance the queue sizes by adjusting the weights of sessions, so as to shorten the queuing delays of packets. However, the two algorithms do not account for the bandwidth fluctuation while adjusting the weights, so the possible static weight-delay problem still exists. L. Wang and his colleagues [21] address the problem that the common channel model is overly simplified, with only a good state and a bad state. They argue that the scheduling algorithms designed with this channel model cannot work properly in the practical environment, which is much more complicated than the two-state model. To achieve better fairness performance in the practical environment, Wang raises a new notion of fairness, which bounds the actual throughput normalised by channel capacity between any two data connections, and proposes a new scheduling algorithm, CAFQ, based on the new fairness definition. W. K. Wong [22] researches the fair queuing issue from another angle, considering the MAC protocols in heterogeneous networks with different network technologies. Wong points out that scheduling algorithms are usually coupled with the wireless (MAC) protocol. However, when heterogeneous networks are to integrate and interoperate with each other, QoS provisioning requires a scheduling algorithm decoupled from the MAC protocol. Therefore, Wong proposes a scheduling algorithm, TBFQ, that works independently of the MAC protocol and provides good fair resource distribution in heterogeneous wireless networks.
Although CAFQ and TBFQ show good performance in the environments they are designed for, neither of them considers the varying capacity caused by the bandwidth fluctuation. Therefore, the static weight-delay problem will occur under both algorithms.

3.2. Priority queuing algorithms

The popular PQ algorithms include Static Priority Queue (SPQ), Probabilistic Priority (PP), Earliest Deadline First (EDF) and RPQ+ [2,4,5], etc. SPQ is the original and most basic of all PQ algorithms. It categorises the sessions of different importance into classes with different priorities and processes packets with higher priority first. The advantage of SPQ is that the transmission of high priority sessions can be guaranteed. The disadvantage, the so-called starvation problem, is that over-occupation of the resources by the high priority classes can prevent the lower priority classes from being transmitted at all. Some proposals have been raised to avoid the starvation problem.
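The starvation problem of SPQ is easy to demonstrate with a toy slot-based scheduler (illustrative only, one packet per slot): as long as the high priority class stays backlogged, the low priority class never transmits.

```python
from collections import deque

def spq_run(queues, slots):
    """Run a strict-priority scheduler for `slots` packet slots and return how
    many packets each class sent. `queues` maps class number (1 = highest
    priority) to a FIFO of packets; each slot serves the highest-priority
    non-empty queue only."""
    served = {c: 0 for c in queues}
    for _ in range(slots):
        for c in sorted(queues):
            if queues[c]:
                queues[c].popleft()
                served[c] += 1
                break                      # one packet per slot
    return served
```

With five slots and five backlogged packets in each of two classes, class 1 sends all five packets and class 2 sends none, which is exactly the starvation behaviour described above.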


PP [5] sets a probability for each class, so that the scheduler selects packets from the different classes with those probabilities. By this means, every session, whatever its priority, has a chance to be processed. EDF defines a delay bound for each traffic class and records each packet's time to its delay bound (deadline). It strictly arranges the packets of the different classes in deadline order; the packet with the shortest deadline is processed first. The precise record of each packet's deadline brings large overhead and workload for the scheduler. To avoid the heavy workload of recording each packet's arrival time, RPQ+ [2] approximates EDF with twice as many queues as traffic classes. As the queues rotate periodically, the packets with short deadlines move to the head of the scheduler and are processed. By this means, RPQ+ gives the queues of all priorities a chance to be processed. The drawback of EDF and RPQ+ is that they treat the packets from different classes equally, ordering them only by packet deadline. Therefore, the classes share the network bandwidth in proportion to the traffic loads of the different classes. When little bandwidth is available, RPQ+ and EDF will still distribute the bandwidth in proportion to the traffic load, resulting in long delays suffered by every class. PP, EDF and RPQ+ solve the starvation problem with different approaches, but none of them considers the situation of fluctuating available bandwidth. Similar to the FQ algorithms, the static weight-delay problem may appear in these PQ algorithms.

3.3. Adaptive rotating priority queue algorithm

To avoid the problems of the existing scheduling algorithms, a novel algorithm, named Adaptive Rotating Priority Queue (ARPQ) [18], is proposed to provision QoS for NEMO. ARPQ aims to adapt to the fluctuation of the available bandwidth in NEMO by changing the proportions of resources distributed to the different traffic classes.
In short, ARPQ can control the throughput of the sessions of the low priority classes in the NEMO environment to guarantee the transmission of the sessions of the high priority classes. This means that ARPQ can guarantee the delay bounds for the maximum number of high priority sessions while keeping basic throughput for the sessions of the low priority classes. An ARPQ scheduler consists of several groups of queues, and each group is assigned to one priority class. Newly arrived packets are queued at the tail of the rear queue of the corresponding queue group. A rotating scheme shifts the packets from the higher-numbered queues to the lower-numbered queues periodically, and the rotating period of each group adapts to the available bandwidth in order to control the maximum delay of the traffic class. In the following sub-sections, we introduce the rotating scheme, the maximum delay prediction and rotating period selection, and the packet selection scheme.


3.3.1. Traffic model

Before introducing the ARPQ algorithm, we construct a traffic model as the base of our calculation and deduction. Assume there is a scheduler with a number of external connections. Let C denote the set of sessions of all the connections. If we assume the sessions can be partitioned into M classes, we have C = ∪_{1≤m≤M} C_m, where C_m denotes the set of priority-m sessions. For a given session i ∈ C, we build our traffic model based on the traffic (σ, ρ)-model [17]. Specifically, we define A_i[t_0, t_0 + t] as the total traffic of session i that arrives at the scheduler in the time interval [t_0, t_0 + t]. The maximum of A_i[t_0, t_0 + t] is bounded by a traffic constraint function A_i* as follows:

A_i[t_0, t_0 + t] ≤ A_i*(t),  ∀t ≥ 0, ∀t_0 ≥ 0    (3)

One well-known example of A_i* is the (σ, ρ)-model [17]. This model depicts the worst-case traffic on a connection i with a burst parameter σ_i and a rate parameter ρ_i. The traffic constraint function of the (σ, ρ)-model is

A_i*(t) = σ_i + ρ_i·t
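The (σ, ρ) envelope can be checked mechanically; the brute-force conformance test below is an illustrative sketch (not part of ARPQ itself), applying the constraint over every window that starts at an arrival instant:

```python
def sigma_rho_bound(sigma, rho, t):
    """Traffic constraint A*(t) = sigma + rho * t of the (sigma, rho)-model:
    at most `sigma` units of burst plus `rho` units per second over any
    window of length t."""
    return sigma + rho * t

def conforms(arrivals, sigma, rho):
    """Check a list of (timestamp, size) arrivals against the envelope over
    every window starting at each arrival instant (O(n^2) brute force)."""
    for i, (t0, _) in enumerate(arrivals):
        total = 0
        for t, size in arrivals[i:]:
            total += size
            if total > sigma_rho_bound(sigma, rho, t - t0):
                return False
    return True
```

For example, a burst of 100 units at t = 0 followed by 50 units at t = 1 conforms to (σ = 100, ρ = 50) but violates (σ = 50, ρ = 50).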

3.3.2. Rotating scheme

ARPQ schedules packets from the different priority classes with queue groups. Each queue group stores packets from one specified priority class. A queue group numbered m has its own rotating period Δ_m and number of queues. When ARPQ is working, arriving packets with priority m are inserted into the rear queue of Queue Group m. A rotation takes place every Δ_m time. In each rotation, all packets are moved from the higher-numbered queues to the lower-numbered ones, as shown in Fig. 2.

3.3.3. Maximum delay prediction and rotating period selection

By analytical derivation, the maximum queuing delay of a stable queue in ARPQ is predicted as a function of the rotating period. Consequently, a rotating period is selected for the queue that keeps the queuing delay below the delay bound. The meanings of the parameters used in the equations are as follows:

Fig. 2. The rotating process of Group m in ARPQ.
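The rotation of Fig. 2 can be sketched with one FIFO queue per position (deque-based queue groups are an implementation assumption; arrivals enter the rear, highest-numbered queue, and each rotation merges every queue into the next-lower one):

```python
from collections import deque

def rotate(group):
    """One rotation of a queue group (a list of deques, index 0 = head queue
    Q(m, 0)): packets in Q(m, i) move into Q(m, i-1). Q(m, 0) keeps
    accumulating until the packet selection scheme drains it."""
    for i in range(1, len(group)):
        group[i - 1].extend(group[i])
        group[i].clear()

def enqueue(group, packet):
    """Arriving packets always join the rear (highest-numbered) queue."""
    group[-1].append(packet)
```

A packet arriving in a three-queue group thus needs two rotations (at most 2·Δ_m) to reach Q(m, 0), which is the tx_m term derived below.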


• d_m^max: the maximum queuing delay of queue group m;
• N_m: the number of queues in queue group m;
• Δ_m: the rotating period of queue group m;
• C_m: the set of active sessions in queue group m;
• σ_i: the burst of Session i;
• ρ_i: the maximum data rate of Session i;
• R: the available bandwidth of the wireless link;
• L_i^max: the maximum packet length of Session i;
• D_m: the delay bound of queue group m.

The queuing delay of a class-m packet in ARPQ can be expressed as follows:

d_m = tx_m + T,    (4)

where d_m is the queuing delay of a class-m packet, tx_m is the rotating time for the packet to be moved to Q(m, 0) after it arrives at Q(m, N_m), and T denotes the time that the packet spends in Q(m, 0). The waiting time for the first rotation is defined as s, 0 ≤ s ≤ Δ_m. For any other rotation, the packet waiting time is Δ_m. Therefore,

tx_m = (N_m − 1)·Δ_m + s ≤ N_m·Δ_m    (5)

To calculate T, we consider how much traffic has to be transmitted from time 0 before the tagged packet is completely transmitted. The traffic that should be transmitted before the departure of the tagged packet, together with the tagged packet itself, in the scheduler at time tx can be expressed as W_m(t, tx), which is given as

W_m(t, tx) = Σ_{k=1}^{m−1} Σ_{i∈C_k} A_i[0, t + tx) + Σ_{i∈C_m} A_i[0, t] + Σ_{i∈C_m} A_i[t_0 + s − Δ_m, t_0] + L_r − R·(t + tx)    (6)

where L_r denotes the remaining part of the packet with priority > m being transmitted at time t. According to the traffic model, the maximum value of W_m(t, tx) can be derived as:

W_m^max(t, tx) = Σ_{k=1}^{m−1} Σ_{i∈C_k} A_i*(t + tx) + Σ_{i∈C_m} A_i*(t) + Σ_{i∈C_m} A_i*(Δ_m − s) + max_{i>m} L_i^max − R·(t + tx)    (7)

According to the definition of W_m(t, tx), there exists a time T_max that satisfies

W_m^max(t, T_max) = 0    (8)

In the (σ, ρ)-model, W_m^max(t, tx) is expressed as

W_m^max(t, tx) = Σ_{k=1}^{m−1} Σ_{i∈C_k} [σ_i + ρ_i·(t + tx)] + Σ_{i∈C_m} (σ_i + ρ_i·t) + Σ_{i∈C_m} [σ_i + ρ_i·(Δ_m − s)] + max_{i>m} L_i^max − R·(t + tx)    (9)

From (8), we can calculate

T_max = [ (Σ_{k=1}^{m} Σ_{i∈C_k} ρ_i − R)·t + Σ_{k=1}^{m} Σ_{i∈C_k} σ_i + Σ_{i∈C_m} [σ_i + ρ_i·(Δ_m − s)] + max_{i>m} L_i^max ] / [ R − Σ_{k=1}^{m−1} Σ_{i∈C_k} ρ_i ]    (10)

To make sure T_max is convergent, the scheduler must be priority-m stable and satisfy Σ_{k=1}^{m} Σ_{i∈C_k} ρ_i − R < 0. Because t ≥ 0, T_max then reaches its maximum when t = 0:

T_max ≤ [ Σ_{k=1}^{m} Σ_{i∈C_k} σ_i + Σ_{i∈C_m} [σ_i + ρ_i·(Δ_m − s)] + max_{i>m} L_i^max ] / [ R − Σ_{k=1}^{m−1} Σ_{i∈C_k} ρ_i ]    (11)

Recalling (4) and (5), when s = Δ_m, d_m reaches its maximum d_m^max, which is expressed as:

d_m^max = N_m·Δ_m + [ Σ_{k=1}^{m} Σ_{i∈C_k} σ_i + Σ_{i∈C_m} σ_i + max_{i>m} L_i^max ] / [ R − Σ_{k=1}^{m−1} Σ_{i∈C_k} ρ_i ]    (12)

To make sure the queuing delay of the tagged packet does not exceed the delay bound D_m, d_m^max must be no larger than D_m:

N_m·Δ_m + [ Σ_{k=1}^{m} Σ_{i∈C_k} σ_i + Σ_{i∈C_m} σ_i + max_{i>m} L_i^max ] / [ R − Σ_{k=1}^{m−1} Σ_{i∈C_k} ρ_i ] ≤ D_m    (13)

Given a fixed number of queues N_m, we have the upper bound of the rotating period Δ_m for a stable queue group m:

Δ_m ≤ D_m/N_m − [ Σ_{k=1}^{m} Σ_{i∈C_k} σ_i + Σ_{i∈C_m} σ_i + max_{i>m} L_i^max ] / [ N_m·(R − Σ_{k=1}^{m−1} Σ_{i∈C_k} ρ_i) ],  ∀R > Σ_{k=1}^{m} Σ_{i∈C_k} ρ_i    (14)

By setting the rotating period smaller than the upper bound shown in (14), the scheduler can keep the queuing delay of the packets of the stable priority classes within the delay bound D_m. If Priority Class m is not in the stable status, which means Σ_{k=1}^{m} Σ_{i∈C_k} ρ_i − R > 0, a finite maximum queuing delay does not exist. So ARPQ approximately sets the rotating period of Priority Class m as

Δ_m ≤ D_m/N_m,  ∀R ≤ Σ_{k=1}^{m} Σ_{i∈C_k} ρ_i    (15)


Eq. (15) makes sure that the packets of an unstable priority class will move to Queue 0 of its queue group within its delay bound so as to be involved in the packet selection scheme.
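The period selection of Eqs. (14) and (15) is a direct computation once the traffic parameters are known; the sketch below mirrors the two cases (the function signature and all numeric values in the example are invented for illustration):

```python
def rotating_period_bound(D_m, N_m, R, sigma_1_to_m, sigma_m, L_max_lower,
                          rho_1_to_m, rho_1_to_m_minus_1):
    """Upper bound on the rotating period of queue group m, following
    Eqs. (14)-(15). sigma_1_to_m: aggregate burst of classes 1..m;
    sigma_m: aggregate burst of class m; L_max_lower: largest packet among
    lower-priority classes; rho_1_to_m and rho_1_to_m_minus_1: aggregate
    rates of classes 1..m and 1..m-1."""
    if R <= rho_1_to_m:                       # unstable class: Eq. (15)
        return D_m / N_m
    backlog = sigma_1_to_m + sigma_m + L_max_lower
    # stable class: Eq. (14)
    return D_m / N_m - backlog / (N_m * (R - rho_1_to_m_minus_1))
```

For instance, with D_m = 0.1 s, N_m = 2, R = 1.0, aggregate bursts 0.01 and 0.005, a 0.005 maximum lower-priority packet, and aggregate rates 0.5 and 0.25, the stable-case bound evaluates to 0.05 − 0.02/1.5 ≈ 0.0367 s; dropping R to 0.4 triggers the unstable case and gives D_m/N_m = 0.05 s.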

3.3.4. Packet selection scheme

The packet selection scheme is introduced for ARPQ to select the next packet to be transmitted. The scheme analyses the desired bandwidth of each class, and the queue groups are categorised into two states: in State I, the queue group is allocated its required bandwidth and can transmit packets at its required data rate; in State II, the queue group has to share the limited bandwidth with other queue groups and is allocated less bandwidth than required. To transmit packets in State I, the static PQ policy is adopted with a limit on the transmission rate. When the State I queues are empty, the packets in the State II queues are selected with an FQ policy. Based on these two states, ARPQ selects packets from the queue groups following the flowchart shown in Fig. 3. Assume there are M classes of sessions in total; the ARPQ scheduler builds M queue groups, one for each class. The scheduler searches through all the queue groups from 1 to M. If the queue Q(i, 0) (the Number 0 queue in the queue group with priority i) is empty, the scheduler moves to the next queue group. Otherwise, the algorithm decides the queue group state by checking the value of the remaining available bandwidth for Class m, denoted as V_m. If V_m is higher than the required bandwidth of Class m, ρ_m, the queue group is in State I. In this case, the head packet of queue group m is selected and dequeued with the transmission rate limit r_m:

r_m = max( V_m − Σ_{k>m, L[Q(k,0)]>0} Σ_{i∈C_k} ρ_i , Σ_{i∈C_m} ρ_i ),  ∀1 ≤ m ≤ M    (16)

If the transmission rate of the class is lower than r_m, the first packet in the queue group will be transmitted. Otherwise, the scheduler moves on to check the next queue group. If V_m is lower than ρ_m, queue group m is in State II. In this case, the function fairdeq is called and a packet is selected from the queue groups between m and M with the AWFQ algorithm. If all the queue groups have an empty Queue 0, the scheduler dequeues the head packet in the first non-empty queue of the queue group with the highest priority.

4. Performance comparison

In this section, we select some representative algorithms to compare with the ARPQ algorithm and evaluate their performance in NEMO environments. Three existing algorithms are chosen for the simulation study: RPQ+, WFQ and SPQ.

4.1. Simulation scenario

The network simulator (NS) software package is used to implement the simulation model. The model consists of a mobile network with one MR and five mobile network nodes. The five mobile nodes connect to the MR via wireless links, whose bandwidth is 5 Mbps each. The MR also connects to a BS via a wireless link with variable bandwidth; we choose a series of normalised bandwidth values from 0.1 to 1.1. Each mobile node has two sessions with the BS. The sessions are based on exponential distribution (EXP) or constant bit rate (CBR) and are classified into four classes, with priorities from high to low. The normalised data rates of the four classes and their percentages of the total data rate are shown in Table 1. For each class, a delay bound is defined, as shown in Table 2. The normalised bandwidth of each wireless link between a mobile node and the MR is constant, but the normalised bandwidth between the MR and the BS is variable. A QoS agreement between the MR and the BS requires

Fig. 3. The flowchart of the ARPQ packet selection process.

Table 1
The data rate information of all classes in the simulation scenario

Class   Normalized average data rate   Percentage of total required bandwidth (%)
1       0.204644                       20.46
2       0.258345                       25.83
3       0.261248                       26.12
4       0.275762                       27.58


Table 2
Delay bounds for each class in ARPQ

Class             1     2     3     4
Delay bound (ms)  100   200   300   1000

that the data rates and bursts of each class should not exceed the values shown in Table 3. Therefore, the MR shapes the traffic of the different priority classes to follow the q–r (rate–burst) values shown in Table 3, and ARPQ sets its related parameters accordingly. In the simulation, we focus on the queue management in the MR and compare the performance of four algorithms: RPQ+, WFQ, SPQ and ARPQ. These four algorithms are chosen because RPQ+ is a well-designed algorithm with good performance under constant bandwidth, WFQ and SPQ are representatives of FQ and PQ, respectively, and ARPQ is the algorithm proposed to fit the requirements of the NEMO environment. The weights of the classes in WFQ and ARPQ are shown in Table 4; for ease of comparison, the weights of each class are set to the same values in WFQ and ARPQ. Besides the weights, ARPQ has further parameters: the number of queues (Nm) in each queue group for the different priority classes is also shown in Table 4. Three metrics are defined to evaluate the performance of the four algorithms:

1) Throughput: we compare the throughput of each priority class under the different scheduling algorithms to investigate the static weight-delay problem and the starvation problem.
2) Resource distribution: we compare the proportion of each priority class in the total throughput to show the fairness to the low priority classes.
3) Queuing delay tolerance: we measure the minimum bandwidth at which 95% of packets stay within the delay bound of their traffic class, which shows each scheduling algorithm's tolerance of low available bandwidth.

Table 3
Admission control requirement of the BS ingress port

Class   q          r
1       0.240929   0.001451
2       0.307692   0.002903
3       0.291727   0.001451
4       0.297533   0.002903
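The shaping of each class to its q–r values amounts to token-bucket shaping, assuming q is the sustained rate and r the burst depth (the field and class names below are illustrative, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class TokenBucket:
    """Hypothetical sketch of per-class (q, r) shaping at the MR.

    Units are normalized, matching Table 3: `rate` plays the role of q
    and `burst` the role of r.
    """
    rate: float          # sustained rate q (e.g. 0.240929 for Class 1)
    burst: float         # burst depth r (e.g. 0.001451 for Class 1)
    tokens: float = 0.0  # current token level
    last: float = 0.0    # time of the last update

    def conforms(self, now: float, size: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False  # non-conforming: hold (shape) until enough tokens accrue
```

A packet is sent immediately if it conforms; otherwise the shaper delays it until the bucket refills, which bounds each class to rate q with bursts of at most r.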

Table 4
Parameters for each class in WFQ and ARPQ

Parameter       Algorithm   Class 1   Class 2   Class 3   Class 4
Weight          WFQ         0.7       0.15      0.1       0.05
Weight          ARPQ        0.7       0.15      0.1       0.05
No. of queues   ARPQ        10        20        30        100
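The weight-based sharing that WFQ applies with the Table 4 weights can be sketched with per-class virtual finish times. This is the common simplified approximation, not the full GPS-tracking computation of textbook WFQ, and the class structure is illustrative:

```python
import heapq
from collections import defaultdict

# Weights follow Table 4 (Class 1..4).
WEIGHTS = {1: 0.7, 2: 0.15, 3: 0.1, 4: 0.05}

class SimpleWFQ:
    """Simplified WFQ: serve the packet with the smallest finish tag,
    where finish = max(virtual_time, last class finish) + size / weight."""

    def __init__(self, weights=WEIGHTS):
        self.weights = weights
        self.last_finish = defaultdict(float)  # per-class last finish tag
        self.heap = []                         # (finish, seq, class_id, packet)
        self.vtime = 0.0
        self.seq = 0                           # tie-breaker for equal tags

    def enqueue(self, class_id, packet, size):
        start = max(self.vtime, self.last_finish[class_id])
        finish = start + size / self.weights[class_id]
        self.last_finish[class_id] = finish
        heapq.heappush(self.heap, (finish, self.seq, class_id, packet))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, class_id, packet = heapq.heappop(self.heap)
        self.vtime = max(self.vtime, finish)  # advance virtual time monotonically
        return class_id, packet
```

With these weights, a Class 1 packet of size 1 gets a finish increment of 1/0.7 while a Class 4 packet gets 1/0.05, so Class 1 is served far more often but Class 4 is never starved.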

For each scheduling algorithm, we run the simulations at bandwidths from 0.1 to 1.1 with a 100-second simulation duration. The throughput of the sessions of each priority class and the queuing delay of every packet are recorded.

4.2. Results analysis

In this section, we evaluate the performance of the algorithms against the three QoS metrics: throughput, resource distribution, and queuing delay tolerance.

4.2.1. Throughput comparison

During the simulation, we change the normalised available bandwidth of the MR from 0.1, which is not enough to meet the requirements of the sessions of any priority class, to 1.1, which is enough for all sessions. At each bandwidth, we run the simulation for the four scheduling algorithms, namely RPQ+, WFQ, SPQ and ARPQ, and record the average throughput for each class. Fig. 4 shows how the throughput of each priority class varies with the available bandwidth. The four sub-figures depict the throughputs of the four priority classes, respectively. In each figure, the x-axis is the normalised available bandwidth and the y-axis the normalised throughput; each point is the average throughput of one algorithm at the specified bandwidth. Fig. 4(a) shows how the throughput of Class 1 traffic changes as the available bandwidth increases. Among the four queuing algorithms, SPQ performs best for Class 1 traffic, as Class 1 traffic is preferentially allocated all the resources and thus needs the lowest bandwidth to reach its required throughput. ARPQ, which dynamically shares resources between its high and low priority sessions, has lower throughput when the bandwidth is not enough for Class 1 traffic, but its throughput increases quickly to the required value as the available bandwidth increases. 
WFQ, which statically distributes the resources to the classes, also reaches the required throughput as the bandwidth increases, but its performance is not as good as that of SPQ and ARPQ. RPQ+ performs worst: its throughput increases slowly with the available bandwidth and only reaches the maximum when the available bandwidth is 1. Figs. 4(b) and (c) show the throughput of the sessions with priority Class 2 and Class 3 traffic, respectively. In these two figures, SPQ keeps the throughput of the Class 2 and Class 3 sessions at 0 over the range of low available bandwidth: if the wireless link bandwidth stays low, the Class 2 and Class 3 sessions starve under SPQ. In contrast, WFQ avoids the starvation problem by sharing resources among the priority classes. ARPQ's performance is close to that of WFQ when the bandwidth is not enough for the sessions with priority Class 2 and Class 3,


Fig. 4. Throughput comparison of the scheduling algorithms under varying available bandwidth.

and, through its dynamic resource distribution, ARPQ reaches the required throughput at a lower bandwidth than WFQ. RPQ+ performs for the Class 2 and Class 3 sessions much as it does for Class 1: it avoids the starvation problem but cannot reach the required throughput until the available bandwidth reaches 1. Fig. 4(d) shows the throughput of the sessions of the lowest priority class. Under SPQ, the sessions of this class cannot send out any packet over a long range of low available bandwidth, whereas WFQ and ARPQ have similar throughput curves and avoid starvation. RPQ+ has the highest throughput throughout, and all four algorithms reach the required throughput when the available bandwidth is 1. According to Fig. 4, WFQ, RPQ+ and ARPQ can all distribute scarce bandwidth across all the priority classes and provide each class a reasonable throughput at very low bandwidths. SPQ, however, suffers from the starvation problem: the low priority classes cannot transmit packets at all during periods of poor bandwidth.

4.2.2. Comparison of resource distributions

Fig. 5 compares the distribution of the available resources at different bandwidth levels under the four scheduling algorithms. Each subfigure in Fig. 5 corresponds to a simulation run at a specified bandwidth, where B denotes the available bandwidth between the MR and the BS. Each column shows the proportions of the four priority classes under one of the four scheduling algorithms. We choose four typical available bandwidth values to evaluate the resource distributions when the available bandwidth can support 0 to 3 priority classes, respectively. The findings are summarised as follows:

• B = 0.181: the normalised available bandwidth is 0.181 of the required overall bandwidth, smaller than the required bandwidth of the Class 1 sessions.
• B = 0.327: the available bandwidth is 0.327 of the required overall bandwidth, enough for the sessions with priority Class 1 but not for the sessions of the other classes.
• B = 0.581: the available bandwidth is 0.581 of the required overall bandwidth, enough for the sessions with priority Class 1 and Class 2, but not for the sessions with priority Class 3 and Class 4.
• B = 0.784: the available bandwidth is 0.784 of the required overall bandwidth, enough for the sessions with priority Class 1, Class 2 and Class 3, but not for the sessions with priority Class 4.

In Fig. 5(a), the available bandwidth B = 0.181 is smaller than the required bandwidth of the sessions with priority Class 1. In this case, RPQ+ distributes the available bandwidth according to the required bandwidth of the different priority classes, so the percentage of bandwidth allocated to each priority class is proportional to that class's required bandwidth shown in Table 1. SPQ uses up all the resources on the sessions with priority Class 1 and leaves the sessions of the three lower priority classes starved. Both WFQ and ARPQ show a fairer distribution, based on the assigned weights shown in Table 4. Since ARPQ's adaptive fair queuing mechanism takes the queue lengths into account when setting the proportions of the four priority classes, the low priority classes achieve larger throughput under ARPQ than under WFQ. As shown in Fig. 5(b), the available bandwidth B = 0.327 is enough for the sessions with priority


Fig. 5. The resource distribution of different queuing algorithms in different bandwidths.

class 1 (note that the Class 1 required bandwidth is 0.205). RPQ+ still maintains a static proportion of the bandwidth among the priority classes, matching their data-rate proportions. Both WFQ and ARPQ satisfy the requirement of the sessions with priority Class 1 and share the spare bandwidth among the other sessions with similar weights, per Table 4. SPQ serves essentially only the sessions with priority Classes 1 and 2 in this case; the sessions with priority Classes 3 and 4 occupy only 0.02% of the available resources. In Fig. 5(c), the available bandwidth B = 0.581 is larger than the sum of the required bandwidths of the sessions with priority Classes 1 and 2, but not enough for the sessions with priority Classes 3 and 4. SPQ starves the sessions with priority Class 4 while supplying all resources to the three higher priority classes. Compared to SPQ, RPQ+ maintains the static distribution matching the data-rate proportions and cannot guarantee any class its required throughput. WFQ distributes the resources with static weights, so the sessions with priority Class 2 under WFQ get a lower percentage of resources, 32.34%. Referring to Fig. 4(b), when the available bandwidth is 0.581, the throughput of the sessions with priority Class 2 is 0.188, smaller than its required bandwidth of 0.258 shown in Table 1. That means WFQ cannot guarantee the throughput of priority Class 2 at this available bandwidth. The performance of RPQ+ and WFQ demonstrates the static weight-delay problem. ARPQ, however, changes the resource distribution by adapting to the bandwidth change, so it guarantees the sessions with

priority Class 1 and 2 while sharing the remaining resources among the sessions with priority Class 3 and 4. In Fig. 5(d), the bandwidth is B = 0.784, larger than the sum of the first three classes' data rates, so it can support the transmission of the three higher priority classes. Here RPQ+ still maintains the proportions of each priority class, but none of the four priority classes obtains adequate throughput. WFQ performs better than RPQ+ in that it distributes enough resources to the sessions with priority Class 1 and 2 and satisfies their required bandwidths, but it assigns a lower percentage of bandwidth to the sessions with priority Class 3 than SPQ and ARPQ do. From Fig. 5 we can see clearly that SPQ suffers from the starvation problem, as the high priority classes occupy most of the available bandwidth, while RPQ+ and WFQ face the static weight-delay problem, as none of the priority classes can be guaranteed its required throughput. ARPQ, on the contrary, avoids both problems by adjusting the distribution pattern of the different priority classes.

4.2.3. Comparison of queuing delay tolerance

In this section, we compare the queuing delay tolerance of each priority class in the NEMO environment under the different scheduling algorithms. We observe the minimum bandwidth that guarantees no more than 5% of packets exceed the class delay bounds. In Fig. 6, the minimum bandwidths of the same priority class under the different scheduling algorithms are compared. Each column stands for the minimum bandwidth for one priority class ensuring


4.2.4. Summary

In summary, ARPQ combines the advantages of RPQ+, WFQ and SPQ while avoiding their disadvantages. According to the comparison of throughput, resource distribution and scarce-bandwidth tolerance for all classes, the results show that, on the one hand, ARPQ ensures fairness of resource distribution and avoids the starvation problem met by SPQ; on the other hand, ARPQ maximises the resource usage of the high priority classes by allocating resources dynamically and avoids the static weight-delay problem, so the sessions reach their maximum throughput at a lower available bandwidth than under RPQ+ and WFQ. Therefore, ARPQ has the best overall performance by combining fairness and priority, and it provides a reasonably high quality of service for all priority classes.
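The "priority first, fairness second" behaviour summarised here follows the two-state selection scheme of Section 3.3.4: classes whose allocated bandwidth covers their demand (State I) are served in strict priority order subject to a rate cap, while the rest (State II) share the leftover bandwidth fairly. A hypothetical sketch, with function and parameter names of our own choosing rather than the authors':

```python
def select_next_packet(groups, avail_bw, demand, caps, rates, fairdeq):
    """Pick the next packet from priority-ordered queue groups.

    groups[m]   -- list of queues (lists) for class m; groups[m][0] is Queue 0
    avail_bw[m] -- remaining available bandwidth V_m for class m
    demand[m]   -- required bandwidth q_m of class m
    caps[m]     -- transmission-rate limit r_m of class m (Eq. (16))
    rates[m]    -- current measured transmission rate of class m
    fairdeq(m)  -- fair-queueing fallback over classes m..M (State II)
    """
    M = len(groups)
    for m in range(M):
        if not groups[m][0]:              # Queue 0 empty: try the next group
            continue
        if avail_bw[m] >= demand[m]:      # State I: bandwidth covers the demand
            if rates[m] < caps[m]:        # below its rate cap: strict priority
                return groups[m][0].pop(0)
            continue                      # at the cap: defer to lower priorities
        return fairdeq(m)                 # State II: fair share over classes m..M
    # Every Queue 0 is empty: dequeue the head of the first non-empty
    # queue in the highest-priority group.
    for m in range(M):
        for q in groups[m]:
            if q:
                return q.pop(0)
    return None
```

High-priority classes thus keep their delay guarantees as long as the bandwidth covers them, and the fair fallback keeps the remaining classes from starving.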

5. Conclusions

This paper surveyed the existing scheduling algorithms and compared their performance in the NEMO environment. The main feature of the NEMO-based wireless network is the fast-varying and scarce bandwidth to the access network. Existing mechanisms, e.g. SLS negotiation and admission control, cannot deal with this because of scalability problems and huge overhead; an adaptive scheduling algorithm is therefore a good candidate to alleviate these problems. The paper stated the problems facing scheduling algorithms in the NEMO environment and studied the two existing categories of scheduling algorithms, FQ and PQ, as well as a novel algorithm, ARPQ. FQ algorithms assign a weight parameter to each traffic class and distribute the available bandwidth to the classes based on the weights. PQ algorithms, in contrast, distribute the bandwidth in order of priority: traffic with higher priority is transferred earlier than low priority traffic. To avoid the problems of the existing scheduling algorithms, an adaptive scheduling algorithm called ARPQ is proposed. The ARPQ algorithm maintains one queue group for each priority class and periodically rotates all packets from high priority queues to the low priority ones. By adapting to the available resources, ARPQ guarantees the QoS of as many queues as possible, while the other queues share the remaining resources to get an acceptable service. The performance of ARPQ and some typical scheduling algorithms is compared in the NEMO environment. The simulation results show that FQ algorithms meet the static weight-delay problem because of their static weight distribution, and the static PQ algorithm suffers from the starvation problem because of its absolute and static priority processing, while ARPQ avoids both problems by giving all sessions reasonable throughput and keeping as many high priority sessions served with their required QoS as possible.

In conclusion, existing scheduling algorithms, such as PQ and FQ, cannot work properly in the NEMO environment because they lack an adaptive mechanism suited to the scarce and varying bandwidth of the wireless link. By contrast, the novel ARPQ algorithm guarantees the delay bounds of the sessions with high priority classes in the scarce-bandwidth environment, maintains reasonable throughput for the sessions with low priority classes, and achieves better overall performance than all the other comparative scheduling algorithms. Future work on ARPQ will focus on the design of an efficient packet discarding mechanism. Currently, ARPQ assumes the scheduler's buffer is infinite and no packet is discarded for its long delay. Clearly, it is not efficient to transmit packets with lags much longer than their delay bounds, so an efficient packet discarding mechanism will be studied to improve the performance of ARPQ in the next stage.



Fig. 6. Comparison of the minimum bandwidths that keep packet delay-bound violations within 5%.

that no more than 5% of packets violate the delay bound. In the first series of four columns, RPQ+ has the bandwidth value 0.994: whenever RPQ+ works at a bandwidth lower than 0.994, more than 5% of Class 1 packets suffer a queuing delay longer than the delay bound shown in Table 2. Similarly, the minimum bandwidths required by WFQ, SPQ and ARPQ to keep fewer than 5% of packets above their delay bound are 0.305, 0.214 and 0.236, respectively. As shown in Fig. 6, SPQ's priority Class 1 has very good tolerance to scarce bandwidth, and the same holds for ARPQ. For Classes 2, 3 and 4, RPQ+ shows the worst tolerance, as it cannot endure bandwidths lower than 0.994, while WFQ, ARPQ and SPQ show the third-best, second-best and best tolerance to low bandwidth, respectively. Overall, SPQ and ARPQ both tolerate scarce bandwidth well; WFQ suffers from its static weight assignments and performs worse, while RPQ+ gets the worst results for all classes because of its resource assignment based on the data-rate proportions.
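The minimum-bandwidth metric of this section can be computed directly from the recorded per-packet delays: for each simulated bandwidth, check whether at most 5% of a class's packets exceed its delay bound, and take the smallest bandwidth that passes. A sketch assuming a simple per-bandwidth delay log (the data layout is illustrative):

```python
def min_bandwidth_for_class(delays_by_bw, bound, tolerance=0.05):
    """Return the smallest bandwidth at which no more than `tolerance`
    (here 5%) of the class's packets exceed its delay bound.

    delays_by_bw -- {normalized bandwidth: [queuing delay of each packet]}
    bound        -- class delay bound, in the same unit as the delays (ms)
    """
    for bw in sorted(delays_by_bw):
        delays = delays_by_bw[bw]
        if not delays:
            continue
        violating = sum(1 for d in delays if d > bound) / len(delays)
        if violating <= tolerance:
            return bw
    return None  # no simulated bandwidth met the 95% target
```

Applied per class and per algorithm, this yields exactly the columns of Fig. 6; e.g. with the Class 1 bound of 100 ms, the result for RPQ+ would be 0.994.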

