Computer Communications 28 (2005) 713–725 www.elsevier.com/locate/comcom
Wireless packet fair queueing algorithms with link level retransmission

Namgi Kim*, Hyunsoo Yoon

Division of Computer Science, Department of EECS, KAIST, 373-1 Kuseong, Yuseong, Daejeon 305-701, South Korea

Received 29 January 2004; revised 30 September 2004; accepted 26 October 2004. Available online 20 November 2004.
Abstract

In order to provide quality of service in wireless networks, a number of wireless fair queueing algorithms have recently been proposed. They, however, require perfect channel prediction before transmission and rarely consider algorithms under the link layer. Instead of perfect channel prediction, most wireless systems adopt the Link Level Retransmission (LLR) algorithm within the link layer for recovering channel errors. However, the LLR algorithm does not work well with the previous prediction-based wireless fair queueing algorithms. Therefore, we propose a new wireless fair queueing algorithm, Wireless Fair Queueing with Retransmission (WFQ-R), which is well matched with the LLR algorithm and does not require channel prediction. In the WFQ-R algorithm, the share consumed by retransmission is regarded as a debt of the retransmitted flow to the other flows. So, the WFQ-R algorithm achieves wireless fairness with the LLR algorithm by penalizing flows that use wireless resources without permission in the link layer. Through analyses, we proved that the WFQ-R algorithm guarantees throughput and delay fairness. Through simulations, we showed that our WFQ-R algorithm maintains fairness adaptively. Furthermore, our WFQ-R algorithm is able to achieve flow separation and compensation.

© 2004 Elsevier B.V. All rights reserved.

Keywords: Wireless fair queueing; Wireless QoS; Wireless packet scheduling; Link level retransmission; ARQ; Wireless networks
1. Introduction

There has been explosive growth in the wireless communication industry in recent years. The success of the cellular system has enabled communication to go outdoors. The wireless LAN makes it possible to access packet data without a cable indoors. With the increasing usage of these wireless networks in both outdoor and indoor environments, wireless networks are quickly becoming an integral part of the Internet.

Supporting multimedia communication applications requires the network to provide quality of service for packet flows. In wired networks, Fluid Fair Queueing (FFQ) [1] has been a popular paradigm for providing fairness among packet flows over a shared link. A number of approximation algorithms for the implementation of the FFQ have also been proposed. These algorithms for wired networks, however, cannot be applied directly to wireless networks
because wireless networks have an unreliable channel that experiences bursty and location-dependent errors.

In order to provide quality of service in wireless networks by overcoming location-dependent channel errors, wireless fair queueing algorithms have been proposed, such as Idealized Wireless Fair-Queueing (IWFQ) [2], the Server Based Fairness Approach (SBFA) [3], Channel-condition Independent Fair Queueing (CIF-Q) [4], and Wireless Fair Service (WFS) [5]. These algorithms dynamically reassign channel allocation by predicting channel errors. If the channel to a flow is expected to be dirty, the flow disclaims its own share of resources to other flows and obtains more resources later for compensation when the channel is clean. Hence, these prediction-based algorithms do not allocate resources to an erroneous channel in vain. Consequently, they provide fair channel access among multiple contending hosts over a scarce and shared wireless channel through wireless channel prediction [6].

The prediction-based wireless fair queueing algorithms, however, assume unrealistically ideal conditions and rarely consider practical issues. All of the algorithms require a perfect prediction of the current channel state before packet
scheduling and they are rarely coupled with the link layer. Perfect channel prediction before scheduling is difficult in practice. Instead of channel prediction, most wireless networks adopt the Link Level Retransmission (LLR) algorithm within the link layer for recovering channel errors. However, the previous prediction-based algorithms rarely considered the link layer and do not work well with the LLR algorithm. In the WFS algorithm, a specific medium access method has been proposed [5] but it is not commonly used in the wireless world. Therefore, in this paper, we propose a new wireless fair queueing algorithm that does not require channel prediction and works well with the LLR algorithm, which is the most generally used error control algorithm in the wireless world.

The remainder of the paper is organized as follows. Section 2 introduces background and motivation for this paper. Section 3 presents a new wireless fair queueing algorithm with link level retransmission. Section 4 analyzes the properties of our algorithm. Section 5 evaluates the performance of our algorithm through simulations. Lastly, Section 6 concludes the paper.
2. Background and motivation

2.1. Fairness criteria in wireless networks

In wireless networks, there are two kinds of fairness criteria: data fairness and resource fairness. Data fairness is fairness based on the amount of received data and resource fairness is based on the amount of used wireless resources. In wired networks, data fairness and resource fairness are generally the same. In wireless networks, however, they differ due to wireless channel errors.

Data fairness depends on received data and it guarantees that each flow receives the same amount of data if their weights are equal. This concept, however, is inadequate for wireless networks. In wireless networks, an erroneous flow experiencing severe channel errors can exhaust almost all wireless resources and other flows may face starvation even if their channel conditions are good. As a result, wireless networks are ultimately unable to guarantee data fairness. On the other hand, resource fairness is based on the wireless resources used by each flow. It equally distributes scarce wireless resources to all flows. Consequently, resource fairness is more suitable for wireless networks and as such we concentrate on resource fairness rather than data fairness.

2.2. Network and channel model

We consider a wireless packet network architecture in which a cell has a base station or an access point with a shared wireless channel. Fig. 1 shows the architecture of a wireless channel server at a base station or an access point.

Fig. 1. Architecture of a wireless channel server.

Generally, scheduling for packet transmission is performed above the link layer. In the link layer, the Link Level Retransmission (LLR) algorithm is used for error recovery. When a packet experiences channel errors during transmission, the packet is retransmitted in the link layer until the destination receives the packet correctly or the maximum number of retransmissions is reached.

In this paper, we focus on fairness of the downlink packet scheduling. Packet scheduling for downlink flows is performed in a centralized manner at a base station or an access point. The scheduling for downlink flows is usually more important than that for uplink flows. This is because a light-weight mobile terminal is generally used as an access tool to obtain services from the wired server and the downlink flows dominate traffic patterns in wireless networks. For uplink flows, the amount of traffic is small in proportion to that of downlink flows. Moreover, the fairness for uplink flows is usually acquired through contention-based channel scheduling in a distributed manner. Therefore, we concentrate only on centralized packet scheduling algorithms for downlink flows.

2.3. Previous wireless fair queueing algorithms

In wired networks, all fair queueing algorithms are based on approximating the FFQ model [1]. The FFQ model treats each packet flow as a fluid flow. Let flow i have a weight r_i, the number of bits served in a single round by the server. Let us consider a set of backlogged flows, B(t_1, t_2), for any time interval [t_1, t_2]. Then, in the FFQ model, the channel capacity granted to each flow i, W_i(t_1, t_2), satisfies the following property:

\frac{W_i(t_1, t_2)}{r_i} - \frac{W_j(t_1, t_2)}{r_j} = 0, \quad \forall i, j \in B(t_1, t_2).
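As a concrete reading of this property, the small sketch below measures how far a set of backlogged flows deviates from the fluid ideal by comparing their normalized service. The flow names and helper function are illustrative assumptions, not part of the FFQ model itself.

```python
def max_normalized_gap(service, weights):
    """Largest difference in normalized service W_i/r_i among backlogged flows.
    In the ideal FFQ model this value is exactly zero over any interval."""
    normalized = [service[f] / weights[f] for f in service]
    return max(normalized) - min(normalized)

if __name__ == "__main__":
    weights = {"flow_a": 1.0, "flow_b": 2.0}
    print(max_normalized_gap({"flow_a": 3.0, "flow_b": 6.0}, weights))  # 0.0 -> ideal FFQ
    print(max_normalized_gap({"flow_a": 5.0, "flow_b": 4.0}, weights))  # 3.0 -> fairness broken
```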
However, since networks switch flows at the granularity of packets rather than bits, packet fair scheduling algorithms for practical implementation attempt to minimize the difference between any two backlogged flows. WFQ [7], SCFQ [8], SFQ [9], and WF^2Q+ [10] are representative examples of practicable wired fair queueing algorithms.

In wireless networks, the FFQ model cannot be directly adopted because a wireless channel experiences bursty and location-dependent channel errors. Due to wireless channel errors, some flows may not transmit a packet even though their service turn has come round. As a result, location-dependent errors bring about blocking of a fluid flow. Accordingly, error-free and error-prone flows are allotted different shares and the fairness of the FFQ model breaks down. To overcome this discrepancy, wireless fair queueing algorithms adopt the compensation model. In the compensation model, if a flow does not get its share due to channel errors, it receives a compensative share later when transmission is possible. Based on this compensation model, a number of wireless fair queueing algorithms have recently been proposed, such as Idealized Wireless Fair-Queueing (IWFQ) [2], the Server Based Fairness Approach (SBFA) [3], Channel-condition Independent Fair Queueing (CIF-Q) [4], and the Wireless Fair Service (WFS) algorithm [5]. In these algorithms, when a flow is predicted to encounter channel errors, the flow gives up its share at that time and gets the share back later.

These prediction-based algorithms, however, are not practical. All of them require perfect prediction of the channel state before packet scheduling. Also, they have not addressed the relation with the link layer, or they assume an unusual, specific MAC algorithm, as the WFS algorithm does [5]. In particular, none of the aforementioned algorithms consider the LLR algorithm in the link layer, which is popularly used in indoor and outdoor wireless systems. Therefore, in this paper, we propose a new wireless packet fair queueing algorithm that does not require channel prediction and works well with the LLR algorithm.

2.4. Link level retransmission algorithms in the link layer

Most wireless systems adopt the LLR algorithm in the link layer to recover from wireless channel errors. The representative LLR algorithms are Automatic-Repeat-reQuest (ARQ) schemes. ARQ schemes are widely used in wireless systems for error control because they are simple and provide high system reliability. Based on their retransmission strategies, there are many types of ARQ schemes: Stop-And-Wait ARQ (SAW-ARQ), Go-Back-N ARQ (GBN-ARQ), Selective-Repeat ARQ (SR-ARQ), and Hybrid-ARQ [11-15].

SAW-ARQ represents the simplest ARQ procedure and was implemented in early error control systems. In SAW-ARQ, the transmitter sends a packet to the receiver and waits for either a positive acknowledgement (ACK) or a negative acknowledgement (NAK). This scheme is simple
but inherently inefficient because of the idle time spent waiting for the acknowledgement of each packet.

GBN-ARQ is more advanced than SAW-ARQ. In GBN-ARQ, the transmitter continuously transmits up to N packets in order and stores them pending receipt of an ACK or NAK. When an ACK for a packet arrives, the transmitter sends new packets so that up to N packets are pending. However, if a NAK arrives, the transmitter restarts the protocol from that point and resends all pending packets after the negatively acknowledged one. Thus, GBN-ARQ is still inefficient in using resources because whenever a received packet is detected in error, the receiver also rejects the next N-1 received packets even though many of them may be error-free.

The ineffectiveness of GBN-ARQ caused by the retransmission of many error-free packets can be overcome by using the SR-ARQ scheme. In SR-ARQ, packets are also transmitted continuously. However, the transmitter only resends packets that are negatively acknowledged by the receiver. After retransmitting such a packet, the transmitter continues sending new packets. Variations of the SR-ARQ scheme have been applied in many contemporary wireless systems such as IS-95 [16], cdma2000 [17], and UMTS [18].

Another approach to error control is Forward Error Correction (FEC) schemes. In FEC, an error correction code is used to recover transmission errors. That is, when the receiver detects errors in a packet, it attempts to correct the errors using the code of the packet itself, without retransmission. The FEC scheme, however, also has some drawbacks. Since the probability of correcting errors is much lower than the probability of detecting errors, it is harder to achieve high system reliability. Thus, the FEC scheme, which requires another error correction method, has been combined with ARQ schemes. Such a combination of the two basic error control schemes is referred to as Hybrid-ARQ [11,13]. A Hybrid-ARQ system consists of an FEC subsystem contained in an ARQ system. The function of the FEC subsystem is to reduce the frequency of retransmission by correcting errors. This increases the system throughput performance. Hybrid-ARQ schemes are classified into three categories, Type-I, Type-II, and Type-III [11,19,20], and many variations of Hybrid-ARQ have been proposed for the LLR algorithm in future wireless systems such as HSDPA for 3GPP Release 5 [21], and 1xEV-DO and 1xEV-DV for 3GPP2 [22,23].

As previously mentioned, almost all wireless networks adopt an LLR algorithm in the link layer, such as SR-ARQ and Hybrid-ARQ. In principle, the LLR algorithm needs packet retransmissions in the link layer for error recovery. A retransmitted packet has to be sent as soon as possible because the retransmitted packet generally has already spent a significant amount of time in the wireless system. Irregular delay in a wireless system causes degradation of the service quality of the flow because it enlarges jitter and the variation of end-to-end delay. Thus, packet retransmission is generally performed quickly in the link layer without involving the packet scheduler. However, this retransmission induces a breakdown of the fairness controlled by the packet scheduler above the link layer. Therefore, the packet fair scheduling algorithm should be closely coupled with the LLR algorithm to maintain end-to-end fairness in wireless networks.
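To make this interaction with the scheduler concrete, the following is a minimal sketch of a link-layer send routine that retransmits a packet up to a maximum number of times and reports how many resource units the transmission actually consumed. The helper names (send_with_retransmission, try_transmit), the retry limit, and the per-attempt cost model are illustrative assumptions, not part of any specific ARQ standard.

```python
import random

MAX_RETRANSMISSIONS = 4  # assumed retry limit; real systems configure this per bearer

def try_transmit(packet, error_prob):
    """Model a single transmission attempt; returns True on success.
    The error probability stands in for the real channel state."""
    return random.random() >= error_prob

def send_with_retransmission(packet, error_prob, cost_per_attempt=1):
    """Send a packet with LLR-style retransmission.
    Returns (delivered, used_resources): the resource units the link layer
    consumed, which a scheduler such as WFQ-R can later charge back."""
    used = 0
    for _ in range(1 + MAX_RETRANSMISSIONS):
        used += cost_per_attempt
        if try_transmit(packet, error_prob):
            return True, used
    return False, used  # gave up after the retry limit

if __name__ == "__main__":
    ok, used = send_with_retransmission(b"payload", error_prob=0.2)
    print("delivered:", ok, "resource units consumed:", used)
```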
2.5. Problems of previous wireless fair queueing algorithms with link level retransmission

Fig. 2 illustrates the FFQ model with the LLR algorithm. The channel server in a base station distributes the service share of a wireless channel to flows in proportion to their weights. However, if a flow experiences channel errors, the link layer protocol retransmits the packet and this results in the removal of shares of other flows. Accordingly, the shares of other flows disappear and the fairness is broken.

Fig. 3 illustrates a problem of unfair scheduling with the LLR algorithm in detail. In this example, flows 1, 2, and 3 have weights 1, 1, and 2, respectively. In an error-free model, each flow sends 2, 2, and 4 packets in two rounds. However, in a common real system with the LLR algorithm, if the first packet of flow 1 experiences an error during transmission, the packet is immediately retransmitted in the link layer. In this case, three more resources are used for retransmission. Consequently, each flow receives 2, 1, and 2 packets, and the possessions of the wireless channel are 5, 1, and 2, respectively. This means that the previous prediction-based wireless fair queueing algorithms, which do not consider the LLR algorithm, cannot provide fairness in general wireless networks.

Therefore, we propose a new wireless fair queueing algorithm that can cope with this situation. The wireless fair queueing algorithm should possess the following properties: (a) delay bound and throughput guarantees for error-free flows; (b) long-term fairness for error-prone flows; (c) short-term fairness for error-free flows; and (d) graceful service degradation for leading flows [4]. Accordingly, our algorithm is designed to satisfy these properties.
Fig. 2. Broken fairness due to link level retransmission.
Fig. 3. Unfair packet scheduling with link level retransmission.
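To make the Fig. 3 scenario concrete, the short sketch below replays it numerically: three flows with weights 1, 1, and 2 share eight transmission slots over two rounds, and the single errored packet of flow 1 triggers an immediate link-level retransmission that consumes three extra slots. The slot-accounting model is an illustrative assumption chosen to reproduce the numbers quoted in the text, not the paper's scheduler itself.

```python
from collections import Counter

def wrr_order(weights):
    """One round of weighted round-robin service order, e.g. [f1, f2, f3, f3]."""
    order = []
    for flow, w in weights.items():
        order.extend([flow] * w)
    return order

def replay_fig3():
    """Replay the Fig. 3 example: weights 1, 1, 2, two rounds (8 slots), and
    the first packet of flow 1 fails once, costing 3 extra retransmission slots
    that the link layer grabs immediately (an assumed cost model)."""
    weights = {"flow1": 1, "flow2": 1, "flow3": 2}
    total_slots = 2 * sum(weights.values())          # 8 slots in two rounds
    service_order = wrr_order(weights) * 2           # the scheduler's intended order

    slots_used = Counter()
    delivered = Counter()
    slot = 0
    for flow in service_order:
        if slot >= total_slots:
            break                                    # channel capacity exhausted
        slots_used[flow] += 1
        delivered[flow] += 1
        slot += 1
        if flow == "flow1" and delivered[flow] == 1:  # first packet of flow 1 errs
            retransmission_cost = 3                   # extra slots taken by the LLR
            slots_used[flow] += retransmission_cost
            slot += retransmission_cost
    return dict(slots_used), dict(delivered)

if __name__ == "__main__":
    used, delivered = replay_fig3()
    print("channel possession:", used)       # {'flow1': 5, 'flow2': 1, 'flow3': 2}
    print("delivered packets :", delivered)  # {'flow1': 2, 'flow2': 1, 'flow3': 2}
```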
3. Wireless fair queueing algorithm with retransmission

In this section, we present our Wireless Fair Queueing with Retransmission (WFQ-R) algorithm. The basic concept of the WFQ-R algorithm is that the share used for retransmission is regarded as a debt of the retransmitted flow to the other flows. When a retransmission occurs, the overhead caused by the wireless resources used in the retransmission is charged to the retransmitted flow. Consequently, the retransmitted flow becomes leading, and the other flows become lagging. In the next round, the lagging flows have higher priority to obtain the wireless channel than the leading flow.

The WFQ-R algorithm has two kinds of compensation types: Flow-In-Charge (FIC) and Server-In-Charge (SIC). FIC regards the entire overhead caused by retransmission as a charge of the retransmitted flow. That is, an error-prone flow should take responsibility for its own channel condition. For instance, consider a flow that has consumed two wireless resources by retransmission in Fig. 4. Then, in FIC, since the flow has the entire responsibility for the two consumed resources, the flow does not use the next two resources and makes a concession to the other flows in the second and third turns.

Fig. 4. Example of Flow-In-Charge type.

FIC is fair with respect to allocated wireless resources. However, it may be too severe for a flow experiencing frequent channel errors. Therefore, we propose another compensation type, SIC. In SIC, the channel server takes responsibility for channel errors and the overhead is charged to all backlogged flows in a distributed manner. All flows absorb the overhead, and the retransmitted flow is responsible for only a portion of the overhead in proportion to its weight. As a result, the charging overhead of the retransmitted flow is naturally reduced. For example, when a flow has consumed four resources by retransmission and the weight of the flow is a quarter of the sum of the weights of all flows, as in Fig. 5, the retransmitted flow is responsible for only one resource. Accordingly, the flow disclaims only one resource in the second turn.

Fig. 5. Example of Server-In-Charge type.

3.1. Algorithm description

In order to account for the service lost or gained by a flow, we associate each system, S, with a reference error-free system, S_r. Then, a flow is classified as leading, lagging, or in-sync with respect to S_r. That is, a flow is leading if it has received more service in S than it would have received in S_r, lagging if it has received less, and in-sync if it has received the same amount. We use the Start-time Fair Queueing (SFQ) algorithm [9] as S_r because, as in [4], it is harder to conduct scheduling based on finishing times than on starting times in a system with channel errors. We denote this reference system as S_r^SFQ.

To provide fairness, we use the parameter lag, which keeps track of the difference between the service that the flow should receive in S_r^SFQ and the service it has received in S. A flow i is said to be lagging if its lag_i is positive, leading if its lag_i is negative, and in-sync otherwise. In the absence of errors, the lag_i of all flows is zero. A flow is said to be active if it is backlogged, or as long as it is leading.

Lagging flows have higher priority to receive compensation service to expedite their compensation. Compensation service becomes available because a leading flow gives up its lead. To provide short-term fairness, we distribute this compensation service among lagging flows in proportion to the lagging flows' rates. To accomplish this, the compensation virtual time, c_i, keeps track of the normalized amount of compensation service received by flow i while it is lagging.

To achieve graceful degradation in service for leading flows, we use the system parameters α (0 ≤ α ≤ 1) and s_i. The parameter α controls the minimal fraction of service retained by a leading flow. That is, a leading flow gives up at most a (1 − α) fraction of its service to compensate lagging flows.
To do this, each leading flow i is associated with a parameter s_i, which keeps track of the normalized service actually received by the leading flow.

The full WFQ-R algorithm is shown in Fig. 6 and parameter definitions are provided in Table 1. The basic framework and notation follow those of the CIF-Q algorithm [4].

Fig. 6. WFQ-R algorithm.

Table 1
Definitions of parameters used in the WFQ-R algorithm

Parameter   Definition
v_i         Virtual time of flow i
lag_i       The difference between the service that flow i should receive in a reference error-free packet system and the service it has received in the real system (lagging if positive, leading if negative, and in-sync otherwise)
A           The set of active flows
α           Minimal fraction of service retained by any leading flow
s_i         Normalized amount of service actually received by a leading flow i since it became leading
c_i         Normalized amount of compensation service received by a lagging flow i
r_i         The rate of flow i

When a packet is received, it is enqueued at a buffer in the on receiving function. Then, the server picks up a packet from the buffer and sends it in the on sending function. If a flow has no more packets to send, the flow leaves scheduling in the leave function. In the on sending function, a packet is sent by the send_pkt( ) function after the packet is selected to be sent. The send_pkt( ) function gets the packet from the buffer and adjusts the degrees of flows as leading or lagging. Then, it calls the send_and_charge( ) function, in which the packet is actually sent through the link layer by the send( ) function. The send( ) function returns the amount of wireless resources used by the LLR algorithm. Subsequently, the charging overhead, for which the retransmitted flow takes responsibility, is calculated by the charge( ) function depending on the compensation type. Lastly, the charging overhead is distributed over all backlogged flows. Detailed descriptions of the functions are as follows:

† on receiving: When a flow i becomes backlogged and active, its virtual time, v_i, is initialized to the maximum of its virtual time and the minimum virtual time among the other active flows (line 3). Then, its lag is initialized to zero (line 4).

† on sending: The algorithm selects the active flow i with the minimum virtual time for service (line 9). If that flow is not leading, or is leading but did not receive a minimal fraction of service, then the packet at its queue is transmitted (lines 10-12). However, if the flow is leading and receiving more than the minimal service, we search for the lagging flow j with the minimum c_j (line 15). If there is such a flow j, the packet at its queue is transmitted instead of the packet of flow i (lines 16-19). Otherwise, the packet of the original flow i is transmitted (lines 20-21).

† send_pkt( ): After the packet to transmit has been determined, the virtual time of flow i, v_i, is advanced to v_i + p.length/r_i, where r_i is the rate of flow i (line 27). Then, the parameters are adjusted depending on the flow's situation (lines 28-41). If flow i is leading but receives service due to graceful degradation, s_i is updated to s_i + p.length/r_i (lines 28-30). If flow j is served but the overhead is charged to i (i ≠ j), then flow j has gained extra service and flow i has lost service (lines 31-41). Accordingly, lag_j is updated to lag_j − p.length (line 32), and lag_i is updated to lag_i + p.length (line 38). c_j, s_j, and c_i are also adjusted depending on flow j's and i's respective situations (lines 33-37, 39-41). After that, the selected packet is passed to the send_and_charge( ) function (lines 42-43).

† send_and_charge( ): In this function, the packet is actually transmitted through the link layer and the effect of the LLR algorithm is reflected in our wireless fair queueing algorithm. The send( ) function transmits the packet and returns the amount of wireless resources, used_ret, consumed by link level retransmission in the link layer (lines 46-47). If there is no retransmission (used_ret ≤ 0) or no other active flow (A − {j} = Ø), then the procedure is completed and the WFQ-R algorithm returns to the first step (lines 48-49). Otherwise, since flow j has gained extra service by the link level retransmission, it takes responsibility for that. The charging overhead, charged, is calculated through the charge( ) function with used_ret (lines 50-51). The value charged is the amount of resources for which the retransmitted flow is responsible. At that time, if flow j is leading, s_j is increased to s_j + charged/r_j (lines 52-53). Otherwise, lag_j is reduced to lag_j − charged (line 55) and c_j and s_j are updated depending on the situation of flow j (lines 56-59). Because the other flows are deprived of their share due to the retransmission of flow j, they have to get more shares later. Thus, the other flows' lags, lag_l, are increased in proportion to their weights (lines 60-65). Hence, lag_l is updated to lag_l + charged × r_l / Σ_{k∈A−{j}} r_k (line 62).

† charge( ): This function calculates and returns the charging overhead caused by the LLR algorithm. The charging overhead, charged, is calculated based on the amount of wireless resources used by link level retransmission, used_ret. There are two compensation types: FIC and SIC. For FIC, the retransmitted flow j has to take responsibility for all the resources used by the link level retransmission. Thus, FIC returns the amount of all the other flows' resources, used_ret (1 − r_j/Σ_{k∈A} r_k), which were used in the retransmission without permission (line 71). On the other hand, SIC distributes the charge of the used resources to all backlogged flows. Hence, SIC returns only a portion of the amount of used resources, used_ret (1 − r_j/Σ_{k∈A} r_k) · r_j/Σ_{k∈A} r_k, which has to be charged to the flow in proportion to its weight (line 74). An illustrative sketch of this charging step is given after this list.

† leave: When a lagging flow i is no longer backlogged and decides to leave, its positive lag_i is proportionally distributed among all the remaining active flows j, such that each lag_j is updated to lag_j + lag_i · r_j/Σ_{k∈A} r_k (lines 78-84).
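The following is a minimal sketch of the charging step described above: the charge( ) formulas quoted from Fig. 6 (lines 71 and 74) and the redistribution of lag over the other active flows (lines 60-65). The flow dictionaries, function names, and driver code are illustrative assumptions rather than the paper's actual pseudocode, and the branch where a leading flow is charged via s_j instead of lag_j is omitted for brevity.

```python
def charge(used_ret, r_j, rates, mode="FIC"):
    """Charging overhead for the retransmitted flow j (Fig. 6, lines 71 and 74).
    used_ret: resources consumed by link level retransmission,
    r_j: rate of the retransmitted flow, rates: rates of all active flows."""
    total = sum(rates.values())
    others_share = used_ret * (1.0 - r_j / total)   # resources taken from other flows
    if mode == "FIC":
        return others_share                          # flow j absorbs all of it
    if mode == "SIC":
        return others_share * (r_j / total)          # only its weighted portion
    raise ValueError("unknown compensation type")

def apply_retransmission(lags, rates, j, used_ret, mode="FIC"):
    """Update lag values after flow j's retransmission (Fig. 6, lines 50-65).
    NOTE: the leading-flow branch that updates s_j instead of lag_j is omitted."""
    charged = charge(used_ret, rates[j], rates, mode)
    lags[j] -= charged                               # flow j now owes this much service
    others = {k: r for k, r in rates.items() if k != j}
    total_others = sum(others.values())
    for k, r_k in others.items():                    # other flows are owed more service
        lags[k] += charged * r_k / total_others
    return lags

if __name__ == "__main__":
    rates = {"flow1": 1.0, "flow2": 1.0, "flow3": 2.0}
    zero_lags = {f: 0.0 for f in rates}
    # flow1's packet needed retransmissions worth 3 resource units
    print(apply_retransmission(dict(zero_lags), rates, "flow1", used_ret=3, mode="FIC"))
    print(apply_retransmission(dict(zero_lags), rates, "flow1", used_ret=3, mode="SIC"))
```

In both modes the lag values still sum to zero after the update, which is the bookkeeping invariant that lets lagging flows reclaim exactly what the retransmitted flow borrowed.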
4. Analytical results

The WFQ-R algorithm possesses the properties that a packet fair queueing algorithm should have in a wireless environment: throughput and delay guarantees, long-term and short-term fairness, and graceful service degradation. The terminology used is defined in Table 2 and full proofs are given in Appendix A.

Table 2
Terminology for analysis

Term    Definition
v(t)    Virtual time at time t
A       The set of active flows
α       Minimal fraction of service retained by any leading flow
r_i     The rate of flow i (in bits per second)
L_max   The maximum size of a message (in bits)
C_max   The maximum charged capacity per retransmission (in bits)
R       The capacity rate of the server (in bits per second)
n       The number of active flows

Theorem 1. (Throughput Guarantee for error-free flows) If a flow i is continually backlogged over the interval [t_1, t_2) and \sum_{k \in A} r_k \le R, then its aggregate service, W_i(t_1, t_2), is bounded by:

W_i(t_1, t_2) \ge r_i (t_2 - t_1) - \frac{r_i}{R} \sum_{k \in A} (3 L_{max} + n C_{max}) - (3 L_{max} + n C_{max}).

Theorem 2. (Delay Guarantee for error-free flows) The delay experienced by the kth packet of an error-free flow i in the real system, PDT(p_i^k), is bounded as follows:

PDT(p_i^k) \le EAT(p_i^k) + \frac{(n-1) L_{max} + l_i^k}{R} + \frac{L_{max} + (n-1) C_{max}}{r_i},

where EAT(p_i^k) and l_i^k are the expected arrival time and length of the kth packet of flow i, respectively.

Theorem 3. (Long-term fairness Guarantee for error-prone flows) A continually backlogged flow i achieves the following long-term throughput fairness index:

\lim_{v(t) \to \infty} \frac{W_i(0, t)}{v(t)} = r_i.

Theorem 4. (Short-term fairness Guarantee for error-free flows) The difference between the normalized service received by any two flows i and j during an interval [t_1, t_2) in which both flows are continuously backlogged, error-free, and their status does not change is bounded as follows:

\left| \frac{W_i(t_1, t_2)}{r_i} - \frac{W_j(t_1, t_2)}{r_j} \right| \le \beta \left( \frac{L_{max}}{r_i} + \frac{L_{max}}{r_j} \right),

where β = 3 if both flows are lagging, β = 2 if both flows are in-sync, and β = 2 + α if both flows are leading.

Theorem 5. (Graceful Service Degradation for leading flows) Graceful service degradation is explicitly enforced by the algorithm via the parameter α, as in [5].

Through these analyses, we find that our WFQ-R algorithm satisfies the properties of a wireless fair queueing algorithm and guarantees performance in the manner of CIF-Q [4] and WFS [5].
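As a rough numeric illustration of the Theorem 1 bound (our own calculation, not part of the paper's analysis), the snippet below evaluates the guaranteed service floor for assumed values loosely based on the simulation setup of Section 5: a 1.5 Mbps server, three equally weighted flows, 1 KB packets, and a retransmission charge of four packets.

```python
def theorem1_lower_bound(r_i, R, n, L_max, C_max, interval):
    """Service floor from Theorem 1 over `interval` seconds (all sizes in bits)."""
    per_flow_term = 3 * L_max + n * C_max
    return r_i * interval - (r_i / R) * n * per_flow_term - per_flow_term

if __name__ == "__main__":
    R = 1.5e6                      # server capacity in bits per second (assumed)
    n = 3                          # three equally weighted flows
    r_i = R / n                    # per-flow rate
    L_max = 1024 * 8               # 1 KB packets, in bits
    C_max = 4 * L_max              # four extra packets charged per retransmission
    bound = theorem1_lower_bound(r_i, R, n, L_max, C_max, interval=1.0)
    print(f"guaranteed service over 1 s: at least {bound / 1e3:.1f} kbit "
          f"(ideal share is {r_i / 1e3:.0f} kbit)")
```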
5. Simulation experiments

In this section, we present results from simulation experiments to demonstrate the fairness of the WFQ-R algorithm.
Fig. 7. Simulation topology.
The following metrics are measured to evaluate the algorithm:

† Allocated resources: the amount of wireless channel resources consumed by a flow. It directly represents resource fairness.
† Queueing delay: the delay experienced in a queue.
† Received data: the actual amount of data received at a receiver. It represents data fairness. However, received data is a secondary metric compared to allocated resources because, as previously noted, data fairness cannot be strictly sustained in wireless networks.

5.1. Simulation environments

For the simulation, we used the NS simulator [24]. The simulation topology is shown in Fig. 7. There are three flows in the simulation: two error-free flows and one error-prone flow. The retransmission probabilities of flows 1 and 2 are zero, and that of flow 3 is 0.2. The flows start to transmit packets at 0.0, 0.4, and 1.3 s, respectively. All flows have the same weight and stop at 10 s. The properties of each flow are summarized in Table 3.

In the simulation, the packet size is 1 KB and the time interval between packet generations is 8 ms. Each flow has its own 5 KB queue. The flows share a wireless channel with a 1.5 Mbps bandwidth and a 10 ms delay. In the simulation, if a packet is retransmitted in the link layer, four more resources are consumed by the link level retransmission.

We simulate our WFQ-R algorithms and the CIF-Q algorithm [4] combined with the LLR algorithm. The CIF-Q algorithm is simulated to evaluate the performance of prediction-based wireless fair queueing algorithms when they are driven by the LLR algorithm instead of perfect channel prediction.
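To see why roughly 500 kbps per flow is the fair target in the results that follow, the short calculation below compares the offered load of each source with the equal-weight share of the 1.5 Mbps channel. It is a back-of-the-envelope check under the assumption that 1 KB means 1024 bytes, which the paper does not state explicitly.

```python
def simulation_setup_summary():
    """Offered load versus steady-state fair share for the Section 5.1 setup
    (1 KB packets every 8 ms per flow, three equal-weight flows, 1.5 Mbps channel)."""
    packet_bits = 1024 * 8           # 1 KB packet, assuming 1 KB = 1024 bytes
    interval_s = 0.008               # one packet generated every 8 ms per flow
    channel_bps = 1.5e6              # shared wireless channel capacity
    n_flows = 3                      # equal weights

    offered_per_flow = packet_bits / interval_s   # about 1.02 Mbps per flow
    total_offered = n_flows * offered_per_flow    # about 3.07 Mbps, well above 1.5 Mbps
    fair_share = channel_bps / n_flows            # 500 kbps per backlogged flow
    return offered_per_flow, total_offered, fair_share

if __name__ == "__main__":
    per_flow, total, share = simulation_setup_summary()
    print(f"offered per flow: {per_flow / 1e3:.0f} kbps, "
          f"total offered: {total / 1e6:.2f} Mbps, "
          f"equal-weight fair share: {share / 1e3:.0f} kbps")
```

Since the total offered load exceeds the channel capacity, all flows stay backlogged and an equal-weight scheduler should give each flow about 500 kbps of channel resources, which is the baseline against which Table 4 can be read.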
Table 3
Properties of flows in the simulation

                     Flow1    Flow2    Flow3
Weight               1        1        1
Retrans. prob.       0        0        0.2
Start time (s)       0        0.4      1.3
Stop time (s)        10       10       10

5.2. Simulation results

Fig. 8 shows the amount of allocated resources, queueing delay, and received data served to each flow over time. The average values of the simulation results are shown in Table 4.

In Fig. 8a and Table 4, we see that the CIF-Q algorithm with the LLR (CIF-Q_LLR) does not properly provide fairness. It assigns too many wireless resources to the error-prone flow (shown in Fig. 8a-i). Additionally, the delay of the error-free flows is also increased due to the interference of the error-prone flow (shown in Fig. 8a-ii). The CIF-Q_LLR appears to be fair in terms of received data (shown in Fig. 8a-iii). However, this is the result of a lopsided sacrifice of the error-free flows. It will eventually be unable to maintain received-data fairness when the error-prone flow suffers severe channel errors and exhausts most wireless resources.

Contrary to the CIF-Q_LLR algorithm, our WFQ-R algorithms distribute wireless channel resources fairly to each flow. The WFQ-R_FIC gives precisely the same amount of resources to all flows (shown in Fig. 8b-i). The WFQ-R_SIC gives slightly more resources to the error-prone flow to compensate for error recovery (shown in Fig. 8c-i and iii). In addition, the WFQ-R_FIC separates the delay of error-free and error-prone flows completely (shown in Fig. 8b-ii), while the WFQ-R_SIC separates the delay more smoothly (shown in Fig. 8c-ii). According to the simulation results, we see that our WFQ-R algorithms effectively maintain fairness with the LLR algorithm.

The difference between FIC and SIC is a trade-off between flow separation and compensation. Flow separation ensures that an error-free flow is not impacted at all by other flows. Strict flow separation, however, brings about starvation of a frequently error-prone flow due to the shortage of compensation resources. Thus, we propose the two compensation types, FIC and SIC, to coordinate flow separation and compensation appropriately. That is, the FIC rigidly maintains resource fairness and the SIC provides slightly more resources to error-prone flows for compensating data fairness.
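As a rough consistency check on these numbers (our own back-of-the-envelope estimate, not an analysis from the paper): a flow with retransmission probability 0.2 that pays four extra resource units per retransmitted packet needs on average 1 + 0.2 × 4 = 1.8 resource units per delivered packet, so its received data should be roughly 1/1.8, or about 56%, of its allocated resources. The WFQ-R rows of Table 4 for flow 3 are close to this ratio.

```python
def expected_goodput_ratio(retrans_prob, extra_cost):
    """Expected fraction of allocated resources that become delivered data when
    each retransmission event costs `extra_cost` additional resource units."""
    return 1.0 / (1.0 + retrans_prob * extra_cost)

if __name__ == "__main__":
    ratio = expected_goodput_ratio(retrans_prob=0.2, extra_cost=4)
    print(f"expected goodput/allocation ratio for flow 3: {ratio:.3f}")
    # Table 4, flow 3: WFQ-R_FIC gives 282.299/499.31, about 0.565;
    # WFQ-R_SIC gives 353.103/614.253, about 0.575; both near the 0.556 estimate.
```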
Fig. 8. Simulation results.
Through the WFQ-R algorithms, we can also adaptively achieve separation among flows in delay and throughput depending on each channel condition.
Table 4
Average values of the simulation results

                              Flow1      Flow2      Flow3
Alloc. resrc. (kbps)
  CIF-Q_LLR                   423.908    424.828    649.195
  WFQ-R_FIC                   499.31     500.23     499.31
  WFQ-R_SIC                   441.379    442.299    614.253
Q. delay (ms)
  CIF-Q_LLR                   91.06      89.334     92.209
  WFQ-R_FIC                   75.564     74.501     140.93
  WFQ-R_SIC                   85.616     85.19      111.785
Recvd. data (kbps)
  CIF-Q_LLR                   423.908    424.828    422.989
  WFQ-R_FIC                   499.31     500.23     282.299
  WFQ-R_SIC                   441.379    442.299    353.103

Alloc. resrc.: wireless resources used by each flow; Q. delay: delay time in a queue; Recvd. data: actual amount of received data.

6. Conclusions

Wireless networks have an unreliable wireless channel that experiences bursty and location-dependent errors. To recover from these channel errors, most wireless systems adopt the Link Level Retransmission (LLR) algorithm in the link layer. However, the LLR algorithm does not match well with previous prediction-based wireless fair queueing algorithms. This is because the prediction-based algorithms have assumed that a wireless channel can be predicted completely before transmission and that the LLR algorithm does not exist in the link layer. The prediction-based algorithms maintain fairness by dynamic reallocation of resources based on perfect channel prediction before transmission. Perfect channel prediction, however, is very difficult in a real wireless system. Instead of perfect channel prediction, most wireless systems actually adopt the LLR algorithm in the link layer, as in ARQ schemes. Accordingly, we need a fair queueing algorithm that is well suited to the LLR algorithm in wireless networks.

In this paper, we evaluated the previous prediction-based wireless fair queueing algorithms with the LLR algorithm and proposed a new wireless fair queueing algorithm. The prediction-based algorithms are carried out above the link layer and are not related to the LLR algorithm. Consequently, they cannot maintain fairness in wireless systems based on the LLR algorithm. Therefore, we proposed a new wireless fair queueing algorithm, the Wireless Fair Queueing with Retransmission (WFQ-R) algorithm. The WFQ-R algorithm achieves wireless fairness by penalizing flows that use resources in the link layer without the permission of the packet scheduler. Thus, our WFQ-R algorithm works well with the LLR algorithm. Moreover, contrary to the prediction-based algorithms, our algorithm does not need channel prediction before transmission. Through analyses, we proved that our algorithm has the properties that a wireless fair queueing algorithm has to guarantee. Through simulations, we also found that our WFQ-R algorithm maintains fairness adaptively. Furthermore, we demonstrated that our algorithm achieves flow separation and compensation.

For future work, we will study the WFQ-R algorithm more intensively with the LLR algorithm implemented in real wireless systems such as cdma2000 [17], UMTS [18], and other future wireless systems [21,22,25,26]. Furthermore, we plan to analyze TCP performance with the WFQ-R algorithm in wireless networks.

Acknowledgement

This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Advanced Information Technology Research Center (AITrc) and the University IT Research Center Project.
Appendix A

A.1. Throughput Guarantee

Lemma A1. The lag of a flow i is bounded as follows:

lag_i \le L_{max} + (n-1) C_{max}    (A1)

lag_i \ge -(L_{max} + C_{max}).    (A2)

Proof. It has been proved in [4] that the lag of an error-free flow is never greater than L_max in the CIF-Q (Channel-condition Independent Fair Queueing) system. In our system, the lag can increase further because a flow that experiences link layer retransmission uses the server resources in advance. Due to the retransmitted flow, the lag of the other flows will increase in proportion to their weight r_i. Hence, a flow can receive at most C_{max} r_i / \sum_{k \in A} r_k of additional lag from each retransmitted flow. The maximum number of retransmitted flows that can give lag is the number of active flows except the flow itself, i.e. A - {i}. Hence, we obtain the following inequality:

lag_i \le L_{max} + \sum_{j \in A - \{i\}} C_{max} \frac{r_i}{\sum_{k \in A} r_k}.

Since r_i / \sum_{k \in A} r_k \le 1, we have:

lag_i \le L_{max} + \sum_{j \in A - \{i\}} C_{max} \frac{r_i}{\sum_{k \in A} r_k} \le L_{max} + \sum_{j \in A - \{i\}} C_{max} = L_{max} + (n-1) C_{max}.

The lag of flow i can decrease in one of the following two cases: (a) when flow i is lagging or in-sync, it receives service from another flow j; and (b) when flow i gets the service, it experiences retransmission and uses server capacity in advance. Since, in case (a), flow i has been lagging or in-sync before receiving service, the lag of flow i cannot decrease by more than L_max after receiving service. Therefore, we obtain

lag_i = lag_i - l_j^k \ge -l_j^k \ge -L_{max},

where l_j^k is the length of the kth packet at the head of the queue of flow j. If, at that time, flow i experiences retransmission (case (b) above), the lag can decrease by as much as C_max more. Thus, we have:

lag_i \ge -(L_{max} + C_{max}).

This concludes the proof. □

Lemma A2. Let W_i(t_1, t_2) be the service received by an error-free flow i during the interval [t_1, t_2) (t_1 and t_2 are packet transmission finish times) while flow i is continually backlogged, and let W_i^r(t_1, t_2) be the service received by the same flow in an SFQ (Start-time Fair Queueing) reference system. Then, we have:

W_i^r(t_1, t_2) = W_i(t_1, t_2) + lag_i(t_2) - lag_i(t_1).    (A3)

Proof. The proof has been shown in [4]. □

Lemma A3. If flow i is backlogged throughout the interval [t_1, t_2), then in the real system S

W_i(t_1, t_2) \le r_i (v_2 - v_1) + 3 L_{max} + n C_{max}    (A4)

W_i(t_1, t_2) \ge r_i (v_2 - v_1) - 3 L_{max} - n C_{max},    (A5)

where v_1 = v(t_1) and v_2 = v(t_2).

Proof. It has been proved in [9] that:

W_i^r(t_1, t_2) \ge r_i (v_2 - v_1) - L_{max}    (A6)

W_i^r(t_1, t_2) \le r_i (v_2 - v_1) + L_{max}.    (A7)

From Eq. (A3), we know

W_i(t_1, t_2) \ge r_i (v_2 - v_1) - L_{max} - lag_i(t_2) + lag_i(t_1)

and:

W_i(t_1, t_2) \le r_i (v_2 - v_1) + L_{max} - lag_i(t_2) + lag_i(t_1).

Thus, from Eqs. (A1) and (A2), we conclude the proof. □

Theorem A1. If a flow i is continually backlogged over the interval [t_1, t_2) and \sum_{k \in A} r_k \le R, then its aggregate service is bounded by:

W_i(t_1, t_2) \ge r_i (t_2 - t_1) - \frac{r_i}{R} \sum_{k \in A} (3 L_{max} + n C_{max}) - (3 L_{max} + n C_{max}).    (A8)

Proof. We will derive a minimum throughput using arguments similar to those of [7,9]. Let v(t_1) = v_1 and let \bar{W}(v_1, v_2) denote the aggregate length of packets served by the server in the virtual time interval [v_1, v_2). Then, from Eq. (A4), we have:

\bar{W}(v_1, v_2) \le \sum_{i \in A} r_i (v_2 - v_1) + \sum_{k \in A} (3 L_{max} + n C_{max}).

Since \sum_{k \in A} r_k \le R:

\bar{W}(v_1, v_2) \le R (v_2 - v_1) + \sum_{k \in A} (3 L_{max} + n C_{max}).    (A9)

Define v_2 as:

v_2 = v_1 + (t_2 - t_1) - \frac{\sum_{k \in A} (3 L_{max} + n C_{max})}{R}.    (A10)

Then from Eq. (A9), we conclude:

\bar{W}(v_1, v_2) \le R \left( v_1 + (t_2 - t_1) - \frac{\sum_{k \in A} (3 L_{max} + n C_{max})}{R} - v_1 \right) + \sum_{k \in A} (3 L_{max} + n C_{max}) \le R (t_2 - t_1).

Let \hat{t}_2 be such that v(\hat{t}_2) = v_2. Therefore, the real time it takes the server to finish serving \bar{W}(v_1, v_2), given by \hat{t}_2 - t_1, satisfies:

\hat{t}_2 - t_1 = \frac{\bar{W}(v_1, v_2)}{R} \le \frac{R (t_2 - t_1)}{R}.

Hence, it follows that \hat{t}_2 \le t_2. From Eq. (A5) we know that

W_i(t_1, \hat{t}_2) \ge r_i (v_2 - v_1) - 3 L_{max} - n C_{max}.

Since \hat{t}_2 \le t_2, using Eq. (A10) we get:

W_i(t_1, t_2) \ge W_i(t_1, \hat{t}_2) \ge r_i (v_2 - v_1) - 3 L_{max} - n C_{max} \ge r_i (t_2 - t_1) - \frac{r_i}{R} \sum_{k \in A} (3 L_{max} + n C_{max}) - (3 L_{max} + n C_{max}). □

A.2. Delay Guarantee

Lemma A4. Assume an error-free flow i becomes active at time t_0 and is continually backlogged during any time interval [t_0, t) (t_0 and t are packet transmission finish times). Then the difference between the service received by flow i in the real system S and the service that the flow would receive in the reference system S_r^SFQ is bounded as follows:

W_i^r(t_0, t) - W_i(t_0, t) \le L_{max} + (n-1) C_{max}.    (A11)

Proof. When flow i becomes active at time t_0, lag_i(t_0) = 0. According to Eq. (A3), we obtain:

W_i^r(t_0, t) = W_i(t_0, t) + lag_i(t).

Further, since flow i is assumed to be error-free during the interval [t_0, t), according to Eq. (A1), we obtain:

W_i^r(t_0, t) - W_i(t_0, t) = lag_i(t) \le L_{max} + (n-1) C_{max}.

This concludes the proof. □

Definition A1. For an arbitrary packet of flow i, the expected arrival time, EAT(p_i^k), of the kth packet of flow i is defined as [9]

EAT(p_i^k) = \max\left\{ A(p_i^k), \; EAT(p_i^{k-1}) + \frac{l_i^{k-1}}{r_i} \right\}, \quad k > 1,    (A12)

where A(p_i^k) is the actual arrival time of the kth packet of flow i, and EAT(p_i^1) = -\infty.

Theorem A2. The delay experienced by the kth packet of an error-free flow i in the real system S is bounded as follows:

PDT(p_i^k) \le EAT(p_i^k) + \frac{(n-1) L_{max} + l_i^k}{R} + \frac{L_{max} + (n-1) C_{max}}{r_i},    (A13)

where PDT(p_i^k) is the departure time of the kth packet of flow i.

Proof. It has been proved in [9] that the delay of any packet k of a flow i under SFQ is bounded by:

PDT(p_i^k) \le EAT(p_i^k) + \frac{(n-1) L_{max} + l_i^k}{R}.    (A14)

Since by time PDT(p_i^k) the kth packet of flow i has been transmitted, we obtain:

W_i(A(p_i^1), PDT(p_i^k)) = \sum_{j=1}^{k} l_i^j.

However, according to Eq. (A11), in the reference error-free system S_r^SFQ, we have:

W_i^r(A(p_i^1), PDT(p_i^k)) \le L_{max} + (n-1) C_{max} + \sum_{j=1}^{k} l_i^j.

Thus, in the worst case, the work that flow i needs to receive in the reference error-free system S_r^SFQ until the kth packet of flow i is transmitted in the real system S is at most L_{max} + (n-1) C_{max} + \sum_{j=1}^{k} l_i^j. Consequently, according to Eq. (A12), the expected arrival time of the kth packet of flow i in S_r^SFQ, denoted EAT^r(p_i^k), is bounded by:

EAT^r(p_i^k) \le EAT(p_i^k) + \frac{L_{max} + (n-1) C_{max}}{r_i}.    (A15)

From Eqs. (A15) and (A14), the proof follows. □

A.3. Long-term fairness Guarantee

Theorem A3. A continually backlogged flow i achieves the following long-term throughput fairness index:

\lim_{v(t) \to \infty} \frac{W_i(0, t)}{v(t)} = r_i.    (A16)

Proof. From Eq. (A3) and lag_i(0) = 0, we have:

W_i(0, t) = W_i^r(0, t) - lag_i(t).

From Eqs. (A6) and (A1), we obtain:

W_i(0, t) \ge r_i v(t) - 2 L_{max} - (n-1) C_{max}.    (A17)

Similarly, from Eqs. (A7) and (A2), we have

W_i(0, t) \le r_i v(t) + 2 L_{max} + C_{max}.    (A18)

From Eqs. (A17) and (A18), we have the following inequality:

r_i v(t) - 2 L_{max} - (n-1) C_{max} \le W_i(0, t) \le r_i v(t) + 2 L_{max} + C_{max}.

Dividing all sides by v(t), we have:

r_i - \frac{2 L_{max}}{v(t)} - \frac{(n-1) C_{max}}{v(t)} \le \frac{W_i(0, t)}{v(t)} \le r_i + \frac{2 L_{max}}{v(t)} + \frac{C_{max}}{v(t)}.

The result follows readily by letting v(t) \to \infty. □

A.4. Short-term fairness Guarantee

Lemma A5. The difference between the virtual times of any two active flows i and j is bounded as follows:

-\frac{L_{max}}{r_j} \le v_i - v_j \le \frac{L_{max}}{r_i}.    (A19)

Proof. The proof has been shown in [4]. □

Lemma A6. The difference between the compensation virtual times of any two active error-free flows i and j that are both lagging is bounded as follows:

-\frac{L_{max}}{r_j} \le c_i - c_j \le \frac{L_{max}}{r_i}.    (A20)

Proof. The proof has been shown in [4]. □

Lemma A7. For any leading error-free flow i:

(\alpha - 1) \frac{L_{max}}{r_i} \le \alpha v_i - s_i \le \alpha \frac{L_{max}}{r_i}.    (A21)

Proof. The proof has been shown in [4]. □

Theorem A4. The difference between the normalized service received by any two flows i and j during an interval [t_1, t_2) in which both flows are continuously backlogged, error-free, and their status does not change is bounded as follows:

\left| \frac{W_i(t_1, t_2)}{r_i} - \frac{W_j(t_1, t_2)}{r_j} \right| \le \beta \left( \frac{L_{max}}{r_i} + \frac{L_{max}}{r_j} \right),    (A22)

where β = 3 if both flows are lagging, β = 2 if both flows are in-sync, and β = 2 + α if both flows are leading.

Proof. We consider three cases: both flows are (1) lagging, (2) in-sync, or (3) leading during the interval [t_1, t_2).

(1) Both flows are lagging. In this case, the proof is similar to [4]. Thus:

\left| \frac{W_i(t_1, t_2)}{r_i} - \frac{W_j(t_1, t_2)}{r_j} \right| \le 3 \left( \frac{L_{max}}{r_i} + \frac{L_{max}}{r_j} \right).

(2) Both flows are in-sync. In this case both flows are served each time they are selected based on their virtual times. Because there is no giving up of service, they cannot receive excess service as in [4]. Therefore, it follows that the total service received by an error-free in-sync flow during [t_1, t_2) is bounded by:

v_i(t_2) - v_i(t_1) - \frac{L_{max}}{r_i} \le \frac{W_i(t_1, t_2)}{r_i} \le v_i(t_2) - v_i(t_1) + \frac{L_{max}}{r_i}.

Thus, by using Eq. (A19), we obtain:

\left| \frac{W_i(t_1, t_2)}{r_i} - \frac{W_j(t_1, t_2)}{r_j} \right| \le 2 \left( \frac{L_{max}}{r_i} + \frac{L_{max}}{r_j} \right).

(3) Both flows are leading. Similar to the previous case, the service received by a leading flow i during [t_1, t_2) is bounded by:

s_i(t_2) - s_i(t_1) - \frac{L_{max}}{r_i} \le \frac{W_i(t_1, t_2)}{r_i} \le s_i(t_2) - s_i(t_1) + \frac{L_{max}}{r_i}.    (A23)

Further, according to Eq. (A21), for any leading error-free flow i and any time t, while it is active, we have:

\alpha v_i(t) - \alpha \frac{L_{max}}{r_i} \le s_i(t) \le \alpha v_i(t) - (\alpha - 1) \frac{L_{max}}{r_i}.

Consequently, for any two leading error-free flows that are active at time t, we have:

\alpha (v_i(t) - v_j(t)) + \alpha \left( \frac{L_{max}}{r_j} - \frac{L_{max}}{r_i} \right) - \frac{L_{max}}{r_j} \le s_i(t) - s_j(t) \le \alpha (v_i(t) - v_j(t)) + \alpha \left( \frac{L_{max}}{r_j} - \frac{L_{max}}{r_i} \right) + \frac{L_{max}}{r_i}.

From Eq. (A19), we obtain

-\alpha \frac{L_{max}}{r_i} - \frac{L_{max}}{r_j} \le s_i(t) - s_j(t) \le \frac{L_{max}}{r_i} + \alpha \frac{L_{max}}{r_j}.

Finally, from Eq. (A23), we have:

\left| \frac{W_i(t_1, t_2)}{r_i} - \frac{W_j(t_1, t_2)}{r_j} \right| \le (2 + \alpha) \left( \frac{L_{max}}{r_i} + \frac{L_{max}}{r_j} \right).

This concludes the proof of the theorem. □

A.5. Graceful service degradation

Graceful service degradation is explicitly enforced by the algorithm via the parameter α, as in [5].
References

[1] A. Demers, S. Keshav, S. Shenker, Analysis and simulation of a fair queueing algorithm, in: Proceedings of ACM SIGCOMM'89, 1989.
[2] S. Lu, V. Bharghavan, R. Srikant, Fair scheduling in wireless packet networks, IEEE/ACM Transactions on Networking 7 (4) (1999).
[3] P. Ramanathan, P. Agrawal, Adapting packet fair queueing algorithms to wireless networks, in: Proceedings of ACM MOBICOM'98, Dallas, USA, Oct. 1998.
[4] T.S.E. Ng, I. Stoica, H. Zhang, Packet fair queueing algorithms for wireless networks with location-dependent errors, in: Proceedings of IEEE INFOCOM'98, 1998.
[5] S. Lu, T. Nandagopal, V. Bharghavan, Design and analysis of an algorithm for fair service in error-prone wireless channels, ACM Wireless Networks Journal 6 (4) (2000) 232-343.
[6] V. Bharghavan, S. Lu, T. Nandagopal, Fair queuing in wireless networks: issues and approaches, IEEE Personal Communications, Feb. 1999.
[7] A. Parekh, R. Gallager, A generalized processor sharing approach to flow control in integrated services networks: the single-node case, IEEE/ACM Transactions on Networking 1 (3) (1993).
[8] S. Golestani, A self-clocked fair queueing scheme for broadband applications, in: Proceedings of ACM SIGCOMM'94, Toronto, Canada, Jun. 1994.
[9] P. Goyal, H.M. Vin, H. Cheng, Start-time fair queueing: a scheduling algorithm for integrated services packet switching networks, IEEE/ACM Transactions on Networking 5 (5) (1997).
[10] J. Bennett, H. Zhang, Hierarchical packet fair queueing algorithms, IEEE/ACM Transactions on Networking 5 (5) (1997).
[11] S. Lin, D.J. Costello Jr., Error Control Coding: Fundamentals and Applications, Prentice-Hall, New Jersey, 1983.
[12] S. Lin, D.J. Costello Jr., M.J. Miller, Automatic-repeat-request error-control schemes, IEEE Communications Magazine 22 (12) (1984) 5-17.
[13] J. Li, H. Imai, Performance of hybrid-ARQ protocols with rate compatible turbo codes, in: Proceedings of the International Symposium on Turbo Codes, Brest, France, Sept. 1997, pp. 188-191.
[14] M. Frodigh, S. Parkvall, C. Roobol, P. Johansson, P. Larsson, Future-generation wireless networks, IEEE Personal Communications, Oct. 2001, pp. 10-17.
[15] M. Chatterjee, G. Mandyam, S.K. Das, Fast ARQ in high speed downlink packet access for WCDMA systems, in: Proceedings of the European Wireless Conference, Florence, Italy, Feb. 2002, pp. 451-457.
[16] TIA/EIA/IS-95 Rev A, Mobile station-base station compatibility standard for dual-mode wideband spread spectrum cellular systems, Telecommunication Industry Association/Electronics Industry Association Interim Standard, Washington, DC, Jul. 1995.
[17] TIA/EIA/cdma2000, Mobile station-base station compatibility standard for dual-mode wideband spread spectrum cellular systems, Telecommunication Industry Association/Electronics Industry Association Interim Standard, Washington, DC, 1999.
[18] 3GPP, RLC Protocol Specification, 3GPP TS 25.322 version 3.0, Oct. 1999.
[19] S. Lin, P.S. Yu, A hybrid ARQ scheme with parity retransmission for error control of satellite channels, IEEE Transactions on Communications 7 (1982) 1701-1719.
[20] D. Chase, Code combining: a maximum-likelihood decoding approach for combining an arbitrary number of noisy packets, IEEE Transactions on Communications 33 (1985) 593-607.
[21] 3GPP, UTRA High Speed Downlink Packet Access (HSDPA), Overall description; Stage 2, 3GPP TS 25.308 version 5.4.0 Release 5, Apr. 2003.
[22] 3GPP2, Physical Layer Standard for cdma2000 Spread Spectrum Systems Release C, 3GPP2 C.S0002-C V1.0, May 2002.
[23] D.S. Park, P.Y. Joo, H.K. Choi, H.W. Lee, Y.K. Kim, Some Requirements of As-Yet-Defined MBWA Air Interface, IEEE C802m_ecsg-02/04, http://grouper.ieee.org/groups/802/20/Contribs/C802m_ecsg_02-04.pdf, Sep. 2002.
[24] The Network Simulator ns, http://www.isi.edu/nsnam/ns/.
[25] IEEE 802.16 Task Group e (Mobile Wireless MAN), http://www.ieee802.org/16/tge/.
[26] IEEE 802.20 Mobile Broadband Wireless Access (MBWA), http://grouper.ieee.org/groups/802/20/index.html.