Adaptive scheduling method for maximizing revenue in flat pricing scenario

Jyrki Joutsensalo(a), Timo Hämäläinen(a,*), Kari Luostarinen(b), Jarmo Siltanen(c)

(a) Faculty of Information Technology, Department of Mathematical Information Technology, University of Jyväskylä, Agora, Mattilanniemi 2, P.O. Box 35, FIN-40351 Jyväskylä, Finland
(b) Metso Paper Inc., P.O. Box 587, FIN-40101 Jyväskylä, Finland
(c) Jyväskylä Polytechnic, Jyväskylä, Finland

Received 5 March 2004; received in revised form 13 January 2005

Abstract

End-to-end quality of service is critical to the success of current and future networked applications. Applications such as real-time actions and transactions should be given priority over less critical ones (such as web surfing). Furthermore, many multimedia applications require delay or delay-variation guarantees for acceptable performance. Weighted fairness is also important both among customers or aggregates (depending on the tariff or subscription) and within an aggregate (for example, to prevent starvation among sessions or service categories). This paper presents an adaptive scheduling algorithm for traffic allocation. We use a flat pricing scenario in our model, and the weights of the queues are updated using revenue as a target function. Due to its closed-form nature, the algorithm can operate in nonstationary environments. In addition, it is nonparametric and deterministic in the sense that no assumptions are made about connection density functions or duration distributions.

© 2005 Elsevier GmbH. All rights reserved.

Keywords: Dynamical tracking of network parameters; Flat pricing; Revenue maximization; QoS

1. Introduction

While using network services, each user is concerned with his own quality of service derived from the network. In the absence of novel connection admission control (CAC) and charging methods, users try to maximize their usage of the network resources. However, any user's traffic takes up resources such as link capacity, switching capacity of the routers, buffer space in the queues, and so on. This affects the QoS received by other users' traffic flows. For example, a non-delay-sensitive flow may generate a lot of bursty traffic, which hurts a performance-sensitive application like


VoIP, which needs strict delay bounds. However, the utility function of a performance-insensitive flow is a slowly decreasing function of the congestion, whereas for a delay-sensitive application the utility function drops sharply with increased congestion. Allowing better QoS to the sensitive flow at the expense of the insensitive flow improves the utility of the sensitive flow a lot while making little or no difference to the utility of the insensitive flow, hence achieving a better total network utility.

Numerous pricing proposals have been developed for the efficient use of packet-switched networks. Here, we review research work that has features similar to those used in our model. One pricing approach proposed by several researchers is that a small number of service classes should be offered on the network [1–3]. A method of charging bursty traffic sources based on their effective bandwidth is presented in [4,5].


Effective bandwidth is a function of the traffic profile of a source and provides a good measure upon which to base pricing decisions. The pricing of real-time traffic with QoS requirements that only involve time and volume charges is presented in [6]. Congestion-dependent pricing is studied in [7]. That paper analyzes optimal and near-optimal pricing schemes by using a decision-theoretic framework under an explicit model of users' reaction to prices (demand functions). Cost-based scheduling and dropping algorithms for integrated services are presented in [8]. Their heuristic algorithms attempt to optimize network performance as perceived by the applications by minimizing the total cost incurred by all packets. Appropriate cost functions are presented for common applications, too. The pricing of a single network which provides multiple services at different performance levels is studied in [9]. They present a good example which shows that, in comparison to flat-rate pricing for all services, a price schedule based on performance objectives can enable every customer to derive a higher surplus from the service and, at the same time, generate a bigger revenue for the service provider. Ref. [10] describes yet another scheme for packet-based pricing as an incentive for more efficient flow control. The emergence of real-time traffic substantially complicates the picture and requires QoS measures that are much harder to analyze [11–13]. User control for trading delay with price is studied in [14]. A single queueing model is developed in [15,16]. In this model the network is formulated as a server (or servers) with limited capacity, and consumers demand the same service from the server but vary in both willingness to pay for the service and tolerance for delay. Based on that model, they discuss the optimal pricing policy and capacity investment strategy. Ref. [17] suggests a spot-price model for Internet pricing. In their model, every Internet packet is marked with the consumer's willingness to pay for sending it. The network always transmits packets with higher willingness to pay and drops packets with lower willingness to pay. The network charges a spot price that equals the lowest willingness to pay among all packets sent during each short period. The major benefit of this approach is that it provides consumers with an incentive to reveal their true willingness to pay, and based on that information, the network can resolve capacity contention in transmitting packets in a way that maximizes social welfare. In the work by Gupta et al. [1], priority-based pricing and congestion-based pricing are integrated. In their pricing model, services are divided into different priority classes. Packets from a high-priority class always have precedence over packets from a low-priority class. The price for each packet depends not only on the packet's priority level, but also on the current network load. In the pricing models mentioned, the fact that different applications may have different performance objectives was not properly considered. Our research differs from the above studies by linking pricing and queueing issues together; in addition, our model does not need any additional

information about user behavior, utility functions, etc. (as most pricing and game-theoretic models do).

This paper extends our previous pricing and QoS research [18–21] to take queue scheduling issues into account by introducing a dynamic weight tracking algorithm in the scheduler. A QoS- and revenue-aware scheduling algorithm is investigated; it is derived from a Lagrangian optimization problem, and an optimal closed-form solution is presented.

The rest of the paper is organized as follows. In Section 2, the flat pricing scenario used in this work is presented and defined, and the closed-form scheduling algorithm is derived. Section 3 discusses implementation issues and Section 4 the computational complexity. Section 5 contains the experimental part justifying the theorems, and discussion and future work are given in Section 6.

2. Flat pricing scenario and algorithm

In the pricing scenario, there are m different traffic classes for different applications and priorities. In the literature, these are commonly referred to as the gold, silver, and bronze classes, in which case m = 3. As an example of three classes, the gold class customers are ready to pay much more to get guaranteed bandwidth, delay, and packet loss probability. The silver class customers have less stringent QoS requirements; in the silver class, bandwidth and delay should be guaranteed. In the bronze class, delay and packet loss can vary a lot; the most important QoS parameter is guaranteed bandwidth. Table 1 depicts the QoS properties of each class in our scenario.

Table 1. QoS parameters for the traffic classes

Traffic class    Bandwidth    e-to-e delay    Jitter    Packet loss
Gold             x            x               x         x
Silver           x            x
Bronze           x
Best effort

Let d_0 be the minimum processing time of the classifier for transmitting data from one queue to the output in Fig. 1. The data packets of class i have the size E(b_i). In our scheduling model, the real processing time (delay) for class i in the packet scheduler is

    d_i = \frac{N_i E(b_i) d_0}{w_i} = \frac{N_i}{w_i},    (1)

where w_i(t) = w_i, i = 1, ..., m, are the weights allotted to each class, and N_i(t) = N_i is the number of accepted connections (customers) in the ith queue.

Fig. 1. Traffic classification at the output buffers.

Consider, e.g., the scheduler in Fig. 2. In the figure, there are two classes: a gold class and a silver class. There are three connections in the gold queue, denoted by N_11 (packet size 1 kbyte), N_12 (packet size 2 kbytes), and N_13 (packet size 4 kbytes). In the silver queue, there are two connections. Let us consider the time moments t_1 and t_0 in Fig. 2. At the moment t_0, the gold class customer's N_11 data (1 kbyte) is moved to the packet scheduler with a processing time of d_0. Next, the packets of connections N_12 and N_13 are moved to the scheduler. Let us observe the time delay \Delta = t_1 - t_0, which passes before the connection N_11 sends the next 1 kbyte packet of data at the time t_1. The delay depends on the number of packets N_1 = 3 as well as the average size E(b_1) of the packets in the gold queue in the following way:

    \mathrm{delay}\ (\mathrm{s/kbyte}) = (1 + 2 + 4) d_0 = 3 \times \frac{1}{3}(1 + 2 + 4) d_0 = N_1 \frac{1}{N_1} \sum_{i=1}^{3} b_{1i} d_0 = N_1 E(b_1) d_0 = 7 d_0,    (2)

where b_{1i} is the packet size of the ith connection in the first (i.e., gold) class and E(b_j) is the average packet size of the jth class.

Fig. 2. Variable size packet scheduler.

In Fig. 2, a weight w_1 = 2/3 is reserved for the gold class. That is, for every 2 kbytes of the gold class, 1 kbyte of the silver class is moved to the output, on the average. This increases the delay of the gold class, and the overall delay (s/bit) will be

    d_i = \frac{N_i E(b_i) d_0}{w_i} \left( = 7 \times \frac{3}{2} d_0 = 10.5 d_0 \right).    (3)

Without loss of generality, the processing time has been scaled to d_0 = 1. Also, for notational simplicity, the packet size E(b_i) has been merged into N_i; in fact, our weight updating algorithm does not depend on the packet size at all. Here the discrete time index t has been dropped for convenience until otherwise stated. The natural constraints for the weights are

    w_i > 0    (4)

and

    \sum_{i=1}^{m} w_i = 1.    (5)

Without loss of generality, only non-empty queues are considered, and therefore w_i \neq 0, i = 1, ..., m. If some weight is w_i = 1, then m = 1, and the only class to be served has the minimum processing time d_0 = 1 if N_i = 1. For each service class, a pricing function

    r_i(d_i) = r_i(N_i/w_i + c_i)    (6)

(euros/min) is non-increasing with respect to the delay d_i. Here c_i(t) = c_i includes insertion delay, transmission delay, etc., and it is assumed here to be constant. In the flat pricing scenario, the pricing function is defined via a maximum delay for each class and queue as a QoS parameter. Fig. 3 illustrates the maximum delays for the three classes to be serviced by the scheduler. The gain factor r_i of class i is measured by the money paid by one customer to the service provider per unit time, e.g., euros/min. Then the pricing function in Eq. (6) reduces to the piecewise flat function

    r_i(d_i) = r_i    (7)

under the constraint

    \frac{N_i}{w_i} \le d_{i,\max}, \quad i = 1, \ldots, m.    (8)

If the constraint is not satisfied, then

    r_i(d_i) = 0.    (9)

Fig. 3. Pricing functions for three QoS classes; the uppermost line is for the Gold class, and the lowest for the Bronze class.
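To make the model concrete, here is a minimal sketch (not from the paper; function and variable names are illustrative) of the delay model of Eq. (1) and the piecewise flat pricing rule of Eqs. (7)–(9):

```python
# Sketch of Eq. (1) and the flat pricing rule of Eqs. (7)-(9).
# All identifiers are illustrative; they do not appear in the paper.

def class_delay(n_connections: int, mean_packet_size: float, weight: float, d0: float = 1.0) -> float:
    """Processing delay of one class in the packet scheduler, Eq. (1)."""
    if weight <= 0.0:
        raise ValueError("weights must satisfy w_i > 0")
    return n_connections * mean_packet_size * d0 / weight

def flat_price(gain: float, delay: float, d_max: float) -> float:
    """Piecewise flat pricing: r_i while the delay stays within d_i,max, else 0 (Eqs. (7)-(9))."""
    return gain if delay <= d_max else 0.0

# Example from Fig. 2: the gold queue holds N_1 = 3 packets of average size
# E(b_1) = 7/3 kbytes; with weight w_1 = 2/3 the delay is 7 * 3/2 * d_0 = 10.5 d_0,
# as in Eq. (3). The gain factor and maximum delay below are arbitrary assumptions.
d = class_delay(3, 7.0 / 3.0, 2.0 / 3.0)   # 10.5
print(d, flat_price(200.0, d, 50.0))       # 10.5 200.0
```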


Here the constants c_i have been dropped for convenience from Eq. (6). When N_i customers are in class (or queue) i, the revenue achieved in that class is

    F_i = N_i r_i    (10)

euros/min. The total revenue paid by the N_i customers in the m classes is

    F = \sum_{i=1}^{m} F_i = \sum_{i=1}^{m} N_i r_i    (11)

under the constraint that the pre-selected maximum delays d_{i,\max} are not exceeded. By using the Lagrangian approach, the revenue can be presented in the form

    F = \sum_{i=1}^{m} N_i r_i + \lambda \left( 1 - \sum_{i=1}^{m} w_i \right),    (12)

where \lambda is the Lagrange multiplier. Because d_i = N_i / w_i,

    w_i = \frac{N_i}{d_i}.    (13)

From Eqs. (12) and (13), the revenue is presented as follows:

    F = \sum_{i=1}^{m} N_i r_i + \lambda \left( 1 - \sum_{i=1}^{m} \frac{N_i}{d_i} \right)    (14)

or

    F = \sum_{i=1}^{m} r_i d_i w_i + \lambda \left( 1 - \sum_{i=1}^{m} w_i \right).    (15)

Optimal weights are obtained from the first derivative:

    \frac{\partial F}{\partial w_i} = r_i d_i - \lambda = 0,    (16)

    \lambda = r_i d_i = \frac{r_i N_i}{w_i},    (17)

    w_i = \frac{r_i N_i}{\lambda}.    (18)

Because \sum_i w_i = 1, then

    w_i = \frac{r_i N_i}{\lambda \sum_l w_l} = \frac{r_i N_i}{\sum_l \lambda w_l} = \frac{r_i N_i}{\sum_l r_l N_l}    (19)

under the constraint (8). From Eqs. (13) and (19) one obtains

    d_i = \frac{1}{r_i} \sum_{l=1}^{m} r_l N_l.    (20)

The interpretation of (19) is obvious:

• The larger the gain factor r_i is, the larger is the corresponding weight w_i.
• The larger the number of connections N_i is, the larger is the corresponding weight w_i.

The algorithm can also be derived by optimizing the number of connections under the constraint (8):

    F = \sum_{i=1}^{m} N_i r_i + \lambda \left( 1 - \sum_{i=1}^{m} \frac{N_i}{d_{i,\max}} \right),    (21)

    \frac{\partial F}{\partial N_i} = r_i - \frac{\lambda}{d_{i,\max}} = r_i - \frac{\lambda w_i}{N_i} = 0,    (22)

    w_i = \frac{r_i N_i}{\lambda}.    (23)

But this is exactly the same solution as (19). By using the optimal weights (and noting that \lambda = \sum_l r_l N_l = F), the revenue F can be expressed as follows:

    F = \sum_{i=1}^{m} N_i r_i = \sum_{i=1}^{m} w_i r_i d_i = \sum_{i=1}^{m} \frac{r_i N_i}{\sum_l r_l N_l} r_i d_i = \frac{1}{F} \sum_{i=1}^{m} r_i^2 d_i N_i    (24)

and F is

    F = \sqrt{\sum_{i=1}^{m} r_i^2 d_i N_i}    (25)

with the constraints

    \sum_{i=1}^{m} \frac{N_i}{d_i} \le 1    (26)

and

    d_i = \frac{1}{r_i} \sum_{l=1}^{m} r_l N_l \le d_{i,\max}, \quad i = 1, \ldots, m.    (27)

From Eqs. (25) and (27) it is seen that the gain factors r_i, the maximum allowed delays d_{i,\max}, as well as the number of connections N_i increase the revenue, which is a plausible result.
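As an illustration of the closed-form solution, the following sketch evaluates the optimal weights of Eq. (19) and the resulting class delays of Eq. (20). The helper names and the example connection counts are assumptions for demonstration only; the gain factors are those of Experiment 1 below.

```python
# Sketch of the closed-form weight update, Eq. (19), and the class delays, Eq. (20).
from typing import List

def optimal_weights(gains: List[float], connections: List[int]) -> List[float]:
    """w_i = r_i N_i / sum_l r_l N_l, Eq. (19); assumes non-empty queues (N_i > 0)."""
    denom = sum(r * n for r, n in zip(gains, connections))
    return [r * n / denom for r, n in zip(gains, connections)]

def class_delays(gains: List[float], connections: List[int]) -> List[float]:
    """d_i = (1 / r_i) * sum_l r_l N_l, Eq. (20), with d_0 scaled to 1."""
    s = sum(r * n for r, n in zip(gains, connections))
    return [s / r for r in gains]

# Gain factors of Experiment 1 and an assumed snapshot of accepted connections.
r = [200.0, 150.0, 100.0]
N = [10, 20, 30]
w = optimal_weights(r, N)   # [0.25, 0.375, 0.375]; sums to 1 by construction
d = class_delays(r, N)      # [40.0, 53.3..., 80.0]; satisfies d_i = N_i / w_i
print(w, d)
```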


The call admission control (CAC) mechanism can be implemented by simple hypothesis testing without assumptions about call or dropping rates. CAC is performed at the connection level, not at the packet level, to avoid too costly computation. Moreover, the weights can also be updated only when a connection appears. Let the state at the moment t be N_i(t), i = 1, \ldots, m. Let the new hypothetical state at the moment t + 1 be \tilde{N}_i(t + 1), i = 1, \ldots, m, when one or several connections appear in some class or classes. In hypothesis testing, the revenue formula (25) is applied as follows:

    F(t) = \sqrt{\sum_{i=1}^{m} r_i^2 d_i N_i(t)},    (28)

    \tilde{F}(t + 1) = \sqrt{\sum_{i=1}^{m} r_i^2 d_i \tilde{N}_i(t + 1)},    (29)

    d_i(t) = \frac{1}{r_i} \sum_{l=1}^{m} r_l N_l(t),    (30)

    \tilde{d}_i(t + 1) = \frac{1}{r_i} \sum_{l=1}^{m} r_l \tilde{N}_l(t + 1).    (31)

If F(t) > \tilde{F}(t + 1) or a maximum delay is exceeded (\tilde{d}_i(t + 1) > d_{i,\max}), then the call is rejected; otherwise it is accepted.
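A minimal sketch of this CAC hypothesis test, reading Eqs. (25) and (28)–(31) directly; the function names and the example state are assumptions, while the gain factors and maximum delays are taken from Experiment 1 below:

```python
# Sketch of the CAC hypothesis test of Eqs. (28)-(31); identifiers are illustrative.
from math import sqrt
from typing import List

def delays(gains: List[float], connections: List[int]) -> List[float]:
    """d_i = (1 / r_i) * sum_l r_l N_l, Eqs. (30)-(31)."""
    s = sum(r * n for r, n in zip(gains, connections))
    return [s / r for r in gains]

def revenue(gains: List[float], connections: List[int]) -> float:
    """F = sqrt(sum_i r_i^2 d_i N_i), Eqs. (25), (28)-(29)."""
    d = delays(gains, connections)
    return sqrt(sum(r * r * di * n for r, di, n in zip(gains, d, connections)))

def accept_call(gains, d_max, N_now, N_candidate) -> bool:
    """Accept iff the hypothetical revenue does not drop and no maximum delay is exceeded."""
    if revenue(gains, N_now) > revenue(gains, N_candidate):
        return False
    d_tilde = delays(gains, N_candidate)
    return all(dt <= dm for dt, dm in zip(d_tilde, d_max))

# Example: Experiment 1 parameters, one new bronze-class call arriving (assumed state).
r, d_max = [200.0, 150.0, 100.0], [50.0, 150.0, 200.0]
N_now, N_new = [3, 5, 8], [3, 5, 9]
print(accept_call(r, d_max, N_now, N_new))   # True for this state
```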

3. Implementation issues

The current Internet assumes the "best-effort" service model. Two basic concepts, Integrated Services (IntServ) and Differentiated Services (DiffServ), have been proposed by the Internet Engineering Task Force (IETF) to implement QoS in packet-switched networks. In the IntServ architecture [22], the QoS requirements are provided on a per-flow basis. This architecture implies the presence of a signalling protocol and requires that all intermediate nodes keep state information. The DiffServ architecture [23], in turn, works with traffic aggregates. It has become the preferred method for providing QoS because the need to exchange signalling data between network nodes is eliminated. Packets are forwarded according to the DiffServ codepoint, which is associated with each traffic aggregate and written into the header of the Internet Protocol (IP). For obvious reasons, this architecture is simpler and scales better in large networks.

Regardless of the chosen architecture, QoS is realized by routing nodes that deploy certain queueing disciplines. Though the specifications of DiffServ and IntServ give several solution examples, it is the responsibility of a service provider to select appropriate queueing mechanisms and their preferred implementations. The set of queueing policies used to guarantee the required QoS has been considered in a number of studies [24]. However, if a queueing discipline has input parameters, then in most cases a static configuration is used. This results in inefficient allocation of resources because it does not take into account the fact that the number of active flows varies all the time. In a centralized management framework, it is a burden for the management entities to collect working characteristics from routing nodes and to update their configuration based on the measured parameters. Thus, the solution is to use an adaptive approach that involves local decisions on usage, including the distribution of bandwidth between traffic aggregates on local links. The adaptive approach can share processing resources more effectively, ensuring the required QoS and improving a provider's performance from the viewpoint of a certain criterion.

We have selected the edge routers of the DiffServ architecture as the implementation target for our adaptive model. In this case, all major adaptive issues will be implemented at the edge of a domain. Unlike the core routers, the edge routers have the per-flow information that enables them to perform classification and policing. Since data regarding each traffic flow is available, it is much more convenient to perform the adaptive resource allocation in this part of a DiffServ domain. In this framework, the core routers can remain intact and are not overburdened with additional adaptive software that could slow the packet forwarding process. Moreover, such an approach fits the original idea of the DiffServ technology, which states that the edge routers perform the sophisticated functions, while the core routers perform only simple forwarding. In the proposed adaptive framework, the key role of the adaptive edge routers is to control the amount of traffic injected into a DiffServ domain. By tracking the number of active data flows and their QoS parameters, the adaptive edge routers can optimally allocate the output bandwidth between different traffic aggregates. Of course, this solution does not diminish the use of other adaptive solutions that could be implemented in the core routers to provide finer resource allocation and to achieve better utilization [25].
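As a hedged illustration only (not part of the paper), such an adaptive edge router could be organized as a simple control loop that reads the per-class flow counts it already keeps for classification and policing, recomputes the weights with the closed-form rule of Eq. (19), and pushes them to the packet scheduler; `read_active_flows` and `apply_wrr_weights` are hypothetical platform hooks:

```python
# Hypothetical edge-router control loop; only the weight formula comes from the paper.
import time

GAINS = {"gold": 200.0, "silver": 150.0, "bronze": 100.0}   # assumed gain factors

def recompute_weights(flows: dict) -> dict:
    """Closed-form update of Eq. (19), w_c = r_c N_c / sum_l r_l N_l, per traffic aggregate."""
    denom = sum(GAINS[c] * n for c, n in flows.items())
    if denom == 0:                       # no active flows: fall back to equal sharing
        return {c: 1.0 / len(flows) for c in flows}
    return {c: GAINS[c] * n / denom for c, n in flows.items()}

def control_loop(read_active_flows, apply_wrr_weights, period_s: float = 1.0):
    """Periodically reallocate output bandwidth between traffic aggregates."""
    while True:
        flows = read_active_flows()      # e.g. {"gold": 12, "silver": 40, "bronze": 90}
        apply_wrr_weights(recompute_weights(flows))
        time.sleep(period_s)
```

The paper itself updates the weights when a connection appears; the periodic loop above is simply one possible way to host the same update on an edge router.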

4. Computational complexity

Assume that there are m service classes. Calculation of the weights and of the CAC equations is performed at the connection level. Evaluation of the numerator of the weights (19) needs m multiplications, and evaluation of the denominator needs m multiplications and m additions. Thus the total number of calculations is O(3m) operations. Usually m is quite small, so the total number of operations in one switch is at most of the order of tens per connection. The conclusion is that the weight updating algorithm is computationally quite simple.

Consider next the CAC mechanism. Calculation of the revenue F as well as of the hypothetical revenue \tilde{F} needs 3m multiplications and m additions. Computation of the delays d_i and \tilde{d}_i needs 2m multiplications and m additions. Thus the total number of computations in the CAC mechanism is O(7m) operations. Again, when m is small, say m = 4, the total number of operations is at most of the order of tens per connection.
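As a worked count under the operation model above (illustrative arithmetic only), for m = 4 classes:

```latex
% Weight update, Eq. (19): m (numerator) + m + m (denominator) operations.
% CAC decision, Eqs. (28)-(31): (3m + m) for F and \tilde F, (2m + m) for d_i and \tilde d_i.
\underbrace{m + (m + m)}_{\text{weight update}} = 3m = 12, \qquad
\underbrace{(3m + m) + (2m + m)}_{\text{CAC decision}} = 7m = 28 .
```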

5. Experiments

In the experiments, the effects of the gain factors r_i and the maximum delays d_{i,\max} on the revenue are demonstrated. In all the experiments, the calls and durations are stochastic; they are Poisson and exponentially distributed, respectively. In addition, the number of classes is m = 3. For the gold, silver, and bronze

classes, the Poisson call parameters are \lambda_1 = 0.30, \lambda_2 = 0.40, and \lambda_3 = 0.50, respectively. The duration parameters ("decay rates") are \mu_1 = 0.010, \mu_2 = 0.010, and \mu_3 = 0.003, where the probability density functions for the duration are

    f_i(t) = \mu_i e^{-\mu_i t}, \quad i = 1, 2, 3, \ t \ge 0.    (32)

The number of unit times in the experiments was T = 2000. Because \lambda_1 < \lambda_2 < \lambda_3, user demands depend on charging in such a way that there are least calls in the gold class and most calls in the bronze class.

Experiment 1: In the first experiment, the three service classes have the gain factors r_1 = 200, r_2 = 150, and r_3 = 100. The maximum delays are d_{1,\max} = 50, d_{2,\max} = 150, and d_{3,\max} = 200. Fig. 4 shows the evolution of the delays d_1(t), d_2(t), and d_3(t) as well as their maximum boundaries as a function of time in the first experiment. It is seen that the delays are always below the maximum values, as guaranteed by the algorithm. Fig. 5 shows the number of calls, Fig. 6 the number of users accepted to the network, and Fig. 7 the revenue.

Fig. 4. Evolution of the delays d_1(t), d_2(t), and d_3(t) as a function of time in the first experiment.

Fig. 5. Evolution of the number of calls N_1(t)_{call}, N_2(t)_{call}, and N_3(t)_{call} as a function of time in the first experiment.

Fig. 6. Evolution of the number of connections N_1(t), N_2(t), and N_3(t) as a function of time in the first experiment.

Fig. 7. Evolution of the revenue as a function of time in the first experiment.

Experiment 2: In the second experiment, the three service classes have the gain factors r_1 = 300, r_2 = 150, and r_3 = 100. The maximum delays are d_{1,\max} = 50, d_{2,\max} = 150, and d_{3,\max} = 200 (Fig. 8). Here r_1 has been selected larger than in the first experiment. Now the mean number of users is larger than in the first experiment, which is seen by comparing Figs. 6 and 9. Also the revenue is larger than in the first experiment, as can be seen by comparing Figs. 7 and 10.

Fig. 8. Evolution of the delays d_1(t), d_2(t), and d_3(t) as a function of time in the second experiment.

Fig. 9. Evolution of the number of connections N_1(t), N_2(t), and N_3(t) as a function of time in the second experiment.

Fig. 10. Evolution of the revenue as a function of time in the second experiment.

Experiment 3: In the third experiment, the three service classes have the gain factors r_1 = 200, r_2 = 150, and r_3 = 100. The maximum delays are d_{1,\max} = 100, d_{2,\max} = 150, and d_{3,\max} = 200. In this experiment, too, the number of users and the revenue are larger than in the first experiment, because d_{1,\max} is selected to be larger than in the first experiment. This experiment shows that the revenue increases even though the best traffic class is delayed more than in Experiment 2. These kinds of results give valuable information on how to tune the model to work optimally under different traffic scenarios, e.g., in situations where the connection arrival rate varies at different time scales, say night, morning, day, and evening. This can also lead to a situation where the customers of the highest traffic classes move to use the resources of the lower traffic classes at night time, because there is enough capacity available and the needed QoS level can thus be achieved at the price of the lower class.

Fig. 11. Evolution of the delays d_1(t), d_2(t), and d_3(t) as a function of time in the third experiment.

Next, we present a summary of our approach as well as of the experiments.

• The proposed weight updating algorithm is computationally inexpensive in our scope of study, when the weights are updated and CAC is performed at the connection level.
• The experiments clearly justify the performance of the algorithm. For example, the revenue curves are positive, and the maximum delays are guaranteed using the weight constraint.
• Some of the statistical and deterministic algorithms presented in the literature assume quite strict a priori information about parameters or statistical behavior, such as call densities, durations or distributions. However, such methods usually are – in addition to being computationally complex – not robust against erroneous assumptions or estimates. On the contrary, our algorithm is deterministic and non-parametric, i.e., it uses only the information about the number of connections, and thus we believe that in practical environments it is a competitive candidate due to its robustness.
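For completeness, a compact simulation sketch in the spirit of Experiment 1: Poisson call arrivals, exponentially distributed holding times as in Eq. (32), delays from Eq. (20) (which presumes the optimal weights of Eq. (19)), and the CAC test of Eqs. (28)–(31). The parameter values follow the text; the discrete-time event loop and the random seed are assumptions about how the published experiments were organized, so the traces will not reproduce the figures exactly.

```python
import numpy as np

r     = np.array([200.0, 150.0, 100.0])   # gain factors (Experiment 1)
d_max = np.array([50.0, 150.0, 200.0])    # maximum delays d_i,max
lam   = np.array([0.30, 0.40, 0.50])      # Poisson call parameters
mu    = np.array([0.010, 0.010, 0.003])   # duration decay rates, Eq. (32)
T     = 2000                              # number of unit times

rng = np.random.default_rng(0)
N = np.zeros(3, dtype=int)                # accepted connections per class
departures = []                           # (departure_time, class_index)
revenue = np.zeros(T)

def delays(conn):
    """d_i = (1/r_i) * sum_l r_l N_l, Eq. (20)."""
    return (r * conn).sum() / r

def F(conn):
    """Revenue of a state, Eqs. (25) and (28)."""
    return np.sqrt((r ** 2 * delays(conn) * conn).sum())

for t in range(T):
    # remove connections whose exponentially distributed holding time has expired
    due = [c for (td, c) in departures if td <= t]
    if due:
        N -= np.bincount(due, minlength=3)
    departures = [(td, c) for (td, c) in departures if td > t]

    # call attempts in each class, each screened by the CAC hypothesis test
    for i, attempts in enumerate(rng.poisson(lam)):
        for _ in range(attempts):
            cand = N.copy()
            cand[i] += 1
            if F(cand) >= F(N) and np.all(delays(cand) <= d_max):
                N = cand
                departures.append((t + rng.exponential(1.0 / mu[i]), i))

    revenue[t] = (r * N).sum()            # flat-pricing revenue at time t

print(revenue[-1], N)                     # final revenue and accepted connections
```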

Fig. 12. Evolution of the number of connections N_1(t), N_2(t), and N_3(t) as a function of time in the third experiment.

Fig. 13. Evolution of the revenue as a function of time in the third experiment.

The general conclusion is that the flat pricing scenario is quite simple, perhaps tempting, and practical (Figs. 11–13).

6. Discussion

In this paper we designed a QoS-aware scheduling and pricing model that takes into account the user's satisfaction (price vs. received QoS) and the optimal use of the limited network resources. The presented solution gives the service provider and the consumers a new way to use and obtain services from the networks. Another issue concerns the model's simplicity for the network operators. As can be seen from the results, the presented model shares the limited network resources in a fair way. Also, the flat pricing scenario operates well, i.e., the obtained total revenue will increase when the optimal values for the different QoS parameters are found. In future work, other than strictly flat pricing models will be studied. We have also started to work with Linux routers, and the goal is to implement the presented algorithm in a real router environment.

References

[1] Gupta A, Stahl DO, Whinston AB. A stochastic equilibrium model of internet pricing. J Econom Dyn Control 1997;21(4–5):697–722.
[2] Odlyzko AM. Paris metro pricing for the internet. Proceedings of the ACM Conference on Electronic Commerce (EC'99), 1999. p. 140–7.
[3] Gibbens R, Mason R, Steinberg R. Internet service classes under competition. IEEE J Sel Areas Commun 2000;18(12):2490–8.
[4] Kelly F. Charging and rate control for elastic traffic. Eur Trans Telecommun 1997;8:33–7.
[5] Kelly FP, Maulloo AM, Tan DKH. Rate control in communication networks: shadow prices, proportional fairness and stability. J Oper Res Soc 1998;49.
[6] Kelly FP. On tariffs, policing and admission control for multiservice networks. Oper Res Lett 1994;15:1–9.
[7] Paschalidis IC, Tsitsiklis JN. Congestion-dependent pricing of network services. IEEE/ACM Trans Networking 2000;8(2):171–83.
[8] Peha JM, Tobagi FA. Cost-based scheduling and dropping algorithms to support integrated services. IEEE Trans Commun 1996;44(2):192–202.
[9] Cocchi R, Shenker S, Estrin D, Zhang L. Pricing in computer networks: motivation, formulation and example. IEEE/ACM Trans Networking 1993;1(6):614–27.
[10] Gibbens RJ, Kelly FP. Resource pricing and the evolution of congestion control. Automatica 1999;35(12):1969–85.
[11] Kelly FP. Notes on effective bandwidths. In: Kelly FP, Zachary S, Ziedins IB, editors. Stochastic networks: theory and applications, vol. 9. London, UK: Oxford University Press; 1996. p. 141–68.
[12] Bertsimas D, Paschalidis ICh, Tsitsiklis JN. On the large deviations behavior of acyclic networks of G/G/1 queues. Ann Appl Probab 1998;8(4):1027–69.
[13] Paschalidis ICh. Class-specific quality of service guarantees in multimedia communication networks. Automatica 1999;35(12):1951–68.
[14] Danielsen K, Weiss M. User control modes and IP allocation. In: Internet economics. Cambridge: MIT Press; 1997.
[15] Dewan S, Mendelson H. User delay costs and internal pricing for a service facility. Manage Sci 1990;36(12):1502–17.
[16] Wang Q, Peha JM, Sirbu M. The design of an optimal pricing scheme for ATM integrated service networks. J Electron Publ, Special Issue on Internet Economics, 1995.
[17] MacKie-Mason JK, Varian HR. Pricing the internet. In: Public access to the internet. Englewood Cliffs, NJ: Prentice-Hall; 1995.


[18] Joutsensalo J, Hämäläinen T. Optimal link allocation and revenue maximization. J Commun Networks 2002;4(2):136–47.
[19] Joutsensalo J, Hämäläinen T, Pääkkönen M, Sayenko A. Adaptive weighted fair scheduling method for channel allocation. Proceedings of IEEE ICC 2003, Anchorage, Alaska, 2003. p. 228–232.
[20] Joutsensalo J, Gomzikov O, Hämäläinen T, Luostarinen K. Enhancing revenue maximization with adaptive WRR. Proceedings of IEEE ISCC 2003, Antalya, Turkey, 2003. p. 175–180.
[21] Joutsensalo J, Hämäläinen T, Pääkkönen M, Sayenko A. QoS- and revenue aware adaptive scheduling algorithm. J Commun Networks 2004;6(1):68–77.
[22] Braden R, Clark D, Shenker S. Integrated services in the internet architecture: an overview. IETF RFC 1633, June 1994.
[23] Blake S, Black D, Carlson M, Davies E, Wang Z, Weiss W. An architecture for differentiated services. IETF RFC 2475, December 1998.
[24] Zhang H. Service disciplines for guaranteed performance service in packet-switching networks. Proc IEEE 1995;83(10):1374–96.
[25] Yi S, Deng X, Kesidis G, Das CR. Providing fairness in DiffServ architecture. In: IEEE Global Telecommunications Conference, vol. 2, November 2002. p. 1435–9.

Jyrki Joutsensalo was born in Kiukainen, Finland, in July 1966. He received the diploma engineer, licentiate of technology, and doctor of technology degrees from Helsinki University of Technology, Espoo, Finland, in 1992, 1993, and 1994, respectively. Currently, he is Professor of Telecommunications at the University of Jyväskylä. His research interests include signal processing for telecommunications, as well as data networks and pricing.


Timo Hämäläinen received his B.Sc. in automation engineering from the Jyväskylä Institute of Technology, Finland, in 1991, and his M.Sc. and Ph.D. degrees in telecommunications from Tampere University of Technology and the University of Jyväskylä, Finland, in 1996 and 2002, respectively. His Ph.D. work studied quality of service and pricing issues in broadband networks. His current research interests include traffic engineering and QoS in wired and wireless networks.

Kari T. Luostarinen received his B.Sc. in computer and communication engineering from the Jyväskylä Institute of Technology, Finland, in 1996, and his M.Sc. and Ph.Lic. degrees in telecommunications from the University of Jyväskylä, Finland, in 1998 and 2004, respectively. His Ph.Lic. work studied quality of service in WLAN and 3G networks. His current research interests include traffic engineering and QoS in wired and wireless networks.

Jarmo Siltanen received his B.Sc. in computer and communication engineering from the Jyväskylä Institute of Technology, Finland, in 1996, and his M.Sc. and Licentiate in Philosophy degrees in telecommunications from the University of Jyväskylä, Finland, in 1998 and 2003, respectively. His Ph.Lic. work studied dynamic resource allocation and quality of service in networks. His current research interests include traffic engineering and QoS in wired and wireless networks.