System dynamics simulation of computer networks: Price-controlled QoS framework


Mathematics and Computers in Simulation 78 (2008) 27–39

Tomaž Turk*
University of Ljubljana, Faculty of Economics, Kardeljeva pl. 17, 1000 Ljubljana, Slovenia

Received 6 September 2006; received in revised form 13 March 2007; accepted 31 May 2007; available online 17 June 2007

Abstract

The dynamics of computer networks was until recently modelled mostly with discrete-event simulations and queuing systems. Modern computer networks are becoming large and complex, and discrete-event simulation schemes are too slow to successfully model their behavior. Researchers and engineers are therefore switching to more novel approaches, such as fluid simulation schemes, where network elements are described as fluid servers which process workload continuously, while the traffic source can be of arbitrary type, including discrete-event and fluid sources. We develop this idea further and propose an even simpler schema, based on the system dynamics simulation approach. In our schema, the network topology and routing information are represented by relatively simple matrix algebra. The schema can be used to model computer systems for various purposes, from topology and capacity design and optimization in everyday practice to more theoretical work such as testing new technology and policy approaches. Besides the formal description of the proposed schema, the paper also gives an example of its adaptation to model the behavior of price-controlled end-to-end QoS provision in computer networks.

© 2007 IMACS. Published by Elsevier B.V. All rights reserved.

Keywords: Computer networks; Fluid simulations; Network topology; System dynamics simulation; Quality of service

1. Introduction

Computer and communication networks are becoming ever more complex in size and speed. In the past, discrete-event simulations and queuing systems were mostly used to describe and design various elements of these networks. Unfortunately, they are becoming too slow to successfully model the behavior of complex, high-speed networks. Researchers have made many proposals to overcome these problems. On the one hand, there are efforts to strengthen discrete-event simulation, for instance through parallel simulation [10]. On the other hand, time-driven system dynamics simulation can be used to model large-scale networks [1]. In this simulation scheme, network elements are modelled as fluid servers which process workload continuously. The data traffic is simulated as a continuous (fluid) flow in discrete time (time is partitioned into fixed-length intervals). In the basic network model of this system dynamics approach, physical network paths between nodes (routers) are described with matrix notation, but the actual order of the network resources used was not outlined. This obstacle was corrected in ref. [6]. Fluid models have deficiencies with the granularity of the network (in comparison to event-driven simulations); one possible solution to this issue, which combines fluid and discrete-event sources of flows, was described in ref. [4].

Tel.: +386 1 5892 400; fax: +386 1 5892 698. E-mail address: [email protected].

0378-4754/$32.00 © 2007 IMACS. Published by Elsevier B.V. All rights reserved. doi:10.1016/j.matcom.2007.05.003


In this paper, we develop these ideas further. Our goal is to develop a relatively simple simulation approach which could be used when designing network topologies for backbones and corporate networks, where end-to-end traffic from particular sources to destinations should be simulated, together with testing appropriate policies for traffic shaping. The system dynamics approach is suitable for this, since its computing complexity grows linearly with the number of network nodes (network size); in discrete-event simulations, the computing complexity grows superlinearly with the network size [6].

In our solution, we focus on the granularity issues of fluid networks. Besides the simple description of the network paths between nodes (in practice, routers and the physical connections between them), the explicit routing information for each end-to-end network flow (from a client to a server) is considered. In this manner, the actual path of an end-to-end data flow is described as a series of links between nodes. The model can continuously simulate each source's traffic and its influence on the network elements. Each link on this path occupies or uses one network connection between network nodes (e.g. fiber, virtual private connection), and network connections aggregate the traffic of several links, arriving from several sources. The information about actual paths (the definition of links) represents routing information and can be changed dynamically during the simulation. This introduces, on the one hand, a possibility to study the behaviour of different routing mechanisms with a fluid network simulation model. On the other hand, with the increased granularity the traffic type of each traffic source can be defined, and different traffic shaping policies can be studied with this kind of network model. In line with our goal, the network topology and routing information is represented by relatively simple matrix algebra, since many modern simulation systems support it in system dynamics simulations (e.g. GoldSim [11], AnyLogic [2]).

The structure of the rest of the paper is as follows: we first develop a general network model using formal notation according to the system dynamics approach. The applicability of the proposed general model is then illustrated by constructing an environment to test a novel price-controlled Quality of Service (QoS) provisioning framework, which exploits the possibility to study different traffic shaping policies with the proposed model. Namely, Internet users are charged mostly on a flat-fee basis, regardless of the network load they introduce to the network, and one way to deal with congestion issues is to pay for actual traffic [3,9,14]. Different QoS models have been proposed, such as the price-controlled best-effort model, which introduces the general idea of usage-sensitive or variable pricing [7]. Other possible models include per-packet or per-volume flat rate pricing, and flat rate pricing dependent on QoS class, to name just a few [12]. In almost all proposed models, the price is established for each network connection or network resource (e.g. routers, links). This is hard to achieve in reality for several reasons, the most important one being the accounting and billing activities.
If we consider a usage-sensitive or variable pricing scheme alone, one of the possible solutions to this price aggregation problem is to include in the user's data stream the information about the highest price the user is prepared to pay per transferred MB (the "bid price") along the whole path. For each hop along the route, the network node which controls data entry onto the connection would subtract the connection price from the price indicated in the data stream. This is repeated along the path the data stream is passing. It can happen that the bid price is too small, so the proposition assumes at least two QoS classes—paid and unpaid traffic. The user pays according to the actual connection price, not according to his bid. The question is, however, whether the proposed schema will work in practice. To establish this, the stability of the system should be examined. If the basic model gives preferable estimates, the model can be used to test different pricing policies.

2. General network model

In the proposed system dynamics simulation scheme, data flows between network nodes are represented in matrix notation. Before the network topology can be constructed we need to define its basic elements:

• Source (s): Data traffic generator, e.g. actual use of a communication service by a user.
• Connection (c): Physical or virtual connection between two nodes, e.g. optical fiber, virtual private connection.
• Link (l): Path between two neighboring nodes; a part of the end-to-end network path the traffic is passing.
• Node (n): Data buffer for the connections.

In the notation below we will use m, o, p and q to represent the total number of sources, connections, links and nodes, respectively, in a given network topology.


Fig. 1. Example of simple network topology with two routed data flows.

Each source introduces some traffic flow onto the network. Data streams flow through the network along the path, which is represented as a series of links. Links share common connections. Each connection has its capacity (bandwidth) and price. An example of a simple network is represented in Fig. 1. Connections c provide the means to establish data flows. The data flow from the source s1 is routed from node n1 to node n3; the route is constructed with the link l1 from node n1 to node n2, the link l2 from node n2 to node n4 and the link l3 from node n4 to node n3. The second source s2 uses the link l4 from node n2 to node n1 and the link l5 from node n1 to node n3, for the traffic to go from node n2 to node n3.

2.1. Network topology

The network topology can be provided as a set of four binary matrices. The matrix elements represent a relationship between two network elements (sources, connections, links or nodes). If elements are related, this is denoted by 1, otherwise the value of the matrix element is 0. The matrices are: the sources to links matrix, $\mathbf{S}_{sl}$, which defines the path for data flows from sources to links:

$$\mathbf{S}_{sl} = \begin{pmatrix} s_{11} & s_{12} & \cdots & s_{1p} \\ s_{21} & s_{22} & \cdots & s_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ s_{m1} & s_{m2} & \cdots & s_{mp} \end{pmatrix}$$

the links to links adjacency matrix, $\mathbf{L}_{ll}$, which defines the path for data flows from link to link:

$$\mathbf{L}_{ll} = \begin{pmatrix} l_{11} & l_{12} & \cdots & l_{1p} \\ l_{21} & l_{22} & \cdots & l_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ l_{p1} & l_{p2} & \cdots & l_{pp} \end{pmatrix}$$

the links to connections matrix, $\mathbf{C}_{lc}$, which defines the way connections are used by links:

$$\mathbf{C}_{lc} = \begin{pmatrix} c_{11} & c_{12} & \cdots & c_{1o} \\ c_{21} & c_{22} & \cdots & c_{2o} \\ \vdots & \vdots & \ddots & \vdots \\ c_{p1} & c_{p2} & \cdots & c_{po} \end{pmatrix}$$


and the links to nodes matrix, $\mathbf{N}_{ln}$, which defines the node that is the origin of each link:

$$\mathbf{N}_{ln} = \begin{pmatrix} n_{11} & n_{12} & \cdots & n_{1q} \\ n_{21} & n_{22} & \cdots & n_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ n_{p1} & n_{p2} & \cdots & n_{pq} \end{pmatrix}$$

The first and the second matrices define the basic traffic paths. Firstly, the traffic from each source is directed to one of the links (this is defined in $\mathbf{S}_{sl}$). Secondly, the traffic from each source continues its flow towards the end node according to the adjacency matrix $\mathbf{L}_{ll}$. The third matrix ($\mathbf{C}_{lc}$) represents connection sharing among links: it defines which links are occupying a particular connection. The last matrix ($\mathbf{N}_{ln}$) represents the way links use network nodes at their end-points. (This information could have been represented in another way, e.g. with a "connections to nodes" matrix.) For instance, the network topology for the traffic from the first source s1 to the node n3 in Fig. 1 can be represented by the following matrix notation:

$$\mathbf{S} = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}; \quad \mathbf{L} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}; \quad \mathbf{C} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}; \quad \mathbf{N} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

The traffic utilizes three links, three connections and four nodes. We can see that the traffic from the source s1 is directed to the first link (as defined in $\mathbf{S}$), and it continues its way by using the second and the third link (as defined in $\mathbf{L}$). The links l1, l2 and l3 are occupying connections c1, c3 and c4, respectively (as defined in $\mathbf{C}$). Similarly, the three links are connected together and are utilizing the nodes n1, n2 and n4 (as defined in $\mathbf{N}$). If we add the second route, for the traffic from source s2 to the node n3, the above description becomes

$$\mathbf{S} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}; \quad \mathbf{L} = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}; \quad \mathbf{C} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}; \quad \mathbf{N} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}$$

It can be seen that the network can be arranged as needed. Normally, connections are fixed, whereas links can be changed dynamically to represent dynamic routing activities in the network (a small code sketch of these matrices is given below). Three characteristics of the data traffic are important for our purpose:

• The nature of the network load the sources are introducing to the network.
• Capacities of connections (bandwidth).
• Capacities of nodes (buffer size).
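As an illustration only, the two-route example above can be written down directly as matrices in code. The following sketch (in Python with NumPy; the variable names and the final consistency check are our own additions, not part of the original model) encodes S, L, C and N for Fig. 1:

```python
import numpy as np

# Two sources, five links, four connections, four nodes (second example above).
S = np.array([[1, 0, 0, 0, 0],   # s1 enters the network on link l1
              [0, 0, 0, 1, 0]])  # s2 enters the network on link l4

L = np.array([[0, 1, 0, 0, 0],   # l1 -> l2
              [0, 0, 1, 0, 0],   # l2 -> l3
              [0, 0, 0, 0, 0],   # l3 ends at n3
              [0, 0, 0, 0, 1],   # l4 -> l5
              [0, 0, 0, 0, 0]])  # l5 ends at n3

C = np.array([[1, 0, 0, 0],      # l1 occupies connection c1
              [0, 0, 1, 0],      # l2 occupies c3
              [0, 0, 0, 1],      # l3 occupies c4
              [1, 0, 0, 0],      # l4 also occupies c1
              [0, 1, 0, 0]])     # l5 occupies c2

N = np.array([[1, 0, 0, 0],      # l1 originates at node n1
              [0, 1, 0, 0],      # l2 originates at n2
              [0, 0, 0, 1],      # l3 originates at n4
              [0, 1, 0, 0],      # l4 originates at n2
              [1, 0, 0, 0]])     # l5 originates at n1

m, p = S.shape                   # number of sources and links
o, q = C.shape[1], N.shape[1]    # number of connections and nodes
assert L.shape == (p, p) and C.shape[0] == p and N.shape[0] == p
```

Dynamic routing would simply amount to rewriting rows of L (and, if connections change, of C and N) during the simulation.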

2.2. Model formalization

The traffic volume the sources are introducing into the network can be defined as a vector:

$$\mathbf{s}_s(t) = [s_1(t), s_2(t), \ldots, s_m(t)]^T \qquad (1)$$

where its elements are measured as the amount of data in a given time interval (e.g. Mbits per second). The elements of vector $\mathbf{s}_s$ can be modelled as Poisson processes, which suffice in most cases. For a more accurate description of network traffic, for instance with the inclusion of the self-similarity phenomenon, other approaches can be used [5].
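For instance, a minimal sketch of such a Poisson traffic generator (the mean rates, the timestep length u and the function name are illustrative assumptions of ours, not values from the paper) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

mean_rate_mbps = np.array([10.0, 10.0])   # mean offered load per source (Mbit/s), illustrative
u = 1.0                                    # simulation timestep length (s)

def traffic_sources(rng, mean_rate_mbps, u):
    """Return s_s(t): the traffic each source offers in one timestep (Mbit/s).

    The per-step volume is drawn as a Poisson count of 1-Mbit units, which is
    one simple way to realise Eq. (1); self-similar models [5] could be
    substituted here without changing the rest of the scheme.
    """
    volume_mbit = rng.poisson(mean_rate_mbps * u)   # Mbit generated in this step
    return volume_mbit / u                          # back to Mbit/s

s_s = traffic_sources(rng, mean_rate_mbps, u)
```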


The capacities of connections can be represented as a vector

$$\mathbf{c}_c(t) = [c_1(t), c_2(t), \ldots, c_o(t)]^T \qquad (2)$$

being measured as the amount of data in a given time interval (e.g. Mbits per second). The vector

$$\mathbf{n}_n(t) = [n_1(t), n_2(t), \ldots, n_q(t)]^T \qquad (3)$$

represents node (buffer) capacities measured as the amount of data (e.g. Mbits). The connection capacities are used by all links which use a given connection. Similarly, the capacities of nodes are used by all links which originate in a given node.

Data flows (d) are represented by three main components, namely inputs into nodes, buffer occupancy and outputs from nodes. Input into a node combines the new traffic from sources together with the traffic which originates from other nodes. For all of the source–link pairs, this can be described as

$$i^d_{sl}(t) = [\mathbf{s}_s(t)\,\mathbf{I}_s^T \otimes \mathbf{I}_{ss}]\,\mathbf{S}_{sl} + o^d_{sl}(t+z) \qquad (4)$$

where $\mathbf{I}_s$ and $\mathbf{I}_{ss}$ are a unit vector and a unit matrix, respectively, $o^d_{sl}$ represents outputs from nodes, described below, and $z$ is the data traffic delay on nodes (constant and equal for each node in our simulation model). The operator $\otimes$ represents matrix multiplication on a term-by-term basis, such that

$$\mathbf{A} \otimes \mathbf{B} = (a_{\iota\lambda})(b_{\iota\lambda}) = (a_{\iota\lambda} b_{\iota\lambda})$$

where $\iota = 1, \ldots, \mu$; $\lambda = 1, \ldots, \nu$ and $\mu$, $\nu$ represent the number of rows and the number of columns, respectively. Similarly, the operator $\oslash$ will represent matrix division on a term-by-term basis in the rest of the paper:

$$\mathbf{A} \oslash \mathbf{B} = \left(\frac{a_{\iota\lambda}}{b_{\iota\lambda}}\right)$$

The occupancy of buffers $\tilde{b}^d_{sl}$ without limited capacities at time $t$ can be described as follows:

$$\tilde{b}^d_{sl}(t) = \int_{t_0}^{t} [i^d_{sl}(s) - o^d_{sl}(s)]\,ds + \tilde{b}^d_{sl}(t_0) \qquad (5)$$

where $i^d_{sl}(s)$ and $o^d_{sl}(s)$ represent inflows and outflows at any time $s$ between the initial time $t_0$ and the current time $t$. If buffers have limited capacities $n_{sl}$ in the next timestep $(t+u)$, the amount of dropped packets can be calculated as

$$d_{sl}(t) = \tilde{b}^d_{sl}(t) - \int_{t_0}^{t} \min[\tilde{b}^d_{sl}(s) - n_{sl}(s+u),\, n_{sl}(s+u)]\,ds \qquad (6)$$

where

$$\int_{t_0}^{t} \min[\tilde{b}^d_{sl}(s) - n_{sl}(s+u),\, n_{sl}(s+u)]\,ds = b^d_{sl}(t) \qquad (7)$$

is the occupancy of buffers with limited capacities. The buffer capacities per source–link pair are not constant in time, but vary according to the amount of traffic from a particular source and other influences, the most important one being the amount of traffic of competing flows. In our simulation, we can estimate the occupancy of buffers by each source at the end of the next timestep $(t+u)$ as

$$b^d_{sl}(t+u) = b^d_{sl}(t) + u[i^d_{sl}(t) - o^d_{sl}(t)] \qquad (8)$$

where $u$ is the length of the simulation timestep. The aggregated occupancy of a particular buffer

$$b^d_n(t+u) = \sum_{s=1}^{m} [b^d_{sl}(t+u)\,\mathbf{N}_{ln}] \qquad (9)$$

should be proportionally distributed for every source–link pair.
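A minimal sketch of this bookkeeping step, assuming the matrices of Section 2.1 are held as NumPy arrays and the routed outputs of the previous step are available as o_routed_prev (the function names are ours); in this sketch the term $[\mathbf{s}_s(t)\,\mathbf{I}_s^T \otimes \mathbf{I}_{ss}]\,\mathbf{S}_{sl}$ of Eq. (4) is implemented as a diagonal matrix built from $\mathbf{s}_s(t)$, which places each source's traffic on its entry link:

```python
import numpy as np

def node_inputs(s_s, S, o_routed_prev):
    """Eq. (4): inflow per source-link pair; diag(s_s) @ S realises the source term."""
    return np.diag(s_s) @ S + o_routed_prev        # shape (m, p)

def buffer_estimate(b_sl, i_sl, o_sl, u):
    """Eq. (8): estimated buffer occupancy per source-link pair at t + u."""
    return b_sl + u * (i_sl - o_sl)

def node_occupancy(b_sl_next, N):
    """Eqs. (9)/(12): aggregated occupancy of each node buffer."""
    return (b_sl_next @ N).sum(axis=0)             # shape (q,)
```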


The traffic will occupy buffers (b) according to the ratio

$$r^{(b)}_{sl}(t) = b^d_{sl}(t+u) \oslash b^d_{s\bullet l\bullet}(t+u) \qquad (10)$$

where

$$b^d_{s\bullet l\bullet}(t+u) = [\mathbf{N}_{ln}\, b^d_n(t+u)\, \mathbf{I}_s]^T \qquad (11)$$

and

$$b^d_n(t+u) = \sum_{s=1}^{m} [b^d_{sl}(t+u)\,\mathbf{N}_{ln}] \qquad (12)$$

If the aggregated occupancy of a particular buffer (12) is less than its capacity according to (3), then the buffer capacities should be distributed proportionally for each source–link pair according to the ratio (10):

$$n_{sl}(t+u) = r^{(b)}_{sl}(t) \otimes b^d_n(t+u) \qquad (13)$$

If this is not true, then the total buffer capacity (3) is proportionally distributed:

$$n_{sl}(t+u) = r^{(b)}_{sl}(t) \otimes (\mathbf{N}_{ln}\, \mathbf{n}_n\, \mathbf{I}_s)^T \qquad (14)$$
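One possible reading of Eqs. (10)–(14) in code, in which the node-level quantities are first mapped back to source–link pairs through $\mathbf{N}_{ln}$ (as in Eq. (11)); the zero-division guard and the float coercion are our own additions:

```python
import numpy as np

def distribute_buffer_capacity(b_sl_next, n_n, N):
    """Per source-link buffer capacities for the next step (Eqs. (10)-(14)).

    b_sl_next : (m, p) projected occupancy per source-link pair, Eq. (8)
    n_n       : (q,)   physical buffer capacity of each node, Eq. (3)
    N         : (p, q) links-to-nodes matrix
    """
    b_sl_next = np.asarray(b_sl_next, dtype=float)
    m = b_sl_next.shape[0]
    b_n = (b_sl_next @ N).sum(axis=0)                 # Eq. (12): occupancy per node
    b_total = np.outer(np.ones(m), N @ b_n)           # Eq. (11): mapped back to (s, l)
    r = np.divide(b_sl_next, b_total,
                  out=np.zeros_like(b_sl_next), where=b_total > 0)   # Eq. (10)

    cap = np.outer(np.ones(m), N @ n_n)               # node capacities mapped to (s, l)
    fits = (N @ (b_n <= n_n).astype(float)) > 0.5     # does the link's node buffer suffice?
    # Eq. (13) when the node buffer is large enough, Eq. (14) otherwise.
    return np.where(fits[None, :], r * b_total, r * cap)
```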

The next step is the calculation of output flows for each source–link pair. The idea is that each connection's capacity is used by the data traffic proportionally for each data source. Firstly, we will need the ratio between the data load on a connection and the total data load on that connection. The quantity of data traffic from each source which will present a load to connections in the next period can be estimated as

$$o^d_{sc}(t) = [b^d_{sl}(t) + u\,i^d_{sl}(t)]\,\mathbf{C}_{lc} \qquad (15)$$

and for each connection:

$$o^d_c(t) = \sum_{s=1}^{m} o^d_{sc} \qquad (16)$$

The ratios between the data load from a particular source on a connection (c) and the total data load on that connection for each source–link pair are then obtained as

$$r^{(c,n)}_{sl}(t) = o^{d,(n)}_{sl}(t) \oslash o^d_{s\bullet l\bullet}(t) \qquad (17)$$

$$r^{(c,s)}_{sl}(t) = o^{d,(s)}_{sl}(t) \oslash o^d_{s\bullet l\bullet}(t) \qquad (18)$$

for data originating indirectly from buffers (n) and directly from inputs (s), where

$$o^d_{s\bullet l\bullet} = [\mathbf{C}_{lc}\, o^d_c(t)\, \mathbf{I}_s]^T \qquad (19)$$

Output data flows are then

$$o^{d,(n)}_{sl}(t) = r^{(c,n)}_{sl} \otimes [\mathbf{C}_{lc}\, c_c(t)\, \mathbf{I}_s]^T \qquad (20)$$

$$o^{d,(s)}_{sl}(t) = r^{(c,s)}_{sl}(t) \otimes [\mathbf{C}_{lc}\, c_c(t)\, \mathbf{I}_s]^T \qquad (21)$$

The matrix $c_c(t)$ represents the connections' capacities and is obtained as

$$c_c(t) = \min\!\left[\frac{o^d_c(t)}{u},\, c_c\right] \qquad (22)$$

for each connection $c = 1, \ldots, o$. Finally, we have output data flows as a sum of data flows originating directly from inputs and indirectly from data buffers:

$$\tilde{o}^d_{sl}(t) = o^{d,(s)}_{sl}(t) + o^{d,(n)}_{sl}(t) \qquad (23)$$


Fig. 2. Conceptual representation of the network simulation model.

Output data flows are then directed to the appropriate network nodes:

$$o^d_{sl}(t) = \tilde{o}^d_{sl}(t)\,\mathbf{L}_{ll} \qquad (24)$$
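A compact sketch of Eqs. (15)–(24), in which the buffered and freshly arriving parts are treated together for brevity (the paper splits them into Eqs. (17), (18), (20) and (21)); the zero-division guard and the function name are our own:

```python
import numpy as np

def output_flows(b_sl, i_sl, u, C, L, c_cap):
    """Flows leaving each (source, link) pair, capped by connection capacity.

    b_sl  : (m, p) buffered data per source-link pair (Mbit)
    i_sl  : (m, p) inflow during the current step, Eq. (4) (Mbit/s)
    c_cap : (o,)   nominal connection capacities, Eq. (2) (Mbit/s)
    Returns (o_sl, o_routed): the outgoing flows and the same flows handed
    to the next link on each path (Eq. (24)).
    """
    demand_sl = np.asarray(b_sl + u * i_sl, dtype=float)  # Eq. (15) operand: data wanting to leave
    o_sc = demand_sl @ C                                  # Eq. (15): load offered per connection
    o_c = o_sc.sum(axis=0)                                # Eq. (16): total load per connection

    total_sl = (C @ o_c)[None, :]                         # Eq. (19): that total, mapped to (s, l)
    r = np.divide(demand_sl, total_sl,
                  out=np.zeros_like(demand_sl), where=total_sl > 0)  # Eqs. (17)-(18), combined

    c_eff = np.minimum(o_c / u, c_cap)                    # Eq. (22): capacity usable this step
    o_sl = r * (C @ c_eff)[None, :]                       # Eqs. (20)-(21) and (23), combined
    o_routed = o_sl @ L                                   # Eq. (24): forward to the next link
    return o_sl, o_routed
```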

If one wishes to include delay calculations on the network nodes in the simulation, this can be established by using delayed values for $t$ in the above equation, possibly with a different value for each link. One simple way to achieve this is to use a constant delay value ($z$ in Eq. (4)).

2.3. Deployment possibilities

A conceptual diagram of the proposed computer network model is shown in Fig. 2, where $i_{sl}$ and $o_{sl}$ represent input and output flows from routers, and $b_{sl}$ is the matrix with buffer occupancy data, for each source–link pair. $\mathbf{S}_{sl}$, $\mathbf{C}_{lc}$ and $\mathbf{N}_{ln}$ are the sources to links matrix, links to connections matrix and links to nodes matrix, respectively. $\mathbf{L}_{ll}$ (links to links matrix) represents the routing behavior of the network. In our formal model, it is represented as a constant, but the model can be further adapted to include dynamic routing activities, simply by changing $\mathbf{L}_{ll}$. The diagram also shows $d_{sl}$ as a collector of dropped packets.

The above general schema can be used in various ways. When designing complex networks one can easily simulate network behavior under different circumstances and different setups, and even include different levels of computer networks. Optimization techniques can be applied, together with stochastic (Monte Carlo) approaches, for instance for solving the problems of optimal routing flows with known end-to-end traffic demand, design problems with multi-layer networks, restoration designs to recover from failures, etc. Stochastic processes are normally used to generate data traffic at its sources. They can also be used for other purposes, such as random failures on parts of the network. In the rest of the paper, we adapt the general network model to test the behavior of price-controlled end-to-end QoS provision in computer networks.

3. Application of the simulation model

One of the issues in the pricing mechanisms is the calculation of the total costs of data transfer, which are basically the sum of costs incurred along the network path, or in other words the costs for data transfer over each node-to-node connection. One possible solution would be to include in the user's data stream the information about the highest price the user is prepared to pay per transferred MB (the "bid price") along the whole path. For each hop along the route, the network node which controls data entry onto the connection would subtract the connection price from the price indicated in the data stream. This would be repeated along the path the data stream is passing. It can happen that the bid price is too small, so the proposition assumes at least two QoS classes—paid and unpaid traffic. The user pays according to the actual connection price, not according to his bid. However, the open question is whether the proposed schema will work in practice. For this, the stability of the system should be examined. If the basic model gives preferable estimates, the model can be used to test different pricing policies.

The traffic which can be paid for will occupy buffers (b) according to the ratio

$$r^{(b,P)}_{sl}(t) = b^{d,(P)}_{sl}(t+u) \oslash b^{d,(P)}_{s\bullet l\bullet}(t+u) \qquad (25)$$


where

$$b^{d,(P)}_{s\bullet l\bullet}(t+u) = [\mathbf{N}_{ln}\, b^{d,(P)}_n(t+u)\, \mathbf{I}_s]^T, \qquad b^{d,(P)}_n(t+u) = \sum_{s=1}^{m} [b^{d,(P)}_{sl}(t+u)\,\mathbf{N}_{ln}] \qquad (26)$$

and

$$b^{d,(P)}_{sl}(t+u) = \begin{cases} b^d_{sl}(t) + u\,[i^d_{sl}(t) - o^d_{sl}(t)], & \text{if } w^{(n)}_{sl}(t) \geq p_{s\bullet l}(t) \\ 0, & \text{otherwise} \end{cases} \qquad (27)$$

for each source–link pair $s = 1, \ldots, m$, $l = 1, \ldots, p$, in this way obtaining the matrix $b^{d,(P)}_{sl}$. In the above equation, $w_{sl}$ represents the willingness to pay (for each source–link pair) and

$$p_{s\bullet l}(t) = [\mathbf{C}_{lc}\, \mathbf{p}_c(t)\, \mathbf{I}_s]^T \qquad (28)$$

represents the prices for network connections. If the aggregated occupancy of a particular buffer by paid traffic (26) is less than its capacity according to (3), then the buffer capacities for paid traffic should be distributed proportionally for each source–link pair according to the ratio (25):

$$n^{(P)}_{sl}(t+u) = r^{(b,P)}_{sl}(t) \otimes b^{d,(P)}_n(t+u) \qquad (29)$$

If this is not true, then the total buffer capacity (3) is proportionally distributed:

$$n^{(P)}_{sl}(t+u) = r^{(b,P)}_{sl}(t) \otimes (\mathbf{N}_{ln}\, \mathbf{n}_n\, \mathbf{I}_s)^T \qquad (30)$$

When the aggregated occupancy of a buffer by paid traffic is less than the buffer capacity, the rest of it can be proportionally distributed to free traffic:

$$n^{(N)}_{sl}(t+u) = r^{(b,N)}_{sl}(t) \otimes [(\mathbf{N}_{ln}\, \mathbf{n}_n\, \mathbf{I}_s)^T - n^{(P)}_{sl}(t+u)] \qquad (31)$$

Here, $r^{(b,N)}_{sl}$ is established in a similar fashion as the ratio $r^{(b,P)}_{sl}$. If the buffer is totally occupied by paid traffic, there is no capacity for free traffic:

$$n^{(N)}_{sl}(t+u) = 0 \qquad (32)$$

Lastly, the buffer capacities for each source–link pair (which we need to estimate the occupancy of buffers with limited capacities) are

$$n_{sl}(t+u) = n^{(P)}_{sl}(t+u) + n^{(N)}_{sl}(t+u) \qquad (33)$$

Output data flows are a combination of the output flows of the two traffic classes, paid and unpaid:

$$o^d_{sl}(t) = o^{d,(P)}_{sl}(t) + o^{d,(N)}_{sl}(t) \qquad (34)$$

To obtain the specific output data flows, we first establish whether the traffic should be regarded as paid or free. Data packets are regarded as paid if their associated willingness to pay for transfer through the next link is equal to or greater than the price of that link. Since the willingness to pay could differ between the traffic which is already buffered (n) and the traffic which is coming through the incoming ports (s), we have two different rules:

$$o^{d,(n,P)}_{sl}(t) = \begin{cases} b^d_{sl}(t), & \text{if } w^{(n)}_{sl} \geq p_{s\bullet l} \\ 0, & \text{otherwise} \end{cases} \qquad (35)$$

and

$$o^{d,(s,P)}_{sl}(t) = \begin{cases} u\,i^d_{sl}(t), & \text{if } w^{(s)}_{sl} \geq p_{s\bullet l} \\ 0, & \text{otherwise} \end{cases} \qquad (36)$$

for each source–link pair $s = 1, \ldots, m$, $l = 1, \ldots, p$, in this way obtaining the matrices $o^{d,(n,P)}_{sl}(t)$ and $o^{d,(s,P)}_{sl}(t)$.
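A sketch of this classification step; the explicit handling of the free remainder is implied by the text rather than written out as an equation, so that part is our assumption:

```python
import numpy as np

def classify_paid(b_sl, i_sl, u, w_n, w_s, p_sl):
    """Split outgoing demand into a paid part (Eqs. (35)-(36)) and a free remainder.

    w_n, w_s : (m, p) willingness to pay of buffered / freshly arriving data (MU/MB)
    p_sl     : (m, p) price of the connection used by each link, Eq. (28)
    """
    o_n_paid = np.where(w_n >= p_sl, b_sl, 0.0)       # Eq. (35): buffered data that can pay
    o_s_paid = np.where(w_s >= p_sl, u * i_sl, 0.0)   # Eq. (36): incoming data that can pay
    o_n_free = b_sl - o_n_paid                        # remainder handled as best-effort
    o_s_free = u * i_sl - o_s_paid
    return o_n_paid, o_s_paid, o_n_free, o_s_free
```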

The next step is the calculation of output flows for each source–link pair. The idea is that each connection's capacity is used by the data traffic proportionally for each data source. Firstly, we will need the ratio between the data load on a connection and the total data load on that connection. The quantity of paid data traffic from each source which will present a load to connections in the next period can be estimated as

$$o^{d,(P)}_{sc}(t) = [o^{d,(n,P)}_{sl}(t) + o^{d,(s,P)}_{sl}(t)]\,\mathbf{C}_{lc} \qquad (37)$$

and for each connection:

$$o^{d,(P)}_c(t) = \sum_{s=1}^{m} o^{d,(P)}_{sc} \qquad (38)$$

The ratios between the data load from a particular source on a connection (c) and the total data load on that connection for each source–link pair are then obtained as

$$r^{(c,n,P)}_{sl}(t) = o^{d,(n,P)}_{sl}(t) \oslash o^d_{s\bullet l\bullet}(t), \qquad r^{(c,s,P)}_{sl}(t) = o^{d,(s,P)}_{sl}(t) \oslash o^d_{s\bullet l\bullet}(t) \qquad (39)$$

for data originating indirectly from buffers and directly from inputs, where

$$o^d_{s\bullet l\bullet} = [\mathbf{C}_{lc}\, o^{d,(P)}_c(t)\, \mathbf{I}_s]^T \qquad (40)$$

Output data flows for paid traffic are then

$$o^{d,(n,P)}_{sl}(t) = r^{(c,n,P)}_{sl}(t) \otimes [\mathbf{C}_{lc}\, c^{(P)}_c(t)\, \mathbf{I}_s]^T, \qquad o^{d,(s,P)}_{sl}(t) = r^{(c,s,P)}_{sl}(t) \otimes [\mathbf{C}_{lc}\, c^{(P)}_c(t)\, \mathbf{I}_s]^T \qquad (41)$$

and

$$o^{d,(P)}_{sl}(t) = o^{d,(n,P)}_{sl}(t) + o^{d,(s,P)}_{sl}(t) \qquad (42)$$

In the above, the capacity of each connection used by paid traffic is

$$c^{(P)}_c(t) = \min\!\left[\frac{o^{d,(P)}_c(t)}{u},\, c_c\right] \qquad (43)$$

where $c = 1, \ldots, o$ (obtaining the matrix $c^{(P)}_c(t)$), meaning that the capacity of a connection is used only by paid traffic, unless there is less (paid) demand for it. The same set of rules applies for free data traffic, the only difference being the available capacity for it:

$$c^{(N)}_c(t) = c_c(t) - c^{(P)}_c(t) \qquad (44)$$
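A small sketch of the capacity split, assuming c_cap holds the nominal capacities of Eq. (2); using the nominal rather than the time-varying capacity in Eq. (44) is our reading:

```python
import numpy as np

def split_capacity(o_c_paid, c_cap, u):
    """Connection capacity seen by paid and by free traffic (Eqs. (43)-(44)).

    o_c_paid : (o,) paid load offered to each connection in this step (Mbit)
    c_cap    : (o,) nominal connection capacities (Mbit/s)
    """
    c_paid = np.minimum(o_c_paid / u, c_cap)   # Eq. (43): paid traffic is served first
    c_free = c_cap - c_paid                    # Eq. (44): free traffic gets the leftover
    return c_paid, c_free
```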

3.1. Value flows

Value flows (v) represent the "other side" of the data traffic. This is the value the users are willing to pay to send or receive data over the network. The willingness to pay for the traffic which enters the network at time $t$ can be represented, for all sources, as a vector:

$$\mathbf{w}_s(t) = [w_1(t), w_2(t), \ldots, w_m(t)]^T \qquad (45)$$


Value flows travel along the network links in a similar fashion as data traffic flows, except that at each node the corresponding value is subtracted from the value flow, for each source separately. This value is the cost of data transfer along the next connection. The cost of data transfer is essentially the product of the amount of data and the current price of data transfer for this connection. If there is not enough value, the data traffic is considered to be free and it does not receive priority service. Similarly as we did for data traffic, each value flow which comes to a node can be described as a sum of direct and indirect flows:

$$i^v_{sl}(t) = \mathbf{w}_s(t)\,\mathbf{S}_{sl} + o^v_{sl}(t-z) \qquad (46)$$

Since at each moment the data buffers on nodes contain some data packets, we should collect information about the value associated with these data:

$$b^v_{sl}(t) = \int_{t_0}^{t} [i^v_{sl}(s) - o^{v,(n)}_{sl}(s) - o^{v,(s)}_{sl}(s) - d^v_{sl}(s)]\,ds + b^v_{sl}(t_0) \qquad (47)$$

where the value in buffers is being reduced by outgoing data packets which have been buffered, by outgoing data packets which are coming through the incoming ports and by dropped packets, all multiplied by the users' willingness to pay denoted in those packets:

$$o^{v,(n)}_{sl}(t) = o^{d,(n,P)}_{sl}(t) \otimes w^{(n)}_{sl}(t), \qquad o^{v,(s)}_{sl}(t) = o^{d,(s,P)}_{sl}(t) \otimes w^{(s)}_{sl}(t), \qquad d^v_{sl}(t) = d_{sl}(t) \otimes w^{(n)}_{sl}(t) \qquad (48)$$

Value can leave a node in two ways. Firstly, it can be (partly) "consumed" by a node, which represents the payment for data traffic along the next network link. Secondly, the remaining value of a data flow should follow the data flow. The value which remains on the nodes (payments) is the current connection price multiplied by the amount of paid traffic which leaves the network node through that connection (output data flows):

$$m_{sl}(t) = o^{d,(P)}_{sl}(t) \otimes p_{s\bullet l}(t) \qquad (49)$$

After the payment has been collected, the remaining value flows together with the data flow. The best way we can calculate this remaining value is to split it into the value which originates from data in buffers (n) and the value originating from inflows (s):

$$\tilde{o}^v_{sl}(t) = o^{v,(n)}_{sl}(t) + o^{v,(s)}_{sl}(t) \qquad (50)$$

For the first we have

$$o^{v,(n)}_{sl}(t) = \begin{cases} o^{d,(n,P)}_{sl}(t)\,[w^{(n)}_{sl}(t-u) - p_{s\bullet l}(t)], & \text{if } w^{(n)}_{sl}(t-u) > p_{s\bullet l}(t) \\ 0, & \text{otherwise} \end{cases} \qquad (51)$$

and for the second

$$o^{v,(s)}_{sl}(t) = \begin{cases} o^{d,(s,P)}_{sl}(t)\,[w^{(s)}_{sl}(t-u) - p_{s\bullet l}(t)], & \text{if } w^{(s)}_{sl}(t-u) > p_{s\bullet l}(t) \\ 0, & \text{otherwise} \end{cases} \qquad (52)$$

both calculated for each source–link pair $s = 1, \ldots, m$, $l = 1, \ldots, p$.

Output value flows are then routed through the network according to

$$o^v_{sl}(t) = \tilde{o}^v_{sl}(t)\,\mathbf{L}_{ll} \qquad (53)$$
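A sketch of the per-step value bookkeeping of Eqs. (49)–(53); the names of the willingness-to-pay matrices from the previous step and the function name are assumptions of ours:

```python
import numpy as np

def value_step(o_n_paid, o_s_paid, w_n_prev, w_s_prev, p_sl, L):
    """Payments kept by the nodes and the value forwarded with the data (Eqs. (49)-(53)).

    o_n_paid, o_s_paid : (m, p) paid data leaving the node (buffered / fresh part)
    w_n_prev, w_s_prev : (m, p) willingness to pay carried by that data in the previous step
    p_sl               : (m, p) connection prices mapped to source-link pairs, Eq. (28)
    """
    payments = (o_n_paid + o_s_paid) * p_sl                             # Eq. (49)
    v_n = np.where(w_n_prev > p_sl, o_n_paid * (w_n_prev - p_sl), 0.0)  # Eq. (51)
    v_s = np.where(w_s_prev > p_sl, o_s_paid * (w_s_prev - p_sl), 0.0)  # Eq. (52)
    v_out = (v_n + v_s) @ L                                             # Eqs. (50) and (53)
    return payments, v_out
```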


Finally, for the data flow originating from buffers (n) we can estimate the users' willingness to pay for each source–link pair as

$$w^{(n)}_{sl}(t) = b^v_{sl}(t) \oslash b^d_{sl}(t) \qquad (54)$$

The corresponding willingness to pay for inflows (s) is

$$w^{(s)}_{sl}(t) = u\,i^v_{sl}(t) \oslash u\,i^d_{sl}(t) \qquad (55)$$
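A minimal sketch of this estimate; the guard against empty buffers and idle inputs is our own, and the timestep length u cancels in Eq. (55):

```python
import numpy as np

def willingness_to_pay(b_v, b_d, i_v, i_d):
    """Willingness to pay (MU/MB) carried by buffered and by incoming data (Eqs. (54)-(55))."""
    w_n = np.divide(b_v, b_d, out=np.zeros_like(b_v), where=b_d > 0)   # Eq. (54)
    w_s = np.divide(i_v, i_d, out=np.zeros_like(i_v), where=i_d > 0)   # Eq. (55), u cancels
    return w_n, w_s
```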

3.2. Measurements

Since the dynamic pricing mechanism depends on traffic measurements, these can be incorporated into our simulation model relatively easily. The dynamic price mechanism should be sensitive to the occupancy of the connections, so we measure the average connection occupancy of paid and free traffic:

$$\bar{o}^d_c(t-x, t) = \bar{o}^{d,(P)}_c(t-x, t) + \bar{o}^{d,(N)}_c(t-x, t) \qquad (56)$$

the average values being calculated for the time interval $[t-x, t]$. Different policy scenarios lead to different measurement strategies. In this paper, we limit the structure of the measurements to the dimensions of paid/free traffic and of inflow and buffer traffic:

$$\bar{o}^{d,(P)}_c(t-x, t) = \bar{o}^{d,(n,P)}_c(t-x, t) + \bar{o}^{d,(s,P)}_c(t-x, t), \qquad \bar{o}^{d,(N)}_c(t-x, t) = \bar{o}^{d,(n,N)}_c(t-x, t) + \bar{o}^{d,(s,N)}_c(t-x, t) \qquad (57)$$

The average values are obtained from the values of $o^{d,(n,P)}_{sl}(t)$, $o^{d,(s,P)}_{sl}(t)$, $o^{d,(n,N)}_{sl}(t)$ and $o^{d,(s,N)}_{sl}(t)$.

3.3. Pricing

The price of the traffic via each network connection is dynamically recalculated as

$$\mathbf{p}_c(t) = \mathbf{p}_c(t-u) \otimes \{\mathbf{e}_c[c^{(P)}_c(t) + \bar{o}^d_c(t)] + c^{(P)}_c(t) - \bar{o}^d_c(t)\} \oslash \{\mathbf{e}_c[c^{(P)}_c(t) + \bar{o}^d_c(t)] - c^{(P)}_c(t) + \bar{o}^d_c(t)\} \qquad (58)$$

where $\mathbf{e}_c$ is a vector of price elasticities for each network connection and $c^{(P)}_c(t)$ is the connection capacity available for paid traffic, according to some policy (described below). The elasticity can be estimated by employing the methodology further developed in ref. [13].
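Eq. (58) is an arc-elasticity update: the price is scaled so that, under the measured elasticity, demand would move from the observed average towards the capacity reserved for paid traffic. A sketch of this update (the guard for a zero denominator is our own addition):

```python
import numpy as np

def update_prices(p_prev, e_c, c_paid, o_avg):
    """Arc-elasticity price update per connection (Eq. (58)).

    p_prev : (o,) prices in the previous step (MU/MB)
    e_c    : (o,) price elasticities of demand (typically negative)
    c_paid : (o,) capacity available to paid traffic
    o_avg  : (o,) measured average demand over [t - x, t], Eq. (56)
    """
    p_prev = np.asarray(p_prev, dtype=float)
    num = e_c * (c_paid + o_avg) + (c_paid - o_avg)
    den = e_c * (c_paid + o_avg) - (c_paid - o_avg)
    # Keep the previous price where the denominator vanishes.
    return np.divide(p_prev * num, den, out=p_prev.copy(), where=den != 0)
```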

3.4. Simulation results

Figs. 3–6 show the results from running one realization, where the connection c1 (see Fig. 1) is shared by 20 sources. Each source introduces 10 Mbps of unpaid traffic on average at peak time, that is, in the time interval 1500–2000 s.

Fig. 3. Average demand for traffic over c1 .


Fig. 4. Cumulative amount of dropped packets for s1 and s2 .

Fig. 5. Traffic from source s1 at the end of connection c1 , before entering node n2 .

Fig. 6. Traffic from source s2 at the end of connection c1 , before entering node n2 .


The network load is lower before and after this period. The connection capacity is 150 Mbps, so the offered load is over its limit (the network is in a congestion state). The average demand (averaging period 10 s) for traffic over c1 is shown in Fig. 3. The first user (source s1) decides to pay for his traffic and sets his bid price at 80 MU/MB (monetary units per megabyte). As a result, the price for using this connection rises from 0 to 0.01 MU/MB and stays at that level for the entire duration of the simulation. In Fig. 4, the cumulative amount of dropped packets is shown for two users. The first one is paying for his traffic (s1), while his colleague (s2) is using the network in a best-effort manner. Figs. 5 and 6 show the traffic from sources s1 and s2 (respectively) at the end of connection c1. We can see that the traffic from s1 is not obstructed by the peak load on connection c1, unlike the traffic from s2.

4. Conclusion

We have shown in this paper that the system dynamics approach can be used to model the complex behavior of modern computer networks, and that granularity issues can be solved with the inclusion of detailed routing information. In our model, matrix notation is used to represent the topology of the network together with its routing characteristics, and this is the main advantage of our approach. Besides this, the modelling of systems where traffic characteristics are important (like priority class) is possible, as we have shown in the example, where we tested the behavior of price-controlled end-to-end QoS provision in computer networks. The development of a new simulation model for decision-making purposes (e.g. when deciding about investments into a particular network topology) with this kind of approach is a straightforward and simple process, since only the topology of the network and the routing behavior need to be described, by defining the matrices L, C and N. It can be used when designing complex networks, together with optimization techniques and stochastic (Monte Carlo) modelling. However, if the model is to be used in analyses where detailed data packet attributes should be tracked (for each data packet), the model should be enriched with discrete-event servers. There is also a possibility to combine the system dynamics approach with agent-based simulations, where detailed characteristics of network elements are described as properties of object instances, and different network element types are defined as object classes.

Acknowledgement

The research presented in this paper was funded by the Slovenian Research Agency (ARRS) (ref. no. P2-0037).

References

[1] Y. Anlu, G. Wei-Bo, Time-driven fluid simulation for high-speed networks, IEEE Transactions on Information Theory 45 (1999) 1588.
[2] A. Borshchev, A. Filippov, AnyLogic—multi-paradigm simulation for business, engineering and research, in: The 6th IIE Annual Simulation Solutions Conference, Orlando, Florida, USA, March 15–16, 2004.
[3] G. Davies, M. Hardt, F. Kelly, Come the revolution—network dimensioning, service costing and pricing in a packet switched environment, Telecommunications Policy 28 (2004) 391–412.
[4] Y. Gu, Y. Liu, D. Towsley, On integrating fluid models with packet simulation, in: Proceedings of IEEE Conference on Computer and Communications, Hong Kong, China, 2004.
[5] I.W.C. Lee, A.O. Fapojuwo, Stochastic processes for computer network traffic modeling, Computer Communications 29 (2005) 1–23.
[6] Y. Liu, F.L. Presti, V. Misra, D.F. Towsley, Y. Gu, Scalable fluid models and simulations for large-scale IP networks, ACM Transactions on Modeling and Computer Simulation 14 (2004) 305–324.
[7] J.K. MacKie-Mason, A Smart Market for Resource Reservation in a Multiple Quality of Service Information Network, University of Michigan, Ann Arbor, 1997.
[9] T.T. Nguyen, G.J. Armitage, Evaluating Internet pricing schemes: a three-dimensional visual model, ETRI Journal 27 (2005) 64–74.
[10] D. Nicol, P. Heidelberger, Parallel execution for serial simulators, ACM Transactions on Modeling and Computer Simulation 6 (1996) 210–242.
[11] D.M. Rizzo, et al., The comparison of four dynamic systems-based software packages: translation and sensitivity analysis, Environmental Modelling & Software 21 (2006) 1491–1502.
[12] F. Sallabi, A. Karmouch, K. Shuaib, Design implementation of a network domain agency for scaleable QoS in the Internet, Computer Communications 28 (2005) 12–24.
[13] T. Turk, B. Jerman-Blažič, Users' responsiveness in the price-controlled best-effort QoS model, Computer Communications 24 (2001) 1637–1647.
[14] M. Yuksel, S. Kalyanaraman, Distributed dynamic capacity contracting: an overlay congestion pricing framework, Computer Communications 26 (2003) 1484–1503.