Stochastic performance bounds by state space reduction


Performance Evaluation 36–37 (1999) 1–17 www.elsevier.com/locate/peva

Stochastic performance bounds by state space reduction

Nihal Pekergin a,b,∗,1

a PRiSM, Université de Versailles Saint-Quentin, 45 Avenue des Etats Unis, 78035 Versailles Cedex, France
b CERMSEM, Université de Paris I, 106-112 Boulevard de l'Hôpital, 75647 Paris Cedex 13, France

Abstract

In this work, we present a methodology to derive stochastic bounds on discrete-time Markov chains. It is well known that the state space explosion problem of Markovian models may make them numerically intractable. We propose to evaluate bounding models with reduced-size state spaces, in order to be able to analyze the considered systems for larger values of parameters. Moreover, these bounding models are comparable, in the sense of the sample-path (strong) ordering, with the underlying model. Obviously, this method does not provide exact values; however, it has the following advantages: the errors are stochastically bounded, and it is suitable to analyze transient behaviors as well as stationary ones. We present how this methodology may be applied to evaluate cell loss rates in ATM switches. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Discrete-time Markov chains; Stochastic comparison; Sample-path stochastic ordering; ATM networks

1. Introduction

For recent computer and communication systems, the interaction of different objects is very complex, and the classical methods of performance evaluation are not adequate to model this complexity. Hence, in the performance evaluation area, it is a very important issue to develop new mathematical methods and software techniques in order to be able to analyze these complex systems. In this paper, we consider Asynchronous Transfer Mode (ATM) networks [12], already widespread and in full development. In the ATM switching technique, information is transmitted using short, fixed-size (53 bytes) packets, called cells. Cells from different sources are carried through a multiplexer and asynchronously transmitted through the network. The fixed cell size reduces the variance of delay, making these networks suitable for integrated traffic of voice, video, and data. Obviously, the nature of these traffics, and the required quality of service (QoS) for them, are quite different. Hence, we can state the following phenomena which make the underlying performance models complex: deterministic service distribution, the bursty nature of traffic, and the order of magnitude of the considered performance measures

E-mail: [email protected] This work is partially supported by a grant from CNET France-Telecom.

0166-5316/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0166-5316(99)00026-7


(loss rates of $10^{-6}$, $10^{-10}$). The performance evaluation studies of ATM networks make analysis methods in discrete time important [17]. On the other hand, in performance evaluation studies, it is usual to consider Markovian models in order to be able to analyze systems quantitatively. However, because of the state space explosion problem, these models are numerically feasible only for small values of the parameters determining the state space size. Hence, it is not always possible to evaluate real systems, which generally have state spaces of considerable size. In this paper, we propose a methodology which consists of evaluating simpler models to derive stochastic bounds on the complex underlying model. In many areas of applied probability, stochastic comparison methods are largely used [13,14]. There are several stochastic order relations, and their application areas and the methods used to prove their existence are slightly different. In this work, we use the best-known one, which at the same time imposes the strongest constraints, called the strong stochastic ordering or the stochastic dominance relation. This relation, generally denoted by $\le_{st}$ or $\le_d$, has a sample-path property that makes it easy to apply. Obviously, the results computed from bounding models are not exact. However, since we obtain bounds, we will be able to predict whether the considered system performs better than the upper bounding model, and worse than the lower bounding one. In many cases, the bounds provide enough information on how well a system is functioning, or on whether some predefined requirements are guaranteed. Moreover, the stochastic comparison allows us to have information on the distribution of the considered parameters, not only on their average values. The other important advantage of this methodology is the fact that it can be applied to evaluate transient as well as stationary behavior of the underlying models.
The main goal of the methodology presented in this paper is to find Markovian bounding models on reduced state spaces, in order to avoid the state space explosion problem. It is also possible to construct models having some specific properties allowing one to apply special methods such as product form or matrix-geometric solutions [8,11]. In this work, we apply the first approach to evaluate the considered complex systems by means of bounding systems defined on a reduced state space. The state space reduction approach to obtain stochastic bounds has been used by different authors. In [2], the authors propose to analyze the evolution of functionals describing the performance measures instead of the Markov chain itself. Therefore, the state space of the discrete Markov chain model is mapped to $\{1, \ldots, N\}$, defining the considered performance measures. They present an algorithm to derive optimal upper and lower bounding chains. This method is quite general, but usually it does not provide accurate results. In other words, evaluating the model on the state space of performance measures results in losing a lot of information on the dynamics of the considered system. In [16], stochastic bounds on aggregated chains are provided. Intuitively, the state space reduction approach consists in representing the states having high probabilities and ignoring the ones with low probabilities. This reduction procedure is generally possible because, in models with a large number of states, the probabilities are not distributed uniformly, but are rather concentrated on a subset of states. However, different reduction procedures may result in different tightness of the bounds. The first step of the methodology presented in this work is to define the state space of comparison. The images of the original and the bounding models are stochastically compared on this space.
Note that the bounding systems may be constructed by applying aggregation procedures on the original one, or by imposing constraints to define different bounding models. We present an overview of different proof techniques and their applications. It is obvious that finding bounding models is a heuristic procedure,


and we try to give some intuitive ideas to derive bounding models. Moreover, the quality of the bounds depends largely on the underlying bounding models. This paper is organized as follows: in Section 2, we give a brief introduction to the sample-path stochastic ordering. Section 3 is devoted to the proposed methodology. In Section 4, we give some application examples of this methodology, and finally we conclude in Section 5.

2. Sample-path stochastic ordering

In this section, we give only the basic definitions and theorems of sample-path ordering that will be used in this paper, to make it self-contained. We refer to the book of Stoyan [15] for an excellent survey of stochastic bounding techniques applied in queueing theory, and to the books on stochastic bounding techniques [14] for further information. First, let us give the definition of the sample-path stochastic comparison of two random variables X and Y defined on a totally ordered space S (a subset of $\mathbb{R}$ or $\mathbb{N}$), since it is the most intuitive one.

Definition 1. X is said to be less than Y in the sense of the sample-path (strong) ordering ($X \le_{st} Y$) if and only if
$$X \le_{st} Y \iff \Pr(X > a) \le \Pr(Y > a) \quad \forall a \in S.$$

In other terms, we compare the probability distribution functions of X and Y: it is more probable for Y to take larger values than for X. Moreover, $X =_{st} Y$ means that X and Y have the same distribution. The state representation vectors of complex systems are generally multidimensional, thus the state spaces may not be totally ordered. In such cases, we must first choose the order relation on this space, which must be reflexive and transitive but not necessarily anti-symmetric. In the sequel, we denote by $\preceq$ the preorder or the partial order relation on the state space. The stochastic order associated with this vector ordering will then be denoted by $\preceq_{st}$. The generic definition of a stochastic order is given by means of a class of functions. The strong stochastic ordering is associated with the increasing functions. We now give the generic definition in the general case: the random variables are defined on a space S, endowed with an order relation $\preceq$ (preorder or partial order):

Definition 2. $X \preceq_{st} Y \iff E f(X) \le E f(Y)$ for every $\preceq$-increasing function $f : S \to \mathbb{R}$, whenever the expectations exist. f is $\preceq$-increasing if and only if $\forall x, y \in S$, $x \preceq y \Rightarrow f(x) \le f(y)$.

We state only the sample-path and coupling properties of the strong stochastic ordering that will be applied to demonstrate the existence of the stochastic comparison. The general case (Strassen's theorem) can be found in (theorem 1 of [9]), [3].

Theorem 3. The following are equivalent:
• $X \preceq_{st} Y$,
• sample-path: there exist random variables $\bar{X}$, $\bar{Y}$ defined on the same space, such that
  – $\bar{X} =_{st} X$ and $\bar{Y} =_{st} Y$,
  – $\bar{X} \preceq \bar{Y}$ almost surely ($\Pr(\bar{X} \preceq \bar{Y}) = 1$),
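For random variables on a finite, totally ordered space, Definition 1 can be checked directly by comparing tail probabilities. The following is a minimal sketch (the two distributions are illustrative, not from the paper):

```python
import numpy as np

def st_leq(p, q):
    """Check X <=_st Y (Definition 1) for two distributions p, q
    given on the totally ordered space {0, ..., n}: every tail
    probability Pr(X > a) must be at most Pr(Y > a)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    # tail[i] = Pr(X >= i), obtained by reversed cumulative sums.
    tail_p = np.cumsum(p[::-1])[::-1]
    tail_q = np.cumsum(q[::-1])[::-1]
    # Pr(X > a) is the tail starting at a + 1, for a = 0..n-1.
    return bool(np.all(tail_p[1:] <= tail_q[1:] + 1e-12))

p = [0.5, 0.3, 0.2]   # X
q = [0.2, 0.3, 0.5]   # Y puts more mass on larger values
print(st_leq(p, q))   # True: X <=_st Y
print(st_leq(q, p))   # False
```

The tolerance guards against floating-point round-off in the cumulative sums.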


• coupling: there exists a probability measure $\lambda$ defined on $S \times S$ with support in $K = \{(x, y) \in S \times S : x \preceq y\}$, whose first marginal distribution equals the distribution of X, and whose second marginal distribution equals that of Y.

As stated above, the main goal of this work is to find bounding systems on a reduced state space. Therefore, the state space of the considered system and those of the bounding ones are not the same. In such cases, we compare them on a common state space. To do this, we first project the underlying spaces into this common one, and then compare the images on this space. This type of comparison is called comparison of images or comparison of state functions [3]. In the sequel, since our main goal is comparing Markov chains, we assume that the considered state spaces are discrete.

Definition 4. Let X (resp. Y) be a random variable which takes values on a discrete, countable space E (resp. F), and let G be a discrete, countable state space endowed with a preorder $\preceq$; let $\alpha : E \to G$ (resp. $\beta : F \to G$) be a many-to-one mapping. The image of X on G is less in the sense of $\preceq_{st}$ than the image of Y on G if and only if $\alpha(X) \preceq_{st} \beta(Y)$.

The comparison of the images may be defined more intuitively by representing the projection mappings by matrices. Let $M_\alpha$, $M_\beta$ denote the matrices representing the underlying mappings, and let the probability vectors p, q represent respectively the random variables X, Y. If
$$M_\alpha[i, j] = \begin{cases} 1 & \text{if } \alpha(i) = j, \\ 0 & \text{otherwise,} \end{cases} \qquad i \in E \text{ and } j \in G,$$
then
$$\alpha(X) \preceq_{st} \beta(Y) \iff p M_\alpha \preceq_{st} q M_\beta. \tag{1}$$
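A mapping matrix of this form is easy to build and apply numerically. The sketch below uses a made-up aggregation map (grouping pairs of states) purely for illustration:

```python
import numpy as np

def mapping_matrix(alpha, n_E, n_G):
    """Build the 0/1 matrix M_alpha with M[i, j] = 1 iff alpha(i) = j."""
    M = np.zeros((n_E, n_G))
    for i in range(n_E):
        M[i, alpha(i)] = 1.0
    return M

alpha = lambda i: i // 2          # hypothetical many-to-one map E -> G
M = mapping_matrix(alpha, 6, 3)   # E = {0,...,5}, G = {0,1,2}
p = np.array([0.1, 0.2, 0.3, 0.2, 0.1, 0.1])  # distribution of X on E
image = p @ M                     # distribution of alpha(X) on G
print(image)                      # mass of each group of states
```

Each row of $M_\alpha$ has exactly one 1, so $p M_\alpha$ is again a probability vector: the image distribution on G.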

Let us now assume that the comparison state space G is $\{1, \ldots, n\}$; then the comparison of images (Eq. (1)) is defined by partial sums:
$$\sum_{k=i}^{n} \sum_{j=1}^{n} p[j] \, M_\alpha[j, k] \;\le\; \sum_{k=i}^{n} \sum_{j=1}^{n} q[j] \, M_\beta[j, k] \qquad \forall i = n, \ldots, 1.$$
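The partial-sum test above translates directly into code: project both distributions, then compare all tail sums of the images. The mappings and distributions below are illustrative, not taken from the paper:

```python
import numpy as np

def images_st_leq(p, M_a, q, M_b):
    """Check alpha(X) <=_st beta(Y) via the partial sums of Eq. (1):
    every tail sum of p @ M_alpha must be at most that of q @ M_beta."""
    img_p, img_q = np.asarray(p) @ M_a, np.asarray(q) @ M_b
    # tails[i] = sum over k = i..n of the image distribution.
    tails_p = np.cumsum(img_p[::-1])[::-1]
    tails_q = np.cumsum(img_q[::-1])[::-1]
    return bool(np.all(tails_p <= tails_q + 1e-12))

M_a = np.array([[1, 0], [1, 0], [0, 1]], float)  # alpha: E -> G
M_b = np.array([[1, 0], [0, 1], [0, 1]], float)  # beta:  F -> G
p = np.array([0.4, 0.4, 0.2])
q = np.array([0.4, 0.3, 0.3])
print(images_st_leq(p, M_a, q, M_b))  # True for this example
```

Here the image of p on G is (0.8, 0.2) and that of q is (0.4, 0.6), so every tail sum of the first is dominated by the second.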

Obviously, the stochastic comparison of random variables extends to the comparison of stochastic processes. There are two definitions: one of them corresponds to the comparison of one-dimensional increasing functionals, while the other is the comparison of multidimensional functionals. We give both definitions in the context of Markov chains; nevertheless, they are more general. Let $\{X(t), t \in T\}$ and $\{Y(t), t \in T\}$ be two Markov chains with discrete state space S (the time parameter space may be discrete, $T = \mathbb{N}^+$, or continuous, $T = \mathbb{R}^+$).

Definition 5. $\{X(t), t \in T\}$ is said to be less than $\{Y(t), t \in T\}$ with respect to $\preceq_{st}$ ($\{X(t)\} \preceq_{st} \{Y(t)\}$) if and only if
$$X(t) \preceq_{st} Y(t) \qquad \forall t \in T,$$


which is equivalent to
$$E f(X(t)) \le E f(Y(t)) \qquad \forall t \in T$$
for every $\preceq$-increasing functional f, whenever the expectations exist.

The second definition, which will be called strict comparison [15], is stronger than the previous one, since the second implies the first.

Definition 6. $\{X(t), t \in T\}$ is said to be strictly less than $\{Y(t), t \in T\}$ with respect to $\le_{st}$ if, for every m and every sequence of instants $t_1 < t_2 < \cdots < t_m$,
$$(X(t_1), X(t_2), \ldots, X(t_m)) \le_{st} (Y(t_1), Y(t_2), \ldots, Y(t_m)),$$
where $\le$ denotes the component-wise vector ordering. Equivalently, if and only if
$$E f(\{X(t), t \in T\}) \le E f(\{Y(t), t \in T\})$$
for every $\preceq$-increasing functional f, whenever the expectations exist. (A functional f is called increasing if $f(\{x(t), t \in T\}) \le f(\{y(t), t \in T\})$ whenever $x(t) \preceq y(t)$, $t \in T$.)

In this work, since we are interested in bounding the functionals of a Markov chain at each instant t, we apply the first definition to compare Markov chains. Besides, Markov chains may be compared by means of their sample paths or by means of coupling, as in the case of random variables. We now state the following propositions, which are the extension of Theorem 3 to the case of Markov chains. In fact, in these propositions the comparison of Markov chains is in the sense of the strict comparison (Definition 6), so the comparison according to Definition 5 is also satisfied [14].

Proposition 7 (sample-path property). $\{X(t), t \in T\} \preceq_{st} \{Y(t), t \in T\}$ if and only if there exist two Markov chains $\{\bar{X}(t), t \in T\}$ and $\{\bar{Y}(t), t \in T\}$, defined on the same probability space, such that
• $\{\bar{X}(t), t \in T\} =_{st} \{X(t), t \in T\}$ and $\{\bar{Y}(t), t \in T\} =_{st} \{Y(t), t \in T\}$,
• $\Pr(\bar{X}(t) \preceq \bar{Y}(t)) = 1 \quad \forall t \in T$.

The coupling property for time-homogeneous Markov chains may be defined in terms of probability transition matrices [9,15]. Let P (resp. Q) be the probability transition matrix of the Markov chain $\{X(t), t \in T\}$ (resp. $\{Y(t), t \in T\}$), and let $P[x, \cdot]$ be the row probability vector corresponding to state x in P.

Proposition 8. $\{X(t), t \in T\} \preceq_{st} \{Y(t), t \in T\}$ if and only if
• $X(0) \preceq_{st} Y(0)$,
• $\forall x, y \mid x \preceq y$, $P[x, \cdot] \preceq_{st} Q[y, \cdot]$.

Moreover, it has been shown that the monotonicity and the comparability of the probability transition matrices yield sufficient conditions for the comparison of Markov chains under different stochastic orderings [10,11,15]. We state here the case of the strong stochastic ordering.

Proposition 9. $\{X(t), t \in T\} \preceq_{st} \{Y(t), t \in T\}$ if
• $X(0) \preceq_{st} Y(0)$,


• monotonicity of at least one of the transition matrices:
$$\forall x, y \mid x \preceq y, \qquad \text{either } P[x, \cdot] \preceq_{st} P[y, \cdot] \text{ or } Q[x, \cdot] \preceq_{st} Q[y, \cdot];$$
• comparability of the transition matrices:
$$\forall x, \qquad P[x, \cdot] \preceq_{st} Q[x, \cdot].$$

Notice that we present here the case of the comparison of Markov chains; the extension to the comparison of images is given in the following section.
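On a totally ordered state space, the two sufficient conditions of Proposition 9 reduce to row-wise tail-sum comparisons, and can be checked mechanically. The following sketch uses small, made-up $3 \times 3$ transition matrices:

```python
import numpy as np

def row_st_leq(p_row, q_row):
    """Row-wise <=_st on {0,...,n}: compare all tail sums."""
    tp = np.cumsum(p_row[::-1])[::-1]
    tq = np.cumsum(q_row[::-1])[::-1]
    return np.all(tp <= tq + 1e-12)

def st_monotone(P):
    """Monotonicity: rows must increase in <=_st with the state index."""
    return all(row_st_leq(P[x], P[x + 1]) for x in range(len(P) - 1))

def comparable(P, Q):
    """Comparability: P[x,:] <=_st Q[x,:] for every state x."""
    return all(row_st_leq(P[x], Q[x]) for x in range(len(P)))

P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.4, 0.5]])
Q = np.array([[0.4, 0.4, 0.2],
              [0.2, 0.4, 0.4],
              [0.1, 0.3, 0.6]])
print(st_monotone(P), comparable(P, Q))  # both hold for this pair
```

Since P is st-monotone and its rows are dominated by those of Q, Proposition 9 yields that the chain of P is st-dominated by the chain of Q, for comparable initial states.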

3. Methodology

In this section, we explain the proposed methodology, which is composed of two main steps: (1) the choice of the state space of comparison and of the order relation on this space; (2) finding the bounding models, whose images on the comparison space are comparable, in the sense of the sample-path ordering, with that of the underlying model. In the sequel, we denote by S the state space of comparison. Recall that S must be countable, discrete, and endowed with a preorder or partial order relation, $\preceq$. In this space of comparison, we have a compact representation of the system which is directly related to the computation of performance measures. It means that on the space S, instead of having an exact representation of the underlying system, we try to represent the evolution of the underlying performance measures. First of all, the considered performance measures must be computable on the state space S. Moreover, they must be defined as $\preceq$-increasing functions of the state: let $f : S \to \mathbb{R}$ be the function describing the considered performance measure on S, and let s1, s2 be two states. If these states are comparable, say $s1 \preceq s2$, then the value associated with state s1 is less than the value associated with state s2:
$$\forall s1, s2 \in S, \qquad \text{if } s1 \preceq s2 \text{ then } f(s1) \le f(s2).$$

Obviously, the reduced state space is a subset of the original one, so it is also possible to project it into S. Therefore, when the state space S is determined, the many-to-one mappings must be defined in order to project the original state space and the reduced spaces of the bounding models into S. Finding the bounding models on S constitutes the most important point of the methodology, since the tightness of the bounds depends largely on this choice. Indeed, the tightness of the bounds depends on how well we represent the evolution of the considered system on this reduced space; in other words, on how well we capture the essential dynamics of the considered system on this reduced state space. Intuitively, this reduction procedure corresponds to ignoring the states having small probabilities compared to the ones having more important probabilities. Furthermore, the bounding systems must provide stochastic bounds with respect to $\preceq_{st}$. We give in the following some proof techniques to show the existence of the stochastic ordering among the considered models.

3.1. Proof techniques

The stochastic comparison between the original model and the bounding ones can be proved by applying different proof techniques, depending on the description of the bounding models. First, notice that


a homogeneous discrete-time Markov chain may be defined either by its evolution equations or by its probability transition matrix. In addition, the initial state must be known in both cases. Let us first consider the case where the discrete-time Markov chain models are described by their evolution equations. Proposition 7 and Definition 4 yield that the sample-path comparison between the underlying system with evolution equations $X(t_i)$ and an upper bounding one with evolution equations $X^{\sup}(t_i)$ exists if and only if
$$\alpha(X(t_i)) \preceq \beta(X^{\sup}(t_i)) \qquad \forall t_i, \ i \in \mathbb{N}^+,$$

where $\alpha$, $\beta$ are the mappings used to project the corresponding spaces into the comparison space. Generally, the initial states are assumed to be the same. Thus, by induction, it is sufficient to show that if $\alpha(X(t)) \preceq \beta(X^{\sup}(t))$ for all $t < t_i$, then the images on the comparison space at instant $t_i$ are also comparable. This kind of proof, which consists of explicitly comparing the sample paths, may be established if there are two distinct models to compare. However, in some studies it is more practical to change some parameters of the underlying model, or to impose some constraints on it, in order to obtain bounding models. In this case, it must be ensured that the impact of these changes or imposed constraints guarantees the sample-path comparison. Thus, implicit proofs may be useful when a bounding model is constructed from the underlying one. Now consider the case where the underlying model is defined in terms of its transition probability matrix P, and consider the case of the upper bounding model. It must be shown that from every state m of the upper bounding model, it is more probable to move to a greater state according to $\preceq$ than from every state n of the considered model whose image is comparable ($\alpha(n) \preceq \beta(m)$). Formally, let Q be the probability transition matrix of the upper bounding model; it must be shown that
$$\forall m, n \mid \alpha(n) \preceq \beta(m), \qquad \alpha(P[n, \cdot]) \preceq_{st} \beta(Q[m, \cdot]). \tag{2}$$

Moreover, it may be more practical in some cases to verify the sufficient conditions on the monotonicity and the comparability of the probability transition matrices. For example, when the underlying model is monotone, or the bounding model is constructed to be monotone, it is sufficient to compare the probability matrices to prove the stochastic order relation. Formally, the monotonicity (for instance, of the original model) is expressed as follows: for any two states x, y such that the image of x on the comparison space is less than the image of y, it is more probable to move to greater states from y than from x:
$$\forall x, y \mid \alpha(x) \preceq \alpha(y), \qquad \alpha(P[x, \cdot]) \preceq_{st} \alpha(P[y, \cdot]). \tag{3}$$

In addition, the comparability condition states that for any pair of states x, y having the same image on the comparison space, it is more probable in the upper bounding model to move in one step to greater states. In the same way, it is more probable in the lower bounding model to move to lower states. We give in the following the comparability condition for the upper bounding model:
$$\forall x, y \mid \alpha(x) = \beta(y), \qquad \alpha(P[x, \cdot]) \preceq_{st} \beta(Q[y, \cdot]) \iff P[x, \cdot] M_\alpha \preceq_{st} Q[y, \cdot] M_\beta. \tag{4}$$

When the stochastic comparison is proved, as a result of the main property of the strong stochastic ordering (the generic definition, Definition 2), we have the following inequalities on increasing functionals:
$$E f(\alpha(X(t_i))) \le E f(\beta(X^{\sup}(t_i))) \qquad \forall t_i, \ i \in \mathbb{N}^+, \tag{5}$$
where $f : S \to \mathbb{R}$ is a $\preceq$-increasing function.
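The transient bounds of Eq. (5) can be observed numerically: starting from a common initial distribution, iterate a chain P and an st-dominating chain Q, and compare the expectations of an increasing function at each step. The matrices and function below are made up; P was chosen st-monotone with its rows dominated row-wise by Q, so the sufficient conditions of Proposition 9 hold:

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.4, 0.5]])   # underlying chain (st-monotone)
Q = np.array([[0.4, 0.4, 0.2],
              [0.2, 0.4, 0.4],
              [0.1, 0.3, 0.6]])   # upper bounding chain
f = np.array([0.0, 1.0, 4.0])    # an increasing function on {0,1,2}

p = q = np.array([1.0, 0.0, 0.0])    # common initial state 0
for t in range(20):
    p, q = p @ P, q @ Q
    # transient bound of Eq. (5) at each instant t
    assert p @ f <= q @ f + 1e-12
print("E f bound holds at every step")
```

The loop checks the bound at each of the first 20 instants; as t grows, both expectations converge to their stationary values, so steady-state bounds are obtained as well.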


Recall that the state space S and the order relation are chosen so that the considered performance measures are $\preceq$-increasing functions on S. Therefore, we obtain bounding values on the performance measures. Notice that the obtained bounds are valid at each instant, providing transient bounds. Furthermore, if a steady state exists, then steady-state bounds can be obtained as well.

4. Applications

In this section, we give some applications of this methodology to evaluate cell loss rates in ATM networks. In such networks, since the transmission unit is fixed, we have discrete-time models, where time is discretized in slots of one cell switching time. We consider two problems in ATM networks, and explain how this methodology is applied within a unified approach. We emphasize the proof techniques that may be applied rather than the explicit proofs. Further numerical results for these problems may be found in the stated references.

4.1. Comparison of buffer management policies in an ATM switch

First, we apply this approach to evaluate the loss rates in an ATM switch. Cell loss is one of the important problems in broadband integrated services digital networks (ISDN). In an ATM switch, buffers are of finite capacity, and they may receive two flows of cells with different cell loss requirements. The constraints on cell loss rates may be guaranteed by controlling the spatial and service priorities between cells. We assume that the spatial priority is controlled by the PushOut mechanism. In this mechanism, when the buffer is not full, all arriving cells can be stored. When the buffer is full, an arriving low priority cell (class L) is lost, while an arriving high priority cell (class H) pushes a low priority cell out of the buffer if there is one; otherwise the high priority cell is lost. The deletion discipline is Last-In-First-Out (LIFO). We consider that there is no time priority between cell classes, and they are scheduled in the First-In-First-Out (FIFO) manner. This system can be modeled by a discrete-time Markov chain with general assumptions on the arrival processes. However, the size of the state space is huge, $O(2^B)$, where B is the buffer size, which makes the numerical solution intractable. The considered performance measure is the cell loss rate of high priority cells.
The arrivals are supposed to occur just before the beginning of a service. Moreover, for the sake of simplicity, we assume independent, identically distributed batch arrivals. We keep the same spatial priority and propose bounding systems by changing the service (temporal) priority. We first show that Head-Of-Line (HOL) priority scheduling policies provide bounds on FIFO policies. In HOL policies, one class has service priority over the other one, and the service is non-preemptive [4]. Therefore, only the number of cells per class must be known, so the state space size is reduced considerably for these policies ($O(B^2)$). Furthermore, we propose aggregation procedures to obtain bounding systems of state space size $O(B)$. Finally, we give other bounding models which are obtained by considering fictive scheduling policies. These bounding models are defined on larger state spaces, but provide more accurate results. The first step of the methodology is to determine the state space of comparison, S, and the vector ordering $\preceq$ on this space. The state space S must be chosen so that the evolution of the considered performance measure can be compared; first of all, the considered performance measure must be expressible on this space. Let the state vector of S be composed of 3 components: $(N, H, C)$, where N is the total number of cells, H is the number of high priority cells, and C is the class of the cell in service. We now


define the cell loss rate of high priority cells at time t, R(t), on the space S. Let $X(t) = (N(t), H(t), C(t))$ denote the image on S of the considered system at time t, and let $s = (n, h, c)$ denote a state of this space. R(t) is then given as follows:
$$R(t) = \sum_{s \in S} \Pr(X(t) = s) \sum_{j=1}^{G} p_h[j] \, \left(h + j - B - 1_{c=H}\right)^+, \tag{6}$$

where $p_h[j]$ is the probability of an arrival of j high priority cells and G is the maximum batch size. Notice that the cell arrival probabilities $p_h[j]$ are independent of the state s. The second choice is to determine the order relation $\preceq$ on this space. Recall that the considered performance measure (Eq. (6)) must be increasing with respect to the order relation $\preceq$. Intuitively, this means that if two states $sa = (na, ha, ca)$ and $sb = (nb, hb, cb)$ are comparable, say $sa \preceq sb$, then the cell loss rate will be greater at state sb than at state sa. We can rewrite Eq. (6) as $R(t) = E f(X(t))$, where $f : S \to \mathbb{R}$. The cell loss rate at state s, $f(s)$, is then:
$$f(s) = \sum_{j=1}^{G} p_h[j] \, \left(h + j - B - 1_{c=H}\right)^+. \tag{7}$$

Therefore, the function f must be $\preceq$-increasing. We propose the following partial order:
$$sa \preceq sb \iff na = nb \text{ and } \begin{cases} ha < hb, \\ \text{or } ha = hb \text{ and not } ((ca = L) \text{ and } (cb = H)). \end{cases}$$
Moreover, $sa = sb$ if there is component-wise equality. Notice that if $sa \preceq sb$, it follows from the conditions on the number of high priority cells, and from the fact that the cell arrival probabilities do not depend on the state s, that the functional (Eq. (7)) is $\preceq$-increasing. Moreover, the order relation $\preceq$ may not be the unique one with respect to which the functional defining the considered performance measure is increasing. For instance, the equality of the total number of cells required by the order relation is not used in the functional defining the cell loss rate; the functional is thus also increasing without this condition. However, this condition limits the number of aggregated states on the comparison space. In the same way, the cell loss rate differs from zero only for some values of H depending on the maximum batch size, but the preorder is defined for every value of H.
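The loss functional of Eq. (7) is straightforward to evaluate for a given state. The following sketch uses the reconstruction of the functional given here, with hypothetical numbers (buffer size B = 4 and a made-up batch-size distribution):

```python
B = 4
p_h = {1: 0.5, 2: 0.3, 3: 0.2}   # hypothetical H-batch distribution, G = 3

def f(s):
    """Expected number of H cells lost in one slot at state s = (n, h, c):
    h current H cells, plus j arrivals, minus buffer capacity B, minus one
    if the cell in service is of class H (positive part)."""
    n, h, c = s
    return sum(pj * max(h + j - B - (1 if c == 'H' else 0), 0)
               for j, pj in p_h.items())

sa, sb = (4, 3, 'H'), (4, 4, 'H')   # comparable states: same n, ha < hb
assert f(sa) <= f(sb)               # f increases along the partial order
print(f(sa), f(sb))
```

As expected, the state with more high priority cells yields the larger loss value, consistent with f being $\preceq$-increasing.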

Therefore, the function f must be ¼-increasing. We propose the following partial order: 8 > : or ha D hb and not ..ca D L/ and .cb D H //: Moreover, sa D sb if there is a component-wise equality. Notice that if sa ¼ sb, it follows from the conditions on the number of high priority cells and the fact that cell arrival probabilities do not depend on state s that the functional (Eq. (7)) is ¼-increasing. Moreover the order relation (¼) may not be the unique one with respect to which the functional defining the considered performance measure is increasing. For instance, the equality on the total number of cells of the order relation is not used in the functional defining the cell loss rate. Therefore the functional is also increasing without this condition. However, this condition limits the number of aggregated states on the comparison space. In the same way, the cell loss rate is different to zero for some values of H cells depending on the maximum batch size, but the preorder is defined for each value of H cells. 4.1.1. Bounding systems We now give the bounding systems in the sense of the sample-path stochastic ordering. We assume that all considered systems, including the original model is empty at the beginning (t D 0), and they are subjected to the same arrival processes. Hence the equality of the total cell number is straightforward for work-conserving policies (policies for which the server can not be idle if there is any work in the system). It follows from the sample path property of the vector ordering ¼ that an upper bounding system has either more high priority cells, or in the case of the same number of high priority cells, a high priority cell service in the upper bounding system implies a high priority cell service in the original one.


Obviously, these constraints must be satisfied at each instant. Similarly, a lower bounding system must have either less high priority cells, or in the case of equality, the lower bounding system must not have a low priority cell service in the lower bounding system, and a high priority cell service in the original one. We now define two HOL policies: HOL1 where the service (time) priority is given to low priority (L) cells; HOL2 where the service priority is given to high priority (H) cells. It has been proven that HOL1 provides an upper bound, while HOL2 provides a lower bound for FIFO scheduling [6]. The sample path comparison with respect to the vector ordering ¼ can be established by means of evolution equations of the underlying systems. We emphasize here only the dynamics of HOL systems letting to have these sample-path comparisons with the FIFO service policy. For the comparison of HOL2 and FIFO, we must show that in the case of the equal number of H cells in both buffers, an L cell service under HOL2 and an H cell service in the FIFO buffer cannot occur. It is obvious that an L cell service under HOL2 corresponds to the case where there is no H cell. If the numbers of H cells are equal in both buffers, then there is no H cell in the FIFO buffer. Hence an L cell is also served in the FIFO buffer. In the same way, in the case of the equal number of H cells, it must be shown that an H cell service under HOL1 and an L cell service cannot occur. Clearly, an H cell service under HOL1 corresponds to the case where there is no L cell in the buffer: the total cell number is equal to the number of H cells. Hence, an H cell service under HOL1 and the equal number of H cells in both systems yield no L cell in the FIFO buffer, either. Therefore, an H cell is also served in the FIFO buffer. The HOL1 policy may be represented by considering the total cell number and the number of low or high priority cells. Hence the state space size is equal to .B C 1/ Ł .B C 2/=2. 
The state space size is reduced considerably compared to $2^{B+1} - 1$ for the FIFO policy. However, this state space size is still important, since the numerical solution of a Markov chain consists of vector–matrix multiplications. We describe in the following the aggregation procedures to reduce the state space size of the HOL1 policy. On the other hand, the representation of the HOL2 policy is reduced to the number of high priority cells, since this class has both priorities (spatial, service); therefore, the state space size is $B + 1$. The aggregation procedures of the HOL1 policy consist of having a limited representation of the underlying Markov chain. For instance, assume that the maximum represented difference between the total cell number and the number of high priority cells is F, $0 \le F \le B$. Hence, if $F = 0$, then cell classes are not distinguished, and we consider only the total number of cells. The case $F = B$ corresponds to the HOL1 policy itself, with no aggregation. In the other cases, when $N(t) > H(t) + F$, we overestimate the number of high priority cells and suppose that it is equal to $N(t) - F$. This aggregated chain is given in Fig. 1. The states where $H = N - F$ are called macro-states, each one representing more than one state. Different aggregation schemes satisfying the sample-path ordering may be proposed [5]. The proof for this bounding model results from the fact that, by the aggregation procedure, the transitions to the states having an image $(n, h, c)$ such that $n > h + F$ are directed to the state having the image $(n, n - F, c)$. This modification implies that some probabilities of the probability transition matrix are translated to greater states. Therefore, the proof for this bounding model may easily be established by coupling the images of the underlying chains (Eq. (2)).
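The aggregation step described above can be sketched as a simple state mapping (the (n, h) encoding is illustrative, not the paper's implementation): states with more than F low priority cells are collapsed onto the macro-state with $H = N - F$, which overestimates the number of H cells:

```python
# Sketch of the HOL1 aggregation: a state (n, h) with n > h + F
# is mapped to the macro-state (n, n - F); other states are kept.

def aggregate(state, F):
    n, h = state
    return (n, n - F) if n > h + F else (n, h)

F = 2
assert aggregate((6, 1), F) == (6, 4)   # 6 > 1 + 2: collapsed upward
assert aggregate((6, 5), F) == (6, 5)   # within the band: unchanged
print(sorted({aggregate((6, h), F) for h in range(7)}))
# [(6, 4), (6, 5), (6, 6)]
```

For n = 6 and F = 2, the seven states h = 0..6 are reduced to three, and the mapped number of H cells is never smaller than the true one, which is what makes the aggregated chain an upper bound.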
Clearly, if cell loss rates are computed in order to dimension buffers so as to guarantee given loss requirements, then we must find the buffer size for which the upper bound on the cell loss rate is less than or equal to the predefined requirement. Hence, for the buffer dimensioning problem the upper bounds provide sufficient information.
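For dimensioning, it suffices to search for the smallest capacity whose upper bound meets the requirement. The sketch below assumes a hypothetical bound function `upper_bound(B)` that decreases in B (here a made-up geometric decay, not a value computed from the model) and finds the minimal buffer size by bisection.

```python
def smallest_buffer(upper_bound, target, b_max=10**6):
    """Smallest B with upper_bound(B) <= target, assuming monotone decrease."""
    lo, hi = 0, b_max
    while lo < hi:
        mid = (lo + hi) // 2
        if upper_bound(mid) <= target:
            hi = mid        # requirement met: try smaller buffers
        else:
            lo = mid + 1    # requirement violated: need a larger buffer
    return lo

# Hypothetical upper bound on the cell loss rate: geometric decay in B.
upper_bound = lambda B: 0.1 * 0.5 ** B

B_req = smallest_buffer(upper_bound, 1e-6)
assert upper_bound(B_req) <= 1e-6 < upper_bound(B_req - 1)
print(B_req)  # 17
```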


Fig. 1. The aggregated chain: in the (N, H) plane with 0 ≤ H ≤ N ≤ B, the states above the line H = N − F are unchanged, and the states with H = N − F are the macro-states.

Fig. 2. Loss rates versus buffer load with different aggregation factors F.

In Fig. 2, we give the upper bounds computed with different aggregation factors F for a buffer of size 60, versus the buffer load, to illustrate the dependence of the tightness of the bounds on F. It can be seen that for increasing values of F the upper bounds decrease towards the exact values (F = 60, which is not numerically tractable). Moreover, we can obtain tighter bounds by mixing the HOL and FIFO policies. We call these policies HF(K). To define them, we assume that the buffer is composed of two parts (see Fig. 3). Let the buffer be indexed in increasing order (1 ≤ i ≤ B), and the index of the cell at the head of

Fig. 3. Bounding models.


buffer (which is in service) be 1. The first part (1 ≤ i ≤ K) is managed according to the FIFO policy, and the second part (K + 1 ≤ i ≤ B) according to a HOL policy. Obviously, the parameter K determines the tightness of the bounds: if K = B we have the FIFO policy, and if K = 1 we obtain a HOL policy. The external arrivals occur to the HOL buffer, and a service for this buffer corresponds to a cell departure to the FIFO buffer. The arrivals to the FIFO buffer thus occur from the HOL buffer, and its service is the real switching time of the cell. The spatial priority is managed with the push-out mechanism, which is applied first in the HOL part and, if there is no L cell to push out, then in the FIFO part. It has been shown that if the second part is managed by HOL1 (L cells have priority to go to the FIFO buffer), HF(K) provides an upper bound; similarly, if it is managed by HOL2 (H cells have priority to go to the FIFO buffer), it provides a lower bound [1]. Moreover, the tightness of the bounds is determined by the value of K: larger values of K provide tighter bounds, but the state space size increases by a factor of 2^K. In fact, these fictive policies HF(K) are defined so as to satisfy the following constraint: in the upper bounding model, no H cell is served later than its service instant in the FIFO buffer; in the same manner, in the lower bounding model no H cell is served earlier than in the FIFO buffer. Therefore, the sample-path comparison is established in an implicit manner [1]. Let {X_1^K(t), t} (resp. {X_2^K(t), t}) be the image of the Markov chain on S representing the HOL1 + FIFO (resp. HOL2 + FIFO) policy, where the FIFO buffer is of size K, and let {X_FIFO(t), t} be the image of the Markov chain for the FIFO policy.
The stochastic comparison between these Markov chains is as follows: the HOL1 + FIFO policies provide upper bounds, while the HOL2 + FIFO policies provide lower bounds:

{X_2^0(t), t} ≼st ⋯ ≼st {X_2^K(t), t} ≼st ⋯ ≼st {X_2^B(t), t} ≼st {X_FIFO(t), t},
{X_FIFO(t), t} ≼st {X_1^B(t), t} ≼st ⋯ ≼st {X_1^K(t), t} ≼st ⋯ ≼st {X_1^0(t), t}.

Indeed, one must find a trade-off between the tightness of the bounds and the numerical complexity. Fig. 4 shows the impact of the parameter K on the accuracy of the results for a buffer of size 10; the load is assumed to be 0.7 in this example. In particular, the quality of the lower bound is improved by increasing K.
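On a totally ordered comparison space, such as the H-cell count used to represent HOL2, the strong ordering ≼st of two distributions reduces to a tail-sum comparison: P ≼st Q iff Pr(X ≥ k) ≤ Pr(Y ≥ k) for every k. (The paper's vector order ≼ is only a partial order; this sketch covers the totally ordered case, with toy distributions rather than values computed from the ATM model.)

```python
def st_leq(p, q, tol=1e-12):
    """True iff p <=_st q for two distributions on {0, 1, ..., n}."""
    n = max(len(p), len(q))
    p = list(p) + [0.0] * (n - len(p))
    q = list(q) + [0.0] * (n - len(q))
    tp = tq = 0.0
    for k in range(n - 1, -1, -1):  # compare tails Pr(X >= k) and Pr(Y >= k)
        tp += p[k]
        tq += q[k]
        if tp > tq + tol:
            return False
    return True

lower = [0.5, 0.3, 0.2]   # toy lower bound on the H-cell count distribution
exact = [0.4, 0.3, 0.3]
upper = [0.2, 0.3, 0.5]
assert st_leq(lower, exact) and st_leq(exact, upper)
assert not st_leq(upper, exact)
```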

"infH" "supH" 0.001

0.0001

1e-05

1e-06

1e-07

1

2

3

4

5

6

Fig. 4. The impact of K on the tightness of the bounds.

7


Fig. 5. Exact model: first-stage buffers B1 = B2 = B3 = B4 = 20 feed the second-stage buffer B0 = 20 with routing probabilities p1, p2, p3, p4.

4.2. Multi-stage ATM switch

We now present the application of this methodology to evaluate the loss rates in the second stage of an ATM switch. In queueing-theoretic terms, the considered system is a feed-forward network, where finite-capacity queues are arranged in several stages and the external arrivals take place only in the first (input) stage. Since we consider ATM networks, the system can be modeled as a discrete-time Markov chain under very general assumptions on the arrival processes. However, the numerical solution is limited to small values of the capacities B_i. Because of the topology, approximate methods based on decomposition are usually employed to evaluate these kinds of models; for ill-balanced configurations this approach may not give satisfactory results [7]. We apply the proposed methodology to derive the bounds and we have obtained good results. We now give the model in more detail (see Fig. 5). We assume that all queues have the same capacity B. Obviously, the cell loss rates in the first stage can be obtained exactly, in isolation from the other stages; we are therefore interested in the loss rate of the buffer in the second stage. The state representation vector may be chosen as (N0, N1, …, Nm), where N_i is the number of cells in buffer i. Hence, the asymptotic state space size is O(B^m). Because of this state space explosion in B, we attempt to find bounding models on a reduced state space. The first step of the methodology is to determine the comparison state space S and the order relation ≼ on this space. It is easy to see from the inherent characteristics of the model that, in order to compute the cell loss rates in the second-stage buffer, we must know the exact number of cells in this buffer, while it suffices to know whether or not there is any cell in the input buffers. This leads us to a limited state representation vector: X = (N0, X1, X2, …, Xm).
X_i equals 1 if there is at least one cell in buffer i, and 0 otherwise. First, we must be able to express the considered performance measure on this space. The image of the considered system on the space S at time t is denoted by X(t) = (N0(t), X1(t), …, Xm(t)), and s = (n0, x1, …, xm) denotes any state of this space. The cell loss rate at the second stage at time t, R(t), is given by:

R(t) = Σ_{s∈S} Pr(X(t) = s) Σ_{j=1}^{m} p[j, s] ((n0 − 1)^+ + j − B0)^+    (8)


where p[j, s] is the probability of having j arrivals at state s; notice that this probability is derived only from the routing probabilities and the values of x_i, 1 ≤ i ≤ m. Moreover, the considered performance measure must be a ≼-increasing functional of the Markov chain. Intuitively, the states must be ordered in such a way that if x ≼ y, then losing cells at state x is less probable than losing them at state y. The cell loss rate (Eq. (8)) may be rewritten as R(t) = E f(X(t)), where f : S → R,

f(s) = Σ_{j=1}^{m} p[j, s] ((n0 − 1)^+ + j − B0)^+.    (9)
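Equation (9) can be evaluated directly once p[j, s] is expanded over the active inputs. The sketch below (illustrative routing probabilities; the function names are mine) builds the distribution of the number of arrivals in one slot as a Poisson-binomial over the active Bernoulli inputs, then computes the one-slot expected loss f(s).

```python
def arrival_dist(active_probs):
    """p[j]: probability of exactly j arrivals from independent Bernoulli inputs."""
    dist = [1.0]
    for p in active_probs:
        new = [0.0] * (len(dist) + 1)
        for j, w in enumerate(dist):
            new[j] += w * (1.0 - p)      # this input sends no cell
            new[j + 1] += w * p          # this input sends a cell
        dist = new
    return dist

def loss_term(s, route_probs, B0):
    """f(s) of Eq. (9): expected cells lost at buffer 0 in one slot from state s."""
    n0, xs = s[0], s[1:]
    active = [p for p, x in zip(route_probs, xs) if x == 1]
    free = B0 - max(n0 - 1, 0)           # room left after the slot's service
    return sum(pj * max(j - free, 0) for j, pj in enumerate(arrival_dist(active)))

# Illustrative state and routing probabilities (the q = 0.01 configuration)
probs = [0.01, 0.01, 0.39, 0.59]
assert loss_term((5, 0, 0, 0, 0), probs, 20) == 0.0   # no active input: no loss
full = loss_term((20, 1, 1, 1, 1), probs, 20)         # full buffer, all inputs active
assert 0.0 < full < 1.0
```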

Therefore, Eq. (9) must be ≼-increasing. The chosen vector ordering is as follows: let x = (x0, x1, …, xm), y = (y0, y1, …, ym) ∈ S. Then

x ≼ y  if  x0 ≤ y0 and x1 = y1, …, xm = ym.    (10)

In addition, x = y if all components are equal. On the other hand, if x ≼ y then f(x) ≤ f(y): in this case x_i = y_i, 1 ≤ i ≤ m, so the arrival probabilities are the same; hence f(x) differs from f(y) only through the values of x0 and y0. As a result of the vector ordering, x0 ≤ y0, which yields f(x) ≤ f(y). The second step is to find bounding systems verifying the sample-path property with respect to the vector ordering ≼. The images of the bounding systems on the comparison state space S must be comparable with the image of the original model on S. It is assumed that all considered systems are empty at time 0 and are subjected to the same arrival process. Let {X^inf(t), t}, {X^sup(t), t}, {X(t), t} be respectively the images on S of the lower bounding, the upper bounding, and the original system. Then

{X^inf(t), t} ≼st {X(t), t} ≼st {X^sup(t), t}.

We now define, for the considered example (Fig. 5), the constraints that a bounding system must satisfy:
• lower bounding models:
  – for all input buffers, 1 ≤ i ≤ m: if X_i(t) = 0, then X_i^inf(t) = 0, for all t ≥ 0. This condition means that when no arrival occurs from buffer i to buffer 0 in the original system, then no arrival occurs in the lower bounding one;
  – and N0(t) ≥ N0^inf(t), for all t ≥ 0.
• upper bounding models:
  – for all input buffers, 1 ≤ i ≤ m: if X_i(t) = 1, then X_i^sup(t) = 1, for all t ≥ 0. This condition means that when an arrival may occur from buffer i to buffer 0 in the upper bounding system, then an arrival occurs in the original one;
  – and N0(t) ≤ N0^sup(t), for all t ≥ 0.
Obviously, several bounding systems satisfying these conditions may be found. However, the tightness of these bounds depends on how well the dynamics of the underlying system are represented with a


Fig. 6. Model for upper bound: input buffers B1 = 20 and B2 = 20 are kept, the other two inputs are replaced by sources, and B0 = 20 with routing probabilities p1, p2, p3, p4.

reduced size state space. In other words, a trade-off between the numerical complexity and the accuracy must be found. In [7], lower bounding systems obtained by reducing the capacities of the input stages have been proposed. Intuitively, if the capacities of the input buffers having smaller routing probabilities to buffer 0 are reduced, the cell loss probability will be less affected: we ignore the behavior of the buffers sending fewer cells, and take into account the behavior of the buffers sending more cells. A trivial, but probably not very tight, bound may be obtained by reducing some buffer capacities to 0. The upper bound may be obtained in the same manner by overestimating the behavior of the buffers sending more cells; hence we replace some input buffers with sources. Clearly, the bounds will be tighter if we replace the buffers having high probabilities of sending cells to buffer 0. We give some results to emphasize the effectiveness of this approach. Notice that the size of the exact model is (B + 1)^5. We assume that the external arrivals to each buffer of the first stage are the superposition of two independent Bernoulli processes with probability p; the average load of each first-stage queue is therefore 2p. We assume unbalanced routing probabilities as follows: p1 = q, p2 = q, p3 = 0.4 − q, p4 = 0.6 − q. The upper bound is obtained with a model of two input buffers and two sources (Fig. 6); the chain size is then only (B + 1)^3. To compute the lower bounds, we keep one input buffer unchanged, divide the size of a second by 2, and reduce the size of the two others to only 1 cell (Fig. 7). This leads to a chain of size (B + 1)^2 × (B + 2). Clearly, the upper bound is much simpler to compute than the lower bound. In Fig. 8, we give the bounds for two different values of q. It can be seen that the above bounding models provide very tight bounds for q = 0.01. Obviously, other bounding models verifying the sample-path property may be derived.
For instance, in the case of q = 0.1 the lower bound may be improved by increasing the sizes of buffers B1 and B2. As said before, a trade-off between the accuracy of the results and the numerical complexity must be found. It is also interesting to remark that this approach may easily be extended to other feed-forward networks.
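The state-space savings behind Figs. 6 and 7 are easy to quantify. A quick computation under the sizes quoted in the text (B = 20):

```python
B = 20
exact = (B + 1) ** 5             # full model: five buffers of capacity B
upper = (B + 1) ** 3             # Fig. 6: two input buffers replaced by sources
lower = (B + 1) ** 2 * (B + 2)   # Fig. 7: chain size quoted in the text
assert exact == 4_084_101        # intractable for direct numerical solution
assert upper == 9_261
print(f"reduction: x{exact // upper} (upper bound), x{exact // lower} (lower bound)")
```

The bounding chains are thus two to three orders of magnitude smaller than the exact one, which is what makes the numerical solution feasible.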

5. Conclusions

In this work, we explain how stochastic bounds may be derived for complex discrete event systems by analyzing simpler models. In the context of this paper, the simpler models are discrete-time Markov chains


Fig. 7. Model for lower bound: input buffers B1 = 1, B2 = 1, B3 = 10, B4 = 20 feed B0 = 20 with routing probabilities p1, p2, p3, p4.

Fig. 8. Buffer of size 20, q = 0.01 and q = 0.1.

which are defined on reduced size state spaces. The main drawback of Markovian models is the state space explosion problem; by considering bounding models with reduced size state spaces, it becomes possible to obtain numerical results for larger parameter values. The main idea of this reduction approach is to capture the evolution of the considered performance measure with a reduced state space. Indeed, in large Markov chains the state probabilities are not uniformly distributed, but rather concentrated on a subset of states. Hence, if the reduction procedure makes it possible to take into account the states having higher probabilities and to neglect the ones with lower probabilities, the obtained stochastic bounds will be tight. Obviously, this procedure is a heuristic one, and different bounding models provide different degrees of tightness. The goal of this paper is to present a methodology to find bounding models in the sense of sample-path ordering. First, we give an overview of the sample-path stochastic ordering associated with a preorder relation. The choice of this order depends on the considered performance measure, since the latter must be defined as an increasing function with respect to it. Moreover, this preorder implicitly determines the reduction procedure, because the images of the bounding models and of the original model are compared with respect to it. Roughly speaking, if a


relatively high number of states are aggregated to have the same image, then, because of the sample-path constraints on these states, the obtained bounds may be loose. In other words, the state having the worst behavior with respect to the preorder determines the behavior of the other states having the same image. Once the reduced state space is determined, it is possible to propose different bounding models, which may even be defined on different spaces. Moreover, it is practical to define bounding models with different reduction parameters in order to find a trade-off between the tightness of the bounds and the state space size, which determines the numerical complexity. In fact, we obtain bounds on performance measures, not their exact values; however, the approximation error is bounded. Another advantage of the stochastic comparison method is that we obtain information on the distribution of the performance measures, and not only on their average values. Obviously, these stochastic bounds may be looser than bounds on average values. Nevertheless, they have the advantage of also providing bounds on the transient behavior of the underlying system. Clearly, transient bounds may be important for some problems, such as network dimensioning.

Acknowledgements

The author wishes to thank J.M. Fourneau for his helpful comments.

References

[1] O. Abu-Amsha, J.M. Fourneau, N. Pekergin, Bornes stochastiques pour les taux de pertes dans un tampon mémoire ATM, Research Report PRiSM 97/012, 1997.
[2] O. Abu-Amsha, J.M. Vincent, An algorithm to bound functionals of Markov chains with large state space, Technical Report RR Mai-25, IMAG, Grenoble, France, April 1996.
[3] M. Doisy, Comparaison de processus markoviens, Ph.D. thesis, Université de Pau et des Pays de l'Adour, 1992.
[4] A. Gravey, G. Hébuterne, Mixing time and loss priorities in a single server queue, ATM Workshop, 13th International Teletraffic Congress, Copenhagen, 1991.
[5] J.M. Fourneau, N. Pekergin, H. Taleb, Stochastic bounds and QoS: application to the loss rates in ATM networks, European Simulation Multiconference (ESM'96), Budapest, Hungary, 1996.
[6] J.M. Fourneau, N. Pekergin, H. Taleb, An application of stochastic ordering to the analysis of the push-out mechanism, in: D. Kouvatsos (Ed.), Performance Modelling and Evaluation of ATM Networks, Chapman & Hall, London, 1995.
[7] J.M. Fourneau, L. Mokdad, N. Pekergin, Bounding the loss rates in a multistage ATM switch, in: R. Marie et al. (Eds.), Proc. Tools, St. Malo, France, LNCS 1245, Springer, Berlin, 1997.
[8] J.C.S. Lui, R. Muntz, D. Towsley, Bounding the response time of a minimum expected delay routing system: an algorithmic approach, IEEE Trans. Comput. 44 (12) (1995).
[9] T. Kamae, U. Krengel, G.L. O'Brien, Stochastic inequalities on partially ordered spaces, Ann. Probab. 5 (6) (1977) 899–912.
[10] J. Keilson, A. Kester, Monotone matrices and monotone Markov processes, Stoch. Process. Appl. 5 (1977) 231–241.
[11] W. Massey, A family of bounds for the transient behavior of a Jackson network, J. Appl. Probab. 23 (1986).
[12] R. Onvural, Asynchronous Transfer Mode Networks: Performance Issues, Artech House, Boston, 1993.
[13] K. Mosler, M. Scarsini, Stochastic Orders and Applications, LNEMS 401, Springer, Berlin, 1993.
[14] M. Shaked, J.G. Shanthikumar, Stochastic Orders and Their Applications, Academic Press, Orlando, 1994.
[15] D. Stoyan, Comparison Methods for Queues and Other Stochastic Models, Wiley, New York, 1976.
[16] L. Truffet, Near complete decomposability: bounding the error by stochastic comparison method, Adv. Appl. Probab. 29 (1997).
[17] M. Woodward, Communication and Computer Networks: Modelling with Discrete Time Queues, Pentech Press, London, 1993.