Switching Control Analysis and Design in Queue Networks
Journal Pre-proof

Switching Control Analysis and Design in Queue Networks

Lőrinc Márton

PII: S0016-0032(19)30698-2
DOI: https://doi.org/10.1016/j.jfranklin.2019.09.027
Reference: FI 4179

To appear in: Journal of the Franklin Institute

Received date: 6 November 2018
Revised date: 28 July 2019
Accepted date: 19 September 2019

Please cite this article as: Lőrinc Márton, Switching Control Analysis and Design in Queue Networks, Journal of the Franklin Institute (2019), doi: https://doi.org/10.1016/j.jfranklin.2019.09.027

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2019 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
Switching Control Analysis and Design in Queue Networks

Lőrinc Márton
Department of Electrical Engineering, Sapientia Hungarian University of Transylvania, Tirgu Mures, Romania
Abstract: This paper deals with switching traffic control in communication networks that can be modeled as interconnected queue systems. A general method is proposed to analyze the stability and to compute expected upper bounds on the queue backlogs and recovery times in controlled queue networks which apply switching control algorithms. It is also discussed how switching control algorithms can be designed for network traffic control applications by applying the principle of lexicographic optimization. This optimization method provides a unified approach to the design of different types of switching traffic controllers. The delay control problem in networked control systems, as well as known data traffic control methods such as the additive increase/multiplicative decrease algorithm, can be treated within the proposed analysis and design framework. Experimental measurements are also presented to show the effectiveness of switching data traffic controllers in delay-critical wireless telerobotic applications.

Keywords: Network traffic control, Networked control systems, Stability analysis, Queue systems
Preprint submitted to Journal of The Franklin Institute, November 12, 2019

1. Introduction

Networked control systems apply communication networks to implement the information exchange among some of their components. Typical examples are Multi-Agent Systems [1] or Wireless Sensor Network-based control applications [2]. The performance of such control systems is largely influenced by the characteristics of the communication channels used [3]. In the applied communication links it is essential to implement traffic control algorithms that are able to assure proper communication conditions in almost all circumstances.
Congestion control mechanisms are meant to ensure that the communication network is not overloaded, by regulating the output flow rates at the source side. This is done based on feedback information (e.g. round-trip time or packet loss) obtained from a congestion detection mechanism [4]. In many cases, different transfer rate computation formulas are applied in different operating conditions; this yields switching control schemes such as the increase/decrease algorithms [5].
In the case of networked control systems, besides congestion avoidance, other, control-specific criteria have to be fulfilled by the traffic control algorithm. Since the communication delay directly influences the control performance in networked feedback loops, its expected value should not exceed predefined, application-specific limits. The study [6] treated the effects of delay and delay variation on the stability and performance of wireless networked control systems, and presented a set of basic requirements that support the successful deployment of such networked control systems.
For traffic control design it is beneficial to have a model which can suitably describe the behavior of the communication traffic in the network. Interconnected queue systems are accepted models for the behavior of many types of communication networks [7]. More recently, the theory of stochastic queue networks was proposed for control analysis and design in communication networks [8].
Deterministic, continuous queue system models allow the application of conventional control methods to design network traffic controllers. The paper [9] introduces a discrete-time $H_\infty$ controller for congestion control in data networks with communication lags. A robust $L_2$-stable networked controller for packet data queue level control between two nodes with an Internet connection was introduced in the study [10]. Integral control approaches were also developed for network traffic control to suppress the effect of unmodeled disturbances, see e.g. [11]. The cooperative control concept was applied in [12] for queue backlog control in a class of communication systems. Stability analysis methods for network congestion control systems were introduced based on continuous bottleneck link models of the communication networks, see e.g. [13].
Interconnected stochastic queues can capture adequately the behavior of communication systems [7], but control analysis and design based on such models are more challenging. A promising approach is based on Lyapunov theory extended to stochastic queue systems [8]. This was effectively applied to analyze and design backpressure routing algorithms [8] or to design a communication framework in radio sensor networks for smart grid applications [14]. Lyapunov techniques were also applied to determine the average delays and queue backlogs in input-queued cell-based switches [15]. The methods enumerated in these previous works impose restrictions on the drift of the Lyapunov functions to conclude on the stability and performance of controlled queue systems. The approach presented in this paper does not necessitate the Lyapunov drift condition or other restrictions on the Lyapunov function candidate. It only imposes that the control input be bounded to conclude on the system stability and performance. The assumptions related to the boundedness of the control are more suitable for practical applications.
The significant contributions of this paper are:

• An analysis method is introduced for a general class of switching control algorithms that are applied in queue networks. By using the proposed method, the expected value of the cumulative maximum queue backlog and the expected recovery time can be predicted based on the controller and queue parameters.

• A unified approach for switching control algorithm design in queue systems is proposed, which is based on lexicographic optimization.

• A switching delay control method is presented for wireless networked control systems. Wireless networked robot control experiments were performed, which apply the proposed switching delay control in the communication channels.
The rest of this paper is organized as follows: Section 2 introduces the basic modeling assumptions. Section 3 presents the stability analysis method for queue networks that are controlled using switching control algorithms. The switching control design problem in queue networks is treated in Section 4; the additive increase/multiplicative decrease control and the delay control problems are discussed there using the proposed analysis and design approaches. Case studies are presented in Section 5 to show the applicability of the proposed analysis and design methods. Finally, Section 6 concludes this study.
2. Controlled Queue Systems

Consider a system of queues in which the evolution of each queue is described by Lindley's recursion (see e.g. [16]):

$$Q_j[k+1] = \max(0,\, Q_j[k] + U_j[k]), \quad 0 \le E[Q_j[0]] < \infty, \quad j = 1 \ldots n < \infty. \tag{1}$$

Here $E[\cdot]$ denotes the expected value, $Q_j[k]$ is the $j$th queue's backlog in the integer time slot $k \ge 0$, and $U_j[k]$ is the control input. Lindley's recursion is an accepted model to describe links with storing capacity in real communication networks, see e.g. [17] and the references therein.
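Lindley's recursion (1) is straightforward to simulate. The sketch below (a minimal Python illustration; the Bernoulli arrival and unit service process are assumptions for the example, not taken from the paper) steps a single queue forward:

```python
import random

def lindley_step(q, u):
    """One step of Lindley's recursion (1): Q[k+1] = max(0, Q[k] + U[k])."""
    return max(0.0, q + u)

def simulate_queue(steps, arrival_prob=0.4, service=1.0, seed=0):
    """Simulate one queue with U[k] = A[k] - D[k], where A[k] is a Bernoulli
    arrival (illustrative traffic) and D[k] a constant service rate."""
    rng = random.Random(seed)
    q, history = 0.0, []
    for _ in range(steps):
        a = 1.0 if rng.random() < arrival_prob else 0.0
        q = lindley_step(q, a - service)
        history.append(q)
    return history

backlog = simulate_queue(10_000)
```

The `max(0, ...)` reflects that a backlog can never become negative: excess service in a slot is simply lost.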
It is considered that the expected value of the control input is upper bounded, as formulated in the following assumptions.

Assumption 1. $E[Q_j[k] U_j[k]] < E[Q_j[k]]\, U_{Mj}$, $\forall j, k$, $U_{Mj} > 0$.

Assumption 2. $E\!\left[\tfrac{1}{2}(U_j[k])^2\right] < U_{Mj}^{(2)}$, $\forall j, k$, $U_{Mj}^{(2)} > 0$.

Denote $U_M = \max_j(U_{Mj}) = \max(U_{M1}, \ldots, U_{Mj}, \ldots, U_{Mn})$, and $U_M^{(2)} = \max_j(U_{Mj}^{(2)})$.

The stability of queue systems is commonly related to the boundedness of the expected value of the queue backlogs.
Definition 1. [18] The queue network is strongly stable if:

$$\lim_{k \to \infty} \sup \frac{1}{k} \sum_{j=1}^{n} \sum_{i=0}^{k-1} E[Q_j[i]] < \infty. \tag{2}$$

A method to study the stability of queue networks is based on the following Lyapunov function candidate:

$$L(Q[k]) = \frac{1}{2} \sum_{j=1}^{n} Q_j^2[k] \tag{3}$$

where $Q[k] = (Q_1[k]\;\, Q_2[k] \ldots Q_n[k])^T$.
The control input $U_j[k]$ can be defined as the difference between the number of arrived items ($A_j[k]$) and the number of served/departed items ($D_j[k]$). A common approach to assure stability under mild conditions is to design control policies which assure that the expected values of the arrivals are strictly smaller than the expected values of the departures [19], i.e.

$$E[U_j[k]] < -U_{mj} \quad \forall j, k, \tag{4}$$

where $U_{mj} > 0$ is a lower bound on the difference between $E[D_j[k]]$ and $E[A_j[k]]$. Let $U_m = \min_j(U_{mj}) = \min(U_{m1}, \ldots, U_{mj}, \ldots, U_{mn})$.
Theorem 1. [18] Consider a system of queues modeled by (1). If there exist $U_M^{(2)} > 0$ and $U_m > 0$ such that

$$E[L(Q[k+1]) - L(Q[k])\,|\,Q[k]] \le U_M^{(2)} - U_m \sum_{j=1}^{n} E[Q_j[k]], \tag{5}$$

then the queue system is strongly stable and

$$\frac{1}{nk} \sum_{j=1}^{n} \sum_{i=0}^{k-1} E[Q_j[i]] \le \frac{U_M^{(2)}}{U_m} + \frac{E[L[0]]}{U_m n k}. \tag{6}$$

The conditional expectation $E[L(Q[k+1]) - L(Q[k])\,|\,Q[k]]$ is called the Lyapunov drift.
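The bound (6) can be checked numerically on a toy example (the traffic parameters below are hypothetical, chosen so that the constants of Theorem 1 are known in closed form): take a single queue ($n = 1$) with $U[k] = A[k] - 1$, where $A[k] = 2$ with probability $0.15$ and $0$ otherwise, so $E[U[k]] = -0.7$ (i.e. $U_m = 0.7$) and $E[U[k]^2/2] = 0.5$ (i.e. $U_M^{(2)} = 0.5$):

```python
import random

def avg_backlog(k, p_burst=0.15, seed=1):
    """Time-averaged backlog of a queue following Lindley's recursion (1)
    with U[k] = A[k] - 1 and bursty arrivals A[k] in {0, 2}."""
    rng = random.Random(seed)
    q, total = 0.0, 0.0
    for _ in range(k):
        total += q                      # accumulate Q[i] samples, i = 0..k-1
        a = 2.0 if rng.random() < p_burst else 0.0
        q = max(0.0, q + a - 1.0)       # Lindley's recursion (1)
    return total / k

k = 200_000
U_m, UM2, L0 = 0.7, 0.5, 0.0            # Um, UM^(2), E[L[0]] for this traffic
bound = UM2 / U_m + L0 / (U_m * 1 * k)  # right-hand side of (6) with n = 1
avg = avg_backlog(k)
```

For these parameters the empirical time average stays well below the bound $0.5/0.7 \approx 0.71$, as Theorem 1 predicts.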
3. Analysis of Queue Systems with Switching Control

The stability condition (4) for the expected value of the control is restrictive, as it imposes that the expected value of the control input should always be strictly negative. Control strategies based on condition (4) do not exploit the storing capability of the queues. A better strategy is to enforce condition (4) only if the queue backlog overpasses a predefined limit.
Moreover, many control strategies cannot instantly satisfy the condition (4). This is the case for control algorithms with integral terms. For a more efficient and flexible utilization of the queues, consider the following switching control algorithm for each queue:

$$U_j[k] = \begin{cases} U_{j-}[k], & \text{if } E[Q_j[k]] > Q_\varepsilon, \\ U_{j+}[k], & \text{otherwise,} \end{cases} \tag{7}$$

where $Q_\varepsilon$ is a predefined queue backlog threshold value.

No restriction is imposed on $U_{j+}$. It may be set by the applications which use the queue system, to satisfy the demands of the tasks implemented over the queue system.
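The switching law (7) can be sketched directly in code. In the minimal Python example below, the drain and admit magnitudes ($-2$ and $+1$) and the threshold are illustrative values, not taken from the paper:

```python
def switching_control(q, q_eps, u_minus, u_plus):
    """Switching law (7): apply the restrictive action U_j- when the
    backlog exceeds the threshold Q_eps, otherwise the free action U_j+."""
    return u_minus if q > q_eps else u_plus

def run(steps=50, q0=20.0, q_eps=5.0):
    """Drive one queue with (7): drain at rate 2 above Q_eps, admit one
    item per slot below it (illustrative magnitudes)."""
    q, trace = q0, []
    for _ in range(steps):
        u = switching_control(q, q_eps, u_minus=-2.0, u_plus=1.0)
        q = max(0.0, q + u)   # Lindley's recursion (1)
        trace.append(q)
    return trace

trace = run()
```

Starting above the threshold, the backlog drains until it falls below $Q_\varepsilon$ and then hovers around the threshold, which is exactly the behavior the analysis below quantifies.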
To ensure stability and to compute an upper bound for the average queue backlog, more relaxed conditions for the control input are proposed here. The condition below is similar to (4) but it imposes a restriction on the control only if the expected queue backlog overpasses $Q_\varepsilon$.

Assumption 3. $E[Q_j[k] U_{j-}[k]] < -E[Q_j[k]]\, U_{mj}$, if $E[Q_j[k]] > Q_\varepsilon$, $\forall j, k$.

The next condition is less restrictive, as it does not enforce an instantaneous sign restriction when the condition $E[Q_j[k]] \le Q_\varepsilon$ is not satisfied:

Assumption 4. If in the $l$th time slot $Q_j[l]$ overpasses $Q_\varepsilon$, $U_{j-}$ assures that $E[Q_j[k]]$ returns under $Q_\varepsilon$ within $0 \le P_l < \infty$ successive time slots.
Let the recovery time be $P = \max_l(P_l)$.

Assumption 4 implies that there exists a number of time slots in which Assumption 3 is satisfied, but it also allows a finite number of time slots in which Assumption 3 is violated. Assumption 4 is suitable to analyze switching control algorithms which have terms with integral character, e.g. terms of the form $U_{j-}[k+1] = U_{j-}[k] - \Delta_j(Q_j[k])$, where $\Delta_j(\cdot)$ is the control decrement function. Such control actions cannot instantly ensure that Assumption 3 is satisfied. In the next section it will be shown that common switching control algorithms, such as the additive increase/multiplicative decrease algorithm or the switching delay control algorithm, obey Assumption 4.
Bounded control input change is assumed.

Assumption 5. $E[|U_j[k+1] - U_j[k]|] < \delta_{Mj}^{(U)}$, $\forall j, k$, $\delta_{Mj}^{(U)} > 0$.

Let $\delta_M^{(U)} = \max_j(\delta_{Mj}^{(U)})$.

The following theorem summarizes the stability results for different types of switching control algorithms. It also gives the average queue backlog bound as a function of the control input bounds.
Theorem 2. Consider a system of queues that are described by (1) and apply the control (7). Then,

(I) if Assumptions 1, 2 and 3 hold, the queue system is strongly stable and the average of the expected cumulative queue backlog in each time slot $k > 0$ satisfies

$$\frac{1}{nk} \sum_{j=1}^{n} \sum_{i=0}^{k-1} E[Q_j[i]] \le \frac{U_M^{(2)}}{U_m} + Q_\varepsilon \left( \frac{U_M}{U_m} + 1 \right) + \frac{E[L[0]]}{U_m n k}; \tag{8}$$

(II) if Assumptions 1, 2, 4 and 5 hold, the queue system is strongly stable and the average of the expected cumulative queue backlog in the $k$th time slot satisfies

$$\frac{1}{nk} \sum_{j=1}^{n} \sum_{i=0}^{k-1} E[Q_j[i]] \le \frac{U_M^{(2)}}{U_m} + \left( 2 Q_\varepsilon + \delta_M^{(U)} P \right) \left( \frac{U_M}{U_m} + 1 \right) + \frac{E[L[0]]}{U_m n k}. \tag{9}$$
Proof: From the model (1) it results that

$$Q_j[k+1]^2 = \left(\max(0, Q_j[k] + U_j[k])\right)^2 \le (Q_j[k] + U_j[k])^2. \tag{10}$$

Consider the Lyapunov function candidate (3). By the inequality (10) it results that

$$L[k+1] - L[k] \le \frac{1}{2} \sum_{j=1}^{n} U_j[k]^2 + \sum_{j=1}^{n} Q_j[k] U_j[k]. \tag{11}$$

By taking the expectations, the Lyapunov drift reads as

$$E[L[k+1] - L[k]\,|\,Q[k]] \le \sum_{j=1}^{n} E\!\left[\left(\frac{1}{2} U_j[k]^2 + Q_j[k] U_j[k]\right) \Big|\, Q[k]\right]. \tag{12}$$

It yields

$$\sum_{i=0}^{k-1} E[L[i+1] - L[i]\,|\,Q[i]] \le \sum_{i=0}^{k-1} \sum_{j=1}^{n} \left( E\!\left[\frac{1}{2} U_j[i]^2 \,\Big|\, Q[i]\right] + E[Q_j[i] U_j[i]\,|\,Q[i]] \right). \tag{13}$$

Denote by $N_-[i]$ the set of queues that in the $i$th iteration satisfy $E[Q_j[i]] > Q_\varepsilon$. The set of the other queues in the system is denoted by $N_+[i]$. The cardinality of the set $N_-[i]$ is $n_-[i] = \mathrm{card}(N_-[i])$, and let $n_+[i] = \mathrm{card}(N_+[i])$. Note that $n_+[i] + n_-[i] = n$ $\forall i$.

The sum in the relation (13) is separated corresponding to the sets $N_-[i]$ and $N_+[i]$. Using the law of telescoping sums and by Assumption 2 it results that:

$$E[L[k+1]] - E[L[0]] \le nk U_M^{(2)} + \sum_{i=0}^{k-1} \sum_{j \in N_+[i]} E[Q_j[i] U_j[i]] + \sum_{i=0}^{k-1} \sum_{j \in N_-[i]} E[Q_j[i] U_j[i]]. \tag{14}$$

Here it was exploited that $E[E[X|Y]] = E[X]$. As $E[L[k+1]] \ge 0$ and by using Assumption 1, the relation above simplifies to:

$$-E[L[0]] \le nk U_M^{(2)} + Q_\varepsilon U_M \sum_{i=0}^{k-1} n_+[i] + \sum_{i=0}^{k-1} \sum_{j \in N_-[i]} E[Q_j[i] U_j[i]]. \tag{15}$$

(I) First, consider that Assumption 3 is satisfied. Then

$$-E[L[0]] \le nk U_M^{(2)} + Q_\varepsilon U_M \sum_{i=0}^{k-1} n_+[i] - \sum_{i=0}^{k-1} \sum_{j \in N_-[i]} U_m E[Q_j[i]], \tag{16}$$

$$\sum_{i=0}^{k-1} \sum_{j \in N_-[i]} E[Q_j[i]] \le \frac{kn U_M^{(2)}}{U_m} + Q_\varepsilon \frac{U_M}{U_m} \sum_{i=0}^{k-1} n_+[i] + \frac{E[L[0]]}{U_m}. \tag{17}$$

As $\sum_{i=0}^{k-1} \sum_{j \in N_+[i]} E[Q_j[i]] \le Q_\varepsilon \sum_{i=0}^{k-1} n_+[i]$, it results that

$$\sum_{i=0}^{k-1} \sum_{j=1}^{n} E[Q_j[i]] \le \frac{kn U_M^{(2)}}{U_m} + Q_\varepsilon \left( \frac{U_M}{U_m} + 1 \right) \sum_{i=0}^{k-1} n_+[i] + \frac{E[L[0]]}{U_m}. \tag{18}$$

Since $\sum_{i=0}^{k-1} n_+[i] \le kn$, the relation (8) directly results.

The strong stability yields by taking the limit $k \to \infty$ of the inequality (8):

$$\lim_{k \to \infty} \frac{1}{k} \sum_{i=0}^{k-1} \sum_{j=1}^{n} E[Q_j[i]] \le \frac{n U_M^{(2)}}{U_m} + n Q_\varepsilon \left( \frac{U_M}{U_m} + 1 \right). \tag{19}$$
(II) Second, consider that Assumption 4 is satisfied.

Let $N_\pm[i]$ be the set of queues that in the $i$th iteration satisfy both $E[Q_j[i]] > Q_\varepsilon$ and Assumption 3. The set of the queues in the system that satisfy $E[Q_j[i]] > Q_\varepsilon$ but do not meet Assumption 3 is denoted by $N_=[i]$. Let $n_\pm[i] = \mathrm{card}(N_\pm[i])$ and $n_=[i] = \mathrm{card}(N_=[i])$. Note that $n_\pm[i] + n_=[i] = n_-[i]$ $\forall i$.

With these notations the inequality (15) takes the form:

$$-E[L[0]] \le nk U_M^{(2)} + Q_\varepsilon U_M \sum_{i=0}^{k-1} n_+[i] - U_m \sum_{i=0}^{k-1} \sum_{j \in N_\pm[i]} E[Q_j[i]] + U_M \sum_{i=0}^{k-1} \sum_{j \in N_=[i]} E[Q_j[i]]. \tag{20}$$

If in the $l$th iteration $Q_j[i]$ overpasses $Q_\varepsilon$, from Assumptions 4 and 5 it yields: $E[Q_j[i]] \le \delta_M^{(U)} P_l + Q_\varepsilon$, $\forall j$, $l < i \le l + P_l$. It results that

$$\sum_{i=0}^{k-1} \sum_{j \in N_\pm[i]} E[Q_j[i]] \le \frac{kn U_M^{(2)}}{U_m} + Q_\varepsilon \frac{U_M}{U_m} \sum_{i=0}^{k-1} n_+[i] + \left(\delta_M^{(U)} P + Q_\varepsilon\right) \frac{U_M}{U_m} \sum_{i=0}^{k-1} n_=[i] + \frac{E[L[0]]}{U_m}. \tag{21}$$

As $\sum_{i=0}^{k-1} \sum_{j \in N_+[i]} E[Q_j[i]] \le Q_\varepsilon \sum_{i=0}^{k-1} n_+[i]$ and $\sum_{i=0}^{k-1} \sum_{j \in N_=[i]} E[Q_j[i]] \le (\delta_M^{(U)} P + Q_\varepsilon) \sum_{i=0}^{k-1} n_=[i]$, it results that

$$\sum_{i=0}^{k-1} \sum_{j=1}^{n} E[Q_j[i]] \le \frac{kn U_M^{(2)}}{U_m} + Q_\varepsilon \left( \frac{U_M}{U_m} + 1 \right) \sum_{i=0}^{k-1} n_+[i] + \left(\delta_M^{(U)} P + Q_\varepsilon\right) \left( \frac{U_M}{U_m} + 1 \right) \sum_{i=0}^{k-1} n_=[i] + \frac{E[L[0]]}{U_m}. \tag{22}$$

Since $\sum_{i=0}^{k-1} n_+[i] \le kn$ and $\sum_{i=0}^{k-1} n_=[i] \le kn$, the relation (9) directly results.

The strong stability also yields by taking the limit $k \to \infty$ of the inequality (9):

$$\lim_{k \to \infty} \frac{1}{k} \sum_{i=0}^{k-1} \sum_{j=1}^{n} E[Q_j[i]] \le \frac{n U_M^{(2)}}{U_m} + n \left(2 Q_\varepsilon + \delta_M^{(U)} P\right) \left( \frac{U_M}{U_m} + 1 \right). \tag{23}$$
If $Q_\varepsilon = 0$ and $P = 0$, the queue backlog bounds in Theorem 2 simplify to the bound obtained in Theorem 1. The bound obtained in (8) can be viewed as a generalization of (6) for the switching control case. The relation (9) further generalizes the computed backlog bound for the case of switching controllers that contain terms with integral character.

Theorem 2 treats the stability of queue systems and gives an estimate of the average queue backlog under restrictions on the control action when a critical queue backlog is overpassed. The condition formulated in Assumption 4 requires a known upper bound for the recovery time ($P$). In the next section it will be presented how $P$ can be computed for different switching traffic control algorithms.
4. Switching Control Design
201
4.1. Lexicographic Optimization Approach for Switching Control Design in Queue
202
Systems
203
Consider a communication network in which the transmission links are mod-
204
eled by the equation (1). The network is used by applications which initiate data
205
flows among the different nodes of the network.
206
In realistic communication scenarios, the queues in the communication links
207
have an upper bound of the backlog (QM ). A queue is called here overloaded if
208
E[Q[k]] ≥ QM .
209
A bottleneck link is a communication link that is shared by a high number
210
of data flows which could generate together the overload of the link. Assume a
10
211
number of N data flows that share the bottleneck link, i.e. Q[k + 1] = max(0, Q[k] +
N X
U` [k])
(24)
`=1
PN
where
213
arrived items from the individual flows and the served items. The applications
214
are considered to be able to gather information about the state of the used
215
bottleneck link, and are also able the regulate their own transmission rates.
216
Here A` [k] is considered to be controlled by the application which generates the
217
flow (source based control).
218
219
220
221
222
223
224
225
`=1
U` [k] =
PN
212
`=1
A` [k]−D[k], i.e. the difference between the sum of the
During the transmission rate computation, several objectives should be fulfilled: first, the overload of the links in the communication network should be avoided; second, the communication medium should assure the data transmission rate requested by the applications. When high transmission rates are expected, these two objectives could be contradictory.

It can also be affirmed that the first objective generally has higher priority, since a possible overload of the bottleneck link compromises all the flows which pass through it. The traffic control design can handle this by using the principle of lexicographic optimization [20].
of the lexicographic optimization [20].
230
Lexicographic optimization method with two objective functions: Consider
231
the objective functions F (1) (A) and F (2) (A) depending on a decision variable A.
232
The optimization problem can be formally written as: minimize (F (1) (A) F (2) (A))
233
subject to gi (A) ≤ 0 ∀i, where gi (A) ≤ 0 represents the ith inequality constraint.
234
According to the principle of the lexicographic ordering [21], the objective
235
functions are aligned in function of their priorities. Let F (1) the objective func-
236
tion with the higher priority. In the first step find such decision variables for
237
which F (1) (A) ≤ Fε , where Fε
238
(1)
jective F
(1)
(1)
is an acceptably low limit value for the ob-
. The second step can be formalized as: minimize F (2) (A) subject
11
239
(1)
to F (1) (A) ≤ Fε
and gi (A) ≤ 0 ∀i.
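The two-step procedure above can be sketched over a finite candidate set. In the minimal example below, the objective functions and the desired rate are hypothetical, chosen only to make the priority ordering visible:

```python
def lexicographic_min(candidates, f1, f2, f1_eps, constraints=()):
    """Two-stage lexicographic minimization over a finite candidate set:
    step 1 keeps candidates with F1(A) <= F1_eps that satisfy all
    constraints g_i(A) <= 0; step 2 picks the F2-minimizer among them."""
    feasible = [a for a in candidates
                if f1(a) <= f1_eps and all(g(a) <= 0 for g in constraints)]
    return min(feasible, key=f2) if feasible else None

# Illustrative objectives: F1 is a congestion surrogate that grows with
# the rate A; F2 penalizes the gap to a desired rate of 8 (hypothetical).
rates = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
f1 = lambda a: a - 5.0
f2 = lambda a: (a - 8.0) ** 2
best = lexicographic_min(rates, f1, f2, f1_eps=0.0)
```

Here the high-priority congestion objective rules out all rates above 5, and only then is the rate closest to the desired value chosen among what remains.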
Data traffic control design in the view of lexicographic optimization: Define two objective functions.

The first objective function ($F^{(1)}$) is formulated so as to assure that the overload is avoided. It can be formulated as a strictly increasing function of the queue backlog.

The second objective function ($F^{(2)}$) can be defined as a function of $A_\ell$ so as to assure the desired transfer rate (sent data packets over a sending period) for the data flow. The minimum of $F^{(2)}$ corresponds to the case when the real transfer rate is equal to the desired transfer rate.

In the view of lexicographic optimization, a limit value $F_\varepsilon^{(1)}$ has to be chosen such that $F^{(1)} < F_\varepsilon^{(1)}$ is equivalent to $E[Q[k]] < Q_\varepsilon$ $\forall k$, where $Q_\varepsilon < Q_M$. Then the lexicographic optimization-based queue control can be implemented as the following switching control strategy:

• If $E[Q[k]] \ge Q_\varepsilon$, compute $A_\ell$ so as to ensure the decrease of $F^{(1)}$.

• Otherwise, compute $A_\ell$ so as to ensure the decrease of $F^{(2)}$.

The constraints in the optimization problem can be the bounds of the control inputs in (24), see e.g. Assumptions 1 and 2.
4.2. Additive Increase/Multiplicative Decrease (AIMD) Algorithm

The AIMD algorithm is a widespread data traffic regulation method in wide-area computer networks, as it can simultaneously assure congestion avoidance and fairness [5, 22].

The control algorithms meant to avoid congestion are generally based on a congestion detection mechanism that gives an estimate of the cumulative backlogs of the queues on the route used by the data flow. The well-known congestion detection methods apply acknowledgment package loss, Round-Trip Time measurements or Explicit Congestion Notification [4]. The route is considered congested if the estimated queue backlog overpasses a threshold value ($Q_\varepsilon$). This value has to be chosen so as to avoid the overload of the queues on the route.

Consider a data flow which uses a communication channel that includes a bottleneck link. To apply the principle of lexicographic optimization for congestion avoidance, two objectives are formulated. The first objective, with higher priority, can be formulated as: assure that the bottleneck queue backlog is smaller than the prescribed threshold ($Q_\varepsilon$). The second, lower priority objective is to assure as high a data rate for the data flow as possible. The scalable AIMD policy [4] solves this problem by applying the following algorithm:

$$A_\ell[k] = \begin{cases} \mu A_\ell[k-1], & \text{if } E[Q[k]] \ge Q_\varepsilon, \\ A_\ell[k-1] + \alpha, & \text{otherwise.} \end{cases} \tag{25}$$

The controller parameters $\alpha > 0$ and $0 < \mu < 1$ can be chosen as proposed in [23] in the case of real communication networks. The first term in (25) assures the decrease of the sent packets to avoid the overload in critical situations. The second term assures the increase of the sent packets whenever it is possible.

In the following it will be shown that the switching control algorithm (25) assures a finite recovery time, i.e. the control (25) is in concordance with Assumption 4.
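The finite recovery time of (25) can be illustrated in a small simulation of the multiplicative-decrease phase (one aggregate flow; all numbers below are hypothetical): starting from a backlog above $Q_\varepsilon$, repeated rate halving drives the backlog back under the threshold in a bounded number of slots.

```python
def aimd_recovery(q0=2000.0, a0=300.0, d=100.0, mu=0.5, n_flows=1,
                  q_eps=500.0):
    """Simulate the bottleneck (24) under the decrease branch of (25):
    while the backlog exceeds Q_eps every flow scales its rate by mu.
    Returns the number of slots until the backlog drops below Q_eps."""
    q, a, slots = q0, a0, 0
    while q > q_eps:
        q = max(0.0, q + n_flows * a - d)  # backlog update, constant service d
        a = mu * a                          # multiplicative decrease of (25)
        slots += 1
    return slots

slots = aimd_recovery()
```

With these numbers the backlog first rises for a couple of slots (the shrinking rate still exceeds the service rate), then drains at roughly $D_m$ packets per slot until the threshold is reached.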
Proposition 1. Consider the bottleneck link model (24) in which each data flow applies the control (25). If $E[Q[0]] = Q_0 > Q_\varepsilon$, $\sum_{\ell=1}^{N} A_\ell[0] = A_0^{(N)}$ and $E[D[k]] \ge D_m > 0$, $\forall k$, then the expected value of the queue backlog becomes smaller than $Q_\varepsilon$ within

$$P = \frac{1}{D_m}\left( Q_0 - Q_\varepsilon + \frac{A_0^{(N)}}{1-\mu} \right) \tag{26}$$

time slots.

Proof: Consider the successive time slots $k = 0 \ldots P$ when $E[Q[k]] \ge Q_\varepsilon$. In this case the expected queue backlog satisfies:

$$E[Q[k+1]] = E[Q[k]] + \sum_{\ell=1}^{N} A_\ell[k] - E[D[k]]. \tag{27}$$

If $E[Q[k]] \ge Q_\varepsilon$, $A_\ell[k] = \mu A_\ell[k-1] = \mu^k A_\ell[0]$. It yields:

$$E[Q[P+1]] = Q_0 + \sum_{k=1}^{P} \sum_{\ell=1}^{N} \mu^k A_\ell[0] - \sum_{k=1}^{P} E[D[k]], \tag{28}$$

$$E[Q[P+1]] \le Q_0 + A_0^{(N)} \frac{1 - \mu^{P+1}}{1-\mu} - P D_m. \tag{29}$$

Since $0 < \mu < 1$,

$$E[Q[P+1]] \le Q_0 + \frac{A_0^{(N)}}{1-\mu} - P D_m. \tag{30}$$

Equating this upper bound with $Q_\varepsilon$ yields the equation (26).

4.3. Delay Control
Due to the delay dependency of the control performance in networked control systems, it is necessary to design data rate control algorithms which assure that the expected end-to-end delay ($E[T[k]]$) in the communication links of the networked control system remains under a predefined bound $T_\varepsilon > 0$, i.e. $E[T[k]] \le T_\varepsilon$.

It is considered that the data flows under consideration pass through a bottleneck link, the queuing delay of which is the dominant component of the end-to-end delay.

The queuing delay is closely related to the expected queue backlog. This relation is formulated in Little's law for queue systems [24]; more specific formulas, which also take into consideration the propagation delay ($T_m \ge 0$), are given for data communication networks [25]. Here a general relation between the queuing delay and the queue backlog in a bottleneck link is considered:

$$E[Q[k]] = a[k](E[T[k]] - T_m) \tag{31}$$

where $E[T[k]] \ge T_m$ and $0 < a_m \le a[k] \le a_M$ is finite.

During the control design, the delay measurement errors should also be taken into consideration. Here it is assumed that

$$|\hat{T}[k] - E[T[k]]| \le \Delta \tag{32}$$

where $\hat{T}[k]$ denotes the measured delay and $\Delta > 0$ is the finite upper bound of the measurement error.

It is assumed that the prescribed delay bound $T_\varepsilon$ is chosen such that $T_\varepsilon > \max(\Delta, T_m)$. From the relations (31) and (32) it yields that, if $\hat{T}[k] < T_\varepsilon$, then $E[Q[k]] \le Q_\varepsilon$ where

$$Q_\varepsilon = a_M(T_\varepsilon + \Delta - T_m). \tag{33}$$

If $T_\varepsilon$ is chosen such that $Q_\varepsilon$ is below $Q_M$, then the delay control implicitly assures the avoidance of the overload.
For the lexicographic approach to the delay control design, let the first objective function be the measured delay:

$$F^{(1)}[k] = \hat{T}[k]. \tag{34}$$

Many applications, such as video streaming, need to transfer a predefined number of data packets within a given number of time slots. If this is not possible, the service offered by the application can still be functional but of lower quality. Consider that in the bottleneck link each data flow is required to send a desired number of $A_\ell^{(d)}$ data packets in each time slot. Formulate the second objective function as:

$$F^{(2)}[k] = \frac{1}{2} \sum_{\ell=1}^{N} \left(A_\ell^{(d)} - A_\ell[k]\right)^2. \tag{35}$$

The following switching control algorithm solves the lexicographic optimization problem with the objective functions (34) and (35), see [26]:

$$A_\ell[k] = \begin{cases} A_\ell[k-1] - \gamma \hat{T}[k], & \text{if } \hat{T}[k] > T_\varepsilon, \\ A_\ell[k-1] + \alpha \left(A_\ell^{(d)} - A_\ell[k-1]\right), & \text{otherwise,} \end{cases} \tag{36}$$

where the decrement and increment gains are $\gamma > 0$ and $0 < \alpha < 1$, respectively. The first term of (36) decreases the number of sent data packets; this implicitly yields the decrease of the queue backlog and of the queuing delay, respectively. The second term assures the decrease of the objective function (35).
The control (36) assures a finite recovery time, as presented in the proposition below.

Proposition 2. Consider the bottleneck link model (24) in which each data flow applies the control (36). If $E[Q[0]] = Q_0 > Q_\varepsilon$, $\sum_{\ell=1}^{N} A_\ell[0] = A_0^{(N)}$ and $E[D[k]] \ge D_m > 0$ $\forall k$, then the expected value of the queue backlog becomes smaller than $Q_\varepsilon$, defined in (33), within $P$ time slots, where $P$ is the positive solution of the equation:

$$\frac{\gamma N (T_\varepsilon - \Delta)}{2} P^2 - \left( A_0^{(N)} - D_m + \frac{\gamma N (T_\varepsilon - \Delta)}{2} \right) P + Q_\varepsilon - Q_0 = 0. \tag{37}$$

Proof: Consider the successive time slots $k = 0 \ldots P$ when $\hat{T}[k] \ge T_\varepsilon$. The control input satisfies:

$$A_\ell[k] = A_\ell[0] - \gamma \sum_{i=1}^{k} \hat{T}[i], \tag{38}$$

$$A_\ell[k] \le A_\ell[0] - \gamma k (T_\varepsilon - \Delta). \tag{39}$$

If $\hat{T}[k] \ge T_\varepsilon$, the expected queue backlog satisfies:

$$E[Q[P]] = Q_0 + \sum_{k=1}^{P-1} \left( \sum_{\ell=1}^{N} A_\ell[k] - E[D[k]] \right), \tag{40}$$

$$E[Q[P]] \le Q_0 + P\left(A_0^{(N)} - D_m\right) - \gamma N (T_\varepsilon - \Delta) \sum_{k=1}^{P-1} k, \tag{41}$$

$$E[Q[P]] \le Q_0 + P \left( A_0^{(N)} - D_m - \gamma N (T_\varepsilon - \Delta) \frac{P-1}{2} \right). \tag{42}$$

Equate the computed upper bound with $Q_\varepsilon$, given by (33):

$$Q_0 + P \left( A_0^{(N)} - D_m - \gamma N (T_\varepsilon - \Delta) \frac{P-1}{2} \right) = Q_\varepsilon. \tag{43}$$

The equation above corresponds to (37).

Remark: Since $\frac{\gamma N (T_\varepsilon - \Delta)}{2} > 0$ and $Q_\varepsilon - Q_0 < 0$, the quadratic equation (37) always admits a real positive solution.

The benefits of the presented delay control algorithm in time-critical networked control applications are shown in Section 5.
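The positive root of the quadratic (37) follows from the standard quadratic formula; the parameter values in the example below are hypothetical, chosen only for illustration:

```python
import math

def recovery_time(gamma, N, T_eps, Delta, A0, Dm, Q_eps, Q0):
    """Positive root of the quadratic (37) for the recovery time P.
    The leading coefficient is positive and the constant term negative,
    so the '+' root of the quadratic formula is always real and positive."""
    c2 = gamma * N * (T_eps - Delta) / 2.0       # coefficient of P^2
    c1 = -(A0 - Dm + c2)                          # coefficient of P
    c0 = Q_eps - Q0                               # constant term (< 0)
    disc = c1 * c1 - 4.0 * c2 * c0
    return (-c1 + math.sqrt(disc)) / (2.0 * c2)

P_example = recovery_time(gamma=100.0, N=1, T_eps=2.0, Delta=0.0,
                          A0=50.0, Dm=10.0, Q_eps=100.0, Q0=500.0)
```

Because $Q_\varepsilon - Q_0 < 0$, the discriminant exceeds $c_1^2$, which is exactly why the remark above guarantees a real positive solution.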
4.4. Implementation Issues

The control algorithms (25) and (36) can be implemented using timer event functions that are invoked repeatedly with a predefined interval (period). The implementation is exemplified through the algorithm (36). Consider that an application intends to use a bottleneck link for data transmission such that the communication delay stays around a given threshold $T_\varepsilon$. If possible, the source application intends to send a number of $A_\ell^{(d)}$ packages within the time interval $T_s$.

A possible implementation of the control algorithm (36) for the $\ell$th application, which computes the number of data packets ($A_\ell$) to be sent in the current sending period, reads as follows:

Delay Control:
    get $A_\ell^{(d)}$, $\gamma$, $\alpha$, $T_\varepsilon$
    $A_\ell = 0$
    repeat {with interval $T_s$}
        get $\hat{T}$
        if ($\hat{T} > T_\varepsilon$)
            $A_\ell = \max(0,\, A_\ell - \gamma \hat{T})$
        else
            $A_\ell = \max(0,\, A_\ell + \alpha (A_\ell^{(d)} - A_\ell))$
        end
    end
end
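A direct Python transcription of the timer loop above is a few lines; the delay measurement is stubbed out as a callable (in a real deployment, "get $\hat{T}$" would query the network):

```python
def delay_controller(measure_delay, A_des, gamma, alpha, T_eps, periods):
    """Source-side delay control (36): when the measured delay exceeds
    T_eps, decrease the send rate proportionally to the delay; otherwise
    move the rate toward the desired value A_des."""
    A = 0.0
    rates = []
    for _ in range(periods):
        T_hat = measure_delay()                        # stand-in for "get T_hat"
        if T_hat > T_eps:
            A = max(0.0, A - gamma * T_hat)            # decrement term of (36)
        else:
            A = max(0.0, A + alpha * (A_des - A))      # increment term of (36)
        rates.append(A)
    return rates

# With a constant sub-threshold delay the rate converges toward A_des:
rates = delay_controller(lambda: 0.005, A_des=100.0, gamma=500.0,
                         alpha=0.1, T_eps=0.01, periods=50)
```

The `max(0, ...)` clamp mirrors the pseudocode: the number of packets to send in a period can never be negative.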
4.5. Simulation of the Delay Control Algorithm

Simulation experiments were performed to analyze the transient performance of the delay control algorithm (36). Consider the bottleneck link model given in the equation (24). Communication packets are sent through this link. The link can serve $D = 10000$ packets in a discrete-time slot. It was considered that the delay in the bottleneck link is given by the relation (31) with the parameters $T_m = 1\,\mathrm{ms}$ and $a = 5 \cdot 10^5$.

Consider an application ($\ell = 1$) which uses the bottleneck link and applies the delay control algorithm (36). The control parameters were chosen as $\gamma = 500$, $\alpha = 0.01$ and $A_1^{(d)} = D$.

It is assumed that, besides this communication channel, other channels also use the same bottleneck link, and together they send 5000 packets in each discrete time slot.

Figure 1 shows the behavior of the queue state $Q$, the delay value $T$ and the computed control signal $A_1$ during the transient state of the control for different $T_\varepsilon$ values. It can be seen that, if $T_\varepsilon$ decreases, the attenuation of the transient oscillations also decreases. This is because in the switching control algorithm (36) the term that ensures the delay decrease is proportional to $T$. This effect can be compensated by increasing the controller parameter $\gamma$ for larger $T_\varepsilon$ values.
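This simulation setup can be reproduced with a short closed-loop script. The sketch below uses the stated parameters and instantiates (31) with a constant $a$, i.e. $T = T_m + Q/a$ (a simplifying assumption for the example):

```python
def simulate_link(T_eps, steps=2000):
    """Closed-loop sketch of the Section 4.5 setup: bottleneck (24)
    serving D packets per slot, delay model (31) with T = Tm + Q/a,
    cross traffic of 5000 packets/slot, and one flow running the
    delay control (36)."""
    D, Tm, a = 10_000.0, 1e-3, 5e5
    gamma, alpha, A_des, cross = 500.0, 0.01, 10_000.0, 5_000.0
    Q, A = 0.0, 0.0
    for _ in range(steps):
        T = Tm + Q / a                       # measured delay, relation (31)
        if T > T_eps:
            A = max(0.0, A - gamma * T)      # decrement term of (36)
        else:
            A = max(0.0, A + alpha * (A_des - A))
        Q = max(0.0, Q + A + cross - D)      # bottleneck update (24)
    return Q, Tm + Q / a

Q_end, T_end = simulate_link(T_eps=0.024)
```

Running this for the three $T_\varepsilon$ values of Figure 1 reproduces the qualitative behavior described above: the backlog and delay oscillate during the transient before the switching controller settles them around the threshold.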
5. Telerobotic Application

5.1. Video-supported Teleoperation Systems

Networked telerobotic systems are a special class of networked control systems that are designed to assure the remote control of a distant robot by a human operator [27]. Figure 2 presents the end nodes and the communication channels in a common telerobotic system. The two end nodes are represented by the master side computer, used by the human operator-controlled master robot, and by the slave side computer, used by the distant robot. Through the PCh channel, the reference position is sent continuously from the master robot to the remote mobile slave robot. In the case of bilateral teleoperation [28], the environmental forces are sent from the slave to the master through the FCh channel to ensure haptic feedback for the human operator. The VCh channel is used to transmit video data from the slave side to the master side.

The time-critical data flows are represented by PCh and FCh, as the delays in these communication channels are responsible for control performance degradation [27]. The cumulative transfer rate (sent data packets over a sending period) necessary for the teleoperation system is $R = R_V + R_P + R_F$, where
6000
A1 (packets)
5000 4000 3000 2000 1000 0
0
500
1000 1500 Discrete time (s/sending period)
2000
0
500
1000 1500 Discrete time (s/sending period)
2000
0
500
1000 1500 Discrete time (s/sending period)
2000
4
4
x 10
Q (packets)
3
2
1
0
0.1
T (s)
0.08 0.06 0.04 0.02 0
Figure 1: Transient behavior of the delay control for different Tε values (Tε = 24ms - red; Tε = 12ms - blue; Tε = 6ms - green)
408
RV , RP and RF denote the transfer rates in VCh, PCh and FCh respectively.
409
The video channel (VCh) needs a substantially greater transfer rate than the
410
PCh and FCh.
In wireless communication networks, the ensured service rates for the end nodes are determined based on the radio signal strength observed between the end nodes and the access point. The rate is controlled by the access point's dynamic rate scaling algorithm so as to assure a reliable access point - end node communication link [29]. The service rate regulation is application-independent.

Figure 2: Video assisted teleoperation of a mobile manipulator (end nodes connected through a WLAN carrying the VCh, PCh and FCh channels)

The rate can take predefined values as a function of the measured wireless signal strength. If the mobile robotic slave (the end node) moves away from the access point, its service rate is decremented by this algorithm. The wireless channel represents the bottleneck link in most mobile telerobotic systems.
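The application-independent rate scaling described above can be pictured as a threshold lookup (a simplified sketch in the spirit of [29]; the RSSI thresholds and the rate set below are illustrative values, not the access point's actual tables):

```python
# Illustrative (RSSI threshold in dBm, service rate in Mbit/s) pairs, ordered
# from strongest to weakest signal; real 802.11 rate-adaptation tables differ.
RATE_TABLE = [(-50, 130.0), (-60, 78.0), (-70, 26.0), (-80, 6.5)]

def service_rate(rssi_dbm: float) -> float:
    """Return the predefined service rate for the measured signal strength."""
    for threshold, rate in RATE_TABLE:
        if rssi_dbm >= threshold:
            return rate
    return 1.0  # weakest link still gets a minimal rate
```

As the slave robot moves away from the access point the measured RSSI drops, and the lookup steps the service rate down, independently of what the teleoperation application is sending.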
The communication delays in the PCh and FCh channels influence the control performance of teleoperation and can even compromise the stability of bilateral teleoperation systems [28]. The delay can be controlled by regulating the transfer rates in the bottleneck link. The transport layer endpoint multiplexing algorithm ensures that the flow of each end node has the same chance of transmission [23], independent of its transfer rate. In the case of teleoperation systems, by decreasing the amount of sent video data, the packets in the PCh and FCh can get through the wireless access point faster, and lower communication delay in these channels can be obtained. Hence, since the delay in the FCh and PCh can be adjusted by modifying the transfer rate in VCh, RV can be applied as the control input in the data traffic control algorithm of the video-supported telerobotic system. The transfer rate in the VCh can be regulated by several approaches: by changing the sending rate of the video data, by modifying the size of the video frames, or by varying the video frame quality (e.g. by applying compression to the video frames at different levels).
5.2. Real-time Data Transfer Rate Control Experiment

Experimental measurements were performed to show the applicability of the delay control approach presented in subsection 4.3 in wireless teleoperation systems. On the master side, two Sensable Phantom Omni haptic devices were used as master robots. A KUKA youBot mobile manipulator was used as the slave robot. The first haptic device was applied to control the mobile platform; the second one assured the remote control of the manipulator. A video camera was placed on the mobile platform to follow the motion and the actions of the slave robot. The master and the slave sides were connected using a wireless TP-LINK TL-WR941ND access point.

The sending periods in the time-critical channels (PCh and FCh) were chosen as Ts = 5 ms. The frames from the slave camera were captured at 20 Hz. Each video frame has 640 × 480 pixels with 24 bit color resolution. JPEG compression was applied to reduce the amount of video data that has to be sent. The JPEG compression rate u is between 0% and 100%. Before sending, the compressed video data was decomposed into 8 kbyte packets. The received video frame packets were reconstructed at the master side. Figures 3 and 4 present the quality of a displayed video frame at the master side for u = 10% and u = 90%, respectively.
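The packetization step can be illustrated with a short sketch (an illustration only; the paper does not give the implementation, and the constant name and helper names here are assumptions):

```python
PACKET_SIZE = 8 * 1024  # 8 kbyte video packets, as in the experiment

def packetize(frame_bytes: bytes) -> list:
    """Split one JPEG-compressed video frame into fixed-size packets."""
    return [frame_bytes[i:i + PACKET_SIZE]
            for i in range(0, len(frame_bytes), PACKET_SIZE)]

def reconstruct(packets: list) -> bytes:
    """Reassemble the frame at the master side (in-order, lossless case)."""
    return b"".join(packets)

frame = bytes(20000)               # stand-in for a compressed frame
pkts = packetize(frame)
assert len(pkts) == 3              # 8192 + 8192 + 3616 bytes
assert reconstruct(pkts) == frame
```

A stronger compression (lower u) shrinks `frame_bytes` and therefore the number of packets sent per frame, which is exactly the lever the traffic controller uses.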
The data traffic controller which regulates the video transfer rate was implemented at the slave side, and the average delay was measured in the FCh. The delay was calculated using a clock synchronization technique between the master and slave side computers, see e.g. [30].

For the validation of the performed rate control measurements, the data traffic in the WLAN was observed using an Aircap NX wireless data packet capturing device. It served to monitor the data traffic with measurements that are independent of the rate control application. The wireless signal strength at the slave robot and the data transfer rates from the slave were measured using this equipment.
Two experiments were performed. In both cases, the mobile robotic agent was teleoperated by the human operator on the same track. The robot's velocity was about 0.4 m/s. The motion started from the vicinity of the wireless access point, and the robot moved about 45 m away from the access point in an indoor environment. After about 50 seconds the robot left the room in which the access point was located. In the second part of the motion (after about 110 seconds) the robot returned to the wireless access point on the same path.

In the first experiment, the video frames from the slave were sent to the master with the constant u = 90% compression rate. Figure 5 shows that in the first part of the robot's motion the wireless signal strength decreases, and during its return the detected signal strength increases (see Signal Strength). This figure also shows that the available service rate of the mobile slave was adjusted automatically by the access point according to the detected signal strength, independent of the teleoperation application (see Service Rate).

The effect of the decreasing wireless signal strength on the delay can clearly be observed in Figure 6. Here TP denotes the delay from the master to the slave and TF denotes the delay from the slave to the master in the time-critical data flows (PCh and FCh in Figure 2). As the experimental measurements show, the mean of the delay increases up to 300 ms.
During the second experiment, the delay control algorithm (36) was implemented. Under the constant sending period assumption, the video transfer rate can be considered proportional to the number of sent video packets. If the JPEG compression rate decreases, the number of video packets that have to be sent also decreases. Hence u was used here as the control action. In the algorithm (36) the following parameters were considered: γ = 100, α = 0.1, u^(d) = 90%, and the prescribed upper bound for the communication delay was set to Tε = 12 ms. Advanced bilateral control algorithms, such as the time-domain passivity-based controllers, assure reliable teleoperation if the communication delay values are in the order of tens of milliseconds [31].

The behavior of the data flows with the proposed traffic control algorithm is shown in Figure 7. When the robot departs from the access point and the signal strength decreases, the delay tends to increase. As this figure shows, the traffic controller reacts to the delay change and adjusts the compression rate u such that the delay is kept near the threshold Tε.
Figure 3: Displayed video frame quality - u=10%
Figure 4: Displayed video frame quality - u=90%
The effect of the data traffic regulation on the communication performance in the presence of varying wireless signal strength is presented in Figure 8. The control algorithm assures that the average communication delay values TP and TF in the delay-critical data flows (PCh and FCh) remain around the threshold value (Tε = 12 ms). At the same time, the controller ensures the best video quality corresponding to these delay values and the momentary wireless radio signal strength.
6. Conclusions

Switching control algorithms are a reasonable choice for many data traffic regulation problems. The switching allows the application of different control strategies in critical and regular cases, respectively. In this study, an analysis method for a general class of switching traffic controllers was introduced that is applicable to communication networks which can be modeled by interconnected queues. The method concludes on stability and average queue backlog bounds as a function of the controller parameters. The proposed analysis method can handle switching control algorithms that include terms with integral character. In this case, stability can be assured if the controller assures a bounded recovery time. It was also presented how the recovery time can be computed for different types of switching control algorithms.

As the control objectives are often contradictory in data traffic regulation problems, the principle of lexicographic optimization can be applied to switching control design. This optimization method can be applied to approach different types of data traffic control design problems in communication networks. As an example, a delay control algorithm design was presented for networked control systems. Experimental measurements performed in a wireless telerobotic system show the applicability and effectiveness of the proposed switching data traffic control method.

In future works, it will be investigated how further data flow regulation techniques, such as Active Queue Management, can be treated in the proposed analysis and design approach.

Acknowledgments

The author acknowledges Zoltán Szántó, Piroska Haller and Tamás Vajda, Sapientia Hungarian University of Transylvania, for their contribution to the experimental measurements. This work was supported in part by a grant of the Romanian National Authority for Scientific Research CNCS - UEFISCDI, project number PN-II-RU-TE-2011-3-0005.
References

[1] Z. Liu, W. Yan, H. Li, M. Small, Cooperative output regulation problem of multi-agent systems with stochastic packet dropout and time-varying communication delay, Journal of the Franklin Institute 355 (17) (2018) 8664–8682.

[2] W. Qi, Q. Song, X. Kong, L. Guo, A traffic-differentiated routing algorithm in flying ad hoc sensor networks with SDN cluster controllers, Journal of the Franklin Institute 356 (2) (2019) 766–790.

[3] E. Garcia, P. J. Antsaklis, L. A. Montestruque, Model-Based Control of Networked Systems, Birkhäuser, 2014.

[4] C. N. Houmkozlis, G. A. Rovithakis, End-to-End Adaptive Congestion Control in TCP/IP Networks, CRC Press - Taylor & Francis Group, 2012.

[5] D. Chiu, R. Jain, Analysis of the increase and decrease algorithms for congestion avoidance in computer networks, Computer Networks and ISDN Systems 17 (1989) 1–14.

[6] R. H. Middleton, T. Wigren, K. Lau, R. A. Delgado, Data flow delay equalization for feedback control applications using 5G wireless dual connectivity, in: IEEE 85th Vehicular Technology Conference, 2017, pp. 1–7.

[7] G. Giambene, Queuing Theory and Telecommunications: Networks and Applications, Springer, 2005.

[8] M. J. Neely, Stability and probability 1 convergence for queueing networks via Lyapunov optimization, Journal of Applied Mathematics (2012), 35 pages.

[9] N. E. Fezazi, F. E. Haoussi, E. H. Tissir, T. Alvarez, Design of robust H∞ controllers for congestion control in data networks, Journal of the Franklin Institute 354 (17) (2017) 7828–7845.

[10] T. Wigren, Robust L2 stable networked control of wireless packet queues in delayed internet connections, IEEE Transactions on Control Systems Technology 24 (2) (2016) 502–513.

[11] Y. Huang, S. Mao, S. Midkiff, A control-theoretic approach to rate control for streaming videos, IEEE Transactions on Multimedia 11 (6) (2009) 1072–1081.

[12] S. Manfredi, Decentralized queue balancing and differentiated service scheme based on cooperative control concept, IEEE Transactions on Industrial Informatics 10 (1) (2014) 586–593.

[13] Z. Wang, F. Paganini, Global stability with time-delay in network congestion control, in: Proceedings of the 41st IEEE Conference on Decision and Control, Vol. 4, 2002, pp. 3632–3637.

[14] G. A. Shah, V. C. Gungor, O. B. Akan, A cross-layer QoS-aware communication framework in cognitive radio sensor networks for smart grid applications, IEEE Transactions on Industrial Informatics 9 (3) (2013) 1477–1485.

[15] E. Leonardi, M. Mellia, F. Neri, M. A. Marsan, Bounds on average delays and queue size averages and variances in input-queued cell-based switches, in: Proceedings IEEE INFOCOM Conference on Computer Communications, Vol. 2, 2001.

[16] P. Glasserman, K. Sigman, D. D. Yao, Stochastic Networks, Springer, Lecture Notes in Statistics, 1996.

[17] M. J. Fischer, D. M. Bevilacqua Masi, Analyzing internet packet traces using Lindley's recursion, in: Proc. of IEEE Winter Simulation Conference, 2006, pp. 2195–2201.

[18] M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems, Morgan & Claypool, 2010.

[19] C.-S. Chang, Stability, queue length, and delay of deterministic and stochastic queueing networks, IEEE Transactions on Automatic Control 39 (5) (1994) 913–931.

[20] J. Ros, W. Tsai, A lexicographic optimization framework to the flow control problem, IEEE Transactions on Information Theory 56 (6) (2010) 2875–2886.

[21] R. T. Marler, J. S. Arora, Survey of multi-objective optimization methods for engineering, Structural and Multidisciplinary Optimization 26 (6) (2004) 369–395.

[22] R. H. Middleton, C. M. Kellett, R. N. Shorten, Fairness and convergence results for additive-increase multiplicative-decrease multiple-bottleneck networks, in: Proceedings of the 45th IEEE Conference on Decision and Control, 2006, pp. 1864–1869.

[23] A. Tanenbaum, D. J. Wetherall, Computer Networks, Pearson, 2010.

[24] N. Gautam, Analysis of Queues - Methods and Applications, CRC Press - Taylor & Francis Group, 2012.

[25] S. Pourmohammad, A. Fekih, D. Perkins, Stable queue management in communication networks, Control Engineering Practice 37 (2015) 67–79.

[26] L. Márton, P. Haller, T. Vajda, Z. Szántó, H. Sándor, T. Szabó, Data transfer regulator for wireless teleoperation, Transactions of the Institute of Measurement and Control 38 (2016) 141–149.

[27] M. Ferre, M. Buss, R. Aracil, C. Melchiorri, C. Balaguer (Eds.), Advances in Telerobotics, Springer, 2007.

[28] P. F. Hokayem, M. W. Spong, Bilateral teleoperation: An historical survey, Automatica 42 (2006) 2035–2057.

[29] S. Pal, S. R. Kundu, K. Basu, S. K. Das, IEEE 802.11 rate control algorithms: Experimentation and performance evaluation in infrastructure mode, in: Proc. of 7th Passive and Active Measurement Conference, 2006.

[30] V. Paxson, On calibrating measurements of packet transit times, SIGMETRICS Perform. Eval. Rev. 26 (1) (1998) 11–21.

[31] L. Márton, P. Haller, T. Vajda, Z. Szántó, T. Haidegger, P. Galambos, J. Kövecses, Internet-based bilateral teleoperation using a revised time-domain passivity controller, Acta Polytechnica Hungarica 14 (8) (2017) 27–45.
Figure 5: Signal strength and service rate during motion - Traffic controller not active (WLAN monitor: Service Rate (Mbs) and Signal Strength (dB) versus Time (s))

Figure 6: Delay during motion - Traffic controller not active (TP (ms) and TF (ms) versus Time (s))

Figure 7: Traffic controller behavior during motion (u (%), Service Rate (Mbs) and Signal Strength (dB) versus Time (s))

Figure 8: Delay during motion - Traffic controller active (TP (ms) and TF (ms) versus Time (s), with the threshold Tε marked)