Copyright © IFAC Low Cost Automation, Buenos Aires, Argentina, 1995
LQG OPTIMAL CONTROL SYSTEM DESIGN UNDER DELAYED PERTURBATIONS

Hansheng Wu and Koichi Mizukami

Faculty of Integrated Arts and Sciences, Hiroshima University, 1-7-1 Kagamiyama, Higashi-Hiroshima City 739, Japan
Abstract. This paper is mainly concerned with the problem of linear quadratic Gaussian (LQG) optimal control system design in the presence of uncertain perturbations and time-delay. For such a problem, some robust stability conditions are derived by making use of the conventional LQG theory and some analytical methods, which results in an approach to LQG optimal control design for linear stochastic dynamical systems with time-delay.

Keywords. Linear stochastic system, LQG optimal control, time delay, robust stability, sufficient condition, state estimation.
1. INTRODUCTION

It is well known that the linear quadratic Gaussian (LQG) optimal control problem combines the available theory of optimal quadratic control with that of optimal estimation to provide a unified design procedure (Kwakernaak and Sivan, 1972). Generally, the standard LQG optimal control design for a linear stochastic system requires an exact system model of the real plant and an accurate description of the statistical behaviour of the noise signals. In the past decades, the theory of standard LQG optimal control design has been successfully applied to a variety of control systems.

It is worth pointing out that in many practical control problems the system model of a real plant may include some degree of uncertainty, and it may be impossible to obtain a complete description of the statistical behaviour of the noise signals due to modelling errors, measurement errors, linearization approximations, and so on. Therefore, in recent years, several schemes have been proposed concerning the form of uncertain disturbances or noise. In Johnson (1971), for example, disturbances which are neither known beforehand nor accessible for measurement, but which do have a known set of possible waveforms, were modelled by a state-space description that characterizes these possible waveform modes. In Johnson (1984), the disturbance-accommodating control theory developed in Johnson (1971) was extended to both noise-type and waveform-type disturbances. In Looze et al. (1983), the minimax approach was used to treat linear stochastic systems with noise uncertainty, and a minimax control law and a minimax state estimator were proposed for LQG control systems with noise uncertainty. In particular, in Chen and Dong (1989) a linear stochastic system with both uncertain parametric perturbations and uncertain noise covariances was considered, and a sufficient condition for robust stability of the system was derived by making use of minimax theory and the Bellman-Gronwall inequality.

In this paper, the control problem of linear stochastic systems including delayed perturbations is considered. It is well known that in many practical problems, especially in various process control problems, the systems to be controlled include time-delay, and its existence is frequently a source of instability. Therefore, it is necessary to discuss the robust stability problem of stochastic systems with delay. For such a problem, some robust stability conditions are derived by using the conventional LQG theory and some analytical methods, which results in an approach to LQG optimal control design for linear stochastic systems with time-delay.

The paper is organized as follows. In Section 2, a conventional LQG optimal control problem is described, and the problem to be tackled is stated. In Section 3, the main results are given. Finally, the paper is concluded in Section 4 with a brief discussion of the results.

Throughout this paper, the following notations will be used:

$\lambda_i(A)$ : $i$th eigenvalue of matrix $A$
$\|A\|$ : maximum singular value of matrix $A$, i.e. $\|A\| := \max_i \sqrt{\lambda_i(A^T A)}$
$\|x\|$ : $\|x\| := \sqrt{E[x^T x]}$, where $x$ is a random variable
$\mathrm{cov}[\cdot,\cdot]$ : covariance function
$E[\cdot]$ : expected value operator
$\mathrm{tr}[\cdot]$ : trace operator
$\delta(\cdot)$ : Dirac delta function
$\mathrm{Re}[\cdot]$ : real part of a complex number

2. PROBLEM FORMULATION

2.1 Problem Formulation

Consider a linear stochastic dynamical system with time-delay described by the following stochastic differential equations:

$$ \frac{dx(t)}{dt} = Ax(t) + Bu(t) + \Delta A(t)\,x(t-h(t)) + \xi(t) \qquad (1a) $$

$$ y(t) = Cx(t) + \theta(t) \qquad (1b) $$

where $t \in R$ is the "time"; $x(t) \in R^n$ is the state vector; $u(t) \in R^m$ is the control vector; $y(t) \in R^r$ is the observation vector; $\Delta A(\cdot)$ is an uncertain matrix; and $h(t)$ is the time-varying delay.

In addition, suppose that the noise processes $\{\xi(t)\}$ and $\{\theta(t)\}$ are stationary white Gaussian processes with the following properties:

$$ E[\xi(t)] = 0, \qquad E[\theta(t)] = 0 \qquad (2a) $$

$$ \mathrm{cov}[\xi(t), \xi(\tau)] = \Xi\,\delta(t-\tau), \qquad \Xi \ge 0 \qquad (2b) $$

$$ \mathrm{cov}[\theta(t), \theta(\tau)] = \Theta\,\delta(t-\tau), \qquad \Theta > 0 \qquad (2c) $$

Meanwhile, $\{\xi(t)\}$ and $\{\theta(t)\}$ are independent of each other. Without loss of generality, the initial condition of system (1) is given by

$$ x(t) = \psi(t), \qquad t \in [t_0-h,\, t_0] \qquad (3) $$

Similar to the conventional LQG optimal control problem, consider the performance index $J$ to be minimized, described by

$$ J = \lim_{T\to\infty} \frac{1}{T}\, E\Big\{ \int_0^T \big[ x^T(t)Qx(t) + \rho\, u^T(t)Ru(t) \big]\, dt \Big\} \qquad (4) $$

where $R = R^T > 0$, $Q = Q^T \ge 0$, and $\rho$ is a positive constant. It is well known from LQG optimal control theory that the optimal admissible control $u^*(t)$, which minimizes the performance index $J$ in (4) subject to dynamical system (1) without delayed perturbations (i.e. $\Delta A(\cdot) = 0$), is given by

$$ u^*(t) = -G\hat{x}(t) \qquad (5a) $$

where

$$ G = \frac{1}{\rho} R^{-1} B^T P \qquad (5b) $$

and $P$ is the solution of the Riccati equation

$$ A^T P + P A - \frac{1}{\rho} P B R^{-1} B^T P = -Q \qquad (5c) $$

and $\hat{x}(t)$ is the output of the Kalman-Bucy filter given by

$$ \frac{d\hat{x}(t)}{dt} = A\hat{x}(t) + Bu(t) + K\big( y(t) - C\hat{x}(t) \big) \qquad (6a) $$

where

$$ K = \Sigma C^T \Theta^{-1} \qquad (6b) $$

and where $\Sigma$ is the solution of the steady-state Riccati equation

$$ A\Sigma + \Sigma A^T - \Sigma C^T \Theta^{-1} C \Sigma + \Xi = 0 \qquad (6d) $$

Now, the main problem to be tackled in this paper is to find some conditions on controller (5) that ensure the stability of dynamical system (1) with delayed perturbations.

2.2 Assumptions

In this subsection, the following standard assumptions are introduced for system (1).

Assumption 2.1. The pairs $(A, \sqrt{Q})$ and $(A, \sqrt{\Xi})$ are respectively observable and controllable.

Assumption 2.2. The uncertain $\Delta A(\cdot): R \to R^{n \times n}$ is bounded in a non-empty set described as

$$ \Delta A(t) \in \{\, \Delta A(t) : \|\Delta A(t)\| \le \beta \,\} \qquad (7) $$

where $\beta$ is a positive constant obtained from experience or experiments.

In the rest of this section, the Bellman-Gronwall lemma, which will be used in the next section, is stated directly.

Lemma 2.1. Let $v(t)$, $f(t)$, and $c(t)$ be real continuous functions of $t$, and let $f(t) > 0$. If

$$ v(t) \le c(t) + \int_{t_0}^{t} f(\tau)v(\tau)\, d\tau $$

then

$$ v(t) \le c(t) + \int_{t_0}^{t} f(s) \exp\Big\{ \int_{s}^{t} f(\tau)\, d\tau \Big\}\, c(s)\, ds $$

In the light of the above lemma, the following two results can be obtained.

Corollary 2.1. If $f(t) = f = \mathrm{constant}$, then

$$ v(t) \le c(t) + f \int_{t_0}^{t} v(\tau)\, d\tau $$

implies

$$ v(t) \le c(t) + f \int_{t_0}^{t} \exp\{ f(t-s) \}\, c(s)\, ds $$

Corollary 2.2. If both $f(t)$ and $c(t)$ are constants, i.e. $f(t) = f$ and $c(t) = c$, then

$$ v(t) \le c + f \int_{t_0}^{t} v(\tau)\, d\tau $$

implies

$$ v(t) \le c \exp\{ f(t-t_0) \} $$
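Before proceeding, the conventional LQG design reviewed above can be made concrete: in the scalar case, both Riccati equations (5c) and (6d) reduce to quadratics solvable in closed form, giving the gains (5b) and (6b) directly. The following is a minimal sketch; all numerical values are hypothetical and not taken from the paper.

```python
import math

# Scalar plant (hypothetical values): dx/dt = a*x + b*u + xi,  y = c*x + theta
a, b, c = 1.0, 1.0, 1.0          # open-loop unstable since a > 0
q, r, rho = 1.0, 1.0, 1.0        # weights Q, R and the scalar rho in (4)
Xi, Theta = 0.5, 0.1             # noise intensities from (2b), (2c)

# Control Riccati (5c), scalar form: 2*a*P - (b^2/(rho*r))*P^2 = -q.
# Rearranged as (b^2/(rho*r))*P^2 - 2*a*P - q = 0; take the positive root.
alpha = b * b / (rho * r)
P = (2 * a + math.sqrt(4 * a * a + 4 * alpha * q)) / (2 * alpha)
G = b * P / (rho * r)            # gain (5b): G = (1/rho) R^{-1} B^T P

# Filter Riccati (6d), scalar form: 2*a*S - (c^2/Theta)*S^2 + Xi = 0.
gamma = c * c / Theta
S = (2 * a + math.sqrt(4 * a * a + 4 * gamma * Xi)) / (2 * gamma)
K = S * c / Theta                # Kalman gain (6b): K = Sigma C^T Theta^{-1}

print("P =", P, " G =", G)
print("Sigma =", S, " K =", K)
print("closed-loop poles:", a - b * G, a - K * c)  # both should be negative
```

For matrix-valued plants, the two algebraic Riccati equations would instead be solved numerically (e.g. by a Schur-based ARE solver); the scalar case merely makes the structure of the controller (5) and filter (6) visible at a glance.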
3. MAIN RESULTS

Now, consider linear stochastic dynamical system (1) with time-varying delay. Subtracting (6a) from (1a) and defining

$$ e(t) = x(t) - \hat{x}(t) \qquad (8) $$

yield

$$ \frac{de(t)}{dt} = [A - KC]\,e(t) + \Delta A(t)\,x(t-h(t)) + \xi(t) - K\theta(t) \qquad (9) $$

with initial condition

$$ e(t) = \begin{cases} \psi(t_0) - \hat{x}(t_0), & t = t_0 \\ \psi(t), & t \in [t_0-h,\, t_0) \end{cases} $$

On the other hand, substituting control law (5a) into (1a) yields

$$ \frac{dx(t)}{dt} = [A - BG]\,x(t) + BGe(t) + \Delta A(t)\,x(t-h(t)) + \xi(t) \qquad (10) $$

Then, combining (9) with (10) yields

$$ \frac{d\bar{x}(t)}{dt} = \bar{A}\bar{x}(t) + Wv(t) + \Delta\bar{A}(t)\,\bar{x}(t-h(t)) \qquad (11) $$

where $\bar{x}(t) := [x^T(t)\;\; e^T(t)]^T$, $v(t) := [\xi^T(t)\;\; \theta^T(t)]^T$, and

$$ \bar{A} := \begin{bmatrix} A-BG & BG \\ 0 & A-KC \end{bmatrix}, \qquad \Delta\bar{A}(t) := \begin{bmatrix} \Delta A(t) & 0 \\ \Delta A(t) & 0 \end{bmatrix}, \qquad W := \begin{bmatrix} I & 0 \\ I & -K \end{bmatrix} $$

Now, the question is to find some conditions such that the stability of (11) can be guaranteed. For this purpose, let the transition matrix $\Psi(t,\tau)$ be defined as

$$ \Psi(t,\tau) = \exp\{ \bar{A}(t-\tau) \} $$

where $\Psi(t,\tau)$ is the solution of the following matrix differential equation:

$$ \frac{\partial \Psi(t,\tau)}{\partial t} = \bar{A}\,\Psi(t,\tau) \qquad (12a) $$

$$ \Psi(\tau,\tau) = I \qquad (12b) $$

Then, the solution of (11) for $t \ge t_0$ can be expressed as

$$ \bar{x}(t) = \Psi(t,t_0)\,\bar{x}(t_0) + \int_{t_0}^{t} \Psi(t,\tau) \big[ \Delta\bar{A}(\tau)\,\bar{x}(\tau-h(\tau)) + Wv(\tau) \big]\, d\tau \qquad (13) $$

It is obvious from Assumption 2.1 that the eigenvalues of the matrix $\bar{A}$ are all in the left-hand half of the $s$-plane. Therefore, there exists a positive constant $\mu$ such that $-\mu$ is the real part of the eigenvalue of $\bar{A}$ nearest to the imaginary axis, i.e.

$$ \mu = -\max\{ \mathrm{Re}[\lambda_i(\bar{A})],\; i = 1, \ldots, 2n \} \qquad (14) $$

Thus, it follows from Chen (1984) and Desoer and Vidyasagar (1975) that

$$ \|\Psi(t,\tau)\| \le m \exp\{ -\mu(t-\tau) \}, \qquad t \ge \tau \qquad (15) $$

where $m$ is a certain appropriate positive constant and $\mu$ satisfies (14). The following theorem provides a sufficient condition to guarantee the asymptotic stability of closed-loop dynamical system (1), (5).

Theorem 3.1. Consider uncertain time-delay dynamical system (1) and suppose that Assumptions 2.1 to 2.3 are satisfied. Let $\mu$ and $m$ satisfy (15). If the following inequality holds:

$$ \beta < \frac{\mu}{2m} \qquad (16) $$

then the controller (5) stabilizes system (1) in the presence of delayed perturbations, and the output of the closed-loop system will converge asymptotically to some small fixed value which is dependent on the intensity of the noises.

Proof: Let $\bar{x}(t)$ be any solution of (11). Taking the norms of both sides of (13) and making use of the general properties of norms yield

$$ \|\bar{x}(t)\| \le \|\Psi(t,t_0)\|\,\|D\| + \int_{t_0}^{t} \|\Psi(t,\tau)\| \big[ \|\Delta\bar{A}(\tau)\bar{x}(\tau-h(\tau))\| + \|W\|\,\|v(\tau)\| \big]\, d\tau \qquad (17) $$

where

$$ \|D\| := \sup_{t \in [t_0-h,\, t_0]} \|\bar{x}(t)\| $$

Furthermore, substituting (15) into (17) yields

$$ \|\bar{x}(t)\| \le m\exp\{-\mu(t-t_0)\}\|D\| + \int_{t_0}^{t} m\exp\{-\mu(t-\tau)\}\,\|\Delta\bar{A}(\tau)\bar{x}(\tau-h(\tau))\|\, d\tau + \int_{t_0}^{t} m\|W\|\exp\{-\mu(t-\tau)\}\,\|v(\tau)\|\, d\tau \qquad (18) $$

On the other hand, from (7) one can have that

$$ \|\Delta\bar{A}(\cdot)\bar{x}(\cdot)\| = \left\| \begin{bmatrix} \Delta A(\cdot)x(\cdot) \\ \Delta A(\cdot)x(\cdot) \end{bmatrix} \right\| \le 2\|\Delta A(\cdot)x(\cdot)\| \le 2\|\Delta A(\cdot)\|\,\|x(\cdot)\| \le 2\beta\|x(\cdot)\| \le 2\beta\|\bar{x}(\cdot)\| \qquad (19) $$

and from the definition of the norm one can also have that

$$ \|v(t)\| = \sqrt{E[\xi^T(t)\xi(t) + \theta^T(t)\theta(t)]} = \sqrt{\|\xi(t)\|^2 + \|\theta(t)\|^2} \le \|\xi(t)\| + \|\theta(t)\| = \sqrt{E[\xi^T(t)\xi(t)]} + \sqrt{E[\theta^T(t)\theta(t)]} \qquad (20) $$

Moreover,

$$ \|W\| = \left\| \begin{bmatrix} I & 0 \\ I & -K \end{bmatrix} \right\| \le 2 + \|K\| \qquad (21) $$
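The constants $\mu$ and $m$ in (14)-(15), and hence the sufficient condition (16), can be evaluated numerically for a given design. For a diagonalizable $\bar{A} = V\Lambda V^{-1}$, one admissible choice is $m = \|V\|\,\|V^{-1}\|$, since $\|\exp\{\bar{A}t\}\| \le \|V\|\,\|\exp\{\Lambda t\}\|\,\|V^{-1}\| \le m\,e^{-\mu t}$. The sketch below works this out for the $2\times 2$ block-triangular $\bar{A}$ of (11) arising from a scalar plant; all numerical values (the gains $G$, $K$, etc.) are hypothetical.

```python
import math

def spec_norm2(m11, m12, m21, m22):
    # Largest singular value of a 2x2 matrix, via the eigenvalues of M^T M:
    # lambda_max = (trace + sqrt(trace^2 - 4*det)) / 2, with det(M^T M) = det(M)^2.
    t = m11*m11 + m12*m12 + m21*m21 + m22*m22
    d = (m11*m22 - m12*m21) ** 2
    return math.sqrt((t + math.sqrt(max(t*t - 4*d, 0.0))) / 2)

# Closed-loop data from a scalar LQG design (hypothetical values):
a, b, c = 1.0, 1.0, 1.0
G, K = 2.414, 3.449                  # feedback and filter gains
a1, a2, g = a - b*G, a - K*c, b*G    # Abar = [[a1, g], [0, a2]], triangular

mu = -max(a1, a2)                    # (14): distance of slowest pole from axis

# Eigenvector matrix V of Abar: columns [1, 0] for a1 and [g, a2-a1]/n for a2.
n = math.hypot(g, a2 - a1)
v11, v12, v21, v22 = 1.0, g/n, 0.0, (a2 - a1)/n
det = v11*v22 - v12*v21
i11, i12, i21, i22 = v22/det, -v12/det, -v21/det, v11/det   # V^{-1}
m = spec_norm2(v11, v12, v21, v22) * spec_norm2(i11, i12, i21, i22)

beta_bound = mu / (2*m)              # (16): beta must stay below this value
print("mu =", mu, " m =", m, " admissible beta <", beta_bound)
```

If the measured perturbation bound $\beta$ of (7) exceeds the printed value, condition (16) fails and the gains would have to be retuned along the lines of the design algorithm given at the end of this section.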
Thus, substituting (19)-(21) into (18) yields

$$ \|\bar{x}(t)\| \le m\exp\{-\mu(t-t_0)\}\|D\| + \frac{md}{\mu}\big( 1 - \exp\{-\mu(t-t_0)\} \big) + \int_{t_0}^{t} 2m\beta\exp\{-\mu(t-\tau)\}\,\|\bar{x}(\tau-h(\tau))\|\, d\tau \qquad (22) $$

where

$$ d := (2 + \|K\|)\big( \sqrt{\mathrm{tr}[\Xi]} + \sqrt{\mathrm{tr}[\Theta]} \big) $$

Multiplying both sides of (22) by $\exp\{\mu(t-t_0)\}$ yields

$$ \|\bar{x}(t)\|\exp\{\mu(t-t_0)\} \le m\|D\| + \frac{md}{\mu}\big( \exp\{\mu(t-t_0)\} - 1 \big) + \int_{t_0}^{t} 2m\beta\exp\{\mu(\tau-t_0)\}\,\|\bar{x}(\tau-h(\tau))\|\, d\tau \qquad (23) $$

Letting

$$ Y(t) := \Big[ \sup_{\rho \in [t-h,\, t]} \|\bar{x}(\rho)\| \Big] \exp\{\mu(t-t_0)\} $$

and

$$ m(t) := m\|D\| + \frac{md}{\mu}\big( \exp\{\mu(t-t_0)\} - 1 \big) $$

from (23) one can have that

$$ \|\bar{x}(t)\|\exp\{\mu(t-t_0)\} \le m(t) + \int_{t_0}^{t} 2m\beta\exp\{\mu(\tau-t_0)\}\,\|\bar{x}(\tau-h(\tau))\|\, d\tau \qquad (24) $$

Since the left-hand side of (24) is non-decreasing, it follows from (24) and the definition of $Y(t)$ that

$$ Y(t) \le m(t) + \int_{t_0}^{t} 2m\beta\, Y(\tau)\, d\tau \qquad (25) $$

Then, by making use of the Bellman-Gronwall inequality (see, e.g., Corollary 2.1), one can further obtain that

$$ Y(t) \le m(t) + \int_{t_0}^{t} 2m\beta\exp\{2m\beta(t-\tau)\}\, m(\tau)\, d\tau \qquad (26) $$

On the other hand, one can have that

$$ \int_{t_0}^{t} 2m\beta\exp\{2m\beta(t-\tau)\}\, m(\tau)\, d\tau = \int_{t_0}^{t} 2m\beta\exp\{2m\beta(t-\tau)\} \Big[ m\|D\| + \frac{md}{\mu}\big( \exp\{\mu(\tau-t_0)\} - 1 \big) \Big]\, d\tau $$

$$ = m\|D\|\big[ \exp\{2m\beta(t-t_0)\} - 1 \big] + \frac{2m\beta\, md}{\mu(\mu-2m\beta)}\big[ \exp\{\mu(t-t_0)\} - \exp\{2m\beta(t-t_0)\} \big] + \frac{md}{\mu}\big[ 1 - \exp\{2m\beta(t-t_0)\} \big] \qquad (27) $$

Therefore, substituting (27) into (26) yields

$$ Y(t) \le \frac{md}{\mu}\Big( 1 + \frac{2m\beta}{\mu-2m\beta} \Big)\big( \exp\{\mu(t-t_0)\} - \exp\{2m\beta(t-t_0)\} \big) + m\|D\|\exp\{2m\beta(t-t_0)\} $$

Moreover, it follows from the definition of $Y(t)$ that

$$ \|\bar{x}(t)\|\exp\{\mu(t-t_0)\} \le Y(t) \qquad (28) $$
From (28) and the definition of $Y(t)$ one can also have that

$$ \|\bar{x}(t)\| \le m\exp\{-(\mu-2m\beta)(t-t_0)\}\|D\| + \frac{md}{\mu}\Big( 1 + \frac{2m\beta}{\mu-2m\beta} \Big)\big( 1 - \exp\{-(\mu-2m\beta)(t-t_0)\} \big) \qquad (29) $$

It is obvious from (29) that, since $\mu - 2m\beta > 0$ if condition (16) is satisfied, $\|\bar{x}(t)\|$ will converge to a certain value as $t \to \infty$, i.e.

$$ \lim_{t\to\infty} \|\bar{x}(t)\| \le \frac{md}{\mu}\Big( 1 + \frac{2m\beta}{\mu-2m\beta} \Big) \qquad (30) $$

which is dependent on the intensity of the external noises. Hence, the system is asymptotically stable in the presence of the perturbations of uncertainties including the delayed state. ∎

In order to design a robust controller, first define

$$ \mu_1 := -\max_i \{ \mathrm{Re}[\lambda_i(A-BG)] \}, \qquad \mu_2 := -\max_i \{ \mathrm{Re}[\lambda_i(A-KC)] \} $$

In addition, it is well known from Anderson (1973) that if $\alpha$ is a nonnegative constant and the steady-state version of the Riccati equation in (6d) is changed to

$$ (A+\alpha I)\Sigma + \Sigma(A+\alpha I)^T - \Sigma C^T \Theta^{-1} C \Sigma + \Xi = 0 \qquad (31) $$

then all eigenvalues of the matrix $(A-KC)$ have real parts less than $-\alpha$. From (11), using the separation principle, the robust stability condition (16) is equivalent to

$$ \beta < \frac{\min(\mu_1, \mu_2)}{2m} \qquad (32) $$

Thus, in the light of Theorem 3.1, a robust LQG optimal design algorithm can be proposed as follows.

Step 1. Design a control system (5) and (6), and check whether condition (16) is satisfied. If (16) is not satisfied, go to Step 2; otherwise, the design is complete.

Step 2. Based on the control system, there are three situations to be considered repeatedly until one of them holds.

(1) If $\mu_2 > \mu_1$ and only $\mu_1$ cannot satisfy (32), then some appropriate $\rho$ in (5) has to be selected in order to satisfy (32).
(2) If $\mu_1 > \mu_2$ and only $\mu_2$ cannot satisfy (32), then an appropriate $\alpha$ in (31) has to be selected in order to satisfy (32).
(3) If both $\mu_1$ and $\mu_2$ cannot satisfy (32), then some appropriate $\rho$ in (5) and $\alpha$ in (31) have to be selected in order to satisfy (32).

4. CONCLUSION

The control problem of linear stochastic dynamical systems including delayed perturbations has been considered. By combining the conventional LQG optimal control theory with the methods used in time-delay system analysis, some robust stability conditions have been derived so that linear stochastic dynamical systems with time-delay can remain stable. In the light of the robust stability condition developed in the paper, a robust LQG optimal design algorithm has also been proposed.

REFERENCES

Anderson, B. D. O. (1973). Exponential data weighting in the Kalman-Bucy filter. Information Sciences, 5, 217-230.
Chen, B. S. and T. Y. Dong (1989). LQG optimal control system design under plant perturbation and noise uncertainty: A state-space approach. Automatica, 25, 431-436.
Chen, C. T. (1984). Linear System Theory and Design. CBS College Publishing.
Desoer, C. A. and M. Vidyasagar (1975). Feedback Systems: Input-Output Properties. Academic Press, New York.
Johnson, C. D. (1971). Accommodation of external disturbances in linear regulator and servomechanism problems. IEEE Trans. on Automatic Control, AC-16, 635-644.
Johnson, C. D. (1984). Disturbance-utilizing controllers for noisy measurements and disturbances: I. The continuous-time case. International Journal of Control, 39, 859-868.
Kwakernaak, H. and R. Sivan (1972). Linear Optimal Control Systems. Wiley-Interscience, New York.
Looze, D. P., H. V. Poor, K. S. Vastola and J. C. Darragh (1983). Minimax control of linear stochastic systems with noise uncertainty. IEEE Trans. on Automatic Control, AC-28, 882-888.