Chapter 10

The Time-Optimal Control of Linear Differential Equations of Neutral Type

In this chapter we study the time-optimal control problem for various forms of System (1.9.1). Controllability questions are answered. For linear systems, the existence and form of an optimal control are established both in Euclidean space and in function space. The theory developed here was reported in [13] and [14].
10.1 The Time-Optimal Control Problem of Linear Neutral Functional Systems in E^n: Introduction

In this section, the time-optimal problem is explored for the linear neutral system of the form

  d/dt D(t, x_t) = L(t, x_t) + B(t)u(t),  t ≥ σ,
  x_σ = φ,   (10.1.1)

where the control set is a unit m-dimensional cube and the target is a continuous set function in an n-dimensional Euclidean space. Necessary and sufficient conditions for the existence and uniqueness of optimal controls are stated and proved. The bang-bang form of the optimal control is given. For the zero target, easily computable conditions are stated for the system to be proper or normal. Also proved is a sufficient condition for (10.1.1) to be Euclidean null controllable. As an application we consider a differential pursuit game described by the equation
  d/dt D(t, x_t) = L(t, x_t) − p(t) + q(t),  t ≥ σ,
  x_σ = φ,
  q ∈ L_∞([σ, ∞), Q),  p ∈ L_∞([σ, ∞), P),  P, Q ⊂ E^n.   (10.1.1b)
We show that the differential game is equivalent to a control system of the form (10.1.1) in which the control set U is a Pontryagin difference of sets. In (10.1.1), B ∈ L_1([σ, ν], E^{n×m}), σ ≤ ν, and D(t, φ) = φ(0) − g(t, φ), where g(t, ·) and L(t, ·) are bounded linear operators from C into E^n for each fixed t in [0, ∞), and g(t, φ) is continuous for (t, φ) ∈ [0, ∞) × C. Here

  g(t, φ) = ∫_{−h}^{0} d_s μ(t, s) φ(s),
  L(t, φ) = ∫_{−h}^{0} d_s η(t, s) φ(s),
  |g(t, φ)| ≤ k‖φ‖,  |L(t, φ)| ≤ ℓ(t)‖φ‖,  (t, φ) ∈ [0, ∞) × C,
Stability and Time-Optimal Control of Hereditary Systems
for some nonnegative constant k and continuous nonnegative function ℓ(t); μ(t, ·), η(t, ·) are n × n matrix functions of bounded variation on [−h, 0]. We also assume that g is uniformly nonatomic at zero, that is, there exists a continuous, nonnegative, nondecreasing function ℓ(s) for s ∈ [0, h] such that ℓ(0) = 0 and

  | ∫_{−s}^{0} d_θ μ(t, θ) φ(θ) | ≤ ℓ(s)‖φ‖,  t ≥ 0,  φ ∈ C.
We shall consider as the set of admissible controls the measurable functions u : [σ, t] → C^m, t ≥ σ, where C^m is the unit cube in E^m given by

  C^m = { u ∈ E^m : |u_j| ≤ 1, j = 1, …, m }.

Such controls will simply be denoted by u ∈ C^m. Finally we assume that
B(t)u(t) is locally integrable from [σ, ∞) into E^n whenever u ∈ L_p([σ, ∞), E^m). Under the above hypotheses, there exists a continuous function x : [σ − h, ∞) → E^n with x_σ = φ that is a solution of system (10.1.1) for all t ≥ σ. If we denote the solution by x = x(σ, φ, u), then for each t ≥ σ, φ ∈ C, u ∈ L_p([σ, t], E^m),

  x_t(σ, φ, u) = x_t(σ, φ, 0) + x_t(σ, 0, u).

Define the operators

  T(t, σ) : C → C,  K(t, σ) : L_p([σ, t], E^m) → C

by

  x_t(σ, φ, u) = T(t, σ)φ + K(t, σ)u.

Then T, K are bounded linear operators. With u ≡ 0, system (10.1.1) reduces to the homogeneous system

  d/dt D(t, x_t) = L(t, x_t),  t ≥ σ,  x_σ = φ,   (10.1.2)

whose solution operator is T(t, σ). The solution of System (10.1.1) is also given by the variation of constants formula

  x(t, σ, φ, u) = x(t, σ, φ, 0) + ∫_σ^t Y(s, t) B(s) u(s) ds,   (10.1.3)

where Y(t, t) = I and Y(σ, t) = 0 for σ > t.
All integrals are understood to be Lebesgue–Stieltjes integrals. The function Y(σ, t) is Borel measurable subject to the conditions given in System (10.1.1). It is left continuous in its first argument and of bounded variation in that argument, with total variation bounded by a constant P < ∞ independent of (σ, t). (See Henry [16] or Banks and Kent [3].) Because of these properties of Y, we can easily study the adjoint of the operator T(t, σ) of equation (10.1.2). We need some properties of this adjoint, which is defined as follows: Let B_0 denote the Banach space of functions of bounded variation ψ : [−h, 0] → E^{n*} (the row vectors) that are continuous from the left on (−h, 0) and vanish at 0; we may use the norm ‖ψ‖ = Var_{[−h,0]} ψ. We
identify B_0 with the conjugate space C* via the pairing

  (ψ, φ) = ∫_{−h}^{0} dψ(s) φ(s),  ψ ∈ B_0,  φ ∈ C.
The adjoint of T(t, σ), written T*(σ, t) : B_0 → B_0, with domain in B_0, is defined by

  (T*(σ, t)ψ, φ) = (ψ, T(t, σ)φ)   (10.1.4)

whenever σ ≤ t, ψ ∈ B_0, φ ∈ C. We now state the following lemma:
Lemma 10.1.1 Let ψ ∈ B_0, ψ ≠ 0. Define

  y(s, t) = [T*(s, t)ψ](0−),  σ ≤ s ≤ t.   (10.1.5)

Then s ↦ y(s) satisfies the adjoint equation

  d/ds { y(s) + ∫_s^t y(α) μ(α, s − α) dα } + ∫_s^t y(α) η(α, s − α) dα = 0,  s ≤ t.   (10.1.6)

This follows from Henry [16].
Remark: T*(s, t) is characterized in [16, Theorem 3] in a way similar to [9, Theorem 4.1, p. 152]. We now designate the strongly continuous semigroup of linear transformations defined by

  d/dt [D(t, x_t)] = L(t, x_t)   (10.1.7)
by T(t, σ), t ≥ σ, so that

  T(t, σ)φ = x_t(σ, φ, 0).   (10.1.8)

Let the function U(t, s) ∈ L_∞([σ, t], E^{n×n}) for each t be defined by

  U(t, s) = ∂K(t, s)/∂s  a.e. in s,

where K(t, s) is the unique solution of
  (a) K_s(·, s) = 0,
  (b) K(t, s) = g(t, K_t(·, s)) + ∫_s^t L(λ, K_λ(·, s)) dλ − (t − s)I,  0 ≤ s ≤ t.

Then the variation of constants formula (10.1.10c) below holds for a more general version of System (10.1.1). With X_0 defined by X_0(θ) = 0 for −h ≤ θ < 0 and X_0(0) = I (I the identity), we are justified in writing

  T(t, σ)X_0 = U_t(·, σ).   (10.1.10b)

Here U(t, t) = I, the identity matrix. If we consider the system in E^n, we have
  x(σ, φ, u)(t) = (T(t, σ)φ)(0) + ∫_σ^t d_s U(t, s)[G(σ)(φ) − G(s)(x_s)]
         + ∫_σ^t U(t, s) B(s) u(s) ds.   (10.1.10c)
The Euclidean constrained reachable set is given by

  R(t, σ) = { ∫_σ^t U(t, s) B(s) u(s) ds : u ∈ L_∞([σ, t], C^m) },   (10.1.11)

and the constrained reachable set in C is given by

  α(t, σ) = { ∫_σ^t U_t(·, s) B(s) u(s) ds : u ∈ L_∞([σ, t], C^m) }.   (10.1.12)
We summarize the basic properties of these sets in the following proposition.
Proposition (10.1.1a) Assume that D and L satisfy the basic assumptions of the introduction, and furthermore that there exist constants ℓ > 0, L > 0 such that condition (10.1.13) holds. Then:
 (i) 0 ∈ R(t, σ) for each t ≥ σ.
 (ii) W(t, s)R(s, σ) ⊆ R(t, σ), σ ≤ s ≤ t, where W(t, s) : C → E^n is the operator for which W(t, s)φ = x(t, s, φ, 0) is a solution of (10.1.7).
 (iii) R(t, σ) is compact and convex in E^n.
 (iv) 0 ∈ α(t, σ) for each t ≥ σ.
 (v) T(t, s)α(s, σ) ⊆ α(t, σ), σ ≤ s ≤ t.
 (vi) α(t, σ) is compact and convex in C.
 (vii) A(t) is compact and convex, where A(t) is defined as follows: Let S be a compact and convex subset of C. The attainable set A(t) of (10.1.1) at time t is defined by

  A(t) = { x(t) : φ ∈ S, u ∈ C^m } ⊂ E^n.
Proof: Convexity of A(t) follows trivially from that of S and C^m. That of R(t) follows from the convexity of C^m. Because S is compact and x(t, σ, ·, 0) is continuous, x(t, σ, S, 0) is bounded. Since U(t, s)B(s) is integrable and u(s) ∈ C^m, A(t) is bounded in E^n. Also R(t) is bounded. We use a weak compactness argument and the compactness of S to deduce that A(t) is closed in E^n. The same weak compactness argument proves the closedness of R(t). It now remains to prove (ii). Let r ∈ R(s); then for some u ∈ C^m,

  r = ∫_σ^s U(s, τ) B(τ) u(τ) dτ.

Define

  u*(τ) = u(τ), σ ≤ τ ≤ s;  u*(τ) = 0, s < τ ≤ t.

Then u*(τ) ∈ C^m. Consider the point

  ∫_σ^t U(t, τ) B(τ) u*(τ) dτ ∈ R(t).

Hence

  W(t, s)R(s) ⊆ R(t),  σ ≤ s ≤ t.
Theorem 10.1.1 Let C_r^m = { u ∈ C^m : |u_j| = 1 }. If

  A^0(t) = { x(t) : φ ∈ S, u^0 ∈ C_r^m },
  R^0(t) = { ∫_σ^t U(t, s) B(s) u^0(s) ds : u^0 ∈ C_r^m },

then R^0(t) = R(t) and A^0(t) = A(t) for each t ≥ σ.

Proof: Because U(t, ·) ∈ L_∞([σ, t], E^{n×n}) and B(·) ∈ L_1([σ, t], E^{n×m}), we have that U(t, s)B(s) ∈ L_1([σ, t], E^{n×m}). It follows from Lemma 2 of LaSalle [2] that R^0(t) = R(t) for each t. Hence A(t) = A^0(t).
Remark 10.1.2: The bang-bang principle of Theorem 10.1.1 is not valid for α(t, σ) in function space, as shown by the following example of Banks and Kent [3]. Consider the neutral system

  ẋ(t) = ẋ(t − 1) + u(t),  t ∈ [0, 2],  U = { u ∈ E : |u| ≤ 1 },

with target function ξ(t) = 2 − t, t ∈ [1, 2]. The assumption that u is bang-bang and the target is attained leads to the conclusion φ̇(t) = 0 or φ̇(t) = −3, where φ is the initial function. However, beginning with a suitable initial function one can attain the target using an admissible control u. Thus, there are initial functions φ for which the set T(t, σ)φ + α(t, σ) attained using bang-bang controls is a proper subset of the set attained using all admissible controls.
Definition 10.1.1: Let Γ^n denote the metric space of all nonempty compact subsets of E^n with the metric ρ defined as follows: the distance of a point x from A(t_1) is

  d(x, A(t_1)) = inf{ |x − a| : a ∈ A(t_1) },

and

  ρ(A(t_1), A(t_2)) = inf{ ε : A(t_1) ⊆ N(A(t_2), ε) and A(t_2) ⊆ N(A(t_1), ε) }.

The target in our system (10.1.1) is a continuous set function G : [τ, ∞) → Γ^n. The problem of reaching G in minimum time will be called the general problem. For this we want, at some t ≥ σ, to have x(t, σ, φ, u) ∈ G(t), that is, to have

  w(t) = ∫_σ^t U(t, s) B(s) u(s) ds,   (10.1.14)

where w(t) = z(t) − x(t, σ, φ, 0) for some z(t) with z(t) ∈ G(t). This is equivalent to requiring

  A(t) ∩ G(t) ≠ ∅

for some t ≥ σ.
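For finite point sets the metric ρ of Definition 10.1.1 reduces to the familiar Hausdorff distance and is directly computable. The following sketch, which is illustrative only (the finite subsets of E^1 used below are our own assumptions, not part of the text), evaluates ρ for two compact sets represented by finite samples.

```python
# Hausdorff distance of Definition 10.1.1:
#   rho(A, B) = inf{eps : A ⊆ N(B, eps) and B ⊆ N(A, eps)}.
# For finite sets this infimum equals max of the two directed distances.

def point_set_distance(x, A):
    # d(x, A) = inf{|x - a| : a in A}
    return min(abs(x - a) for a in A)

def hausdorff(A, B):
    dAB = max(point_set_distance(a, B) for a in A)  # how far A sticks out of B
    dBA = max(point_set_distance(b, A) for b in B)  # how far B sticks out of A
    return max(dAB, dBA)

A = [0.0, 1.0]
B = [0.0, 2.0]
print(hausdorff(A, B))  # 1.0
```

The same two-sided maximum is what makes ρ a genuine metric on Γ^n, rather than the one-sided (and asymmetric) excess of one set over the other.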
Remark 10.1.3: Theorem 10.1.1 states that the bang-bang principle of LaSalle is valid for (10.1.1): if among all bang-bang steering functions there is an optimal one relative to C_r^m, then it is optimal (relative to C^m). Also, if there is an optimal steering function, there is always a bang-bang steering function that is optimal.
Proposition 10.1.1b The attainable set A(·) : [σ, ∞) → Γ^n is continuous. Also t ↦ R(t, σ) and t ↦ α(t, σ) are continuous.
Remark 10.1.4: The proof of this proposition is essentially the same as that of the assertion contained in Theorem 4.1 of Banks and Kent [3]. One proves that there exists an M > 0 such that ‖x_t(σ, φ, u)‖ ≤ M for t ∈ [σ, t_1], φ ∈ S, u ∈ C^m. With this, one then proves that

  A = { x(σ, φ, u) : φ ∈ S, u ∈ C^m }

is an equicontinuous subset of C([σ − h, t_1], E^n). As a consequence, t ↦ x(t, σ, φ, u) is continuous uniformly in φ, u. The continuity of t ↦ A(t) follows readily from this. As pointed out in [3], because the fundamental matrix t ↦ U(t, s) is not continuous, one cannot prove A(·) continuous directly using the variation of parameter formula (10.1.3), as is done for the example in [4].
Theorem 10.1.2 If for the general problem there exists a pair φ ∈ S, u ∈ C^m such that x(t, φ, u) ∈ G(t) for some t, then there is an optimal pair φ* ∈ S, u* ∈ C^m.

Proof: By assumption there is some t ≥ σ, φ ∈ S such that A(t) ∩ G(t) ≠ ∅. If we now define the minimal-time function

  t*(S) = inf{ t ≥ σ : A(t) ∩ G(t) ≠ ∅ },

where A(t) = A(t, S), we can prove first, as is done by Strauss [5, p. 63], that A(t*) ∩ G(t*) ≠ ∅. We use the compactness of G(t), A(t), and their continuity.
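The minimal-time function can be illustrated on a toy example: for the scalar system ẋ = u, |u| ≤ 1, x(σ) = x_0, the attainable set is the interval A(t) = [x_0 − (t − σ), x_0 + (t − σ)], and with target G(t) ≡ {0} the infimum is t*(S) = σ + |x_0|. The sketch below (the scalar system, the fixed target, and the discretization step are our own illustrative assumptions) approximates t* by stepping forward until A(t) meets the target.

```python
# Minimal-time function t*(S) = inf{t >= sigma : A(t) ∩ G(t) != empty}
# for the toy system xdot = u, |u| <= 1, where A(t) = [x0-(t-sigma), x0+(t-sigma)]
# and the target is G(t) = {0}.

def minimal_time(x0, sigma=0.0, dt=1e-4, t_max=10.0):
    t = sigma
    while t <= t_max:
        lo, hi = x0 - (t - sigma), x0 + (t - sigma)
        if lo <= 0.0 <= hi:          # A(t) meets the target {0}
            return t
        t += dt
    raise RuntimeError("target not reached before t_max")

print(minimal_time(1.5))  # close to |x0| = 1.5
```

That the infimum is attained (A(t*) ∩ G(t*) ≠ ∅) is exactly what the compactness and continuity argument of the proof establishes in general.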
Definition 10.1.2: Let

  D(t, φ) = ∫_{−h}^{0} d_θ η(t, θ) φ(θ)

and suppose

  A(t, β) = η(t, β+) − η(t, β−).

Then D(t, φ) is atomic at β on E × C if det A(t, β) ≠ 0 for all t ∈ E. In particular, if β ≠ 0, β ∈ [−h, 0], and D(t, φ) = φ(0) + M(t)φ(β), then M(t) = A(t, β) and D(t, φ) is atomic at β on E × C if det M(t) ≠ 0 for all t ∈ E. See Hale [9, p. 50]. For the linear system (10.1.1), a fundamental theorem of Hale shows that the solution operator

  T(t, σ) : C → C,

defined by T(t, σ)φ = x_t(σ, φ), where x is a solution of system (10.1.1), is a homeomorphism, provided D is atomic at 0 and at −h [9, p. 279]. As asserted by Hale, since L(t, x_t) is linear and therefore continuously differentiable in the second argument, the solution operator is a diffeomorphism. It follows from an argument similar to Hale [9] that the solution operator

  T(t, σ) : W^p → W^p,  1 ≤ p ≤ ∞,

is also a homeomorphism.
Definition 10.1.3: The control u ∈ C^m is an extremal control on [σ, t_1] if for some φ ∈ C and each t ∈ [σ, t_1] the solution x(σ, φ, u) of (10.1.1) through (σ, φ) belongs to the boundary ∂A(t) of A(t).

Theorem 10.1.3 Assume that the conditions of Proposition (10.1.1a) hold, and that the solution of (10.1.7) is pointwise complete. Let u* be optimal on [σ, t*]. Then there is a nonzero n-dimensional row vector c*, depending on φ* and t*, such that

  { u*(t) }_j = sgn{ c* U(t*, t) B(t) }_j,  σ ≤ t ≤ t*,   (10.1.15)

for each 1 ≤ j ≤ m for which { c* U(t*, t) B(t) }_j ≠ 0.

Proof: To prove that any optimal control is extremal, one uses the method of Strauss and the following proposition, which is proved exactly as in [5].

Proposition 10.1.2 If α is an interior point of A(t), then there is some ε > 0 such that the ε-neighborhood N(α, ε) of α satisfies N(α, ε) ⊆ A(s) for all s ∈ [t − ε, t].
To prove the last assertion, we note that x(t*, σ, φ*, u*) ∈ ∂A(t*). Because A(t*) is convex and closed, there exists a support plane π through x(t*, σ, φ*, u*) = x* such that A(t*) lies on one side of π. Let v be a unit normal to π directed away from A(t*). Clearly, for each u ∈ C^m, x = x(t*, φ*, u) ∈ A(t*). Hence (v, x − x*) ≤ 0. From the variation of parameter formula (10.1.3a), this is equivalent to

  ( v, ∫_σ^{t*} U(t*, s) B(s) u(s) ds ) ≤ ( v, ∫_σ^{t*} U(t*, s) B(s) u*(s) ds )

for all u ∈ C^m. Let λ(s) = v^T U(t*, s) B(s). Then

  ∫_σ^{t*} λ(s) u(s) ds ≤ ∫_σ^{t*} λ(s) u*(s) ds  for all u ∈ C^m.

Hence we see that, on any interval of positive length where λ_j(·) ≠ 0, it must be that u_j*(s) = sgn λ_j(s). We have proved the theorem.
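For an ordinary (delay-free) double integrator ẍ = u, the sign formula can be evaluated in closed form, since U(t*, t) = e^{A(t*−t)} with A nilpotent: e^{As} = [[1, s], [0, 1]], so c U(t*, t)B = c_1(t* − t) + c_2 and the extremal control is bang-bang with at most one switch. The following sketch is illustrative only — the matrices, the row vector c, and the final time t* are our own sample assumptions, not values produced by the theory.

```python
# Bang-bang extremal control u*(t) = sgn(c U(t*,t) B) for xdot = Ax + Bu with
# A = [[0,1],[0,0]] (double integrator) and B = (0,1)^T.  Here
# c U(t*,t) B = c1*(t* - t) + c2, an affine function of t: one switch at most.

def u_star(t, t_star, c):
    c1, c2 = c
    val = c1 * (t_star - t) + c2     # c * e^{A(t*-t)} * B
    if val > 0:
        return 1.0
    if val < 0:
        return -1.0
    return 0.0                        # measure-zero set where the sign is undefined

t_star, c = 2.0, (1.0, -0.5)          # illustrative choices
controls = [u_star(t, t_star, c) for t in (0.0, 1.0, 1.9)]
print(controls)  # [1.0, 1.0, -1.0]: the switch occurs at t = 1.5
```

Normality (Definition 10.1.4 below) is precisely what rules out λ_j vanishing on a set of positive measure, so that the sign is defined almost everywhere and the optimal control is determined uniquely.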
Definition 10.1.4: System (10.1.1) is said to be normal on [σ, t] if no component of c^T Y(t, s), c ≠ 0, vanishes on a subinterval of [σ, t] of positive length. This definition is equivalent to the following: Let y_j(t, s) be the j-th column vector of Y(t, s). For each j = 1, …, m the functions y_{j1}(t, s), …, y_{jn}(t, s) are linearly independent on each subinterval of [σ, t] of positive length.

Corollary 10.1.1 If System (10.1.1) is normal on [σ, t], then u*(t), the optimal control, is uniquely determined by (10.1.9) and is bang-bang.

Definition 10.1.5: Let S ⊆ E^n be a set. S is strictly convex if for every x_1 and x_2 in S, x_1 ≠ x_2, the open line segment

  { λx_1 + (1 − λ)x_2 : 0 < λ < 1 }

is in the interior of S. Following the methods of Strauss [5, p. 70] we can prove the following:
Theorem 10.1.4 Assume the conditions of Theorem 10.1.3. Suppose (10.1.1) is normal and G : [τ, ∞) → Γ^n is continuous and strictly convex valued. Then if there exists a pair φ ∈ C, u ∈ C^m such that x(t, σ, φ, u) ∈ G(t) for some t ≥ σ, there is exactly one (time) optimal control, and it is bang-bang.
10.2 Forcing to Zero

In this section, we consider System (10.1.1) with fixed target G(t) ≡ {0}, where the aim is to start at any initial state φ ∈ C at time σ ∈ [τ, ∞) and to reach the origin. The next definition will enable us to derive a computational criterion for solving our problem.
Definition 10.2.1: System (10.1.1) is proper on an interval [σ, t*] if c^T U(t*, s)B(s) ≡ 0 almost everywhere on [σ, t*] implies c = 0. If (10.1.1) is proper on [σ, σ + δ] for each δ > 0, we say that (10.1.1) is proper at time σ. If (10.1.1) is proper on each interval [σ, t], t > σ, we say the system is proper.

Definition 10.2.2: System (10.1.1) is Euclidean null controllable if given any initial state φ ∈ C = C([−h, 0], E^n) there exist a t_1 and an admissible control u ∈ L_∞([σ, t_1], E^m) such that the solution x(σ, φ, u) of (10.1.1) satisfies x_σ(σ, φ, u) = φ and x(t_1, σ, φ, u) = 0.

Definition 10.2.3: The domain N of null controllability of (10.1.1) is the set of all initial functions φ ∈ C for which the solution x = x(σ, φ, u) of (10.1.1) with x_σ = φ satisfies x(t_1) = 0 at some t_1 using an admissible control u ∈ L_∞([σ, t_1], C^m).

Theorem 10.2.1 The domain N of null controllability contains zero in its interior whenever (10.1.1) is proper.
To prove Theorem 10.2.1 we need the following proposition:

Proposition 10.2.1 System (10.1.1) is proper on [σ, t] if and only if the origin is an interior point of R(t).

Proof of Proposition 10.2.1: Observe that 0 is always in R(t) for each t ≥ σ. Suppose (10.1.1) is proper but zero is not in the interior of R(t) for some t ≥ σ. Because R(t) is a closed and convex set, and because the constraint set C^m is the unit cube, 0 being on the boundary of R(t) is equivalent to

  c^T ∫_σ^t U(t, s) B(s) u(s) ds = 0

for some nonzero c ∈ E^n and for u ∈ L_∞([σ, t], C^m) given by u(s) = sgn(c^T U(t, s)B(s)), where c is the outward normal to the support plane to R(t) at 0. Thus

  c^T U(t, s)B(s) ≡ 0  almost everywhere s ∈ [σ, t],

where c ≠ 0, which contradicts the fact that (10.1.1) is proper on [σ, t]. We now reverse the argument to complete the proof.

Proof of Theorem 10.2.1: With u = 0 ∈ C^m, x(t) ≡ 0 is a solution of (10.1.1), so that 0 ∈ N. Suppose 0 is in the interior of R(t) for each t, but 0 is not in the interior of N. Then there is a sequence {φ^m}_1^∞ ⊂ C such that φ^m → 0 as m → ∞ (the convergence is in the sup norm of C([−h, 0], E^n)) and no φ^m is in N (so φ^m ≠ 0). From the variation of constants formula we deduce that

  x(t_1, σ, φ^m, u) = (T(t_1, σ)φ^m)(0) + ∫_σ^{t_1} U(t_1, s) B(s) u(s) ds ≠ 0

for any t_1 > σ and any u ∈ C^m. Hence, by the symmetry of R(t_1), y_m = (T(t_1, σ)φ^m)(0) is not in R(t_1) for any t_1. We therefore obtain a sequence {y_m}_1^∞ ⊂ E^n, y_m ∉ R(t_1) for any t_1, y_m ≠ 0, such that y_m → 0 as m → ∞. We conclude that 0 is not in the interior of R(t_1) for any t_1, a contradiction. Hence 0 ∈ Int N.
Theorem 10.2.2 If (10.1.1) is proper and (10.1.2) is uniformly asymptotically stable, then (10.1.1) is Euclidean null controllable.

Proof: Let N be the domain of null controllability of (10.1.1). Since (10.1.1) is proper, we have from Theorem 10.2.1 that 0 ∈ Int N. In (10.1.1) consider the trivial admissible control u ≡ 0 on [σ, ∞). Then the solution x(σ, φ, 0) of (10.1.1) with u = 0 (that is, the solution of (10.1.2)) satisfies

  ‖x_t(σ, φ, 0)‖ ≤ k e^{−c(t−σ)} ‖φ‖,  t ≥ σ,

for some k > 1 and c > 0. This behavior of (10.1.2) follows from Cruz and Hale [1]. Hence x_t(σ, φ, 0) → 0 ∈ Int N as t → ∞. Therefore there exists a t_1 > σ such that x_{t_1}(σ, φ, 0) ∈ B, where B is a ball in N with the origin as center and radius sufficiently small. With x_{t_1}(σ, φ, 0) ∈ N as an initial point, there exist some t_2 and some u ∈ L_∞([t_1, t_2], C^m) such that the solution x(t_1, x_{t_1}, u) of (10.1.1) satisfies x(t_2, t_1, x_{t_1}, u) = 0. This proves the theorem.

We conclude this section by considering a more general class of controls, in which we only require them to be integrable on finite intervals. If a control u satisfies this condition, we simply write u ∈ C*.
Definition 10.2.4: System (10.1.1) is Euclidean controllable on [σ, t_1] if for each φ ∈ C and each x_1 ∈ E^n there exists a control u ∈ C* such that the solution x(σ, φ, u) of (10.1.1) satisfies x_σ(σ, φ, u) = φ and x(t_1, σ, φ, u) = x_1.

Theorem 10.2.3 System (10.1.1) is proper on [σ, t_1] if and only if (10.1.1) is Euclidean controllable on [σ, t_1].

Remark 10.2.1: The ideas behind the proof are standard in the literature. See, for example, Zmood [7, Theorem 2], and Lemma 6.1.1 here.
10.3 Normal and Proper Autonomous Systems

In this section an autonomous special case of (10.1.1) is considered. Sharp necessary and sufficient conditions for normality, and for the system to be proper, are deduced. The method of Gabasov and Kirillova [12] is used to obtain results that generalize the recent normality conditions of Kloch [8], and that specialize to well-known conditions for autonomous linear ordinary differential equations. Consider the system

  d/dt (x(t) − A_{−1} x(t − h)) = A_0 x(t) + A_1 x(t − h) + Bu(t),  t ≥ 0,
  x(t) = φ(t) for t ∈ [−h, 0],   (10.3.1)

where the A_i are n × n constant matrices and B is an n × m constant matrix. Here u ∈ C^m and φ ∈ C([−h, 0], E^n) ≡ C. It is well known that under the above assumptions, for each admissible control u ∈ L_p([0, ∞), C^m), there exists a unique solution to (10.3.1) on [−h, ∞) through φ. Furthermore, if A_{−1} is nonsingular, this solution exists on (−∞, ∞) and is unique. (See Hale [9, p. 26].) The fundamental matrix of
  d/dt (x(t) − A_{−1} x(t − h)) = A_0 x(t) + A_1 x(t − h)   (10.3.2)
is the solution U(t) of (10.3.2) with initial data

  U(t) = 0, t < 0;  U(0) = I (the identity),   (10.3.3)

for which U(t) − A_{−1}U(t − h) is continuous and satisfies (10.3.2) for t ≥ 0 except at kh, k = 0, 1, 2, …. Indeed, U(t) has a continuous first derivative on each interval (kh, (k + 1)h), k = 0, 1, 2, …, and the right- and left-hand limits of U(t) exist at each kh, k = 0, 1, 2, …, so that U(t) is of bounded variation on each compact interval and satisfies

  d/dt [U(t) − A_{−1}U(t − h)] = A_0 U(t) + A_1 U(t − h),  t ≠ kh,  k = 0, 1, 2, ….   (10.3.4)
Also if U(t) is the fundamental matrix solution of (10.3.2) described above, then the solution x(φ, u) of (10.3.1) is given by

  x(t, φ, u) = x(t, φ, 0) + ∫_0^t U(t − s) B u(s) ds,

where

  x(t, φ, 0) = U(t)(φ(0) − A_{−1}φ(−h)) + A_1 ∫_{−h}^{0} U(t − s − h) φ(s) ds − A_{−1} ∫_{−h}^{0} dU(t − s − h) φ(s).
In order to introduce computational criteria to check when (10.3.1) is proper or normal, we introduce the following notation, defining

  Q_k(s) = A_0 Q_{k−1}(s) + A_1 Q_{k−1}(s − h) + A_{−1} Q_k(s − h),
  k = 0, 1, 2, …,  s = 0, h, 2h, …,

with Q_0(0) = I (the identity matrix) and Q_k(s) ≡ 0 if s < 0.

Theorem 10.3.1 A necessary and sufficient condition for System (10.3.1) to be proper on the interval [0, T] is that the matrix

  Q(T) = { Q_k(s)B : k = 0, 1, …, n − 1, s ∈ [0, T] }

has rank n.

Proof: The proof is exactly as in [12, pp. 51–60].

Corollary 10.3.1 In (10.3.1), assume that A_{−1} = 0. Then a necessary and sufficient condition for (10.3.1) to be proper on [0, T] is that
  Q(T) = { Q_k(s)B : k = 0, 1, …, n − 1, s ∈ [0, T] }

has rank n, where now

  Q_k(s) = A_0 Q_{k−1}(s) + A_1 Q_{k−1}(s − h),  Q_0(0) = I,  Q_k(s) ≡ 0 if s < 0.
This is Theorem 6.1.1, located in this book. Corollary 10.3.1 is the algebraic criterion for complete controllability given by Gabasov and Kirillova for the delay system

  ẋ(t) = A_0 x(t) + A_1 x(t − h) + Bu(t),   (10.3.6)

when the controls are not restrained to lie in a compact set but are only required to be integrable on compact intervals. We note that an algebraic
criterion for the delay equation (10.3.6) to be proper is given by Corollary 10.3.1. This is a generalization of the fundamental result of LaSalle, in Hermes and LaSalle [6, p. 74], on the autonomous system

  ẋ = Ax + Bu(t).   (10.3.7)

Recall that (10.3.1) is normal on [0, T], T > 0, if for any r = 1, …, m,

  η^T U(T − s) B_r = 0  almost everywhere s ∈ [0, T]

implies η = 0, where B_r is the r-th column of B. If we follow the idea of Theorem 10.3.1, we deduce the following theorem:
Theorem 10.3.2 A necessary and sufficient condition for (10.3.1) to be normal on the interval [0, T] is that for each r = 1, 2, …, m, the matrix

  Q_r(T) = { Q_k(s)B_r : k = 0, 1, …, n − 1, s ∈ [0, T] }

has rank n.

We now apply our result to the general n-th order scalar autonomous neutral equation of the form

  d/dt [ x^(n−1)(t) − μ x^(n−1)(t − h) ] = Σ_{k=0}^{n−1} a_k x^(k)(t) + Σ_{k=0}^{n−1} b_k x^(k)(t − h) + u(t),   (10.3.8)

where |μ| ≤ 1. Define
  A_0 = [ 0   1   0   …   0
      0   0   1   …   0
      ⋮           ⋱   ⋮
      0   0   0   …   1
      a_0  a_1  a_2  …  a_{n−1} ],

  A_1 = [ 0   0   …   0
      ⋮           ⋮
      0   0   …   0
      b_0  b_1  …  b_{n−1} ].
Corollary 10.3.2 The scalar control system (10.3.8) is proper for every T > 0.

Proof: The result follows immediately from (10.3.9) and Theorem 10.3.1. It is fairly obvious from Theorem 10.3.1 that if (10.3.1) is proper on [0, T_1], then it is proper on [0, T] for T ≥ T_1.
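The determining matrices Q_k(s) and the rank test of Theorem 10.3.1 are directly computable. The sketch below is illustrative: the matrices A_0, A_1, A_{−1}, B are sample data of our own choosing, and we adopt the convention Q_{−1} ≡ 0 when the stated recursion is applied at k = 0.

```python
import numpy as np

# Determining matrices for (10.3.1):
#   Q_k(s) = A0 Q_{k-1}(s) + A1 Q_{k-1}(s-h) + Am1 Q_k(s-h),
#   Q_0(0) = I, and Q_k(s) = 0 for s < 0 (with Q_{-1} = 0 by convention).
# Properness on [0, T]  <=>  rank {Q_k(jh) B} = n  (Theorem 10.3.1).

def determining_matrices(A0, A1, Am1, kmax, jmax):
    n = A0.shape[0]
    Z = np.zeros((n, n))
    Q = {}
    def get(k, j):
        return Z if (k < 0 or j < 0) else Q[(k, j)]
    for j in range(jmax + 1):              # s = j*h
        for k in range(kmax + 1):
            if k == 0 and j == 0:
                Q[(0, 0)] = np.eye(n)
            else:
                Q[(k, j)] = (A0 @ get(k - 1, j) + A1 @ get(k - 1, j - 1)
                             + Am1 @ get(k, j - 1))
    return Q

n = 2
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])    # companion block, illustrative
A1 = np.zeros((2, 2))
Am1 = np.zeros((2, 2))
B = np.array([[0.0], [1.0]])

Q = determining_matrices(A0, A1, Am1, n - 1, n - 1)
cols = np.hstack([Q[(k, j)] @ B for k in range(n) for j in range(n)])
rank = np.linalg.matrix_rank(cols)
print(rank)   # 2: the sample system is proper
```

With A_1 = A_{−1} = 0 the same computation reduces to the Kalman-type test for (10.3.7), which is the specialization mentioned in the introduction to this section.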
Theorem 10.3.3 Consider the pointwise complete system (10.3.1). Suppose
 (i) for each T ≥ 0 and for each r = 1, 2, …, m, the matrix

  H_r(T) = { Q_k(s)B_r : k = 0, 1, …, n − 1, s ∈ [0, T] }   (10.3.10)

has rank n, and
 (ii) α_0 = sup{ Re λ : det Δ(λ) = 0 } < 0, where

  Δ(λ) = λ(I − A_{−1}e^{−λh}) − A_0 − A_1 e^{−λh}.

Then there is precisely one time-optimal control that drives any φ ∈ C([−h, 0], E^n) to the origin in minimum time t*. It is given by

  u_j*(t) = sgn(c^T U(t* − t)B)_j,  j = 1, …, m,  0 ≤ t ≤ t*.   (10.3.11)

Proof: Because of (i), (10.3.1) is normal and a fortiori proper. Because of (ii), (10.3.2) is uniformly asymptotically stable. Hence (10.3.1) is Euclidean null controllable; see Theorem 10.2.2. Because (10.3.1) is null controllable, Theorem 10.1.3 guarantees there is an optimal control; it is extremal by Theorem 10.1.4 and uniquely determined by (10.1.9) because of Corollary 10.1.1.
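Condition (ii) can be examined numerically. In the scalar case Δ(λ) = λ(1 − a_{−1}e^{−λh}) − a_0 − a_1e^{−λh}, and a real characteristic root can be located by bisection. The coefficients below are our own illustrative sample; a single real root is of course only part of the spectrum, whereas (ii) concerns the supremum of Re λ over all roots.

```python
import math

# Scalar characteristic function of (10.3.2):
#   delta(lam) = lam*(1 - am1*exp(-lam*h)) - a0 - a1*exp(-lam*h).
# With am1 = 0, a0 = -1, a1 = 0.1, h = 1 there is a real root in (-1, 0),
# located here by bisection: delta(-1) < 0 < delta(0).

a0, a1, am1, h = -1.0, 0.1, 0.0, 1.0

def delta(lam):
    e = math.exp(-lam * h)
    return lam * (1.0 - am1 * e) - a0 - a1 * e

lo, hi = -1.0, 0.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if delta(mid) < 0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)
print(root)   # a negative real root, consistent with condition (ii)
```

For genuinely neutral equations (a_{−1} ≠ 0) the root chains approach a vertical asymptote determined by the difference operator, which is why uniform asymptotic stability requires the strict supremum condition rather than root-by-root inspection.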
10.4 Pursuit Games and Time-Optimal Control Theory

In this section we show that a class of differential pursuit games is equivalent to certain time-optimal control problems. We can therefore appropriate the theory developed earlier for System (10.1.1) to solve this latter problem. The emphasis in this section is exactly as in Hájek [10]: on the reduction rather than on the consequences. We consider the linear neutral differential game described by

  d/dt D(t, x_t) = L(t, x_t) − p(t) + q(t),  t ≥ σ,
  x_σ = φ ∈ C,   (10.4.1)
where p(t) ∈ P, q(t) ∈ Q, with P ⊆ E^n the pursuer control constraint and Q ⊆ E^n the quarry control constraint. The functions p ∈ L_∞([σ, t], P), q ∈ L_∞([σ, t], Q) are called pursuer and quarry controls. They are said to steer the initial function φ ∈ C([−h, 0], E^n) to the origin in E^n in time t_1 if the solution x(σ, φ, p, q) of (10.4.1) with x_σ(σ, φ, p, q) = φ satisfies x(t_1, σ, φ, p, q) = 0. The information pattern of our game can be described as follows: For any quarry control q,
 (i) there exists (that is, "the pursuer can choose") a pursuer control p such that for each s ∈ [σ, t] the value of p(s) depends only on q(s) (and of course on φ, D, L);
 (ii) the pair of controls p, q steers φ to 0 ∈ E^n;
 (iii) this is done in minimum time.
Associated with the game (10.4.1) is a linear control system
  d/dt D(t, x_t) = L(t, x_t) − v(t),  t ≥ σ,   (10.4.2)
where v(t) ∈ V(t). Here D and L are as given in (10.4.1), but the control constraint set is defined by

  V = P ⊖ Q = { x : x + Q ⊆ P },   (10.4.3)

where ⊖ denotes the Pontryagin difference of P and Q. It is important to observe that we can define

  p(t) = B(t)u(t),  q(t) = C(t)v(t),

where B is an n × m matrix function and C is an n × r matrix function, provided we assume u(t) ∈ E^m, v(t) ∈ E^r with the constraint sets P_1 ⊆ E^m and Q_1 ⊆ E^r. Viewed in this way, there is nothing in (10.4.1) that requires the control functions to have the same dimension as the state space. The same comments are valid for (10.4.2).
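For intervals (and, coordinatewise, for axis-aligned boxes) the Pontryagin difference (10.4.3) has a closed form: [p_lo, p_hi] ⊖ [q_lo, q_hi] = [p_lo − q_lo, p_hi − q_hi] whenever that interval is nonempty. A sketch, with sets of our own illustrative choosing:

```python
# Pontryagin difference V = P ⊖ Q = {x : x + Q ⊆ P} for closed intervals
# P = [p_lo, p_hi], Q = [q_lo, q_hi].  x + Q ⊆ P iff x + q_lo >= p_lo and
# x + q_hi <= p_hi, i.e. x in [p_lo - q_lo, p_hi - q_hi].

def pontryagin_diff(P, Q):
    lo = P[0] - Q[0]
    hi = P[1] - Q[1]
    return (lo, hi) if lo <= hi else None   # None: the difference is empty

V = pontryagin_diff((-2.0, 2.0), (-0.5, 0.5))
print(V)   # (-1.5, 1.5)
```

The shrinkage of P by Q reflects the game-theoretic meaning of V: whatever value the quarry chooses in Q, a pursuer value in P remains available to cancel it.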
Definition 10.4.1: A point φ ∈ C is said to be in position to win in time t_1 > σ if, for any quarry control q, there is a pursuer control p subject to the information pattern (i)–(iii) in the introduction.
Theorem 10.4.1 Assume 0 ∈ Q and P is compact. The game (10.4.1) is equivalent to the associated time-optimal control problem (10.4.2), where V is given by

  V(t) = (P + ker U(t_1, t)) ⊖ Q,  for some t_1.   (10.4.4)

Furthermore,

  f(q, t) = v(t) + q  modulo ker U(t_1, t)

(for all q ∈ Q, t ∈ [σ, t_1]) can be used to obtain a suitable strategy from v ∈ L_∞([σ, t_1], V). In detail, φ is in position to win in time t_1 if and only if φ can be steered to 0 in time t_1 within the control system (10.4.2), and the corresponding minimum times coincide.

Proof: Assume φ ∈ C is in position to win at time t_1. Then, from the definition, given any quarry control q : [σ, t_1] → Q, there exists a mapping f : E^n × [σ, t_1] → E^n with values p(t) = f(q(t), t), such that t ↦ p(t) is integrable, p(t) ∈ P, and x(t_1, σ, φ, p, q) = 0; that is,

  x(t_1, φ, 0, 0) = ∫_σ^{t_1} U(t_1, s)( f(q(s), s) − q(s) ) ds,   (10.4.5)

where x(t_1, φ, 0, 0) is the solution of the homogeneous equation (10.1.7). Because 0 ∈ Q, we can consider the quarry control to be 0. Then

  x(t_1, φ, 0, 0) = ∫_σ^{t_1} U(t_1, s) v(s) ds,   (10.4.6)

where v(s) = f(0, s). Now take any point q ∈ Q and a time t ∈ [σ, t_1], and consider the piecewise constant quarry control q_0,

  q_0(s) = q for σ ≤ s ≤ t,  q_0(s) = 0 for t < s ≤ t_1.
Apply (10.4.5) to obtain

  x(t_1, φ, 0, 0) = ∫_σ^t U(t_1, s)( f(q, s) − q ) ds + ∫_t^{t_1} U(t_1, s) v(s) ds.   (10.4.7)
On subtracting (10.4.7) from (10.4.6),

  ∫_σ^t U(t_1, s)( v(s) + q − f(q, s) ) ds = 0

for all t ∈ [σ, t_1]. Observe that the integrand is independent of t, so that by differentiation

  U(t_1, t)( v(t) + q − f(q, t) ) = 0  almost everywhere.

Use the kernel to interpret this as

  v(t) + q ∈ f(q, t) + ker U(t_1, t)  almost everywhere t ∈ [σ, t_1].   (10.4.8)

Since f has values in P,

  v(t) + q ∈ P + ker U(t_1, t).

From the continuity of solutions of (10.1.1) with respect to initial conditions, ker U(t_1, t) is closed. Hence P + ker U(t_1, t) is closed. Because of this, Hájek's Lemma [11, p. 59] yields

  v(t) + Q ⊆ P + ker U(t_1, t)  almost everywhere,

or

  v(t) ∈ (P + ker U(t_1, t)) ⊖ Q = V(t)  almost everywhere.

Hence v ∈ L_∞([σ, t_1], V), where V is as defined in (10.4.4). It now follows that v is an admissible control for (10.4.2), and hence, from (10.4.6),

  0 = x(t_1, φ, 0, 0) − ∫_σ^{t_1} U(t_1, s) v(s) ds.

Observe that x_σ(σ, φ, 0, 0) = φ, so that x(σ, φ, 0, 0) is a solution of (10.1.2). Note that (10.4.8) proves the last assertion of the theorem. Hence v steers φ to 0 in time t_1. Next, v and 0 are pursuer and quarry controls (0 ∈ Q, P ⊖ Q ⊆ P); for this choice the dynamical equations of the game and of the control system coincide, and so do their solutions. Hence the optimality problems are the same.
Conversely, let an admissible control v steer φ to 0 at t_1 within the control system (10.4.2). To show that φ is in position to win at t_1, take any quarry control q. Then

  0 = x(t_1, φ, 0, 0) − ∫_σ^{t_1} U(t_1, s) v(s) ds,   (10.4.9)

where v(s) ∈ V(s) is such that v(s) + q ∈ P + ker U(t_1, s). We now construct a pursuer-control strategy as follows: by Filippov's Lemma (see the form in [11, p. 119]) there exist measurability-preserving maps

  f : Q × [σ, t_1] → P,  λ : Q × [σ, t_1] → ker U(t_1, s)

for each σ ≤ s ≤ t_1, with λ(q(s), s) ∈ ker U(t_1, s), such that

  v(s) + q(s) = f(q(s), s) + λ(q(s), s).

Since f(q, s) ∈ P, a compact set, we have f ∈ L_∞([σ, t_1], P). We now verify that, against any quarry control q, the f thus constructed will force φ to 0 ∈ E^n. Indeed, if q ∈ L_∞([σ, t_1], Q), then f − q = v − λ, so that the solution of (10.4.1) at time t_1 with the pair f, q and initial function φ is

  x(t_1, φ, f, q) = x(t_1, φ, 0, 0) − ∫_σ^{t_1} U(t_1, s) v(s) ds + ∫_σ^{t_1} U(t_1, s) λ(q(s), s) ds = 0,

from (10.4.9) and the definition of λ. Hence on taking f and q as controls we find again that the dynamical equations coincide, so that steering to 0 in time t_1 and the minimal times are preserved. This completes the proof.

For more general targets
G : [σ, ∞) → Γ^n that are continuous, we do not have the duality of Theorem 10.4.1. Instead we have the following:
Theorem 10.4.2 Let the pursuer-constraint set P be compact and let G : [σ, ∞) → Γ^n be continuous. An initial position φ ∈ C can be forced to a target G (using the usual information pattern) at time t_1 within the game (10.4.1) whenever, in the associated control system (10.4.2), φ can be steered to the target in time t_1. Furthermore,

  f(q, t) = v(t) + q  modulo ker U(t_1, t)

determines a control strategy for (10.4.1) that counters any quarry action q ∈ L_∞([σ, t_1], Q), where v is an admissible control for (10.4.2).
Proof: Assume that in (10.4.2) φ can be steered to G in time t_1. Then there exists v ∈ L_∞([σ, t_1], V) such that the solution x = x(σ, φ, v) of (10.4.2) satisfies

  x(t_1, σ, φ, v) ∈ G(t_1).   (10.4.10)

Because v : [σ, t_1] → V,

  v(s) + q ∈ P + ker U(t_1, s)  for s ∈ [σ, t_1] and every q ∈ Q.

With v fixed, we now construct a pursuer-control strategy f by applying Filippov's Lemma. We obtain measurable functions

  f : Q × [σ, t_1] → P,  λ : Q × [σ, t_1] → ker U(t_1, s),  λ(q(s), s) ∈ ker U(t_1, s),

such that

  v(s) + q(s) = f(q(s), s) + λ(q(s), s).

Just as in Theorem 10.4.1, f ∈ L_∞([σ, t_1], P). For any quarry action q, f − q = v − λ, so that the solution of (10.4.1) with this f and q and initial data φ satisfies

  x(t_1, φ, f, q) = x(t_1, σ, φ, v) ∈ G(t_1)
394
Stability and Time-Optimal Control of Hereditary Systems
by (10.4.10). This proves the theorem. We now examine the quantitative properties of P and Q from a close examination of Theorem 10.4.1.
Definition 10.4.2: Consider a mapping T : Q × E → P, where T(q, t) is a point of P whenever q is a point of Q and t ∈ E. Let ξ be the induced mapping on the collection of functions q(·) with values in Q, defined by ξ[q](t) = T(q(t), t). Then ξ is called a stroboscopic strategy if it preserves measurability. We describe the following example: Suppose t → u(t) is given in advance. Then

ξ[q](t) = q(t) + u(t)

is a stroboscopic strategy if u is measurable and P ⊃ Q + u(t) for all t.
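The pointwise character of a stroboscopic strategy can be sketched in code. The toy below (scalar-valued for simplicity; the helper `make_stroboscopic` is our own illustrative construction, not an object from the text) shows a response rule whose value at time t depends only on the quarry's value at t:

```python
# Toy sketch of the stroboscopic strategy xi[q](t) = q(t) + u(t),
# where u is fixed in advance. The response at time t uses only q(t).
def make_stroboscopic(u):
    """Return the induced map xi on quarry functions q(.)."""
    return lambda q: (lambda t: q(t) + u(t))

xi = make_stroboscopic(lambda t: 0.5)   # u(t) = 0.5, chosen in advance
response = xi(lambda t: -0.25)          # quarry plays q(t) = -0.25
print(response(1.7))                    # 0.25
```

Since ξ acts pointwise through a fixed measurable rule, measurable quarry functions are mapped to measurable pursuer controls, which is exactly the measurability preservation required by Definition 10.4.2.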
Theorem 10.4.3 In (10.4.1), assume that 0 ∈ Q, P is compact, and P, Q are nonvoid, convex, and symmetric. Then the set inclusion

P + ker U(t1, t) ⊃ Q,  σ ≤ t ≤ t1,  (10.4.11)

is necessary and sufficient for the presence of initial functions φ that can be controlled to zero in some time t1 when the composite system (10.4.1) uses a stroboscopic strategy. The inclusion

Int(P + ker U(t1, t)) ⊃ Q,  σ ≤ t ≤ t1,  (10.4.12)
is necessary and sufficient for the domain of null controllability of (10.4.1) to have zero in its interior.

Proof: Theorem 10.4.2 states that the set of functions φ that can be driven to zero at some time t1 using the dynamics (10.4.1) is the same as the set of initial functions that can be controlled to zero when System (10.4.2) is used. Because

V(t) = (P + ker U(t1, t)) ∸ Q,

the Pontryagin difference, is the control set for (10.4.2), a set which is convex and symmetric, we deduce that V(t) is nonvoid if and only if 0 ∈ V(t), i.e.,

P + ker U(t1, t) ⊃ Q,  σ ≤ t ≤ t1.
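The Pontryagin difference that defines the control set V(t) can be computed explicitly for symmetric coordinate boxes. A minimal finite-dimensional sketch (the helper `pontryagin_diff_box` and the box data are our own illustrative assumptions, not objects from the text):

```python
# Pontryagin difference of symmetric boxes: P -* Q = {v : v + Q ⊆ P}.
# For P = prod [-p_i, p_i] and Q = prod [-q_i, q_i] it is the box of
# half-widths p_i - q_i, and it is empty as soon as some q_i > p_i.
def pontryagin_diff_box(p, q):
    """Half-widths of P -* Q, or None when the difference is empty."""
    diff = [pi - qi for pi, qi in zip(p, q)]
    return None if any(d < 0 for d in diff) else diff

print(pontryagin_diff_box([3.0, 2.0], [1.0, 0.5]))  # [2.0, 1.5]
print(pontryagin_diff_box([1.0], [2.0]))            # None: Q is too big
```

This mirrors the argument above: the difference is nonempty exactly when Q fits inside P, the finite-dimensional analogue of the inclusion (10.4.11).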
Thus there are initial positions that can be forced to zero stroboscopically at time t1 if and only if R(t1) is nonvoid. By Filippov's Lemma (see also Chukwu [17, pp. 437–439]), R(t1) is nonvoid if V(t) is nonvoid for σ ≤ t ≤ t1, so that P + ker U(t1, t) ⊃ Q. For the converse, if P + ker U(t1, t) ⊃ Q, then 0 ∈ P + ker U(t1, t), and 0 is an admissible control steering zero to zero. The second assertion is valid if and only if 0 ∈ Int V(t) = Int((P + ker U(t1, t)) ∸ Q). But 0 ∈ Int V(t) is a requirement for (10.4.2) to have zero in the interior of the Euclidean reachable set, and therefore in the domain of null controllability of (10.4.2), which by the duality Theorem 10.4.2 coincides with that of (10.4.1). For ordinary differential linear systems, Hájek has obtained the following results [11, pp. 60–87]: Consider the system

ẋ(t) = Ax − p + q,  p(t) ∈ P, q(t) ∈ Q,  (10.4.13)
in E^n, where A is an n × n matrix.

Proposition 10.4.1 In (10.4.13) assume that P, Q are nonvoid, convex, and symmetric. Then

P ⊃ Q  (10.4.14)

is a necessary and sufficient condition for the presence of initial positions that can be forced to zero stroboscopically in strictly positive time. If

Int P ⊃ Q,  (10.4.15)

then for every t > 0 the set of positions that can be forced to 0 stroboscopically at time t is a neighborhood of 0. If

G = {x : x ∈ E^n; Mx = Mb},  (10.4.16)

where M is an m × n matrix and b ∈ E^n, then

M(P − P) ⊇ M(Q − Q)  (10.4.17)

is necessary, and if Q is compact,

Int MP ⊇ MQ  (10.4.18)
is sufficient for the presence of positions that can be forced to G at strictly positive times. Suppose the pursuer's (firm's) control order is defined as that k such that

M A^j (P − P) = {0}  (10.4.19)

(the set consisting of the zero vector, not the empty set) holds for j = 0, ..., k − 2 but not for j = k − 1. Assume P is compact. A necessary condition for the presence of initial points that can be forced to G at t > 0 is that (1) the firm's (pursuer's) control order ≥ the solidarity control order; and, if k is the solidarity (quarry) control order, (2)

M A^{k−1}(P − P) ⊇ M A^{k−1}(Q − Q).  (10.4.20)

Suppose (i) P, Q are compact, convex, and symmetric; (ii) the firm's control order ≥ the solidarity control order; and (iii)

Int M A^{k−1} P ⊇ M A^{k−1} Q.  (10.4.21)

Then for sufficiently small t1 > 0, the set of initial endowments that can be forced to G at time t1 has a nonempty interior. Let (10.4.13) describe the growth of capital stock with an initial value x(0) = x0 and target 0, or target G in (10.4.16). If the target is 0, we define the firm's initiative (investment, consumption) as the control set P, and the set Q as solidarity (e.g., government taxation or subsidy). If G is the target, M A^{k−1} P is defined to be the firm's control initiative and M A^{k−1} Q is solidarity. With this terminology we deduce the following universal principle:
Principle 10.4.1 No initial value of capital stock can be controlled to the target unless the firm's initiative contains solidarity as a subset. To ensure growth of any initial value of capital stock to a target that is sufficiently close, the firm's initiative must dominate solidarity. These principles provide a broad policy prescription for national economies, which will be explored in a subsequent communication. Fundamental Principle 10.4.1 emerges from studies of linear systems. Consideration of the general nonlinear dynamics in Theorem 3.2 of [15] shows that the solidarity assumption is the generalization of Fundamental Principle 10.4.1.
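The control order of (10.4.19) can be found mechanically: it is the least k with M A^{k−1}(P − P) ≠ {0}. A small pure-Python sketch, assuming the columns of a matrix G span P − P (all matrices below are hypothetical illustrations):

```python
# Compute the control order: M A^j G = 0 for j = 0,...,k-2 but M A^{k-1} G != 0.
def mat_mul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_zero(M, eps=1e-12):
    return all(abs(x) <= eps for row in M for x in row)

def control_order(M, A, G, jmax=10):
    X = G                      # X holds A^j G
    for j in range(jmax):
        if not is_zero(mat_mul(M, X)):
            return j + 1       # first j with M A^j G != 0 gives k = j + 1
        X = mat_mul(A, X)
    return None                # M A^j (P - P) = {0} for all tested j

M = [[0.0, 1.0]]               # target matrix: observes the second coordinate
A = [[0.0, 0.0], [1.0, 0.0]]   # dynamics: feeds coordinate 1 into coordinate 2
G = [[1.0], [0.0]]             # P - P spans the first coordinate axis
print(control_order(M, A, G))  # 2: M G = 0 but M A G != 0
```

In the example the firm's effort is invisible to the target at order zero (MG = 0) but becomes visible after one application of the dynamics, so the control order is 2.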
10.5 Applications and Economic Growth

From the discussions of Section 1.8 that yielded (1.8.5), and the insight of (1.8.4), it is not unreasonable to postulate that the dynamics of the capital stock functions of n firms in a region may be described by the equation

ẋ_i(t) − A_i ẋ_i(t − h) = L_i(t, x_{it}) − p_i(t) + q_i(t),  (10.5.1)
where p_i(t) = B_i(t)u_i(t) and q_i(t) = C_i(t)w_i(t); here q_i(t) is the solidarity function, the effect of government intervention (taxation, subsidy) on economic growth. Equation (10.5.1) describes the net capital formation of the ith subsystem. The effects of government acts are measured locally. We now interpret Theorem 10.4.2. Let φ ∈ C be any initial capital endowment function and G the desired target, consisting of the range of values of capital desired at time t1. Let q(t) represent the effects (measured locally at the firm) of government economic and other policies. The firm can grow from φ to G in time t1 using its investment and other strategies p(t) if and only if the control system

ẋ_i(t) − A_i ẋ_i(t − h) = L_i(t, x_{it}) − v(t)  (10.5.2)
can grow from φ to G at time t1. The control investment strategy is f(q, t) = v(t) + q modulo ker U(t1, t), which reacts to any government action q, where v is an admissible control for (10.5.2). If Q is the totality of "government power", if it is permissible for government not to intervene (0 ∈ Q), and if the firm's investment capacity P is limited and small (compact), then the optimal-control strategy is constrained to lie in V, defined by V(t) = (P + ker U(t1, t)) ∸ Q. The second universal principle of control, on the limitation of government power, is then valid:

Q ⊂ Int(P + ker U(t1, t)).

Principle 10.4.2 To guarantee economic growth from any initial endowment to a target, it is necessary that the firm's capacity for investment and its internal power for waste dominate whatever government can do. Government intervention is needed, but it should not be too big.

We now consider System (10.4.1) very carefully. First, consider the system

(d/dt) D(t, x_t) = L(t, x_t) − p(t),  p(t) ∈ P.  (10.5.3)
If (10.5.3) is controllable on [σ, t1] with p(t) ∈ P, then

0 ∈ Int 𝒜(t1),  (10.5.4a)

where 𝒜(t1) = {x(p)(t1) : p(t) ∈ P}. If Q ⊂ P or Q ⊂ Int P, System (10.4.1) is such that its attainable set A(t) satisfies

0 ∈ Int 𝒜(t1) ⊂ A(t),  (10.5.4b)

provided (10.5.3) is controllable. Thus with −p(t) = B1(t)u(t) and q(t) = B2u(t), for example, the system (10.5.5) is made controllable. It is possible that the intervention of q may make matters worse. Indeed, even if

(d/dt) D(t, x_t) = L(t, x_t) + B1 u(t)  (10.5.6)

is controllable, the composite system (10.4.1) may not be controllable. A "proper amount" of q is needed. We can formulate an economic interpretation with the following remarks: To ensure growth from an initial endowment to the target we may assume (10.5.5) is Euclidean controllable. Hence matters should be so arranged that the solidarity function q brought to bear on the isolated system ensures controllability. These observations are formalized in the following principle:
Principle 10.5.3 If an isolated system is not proper and is not locally controllable with constraints, the composite system may be made proper and locally controllable provided there is an external source of power or initiative q available to enforce controllability and proper behavior. It is possible that the intervention of solidarity can make matters worse; only a "proper" amount is needed. There is at present no theorem stating that the interconnected system is well behaved when it misbehaves in isolation and there is no compensating external solidarity q. We observe that the intervention of q can make matters worse. If (10.5.4a) holds and we consider (10.5.1) with q(t) ≡ q0 ≠ 0, then 0 need not be in 𝒜(t), the attainable set of (10.4.1), and controllability may fail. In this case the solidarity function is not flexible. This seems to be the case in centralized economies.
Principle 10.5.4 If the solidarity function is inflexible and rigid, or if the isolated systems' controlling initiatives are ignored in the construction of a nontrivial solidarity function, the interconnected system will fail to be proper and locally null controllable.

We have isolated null controllability in E^n as our objective in the analysis of the growth of capital stock. Though a zero target at the final time seems artificial, the theory incorporates nontrivial targets. Indeed, if x1 is a nontrivial arbitrary target and y(t) = y(t, φ, u0) is any solution of the controlled system with y_σ = φ and u0 admissible such that y(t1) = x1, one can equivalently study the null controllability of the shifted system in the variable z, where z_σ = 0, so that x_σ = y_σ = φ and x(t) = z(t) + y(t). If we can show that there is a neighborhood O of the origin in z-space such that every initial φ ∈ O can be brought to z(t1, σ, φ, u0) = 0 by some admissible control at time t1, then z(t1, σ, φ, u0) = x(t1, φ, u) − y(t1) = 0, so that x(t1, φ, u) = y(t1) = x1.
The remarks we have made on the consequences and insights of Theorems 10.4.1 and 10.4.2 justify a qualitative description of the solidarity function q and its constraint set Q. How big should it be? This q can be viewed as a control disturbance, and we can then study System (10.4.1).
10.6 Optimal Control Theory of Linear Neutral Systems

In this section we study the optimal control problem of the system

(d/dt)[x(t) − Σ_{j=1}^{m} A_{−1j} x(t − h_j)] = A_0 x(t) + Σ_{j=1}^{m} A_j x(t − h_j) + B(t)u(t),  t > 0,  (10.6.1)

x(0) = g^0 ∈ E^n,  x(t) = g^1(t),  t ∈ [−h, 0),  g^1 ∈ C.
Define the fundamental matrix W of (10.6.1) by

W(t)g^0 = x(t, 0, g^0, 0),  g^0 ∈ E^n,

where x(t, σ, g^0, g^1) is the solution of (10.6.1). Hence W(t) is the unique solution of

(d/dt)[W(t) − Σ_{j=1}^{m} χ(t − h_j) A_{−1j} W(t − h_j)] = A_0 W(t) + Σ_{j=1}^{m} χ(t − h_j) A_j W(t − h_j),  (10.6.2)

W(0) = I, W(s) = 0 for s < 0, where χ(s) = 0 if s < 0 and χ(s) = I if s ≥ 0, I the identity.

Proposition 10.6.1 The variation of constants formula for (10.6.1) is

x(t, u) = x(t, 0, g^0, g^1) + ∫_0^t W(t − s)B(s)u(s) ds,

where x(t, 0, g^0, g^1) is the solution of (10.6.1) with u ≡ 0.
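As a sanity check, in the delay-free special case A_{−1j} = A_j = 0 with n = m = 1, the formula of Proposition 10.6.1 reduces to the classical x(t) = e^{A_0 t}g^0 + ∫_0^t e^{A_0(t−s)}B u(s) ds, which can be compared against direct numerical integration (all numbers below are illustrative):

```python
import math

# Delay-free scalar case of (10.6.1): x' = a x + b u(t), so W(t) = exp(a t).
a, b, x0, T, N = -0.8, 1.0, 2.0, 1.0, 200_000
dt = T / N
u = lambda t: math.sin(t)

# 1) Direct Euler integration of the state equation.
x = x0
for k in range(N):
    x += dt * (a * x + b * u(k * dt))

# 2) Variation of constants: x(T) = W(T) x0 + integral of W(T-s) B u(s) ds.
integral = sum(math.exp(a * (T - (k + 0.5) * dt)) * b * u((k + 0.5) * dt) * dt
               for k in range(N))
voc = math.exp(a * T) * x0 + integral

print(abs(x - voc) < 1e-3)  # True: the two computations agree
```

With delays present the same formula holds, but W must itself be generated by (10.6.2) rather than by a matrix exponential.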
We study the optimal control problem and the adjoint system. Consider (10.6.1), where

u ∈ U_ad ⊂ L_p([0, T], E^m),  p ∈ [1, ∞),  B ∈ L∞([0, T], E^{n×m}).

Assume U_ad is a closed and convex set in L_p. We denote the solution of (10.6.1) by x(t, u). Let the integral cost function be given by

J(u) = φ0(x(T)) + ∫_0^T [f0(x(t), t) + k0(u(t), t)] dt.  (10.6.1b)

We study the following problems:
P1: Find a control u ∈ U_ad that minimizes the cost J subject to (10.6.1).
P2: Find optimality conditions for an optimal pair (u*, x*(u*)) ∈ U_ad × C([0, T], E^n).
We shall prove the existence of optimal controls for P1, and solve P2 by deriving necessary optimality conditions. Let H be a compact target set in E^n. Suppose

U0 = {u ∈ U_ad : x(u, t) ∈ H for some t ∈ [0, T]},

and suppose U0 ≠ ∅. This is the constrained-controllability assumption. We now formulate the time-optimal problem.
P3: Find a control u* ∈ U0 such that t*(u*) ≤ t(u) for all u ∈ U0, subject to (10.6.1). The number t*(u*) is the first time x(t*, u*) ∈ H, i.e., t* is the optimal time.

We now consider the adjoint system. Let q0 ∈ (E^n)* be a row vector and q1* ∈ L_p([0, T], (E^n)*).
The adjoint system for (10.6.1) is

ẏ(t) + Σ_{j=1}^{m} ẏ(t + h_j)A_{−1j} = −y(t)A_0 − Σ_{j=1}^{m} y(t + h_j)A_j + q1*(t),  a.e. t ∈ I,

y(T) = −q0,  y(s) = 0 a.e. s ∈ [T, T + h].  (10.6.4a)

The solution of the adjoint equation is given by the adjoint state

y(t) = W*(T − t)(−q0) + ∫_t^T W*(s − t)(−q1*(s)) ds,  t ∈ [0, T],

where W*(t) is the adjoint of W(t), t ∈ [0, T], and is the fundamental matrix solution of (10.6.4a), which is unique. We note that

Y(s, t) ≡ W*(t − s) a.e. in s,

where W(t) is the fundamental solution of (10.6.4a). Thus

Y(s, t) = I + ∫_s^t d_α Y(α, t) µ(α, s) − ∫_s^t Y(α, t) η(α, s) dα,  s ∈ [σ, t],

Y(t, t) = I,  Y(s, t) = 0 for s > t,

where η is determined by

∫_{−h}^{0} d_s η(s) x(t + s) = Σ_{j=1}^{m} A_j x(t − h_j),

and µ corresponds in the same way to the neutral terms A_{−1j}.
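In the delay-free special case (A_{−1j} = A_j = 0, so that W(t) = e^{A_0 t}), one can verify directly that the stated adjoint state solves the adjoint equation; treating y as a row vector and using (d/dt)W(s − t) = −W(s − t)A_0 and W(0) = I:

```latex
\begin{aligned}
y(t) &= -q_0\,W(T-t) \;-\; \int_t^T q_1^*(s)\,W(s-t)\,ds,\\
\dot y(t) &= -q_0\,\tfrac{d}{dt}W(T-t) \;+\; q_1^*(t)\,W(0)
            \;-\;\int_t^T q_1^*(s)\,\tfrac{d}{dt}W(s-t)\,ds\\
          &= \Big(q_0\,W(T-t) + \int_t^T q_1^*(s)\,W(s-t)\,ds\Big)A_0 \;+\; q_1^*(t)
           \;=\; -\,y(t)A_0 + q_1^*(t),
\end{aligned}
```

with y(T) = −q_0, which is the delay-free form of (10.6.4a).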
We next state conditions for the existence of optimal controls for problem P1.

Theorem 10.6.1 Assume that:
(i) φ0 : E^n → E is continuous and convex.
(ii) f0 : E^n × [0, T] → E is measurable in t for each x ∈ E^n, and continuous and convex in x ∈ E^n for a.e. t ∈ I; assume further that
for each bounded set K ⊂ E^n there exists a measurable function M_K ∈ L1([0, T], E) such that |f0(x, t)| ≤ M_K(t) for x ∈ K, a.e. t ∈ I.
(iii) k0 : E^m × I → E is such that for any u ∈ U_ad, k0(u(t), t) is integrable on I, and the functional Γ0 : U_ad → E defined by

Γ0(u) = ∫_0^T k0(u(t), t) dt

is continuous and convex.
(iv) U_ad is bounded.
Then there exists a control u0 ∈ U_ad that minimizes the cost J.
Proof: The proof is an easy adaptation of its infinite-dimensional analogue developed recently by Nakagiri [27, Theorem 4.1]. Let {u_n} be a minimizing sequence of controls for J and x_n the corresponding trajectory:

inf_{u ∈ U_ad} J = lim_{n→∞} J(u_n, x_n) = M0.
Since we assumed that U_ad is bounded, and since it is weakly closed, there are a subsequence (again denoted by {u_n}) and a u0 ∈ U_ad such that

u_n → u0 weakly in L_p(I; E^m).  (10.6.5)
Suppose x0 is the trajectory corresponding to u0. Let c be a row vector in E^{n*}, and let t ∈ I = [0, T] be fixed. Because the fundamental matrix W is such that W(t) = 0 if t < 0, we have

(x_n(t), c) = (x(t, 0, g^0, g^1), c) + ∫_0^t (u_n(s), B*(s)W*(t − s)c) ds.  (10.6.6)

Recall that B ∈ L∞(I, E^{n×m}) and W is, by Tadmor, piecewise analytic, so that

B*(·)W*(t − ·)c ∈ L2(I, E^m).
It now follows from (10.6.5) and (10.6.6) that

(x_n(t), c) → (x0(t), c) as n → ∞ for each t ∈ I,  (10.6.7)

that is,

x_n(t) → x0(t) weakly in E^n.  (10.6.8)

But a function that is continuous and convex is weakly lower semicontinuous. Therefore Assumption (i) and (10.6.8) imply that

lim inf_{n→∞} φ0(x_n(T)) ≥ φ0(x0(T)).  (10.6.9)

In the same way,

lim inf_{n→∞} f0(x_n(t), t) ≥ f0(x0(t), t), a.e. t ∈ I.  (10.6.10)
By standard arguments that use Hölder's inequality, the set

K = ∪{x_n(t) : t ∈ I, n = 1, 2, ...}

is bounded in E^n. Since there exists an m_K ∈ L1(I; ℝ) such that

|f0(x_n(t), t)| ≤ m_K(t), a.e. t ∈ I,  (10.6.11)

the Lebesgue–Fatou Lemma, applied to (10.6.10) and (10.6.11), yields

lim inf_{n→∞} ∫_0^T f0(x_n(t), t) dt ≥ ∫_0^T f0(x0(t), t) dt.  (10.6.12)
Similarly, since hypothesis (iii) is valid,

lim inf_{n→∞} Γ0(u_n) ≥ Γ0(u0).  (10.6.13)

On gathering the results (10.6.9), (10.6.12), and (10.6.13), we have

M0 = inf_{u ∈ U_ad} J ≥ lim inf_{n→∞} φ0(x_n(T)) + lim inf_{n→∞} ∫_0^T f0(x_n(t), t) dt + lim inf_{n→∞} Γ0(u_n)

≥ φ0(x0(T)) + ∫_0^T [f0(x0(t), t) + k0(u0(t), t)] dt = J(u0, x0) > −∞.

We have proved that M0 = J(u0, x0), i.e., the pair (u0, x0) is an optimal solution for J.
Remark: Note carefully that the set of admissible controls is bounded.

Theorem 10.6.2 For problem P2, assume that:
(i) φ0 : E^n → E is continuous and Gateaux differentiable, and the Gateaux derivative ∂φ0(x) ∈ E^{n*} for each x ∈ E^n;
(ii) f0 : E^n × I → E is measurable in t ∈ I for each x ∈ E^n and continuous in x ∈ E^n for a.e. t ∈ I, and (a) the value ∂1f0(x, t) is the Gateaux derivative of f0(x, t) in the first argument for (x, t) ∈ E^n × I, and (b) |∂1f0(x, t)| ≤ α2(t) + c2(|x|) for (x, t) ∈ E^n × I;
(iii) k0 : L2(I, E^m) × I → E is measurable in t for each u ∈ L2 and continuous and convex on L2 for a.e. t ∈ I; further, there exist functions ∂1k0 : L2 × I → E^{m*}, α3 ∈ L_p(I, E), and M4 > 0 such that (a) ∂1k0 is measurable in t for each u ∈ L2, continuous in u ∈ L2 for a.e. t ∈ I, and the value ∂1k0(u, t) is the Gateaux derivative of k0(u, t) in the first argument for (u, t) ∈ L2 × I, and (b) |∂1k0(u, t)| ≤ α3(t) + M4‖u‖2 for (u, t) ∈ L2 × I;
(iv) U_ad = {u ∈ L2(I, E^m) : ‖u‖2 ≤ a}.
Let (u, x) ∈ U_ad × C(I, E^n) be an optimal solution for J. Then the optimal control u is characterized by

u = −a Λ^{−1}K(u) / ‖Λ^{−1}K(u)‖2,

where Λ is the canonical isomorphism of L2(I, E^m) onto L2(I, E^m)*,
and y(t) satisfies the equations

ẏ(t) + Σ_{j=1}^{m} ẏ(t + h_j)A_{−1j} = −y(t)A_0 − Σ_{j=1}^{m} y(t + h_j)A_j + ∂1f0(x(t), t),  a.e. t ∈ I,

y(T) = −∂φ0(x(T)),  y(s) = 0, s ∈ (T, T + h].  (10.6.14)

Proof: Since the cost function is Gateaux differentiable, it follows from [17, p. 10] that the necessary optimality condition is the variational inequality

J′(u)(v − u) ≥ 0 for all v ∈ U_ad,  (10.6.15a)

when J is differentiable. Because of the hypothesis, and since Lebesgue's Dominated Convergence Theorem is valid, we have
J′(u)(v − u) = ∫_0^T (∂1f0(x(t), t), ∫_0^t W(t − s)B(s)(v(s) − u(s)) ds) dt + (∂φ0(x(T)), ∫_0^T W(T − s)B(s)(v(s) − u(s)) ds) + ∫_0^T (∂1k0(u(t), t), v(t) − u(t)) dt.  (10.6.15b)

All the integrands are well defined because of the hypothesis. The first term in (10.6.15b) can be transformed by using Fubini's Theorem:

∫_0^T (∂1f0(x(t), t), ∫_0^t W(t − s)B(s)(v(s) − u(s)) ds) dt
= ∫_0^T (v(s) − u(s), B*(s) ∫_s^T W*(t − s)∂1f0(x(t), t) dt) ds.  (10.6.16)
If we let

y(t) = −W*(T − t)∂φ0(x(T)) − ∫_t^T W*(s − t)∂1f0(x(s), s) ds,

then from (10.6.15a)–(10.6.16) the following inequality follows:

∫_0^T (v(t) − u(t), ∂1k0(u(t), t) − B*(t)y(t)) dt ≥ 0 for all v ∈ U_ad,
and this reduces to

∂1k0(u(t), t) − B*(t)y(t) = 0,  a.e. t ∈ I.  (10.6.17)
Since U_ad = {u ∈ L2(I, E^m) : ‖u‖ ≤ a} and our hypothesis is valid, we easily deduce that

u = −a Λ^{−1}K(u) / ‖Λ^{−1}K(u)‖2,

where Λ is the canonical isomorphism of L2(I, E^m) onto L2(I, E^m)* and

K(u)(t) = ∂1k0(u(t), t) − B*(t)y(t),  a.e. t ∈ I.

Example 10.6.1: Let U_ad = L2([0, T], E^m). The cost is

J1(u) = ½(Nx(T), x(T)) + ½ ∫_0^T [(M(t)x(t), x(t)) + (Q(t)u(t), u(t))] dt,

where N is an n × n matrix, M(·) ∈ L∞([0, T], E^{n×n}), Q ∈ L∞([0, T], E^{m×m}), and N, M(t), Q(t) are positive and symmetric for each t ∈ [0, T]. There is a constant c > 0 such that

(Q(t)u, u) ≥ c|u|^2 for all u ∈ E^m, t ∈ [0, T].
Thus Γ_Q is strongly continuous and strictly convex. Hence there exists a unique optimal control for J1, and the following is a consequence of Theorem 10.6.2:

Proposition 10.6.2 Consider the cost function in Example 10.6.1. There exists a unique optimal solution (u, x) for J1. The optimal control is given by

u(t) = Q^{−1}(t)B*(t)y(t),  a.e. t ∈ I,
where y(t) satisfies

ẏ(t) + Σ_{j=1}^{m} ẏ(t + h_j)A_{−1j} + y(t)A_0 + Σ_{j=1}^{m} y(t + h_j)A_j − M(t)x(t) = 0,  a.e. t ∈ I,

y(T) = −Nx(T),  y(s) = 0, a.e. s ∈ (T, T + h].
The proof follows from Theorem 10.6.2 and Condition 10.6.9.
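In scalar form the optimality condition (10.6.17) for Example 10.6.1 is immediate to evaluate: with k0(u, t) = ½Q(t)u², the condition Q(t)u(t) − B(t)y(t) = 0 gives u(t) = B(t)y(t)/Q(t). A minimal sketch with hypothetical scalar data:

```python
# Scalar version of u(t) = Q(t)^{-1} B*(t) y(t) from the optimality condition.
def optimal_control(Q, B, y):
    """Return t -> B(t) y(t) / Q(t); Q, B, y are scalar functions of t."""
    return lambda t: B(t) * y(t) / Q(t)

u = optimal_control(Q=lambda t: 2.0, B=lambda t: 1.0, y=lambda t: 3.0 * t)
print(u(2.0))  # 3.0
```

In the vector case the division becomes multiplication by Q^{−1}(t), which exists by the coercivity assumption (Q(t)u, u) ≥ c|u|².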
10.7 The Theory of Time-Optimal Control of Linear Neutral Systems

In this section we study the time-optimal problem: Minimize

J(t, x_t) = t  (10.7.0)

subject to

ẋ(t) − A_{−1}(t)ẋ(t − h) = A_0(t)x(t) + Σ_{j=1}^{N} A_j(t)x(t − h_j) + B(t)u(t),  (10.7.1)

where the A_j are analytic n × n matrix functions and B is an n × m analytic matrix function. The controls are constrained to lie in

U = {u measurable : u(t) ∈ E^m, |u_j(t)| ≤ 1 a.e., j = 1, ..., m}.  (10.7.3)

For conditions for the existence of analytic solutions of

ẋ(t) − A_{−1}ẋ(t − h) = A_0 x(t) + Σ_{j=1}^{N} A_j x(t − h_j),  (10.7.4)
see Tadmor [29]. If we designate by T(t, σ), t ≥ σ, the strongly continuous semigroup of linear transformations defined by the solutions of (10.7.4), so that T(t, σ)φ = x_t(σ, φ, 0), then the solution x(σ, φ, u) of (10.7.1) with x_σ(σ, φ, u) = φ satisfies the relation

x_t(σ, φ, u) = T(t, σ)φ + ∫_σ^t X_t(·, s)B(s)u(s) ds,  (10.7.5)

where X is defined as follows. Let X_0(θ) = 0 for −h ≤ θ < 0 and X_0(0) = I. We are justified in writing

T(t, σ)X_0 = X_t(·, σ),

where X_t(·, s)(θ) = X(t + θ, s), θ ∈ [−h, 0], and X is the fundamental matrix solution of (10.7.4). Equivalently,

x(t) = W(t)g^0 + ∫_{−h}^{0} U_t(s)g^1(s) ds + ∫_0^t W(t − s)B(s)u(s) ds,  (10.7.6)

where

W(t)g^0 = x(t; σ, g^0, 0) if t ≥ 0, g^0 ∈ E^n; W(t)g^0 = 0 if t < 0,  (10.7.7)

and x(t, σ, g^0, g^1) is a solution of (10.6.1) with x(0) = g^0, x(s) = g^1(s) a.e. s ∈ [−h, 0], g^0 ∈ E^n, g^1 ∈ W_2^{(1)}.
The solution of the time-optimal problem. We now solve the time-optimal problem as formulated, by reinterpreting Nakagiri in Euclidean n-dimensional space. Thus in our case the state space is E^n, and the target is a fixed convex compact subset H of E^n with nonempty interior. Define

U_ad = {u ∈ L2(I, E^m) : u(t) ∈ C^m a.e. t ∈ I},

where C^m is the unit m-dimensional cube, i.e.,

C^m = {u ∈ E^m : |u_j| ≤ 1, j = 1, ..., m}.  (10.7.9)

Define

U0 = {u ∈ U_ad : x(t, u) is a solution of (10.7.1) and x(t, u) ∈ H for some t ∈ I}.  (10.7.10)
Theorem 10.7.1 Suppose the system is controllable, i.e., U0 ≠ ∅. Then there exists a time-optimal control for P3.

The proof is standard and can be modified from Nakagiri [27, p. 199]. We know from our existence result that if System (10.7.1) is controllable (null controllable) with constraints, an optimal control exists. To explore its properties further, we consider the possibility of a maximum principle and a bang-bang principle. We have indicated earlier that for retarded systems in finite-dimensional space, a bang-bang principle is false in function space [9, p. 60]. But if we restrict J to be a terminal value cost

J(u) = φ0(x(T)),

where φ0 satisfies the regularity conditions (R1), (R2) below, it can be demonstrated that under some conditions on the adjoint system a bang-bang principle is valid. We require the following assumptions:
R1: The function φ0 : E^n → E is continuous and Gateaux differentiable, and the Gateaux derivative Dφ0(x) ≠ 0 in E^{n*} for each x ∈ E^n.
R2: φ0 : E^n → E is continuous and convex.
Theorem 10.7.2 (Optimality Conditions) For the time-optimal problem with target H, where H is convex, closed, and nonempty, assume that t* is the optimal time. Then there exists a nonzero n-row vector q* ∈ E^{n*} such that

max_{v ∈ U_ad} ∫_0^{t*} (v(s), B*(s)W*(t* − s)q*) ds = ∫_0^{t*} (u(s), B*(s)W*(t* − s)q*) ds,  (10.7.11)

where (·, ·) denotes the inner product in E^n. If U_ad is as defined in (10.7.10) and U is the unit m-dimensional cube, then

max_{v ∈ U} (v, B*(t)W*(t* − t)q*) = (u(t), B*(t)W*(t* − t)q*),  a.e. t ∈ [0, t*].  (10.7.12)

Consider the adjoint system

ẏ(s) + ẏ(s + h)A_{−1}(s + h) = −y(s)A_0(s) − Σ_{j=1}^{N} y(s + h_j)A_j(s + h_j) + q1*(s),  a.e. s ∈ I,  (10.7.13)

y(T) = −q0*,  y(s) = 0, a.e. s ∈ (T, T + h).

It is said to be regular (or proper) if, whenever there is a set of positive measure I1 ⊂ I such that
m(I1) > 0 and y(t; q0*, 0) = 0 for all t ∈ I1, then q0* = 0 ∈ E^{n*}. Systems (10.7.4) that are pointwise complete for all t > 0 are regular. Recall that (10.7.4) is pointwise complete if

x(t, ·, ·) : E^n × W_2^{(1)} → E^n is a surjection, a.e. t.
Theorem 10.7.3 Consider the optimal problem with cost J(u) = φ0(x(T)). Assume that the adjoint system is regular and that the matrix B*(t) is one-to-one a.e. t ∈ I. Suppose the Gateaux derivative ∂φ0(y) ≠ 0 in E^{n*} for all y in the reachable set R(t) = {y ∈ E^n : y = x(t, u), u ∈ U_ad}. Then the optimal control u(t) is bang-bang; that is, u(t) satisfies

u(t) ∈ ∂U(t),  a.e. t ∈ I,  (10.7.14)

where ∂U(t) denotes the boundary of U(t), the m-dimensional unit cube.
Proof: Because of our definition of J, the maximum principle is stated as

max_{v ∈ U(t)} (v, B*(t)y(t)) = (u(t), B*(t)y(t)),  a.e. t ∈ I,  (10.7.15)

where y(t) = y(t; ∂φ0(x(T)), 0) and x(t) is the trajectory corresponding to the optimal control u(t). It suffices to show that

B*(t)y(t) ≠ 0 in E^m, a.e. t ∈ I.  (10.7.16)

Suppose to the contrary that there is a set I1 of positive measure, m(I1) > 0, with B*(t)y(t) = 0 for all t ∈ I1. Since B*(t) is one-to-one and the system is regular, we have ∂φ0(x(T)) = 0. But since x(T) is in the attainable set, the condition ∂φ0(x(T)) = 0 is impossible. As a consequence of these results, we deduce the following solution of the time-optimal problem:
Theorem 10.7.4 Consider the optimal problem P3 with t* the optimal time. Let the target H be a closed convex subset of E^n with nonempty interior. Suppose the adjoint system is regular and B*(t) is one-to-one. Then the time-optimal control u(t) is bang-bang on I* = [0, t*], i.e., u(t) is one of the vertices of the unit m-dimensional cube. Suppose, in addition, that U(t) = {u ∈ E^m : |u − y(t)|_{E^m} ≤ v(t)}, t ∈ I, replaces the m-dimensional unit cube, and that T = t* is the optimal time. Then the optimal control is given by

u(t) = y(t) + v(t) Λ_{E^m}^{−1}(B*(t)z(t)) / |B*(t)z(t)|,

where z(t) = W*(t* − t)q* is the solution of the adjoint equation, t ∈ I*, and q* is as given in Theorem 10.7.3. Here Λ_{E^m} is the canonical isomorphism of E^m onto E^{m*}.
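The bang-bang law amounts to reading off component signs of the switching vector B*(t)W*(t* − t)q*. In the scalar, delay-free illustration below (so W(t) = e^{a t}; all data are hypothetical), the control sits at a vertex of the cube wherever the switching function is nonzero:

```python
import math

# u(t) = sgn(B W(t* - t) q*): the maximizer in (10.7.12) over |u| <= 1.
def bang_bang(a, B, q_star, t_star, t):
    switching = B * math.exp(a * (t_star - t)) * q_star
    return 1.0 if switching > 0 else -1.0 if switching < 0 else 0.0

print(bang_bang(a=0.5, B=1.0, q_star=-2.0, t_star=3.0, t=1.0))  # -1.0
```

Since e^{a(t*−t)} > 0, this scalar control never switches; genuine switching requires n ≥ 2, where components of W*(t* − t)q* can change sign on [0, t*].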
10.8 Existence Results

Definition 10.8.1: Let w_t ∈ C([−h, 0], E^n) be a target point function that is time varying. System (10.1.9) is controllable to the target if for each φ ∈ C there exist a t1 ≥ σ and an admissible control u ∈ L∞([σ, t1], C^m) such that the solution of Equation (10.1.9) satisfies x_{t1}(σ, φ, u) = w_{t1}.
Theorem 10.8.1 Assume that System (10.1.9) is controllable to the target. Then there exists an optimal control.

Proof: The proof uses the variation of constants formula for System (10.1.10a). Controllability to the target is equivalent to

x_{t1}(σ, φ, u) = w_{t1} for some t1,

that is, to

w_{t1} ∈ α(t1, σ),

where α(t1, σ) is the constrained attainable set of (10.1.9).
Let

t* = inf{t : w_t ∈ α(t, σ)}.

Then σ ≤ t* ≤ t1. There is a nonincreasing sequence of times t_n converging to t*, and a sequence of controls u^n ∈ L∞([σ, t1], C^m) with

w_{t_n} = x_{t_n}(σ, φ, u^n) ∈ α(t_n, σ).

Also, ‖x_{t_n}(σ, φ, u^n) − x_{t*}(σ, φ, u^n)‖ can be estimated by two summands. Because X_{t_n}(·, s)B(s)u^n(s) is integrable and [t*, t_n] is a finite interval, the first summand on the right-hand side of the inequality tends to zero as t_n → t*. We know from Henry [25] that X_{t_n}(·, s) → X_{t*}(·, s) in the uniform topology of C; hence by the bounded convergence theorem, the second summand also tends to zero as n → ∞. From the continuity of the solution in time and the continuity of the target, ‖w_{t*} − w_{t_n}‖ → 0 as t_n → t*. Hence w_{t*} = lim_{n→∞} y(t*, u^n). Because α(t*, σ) is closed and y(t*, u^n) ∈ α(t*, σ), we have w_{t*} = y(t*, u*) for some u* ∈ L∞([σ, t1], C^m), and by definition (t*, u*) is optimal.
A controllability assumption was made in Theorem 3.1 for System (10.1.9). When the target is the zero function, only the assumption of null controllability is required. To get conditions for this, we need two preliminary results and precise definitions. We work in the space W_2^{(1)}; the argument is also valid in C.

Definition 10.8.2: System (10.1.9) is controllable on [σ, t1], t1 > σ + h, if for each ψ, φ ∈ W_2^{(1)} there exists a control u ∈ L∞([σ, t1], E^m) such that the solution of (10.1.9) satisfies x_σ(σ, φ, u) = φ and x_{t1}(σ, φ, u) = ψ. If System (10.1.9) is controllable on each interval [σ, t1], t1 > σ + h, we simply say that it is controllable. It is null controllable on [σ, t1] if ψ ≡ 0 in the above definition.

Definition 10.8.3: System (10.1.9) is null controllable with constraints if for each φ ∈ W_2^{(1)} there exist a t1 ≥ σ and a u ∈ L∞([σ, t1], C^m) such that the solution x(σ, φ, u) of System (10.1.9) satisfies x_σ(σ, φ, u) = φ and x_{t1}(σ, φ, u) = 0.

Proposition 10.8.1 Suppose System (10.1.9) is null controllable on [σ, t1]. Then for each φ ∈ W_2^{(1)} there exists a bounded linear operator H : W_2^{(1)} → L∞([σ, t1], E^m) such that u = Hφ has the property that the solution x(σ, φ, Hφ) of System (10.1.9) satisfies

x_σ(σ, φ, Hφ) = φ,  x_{t1}(σ, φ, Hφ) = 0.
Proof: From the variation of constants formula (10.1.10),

x_t(σ, φ, u) = T(t, σ)φ + C(t, σ)φ + S(t, σ)u,

where

T(t, σ)φ = x_t(σ, φ, 0),

C(t, σ)φ = ∫_σ^t d_s{X_t(·, s)[G(σ)φ − G(s)x_s(σ, φ)]},

S(t, σ)u = ∫_σ^t X_t(·, s)B(s)u(s) ds,  t ∈ [σ, t1].

The null controllability of System (10.1.9) is equivalent to the following statement: for every φ ∈ W_2^{(1)} there exist a t1 and a u ∈ L∞([σ, t1], E^m) such that

T(t1, σ)φ + S(t1, σ)u + C(t1, σ)φ = 0,  t1 > σ + h.
This is in turn equivalent to

(T(t1, σ) + C(t1, σ))(W_2^{(1)}) ⊂ S(t1, σ)(L∞([σ, t1], E^m)).  (10.8.1)

The condition (10.8.1) is now valid by hypothesis. Denote by N the null space of S(t1, σ) and by N⊥ the orthogonal complement of N in L∞([σ, t1], E^m). Let

S0 : N⊥ → S(t1, σ)(L∞([σ, t1], E^m))

be the restriction of S(t1, σ) to N⊥. Then S0^{−1} exists and is linear, though not necessarily bounded, since S(t1, σ)(L∞([σ, t1], E^m)) is not necessarily closed. Define a mapping H : W_2^{(1)} → L∞([σ, t1], E^m) by

Hφ = −S0^{−1}[T(t1, σ)φ + C(t1, σ)φ].

Then

x_{t1}(σ, φ, Hφ)(θ) = x(σ, φ, Hφ)(t1 + θ)
= T(t1, σ)φ(θ) + C(t1, σ)φ(θ) + S(t1, σ)[−S0^{−1}(T(t1, σ)φ + C(t1, σ)φ)](θ) = 0,

for −h ≤ θ ≤ 0. Since u = Hφ ∈ L∞([σ, t1], E^m), we deduce that x_{t1}(σ, φ, u) = 0. We now prove the boundedness of H as follows: Let {φ_n} be a sequence converging in W_2^{(1)} such that {Hφ_n} converges in L∞([σ, t1], E^m), and let
φ = lim_{n→∞} φ_n,  u = lim_{n→∞} Hφ_n,  u_n = Hφ_n.

Since N⊥ is closed in L∞([σ, t1], E^m), u ∈ N⊥ and

T(t1, σ)φ + C(t1, σ)φ + S(t1, σ)u = lim_{n→∞} (T(t1, σ)φ_n + C(t1, σ)φ_n + S(t1, σ)u_n) = 0.

Thus,

u = −S0^{−1}[T(t1, σ)φ + C(t1, σ)φ] = Hφ.

By the Closed Graph Theorem, H is bounded. The proposition is proved.
Definition 10.8.4: System (10.1.9) is locally null controllable with constraints if for each φ ∈ O, where O is an open neighborhood of zero in W_2^{(1)}, there exist a finite t1 and a control u ∈ L∞([σ, t1], C^m) ⊂ U such that the solution x(σ, φ, u) of Equation (10.1.9) satisfies

x_σ(σ, φ, u) = φ,  x_{t1}(σ, φ, u) = 0.
Proposition 10.8.2 Suppose that System (10.1.9) is null controllable. Then System (10.1.9) is locally null controllable with constraints.

Proof: Because System (10.1.9) is null controllable, by Proposition 10.8.1 there exists a bounded linear operator

H : W_2^{(1)} → L∞([σ, t1], E^m)

such that for each φ ∈ W_2^{(1)} and control u = Hφ, the solution x(σ, φ, Hφ) of System (10.1.9) satisfies

x_σ(σ, φ, Hφ) = φ,  x_{t1}(σ, φ, Hφ) = 0.

Since H is a bounded linear map, it is continuous at 0 ∈ W_2^{(1)}. Therefore, for each open set V containing zero in L∞([σ, t1], E^m), there is an open neighborhood U of 0 ∈ W_2^{(1)} such that

H(U) ⊂ V.

Clearly L∞([σ, t1], C^m) has zero in its interior. We can choose V open and contained in L∞([σ, t1], C^m). For this particular choice there exists an open set O around zero in W_2^{(1)} such that every φ ∈ O ⊂ W_2^{(1)} can be driven to zero by a control u = Hφ ∈ L∞([σ, t1], C^m). Hence System (10.1.9) is locally null controllable with constraints.

Theorem 10.8.2 Suppose that in System (10.1.9), with state space W_2^{(1)} or C:
(i) System (10.1.9) is null controllable;
(ii) the solution x = 0 of (10.8.2) is exponentially stable.
Then System (10.1.9) is null controllable with constraints.

Proof: The first assumption yields an open ball O ⊂ W_2^{(1)} such that every initial φ ∈ O ⊂ W_2^{(1)} can be transferred to zero in some finite time t1 by controls u ∈ U.
By condition (ii), every solution of (10.1.9) with u = 0, that is, every solution of Equation (10.8.2), decays exponentially, so that
x_t(σ,φ,0) → 0 as t → ∞.
Therefore, there is a finite t₀ < ∞ such that ψ = x_{t₀}(σ,φ,0) ∈ O. With (t₀,ψ) as initial data, there exists t₁ > t₀ such that some control
u ∈ U = L_∞([t₀,t₁],Cᵐ)
gives a solution x(t₀,ψ,u) such that x_{t₀}(t₀,ψ,u) = ψ, x_{t₁}(t₀,ψ,u) = 0. Thus System (10.1.9) is null controllable with constraints: the control w = 0 on [σ,t₀], w = u on [t₀,t₁], is contained in U and does the transfer of φ to 0 in time t₁ < ∞.
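The two-stage construction in this proof (coast with u = 0 until exponential stability brings the state into the ball O, then apply the local steering control) can be sketched as follows; the decay constants k, α and the radius r are illustrative, not from the text.

```python
import math

def coast_time(phi_norm, r, k=1.0, alpha=1.0, sigma=0.0):
    """Time t0 by which the free solution is inside the ball of radius r,
    assuming a decay estimate ||x_t|| <= k exp(-alpha (t - sigma)) ||phi||
    (condition (ii): exponential stability of (10.8.2))."""
    if k * phi_norm <= r:
        return sigma
    return sigma + math.log(k * phi_norm / r) / alpha

def concatenated_control(u_local, t0):
    """The control w of the proof: w = 0 on [sigma, t0] (coast),
    w = u_local on [t0, t1] (steer); w stays in U because 0 is in U."""
    return lambda t: 0.0 if t < t0 else u_local(t)

t0 = coast_time(phi_norm=10.0, r=1.0, alpha=2.0)  # = ln(10)/2
w = concatenated_control(lambda t: 1.0, t0)
```

The concatenated control inherits the constraint |w| ≤ 1 because the coasting segment uses the admissible control 0.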
Definition 10.8.5: System (10.1.1) is said to be proper on [σ,t] if and only if 0 ∈ Int α(t,σ), t ≥ σ + h. It is proper if it is proper on every [σ,t], t > σ + h.
In the above definition, α(t,σ) is assumed to be a subset of C or of W_p^{(1)} if the controls are L_∞. More precisely, if we work on C or W_2^{(1)}, we use controls in L_∞([σ,t],Cᵐ) = U. However, if we are in W_p^{(1)}, then U_ad is a closed and bounded convex subset of L_p([σ,t₁],Eⁿ) with zero in its interior. The next result is stated for the space W_p^{(1)}. It is equally valid for W_2^{(1)} or C.
Theorem 10.8.3 System (10.1.1) is proper if and only if it is function-space controllable.

Proof: Suppose System (10.1.1) is controllable on [σ,t], t ≥ σ + h. Then
H : L_p([σ,t],Eⁿ) → W_p^{(1)},  Hu = x_t(σ,0,u),
is onto. Let B be an open ball containing 0 with B ⊂ U. Because H is a continuous linear transformation of L_p onto W_p^{(1)}, H is an open map. Hence H(B) is open and contains zero, so that 0 ∈ Int α(t,σ), and System (10.1.1) is proper.
Conversely, assume System (10.1.1) is proper, i.e., 0 ∈ Int α(t,σ). Because 0 ∈ Int α(t,σ) ⊂ Int A(t,σ), where A(t,σ) = {x_t(σ,0,u) : u ∈ L_p} is a subspace, this implies A(t,σ) = W_p^{(1)}, that is, function-space controllability.

For the linear system (10.1.9) we have proved that uniform asymptotic stability of System (10.8.2) and null controllability of System (10.1.9) suffice for constrained null controllability of System (10.1.9). It is clear that any test for controllability, though perhaps too strong, will guarantee null controllability. Indeed, we will now show that when D(t)x_t in System (10.1.1) is atomic at both 0 and -h, then null controllability and controllability are equivalent.

Proposition 10.8.3 System (10.1.1) is null controllable if and only if it is controllable, provided D is atomic at 0 and at -h.

Proof: It is given that D(t,φ) is atomic at 0 and at -h for (t,φ) ∈ E × C. By a theorem of Hale [9, p. 279] the operator T(t,σ) given by the solution x_t(σ,φ) = T(t,σ)φ of Equation (10.1.9) is a homeomorphism for t ≥ σ. As a consequence of similar arguments,
T(t,σ)W_2^{(1)} = W_2^{(1)},  t ≥ σ.
If System (10.1.1) is null controllable, then for each φ ∈ W_2^{(1)},
T(t₁,σ)φ + ∫_σ^{t₁} X_{t₁}(·,s)B(s)u(s) ds = 0.
With S(t₁,σ) : L_∞ → W_2^{(1)} defined by
S(t₁,σ)u = ∫_σ^{t₁} X_{t₁}(·,s)B(s)u(s) ds,
null controllability is equivalent to
T(t₁,σ)W_2^{(1)} ⊂ S(t₁,σ)(L_∞([σ,t₁],Eᵐ)).
Therefore W_2^{(1)} ⊂ S(t₁,σ)(L_∞([σ,t₁],Eᵐ)), so that S(t₁,σ) : L_∞ → W_2^{(1)} is a surjection. However, S(t₁,σ) is surjective if and only if System (10.1.1) is controllable. We have proved that null controllability implies controllability. The converse is trivially true: controllability always implies null controllability.
Corollary 10.8.1 The system
(d/dt)[x(t) - A₋₁x(t-h)] = A₀x(t) + A₁x(t-h) + Bu(t)   (10.8.3)
with det(A₋₁) ≠ 0 is null controllable if and only if
rank[Δ(λ), B] = n for all complex λ, and rank[λI - A₋₁, B] = n for all complex λ,
where Δ(λ) = λI - λA₋₁e^{-λh} - A₀ - A₁e^{-λh}.

Proof: D(t)x_t = x(t) - A₋₁x(t-h) is atomic at 0 and at -h if det(A₋₁) ≠ 0. The rank conditions suffice for controllability by a result of Salamon [18, p. 157], which we now state.

Theorem 10.8.4 System (10.8.3) is controllable in the state space W_p^{(1)}, 1 ≤ p ≤ ∞, if and only if rank[Δ(λ), B] = n for all complex λ and rank[λI - A₋₁, B] = n for all complex λ, where Δ(λ) = λI - λA₋₁e^{-λh} - A₀ - A₁e^{-λh}.
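These rank conditions can be spot-checked numerically. The condition rank[λI - A₋₁, B] = n can only fail at eigenvalues of A₋₁, so it suffices to test those; for rank[Δ(λ), B] = n the sketch below merely samples λ on a small grid (a complete verification must examine the zeros of the associated determinants). The matrices in the usage lines are illustrative, not taken from the text.

```python
import numpy as np

def char_matrix(lam, A_m1, A0, A1, h=1.0):
    """Delta(lambda) = lambda I - lambda A_{-1} e^{-lambda h} - A0 - A1 e^{-lambda h}."""
    n = A0.shape[0]
    e = np.exp(-lam * h)
    return lam * np.eye(n) - lam * e * A_m1 - A0 - e * A1

def rank_conditions(A_m1, A0, A1, B, h=1.0, samples=(0.0, 1.0, -1.0, 1j, -1j)):
    n = A0.shape[0]
    # rank [lambda I - A_{-1}, B] = n can only fail at eigenvalues of A_{-1}:
    cond_neutral = all(
        np.linalg.matrix_rank(np.hstack([lam * np.eye(n) - A_m1, B])) == n
        for lam in np.linalg.eigvals(A_m1))
    # rank [Delta(lambda), B] = n, spot-checked at the sample points only:
    cond_char = all(
        np.linalg.matrix_rank(np.hstack([char_matrix(lam, A_m1, A0, A1, h), B])) == n
        for lam in samples)
    return cond_char, cond_neutral

A_m1 = 0.5 * np.eye(2)           # det != 0, as the corollary requires
A0 = np.diag([-3.0, -1.0])
A1 = np.zeros((2, 2))
B = np.eye(2)                    # full rank, so both conditions hold
ok_char, ok_neutral = rank_conditions(A_m1, A0, A1, B)
```

When B is square and nonsingular, both conditions hold automatically, which is the situation exploited in Example 10.14.2 below.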
10.9 Necessary Conditions for Optimal Control
We now return to our original problem of hitting a continuously moving point target w_t ∈ C in minimum time. At the time of hitting, in System (10.1.9),
T(t,σ)φ + C(t,σ)φ + ∫_σ^t X_t(·,s)B(s)u(s) ds = w_t,
or equivalently,
w_t - T(t,σ)φ - C(t,σ)φ = ∫_σ^t X_t(·,s)B(s)u(s) ds
for some admissible u, where C(t,σ)φ = ∫_σ^t d_s[X_t(·,s)][g(σ)(φ) - g(s)(x_s)].
Thus, reaching w_t in time t corresponds to
w_t - T(t,σ)φ - C(t,σ)φ ≡ z(t) ∈ α(t,σ).
We shall prove shortly that if u* is the optimal control with optimal time t*, i.e., if u* is the control that is used to hit w_{t*} in minimum time t*, then
w_{t*} - T(t*,σ)φ - C(t*,σ)φ ≡ z(t*) ∈ ∂α(t*,σ),
that is, z(t*) is on the boundary of the constrained reachable set.
Theorem 10.9.1 Let u* be the optimal control with t* the minimum time. Then z(t*) ∈ ∂α(t*,σ).

Proof: Suppose u* is used to hit w_{t*} in time t*:
z(t*) = w_{t*} - T(t*,σ)φ - C(t*,σ)φ ∈ α(t*,σ).
Suppose z(t*) is not in the boundary of α(t*,σ):
z(t*) ∈ Int α(t*,σ),  t* > σ.
Then there is a ball B(z(t*),p) of radius p about z(t*) such that
B(z(t*),p) ⊂ α(t*,σ).
Because α(t,σ) is a continuous function of t, we can preserve the above inclusion for t near t* if we reduce the size of B(z(t*),p): there is a δ > 0 such that
B(z(t*),p/2) ⊂ α(t,σ),  t* - δ ≤ t ≤ t*.
Since z(t) is continuous in t, z(t) ∈ α(t,σ) for t* - δ ≤ t ≤ t*. This contradicts the optimality of t*. We are led to conclude that z(t*) ∈ ∂α(t*,σ).
Theorem 10.9.2 Let D and L satisfy the basic assumptions and Inequality (10.1.13). Suppose System (10.1.9) is controllable and the solution operator is a homeomorphism, and
z(t) = w_t - T(t,σ)φ - C(t,σ)φ.   (10.9.1)
If u* is an optimal control and t* the minimum time, then
z* = z(t*) ∈ ∂α(t*,σ)
if and only if u* is of the form
u*(t) = sgn[-y(t,t*)B(t)],  σ ≤ t ≤ t*,   (10.9.2)
where y(·,t*) is a nontrivial solution of the adjoint equation (10.1.6).

Proof: Because of Proposition 10.1.1, α(t,σ) is closed and convex. If u* is an optimal control and t* the minimum time, then by Theorem 10.9.1, z* ∈ ∂α(t*,σ). Also, from controllability and Theorem 10.8.3, 0 ∈ Int α(t*,σ).
By the Separation Theorem of Dunford and Schwartz [26, p. 418], there exists a ψ ∈ B₀ such that ψ ≠ 0 and
(ψ, p) ≤ (ψ, z*) for every p ∈ α(t*,σ).
It follows from this that
(ψ, ∫_σ^{t*} X_{t*}(·,s)B(s)u(s) ds) ≤ (ψ, ∫_σ^{t*} X_{t*}(·,s)B(s)u*(s) ds)
for every u ∈ L_∞([σ,t*],Cᵐ). On using Equation (10.1.4), we deduce that
(ψ, ∫_σ^{t*} X_{t*}(·,s)B(s)u(s) ds) = ∫_{-h}^0 [dψ(θ)] ∫_σ^{t*} [T(t*,s)X₀](θ)B(s)u(s) ds
= ∫_σ^{t*} {∫_{-h}^0 [dψ(θ)][T(t*,s)X₀](θ)} B(s)u(s) ds
= ∫_σ^{t*} -y(s,t*)B(s)u(s) ds,
where s → y(s,t*) is the solution of the adjoint equation (10.1.6). (See Henry [25].) Thus
∫_σ^{t*} [-y(s,t*)B(s)u(s)] ds ≤ ∫_σ^{t*} [-y(s,t*)B(s)u*(s)] ds   (10.9.3)
for every admissible u. We see from inequality (10.9.3) that u* is of the form
u*(s) = sgn[-y(s,t*)B(s)],   (10.9.4)
where y(s,t*) is the solution of the adjoint equation; u* maximizes the left-hand side of inequality (10.9.3) over admissible u. Thus, with t* fixed and u* of the form (10.9.4), the point z* is on the boundary of α(t*,σ) and (ψ,p) ≤ (ψ,z*) for every p ∈ α(t*,σ), ψ ∈ B₀.
In Theorem 10.9.2 we have the form of the optimal control for the time-optimal problem in the space of continuous functions C. We now show that this form is also valid in Eⁿ, without the controllability assumption. This is true because the Separation Theorem holds in Eⁿ for any closed convex set; that is, through each boundary point of a closed convex set in Eⁿ there is a support plane. We now state the following theorem:
Theorem 10.9.3 Let D and L satisfy the basic assumptions and Inequality (10.1.13). Let w(t) be a point function that is continuous in Eⁿ. Let
z(t) = w(t) - [T(t,σ)φ](0) - [C(t,σ)φ](0).   (10.9.5)
If u* is an optimal control and t* the minimum time, then z(t*) ∈ ∂R(t*,σ) ⊂ Eⁿ if and only if
u*_j(s) = sgn[-b*_j(s)y(s)] = sgn[-b*_j(s)[T*(s,t*)ψ](0)],
where ψ(0) ≠ 0 and y is a nontrivial solution of the adjoint equation.
10.10 Normal Systems
It is important to know when the necessary condition for optimal control in Theorems 10.9.2 or 10.9.3 uniquely determines an optimal control. This occurs when System (10.1.1) is normal in the sense to be made precise below. First observe that the necessary condition for optimal control in C states that
u*(s) = sgn[-B*(s)y(s)],  σ ≤ s ≤ t*,   (10.10.1)
where y(s) is a nontrivial solution of the adjoint equation. This condition states that
u*_j(s) = sgn[-b*_j(s)y(s)] = sgn[-b*_j(s)[T*(s,t*)ψ](0)] on [σ,t*],
for j = 1, ..., m, where b*_j(t) is the j-th row vector of B*(t). Define
S_j(ψ(0)) = {s : b*_j(s)y(s) = 0, s ∈ [σ,t*]} = {s : b*_j(s)[T*(s,t*)ψ](0) = 0, s ∈ [σ,t*]}.
Define
y_j(ψ) = {s : y(s,t*,ψ)b_j(s) = 0, s ∈ [σ,t*]},
g_j(ψ) = {s : g(s,t*,ψ)b_j(s) = 0, s ∈ [σ,t*]}.
Definition 10.10.1: We say that System (10.1.1) is Eⁿ-normal on [σ,t*] if y_j(ψ) has measure zero for each j = 1, ..., m and for each nontrivial ψ with ψ(0) ≠ 0. The system is Eⁿ-normal if it is normal on every interval [σ,t*]. Similar definitions can be made for C-normality if we replace y_j(ψ) with g_j(ψ) in the above definition.
Utilizing the ideas of Gabasov and Kirillova [12], we are led to a necessary and sufficient condition for normality that is easily computable: it is Theorem 10.3.1.
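Normality thus means that each component s → y(s,t*,ψ)b_j(s) of the index vanishes only on a set of measure zero, so the bang-bang control (10.10.1) switches at isolated zeros. A sketch that locates such switching times by detecting sign changes on a grid (the scalar cos s stands in for y(s)b_j(s); this example is ours, not from the text):

```python
import math

def switching_times(phi, a, b, n=10000):
    """Approximate the isolated zeros of s -> phi(s) on [a, b]: scan a uniform
    grid for sign changes and refine each one by linear interpolation."""
    ts = [a + (b - a) * k / n for k in range(n + 1)]
    zeros = []
    for t0, t1 in zip(ts, ts[1:]):
        f0, f1 = phi(t0), phi(t1)
        if f0 == 0.0:
            zeros.append(t0)
        elif f0 * f1 < 0.0:
            zeros.append(t0 - f0 * (t1 - t0) / (f1 - f0))
    return zeros

# illustrative index component y(s) b_j(s) = cos s on [0, 2*pi]
zs = switching_times(math.cos, 0.0, 2.0 * math.pi)  # zeros near pi/2 and 3*pi/2
```

If the system is not normal, a component may vanish on a whole subinterval, and no grid scan (or any local method) can single out one optimal control there.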
10.11 The Geometric Theory of Time-Optimal Control of Linear Neutral Systems
In this section we study the time-optimal problem: Minimize
J(t,x_t) = t   (10.7.0)
subject to:
ẋ(t) - A₋₁(t)ẋ(t-h) = A₀(t)x(t) + Σ_{j=1}^N A_j(t)x(t-h_j) + B(t)u(t),   (10.11.1)
where the A_j are analytic n × n matrix functions and B is an n × m analytic matrix function. The controls are constrained to lie in
U_ad = {u ∈ L_∞^loc([σ,∞),Eᵐ) : u(t) ∈ U a.e. t ∈ [σ,∞)},
where U ⊂ Eᵐ is compact and convex with 0 ∈ Int U.
Here L_∞^loc([σ,∞),Eᵐ) represents the space of measurable functions defined on each interval [σ,t], having values in Euclidean m-space Eᵐ and essentially bounded. We work in the space C = C([-h,0],Eⁿ) of continuous functions from [-h,0] into Eⁿ with the sup norm. For these two spaces, the appropriate control set is
U = {u measurable, u(t) ∈ Eᵐ, |u_j(t)| ≤ 1 a.e., j = 1, ..., m}.   (10.11.3)
In what follows, x_t ∈ C is defined by x_t(s) = x(t+s), s ∈ [-h,0]. For the existence, uniqueness, and continuous dependence on initial data of the solution x(·,σ,φ,u) of (10.11.1), see Hale [9, pp. 25, 301]; see also Henry [16]. For conditions for the existence of analytic solutions of
ẋ(t) - A₋₁ẋ(t-h) = A₀x(t) + Σ_{j=1}^N A_j x(t-h_j),   (10.11.4)
see Tadmor [29]. If we designate the strongly continuous semigroup of linear transformations defined by solutions of (10.11.4) by T(t,σ), t ≥ σ, so that
T(t,σ)φ = x_t(σ,φ,0),
then the solution x(σ,φ,u) of (10.11.1) with x_σ(σ,φ,u) = φ satisfies the variation-of-constants relation
x_t(σ,φ,u) = T(t,σ)φ + ∫_σ^t X_t(·,s)B(s)u(s) ds,
where X is the fundamental matrix solution of (10.11.4) and we are justified in writing
X_t(·,s)(θ) = X(t+θ,s),  θ ∈ [-h,0].

Definition 10.11.1: The reachable set at time t is defined by
α(t,σ) = {∫_σ^t X_t(·,s)B(s)u(s) ds : u ∈ U},
and it is a subset of C. The following results are easily proved:
Theorem 10.11.1 The following are equivalent:
(i) System (10.11.1) is controllable on [σ,t], t ≥ σ + h.
(ii) T(t,s)α(s,σ) ⊂ Int α(t,σ), σ ≤ s < t.
(iii) 0 ∈ Int α(t,σ), t ≥ σ + h.

Proof: The equivalence of (i) and (iii) is established in Theorem 3.3 of [4]. To see that (ii) and (iii) are equivalent, we first note that 0 ∈ α(t,σ) for each t ≥ σ, so that
0 ∈ T(t,s)α(s,σ) ⊂ Int α(t,σ),  σ ≤ s < t,
if we assume (ii). Conversely, assume (iii). Suppose T(t,s)q ∈ ∂α(t,σ), the boundary of α(t,σ), where q ∈ α(s,σ), and aim at a contradiction, since indeed T(t,s)α(s,σ) ⊂ α(t,σ).
By a separation theorem of Dunford and Schwartz [26, p. 418], there exists ψ ≠ 0, ψ ∈ B₀, the conjugate space of C, such that
(ψ, p) ≤ (ψ, T(t,s)q) for every p ∈ α(t,σ),
where (·,·) is the outer product in C. But for some u ∈ U, by (10.11.6),
(ψ, ∫_s^t X_t(·,τ)B(τ)u(τ) dτ) > 0.
If we define
u*(τ) = sgn[g(t,τ,ψ)B(τ)],  τ ∈ [s,t],
where
g(t,τ,ψ) = ∫_{-h}^0 d_θ[ψ(θ)] X(t+θ,τ),   (10.11.7)
then u* ∈ U determines a p = T(t,s)q + ∫_s^t X_t(·,τ)B(τ)u*(τ) dτ ∈ α(t,σ), so that
(ψ, p) > (ψ, T(t,s)q).
This contradicts our assumption that T(t,s)q is in the boundary of α(t,σ) and (ψ, p - T(t,s)q) ≤ 0 for all p ∈ α(t,σ). Hence T(t,s)q ∈ Int α(t,σ), so that T(t,s)α(s,σ) ⊂ Int α(t,σ).
Theorem 10.11.2 [14, Theorem 4.2] Suppose System (10.11.1) is controllable. If u* is an optimal control and t* the minimum time, then u* is of the form
u*(s) = sgn[g(t*,s,ψ)B(s)],   (10.11.8)
where g(t*,s,·) : B₀ → Eⁿ* is the row-vector function defined in (10.11.7).
It is easy to verify that s → g(t*,s,ψ) satisfies a linear differential equation [16]. It follows from Tadmor [29] that s → g(t*,s,ψ) is piecewise analytic if the initial conditions are analytic.
Remark 10.11.1: In Eⁿ the optimal control is given by
u*(t) = sgn[-y(t)B(t)],  σ ≤ t ≤ t*,
where y : [σ,t*] → Eⁿ* is an n-row-vector function of bounded variation which is a nontrivial solution of the adjoint equation (10.11.9).
The form of optimal control in (10.11.8) states that each component u*_j of u* is given by
u*_j(s) = sgn[g(t*,s,ψ)b_j(s)] on [σ,t*],  j = 1, ..., m,
where b_j(s) is the j-th column of B(s).
Remark 10.11.2: It is easy to see from (10.11.8) that if (10.11.1) is normal, then the optimal control is uniquely determined by (10.11.8), and it is piecewise constant and bang-bang. This implies that optimal trajectories are unique.
10.12 Continuity of the Minimal-Time Functions
From the definitions of Section 10.1.1, we deduce that if (10.11.1) is controllable with constraints, then A(σ) = C. If (10.11.1) is null controllable with constraints, then
-T(t,σ)φ ∈ α(t,σ)
for some t. In this case, φ can be steered to zero at time t by some admissible control.
Definition 10.12.1: The minimal-time function M is defined by
M(φ) = inf{t ≥ σ : -T(t,σ)φ ∈ α(t,σ)}.
We have σ ≤ M(φ) ≤ +∞, with M(φ) < ∞ if and only if
-T(t,σ)φ ∈ α(t,σ)
for some t ≥ σ. We observe that
M : A(σ) → E¹.
When (10.11.1) is controllable, so that A(σ) = C, then M is continuous. We have:

Theorem 10.12.1 Let (10.11.1) be controllable. Then the minimal-time function M : A(σ) → E¹ is continuous. The proof is similar to that of Theorem 7.2.1.
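The definition of M can be made concrete in a delay-free scalar special case (h = 0; this example is ours, not from the text): for ẋ = -ax + u, |u| ≤ 1, one has T(t)x₀ = e^{-at}x₀ and α(t) = [-(1-e^{-at})/a, (1-e^{-at})/a], so M(x₀) is computable by bisection on the reachability test and agrees with the closed form ln(1 + a|x₀|)/a, a continuous function of x₀:

```python
import math

def minimal_time(x0, a=2.0, t_hi=50.0, tol=1e-10):
    """M(x0) = inf{t >= 0 : -e^{-a t} x0 in alpha(t)} for the delay-free
    scalar system x' = -a x + u, |u| <= 1, found by bisection on t
    (the reachability predicate is monotone in t)."""
    def reachable(t):
        return abs(math.exp(-a * t) * x0) <= (1.0 - math.exp(-a * t)) / a
    t_lo = 0.0
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if reachable(mid):
            t_hi = mid
        else:
            t_lo = mid
    return t_hi

# closed form for comparison: M(x0) = ln(1 + a*|x0|)/a
```

For the neutral system itself the same bisection idea applies in principle, but each reachability test is an infinite-dimensional inclusion and must itself be approximated.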
Corollary 10.12.1 For t ≥ σ, we have
{-T(t,σ)φ : M(φ) ≤ t} = α(t,σ).
Also
{-T(t,σ)φ : M(φ) = t} = ∂α(t,σ)
if det A₋₁(t) ≠ 0 for each t.

Proof: Note that {-T(t,σ)φ : M(φ) ≤ t} ⊂ ⋂_{p=1}^∞ α(t + 1/p, σ). We now show that
α(t,σ) = ⋂_{p=1}^∞ α(t + 1/p, σ).
Indeed, let φ ∈ ⋂_{p=1}^∞ α(t + 1/p, σ). Then there is an admissible control u_p with
φ = ∫_σ^{t+1/p} T(t + 1/p, s) X₀ B(s) u_p(s) ds
= ∫_σ^t T(t + 1/p, s) X₀ B(s) u_p(s) ds + ∫_t^{t+1/p} T(t + 1/p, s) X₀ B(s) u_p(s) ds
= x_p + y_p.
But then
x_p = ∫_σ^t T(t + 1/p, t) T(t, s) X₀ B(s) u_p(s) ds = T(t + 1/p, t) ∫_σ^t T(t, s) X₀ B(s) u_p(s) ds.
Since y_p → 0 as p → ∞, we have x_p → T(t,t)φ = φ as p → ∞; because α(t,σ) is closed, the limit φ belongs to α(t,σ). For the proof of the second formula, we observe that
T(t,σ) : C → C is an open map, since in (10.11.1) det A₋₁(t) ≠ 0 for each t [9, p. 279]. Indeed, T(t,σ)φ = x_t(σ,φ,0), t ≥ σ, where x(σ,φ,0) is a solution of (10.1.7). Theorem 2.6 of [9, p. 279] ensures that
T(t,σ)C = C,  t ≥ σ + h,
so that T(t,σ) is an open map. Since T(t,σ) is also continuous, {-T(t,σ)φ : M(φ) < t} is open, so that
{-T(t,σ)φ : M(φ) < t} ⊂ Int α(t,σ).
We know that if
-T(t*,σ)φ ∈ Int α(t*,σ),  σ < t* < ∞,
then
-T(t,σ)φ ∈ Int α(t,σ) for t > t*,
and therefore M(φ) ≤ t* < t. Indeed, suppose x* = -T(t*,σ)φ ∈ Int α(t*,σ), σ < t* < ∞. Then there is a ball B = B(x*,δ) of radius δ about x* such that B ⊂ α(t*,σ). Thus
T(t,t*)x* = -T(t,t*)T(t*,σ)φ = -T(t,σ)φ ∈ T(t,t*)(B) ⊂ T(t,t*)α(t*,σ) ⊂ α(t,σ).
Since T(t,t*) is open, -T(t,σ)φ ∈ Int α(t,σ) for t > t*. This completes the proof.
Just as in Hájek [30] and Chukwu [31], we use the continuity of the minimal-time function M to construct an optimal feedback control for (10.11.1). We need a special subset of C* = B₀, the conjugate space of C, which may be described as the cone of unit outward normals to support hyperplanes to α(t,σ) at a point φ on the boundary of α(t,σ).
Definition 10.12.2: For each φ ∈ A(σ), let
K(φ) = {ψ ∈ B₀ : ‖ψ‖ = 1 and (ψ,p) ≤ (ψ,φ), ∀p ∈ α(M(φ),σ)},
where (·,·) is the outer product in C.

Remark 10.12.1: Note that φ ∈ ∂α(M(φ),σ). We now outline key properties of the set K(φ). Let S(B₀) denote the collection of subsets of B₀. Then
K : A(σ) → S(B₀)
is the mapping defined above.

Definition 10.12.3: We say that K is upper semicontinuous at φ if and only if lim sup K(φ_n) ⊂ K(φ) as φ_n → φ.

We have:
Lemma 10.12.1 If (10.11.1) is controllable, then K(φ) is nonvoid and K(-φ) = -K(φ). Also, K(φ) is upper semicontinuous at φ.

Proof: Let (10.11.1) be controllable. Then 0 ∈ Int α(t,σ) by Theorem 10.11.1. Because φ ∈ ∂α(M(φ),σ), there exists a ψ ∈ B₀, ψ ≠ 0 [26, p. 418], such that (ψ,p) ≤ (ψ,φ) for every p ∈ α(M(φ),σ).
The choice of ψ can be made such that ‖ψ‖ = 1. Hence K(φ) is nonempty. The second assertion is true because α(M(φ),σ) is symmetric about zero. A simple argument that uses the closedness of α(t,σ) and the continuity of t → α(t,σ) [3] and of φ → M(φ) proves the last statement.
Because K(φ) is a nonvoid subset of B₀, we can choose y(φ) ∈ K(φ) if (10.11.1) is controllable. It is a consequence of McShane and Warfield [32] that this choice can be made in a measurable way. We have:

Lemma 10.12.2 Let (10.11.1) be controllable. There exists a measurable function
y : A(σ) → B₀
such that
y(φ) ∈ K(φ),  ∀φ ∈ A(σ).

Proof: We observe that C is a measure space and B₀ a Hausdorff space; C is also separable. Let k : C → B₀ be the mapping given by the collection
K(φ) = {k(φ) = ψ : ψ ∈ B₀, ‖ψ‖ = 1, (ψ,p) ≤ (ψ,φ), ∀p ∈ α(M(φ),σ)}.
From the continuity of t → α(t,σ) and of φ → M(φ), we observe that φ → k(φ) is continuous. If we identify the notations of McShane and Warfield with ours as M = C = A(σ), A = B₀, ψ ∈ B₀, Theorem 4 of [32] asserts that there exists a measurable function y : A(σ) → B₀ such that y(φ) ∈ K(φ).
To construct an optimal feedback control for (10.11.1), the regularity property of
g(t,s,ψ) = ∫_{-h}^0 d_θ[ψ(θ)] X(t+θ,s)   (10.12.0)
as a function of s is important. This is determined by the same property of s → X(t,s). For this we now recall that the fundamental matrix X(t,s) of System (10.11.4) satisfies, as a function of t, the equation
(d/dt)[X(t,s) - A₋₁(t)X(t-h,s)] = A₀(t)X(t,s) + Σ_{j=1}^N A_j(t)X(t-h_j,s)
for t ≠ kh, k = 0,1,2,..., t ≥ σ. We now assume the A_j are real analytic functions. It is a consequence of Theorem 4.1 of Tadmor [29] that, for each t, s → X(t,s) is piecewise analytic in s on [σ,t]; it follows that s → g(t,s,ψ) is also piecewise analytic on [σ,t] and, more generally, on any interval [σ, σ+T], T > 0.
Theorem 10.12.2 In (10.11.1), assume that the A_j are analytic functions and det A₋₁(t) ≠ 0 for each t. Let (10.11.1) be controllable and normal. Then there exists a measurable function f : A(σ) → Eᵐ that is an optimal feedback control for (10.11.1) in the following sense: Consider the system
ẋ(t) - A₋₁(t)ẋ(t-h) = A₀(t)x(t) + Σ_{j=1}^N A_j(t)x(t-h_j) + B(t)f(x_t).   (10.12.1)
Then each optimal solution of (10.11.1) is a solution of (10.12.1), and each solution of (10.12.1) is a solution (possibly not optimal) of (10.11.1).

Proof: By Lemma 10.12.2, select a measurable function y : A(σ) → B₀, y(φ) = ψ ∈ B₀, and define
f(φ) = lim_{t→s⁺} sgn[g(t,s,y(φ))B(s)],   (10.12.2)
where g is defined in (10.12.0). Because B is analytic, each coordinate of s → g(t,s,y(φ))B(s) is piecewise analytic on [σ,M(φ)]; that is, it is analytic on [s_{i-1},s_i], i = 1,2,...,ν, for a partition of [σ,M(φ)]. Therefore sgn[g(t,s,y(φ))B(s)] is piecewise constant on each [s_{i-1},s_i], i = 1,2,...,ν, and therefore on [σ,M(φ)]. Because of this, and because the right- and left-hand limits of X(t,s) exist at each kh, k = 0,1,2,..., the limit (10.12.2) exists. Since y(φ) is a measurable selection, so is f(φ).
Let x : [σ,M(φ)] → Eⁿ be an optimal solution of (10.11.1) with x_σ = φ. It can be verified that x satisfies (10.12.1). Indeed, take an arbitrary s, σ ≤ s < M(φ); then x_s ∈ A(σ), from controllability of (10.11.1). There exists an optimal control u_s through x_s, a control that can be taken to be of the form (10.11.8)
for any choice of y ∈ K(x_s), i.e., -y ∈ K(-x_s). Our choice is y = y(x_s). This choice is possible by Theorem 10.11.2. But then, because (10.11.1) is
Stability and Time-Optimal Control of Hereditary Systems
432
normal, the optimal control is unique and determines uniquely a response x, so for almost all t ≥ s the optimal control through x_s coincides with u_s. On taking the limit as t → s⁺ (σ ≤ s ≤ t), we obtain u_s(s⁺) = f(x_s), s ∈ [σ,M(φ)]; since u_s is piecewise constant, u_s(s) = f(x_s) for almost all s. The response x to u_s satisfies
ẋ(s) - A₋₁(s)ẋ(s-h) = A₀(s)x(s) + Σ_{j=1}^N A_j(s)x(s-h_j) + B(s)f(x_s)
a.e., x_σ = φ. Clearly x is a solution of (10.12.1). Now let z be a solution of (10.12.1), which is a response to v(t) = f(z_t). Then v(t) ∈ U, so z is a solution of (10.11.1). The proof is complete.
Remark 10.12.2: If in Theorem 10.12.2 we work in Eⁿ, we replace g in (10.12.2) by y, where y solves the adjoint equation (10.11.9). For Eⁿ we use the usual inner product (·,·).
10.13 The Index of the Control System
In our main Theorem 10.12.2 for C, the function g(t,s,ψ) of (10.12.0) is fundamental in determining the optimal control strategy for our time-optimal problem. It is designated the index of the control system. To study the index, a thorough study of the fundamental matrix X(t,s) of (10.11.4) is required. We initiated such a study in Proposition 2.3.1 by considering the autonomous version of (10.11.4), namely
ẋ(t) + Σ_{j=1}^m A₋₁ⱼ ẋ(t-h_j) = A x(t) + Σ_{j=1}^m A_j x(t-h_j)
if t ≥ 0, and
x(t) = φ(t), φ ∈ C, t ∈ [-h,0),  x(0) = x⁰.   (10.13.2)
Here 0 ≤ h₁ ≤ h₂ ≤ ⋯ ≤ h_m = h. With X determined, the index of the control system is calculated from g given by
g(t,s,ψ) = ∫_{-h}^0 d_θ[ψ(θ)] X(t+θ,s),
as in (10.12.0). The index of the control system for
ẋ(t) - A₋₁ẋ(t-h) = A₀x(t) + A₁x(t-h) + Bu   (10.13.3)
is given by the same formula, with X the fundamental matrix of (10.13.3) with u = 0.

Remark 10.13.1: In the space Eⁿ, g is given by
y(t,s,x₀) = x₀ᵀ X(t-s),
which is the solution of the adjoint equation. The index becomes x₀ᵀ X(t-s)B.
10.14 Examples
Example 10.14.1: We consider linear models of the equation obtained from lossless transmission-line problems, namely
(d/dt)[x(t) + cx(t-h)] = -ax(t) + bx(t-h) + u(t),   (10.14.1)
where a > 0, c² ≤ 1 - δ, δ > 0.
Variants of (10.14.1) were obtained by Slemrod [37] and Lopes [19, p. 200]. Hale [38, p. 349] derived conditions for (global) uniform asymptotic stability of the system
(d/dt)[x(t) + cx(t-h)] = -ax(t) + bx(t-h).   (10.14.2)
Hale showed that if
(i) a > 0, c² ≤ 1 - δ for some δ > 0,
(ii) and if there exists a β ∈ [0,1] such that inequality (10.14.3) holds,
then (10.14.2) is uniformly asymptotically stable. In (10.14.1) we assume |u| ≤ 1.
Physically the function x is the voltage at the end of the line, and u(t) is related either to the current or to the voltage at the source. We use u to control fluctuation of the current at the end of the line to its equilibrium position (0) as fast as possible. That (10.14.1) is controllable follows from Theorem 10.8.4. When conditions (i) and (ii) of Hale hold, (10.14.2) is uniformly asymptotically stable. Therefore (10.14.1) is null controllable with constraints. It follows that a time-optimal control exists (Theorem 10.8.1). It is given by
u*(s) = sgn[g(t,s,ψ)B(s)]  (open-loop form),
or by
f(φ) = lim_{t→s⁺} sgn[g(t,s,y(φ))B(s)]
in (10.12.2), where g is given by (10.11.7). With an initial choice of y(φ) = ψ, ‖ψ‖ = 1, let us calculate f(φ) for t ∈ [0,2h]. We select the (measurable) function ψ = y(φ) on [-h,0]. Then for System (10.14.1) with a = 2, c = -1/2, b = 1/2, the fundamental solution is
X(t) = e^{-2t},  t ∈ [0,h],
since the delayed terms vanish there, and the values for t - s ∈ [h,2h] follow by continuing the integration over the next interval.
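The continuation of X over successive intervals is the method of steps, which is easy to mechanize. The sketch below uses the example's parameters a = 2, c = -1/2, b = 1/2 (the Euler scheme and step size are our own choices, not from the text); it integrates w(t) = x(t) + c x(t-h) and recovers x, reproducing X(t) = e^{-2t} on [0,h]:

```python
import math

def fundamental_solution(a=2.0, c=-0.5, b=0.5, h=1.0, T=2.0, dt=1e-4):
    """Euler / method-of-steps integration of
        (d/dt)[x(t) + c x(t-h)] = -a x(t) + b x(t-h),
    with x(t) = 0 for t < 0 and x(0) = 1, i.e. the fundamental solution X."""
    n = int(round(T / dt))
    lag = int(round(h / dt))
    x = [0.0] * (n + 1)
    x[0] = 1.0
    w = x[0]                      # w(0) = x(0) + c*x(-h) = 1
    for k in range(n):
        x_lag = x[k - lag] if k >= lag else 0.0
        w += dt * (-a * x[k] + b * x_lag)
        x_lag_next = x[k + 1 - lag] if k + 1 >= lag else 0.0
        x[k + 1] = w - c * x_lag_next
    return x

X = fundamental_solution()
# on [0, h] the delayed terms vanish, so X(t) = exp(-2 t); at t = h the
# neutral term produces a jump, consistent with the one-sided limits of X
# noted in Section 10.12.
err = abs(X[5000] - math.exp(-1.0))   # t = 0.5
```

Adding a forcing term u(t) to the w-update turns the same loop into a simulator for (10.14.1) under any candidate bang-bang control.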
Example 10.14.2: A two-dimensional neutral system was derived by Lopes in [40], in a transmission-line problem. Its linear version is given by (10.14.4), where u is related to the voltage v(z,t) as follows: v(L,t) = u(t) - gu(t-h), and i(z) is the current. We consider the time-optimal control of the system
ẋ(t) - A₋₁ẋ(t-h) = A₀x(t) + A₁x(t-h) + Bu(t),   (10.14.5)
with A₋₁, A₀, A₁, and B constant 2 × 2 matrices. We recall from Chukwu [39, Example 5] that the conditions for uniform asymptotic stability are:
(i) |c| < 1.
(ii) With H a positive definite symmetric matrix, the eigenvalues λ_{1i} of J₁ = HA₀ + A₀ᵀH satisfy λ_{1i} < -δ₁ < 0, i = 1,2.
For the control system (10.14.5) we assume
(iii) |u_j| ≤ 1, j = 1,2.
We examine (10.14.5) when
A₀ = diag(-3, -1),  B = diag(d₁, d₄),  dᵢ ≠ 0,
with A₋₁ and A₁ fixed constant 2 × 2 matrices satisfying |c| < 1.
For stability we find that if H = I, the identity matrix, then
J₁ = A₀ + A₀ᵀ = diag(-6, -2).
Hence the eigenvalues are λ₁₁ = -6, λ₁₂ = -2, and we can choose δ₁ to be 2. Using the matrix norm, we deduce that the remaining estimates hold. Hence conditions (i) and (ii) of uniform asymptotic stability are satisfied. We know from Salamon [18, p. 151] that (10.14.5) is exactly controllable on W_p^{(1)} if and only if
(iv) rank[Δ(λ), B] = 2 for all λ, where Δ(λ) = λ(I - A₋₁e^{-λh}) - A₀ - A₁e^{-λh},
(v) rank[λI - A₋₁, B] = 2 for all λ.
Since B has full rank, conditions (iv) and (v) are satisfied, and therefore the system is exactly controllable. It follows from arguments contained in Chukwu [14, Theorem 3.2] that (10.14.5) is null controllable with measurable controls that satisfy (iii). We now deduce the optimal control law
in the space Eⁿ. To do this we need to determine the fundamental matrix X. With X computed on [0,h] and an initial selection made, one can obtain values of u on [h,2h], although the calculations become more complicated.
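The stability and controllability checks of this example are easy to reproduce numerically (with the illustrative choice d₁ = d₄ = 1, so B = I):

```python
import numpy as np

A0 = np.diag([-3.0, -1.0])
B = np.eye(2)                       # d1 = d4 = 1: B has full rank

# Condition (ii) with H = I: J1 = A0 + A0^T has eigenvalues -6 and -2;
# the text chooses delta_1 = 2.
J1 = A0 + A0.T
eigs = sorted(np.linalg.eigvals(J1).real)

# Conditions (iv)-(v): B square and nonsingular forces rank [M, B] = 2
# for every 2 x 2 matrix M, hence exact controllability.
M = np.array([[0.0, 1.0],
              [5.0, -2.0]])        # arbitrary test matrix
rank_ok = np.linalg.matrix_rank(np.hstack([M, B])) == 2
```

Because [M, B] contains the nonsingular block B, the rank test never depends on λ, which is why (iv) and (v) hold for all λ at once.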
REFERENCES
1. M. A. Cruz and J. K. Hale, "Asymptotic Behaviour of Neutral Functional Differential Equations," Arch. Rational Mech. Anal. 34 (1969) 331-353.
2. J. P. LaSalle, "The Time Optimal Control Problem," in Theory of Nonlinear Oscillations, Vol. 5, Princeton Univ. Press, Princeton, N.J., 1959, 1-24.
3. H. T. Banks and G. A. Kent, "Control of Functional Differential Equations of Retarded and Neutral Type to Target Sets in Function Space," SIAM J. Control Optim. 10 (1972) 567-593.
4. E. B. Lee and L. Markus, Foundations of Optimal Control Theory, Wiley, New York, 1967.
5. A. Strauss, An Introduction to Optimal Control Theory, Springer-Verlag, New York, 1968.
6. H. Hermes and J. P. LaSalle, Functional Analysis and Time Optimal Control, Academic Press, New York, 1969.
7. R. B. Zmood, "The Euclidean Space Controllability of Control Systems with Delay," SIAM J. Control Optim. 12 (1974) 609-623.
8. J. Kloch, "A Necessary and Sufficient Condition for Normality of Linear Control Systems with Delay," Ann. Polon. Math. 35 (1978) 305-312.
9. J. Hale, Theory of Functional Differential Equations, Springer-Verlag, New York, 1977.
10. O. Hájek, "Duality for Differential Games and Optimal Control," Math. Systems Theory 8 (1974) 1-7.
11. O. Hájek, Pursuit Games, An Introduction to the Theory and Applications of Differential Games of Pursuit and Evasion, Academic Press, New York, 1975.
12. R. Gabasov and F. Kirillova, The Qualitative Theory of Optimal Processes, Marcel Dekker, New York, 1976.
13. E. N. Chukwu, "The Time Optimal Control Problem of Linear Neutral Functional System," J. Niger. Math. Soc. 1 (1982) 39-55.
14. E. N. Chukwu, "The Time Optimal Control Theory of Linear Differential Equations of Neutral Type," Comput. Math. Applic. 16 (1988) 851-866.
15. E. N. Chukwu, "Control in W_2^{(1)} of Nonlinear Interconnected Systems of Neutral Type," Preprint.
16. D. Henry, Theory and Boundary Value Problems for Neutral Functional Equations, Preprint.
17. E. N. Chukwu, "Symmetries of Linear Control Systems," SIAM J. Control 12 (1974) 436-488.
18. D. Salamon, Control and Observation in Neutral Systems, Pitman Advanced Publishing Program, Boston, 1984.
19. D. Lopes, "Forced Oscillation in Nonlinear Neutral Differential Equations," SIAM J. Appl. Math. 29 (1975) 196-207.
20. R. Bellman, I. Glicksberg and O. Gross, "On the Bang-Bang Control Problems," Q. Appl. Math. 14 (1956) 11-18.
21. E. N. Chukwu and O. Hájek, "Disconjugacy and Optimal Control," J. Optimiz. Theory Applic. 27 (1979) 333-356.
22. G. A. Kent, "Optimal Control of Functional Differential Equations of Neutral Type," Ph.D. Thesis, Brown University, 1971.
23. H. T. Banks and M. Q. Jacobs, "An Attainable Set Approach to Optimal Control of Functional Differential Equations with Function Space Terminal Conditions," J. Diff. Equations 13 (1973) 127-149.
24. H. R. Rodas and C. E. Langenhop, "A Sufficient Condition for Function Space Controllability of a Linear Neutral System," SIAM J. Control Optimiz. 16 (1978) 429-435.
25. D. Henry, "FDEs of Neutral Type in the Space of Continuous Functions: Representation of Solutions and Adjoint Equation," Preprint, Math. Dept., University of Kentucky, Lexington, KY.
26. N. Dunford and J. T. Schwartz, Linear Operators Part I: General Theory, Interscience Publications, New York, 1958.
27. Shin-Ichi Nakagiri, "Optimal Control of Linear Retarded Systems in Banach Spaces," J. Math. Analysis and Applic. 120 (1986) 169-210.
28. J. L. Lions, Optimal Control of Systems Governed by P.D.E., Springer-Verlag, Berlin, 1971.
29. G. Tadmor, "Functional Differential Equations of Retarded and Neutral Type: Analytic Solutions and Piecewise Continuous Controls," J. Differential Equations 51 (1984) 151-181.
30. O. Hájek, "Geometric Theory of Time Optimal Control," SIAM J. Control 9 (1971) 339-350.
31. E. N. Chukwu, The Time Optimal Feedback Control of Delay Differential Systems, Preprint.
32. E. J. McShane and R. B. Warfield, "On Filippov's Implicit Function Lemma," Proceedings Amer. Math. Soc. 18 (1967) 41-47.
33. S. Saks, Theory of the Integral, Monografie Matematyczne VII, English Translation, Warszawa, 1937.
34. W. Rudin, Real and Complex Analysis, McGraw-Hill, New York, 1966.
35. H. O. Fattorini, "The Time Optimal Control Problem in Banach Spaces," Appl. Math. Optimiz. 1 (1974) 163-188.
36. R. Datko, "Linear Autonomous Neutral Differential Equations in a Banach Space," J. Diff. Equat. 25 (1977) 258-274.
37. M. Slemrod, "Nonexistence of Oscillations in a Nonlinear Distributed Network," J. Math. Anal. Appl. 36 (1971) 22-40.
38. J. K. Hale, "Stability of Functional Differential Equations of Neutral Type," J. Diff. Equat. (1970) 334-355.
39. E. N. Chukwu, "An Estimate for the Solution of a Certain Functional Differential Equation of Neutral Type," in Nonlinear Phenomena in Mathematical Science, edited by V. Lakshmikantham, Academic Press, New York, 1982.
40. O. Lopes, "Stability and Forced Oscillations," J. Math. Anal. Appl. 55 (1976) 686-698.
41. E. N. Chukwu, R. G. Underwood, and L. D. Kovari, Global Constrained Null Controllability of Nonlinear Neutral Systems, Preprint.