Linear Equations
2.1. Existence Theory

Consider the equation

    x(t) = f(t) + ∫₀ᵗ B(t,s)x(s) ds,                                   (2.1.1)

in which f: [0,α) → Rⁿ is continuous and B(t,s) is an n × n matrix of functions continuous for 0 ≤ s ≤ t < α, where α ≤ ∞. The function B(t,s) is frequently called the kernel. If B(t,s) can be expressed as B(t,s) = D(t − s), then (2.1.1) is said to be of convolution type. Most writers ask less than continuity of B, but most of our work will require it, as our techniques will frequently require reduction of (2.1.1) to an integro-differential equation. Thus, we will often also need to require that f have a derivative. The integro-differential equations we consider have the form
    x'(t) = A(t)x(t) + ∫₀ᵗ C(t,s)x(s) ds + F(t),                       (2.1.2)

in which F: [0,α) → Rⁿ is continuous, C(t,s) is an n × n matrix of functions continuous for 0 ≤ s ≤ t < α, and A(t) is an n × n matrix of functions continuous on [0,α).
We now put (2.1.2) into the form of (2.1.1), so that an existence and uniqueness theorem will apply to both of them. Equation (2.1.2) requires an initial function φ: [0,t₀] → Rⁿ with φ continuous and t₀ possibly zero. A solution of (2.1.2) is a continuous function x(t) on an interval [t₀,T), such that x(t) = φ(t) for 0 ≤ t ≤ t₀. This yields

    x'(t) = A(t)x(t) + ∫₀^{t₀} C(t,s)φ(s) ds + F(t) + ∫_{t₀}ᵗ C(t,s)x(s) ds.

A translation y(t) = x(t + t₀) results in

    y'(t) = x'(t + t₀)
          = A(t + t₀)y(t) + ∫₀^{t₀} C(t₀ + t, s)φ(s) ds + F(t + t₀) + ∫_{t₀}^{t+t₀} C(t₀ + t, s)x(s) ds
          = A(t + t₀)y(t) + ∫₀ᵗ C(t₀ + t, s + t₀)y(s) ds + ∫₀^{t₀} C(t₀ + t, s)φ(s) ds + F(t + t₀),

which we write as

    y'(t) = A(t)y(t) + ∫₀ᵗ C(t,s)y(s) ds + F(t),

again of the form (2.1.2). The initial function φ is absorbed into the forcing function, and the last equation then has the initial condition

    y(0) = x(t₀) = φ(t₀),
so that an integration from 0 to t yields

    y(t) = φ(t₀) + ∫₀ᵗ A(s)y(s) ds + ∫₀ᵗ F(s) ds + ∫₀ᵗ ∫₀ᵘ C(u,s)y(s) ds du.   (2.1.3)
Interchanging the order of integration in the last term yields an equation of the form of (2.1.1). Thus, the existence and uniqueness theorem that follows applies also to (2.1.2) with a given continuous initial function. The uniqueness part of the proof of the result is facilitated by the following relation.

LEMMA (Gronwall's Inequality)  Let f, g: [0,α) → [0,∞) be continuous and let c be a nonnegative number. If

    f(t) ≤ c + ∫₀ᵗ g(s)f(s) ds,    0 ≤ t < α,

then

    f(t) ≤ c exp ∫₀ᵗ g(s) ds,    0 ≤ t < α.
PROOF  Suppose first that c > 0. Divide by c + ∫₀ᵗ g(s)f(s) ds and multiply by g(t) to obtain

    g(t)f(t)/[c + ∫₀ᵗ g(s)f(s) ds] ≤ g(t).

An integration from 0 to t yields

    ln{[c + ∫₀ᵗ g(s)f(s) ds]/c} ≤ ∫₀ᵗ g(s) ds,

or

    f(t) ≤ c + ∫₀ᵗ g(s)f(s) ds ≤ c exp ∫₀ᵗ g(s) ds.

If c = 0, take the limit as c → 0 through positive values. This completes the proof.

THEOREM 2.1.1  Let 0 < α ≤ ∞ and suppose that f: [0,α) → Rⁿ is continuous and that B(t,s) is an n × n matrix of functions continuous for 0 ≤ s ≤ t < α. If 0 < T < α, then there is one and only one solution x(t) of

    x(t) = f(t) + ∫₀ᵗ B(t,s)x(s) ds                                    (2.1.1)

on [0,T].

PROOF
Define a sequence of functions {xₙ(t)} on [0,T] by

    x₁(t) = f(t),    xₙ₊₁(t) = f(t) + ∫₀ᵗ B(t,s)xₙ(s) ds,    n = 1, 2, . . . .   (2.1.4)

These are called Picard's successive approximations. One may show by mathematical induction that each xₙ(t) is defined on [0,T] and is continuous. Let M = max_{0≤s≤t≤T} |B(t,s)| and K = max_{0≤t≤T} |f(t)|, and consider the series

    x₁(t) + Σ_{n=1}^∞ (xₙ₊₁(t) − xₙ(t)),                               (2.1.5)

whose typical partial sum is xₙ(t). We now show by induction that

    |xₙ₊₁(t) − xₙ(t)| ≤ K(Mt)ⁿ/n!.                                     (2.1.6)
It follows from (2.1.4) for n = 1 that

    |x₂(t) − x₁(t)| = |f(t) + ∫₀ᵗ B(t,s)f(s) ds − f(t)| ≤ ∫₀ᵗ |B(t,s)f(s)| ds ≤ MKt,

so that (2.1.6) is true for n = 1. Assume

    |xₖ₊₁(t) − xₖ(t)| ≤ K(Mt)ᵏ/k!

and consider

    |xₖ₊₂(t) − xₖ₊₁(t)| ≤ ∫₀ᵗ |B(t,s)| |xₖ₊₁(s) − xₖ(s)| ds
                       ≤ ∫₀ᵗ M K(Ms)ᵏ/k! ds = K Mᵏ⁺¹tᵏ⁺¹/(k + 1)!,

as required. But K(Mt)ⁿ/n! is the typical term of a Taylor series of Ke^{Mt} that converges uniformly and absolutely on [0,T]. Thus (2.1.5) also converges uniformly on [0,T] to a continuous limit function, say, x(t). We may, therefore, take the limit as n → ∞ in (2.1.4) and pass it through the integral, obtaining

    x(t) = f(t) + ∫₀ᵗ B(t,s)x(s) ds,

so that the limit function x(t) is a solution of (2.1.1).
To see that x(t) is the only solution, suppose there are two solutions, say x(t) and y(t), on an interval [0,T]. Then, from (2.1.1),

    x(t) − y(t) = ∫₀ᵗ B(t,s)[x(s) − y(s)] ds,

so that

    |x(t) − y(t)| ≤ M ∫₀ᵗ |x(s) − y(s)| ds.

This is of the form

    |z(t)| ≤ c + ∫₀ᵗ M|z(s)| ds

with c = 0. By Gronwall's inequality, |z(t)| ≤ c e^{Mt} = 0. The proof is complete.
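Picard's successive approximations are easy to carry out numerically. The sketch below is our own illustration (the trapezoidal quadrature, grid size, and test equation are choices made here, not taken from the text); it applies (2.1.4) to the scalar equation x(t) = 1 + ∫₀ᵗ x(s) ds, whose unique solution is eᵗ:

```python
import math

def picard(f, B, T, n_steps=200, n_iter=25):
    """Successive approximations for x(t) = f(t) + int_0^t B(t,s) x(s) ds,
    using the trapezoidal rule on an even grid over [0, T]."""
    h = T / n_steps
    ts = [i * h for i in range(n_steps + 1)]
    x = [f(t) for t in ts]                     # x_1 = f
    for _ in range(n_iter):
        new = []
        for i, t in enumerate(ts):
            # trapezoidal rule for int_0^t B(t,s) x(s) ds
            vals = [B(t, ts[j]) * x[j] for j in range(i + 1)]
            integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i > 0 else 0.0
            new.append(f(t) + integral)
        x = new
    return ts, x

# x(t) = 1 + int_0^t x(s) ds has the unique solution x(t) = e^t
ts, x = picard(lambda t: 1.0, lambda t, s: 1.0, T=1.0)
err = max(abs(xi - math.exp(t)) for t, xi in zip(ts, x))
```

The iterates here are exactly the discrete analogs of the partial sums of (2.1.5), and twenty-five sweeps leave only the quadrature error.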
2.2. Linear Properties

Discussion of the linear properties of (2.1.1) tends to be cumbersome, whereas the linear properties of (2.1.2) are very straightforward and analogous to properties of ordinary differential equations. In fact, in the convolution case for (2.1.2) with A constant, the entire theory is almost identical to that for ordinary differential equations.

THEOREM 2.2.1  Let f₁, f₂: [0,α) → Rⁿ be continuous and B(t,s) be an n × n matrix of functions continuous for 0 ≤ s ≤ t < α. If x(t) and y(t) are solutions of

    x(t) = f₁(t) + ∫₀ᵗ B(t,s)x(s) ds

and

    y(t) = f₂(t) + ∫₀ᵗ B(t,s)y(s) ds,

respectively, and if c₁ and c₂ are real numbers, then c₁x(t) + c₂y(t) is a solution of

    z(t) = [c₁f₁(t) + c₂f₂(t)] + ∫₀ᵗ B(t,s)z(s) ds.

PROOF  We have

    c₁x(t) + c₂y(t) = c₁f₁(t) + c₂f₂(t) + ∫₀ᵗ B(t,s)[c₁x(s) + c₂y(s)] ds,

and the proof is complete.

We turn now to the equations
    x' = A(t)x + ∫₀ᵗ C(t,s)x(s) ds + F(t)                              (2.2.1)

and

    x' = A(t)x + ∫₀ᵗ C(t,s)x(s) ds,                                    (2.2.2)

with F: [0,α) → Rⁿ being continuous, A(t) an n × n matrix of functions continuous on [0,α), and C(t,s) an n × n matrix of functions continuous for 0 ≤ s ≤ t < α. An integration of (2.2.1) with x(0) = x₀ yields

    x(t) = x₀ + ∫₀ᵗ F(s) ds + ∫₀ᵗ A(s)x(s) ds + ∫₀ᵗ ∫₀ᵘ C(u,s)x(s) ds du,   (2.2.3)
and upon change of order of integration, we have an equation of the form (2.1.1) with

    f(t) = x₀ + ∫₀ᵗ F(s) ds,

so that when F(t) = 0, then f(t) = x₀.

THEOREM 2.2.2  Consider (2.2.1) and (2.2.2) on [0,α).

(a) For each x₀ there is a solution x(t) of (2.2.1) for 0 ≤ t < α with x(0) = x₀.
(b) If x₁(t) and x₂(t) are two solutions of (2.2.1), then x₁(t) − x₂(t) is a solution of (2.2.2).
(c) If x₁(t) and x₂(t) are two solutions of (2.2.2) and if c₁ and c₂ are real numbers, then c₁x₁(t) + c₂x₂(t) is a solution of (2.2.2).
(d) There are n linearly independent solutions of (2.2.2) on [0,α), and any solution on [0,α) may be expressed as a linear combination of them.

PROOF  In view of the remarks preceding the theorem, (a) was established by Theorem 2.1.1. Parts (b) and (c) follow by direct substitution into the equations. To prove (d), consider the n constant vectors e₁, . . . , eₙ, where eᵢ = (0, . . . , 0, 1, 0, . . . , 0)ᵀ with the 1 in the ith place, and let xᵢ(t) be the solution with xᵢ(0) = eᵢ. For a given x₀ = (x₀₁, . . . , x₀ₙ)ᵀ, we have x₀ = x₀₁e₁ + · · · + x₀ₙeₙ, so the unique solution x(t) with x(0) = x₀ may be expressed as

    x(t) = x₀₁x₁(t) + · · · + x₀ₙxₙ(t).
Now, the x₁(t), . . . , xₙ(t) are linearly independent on [0,α); for if

    Σ_{i=1}^n cᵢxᵢ(t) = 0    on [0,α)

is a nontrivial linear relation, then

    0 = Σ_{i=1}^n cᵢxᵢ(0) = Σ_{i=1}^n cᵢeᵢ

is a nontrivial linear relation among the eᵢ, which is impossible. This completes the proof.
COROLLARY  If x₁(t), . . . , xₙ(t) are n linearly independent solutions of (2.2.2) and if xₚ(t) is any solution of (2.2.1), then every solution of (2.2.1) can be expressed as

    x(t) = xₚ(t) + c₁x₁(t) + · · · + cₙxₙ(t)

for appropriate constants c₁, . . . , cₙ.
PROOF  If x(t) and xₚ(t) are solutions of (2.2.1), then x(t) − xₚ(t) is a solution of (2.2.2) and, hence, may be expressed as

    c₁x₁(t) + · · · + cₙxₙ(t).

This completes the proof.
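The fact that the difference of two solutions of the forced equation solves the unforced one can be observed numerically. In the sketch below (our own demonstration; the Euler-plus-trapezoid scheme, the kernel, and the forcing are invented for the test), the discrete scheme is itself linear, so the difference of two forced solutions matches the homogeneous solution to rounding error:

```python
import math

def solve(x0, F, A=-1.0, C=lambda t, s: 0.5 * math.exp(-(t - s)), T=5.0, n=500):
    """Euler + trapezoid scheme for x' = A x + int_0^t C(t,s) x(s) ds + F(t)."""
    h = T / n
    xs = [x0]
    for i in range(n):
        t = i * h
        # trapezoidal rule for the memory integral int_0^t C(t,s) x(s) ds
        vals = [C(t, j * h) * xs[j] for j in range(i + 1)]
        mem = h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i > 0 else 0.0
        xs.append(xs[i] + h * (A * xs[i] + mem + F(t)))
    return xs

x1 = solve(2.0, math.sin)           # forced solution, x(0) = 2
x2 = solve(-1.0, math.sin)          # another forced solution, x(0) = -1
z = solve(3.0, lambda t: 0.0)       # homogeneous solution, z(0) = x1(0) - x2(0)
gap = max(abs((a - b) - c) for a, b, c in zip(x1, x2, z))
```

Because both forced runs use the same forcing, x1 − x2 satisfies the homogeneous recursion exactly, so `gap` is at the level of floating-point noise.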
2.3. Convolution and the Laplace Transform

When A is constant and C is of convolution type, the variation of parameters formula for (2.2.1) becomes identical to that for ordinary differential equations. Consider the systems

    x' = Px + ∫₀ᵗ D(t − s)x(s) ds + F(t)                               (2.3.1)

and

    x' = Px + ∫₀ᵗ D(t − s)x(s) ds,                                     (2.3.2)

in which P is an n × n constant matrix, D(t) an n × n matrix of functions continuous on [0,∞), and F: [0,∞) → Rⁿ continuous. We suppose also that |F(t)| and |D(t)| may be bounded by a function Me^{at} for M > 0 and a > 0. That is, F and D are said to be of exponential order.

Laplace transforms are particularly well suited to the study of convolution problems. A good (and elementary) discussion of transforms may be found in Churchill (1958). Our use here will be primarily symbolic, and the necessary rudiments may be found in many elementary texts on ordinary differential equations. The following is a list of some of the essential properties of Laplace transforms of continuous functions of exponential order [(v) requires differentiability]. The first property is a definition from which all the others may be derived very easily, with the exception of (vii).

(i) If h: [0,∞) → R, then the Laplace transform of h is

    L(h) = H(s) = ∫₀^∞ e^{−st}h(t) dt.                                 (2.3.3)
(ii) If D(t) = (dᵢⱼ(t)) is a matrix (or vector), then L(D(t)) = (L(dᵢⱼ(t))). (This is merely notation.)

(iii) If c is a constant and h₁, h₂ functions, then L(ch₁ + h₂) = cL(h₁) + L(h₂).

(iv) If D(t) is an n × n matrix and h(t) a vector function, then

    L(∫₀ᵗ D(t − s)h(s) ds) = L(D)L(h).                                 (2.3.4)

(v) L(h'(t)) = sL(h) − h(0).

(vi) L⁻¹ is linear.

(vii) If h₁(t) and h₂(t) are continuous functions of exponential order and L(h₁(t)) = L(h₂(t)), then h₁(t) = h₂(t).
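Property (iv), the convolution theorem, can be checked by direct quadrature. In this sketch (our own example; the functions, the sample point p, and the truncation are arbitrary choices) we take h(t) = e⁻ᵗ and d(t) = e⁻²ᵗ, for which (d∗h)(t) = e⁻ᵗ − e⁻²ᵗ by hand, and both sides of (2.3.4) equal 1/[(p + 1)(p + 2)]:

```python
import math

def laplace(g, p, T=40.0, n=4000):
    """Truncated trapezoidal approximation of int_0^inf e^{-p t} g(t) dt."""
    dt = T / n
    vals = [math.exp(-p * i * dt) * g(i * dt) for i in range(n + 1)]
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

h = lambda t: math.exp(-t)
d = lambda t: math.exp(-2.0 * t)
conv = lambda t: math.exp(-t) - math.exp(-2.0 * t)   # (d*h)(t), worked by hand

p = 1.5
lhs = laplace(conv, p)               # transform of the convolution
rhs = laplace(d, p) * laplace(h, p)  # product of the transforms
```

The truncation at T = 40 is harmless here because the integrands decay like e^{−2.5t}.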
THEOREM 2.3.1  Let Z(t) be the n × n matrix whose columns are solutions of (2.3.2) with Z(0) = I. The solution of (2.3.1) satisfying x(0) = x₀ is

    x(t) = Z(t)x₀ + ∫₀ᵗ Z(t − s)F(s) ds.                               (2.3.5)

PROOF  Notice that Z(t) satisfies (2.3.2):

    Z'(t) = PZ(t) + ∫₀ᵗ D(t − s)Z(s) ds.

We first suppose that F and D are in L¹[0,∞). If we convert (2.3.1) into an integral equation, we have

    x(t) = x(0) + ∫₀ᵗ Px(s) ds + ∫₀ᵗ ∫₀ᵘ D(u − s)x(s) ds du + ∫₀ᵗ F(s) ds,

and as D and F are in L¹, we have

    |x(t)| ≤ |x(0)| + K + K ∫₀ᵗ |x(s)| ds,    some K > 0 and 0 ≤ t < ∞.

By Gronwall's inequality,

    |x(t)| ≤ [|x(0)| + K]e^{Kt}.

Thus both x(t) and Z(t) are of exponential order, so we can take their Laplace transforms. Transforming both sides of

    Z'(t) = PZ(t) + ∫₀ᵗ D(t − s)Z(s) ds,

we obtain

    sL(Z) − Z(0) = PL(Z) + L(D)L(Z),

using (i)-(v). Thus

    [sI − P − L(D)]L(Z) = Z(0) = I,

and because the right side is nonsingular, so is [sI − P − L(D)] for appropriate s. (Actually, L(Z) is an analytic function of s in the half-plane Re s ≥ a,
where |Z(t)| ≤ Ke^{at}.) [See Churchill (1958, p. 171).] We then have

    L(Z) = [sI − P − L(D)]⁻¹.

Now, transform both sides of (2.3.1):

    sL(x) − x(0) = PL(x) + L(D)L(x) + L(F)

or

    [sI − P − L(D)]L(x) = x(0) + L(F),

so that

    L(x) = [sI − P − L(D)]⁻¹[x(0) + L(F)]
         = L(Z)x(0) + L(Z)L(F)
         = L(Zx(0)) + L(∫₀ᵗ Z(t − s)F(s) ds)
         = L(Z(t)x(0) + ∫₀ᵗ Z(t − s)F(s) ds).

Because x, Z, and F are of exponential order and continuous, by (vii) we have the required formula. Thus, the proof is complete for D and F being in L¹[0,∞).

In the general case (i.e., D and F not in L¹), for each T > 0 define continuous L¹[0,∞) functions F_T and D_T by

    F_T(t) = F(t)                      if 0 ≤ t ≤ T,
    F_T(t) = F(T){1/[(t − T)² + 1]}    if t > T,

and

    D_T(t) = D(t)                      if 0 ≤ t ≤ T,
    D_T(t) = D(T){1/[(t − T)² + 1]}    if t > T.

Consider (2.3.1) and

    x'(t) = Px(t) + F_T(t) + ∫₀ᵗ D_T(t − s)x(s) ds,                    (2.3.1)_T

with x(0) = x₀ for both. Because the equations are identical on [0,T], so are their solutions; this is true for each T > 0. Thus, because (2.3.5) holds for (2.3.1)_T for each T > 0, it holds for (2.3.1) on each interval [0,T]. This completes the proof.

Exercise 2.3.1  This exercise is not trivial. Substitute (2.3.5) into (2.3.1), interchange the order of integration, and show that (2.3.5) is a solution of (2.3.1).
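Formula (2.3.5) can also be tested on a scalar example where Z(t) is computable by hand. With P = 0, D(t) ≡ −1, F(t) ≡ 1, and x(0) = 0 (our own test case), Z satisfies Z'' = −Z with Z(0) = 1, Z'(0) = 0, so Z(t) = cos t, and (2.3.5) predicts x(t) = ∫₀ᵗ cos(t − s) ds = sin t. The sketch below solves the equation directly by a crude Euler scheme and compares:

```python
import math

# Scalar test of (2.3.5): P = 0, D(t) = -1, F(t) = 1, x(0) = 0, i.e.
#   x'(t) = -int_0^t x(s) ds + 1.
# Here Z(t) = cos t and (2.3.5) gives x(t) = int_0^t cos(t - s) ds = sin t.
T, n = 3.0, 1500
h = T / n
x = [0.0]
S = 0.0                              # running approximation of int_0^t x(s) ds
for i in range(n):
    S += h * x[i]
    x.append(x[i] + h * (1.0 - S))   # Euler step for x' = 1 - int_0^t x ds

err = max(abs(x[i] - math.sin(i * h)) for i in range(n + 1))
```

The running-sum trick avoids re-summing the whole history at every step and is possible because the kernel here is constant.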
Although one can seldom find Z(t), we shall discover certain properties that make the variation of parameters formula very useful. For example, by a change of variable

    ∫₀ᵗ Z(t − s)F(s) ds = ∫₀ᵗ Z(s)F(t − s) ds,

so if we can show that ∫₀^∞ |Z(t)| dt is finite, it follows that, for any bounded F,

    ∫₀ᵗ Z(s)F(t − s) ds

is also bounded. In the study of Liapunov's direct method, one frequently finds that ∫₀^∞ |Z(t)| dt is finite. Furthermore, uniform asymptotic stability of the zero solution of (2.3.2) and ∫₀^∞ |Z(t)| dt < ∞ are closely connected.

We turn now to the integral equation

    x(t) = f(t) + ∫₀ᵗ B(t − s)x(s) ds,                                 (2.3.6)
with f: [0,∞) → Rⁿ being continuous, B(t) an n × n matrix continuous on [0,∞), and both f and B of exponential order. The goal is to obtain a variation of parameters formula for (2.3.6). Naturally, if B and f are both differentiable, we could convert (2.3.6) to an equation of the form (2.3.1) and apply Theorem 2.3.1. But that seems too indirect. Such a formula should exist independently of B' and f'. We shall see, however, that the derivative of f will enter, in a natural way, even when the transform of (2.3.6) is taken directly.

THEOREM 2.3.2  Let H(t) be the n × n matrix satisfying

    H(t) = I + ∫₀ᵗ B(t − s)H(s) ds                                     (2.3.7)

and let f'(t) and B be continuous and of exponential order. The unique solution x(t) of (2.3.6) is given by

    x(t) = H(t)f(0) + ∫₀ᵗ H(t − s)f'(s) ds.                            (2.3.8)

PROOF  The Laplace transform of (2.3.7) is

    L(H) = L(I) + L(B)L(H),

and, as L(1) = s⁻¹, L(I) = s⁻¹I. Thus
    [I − L(B)]L(H) = s⁻¹I,

and, because L(I) is nonsingular, so is [I − L(B)]. This implies that

    L(H) = [I − L(B)]⁻¹s⁻¹.

Now the transform of (2.3.6) is L(x) = L(f) + L(B)L(x), so that [I − L(B)]L(x) = L(f) or

    L(x) = [I − L(B)]⁻¹L(f).

Multiply and divide by s on the right side and recall that

    L(f') = sL(f) − f(0).

This yields

    L(x) = (s[I − L(B)])⁻¹sL(f)
         = L(H)[L(f') + f(0)]
         = L(H)L(f') + L(H)f(0)
         = L(∫₀ᵗ H(t − s)f'(s) ds) + L(H(t)f(0))
         = L(H(t)f(0) + ∫₀ᵗ H(t − s)f'(s) ds),

so that (2.3.8) follows from (vii). This completes the proof.

Notice that (2.3.7) represents the n integral equations

    x(t) = eⱼ + ∫₀ᵗ B(t − s)x(s) ds.

It is necessary that functions be defined on [0,∞) for the Laplace transforms to be applied. Also, F and D need to be of exponential order. The following exercises suggest that one may try to circumvent both problems.

Exercise 2.3.2  Suppose that F and D in (2.3.1) are continuous on [0,T] but not defined for t > T. Define F and D on [0,∞) by asking that F(t) = F(T) and D(t) = D(T) if t ≥ T. Check the details to see if the variation of parameters formula will work on [0,T].

Exercise 2.3.3  Continue the reasoning of Exercise 2.3.2 and suppose that it is known that ∫₀^∞ |Z(t)| dt < ∞. If D is not of exponential order but F is bounded, can one still conclude that solutions of (2.3.1) are bounded?
We return to

    x(t) = f(t) + ∫₀ᵗ B(t − s)x(s) ds                                  (2.3.6)

with f: [0,∞) → Rⁿ being continuous and B a continuous n × n matrix, both f and B of exponential order.

THEOREM 2.3.3  If H is defined by (2.3.7) and if H is differentiable, then the unique solution of (2.3.6) is given by

    x(t) = f(t) + ∫₀ᵗ H'(t − s)f(s) ds.                                (2.3.9)

PROOF  From the proof of Theorem 2.3.2 we have L(x) = [I − L(B)]⁻¹L(f) = sL(H)L(f), so that s⁻¹L(x) = L(H)L(f). Thus

    L(∫₀ᵗ x(s) ds) = L(∫₀ᵗ H(t − s)f(s) ds),

which implies

    ∫₀ᵗ x(s) ds = ∫₀ᵗ H(t − s)f(s) ds.

We differentiate this to obtain (2.3.9), because H(0) = I. This completes the proof.

The matrices Z and H are also called resolvents, which will be discussed in Section 2.8 in some detail.
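When H is differentiable, (2.3.8) and (2.3.9) must agree. For the scalar kernel B(t) ≡ 1 (our choice), H(t) = H'(t) = eᵗ, so both formulas can be evaluated by quadrature and compared at a sample point; f(t) = sin t and t = 1.7 are arbitrary:

```python
import math

# With scalar B(t) = 1, (2.3.7) gives H(t) = e^t, so H'(t) = e^t as well, and
#   (2.3.8): x(t) = H(t) f(0) + int_0^t H(t-s) f'(s) ds
#   (2.3.9): x(t) = f(t) + int_0^t H'(t-s) f(s) ds
# should produce the same value.
def quad(g, t, n=2000):
    ds = t / n                       # midpoint rule on [0, t]
    return sum(g((j + 0.5) * ds) * ds for j in range(n))

f, fp, t = math.sin, math.cos, 1.7
H = math.exp                         # H = H' for this kernel
x_38 = H(t) * f(0.0) + quad(lambda s: H(t - s) * fp(s), t)
x_39 = f(t) + quad(lambda s: H(t - s) * f(s), t)
```

Both values should match the hand-computed solution (eᵗ + sin t − cos t)/2 of x = sin t + ∫₀ᵗ x(s) ds.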
2.4. Stability

Consider the system

    x' = A(t)x + ∫₀ᵗ C(t,s)x(s) ds                                     (2.4.1)
with A an n × n matrix of functions continuous for 0 ≤ t < ∞ and C(t,s) an n × n matrix of functions continuous for 0 ≤ s ≤ t < ∞. If φ: [0,t₀] → Rⁿ is a continuous initial function, then x(t,φ) will denote the solution on [t₀,∞). If the information is needed, we may denote the solution by x(t,t₀,φ). Frequently, it suffices to write x(t). Notice that x(t) ≡ 0 is a solution of (2.4.1), and it is called the zero solution.

DEFINITION 2.4.1  The zero solution of (2.4.1) is (Liapunov) stable if, for each ε > 0 and each t₀ ≥ 0, there exists δ > 0 such that

    |φ(t)| < δ on [0,t₀] and t ≥ t₀

imply |x(t,φ)| < ε.

DEFINITION 2.4.2  The zero solution of (2.4.1) is uniformly stable if, for each ε > 0, there exists δ > 0 such that

    t₀ ≥ 0, |φ(t)| < δ on [0,t₀], and t ≥ t₀

imply |x(t,φ)| < ε.

DEFINITION 2.4.3  The zero solution of (2.4.1) is asymptotically stable if it is stable and if for each t₀ ≥ 0 there exists δ > 0 such that |φ(t)| < δ on [0,t₀] implies |x(t,φ)| → 0 as t → ∞.

DEFINITION 2.4.4  The zero solution of (2.4.1) is uniformly asymptotically stable (U.A.S.) if it is uniformly stable and if there exists η > 0 such that, for each ε > 0, there is a T > 0 such that

    t₀ ≥ 0, |φ(t)| < η on [0,t₀], and t ≥ t₀ + T

imply |x(t,φ)| < ε.
We begin with a brief reminder of Liapunov theory for ordinary differential equations. The basic idea is particularly simple. Consider a system of ordinary differential equations

    x' = G(t,x),                                                       (2.4.2)

with G: [0,∞) × Rⁿ → Rⁿ being continuous and G(t,0) = 0, so that x = 0 is a solution. The stability definitions apply to (2.4.2) with φ(t) = x(t₀) on [0,t₀]. Suppose first that there is a scalar function

    V: [0,∞) × Rⁿ → [0,∞)

having continuous first partial derivatives with respect to all variables. Suppose also that V(t,x) → ∞ as |x| → ∞ uniformly for 0 ≤ t < ∞; for example, suppose there is a continuous function W: Rⁿ → [0,∞) with W(x) → ∞ as |x| → ∞ and V(t,x) ≥ W(x). Notice that if x(t) is any solution of (2.4.2) on [0,∞), then V(t,x(t)) is a scalar function of t, and even if x(t) is not explicitly known, using the chain rule and (2.4.2) it is possible to compute V'(t,x(t)). We have

(a)    V'(t,x(t)) = (∂V/∂x₁)(dx₁/dt) + · · · + (∂V/∂xₙ)(dxₙ/dt) + ∂V/∂t.

But G(t,x) = (dx₁/dt, . . . , dxₙ/dt)ᵀ, and so (a) is actually

(b)    V'(t,x(t)) = grad V · G + ∂V/∂t.
The right-hand side of (b) consists of known functions of t and x. If V is shrewdly chosen, many conclusions may be drawn from the properties of V'. For example, if V'(t,x(t)) ≤ 0, then t ≥ t₀ implies V(t,x(t)) ≤ V(t₀,x(t₀)), and because V(t,x) → ∞ as |x| → ∞ uniformly for 0 ≤ t < ∞, x(t) is bounded. The object is to find a suitable V function.

We now illustrate how V may be constructed in the linear constant coefficient case. Let A be an n × n constant matrix all of whose characteristic roots have negative real parts, and consider the system

    x' = Ax.                                                           (2.4.3)

All solutions tend to zero exponentially, so that the matrix

    B = ∫₀^∞ [exp At]ᵀ[exp At] dt                                      (2.4.4)

is well defined, symmetric, and positive definite. Furthermore,

    AᵀB + BA = −I                                                      (2.4.5)

because

    −I = ∫₀^∞ (d/dt)[exp At]ᵀ[exp At] dt
       = ∫₀^∞ (Aᵀ[exp At]ᵀ[exp At] + [exp At]ᵀ[exp At]A) dt
       = AᵀB + BA.

Thus, if we select V as a function of x alone, say,

    V(x) = xᵀBx,                                                       (2.4.6)
then for x(t) a solution of (2.4.3) we have

    V'(x(t)) = (x')ᵀBx + xᵀBx'
             = xᵀAᵀBx + xᵀBAx
             = xᵀ(AᵀB + BA)x
             = −xᵀx.
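The construction of B can be carried out numerically for a concrete A. The sketch below (the matrix A, step size, and cutoff are our own choices; A has eigenvalues −1 and −2) integrates W(t) = [exp Aᵀt][exp At], which satisfies W' = AᵀW + WA with W(0) = I, accumulates B = ∫₀^∞ W(t) dt by a rectangle rule, and then checks the residual of (2.4.5):

```python
# 2x2 instance of (2.4.4)-(2.4.5) for a sample Hurwitz matrix A.
A = [[0.0, 1.0], [-2.0, -3.0]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

AT = [[A[j][i] for j in range(2)] for i in range(2)]
h, steps = 0.001, 15000              # Euler integration out to t = 15
W = [[1.0, 0.0], [0.0, 1.0]]         # W(0) = I
B = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(steps):
    B = add(B, [[h * W[i][j] for j in range(2)] for i in range(2)])
    dW = add(mul(AT, W), mul(W, A))  # W' = A^T W + W A
    W = add(W, [[h * dW[i][j] for j in range(2)] for i in range(2)])

R = add(mul(AT, B), mul(B, A))       # residual of A^T B + B A = -I
```

For this A, the exact Liapunov solution is B = [[5/4, 1/4], [1/4, 1/4]], which is symmetric and positive definite, and the residual should be close to −I.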
The matrix B will be used extensively throughout the following discussions.

In some of the most elementary problems, asking V(t,x) to have continuous first partial derivatives is too severe. Instead, it suffices to ask that

    V: [0,∞) × Rⁿ → [0,∞) is continuous and
    V satisfies a local Lipschitz condition in x.                      (2.4.7)

DEFINITION 2.4.5  A function V(t,x) satisfies a local Lipschitz condition in x on a subset D of [0,∞) × Rⁿ if, for each compact subset L of D, there is a constant K = K(L) such that (t,x₁) and (t,x₂) in L imply that

    |V(t,x₁) − V(t,x₂)| ≤ K|x₁ − x₂|.

If V satisfies (2.4.7), then one defines the derivative of V along a solution x(t) of (2.4.2) by

    V'_(2.4.2)(t,x) = lim sup_{h→0⁺} [V(t + h, x + hG(t,x)) − V(t,x)]/h.   (2.4.8)

Because V satisfies a local Lipschitz condition in x, when V is independent of t [so that V = V(x)], we see that

    |V'_(2.4.2)(x)| ≤ K|G(t,x)|.

Next, define

    V'(t,x(t)) = lim sup_{h→0⁺} [V(t + h, x(t + h)) − V(t,x(t))]/h.        (2.4.9)

It can be shown [see T. Yoshizawa (1966, p. 3)] that

    V'(t,x(t)) = V'_(2.4.2)(t,x).                                          (2.4.10)

Moreover, from integration theory it is known that V'(t,x(t)) ≤ 0 implies that V(t,x(t)) is nonincreasing.

The next problem will be encountered frequently in what follows, and it is best taken care of here. Refer to (2.4.3) and select B as in (2.4.4). Then form

    V(x) = [xᵀBx]^{1/2}
and compute the derivative along a solution of (2.4.3). If x ≠ 0, then V has continuous first partial derivatives and

    V'(x) = (xᵀBx)'/2[xᵀBx]^{1/2} = −xᵀx/2[xᵀBx]^{1/2}.

Now there is a positive constant k with |x| ≥ 2k[xᵀBx]^{1/2}, so for x ≠ 0,

    V'(x) ≤ −k|x|.

But we noted after (2.4.8) that

    |V'| ≤ K|G(t,x)| = K|Ax|,

so when x = 0 we have

    V'(x) ≤ 0.

Hence, for all x we see that V'(x) ≤ −k|x|.

The theory is almost identical for integro-differential equations, although the function V(t,x) is generally replaced by a functional V(t,x(·)) = V(t,x(s); 0 ≤ s ≤ t). We develop this idea more fully later when we consider general functional differential equations; however, we now have sufficient material for some general results.
2.5. Liapunov Functionals and Small Kernels

We consider the system

    x' = Ax + ∫₀ᵗ C(t,s)x(s) ds,                                       (2.5.1)

in which A is an n × n matrix all of whose characteristic roots have negative real parts, C(t,s) is an n × n matrix of functions continuous for 0 ≤ s ≤ t < ∞, and

    ∫ₜ^∞ |C(u,s)| du    is continuous for    0 ≤ s ≤ t < ∞.

Find a symmetric positive definite matrix B with

    AᵀB + BA = −I.                                                     (2.5.2)

There are positive constants r, k, and K (not unique) with

    |x| ≥ 2k[xᵀBx]^{1/2},                                              (2.5.3)

    |Bx| ≤ K[xᵀBx]^{1/2},                                              (2.5.4)
and

    r|x| ≤ [xᵀBx]^{1/2}.                                               (2.5.5)
A basic tool in the investigation of (2.5.1) is the functional

    V(t,x(·)) = [xᵀBx]^{1/2} + R ∫₀ᵗ ∫ₜ^∞ |C(u,s)| du |x(s)| ds,        (2.5.6)

where R is a positive constant. This functional has continuous first partial derivatives with respect to all variables (when x ≠ 0), and it satisfies a global Lipschitz condition in x(t).

Let us compute the derivative of (2.5.6) along a solution x(t) of (2.5.1). For x ≠ 0 we have

    V'(t,x(·)) = (xᵀBx)'/2[xᵀBx]^{1/2} + R ∫ₜ^∞ |C(u,t)| du |x| − R ∫₀ᵗ |C(t,s)| |x(s)| ds,

and because

    (xᵀBx)' = (x')ᵀBx + xᵀBx'
            = [xᵀAᵀ + ∫₀ᵗ xᵀ(s)Cᵀ(t,s) ds]Bx + xᵀB[Ax + ∫₀ᵗ C(t,s)x(s) ds]
            = −xᵀx + 2 ∫₀ᵗ xᵀ(s)Cᵀ(t,s) ds Bx,

by (2.5.3) and (2.5.4) we have

    (xᵀBx)'/2[xᵀBx]^{1/2} ≤ −k|x| + K ∫₀ᵗ |C(t,s)| |x(s)| ds.

This yields

    V' ≤ [−k + R ∫ₜ^∞ |C(u,t)| du]|x| − (R − K) ∫₀ᵗ |C(t,s)| |x(s)| ds.   (2.5.7)

Our basic assumption is

    There exist R ≥ K and ε ≥ 0 with ε ≤ k − R ∫ₜ^∞ |C(u,t)| du for all t ≥ 0.   (2.5.8)
THEOREM 2.5.1  Let B, k, and K be defined by Eqs. (2.5.2)-(2.5.4).

(a) If (2.5.8) holds, the zero solution of (2.5.1) is stable.
(b) If (2.5.8) holds with R > K and ε > 0, then x = 0 is asymptotically stable.
(c) If (2.5.8) holds and ∫₀ᵗ ∫ₜ^∞ |C(u,s)| du ds is bounded, then x = 0 is uniformly stable.
(d) Suppose (c) holds and R > K and ε > 0. If for each μ > 0 there exists S > 0 such that P ≥ S and t ≥ 0 imply ∫₀ᵗ ∫_{t+P}^∞ |C(u,s)| du ds < μ, then x = 0 is uniformly asymptotically stable.

PROOF OF (a)  Let ε > 0 and t₀ ≥ 0 be given. Under (2.5.8), V' ≤ 0 along solutions, so r|x(t)| ≤ V(t,x(·)) ≤ V(t₀,φ(·)) for t ≥ t₀, while |φ(t)| < δ on [0,t₀] implies

    V(t₀,φ(·)) ≤ δ[(1/2k) + R ∫₀^{t₀} ∫_{t₀}^∞ |C(u,s)| du ds].

It therefore suffices to choose δ > 0 with

    δ[(1/2k) + R ∫₀^{t₀} ∫_{t₀}^∞ |C(u,s)| du ds] < εr.                (2.5.9)

This choice of δ = δ(ε,t₀) fulfills the conditions for stability.

PROOF OF (c)  For uniform stability, δ must be independent of t₀. If ∫₀ᵗ ∫ₜ^∞ |C(u,s)| du ds ≤ M for 0 ≤ t < ∞ and some M > 0, then (2.5.9) may be replaced by

    δ < εr/[(1/2k) + RM],                                              (2.5.10)

yielding uniform stability.
PROOF OF (b)  Let R > K and ε > 0 in (2.5.8). Because

    |x'(t)| ≤ |A||x(t)| + ∫₀ᵗ |C(t,s)| |x(s)| ds,

it follows from (2.5.7) that there is a μ > 0 with

    V'(t,x(·)) ≤ −μ|x(t)| − μ|x'(t)|.                                  (2.5.11)

If Euclidean length is used for |x|, then ∫_{t₀}ᵗ |x'(s)| ds is arc length; let x[a,b] denote the arc length of x(t) on [a,b]. An integration of (2.5.11) from t₀ to t yields

    V(t,x(·)) ≤ V(t₀,φ(·)) − μ ∫_{t₀}ᵗ |x(s)| ds − μ x[t₀,t],          (2.5.12)

so that

    r|x(t)| ≤ V(t₀,φ(·)) − μ ∫_{t₀}ᵗ |x(s)| ds − μ x[t₀,t].            (2.5.13)

Because |x(t)| ≥ 0, we have ∫_{t₀}^∞ |x(s)| ds < ∞, which implies that there is a sequence {tₙ} → ∞ with |x(tₙ)| → 0. Also, x[t₀,t] is bounded. Thus |x(t)| → 0. Because (a) is satisfied, the proof of (b) is complete.
PROOF OF (d)  By (c), x = 0 is uniformly stable. Find δ > 0 such that |φ| < δ implies |x(t,φ)| < 1. Take η = δ and let ε > 0 be given. We then find T such that

    t₀ ≥ 0, |φ(t)| < δ on [0,t₀], and t ≥ t₀ + T

imply |x(t,φ)| < ε. The proof has three distinct parts.

(i) Find L > 0 and μ > 0 with (ε/2kL) + μR + R(εM/L) < rε. For that μ find S in (d). We show that if |x(t)| < ε/L on an interval of length S, then |x(t)| < ε always. Suppose |x(t)| < ε/L on an interval [t₁, t₁ + P] with P ≥ S. Then at t = t₁ + P we have (as |x(t)| < 1)

    r|x(t)| ≤ V(t,x(·)) = [xᵀBx]^{1/2} + R ∫₀^{t₁} ∫ₜ^∞ |C(u,s)| du |x(s)| ds
                                       + R ∫_{t₁}ᵗ ∫ₜ^∞ |C(u,s)| du |x(s)| ds
            ≤ (ε/2kL) + μR + RMε/L < rε.

As V' ≤ 0, we have

    r|x(t)| ≤ V(t,x(·)) ≤ V(t₁ + P, x(·)) < rε

for t ≥ t₁ + P, so that |x(t)| < ε if t ≥ t₁ + P.

(ii) Note that there is a P₁ > 0 such that the inequality |x(t)| ≥ ε/2L on an interval of length P₁ must fail, because

    0 ≤ r|x(t)| ≤ V(t,x(·)) ≤ (1/2k) + RM − μ ∫_{t₀}ᵗ |x(s)| ds.

(iii) There is an N such that |x(t)| moves from ε/2L to ε/L at most N times, because

    0 ≤ V(t,x(·)) ≤ (1/2k) + RM − μ x[t₀,t].
Thus, on each interval of length S + P₁, either |x(t)| remains smaller than ε/L for S time units or |x(t)| moves from ε/2L to ε/L. The motion from ε/2L to ε/L happens at most N times. Thus, if t > t₀ + N(S + P₁), then we have |x(t)| < ε always. Taking T = N(S + P₁) completes the proof.

Remark  The corollary to Theorem 2.6.1 will show that the conclusion in (b) demonstrates uniform asymptotic stability in the convolution case.

Exercise 2.5.1  Consider the scalar equation

    x' = −x + ∫₀ᵗ a(t,s)(t − s + 1)⁻ⁿ x(s) ds

for n > 1 and a(t,s) a continuous scalar function satisfying |a(t,s)| ≤ d for some d > 0. Determine conditions on d and n to ensure that each part of Theorem 2.5.1 is satisfied. That is, give different conditions for each part of the theorem. Pay careful attention to (d) and notice how part (i) of the proof would be accomplished.
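As a worked illustration of the exercise's setup (with our own sample values d = 1 and n = 3, and the scalar identifications B = 1/2 and k = K = 1/√2, which follow from (2.5.2)-(2.5.4) when A = −1), one can check the tail integral entering (2.5.8) numerically:

```python
import math

# Scalar case of Exercise 2.5.1: A = -1 gives B = 1/2 in (2.5.2), so
# [x^T B x]^{1/2} = |x|/sqrt(2) and (2.5.3)-(2.5.4) hold with k = K = 1/sqrt(2).
# For the kernel bound d (t - s + 1)^{-n}, the tail integral in (2.5.8) is
#   int_t^inf d (u - t + 1)^{-n} du = d/(n - 1).
k = K = 1.0 / math.sqrt(2.0)
d, n = 1.0, 3.0

def tail(upper=500.0, steps=100000):
    # midpoint-rule check of int_0^inf d (v + 1)^{-n} dv = d/(n - 1)
    h = upper / steps
    return sum(d * ((j + 0.5) * h + 1.0) ** (-n) * h for j in range(steps))

exact = d / (n - 1.0)
approx = tail()
slack = k - K * exact      # epsilon in (2.5.8) with R = K; positive iff d/(n-1) < 1
```

With d = 1 and n = 3 the tail is 1/2, so (2.5.8) holds with a strictly positive ε; the condition d/(n − 1) < 1 is what the numerics suggest for part (a).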
Exercise 2.5.2  Consider

    x' = −x + ∫₀ᵗ d(t − s + 1)⁻ⁿ x(s) ds + sin t,

with d and n positive constants. Determine d and n such that the variation of parameters formula yields all solutions bounded.
There is also a variation of parameters formula for

    x' = Ax + ∫₀ᵗ C(t,s)x(s) ds + F(t),                                (2.5.14)

namely,

    x(t) = R(t,0)x(0) + ∫₀ᵗ R(t,s)F(s) ds,                             (2.5.15)

where R(t,s) is called the resolvent and is an n × n matrix that satisfies

    ∂R(t,s)/∂s = −R(t,s)A − ∫ₛᵗ R(t,u)C(u,s) du                        (2.5.16)

for 0 ≤ s ≤ t and R(t,t) = I. When C(t,s) is of convolution type, so is R(t,s), and in fact R(t,s) = Z(t − s), where Z(t) is the n × n matrix satisfying

    Z'(t) = AZ(t) + ∫₀ᵗ C(t − s)Z(s) ds                                (2.5.17)

and Z(0) = I.
We found conditions for which ∫₀^∞ |x(t)| dt < ∞ for each solution of (2.5.1), so that ∫₀^∞ |Z(t)| dt < ∞. Thus, in the convolution case, a bounded F in (2.5.14) produced bounded solutions. But in the general case of (2.5.14), we have too little evidence of the integrability of R(t,s). Thus we are motivated to consider Liapunov's direct method for the forced equation (2.5.14).

Extensive treatment of the resolvent may be found in Miller (1971a), in a series of papers by Miller (see also the references mentioned in Section 2.8), and in papers by Grossman and Miller appearing in the Journal of Differential Equations from 1969 to the mid-1970s. Additional results and references are found in Grimmer and Seifert (1975). The following is one of their results, but the proof presented here is different.

THEOREM 2.5.2  Let A be an n × n constant matrix all of whose characteristic roots have negative real parts, let C(t,s) be continuous for 0 ≤ s ≤ t < ∞, and let F: [0,∞) → Rⁿ be bounded and continuous. Suppose B satisfies AᵀB + BA = −I and α² and β² are the smallest and largest eigenvalues of B, respectively. If ∫₀ᵗ |BC(t,s)| ds ≤ M for 0 ≤ t < ∞ and 2βM/α < 1, then all solutions of (2.5.14) are bounded.

PROOF  If the theorem is false, there is a solution x(t) with lim sup_{t→∞} xᵀ(t)Bx(t) = +∞. Thus, there are values of t with |x(t)| as large as we please and [xᵀ(t)Bx(t)]' ≥ 0, say, at t = S, and xᵀ(t)Bx(t) ≤ xᵀ(S)Bx(S) if t ≤ S. Hence, at t = S we have

    [xᵀ(t)Bx(t)]' = −xᵀ(t)x(t) + 2 ∫₀ᵗ xᵀ(s)Cᵀ(t,s) ds Bx(t) + 2Fᵀ(t)Bx(t) ≥ 0

or

    xᵀ(S)x(S) ≤ 2 ∫₀^S xᵀ(s)Cᵀ(S,s)Bx(S) ds + 2Fᵀ(S)Bx(S)
             ≤ 2|x(S)| ∫₀^S |BC(S,s)| [(xᵀ(s)Bx(s))^{1/2}/α] ds + 2xᵀ(S)BF(S)
             ≤ (2/α)|x(S)| (xᵀ(S)Bx(S))^{1/2} ∫₀^S |BC(S,s)| ds + 2xᵀ(S)BF(S)
             ≤ (2βM/α)|x(S)|² + 2xᵀ(S)BF(S).

As 2βM/α < 1, we have a contradiction for |x(S)| sufficiently large.
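Theorem 2.5.2's conclusion can be watched numerically. The scalar instance below is our own (A = −1, C(t,s) = 0.5e^{−(t−s)}, F(t) = sin t); here B = 1/2, α = β = 1/√2, and ∫₀ᵗ |BC(t,s)| ds ≤ 0.25 = M, so 2βM/α = 0.5 < 1 and the theorem predicts bounded solutions. Because the kernel is of convolution type, the memory term m(t) = ∫₀ᵗ C(t,s)x(s) ds can be carried as an extra state with m' = −m + 0.5x:

```python
import math

# Scalar instance of (2.5.14): A = -1, C(t,s) = 0.5 e^{-(t-s)}, F(t) = sin t.
# Then 2*beta*M/alpha = 0.5 < 1, so Theorem 2.5.2 gives boundedness.
h, T = 0.001, 60.0
x, m, t = 1.0, 0.0, 0.0
biggest = abs(x)
for _ in range(int(T / h)):
    # Euler step for the equivalent system x' = -x + m + sin t, m' = -m + 0.5 x
    x, m = x + h * (-x + m + math.sin(t)), m + h * (-m + 0.5 * x)
    t += h
    biggest = max(biggest, abs(x))
```

The run should settle into a bounded oscillation driven by sin t rather than grow.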
The proof of the last theorem is a variant of what is known as the Liapunov-Razumikhin technique, which uses a Liapunov function (rather than a functional) to show boundedness and stability results for a functional differential equation. An introduction to the method for general functional differential equations is found in Driver (1962). Detailed adaptations of the Razumikhin method to Volterra equations may be found in Grimmer and Seifert (1975) and in Grimmer (1979). Most of those results are discussed in Chapter 8. Halanay and Yorke (1971) argue very strongly for the merits of this method over the method of Liapunov functionals.

Notice that the main conditions in the last two theorems are very different. In Theorem 2.5.1 we mainly ask that ∫ₜ^∞ |C(u,t)| du be small, where the first coordinate is integrated. But in Theorem 2.5.2 we ask that ∫₀ᵗ |BC(t,s)| ds be small, where the second coordinate is integrated.

Under certain conditions on C(t,s) it is possible to obtain a differential inequality when considering (2.5.6), (2.5.7), and (2.5.1). That is, we differentiate V(t,x(·)) along a solution of (2.5.1) and attempt to find a scalar function η(t) > 0 with

    V'(t,x(·)) ≤ −η(t)V(t,x(·)).                                       (2.5.18)

When that situation occurs, owing to the global Lipschitz condition in x(t) that V satisfies, it turns out that the derivative of V along a solution of the forced equation (2.5.14) results in the inequality

    V'(t,x(·)) ≤ −η(t)V(t,x(·)) + K|F(t)|.                             (2.5.19)
It then follows that, for a solution x(t,φ) on [t₀,∞),

    V(t,x(·)) ≤ V(t₀,φ(·)) exp(−∫_{t₀}ᵗ η(s) ds) + K ∫_{t₀}ᵗ exp(−∫ₛᵗ η(u) du)|F(s)| ds,   (2.5.20)

which can ensure boundedness, depending on the properties of η and F. Equation (2.5.20) becomes a substitute variation of parameters formula for (2.5.14), acting in place of (2.5.15). In fact, it may be superior to (2.5.15) in many ways, even if much is known about R(t,s). To see this, recall that, for a system of ordinary differential equations

    x' = P(t)x + Q(t)

with P(t) not constant, if Z(t) is the n × n matrix satisfying

    Z'(t) = P(t)Z(t),    Z(0) = I,
Lmear Equations
then the variation of parameters formula is
+
x ( t ) = Z(t)x(O)
Z ( t ) Z - '(s)Q(s)ds.
Even if Z(t) is bounded, Z-'(s) may be very badly behaved. One usually needs to ask that
Ji trP(s)ds 2 - M
>
-00
to utilize that variation of parameters formula; and this condition may imply that Z(t)t*O as t -+ 00. In that case, the hope of concluding that bounded Q produces bounded x ( t ) vanishes. To achieve (2.5.19) we examine V' I
[
-k
+R
6"
(C(U, t)I du] 1x1 - ( R - K )
Ji IC(t,s)( Ix(s)(ds
(2.5.7)
once more and observe that we require a function
A: [O, 00) + [O, 1 1 with IC(4 s)l 2
W) J" p(u, 41du,
(2.5.21)
for 0 I s I t < 00. For, in that case, if R > K and
K I k - R J" IC(u,t ) Jdu
with K positive, then from (2.5.7) we have V' I -KIxI - ( R - K)A(t)Ji
I"
IC(u,s)l du Ix(s)l ds
Ji I"(C(u,$1
I - 2 k E A ( t ) [ ~ ~ B x ] '-' ~[ ( R - K)/R]A.(t)R
dulx(s)l ds
I - ?(t)V (t, x ( .) ), where q ( t ) = A(t) min[2kE;
(R- K)/R].
(2.5.22)
These calculations prove the following result. THEOREM 2.5.3 Suppose that the conditions of Theorem 2.5.1 (b) hold and that (2.5.21) and (2.5.22) are satisjed. If x ( t , 4 ) is a solution of (2.5.14) on [ t o , co)and i f V is dejned by (2.5.6), then V ' ( t , x ( . ) )I - t l ( t ) v ( t , X ( . ) ) + KIF(t)l,
2.5. Liapuoov Functionals and Small Kernels
45
and therefore,
Exercise 2.5.3 Verify that (2.5.19)holds. Exercise 2.5.4 Consider the scalar equation x’(t) = - x ( t )
+ Jd C ( t , s ) x ( s ) d s+ acost,
c,{exp[ - h(t - s ) ] } with c1 and h being positive constants. where IC(t,s)l I Find conditions on h and c 1 to ensure that the conditions of Theorem 2.5.3 are satisfied and q ( t ) is constant. Your conditions may take the form q(t) = h(P - 1)/B I 1
for some B > 1
and for some a < 1.
Bc,/h I a
In the convolution case there is a natural way to search for \eta(t).

Exercise 2.5.5 Consider the vector system

x' = Ax + \int_0^t D(t-s)x(s)\,ds    (2.5.23)

in which the characteristic roots of A have negative real parts and |D(t)| > 0 on [0,\infty). Let B, k, and K be defined as before and suppose that there is a d > K and k_1 > 0 with

k > k_1 \ge d\int_t^\infty |D(u-t)|\,du,   0 \le t < \infty.

Prove the following result.

THEOREM 2.5.4 If there is a continuous and nonincreasing scalar function \lambda\colon [0,\infty) \to (0,\infty) with

\lambda(t)\int_t^\infty |D(u-s)|\,du \le |D(t-s)|,   0 \le s \le t < \infty,

then there is a constant q > 0 such that for x(t) a solution of (2.5.23) and

V(t,x(\cdot)) = [x^T Bx]^{1/2} + d\int_0^t \int_t^\infty |D(u-s)|\,du\,|x(s)|\,ds

we have V'(t,x(\cdot)) \le -q\lambda(t)V(t,x(\cdot)).
In our discussion of the variation of parameters formula for an ordinary differential equation

x' = P(t)x + Q(t)

with P not constant, but P and Q continuous on [0,\infty), we looked at Z(t)Z^{-1}(s), where

Z'(t) = P(t)Z(t),   Z(0) = I.

Jacobi's identity [or the Wronskian theorem; see Hale (1969, pp. 90-91)] states that

\det Z(t) = \exp\int_0^t \operatorname{tr} P(s)\,ds,

so that \det Z(t) never vanishes. However, if Z(t) is the principal matrix solution of

x' = Ax + \int_0^t B(t-s)x(s)\,ds    (2.5.24)

with A constant and B continuous, then \det Z(t) may vanish for many values of t.

THEOREM 2.5.5 Suppose that (2.5.24) is a scalar equation with A \le 0 and B(t) \le 0 on [0,\infty). If there exists t_1 > 0 such that

\int_{t_1}^t \int_0^{t_1} B(u-s)\,ds\,du \to -\infty   as   t \to \infty,

then there exists t_2 > 0 such that if x(0) = 1, then x(t_2) = 0.

PROOF If the theorem is false, then x(t) has a positive minimum, say x_1, on [0,t_1]. Then for t \ge t_1 we have

x'(t) \le \int_0^{t_1} B(t-s)x(s)\,ds + \int_{t_1}^t B(t-s)x(s)\,ds \le \int_0^{t_1} B(t-s)x_1\,ds,

implying, upon integration, that

x(t) \le x(t_1) + \int_{t_1}^t \int_0^{t_1} B(u-s)x_1\,ds\,du \to -\infty   as   t \to \infty,

a contradiction. This completes the proof.
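The phenomenon in Theorem 2.5.5 can be seen concretely. The following is a minimal numerical sketch under illustrative choices that are not from the text: with A = 0 and B \equiv -1 the equation is x'(t) = -\int_0^t x(s)\,ds, whose solution with x(0) = 1 is \cos t, so x vanishes near t = \pi/2, and the hypothesis \int_{t_1}^t \int_0^{t_1} B\,ds\,du \to -\infty clearly holds.

```python
import numpy as np

# Euler stepping for x'(t) = A*x + int_0^t B(t-s) x(s) ds with A = 0, B = -1;
# the integral is approximated by the trapezoidal rule at each step.
h, T = 1e-3, 3.0
n = int(T / h)
x = np.empty(n + 1)
x[0] = 1.0
for i in range(n):
    if i == 0:
        integral = 0.0
    else:
        # trapezoidal approximation of int_0^{t_i} x(s) ds
        integral = h * (x[:i + 1].sum() - 0.5 * (x[0] + x[i]))
    x[i + 1] = x[i] - h * integral          # A = 0, B(t-s) = -1
t_zero = np.argmax(x < 0.0) * h             # first time the solution turns negative
```

The computed crossing lands close to \pi/2 \approx 1.571, consistent with x(t) = \cos t.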
2.6. Uniform Asymptotic Stability

We noticed in Theorem 2.5.1 that every solution x(t) of (2.5.1) may satisfy

\int_0^\infty |x(t)|\,dt < \infty

(that is, x is L^1[0,\infty)) under considerably milder conditions than those required for uniform asymptotic stability. However, in the convolution case

x' = Ax + \int_0^t D(t-s)x(s)\,ds    (2.6.1)

with D(t) continuous on [0,\infty) and A being an n \times n constant matrix, the pair of conditions

\int_0^\infty |D(t)|\,dt < \infty   and   \int_0^\infty |x(t)|\,dt < \infty    (2.6.2)
is equivalent to uniform asymptotic stability of (2.6.1). This is a result of Miller (1971b), and we present part of it here.

THEOREM 2.6.1 If each solution x(t) of (2.6.1) on [0,\infty) is L^1[0,\infty), if D(t) is L^1[0,\infty), and if A is a constant n \times n matrix, then the zero solution of (2.6.1) is uniformly asymptotically stable.

PROOF If Z(t) is the n \times n matrix with Z(0) = I and

Z'(t) = AZ(t) + \int_0^t D(t-s)Z(s)\,ds,

then Z(t) is L^1[0,\infty). Let x(t,t_0,\phi) be a solution of (2.6.1) on [t_0,\infty). Then

x'(t,t_0,\phi) = Ax(t,t_0,\phi) + \int_0^{t_0} D(t-s)\phi(s)\,ds + \int_{t_0}^t D(t-s)x(s,t_0,\phi)\,ds,

so that

x'(t+t_0,t_0,\phi) = Ax(t+t_0,t_0,\phi) + \int_0^{t_0} D(t+t_0-s)\phi(s)\,ds + \int_0^t D(t-s)x(t_0+s,t_0,\phi)\,ds,

or x(t+t_0,t_0,\phi) is a solution of the nonhomogeneous equation

x' = Ax + \int_0^t D(t-s)x(s)\,ds + \int_0^{t_0} D(t+t_0-s)\phi(s)\,ds,

which we write as

y' = Ay + \int_0^t D(t-s)y(s)\,ds + F(t)    (2.6.3)

with y(0) = x(t_0,t_0,\phi) = \phi(t_0) and

F(t) = \int_0^{t_0} D(t+t_0-s)\phi(s)\,ds.    (2.6.4)

By the variation of parameters formula [see Eq. (2.3.5) in Theorem 2.3.1] we have

y(t) = Z(t)\phi(t_0) + \int_0^t Z(t-s)F(s)\,ds,

so that

x(t+t_0,t_0,\phi) = Z(t)\phi(t_0) + \int_0^t Z(t-s)\Big\{\int_0^{t_0} D(s+u)\phi(t_0-u)\,du\Big\}\,ds.    (2.6.5)

Next, notice that, because A is constant and Z(t) is L^1[0,\infty), then AZ(t) is L^1[0,\infty). Also, the convolution of two functions in L^1[0,\infty) is L^1[0,\infty), as may be seen by Fubini's theorem [see Rudin (1966, p. 156)]. Thus \int_0^t D(t-s)Z(s)\,ds is L^1[0,\infty), and hence Z'(t) is L^1[0,\infty). Now, because Z'(t) is L^1[0,\infty), it follows that Z(t) has a limit as t \to \infty. But, because Z(t) is L^1[0,\infty), the limit is zero. Moreover, the convolution of an L^1[0,\infty) function with a function tending to zero as t \to \infty yields a function tending to zero as t \to \infty. (Hint: Use the dominated convergence theorem.) Thus

Z'(t) = AZ(t) + \int_0^t D(t-s)Z(s)\,ds \to 0   as   t \to \infty.

Examine (2.6.5) again and review the definition of uniform asymptotic stability (Definition 2.4.4). We must show that |\phi(t)| < \eta on [0,t_0] implies that x(t+t_0,t_0,\phi) \to 0 independently of t_0. Now in (2.6.5) we see that Z(t)\phi(t_0) \to 0 independently of t_0. The second term is bounded by

\eta\int_0^t |Z(t-s)|\int_0^{t_0} |D(s+u)|\,du\,ds \le \eta\int_0^t |Z(t-s)|\int_s^\infty |D(v)|\,dv\,ds,

and that is the convolution of an L^1 function with a function tending to zero as t \to \infty and, hence, is a (bounded) function tending to zero as t \to \infty. Thus,

x(t+t_0,t_0,\phi) \to 0   as   t \to \infty

independently of t_0. The proof is complete.

COROLLARY 1 If the conditions of Theorem 2.5.1(b) hold and if C(t,s) is of convolution type, then the zero solution of (2.5.1) is uniformly asymptotically stable.

PROOF Under the stated conditions, we saw that solutions of (2.5.1) were L^1[0,\infty).
COROLLARY 2 Consider

x' = Ax + \int_0^t D(t-s)x(s)\,ds    (2.6.1)

with A being an n \times n constant matrix and D continuous on [0,\infty). Suppose that each solution of (2.6.1) with initial condition x(0) = x_0 tends to zero as t \to \infty. If there is a function \lambda(s) \in L^1[0,\infty) with

\int_0^{t_0} |D(s+u)|\,du \le \lambda(s)

for 0 \le t_0 < \infty and 0 < s < \infty, then the zero solution of (2.6.1) is uniformly asymptotically stable.

PROOF We see that Z(t) \to 0 as t \to \infty, and in (2.6.5), then, we have

|x(t+t_0,t_0,\phi)| \le |Z(t)|\,|\phi(t_0)| + \max_{0\le s\le t_0}|\phi(s)|\int_0^t |Z(t-s)|\lambda(s)\,ds.

The integral is the convolution of an L^1 function with a function tending to zero as t \to \infty and, hence, tends to zero. Thus x(t+t_0,t_0,\phi) \to 0 as t \to \infty uniformly in t_0. This completes the proof.

Example 2.6.1 Let D(t) = (t+1)^{-n} for n > 2. Then

\int_0^{t_0} D(s+u)\,du = \int_0^{t_0} (s+u+1)^{-n}\,du \le (s+1)^{-n+1}/(n-1),

which is L^1. We shall see an application of this concerning a theorem of Levin on

x' = -\int_0^t a(t-s)x(s)\,ds
when a(t) is completely monotone.

Recall that for a linear system

x' = A(t)x    (2.6.6)

with A(t) an n \times n matrix and continuous on [0,\infty), the following are equivalent:

(i) All solutions of (2.6.6) are bounded.
(ii) The zero solution is stable.

The following are also equivalent under the same conditions:

(i) All solutions of (2.6.6) tend to zero.
(ii) The zero solution is asymptotically stable.

However, when A(t) is T-periodic, then the following are equivalent:

(i) All solutions of (2.6.6) are bounded.
(ii) The zero solution is uniformly stable.

Also, A(t) periodic implies the equivalence of

(i) All solutions of (2.6.6) tend to zero.
(ii) The zero solution is uniformly asymptotically stable.
(iii) All solutions of

x' = A(t)x + F(t)    (2.6.7)

are bounded for each bounded and continuous F\colon [0,\infty) \to R^n.

Property (iii) is closely related to Theorem 2.6.1. Also, the result is true with |A(t)| bounded instead of periodic. But with A periodic, the result is simple, because, from Floquet theory, there is a nonsingular T-periodic matrix P and a constant matrix R with Z(t) = P(t)e^{Rt} being an n \times n matrix satisfying (2.6.6). By the variation of parameters formula each solution x(t) of (2.6.7) on [0,\infty) may be expressed as

x(t) = Z(t)x(0) + \int_0^t Z(t)Z^{-1}(s)F(s)\,ds.

In particular, when x(0) = 0, then

x(t) = \int_0^t P(t)[e^{R(t-s)}]P^{-1}(s)F(s)\,ds.

Now P(t) and P^{-1}(s) are continuous and bounded. One argues that if x(t) is bounded for each bounded F, then the characteristic roots of R have negative real parts; but, it is more to the point that

\int_0^\infty |Z(t)|\,dt < \infty.

Thus, one argues from (iii) that solutions of (2.6.6) are L^1[0,\infty) and then that the zero solution of (2.6.6) is uniformly asymptotically stable. We shall shortly (proof of Theorem 2.6.6) see a parallel argument for (2.6.1). The preceding discussion is a special case of a result by Perron for |A(t)| bounded. A proof may be found in Hale (1969; p. 152).

Problem 2.6.1 Examine (2.6.5) and decide if:

(a) boundedness of all solutions of (2.6.1) implies that x = 0 is stable;
(b) whenever all solutions of (2.6.1) tend to zero, then the zero solution of (2.6.1) is asymptotically stable.

We next present a set of equivalent statements for a scalar Volterra equation of convolution type in which A is constant and D(t) is positive. An n-dimensional counterpart is given in Theorem 2.6.6.
THEOREM 2.6.2 Let A be a positive real number, D\colon [0,\infty) \to (0,\infty) continuous, \int_0^\infty D(t)\,dt < \infty, -A + \int_0^\infty D(t)\,dt \ne 0, and

x' = -Ax + \int_0^t D(t-s)x(s)\,ds.    (2.6.8)

The following statements are equivalent:

(a) All solutions tend to zero.
(b) -A + \int_0^\infty D(t)\,dt < 0.
(c) Each solution is L^1[0,\infty).
(d) The zero solution is uniformly asymptotically stable.
(e) The zero solution is asymptotically stable.
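Before the proof, the equivalence of (a) and (b) can be illustrated numerically. The sketch below uses the illustrative kernel D(t) = c\,e^{-t} (my choice, not from the text): writing w(t) = \int_0^t e^{-(t-s)}x(s)\,ds turns (2.6.8) into the planar system x' = -Ax + cw, w' = x - w, and since \int_0^\infty D(t)\,dt = c, condition (b) reads simply c < A.

```python
import numpy as np

def spectrum(A, c):
    # coefficient matrix of the reduced system x' = -A*x + c*w, w' = x - w
    return np.linalg.eigvals(np.array([[-A, c], [1.0, -1.0]]))

# c < A: condition (b) holds and all eigenvalues lie in the left half plane
assert max(e.real for e in spectrum(2.0, 1.0)) < 0.0
# c > A: condition (b) fails and an eigenvalue crosses into the right half plane
assert max(e.real for e in spectrum(2.0, 3.0)) > 0.0
```

The determinant of the reduced matrix is A - c, so the stability boundary sits exactly at c = A, matching (b).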
PROOF We show that each statement implies the succeeding one and, of course, (e) implies (a). Suppose (a) holds, but -A + \int_0^\infty D(t)\,dt > 0. Choose t_0 so large that \int_0^{t_0} D(t)\,dt > A and let \phi(t) = 2 on [0,t_0]. Then we claim that x(t,\phi) > 1 on [t_0,\infty). If not, then there is a first t_1 with x(t_1) = 1, and therefore x'(t_1) \le 0. But

x'(t_1) = -Ax(t_1) + \int_0^{t_1} D(t_1-s)x(s)\,ds = -A + \int_0^{t_1} D(s)x(t_1-s)\,ds \ge -A + \int_0^{t_1} D(s)\,ds \ge -A + \int_0^{t_0} D(s)\,ds > 0,

a contradiction. Thus (a) implies (b).

Let (b) hold and define

V(t,x(\cdot)) = |x| + \int_0^t \int_t^\infty D(u-s)\,du\,|x(s)|\,ds,

so that if x(t) is a solution of (2.6.8), then

V'(t,x(\cdot)) \le -A|x| + \int_0^t D(t-s)|x(s)|\,ds + \int_t^\infty D(u-t)\,du\,|x| - \int_0^t D(t-s)|x(s)|\,ds = \Big[-A + \int_0^\infty D(v)\,dv\Big]|x| \le -\alpha|x|

for some \alpha > 0. An integration yields

0 \le V(t,x(\cdot)) \le V(t_0,\phi) - \alpha\int_{t_0}^t |x(s)|\,ds,

as required. Thus, (b) implies (c). Now Theorem 2.6.1 shows that (c) implies (d). Clearly (d) implies (e), and the proof is complete.

To this point we have depended on the strength of A to overcome the effects of D(t) in

x' = Ax + \int_0^t D(t-s)x(s)\,ds    (2.6.1)

to produce boundedness and stability. We now turn from that view and consider a system
x' = A(t)x + \int_0^t C(t,s)x(s)\,ds + F(t),    (2.6.9)

with A being an n \times n matrix and continuous on [0,\infty), C(t,s) continuous for 0 \le s \le t < \infty and n \times n, and F\colon [0,\infty) \to R^n bounded and continuous. Suppose that

G(t,s) = -\int_t^\infty C(u,s)\,du    (2.6.10)

is defined and continuous for 0 \le s \le t < \infty. Define a matrix Q on [0,\infty) by

Q(t) = A(t) - G(t,t)    (2.6.11)

and require that

Q commutes with its integral    (2.6.12)

(as would be the case if A were constant and C of convolution type) and that

\Big|\exp\int_u^t Q(s)\,ds\Big| \le M\exp[-\alpha(t-u)]    (2.6.13)

for 0 \le u \le t and some positive constants M and \alpha. Here, when L is a square matrix, then e^L is defined as the usual power series (of matrices). Also, when Q(t) commutes with its integral, then \exp\int_{t_0}^t Q(s)\,ds is a solution matrix of

x' = Q(t)x.

Moreover, its inverse is then \exp[-\int_{t_0}^t Q(s)\,ds].
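Condition (2.6.13) is easy to check numerically when Q is constant, since then \exp\int_u^t Q(s)\,ds = e^{Q(t-u)}. A small sketch with an illustrative matrix and constants M = 5, \alpha = 0.5 (my choices, not from the text):

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0, 1.0],
              [0.0, -2.0]])     # constant, so Q trivially commutes with its integral
M, alpha = 5.0, 0.5
for tau in np.linspace(0.0, 20.0, 201):          # tau plays the role of t - u
    # verify |exp(Q*tau)| <= M * exp(-alpha*tau) in the spectral norm
    assert np.linalg.norm(expm(Q * tau), 2) <= M * np.exp(-alpha * tau)
```

Since the eigenvalues of this Q are -1 and -2, any decay rate \alpha below 1 works once M absorbs the transient.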
Notice that (2.6.9) may be written as

x' = [A(t) - G(t,t)]x + F(t) + (d/dt)\int_0^t G(t,s)x(s)\,ds.    (2.6.14)

If we subtract Qx from both sides, left multiply by \exp[-\int_0^t Q(s)\,ds], and group terms, then we obtain

\Big\{\Big(\exp\Big[-\int_0^t Q(s)\,ds\Big]\Big)x(t)\Big\}' = \Big\{\exp\Big[-\int_0^t Q(s)\,ds\Big]\Big\}\Big[(d/dt)\int_0^t G(t,s)x(s)\,ds + F(t)\Big].

Let \phi be a given continuous initial function on [0,t_0]. Integrate the last equation from t_0 to t and obtain

\Big\{\exp\Big[-\int_0^t Q(s)\,ds\Big]\Big\}x(t) = \Big\{\exp\Big[-\int_0^{t_0} Q(s)\,ds\Big]\Big\}x(t_0) + \int_{t_0}^t \Big\{\exp\Big[-\int_0^u Q(s)\,ds\Big]\Big\}F(u)\,du + \int_{t_0}^t \Big\{\exp\Big[-\int_0^u Q(s)\,ds\Big]\Big\}(d/du)\int_0^u G(u,s)x(s)\,ds\,du.

If we integrate the last term by parts, we obtain

\Big\{\exp\Big[-\int_0^t Q(s)\,ds\Big]\Big\}x(t) = \Big\{\exp\Big[-\int_0^{t_0} Q(s)\,ds\Big]\Big\}\Big[x(t_0) - \int_0^{t_0} G(t_0,s)\phi(s)\,ds\Big] + \Big\{\exp\Big[-\int_0^t Q(s)\,ds\Big]\Big\}\int_0^t G(t,s)x(s)\,ds + \int_{t_0}^t \Big\{\exp\Big[-\int_0^u Q(s)\,ds\Big]\Big\}\Big[F(u) + Q(u)\int_0^u G(u,s)x(s)\,ds\Big]\,du.

Left multiply by \exp[\int_0^t Q(s)\,ds], take norms, and use (2.6.13) to obtain

|x(t)| \le M\exp[-\alpha(t-t_0)]\Big[|x(t_0)| + \int_0^{t_0} |G(t_0,s)\phi(s)|\,ds\Big] + \int_0^t |G(t,s)|\,|x(s)|\,ds + \int_{t_0}^t M\exp[-\alpha(t-u)]\Big[|F(u)| + |Q(u)|\int_0^u |G(u,s)|\,|x(s)|\,ds\Big]\,du.    (2.6.15)
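The step from (2.6.9) to (2.6.14) rests on the identity (d/dt)\int_0^t G(t,s)x(s)\,ds = G(t,t)x(t) + \int_0^t C(t,s)x(s)\,ds, which holds because \partial G/\partial t = C. A symbolic spot check, with the illustrative choices C(t,s) = e^{-(t-s)} and x(s) = \cos s (my choices, not from the text):

```python
import sympy as sp

t, s, u = sp.symbols('t s u', positive=True)
C = sp.exp(-(t - s))
# G(t,s) = -int_t^oo C(u,s) du, computed symbolically
G = -sp.integrate(C.subs(t, u), (u, t, sp.oo))
x = sp.cos(s)
lhs = sp.diff(sp.integrate(G * x, (s, 0, t)), t)
rhs = G.subs(s, t) * sp.cos(t) + sp.integrate(C * x, (s, 0, t))
assert sp.simplify(lhs - rhs) == 0
```

The same computation with any smooth kernel admitting a convergent tail integral would verify the identity equally well.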
THEOREM 2.6.3 If x(t) is a solution of (2.6.9), if |Q(t)| \le D on [0,\infty) for some D > 0, and if \sup_{0\le t<\infty}\int_0^t |G(t,s)|\,ds \le \beta, then for \beta sufficiently small, x(t) is bounded.

PROOF For the given t_0 and \phi, because F is bounded there is a K_1 > 0 with

M|x(t_0)| + M\int_0^{t_0} |G(t_0,s)\phi(s)|\,ds + \sup_{t_0\le t<\infty}\int_{t_0}^t M\{\exp[-\alpha(t-u)]\}|F(u)|\,du < K_1.

From this and (2.6.15) we obtain

|x(t)| \le K_1 + \int_0^t |G(t,s)|\,|x(s)|\,ds + \int_{t_0}^t DM\exp[-\alpha(t-u)]\int_0^u |G(u,s)|\,|x(s)|\,ds\,du
\le K_1 + \beta\sup_{0\le s\le t}|x(s)| + (DM\beta/\alpha)\sup_{0\le s\le t}|x(s)|
= K_1 + \beta[1 + (DM/\alpha)]\sup_{0\le s\le t}|x(s)|.

Let \beta be chosen so that \beta[1 + (DM/\alpha)] = m < 1. Let K_2 > \max_{0\le t\le t_0}|\phi(t)| and K_1 + mK_2 < K_2. If |x(t)| is not bounded, then there is a first t_1 > t_0 with |x(t_1)| = K_2. Then

K_2 = |x(t_1)| \le K_1 + mK_2 < K_2,

a contradiction. This completes the proof.

Exercise 2.6.1 Let a > 0 and

x' = -\int_0^t a(t-s+1)^{-3}x(s)\,ds + F(t).

Work through the entire sequence of steps from (2.6.10) to (2.6.15). Then state Theorem 2.6.3 for this equation, let F(t) = \sin t and \phi(t) = 1 on [0,t_0], and follow the proof of Theorem 2.6.3 to find M, K_1, D, \alpha, K_2, and \beta.
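As a partial aid to Exercise 2.6.1 (the closed forms below are my own computations, not worked in the text): for C(t,s) = -a(t-s+1)^{-3} one gets G(t,s) = (a/2)(t-s+1)^{-2} and Q(t) = A(t) - G(t,t) = -a/2, a negative constant, so (2.6.12) holds trivially. A numerical spot check:

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
for t, s in [(1.0, 0.0), (3.0, 1.0), (10.0, 2.5)]:
    # G(t,s) = -int_t^oo C(u,s) du with C(u,s) = -a*(u-s+1)**-3
    g_num, _ = quad(lambda u: a * (u - s + 1.0) ** -3, t, np.inf)
    assert abs(g_num - 0.5 * a * (t - s + 1.0) ** -2) < 1e-8

# int_0^t |G(t,s)| ds = (a/2)*(1 - 1/(t+1)) <= a/2, so beta = a/2 works
t = 5.0
g_int, _ = quad(lambda s: 0.5 * a * (t - s + 1.0) ** -2, 0.0, t)
assert g_int <= 0.5 * a + 1e-12
```

The remaining constants in the proof then follow from M = 1 and \alpha = a/2 for the scalar Q.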
Exercise 2.6.2 Interchange the order of integration in the last term of (2.6.15), assume |G(t,s)| \le Le^{-\gamma(t-s)} for L and \gamma positive, and use Gronwall's inequality to bound |x(t)| under appropriate restrictions on the constants.
THEOREM 2.6.4 In (2.6.9) let F(t) = 0, |Q(t)| \le D on [0,\infty), and \sup_{0\le t<\infty}\int_0^t |G(t,s)|\,ds \le \beta. If \beta is sufficiently small, then the zero solution is uniformly stable.

PROOF Let \varepsilon > 0 be given. We wish to find \delta > 0 such that

t_0 \ge 0,   |\phi(t)| < \delta on [0,t_0],   and   t \ge t_0

imply |x(t,\phi)| < \varepsilon. Let \delta < \varepsilon with \delta yet to be determined. If |\phi(t)| < \delta on [0,t_0], then from (2.6.15) (with F = 0),

|x(t)| \le M(1+\beta)\delta + \beta[1 + (DM/\alpha)]\sup_{0\le s\le t}|x(s)|.

First, pick \beta so that \beta[1 + (DM/\alpha)] \le \tfrac12. Then pick \delta so that M(1+\beta)\delta + \tfrac12\varepsilon < \varepsilon. If |\phi(t)| < \delta on [0,t_0] and if there is a first t_1 > t_0 with |x(t_1)| = \varepsilon, we have

\varepsilon = |x(t_1)| \le M(1+\beta)\delta + \tfrac12\varepsilon < \varepsilon,

a contradiction. Thus, the zero solution is uniformly stable. The proof is complete.

Naturally, one believes that the conditions of Theorem 2.6.3 imply that the unforced equation (2.6.9) is uniformly asymptotically stable. We would expect to give a proof parallel to that of Perron showing that the resolvent satisfies \sup_{0\le t<\infty}\int_0^t |R(t,s)|\,ds < \infty.
THEOREM 2.6.5 Let the conditions of Theorem 2.6.3 hold and let F = 0. Suppose also that A is constant and C(t,s) = D(t-s). If \sup_{0\le t<\infty}\int_0^t |G(t-s)|\,ds \le \beta, then for \beta sufficiently small the zero solution of (2.6.9) is uniformly asymptotically stable.

PROOF By Theorem 2.6.3 all solutions of (2.6.9) are bounded for bounded F. If Z'(t) = AZ(t) + \int_0^t D(t-s)Z(s)\,ds with Z(0) = I, then by the variation of parameters formula a solution x(t) of (2.6.9) on [0,\infty) is written as

x(t) = Z(t)x(0) + \int_0^t Z(t-s)F(s)\,ds.

For x(0) = 0 this is

x(t) = \int_0^t Z(t-s)F(s)\,ds,

which is bounded for every bounded F. One may repeat the proof of Perron's theorem [Hale (1969, p. 152)] for ordinary differential equations to conclude that

\int_0^\infty |Z(t)|\,dt < \infty.

By Theorem 2.6.1 the zero solution is uniformly asymptotically stable. The proof is complete.

Remark Theorems 2.6.3-2.6.5 are found in Burton (1982a).

We return now to the n-dimensional system

x' = Ax + \int_0^t D(t-s)x(s)\,ds,    (2.6.1)

with A constant and D continuous. Our final result of this section is a set of equivalences for systems similar to Theorem 2.6.2 for scalar equations. These two results may be found in Burton and Mahfoud (1982a), together with examples showing a certain amount of sharpness. Let Z(t) be the n \times n matrix satisfying

Z'(t) = AZ(t) + \int_0^t D(t-s)Z(s)\,ds,   Z(0) = I.    (2.6.16)
THEOREM 2.6.6 Suppose there is a constant M > 0 such that for 0 \le t_0 < \infty and 0 \le t < \infty we have

\int_0^t \int_0^{t_0} |D(u+v)|\,dv\,du \le M.    (2.6.17)

Then the following statements are equivalent.

(a) Z(t) \to 0 as t \to \infty.
(b) All solutions x(t,t_0,\phi) of (2.6.1) tend to zero as t \to \infty.
(c) The zero solution of (2.6.1) is uniformly asymptotically stable.
(d) Z(t) is in L^1[0,\infty) and Z(t) is bounded.
(e) Every solution x(t,0,x_0) of

x' = Ax + \int_0^t D(t-s)x(s)\,ds + F(t)    (2.6.18)

on [0,\infty) is bounded for every bounded and continuous F\colon [0,\infty) \to R^n.
(f) The zero solution of (2.6.1) is asymptotically stable.

Furthermore, the following is a second set of equivalences:

(g) Z(t) is bounded.
(h) All solutions x(t,t_0,\phi) of (2.6.1) are bounded.
(i) The zero solution of (2.6.1) is uniformly stable.
(j) The zero solution of (2.6.1) is stable.
PROOF Let (a) hold. Then a solution x(t,t_0,\phi) of (2.6.1) may be considered as a solution of

x' = Ax + \int_0^{t_0} D(t-s)\phi(s)\,ds + \int_{t_0}^t D(t-s)x(s)\,ds

for t \ge t_0, with the second term on the right treated as a forcing term. If we translate the equation by y(t) = x(t+t_0), we obtain

y'(t) = Ay(t) + \int_0^t D(t-s)y(s)\,ds + \int_0^{t_0} D(t+t_0-s)\phi(s)\,ds.

We may now apply the variation of parameters formula and write

y(t) = Z(t)\phi(t_0) + \int_0^t Z(t-u)\int_0^{t_0} D(u+t_0-s)\phi(s)\,ds\,du.

The substitution s = t_0 - v yields

y(t) = Z(t)\phi(t_0) + \int_0^t Z(t-u)\int_0^{t_0} D(u+v)\phi(t_0-v)\,dv\,du.

Because |\phi(t)| \le K, K > 0, on [0,t_0], we have

|y(t)| \le K|Z(t)| + K\int_0^t |Z(t-u)|\int_0^{t_0} |D(u+v)|\,dv\,du.

The last term is the convolution of an L^1[0,\infty) function (\int_0^{t_0} |D(u+v)|\,dv) with a function tending to zero (Z(t)), and so it tends to zero. Thus, (a) implies (b).

Suppose that (b) holds. Then, in particular, all solutions of the form x(t,0,x_0) tend to zero, which implies that Z(t) \to 0 as t \to \infty. Now

\int_0^t \int_0^{t_0} |D(u+v)|\,dv\,du \le M

uniformly in t_0. Thus the integral

\int_0^t |Z(t-u)|\int_0^{t_0} |D(u+v)|\,dv\,du

tends to zero uniformly in t_0, and hence x(t,t_0,\phi) tends to zero uniformly in t_0 for bounded \phi. Thus, (b) implies (c).

Let (c) hold. Then Miller's result implies that Z(t) is L^1[0,\infty). Also, the uniform asymptotic stability implies that Z(t) is bounded. Hence, (d) holds.

Suppose (d) is satisfied. Then solutions x(t,0,x_0) of (2.6.18) on [0,\infty) are expressed as

x(t) = Z(t)x(0) + \int_0^t Z(t-s)F(s)\,ds.

Because Z(t) is L^1[0,\infty) and bounded and because F is bounded, then x(t) is bounded. Hence, (e) holds.

Suppose (e) is satisfied. Then the argument in the proof of Perron's theorem [see Hale (1969, p. 152)] yields Z(t) being L^1[0,\infty). This, in turn, implies uniform asymptotic stability. Of course, uniform asymptotic stability implies asymptotic stability, so (e) implies (f). Certainly, (f) implies (a). This completes the proof of the first set of equivalences.

Let (g) hold. The variation of parameters formula implies that x(t,\phi) is bounded. Thus, (h) holds. Suppose (h) is satisfied. Then |Z(t)| \le P and \int_0^t \int_0^{t_0} |D(u+v)|\,dv\,du \le M imply that whenever |\phi(t)| < \delta on [0,t_0] we have

|x(t+t_0,t_0,\phi)| \le P|\phi(t_0)| + \delta\int_0^t |Z(t-u)|\int_0^{t_0} |D(u+v)|\,dv\,du \le P\delta + \delta PM < \varepsilon,

provided that \delta < \varepsilon/(P + PM). Hence, x = 0 is uniformly stable. Thus, (h) implies (i). Certainly, (i) implies (j). Finally, if x = 0 is stable, then Z(t) is bounded, so (j) implies (g). This completes the proof of the theorem.
2.7. Reducible Equations Revisited

In Section 1.5 we dealt with Volterra equations reducible to ordinary differential equations. That discussion led us to a large class of solvable equations. But that is only a small part of a general theory of equations that can be reduced to Volterra equations with L^1[0,\infty) kernels. Though all this can be done for vector equations, it is convenient to consider the scalar equations

x(t) = f(t) + \int_0^t C(t-s)x(s)\,ds    (2.7.1)

or

x' = Ax + \int_0^t C(t-s)x(s)\,ds + f(t)    (2.7.2)
in which f and C have n continuous derivatives on [0,\infty) and A is constant.

DEFINITION 2.7.1 Equation (2.7.1) or (2.7.2) is said to be

(a) reducible if C(t) is a solution of a linear nth-order ordinary differential equation with constant coefficients

L(y) = a_0 y^{(n)} + a_1 y^{(n-1)} + \cdots + a_n y = F(t)    (2.7.3)

with F continuous on [0,\infty) and

\int_0^\infty |F(t)|\,dt < \infty;    (2.7.4)

(b) V-reducible if it is reducible and if

\int_0^t \int_t^\infty |F(u-s)|\,du\,ds exists on [0,\infty);    (2.7.5)

(c) t-reducible if it is reducible and if

\int_0^\infty t|F(t)|\,dt < \infty;    (2.7.6)

(d) uniformly reducible if it is reducible and if there exists M > 0 with

\int_0^t \int_0^{t_0} |F(u+v)|\,dv\,du \le M    (2.7.7)

for 0 \le t < \infty and 0 \le t_0 < \infty;

(e) completely reducible if reducible and

F(t) = 0.    (2.7.8)
We have already discussed (e) in Section 1.5. The vast majority of stability results for (2.7.2) concern one of the reducible forms. In most cases the results are stated for n = 0, so that one is directly assuming at least one of (2.7.4)-(2.7.7) with F replaced by C, so that (2.7.3) is just L(y) = y = C(t). Of course, when (2.7.1) or (2.7.2) is reducible, then we operate on it using (2.7.3) to obtain a higher-order integro-differential equation with F as the
new kernel. Thus, (2.7.4) is the basic assumption for Miller's results on uniform asymptotic stability (U.A.S.); (2.7.5) was used in Theorem 2.5.1 and elsewhere; (2.7.6) is a basic (but unstated) assumption of Brauer (1978) in deriving certain results on U.A.S.; and (2.7.7) was just used in Theorem 2.6.6. We now give examples that will be of interest in later chapters.

Example 2.7.1 Let A and a be constants and suppose \int_0^\infty |C'(v)|\,dv < \infty. If we differentiate the scalar equation

x' = Ax + \int_0^t [a + C(t-s)]x(s)\,ds,    (2.7.9)

we obtain

x'' = Ax' + [a + C(0)]x + \int_0^t C'(t-s)x(s)\,ds,    (2.7.10)

which we may write as

x' = y,
y' = [a + C(0)]x + Ay + \int_0^t C'(t-s)x(s)\,ds,

or in matrix form as

\begin{pmatrix} x \\ y \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ a+C(0) & A \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \int_0^t \begin{pmatrix} 0 & 0 \\ C'(t-s) & 0 \end{pmatrix}\begin{pmatrix} x(s) \\ y(s) \end{pmatrix}\,ds,

which we finally express as

X' = BX + \int_0^t D(t-s)X(s)\,ds,    (2.7.11)

in which D is in L^1[0,\infty) and (2.7.5) is satisfied, so that (2.7.9) is V-reducible. Now, it is possible to investigate (2.7.11) by means of Theorem 2.5.1 (and others), because (2.7.9) is not covered by Theorem 2.5.1.

Let us look more closely at (2.7.10). The integral is viewed as a small perturbation of

x'' - Ax' - [a + C(0)]x = 0    (2.7.12)

because C'(t) is in L^1[0,\infty). Also, -A is the coefficient of the "damping," whereas -[a + C(0)] is the coefficient of the "restoring force." From well-established theory of ordinary differential equations we expect (2.7.10) to be stable if -A > 0, -[a + C(0)] > 0, and D(t) is small.

Example 2.7.2 Consider the scalar equation
x' = Ax + \int_0^t b\ln(t-s+a)\,x(s)\,ds    (2.7.13)

with a > 1, A < 0, and b < 0. Differentiate and obtain

x'' = Ax' + b(\ln a)x + \int_0^t b(t-s+a)^{-1}x(s)\,ds

and

x''' = Ax'' + b(\ln a)x' + (b/a)x - \int_0^t b(t-s+a)^{-2}x(s)\,ds.

Now the kernel is L^1[0,\infty), so we express it as a system

x' = y,
y' = z,
z' = (b/a)x + b(\ln a)y + Az - \int_0^t b(t-s+a)^{-2}x(s)\,ds,

which may be written as

X' = BX + \int_0^t D(t-s)X(s)\,ds

with D in L^1[0,\infty). By the Routh-Hurwitz criterion, the characteristic roots of B will have negative real parts if |aA\ln a| > 1. We expect stability if b is small enough.

Exercise 2.7.1 Consider the scalar equation
x' = Ax + \int_0^t b[\cos(t-s)](t-s+a)^{-1}x(s)\,ds.    (2.7.14)

Can (2.7.14) be reduced to an integro-differential equation with L^1 kernel?
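The Routh-Hurwitz condition in Example 2.7.2 is easy to test numerically: the matrix B there has characteristic polynomial \lambda^3 - A\lambda^2 - b(\ln a)\lambda - b/a, and the criterion |aA\ln a| > 1 (together with a > 1, A < 0, b < 0) puts all roots in the left half plane. The sample parameter values below are my own illustrative choices:

```python
import numpy as np

def char_roots(A, b, a):
    # roots of lambda^3 - A*lambda^2 - b*ln(a)*lambda - b/a,
    # the characteristic polynomial of B in Example 2.7.2
    return np.roots([1.0, -A, -b * np.log(a), -b / a])

stable = char_roots(A=-1.0, b=-0.1, a=np.e)     # |a*A*ln a| = e > 1
unstable = char_roots(A=-0.2, b=-0.1, a=np.e)   # |a*A*ln a| = 0.2*e < 1
assert max(r.real for r in stable) < 0.0
assert max(r.real for r in unstable) > 0.0
```

Note that the Routh-Hurwitz product condition here does not involve b, which is why the text's criterion depends only on a and A.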
2.8. The Resolvent

We briefly mentioned the resolvent in Section 2.3. It is used to obtain a variation of parameters formula. We noted, in the convolution case, that it was quite effective for dealing with perturbations because it employs the solutions of the unforced equations, about which we frequently know a great deal. The nonconvolution case, however, presents many new difficulties, and it is our view that perturbations are better handled by other methods, particularly in the case of integro-differential equations. This is a view not shared by certain other investigators, and it is a view that may change as new results are discovered.

Nevertheless, it seems well worthwhile to give a brief sketch of the resolvent for both integral and integro-differential equations. As we shall not use these results, we choose to prove only selected ones and to leave the rest to the references. In all cases we proceed formally, with the sometimes tacit assumption that certain equations do have well-behaved solutions.

Given the integral equation

x(t) = f(t) + \int_0^t C(t,s)x(s)\,ds    (2.8.1)

with f\colon [0,\alpha] \to R^n being continuous and C continuous for 0 \le s \le t \le \alpha, we define the formal resolvent equation as

R(t,s) = -C(t,s) + \int_s^t R(t,u)C(u,s)\,du.    (2.8.2)

Assuming that a solution R(t,s) exists as a continuous function for 0 \le s \le t \le \alpha, we note that x(t) may be found with the aid of R(t,s) to be

x(t) = f(t) - \int_0^t R(t,u)f(u)\,du,    (2.8.3)
a variation of parameters formula. To verify (2.8.3), left multiply (2.8.1) by R(t,u) and integrate from 0 to t:

\int_0^t R(t,u)x(u)\,du - \int_0^t R(t,u)f(u)\,du = \int_0^t R(t,u)\int_0^u C(u,s)x(s)\,ds\,du
= \int_0^t \int_s^t R(t,u)C(u,s)\,du\,x(s)\,ds
= \int_0^t [R(t,s) + C(t,s)]x(s)\,ds

by (2.8.2). Thus

-\int_0^t R(t,u)f(u)\,du = \int_0^t C(t,s)x(s)\,ds,

which, together with (2.8.1), yields

x(t) = f(t) - \int_0^t R(t,u)f(u)\,du,

as required.

Miller (1971a, p. 200) shows that

\int_s^t R(t,u)C(u,s)\,du = \int_s^t C(t,u)R(u,s)\,du,

so that (2.8.2) may be written as

R(t,s) = -C(t,s) + \int_s^t C(t,u)R(u,s)\,du,    (2.8.4)

and if we replace t by t + s in (2.8.4), we have

R(t+s,s) = -C(t+s,s) + \int_s^{t+s} C(t+s,u)R(u,s)\,du

or

R(t+s,s) = -C(t+s,s) + \int_0^t C(t+s,u+s)R(u+s,s)\,du.    (2.8.5)

In this form s is simply a parameter, and with L(t) = R(t+s,s) and \Phi(t) = C(t+s,s) we may write (2.8.5) as

L(t) = -\Phi(t) + \int_0^t C(t+s,u+s)L(u)\,du,    (2.8.6)

and the proof of existence and uniqueness may be applied directly to it.

Equation (2.8.2) is conceptually much more complicated than (2.8.1). Thus, one is often inclined to believe that more progress can be made by attacking (2.8.1) directly without going through a variation of parameters argument. We have already indicated that, in the case of an integro-differential equation, one may use differential inequalities and Liapunov functionals to bypass the resolvent. Nevertheless, much has been discovered about (2.8.2), both theoretically and technically. The interested reader is referred to Miller [(1968), (1971a, Chapter IV)], Nohel (1973), Becker (1979), and Corduneanu (1971).

In particular, when (2.8.1) is perturbed with a nonlinear term, then (2.8.4) can be used to rewrite the equation into a much more manageable form. Recall that the ordinary differential equation

x' = Ax + f(t,x)

may be expressed as

x(t) = e^{At}x(0) + \int_0^t e^{A(t-s)}f(s,x(s))\,ds.

Similarly, the solution of

x(t) = \int_0^t C(t,s)x(s)\,ds + h(t,x(\cdot)),

where h is an appropriate functional, may be expressed with the aid of (2.8.3) as

x(t) = h(t,x(\cdot)) - \int_0^t R(t,u)h(u,x(\cdot))\,du.
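For the constant kernel C(t,s) = c (an illustrative choice, not from the text) the resolvent can be written down explicitly as R(t,s) = -c\,e^{c(t-s)}, and both the resolvent equation (2.8.2) and the variation of parameters formula (2.8.3) can be verified numerically:

```python
import numpy as np
from scipy.integrate import quad

c = 0.5
R = lambda t, s: -c * np.exp(c * (t - s))   # claimed resolvent for C(t,s) = c

# check the resolvent equation (2.8.2): R(t,s) = -c + int_s^t R(t,u)*c du
t, s = 2.0, 0.5
integral, _ = quad(lambda u: R(t, u) * c, s, t)
assert abs(R(t, s) - (-c + integral)) < 1e-8

# check (2.8.3) with f = 1: x(t) = 1 - int_0^t R(t,u) du should equal e^{c t},
# the known solution of x(t) = 1 + c*int_0^t x(s) ds
integral, _ = quad(lambda u: R(t, u), 0.0, t)
assert abs((1.0 - integral) - np.exp(c * t)) < 1e-8
```

The same pattern, with a numerically computed R, extends to nonconstant kernels.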
For special functionals this may be simplified, as may be seen in Miller (1971a, Chapter IV).

Whereas we have seen that integro-differential equations may be expressed as integral equations, there are certain advantages to considering resolvents of integro-differential equations directly. We consider

x'(t) = f(t) + A(t)x(t) + \int_0^t B(t,s)x(s)\,ds,   x(0) = x_0,    (2.8.7)

in which f\colon [0,\alpha] \to R^n is continuous, A is an n \times n matrix continuous on [0,\alpha], and B is an n \times n matrix continuous for 0 \le s \le t \le \alpha. Then we seek a solution R(t,s) of the formal resolvent (or adjoint) equation

R_s(t,s) = -R(t,s)A(s) - \int_s^t R(t,u)B(u,s)\,du,   R(t,t) = I,    (2.8.8)

on the interval 0 \le s \le t. (Here R_s = \partial R/\partial s.) A proof of the existence of R may be found in Grossman and Miller (1968). Given R(t,s), the solution of the initial-value problem (2.8.7) is given by

x(t) = R(t,0)x_0 + \int_0^t R(t,s)f(s)\,ds,    (2.8.9)

a variation of parameters formula. Assuming the existence of R(t,s), (2.8.9) may be verified as follows. Let x(t) be the solution of (2.8.7) and integrate by parts:
\int_0^t [R(t,s)x'(s) + R_s(t,s)x(s)]\,ds = R(t,t)x(t) - R(t,0)x_0 = x(t) - R(t,0)x_0,

as R(t,t) = I. Now, because x(t) satisfies (2.8.7), we write this as

x(t) - R(t,0)x_0 = \int_0^t \Big\{R(t,s)\Big[f(s) + A(s)x(s) + \int_0^s B(s,u)x(u)\,du\Big] + R_s(t,s)x(s)\Big\}\,ds.

Changing the order of integration we have

\int_0^t \int_0^s R(t,s)B(s,u)x(u)\,du\,ds = \int_0^t \int_u^t R(t,s)B(s,u)x(u)\,ds\,du = \int_0^t \int_s^t R(t,u)B(u,s)\,du\,x(s)\,ds.

We then have

x(t) - R(t,0)x_0 = \int_0^t R(t,s)f(s)\,ds + \int_0^t \Big[R(t,s)A(s) + R_s(t,s) + \int_s^t R(t,u)B(u,s)\,du\Big]x(s)\,ds.

The second integral on the right is zero according to (2.8.8), so (2.8.9) is verified.
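As a sanity check on (2.8.9) in the simplest case (B = 0, scalar, with the illustrative data A = 2, f(t) = \sin t, x_0 = 1, all my choices): (2.8.8) reduces to R_s = -RA, R(t,t) = I, giving R(t,s) = e^{A(t-s)}, and (2.8.9) becomes the classical variation of parameters formula, which can be verified symbolically:

```python
import sympy as sp

t, s = sp.symbols('t s')
A, x0 = 2, 1
f = sp.sin(s)
# with B = 0, (2.8.8) gives R(t,s) = exp(A*(t-s)), and R(t,t) = 1
R = sp.exp(A * (t - s))
# formula (2.8.9): x(t) = R(t,0)*x0 + int_0^t R(t,s) f(s) ds
x = R.subs(s, 0) * x0 + sp.integrate(R * f, (s, 0, t))
# confirm x solves x' = f(t) + A*x with x(0) = x0
assert sp.simplify(sp.diff(x, t) - sp.sin(t) - A * x) == 0
assert x.subs(t, 0) == x0
```

The nonzero-B case requires solving (2.8.8) numerically, but the verification pattern is the same.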
If (2.8.7) is perturbed by a nonlinear functional, then (2.8.9) may simplify the equation. Proceeding formally again, if R(t,s) satisfies (2.8.8), then the solution of

x'(t) = f(t) + A(t)x(t) + \int_0^t B(t,s)x(s)\,ds + h(t,x(\cdot)),   x(0) = x_0,    (2.8.10)

for an appropriate functional h, may be expressed by (2.8.9) as

x(t) = R(t,0)x_0 + \int_0^t R(t,s)[f(s) + h(s,x(\cdot))]\,ds.    (2.8.11)

Such results are considered in detail by Grossman and Miller (1970).