
CHAPTER 2

NECESSARY CONDITIONS FOR AN EXTREMUM IN THE CLASSICAL PROBLEMS OF THE CALCULUS OF VARIATIONS AND OPTIMAL CONTROL

In §§ 2.3 and 2.5, we present derivations of the Euler-Lagrange equations for the Lagrange problem of the classical calculus of variations and of the Pontrjagin maximum principle for optimal control problems. In these derivations, we make use of Theorems 1 and 3 of Chapter 1. Section 2.2 and Subsection 2.4.2 are devoted to an elementary derivation of the most important necessary conditions for a minimum in the simplest classes of problems of the calculus of variations and optimal control. This derivation is independent of Chapter 1.

2.1. Statements of the problems

2.1.1. Functionals, constraints, and boundary conditions

We shall consider only one-dimensional problems, wherein the independent variable t, sometimes referred to as time, belongs to an interval [t₀, t₁] with −∞ ≤ t₀ < t₁ ≤ ∞. As a rule, there are two groups of variables in the problems one encounters, namely, x = (x¹, …, xⁿ) and u = (u¹, …, uʳ). The variables x are called phase variables, and the variables u are called controls. There are three elements in problems related to the classical calculus of variations or optimal control, namely a functional, constraints imposed on the phase coordinates and controls, and boundary conditions imposed on the end points of the time interval considered in a given problem. In practice, it is not always feasible to make a distinction between the constraints and boundary conditions, but in most cases this distinction looks natural and turns out to be convenient.


One encounters three types of functionals. Integral functionals have the following form:

    𝒥₁(x(·), u(·)) = ∫_{t₀}^{t₁} f(t, x(t), ẋ(t), u(t)) dt,    (1)

where f: R × Rⁿ × Rⁿ × Rʳ → R; the function f is called the integrand. Functionals which depend on the terminal values of phase coordinates, i.e., functionals of the form

    𝒥₂(x(·)) = ψ(t₀, x(t₀), t₁, x(t₁)),    (2)

where ψ: R × Rⁿ × R × Rⁿ → R, are called endpoint functionals. Finally, there are functionals of mixed form,

    𝒥(x(·), u(·)) = 𝒥₁(x(·), u(·)) + 𝒥₂(x(·)),    (3)

where 𝒥₁ is the integral term and 𝒥₂ is the endpoint term.

We shall encounter constraints of two types. These are either functional relations, expressed by equalities and inequalities,

    G₁(t, x(t), ẋ(t), u(t)) = 0,  G₂(t, x(t), ẋ(t), u(t)) ≤ 0,    (4)

where Gᵢ: R × Rⁿ × Rⁿ × Rʳ → R^{kᵢ}, i = 1, 2, or non-functional relations, e.g.,

    u(t) ∈ U ⊂ Rʳ,  ∀t ∈ Δ ⊂ [t₀, t₁].    (5)

Functional constraints of the form (4) which do not depend on derivatives and controls, i.e., the relations

    g₁(t, x(t)) = 0,  g₂(t, x(t)) ≤ 0,    (6)

will be called phase constraints. Constraints of the form

    ẋ(t) = φ(t, x(t), u(t)),    (7)

where φ: R × Rⁿ × Rʳ → Rⁿ, are called constraints in solved form. The equation (7) describes many controlled systems; hence the terms phase coordinates and controls. If a control is given, then Eq. (7) becomes an ordinary differential equation with respect to x. Any solution of this equation which corresponds to a control u(·) is called a phase trajectory, and the pair (x(·), u(·)) connected by Eq. (7) is called a controlled process.
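The remark above can be sketched in code: once a control u(·) is fixed, (7) is an ordinary differential equation whose solution is the phase trajectory. The right-hand side φ(t, x, u) = −x + u and the constant control below are hypothetical illustrative choices, not taken from the text.

```python
# Sketch: for a fixed control u(.), the solved-form constraint (7)
# becomes the ODE x' = phi(t, x, u(t)); its solution is the phase
# trajectory. phi(t, x, u) = -x + u is a hypothetical example.
def phase_trajectory_end(phi, u, x0, t0, t1, n=20000):
    """Integrate x' = phi(t, x, u(t)) by Euler's method; return x(t1)."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x += h * phi(t, x, u(t))
        t += h
    return x

# For phi = -x + u with u(t) = 1 and x(0) = 0, the exact trajectory is
# x(t) = 1 - exp(-t).
x1 = phase_trajectory_end(lambda t, x, u: -x + u, lambda t: 1.0, 0.0, 0.0, 1.0)
```

The pair (x(·), u(·)) produced this way is a controlled process in the sense just defined.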

Boundary conditions are given by separating in the space R × Rⁿ × R × Rⁿ a set Γ to which the end points of a trajectory, i.e., the point (t₀, x(t₀), t₁, x(t₁)), are to belong. One often encounters the following boundary conditions:
- with fixed end points, wherein the values of a trajectory are fixed on both ends of the interval [t₀, t₁] (it is assumed here that the interval itself is also fixed), x(t₀) = x₀, x(t₁) = x₁;
- with the right or left end point free, wherein the corresponding end point of the interval [t₀, t₁] is assumed to be fixed, but there are no conditions imposed on a phase trajectory at this point; and
- periodic, wherein the interval [t₀, t₁] is fixed and a phase trajectory attains the same values on both end points, x(t₀) = x(t₁).

2.1.2. Problems of the classical calculus of variations and optimal control

The following general statement:

    𝒥(x(·), u(·)) → inf (sup);    (8)

    G₁(t, x(t), ẋ(t), u(t)) = 0,  G₂(t, x(t), ẋ(t), u(t)) ≤ 0;    (9)

    u(t) ∈ U(t);    (10)

    (t₀, x(t₀), t₁, x(t₁)) ∈ Γ    (11)

covers most problems of optimal control and the calculus of variations. (It is not assumed that the interval [t₀, t₁] is fixed. If this interval is fixed, then the corresponding problem is called a problem with fixed time.) If the functional (8) is integral, then the problem (8)-(11) is called the Lagrange problem; if the functional is endpoint, then the problem is called the Mayer problem; and, finally, if the functional is mixed, then the problem is called the Bolza problem. To a large extent, all three statements of the problem are equivalent. For example, if we are given an integral functional, then, introducing a new coordinate x^{n+1} and supplementing the system (9) by the equation

    ẋ^{n+1} − f = 0

with boundary condition x^{n+1}(t₀) = 0, we reduce the minimization problem of the functional 𝒥₁ to the minimization problem for the endpoint functional

    𝒥₂ = x^{n+1}(t₁).
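The reduction above can be checked numerically: adjoin a coordinate z = x^{n+1} with ż = f and z(t₀) = 0, and the integral functional reappears as the endpoint value z(t₁). The concrete system ẋ = x with f = x is a hypothetical example of mine, chosen so that the exact answer is e − 1.

```python
# Sketch of the Lagrange-to-Mayer reduction: integrate the augmented
# system (x, z)' = (xdot, f), z(t0) = 0; then z(t1) equals the integral
# of f along the trajectory. The system x' = x, x(0) = 1, f = x is a
# hypothetical example (integral of f over [0, 1] is e - 1).
import math

def endpoint_of_augmented(f, xdot, x0, t0, t1, n=40000):
    """Euler's method for the augmented system (x, z)' = (xdot, f)."""
    h = (t1 - t0) / n
    t, x, z = t0, x0, 0.0
    for _ in range(n):
        dx, dz = xdot(t, x), f(t, x)
        x, z, t = x + h * dx, z + h * dz, t + h
    return z    # z(t1) reproduces the integral functional

J = endpoint_of_augmented(lambda t, x: x, lambda t, x: x, 1.0, 0.0, 1.0)
```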

Conversely, if we are to minimize an endpoint functional 𝒥₂ = ψ(t₁, x(t₁)) with, let us say, fixed values of t₀ and x(t₀) (when we can set ψ(t₀, x(t₀)) = 0 without loss of generality), then, assuming that the function ψ is differentiable, we can set

    f(t, x, ẋ) = ψₜ(t, x) + ψₓ(t, x)·ẋ,

and we obtain

    𝒥₂ = ψ(t₁, x(t₁)) − ψ(t₀, x(t₀)) = ∫_{t₀}^{t₁} f(t, x(t), ẋ(t)) dt.

Properties that are peculiar to problems of the classical calculus of variations consist of the following. First, in problems of the classical calculus of variations all functions that enter the statement of the problem are assumed to be smooth, at least continuously differentiable. On the other hand, there are no non-functional constraints of the form (10). (These facts allow us to include problems of the calculus of variations in the category of smooth problems which we discussed in Subsection 1.1.1.) On the contrary, in optimal control problems non-functional constraints are of essential importance. The set U(t) that defines the constraint (10) may itself be of most diverse nature, e.g., it can be a discrete set. For this reason, it is not natural to consider smooth, or even only continuous, controls in optimal control problems. The same can be said about the assumptions concerning the smoothness of the mappings G₁ and G₂ of (9), etc., with respect to the controls u. Thus, a standard assumption in problems of the calculus of variations is continuous differentiability with respect to all variables, and a standard assumption in optimal control problems is joint continuity in all variables and smoothness with respect to the variables t and x. Optimal control problems will be reduced to mixed problems (partly smooth and partly convex) which we discussed in Subsection 1.1.3. We present several examples of particular problems that fit the general scheme.


The following problem is called the simplest vector problem in the calculus of variations:

    𝒥(x(·)) = ∫_{t₀}^{t₁} L(t, x(t), ẋ(t)) dt → inf;  (x(t₀), x(t₁)) ∈ Γ.    (12)

It is assumed in (12) that the interval [t₀, t₁] is fixed, that the function L is defined and continuously differentiable on a domain of the space R × Rⁿ × Rⁿ, and that the set Γ, which defines the boundary conditions, is an arbitrary subset of the space Rⁿ × Rⁿ. If n = 1, then the problem (12) is briefly referred to as the simplest problem.

The letter L for the integrand of the simplest vector problem was chosen in honor of Lagrange. In so doing, one implicitly appeals to the language and notation of classical mechanics. At the basis of classical mechanics lies the principle of least action (or, as it is sometimes called, the stationary action principle, which is more precise). According to this principle, trajectories of a system of particles in a force field U are stationary points of the action functional

    ∫_{t₀}^{t₁} L dt.

Here, the integrand is the Lagrangian of the system, which is the difference between the kinetic and potential energies,

    L = T − U.

For this reason, the integrand L is sometimes called a Lagrangian even if the problem is not taken from classical mechanics. The expressions pᵢ = L_{ẋⁱ} and H = Σᵢ₌₁ⁿ pᵢẋⁱ − L are called in classical mechanics the momentum and the energy of the system. Subsequently, we shall sometimes employ these terms, suggested by classical mechanics.

The following problem is called the Lagrange problem with constraints in solved form and equality and inequality phase constraints:

    𝒥(x(·), u(·)) = ∫_{t₀}^{t₁} f(t, x(t), u(t)) dt → inf;    (13)

    ẋ(t) = φ(t, x(t), u(t));    (14)

    g₁(t, x(t)) = 0,  g₂(t, x(t)) ≤ 0;    (15)

    h₀(t₀, x(t₀)) = 0,  h₁(t₁, x(t₁)) = 0;    (16)

    u(t) ∈ U.    (17)

Here, the integral functional does not depend on ẋ. The constraints are subdivided into solved ones, (14), and phase constraints, (15). The boundary conditions are described by the relations (16). (Not all boundary conditions encountered in applications can be given in this form. For example, periodic conditions cannot be so expressed. Nevertheless, the relations (16) describe a sufficiently broad class of boundary conditions.) In considering the Lagrange problem within the framework of the classical calculus of variations, we shall assume that the interval [t₀, t₁] is fixed, and that there is no constraint (17). The problem (13)-(17) is said to be autonomous if there is no explicit dependence on time in any of the functions and mappings which enter the statement of the problem. Problems with fixed time of the following form:

    𝒥(x(·), u(·)) = ∫_{t₀}^{t₁} ((a(t) | x(t)) + (b(t) | u(t))) dt → inf;

    ẋ = A(t)x + B(t)u;

    (gᵢ(t) | x(t)) ≤ αᵢ(t),  i = 1, …, m;

    (h_{kj} | x(t_k)) = β_{kj},  k = 0, 1,  j = 1, …, s_k;

    u ∈ U

will be called linear optimal control problems. Sometimes the requirement that the time be fixed is deleted from the definition of linear problems. These and more general optimal control problems will be investigated in § 9.3 by the methods of convex analysis.

2.1.3. Weak and strong extrema in problems of the classical calculus of variations

The problems formulated above still have some indefiniteness, since the class of admissible elements was not described. The Lagrange problem (13)-(16) with fixed time will be investigated within the framework of the classical calculus of variations in the Banach spaces C₁ⁿ([t₀, t₁]) × Cʳ([t₀, t₁]), where C₁ⁿ([t₀, t₁]) is the space of vector-valued continuously differentiable functions and Cʳ([t₀, t₁]) is the space of vector-valued continuous functions. (For brevity, we agree to denote the norm of the space C by ‖·‖₀; the norm of the space C₁ will sometimes be denoted by ‖·‖₁.) Investigations of the simplest problems are carried out in the Banach spaces C₁ⁿ([t₀, t₁]).

In the case of a Lagrange problem, a local minimum in the space C₁ⁿ × Cʳ (or in the space C₁ⁿ, in the case of the simplest problems) is called a weak minimum. In other words, a pair (x*(·), u*(·)) yields a weak local minimum to the functional 𝒥(x(·), u(·)) of the problem (13)-(16) if there exists a number ε > 0 such that the inequality

    𝒥(x(·), u(·)) ≥ 𝒥(x*(·), u*(·))

holds for any admissible pair (x(·), u(·)) ∈ C₁ⁿ × Cʳ such that

    ‖x(·) − x*(·)‖₁ < ε,  ‖u(·) − u*(·)‖₀ < ε.

Here, a pair is said to be admissible in the problem if it satisfies the constraints (14) and (15) and the boundary conditions (16). A weak minimum for the simplest problem (12) is defined in a similar way.

A local extremum in the topology of the space C (over x) is called a strong extremum. In other words, an admissible pair (x*(·), u*(·)) yields a strong local minimum for the functional 𝒥 in the problem (13)-(16) if there exists a number ε > 0 such that the inequality

    𝒥(x(·), u(·)) ≥ 𝒥(x*(·), u*(·))

holds for any admissible pair for which

    ‖x(·) − x*(·)‖₀ < ε.
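The difference between the two neighbourhoods can be made concrete. The family xₙ(t) = sin(nt)/n below is a hypothetical example of mine: it converges to zero in the C-norm ‖·‖₀ but stays at distance about 1 in the C₁-norm ‖·‖₁, so a strong neighbourhood of a curve contains comparison curves that no small weak neighbourhood contains.

```python
# Illustration (hypothetical example): x_n(t) = sin(n t)/n on [0, 1] is
# small in the C-norm ||.||_0 but not in the C^1-norm ||.||_1.
import math

def c_norms(n, m=20001):
    ts = [i / (m - 1) for i in range(m)]
    c0 = max(abs(math.sin(n * t) / n) for t in ts)        # sup |x_n|
    c1 = max(c0, max(abs(math.cos(n * t)) for t in ts))   # max(sup|x|, sup|x'|)
    return c0, c1

c0, c1 = c_norms(1000)   # c0 is about 1/1000, c1 is about 1
```

This is why a strong extremum is a more demanding property than a weak one: the strong definition admits far more comparison curves.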

A strong minimum for the simplest vector problem (12) is defined in a similar way. However, we shall interpret the term "strong extremum" in a slightly broader sense that is peculiar to this notion in optimal control problems. This will be discussed in the next subsection.

2.1.4. Admissible controls and controlled processes in optimal control problems. Optimal processes

We have already mentioned that the requirement that the controls be continuous is not natural in many cases. Often the statement of the


problem itself implies the necessity to consider a broader class of admissible controls. Sometimes the class of piecewise-continuous controls is taken. We shall usually consider as admissible arbitrary measurable bounded controls which assume values in the set U(t). This choice of admissible controls requires that we render more precise the notion of a controlled process. A pair (x(t), u(t)) is said to be a controlled process on the interval [t₀, t₁] if, on this interval, the function u(t) is an admissible control and the function x(t) is a vector-valued absolutely continuous function which satisfies the equation (14) almost everywhere,

    ẋ(t) = φ(t, x(t), u(t)).

The notion of an admissible controlled process includes the time interval over which this process is considered. Thus, a controlled process which is admissible in the problem (13)-(17) consists of a triple (x(t), u(t), [t₀, t₁]) such that the vector-valued functions x(t) and u(t) form a controlled process on the interval [t₀, t₁], and the phase variables x(t) satisfy the phase constraints (15) and the boundary conditions (16). We shall say that an admissible process (x*(t), u*(t), [t₀*, t₁*]) is optimal if there exists an ε > 0 such that the inequality

    𝒥(x(·), u(·)) ≥ 𝒥(x*(·), u*(·))

holds for any other admissible process (x(t), u(t), [t₀, t₁]) for which

    |t₀ − t₀*| < ε,  |t₁ − t₁*| < ε,  |x(t) − x*(t)| < ε  (∀t ∈ [t₀, t₁] ∩ [t₀*, t₁*]).

One also says in this situation that the process (x*(t), u*(t), [t₀*, t₁*]) yields a strong minimum for the problem (13)-(17). Thus (returning to problems in the classical calculus of variations), we give the following sense to the extended definition of a strong minimum. We illustrate it on a vector problem of the classical calculus of variations. We say that a vector-valued function x*(t) yields a strong minimum in the problem (12) if there exists an ε > 0 such that, for any function x(t) ∈ Wⁿ_{∞,1}([t₀, t₁]) which satisfies the boundary conditions and the inequality

    ‖x(·) − x*(·)‖₀ < ε,

the following inequality holds:

    𝒥(x(·)) ≥ 𝒥(x*(·)).
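The point about measurable controls can be illustrated directly. The piecewise-constant control below is a hypothetical example of mine: it is discontinuous, hence inadmissible in the classical smooth setting, yet it is measurable and bounded, and the trajectory of ẋ = u(t) it generates is absolutely continuous.

```python
# Illustration (hypothetical example): a discontinuous piecewise-constant
# control is admissible (measurable and bounded), and the trajectory of
# x' = u(t) it generates is absolutely continuous (here piecewise linear).
def u(t):
    return -1.0 if t < 0.5 else 1.0      # jump at t = 0.5

def x(t, n=20000):
    """x(t) = integral_0^t u(s) ds via a midpoint Riemann sum."""
    if t == 0:
        return 0.0
    h = t / n
    return sum(u((i + 0.5) * h) for i in range(n)) * h

x_half, x_one = x(0.5), x(1.0)    # exact values: -0.5 and 0.0
```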


2.2. Elementary derivation of necessary conditions for an extremum in the simplest problems of the classical calculus of variations

In this section, we give a derivation of the Euler, Weierstrass, Legendre, and Jacobi necessary conditions, making use of the most elementary means. Our arguments are based throughout on a direct application of the method of variations.

2.2.1. Elementary derivation of the Euler equation

We begin with the simplest problem with fixed end points,

    𝒥(x(·)) = ∫_{t₀}^{t₁} L(t, x(t), ẋ(t)) dt → inf;  x(t₀) = x₀,  x(t₁) = x₁.    (1)

We assume that the function L(t, x, y) is continuously differentiable on a domain U of the space R³. The problem (1) will be investigated for a weak extremum, i.e., in the space C₁([t₀, t₁]). The derivation of the Euler equation consists of three stages.

The first stage consists of the proof that the functional 𝒥 has a first variation (at any point x*(·) such that the points (t, x*(t), ẋ*(t)), t ∈ [t₀, t₁], belong to the domain U), and of obtaining a necessary condition in terms of the first variation. We consider the following function of one variable:

    φ(λ) = 𝒥(x*(·) + λx(·)) = ∫_{t₀}^{t₁} W(t, λ) dt = ∫_{t₀}^{t₁} L(t, x*(t) + λx(t), ẋ*(t) + λẋ(t)) dt,    (2)

which is generated by the variation x(t, λ) = x*(t) + λx(t) of the point x*(t) in the direction of the point x(t). Under our assumptions concerning L, x*(·), and x(·), the function W(t, λ) is differentiable with respect to λ. Moreover, the derivative ∂W/∂λ is continuous for sufficiently small λ, since

    ∂W/∂λ = L_x(t, x*(t) + λx(t), ẋ*(t) + λẋ(t)) x(t) + L_ẋ(t, x*(t) + λx(t), ẋ*(t) + λẋ(t)) ẋ(t).

Therefore, we can differentiate under the integral in (2), and

    φ′(0) = δ𝒥(x*(·), x(·)) = ∫_{t₀}^{t₁} (q(t)x(t) + p(t)ẋ(t)) dt,

where

    q(t) = L_x(t, x*(t), ẋ*(t)),  p(t) = L_ẋ(t, x*(t), ẋ*(t)).

Further, if the function x*(t) is being investigated for an extremum, then it is admissible. Therefore, for any function x(t) belonging to the subspace

    L₀ = {x(t) ∈ C₁([t₀, t₁]) | x(t₀) = x(t₁) = 0},

the function x*(t) + λx(t) passes through the same boundary points as the function x*(t). Let x(t) ∈ L₀. If x*(t) is a solution of the problem (1), then it follows that the function defined by (2) has a minimum at zero. As a result, we arrive at the following necessary condition for an extremum:

    φ′(0) = δ𝒥(x*(·), x(·)) = 0,  ∀x(·) ∈ L₀.    (3)
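Condition (3) is easy to check numerically on a concrete problem. The problem 𝒥(x) = ∫₀¹ ẋ² dt with x(0) = 0, x(1) = 1 and the variation sin(πt) below are hypothetical choices of mine; for the extremal x*(t) = t, a finite difference should find φ′(0) ≈ 0.

```python
# Numerical check of condition (3) on a hypothetical example: for
# J(x) = integral_0^1 xdot(t)^2 dt, x(0) = 0, x(1) = 1, the extremal is
# x*(t) = t, so phi(lam) = J(x* + lam*x) must have phi'(0) = 0 for any
# variation x vanishing at both end points.
import math

def J(xdot, m=20000):
    """Midpoint-rule value of integral_0^1 xdot(t)^2 dt."""
    h = 1.0 / m
    return sum(xdot((i + 0.5) * h) ** 2 for i in range(m)) * h

def phi(lam):
    # variation x(t) = sin(pi t), which vanishes at t = 0 and t = 1
    return J(lambda t: 1.0 + lam * math.pi * math.cos(math.pi * t))

dphi0 = (phi(1e-6) - phi(-1e-6)) / 2e-6   # central difference for phi'(0)
```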

The first stage of the derivation has been completed. The second stage consists of transforming the expression for the first variation on the space L₀ by an integration by parts. This is done in two ways. Following Lagrange, one integrates by parts the second term; and following DuBois-Reymond, one integrates by parts the first term. The transformation by the Lagrange method assumes an additional smoothness condition, namely, the assumption that the function p(t) = L_ẋ(t, x*(t), ẋ*(t)) is continuously differentiable. Under this additional assumption, we shall integrate by parts the second term in the expression for the first variation, assuming that x(·) ∈ L₀. We obtain

    δ𝒥(x*(·), x(·)) = ∫_{t₀}^{t₁} a(t)x(t) dt,    (4)

where

    a(t) = q(t) − ṗ(t) = (L_x − (d/dt)L_ẋ)|_{x*(t)}.

We shall now present the transformation of the first variation according to DuBois-Reymond. To this end, we integrate by parts the first term on the space L₀, and we obtain that the expression for the first variation has the following form:

    δ𝒥(x*(·), x(·)) = ∫_{t₀}^{t₁} b(t)ẋ(t) dt,    (5)

where

    b(t) = p(t) − ∫_{t₀}^{t} q(τ) dτ.

We pass to the third stage of the derivation of the Euler equation.

Lagrange lemma. Let a function a(t) be continuous on the interval [t₀, t₁]. Assume that, for any continuously differentiable function x(t) which vanishes at the end points of the interval [t₀, t₁], the following equality holds:

    ∫_{t₀}^{t₁} a(t)x(t) dt = 0.

Then a(t) ≡ 0.

Proof. Since the function a(t) is continuous, it is sufficient to verify that a(t) = 0 at interior points of the interval [t₀, t₁]. Assume that we have a(τ) ≠ 0 at some interior point τ. Without loss of generality, we can assume that a(τ) > 0. We choose an ε > 0 so small that, on the one hand, the interval Δ₀ = [τ − ε, τ + ε] lies entirely inside the interval [t₀, t₁], and, on the other hand, such that a(t) is larger than a positive number α on this interval. We now take any non-negative, but not identically zero, finite function from C₁([t₀, t₁]) with support in Δ₀. For example, we can take (Fig. 2)

[Fig. 2. The bump function x̄(t, τ, ε) on [t₀, t₁], supported on Δ₀ = [τ − ε, τ + ε].]

    x̄(t) = x̄(t, τ, ε) = (t − τ + ε)²(t − τ − ε)²  for t ∈ Δ₀,  and  x̄(t) = 0  for t ∉ Δ₀.

Applying the mean value theorem of the integral calculus, we obtain

    ∫_{t₀}^{t₁} a(t)x̄(t) dt = ∫_{Δ₀} a(t)x̄(t) dt ≥ α ∫_{Δ₀} x̄(t) dt > 0.
We have obtained a contradiction. The lemma has been proved We conclude now the derivation of the Euler equation according to the Lagrange method. We established at the first stage that, if x * ( t ) is a solution of the problem (l), then the equality (3) holds. At the second stage, we showed that the first variation can be represented in the form (4) on the subspace Lo (true, under an additional assumption). Combining these two facts with the Lagrange lemma, we arrive at the conclusion that, if x * ( t ) is a solution of the problem (l), then it is necessary that the following relation hold:

This relation is called the Euler equation of the problem (1) in the Lagrange form. We note that in the proof we made use of the variations x ( t , A ) = x * ( t )+ AZ(r, T, E ) of the extremal x * ( t )in the direction f ( t , T,E ) , shown on Fig. 2. In order to derive the same equation according to DuBois-Reymond, we shall prove the following lemma.

DuBois-Reymond's lemma. Let a function b(t) be continuous on the interval [t₀, t₁]. Assume that the following equality holds for any continuous function v(t) with mean value zero:

    ∫_{t₀}^{t₁} b(t)v(t) dt = 0.

Then b(t) ≡ b₀ = const.

We remind the reader that a function v(t) is said to have mean value zero if

    ∫_{t₀}^{t₁} v(t) dt = 0.

Proof. Assume that the conclusion of the lemma is false. Then there must exist two points τ₁ and τ₂ inside the interval [t₀, t₁] such that b(τ₁) ≠ b(τ₂); say τ₁ < τ₂ and b(τ₁) > b(τ₂). We choose an ε > 0 so small that the intervals

    Δ₁ = [τ₁ − ε, τ₁ + ε]  and  Δ₂ = [τ₂ − ε, τ₂ + ε]

do not intersect each other, lie inside the interval [t₀, t₁], and the following inequalities hold on them for some numbers β₁ > β₂:

    b(t) > β₁ for t ∈ Δ₁,  b(t) < β₂ for t ∈ Δ₂.

This, obviously, can be done. Consider now any continuous function v̄(t) which is zero outside of Δ₁ ∪ Δ₂, non-negative and not identically zero on Δ₁, and takes the opposite (mirror) values on Δ₂, so that v̄ has mean value zero. As an example of the required function, one can take

    v̄(t) = v̄(t, τ₁, τ₂, ε) = (t − τ₁ + ε)²(−t + τ₁ + ε)²  for t ∈ Δ₁;
    v̄(t) = −(t − τ₂ + ε)²(−t + τ₂ + ε)²  for t ∈ Δ₂;
    v̄(t) = 0  for t ∈ [t₀, t₁] \ (Δ₁ ∪ Δ₂).

Again, we obtain by the mean value theorem

    ∫_{t₀}^{t₁} b(t)v̄(t) dt = ∫_{Δ₁} b(t)v̄(t) dt + ∫_{Δ₂} b(t)v̄(t) dt ≥ (β₁ − β₂) ∫_{Δ₁} v̄(t) dt > 0.

The contradiction with the condition proves the lemma.

Comparing the relations (3) and (5) with DuBois-Reymond's lemma, we obtain that, if x*(t) is a solution of the problem (1), then the following relation must hold:

    −∫_{t₀}^{t} q(τ) dτ + p(t) = c₀,

or, in more detail,

    −∫_{t₀}^{t} L_x(τ, x*(τ), ẋ*(τ)) dτ + L_ẋ(t, x*(t), ẋ*(t)) = c₀.

This relation is called the Euler equation in the DuBois-Reymond form. The first term in the last relation can be differentiated; therefore, the second term is continuously differentiable. Thus, we have obtained the following proposition.

Proposition 1. Assume that the Lagrangian L in the problem (1) is continuously differentiable on a domain U ⊂ R³ such that the points (t, x*(t), ẋ*(t)), t ∈ [t₀, t₁], where x*(·) ∈ C₁([t₀, t₁]), belong to it. For a function x*(t) to yield a weak local minimum in the problem (1), it is necessary that the following Euler equation in the Lagrange form hold:

    −(d/dt) L_ẋ(t, x*(t), ẋ*(t)) + L_x(t, x*(t), ẋ*(t)) = 0,  t ∈ [t₀, t₁].    (6)

The functions x*(t) along which the Euler equation holds are called extremals. We present several particular cases where the Euler equation has easily determined integrals.

Corollary 1. If the function L does not depend on ẋ, then it is necessary for the extremality of x*(t) that the following relation hold:

    L_x(t, x*(t)) = 0,  t ∈ [t₀, t₁].    (7)

Corollary 2. If the function L does not depend on x, then the Euler equation admits the following first integral, which in mechanics is called the "momentum integral":

    p(t) = L_ẋ(t, ẋ*(t)) = p₀ = const.

Corollary 3. If the function L does not depend on t, then the Euler equation admits the energy integral

    H(t) = p(t)ẋ*(t) − L(x*(t), ẋ*(t)) = L_ẋ(x*(t), ẋ*(t))ẋ*(t) − L(x*(t), ẋ*(t)) = H₀ = const.
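Corollary 3 can be checked numerically on a concrete Lagrangian. The harmonic-oscillator Lagrangian below is a hypothetical example of mine: L = ẋ²/2 − x²/2 does not depend on t, its Euler equation is ẍ + x = 0, and along the extremal x*(t) = sin t the energy H must be constant.

```python
# Numerical check of Corollary 3 (hypothetical example): for the
# t-independent Lagrangian L = xdot^2/2 - x^2/2 the Euler equation is
# xddot + x = 0, with extremal x*(t) = sin t; the energy
# H = L_xdot * xdot - L = xdot^2/2 + x^2/2 must stay constant along it.
import math

def energy(t):
    x, xdot = math.sin(t), math.cos(t)
    L = xdot ** 2 / 2 - x ** 2 / 2
    return xdot * xdot - L          # L_xdot = xdot, so H = xdot*xdot - L

values = [energy(0.3 * k) for k in range(10)]
spread = max(values) - min(values)  # ~0: H is a first integral
```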

Corollaries 1 and 2 follow directly from (6). In order to prove Corollary 3, one should take the derivative dH/dt and show, making use of (6), that it is equal to zero.

Thus, we have found the second-order differential equation (6) for an extremal x*(t). The general solution of this equation depends on two arbitrary constants, which are determined by the boundary conditions.

Let us outline the derivation of a necessary condition for an extremum in the simplest Bolza problem

    ℬ(x(·)) = ∫_{t₀}^{t₁} L(t, x(t), ẋ(t)) dt + ψ₀(x(t₀)) + ψ₁(x(t₁)) → inf,    (8)

where, unlike in the problem (1), there are no boundary conditions, but the functional contains endpoint terms. The stages of the proof are the same. Assuming that all the functions entering (8) are continuously differentiable, one can easily verify that there exists a first variation of the functional ℬ for a function x*(t) which lies in the domain of differentiability of ψ₀, ψ₁, and L; and that this variation is equal to

    δℬ(x*(·), x(·)) = ψ₀′(x*(t₀))x(t₀) + ψ₁′(x*(t₁))x(t₁) + δ𝒥(x*(·), x(·)).    (9)

If x*(t) is a solution of the problem (8), then, obviously,

    δℬ(x*(·), x(·)) = 0,  ∀x(·) ∈ C₁([t₀, t₁]).    (10)

The first variations δℬ and δ𝒥 coincide on the subspace L₀ introduced before. Therefore, by virtue of Proposition 1, L_ẋ|_{x*(t)} = p(t) is a continuously differentiable function, and the Euler equation (6) holds. Integrating by parts the expression (9) for the first variation of the functional ℬ according to Lagrange, we obtain

    δℬ(x*(·), x(·)) = ∫_{t₀}^{t₁} a(t)x(t) dt + (ψ₀′(x*(t₀)) − p(t₀))x(t₀) + (ψ₁′(x*(t₁)) + p(t₁))x(t₁).

It follows by the relation (10) that

    p(t₀) = ψ₀′(x*(t₀)),  p(t₁) = −ψ₁′(x*(t₁)).


We have arrived at the following proposition.

Proposition 2. Let the functions ψ₀, ψ₁, and L in the Bolza problem (8) be continuously differentiable. For a function x*(t) to yield a weak local minimum in the problem (8), it is necessary that the following Euler equation:

    (−(d/dt)L_ẋ + L_x)|_{x*(t)} = 0    (11)

hold with the boundary conditions

    L_ẋ|_{x*(t₀)} = ψ₀′(x*(t₀)),  L_ẋ|_{x*(t₁)} = −ψ₁′(x*(t₁)).    (12)
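Proposition 2 can be illustrated numerically. The Bolza problem below is a hypothetical example of mine: minimize ℬ(x) = ∫₀¹ ẋ²/2 dt + (x(1) − 1)²/2, i.e., ψ₀ = 0 and ψ₁(x) = (x − 1)²/2. The Euler equation gives ẍ = 0, so it suffices to search over lines x(t) = a + bt; the boundary conditions p(0) = ψ₀′ = 0 and p(1) = −ψ₁′(x(1)) force b = 0 and a = 1, and a brute-force grid search recovers exactly that.

```python
# Hypothetical Bolza example for Proposition 2: on lines x(t) = a + b*t,
# B(x) = integral_0^1 xdot^2/2 dt + (x(1) - 1)^2/2 has the closed form
# below; its minimizer should be the line predicted by the boundary
# conditions, namely a = 1, b = 0 (the constant function x = 1).
def B(a, b):
    return b * b / 2 + (a + b - 1) ** 2 / 2   # closed form for lines

best = min((B(a / 100, b / 100), a / 100, b / 100)
           for a in range(-200, 201) for b in range(-200, 201))
val, a_star, b_star = best
```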

All that we have said above can be easily extended to the vector case. For example, if we consider the simplest vector problem, where we have in ( l ) x ( - ) EC;([fo,f,]),XOER",x,ER",L : R x R " x R " + R , t h e n w e a r r i v e at the Euler equation which has the form (6) in the vector notation. It is, in fact, a system of n second-order equations,

_ -dt L,i(t,x', ..., x n , i l,..., x " ) + L . i ( t , x ' ,...,x " , x ' ,..., x " ) = O ,

(6')

where i = 1 , . . . , n. The general solution of this equation depends on 2n parameters, which are determined by the boundary conditions. We have a similar situation in the case of the simplest Bolza protl]), +b0 : R" + R, : R" + R, blem wherein, in (8), x ( . ) € C;([to, L :R x R" x R" + R. Here again the Euler equation has the same form (11) in the vector notation; and the boundary conditions (12) hold, which are also vector conditions. We leave the proof of these assertions to the reader. 2.2.2. Illustrations and discussion The Euler equation for the simplest problem is a full analog of the Fermat equation f ' ( x * ) = 0, which we discussed in the Introduction. In order to


show this, we shall carry out a "non-elementary", functional derivation of the Euler equation. We consider the subspace L₀ of the space C₁([t₀, t₁]) that we mentioned in the preceding subsection. We formulate the problem (1) as the following problem without constraints:

    f(x(·)) = ∫_{t₀}^{t₁} L(t, x*(t) + x(t), ẋ*(t) + ẋ(t)) dt → inf,    (13)

where x*(t) is a function being investigated for an extremum in the problem (1), and x(·) ∈ L₀. Exactly as we did in Example 8 of § 0.2, one can show that the functional f in (13) is Fréchet differentiable on the space L₀, and that its derivative at zero has the form

    ⟨f′(0), x(·)⟩ = ∫_{t₀}^{t₁} (q(t)x(t) + p(t)ẋ(t)) dt,

where the functions q(t) and p(t) were introduced in the preceding subsection. The Fermat equation f′(0) = 0 is equivalent to the following:

    ∫_{t₀}^{t₁} (q(t)x(t) + p(t)ẋ(t)) dt = 0,  ∀x(·) ∈ L₀.    (14)

Integrating by parts the first term of (14) according to DuBois-Reymond, and utilizing the fact that any continuous function with mean value zero can be taken for y(t) = ẋ(t), x(·) ∈ L₀, we obtain that the linear functional on the space C([t₀, t₁]) equal to

    y(·) ↦ ∫_{t₀}^{t₁} (p(t) − ∫_{t₀}^{t} q(τ) dτ) y(t) dt

vanishes on the subspace of those functions y(t) belonging to C([t₀, t₁]) for which

    ∫_{t₀}^{t₁} y(t) dt = 0.

By the corollary to the annihilator lemma (see § 0.1), we obtain at once

    −∫_{t₀}^{t} q(τ) dτ + p(t) = λ₀,

which is the Euler equation.
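The DuBois-Reymond form just obtained is easy to test on a concrete Lagrangian. The example below is a hypothetical choice of mine: for L = ẋ²/2 − x we have L_x = −1 and L_ẋ = ẋ, the Euler equation is ẍ = −1, and x*(t) = −t²/2 + t is an extremal; the combination −∫₀ᵗ L_x dτ + L_ẋ(t) must then be constant in t.

```python
# Check of the DuBois-Reymond form (hypothetical example): along the
# extremal x*(t) = -t^2/2 + t of L = xdot^2/2 - x, the combination
# -integral_0^t L_x dtau + L_xdot(t) must be a constant c0.
def dbr(t):
    xdot = 1.0 - t         # derivative of the extremal x*(t) = -t^2/2 + t
    int_Lx = -t            # integral_0^t (-1) dtau, since L_x = -1
    return -int_Lx + xdot  # should equal the constant c0 = 1

vals = [dbr(0.05 * k) for k in range(21)]
spread = max(vals) - min(vals)
```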


Because of this, the simplest problem of the calculus of variations is an infinite-dimensional analog of the problem of finding an unconditional extremum of a function of several variables. However, there are effects peculiar to problems in the calculus of variations, as compared with classical analysis, effects related to the specific character of these problems. We proceed to illustrations.

Example 1. The solution of the Euler equation exists, is unique, and yields an absolute extremum in the following problem:

    𝒥₁(x(·)) = ∫₀¹ ẋ²(t) dt → inf;  x(0) = 0,  x(1) = 0.

Here, the Euler equation is ẍ = 0. The solution that satisfies the boundary conditions is unique: x*(t) ≡ 0. Clearly, it yields the absolute minimum in this problem.

Example 2. The solution of the Euler equation exists, is unique, and yields a weak extremum, but does not yield a strong extremum:

    𝒥₂(x(·)) = ∫₀¹ ẋ³(t) dt → inf;  x(0) = 0,  x(1) = 1.

Here, the Euler equation is d(3ẋ²)/dt = 0. The unique solution that satisfies the boundary conditions is x*(t) = t. Let a function x(t) belong to the space C₁([t₀, t₁]), and let x(0) = x(1) = 0. Then the function x*(t) + x(t) is admissible. We obtain

    𝒥₂(x*(·) + x(·)) = ∫₀¹ (1 + ẋ(t))³ dt = 𝒥₂(x*(·)) + 3∫₀¹ ẋ(t) dt + ∫₀¹ (3ẋ²(t) + ẋ³(t)) dt = 𝒥₂(x*(·)) + ∫₀¹ (3ẋ²(t) + ẋ³(t)) dt.

Thus, if

    3ẋ²(t) + ẋ³(t) ≥ 0,


in particular, if

    ‖x(·)‖₁ ≤ 3,

then 𝒥₂(x*(·) + x(·)) ≥ 𝒥₂(x*(·)), i.e., the function x*(t) = t yields a weak local minimum in the problem. On the other hand, we obtain the values

    𝒥₂(xₙ(·)) → −∞

on the sequence of functions

    xₙ(t) = x*(t) + hₙ(t),

where hₙ(0) = hₙ(1) = 0, and the functions hₙ are chosen (say, piecewise linear, with a short interval of steeply negative slope) so that 𝒥₂(xₙ(·)) decreases without bound. It remains to note that the functions hₙ(t) are arbitrarily close to zero in the metric of C([0, 1]) as n → ∞. It follows that there is no strong extremum, and that inf 𝒥₂ = −∞. The former is caused by the fact that the Weierstrass condition is violated, and the cause of the latter is established by Bogoljubov's theorem (see Subsection 9.2.4).

Example 3. The solution of the Euler equation exists, is unique, yields an absolute extremum, but is not of class C¹:

    𝒥₃(x(·)) = ∫₀¹ t^{2/3} ẋ²(t) dt → inf;  x(0) = 0,  x(1) = 1.

This example is due to Hilbert. Here, the Euler equation has the form

    (d/dt)(2t^{2/3} ẋ) = 0.

Its general solution is x(t) = Ct^{1/3} + D. The curve x*(t) = t^{1/3} passes through the given points. It is easy to verify directly that the function x*(t) yields an absolute extremum in the problem. However, this function is not continuously differentiable.
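The failure of the strong minimum in Example 2 can be exhibited numerically. The broken-line variations hₙ below are my own choice, not the book's: slope −√n on [0, 1/n] and the small positive slope that returns hₙ to 0 at t = 1. The piecewise integrals are then computed in closed form, so that max|hₙ| → 0 in the C-metric while 𝒥₂(xₙ) decreases without bound.

```python
# Numerical companion to Example 2 (the variations h_n are my own
# hypothetical choice): x_n = x* + h_n with h_n piecewise linear,
# slope -sqrt(n) on [0, 1/n], then a small positive return slope.
# max|h_n| -> 0 while J_2(x_n) -> -infinity, so x*(t) = t is not a
# strong minimum.
import math

def J2_of_xn(n):
    """J_2(x_n) = integral_0^1 (1 + hdot_n)^3 dt, computed piecewise."""
    down = (1 - math.sqrt(n)) ** 3 / n             # steep part on [0, 1/n]
    up_slope = (1 / math.sqrt(n)) / (1 - 1 / n)    # return slope on [1/n, 1]
    return down + (1 + up_slope) ** 3 * (1 - 1 / n)

def h_max(n):
    return 1 / math.sqrt(n)    # |h_n| peaks at t = 1/n

vals = [J2_of_xn(10 ** k) for k in (2, 4, 6)]   # decreasing, all negative
```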


Example 4 (a conjugate point):

    𝒥₄(x(·)) = ∫₀ᵀ (ẋ²(t) − x²(t)) dt → inf;  x(0) = 0,  x(T) = 0.

First, we shall show that, if T ≤ π, then the lower limit of the functional 𝒥₄ is zero. To this end, it is sufficient to reduce the functional 𝒥₄ on the subspace L₀ = {x(t) ∈ C₁([0, T]) | x(0) = x(T) = 0} to the form

    ∫₀ᵀ (ẋ(t) − x(t)·ctg t)² dt.

It follows that for T < π the function x*(t) ≡ 0 is the unique minimal, and for T = π all minimals are x*(t, C) = C sin t. (We note that, since x(t) ∈ C₁ and x(0) = x(T) = 0, the function ctg t · x(t) does not have any singularities on [0, T] if T ≤ π.) Indeed, integrating by parts, we obtain

    ∫₀ᵀ (ẋ − x ctg t)² dt = ∫₀ᵀ (ẋ² + x² ctg²t − 2xẋ ctg t) dt = ∫₀ᵀ (ẋ² − x²) dt,

which was required. We now consider the case T > π. It is easy to evaluate that, if x(t, λ) = λ sin(πt/T), then

    𝒥₄(x(·, λ)) = (λ²/2)(π²/T − T) < 0.

The functional 𝒥₄ is negative for small λ, and the function x(t, λ) itself is arbitrarily close to zero in the metric of C₁([0, T]). This means that the extremal x*(t) ≡ 0 no longer yields even a weak minimum. The Euler equation for the functional under consideration has the form ẍ + x = 0. The zeros of the non-trivial solutions of this equation which satisfy the condition x(0) = 0 are called the points conjugate to the point zero. In our case, these solutions have the form x(t, C) = C sin t. Most important is whether the first conjugate point belongs to the interval [0, T]

CH. 2, §2.2    ELEMENTARY DERIVATION    113

or not. The condition for a minimum related to a conjugate point, the Jacobi condition, will be discussed in Subsection 2.2.5.

Let us summarize the results. We have found cases wherein
- there exists a unique solution of the Euler equation which yields neither a strong nor a weak extremum, namely, T > π, T ≠ kπ;
- there are an infinite number of solutions, and all of them yield the absolute minimum in the problem, T = π;
- there are an infinite number of solutions, but none of them yields either a strong or a weak extremum, T > π, T = kπ, k > 1.

Example 5. The Euler equation has no solutions; moreover, there is no absolutely continuous solution at all:

    J_5(x(·)) = ∫_0^1 t² ẋ²(t) dt → inf;  x(0) = 0,  x(1) = 1

(compare with Example 3). This example is due to Weierstrass. It was given by Weierstrass as an argument against Riemann's justification of the Dirichlet principle. Here, the Euler equation is d(2t²ẋ)/dt = 0. Its general solution is x(t) = Ct⁻¹ + D. No curve of this family passes through the points that we need. Moreover, there does not exist any solution to the problem in the class of absolutely continuous functions. This is so because, on any such function, J_5(x(·)) > 0, whereas the value of the problem is zero. Indeed, if we take the Weierstrass minimizing sequence

    x_n(t) = arctg nt / arctg n,

or x_n(t) = t^(1/n), or a still simpler sequence y_n(t), then we find out that J_5(x_n(·)) → 0 (J_5(y_n(·)) → 0). Examples 3 and 5 are particular cases of the problem discussed in §9.2. In the same section, we explain the cause of the fact that there is no solution in the Weierstrass example.
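The collapse of J_5 along a minimizing sequence can be seen numerically. The sketch below (an illustrative addition; the choice x_n(t) = t^(1/n) is one of the sequences mentioned above) evaluates J_5(x_n) by a midpoint rule and compares it with the closed form 1/(n(n+2)), which tends to zero.

```python
# J5(x) = integral over [0,1] of t^2 * xdot(t)^2 dt, with x_n(t) = t^(1/n):
# xdot_n(t) = (1/n) t^(1/n - 1), so J5(x_n) = 1/(n(n+2)) -> 0.

def j5_of_xn(n, m=100000):
    """Midpoint-rule value of J5 on the curve x_n(t) = t^(1/n)."""
    h = 1.0 / m
    s = 0.0
    for k in range(m):
        t = (k + 0.5) * h
        xdot = t ** (1.0 / n - 1.0) / n
        s += t * t * xdot * xdot * h
    return s

for n in (1, 2, 5, 10):
    print(n, j5_of_xn(n), 1.0 / (n * (n + 2)))  # numeric value vs closed form
```

Every admissible absolutely continuous curve gives a strictly positive value, yet the infimum is zero: the problem has no solution.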


2.2.3. Weierstrass' necessary condition

Unlike the Euler equation, Weierstrass' condition is a condition for a strong extremum. In deriving this condition, we shall employ special variations which, in fact, were introduced by Weierstrass himself, and which we therefore call Weierstrass variations. The derivative of the additional term which enters the definition of a Weierstrass variation has a needlelike form. As the parameter λ tends to zero, this needle becomes narrower, but does not decrease in the uniform metric. We shall make use of similar variations in deriving the simplest version of the maximum principle in §2.4.

Let f(x) be a smooth function on a straight line. The function of two variables E_f(x, ξ): R × R → R,

    E_f(x, ξ) = f(ξ) - f(x) - f'(x)(ξ - x),    (15)

is called the Weierstrass function of the function f. Geometrically, E_f(x, ξ) is the difference between the value of f at ξ and the value at ξ of the affine function tangent to f at x (Fig. 3). It follows, in particular, that if f is convex, then its Weierstrass function is non-negative. The definition we gave can easily be extended to the finite-dimensional case. Let X = Rⁿ, and let f: Rⁿ → R be a smooth function. The function E_f(x, ξ): Rⁿ × Rⁿ → R defined by the equality

    E_f(x, ξ) = f(ξ) - f(x) - (f'(x) | ξ - x)

Fig. 3.


is called the Weierstrass function. We pass to the derivation of Weierstrass' necessary condition, beginning with the simplest problem (1). We shall assume that the following smoothness requirement (standard in the calculus of variations) is satisfied: the integrand is continuously differentiable on a domain U in R³ which contains the points (t, x*(t), ẋ*(t)), t ∈ [t_0, t_1], where x*(t) ∈ C¹([t_0, t_1]). The following function of four variables is called the Weierstrass function of the integrand L:

    E(t, x, ẋ, ξ) = L(t, x, ξ) - L(t, x, ẋ) - (ξ - ẋ) L_ẋ(t, x, ẋ).    (16)

One can see that it is the Weierstrass function E_L with respect to the last argument ẋ, and that here t and x are parameters. Our goal now is to prove the following assertion.

Proposition 3. Under the assumptions concerning the smoothness of L and x*(·) that we mentioned above, let the function x*(t) be an extremal in the problem (1). Then, for the function x*(·) to yield a strong local minimum in the problem (1), it is necessary that the following inequality hold for any point t ∈ (t_0, t_1) and any real number ξ:

    E(t, x*(t), ẋ*(t), ξ) ≥ 0.    (17)

The last relation is called Weierstrass' condition for a strong minimum in the problem (1). We note that Weierstrass' condition is always satisfied if the integrand L is a convex function of the last argument ẋ. Such integrands are called quasiregular.

Proof. We first describe the class of Weierstrass variations. Let τ ∈ (t_0, t_1). We choose an ε > 0 such that τ + ε < t_1. Let λ be a number between 0 and ε. We denote by h(t, λ) the following continuous function:

    h(t, λ) = 0,   if t ∉ [τ, τ + ε];
    h(t, λ) = λξ,  if t = τ + λ;
    h(t, λ) linear on the intervals [τ, τ + λ] and [τ + λ, τ + ε];   here ξ ∈ R.
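A concrete needle variation can be written down directly; the following sketch (an illustrative addition with assumed sample values τ = 1, ε = 0.5, ξ = 2) implements h(t, λ) and shows that its sup-norm |λξ| shrinks with λ while the slope ξ on the first piece does not.

```python
# Piecewise-linear Weierstrass ("needle") variation h(t, lam):
# zero off [tau, tau+eps], value lam*xi at tau+lam, linear in between.

def needle(t, lam, tau=1.0, eps=0.5, xi=2.0):
    if t <= tau or t >= tau + eps:
        return 0.0
    if t <= tau + lam:
        return xi * (t - tau)                        # rising piece, slope xi
    return lam * xi * (tau + eps - t) / (eps - lam)  # linear return to 0 at tau+eps

lam = 0.1
print(needle(1.0, lam), round(needle(1.0 + lam, lam), 12), needle(1.5, lam))
```

As λ → 0 the peak λξ tends to zero (so the variation is small in C), but the derivative on [τ, τ + λ] stays equal to ξ — which is exactly why the construction probes strong, not weak, minimality.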


On Fig. 4, we present both the function h and its derivative. The derivative of the function h reminds one of a needle, which (as we already mentioned) gave the reason for calling variations of this type "needlelike" variations. Now, the class of Weierstrass variations is constructed as follows: x(t, λ) = x*(t) + h(t, λ). The function x(t, λ) joins the same points as the function x*(t). (True, it is not a continuously differentiable function, but it is admissible in the sense that it belongs to W¹_∞ and the functional J is defined on it.) We form the function φ(λ) = J(x(·, λ)).

Let us write it out in more detail:

    φ(λ) = ∫_{t_0}^{t_1} L(t, x(t, λ), ẋ(t, λ)) dt
         = J(x*(·)) + ∫_τ^{τ+λ} [ L(t, x*(t) + (t - τ)ξ, ẋ*(t) + ξ) - L(t, x*(t), ẋ*(t)) ] dt
           + ∫_{τ+λ}^{τ+ε} [ L(t, x*(t) + λξ - λξ(ε - λ)⁻¹(t - τ - λ), ẋ*(t) - λξ(ε - λ)⁻¹) - L(t, x*(t), ẋ*(t)) ] dt.

Fig. 4.


Differentiating φ(λ) with respect to the parameter λ and setting λ = 0, we obtain that

    φ'(+0) = L(τ, x*(τ), ẋ*(τ) + ξ) - L(τ, x*(τ), ẋ*(τ))
             + ξ ∫_τ^{τ+ε} ( (τ + ε - t) ε⁻¹ L_x(t, x*(t), ẋ*(t)) - ε⁻¹ L_ẋ(t, x*(t), ẋ*(t)) ) dt.

We shall now make use of the fact that x*(t) is an extremal, i.e., that it satisfies the Euler equation. The DuBois-Reymond form is more convenient here:

    -∫_{t_0}^t L_x(τ, x*(τ), ẋ*(τ)) dτ + L_ẋ(t, x*(t), ẋ*(t)) = C_0.

Utilizing this relation, we obtain, after an integration by parts,

    φ'(+0) = L(τ, x*(τ), ẋ*(τ) + ξ) - L(τ, x*(τ), ẋ*(τ)) - ξ L_ẋ(τ, x*(τ), ẋ*(τ))
             + ξ ( L_ẋ(τ + ε, x*(τ + ε), ẋ*(τ + ε)) - ε⁻¹ ∫_τ^{τ+ε} L_ẋ(t, x*(t), ẋ*(t)) dt ) + o(ε).

If x*(t) yields a strong minimum in the problem (1), then the inequality φ'(+0) ≥ 0 must hold. Passing to the limit in this relation as ε → 0, we obtain the inequality

    L(τ, x*(τ), ẋ*(τ) + ξ) - L(τ, x*(τ), ẋ*(τ)) - ξ L_ẋ(τ, x*(τ), ẋ*(τ)) ≥ 0,

which is the required Weierstrass condition.

All our considerations can be very easily generalized to the simplest vector problem. The following function of 3n + 1 variables is the Weierstrass function for the integrand of the simplest vector problem:

    E(t, x, ẋ, ξ) = L(t, x, ξ) - L(t, x, ẋ) - (ξ - ẋ | L_ẋ(t, x, ẋ)).
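The condition just derived is easy to evaluate on a concrete integrand. The sketch below (an assumed illustrative example, in the spirit of the text) takes L(t, x, ẋ) = ẋ³ and an extremal with ẋ* = 1; then E(ξ) = ξ³ - 1 - 3(ξ - 1) = (ξ - 1)²(ξ + 2), which becomes negative for ξ < -2, so the Weierstrass condition for a strong minimum is violated.

```python
# Weierstrass function of the integrand L = xdot^3 along an extremal with xdot* = 1.

def E(xi, xdot_star=1.0):
    L = lambda v: v ** 3        # the integrand as a function of xdot
    Lv = lambda v: 3 * v ** 2   # its derivative in xdot
    return L(xi) - L(xdot_star) - (xi - xdot_star) * Lv(xdot_star)

print(E(0.0), E(-3.0))  # 2.0 -16.0
```

A single negative value suffices: this extremal cannot yield a strong minimum, even if all weak (local) conditions are satisfied.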


Weierstrass variations have the same form,

    x(t, λ) = x*(t) + h(t, λ),

where, however, the function h(t, λ) now depends not on three, as before, but on n + 2 parameters (because here ξ = (ξ¹, ..., ξⁿ) is a vector). As a result, we arrive at an entirely similar formulation of Weierstrass' condition, namely, for a strong extremum of the simplest vector problem, it is necessary that the following inequality hold along an extremal x*(t):

    E(t, x*(t), ẋ*(t), ξ) ≥ 0,  ∀ξ ∈ Rⁿ,  t ∈ (t_0, t_1).

We noted above that Weierstrass' necessary condition is always satisfied for quasiregular functionals. In Subsection 9.2.4, we shall prove an important theorem of Bogoljubov, from which it will follow that to any simplest vector problem in the classical calculus of variations there corresponds an equivalent problem with a quasiregular integrand. This allows one to assume that, at least theoretically, Weierstrass' necessary condition is always satisfied.

2.2.4. Legendre's condition

Legendre's condition, and also Jacobi's condition, which will be discussed in the next subsection, are "second-order" conditions, i.e., they are related to second variations. For functionals in the classical calculus of variations, the second variation is a quadratic functional, and Legendre's and Jacobi's conditions are conditions under which this functional is non-negative. Sections 6.2 and 6.3 are devoted exclusively to the theory of quadratic functionals. The theory developed in these sections allows one to obtain Legendre's and Jacobi's conditions for general problems in the classical calculus of variations. Here, we shall restrict ourselves to only the simplest problem. Thus, we consider the problem (1). First, we shall evaluate the second variation of the functional J(x(·)) which enters the definition of the problem (1). Here, additional smoothness requirements should be imposed on the integrand L(t, x, ẋ). In order to assure ourselves of complete freedom in carrying out further computations in this and the next subsection, we shall require that the Lagrangian L be three times continuously differentiable in a domain U ⊂ R³ which contains the points (t, x*(t), ẋ*(t)),


t ∈ [t_0, t_1], where x*(t) is a twice continuously differentiable function. The function x*(t) is an extremal, which means that the Euler equation

    -(d/dt) L_ẋ(t, x*(t), ẋ*(t)) + L_x(t, x*(t), ẋ*(t)) = 0

holds. Under these assumptions, the function

    φ(λ) = J(x*(·) + λx(·)) = ∫_{t_0}^{t_1} L(t, x*(t) + λx(t), ẋ*(t) + λẋ(t)) dt    (18)

can be twice differentiated under the integral sign. Carrying out this differentiation, we obtain after elementary calculations the following formula:

    K(x(·)) = δ²J(x*(·), x(·)) = ∫_{t_0}^{t_1} ( A(t)ẋ²(t) + B(t)x²(t) + 2C(t)x(t)ẋ(t) ) dt
            = ∫_{t_0}^{t_1} ( A(t)ẋ²(t) + (B(t) - (d/dt)C(t)) x²(t) ) dt,    (19)

where

    A(t) = L_ẋẋ |_{x*(·)},  B(t) = L_xx |_{x*(·)},  C(t) = L_xẋ |_{x*(·)}.

Since x*(t) is an extremal, we obtain that the first variation δJ(x*(·), x(·)) vanishes on any function x(t) that vanishes at the end points of the interval [t_0, t_1]. (As in Subsection 2.2.1, we denote the set of such functions by L_0.) By what we have said, the function φ(λ) has derivative equal to zero at zero whenever x(t) ∈ L_0. From the necessary condition for a minimum of a function of one variable, φ''(0) ≥ 0, we obtain the following necessary condition for a minimum in the problem (1): for the extremal x*(t) to yield a weak local minimum in the problem (1), it is necessary that the quadratic functional K(x) be non-negative on the subspace L_0.
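The second-variation formula can be verified numerically. In the sketch below (an assumed illustrative example) we take L = ẋ² - x² on [0, 1] with x*(t) ≡ 0 and the test direction x(t) = sin(πt); then A = 2, B = -2, C = 0, so δ²J = ∫(2ẋ² - 2x²) dt = π² - 1, and a central second difference of φ(λ) = J(x* + λx) must reproduce this value.

```python
import math

def J(lam, n=20000):
    """Midpoint-rule value of J(x* + lam*x) for L = xdot^2 - x^2, x = sin(pi t)."""
    h = 1.0 / n
    s = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        x = lam * math.sin(math.pi * t)
        xdot = lam * math.pi * math.cos(math.pi * t)
        s += (xdot ** 2 - x ** 2) * h
    return s

eps = 1e-3
second_var = (J(eps) - 2 * J(0.0) + J(-eps)) / eps ** 2  # central difference for phi''(0)
print(second_var, math.pi ** 2 - 1)  # nearly equal
```

Since φ is exactly quadratic in λ here, the finite difference agrees with the closed form up to quadrature error only.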

Proposition 4. Let all the assumptions concerning the smoothness of the Lagrangian L and the function x*(t) that were mentioned above be satisfied.


Moreover, let the function x*(t) be an extremal in the problem (1). For the function x*(t) to yield a weak local minimum in the problem (1), it is necessary that the following inequality hold for any t ∈ [t_0, t_1]:

    A(t) = L_ẋẋ(t, x*(t), ẋ*(t)) ≥ 0.    (20)

This relation is called Legendre's condition.

Proof. By what we have said, it is enough to show that, if the inequality

    A(τ) < 0,  t_0 < τ < t_1,    (21)

holds at an interior point of the interval, then the quadratic functional K(x) fails to be non-negative on L_0. (By virtue of our assumptions concerning L and x*(·), the function A(t) is continuous and even differentiable. Therefore, the assumption that the inequality (21) holds at an interior point does not violate the generality of our considerations.) Let h*(t) denote the identically zero function. We consider the following variation of the function h*(t) (Fig. 5):

    h(t, λ, τ) = (λ/2 - |t - τ|)/√λ   for |t - τ| ≤ λ/2,
    h(t, λ, τ) = 0                    for |t - τ| ≥ λ/2.

It can be seen at once from the definition of the functions h(t, λ, τ) that they tend to 0 as λ → 0, and that the moduli of the products h(t, λ, τ)ḣ(t, λ, τ) are uniformly bounded by a constant, which we shall denote by C. Thus,

    δ²J(x*(·), h(·, λ, τ)) = ∫_{τ-λ/2}^{τ+λ/2} A(t) λ⁻¹ dt + ∫_{τ-λ/2}^{τ+λ/2} (B(t) - Ċ(t)) h²(t, λ, τ) dt → A(τ) < 0  as λ → +0.

We obtained that the functional K(h(·, λ_0, τ)) attains a negative value for some λ_0. In order to complete the proof of the proposition, it remains to smooth out the three corners of the function h(·, λ_0, τ) in order to obtain a function h_1(t) of class C¹ at which the functional K(h_1(·)) = δ²J(x*(·), h_1(·)) is negative. (We remind the reader that we are speaking of a weak extremum.) The proposition has been proved.

Remark. Proposition 4 can be immediately derived from Weierstrass' condition. Indeed, if A(τ) < 0, then Weierstrass' condition for the functional K at the point τ is not satisfied (because here E(t, x, ẋ, ξ) is equal to A(t)(ξ - ẋ)²). Therefore, there exists a polygonal line x(t), arbitrarily close to zero in the metric of C, on which K(x(·)) < 0. Smoothing out this line and multiplying it by a small constant, we arrive at the proof of Proposition 4.
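Checking Legendre's condition in practice only requires the second derivative of L in ẋ. The sketch below (an assumed illustrative example) computes A(t) = L_ẋẋ by a central second difference for L = ẋ³ along x*(t) = t, where A(t) = 6ẋ*(t) = 6 > 0 — so Legendre's condition holds even though, as seen earlier, the Weierstrass condition for a strong minimum fails for this integrand.

```python
# Central second difference in the xdot argument approximates A(t) = L_xdotxdot.

def legendre_A(L, xdot_star, h=1e-4):
    return (L(xdot_star + h) - 2 * L(xdot_star) + L(xdot_star - h)) / h ** 2

L = lambda v: v ** 3
A = legendre_A(L, 1.0)   # along x*(t) = t we have xdot*(t) = 1
print(A)                 # approximately 6, so (20) is satisfied
```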

2.2.5. Jacobi's condition

All the three necessary conditions for an extremum discussed above, namely, the Euler equation, Weierstrass' condition, and Legendre's condition, have a local character in the sense that computations at separate points are required in order to verify them.³⁾ On the other hand, it is clear that local conditions alone are not enough to obtain satisfactory necessary conditions in the classical calculus of variations. This can be seen, say, from the following example. An arc of a great circle is the shortest line on a sphere joining two given points only under the condition that there are no diametrically opposed points of the sphere inside this arc. If there are such points, then the arc does not yield the solution to the problem on the shortest line connecting the given points. However, any small part of the arc is the shortest line and, therefore, at any point of this arc all local necessary conditions for an extremum are satisfied. The point is that the shorter path which connects the end points of the arc is obtained by a global, not a local, variation. Jacobi's condition is exactly a basic global necessary condition for a local

³⁾ Here, the adjective "local" has a different sense than when we were speaking, e.g., about a local extremum. Here we are speaking of necessary conditions (for a local extremum) being local or global, i.e., of conditions which must be verified by checking either separate points of a curve or the entire curve.


minimum. Like Legendre's condition, it is a condition that a quadratic functional be non-negative. Thus, we consider again the simplest problem (1), and we suppose that all the assumptions under which we derived Legendre's condition are satisfied. We consider the following quadratic functional, which is the second variation of the functional J(x(·)):

    K(x(·)) = δ²J(x*(·), x(·)) = ∫_{t_0}^{t_1} ( A(t)ẋ²(t) + (B(t) - (d/dt)C(t)) x²(t) ) dt.

The Euler equation for the functional K has the form

    -(d/dt)(A(t)ẋ) + (B(t) - (d/dt)C(t)) x = 0.    (22)

Equation (22), i.e., the Euler equation for the second variation of the functional J(x(·)), is called the Jacobi equation of the problem (1). The Jacobi equation is a second-order linear differential equation. Assume that the following strict inequality holds:

    A(t) = L_ẋẋ(t, x*(t), ẋ*(t)) > 0.

This inequality is called the strengthened Legendre condition. Let this condition be satisfied. Equation (22) can be rewritten in the following form (the assumptions concerning L and x* make it possible):

    -A(t)ẍ - Ȧ(t)ẋ + (B(t) - Ċ(t)) x = 0,

or

    ẍ - P(t)ẋ - Q(t)x = 0.    (23)

The existence and uniqueness theorem for the Cauchy problem holds for such equations. In particular, the solution Φ(t, t_0) of the Jacobi equation with the boundary conditions Φ(t_0, t_0) = 0 and Φ̇(t_0, t_0) = 1 exists and is unique. The zeros of this solution distinct from the point t_0 are called the points conjugate to the point t_0.
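Conjugate points can be found by integrating the Jacobi equation numerically. In the sketch below (an assumed example tied to Example 4) the functional ∫(ẋ² - x²) dt has Jacobi equation ẍ + x = 0; the solution with Φ(0, 0) = 0, Φ̇(0, 0) = 1 is sin t, so the first point conjugate to t_0 = 0 is t = π. A small Runge-Kutta integrator locates the first zero.

```python
import math

def first_conjugate_point(h=1e-4, tmax=10.0):
    """Integrate xddot = -x with x(0)=0, xdot(0)=1 and return the first zero t > 0."""
    x, v, t = 0.0, 1.0, 0.0
    f = lambda x, v: (v, -x)   # first-order system for the Jacobi equation
    while t < tmax:
        k1 = f(x, v)
        k2 = f(x + h/2*k1[0], v + h/2*k1[1])
        k3 = f(x + h/2*k2[0], v + h/2*k2[1])
        k4 = f(x + h*k3[0], v + h*k3[1])
        xn = x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        vn = v + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        if t > h and x > 0 and xn <= 0:        # sign change: a zero was crossed
            return t + h * x / (x - xn)        # linear interpolation of the zero
        x, v, t = xn, vn, t + h
    return None

t_conj = first_conjugate_point()
print(t_conj, math.pi)  # first conjugate point, approximately pi
```

This matches the discussion of Example 4: extremals cease to be minimal exactly when the interval [0, T] contains the conjugate point π.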

Proposition 5. For the function x*(t) to yield a weak minimum in the problem (1) (under the assumptions concerning the smoothness of L and x*(·) under which Proposition 4 was derived, and when the strengthened Legendre condition holds), it is necessary that there be no points conjugate to the point t_0 in the interval (t_0, t_1).

This condition is called Jacobi's necessary condition.

Proof. Assume that the opposite is true, i.e., that there exists a point τ (t_0 < τ < t_1) such that

    Φ(τ, t_0) = 0.

We note that Φ̇(τ, t_0) ≠ 0; otherwise we would have, by virtue of the uniqueness of the solution of the Cauchy problem for Eq. (23) with the Cauchy data x(τ) = ẋ(τ) = 0, that Φ(t, t_0) ≡ 0, which contradicts the equality Φ̇(t_0, t_0) = 1. We denote by h(t) the function which coincides with Φ(t, t_0) on [t_0, τ] and is equal to zero for t ≥ τ. This is a "polygonal extremal": it consists of two extremal parts. We shall show that K(h(·)) = 0. Indeed, integrating by parts, we obtain

    K(h(·)) = ∫_{t_0}^τ ( A(t)ḣ²(t) + (B(t) - Ċ(t))h²(t) ) dt
            = A(t)ḣ(t)h(t) |_{t_0}^τ - ∫_{t_0}^τ ( (d/dt)(A(t)ḣ) - (B(t) - Ċ(t))h ) h dt = 0.

We construct now the following variation of the function h(t) (Fig. 6):

    h(t, λ, τ, ε) = h(t)   for t_0 ≤ t ≤ τ - λ,
    h(t, λ, τ, ε) linear on the interval [τ - λ, τ + ε],
    h(t, λ, τ, ε) = 0      for t ≥ τ + ε.

Fig. 6.


We evaluate φ(λ) = K(h(·, λ, τ, ε)). We have⁴⁾

    φ(λ) - φ(0) = φ(λ) = ∫_{τ-λ}^{τ+ε} ( A(t)ḣ²(t, λ) + (B(t) - Ċ(t))h²(t, λ) ) dt - ∫_{τ-λ}^τ ( A(t)ḣ²(t) + (B(t) - Ċ(t))h²(t) ) dt.

By the mean value theorem of differential calculus,

    h(τ - λ, λ) = |h(τ - λ)| = |λḣ(θ_1)|,  τ - λ ≤ θ_1 ≤ τ.

Hence, by the mean value theorem of integral calculus,

    ∫_{τ-λ}^{τ+ε} A(t)ḣ²(t, λ) dt = A(θ') λ²ḣ²(θ_1)/(λ + ε),

where τ - λ ≤ θ' ≤ τ + ε. Hence we obtain at once that

    φ'(+0) = -A(τ)ḣ²(τ) < 0.

Therefore, there exist a λ_0 and an ε_0 such that K(h(·; λ_0, τ, ε_0)) < 0. It remains to "smooth out" h(t, λ_0, τ, ε_0). Proposition 5 has been proved.

⁴⁾ For brevity, we write h(t, λ) instead of h(t, λ, τ, ε).

2.3. The Lagrange problem. The Euler-Lagrange equation

In this section, we derive the Euler-Lagrange equations for problems with constraints in the classical calculus of variations. At the basis of the derivation lies the Lagrange multiplier rule, which was proved in Chapter 1. We assume here that the interval [t_0, t_1] is fixed.

2.3.1. The Lagrange problem in solved form without phase constraints

Consider the following extremal problem:

    J(x(·), u(·)) = ∫_{t_0}^{t_1} f(t, x, u) dt → inf;
    ẋ = φ(t, x, u);
    h_0(x(t_0)) = 0,  h_1(x(t_1)) = 0.    (1)

CH. 2, §2.3    THE LAGRANGE PROBLEM    125

We see that here the differential constraints have a solved form, and that there are no phase constraints. We shall assume that the standard smoothness conditions of the classical calculus of variations are satisfied, namely, that the mappings

    f: R × Rⁿ × Rʳ → R,  φ: R × Rⁿ × Rʳ → Rⁿ

are continuously differentiable with respect to all variables on a domain U ⊂ R × Rⁿ × Rʳ containing the points (t, x*(t), u*(t)), t ∈ [t_0, t_1], and that the mappings h_i: Rⁿ → R^{s_i}, i = 0, 1, are continuously differentiable on domains V_i, i = 0, 1, containing the points x*(t_i), i = 0, 1. Here, (x*(t), u*(t)) belongs to the space C¹_n([t_0, t_1]) × C_r([t_0, t_1]). We denote by L = L(t, x, ẋ, u, p, λ_0) the function

    L = λ_0 f(t, x, u) + (p | ẋ - φ(t, x, u)),
    L: R × Rⁿ × Rⁿ × Rʳ × Rⁿ × R → R.

This function will be called the Lagrangian of the problem (1). The function

    ℒ = ℒ(x(·), u(·), p(·), l_0, l_1, λ_0),
    ℒ: C¹_n([t_0, t_1]) × C_r([t_0, t_1]) × C¹_n([t_0, t_1]) × R^{s_0} × R^{s_1} × R → R,

will be called the Lagrange function of the problem (1).

Theorem 1. For a pair (x*(·), u*(·)) to yield a weak local minimum in the problem (1), it is necessary that there exist Lagrange multipliers λ_0 ∈ R, λ_0 ≥ 0, l_i ∈ R^{s_i}, i = 0, 1, and p(·) ∈ C¹_n([t_0, t_1]), not all zero and such that

a) the Euler equation in x for the Lagrangian L,

    -(d/dt) L_ẋ |_{(x*(t), u*(t), p(t))} + L_x |_{(x*(t), u*(t), p(t))} = 0,    (2)

holds with the boundary conditions

    L_ẋ |_{t=t_0} = h*_0x(x*(t_0)) l_0,   L_ẋ |_{t=t_1} = -h*_1x(x*(t_1)) l_1;    (3)

b) the Euler equation in u for the Lagrangian L holds,

    L_u |_{(x*(t), u*(t), p(t))} = 0.    (4)

Equations (2) and (4) together are called the Euler-Lagrange equation of the problem (1). We present an expanded form of the equations obtained. In the expanded form, Eq. (2) is a vector differential equation

    -ṗ(t) = φ*_x(t, x*(t), u*(t)) p(t) - λ_0 f_x(t, x*(t), u*(t)).    (2')

(In fact, this is a system of n equations.) This equation is called the adjoint equation. The relations (3) are boundary conditions for Eq. (2'),

    p(t_0) = h*_0x(x*(t_0)) l_0,   p(t_1) = -h*_1x(x*(t_1)) l_1.    (3')

These relations are usually called transversality conditions. Finally, Eq. (4) has the form

    φ*_u(t, x*(t), u*(t)) p(t) = λ_0 f_u(t, x*(t), u*(t)).    (4')

Theorem 1 again confirms the Lagrange principle. Having formed the Lagrange function ℒ, we then write necessary conditions for an extremum in the problem without constraints, ℒ → inf. If one fixes u*(t) in the latter problem, then one obtains the simplest Bolza problem; and the relations (2), (3) are in complete agreement with Proposition 2 of the preceding subsection, where we derived a necessary condition for the simplest Bolza problem. Further, having fixed x*(t) in the Lagrange function, we obtain the simplest vector problem in u, where the Lagrangian does not depend on u̇. Equation (4) is written entirely in agreement with Corollary 1 to Proposition 1 of the preceding subsection.

Proof of Theorem 1. The problem (1) belongs to the class of smooth problems, to which the Lagrange multiplier rule can be applied (Theorem 1 of §1.1). Let us show this. Since the theorem concerns a weak extremum, we should consider our problem in the space C¹_n([t_0, t_1]) × C_r([t_0, t_1]). For brevity, we denote this space by Z, and the pair (x(·), u(·)) by z. Denoting by Y the space C_n([t_0, t_1]), we set

    f_0(z) = J(x(·), u(·)),  f_0: Z → R,
    [F(z)](t) = ẋ(t) - φ(t, x(t), u(t)),  F: Z → Y,
    H_i(z) = h_i(x(t_i)),  H_i: Z → R^{s_i},  i = 0, 1.

Then the problem (1) takes on the form (6)-(8) of §1.1:

    f_0(z) → inf;  F(z) = 0,  H_i(z) = 0,  i = 0, 1.    (1')

We are to verify the conditions of Corollary 2 to Theorem 1 of §1.1. All the functions and mappings that enter the formulation of the problem (1') are continuously differentiable on a neighborhood of the point z* = (x*(·), u*(·)). Indeed, the function f_0(z) is the composition f_0 = ψ_2 ∘ ψ_1 of the mapping

    ψ_1(z) = f(t, x(t), u(t)),  ψ_1: Z → C([t_0, t_1]),

whose differentiability was proved in Example 6 of §0.2, with the continuous linear functional

    ψ_2(ℓ(·)) = ∫_{t_0}^{t_1} ℓ(t) dt,  ψ_2: C([t_0, t_1]) → R,

whose differentiability was established in Example 1 of §0.2. We present the formula for the derivative of the function f_0(z):

    f_0'(z*)z = ∫_{t_0}^{t_1} [ (a(t) | x(t)) + (b(t) | u(t)) ] dt,
    a(t) = f_x(t, x*(t), u*(t)),  b(t) = f_u(t, x*(t), u*(t)).    (5)

Similarly, the mapping F is differentiable, with

    [F'(z*)z](t) = ẋ(t) - A(t)x(t) - B(t)u(t),
    A(t) = φ_x(t, x*(t), u*(t)),  B(t) = φ_u(t, x*(t), u*(t)).    (6)

Finally, the mappings H_i(z), i = 0, 1, are differentiable by what was proved in Example 4 of §0.2, and their derivatives are

    H'_i(z*)z = Γ_i x(t_i),  Γ_i = h'_i(x*(t_i)),  i = 0, 1.    (7)

It remains to prove the regularity of the mapping F(z) at z*. By the definition of regularity, this means that, for any y(·) ∈ C_n([t_0, t_1]), we must be able to solve the equation

    ẋ(t) - A(t)x(t) - B(t)u(t) = y(t),

where the matrices A(t) and B(t) are determined by the relations (6). By virtue of the conditions imposed on the mapping φ(t, x, u), and by virtue of the fact that (x*(t), u*(t)) ∈ C¹_n × C_r, we obtain that the matrices A(t) and B(t) are continuous. Making use of Theorem 1 of §0.4 on the solvability of a system of linear equations, we obtain the regularity of the mapping F. We apply now the Lagrange multiplier rule. We form the Lagrange function for the problem (1'):

    ℒ = λ_0 f_0(z) + ⟨y*, F(z)⟩ + (l_0 | H_0(z)) + (l_1 | H_1(z)).    (8)

According to the Lagrange multiplier rule, there exist Lagrange multipliers y*, l_0, l_1, and λ_0 such that the following relations hold at the point z*:

    ℒ_x = 0,  ℒ_u = 0,    (9)

which are equivalent to the relation ℒ_z = 0. However, Y = C_n([t_0, t_1]). Therefore, by Riesz's representation theorem, there exists a regular Borel measure μ such that

    ⟨y*, y(·)⟩ = ∫_{t_0}^{t_1} (y(t) | dμ(t)).    (10)

The relation (10) can be rewritten in another form:

    ⟨y*, y(·)⟩ = Σ_{i=1}^n ∫_{t_0}^{t_1} y^i(t) dμ_i(t).

Here, the μ_i(t) are functions of bounded variation which are continuous from the right except, possibly, at the point t_0. Substituting (10) into (8), we obtain that

    ℒ = ∫_{t_0}^{t_1} λ_0 f dt + ∫_{t_0}^{t_1} (ẋ - φ | dμ) + (l_0 | h_0(x(t_0))) + (l_1 | h_1(x(t_1))).

First, we shall investigate the first of Eqs. (9). We have

    ℒ_x(z*)x(·) = ∫_{t_0}^{t_1} (λ_0 a(t) | x(t)) dt + ∫_{t_0}^{t_1} (ẋ(t) - A(t)x(t) | dμ(t)) + (l_0 | Γ_0 x(t_0)) + (l_1 | Γ_1 x(t_1)).    (11)

Let us integrate by parts the terms that contain x(t) under the integral:

    ∫_{t_0}^{t_1} (λ_0 a(t) | x(t)) dt = ∫_{t_0}^{t_1} ( ∫_t^{t_1} λ_0 a(τ) dτ | ẋ(t) ) dt + ( ∫_{t_0}^{t_1} λ_0 a(τ) dτ | x(t_0) ),

    ∫_{t_0}^{t_1} (-A(t)x(t) | dμ(t)) = -∫_{t_0}^{t_1} ( ∫_t^{t_1} A*(τ) dμ(τ) | ẋ(t) ) dt - ( ∫_{t_0}^{t_1} A*(τ) dμ(τ) | x(t_0) ).

Substituting these expressions into (11) and making use of the fact that

    x(t_1) = x(t_0) + ∫_{t_0}^{t_1} ẋ(τ) dτ,

we obtain that

    ℒ_x(z*)x(·) = ∫_{t_0}^{t_1} ( ẋ(t) | dμ(t) + [ ∫_t^{t_1} λ_0 a(τ) dτ - ∫_t^{t_1} A*(τ) dμ(τ) + Γ*_1 l_1 ] dt )
                  + ( ∫_{t_0}^{t_1} λ_0 a(τ) dτ - ∫_{t_0}^{t_1} A*(τ) dμ(τ) + Γ*_0 l_0 + Γ*_1 l_1 | x(t_0) ).    (12)

The expression (12) represents a continuous linear functional on the space C¹_n,

    ℒ_x(z*)x(·) = ∫_{t_0}^{t_1} (ẋ(t) | dm(t)) + (α | x(t_0)),    (13)

where

    dm(t) = dμ(t) + ( ∫_t^{t_1} λ_0 a(τ) dτ - ∫_t^{t_1} A*(τ) dμ(τ) + Γ*_1 l_1 ) dt,
    α = ∫_{t_0}^{t_1} λ_0 a(τ) dτ - ∫_{t_0}^{t_1} A*(τ) dμ(τ) + Γ*_0 l_0 + Γ*_1 l_1.    (13')

By virtue of the uniqueness of the representation of a linear functional on the space C¹_n in the form (13), and from the equation ℒ_x = 0, we obtain that

    dm(t) = 0,  α = 0.    (14)

It can be seen from (14) and (13') that the vector-valued function μ(t) is absolutely continuous. We set p(t) = μ̇(t). Then it follows from (14) and (13') that p(t) satisfies the equation

    p(t) = -∫_t^{t_1} λ_0 a(τ) dτ + ∫_t^{t_1} A*(τ) p(τ) dτ - Γ*_1 l_1.    (15)

Substituting t = t_0 in (15) and making use of the expression for α in (13'), we obtain

    p(t_0) = Γ*_0 l_0.

If we set t = t_1, then we obtain the equality p(t_1) = -Γ*_1 l_1. Finally, differentiating (15), we obtain the equation

    -ṗ(t) = A*(t) p(t) - λ_0 a(t).

The relations (2') and (3') have been proved. Substituting now p(t) dt for dμ(t) in the formula for ℒ_u, we obtain that the linear functional on C_r([t_0, t_1]) of the form

    ℒ_u(z*)u(·) = ∫_{t_0}^{t_1} ( λ_0 b(t) - B*(t) p(t) | u(t) ) dt

is equal to zero. By Riesz's theorem, it follows that

    B*(t) p(t) = λ_0 b(t).

The relation (4'), and with it also Theorem 1, have been proved.

2.3.2. Isoperimetric problem

In the calculus of variations, the following minimization problem is called the isoperimetric problem:

    J(x(·)) = ∫_{t_0}^{t_1} f_0(t, x, ẋ) dt → inf;
    ∫_{t_0}^{t_1} f_j(t, x, ẋ) dt = a_j,  j = 1, ..., m,    (16)
    h_0(x(t_0)) = 0,  h_1(x(t_1)) = 0
    (f_j: R × Rⁿ × Rⁿ → R, j = 0, ..., m;  h_i: Rⁿ → R^{s_i}, i = 0, 1).

If we set

    u^i = ẋ^i,  i = 1, ..., n,

then we obtain the following Lagrange problem:

    J(x(·), u(·)) = ∫_{t_0}^{t_1} f_0(t, x¹, ..., xⁿ, u¹, ..., uⁿ) dt → inf;
    ẋ^i = u^i,  i = 1, ..., n,
    ẋ^{n+j} = f_j(t, x¹, ..., xⁿ, u¹, ..., uⁿ),  j = 1, ..., m,
    h_i(x¹(t_i), ..., xⁿ(t_i)) = 0,  i = 0, 1,
    x^{n+j}(t_0) = 0,  x^{n+j}(t_1) = a_j,  1 ≤ j ≤ m.

Applying Theorem 1 to this problem, we arrive at the following result.

Theorem 2. For a vector-valued function x*(t) to yield a weak local minimum in the problem (16), it is necessary that there exist Lagrange multipliers λ_j ∈ R, 0 ≤ j ≤ m, l_i ∈ R^{s_i}, i = 0, 1, not all zero and such that the Euler equation

    -(d/dt) L_ẋ(t, x*(t), ẋ*(t)) + L_x(t, x*(t), ẋ*(t)) = 0

holds for the Lagrangian

    L(t, x, ẋ) = Σ_{j=0}^m λ_j f_j(t, x, ẋ),

where the following boundary conditions are satisfied:

    L_ẋ |_{t=t_0} = h*_0x(x*(t_0)) l_0,   L_ẋ |_{t=t_1} = -h*_1x(x*(t_1)) l_1.
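A concrete instance of Theorem 2 can be checked numerically. The sketch below (an assumed illustrative example) minimizes ∫_0^1 ẋ² dt subject to x(0) = x(1) = 0 and the isoperimetric constraint ∫_0^1 x dt = 1/6. The Euler equation for the Lagrangian ẋ² + λx forces ẍ to be constant, and the parabola x̄(t) = t(1 - t) satisfies all constraints; constraint-preserving perturbations can only increase the value.

```python
import math

def energy(x, n=2000):
    """Discrete approximation of the integral of xdot^2 over [0, 1]."""
    h = 1.0 / n
    return sum((x((k + 1) * h) - x(k * h)) ** 2 / h for k in range(n))

xbar = lambda t: t * (1 - t)                 # the extremal; its constraint integral is 1/6
v = lambda t: math.sin(2 * math.pi * t)      # v(0) = v(1) = 0 and the integral of v is 0
for eps in (0.0, 0.05, 0.2):
    pert = lambda t, e=eps: xbar(t) + e * v(t)
    print(eps, energy(pert))                 # minimum at eps = 0, value near 1/3
```

The cross term vanishes because x̄ satisfies the Euler equation of the constrained problem, so the energy grows quadratically in the perturbation size.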

As an exercise, the reader may try to obtain the form of the necessary condition in the problem with higher derivatives,

    ∫_{t_0}^{t_1} f(t, x, ẋ, ẍ, ..., x^{(n)}) dt → inf;
    x^{(i)}(t_0) = ξ^i_0,  x^{(i)}(t_1) = ξ^i_1,  0 ≤ i ≤ n - 1,

reducing this problem to the Lagrange problem. Moreover, it is easy to derive directly from Theorem 1 of §1.1 a necessary condition for a problem with phase constraints; e.g., for the following problem:

    ∫_{t_0}^{t_1} f(t, x, ẋ) dt → inf;  Φ(t, x) = 0,  x(t_0) = x_0,  x(t_1) = x_1
    (f: R × Rⁿ × Rⁿ → R,  Φ: R × Rⁿ → Rᵐ,  m < n).

In order to guarantee the regularity, it is sufficient to require that the following condition hold:

    rank Φ_x(t, x(t)) = m,  t ∈ [t_0, t_1].

In this problem, the necessary condition also has the form of the Euler equation for the Lagrangian

    L = f(t, x, ẋ) - (p(t) | Φ(t, x)).

2.4. The Pontrjagin maximum principle. Formulation and discussion

This section is devoted to the formulation and discussion of a basic necessary condition for an extremum in optimal control theory, the Pontrjagin maximum principle. We also present here an elementary proof of the maximum principle for a particular case of the problem with a free right end point. A proof of the entirely general maximum principle is contained in §2.5. In this chapter, we restrict ourselves to the optimal control problem without phase constraints, postponing the discussion of problems with phase constraints to Chapter 5.

2.4.1. Formulation of the maximum principle

As follows from the explanations given in §2.1, the optimal control problem without phase constraints can be formulated in the following way:

    J(x(·), u(·)) = ∫_{t_0}^{t_1} f(t, x, u) dt → inf;    (1)
    ẋ = φ(t, x, u),    (2)
    u ∈ U ⊂ Rʳ,    (3)
    h_0(t_0, x(t_0)) = 0,  h_1(t_1, x(t_1)) = 0    (4)

(f: R × Rⁿ × Rʳ → R,  φ: R × Rⁿ × Rʳ → Rⁿ,  h_i: R × Rⁿ → R^{s_i},  i = 0, 1).

It is assumed that all functions and sets that enter the conditions of the problem satisfy the conditions indicated in Subsection 2.1.2. As was already noted, we consider as admissible controls bounded measurable vector-valued functions u(t) which assume values in U; and the notions of a "local extremum" or an "optimal process" have the same sense as in Subsection 2.1.4. We shall formulate the maximum principle in two equivalent forms, namely, in the "Hamiltonian" and the "Lagrangian" forms. We begin with the Hamiltonian form. We consider the function

    H(t, x, u, p, λ_0) = (p | φ(t, x, u)) - λ_0 f(t, x, u)

(where p ∈ Rⁿ and λ_0 ∈ R_+), which we shall call the Pontrjagin function. When the context is that of classical mechanics, the variables denoted by the letter p are usually called momenta. Together with the Pontrjagin function, we introduce the function

    ℋ(t, x, p, λ_0) = sup_{u∈U} H(t, x, u, p, λ_0),

which is called the Hamiltonian.
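Computing the Hamiltonian amounts to maximizing the Pontrjagin function over the control set U. The sketch below (an assumed example) takes the scalar system ẋ = u with U = [-1, 1] and f independent of u, so that, up to a term constant in u, H = p·u; the supremum is then attained at a vertex of U, the typical bang-bang situation.

```python
# Maximizing H(u) = p*u over a discretized control set U = [-1, 1].

def argmax_H(p, grid):
    H = lambda u: p * u
    return max(grid, key=H)

grid = [i / 100.0 for i in range(-100, 101)]   # discretization of U = [-1, 1]
print(argmax_H(2.5, grid), argmax_H(-1.0, grid))  # 1.0 -1.0
```

The maximizing control follows the sign of p(t); for more general integrands the same grid search (or an analytic maximization) evaluates sup over U pointwise in t.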


Theorem 1 (the maximum principle in the Hamiltonian form). Let (x*(t), u*(t)) be an optimal controlled process in the problem (1)-(4), defined on the interval [t_0*, t_1*]. Then there exist a number λ_0 ≥ 0, vectors l_0 ∈ R^{s_0}, l_1 ∈ R^{s_1}, and a vector-valued function p(t), not all zero, such that

(i) the vector-valued function p(t) satisfies the adjoint equation

    ṗ = -H_x(t, x*(t), u*(t), p(t), λ_0)    (5)

and the boundary conditions

    p(t_0*) = h*_0x(t_0*, x*(t_0*)) l_0,   p(t_1*) = -h*_1x(t_1*, x*(t_1*)) l_1;    (6)

(ii) the maximum condition holds for almost all t of [t_0*, t_1*]:

    H(t, x*(t), u*(t), p(t), λ_0) = ℋ(t, x*(t), p(t), λ_0);    (7)

(iii) the Hamiltonian ℋ(t, x*(t), p(t), λ_0) is continuous on the interval [t_0*, t_1*] and satisfies the relations

    ℋ(t_0*, x*(t_0*), p(t_0*), λ_0) = -(h_0t(t_0*, x*(t_0*)) | l_0),
    ℋ(t_1*, x*(t_1*), p(t_1*), λ_0) = (h_1t(t_1*, x*(t_1*)) | l_1)    (8)

at the ends of this interval. We note the expression for the Hamiltonian

    ℋ(t, x*(t), p(t), λ_0) = (h_1t(t_1*, x*(t_1*)) | l_1) - ∫_t^{t_1*} H_t(s, x*(s), u*(s), p(s), λ_0) ds,    (8a)

which is obtained iil the proof, and also the uniform Hamiltonian notation x = H P7 ~5 = - H, of Eqs. ( 2 ) and (5). We pass to the description of the Lagrangian form of the maximum principle. We write down the Lagrunge function of the problem (1)-(4), the same as in 02.3,

CH. 2, §2.4]

THE PONTRJAGIN MAXIMUM PRINCIPLE

135

where

L(t, x, ẋ, u, p, λ0) = λ0 f(t, x, u) + (p | ẋ − φ(t, x, u)).

Theorem 1′ (the maximum principle in the Lagrangian form). Let (x*(t), u*(t)) be an optimal controlled process in the problem (1)-(4) defined on the interval [t0*, t1*]. Then there exist a number λ0 ≥ 0, vectors l0 ∈ R^{s0}, l1 ∈ R^{s1}, and a continuous vector-valued n-dimensional function p(t), not all zero, such that
(i) the Lagrangian L satisfies the Euler equation in x,

−(d/dt) Lẋ(t, x*(t), ẋ*(t), u*(t), p(t), λ0) + Lx(t, x*(t), ẋ*(t), u*(t), p(t), λ0) = 0,   (5′)

almost everywhere on the interval [t0*, t1*], and the boundary conditions

Lẋ |_{t = t0*} = h0x^T(t0*, x*(t0*)) l0,   Lẋ |_{t = t1*} = −h1x^T(t1*, x*(t1*)) l1;   (6′)

(ii) the Lagrangian L attains its minimum in u at u = u*(t) for almost all t of [t0*, t1*],

L(t, x*(t), ẋ*(t), u*(t), p(t), λ0) = min_{u ∈ U} L(t, x*(t), ẋ*(t), u, p(t), λ0);   (7′)

(iii) the Lagrange function ℒ is differentiable with respect to t0 from the right at the point t0*, and with respect to t1 from the left at the point t1*, and

∂ℒ/∂t0 |_{+0} = 0,   ∂ℒ/∂t1 |_{−0} = 0,   (8′)

where ∂/∂t + 0 and ∂/∂t − 0 denote the right and left derivative, respectively.
The verification of the fact that both formulations of the maximum principle are equivalent does not cause any difficulties. Indeed,

L = (p | ẋ) − H.

Therefore, the relations (5) and (5′), (6) and (6′), and (7) and (7′) are pairwise equivalent. It remains to verify that the relations (8) and (8′) are


equivalent. We note that the continuity of the Hamiltonian is not an independent condition, but follows from (5) and (7) (this will be seen from the proof), and therefore from (5′) and (7′). Further, by virtue of (7′),

ℒ(t0, t1, x*(·), ...) = (l0 | h0(t0, x*(t0))) + (l1 | h1(t1, x*(t1))) + ∫_{t0}^{t1} [(p(t) | ẋ*(t)) − ℋ(t, x*(t), p(t), λ0)] dt.

Therefore, for ε > 0,

ℒ(t0* + ε, t1, ...) − ℒ(t0*, t1, ...) =
= (l0 | h0(t0* + ε, x*(t0* + ε)) − h0(t0*, x*(t0*))) − ∫_{t0*}^{t0*+ε} [(p(t) | ẋ*(t)) − ℋ(t, x*(t), p(t), λ0)] dt =
= (l0 | h0(t0* + ε, x*(t0* + ε)) − h0(t0*, x*(t0*))) − ε (p(t0*) | ẋ*(t0*)) + ε ℋ(t0*, x*(t0*), p(t0*), λ0) + o(ε) =

(by virtue of the first equality in (6))

= ε [(l0 | h0t(t0*, x*(t0*))) + ℋ(t0*, x*(t0*), p(t0*), λ0)] + o(ε).

The equality obtained implies the equivalence of the first relations in (8) and (8′). The equivalence of the second relations can be verified in a similar way.
Theorem 1′, and, therefore, the Pontrjagin maximum principle, is one more realization of the Lagrange principle formulated in the Introduction. According to this principle, necessary conditions for an extremum in a problem with constraints coincide with necessary conditions for an extremum of the Lagrange function subject to the constraints not incorporated in this function. Indeed, if the Lagrange multipliers λ0, l0, l1, and p(t) are fixed, then the Lagrange function ℒ depends on three groups of variables, namely, on phase trajectories x(t), controls u(t), and time instants t0 and t1. If we now fix the interval [t0, t1] and the control u(t), then


the problem on the minimum of the Lagrange function in x(t) has the form of the classical Bolza problem; and the assertion (i) of Theorem 1′ means that x*(t) satisfies the necessary condition for the minimum of the Lagrange function with respect to x(t) for fixed u(t) = u*(t) and t0 = t0*, t1 = t1*. In the same way, the assertion (ii) of Theorem 1′ is necessary and sufficient for the Lagrange function to attain its minimum over all admissible controls (this is the only constraint not included in the Lagrange function, since it does not have a functional character) for the fixed interval [t0*, t1*] and trajectory x*(t) at the point u(t) = u*(t). (This assertion follows from the intuitively obvious formula

inf_{u(·)} ∫_{t0*}^{t1*} L(t, x*(t), ẋ*(t), u(t), p(t), λ0) dt = ∫_{t0*}^{t1*} min_{u ∈ U} L(t, x*(t), ẋ*(t), u, p(t), λ0) dt,

which will be strictly proved in Chapter 9 under much more general assumptions.)
Further, we note that the vector-valued functions x*(t), u*(t), and p(t) can be extended to the left of the point t0* and to the right of the point t1* in such a way that the Lagrange function becomes differentiable with respect to t0 and t1 at the points t0* and t1*, respectively. To this end, x*(t) and p(t) must remain continuous, and u*(t) must satisfy the relation (7).
In this case, it follows from the computations presented above that, by virtue of (8), the derivatives of the Lagrange function with respect to t0 and t1 at the points t0* and t1*, respectively, are zero. In other words, the assertion (iii) of Theorem 1′ means that the instants of time t0* and t1* satisfy the necessary conditions for a minimum of the Lagrange function with respect to t0 and t1.
We noted above that the only constraint not incorporated in the Lagrange function in Theorem 1′ was the condition (3). However, in specific cases some other constraints can also be left out of the Lagrange function. These are mainly boundary conditions, such as fixed end points and a fixed time. In so doing, the corresponding transversality conditions vanish, and (again confirming the Lagrange principle) the remaining


relations coincide with the necessary conditions for a minimum of the Lagrange function subject to the constraints which were not incorporated in this function. Indeed, if, e.g., h0 = x − x0 (a fixed left end point), then the first condition in (6) means that p(t0) = l0. If h0 = t − α (a fixed left instant of time), then the first condition in (8) takes on the form ℋ = −l0, etc. Thus, the Lagrange multipliers that correspond to fixed end points coincide with the values of p(t), the Lagrange multipliers that correspond to fixed instants of time coincide with the values of the Hamiltonian at the corresponding points, and they do not carry any more information. If p(t) ≡ 0, then all these multipliers are zero.
We have not mentioned the conditions that guarantee the inequality λ0 ≠ 0. They are very cumbersome, and usually it is simpler to verify directly that λ0 ≠ 0.
Thus far, we have been speaking of the problem with an integral functional. In problems with the endpoint functional ψ(t1, x(t1)), the Lagrange function has the form

ℒ = ∫_{t0}^{t1} (p(t) | ẋ(t) − φ(t, x(t), u(t))) dt + λ0 ψ(t1, x(t1)) + (l0 | h0(t0, x(t0))) + (l1 | h1(t1, x(t1))).

All the relations of the maximum principle are obtained from this function exactly as in Theorem 1'. The corresponding proof, in essence, does not differ at all from the proof of the maximum principle for problems with integral functionals.

2.4.2. Elementary proof of the maximum principle for the problem with a free right end point

We consider an optimal control problem with a free right end point and a fixed time,

∫_{t0}^{t1} f(t, x, u) dt → inf;   (9)
ẋ = φ(t, x, u),   (10)
u ∈ U,   (11)
x(t0) = x0.   (12)


The maximum principle for this problem can be proved in a very simple way if we assume that an optimal control is piecewise-continuous. First of all, let us clarify what we are to prove. Let a controlled process (x*(t), u*(t)) be optimal, where the control u*(t) is piecewise-continuous. Then, by Theorem 1, there exist a number λ0 ≥ 0 and a vector-valued function p(t), not both zero, such that
(i) the vector-valued function p(t) satisfies the differential equation (5) and the second boundary condition in (6), which in this case has the form

p(t1) = 0;   (13)

(ii) the relation (7) holds for almost all t.
If λ0 were zero, then p(t) would be the solution of the equation

ṗ = −φx^T(t, x*(t), u*(t)) p   (14)

with the condition (13), i.e., p(t) would have to be identically zero. Therefore, the case λ0 = 0 is excluded, and we can assume without loss of generality that λ0 = 1. Thus, we are to verify that the equality

(p(t) | φ(t, x*(t), u*(t))) − f(t, x*(t), u*(t)) = max_{u ∈ U} [(p(t) | φ(t, x*(t), u)) − f(t, x*(t), u)]   (15)

holds almost everywhere on [t0, t1] if p(t) is the solution of the adjoint equation

ṗ = −φx^T(t, x*(t), u*(t)) p + fx(t, x*(t), u*(t))   (16)

with the terminal condition p(t1) = 0.
We shall prove that the equality (15) holds at every point of continuity of the control u*(t) belonging to the interval (t0, t1). The proof is based on a direct application of "needlelike" variations of the control u*(t), and, in essence, is a modification of the proof of Weierstrass' condition that was presented in §2.2. Thus, let τ be a continuity point of the control u*(t). We fix an element u ∈ U and consider the control

uΔ(t) = u  for  t ∈ (τ − Δ, τ],   uΔ(t) = u*(t)  for  t ∉ (τ − Δ, τ],   (17)


which is a needlelike variation of the control u*(t) (Fig. 7). We denote by xΔ(t) = x(t; τ, Δ) the solution of Eq. (10) with the initial condition (12) which corresponds to the control uΔ(t). By assumption, xΔ(t) = x*(t) if t0 ≤ t ≤ τ − Δ. Moreover, since the Cauchy problem for the equation

ẋ = φ(t, x, u)

is solvable in a neighborhood of the point (τ, x*(τ)), the vector-valued function xΔ(t) is defined also on the interval [τ − Δ, τ] if Δ is sufficiently small. If the control u*(t) is continuous at the point τ, then it is continuous also in a Δ-neighborhood of this point. For this reason, the function ẋ*(t) is also continuous on this neighborhood, and therefore

x*(τ) = x*(τ − Δ) + Δ ẋ*(τ − Δ) + o(Δ) = x*(τ − Δ) + Δ φ(τ − Δ, x*(τ − Δ), u*(τ − Δ)) + o(Δ).

For the same reason,

xΔ(τ) = x*(τ − Δ) + Δ φ(τ − Δ, x*(τ − Δ), u) + o(Δ).

It follows that the limit

y(τ) = lim_{Δ↓0} (xΔ(τ) − x*(τ))/Δ = φ(τ, x*(τ), u) − φ(τ, x*(τ), u*(τ))   (18)

exists.

Fig. 7.


We now follow the trajectories xΔ(t) on the interval [τ, t1]. It follows from the theorem on the continuity and differentiability of a solution of a differential equation with respect to the initial data that, for Δ > 0 sufficiently small, the vector-valued functions xΔ(t) are defined on [τ, t1], converge uniformly to x*(t), and the limit

y(t) = lim_{Δ↓0} (xΔ(t) − x*(t))/Δ

exists for every t ∈ [τ, t1]. Moreover,

(xΔ(t) − x*(t))/Δ = (xΔ(τ) − x*(τ))/Δ + (1/Δ) ∫_τ^t [φ(s, xΔ(s), u*(s)) − φ(s, x*(s), u*(s))] ds.

Passing to the limit as Δ ↓ 0, we obtain

y(t) = y(τ) + ∫_τ^t φx(s, x*(s), u*(s)) y(s) ds.

(It is possible to pass to the limit under the integral sign, because φ is continuously differentiable with respect to x, and u*(t) is a bounded vector-valued function.) Thus, on [τ, t1] the function y(t) is the solution of the equation

ẏ = φx(t, x*(t), u*(t)) y

with the initial condition (18). We have for t ≥ τ

(d/dt)(p(t) | y(t)) = (ṗ(t) | y(t)) + (p(t) | ẏ(t)) =
= (−φx^T(t, x*(t), u*(t)) p(t) + fx(t, x*(t), u*(t)) | y(t)) + (p(t) | φx(t, x*(t), u*(t)) y(t)) =
= (fx(t, x*(t), u*(t)) | y(t)),

i.e., since p(t1) = 0,

(p(τ) | y(τ)) = −∫_τ^{t1} (fx(t, x*(t), u*(t)) | y(t)) dt.   (19)

Further, since (x*(t), u*(t)) is an optimal process,

0 ≤ lim_{Δ↓0} (1/Δ) [ ∫_{t0}^{t1} f(t, xΔ(t), uΔ(t)) dt − ∫_{t0}^{t1} f(t, x*(t), u*(t)) dt ] =
= lim_{Δ↓0} (1/Δ) ∫_{τ−Δ}^{τ} [f(t, xΔ(t), u) − f(t, x*(t), u*(t))] dt + lim_{Δ↓0} (1/Δ) ∫_{τ}^{t1} [f(t, xΔ(t), u*(t)) − f(t, x*(t), u*(t))] dt =
= f(τ, x*(τ), u) − f(τ, x*(τ), u*(τ)) + ∫_τ^{t1} (fx(t, x*(t), u*(t)) | y(t)) dt.

Hence, making use of the equality (19), we obtain

(p(τ) | φ(τ, x*(τ), u*(τ))) − f(τ, x*(τ), u*(τ)) ≥ (p(τ) | φ(τ, x*(τ), u)) − f(τ, x*(τ), u).

But τ is an arbitrary continuity point of the control u*(t), and u is an arbitrary element of the set U. It follows that the relation (15) holds at all continuity points of the control u*(t), which was to be proved.

2.4.3. The maximum principle and the calculus of variations

The Pontrjagin maximum principle contains necessary conditions (of the first order) in the classical calculus of variations. We shall now show how the Euler equation and Weierstrass' condition, and also the canonical equations and the Weierstrass-Erdman conditions, which we have not yet


mentioned, can be obtained from the maximum principle. We restrict ourselves to the simplest variational problem. The reader may, if he wants to, carry out the corresponding computations for problems of more general form. Thus, we consider the simplest problem

∫_{t0}^{t1} L(t, x, ẋ) dt → inf;   x(t0) = x0,   x(t1) = x1,

and we assume that the function x*(t) (continuously differentiable) yields a strong minimum in this problem. The problem can be rewritten in the form of an optimal control problem in the following way:

∫_{t0}^{t1} L(t, x, u) dt → inf;   ẋ = u,   u ∈ R,   x(t0) = x0,   x(t1) = x1.

If we set u*(t) = ẋ*(t), then the controlled process (x*(·), u*(·)) is optimal in the last problem. We have

H = pu − λ0 L(t, x, u).

The adjoint equation has the form

ṗ = λ0 Lx(t, x*(t), ẋ*(t));

and it follows from the maximum principle (since there are no constraints on u) that

Hu = p − λ0 Lẋ(t, x*(t), u*(t)) = 0   (20)

almost everywhere. But, since the control u*(t) is continuous, this relation must hold for all t. If λ0 were zero, then, by virtue of (20), p(t) would also be identically zero, which is impossible. Therefore, we can assume that λ0 = 1, and we arrive at the Euler equation

ṗ(t) = (d/dt) Lẋ(t, x*(t), ẋ*(t)) = Lx(t, x*(t), ẋ*(t)).
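As a sanity check, the Euler equation just derived can be tested on an assumed sample integrand by discretizing the functional and comparing the extremal with nearby admissible curves.

```python
import numpy as np

# Assumed sample problem: L(t, x, xdot) = xdot**2/2 + x with x(0) = x(1) = 0.
# The Euler equation d/dt L_xdot = L_x reads xddot = 1, so the extremal is
# x*(t) = (t**2 - t)/2, with functional value J(x*) = -1/24.
t = np.linspace(0.0, 1.0, 10001)
dt = t[1] - t[0]
x_star = (t**2 - t) / 2

def J(x):
    xdot = np.diff(x) / dt                 # slope on each subinterval
    xmid = (x[:-1] + x[1:]) / 2            # midpoint value on each subinterval
    return float(np.sum(xdot**2 / 2 + xmid) * dt)

base = J(x_star)
for freq in (1, 2, 5):
    pert = x_star + 1e-2 * np.sin(np.pi * freq * t)   # vanishes at the endpoints
    assert J(pert) > base                  # the extremal does strictly better
print(base)                                # approximately -1/24
```

The perturbations keep the boundary conditions, and the quadratic convexity of L in ẋ makes the extremal a genuine minimum here.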

Further, it follows from the maximum principle that

max_{u} (p(t) u − L(t, x*(t), u)) = p(t) u*(t) − L(t, x*(t), u*(t))

for almost all t. Obviously, this equality holds at all points of continuity of the function u*(t), i.e., for all t. Taking into account the formula (20), we obtain

L(t, x*(t), u) − L(t, x*(t), ẋ*(t)) − (u − ẋ*(t)) Lẋ(t, x*(t), ẋ*(t)) ≥ 0

for all t and u. We have arrived at Weierstrass' condition.
In particular, these considerations allow us to draw the conclusion that the requirement of the continuous differentiability of an extremal function, which is usually made in the calculus of variations, is redundant. The same relations hold (only not for all, but for almost all t) when an extremal function is absolutely continuous and its derivative is bounded. In particular, it is easy to obtain the Weierstrass-Erdman necessary conditions for the so-called piecewise-smooth extremals. Indeed, if a strong minimal x*(t) has a piecewise-continuous derivative, then the Euler equation and Weierstrass' condition must hold at every point of its continuity. Let the function x*(t) be non-differentiable at t = τ (i.e., let its derivative have there a discontinuity of the first kind). The following formula holds at all points of continuity of ẋ*(t):

ẋ*(t) Lẋ(t, x*(t), ẋ*(t)) − L(t, x*(t), ẋ*(t)) = ℋ(t, x*(t), p(t)).

According to the maximum principle, the Hamiltonian is continuous. It follows that

ẋ*(τ − 0) Lẋ(τ, x*(τ), ẋ*(τ − 0)) − L(τ, x*(τ), ẋ*(τ − 0)) = ẋ*(τ + 0) Lẋ(τ, x*(τ), ẋ*(τ + 0)) − L(τ, x*(τ), ẋ*(τ + 0)).

In the same way, since the function p(t) = Lẋ(t, x*(t), ẋ*(t)) is continuous as a solution of the adjoint equation, we have

Lẋ(τ, x*(τ), ẋ*(τ − 0)) = Lẋ(τ, x*(τ), ẋ*(τ + 0)).

These two relations are called the Weierstrass-Erdman conditions. They characterize the possible discontinuities of the derivatives of polygonal extremals. The Weierstrass-Erdman conditions for the simplest vector problem can be expressed in an entirely similar way.
In conclusion, we shall say a few words about the canonical equations. We have already noted that in the optimal control problem the phase trajectory and the solution of the adjoint equation satisfy the system of equations

ẋ = ∂H/∂p,   ṗ = −∂H/∂x.   (21)
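Returning for a moment to the Weierstrass-Erdman conditions, they are easy to verify on the classical example (assumed here) of an integrand whose broken extremals have corner slopes ±1.

```python
# Check of the Weierstrass-Erdman conditions on an assumed classical example:
# L(t, x, xdot) = (xdot**2 - 1)**2, whose polygonal extremals have slopes +-1.
def L(xdot):
    return (xdot**2 - 1) ** 2

def L_xdot(xdot):                        # derivative of L in its last argument
    return 4 * xdot * (xdot**2 - 1)

left, right = 1.0, -1.0                  # one-sided slopes at a corner

# Continuity of p(t) = L_xdot across the corner:
cond1 = L_xdot(left) == L_xdot(right)
# Continuity of xdot*L_xdot - L (the Hamiltonian along the extremal):
cond2 = (left * L_xdot(left) - L(left)) == (right * L_xdot(right) - L(right))
print(cond1, cond2)                      # both True
```

Both one-sided values vanish here, so a corner joining slopes +1 and −1 satisfies both conditions, which is why such sawtooth extremals are admissible for this integrand.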

We assume now that the Lagrangian L of the simplest problem of the calculus of variations is twice continuously differentiable and satisfies the strengthened Legendre condition, Lẋẋ > 0, i.e., in particular, that it is a convex function in its last argument. Then, by the implicit function theorem, the equation

p = Lẋ(t, x, u)

is uniquely solvable in u on a neighborhood of every point (t, x*(t), ẋ*(t)), i.e., there exists a continuously differentiable function u(t, x, p) such that

p = Lẋ(t, x, u(t, x, p)),

where u(t, x*(t), p(t)) = ẋ*(t) and p(t) = Lẋ(t, x*(t), ẋ*(t)). The derivative of the function u ↦ pu − L(t, x, u) = H(t, x, u, p) is zero at the point u = u(t, x, p). Since this function is concave, it attains its maximum at the point u(t, x, p), i.e.,

H(t, x, u(t, x, p), p) = p u(t, x, p) − L(t, x, u(t, x, p)) = ℋ(t, x, p).

The last relation defines the so-called Legendre transform of the function L with respect to the last argument. The generalization of this transform, the Young-Fenchel transform, will be studied in detail in the next chapter. We have

ℋx(t, x, p) = p ux(t, x, p) − Lx(t, x, u(t, x, p)) − Lẋ(t, x, u(t, x, p)) ux(t, x, p) = −Lx(t, x, u(t, x, p)).

But p(t) = Lẋ(t, x*(t), ẋ*(t)) on the extremal, i.e., u(t, x*(t), p(t)) = ẋ*(t). Therefore,

∂ℋ(t, x*(t), p(t))/∂x = −Lx(t, x*(t), ẋ*(t)) = ∂H(t, x*(t), ẋ*(t), p(t))/∂x.

Similarly, ∂ℋ/∂p = ∂H/∂p, and it follows from (21) that x*(·) and p(·) are solutions of the following system of equations:

ẋ = ∂ℋ/∂p,   ṗ = −∂ℋ/∂x.

Obviously, the system of first-order equations obtained is equivalent to the Euler equation. It is called the canonical form of the Euler equation, or simply the canonical system.


2.4.4. Certain illustrations

As in the classical calculus of variations, one can encounter the most diverse situations while solving optimal control problems with the aid of the maximum principle. For linear problems with a bounded set of controls, the situation is typical wherein there exists a unique admissible controlled process which satisfies the maximum principle, and this process is optimal.

Example 1. In the problem

∫_0^1 x dt → inf;   ẋ = u,   |u| ≤ 1,   x(0) = 0,

only the process (x(t) = −t, u(t) = −1) satisfies the maximum principle, and it is, obviously, optimal. Indeed, in this problem

H = pu − x.

The adjoint equation has the form

ṗ = 1.

Hence (since we consider a problem with a free right end point and, therefore, p(1) = 0), p(t) = t − 1, and the maximum of the function H is attained at u = −1.
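A minimal numeric corroboration of Example 1 (the discretization and the random sample of admissible controls are, of course, only an illustration):

```python
import numpy as np

# Example 1: minimize the integral of x over [0, 1] subject to xdot = u,
# |u| <= 1, x(0) = 0.  The maximum principle gives p(t) = t - 1, u*(t) = -1,
# so x*(t) = -t and the optimal value is -1/2.
rng = np.random.default_rng(0)
N = 1000
h = 1.0 / N

def cost(u):
    x = np.concatenate(([0.0], np.cumsum(u) * h))    # Euler trajectory
    return float(np.sum(x[:-1]) * h)                 # integral of x dt

best = cost(-np.ones(N))                             # the candidate u = -1
for _ in range(200):
    u = rng.uniform(-1.0, 1.0, N)                    # random admissible control
    assert cost(u) >= best                           # u = -1 is never beaten
print(best)                                          # about -1/2
```

In the discretized problem the cost is a linear functional of u with non-negative weights on −u, so u ≡ −1 is the exact discrete minimizer, matching the continuous answer up to O(h).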

In general, one can encounter the same situations in optimal control problems as in the calculus of variations, i.e., the lack of a solution, the existence of a set of admissible processes which satisfy the maximum principle and are or are not optimal, etc. The fact that, in optimal control problems, one often considers bounded sets of admissible controls can create (and sometimes does create) the illusion that solutions to such problems necessarily exist, and can always be found with the aid of the maximum principle. Of course, this is false. For an illustration, we shall consider an example that will also allow us to point out a very important phenomenon, namely, sliding regimes.

Example 2 (a sliding regime). We consider the problem

∫_0^1 x² dt → inf;   ẋ = u,   |u| = 1,   x(0) = α,   x(1) = 0.

PROOF OF THE MAXIMUM PRINCIPLE

CH. 2, §2.5]

147

For |α| > 1, there is, obviously, no admissible controlled process. For |α| = 1, such a process is unique and corresponds to the control u(t) ≡ −sign α. Now, let α = 0. It is easy to see that a positive value of the functional results from any admissible controlled process. At the same time, the functional tends to zero on the sequence x1(t), x2(t), x3(t), ... presented in Fig. 8. We note that here the sequence of phase trajectories converges uniformly, and that the sequence of controls, on the contrary, does not converge to anything. Such sequences are called sliding regimes. Thus, the problem does not have a solution. We propose that the reader verify that no controlled process satisfies the maximum principle (for α = 0).
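The sliding-regime sequence of Example 2 is easy to realize numerically; the sawtooth construction below (an assumed concrete choice of x_n with slopes ±1) shows the functional tending to zero.

```python
import numpy as np

# Sliding regime for Example 2 with alpha = 0: x_n is a sawtooth with n teeth,
# slopes +-1, x_n(0) = x_n(1) = 0.  The functional behaves like 1/(12 n**2),
# although every admissible process gives a strictly positive value.
def sawtooth_cost(n, samples_per_tooth=1000):
    s = np.arange(samples_per_tooth) / samples_per_tooth / n   # grid on one tooth
    tooth = np.minimum(s, 1.0 / n - s)       # rise then fall, peak 1/(2n)
    x = np.tile(tooth, n)                    # n teeth covering [0, 1]
    return float(np.mean(x**2))              # Riemann sum of the integral of x**2

for n in (1, 2, 4, 8, 16):
    print(n, sawtooth_cost(n))               # decreases roughly as 1/(12 n**2)
```

The trajectories converge uniformly to 0 while the controls oscillate between ±1 ever faster, which is precisely the sliding-regime phenomenon described in the text.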

2.5. Proof of the maximum principle

We remind the reader that we consider the following problem:

𝒥(x(·), u(·)) = ∫_{t0}^{t1} f(t, x, u) dt → inf;   (1)
ẋ = φ(t, x, u),   (2)
u ∈ U,   (3)
h0(t0, x(t0)) = 0,   h1(t1, x(t1)) = 0.   (4)

Here, we can not make use of the Lagrange multiplier rule in view of the constraints imposed upon the controls, and also because it is impossible to differentiate with respect to u. Our proof is based on the extremal principle for mixed problems. Its relation to the optimal control problem is not as obvious as the relation, say, of the Lagrange multiplier rule to the classical

Fig. 8.


Lagrange problem; and the derivation of the Pontrjagin maximum principle from the extremal principle for mixed problems requires more effort. In our constructions, the first step will be to reduce the problem (1)-(4) to an equivalent, in a sense, mixed problem. We shall do it with the aid of the method proposed by A. Ja. Dubovickiĭ and A. A. Miljutin, which utilizes a change of time.

2.5.1. The reduction of the problem

Let a controlled process (x*(t), u*(t)) defined on the interval [t0*, t1*] be optimal in the problem (1)-(4). We choose a non-negative, bounded and measurable function v*(τ) on the interval [0, 1] which is subject to the condition

∫_0^1 v*(τ) dτ = t1* − t0*,   (5)

and we set

t*(τ) = t0* + ∫_0^τ v*(s) ds.   (6)

Further, we fix a measurable r-dimensional function w*(τ) which assumes values in U and satisfies the equality

w*(τ) = u*(t*(τ))   (7)

almost everywhere on the set Δ(v*). Now we can formulate the reduction problem that we mentioned at the beginning of the subsection:

∫_0^1 v f(t, y, w*(τ)) dτ → inf;   (8)

y′ = v φ(t, y, w*(τ)),   (9a)
t′ = v,   (9b)
h0(t(0), y(0)) = 0,   (10)
h1(t(1), y(1)) = 0.   (11)

This is also an optimal control problem, and the controlling parameter is the scalar v. In this problem, we shall take for admissible controls all non-negative, bounded and measurable functions v(τ), each of which vanishes on one of the sets

A_k = {τ ∈ [0, 1] | |w*(τ)| ≥ k}   (k = 0, 1, ...)

(its own for every function). The set of such functions will be denoted by 𝒱. We set

y*(τ) = x*(t*(τ)),

where t*(τ) is determined by the formula (6).

Lemma 1. The controlled process (t*(τ), y*(τ), v*(τ)) is admissible in the problem (8)-(11), and yields a local minimum in this problem.

In order to prove the lemma, we must consider the transformations of the form (6) in more detail. Let a function v(τ) be non-negative, bounded, and measurable on the interval [0, 1] (Fig. 9a). We set

t(τ) = t(0) + ∫_0^τ v(s) ds,   Δ(v) = {τ ∈ [0, 1] | v(τ) > 0}.

Fig. 9.

The function t(τ) is continuous and non-decreasing. The inverse function also does not decrease, but it can, generally speaking, have discontinuities of the first kind at no more than a countable set of points ξ1, ξ2, ... (Fig. 9b). For definiteness, we set

τ(t) = min{τ ∈ [0, 1] | t(τ) = t}  if  t ≠ t(1),   τ(t(1)) = 1.

Proposition 1. The following equalities hold:

t(τ(ξ)) = ξ   for   ξ ∈ [t(0), t(1)],
τ(t(τ)) = τ   for almost all   τ ∈ Δ(v).

Here, τ(t) ∈ Δ(v) almost everywhere on the interval [t(0), t(1)].

Proof. The first equality follows from the definition of the function τ(t) and from the continuity of the function t(τ). Further, τ(t(η)) = η if η does not belong to the union of the half-open intervals (τ(ξk − 0), τ(ξk + 0)]. On each of these half-intervals, the function t(τ) is constant and equal to ξk, i.e., the intersection Δ(v) ∩ (τ(ξk − 0), τ(ξk + 0)] has measure zero. Hence follows the second equality. Finally, since the function t(τ) is monotone, the measure of the image of every measurable set A ⊂ [0, 1] is equal to

mes t(A) = ∫_A v(τ) dτ.

Thus, in particular, the image of the set Δ(v) has full measure in [t(0), t(1)]. Hence follows the last assertion.

Proposition 2. Let functions z(t) on [t(0), t(1)] and w(τ) on [0, 1] be measurable, and let z(t(τ)) = w(τ) for almost all τ ∈ Δ(v). Then

∫_{t(0)}^{t(1)} z(t) dt = ∫_0^1 v(τ) w(τ) dτ,

provided that these integrals are defined.
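Proposition 2 can be illustrated by quadrature for a smooth sample choice of v, z, and w (all assumptions of this sketch).

```python
import numpy as np

# Change-of-time identity checked numerically for the assumed choice
# v(tau) = 2*tau, so t(tau) = tau**2 maps [0, 1] onto [0, 1];
# z(t) = cos(t) and w(tau) = z(t(tau)) = cos(tau**2).
n = 200000
d = 1.0 / n
mid = (np.arange(n) + 0.5) * d                     # midpoint grid on [0, 1]

lhs = float(np.sum(np.cos(mid)) * d)               # integral of z(t) dt
rhs = float(np.sum(2 * mid * np.cos(mid**2)) * d)  # integral of v(tau) w(tau) dtau
print(lhs, rhs)                                    # both approximately sin(1)
```

Both midpoint sums agree with sin(1) to roughly machine precision, as the change-of-variables formula predicts.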


Proposition 3. Let x(t) be a solution of Eq. (2) defined on [t(0), t(1)] and corresponding to an admissible control u(t). If y(τ) = x(t(τ)), and if a vector-valued, measurable function w(τ) on [0, 1] is such that u(t(τ)) = w(τ) almost everywhere on Δ(v), then y(τ) is a solution of the equation

y′ = v(τ) φ(t(τ), y, w(τ)).   (12)

Conversely, if w(τ) is bounded, measurable on Δ(v), and assumes values in U, and if y(τ) is a solution of Eq. (12), then u(t) = w(τ(t)) is an admissible control in the problem (1)-(4), and x(t) = y(τ(t)) is a solution of Eq. (2) corresponding to the control u(t).

Proof. The first assertion follows at once from Proposition 2, because

φ(t(τ), x(t(τ)), u(t(τ))) = φ(t(τ), y(τ), w(τ))   (13)

holds almost everywhere on Δ(v) and, therefore,

y(τ) = x(t(τ)) = x(t(0)) + ∫_{t(0)}^{t(τ)} φ(t, x(t), u(t)) dt = y(0) + ∫_0^τ v(η) φ(t(η), y(η), w(η)) dη.

In order to prove the second assertion, it is sufficient to note that the equality (13) holds also here, by virtue of Proposition 1, and then to apply Proposition 2 again.
We pass to the proof of Lemma 1. It follows at once from the first part of Proposition 3 that the controlled process (t*(τ), y*(τ), v*(τ)) is admissible in the problem (8)-(11). Now, let (t(τ), y(τ), v(τ)) be another admissible controlled process with |y(τ) − y*(τ)| small for all τ ∈ [0, 1].


2.5.2. Necessary conditions for an extremum in the problem (8)-(11)


It is not difficult to see that Lemma 2 is exactly the Pontrjagin maximum principle for the problem (8)-(11).
Proof. First, we shall show that the problem (8)-(11) is a mixed problem which satisfies all the conditions of Theorem 3 of §1.1 (more precisely, of the Corollary to this theorem), and then we shall apply this corollary. With Eq. (9a), we connect the mapping Φ1 : W_{1,1} × W_{1,1}^n × 𝒱 → L_1^n that assigns the vector-valued function

z(τ) = y′(τ) − v(τ) φ(t(τ), y(τ), w*(τ))   (14)

to every t(·) ∈ W_{1,1}([0, 1]), y(·) ∈ W_{1,1}^n([0, 1]) and v(·) ∈ 𝒱 (we remind the reader that 𝒱 is the set of admissible controls in the problem (8)-(11)). Since v(τ) vanishes on one of the sets A_k, the vector-valued function τ ↦ v(τ) φ(t(τ), y(τ), w*(τ)) is bounded and, therefore, z(·) ∈ L_1^n. In a similar way, Eq. (9b) generates the mapping Φ2 : W_{1,1} × 𝒱 → L_1 which acts according to the formula

ζ(τ) = Φ2(t(·), v(·))(τ) = t′(τ) − v(τ).   (15)

Finally, we consider the mapping Φ : W_{1,1} × W_{1,1}^n × 𝒱 → L_1 × L_1^n which is the "Cartesian product" of the mappings Φ2 and Φ1, i.e., Φ(t(·), y(·), v(·)) = (ζ(·), z(·)), where ζ(τ) is determined by the formula (15), and z(τ) is determined by the formula (14). With the aid of the mapping Φ, Eqs. (9a) and (9b) can be written in the form

Φ(t(·), y(·), v(·)) = 0.

Let us verify that, for any v(·) ∈ 𝒱, the mapping Φ is continuously Fréchet differentiable on W_{1,1} × W_{1,1}^n and regular at the point (t*(·), y*(·)). Indeed, the continuous differentiability of the mappings Φ1 and Φ2 follows from the results proved in §0.2 (see Example 11). In this connection, the derivative of the mapping Φ1 at the point (t*(·), y*(·)) is the linear operator

(t(τ), y(τ)) ↦ y′(τ) − v(τ) φx(t*(τ), y*(τ), w*(τ)) y(τ) − v(τ) φt(t*(τ), y*(τ), w*(τ)) t(τ),   (16)

and the derivative of the mapping Φ2 is the linear operator

t(τ) ↦ t′(τ).   (17)


Now, if ζ(τ) and z(τ) are arbitrary elements of the spaces L_1 and L_1^n, respectively, then, as follows from Theorem 1 of §0.4, there always exist a t(·) ∈ W_{1,1} and a y(·) ∈ W_{1,1}^n which are related to ζ(·) and z(·) by the formulas (16) and (17). Therefore, the mapping (t(·), y(·)) ↦ Φ(t(·), y(·), v(·)) is regular at the point (t*(·), y*(·)) for every v(·) ∈ 𝒱. Further, we note that the functional (8) is also continuously Fréchet differentiable on W_{1,1} × W_{1,1}^n for every fixed v(·) ∈ 𝒱, and that its derivative at the point (t*(·), y*(·)) is the linear functional

(t(τ), y(τ)) ↦ ∫_0^1 v(τ) [ft(t*(τ), y*(τ), w*(τ)) t(τ) + (fx(t*(τ), y*(τ), w*(τ)) | y(τ))] dτ.

Thus, the problem (8)-(11) satisfies all the conditions of the Corollary to the extremal principle for mixed problems (Theorem 3 of §1.1). We note that mappings into a finite-dimensional space enter the condition (11). In order to finally prove that the corollary can be used, it remains to note that (t*(τ), y*(τ), v*(τ)) is a local minimum point in the problem (8)-(11), even if t(τ) and y(τ) are considered in the topologies of the spaces W_{1,1} and W_{1,1}^n, respectively (this follows from Lemma 1 and from the fact that the topology of the spaces W_{1,1} and W_{1,1}^n is stronger than the topology of the spaces C and C^n). Thus, the corollary is really applicable.


According to the corollary, there exist Lagrange multipliers such that the Lagrange function of the problem (8)-(11),

ℒ = ∫_0^1 [λ0 v(τ) f(t(τ), y(τ), w*(τ)) + (q(τ) | y′(τ) − v(τ) φ(t(τ), y(τ), w*(τ))) + s(τ)(t′(τ) − v(τ))] dτ + (l0 | h0(t(0), y(0))) + (l1 | h1(t(1), y(1))),

where l0 ∈ R^{s0}, l1 ∈ R^{s1}, q(·) ∈ L_∞^n, and s(·) ∈ L_∞, satisfies the required stationarity and minimum conditions. (The mapping Φ acts into L_1^{n+1}, and the space dual to L_1^{n+1} is L_∞^{n+1}. Therefore, q(·) ∈ L_∞^n and s(·) ∈ L_∞.) With a certain choice of the multipliers λ0, l0, l1, q(τ), and s(τ), the Lagrange function must satisfy the conditions enumerated in the formulation of Theorem 3 of §1.1. We shall write down these conditions, taking into account the expressions that we established before for the derivatives of the mapping Φ and the functional (8) at the point (t*(·), y*(·)). For brevity, we denote h0 = h0(t0*, y*(0)), h1 = h1(t1*, y*(1)), H(τ) = H(t*(τ), y*(τ), w*(τ), q(τ), λ0), etc. We have

∫_0^1 [(q(τ) | y′(τ)) − v*(τ)(Hx(τ) | y(τ))] dτ + (h0x^T l0 | y(0)) + (h1x^T l1 | y(1)) = 0   (18)

for all y(·) ∈ W_{1,1}^n;

∫_0^1 [s(τ) t′(τ) − v*(τ) Ht(τ) t(τ)] dτ + (l0 | h0t) t(0) + (l1 | h1t) t(1) = 0   (19)

for all t(·) ∈ W_{1,1}; and

∫_0^1 (v(τ) − v*(τ))(H(τ) + s(τ)) dτ ≤ 0   (20)

for all v(·) ∈ 𝒱. Integrating by parts the second term in the integrand in (18) and setting y(1) = y(0) + ∫_0^1 y′(τ) dτ, we obtain


for all y(·) ∈ W_{1,1}^n. This means that

h0x^T l0 + h1x^T l1 − ∫_0^1 v*(τ) Hx(τ) dτ = 0,

q(τ) + h1x^T l1 − ∫_τ^1 v*(η) Hx(η) dη = 0   a.e.

Changing, if need be, q(τ) on a set of measure zero, we hence obtain that q(τ) is absolutely continuous and satisfies all the conditions formulated in the assertion (i) of the lemma being proved. In a similar way, (19) implies (ii). It remains to verify that the assertion (iii) follows from (20). Indeed, if, e.g., H + s > 0 at all points of a subset of the set Δ(v*) that has positive measure, then, setting at these points v(τ) = 2v*(τ) and v(τ) = v*(τ) at the remaining points, we obtain

∫_0^1 (v(τ) − v*(τ))(H(τ) + s(τ)) dτ > 0,

which contradicts (20). The proof of the second relation in the condition (iii) is just as simple. The lemma has been proved.

2.5.3. The completion of the proof of the maximum principle

We set

τ*(t) = min{τ ∈ [0, 1] | t*(τ) = t},   p(t) = q(τ*(t)),   r(t) = s(τ*(t)).

According to Proposition 1,

p(t*(τ)) = q(τ),   r(t*(τ)) = s(τ)

for almost all τ ∈ Δ(v*). Applying Proposition 3 to q(τ) and p(t), we obtain by virtue of the assertion (i) of Lemma 2 that p(t) satisfies the differential equation

ṗ = −Hx(t, x*(t), u*(t), p(t), λ0)


and the boundary conditions

p(t0*) = h0x^T(t0*, x*(t0*)) l0,   p(t1*) = −h1x^T(t1*, x*(t1*)) l1.

This proves the first assertion in the statement of the maximum principle. In exactly the same way, r(t) satisfies the differential equation

ṙ = −Ht(t, x*(t), u*(t), p(t), λ0)   (21)

and the boundary conditions

r(t0*) = (h0t(t0*, x*(t0*)) | l0),   r(t1*) = −(h1t(t1*, x*(t1*)) | l1).   (22)

Thus far, a particular form of the functions v*(τ) and w*(τ) was of no interest to us, as long as the equalities (6) and (7) were satisfied. These equalities, of course, leave a large degree of freedom in the choice of these functions. We assume now that v*(τ) vanishes on a system of half-intervals (closed from the right) I_k = (τ_k, τ_k + β_k], k = 1, 2, ..., which is constructed in such a way that the image of the union of these half-intervals under the mapping τ ↦ t*(τ) is dense in [t0*, t1*]. Here is one of the methods of constructing such a function. Let {ξ1, ξ2, ...} be a countable dense subset of the interval [t0*, t1*]. We choose numbers β1 > 0, β2 > 0, ... such that Σ_k β_k = 1/2. We set

τ_k = (ξ_k − t0*) / (2(t1* − t0*)) + Σ_{i : ξi < ξk} β_i,

where the summation is carried only over those indices i for which ξi < ξk. Then the half-intervals I_k = (τ_k, τ_k + β_k] are pairwise disjoint. Now, let

v*(τ) = 0  for  τ ∈ ∪_k I_k,   v*(τ) = 2(t1* − t0*)  otherwise.

We shall verify that the image of the union ∪_k I_k under the mapping τ ↦ t*(τ) is dense in [t0*, t1*] (here, the equality t*(1) = t1* is obvious). To this end, it is sufficient to verify that t*(τ) = ξk for any τ ∈ I_k. We note that τi < τk if and only if ξi < ξk, and that t*(τ) = t*(τk) for all τ ∈ I_k. We have for τ ∈ I_k

t*(τ) = t*(τ_k) = t0* + 2(t1* − t0*) (τ_k − Σ_{i : ξi < ξk} β_i) = t0* + (t1* − t0*) · (ξ_k − t0*) / (t1* − t0*) = ξ_k,

which was required. We assume now that every half-interval Ik is the union of a countable set of non-empty closed from the right half-intervals I k l , I k k z , . . . , that {ul, u z , . . . } is a countable dense subset of the set U, and that a vectorvalued function w * ( T )is chosen so that w * ( T ) = u,, if

T

E Ikt.
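The half-interval construction can be checked numerically on a finite truncation. In the sketch below, the normalization $\tau \in [0,1]$ with $\sum_k \beta_k \to 1/2$, the concrete choice $\beta_k = 2^{-k-1}$, and a van der Corput sequence for the dense points $\xi_k$ are assumptions of the sketch, not data taken from the text:

```python
# Finite truncation of the dense half-interval construction.
# Assumed normalization: tau ranges over [0, 1], sum(beta_k) -> 1/2,
# and {xi_k} is a van der Corput sequence (a concrete dense subset).

def van_der_corput(k):          # k-th binary bit-reversal point in (0, 1)
    v, denom = 0.0, 1.0
    while k:
        denom *= 2.0
        v += (k % 2) / denom
        k //= 2
    return v

t0, t1 = 0.0, 1.0
delta = t1 - t0
N = 200
xi = [t0 + delta * van_der_corput(k) for k in range(1, N + 1)]
beta = [2.0 ** (-k - 1) for k in range(1, N + 1)]   # partial sum < 1/2

# tau_k = (xi_k - t0) / (2 delta) + sum of beta_i over {i : xi_i < xi_k}
tau = [(xi[k] - t0) / (2.0 * delta)
       + sum(beta[i] for i in range(N) if xi[i] < xi[k])
       for k in range(N)]
I = [(tau[k], tau[k] + beta[k]) for k in range(N)]  # half-intervals (a, b]

# The I_k are pairwise disjoint ...
for a in range(N):
    for b in range(a + 1, N):
        assert I[a][1] <= I[b][0] or I[b][1] <= I[a][0]

# ... and t_star collapses each I_k to the single point xi_k, so the image
# of their union contains the dense set {xi_1, ..., xi_N}.
def t_star(s):
    covered = sum(max(0.0, min(s, b) - a) for (a, b) in I)
    return t0 + 2.0 * delta * (s - covered)

for k in range(N):
    mid = 0.5 * (I[k][0] + I[k][1])
    assert abs(t_star(mid) - xi[k]) < 1e-9
```

The two assertion loops verify exactly the two claims of the construction: pairwise disjointness of the $I_k$, and constancy of $t_*$ on each $I_k$ with value $\xi_k$.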

By the inequality in assertion (iii) of Lemma 2,
$$H(t_*(\tau), y_*(\tau), w_*(\tau), q(\tau), \lambda_0) + s(\tau) \le 0$$
almost everywhere on the union $\bigcup I_{ki}$. Every half-interval $I_{ki}$ has positive measure (it is non-empty by assumption). Therefore, for every $k$ and $i$, there exists a $\tau \in I_{ki}$ such that
$$H(t_*(\tau), y_*(\tau), w_*(\tau), q(\tau), \lambda_0) + s(\tau) = H(\xi_k, x_*(\xi_k), u_i, p(\xi_k), \lambda_0) + r(\xi_k) \le 0.$$

Since the points $\xi_1, \xi_2, \ldots$ form a dense subset of the interval $[t_{0*}, t_{1*}]$, since the vectors $u_1, u_2, \ldots$ constitute a dense subset of the set $U$, and since the function $(t, u) \to H(t, x_*(t), u, p(t), \lambda_0)$ is continuous, this implies that
$$H(t, x_*(t), u, p(t), \lambda_0) + r(t) \le 0 \tag{23}$$
for all $t \in [t_{0*}, t_{1*}]$ and all $u \in U$. On the other hand, the equality in assertion (iii) of Lemma 2 implies, by Proposition 1, the equality
$$H(t, x_*(t), u_*(t), p(t), \lambda_0) + r(t) = 0 \tag{24}$$
for almost all $t$. Therefore
$$H(t, x_*(t), u_*(t), p(t), \lambda_0) = \max_{u \in U} H(t, x_*(t), u, p(t), \lambda_0)$$


for almost all $t$. This proves the second assertion in the statement of the maximum principle. Since the control $u_*(t)$ is bounded, and since the function $(t, u) \to H(t, x_*(t), u, p(t), \lambda_0)$ is continuous, (24) implies
$$\mathscr{H}(t, x_*(t), p(t), \lambda_0) + r(t) = 0. \tag{26}$$

But $r(t)$ is continuous. Therefore, so is the function $t \to \mathscr{H}(t, x_*(t), p(t), \lambda_0)$. Comparing (26) with the boundary conditions (22), we obtain the transversality conditions on $\mathscr{H}$ at the end points $t_{0*}$ and $t_{1*}$.

Finally, (21) and (24) imply (8a) of §2.4. The entire Pontrjagin maximum principle has been proved.
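As a numerical aside (not part of the proof), the pointwise maximum condition established above says that, for almost every $t$, $u_*(t)$ maximizes $u \mapsto H(t, x_*(t), u, p(t), \lambda_0)$ over $U$. A minimal sketch with a hypothetical concave Hamiltonian $H(u) = pu - \tfrac{1}{2}\lambda_0 u^2$ and $U = [-1, 1]$; all data here is invented for illustration:

```python
# Hypothetical data: H(u) = p*u - 0.5*lam0*u**2 on U = [-1, 1].
# For lam0 > 0 the pointwise maximizer is u = clamp(p / lam0, -1, 1).

def H(u, p, lam0):
    return p * u - 0.5 * lam0 * u * u

def u_max(p, lam0):
    return max(-1.0, min(1.0, p / lam0))

lam0 = 1.0
for p in (-3.0, -0.4, 0.0, 0.7, 2.5):
    u_star = u_max(p, lam0)
    # check the maximum condition H(u_star) >= H(u) on a grid over U
    grid = [-1.0 + 0.01 * j for j in range(201)]
    assert all(H(u_star, p, lam0) >= H(u, p, lam0) - 1e-12 for u in grid)
```

For this $H$ the maximizer saturates the constraint ($u = \pm 1$) whenever $|p| \ge \lambda_0$, the familiar clamped (bang-type) structure of pointwise Hamiltonian maximization.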

Notes to Chapter 2

2.2 The calculus of variations is presented in many monographs and textbooks: Hadamard [1], Ahiezer [1], Bolza [1], Gel'fand and Fomin [1], Carathéodory [2], Courant and Hilbert [1], Lavrent'ev and Ljusternik [1], [2], and others.

2.3 A detailed survey of works on constrained problems in the calculus of variations appears in the book of Bliss [1]. Concerning multidimensional problems, see the monographs of Morrey [1] and Klötzler [1].

2.4 and 2.5

The Pontrjagin maximum principle was formulated in 1956, and this provided the basis for optimal control theory. Of the works of the early stage, we mention the papers of Gamkrelidze [1], [2] and Pontrjagin's survey [1]. These investigations were summarized in the monograph of Pontrjagin, Boltjanskii, Gamkrelidze, and Mishchenko [1]. The first proof (Boltjanskii [1]) of the maximum principle was improved by Rozonoer [1] and Egorov [2]. Dubovickii and Miljutin [2], and Halkin [4], proposed proofs based on new ideas. In this book, we return to the proof of the maximum principle three times. We follow Pontrjagin [1] in §2.4 and Dubovickii and Miljutin [2] in §2.5. The third proof (in Chapter 5) is related to Halkin's ideas. Let us also indicate texts and monographs on optimal control: Bellman, Glicksberg, and Gross [1], Boltjanskii [4], Bryson and Y. C. Ho [1], Krasovskii [3], Krotov and Gurman [1], Lee and Markus [1], Hestenes [3], Young [2], and others.