Chapter 2 Fundamental Concepts

2.1 Introduction

As has been described in Chapter 1 there was no standard mathematical procedure for analysing singular control problems before the early 1960s apart from Miele's 'Green's Theorem' approach. However, from about 1963 onwards researchers began to study the second variation of the general performance index of optimal control in an attempt to find new necessary conditions for a singular control to be optimal. The theory of the second variation of a functional had been well established in the classical calculus of variations and had figured prominently in the work carried out by Professor G. A. Bliss and his students at the University of Chicago during the first half of the present century. But although the general theory for the problem of Bolza reached a high degree of sophistication under the attention of the Chicago School, no applications of this theory had been attempted. Indeed, in the preface to his book (Bliss, 1946) Professor Bliss makes an appeal for suitable examples to be listed which would illustrate the theory. This appeal has to a large extent been answered over the last thirty years by the enormous research effort engendered by control problems arising from the optimization of dynamical systems.


SINGULAR OPTIMAL CONTROL PROBLEMS

Until the early 1960s the theory of the second variation of a functional had rarely been applied to any practical problems. Even in the field of Mathematical Physics it had often been felt that the additional complexity of the second variation outweighed any possible benefit which might accrue from its use. However, with the advent of such singular problems as Lawden's intermediate-thrust arcs (Lawden, 1961, 1962, 1963) in Aerospace and Siebenthal and Aris's stirred tank reactor (Siebenthal and Aris, 1964) in Chemical Engineering, it was clear that a satisfactory mathematical analysis of such problems lay in the theory of the second variation. This approach has been fully justified, as will be seen in this book. Not only has the study of the second variation of a general performance index yielded necessary conditions for optimality of singular arcs, but it has played a no less important part in the derivation of sufficient conditions for such arcs and in the production of algorithms for the numerical solution of non-singular problems.

This present chapter sets before the reader a few fundamental concepts necessary for an understanding of what is to follow.

First, the general optimal control problem mentioned in Chapter 1 is reiterated and placed on a firm mathematical foundation. It should be emphasized here that we give a general statement of what is usually referred to as the optimal control problem. A singular problem is a special case of this general statement. Next, the first and second variations of the cost functional from the general optimal control problem are derived. The method of generation of these two variations follows closely that used by Bliss (1946) although, of course, his analysis does not include specific mention of control variables. Much of the notation used in this book coincides with that used by Bliss, but where there are differences we have changed deliberately to be in keeping with that used in the modern control literature. Having derived general expressions for both the first and second variations, and shown that the first variation must be zero and the second variation non-negative for a minimizing arc, the final section of the present chapter formulates the general statement of a singular optimal control problem. The corresponding forms for the first and second variations in the singular case are stated. A number of examples are presented in this chapter to illustrate the many aspects of both variations.

2.2 The General Optimal Control Problem

Consider an n-dimensional state vector space X with time-varying elements x = (x₁ x₂ … x_n)ᵀ,

x_k = x_k(t),   k = 1, 2, …, n,

and an m-dimensional control vector space U with elements u = (u₁ u₂ … u_m)ᵀ,

u_i = u_i(t),   i = 1, 2, …, m,   t₀ ≤ t ≤ t_f.

The set U is defined by

U = {u(t) : a_i ≤ u_i ≤ b_i,   i = 1, 2, …, m}   (2.2.1)

where a_i, b_i can be known functions of time but are usually constants. Since the major portion of this book will be discussing the case of singular control we shall assume, unless otherwise stated, that a control vector u belongs to the interior of the space U, so that a_i < u_i < b_i, i = 1, 2, …, m.

Should some of the u_i's become equal to the corresponding bounds a_i or b_i then either the technique of Valentine (1937) may be employed, or those variations β_i(·) (see below) of the control variables which attain their bounds can be put to zero. Furthermore, it may sometimes be convenient (in Section 3.2.2 for example) to update the control vector u to the status of a derivative and write

u = v̇,   v(t₀) = 0   (2.2.2)

with v(t_f) arbitrary. This transformation will be seen to bring the control problem more in line with the classical problem of Bolza.

Suppose the behaviour of a dynamical system is governed by differential equations

ẋ = f(x, u, t)   (2.2.3)

and boundary conditions

x(t₀) = x₀   (2.2.4)

ψ[x(t_f), t_f] = 0   (2.2.5)

where t₀ and x₀ are specified and {t₀, x(t₀), t_f, x(t_f)} belongs to S, a closed subset of R^{2n+2}. The terminal constraint function ψ is an s-dimensional column vector function of x(t_f) and t_f. The final time t_f may or may not be specified.

We suppose further that the performance of the system is measured by a cost functional of the form

J = F[x(t_f), t_f] + ∫_{t₀}^{t_f} L(x, u, t) dt.   (2.2.6)

The n-dimensional vector function f of eqn (2.2.3) and the scalar functions F and L are assumed to be at least twice continuously differentiable in each argument. The general problem of optimal control is to find an element of U which minimizes the cost functional J of (2.2.6) subject to (2.2.3-5).
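Before deriving the variations it may help to see how little data the general problem actually involves. The sketch below is entirely my illustration, not the book's: the names `f`, `L`, `F`, `cost`, the toy dynamics ẋ = u, and the Euler discretization are all assumptions. It evaluates a cost functional J of the form (2.2.6) for one concrete control history.

```python
# A minimal sketch (not from the book) of the problem data of (2.2.3)-(2.2.6):
# dynamics f, running cost L, terminal cost F, and a simple Euler-quadrature
# evaluation of the cost functional J for a given control history.

def f(x, u, t):          # dynamics  x' = f(x, u, t), cf. (2.2.3)
    return u

def L(x, u, t):          # running cost integrand, cf. (2.2.6)
    return 0.5 * u * u

def F(xf, tf):           # terminal cost, cf. (2.2.6)
    return (xf - 1.0) ** 2

def cost(u_of_t, x0=0.0, t0=0.0, tf=1.0, n=1000):
    """Euler-integrate the state and accumulate J = F + integral of L."""
    dt = (tf - t0) / n
    x, t, J = x0, t0, 0.0
    for _ in range(n):
        u = u_of_t(t)
        J += L(x, u, t) * dt
        x += f(x, u, t) * dt
        t += dt
    return J + F(x, tf)

print(cost(lambda t: 1.0))   # constant control u = 1 drives x(1) to 1
```

For the constant control u ≡ 1 the state reaches x(1) = 1, the terminal cost vanishes, and J is simply the integral of ½u², i.e. 0.5.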

2.3 The First Variation of J

Define a one-parameter family of control vectors

u = u(·, ε)   (2.3.1)

with the optimal vector given by u(·, 0).

The c o r r e s -

ponding s t a t e v e c t o r

,E )

X('

(2.3.2)

w i l l a l s o b e a f u n c t i o n of t i m e t ( t o < t < tf(E) ) and p a r a m e t e r

E

In a l l cases u(-

with ,E)

E

= 0 along t h e optimal t r a j e c t o r y .

b e l o n g s t o U, x ( * ,E)

b e l o n g s t o some

D i f f e r e n t i a l s of f a m i l y ( 2 . 3 . 2 )

f u n c t i o n s p a c e Y.

are

A s i n t h e c l a s s i c a l c a l c u l u s of v a r i a t i o n s ( B l i s s , 1 9 4 6 ) t h e symbol 6 d e n o t e s d i f f e r e n t i a l s o n l y w i t h r e s p e c t t o t h e parameter

E.

We now i n t r o d u c e a s e t of s t a t e

v a r i a t i o n s and f i n a l - t i m e v a r i a t i o n s d e f i n e d a l o n g t h e optimal t r a j e c t o r y a s

i n which case

d t f = SfdE and 6x = vde.

Similarly, we can define a control variation along the optimal trajectory as

β = (∂u/∂ε)_{ε=0}.   (2.3.5)

From the system equations (2.2.3) it follows that the state variation η must satisfy the equation of variation

η̇ = f_x η + f_u β   (2.3.6)

where f_x, f_u are n × n and n × m matrices respectively.

Because of the boundary conditions (2.2.4-5) the set of variations ξ_f, η must satisfy end conditions of the form

η(t₀) = 0,   (2.3.7)

ψ_x[η(t_f) + ẋ(t_f) ξ_f] + ψ_t ξ_f = 0,   (2.3.8)

where ψ_t is an s-dimensional vector and ψ_x an s × n matrix.

We now adjoin the system equations (2.2.3) and the terminal constraints (2.2.5) to the cost functional J of (2.2.6) by λ, an n-vector of Lagrange multiplier functions of time, and by ν, an s-dimensional constant vector of Lagrange multipliers, respectively. The cost functional may then be written as

J = [F + νᵀψ]_{t=t_f} + ∫_{t₀}^{t_f} (H − λᵀẋ) dt   (2.3.9)

where

H(x, u, λ, t) = L(x, u, t) + λᵀf(x, u, t).   (2.3.10)

When the vectors u(·, ε) and x(·, ε) of (2.3.1-2) are substituted into eqn (2.3.9) the cost functional J may be looked upon as a function of the single parameter ε, and from J(ε) one can easily calculate the

first differential dJ. In fact,

dJ = [(F_t + νᵀψ_t + H − λᵀẋ) dt + (F_x + νᵀψ_x) dx]_{t=t_f}
     + ∫_{t₀}^{t_f} (H_x δx + H_u δu − λᵀ δẋ) dt.   (2.3.11)

By integrating the term in δẋ in the integrand of eqn (2.3.11) by parts, using (2.3.3)₂ to eliminate δx(t_f) and noting that δx(t₀) = 0 since x(t₀) is specified, we obtain

dJ = [(F_t + νᵀψ_t + H) dt + (F_x + νᵀψ_x − λᵀ) dx]_{t=t_f}
     + ∫_{t₀}^{t_f} [(H_x + λ̇ᵀ) δx + H_u δu] dt.   (2.3.12)

On the optimal trajectory, where the parameter ε is zero, this differential takes the form dJ = J₁(ξ_f, η, β) dε. The second differential d²J on the optimal trajectory can similarly be written as d²J = 2J₂(ξ_f, η, β) dε². A Taylor series for the function J(ε) may then be written as

J(ε) = J(0) + εJ₁ + ε²J₂ + …   (2.3.13)

The function J₁(ξ_f, η, β) is called the first variation of J on the optimal trajectory and from its definition and eqn (2.3.12) it is clear that

J₁ = [(F_t + νᵀψ_t + H) ξ_f + (F_x + νᵀψ_x − λᵀ)(η + ẋ ξ_f)]_{t=t_f}
     + ∫_{t₀}^{t_f} [(H_x + λ̇ᵀ) η + H_u β] dt.   (2.3.14)

Bearing in mind the assumption made in Section 2.2 that u belongs to the interior of U, it follows from (2.3.13) that a necessary condition for u(·, 0) to be a control vector which minimizes J is J₁ = 0. That is, a necessary condition for optimal control is that the first variation should vanish for all admissible variations. By choosing the adjoint vector λ and the vector ν so as to make the coefficients of η, η(t_f) and ξ_f vanish in (2.3.14) we obtain the following results:

−λ̇ᵀ = H_x(x, u, λ, t)   (2.3.15)

λᵀ(t_f) = F_x[x̄(t_f), t_f] + νᵀψ_x   (2.3.16)

[F_t + νᵀψ_t + H]_{t=t_f} = 0.   (2.3.17)

The first variation of J then reduces to

J₁ = ∫_{t₀}^{t_f} H_u β dt.   (2.3.18)

With the control variables away from any bounds the variation β in the integrand of (2.3.18) is arbitrary. Since J₁ is to vanish for all admissible variations β, the fundamental lemma of the calculus of variations (Bliss, 1946) yields the condition

H_u = 0.   (2.3.19)
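Condition (2.3.19) lends itself to a direct numerical check. In the sketch below (my construction, not the book's: the problem of minimizing ½(x(1) − 1)² + ½∫₀¹u² dt with ẋ = u, x(0) = 0, and the crude gradient descent on its discretization, are illustrative assumptions), the optimized control settles at u ≡ ½, and H_u = u + λ vanishes along the arc, with λ constant by (2.3.15) and λ(1) = x(1) − 1 by (2.3.16).

```python
# Hedged illustration (not from the book): on a discretized version of
# minimize 0.5*(x(1)-1)^2 + 0.5*integral of u^2, with x' = u, x(0) = 0,
# a numerically optimized control should satisfy H_u = u + lam ~ 0.

n = 200
dt = 1.0 / n
u = [0.0] * n                      # control history on the grid

for _ in range(2000):              # plain gradient descent on the discretized J
    xf = sum(ui * dt for ui in u)  # x(1) under Euler integration of x' = u
    lam = xf - 1.0                 # adjoint: constant, equal to F_x at t_f
    u = [ui - 0.5 * (ui + lam) for ui in u]   # step along -dJ/du, which is prop. to (u + lam)

xf = sum(ui * dt for ui in u)
lam = xf - 1.0
Hu_max = max(abs(ui + lam) for ui in u)       # H_u = u + lam along the arc
print(u[0], Hu_max)                # u ~ 0.5 everywhere, H_u ~ 0
```

The exact solution of the continuous problem is u ≡ ½; the descent reproduces it and the residual in H_u is at round-off level.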

Of course, when u, a member of U, is allowed to attain its bounds we are led to Pontryagin's Minimum Principle, namely

ū = arg min_u H(x̄, u, λ, t).   (2.3.20)

Throughout the above discussion x̄(·) and ū(·) denote the candidate state and control functions respectively.

A further first order condition is the necessary condition of Clebsch (Bliss, 1946). In the control formulation this condition may be written

πᵀ [ 0     0
     0   H_uu ] π ≥ 0   (2.3.21)

for all (n+m)-vectors π satisfying the equation

(−I_n   f_u) π = 0   (2.3.22)

where I_n is the nth order identity matrix.

We now illustrate the use of necessary conditions (2.3.15), (2.3.19) to obtain a candidate arc for optimality by applying them to a rocket problem.

The problem of finding the thrust direc-

tion programme necessary to maximize the range of a rocket with known propellant consumption is considered by Lawden ( 1 9 6 3 ) .

The thrust direction is limited to

lie in a vertical plane through the launching point. The acceleration due to gravity is assumed constant and flight takes place in vacuo over a flat earth. rocket is launched with zero initial velocity at t and burn-out occurs at a known instant t

=

T.

The = 0

The

vehicle continues under gravity along a ballistic trajectory until impact.

The acceleration f caused by

the motor thrust, essentially positive, is a given function of time. With Ox and Oy horizontal and vertical axes through 0 and lying in the plane of flight, the equations of

motion for this problem are

(2.3.23)


where f = cm/M and θ, the control variable, is the angle made by the thrust direction with Ox (Lawden, 1963). The initial values of the state variables u, v (horizontal and vertical velocity) and x, y (horizontal and vertical displacement) are specified, as is the time of flight T to burn-out. There are no end values specified at the final end-point except T. The boundary conditions for the problem are then

t₀ = 0,   u(0) = 0,   v(0) = 0,
x(0) = 0,   y(0) = 0,   t_f = T.   (2.3.24)

It is required to maximize the total range, which is a function of the values of the state variables at burn-out. This is equivalent to minimizing the cost function

J = −x_f − (u_f/g)[v_f + (v_f² + 2g y_f)^{1/2}]   (2.3.25)

The Hamiltonian H of eqn (2.3.10) is

H = λ_u f cos θ + λ_v (f sin θ − g) + λ_x u + λ_y v.   (2.3.26)

Eqn (2.3.15) then yields

−λ̇_u = λ_x,   −λ̇_v = λ_y,   λ̇_x = 0,   λ̇_y = 0.   (2.3.27)


Eqn (2.3.19) leads to the result

tan θ = λ_v/λ_u.   (2.3.28)

The end conditions given by eqn (2.3.16) are

λ_u(t_f) = −(v_f + r)/g,    λ_v(t_f) = −u_f(v_f + r)/(g r),
λ_x(t_f) = −1,    λ_y(t_f) = −u_f/r,   (2.3.29)

where r = √(v_f² + 2g y_f). As in (Lawden, 1963) these results lead to

tan θ = u_f/r.   (2.3.30)

Pontryagin's Principle (2.3.20) or the Clebsch condition (2.3.21-22) is satisfied if θ takes the positive acute angle solution of eqn (2.3.30). The extremal is therefore a trajectory along which the thrust direction remains at a constant positive acute angle to Ox. Whether this extremal is an optimal trajectory is still to be decided, and this will be the subject of a further example in the next section.
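The constant-angle conclusion can be checked by direct search. In this sketch (my illustration; the values f = 3, g = 1, T = 1 and the assumption of constant thrust acceleration are arbitrary choices, not from Lawden), the constant angle that maximizes the range does satisfy tan θ = u_f/r to numerical accuracy.

```python
# Numerical check (my illustration, not from the book) of Example 2.1's
# conclusion: with constant thrust acceleration the best constant thrust
# angle theta satisfies tan(theta) = u_f / r, r = sqrt(v_f^2 + 2 g y_f).
import math

f, g, T = 3.0, 1.0, 1.0          # assumed values

def range_of(theta):
    """Closed-form burn-out state for constant f, theta, then ballistic range."""
    uf = f * T * math.cos(theta)
    vf = f * T * math.sin(theta) - g * T
    xf = 0.5 * f * T**2 * math.cos(theta)
    yf = 0.5 * f * T**2 * math.sin(theta) - 0.5 * g * T**2
    r = math.sqrt(vf * vf + 2.0 * g * yf)
    return xf + uf * (vf + r) / g

# ternary search for the maximizing angle on an interval where y_f >= 0
lo, hi = math.asin(g / f) + 1e-6, 0.5 * math.pi
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if range_of(m1) < range_of(m2):
        lo = m1
    else:
        hi = m2
theta = 0.5 * (lo + hi)

uf = f * T * math.cos(theta)
vf = f * T * math.sin(theta) - g * T
yf = 0.5 * f * T**2 * math.sin(theta) - 0.5 * g * T**2
r = math.sqrt(vf * vf + 2.0 * g * yf)
print(theta, math.tan(theta) - uf / r)   # the difference should be ~0
```

Since the extremal family consists of constant angles, searching over constants suffices to locate the candidate, and the stationarity condition (2.3.30) is recovered at the maximizer.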

2.4 The Second Variation of J

In the previous section it was shown that the

augmented cost functional J of (2.3.9) can be thought of as a function J(ε) of the single parameter ε. Eqn (2.3.11) gives the first differential of J and is quoted again here for convenience:

dJ = [(F_t + νᵀψ_t + H − λᵀẋ) dt + (F_x + νᵀψ_x) dx]_{t=t_f}
     + ∫_{t₀}^{t_f} (H_x δx + H_u δu − λᵀ δẋ) dt.   (2.4.1)

This formula is valid along all arcs x(·, ε), u(·, ε) belonging to the one-parameter families (2.3.1-2). In particular, we have seen that along the optimal trajectory (where ε = 0) the first differential dJ vanishes.

From eqn (2.4.1) one may calculate the second differential of J as

d²J = [(F_t + νᵀψ_t + H − λᵀẋ) d²t + (F_x + νᵀψ_x) d²x
      + (F_tt + νᵀψ_tt + Ḣ − λᵀẍ) dt² + 2 dxᵀ(F_xt + νᵀψ_xt) dt + dxᵀ(F_xx + νᵀψ_xx) dx
      + 2H_x δx dt + 2H_u δu dt − 2λᵀ δẋ dt]_{t=t_f}
      + ∫_{t₀}^{t_f} (H_x δ²x + H_u δ²u − λᵀ δ²ẋ
      + δxᵀH_xx δx + 2δuᵀH_ux δx + δuᵀH_uu δu) dt.   (2.4.2)

By expanding the term Ḣ = dH/dt into its constituent parts, integrating by parts the term in δ²ẋ, eliminating the δ-differentials outside the integral sign by means of eqns (2.3.3), and finally by using eqns (2.3.15, 19), we find that

d²J = [(H + F_t + νᵀψ_t) d²t + (F_x + νᵀψ_x − λᵀ) d²x]_{t=t_f}
      + [(F_tt + νᵀψ_tt) dt² + 2 dxᵀ(F_xt + νᵀψ_xt) dt + dxᵀ(F_xx + νᵀψ_xx) dx]_{t=t_f}
      + [(H_t − H_x ẋ) dt² + 2H_x dx dt]_{t=t_f}
      + ∫_{t₀}^{t_f} (δxᵀH_xx δx + 2δuᵀH_ux δx + δuᵀH_uu δu) dt.   (2.4.3)

The coefficients of the terms in d²t_f and d²x(t_f) are zero because of the transversality conditions (2.3.16-17). The second differential thus reduces to

d²J = [Φ_tt dt² + 2 dxᵀΦ_xt dt + dxᵀΦ_xx dx]_{t=t_f}
      + [(H_t − H_x ẋ) dt² + 2H_x dx dt]_{t=t_f}
      + ∫_{t₀}^{t_f} (δxᵀH_xx δx + 2δuᵀH_ux δx + δuᵀH_uu δu) dt   (2.4.4)

where Φ(x(t_f), t_f) = F + νᵀψ and x_f ≡ x(t_f). As mentioned in Section 2.3 the second differential d²J on the optimal trajectory may be written as d²J = 2J₂(ξ_f, η, β) dε², and from eqn (2.4.4) it is seen that

J₂ = ½[Φ_tt ξ_f² + 2ξ_f (ẋ ξ_f + η)ᵀΦ_xt + (ẋ ξ_f + η)ᵀΦ_xx (ẋ ξ_f + η)
      + (H_t − H_x ẋ) ξ_f² + 2ξ_f H_x (ẋ ξ_f + η)]_{t=t_f}
      + ∫_{t₀}^{t_f} (½ηᵀH_xx η + βᵀH_ux η + ½βᵀH_uu β) dt.   (2.4.5)

This expression J₂ is the coefficient of ε² in the Taylor series (2.3.13) and is called the second variation of the cost functional J along the optimal trajectory. It follows that a necessary condition for u(·, 0) to be a control vector which minimizes J is d²J/dε² ≥ 0. That is to say, the second variation J₂ must be non-negative.

A particular set of admissible variations satisfying eqns (2.3.6-8) is given by

ξ_f = 0,   η(t) ≡ 0,   β(t) ≡ 0.   (2.4.6)

For this set of variations the second variation J₂ vanishes.

Thus, if the candidate arc obtained from the vanishing of the first variation satisfies the condition J₂ ≥ 0 then the set of variations (2.4.6) must minimize J₂. We are accordingly led to an auxiliary problem known as the accessory minimum problem. This is the problem of minimizing J₂ with respect to the set of variations ξ_f, η(·), β(·) satisfying the equations of variation (2.3.6-8).

A further necessary condition for a minimizing

arc is known in the classical literature as the Jacobi condition. In the optimal control problem with second variation J₂ given above (ξ_f = 0) the Jacobi condition may be stated as follows (Bryson and Ho, 1969, but see also Chapter 4): an optimal trajectory contains no conjugate point between its end points. This will be the case if the matrix S remains finite for t₀ ≤ t ≤ t_f, where S satisfies the matrix differential equation

−Ṡ = H_xx − (H_xu + S f_u) H_uu⁻¹ (H_ux + f_uᵀ S) + S f_x + f_xᵀ S   (2.4.7)

and end condition

S(t_f) = Φ_xx[x(t_f), t_f].   (2.4.8)


Example 2.2 (Bell, 1965)   Consider again the problem of maximum range of a rocket vehicle discussed in the example of Section 2.3. The equations of variation on the extremal are, from eqns (2.3.23),

η̇_u = −β f sin θ,   η̇_v = β f cos θ,   η̇_x = η_u,   η̇_y = η_v.   (2.4.9)

The equations of variation on the extremal of the end conditions are, from eqns (2.3.24),

ξ₀ = 0,   ξ_f = 0,   η_u(0) = 0,   η_v(0) = 0,   η_x(0) = 0,   η_y(0) = 0.

Using eqns (2.3.25-26, 29) the second variation (2.4.5) may be written as

J₂ = ½ ηᵀ(T) Φ_{x_f x_f} η(T) − ½ ∫₀ᵀ (t − k) f β² sec θ dt

where k = T + (1/g)(v_f + r). This variation has to be non-negative for the extremal found in Example 2.1 of Section 2.3 to be optimal. To demonstrate that it is indeed always non-negative we integrate eqns (2.4.9) with θ having the positive, acute angle solution of eqn (2.3.30).

This gives

η_u(T) = −I₁ sin θ,   η_v(T) = I₁ cos θ,   η_y(T) = I₂ cos θ,

where

I₁ = ∫₀ᵀ f β dt   and   I₂ = ∫₀ᵀ f(T − t) β dt.

Substituting for η_u(T), η_v(T) and η_y(T) in the expression for J₂ we obtain

J₂ = (u_f/[2gr(u_f² + r²)]) [(v_f + r)I₁ + gI₂]²
     + ½ sec θ [∫₀ᵀ f(T − t) β² dt + (1/g)(v_f + r) ∫₀ᵀ f β² dt].

Thus, J₂ ≥ 0 and, moreover, J₂ = 0 if and only if β(t) ≡ 0, 0 ≤ t ≤ T. The second variation is therefore always non-negative and the extremal found from the first variation satisfies the necessary condition associated with the second variation.
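Since J₂ > 0 for every non-zero β, any perturbation θ(t) = θ* + εβ(t) of the optimal constant angle must reduce the range, and by (2.3.13) the loss should grow like ε². The sketch below (my illustration; f = 3, g = 1, T = 1 and β(t) = sin 2πt/T are arbitrary assumptions) confirms both points numerically.

```python
# A numerical companion (my sketch, not from the book) to Example 2.2: since
# J2 > 0 for every non-zero beta, perturbing the constant optimal thrust angle
# by eps*beta(t) must reduce the range, quadratically in eps.
import math

f, g, T, n = 3.0, 1.0, 1.0, 2000     # assumed values

def rocket_range(theta_fn):
    """Integrate burn-out state for thrust angle theta(t), then add ballistic leg."""
    dt = T / n
    uf = vf = xf = yf = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        th = theta_fn(t)
        uf += f * math.cos(th) * dt
        vf += f * math.sin(th) * dt
        xf += (T - t) * f * math.cos(th) * dt   # x_f = integral of (T-s) f cos(theta) ds
        yf += (T - t) * f * math.sin(th) * dt
    vf -= g * T
    yf -= 0.5 * g * T * T
    r = math.sqrt(vf * vf + 2.0 * g * yf)
    return xf + uf * (vf + r) / g

# locate the best constant angle by ternary search
lo, hi = math.asin(g / f) + 1e-6, 0.5 * math.pi
for _ in range(60):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if rocket_range(lambda t, a=m1: a) < rocket_range(lambda t, a=m2: a):
        lo = m1
    else:
        hi = m2
star = 0.5 * (lo + hi)

R0 = rocket_range(lambda t: star)
for eps in (-0.2, -0.1, 0.1, 0.2):
    R = rocket_range(lambda t, e=eps: star + e * math.sin(2 * math.pi * t / T))
    print(eps, R0 - R)       # positive, and roughly quadrupling as eps doubles
```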

It is worth pointing out at this stage that the following notation for the second variation will normally be used throughout this book:

Q = H_xx,   C = H_ux,   R = H_uu,
Q_f = Φ_{x_f x_f},   A = f_x,   B = f_u,   D = ψ_x(t_f).   (2.4.10)

Furthermore, the variations η in state and β in control will be denoted by x and u respectively. No confusion will result when the second variation is being considered as a new cost function J[u(·)] in the accessory minimum problem. The form of the second variation for a constrained optimal control problem, from eqns (2.4.5) and (2.3.6-8), may then be written as (ξ_f = 0)

J[u(·)] = ∫_{t₀}^{t_f} (½xᵀQx + uᵀCx + ½uᵀRu) dt + ½xᵀ(t_f) Q_f x(t_f)   (2.4.11)

subject to

ẋ = Ax + Bu,   x(t₀) = 0   (2.4.12)

and

Dx(t_f) = 0.   (2.4.13)
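The accessory problem (2.4.11)-(2.4.13) can be probed numerically. In the sketch below (my construction, not an example from the book) the matrices are specialized to the scalar data Q = −1, C = 0, R = 1, A = 0, B = 1, Q_f = 0 with no terminal constraint, so that J[u] = ∫½(u² − x²) dt with ẋ = u, x(0) = 0. For a short horizon every control gives J ≥ 0, while past the conjugate point at t_f = π/2 a control exists that drives J negative.

```python
# An illustrative accessory problem (my construction, not the book's): take
# Q = -1, C = 0, R = 1, A = 0, B = 1, Q_f = 0 and no terminal constraint,
# i.e. J[u] = integral of 0.5*(u^2 - x^2), x' = u, x(0) = 0.  Its sign
# behaviour depends on the horizon.
import math, random

def J(u_vals, tf):
    """Euler-discretized accessory cost for piecewise-constant u."""
    n = len(u_vals)
    dt = tf / n
    x, total = 0.0, 0.0
    for u in u_vals:
        total += 0.5 * (u * u - x * x) * dt
        x += u * dt
    return total

random.seed(0)
n = 1000
# short horizon tf = 1 (< pi/2): every admissible variation gives J >= 0
for _ in range(5):
    u = [random.uniform(-1.0, 1.0) for _ in range(n)]
    assert J(u, 1.0) >= 0.0

# longer horizon tf = 2 (> pi/2): u = (pi/2tf) cos(pi t/2tf) makes J negative
tf = 2.0
u = [(math.pi / (2 * tf)) * math.cos(math.pi * (i + 0.5) / (2 * n)) for i in range(n)]
print(J(u, tf))          # negative: the accessory problem has no minimum at zero
```

The closing control corresponds to x(t) = sin(πt/2t_f), for which J = (t_f/4)[(π/2t_f)² − 1] < 0 whenever t_f > π/2, in agreement with the Riccati blow-up of (2.4.7).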

2.5 A Singular Control Problem

We consider here the class of control problems where the dynamical system is described by the ordinary differential equations

ẋ = f(x, u, t) = f₁(x, t) + f_u(x, t) u.   (2.5.1)

The performance of the system is measured by the cost functional

J = ∫_{t₀}^{t_f} L(x, t) dt + F[x(t_f), t_f]   (2.5.2)

and the terminal states must satisfy

ψ[x(t_f), t_f] = 0.   (2.5.3)

The final time t_f is assumed to be given explicitly. Thus, the Hamiltonian H for this problem formulation is linear in the control variables, and the problem turns out to be singular.

It is clear from eqn (2.4.5) that the second variation is

J₂ = ∫_{t₀}^{t_f} (½ηᵀH_xx η + βᵀH_ux η) dt + ½ηᵀ(t_f) Φ_{x_f x_f} η(t_f)   (2.5.4)

subject to eqns (2.3.6-8). In terms of the notation mentioned at the end of Section 2.4 this second variation is

J[u(·)] = ∫_{t₀}^{t_f} (½xᵀQx + uᵀCx) dt + ½xᵀ(t_f) Q_f x(t_f)   (2.5.5)

subject to

ẋ = Ax + Bu,   x(t₀) = 0,

Dx(t_f) = 0.

We conclude this section with a simple example.

Example 2.3   Consider the following scalar control problem: minimize

J = ½ ∫₀² x² dt

subject to

ẋ = u,   x(0) = 1,   |u| ≤ 1.

This problem is linear in u with the Hamiltonian

H = ½x² + λu

where λ̇ = −x. A singular arc is one along which H_u = λ = 0 for a finite interval of time. During this interval we have Ḣ_u = λ̇ = 0, which implies x = 0. In this case x vanishes identically and so does u. The arc in (x, t)-space along which u is zero is thus a singular arc.
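A short simulation (my sketch; the horizon t_f = 2 and the discretization are illustrative assumptions) makes the role of the singular arc in Example 2.3 visible: driving x to zero with u = −1 and then holding the singular control u = 0 costs ½∫₀¹(1 − t)² dt = 1/6, whereas keeping u = −1 throughout overshoots past x = 0 and costs 1/3.

```python
# My sketch (not from the book) comparing two admissible strategies for
# Example 2.3: enter the singular arc u = 0 once x reaches zero, versus
# keeping the bang control u = -1 for the whole horizon.
def cost(u_fn, x0=1.0, tf=2.0, n=20000):
    dt = tf / n
    x, J = x0, 0.0
    for i in range(n):
        u = max(-1.0, min(1.0, u_fn(i * dt, x)))   # enforce |u| <= 1
        J += 0.5 * x * x * dt
        x += u * dt
    return J

bang_then_singular = cost(lambda t, x: -1.0 if x > 0.0 else 0.0)
bang_throughout    = cost(lambda t, x: -1.0)
print(bang_then_singular, bang_throughout)   # about 1/6 versus about 1/3
```

The strategy that joins the singular arc achieves the lower cost, as the analysis above predicts.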


References

Bell, D. J. (1965). Optimal Trajectories and the Accessory Minimum Problem, Aeronaut. Q. 16, 205-220.

Bliss, G. A. (1946). "Lectures on the Calculus of Variations". Univ. Chicago Press, Chicago.

Bryson, A. E. and Ho, Y. C. (1969). "Applied Optimal Control". Blaisdell, Waltham, Mass.

Lawden, D. F. (1961). Optimal Powered Arcs in an Inverse Square Law Field, J. Am. Rocket Soc. 31, 566-568.

Lawden, D. F. (1962). Optimal Intermediate-Thrust Arcs in a Gravitational Field, Astronautica Acta 8, 106-123.

Lawden, D. F. (1963). "Optimal Trajectories for Space Navigation". Butterworth, Washington, D.C.

Siebenthal, C. D. and Aris, R. (1964). Studies in Optimisation - VI. The Application of Pontryagin's Methods to the Control of a Stirred Reactor, Chem. Engng. Sci. 19, 729-746.

Valentine, F. A. (1937). The Problem of Lagrange with Differential Inequalities as Added Side Conditions, in "Contributions to the Theory of Calculus of Variations (1933-1937)", pp 403-447. Univ. Chicago Press, Chicago.