Some necessary conditions for optimality for a class of optimal control problems which are linear in the control variable

Systems & Control Letters 8 (1987) 261-271, North-Holland

Hoàng Xuân PHÚ *

Sektion Mathematik der KMU, Karl-Marx-Platz, 7010 Leipzig, DDR

Received 10 March 1986
Revised 7 July 1986

Abstract: A class of optimal control problems with phase restrictions is investigated whose performance index and state equation are linear in the control variable. First some necessary conditions for optimality are proved and then they are used to get the optimal solution.

Keywords: Optimal control, Maximum principle, Method of region analysis.

1. Introduction

The class of optimal control problems we consider is the following: Determine the control function u which minimizes the cost functional

J = \int_{t_0}^{t_f} L(t, x(t), u(t))\,dt, \qquad L(t,\xi,u) = L_1(t,\xi) + L_2(t,\xi)\,u,   (1)

subject to the constraints

\dot x(t) = f(t, x(t), u(t)), \qquad f(t,\xi,u) = f_1(t,\xi) + f_2(t,\xi)\,u,
\beta_1 \le u(t) \le \beta_2, \qquad \alpha_1 \le x(t) \le \alpha_2,   (2)
e_1(x(t_0)) = e_2(x(t_f)) = 0.
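For illustration only: the state equation in (2) is affine in the control, so once f_1, f_2 and the bounds are specified, any candidate control can be simulated and checked against the constraints. A minimal forward-Euler sketch with hypothetical data (f_1 \equiv 0, f_2 \equiv 1 and the bounds below are illustrative, not from the paper):

```python
# Illustrative problem data (hypothetical, chosen only for this sketch):
f1 = lambda t, x: 0.0          # drift term f_1(t, xi)
f2 = lambda t, x: 1.0          # control coefficient f_2(t, xi) > 0, as required by (3)
beta1, beta2 = -1.0, 1.0       # control bounds
alpha1, alpha2 = -2.0, 2.0     # state bounds

def simulate(u, x0, t0, tf, n=10_000):
    """Integrate xdot = f1 + f2*u(t) by forward Euler; return the final state
    and a flag telling whether the state constraint held throughout."""
    dt = (tf - t0) / n
    x, feasible = x0, True
    for i in range(n):
        t = t0 + i * dt
        v = min(max(u(t), beta1), beta2)   # enforce beta1 <= u <= beta2
        x += dt * (f1(t, x) + f2(t, x) * v)
        feasible = feasible and (alpha1 <= x <= alpha2)
    return x, feasible
```

For instance, the constant control u \equiv 1 drives x from 0 to approximately 1 over [0, 1] while remaining feasible.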

Here x is a scalar state function and u is a scalar control function. The functions L_1(\cdot,\cdot) and f_1(\cdot,\cdot) are continuous, and the functions L_1(t,\cdot), f_1(t,\cdot), L_2(\cdot,\cdot), f_2(\cdot,\cdot), e_1(\cdot) and e_2(\cdot) are continuously differentiable for all t \in [t_0, t_f] and \xi \in [\alpha_1, \alpha_2]. Further on let us assume

\beta_1 < \beta_2 \quad and \quad f_2(t,\xi) > 0 \quad for all (t,\xi) \in [t_0, t_f] \times [\alpha_1, \alpha_2].   (3)

Here \alpha_i and \beta_i, i = 1, 2, are constant. But if they are functions of the variable t we can simply transform the problem into the case of constant \alpha_i and \beta_i. In [6] a special type of this class was studied where L(t,\xi,u) = A(t)\xi + B(t)u and f(t,\xi,u) = C(t)\xi + D(t)u and \alpha_2 - \alpha_1 is sufficiently small. We study this class because it contains many interesting practical problems, e.g. an inventory problem. In [7] the results of this paper are used to solve an optimal control problem of a hydroelectric power station. This study can give us the form of the solution, e.g. the number of switches and their place. This is important for computing optimal controls (see Sirisena [8]). Especially, we can get the form of the solution for some problems even with insufficient information on the performance index or the control system (see Phú [7]).

A pair (x, u) is called an admissible process, or shortly a process, if it satisfies (2), and an optimal process if it minimizes the cost functional.

* From May 1987: Institute of Mathematics, P.O. Box 631 Bo Ho, 10 000 Hanoi, Vietnam.

0167-6911/87/$3.50 © 1987, Elsevier Science Publishers B.V. (North-Holland)

2. Necessary conditions for optimality

First we formulate a theorem that is only an immediate consequence of Pontryagin's maximum principle for a more general class of problems, which was stated and proved in [2] (p. 208). Therefore its proof will be omitted.

Theorem 1. Let (x, u) be an optimal process of (1)-(3). Then there exist three numbers \lambda_0 \ge 0, c_1 and c_2, a function p : [t_0, t_f] \to \mathbb{R} and two regular non-negative measures \mu_i concentrated on the sets T_i := \{t \in [t_0, t_f] \mid x(t) = \alpha_i\}, i = 1, 2, which do not vanish at the same time and which satisfy

p(t_0) = c_1 e_1'(x(t_0)), \qquad p(t_f) = c_2 e_2'(x(t_f)),

p(t) = p(t_0) - \int_{t_0}^{t} H_\xi(\tau, x(\tau), u(\tau), p(\tau), \lambda_0)\,d\tau - \int_{[t_0,t)} d\mu_1 + \int_{[t_0,t)} d\mu_2,

H(t, x(t), u(t), p(t), \lambda_0) = \sup_{v \in [\beta_1, \beta_2]} H(t, x(t), v, p(t), \lambda_0) \quad for almost all t,

where

H(t,\xi,v,q,\lambda) := q\,f(t,\xi,v) - \lambda\,L(t,\xi,v) \quad and \quad e_i'(\xi) = \frac{d}{d\xi} e_i(\xi).

We might give criteria for when \lambda_0 > 0 holds. But we will not enter further into this and simply assume \lambda_0 > 0; i.e., we may substitute \lambda_0 = 1. From now on we abbreviate L(t, x(t), u(t)) by L(t, x, u), f(t, x(t), u(t)) by f(t, x, u), and so on. For our investigation we need the following map:

\tilde h(t,x,u) := L_\xi(t,x,u) - \frac{L_2(t,x)}{f_2(t,x)}\,f_\xi(t,x,u) - \frac{d}{dt}\!\left[\frac{L_2(t,x(t))}{f_2(t,x(t))}\right].   (4)

Theorem 2. Let (x, u) be an optimal process of (1)-(3) and let z \in [t_0, t_f]. Then there exists a function K which is of bounded variation, continuous on the left, monotone decreasing in every subinterval of [z, t_f] \setminus T_2, monotone increasing in every subinterval of [z, t_f] \setminus T_1, and satisfies K(z) = 0. Moreover, there exist a number c_z and a so-called switching function \varphi such that for all t \in [z, t_f],

\varphi(t) = f_2(t, x)\,\exp\left(-\int_z^t f_\xi(\tau,x,u)\,d\tau\right)\left[ c_z + K(t) + \int_z^t \tilde h(\tau,x,u)\,\exp\left(\int_z^\tau f_\xi(l,x,u)\,dl\right) d\tau \right]

and

u(t) = \beta_1 \ for almost all t \in \{t \in [z, t_f] \mid \varphi(t) < 0\},
u(t) = \beta_2 \ for almost all t \in \{t \in [z, t_f] \mid \varphi(t) > 0\}.
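A toy illustration of the sign rule above (all data hypothetical: K \equiv 0, f_\xi \equiv 0, f_2 \equiv 1, \tilde h \equiv 1 and c_z = -0.5, so the bracket reduces to -0.5 + t and the switching function crosses zero exactly once):

```python
beta1, beta2 = -1.0, 1.0   # hypothetical control bounds

def phi(t):
    # switching function for the toy data: phi(t) = c_z + t with c_z = -0.5
    return -0.5 + t

def u(t):
    # Theorem 2's rule: u = beta1 where phi < 0 and u = beta2 where phi > 0
    return beta1 if phi(t) < 0 else beta2
```

On [0, 1] the control is \beta_1 before t = 0.5 and \beta_2 afterwards: a single switch from the lower to the upper bound.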

Remark. Clearly, K is constant in every subinterval of [z, t_f] \setminus (T_1 \cup T_2), i.e., of \{t \in [z, t_f] \mid \alpha_1 < x(t) < \alpha_2\}.


Proof. From Theorem 1 it follows for t \in [z, t_f] that

p(t) = p(z) + \int_z^t \left[ L_\xi(\tau,x,u) - p(\tau)\,f_\xi(\tau,x,u) \right] d\tau + m(t),   (5)

where m(t) := -\mu_1([z,t)) + \mu_2([z,t)). Let us define

K(t) := -p(z) - \int_z^t L_\xi(\tau,x,u)\,\exp\left(\int_z^\tau f_\xi(l,x,u)\,dl\right) d\tau + p(t)\,\exp\left(\int_z^t f_\xi(\tau,x,u)\,d\tau\right).   (6)

Because p is of bounded variation and continuous on the left, K has these properties also. Moreover, (6) immediately implies K(z) = 0. We shall show the monotony of K in Lemma 1 and Lemma 2.

Since H is linear in the third component, its partial derivative with respect to v is

H_v(t,\xi,v,q,\lambda) = q\,f_2(t,\xi) - \lambda\,L_2(t,\xi).

Now, (4) and (6) imply

H_v(t,x,u,p,1) = f_2(t,x)\,\exp\left(-\int_z^t f_\xi(\tau,x,u)\,d\tau\right)\left[ \frac{H_v(z,x(z),u(z),p(z),1)}{f_2(z,x(z))} + K(t) + \int_z^t \tilde h(\tau,x,u)\,\exp\left(\int_z^\tau f_\xi(l,x,u)\,dl\right) d\tau \right].

Because the Hamiltonian is linear in the third component, Theorem 2 follows from Theorem 1 by setting

\varphi(t) := H_v(t, x(t), u(t), p(t), 1) = p(t)\,f_2(t, x(t)) - L_2(t, x(t))   (7)

and

c_z := \frac{H_v(z, x(z), u(z), p(z), 1)}{f_2(z, x(z))}. \qquad \square

Lemma 1. The function K defined by (6) satisfies

K(t)\,\exp\left(-\int_z^t f_\xi(\tau,x,u)\,d\tau\right) + \int_z^t f_\xi(\tau,x,u)\,\exp\left(-\int_z^\tau f_\xi(l,x,u)\,dl\right) K(\tau)\,d\tau = m(t).   (8)

Proof. (6) implies

p(t) = \exp\left(-\int_z^t f_\xi(\tau,x,u)\,d\tau\right)\left[ p(z) + K(t) + \int_z^t L_\xi(\tau,x,u)\,\exp\left(\int_z^\tau f_\xi(l,x,u)\,dl\right) d\tau \right].

Inserting this representation of p into the integral \int_z^t p(\tau)\,f_\xi(\tau,x,u)\,d\tau and integrating by parts gives

p(t) - p(z) - \int_z^t \left[ L_\xi(\tau,x,u) - p(\tau)\,f_\xi(\tau,x,u) \right] d\tau = K(t)\,\exp\left(-\int_z^t f_\xi(\tau,x,u)\,d\tau\right) + \int_z^t f_\xi(\tau,x,u)\,\exp\left(-\int_z^\tau f_\xi(l,x,u)\,dl\right) K(\tau)\,d\tau.

Consequently, by (5) and (6), the left-hand side equals m(t). From this equation follows (8). \square
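The identity (8) can be checked numerically when the measure part is smooth. Taking f_\xi \equiv 1 and m(t) = t on [0, 1] (illustrative stand-ins, not from the paper), one has K(t) = \int_0^t e^\tau \dot m(\tau)\,d\tau = e^t - 1, and the left-hand side of (8) indeed returns m(t); a sketch:

```python
import math

def lhs_of_8(t, n=2000):
    """Left-hand side of (8) for f_xi = 1 and K(s) = e^s - 1 (so that m(s) = s)."""
    K = lambda s: math.exp(s) - 1.0
    # integrand of (8): f_xi * exp(-int_z^tau f_xi dl) * K(tau), with z = 0
    g = lambda s: math.exp(-s) * K(s)
    # trapezoidal rule for the integral term
    dt = t / n
    integral = sum(0.5 * dt * (g(i * dt) + g((i + 1) * dt)) for i in range(n))
    return K(t) * math.exp(-t) + integral
```

Analytically the left-hand side equals (1 - e^{-t}) + (t - 1 + e^{-t}) = t = m(t), and the numerical evaluation reproduces this.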

Lemma 2. The function K defined by (6) is monotone decreasing in every subinterval of [z, t_f] \setminus T_2 and monotone increasing in every subinterval of [z, t_f] \setminus T_1.

Proof. Since \mu_i, i = 1, 2, are regular, non-negative and concentrated on T_i, the function m(t) = -\mu_1([z,t)) + \mu_2([z,t)) is of bounded variation, monotone decreasing in every subinterval of [z, t_f] \setminus T_2 and monotone increasing in every subinterval of [z, t_f] \setminus T_1. Let I be an arbitrary subinterval of [z, t_f] \setminus T_1. Then for almost all t \in I,

\dot K(t) = \exp\left(\int_z^t f_\xi(\tau,x,u)\,d\tau\right) \dot m(t) \ge 0,

i.e., \dot K(t) \ge 0 for almost all t \in I. Therefore, if we can show that there is no point t of I at which one of the derivates of K equals -\infty, then K is monotone increasing in I (see Natanson [4], p. 303). Assume to the contrary that there exist a t^* \in I and a sequence \{t_i\} in I such that

\lim_{i \to \infty} t_i = t^*, \qquad DK(t^*) := \lim_{i \to \infty} \frac{K(t_i) - K(t^*)}{t_i - t^*} = -\infty.

Then we can choose a subsequence \{t_i'\} such that the limits

DR_j(t^*) := \lim_{i \to \infty} \frac{R_j(t_i') - R_j(t^*)}{t_i' - t^*}, \qquad j = 1, 2, 3,

exist at the same time, where

R_1(t) := \exp\left(-\int_z^t f_\xi(\tau,x,u)\,d\tau\right), \quad R_2(t) := \int_z^t f_\xi(\tau,x,u)\,\exp\left(-\int_z^\tau f_\xi(l,x,u)\,dl\right) K(\tau)\,d\tau, \quad R_3(t) := m(t).

Clearly,

DR_1(t^*) = -\exp\left(-\int_z^{t^*} f_\xi(\tau,x,u)\,d\tau\right) D\!\left(\int_z^t f_\xi(\tau,x,u)\,d\tau\right)\!(t^*)

(see Carathéodory [1], p. 526). Therefore DR_1(t^*) and DR_2(t^*) are bounded, since f_\xi(t,x,u) and K(t) are bounded for all t \in [z, t_f]. Moreover, (8) implies

DK(t^*)\,R_1(t^*) + K(t^*)\,DR_1(t^*) + DR_2(t^*) = Dm(t^*)

(see Carathéodory [1], p. 521). Because DK(t^*) = -\infty, R_1(t^*) is positive and K(t^*) is bounded, from the last equation follows Dm(t^*) = -\infty, which contradicts the property that m is monotone increasing in I \subset [z, t_f] \setminus T_1. That means the above assumption is not true. The rest of Lemma 2 may be proved analogously. \square

Let h be a function defined by

h(t,\xi) := \left[ L_{1\xi} - \frac{L_2 f_{1\xi}}{f_2} - \frac{1}{f_2^2}\left( (L_{2\xi} f_2 - L_2 f_{2\xi})\,f_1 + L_{2t} f_2 - L_2 f_{2t} \right) \right]_{(t,\xi)}   (9)

(all functions on the right side are evaluated at (t, \xi)).

Lemma 3. Let (x, u) be an arbitrary process of (1)-(3). Then \tilde h(t, x, u) = h(t, x(t)) almost everywhere in [t_0, t_f].

Proof. In the following we write L_i, L_{i\xi} and L_{it} for L_i(t, x(t)), L_{i\xi}(t, x(t)) and L_{it}(t, x(t)), and similarly f_i, f_{i\xi} and f_{it} for f_i(t, x(t)), f_{i\xi}(t, x(t)) and f_{it}(t, x(t)). It is obvious that (4) implies

\tilde h(t, x, u) = L_{1\xi} + L_{2\xi}\,u(t) - \frac{L_2}{f_2}\left( f_{1\xi} + f_{2\xi}\,u(t) \right) - \frac{1}{f_2^2}\left[ (L_{2t} + L_{2\xi}\,\dot x(t))\,f_2 - L_2\,(f_{2t} + f_{2\xi}\,\dot x(t)) \right].

Since \dot x(t) = f_1 + f_2\,u(t) almost everywhere, this yields

\tilde h(t, x, u) = L_{1\xi} + L_{2\xi}\,u - \frac{L_2}{f_2}(f_{1\xi} + f_{2\xi}\,u) - \frac{1}{f_2^2}\left[ (L_{2\xi} f_2 - L_2 f_{2\xi})(f_1 + f_2 u) + L_{2t} f_2 - L_2 f_{2t} \right]

= L_{1\xi} - \frac{L_2 f_{1\xi}}{f_2} - \frac{1}{f_2^2}\left[ f_1\,(L_{2\xi} f_2 - L_2 f_{2\xi}) + L_{2t} f_2 - L_2 f_{2t} \right].

That means, by (9), \tilde h(t, x, u) = h(t, x(t)) for an arbitrary process (x, u) almost everywhere in [t_0, t_f]. \square

The set G := [t_0, t_f] \times [\alpha_1, \alpha_2] is called the state region. We now divide G into three parts:

G^+ := \{(t,\xi) \in G \mid h(t,\xi) > 0\}, \quad G^- := \{(t,\xi) \in G \mid h(t,\xi) < 0\}, \quad G^0 := \{(t,\xi) \in G \mid h(t,\xi) = 0\}.   (10)
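Since h in (9) depends only on the problem data, the partition (10) can be computed directly; here the partial derivatives are approximated by central differences. As a cross-check, for the data of the example in Section 3 (L_1 = 4\xi - \cos\xi + e^{-\xi}, L_2 = \xi + t, f_1 = 0.2 - 1.3\sin t, f_2 = 1) the formula reproduces the closed form h(t,\xi) = 2.8 + \sin\xi - e^{-\xi} + 1.3\sin t; a sketch:

```python
import math

def h(t, xi, L1, L2, f1, f2, d=1e-6):
    """Evaluate (9) with central-difference partial derivatives."""
    dxi = lambda g: (g(t, xi + d) - g(t, xi - d)) / (2 * d)
    dt_ = lambda g: (g(t + d, xi) - g(t - d, xi)) / (2 * d)
    L1x = dxi(L1)
    L2x, L2t = dxi(L2), dt_(L2)
    f1x = dxi(f1)
    f2x, f2t = dxi(f2), dt_(f2)
    v1, v2, w2 = f1(t, xi), f2(t, xi), L2(t, xi)
    return (L1x - w2 * f1x / v2
            - ((L2x * v2 - w2 * f2x) * v1 + L2t * v2 - w2 * f2t) / v2 ** 2)

# Data of the example in Section 3:
L1 = lambda t, x: 4 * x - math.cos(x) + math.exp(-x)
L2 = lambda t, x: x + t
f1 = lambda t, x: 0.2 - 1.3 * math.sin(t)
f2 = lambda t, x: 1.0
```

Evaluating h on a grid over G and testing its sign then classifies the state region as G^+, G^- or mixed.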

This definition allows us to state the following theorem.

Theorem 3. Let (x, u) be an optimal process of (1)-(3), let G' be a connected region of G, and let (z_1, z_2) be a non-empty open subinterval of [t_0, t_f] such that (t, x(t)) \in \mathrm{int}\,G' for all t \in (z_1, z_2).
(a) If G' \subset G^+ then there exists z \in [z_1, z_2] such that \varphi(t) < 0 for t \in (z_1, z) and \varphi(t) > 0 for t \in (z, z_2).
(b) If G' \subset G^- then there exists z \in [z_1, z_2] such that \varphi(t) > 0 for t \in (z_1, z) and \varphi(t) < 0 for t \in (z, z_2).
It is to be noticed that z may be equal to z_1 (or z_2), i.e., (z_1, z) (or (z, z_2)) may be empty. If z_1 < z < z_2 then \varphi(z) = 0.

Proof. It suffices to show (a); part (b) may be proved analogously. The condition (t, x(t)) \in \mathrm{int}\,G' \subset G for all t \in (z_1, z_2) implies \alpha_1 < x(t) < \alpha_2 for all t \in (z_1, z_2). Consequently, K is constant on (z_1, z_2). Therefore it follows readily from Theorem 2 and Lemma 3 that

\varphi(t) = f_2(t, x)\,\exp\left(-\int_{z_1}^t f_\xi(\tau,x,u)\,d\tau\right)\left[ c_{z_1} + K(z_1 + 0) + \int_{z_1}^t h(\tau, x(\tau))\,\exp\left(\int_{z_1}^\tau f_\xi(l,x,u)\,dl\right) d\tau \right] \quad for t \in (z_1, z_2),   (11)

where K(z_1 + 0) = \lim_{t \downarrow z_1} K(t). Since f_2(t, x) > 0 and h(t, x(t)) > 0 for all t \in (z_1, z_2), the bracket in (11) is strictly increasing, and there are only three cases: (1) If \varphi(z_1 + 0) \ge 0 then this theorem holds for z = z_1. (2) If \varphi(z_2) \le 0 then this theorem holds for z = z_2. (3) If \varphi(z_1 + 0) < 0 < \varphi(z_2) then there exists z \in (z_1, z_2) such that \varphi(z) = 0 and this theorem holds, because \varphi is continuous in (z_1, z_2). \square

This theorem shows that each optimal process may have at most one switch in a connected region of \mathrm{int}\,G^+ or \mathrm{int}\,G^- during a continuous interval of time.

Let (z_1, z_2) be a subinterval of [t_0, t_f]. It is named a singular interval of the process (x, u) if \varphi(t) = 0 for all t \in (z_1, z_2). Then the set \{(t, \xi) \mid t \in (z_1, z_2), \xi = x(t)\} is called a singular subarc of (x, u). Theorem 3 also implies that an optimal process cannot have any singular subarcs in \mathrm{int}\,G^+ or \mathrm{int}\,G^-.

Theorem 4. Let (z_1, z_2) be a singular interval of an optimal process (x, u) of (1)-(3).
(a) If \alpha_1 < x(t) < \alpha_2 for all t \in (z_1, z_2) then h(t, x(t)) = 0 for all t \in (z_1, z_2).
(b) If x(t) = \alpha_1 for all t \in (z_1, z_2) then h(t, x(t)) \ge 0 for all t \in (z_1, z_2).
(c) If x(t) = \alpha_2 for all t \in (z_1, z_2) then h(t, x(t)) \le 0 for all t \in (z_1, z_2).

Proof. On a singular interval \varphi(t) = 0 for all t \in (z_1, z_2); hence, by Theorem 2 and Lemma 3,

\dot K(t) = -h(t, x(t))\,\exp\left(\int_{z_1}^t f_\xi(\tau,x,u)\,d\tau\right)

almost everywhere in (z_1, z_2). If \alpha_1 < x(t) < \alpha_2 for all t \in (z_1, z_2) then K is constant there, so h(t, x(t)) = 0 almost everywhere and, by continuity of h and x, everywhere in (z_1, z_2); this proves (a). If x(t) = \alpha_1 for all t \in (z_1, z_2) then (z_1, z_2) \subset [z_1, t_f] \setminus T_2, so K is monotone decreasing there and

\dot K(t) = -h(t, x(t))\,\exp\left(\int_{z_1}^t f_\xi(\tau,x,u)\,d\tau\right) \le 0

almost everywhere in (z_1, z_2). Because h and x are continuous, we have h(t, x(t)) \ge 0 for all t \in (z_1, z_2), as was to be shown. The proof of (c) parallels that of (b). \square

Concluding from Theorem 4 we see that an optimal process of (1)-(3) cannot have any singular subarc in \{(t,\xi) \in G^+ \mid \xi = \alpha_2\} \cup \{(t,\xi) \in G^- \mid \xi = \alpha_1\}. But it is still possible that there exists z such that x(z) = \alpha_2 while h(z, \alpha_2) > 0, or x(z) = \alpha_1 while h(z, \alpha_1) < 0. The necessary conditions represented in Theorems 2-4 may be used to find the solution of many problems of class (1)-(3). The next section gives an example of this.

3. Solution of problem (1)-(3) for G = G^+ or G = G^-

In this section we assume

x(t_0) = x_0 \quad and \quad x(t_f) = x_f,   (12)

where x_0 and x_f are given numbers. Moreover, for i = 1 or 2 and j = 1 or 2 there are only a finite number of subintervals [s_k, s_k'] (k = 1, \dots, m) of [t_0, t_f] such that

f(t, \alpha_i, \beta_j) = 0 \quad for t \in \bigcup_{k=1}^{m} [s_k, s_k']   (13a)

and

f(t, \alpha_i, \beta_j) \ne 0 \quad for t \in [t_0, t_f] \setminus \bigcup_{k=1}^{m} [s_k, s_k'].   (13b)

Let us define four continuous functions E^+, E^-, S^+ and S^- by the following equations:

E^+(t) = x_0 + \int_{t_0}^t \varphi^+(\tau)\,d\tau, \qquad \varphi^+(t) = \begin{cases} 0 & if E^+(t) = \alpha_1 and f(t, \alpha_1, \beta_1) \le 0, \\ f(t, E^+(t), \beta_1) & otherwise, \end{cases}

E^-(t) = x_0 + \int_{t_0}^t \varphi^-(\tau)\,d\tau, \qquad \varphi^-(t) = \begin{cases} 0 & if E^-(t) = \alpha_2 and f(t, \alpha_2, \beta_2) \ge 0, \\ f(t, E^-(t), \beta_2) & otherwise, \end{cases}

S^+(t) = x_f - \int_t^{t_f} \psi^+(\tau)\,d\tau, \qquad \psi^+(t) = \begin{cases} 0 & if S^+(t) = \alpha_1 and f(t, \alpha_1, \beta_2) \ge 0, \\ f(t, S^+(t), \beta_2) & otherwise, \end{cases}

S^-(t) = x_f - \int_t^{t_f} \psi^-(\tau)\,d\tau, \qquad \psi^-(t) = \begin{cases} 0 & if S^-(t) = \alpha_2 and f(t, \alpha_2, \beta_1) \le 0, \\ f(t, S^-(t), \beta_1) & otherwise. \end{cases}

The above functions are well defined by this definition. Let us investigate for example E^+ and \varphi^+ for x_0 > \alpha_1. Since E^+ is continuous and E^+(t_0) > \alpha_1, there exists a non-empty subinterval [t_0, t_1] of [t_0, t_f] such that

\dot E^+(t) = \varphi^+(t) = f(t, E^+(t), \beta_1) \ for t \in (t_0, t_1), \qquad E^+(t_0) = x_0.

This equation has exactly one solution. Further, t_1 may be chosen so large that the following holds:

E^+(t) > \alpha_1 \ for all t \in [t_0, t_1) \quad and \quad E^+(t_1) = \alpha_1 \ or \ t_1 = t_f.

If t_1 = t_f then E^+ and \varphi^+ are determined completely. Otherwise (i.e., t_1 < t_f) we assume inductively that we come after l steps to t_l with E^+(t_l) = \alpha_1. Now we have to determine a t_l' \in [t_l, t_f] such that

f(t, \alpha_1, \beta_1) \le 0 \ for all t \in [t_l, t_l'], \ and for all \delta > 0 there exists a t' \in (t_l', t_l' + \delta) with f(t', \alpha_1, \beta_1) > 0.

From the definition follows

E^+(t) = \alpha_1 \ for t \in [t_l, t_l'] \quad and \quad \varphi^+(t) = 0 \ for t \in (t_l, t_l').

Further, because of (13), there exists a t_l'' > t_l' such that f(t, \alpha_1, \beta_1) > 0 for all t \in (t_l', t_l''). Consequently, by definition, E^+ is in this interval the solution of

\dot y_l(t) = f(t, y_l(t), \beta_1), \qquad y_l(t_l') = \alpha_1

(this equation has exactly one solution). Let t_{l+1} be from (t_l', t_f] such that

y_l(t) > \alpha_1 \ for all t \in (t_l', t_{l+1}) \quad and \quad y_l(t_{l+1}) = \alpha_1 \ or \ t_{l+1} = t_f.

Then we have

E^+(t) = y_l(t) \ for t \in [t_l', t_{l+1}] \quad and \quad \varphi^+(t) = f(t, y_l(t), \beta_1) \ for t \in (t_l', t_{l+1}).

Now we make the (l+1)-th step, and so on.

Clearly, f(t_l, \alpha_1, \beta_1) \le 0 for l = 1, 2, \dots. Moreover, there exists \tau \in (t_l, t_{l+1}) such that f(\tau, \alpha_1, \beta_1) > 0. Consequently, by (13), there exist only a finite number of such t_l's. That is, E^+ is piecewise continuously differentiable and \varphi^+ is piecewise continuous. Analogously, we can construct E^-, S^+ and S^- (for S^+ and S^- we begin at t_f and go backward). These functions are piecewise continuously differentiable also. By this it is easy to show that the following functions are piecewise continuously differentiable:

M^+(t) := \max\{E^+(t), S^+(t)\}, \qquad M^-(t) := \min\{E^-(t), S^-(t)\}.

For us it is sufficient that M^+ and M^- are almost everywhere differentiable, and this can be shown by

M^\pm(t) = \tfrac{1}{2}\left[ \pm\,|E^\pm(t) - S^\pm(t)| + E^\pm(t) + S^\pm(t) \right].

For that reason we can define (almost everywhere)

u^+(t) := \frac{\dot M^+(t) - f_1(t, M^+(t))}{f_2(t, M^+(t))}, \qquad u^-(t) := \frac{\dot M^-(t) - f_1(t, M^-(t))}{f_2(t, M^-(t))}.
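Numerically, the stepwise construction of E^+ is just integration of the \beta_1-flow with a hold at the bound \alpha_1; the trajectory leaves the bound by itself as soon as f(t, \alpha_1, \beta_1) > 0. A sketch for the data of the example below (\beta_1 = -1, \alpha_1 = 0.5, x_0 = 1; forward Euler):

```python
import math

def E_plus(t_end, x0=1.0, alpha1=0.5, n=20_000):
    """E^+ for xdot = 0.2 - 1.3 sin t + u with u = beta_1 = -1,
    held at the lower state bound alpha1 (forward Euler)."""
    dt = t_end / n
    x = x0
    for i in range(n):
        t = i * dt
        x += dt * (0.2 - 1.3 * math.sin(t) - 1.0)   # f(t, x, beta_1)
        if x < alpha1:                               # ride the bound alpha_1
            x = alpha1
    return x
```

E_plus(0.3) agrees with the closed form -0.3 - 0.8 \cdot 0.3 + 1.3 \cos 0.3, and E_plus(2.0) returns 0.5, i.e., t = 2 lies in the boundary interval (t_1, t_2) of the example.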

We are now in a position to state and prove the following theorem.

Theorem 5. Suppose (12)-(13) and the following conditions hold: G = G^+, M^+(t_0) = x_0, M^+(t_f) = x_f and M^+(t) \le \alpha_2 for all t \in [t_0, t_f]. Then (M^+, u^+) is the unique optimal process of (1)-(3).

Proof. It can easily be seen that (M^+, u^+) is an admissible process. Consequently, this problem has at least one optimal process (see [3]). Furthermore, for every process (\bar x, \bar u) we have \bar x(t) \ge E^+(t) \ge \alpha_1 and \bar x(t) \ge S^+(t) \ge \alpha_1 and therefore

\bar x(t) \ge M^+(t) \ge \alpha_1 \quad for all t \in [t_0, t_f].   (14)

Assume to the contrary that there exists an optimal process (x, u) different from (M^+, u^+). Since x(t_0) = M^+(t_0) = x_0, x(t_f) = M^+(t_f) = x_f and (14) holds, there is a subinterval [t_1, t_2] \subset [t_0, t_f] such that

x(t_1) = M^+(t_1), \quad x(t_2) = M^+(t_2) \quad and \quad x(t) > M^+(t) \ge \alpha_1 \ for all t \in (t_1, t_2).   (15)

Therefore K is monotone increasing in (t_1, t_2). Because G = G^+, Lemma 3 and (10) imply \tilde h(t, x, u) > 0 almost everywhere. Consequently, by Theorem 2, if \varphi(t_3) \ge 0 for some t_3 \in (t_1, t_2) then \varphi(t) > 0 for all t \in (t_3, t_2). From this we get \varphi(t_1 + 0) \ge 0, because otherwise there would exist a t_4 \in (t_1, t_2) such that \varphi(t) < 0, i.e., u(t) = \beta_1, for all t \in (t_1, t_4), which implies x(t) \le M^+(t) near t_1 and contradicts (15). Hence \varphi(t) > 0 for all t \in (t_1, t_2) and therefore u(t) = \beta_2 for all t \in (t_1, t_2).

In case x(t_2) = \alpha_1, one has f(t_2, \alpha_1, \beta_2) \le 0 because of x(t) > \alpha_1 and \dot x(t) = f(t, x(t), \beta_2) for all t \in (t_1, t_2). For that reason x(t) = S^+(t) for all t \in (t_1, t_2), which conflicts with (15) and the definition of M^+. That means x(t_2) > \alpha_1 must hold. By an argument analogous to the previous one, x(t_2) = S^+(t_2) is impossible. Therefore x(t_2) = E^+(t_2) > S^+(t_2). Consequently, there is an \varepsilon > 0 such that x(t) > E^+(t) for all t \in (t_2 - \varepsilon, t_2). For that reason there exists an increasing sequence \{z_n\} in this interval converging to t_2 such that \dot x(z_n) < \dot E^+(z_n) = f(z_n, E^+(z_n), \beta_1) for all n. Since \dot x(z_n) = f(z_n, x(z_n), \beta_2), letting n \to \infty we get f(t_2, x(t_2), \beta_2) \le f(t_2, x(t_2), \beta_1), i.e., f_2(t_2, x(t_2))(\beta_2 - \beta_1) \le 0, which contradicts (3). \square
Theorem 6. Suppose (12)-(13) and the following conditions hold: G = G^-, M^-(t_0) = x_0, M^-(t_f) = x_f and M^-(t) \ge \alpha_1 for all t \in [t_0, t_f]. Then (M^-, u^-) is the unique optimal process of (1)-(3).

Similar to (14), for every process (\bar x, \bar u) of (1)-(3) with (12)-(13),

M^+(t) \le \bar x(t) \le M^-(t) \quad for all t \in [t_0, t_f].

This fact gives us an interpretation of M^+ and M^-.

Example. Consider

\int_0^{2\pi} \left( 4x - \cos x + e^{-x} + (x + t)\,u \right) dt \to \min,

\dot x(t) = 0.2 - 1.3 \sin t + u(t), \qquad |u(t)| \le 1,   (16)

0.5 \le x(t) \le 2.5, \qquad x(0) = 1, \qquad x(2\pi) = 2.

According to (9), h(t, \xi) = 2.8 + \sin \xi - e^{-\xi} + 1.3 \sin t. Since h(t, \xi) > 0 for all (t, \xi) \in G = [0, 2\pi] \times [0.5, 2.5], (10) implies G = G^+. In order to use Theorem 5 we determine the functions E^+ and S^+. From the definition of \varphi^+ and E^+ follows

\varphi^+(t) = \begin{cases} 0 & for t \in (t_1, t_2), \\ -0.8 - 1.3 \sin t & for t \in (0, t_1) \cup (t_2, 2\pi), \end{cases}

E^+(t) = \begin{cases} -0.3 - 0.8\,t + 1.3 \cos t & for t \in [0, t_1], \\ 0.5 & for t \in [t_1, t_2], \\ 0.5 - 0.8\,(t - t_2) + 1.3\,(\cos t - \cos t_2) & for t \in [t_2, 2\pi], \end{cases}

where t_1 and t_2 satisfy

E^+(t_1) - \alpha_1 = -0.8 - 0.8\,t_1 + 1.3 \cos t_1 = 0, \quad i.e. \ t_1 = 0.458,

f(t_2, \alpha_1, \beta_1) = -0.8 - 1.3 \sin t_2 = 0 \ and \ t_2 \in (\pi, 2\pi), \quad i.e. \ t_2 = \pi + \arcsin\tfrac{8}{13} = 3.804.

Further, we have

\psi^+(t) = \begin{cases} 0 & for t \in (0, t_3) \cup (t_4, t_5), \\ 1.2 - 1.3 \sin t & for t \in (t_3, t_4) \cup (t_5, 2\pi), \end{cases}

S^+(t) = \begin{cases} 0.5 & for t \in [0, t_3] \cup [t_4, t_5], \\ 0.5 + 1.2\,(t - t_4) + 1.3\,(\cos t - \cos t_4) & for t \in [t_3, t_4], \\ 0.7 + 1.2\,(t - 2\pi) + 1.3 \cos t & for t \in [t_5, 2\pi], \end{cases}

where t_3, t_4 and t_5 satisfy

S^+(t_5) - \alpha_1 = 0.2 + 1.2\,(t_5 - 2\pi) + 1.3 \cos t_5 = 0 \ and \ t_5 \in (\pi, 2\pi), \quad i.e. \ t_5 = 5.415,

f(t_4, \alpha_1, \beta_2) = 1.2 - 1.3 \sin t_4 = 0 \ and \ t_4 \in (\tfrac{1}{2}\pi, \pi), \quad i.e. \ t_4 = \pi - \arcsin\tfrac{12}{13} = 1.966,

S^+(t_3) - \alpha_1 = 1.2\,(t_3 - t_4) + 1.3\,(\cos t_3 - \cos t_4) = 0 \ and \ 0 < t_3 < t_4, \quad i.e. \ t_3 = 0.775.

Consequently, by definition,

M^+(t) = \begin{cases} -0.3 - 0.8\,t + 1.3 \cos t & for t \in [0, t_1], \\ 0.5 & for t \in [t_1, t_3] \cup [t_4, t_2], \\ 0.5 + 1.2\,(t - t_4) + 1.3\,(\cos t - \cos t_4) & for t \in [t_3, t_4], \\ 0.5 - 0.8\,(t - t_2) + 1.3\,(\cos t - \cos t_2) & for t \in [t_2, t_6], \\ 0.7 + 1.2\,(t - 2\pi) + 1.3 \cos t & for t \in [t_6, 2\pi], \end{cases}

u^+(t) = \begin{cases} -1 & for t \in (0, t_1) \cup (t_2, t_6), \\ 1 & for t \in (t_3, t_4) \cup (t_6, 2\pi), \\ -0.2 + 1.3 \sin t & for t \in (t_1, t_3) \cup (t_4, t_2), \end{cases}

where t_6 satisfies

S^+(t_6) - E^+(t_6) = 2\,t_6 + 0.2 - 2.4\pi - 0.8\,t_2 + 1.3 \cos t_2 = 0 \ and \ t_5 < t_6 < 2\pi,

i.e. \ t_6 = 1.2\pi - 0.1 + 0.4\,t_2 - 0.65 \cos t_2 = 5.704 (see Figure 1).

Fig. 1. The state region G = G^+.

Because M^+(0) = 1, M^+(2\pi) = 2 and M^+(t) \le 2.5 for all t \in [0, 2\pi], Theorem 5 implies that (M^+, u^+) is the unique optimal process of (16).

The method described above is called the method of region analysis.
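The construction of the example can be cross-checked numerically: the switching times solve scalar equations (reproduced below by bisection, with brackets read off from the sign changes), and integrating \dot x = 0.2 - 1.3 \sin t + u^+(t) with these times must return the boundary value x(2\pi) = 2 while respecting 0.5 \le x \le 2.5. A sketch:

```python
import math

def bisect(g, a, b, tol=1e-12):
    # plain bisection for a root of g on a bracket [a, b] with a sign change
    ga = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if ga * g(m) <= 0.0:
            b = m
        else:
            a, ga = m, g(m)
    return 0.5 * (a + b)

# Switching times of the example (cf. the determining equations above):
t1 = bisect(lambda t: -0.8 - 0.8 * t + 1.3 * math.cos(t), 0.0, 1.0)   # near 0.458
t2 = math.pi + math.asin(0.8 / 1.3)                                   # near 3.804
t4 = math.pi - math.asin(1.2 / 1.3)                                   # near 1.966
t5 = bisect(lambda t: 0.2 + 1.2 * (t - 2 * math.pi) + 1.3 * math.cos(t),
            5.0, 6.0)                                                 # near 5.415
t3 = bisect(lambda t: 1.2 * (t - t4) + 1.3 * (math.cos(t) - math.cos(t4)),
            0.1, 1.5)                                                 # near 0.775
t6 = 1.2 * math.pi - 0.1 + 0.4 * t2 - 0.65 * math.cos(t2)             # near 5.704

def u_plus(t):
    # optimal control of the example, assembled from the computed times
    if t < t1 or t2 <= t < t6:
        return -1.0                      # beta_1 segments (following E^+)
    if t3 <= t < t4 or t >= t6:
        return 1.0                       # beta_2 segments (following S^+)
    return -0.2 + 1.3 * math.sin(t)     # riding the state bound x = 0.5

# Forward Euler for xdot = 0.2 - 1.3 sin t + u^+(t), x(0) = 1:
n = 200_000
dt = 2 * math.pi / n
x, lo, hi = 1.0, 1.0, 1.0
for i in range(n):
    t = i * dt
    x += dt * (0.2 - 1.3 * math.sin(t) + u_plus(t))
    lo, hi = min(lo, x), max(hi, x)
```

The run ends with x close to 2 and the trajectory stays inside [0.5, 2.5], as Theorem 5 requires of (M^+, u^+).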

Acknowledgement

I thank the referee for his comments and Prof. R. Klötzler and Prof. E. Zeidler for helpful discussions.

References

[1] C. Carathéodory, Vorlesungen über reelle Funktionen (Teubner, Leipzig-Berlin, 1918).
[2] A.D. Ioffe and V.M. Tichomirov, Theorie der Extremalaufgaben (VEB Deutscher Verlag der Wissenschaften, Berlin, 1979).
[3] B.S. Mordukhovich, Existence of optimal control, in: Contributions to Science and Technics, Modern Problems of Mathematics, Vol. 6, pp. 207-260 (in Russian) (Moscow, 1976).
[4] I.P. Natanson, Theorie der Funktionen einer reellen Veränderlichen (Akademie-Verlag, Berlin, 1981).
[5] H.X. Phú, Zur Stetigkeit der Lösung der adjungierten Gleichung bei Aufgaben der optimalen Steuerung mit Zustandsbeschränkungen, Z. Anal. Anw. 3 (6) (1984) 527-539.
[6] H.X. Phú, Lineare Steuerungsprobleme mit engen Zustandsbereichen, Optimization 16 (2) (1985) 273-284.
[7] H.X. Phú, On the optimal control of a hydroelectric power plant, Systems Control Lett. 8 (1987) 281-288 (this issue).
[8] H.R. Sirisena, A gradient method for computing optimal bang-bang controls, Internat. J. Control 19 (2) (1974) 257-264.