10 Optimal Control

A. A BASIC OPTIMIZATION PROBLEM

1. Remark: Once the asymptotic design constraints for a feedback system have been achieved, additional design latitude, which can be used to optimize some measure of system performance, may still remain. In the present chapter we investigate the formulation and solution of some of the more commonly encountered deterministic optimal control problems. We begin with a basic optimization problem, the solution of which will point the way to the open and closed loop servomechanism problems and the regulator problem. Let $A \in \mathscr{B}$ where $(A^*A)^{-1}$ exists, and let $w$ and $z$ be fixed vectors in $H$. We then term the problem of minimizing

$$J(B) = \|z - ABw\|^2$$

over $B \in \mathscr{C}$ the basic optimization problem. Without the causality constraint this is a standard generalized inverse problem with solution any $B_0$ satisfying

$$B_0w = (A^*A)^{-1}A^*z$$

This may be verified by a projection theorem argument. A more elementary, if somewhat tedious, completion of the squares argument is as follows. With this choice of $B_0$ and any $B \in \mathscr{C}$,

$$(z, ABw) = (AB_0w, ABw) \qquad \text{and} \qquad (z, AB_0w) = (AB_0w, AB_0w)$$

Hence

$$\begin{aligned}
J(B) &= \|z - ABw\|^2 = (z, z) - (z, ABw) - (ABw, z) + (ABw, ABw) \\
&= (z, z) - (AB_0w, ABw) - (ABw, AB_0w) + (ABw, ABw) + (AB_0w, AB_0w) - (AB_0w, AB_0w) \\
&= \|A(B - B_0)w\|^2 + (z, z) - (AB_0w, AB_0w) \\
&= \|A(B - B_0)w\|^2 + (z, z) - (z, AB_0w) - [(AB_0w, z) - (AB_0w, AB_0w)] \\
&= J(B_0) + \|A(B - B_0)w\|^2 \geq J(B_0)
\end{aligned}$$

which verifies that $B_0$ minimizes $J(B)$ over $\mathscr{B}$. Unfortunately, even if $A \in \mathscr{C}$, $(A^*A)^{-1}A^*$ may fail to be in $\mathscr{C}$ (except for memoryless $A$). Moreover, if $z \in H_t$, $(A^*A)^{-1}A^*z$ may fail to be in $H_t$, in which case it will be impossible to satisfy

$$B_0w = (A^*A)^{-1}A^*z$$

with $B_0 \in \mathscr{C}$ even for fixed $w$ and $z$ in $H_t$. That is, since $H_t$ is an invariant subspace for every causal operator, if $w \in H_t$ a necessary condition for a causal operator $B_0$ to interpolate the points $w$ and $(A^*A)^{-1}A^*z$ is that $(A^*A)^{-1}A^*z \in H_t$. Fortunately, by appropriately modifying the optimization criterion, an optimal causal $B_0$ can be constructed.²

Basic Optimization Theorem: Let $A \in \mathscr{B}$, assume that $A^*A$ admits a spectral factorization $A^*A = C^*C$ where $C \in \mathscr{C} \cap \mathscr{C}^{-1}$, and let $w \in H_t$ and $z \in H$. Then

$$J(B) = \|z - ABw\|^2$$

is minimized over $B \in \mathscr{C}$ by any $B_0 \in \mathscr{C}$ such that

$$B_0w = C^{-1}E_tC^{*-1}A^*z$$

Proof: Since $w \in H_t$, for any $B \in \mathscr{C}$

$$CBw = CBE_tw = E_tCBE_tw = E_tCBw$$

Hence

$$\begin{aligned}
(z, ABw) &= (A^*z, Bw) = (C^*C^{*-1}A^*z, Bw) = (C^{*-1}A^*z, CBw) = (C^{*-1}A^*z, E_tCBw) \\
&= (E_tC^{*-1}A^*z, CBw) = (CC^{-1}E_tC^{*-1}A^*z, CBw) = (CB_0w, CBw) = (AB_0w, ABw)
\end{aligned}$$

where the final equality follows from $(Cu, Cv) = (C^*Cu, v) = (A^*Au, v) = (Au, Av)$, while a similar argument implies that

$$(z, AB_0w) = (AB_0w, AB_0w)$$

With these equalities in hand, precisely the argument used in Remark 1 reveals that for any $B \in \mathscr{C}$

$$J(B) = J(B_0) + \|A(B - B_0)w\|^2 \geq J(B_0)$$

and hence we have verified that $B_0$ minimizes the given performance measure over $\mathscr{C}$ whenever it exists. //

2. Remark: It is important to note the fundamental difference between the generalized inverse "solution" to our optimization problem and the solution of the theorem. Like $(A^*A)^{-1}A^*$, $C^{-1}E_tC^{*-1}A^*$ is not causal; however, since $C^{-1}$ is causal,

$$C^{-1}E_tC^{*-1}A^*z = E_tC^{-1}E_tC^{*-1}A^*z \in H_t$$

and hence the pair $\{w, C^{-1}E_tC^{*-1}A^*z\}$ satisfies the necessary condition for causal interpolation. It may then be possible to interpolate this pair with a causal operator $B_0$. Although no general construction for such an interpolating operator exists,² a necessary and sufficient condition for the existence of a causal interpolating operator will be derived in the sequel; indeed, in a number of special cases it is even possible to interpolate an infinite set of input/output pairs $\{w_n, C^{-1}E_{t_n}C^{*-1}A^*z_n\}$.³
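In finite dimensions the theorem admits a direct numerical check: causal operators become lower triangular matrices, $E_t$ becomes the projection that zeroes the first $t$ coordinates, and $H_t$ becomes its range. The following sketch is our own finite-dimensional analogy (all names and dimensions are illustrative, not from the text); it computes a lower triangular spectral factor of $A^*A$ by a time-reversed Cholesky factorization and verifies that $B_0w = C^{-1}E_tC^{*-1}A^*z$ matches a direct constrained least squares solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 6, 2                                      # dimension and "present time" t

A = rng.standard_normal((n, n)) + n * np.eye(n)  # a well-conditioned A
M = A.T @ A                                      # A*A > 0

# Spectral factorization M = C^T C with C lower triangular ("causal"):
# Cholesky factor the time-reversed matrix J M J = L L^T, then C = J L^T J.
J = np.eye(n)[::-1]                              # exchange (time-reversal) matrix
L = np.linalg.cholesky(J @ M @ J)
C = J @ L.T @ J
assert np.allclose(np.triu(C, 1), 0)             # C is lower triangular
assert np.allclose(C.T @ C, M)                   # C^T C = A^T A

# Optimal "causal" solution of min ||z - A u|| over u in E_t H,
# i.e. over u supported on coordinates t, t+1, ..., n-1:
z = rng.standard_normal(n)
Et = np.diag([0.0] * t + [1.0] * (n - t))        # E_t zeroes the past
u0 = np.linalg.solve(C, Et @ np.linalg.solve(C.T, A.T @ z))

# Direct check: constrained least squares over the same subspace.
v = np.linalg.lstsq(A[:, t:], z, rcond=None)[0]
assert np.allclose(u0[:t], 0)                    # u0 lies in E_t H
assert np.allclose(u0[t:], v)                    # and matches the direct optimum
print("causal Wiener-Hopf solution matches constrained least squares")
```

The time reversal is needed because a plain Cholesky factorization produces $LL^*$ rather than the anticausal-times-causal form $C^*C$ required by the theorem.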

3. Example: Consider the Hilbert resolution space $L_2(-\infty, \infty)$ with its usual resolution structure and let $A = \delta$ be the ideal delay on $L_2(-\infty, \infty)$. Then, since $\delta^* = P$, the ideal predictor,

$$[\delta^*\delta] = [P\delta] = [I] = [I^*I]$$

allowing us to take $C = I$. Now if $w = z = \chi_{(0,1]}$, i.e., multiplication by the characteristic function of the interval $(0, 1]$, then $C^{*-1}A^*z = Pz = \chi_{(-1,0]}$ and hence

$$E_0C^{*-1}A^*z = E_0\chi_{(-1,0]} = 0$$

We must therefore choose $B_0$ in $\mathscr{C}$ so that

$$B_0w = C^{-1}E_0C^{*-1}A^*z = 0$$

Clearly $B_0 = 0$ suffices. Now, let us consider the case where $w = \chi_{(0,1]}$ but $z = \chi_{(0,2]}$. Then

$$C^{*-1}A^*z = Pz = \chi_{(-1,1]}$$

Hence

$$E_0C^{*-1}A^*z = \chi_{(0,1]} = w$$

and

$$B_0w = C^{-1}E_0C^{*-1}A^*z = w$$

allowing us to take $B_0 = I$ as our causal interpolating operator. Note that with this optimal choice of $B_0$ we have

$$z - AB_0w = \chi_{(0,2]} - \chi_{(1,2]} = \chi_{(0,1]}$$

so that $\|z - AB_0w\|^2 = 1$. On the other hand, without the causality constraint, one could take any $B_0$ satisfying $B_0w = Pz = \chi_{(-1,1]}$, in which case $z - AB_0w = 0$ and so is its norm. Accordingly, the causality constraint has strictly reduced the performance of our estimator.

4. Example: Now consider the Hilbert resolution space $l_2(-\infty, \infty)$ with its usual resolution structure and assume that $w \in H_0$. This space turns out to be especially well suited for our interpolation problem since one can identify a signal $x \in l_2(-\infty, \infty)$ with an operator $X \in \mathscr{B}$ by means of the equality $Xu = x * u$, where $*$ denotes convolution. In particular, $X \in \mathscr{C}$ if and only if $x \in H_0$ (1.B.11), and so if we let $x = E_0C^{*-1}A^*z$, then the operator $C^{-1}XW^{-1}$ interpolates the pair $\{w, C^{-1}E_0C^{*-1}A^*z\}$. To verify this recall that $w = W\delta$ and $x = X\delta$, where the $\delta$ sequence in $l_2(-\infty, \infty)$ is defined by

$$\delta_i = \begin{cases} 1, & i = 0 \\ 0, & i \neq 0 \end{cases}$$

Thus

$$[C^{-1}XW^{-1}](w) = [C^{-1}XW^{-1}W](\delta) = [C^{-1}X](\delta) = [C^{-1}](x) = C^{-1}E_0C^{*-1}A^*z$$

verifying the interpolation property. Moreover, $C^{-1} \in \mathscr{C}$ by construction, while $X \in \mathscr{C}$ since $x = E_0C^{*-1}A^*z \in H_0$. Hence for $C^{-1}XW^{-1}$ to be causal it suffices that $W^{-1}$ be causal. To this end the assumption that $w \in H_0$ implies that $W \in \mathscr{C}$, and hence the only restriction required to obtain a solution of our optimization problem is that $W \in \mathscr{C} \cap \mathscr{C}^{-1}$. Although this process for obtaining a causal interpolating operator might at first seem to be somewhat roundabout, it in fact reduces to the classical frequency domain solution of the optimal servomechanism problem. Here, however, one typically lets $V = C^{*-1}A^*Z$ and then recognizes that $X = [V]_{\mathscr{C}}$, obtaining the more familiar solution $B_0 = C^{-1}[V]_{\mathscr{C}}W^{-1}$, where $[V]_{\mathscr{C}}$ denotes the causal part of $V$.

5. Remark: Although the above examples indicate that the causal interpolating operator needed for the solution of the basic optimization problem can often be computed by ad hoc means, it remains to formulate a criterion for its existence.³ We begin in the finite-dimensional case.
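Before turning to the finite-dimensional theory, the identification of signals with convolution operators used in Example 4 can itself be illustrated with finite lower triangular Toeplitz matrices. The sketch below is our own discretization (signal values and truncation length are arbitrary, not from the text); with $C = I$ it checks that $XW^{-1}$ interpolates the pair $\{w, x\}$ and that $W^{-1}$ is again causal when $w_0 \neq 0$.

```python
import numpy as np

def conv_op(h, n):
    """Lower triangular Toeplitz matrix representing u -> h * u
    for a causal impulse response h supported on indices 0..len(h)-1."""
    X = np.zeros((n, n))
    for j in range(n):
        for i in range(j, min(n, j + len(h))):
            X[i, j] = h[i - j]
    return X

n = 8
w = np.array([1.0, 0.5, -0.25])       # w_0 != 0, so W is causally invertible
x = np.array([2.0, -1.0, 0.5, 0.25])  # target signal in H_0

W = conv_op(w, n)
X = conv_op(x, n)

# delta sequence: w = W @ delta and x = X @ delta
delta = np.zeros(n); delta[0] = 1.0
assert np.allclose(W @ delta, np.r_[w, np.zeros(n - len(w))])

# The operator X W^{-1} interpolates the pair {w, x}: (X W^{-1}) w = X delta = x
w_vec = W @ delta
interp = X @ np.linalg.solve(W, w_vec)
assert np.allclose(interp, np.r_[x, np.zeros(n - len(x))])

# W^{-1} is causal (lower triangular) since w_0 != 0
assert np.allclose(np.triu(np.linalg.inv(W), 1), 0, atol=1e-12)
print("convolution operator X W^{-1} causally interpolates {w, x}")
```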

6. Lemma: Let $x = \operatorname{col}(x_i)$ and $y = \operatorname{col}(y_i)$ be $n$ vectors and suppose that

$$\sum_{i=1}^{r} |y_i|^2 \leq K^2 \sum_{i=1}^{r} |x_i|^2, \qquad r = 1, 2, \ldots, n$$

Then there exists a lower triangular $n \times n$ matrix $A$ such that $y = Ax$ and $\|A\| \leq K$.

Proof: Since the case $n = 1$ is trivial, we give an inductive proof: we assume that the lemma holds for $n = 1, 2, \ldots, p - 1$ and show that it also holds for $n = p$. To this end we let $x' = \operatorname{col}(x_1, x_2, \ldots, x_{p-1})$ and $y' = \operatorname{col}(y_1, y_2, \ldots, y_{p-1})$, in which case the inductive hypothesis implies that there exists a lower triangular $(p-1) \times (p-1)$ matrix $A'$ such that $y' = A'x'$ and $\|A'\| \leq K$. Taking the entries of this matrix to be the first $p - 1$ rows and columns of the desired matrix, we have constructed $a_{ij}$, $i, j = 1, 2, \ldots, p - 1$, such that

$$a_{ij} = 0, \qquad j > i$$

and

$$\|A'z'\|^2 = \sum_{i=1}^{p-1}\Big|\sum_{j=1}^{i} a_{ij}z_j\Big|^2 \leq K^2\sum_{i=1}^{p-1}|z_i|^2$$

for any $p$ vector $z = \operatorname{col}(z_i)$, where $z' = \operatorname{col}(z_1, \ldots, z_{p-1})$. Now, since

$$\sum_{i=1}^{p} |y_i|^2 \leq K^2 \sum_{i=1}^{p} |x_i|^2$$

we may define a linear functional on the one-dimensional subspace spanned by the vector $x$:

$$\psi(cx) = cy_p$$

for any scalar $c$. Moreover,

$$|\psi(cx)| \leq \|cx\|_s$$

where $\|\cdot\|_s$ is the seminorm defined on the $p$ vector $z = \operatorname{col}(z_i)$ by

$$\|z\|_s = \Big(K^2\sum_{i=1}^{p}|z_i|^2 - \|A'z'\|^2\Big)^{1/2}$$

(a seminorm since $\|A'\| \leq K$; the bound on $|\psi(cx)|$ holds by the hypothesis with $r = p$, since $\|A'x'\|^2 = \sum_{i=1}^{p-1}|y_i|^2$). As such, $\psi(cx)$ by the Hahn-Banach theorem admits an extension to $p$ space, $\phi(z)$, satisfying the inequality $|\phi(z)| \leq \|z\|_s$. Finally, since every linear functional on $n$ space can be represented by a row vector, there exist complex scalars $a_{pj}$ such that

$$\phi(z) = \sum_{j=1}^{p} a_{pj}z_j$$

that define the remainder of the nonzero entries in the desired $A$ matrix. Indeed, the first $p - 1$ entries of $Ax$ are $A'x' = y'$ while the last entry is $\phi(x) = \psi(x) = y_p$, so that $y = Ax$, and for any $z$

$$\|Az\|^2 = \|A'z'\|^2 + |\phi(z)|^2 \leq \|A'z'\|^2 + \|z\|_s^2 = K^2\sum_{i=1}^{p}|z_i|^2$$

verifying that $\|A\| \leq K$. //

7. Remark: In finite dimensions the inequality required by the hypotheses of Lemma 6 is always satisfied for some $K$ if $x_1 \neq 0$, in which case an interpolating matrix exists. The inequality, however, characterizes the norm of the resultant interpolating matrix, thereby allowing the theory to be extended to the infinite-dimensional case.

8. Lemma: Let $\mathscr{P} = \{E^i : i = 1, 2, \ldots, n\}$ be a partition of $\mathscr{E}$ and assume that

$$\|E^iy\| \leq K\|E^ix\|, \qquad i = 1, 2, \ldots, n$$

for specified $x, y \in H$. Then there exists $A \in \mathscr{B}$ such that

(i) $E^iA = E^iAE^i$, $i = 1, 2, \ldots, n$;
(ii) $y = Ax$; and
(iii) $\|A\| \leq K$.


Proof: Let $x_i = \Delta^ix$ and $y_i = \Delta^iy$, where $\Delta^i = E^i - E^{i-1}$ (with $E^0 = 0$), and define two $n$ vectors $x'$ and $y'$ by $x' = \operatorname{col}(\|x_i\|)$ and $y' = \operatorname{col}(\|y_i\|)$. Now

$$\sum_{i=1}^{r} |y_i'|^2 = \sum_{i=1}^{r} \|\Delta^iy\|^2 = \|E^ry\|^2 \leq K^2\|E^rx\|^2 = K^2\sum_{i=1}^{r} |x_i'|^2, \qquad r = 1, 2, \ldots, n$$

The hypotheses of Lemma 6 are therefore satisfied, and there exists a lower triangular $n \times n$ matrix $B$ with $\|B\| \leq K$ such that $y' = Bx'$; i.e.,

$$\sum_{j=1}^{n} b_{ij}\|x_j\| = \|y_i\|$$

Moreover, without loss of generality we may assume that $b_{ij} = 0$ when either $x_j = 0$ or $y_i = 0$, allowing us to define a matrix $C$ by

$$c_{ij} = \begin{cases} b_{ij}/(\|x_j\| \, \|y_i\|), & b_{ij} \neq 0 \\ 0, & b_{ij} = 0 \end{cases}$$

Finally, the required $A$ is constructed:

$$Az = \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij}(z, x_j)y_i$$

Now, since the $x_j$ are mutually orthogonal with $(x, x_j) = \|x_j\|^2$,

$$Ax = \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij}\|x_j\|^2 y_i = \sum_{i=1}^{n} \Big[\sum_{j=1}^{n} b_{ij}\|x_j\|\Big] \frac{y_i}{\|y_i\|} = \sum_{i=1}^{n} y_i = y$$

verifying property (ii). Next, for any $k = 0, 1, 2, \ldots, n$, $E^ky_i = E^k\Delta^iy = y_i$ if $i \leq k$ while $E^ky_i = 0$ for $i > k$, and similarly for $E^kx_j$. Accordingly, using the fact that $C$ is lower triangular ($c_{ij} = 0$ for $j > i$) to limit the range of summation over $j$,

$$E^kAz = \sum_{i=1}^{k} \sum_{j=1}^{i} c_{ij}(z, x_j)y_i = \sum_{i=1}^{k} \sum_{j=1}^{i} c_{ij}(E^kz, x_j)y_i = E^kAE^kz$$

verifying property (i) of the lemma. Finally, to show that $\|A\| \leq K$ we denote by $H_x$ the finite-dimensional subspace

$$H_x = \vee\{x_i\}$$

in which case

$$\|A\| = \sup_{z \neq 0} \|Az\|/\|z\| = \sup_{z \neq 0,\, z \in H_x} \|Az\|/\|z\|$$

since $Az = 0$ if $z$ is orthogonal to $H_x$. Moreover, since the $x_i$ are mutually orthogonal, if $z \in H_x$ we may write $z = \sum_j \alpha_jx_j$, whence

$$\|Az\|^2 = \sum_{i=1}^{n} \Big|\sum_{j=1}^{n} b_{ij}\alpha_j\|x_j\|\Big|^2 \leq K^2\sum_{j=1}^{n} |\alpha_j|^2\|x_j\|^2 = K^2\|z\|^2$$

We have verified that $\|A\| \leq K$, thereby completing the proof. //


9. Property: Let $x, y \in H$. Then there exists $A \in \mathscr{C}$ such that $y = Ax$ if and only if

$$K = \sup_{t \in \mathscr{S}} \|E^ty\|/\|E^tx\| < \infty$$

where $\|E^ty\|/\|E^tx\|$ is taken to be zero if $E^ty = E^tx = 0$ and infinite if $E^tx = 0$ but $E^ty \neq 0$. Moreover, when this condition is satisfied $\|A\| = K$, and if $B \in \mathscr{C}$ also satisfies $y = Bx$, then $\|B\| \geq \|A\| = K$.

Proof: Assume that $B \in \mathscr{C}$ satisfies $y = Bx$. Then

$$\|E^ty\| = \|E^tBx\| = \|E^tBE^tx\| \leq \|B\| \, \|E^tx\|$$

showing that $\|B\| \geq \|E^ty\|/\|E^tx\|$ for all $t \in \mathscr{S}$. Hence for any $\epsilon > 0$ there exists $t \in \mathscr{S}$ such that

$$\|B\| \geq \|E^ty\|/\|E^tx\| > K - \epsilon$$

verifying that $\|B\| \geq K$, and that $K < \infty$ if there exists $B \in \mathscr{C}$ such that $y = Bx$. Conversely, if $K < \infty$, Lemma 8 implies that for each partition $\mathscr{P} = \{E^i : i = 0, 1, 2, \ldots, n\}$ of $\mathscr{E}$ there exists an operator $A_{\mathscr{P}}$ such that $y = A_{\mathscr{P}}x$, $\|A_{\mathscr{P}}\| \leq K$, and

$$E^iA_{\mathscr{P}} = E^iA_{\mathscr{P}}E^i, \qquad i = 1, 2, \ldots, n$$

Since the partitions of $\mathscr{E}$ form a directed set, the family of operators $A_{\mathscr{P}}$ defines a net of operators taking values in the ball of radius $K$ in $\mathscr{B}$. Moreover, since this ball is weakly compact, the net $A_{\mathscr{P}}$ admits a convergent subnet with limit $A$ in the ball. Clearly $\|A\| \leq K$, whereas for any $z \in H$

$$(Ax, z) = \lim_{\mathscr{P}} (A_{\mathscr{P}}x, z) = (y, z)$$

and hence $Ax = y$. Finally, to verify that $A \in \mathscr{C}$ we let $\mathscr{R}^t$ be the partition $\{0, E^t, I\}$ and we consider the subnet of operators $A_{\mathscr{P}}$ composed of operators in the convergent subnet which also refine $\mathscr{R}^t$. Now, by construction $E^tA_{\mathscr{P}} = E^tA_{\mathscr{P}}E^t$ for each element of this subnet. Moreover, since this subnet converges to $A$, it follows that $E^tA = E^tAE^t$, as was to be shown. //

10. Remark: Although no constructive algorithm for computing $A$ is

known in general, an interpolating operator is given by the formula

$$A = \int_{\mathscr{S}} [E^tx \otimes dE(t)y]/\|E^tx\|^2$$

whenever the indicated integral converges. Indeed, a direct computation yields $Ax = y$, verifying the interpolation condition, whereas for any $z \in H$ a similar computation verifies that $A \in \mathscr{C}$. Here, we have used the fact that $E^t = E^sE^t$ for $s \geq t$ in the interval of integration.
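In discrete time the formula of Remark 10 has a transparent matrix realization (a finite-dimensional sketch of our own; the vectors are arbitrary): row $t$ of $A$ is $y_t\,(E^tx)^*/\|E^tx\|^2$, the minimum-norm causal row achieving $(Ax)_t = y_t$. The code also computes the constant $K$ of Property 9 from the prefix ratios and confirms the norm bound $\|A\| \geq K$ satisfied by every causal interpolant.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Property 9: K = sup over prefix truncations of ||E^t y|| / ||E^t x||
K = max(np.linalg.norm(y[:t]) / np.linalg.norm(x[:t]) for t in range(1, n + 1))

# Discrete analogue of A = ∫ [E^t x ⊗ dE(t) y] / ||E^t x||^2:
# row t is y_t * x[:t+1] / ||x[:t+1]||^2, a minimum-norm causal row.
A = np.zeros((n, n))
for t in range(n):
    xt = x[:t + 1]
    A[t, :t + 1] = y[t] * xt / (xt @ xt)

assert np.allclose(np.triu(A, 1), 0)    # A is causal (lower triangular)
assert np.allclose(A @ x, y)            # A interpolates: Ax = y

# Necessity half of Property 9: any causal interpolant has norm >= K
assert np.linalg.norm(A, 2) >= K - 1e-9
print("interpolation formula verified; K =", K)
```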

B. THE SERVOMECHANISM PROBLEM

1. Remark: The most elementary servomechanism problem is illustrated in Fig. 1. Here, we have an open loop system with plant $P$ and compensator $M$ that is disturbed at the output by a signal $n \in H_t$. Our problem is to operate on $n$ with a causal compensator that minimizes

$$J(M) = \|y\|^2 + \|r\|^2$$

over $M \in \mathscr{C}$. Physically, the requirement that we minimize $\|y\|^2$ means that the output of the plant must follow $-n$, so that our problem is really an optimal tracking problem. Of course, since $P$ is linear, if one could apply arbitrarily large inputs $r$ to $P$, this tracking problem would be straightforward. In practice, however, the norm of the input must be limited, and hence we minimize the sum of $\|y\|^2$ and $\|r\|^2$ to obtain a compromise between the tracking requirement and input energy.

Fig. 1. Open loop servomechanism problem.


2. Property: Let $P \in \mathscr{C}$ and assume that $[I + P^*P]$ admits a spectral factorization $[I + P^*P] = C^*C$, $C \in \mathscr{C} \cap \mathscr{C}^{-1}$. Then if $y$ and $r$ are defined as in Remark 1 and $n \in H_t$, then

$$J(M) = \|y\|^2 + \|r\|^2$$

is minimized over $M \in \mathscr{C}$ by any $M_0 \in \mathscr{C}$ satisfying

$$M_0n = -C^{-1}E_tC^{*-1}P^*n$$

Proof: This optimal servomechanism problem is a special case of the result of the basic optimization theorem. Here

$$z = (n, 0) \in H^2, \qquad w = n \in H_t, \qquad B = M \in \mathscr{C}$$

while $A : H \to H^2$ is characterized by the $2 \times 1$ matrix of operators

$$A = \begin{bmatrix} -P \\ -I \end{bmatrix}$$

Indeed, this yields

$$z - ABw = (n + PMn, Mn) = (y, r)$$

so that the minimization of $\|z - ABw\|^2$ over $B \in \mathscr{C}$ is equivalent to the minimization of $\|y\|^2 + \|r\|^2$ over $M \in \mathscr{C}$. Substituting these terms into the result of the theorem now yields

$$[A^*A] = [I + P^*P] = [C^*C]$$

and

$$M_0n = B_0w = C^{-1}E_tC^{*-1}A^*z = C^{-1}E_tC^{*-1}\begin{bmatrix} -P^* & -I \end{bmatrix}\begin{bmatrix} n \\ 0 \end{bmatrix} = -C^{-1}E_tC^{*-1}P^*n$$

as was to be shown. //

3. Example: Consider the space $L_2(-\infty, \infty)$ with its usual resolution structure, let $P = \delta$ be the ideal delay, and let $n = \chi_{(0,2]} \in H_0$. Now

$$[I + P^*P] = [I + \delta^*\delta] = [2I] = [(\sqrt{2}I)^*(\sqrt{2}I)]$$

Fig. 2. Closed loop servomechanism problem.

showing that $C = \sqrt{2}I$ and that

$$M_0\chi_{(0,2]} = -\tfrac{1}{2}E_0P\chi_{(0,2]} = -\tfrac{1}{2}\chi_{(0,1]}$$

Our compensator must take the form of a causal operator that interpolates the vectors $\chi_{(0,2]}$ and $-\tfrac{1}{2}\chi_{(0,1]} \in H_0$. One such $M_0$ that will achieve this goal is

$$M_0 = -\tfrac{1}{2}E^1$$
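The open loop design of Property 2 and this example can be checked end to end in a sampled-data analogy, with $P$ a one-step delay matrix. The sketch below is our own discretization (dimensions and the disturbance are illustrative, not from the text); it compares $r_0 = M_0n = -C^{-1}E_tC^{*-1}P^*n$ with a direct minimization of $\|n + Pr\|^2 + \|r\|^2$ over inputs supported after time $t$.

```python
import numpy as np

n_dim, t = 8, 2
P = np.diag(np.ones(n_dim - 1), -1)        # one-step delay (lower shift matrix)
M = np.eye(n_dim) + P.T @ P                # I + P*P

# causal (lower triangular) spectral factor M = C^T C via reversed Cholesky
J = np.eye(n_dim)[::-1]
C = J @ np.linalg.cholesky(J @ M @ J).T @ J
assert np.allclose(np.triu(C, 1), 0) and np.allclose(C.T @ C, M)

n_sig = np.zeros(n_dim); n_sig[t:t + 4] = 1.0   # disturbance n in H_t
Et = np.diag([0.0] * t + [1.0] * (n_dim - t))   # E_t zeroes coordinates < t

# optimal causal input r0 = M0 n = -C^{-1} E_t C^{-T} P^T n
r0 = -np.linalg.solve(C, Et @ np.linalg.solve(C.T, P.T @ n_sig))

# direct check: minimize ||n + P r||^2 + ||r||^2 over r supported on >= t,
# whose stationarity condition is M[t:, t:] r[t:] = -(P^T n)[t:]
v = -np.linalg.solve(M[t:, t:], (P.T @ n_sig)[t:])
assert np.allclose(r0[:t], 0)
assert np.allclose(r0[t:], v)
print("servomechanism formula matches the constrained minimizer")
```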

4. Remark: Although our servomechanism theory can be formulated for the open loop system of Fig. 1, in practice a feedback formulation is preferred. First, the feedback formulation allows us to employ an unstable plant so long as the plant is stabilized by the loop, as illustrated in Fig. 2. Second, in the feedback configuration one senses the system output rather than directly sensing the disturbance n, which may be physically inaccessible.

5. Remark: Since we are interested only in feedback laws that stabilize the closed loop system, we adopt the formulation used in the stabilization theorem. In particular, we assume that $P$ admits a doubly coprime fractional representation

$$\begin{bmatrix} V_{pl} & U_{pl} \\ -N_{pl} & D_{pl} \end{bmatrix} \begin{bmatrix} D_{pr} & -U_{pr} \\ N_{pr} & V_{pr} \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}$$

and we optimize the design over the choice of $W \in \mathscr{C}$. Since our only input to the system is the disturbance $n$, the stabilization theorem implies that

$$y = H_{yn}n = [N_{pr}WD_{pl} + V_{pl}D_{pl}]n$$

and

$$r = H_{rn}n = [D_{pr}WD_{pl} - D_{pr}U_{pr}]n$$

To obtain the optimal stabilizing feedback law we then minimize

$$J(F(W)) = \|y\|^2 + \|r\|^2$$

over $W \in \mathscr{C}$ and use the minimizing $W_0$ to define the optimal feedback law

$$F_0 = (W_0N_{pr} + V_{pr})^{-1}(-W_0D_{pl} + U_{pl})$$

6. Property: Assume that $P$ admits a doubly coprime fractional representation as in Remark 5 and assume that $[N_{pr}^*N_{pr} + D_{pr}^*D_{pr}]$ admits a spectral factorization $[N_{pr}^*N_{pr} + D_{pr}^*D_{pr}] = [C^*C]$, $C \in \mathscr{C} \cap \mathscr{C}^{-1}$. Then if $y$ and $r$ are defined as in Remark 5 and $n \in H_t$, then

$$J(F) = \|y\|^2 + \|r\|^2$$

is minimized over the set of stabilizing feedback laws by

$$F_0 = (W_0N_{pr} + V_{pr})^{-1}(-W_0D_{pl} + U_{pl})$$

where $W_0$ is any causal operator such that $(W_0N_{pr} + V_{pr})^{-1} \in \mathscr{B}$ and

$$W_0D_{pl}n = C^{-1}E_tC^{*-1}[-N_{pr}^*V_{pl}D_{pl} + D_{pr}^*D_{pr}U_{pr}]n$$

Proof: As with the open loop servomechanism problem, the solution of the closed loop problem follows from the basic optimization theorem of Section A. In particular, we let

$$z = (V_{pl}D_{pl}n, -D_{pr}U_{pr}n) \in H^2, \qquad w = D_{pl}n \in H_t, \qquad B = W \in \mathscr{C}$$

while $A : H \to H^2$ is characterized by the $2 \times 1$ matrix of operators

$$A = \begin{bmatrix} -N_{pr} \\ -D_{pr} \end{bmatrix}$$

This yields

$$z - ABw = ([N_{pr}WD_{pl} + V_{pl}D_{pl}]n, [D_{pr}WD_{pl} - D_{pr}U_{pr}]n) = (y, r)$$

Hence the minimization of $\|z - ABw\|^2$ over $B \in \mathscr{C}$ is equivalent to the minimization of $\|y\|^2 + \|r\|^2$ over $W$, which in turn parametrizes the stabilizing feedback laws. To solve the minimization problem we have

$$[A^*A] = [N_{pr}^*N_{pr} + D_{pr}^*D_{pr}] = [C^*C]$$

and

$$W_0D_{pl}n = B_0w = C^{-1}E_tC^{*-1}A^*z = C^{-1}E_tC^{*-1}[-N_{pr}^*V_{pl}D_{pl} + D_{pr}^*D_{pr}U_{pr}]n$$

Finally, if $W_0$ is any causal solution of this equation for which $(W_0N_{pr} + V_{pr})^{-1} \in \mathscr{B}$, the stabilization theorem implies that

$$F_0 = (W_0N_{pr} + V_{pr})^{-1}(-W_0D_{pl} + U_{pl})$$

is a stabilizing feedback law minimizing $J(F) = \|y\|^2 + \|r\|^2$ over the set of stabilizing feedback laws, as required. //

7. Example: Let us repeat Example 3 in the case of a closed loop design. As before, $P = \delta$, which, being stable, admits the doubly coprime fractional representation

$$\begin{bmatrix} I & 0 \\ -\delta & I \end{bmatrix}\begin{bmatrix} I & 0 \\ \delta & I \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}$$

i.e., $N_{pr} = N_{pl} = \delta$, $D_{pr} = D_{pl} = V_{pl} = V_{pr} = I$, and $U_{pr} = U_{pl} = 0$. Hence

$$[N_{pr}^*N_{pr} + D_{pr}^*D_{pr}] = [\delta^*\delta + I] = [2I] = [(\sqrt{2}I)^*(\sqrt{2}I)]$$

with $C = \sqrt{2}I$. As in Example 3, we take $n = \chi_{(0,2]} \in H_0$, in which case $W_0$ is defined by

$$W_0\chi_{(0,2]} = -\tfrac{1}{2}E_0P\chi_{(0,2]} = -\tfrac{1}{2}\chi_{(0,1]}$$

and if we take $W_0 = -\tfrac{1}{2}E^1$,

$$F_0 = [-\tfrac{1}{2}E^1\delta + I]^{-1}[\tfrac{1}{2}E^1]$$

is the desired feedback law. For the purpose of comparison, the open and closed loop realizations for our optimal servomechanism are illustrated in Fig. 3.


Fig. 3. (a) Open and (b) closed loop realization of an optimal servomechanism.

8. Remark: Although the open and closed loop configurations of Fig. 3 are quite different, a little algebra will reveal that the two systems have identical responses $y$ and $r$ to any disturbance $n$. This similarity, as well as the identity between $M_0$ and $W_0$, is a manifestation of the simple doubly coprime structure applicable to a stable plant. Of course, if $P$ is unstable, the open loop problem is not well defined and a comparison between the two approaches becomes meaningless.

C. THE REGULATOR PROBLEM

1. Remark: The regulator problem is actually a special case of the servomechanism problem wherein the disturbance $n = \theta_tx(t)$ represents the system's response at time $t$ to a nonzero initial state, which we assume represents a perturbation from nominal (which we take to be zero). A regulator may thus be viewed as a device that drives a system back to its nominal trajectory whenever it deviates therefrom (presumably the effect of some outside force not included in the model). Unlike the general servomechanism problem, however, we can exploit the structure of the state model and the special nature of the disturbance $n = \theta_tx(t)$ to obtain an especially powerful solution to the regulator problem. First, the solution is valid independently of the initial state, and second, the solution can be implemented by memoryless state feedback.⁴,⁵

In the sequel we assume that $P \in \mathscr{C}$ admits a strong minimal state decomposition $\{(S_t, \lambda_t, \theta_t) : t \in \mathscr{S}\}$. It then follows from the results of Section 7.B that $P$ admits a state trajectory decomposition $P = \theta\lambda$, where $\lambda$ is algebraically strictly causal and $\theta$ is memoryless. The resultant plant is illustrated in Fig. 4, where the disturbance is shown in three equivalent forms. Here we have assumed that $x(t) = \lambda_ts$ for some $s \in H^t$, so that $n = \theta_tx(t) = \theta_t\lambda_ts = E_tPE^ts$. Equivalently, however, we can view the disturbance as the term $\phi(\hat{s}, t)x(t)$ added to the state at time $\hat{s} \geq t$. Finally, the disturbance can be viewed as the application of the term $E^ts$ at the input (if it is understood that we are interested only in controlling its effects after time $t$).

Fig. 4. Plant with disturbance applied at the output, state, and input.

Regulator Theorem: Let $P \in \mathscr{C}$ admit a strong minimal state decomposition $\{(S_t, \lambda_t, \theta_t) : t \in \mathscr{S}\}$ and assume that $[I + P^*P]$ admits a spectral factorization $[I + P^*P] = [(I + V)^*(I + V)]$, where $C = I + V$ with $V$ algebraically strictly causal. Then with the notation of Remark 1 the $r_0 \in H$ minimizing

$$J(r) = \|E_ty\|^2 + \|E_tr\|^2$$

over $r \in H$ satisfies

$$E_tr_0 = -E_tV[E_tr_0 + E^ts]$$

Furthermore, if $x_0 = \lambda[E_tr_0 + E^ts]$ is the state trajectory resulting from the application of $r_0$, then there exists a memoryless operator $F$ independent of $s$ such that $E_tr_0 = -E_tFx_0$.

2. Remark: The fundamental point of the theorem is the equality $E_tr_0 = -E_tFx_0$, which permits the optimal regulator to be implemented by means of memoryless state feedback, as illustrated in Fig. 5. Note further that $F$ is independent of $s$, so that this implementation is optimal for any disturbing initial state. Indeed, it is this simple implementation that

makes the regulator especially important in control theory and, more generally, it is the memoryless state feedback concept that makes the state model useful.

Fig. 5. Memoryless state feedback implementation of the optimal regulator.

3. Proof of the Theorem: From Property B.2 the optimal solution of the regulator problem satisfies

$$E_tr_0 = M_0n = -C^{-1}E_tC^{*-1}P^*n = -C^{-1}E_tC^{*-1}P^*E_tPE^ts$$

where $n = E_tPE^ts$. Thus

$$E_tC^*CE_tr_0 = -E_tC^*CC^{-1}E_tC^{*-1}P^*E_tPE^ts = -E_tC^*E_tC^{*-1}P^*E_tPE^ts = -E_tP^*E_tPE^ts = -E_tP^*PE^ts$$

where we have invoked the anticausality of $C^*$, $C^{*-1}$, and $P^*$. Recalling that $[I + P^*P] = [C^*C] = [(I + V)^*(I + V)]$, this becomes

$$E_t(I + V)^*(I + V)E_tr_0 = -E_t[P^*P]E^ts$$

Moreover, $P^*P = V + V^* + V^*V = V^* + (I + V)^*V$, which upon substitution into the preceding equation yields

$$E_t(I + V)^*(I + V)E_tr_0 = -E_t[V^* + (I + V)^*V]E^ts = -E_t(I + V)^*VE^ts$$

where we have used the fact that $E_tV^*E^t = 0$ since $V^* \in \mathscr{C}^*$. Moreover, multiplying both sides of the above equation by $E_t(I + V)^{*-1}$ yields

$$E_t(I + V)E_tr_0 = -E_tVE^ts$$

or equivalently,

$$E_tr_0 = -E_tV[E_tr_0 + E^ts]$$

as was to be shown. Finally, from the lifting theorem it follows that there exists a memoryless operator $F$ such that $V = F\lambda$, whence

$$E_tr_0 = -E_tF\lambda[E_tr_0 + E^ts] = -E_tFx_0$$

completing the proof. //

4. Example: Consider the plant modeled by the differential operator

$$\dot{x}(t) = [-1]x(t) + [\tfrac{1}{2}]r(t), \qquad x(0) = 0$$
$$y(t) = [2]x(t)$$

defined on $L_2[0, \infty)$ with its usual resolution structure. This plant has the equivalent differential operator model

$$P = [D + 1]^{-1}[1]$$

while its input to state trajectory model is characterized by the differential operator

$$\lambda = [D + 1]^{-1}[\tfrac{1}{2}]$$

Now, using the differential operator $Q[-D]$ as a representation of $Q[D]^*$, we have

$$I + P^*P = [D^2 - 1]^{-1}[D^2 - 2] = [(D + 1)(D - 1)]^{-1}[(D + \sqrt{2})(D - \sqrt{2})]$$

and so

$$C = [D + 1]^{-1}[D + \sqrt{2}] = I + [D + 1]^{-1}[\sqrt{2} - 1] = I + V$$

We thus have $V = [D + 1]^{-1}[\sqrt{2} - 1]$ and $\lambda = [D + 1]^{-1}[\tfrac{1}{2}]$. Hence if $V = F\lambda$,

$$F = [2\sqrt{2} - 2]$$

is our required memoryless state feedback operator.
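The optimality of $F = 2\sqrt{2} - 2$ among constant state feedback gains can be confirmed by elementary calculus. With $r = -Fx$ and initial state $x_0$, the closed loop is $\dot{x} = (-1 - \tfrac{1}{2}F)x$, stable for $F > -2$, and

$$J = \int_0^\infty (y^2 + r^2)\,dt = \frac{(4 + F^2)x_0^2}{2 + F}$$

whose stationarity condition $F^2 + 4F - 4 = 0$ gives $F = 2\sqrt{2} - 2$, in agreement with the theorem. A numerical sketch of this check (our own, not from the text):

```python
import numpy as np

def cost(F, x0=1.0):
    """Closed loop cost J = ∫ (y^2 + r^2) dt for xdot = -x + r/2, y = 2x,
    under constant state feedback r = -F x (stable for F > -2)."""
    return (4.0 + F ** 2) * x0 ** 2 / (2.0 + F)

F_opt = 2.0 * np.sqrt(2.0) - 2.0

# grid search over stabilizing gains confirms the analytic optimum
grid = np.linspace(-1.9, 10.0, 200001)
F_num = grid[np.argmin(cost(grid))]
assert abs(F_num - F_opt) < 1e-3
# the stationarity condition F^2 + 4F - 4 = 0 holds exactly at F_opt
assert abs(F_opt ** 2 + 4 * F_opt - 4) < 1e-12
print("optimal memoryless gain:", F_opt)
```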

PROBLEMS

1. Use the projection theorem to show that $B_0w = (A^*A)^{-1}A^*z$ minimizes $\|z - ABw\|^2$ over $B \in \mathscr{B}$ whenever $(A^*A)^{-1}$ exists.
2. Show that $(z, ABw) = (AB_0w, ABw)$ and $(z, AB_0w) = (AB_0w, AB_0w)$ whenever $B_0w = (A^*A)^{-1}A^*z$.


3. On the space $L_2(-\infty, \infty)$ with its usual resolution structure let $A = P$ be the ideal predictor, let $w = \chi_{(0,1]}$, and let $z = \chi_{(1,2]}$. Determine the causal operator $B_0$ that minimizes $\|z - ABw\|^2$ over $B \in \mathscr{C}$.
4. Repeat Example B.3 with disturbance $n = \chi_{(0,1]}$.
5. Using the formulation of Property B.6 show that $[N_{pr}^*N_{pr} + D_{pr}^*D_{pr}]$ is always invertible.
6. Repeat Example B.7 with the plant taken to be the ideal predictor $P$.
7. Determine the memoryless state feedback gain that optimally regulates initial condition disturbances for the plant with state trajectory decomposition $P = \theta\lambda$, where

$$\lambda = [D + 2]^{-1}[2] \qquad \text{and} \qquad \theta = [2]$$

REFERENCES

1. Lance, E. C., Some properties of nest algebras, Proc. London Math. Soc. (3) 19, 45-68 (1969).
2. Porter, W. A., A basic optimization problem in linear systems, Math. Systems Theory 5, 20-44 (1971).
3. Porter, W. A., Data interpolation, causality structure, and system identification, Inform. and Control 29, 217-229 (1975).
4. Schumitzky, A., State feedback control for general linear systems, Proc. Internat. Symp. Math. Networks and Systems, pp. 194-200, T. H. Delft (1979).
5. Steinberger, M., Ph.D. Dissertation, Univ. of Southern California, Los Angeles, California (1977).