A three level costate prediction method for continuous dynamical systems




Automatica, Vol. 14, pp. 177-181. Pergamon Press, 1978.

Printed in Great Britain

Brief Paper

A Three Level Costate Prediction Method for Continuous Dynamical Systems*

M. HASSAN,‡ R. HURTEAU,† M. G. SINGH† and A. TITLI†

Key Word Index--Hierarchical systems; multi-level optimisation; non-linear systems; optimal control; interconnected dynamical systems.

*Received December 13, 1976; revised May 18, 1977; revised September 26, 1977. The original version of this paper was presented at the 4th IFAC Multivariable Technological System Symposium which was held in Fredericton, Canada during July 1977. The published Proceedings of this IFAC Meeting may be ordered from: Pergamon Press, Headington Hill Hall, Oxford OX3 0BW, England. This paper was recommended for publication in revised form by associate editor A. Sage.
†Laboratoire d'Automatique et d'Analyse des Systèmes du CNRS, 7 Avenue du Colonel Roche, 31400 Toulouse, France.
‡LAAS and Cairo University, Faculty of Engineering, Elect. and Comm. Dept., Cairo, Egypt.

Summary--In this paper a continuous time version of a previous discrete systems optimisation algorithm is developed. The new algorithm uses prediction of costates within a three level structure to provide an efficient organisation of both the storage and the computation. The algorithm, which applies to both linear and nonlinear interconnected dynamical systems, has been proved to converge to the optimum. A practical example is given to illustrate the approach. In the example, which is of a nonlinear synchronous machine, the present approach appears to provide faster convergence and smaller storage than previous hierarchical and global methods.

1. Introduction

A NUMBER of hierarchical algorithms have recently been developed for the optimisation of both continuous time [1, 2] and discrete time nonlinear dynamical systems [3]. In each case substantial advantages have been gained in both storage and computation time compared to the global single level solution, even for low order problems. In the case of the continuous time algorithms [1, 2] it is necessary to solve, for each subsystem, a matrix Riccati equation for the linear quadratic problem and a two point boundary value problem for the nonlinear case at the lowest level, whilst in the discrete time case this is reduced to mere substitution into simple vector formulae by using the costates as a part of the coordination vector. This makes the discrete problem solution very attractive in terms both of computer storage and computation time, not only compared to the global single level solution but also compared to the other hierarchical solutions.

Although the discrete costate coordination method is a very powerful one if the original system is in discrete time, it is not desirable to discretise continuous time problems prior to solution by this approach. The reason for this is that unless the discretisation step length is significantly shorter than the integration interval used in the continuous time algorithms of Hassan and Singh [1, 2], the numerical solution of the continuous time problem is not identical to that of the discrete time problem. Fine discretisation, however, increases the computational burden. There is therefore a need for a hierarchical algorithm for continuous time systems which has the desirable properties of the discrete time costate coordination algorithm (i.e. which requires solution of low order vector differential equations, etc.) but which does not require prior discretisation.

In this paper such an algorithm is developed for continuous time systems using a three level structure. The new three level algorithm uses the standard prediction type method of Takahara [4] at the third level and a form of costate prediction at the second. This leaves the lowest level to solve simple low order linear vector equations. It will be shown that such an algorithm ensures uniform convergence to the optimum, and examples will show that the procedure is numerically very efficient. In the paper the approach is described and its convergence is proved in the Appendix. The method is then illustrated on a practical example of a nonlinear synchronous machine. It is shown that the present approach requires less storage and computation than even the best hierarchical method which has been developed so far.

2. Problem formulation and the 3 level algorithm

The problem is to minimise

J = \frac{1}{2} \int_0^T \left( \|x(t)\|_Q^2 + \|u(t)\|_R^2 \right) dt    (1)

subject to

\dot{x} = f(x, u, t)    (2)

where \|x\|_Q^2 = x^T Q x, x and u are of high order, and Q, R are assumed to be block diagonal. In order to solve this problem by the new method, rewrite the dynamical equation (2) in the linearised form

\dot{x}(t) = A(x, u, t) x(t) + B(x, u, t) u(t) + D(x, u, t)

such that A, B are block diagonal and the rest of the nonlinearities, as well as the off-diagonal terms, are in D(x, u, t). Then, in order to solve a succession of simple low order linear quadratic problems, the overall problem can be rewritten as: minimise J in equation (1) subject to

\dot{x}(t) = A(x^*, u^*, t) x(t) + B(x^*, u^*, t) u(t) + D(x^*, u^*, t)    (3)

x^*(t) = x(t)    (4)

u^*(t) = u(t).    (5)
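To make the decomposition used in (3) concrete, here is a minimal sketch (the two-state example system and all function names are assumptions made for illustration, not taken from the paper) of how a nonlinear interconnected system ẋ = f(x, u, t) can be split into block diagonal A and B plus a term D that absorbs the nonlinearities and interconnections:

```python
# Minimal sketch (illustrative only; this example system is an assumption, not
# the paper's): splitting x_dot = f(x, u, t) into block diagonal A, B plus an
# interaction/nonlinearity term D, as required by equation (3).
import numpy as np

def f(x, u, t):
    # Hypothetical two-state interconnected nonlinear system.
    return np.array([-x[0] + 0.5 * np.sin(x[1]) + u[0],
                     -2.0 * x[1] + x[0] * x[1] + u[1]])

def A(x_star, u_star, t):
    # Block diagonal (here simply diagonal) linear part chosen by the designer.
    return np.diag([-1.0, -2.0])

def B(x_star, u_star, t):
    return np.eye(2)

def D(x_star, u_star, t):
    # Everything left over: nonlinearities and off-diagonal interconnections.
    return (f(x_star, u_star, t)
            - A(x_star, u_star, t) @ x_star
            - B(x_star, u_star, t) @ u_star)

# With x*, u* held fixed (they are predicted by the upper levels), the dynamics
# in (3) are linear in x and u, which is what makes the lowest level cheap.
x_s, u_s = np.array([0.1, 0.2]), np.array([0.0, 0.0])
assert np.allclose(f(x_s, u_s, 0.0),
                   A(x_s, u_s, 0.0) @ x_s + B(x_s, u_s, 0.0) @ u_s + D(x_s, u_s, 0.0))
```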

In order to solve this problem using the new three level structure, write the Hamiltonian of the system as

H = \frac{1}{2}\|x(t)\|_Q^2 + \frac{1}{2}\|u(t)\|_R^2 + \lambda^T(t)\left[ A(x^*, u^*, t) x(t) + B(x^*, u^*, t) u(t) + D(x^*, u^*, t) \right] + \beta^T(t)\left[ x(t) - x^*(t) \right] + \nu^T(t)\left[ u(t) - u^*(t) \right].

Then from the necessary conditions for optimality:

\partial H / \partial \beta = 0   or   x^*(t) = x(t)    (6)

\partial H / \partial \nu = 0   or   u^*(t) = u(t)    (7)

\partial H / \partial u(t) = 0   or   R u(t) + B^T(x^*, u^*, t)\lambda(t) + \nu(t) = 0

or

u(t) = -R^{-1}\left[ B^T(x^*, u^*, t)\lambda(t) + \nu(t) \right]    (8)

\partial H / \partial \lambda = \dot{x}(t)   or

\dot{x}(t) = A(x^*, u^*, t) x(t) - B(x^*, u^*, t) R^{-1}\left[ B^T(x^*, u^*, t)\lambda(t) + \nu(t) \right] + D(x^*, u^*, t)    (9)
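Equation (8) is the only computation the control requires once λ(t) and ν(t) are available, and because R and B are block diagonal it decomposes into independent low order operations per subsystem. A small hedged sketch (the block sizes and numerical values are invented for illustration):

```python
# Illustrative sketch (assumed shapes and values, not from the paper): evaluating
# the control law (8), u = -R^{-1} (B^T lambda + nu), block by block.  Since R and
# B are block diagonal, each subsystem only manipulates its own low order vectors,
# which is why level 1 reduces to simple substitution rather than matrix equations.
import numpy as np

def control_from_costate(R_blocks, B_blocks, lam_blocks, nu_blocks):
    """Per-subsystem evaluation of u_i = -R_i^{-1} (B_i^T lambda_i + nu_i)."""
    return [-np.linalg.solve(R_i, B_i.T @ lam_i + nu_i)
            for R_i, B_i, lam_i, nu_i in zip(R_blocks, B_blocks, lam_blocks, nu_blocks)]

# Two hypothetical subsystems, each with 2 states and 1 control.
R_blocks = [np.array([[2.0]]), np.array([[1.0]])]
B_blocks = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.0]])]
lam_blocks = [np.array([0.3, -0.1]), np.array([0.05, 0.2])]
nu_blocks = [np.array([0.01]), np.array([-0.02])]
print(control_from_costate(R_blocks, B_blocks, lam_blocks, nu_blocks))
```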

\partial H / \partial x(t) = -\dot{\lambda}(t)   or

\dot{\lambda}(t) = -Q x(t) - A^T(x^*, u^*, t)\lambda(t) - \beta(t), \qquad \lambda(T) = 0    (10)

\partial H / \partial x^* = 0   or

\beta(t) = \left[ \frac{\partial \mathcal{A}^T(x^*, u^*, x, t)}{\partial x^*} + \frac{\partial \mathcal{B}^T(x^*, u^*, u, t)}{\partial x^*} + \frac{\partial D^T(x^*, u^*, t)}{\partial x^*} \right] \lambda(t)    (11)

where

\mathcal{A}(x^*, u^*, x, t) = A(x^*, u^*, t) x(t), \qquad \mathcal{B}(x^*, u^*, u, t) = B(x^*, u^*, t) u(t)

\partial H / \partial u^*(t) = 0   or

\nu(t) = \left[ \frac{\partial \mathcal{A}^T(x^*, u^*, x, t)}{\partial u^*} + \frac{\partial \mathcal{B}^T(x^*, u^*, u, t)}{\partial u^*} + \frac{\partial D^T(x^*, u^*, t)}{\partial u^*} \right] \lambda(t).    (12)

Now let

\beta(t) = E(x^*, u^*, x, u, t)\lambda(t)    (13)

\nu(t) = G(x^*, u^*, x, u, t)\lambda(t).    (14)

Using (13) in (10) we have

\dot{\lambda}(t) = -Q x(t) - A^T(x^*, u^*, t)\lambda(t) - E(x^*, u^*, x, u, t)\lambda(t)

or

\dot{\lambda}(t) = -Q x(t) - F(x^*, u^*, x, u, t)\lambda(t), \qquad \lambda(T) = 0    (15)

where

F(x^*, u^*, x, u, t) = A^T(x^*, u^*, t) + E(x^*, u^*, x, u, t).

Essentially, in the new method, (6)-(15) are iteratively satisfied within a three level structure. The various steps of the algorithm are:

Step 1: Set the iteration index of the third level, i.e. k = 1, and provide a guess of the initial trajectories of x*, u* to the lower levels.

Step 2: At level 2, guess the initial trajectories λ(t), ν(t), send these to level 1, and set the iteration index of level 2, i.e. L, to 1.

Step 3: At level 1, integrate (9) forward in time with x(t_0) = x_0, calculate u(t) from (8) and send these to level 2.

Step 4: At level 2, integrate (10) backwards from λ(T) = 0 and label the λ trajectory obtained by the index L+1, i.e. λ(t)^{L+1}. Also calculate ν(t) and label this new trajectory ν(t)^{L+1}. If

\sqrt{\int_0^T \left( \lambda^{L+1}(t) - \lambda^{L}(t) \right)^2 dt} \le \varepsilon_\lambda

is not satisfied, where ε_λ is a small prechosen positive number, go to step 3. Else go to step 5.

Step 5: At level 3, use the x, u obtained from the first level to produce a new x*, u* by using (6) and (7), and label these new trajectories x*(t)^{k+1}, u*(t)^{k+1}. If

\sqrt{\int_0^T \left( x^{*k+1}(t) - x^{*k}(t) \right)^2 dt} \le \varepsilon_x

and

\sqrt{\int_0^T \left( u^{*k+1}(t) - u^{*k}(t) \right)^2 dt} \le \varepsilon_u

where ε_x, ε_u are also small positive prechosen constants, stop and record x, u as the optimal state and control trajectories. Otherwise send x*, u* to level 2 and go to step 2.

Remark. This three level algorithm would appear to be very efficient since, at each of the levels of the hierarchy, merely vector equations are manipulated as opposed to the matrix equations in previous methods. Having described the method, it is illustrated next on the practical problem of a nonlinear synchronous machine.
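The three level structure above can be prototyped in a few dozen lines. The sketch below is a rough illustration under several assumptions that are not in the paper: a hypothetical two-state example system, forward/backward Euler integration on a fixed grid, zero initial guesses for λ and ν, and arbitrary tolerances. It follows steps 1-5: level 1 integrates (9) forward and applies (8), level 2 integrates (15) backwards from λ(T) = 0 and forms ν from (14), and level 3 predicts x*, u* through (6) and (7).

```python
# Rough prototype of the three level algorithm (steps 1-5) on a hypothetical
# two-state example.  The system, grid, tolerances and initial guesses are
# assumptions made for illustration; they are not taken from the paper.
import numpy as np

A = np.diag([-1.0, -2.0])              # block diagonal linear part
B = np.eye(2)
Q = np.eye(2)
R = np.eye(2)
x0 = np.array([0.5, -0.3])
T, N = 1.0, 200                        # horizon and number of grid intervals
dt = T / N

def D(xs):                             # interaction / nonlinearity term
    return np.array([0.5 * np.sin(xs[1]), xs[0] * xs[1]])

def dD_dx(xs):                         # Jacobian of D w.r.t. x* (no u* dependence here)
    return np.array([[0.0, 0.5 * np.cos(xs[1])],
                     [xs[1], xs[0]]])

Rinv = np.linalg.inv(R)

def level1_forward(xs, lam, nu):
    """Step 3: integrate (9) forward in time; u(t) comes from (8)."""
    x = np.zeros((N + 1, 2)); u = np.zeros((N + 1, 2))
    x[0] = x0
    for i in range(N + 1):
        u[i] = -Rinv @ (B.T @ lam[i] + nu[i])
        if i < N:
            x[i + 1] = x[i] + dt * (A @ x[i] + B @ u[i] + D(xs[i]))
    return x, u

def level2_backward(xs, x):
    """Step 4: integrate (15) backwards from lambda(T) = 0; nu from (14)."""
    lam = np.zeros((N + 1, 2))
    for i in range(N, 0, -1):
        E = dD_dx(xs[i]).T             # E in (13): beta = E * lambda
        F = A.T + E                    # F in (15)
        lam[i - 1] = lam[i] + dt * (Q @ x[i] + F @ lam[i])
    nu = np.zeros((N + 1, 2))          # G = (dD/du*)^T = 0 for this example, so nu = 0
    return lam, nu

def traj_dist(a, b):                   # sqrt of the integral of the squared difference
    return np.sqrt(np.sum((a - b) ** 2) * dt)

# Step 1: initial guess of x*, u*; Step 2: initial guess of lambda, nu.
xs = np.tile(x0, (N + 1, 1))
us = np.zeros((N + 1, 2))
for k in range(50):                                # level 3 iterations (step 5)
    lam, nu = np.zeros((N + 1, 2)), np.zeros((N + 1, 2))
    for L in range(50):                            # level 2 iterations (step 4)
        x, u = level1_forward(xs, lam, nu)         # level 1 (step 3)
        lam_new, nu = level2_backward(xs, x)
        done = traj_dist(lam_new, lam) <= 1e-8
        lam = lam_new
        if done:
            break
    if traj_dist(x, xs) <= 1e-8 and traj_dist(u, us) <= 1e-8:
        break
    xs, us = x.copy(), u.copy()                    # (6)-(7): new predictions x*, u*

cost = 0.5 * dt * sum(xi @ Q @ xi + ui @ R @ ui for xi, ui in zip(x, u))
print(f"stopped after {k + 1} third-level iterations, cost = {cost:.4f}")
```

The structure mirrors the remark above: each level only manipulates vectors of subsystem order, and no Riccati equations or two point boundary value problems appear.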

3. Example 1: Optimal control of synchronous machine excitation

The problem is to manipulate the voltage applied to a synchronous machine in order to bring it back to its steady state operation. The model for the system is given by Mukhopadhyay [5] as

\dot{y}_1 = y_2

\dot{y}_2 = B_1 - A_1 y_2 - A_2 y_3 \sin y_1 + \frac{B_2}{2}\sin 2y_1

\dot{y}_3 = u_1 - C_1 y_3 + C_2 \cos y_1

where
y_1 is the rotor angle (radians),
y_2 is the speed deviation (= dy_1/dt) (rad/s),
y_3 is the field flux linkage, and
u_1, the control variable, is the voltage applied to the machine.

The parameter values are:

A_1 = 0.2703, A_2 = 12.012, B_1 = 39.1892, B_2 = 48.048, C_1 = 0.3222, C_2 = 1.9.

It is desired to minimise

J = \frac{1}{2}\int_0^T \left[ (y_1 - 0.7461)^2 + y_2^2 + 0.1(y_3 - 7.7438)^2 + 100(u_1 - 1.1)^2 \right] dt.

Results. The algorithm was programmed on an IBM 370/165 digital computer to solve the above problem. Convergence to the optimum took place in 1.67 seconds, compared to 15 seconds using the global quasilinearisation solution and 6.26 seconds using an efficient hierarchical solution (cf. Hassan et al. [1]), both on the same computer. The trajectories in each case were identical and are given in [1]. Thus the new method does appear to provide excellent convergence. Table 1 gives the number of iterations for convergence at level 2 for each iteration of level 3.

TABLE 1
Iteration No. of the 3rd level    No. of iterations required for the 2nd level to converge
1    2
2    2
3    2
4    2
5    2
6    2
7    2
8    1

The results can be interpreted as follows:
(a) Since the jobs at each level are easier (given that at each level only a part of the vector to be predicted is dealt with), and since convergence is rapid as seen in Table 1, the computation time is small compared to previous methods.
(b) An important feature of the method, which is perhaps the main contribution of the paper, is that unlike previous methods, where it was necessary to solve a two point boundary value problem at the first level, here it is merely necessary to integrate a linear differential equation with a given boundary condition.
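For readers who wish to reproduce the setting numerically, the short sketch below sets up the machine model and the cost integrand with the parameter values above, and simulates the open loop response from a disturbed initial state under the nominal field voltage. The horizon, the disturbance and the Euler integration scheme are assumptions made for illustration; they are not values taken from the paper.

```python
# Illustrative setup of the synchronous machine example (model and weights as
# given above; horizon, initial disturbance and integration scheme are assumed).
import numpy as np

A1, A2, B1, B2, C1, C2 = 0.2703, 12.012, 39.1892, 48.048, 0.3222, 1.9
y_ref = np.array([0.7461, 0.0, 7.7438])      # target operating point
u_ref = 1.1                                  # nominal field voltage

def f(y, u):
    """Machine dynamics; y = (rotor angle, speed deviation, field flux linkage)."""
    y1, y2, y3 = y
    return np.array([
        y2,
        B1 - A1 * y2 - A2 * y3 * np.sin(y1) + 0.5 * B2 * np.sin(2.0 * y1),
        u - C1 * y3 + C2 * np.cos(y1),
    ])

def running_cost(y, u):
    """Integrand of the performance index J."""
    return 0.5 * ((y[0] - 0.7461) ** 2 + y[1] ** 2
                  + 0.1 * (y[2] - 7.7438) ** 2 + 100.0 * (u - 1.1) ** 2)

# Open loop response to a disturbed initial state under the nominal voltage,
# with the cost accumulated by simple Euler integration.
T_horizon, N = 2.0, 2000
dt = T_horizon / N
y = y_ref + np.array([0.3, 0.0, -0.5])       # hypothetical disturbance
J = 0.0
for _ in range(N):
    J += running_cost(y, u_ref) * dt
    y = y + dt * f(y, u_ref)

print("state at t =", T_horizon, ":", np.round(y, 4), "| accumulated cost:", round(J, 4))
```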


Conclusions

In this paper a new three level method has been developed for the optimisation of nonlinear (and linear) interconnected dynamical systems. The algorithm uses prediction of costates and has, for the continuous time case, the favourable properties that the previous discrete time costate coordination method [3] had, i.e. small storage and computation time, since at the lowest level only vector equations are solved. The method has been proved to converge, and a practical example shows that not only is this convergence extremely rapid, it is in fact more rapid than with previous hierarchical methods.

References

[1] M. Hassan and M. G. Singh: The optimisation of non-linear systems using a new two level method. Automatica 12, 359 (1976).
[2] M. G. Singh and M. Hassan: A two level prediction algorithm for non-linear systems. Automatica 13, 95-96 (1977).
[3] M. Hassan and M. G. Singh: A two level costate prediction algorithm for non-linear systems. Automatica 13, 629-634 (1977).
[4] Y. Takahara: A multi-level structure for a class of dynamical optimisation problems. M.S. thesis, Case Western Reserve University, Cleveland, U.S.A. (1965).
[5] B. K. Mukhopadhyay and O. P. Malik: Optimal control of synchronous machine excitation by quasilinearisation. Proc. IEE 119(1), 91-98 (1972).

Appendix: Convergence of the new three level method

The study of convergence of the algorithm is divided into two parts, i.e. convergence of the first two levels and that of the third level.

Convergence of the first two levels. Assume that one is at iteration k of the 3rd level and iteration L+1 of the second. Let x^{*k}, u^{*k} be the predicted values of x*, u* at the kth iteration of the third level, and let x^k, u^k be the values of x and u when the first two levels converge. At iteration L of the second level, let x^{kL} and u^{kL} be the values of x and u calculated at the first level for the x^{*k} and u^{*k} provided by the third level, with λ^{kL} and ν^{kL} provided by the second one. Now, since ν^{kL} depends on x^{k(L-1)}, u^{k(L-1)} and λ^{kL}, it will be shown that if λ(t) converges in the second level for given x^{*k} and u^{*k}, then ν(t) also converges. From (15),

\lambda^{k(L+1)}(t) = \Phi_1^{kL}(t, t_0)\lambda^{k(L+1)}(t_0) - \int_{t_0}^{t} \Phi_1^{kL}(t, \tau)\, Q\, x^{kL}(\tau)\, d\tau

where \Phi_1^{kL} denotes the transition matrix of the homogeneous part of (15) and \Phi^k (used below) that associated with A(x^{*k}, u^{*k}, t) in (9).

With t_0 = 0 and λ(T) = 0 we have

\lambda^{k(L+1)}(T) = 0 = \Phi_1^{kL}(T, 0)\lambda^{k(L+1)}(0) - \int_0^T \Phi_1^{kL}(T, \tau)\, Q\, x^{kL}(\tau)\, d\tau

so that

\lambda^{k(L+1)}(0) = \int_0^T \Phi_1^{kL}(0, \tau)\, Q\, x^{kL}(\tau)\, d\tau

and hence

\lambda^{k(L+1)}(t) = \Phi_1^{kL}(t, 0)\int_0^T \Phi_1^{kL}(0, \tau)\, Q\, x^{kL}(\tau)\, d\tau - \int_0^t \Phi_1^{kL}(t, \tau)\, Q\, x^{kL}(\tau)\, d\tau = \int_0^T \Phi_1^{kL}(t, \tau)\, Q\, x^{kL}(\tau)\, d\tau - \int_0^t \Phi_1^{kL}(t, \tau)\, Q\, x^{kL}(\tau)\, d\tau

i.e.

\lambda^{k(L+1)}(t) = \int_t^T \Phi_1^{kL}(t, \tau)\, Q\, x^{kL}(\tau)\, d\tau.    (16)

From (14) we have

\nu^{k(L+1)}(t) = G(x^{k*}, u^{k*}, x^{kL}, u^{kL}, t)\lambda^{k(L+1)}(t).

From (9),

x^{kL}(t) = \Phi^k(t, 0)x(0) - \int_0^t \Phi^k(t, \tau)B(x^{k*}, u^{k*}, \tau)R^{-1}\left[ B^T(x^{k*}, u^{k*}, \tau) + G^T(x^{k*}, u^{k*}, x^{k(L-1)}, u^{k(L-1)}, \tau) \right]\lambda^{kL}(\tau)\, d\tau + \int_0^t \Phi^k(t, \tau)D(x^{k*}, u^{k*}, \tau)\, d\tau.    (17)

Substituting (17) in (16), and assuming without loss of generality that x(0) = 0 (noting that the steady state of the system is at the origin), gives

\lambda^{k(L+1)}(t) = \int_t^T \Phi_1^{kL}(t, \tau)\, Q \left\{ -\int_0^{\tau} \psi^{k(L-1)}(\tau, \sigma)\lambda^{kL}(\sigma)\, d\sigma + \int_0^{\tau} \Phi^k(\tau, \sigma)D(x^{k*}, u^{k*}, \sigma)\, d\sigma \right\} d\tau

where

\psi^{k(L-1)}(t, \tau) = \Phi^k(t, \tau)B(x^{k*}, u^{k*}, \tau)R^{-1}\left[ B^T(x^{k*}, u^{k*}, \tau) + G^T(x^{k*}, u^{k*}, x^{k(L-1)}, u^{k(L-1)}, \tau) \right].

Let the length of λ(t) be max_t ‖λ(t)‖, let μ_1 = max ‖Φ_1^{kL}(t, τ)‖, μ_2 = max ‖Q‖, μ_3 = max ‖Φ^k(t, τ)‖ and μ_4 = max_t ‖D(x^{k*}, u^{k*}, t)‖, and let there exist a number β_1 such that max ‖ψ^{kL}(t, τ)‖ ≤ β_1. Taking the maximum over t of the above expression then gives a bound of the form

\max_t \|\lambda^{k(L+1)}(t)\| \le \frac{T^2}{2}\mu_1\mu_2\beta_1 \left[ \max_t \|\lambda^{kL}(t)\| \right] + \frac{T^2}{2}\mu_1\mu_2\mu_3\mu_4.
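As a point of detail (my own expansion of the bounding step above, using only the constants just defined), the T^2/2 factor comes from the area of the triangular integration region:

\[
\left\| \int_t^T \Phi_1^{kL}(t,\tau)\, Q \int_0^{\tau} \psi^{k(L-1)}(\tau,\sigma)\,\lambda^{kL}(\sigma)\, d\sigma\, d\tau \right\|
\le \mu_1 \mu_2 \beta_1 \left[ \max_s \|\lambda^{kL}(s)\| \right] \int_t^T \!\!\int_0^{\tau} d\sigma\, d\tau
\le \frac{T^2}{2}\, \mu_1 \mu_2 \beta_1 \max_s \|\lambda^{kL}(s)\|
\]

since \int_t^T \tau\, d\tau = (T^2 - t^2)/2 \le T^2/2; the term involving D is bounded in exactly the same way.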

Now, if the coefficient (T^2/2)μ_1μ_2β_1 multiplying max_t ‖λ^{kL}(t)‖ in this bound is less than one, the mapping from λ^{kL} to λ^{k(L+1)} is a contraction. If, in addition, ψ is bounded, this leads to the theorem:

Theorem 1. If
(a) λ(t), \mathcal{A}, \mathcal{B} and D are bounded functions of time, and
(b) \mathcal{A}, \mathcal{B} and D are continuously differentiable w.r.t. x* in the open interval [0, T_1[ and their derivatives are bounded,
then there exists an interval such that for all t in the open interval [0, T_1[ the algorithm converges to the optimum solution.

Proof. Write

\lambda^{k(L+1)}(t) = \theta(t) + \xi(t)

where θ(t) collects the terms involving D, which do not depend on λ^{kL}, and ξ(t) collects the terms involving λ^{kL}. Choose T_1 such that

\frac{T_1^2}{2}\mu_1\mu_2\beta_1 = 1.

Then for 0 < t < T_1,

\max_t \|\lambda^{k(L+1)}(t)\| \le \left[ \max_t \|\theta(t)\| \right] + \frac{t^2}{2}\mu_1\mu_2\beta_1 \left[ \max_t \|\lambda^{kL}(t)\| \right]

with (t^2/2)μ_1μ_2β_1 < 1, so that the sequence λ^{kL}, L = 1, 2, ..., generated by the second level converges. Moreover, since from (14)

\nu^{k(L+1)}(t) = G(x^{k*}, u^{k*}, x^{kL}, u^{kL}, t)\lambda^{k(L+1)}(t)

with G bounded, max_t ‖ν^{k(L+1)}(t)‖ is bounded by a constant multiple of max_t ‖λ^{k(L+1)}(t)‖, and hence ν(t) converges whenever λ(t) does.
Convergence of the third level. The third level predicts the values of x^{*k+1} and u^{*k+1} from the x^k and u^k provided by the first two levels. Then, using (6) together with (9),

x^{*k+1}(t) = x^k(t) = \Phi^k(t, 0)x(0) - \int_0^t \psi^k(t, \tau)\lambda^k(\tau)\, d\tau + \int_0^t \Phi^k(t, \tau) D(x^{k*}, u^{k*}, \tau)\, d\tau

where ψ^k is the kernel defined above, evaluated at the converged x^k, u^k. Writing the interaction term in the linear form

D(x^*, u^*, t) = C_1(x^*, u^*, t)x^*(t) + C_2(x^*, u^*, t)u^*(t)

we have

x^{*k+1}(t) = \Phi^k(t, 0)x(0) - \int_0^t \psi^k(t, \tau)\lambda^k(\tau)\, d\tau + \int_0^t \Phi^k(t, \tau) C_1(x^{k*}, u^{k*}, \tau) x^{k*}(\tau)\, d\tau + \int_0^t \Phi^k(t, \tau) C_2(x^{k*}, u^{k*}, \tau) u^{k*}(\tau)\, d\tau.

Similarly, using (7) together with (8) and (14),

u^{*k+1}(t) = u^k(t) = -R^{-1} B^T(x^{k*}, u^{k*}, t)\lambda^k(t) - R^{-1} G(x^{k*}, u^{k*}, x^k, u^k, t)\lambda^k(t) = -\Gamma(x^{k*}, u^{k*}, x^k, u^k, t)\lambda^k(t)

where

\Gamma(x^{k*}, u^{k*}, x^k, u^k, t) = R^{-1}\left[ B^T(x^{k*}, u^{k*}, t) + G(x^{k*}, u^{k*}, x^k, u^k, t) \right].

Now assume again that x(0) = 0, write l = [x^T, u^T]^T, and assume that there exists a number β_2 such that max_t ‖λ^k(t)‖ ≤ β_2 max_t ‖l^k(t)‖, with β_{20} the corresponding number at t = 0 and μ_{70} the bound of Γ at t = 0. Bounding the above expressions for x^{*k+1}(t) and u^{*k+1}(t) in the same way as for the first two levels gives an inequality of the form

\max_t \|l^{k+1}(t)\| \le \left[ \mu_6 t + (\mu_7 + \mu_3 t)\beta_2 \right] \max_t \|l^k(t)\|

where μ_3, μ_6 and μ_7 collect the bounds on Φ^k, C_1, C_2, ψ^k and Γ. When this factor is less than one, x^{*k+1}(t) → 0 and u^{*k+1}(t) → 0 as k → ∞, i.e. the third level converges to the optimum. This leads to the next theorem:

Theorem 2. If
(a) μ_{70}β_{20} < 1,
(b) l = [x^T, u^T]^T, \mathcal{A}(x^*, u^*, x, t), \mathcal{B}(x^*, u^*, u, t) and D(x^*, u^*, t) are bounded functions of time, and
(c) \mathcal{A}, \mathcal{B} and D are continuously differentiable w.r.t. u* in an open interval [0, T_2[ and their derivatives are bounded,
then there exists an interval such that for all t in the open interval [0, T_2[ the algorithm converges to the optimum solution.

Proof. (a) If μ_{70}β_{20} < 1, then ‖l^{k+1}(0)‖ < ‖l^k(0)‖, so that ‖l^k(0)‖ → 0 as k → ∞.
(b) Let T_2 be chosen such that μ_6 T_2 + (μ_7 + μ_3 T_2)β_2 = 1, i.e.

T_2 = \frac{1 - \mu_7\beta_2}{\mu_6 + \mu_3\beta_2};

such a T_2 exists. If t is chosen in the open interval [0, T_2[ then the algorithm converges uniformly to the optimum solution. If T is chosen to be min{T_1, T_2} and t is chosen in the open interval [0, T[, then the second and third levels will converge to the optimum solution.

Remarks. 1. The above theorems show that there exists a horizon of optimisation within which the new three level algorithm will converge uniformly to the optimum solution. In fact, as the example shows, this horizon of optimisation can be long.
2. The above method also applies to large scale linear quadratic problems, although in that case it is no longer necessary to introduce the additional variables x*, u*, β, ν for the linearisation. This makes the method rather attractive computationally.