CMAC-Based Learning Control for a Batch Reactor


Copyright © 1996 IFAC 13th Triennial World Congress, San Francisco, USA


CMAC-BASED LEARNING CONTROL FOR A BATCH REACTOR

Hui Liu, Xiaoming Xu and Zhongjun Zhang

Department of Automation, Shanghai Jiaotong University, Shanghai 200030, P. R. China

E-mail: xmxu@sjtu.edu.cn

Abstract. Nowadays, batch process operations are becoming important in the chemical industry, and more precise tracking is required for batch chemical reactors. To improve the control performance, Cerebellar Model Articulation Controller (CMAC) based learning control is adopted for a batch chemical reactor. Analyzing Albus' algorithm, the paper points out its drawback in batch updating. An improved algorithm and its convergence are discussed. Simulations of three types of control schemes for the batch reactor are compared, which shows that the improved CMAC-based learning control has quick convergence and satisfactory control characteristics.

Keywords. CMAC, Learning control, Batch control

1. INTRODUCTION

Batch processes offer some of the most interesting and challenging problems in modeling and control because of their dynamic nature. Although most of the chemical industry has developed in a continuous fashion, the batch chemical reactor has inherent kinetic advantages over certain continuous reactors. It is well known that batch chemical reactions are highly nonlinear and time varying, so conventional PID control cannot achieve satisfactory control characteristics. Because both the batch process and learning control are repetitive, the latter can be expected to give good control performance for the batch reactor. Originally suggested by Uchiyama (1978) and developed by Arimoto (1984), learning control may be defined as "any control scheme that improves the performance of the device being controlled as actions are repeated, and does so without the necessity of a parametric model of the system" (Craig, 1988). It requires less a priori knowledge about the controlled system and uses practice to improve performance by altering the stored data on the basis of previous tracking errors. It is also similar to the human learning process, in which a task may be attempted many times while finding the inputs that make a complex system accomplish it. Learning control deals with highly uncertain dynamic systems in a very simple manner. It is used mainly in robot applications requiring execution of the same motion with a certain periodicity. Work on this topic has been extensively surveyed by Moore (1992). Later, Miller (1990) implemented real-time dynamic control of an industrial robot with an artificial neural network-based learning controller, in which he used the neural prototype CMAC proposed by Albus (1975). Katoh (1989) used three types of learning control, namely simple learning control, the betterment process and model reference learning control, for control of a batch reactor. The implementation of these three types has the difficulty of choosing the learning parameters.

In this paper, CMAC-based learning control is adopted for a batch chemical reactor. On the basis of a detailed analysis of Albus' algorithm, the authors have found its drawback in batch updating. An improved algorithm is proposed to overcome this drawback, and the convergence of this algorithm is analyzed. Finally, simulation results obtained with a conventional PID controller and with two different algorithms of CMAC-based learning control are compared.

2. DESCRIPTION OF CMAC

CMAC was proposed by Albus in the 1970s. It is a learning structure that imitates the human cerebellum. Each input vector of the CMAC excites c association cells simultaneously, and the network output is the sum of the weights stored in the excited cells.

Assume that there are P training samples, let $d_i$ be the desired output for the ith training sample and $w$ be the weight vector of the CMAC. The network output is

$$f(s_i) = w^T a_i$$

where $a_i$ is the binary association vector of $s_i$ (with exactly c nonzero entries), and the error signal is $\delta_i = d_i - f(s_i)$. For each excited weight, the updating law given by Albus (1975) is

$$w \leftarrow w + \frac{\beta\,\delta_i}{c}\,a_i \qquad (1)$$

where $\beta$ is the learning rate and takes a value between 0 and 1.
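To make the structure concrete, the following is a minimal sketch of a one-dimensional CMAC with the updating law (1); the class layout, grid encoding, and parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

class CMAC:
    """Minimal one-dimensional CMAC: an integer input s excites the c
    consecutive association cells s, s+1, ..., s+c-1, and the network
    output is the sum of their weights."""

    def __init__(self, n_cells, c, beta=0.5):
        self.w = np.zeros(n_cells)  # association-cell weights, initially zero
        self.c = c                  # generalization width (cells per input)
        self.beta = beta            # learning rate, 0 < beta <= 1

    def output(self, s):
        return self.w[s:s + self.c].sum()

    def train_one(self, s, d):
        # Albus' updating law (1): spread the output error equally
        # over the c excited weights, scaled by the learning rate.
        delta = d - self.output(s)
        self.w[s:s + self.c] += self.beta * delta / self.c
        return delta

# One-by-one updating: repeated cyclic training over the sample set.
net = CMAC(n_cells=64, c=8)
samples = [(i, np.sin(i / 8.0)) for i in range(50)]
for epoch in range(100):
    for s, d in samples:
        net.train_one(s, d)
print(max(abs(d - net.output(s)) for s, d in samples))
```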

Generally, there are two ways to update the weights. One is one-by-one updating, which updates the weights at each presentation; the other is batch updating, which saves the changes in the weights for each training sample and updates only after a batch of training samples has been presented. Assume a set of training samples $[(s_1, d_1), (s_2, d_2), \ldots, (s_n, d_n)]$. In one-by-one updating, $(s_1, d_1)$ is trained first, and the corresponding weight vector $w_1$ makes the output of the network satisfy the equation $f(s_1) = d_1$. Then, on the basis of $w_1$, $(s_2, d_2)$ is trained until the output satisfies $f(s_2) = d_2$. Under general circumstances $w_1 \ne w_2$, so when $f(s_1) = d_1$, $f(s_2) \ne d_2$. This phenomenon is called "forgetting". Although forgetting can be overcome by repeated cyclic training, the speed of convergence may be decreased. We suggest using the batch updating method to update the weights. It not only overcomes forgetting, but also converges quickly.

Albus himself did not prove the convergence of CMAC. Wong (1992) and Park (1989) studied the problem from the viewpoint of linear equation solution, and Wong (1993) also from the viewpoint of the frequency domain. They all aimed at one-by-one updating. In this paper, batch updating is investigated and Albus' algorithm, which is found liable to diverge in the process, is improved.

2.1 Albus' algorithm

Because of the built-in generalization of CMAC, when the kth training sample is updated at the lth iteration, the amount of correction $\delta_k^{(l)}/c$, where $\delta_k^{(l)}$ is the output error for the kth training sample at the lth iteration, is bound to affect the outputs of the other training samples. For example, it makes the new output of the ith training sample

$$f(s_i) \leftarrow f(s_i) + \frac{c_{ik}}{c}\,\delta_k^{(l)} \qquad (2)$$

where $c_{ik}$ is the number of association cells that the ith and kth samples share. Obviously, $c_{ik} = c_{ki}$. The accumulated output error for the kth training sample is $E_k = \sum_l \delta_k^{(l)}$. The correction caused by the kth sample's updating of its own associated weights is $\Delta_k = E_k/c$, where $\Delta_k$ is called the accumulated weight error for the kth input sample. Its contribution to the output of the ith sample is $c_{ik}\Delta_k$. When the learning converges, i.e., $\delta_k^{(l)} \to 0$, then $\Delta_k \to \text{const}$. If the weights can be recovered from the accumulated weight errors, the accumulated weight errors $\Delta_k$ can be treated as the variables to be learned instead of the weights $w_i$.

Assuming that the initial weights of the CMAC are zeros, for each input vector $s_i$ the output of the network is

$$f(s_i) = \sum_{j=1}^{P} c_{ij}\,\Delta_j \qquad (3)$$

The goal of learning is to satisfy the equation $f(s_i) = d_i$, i.e., $\sum_{j=1}^{P} c_{ij}\Delta_j = d_i$. Written in matrix form,

$$C\Delta = D \qquad (4)$$

Equivalently, let $A = [a_1, a_2, \ldots, a_P]^T$; then in terms of the original weights,

$$AW = D \qquad (5)$$
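On a one-dimensional grid the overlap counts have a simple closed form (equation (7) below gives $c_{ij} = c - |i-j|$ for $|i-j| < c$). As a sketch, assuming unit spacing between training inputs and illustrative sizes, one can build C and verify the formulation (4) directly:

```python
import numpy as np

def overlap_matrix(P, c):
    """C[i, j] = number of association cells shared by samples i and j,
    assuming unit spacing between training inputs (equation (7))."""
    idx = np.arange(P)
    C = c - np.abs(idx[:, None] - idx[None, :])
    return np.where(C > 0, C, 0)

# With zero initial weights, the network output for sample i is
# f(s_i) = sum_j C[i, j] * Delta[j]  (equation (3)), so learning
# succeeds exactly when C @ Delta = D  (equation (4)).
P, c = 50, 8
C = overlap_matrix(P, c)
D = np.sin(np.linspace(0.0, 2.0 * np.pi, P))   # an arbitrary target vector
Delta = np.linalg.solve(C, D)                  # exact accumulated weight errors
print(np.allclose(C @ Delta, D))               # True
```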

2.2 Weakness of Albus' algorithm and its improvement

Theorem 1. Given a set of training samples composed of input-output pairs from $R^n \mapsto R^m$, if the input space is discretized such that no two training input samples excite the same set of association cells, the convergence condition of Albus' algorithm in batch updating is

$$\frac{\pi}{2} < \omega < \frac{3\pi}{2}$$

Proof. In batch updating, when adopting Albus' algorithm, the accumulated error is updated as follows:

$$\Delta_i^{(l+1)} = \frac{1}{c}\Big(d_i - \sum_{j \ne i} c_{ij}\,\Delta_j^{(l)}\Big) \qquad (6)$$

where

$$c_{ij} = \begin{cases} c - |i-j|, & |i-j| < c \\ 0, & \text{otherwise} \end{cases} \qquad (7)$$

For convenience, assume that the dimension of both input and output is one; the discretizations of all training samples then lie between 0 and R.

$\Delta_i^*$ is one of the desired values, i.e., a fixed point of equation (6),

$$\Delta_i^* = \frac{1}{c}\Big(d_i - \sum_{j \ne i} c_{ij}\,\Delta_j^*\Big) \qquad (8)$$

Let $e_i^{(l)} = \Delta_i^{(l)} - \Delta_i^*$ and subtract equation (8) from (6); then

$$e_i^{(l+1)} = -\frac{1}{c} \sum_{j \ne i} c_{ij}\,e_j^{(l)} \qquad (9)$$

$e^{(l)}$ is a non-periodic sequence of length $N'$, where $N' = R - 2c - 2$. Repeating the interval over the infinite grid, $e$ can be analyzed in the Fourier domain, because $e$ can be expanded in a Fourier series of the form

$$e_n^{(l)} = \sum_k \bar{e}_k^{(l)}\, e^{j\omega_k n} \qquad (10)$$

Equation (9) then becomes

$$\bar{e}_k^{(l+1)} = H(e^{j\omega})\,\bar{e}_k^{(l)} \qquad (11)$$

where

$$H(e^{j\omega}) = -\frac{1}{c}\Big(\sum_{n=1}^{c-1}(c-n)\,e^{j\omega n} - \sum_{n=1}^{c-1}(c-n)\,e^{-j\omega n}\Big) = -j\,\frac{\sin\omega - (\sin c\omega)/c}{2\sin^2(\omega/2)} \qquad (12)$$

Since c is generally set between 20 and 256, $(\sin c\omega)/c$ is too small to be considered. If $\omega \ne 2\pi$,

$$|H(e^{j\omega})| \approx \Big|\frac{1}{\tan(\omega/2)}\Big| \qquad (13)$$

If and only if $|H(e^{j\omega})| < 1$ will the algorithm converge. In a digital frequency period $(2\pi)$, $|H(e^{j\omega})| < 1$ leads to $\pi/2 < \omega < 3\pi/2$. The component at $\omega = 2\pi$, which lies in $\mathrm{Ker}(A)$, has no effect on the convergence of $w$ and $\Delta$. □
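As a sanity check on the reconstructed forms of equations (12) and (13), the following sketch evaluates $|H(e^{j\omega})|$ numerically and compares it with $1/|\tan(\omega/2)|$; the function name and the choice c = 64 are illustrative.

```python
import numpy as np

def H(omega, c):
    """Equation (12): H(e^{jw}) = -(1/c) * (sum_{n=1}^{c-1}(c-n)e^{jwn}
                                            - sum_{n=1}^{c-1}(c-n)e^{-jwn})."""
    n = np.arange(1, c)
    T = np.sum((c - n) * np.exp(1j * omega * n))
    return -(T - np.conj(T)) / c

c = 64
for omega in np.linspace(0.05, 2.0 * np.pi - 0.05, 7):
    approx = 1.0 / abs(np.tan(omega / 2.0))    # equation (13)
    print(f"w={omega:5.2f}  |H|={abs(H(omega, c)):6.3f}"
          f"  1/|tan(w/2)|={approx:6.3f}")     # the two columns nearly agree
```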

Obviously, within a digital frequency period $2\pi$, Albus' algorithm will diverge in the intervals from 0 to $\pi/2$ and from $3\pi/2$ to $2\pi$. To widen the convergence range, equation (1) is modified to

$$w \leftarrow w + \frac{\beta\,\delta_i}{c\,q}\,a_i \qquad (14)$$

where q is the average number of times every weight is updated in every batch.

Theorem 2. Given a set of training samples composed of input-output pairs from $R^n \mapsto R^m$, if the input space is discretized such that no two training input samples excite the same set of association cells, the convergence condition of the improved algorithm in batch updating is $2\arctan(1/q) < \omega < 2(\pi - \arctan(1/q))$, where q is the average number of times every weight is updated in every batch.

Proof. When using the improved algorithm for batch updating, if $\omega \ne 2\pi$, equation (13) is modified to

$$|H(e^{j\omega})| \approx \frac{1}{q}\,\Big|\frac{1}{\tan(\omega/2)}\Big| \qquad (15)$$

If and only if $|H(e^{j\omega})| < 1$ will the algorithm converge. In a digital frequency period $(2\pi)$, $|H(e^{j\omega})| < 1$ leads to $2\arctan(1/q) < \omega < 2(\pi - \arctan(1/q))$. The component at $\omega = 2\pi$, which lies in $\mathrm{Ker}(A)$, has no effect on the convergence of $w$ and $\Delta$. □


Because of the local generalization in CMAC, each input vector makes c cells excited simultaneously. So after a batch of training samples has been presented, the average number of times every weight is updated in every batch is no less than two, i.e., min q = 2. When q = 2, the convergence range of the improved algorithm is $0.93 < \omega < 5.35$ radians, which is wider than that of Albus' algorithm. With the increase of q, the CMAC almost always converges.
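A sketch of one epoch of the improved batch updating follows, again assuming a one-dimensional integer input grid. Here q is estimated from per-batch excitation counts, which is one plausible reading of the definition above; all names are illustrative.

```python
import numpy as np

def batch_epoch_improved(w, samples, c, beta, n_cells):
    """One epoch of batch updating with the improved rule (14): errors are
    evaluated with the weights frozen, each correction is scaled by
    beta/(c*q), and the weights change only after the whole batch."""
    # q = average number of times each excited weight is updated per batch
    counts = np.zeros(n_cells)
    for s, _ in samples:
        counts[s:s + c] += 1
    q = max(counts[counts > 0].mean(), 1.0)

    dw = np.zeros_like(w)
    for s, d in samples:
        cells = slice(s, s + c)
        delta = d - w[cells].sum()            # error with the frozen weights
        dw[cells] += beta * delta / (c * q)   # improved update, equation (14)
    return w + dw, q

# Usage: iterate epochs until the batch errors are small.
samples = [(i, np.sin(i / 8.0)) for i in range(50)]
w = np.zeros(64)
for epoch in range(100):
    w, q = batch_epoch_improved(w, samples, c=8, beta=0.5, n_cells=64)
print(q, max(abs(d - w[s:s + 8].sum()) for s, d in samples))
```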

3. APPLICATION OF CMAC-BASED LEARNING TO A BATCH REACTOR

The batch reactor chosen here is given by Luyben (1990). Reactant is charged into the vessel. Steam is fed into the jacket to bring the reaction mass up to a desired temperature. Then cooling water must be added to the jacket to remove the exothermic heat of reaction and to make the reactor temperature follow the prescribed temperature-time curve. First-order consecutive reactions take place in the reactor as time elapses,

$$A \xrightarrow{k_1} B \xrightarrow{k_2} C \qquad (16)$$

Component continuity for A:

$$V\frac{dC_A}{dt} = -V k_1 C_A \qquad (17)$$

Component continuity for B:

$$V\frac{dC_B}{dt} = V k_1 C_A - V k_2 C_B \qquad (18)$$

Kinetic equations:

$$k_1 = \alpha_1 e^{-E_1/RT}, \qquad k_2 = \alpha_2 e^{-E_2/RT} \qquad (19)$$

Energy equation for the process:

$$\rho C_p V \frac{dT}{dt} = -\lambda_1 V k_1 C_A - \lambda_2 V k_2 C_B - h_i A_i (T - T_M) \qquad (20)$$

where $\lambda_1$ and $\lambda_2$ are the exothermic heats of reaction for the two reactions.

Energy equation for the metal wall:

$$\rho_M C_M V_M \frac{dT_M}{dt} = h_o A_o (T_J - T_M) - h_i A_i (T_M - T) \qquad (21)$$

The operation is divided into two stages, heating and cooling.

Heating phase

Total continuity:

$$V_J \frac{d\rho_J}{dt} = F_s \rho_s - W_c \qquad (22)$$

where
ρ_J = density of steam vapor in the jacket
V_J = volume of the jacket
ρ_s = density of incoming steam
W_c = rate of condensation of steam (mass per time)

Energy equation for the steam vapor:

$$V_J \frac{d(\rho_J u_J)}{dt} = F_s \rho_s H_s - W_c h_c - h_o A_o (T_J - T_M) \qquad (23)$$

where
H_s = enthalpy of incoming steam
h_c = enthalpy of liquid condensate

The steam density follows the perfect-gas law, and the jacket pressure is given by the vapor-pressure equation,

$$\rho_J = \frac{M P_J}{R\,T_J}, \qquad \ln P_J = \frac{A_w}{T_J} + B_w \qquad (24)$$

where
M = molecular weight of steam
A_w and B_w = vapor-pressure constants for water

Cooling phase

Energy equation for the jacket:

$$\rho_J V_J C_J \frac{dT_J}{dt} = F_w C_J \rho_J (T_{J0} - T_J) + h_o A_o (T_M - T_J) \qquad (25)$$

where
T_J = temperature of cooling water in the jacket
ρ_J = density of water
C_J = heat capacity of water
T_{J0} = inlet cooling-water temperature
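The reactor-side equations can be integrated numerically once parameter values are chosen. This excerpt does not give Luyben's numerical values, so the sketch below uses placeholder constants throughout and treats the jacket temperature as a directly set input; it only illustrates the structure of equations (17)-(21).

```python
import numpy as np

# Placeholder parameter values for illustration only; they are not the
# paper's (Luyben's) numbers.
a1, a2 = 1.0e3, 1.0e3           # pre-exponential factors of k1, k2
E1_R, E2_R = 6000.0, 7000.0     # activation energies over R, in K
lam1, lam2 = -4.0e4, -5.0e4     # exothermic heats of reaction (negative)
rho_cp = 1.0e3                  # rho * Cp of the reaction mass
hiAi_V = 8.0                    # h_i * A_i / V, inside film term
rhoM_cM_VM, hoAo, hiAi = 2.0e4, 1.0e3, 8.0e2

def reactor_rhs(x, TJ):
    """Right-hand sides of equations (17)-(21); the jacket temperature
    TJ is treated as the input set by the heating/cooling system."""
    CA, CB, T, TM = x
    k1 = a1 * np.exp(-E1_R / T)             # kinetic equations (19)
    k2 = a2 * np.exp(-E2_R / T)
    dCA = -k1 * CA                          # (17)
    dCB = k1 * CA - k2 * CB                 # (18)
    dT = (-lam1 * k1 * CA - lam2 * k2 * CB) / rho_cp \
         - hiAi_V * (T - TM) / rho_cp       # (20)
    dTM = (hoAo * (TJ - TM) - hiAi * (TM - T)) / rhoM_cM_VM   # (21)
    return np.array([dCA, dCB, dT, dTM])

# Explicit Euler over one batch with a constant jacket temperature.
x, dt = np.array([0.8, 0.0, 300.0, 300.0]), 1.0
for _ in range(3600):
    x = x + dt * reactor_rhs(x, TJ=390.0)
print(x)  # final C_A, C_B, T, T_M
```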

3.1 Simulation of the batch reactor control

In this section, PID control and two CMAC-based learning control schemes with different algorithms are compared. The objective of the control is to make the reactor temperature follow the prescribed temperature-time curve.
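The excerpt breaks off before the simulation details, so the following is only a schematic of the repetitive, batch-to-batch control structure the paper describes: a feedback loop tracks the setpoint profile within each run, and the whole tracking-error profile is folded into a learned feedforward term between runs (batch updating). A toy first-order plant and a plain lookup table stand in for the reactor model and the CMAC; all names and numbers are illustrative.

```python
import numpy as np

class FirstOrderPlant:
    """Toy first-order stand-in for the temperature loop; it is not the
    reactor model above and serves only to show the repetitive structure."""
    def __init__(self, a=0.7, b=0.5):
        self.a, self.b, self.y = a, b, 0.0
    def step(self, u):
        self.y = self.a * self.y + self.b * u
        return self.y

def run_batch(setpoint, ff, kp=1.0):
    plant, err = FirstOrderPlant(), []
    for t, r in enumerate(setpoint):
        u = kp * (r - plant.y) + ff[t]     # feedback plus learned feedforward
        err.append(r - plant.step(u))
    return np.asarray(err)

# Batch-to-batch learning: after each run, fold the whole error profile
# into the feedforward table at once, so the next run starts improved.
sp = np.concatenate([np.linspace(0.0, 1.0, 50), np.ones(50)])  # ramp and hold
ff = np.zeros_like(sp)
for run in range(8):
    e = run_batch(sp, ff)
    ff += 0.8 * e                          # simplest learning update
    print(run, round(float(np.abs(e).max()), 4))  # max error shrinks per run
```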
