MULTIPLE PREDICTION MODELS FOR LONG RANGE PREDICTIVE CONTROL

Danyang Liu, Sirish L. Shah and D. Grant Fisher
Department of Chemical and Materials Engineering, University of Alberta, Edmonton, Canada T6G 2G6

14th Triennial World Congress of IFAC (Paper N-7a-06-6), Beijing, P.R. China. Copyright © 1999 IFAC.

Abstract: A new multi-step prediction formulation is developed and used to generate the long range predictions of the future process outputs required for predictive control. The basic idea behind this new approach is to simultaneously and directly construct a separate j-step prediction model for each future output y(k + j), where j = 1, 2, ..., N, and N is the prediction horizon. This is different from the conventional approach, which constructs only the one-step prediction model and calculates the N multi-step output predictions either by repeated use of the one-step prediction model or by use of the Diophantine equation. A simulated example is given which shows that the extra computation inherent in the proposed approach is justified by much better predictions than the conventional approach. The proposed multiple model prediction approach is then combined with a multiple model control technique to create a long range predictive controller. Results from an experimental application of this control strategy to a 2 x 2 pilot-scale level process demonstrate the excellent control that can be obtained on a real process. Copyright © 1999 IFAC

Keywords: prediction; model prediction; model predictive control

1. Introduction

Long range predictive control (LRPC) of a process is based on minimizing the future control errors between the setpoint and the predicted output of the process over a specific time horizon in the future. One of the earliest and most complete descriptions of LRPC can be found in Kishi's excellent work (Kishi, 1964). However, for some reason his work has rarely been mentioned in the literature. Other similar methods appeared in the late 1970s bearing such names as IDCOM (IDentification and COMmand, (Richalet et al., 1978)), DMC (Dynamic Matrix Control, (Cutler and Ramaker, 1980)), Predictor-based self-tuning control (Peterka, 1984), EHAC (Extended Horizon Adaptive Control, (Ydstie, 1984)), MAC (Model Algorithmic Control, (Rouhani and Mehra, 1972)), MUSMAR (MUlti-Step Multivariable Adaptive Regulator, (Mosca et al., 1989)), MOCCA (Multivariable Optimal Constrained Control Algorithm, (Sripada and Fisher, 1985)), EPSAC (Extended Prediction Self-Adaptive Control, (Dekeyser and Van Cauwenberghe, 1985)), GPC (Generalized Predictive Control, (Clarke et al., 1987)), and so on.

As has been pointed out in (Fisher, 1991), the primary objective in LRPC is to minimize the difference between the actual future process outputs $y_{k+1}, y_{k+2}, \ldots, y_{k+N}$ and the corresponding desired values (setpoints) $y^*_{k+1}, y^*_{k+2}, \ldots, y^*_{k+N}$. By use of an appropriate vector norm $\|\cdot\|$, the objective function can be expressed as
$$J = \|Y_{k,N} - Y^*_{k,N}\|$$
where
$$Y_{k,N} = \begin{bmatrix} y_{k+1} \\ y_{k+2} \\ \vdots \\ y_{k+N} \end{bmatrix}, \qquad Y^*_{k,N} = \begin{bmatrix} y^*_{k+1} \\ y^*_{k+2} \\ \vdots \\ y^*_{k+N} \end{bmatrix}$$
Since $Y_{k,N}$ is the future output of a plant, it is a random variable and has to be replaced by its mean value $\hat{Y}_{k,N}$ (i.e., the estimated or predicted value). From the triangle inequality it is easy to see that
$$\|Y_{k,N} - Y^*_{k,N}\| = \|(Y_{k,N} - \hat{Y}_{k,N}) + (\hat{Y}_{k,N} - Y^*_{k,N})\| \le \|Y_{k,N} - \hat{Y}_{k,N}\| + \|\hat{Y}_{k,N} - Y^*_{k,N}\|$$
Therefore,
$$J \le J_1 + J_2$$
where $J_1 = \|Y_{k,N} - \hat{Y}_{k,N}\|$ is the prediction error and $J_2 = \|\hat{Y}_{k,N} - Y^*_{k,N}\|$ is the control error (also called the bias in the output). Thus, the original problem of minimizing the output error $J$ is divided into two problems, i.e., minimizing $J_1$ and $J_2$, which respectively represent the main objectives of the estimation and the control algorithms.

Since the prediction error $J_1$ contains long range prediction error terms, the estimation algorithm should minimize the summation of the estimation errors over the prediction horizon rather than simply the one-step prediction error, i.e., long range prediction identification (LRPI) should be used (Shook et al., 1990). Unfortunately, the LRPI problem is a nonlinear estimation problem. To solve this problem, Shook et al. (1990, 1992) proposed a filter which can be used with the ordinary RLS algorithm to achieve approximately the same results as LRPI. Lu and Fisher (1990) took a different approach. They developed a non-minimal LRPC (following essentially the same approach used in GPC) and showed that with this formulation the use of ordinary RLS minimizes the same LRPI criterion. In addition to combining LRPI with LRPC, their approach does not require online solution or recursive updating of a Diophantine identity, and all the future output estimates are produced using a single equation. This paper presents a multi-step prediction model approach to solve the LRPI problem. In this approach N j-step prediction models, where j = 1, 2, ..., N, are constructed simultaneously based on the same input-output data. This approach is then combined with the multiple model technique (Liu et al., 1997) to form the multiple-prediction-model-based minimum-bias control scheme. Application of this scheme to a pilot-scale 2 x 2 liquid level control problem shows that this control scheme is very effective.
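To make the distinction concrete, the LRPI idea referred to above can be written as a horizon-wide identification criterion. The following formalization is an illustrative sketch rather than a quotation from Shook et al.; here $\hat{y}_{t+j|t}$ denotes the $j$-step-ahead prediction made at time $t$ and the sums run over the available data record:
$$J_{\text{1-step}} = \sum_{t} \left( y_{t+1} - \hat{y}_{t+1|t} \right)^2, \qquad J_{\text{LRPI}} = \sum_{t} \sum_{j=1}^{N} \left( y_{t+j} - \hat{y}_{t+j|t} \right)^2$$
Ordinary RLS minimizes only the first sum; the multi-step approach of this paper minimizes, for each $j$, the corresponding term of the second sum by fitting a separate $j$-step model.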



2. Prediction model: a special case

Generally speaking, the plant to be controlled is of the form
$$y_k = f_1(y_{k-1}, \ldots, y_{k-n}, u_{k-1}, \ldots, u_{k-m}, w_k) \tag{1}$$
where $y_i$ is the plant output at time $i$, $u_j$ the control action at time $j$, $w_k$ the random disturbance, and $m$ and $n$ are integers. For the sake of easy understanding of the prediction model approach, this section deals with the special case of $m = n = 1$; the general case is dealt with in Section 3. In this special case Equation (1) reduces to
$$y_k = f_1(u_{k-1}, y_{k-1}, w_k) \tag{2}$$
and the future outputs $y_{k+1}$ and $y_{k+2}$ can be expressed as
$$y_{k+1} = f_1(u_k, y_k, w_{k+1}) \tag{3}$$
$$y_{k+2} = f_1(u_{k+1}, y_{k+1}, w_{k+2}) \tag{4}$$
Substituting Equation (3) into (4) gives
$$y_{k+2} = f_1(u_{k+1}, f_1(u_k, y_k, w_{k+1}), w_{k+2}) = f_2(u_{k+1}, u_k, y_k, w_{k+2}, w_{k+1}) \tag{5}$$

Equation (3) can be expanded to a locally linearized model using Taylor's theorem:
$$y_{k+1} = C_{1,1} u_k + D_{1,1} y_k + d_1 + e_{k+1} \tag{6}$$
where $e_{k+1}$ is the model error, and $C_{1,1}$, $D_{1,1}$ and $d_1$ are coefficients such that the linear model (6) is a linearization of the nonlinear plant (2) in the sense that the sum of the squared model-plant fitting errors (with possible exponential weighting) is minimized. Model (6) can be called a one-step prediction model, since the one-step prediction
$$\hat{y}_{k+1} = C_{1,1} u_k + D_{1,1} y_k + d_1 \tag{7}$$
can be obtained by assuming that the expected value of $e_{k+1}$ is zero. In a typical model based LRPC, the two-step prediction of $y_{k+2}$ is obtained by repeated use of Equation (7), i.e.,
$$\begin{aligned} y_{k+2} &= C_{1,1} u_{k+1} + D_{1,1} y_{k+1} + d_1 + e_{k+2} \\ &= C_{1,1} u_{k+1} + D_{1,1}\,(C_{1,1} u_k + D_{1,1} y_k + d_1 + e_{k+1}) + d_1 + e_{k+2} \\ &= C_{1,1} u_{k+1} + D_{1,1} C_{1,1} u_k + D_{1,1}^2 y_k + D_{1,1} d_1 + d_1 + D_{1,1} e_{k+1} + e_{k+2} \end{aligned} \tag{8}$$
which implies that the two-step prediction of $y_{k+2}$ is
$$\hat{y}_{k+2} = C_{1,1} u_{k+1} + D_{1,1} C_{1,1} u_k + D_{1,1}^2 y_k + D_{1,1} d_1 + d_1 \tag{9}$$

The conventional two-step model prediction approach uses Equations (7) and (9). Strictly speaking, however, the linearization of (5) cannot be obtained by repeated use of the linear model (7), since the plant is generally nonlinear. The correct way of obtaining the locally linearized representation of (5) is to apply Taylor's theorem directly to (5), which gives
$$y_{k+2} = C_{2,1} u_{k+1} + C_{2,2} u_k + D_{2,1} y_k + d_2 + e_{k+2} \tag{10}$$
where $e_{k+2}$ denotes a small residue. Equation (10) is different from Equation (8) since, in general, the following relations do not hold:
$$C_{2,1} = C_{1,1}, \quad C_{2,2} = D_{1,1} C_{1,1}, \quad D_{2,1} = D_{1,1}^2, \quad d_2 = D_{1,1} d_1 + d_1 \tag{11}$$
Therefore, in order to guarantee that the model prediction errors are as small as possible, it is necessary to separately and simultaneously construct linear models with different prediction horizons. In other words, instead of using the relations (11) to obtain the two-step prediction model (8) or (9) from the one-step prediction model (6) or (7), one should estimate the coefficients of Equation (10), i.e., $C_{2,1}$, $C_{2,2}$, $D_{2,1}$ and $d_2$, directly using, e.g., the least squares technique.
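As a worked illustration of this direct estimation (the general regression form is given in Section 3, Equation (17)), shifting the time index in Equation (10) by two samples turns the two-step model into an ordinary linear regression in the recorded data:
$$y_k = C_{2,1} u_{k-1} + C_{2,2} u_{k-2} + D_{2,1} y_{k-2} + d_2 + e_k = \begin{bmatrix} u_{k-1} & u_{k-2} & y_{k-2} & 1 \end{bmatrix} \begin{bmatrix} C_{2,1} \\ C_{2,2} \\ D_{2,1} \\ d_2 \end{bmatrix} + e_k$$
so $C_{2,1}$, $C_{2,2}$, $D_{2,1}$ and $d_2$ can be obtained by (recursive or batch) least squares, exactly as for the one-step model, just with a different regressor.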


The above conclusion can be verified by a simulated example. Consider a third order linear process of the form
$$y_k = 0.1 y_{k-1} + 0.01 y_{k-2} + 0.1 y_{k-3} + u_{k-1} + 2 u_{k-2} + u_{k-3} + 70 \tag{12}$$
Assume that the plant is excited by
$$u_k = \begin{cases} \sin(0.1 k) & k < 150 \\ -1 & 150 \le k < 200 \\ 1 & 200 \le k < 250 \\ v_k & k \ge 250 \end{cases}$$
where $v_k$ is a uniformly distributed random variable. Both the conventional and proposed methods are used to find the two-step output prediction for the process. In both methods the process is considered to be a black box with a reduced order: $m = 1$ and $n = 1$. Note that there is clearly a structural mismatch between the plant (12) and the model (7).

Method 1 (the conventional method): In this method only the one-step prediction model (7) is constructed, using the R²LS (rigorous recursive least-squares) algorithm with forgetting factor 0.95 (the basic philosophy of, and references to, the R²LS algorithm are given in Section 3). The two-step prediction model is then obtained by repeated use of the one-step prediction model, as shown in Equation (9). After conducting identification on a 400-sample data record, the one-step prediction model is found to have $C_{1,1} = 1.5819$, $D_{1,1} = 0.6840$ and $d_1 = 28.0152$; hence, according to Equation (9), the two-step prediction model is
$$\hat{y}_{k+2} = 1.5819\,u_{k+1} + 1.0821\,u_k + 0.4679\,y_k + 47.1782$$

Method 2 (the proposed method): This method directly estimates the parameters in the two-step prediction model
$$\hat{y}_{k+2} = C_{2,1} u_{k+1} + C_{2,2} u_k + D_{2,1} y_k + d_2$$
Using the same input-output data, the two-step prediction model given by Method 2 is
$$\hat{y}_{k+2} = 1.0005\,u_{k+1} - 2.5427\,u_k + 0.3004\,y_k + 61.9886$$
which is significantly different from that obtained by Method 1.

Comparison: Figure 1 shows that the predictions given by Method 2 are much closer to the actual plant output than those given by Method 1. Even larger differences can be found for nonlinear processes.

Fig. 1. The actual outputs of (12) (solid line) plus the two-step predictions given by the conventional Method 1 (dotted line) and the proposed Method 2 (dashed line).
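The simulation below is an illustrative reconstruction of this comparison, not the authors' code: R²LS with a forgetting factor is replaced by plain batch least squares, and the interval of the uniform excitation $v_k$ and the initial conditions are assumptions, so the fitted coefficients will not match the quoted values exactly. It does, however, reproduce the qualitative result that the directly fitted two-step model predicts better than the cascaded one-step model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plant of Equation (12)
def plant(y1, y2, y3, u1, u2, u3):
    return 0.1 * y1 + 0.01 * y2 + 0.1 * y3 + u1 + 2 * u2 + u3 + 70

# Excitation signal; the interval of the uniform noise v_k is an assumption
K = 400
u = np.empty(K)
for k in range(K):
    if k < 150:
        u[k] = np.sin(0.1 * k)
    elif k < 200:
        u[k] = -1.0
    elif k < 250:
        u[k] = 1.0
    else:
        u[k] = rng.uniform(-1.0, 1.0)

y = np.zeros(K)
for k in range(3, K):
    y[k] = plant(y[k - 1], y[k - 2], y[k - 3], u[k - 1], u[k - 2], u[k - 3])

def lstsq_fit(Phi, Y):
    theta, *_ = np.linalg.lstsq(np.asarray(Phi), np.asarray(Y), rcond=None)
    return theta

# Method 1: fit the one-step model (7), then cascade it as in Equation (9)
C11, D11, d1 = lstsq_fit([[u[k - 1], y[k - 1], 1.0] for k in range(3, K)], y[3:])

def method1_two_step(u_next, u_now, y_now):
    return C11 * u_next + D11 * C11 * u_now + D11 ** 2 * y_now + D11 * d1 + d1

# Method 2: fit the two-step model (10) directly on the same data
C21, C22, D21, d2 = lstsq_fit([[u[k - 1], u[k - 2], y[k - 2], 1.0] for k in range(3, K)], y[3:])

def method2_two_step(u_next, u_now, y_now):
    return C21 * u_next + C22 * u_now + D21 * y_now + d2

# Compare the two-step prediction errors (cf. Figure 1)
e1 = [y[k] - method1_two_step(u[k - 1], u[k - 2], y[k - 2]) for k in range(3, K)]
e2 = [y[k] - method2_two_step(u[k - 1], u[k - 2], y[k - 2]) for k in range(3, K)]
print("Method 1 RMS two-step error:", float(np.sqrt(np.mean(np.square(e1)))))
print("Method 2 RMS two-step error:", float(np.sqrt(np.mean(np.square(e2)))))
```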

3. The general prediction model

To generalize the above results, let us return to the original nonlinear plant (1). It is easily seen that the future outputs $y_{k+1}, y_{k+2}, \ldots, y_{k+i}$ can be expressed as
$$y_{k+1} = f_1(u_k, \ldots, u_{k+1-m}, y_k, \ldots, y_{k+1-n}, w_{k+1}) \tag{13}$$
$$\begin{aligned} y_{k+2} &= f_1(u_{k+1}, \ldots, u_{k+2-m}, y_{k+1}, \ldots, y_{k+2-n}, w_{k+2}) \\ &= f_1\big(u_{k+1}, \ldots, u_{k+2-m}, f_1(u_k, \ldots, u_{k+1-m}, y_k, \ldots, y_{k+1-n}, w_{k+1}), y_k, \ldots, y_{k+2-n}, w_{k+2}\big) \\ &= f_2(u_{k+1}, \ldots, u_{k+1-m}, y_k, \ldots, y_{k+1-n}, w_{k+2}, w_{k+1}) \end{aligned} \tag{14}$$
For any general integer $i \ge 1$, repeating the above procedure leads to
$$y_{k+i} = f_i(u_{k+i-1}, \ldots, u_{k+1-m}, y_k, \ldots, y_{k+1-n}, w_{k+i}, \ldots, w_{k+1}) \tag{15}$$
where, in general, $f_1, f_2, \ldots,$ and $f_i$ are different nonlinear functions.

From the Taylor expansion of Equation (15) it follows that $y_{k+i}$ can also be expressed as
$$y_{k+i} = (C_{i,1} u_{k+i-1} + \cdots + C_{i,i} u_k) + (F_{i,1} u_{k-1} + \cdots + F_{i,m-1} u_{k+1-m}) + (D_{i,1} y_k + \cdots + D_{i,n} y_{k+1-n}) + d_i + e_{k+i} \tag{16}$$


where $C$, $F$, $D$, and $d$ are coefficients, and $e_{k+i}$ is the model error. In order to determine the coefficients, all $k$'s in Equation (16) should be replaced by $k - i$. This leads to
$$y_k = \phi_k^T \Theta_i + e_k \tag{17}$$
where
$$\phi_k^T = \left[\, u_{k-1} \cdots u_{k-i} \;:\; u_{k-i-1} \cdots u_{k-i+1-m} \;:\; y_{k-i} \cdots y_{k-i+1-n} \;:\; 1 \,\right]$$
$$\Theta_i = \left[\, C_{i,1} \cdots C_{i,i} \;:\; F_{i,1} \cdots F_{i,m-1} \;:\; D_{i,1} \cdots D_{i,n} \;:\; d_i \,\right]^T$$
The R²LS algorithm (Kishi, 1964; Liu et al., 1997) can now be applied to (17) to find the best choice of $\Theta_i$ in the sense that the sum of the squared prediction errors is minimized. The R²LS algorithm is a rigorous recursive implementation of the least squares method. It is different from the conventional RLS algorithm in that it does not need "good" a priori statistical knowledge of the parameters to be estimated, and the results given by the R²LS algorithm are the same as those given by the batch least squares method.
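Since the R²LS recursion itself is not spelled out here, and the paper states that its results coincide with those of batch least squares, the sketch below estimates the N parameter vectors $\Theta_1, \ldots, \Theta_N$ of Equation (17) in batch form. The function name, the data layout and the omission of a forgetting factor are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_multistep_models(u, y, m, n, N):
    """Estimate Theta_i of Equation (17) for i = 1..N by batch least squares.

    u, y : 1-D arrays of recorded inputs and outputs
    m, n : numbers of past inputs and outputs in the model (Equation (16))
    N    : prediction horizon
    Returns [Theta_1, ..., Theta_N], each ordered as
    [C_{i,1}..C_{i,i}, F_{i,1}..F_{i,m-1}, D_{i,1}..D_{i,n}, d_i].
    """
    u, y = np.asarray(u, float), np.asarray(y, float)
    models = []
    for i in range(1, N + 1):
        Phi, Y = [], []
        for k in range(i + m + n, len(y)):
            phi_k = np.concatenate([
                u[k - i:k][::-1],                  # u_{k-1}, ..., u_{k-i}
                u[k - i - m + 1:k - i][::-1],      # u_{k-i-1}, ..., u_{k-i+1-m}
                y[k - i - n + 1:k - i + 1][::-1],  # y_{k-i}, ..., y_{k-i+1-n}
                [1.0],                             # constant regressor for d_i
            ])
            Phi.append(phi_k)
            Y.append(y[k])
        Theta_i, *_ = np.linalg.lstsq(np.array(Phi), np.array(Y), rcond=None)
        models.append(Theta_i)
    return models
```

A forgetting factor, as used in the experiment later in the paper, would weight recent rows more heavily; that refinement is omitted in this sketch.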

Equation (16) shows that
$$\hat{y}_{k+i} = (C_{i,1} u_{k+i-1} + \cdots + C_{i,i} u_k) + F_{i,1} u_{k-1} + \cdots + F_{i,m-1} u_{k+1-m} + D_{i,1} y_k + \cdots + D_{i,n} y_{k+1-n} + d_i \tag{18}$$
where the $i$-step prediction $\hat{y}_{k+i}$ can be decomposed into two parts as
$$\hat{y}_{k+i} = \hat{y}^u_{k+i} + \hat{y}^f_{k+i} \tag{19}$$
with
$$\hat{y}^u_{k+i} = \left[\, C_{i,1} \cdots C_{i,i} \,\right] \begin{bmatrix} u_{k+i-1} \\ \vdots \\ u_k \end{bmatrix}, \qquad \hat{y}^f_{k+i} = F_{i,1} u_{k-1} + \cdots + F_{i,m-1} u_{k+1-m} + D_{i,1} y_k + \cdots + D_{i,n} y_{k+1-n} + d_i$$
It is easy to see that $\hat{y}^u_{k+i}$ is the forced response and $\hat{y}^f_{k+i}$ is the free response. Now suppose that 1. the prediction horizon is $N$, 2. $\Theta_1, \ldots, \Theta_N$ are obtained simultaneously by using the R²LS algorithm, and 3. $y^*_{k+1}, \ldots, y^*_{k+N}$ are the desired outputs. Define
$$\hat{Y}_{k,N} = \begin{bmatrix} \hat{y}_{k+1} \\ \vdots \\ \hat{y}_{k+N} \end{bmatrix}, \quad \hat{Y}^f_{k,N} = \begin{bmatrix} \hat{y}^f_{k+1} \\ \vdots \\ \hat{y}^f_{k+N} \end{bmatrix}, \quad U_{k,N} = \begin{bmatrix} u_k \\ \vdots \\ u_{k+N-1} \end{bmatrix}, \quad C_{k,N} = \begin{bmatrix} C_{1,1} & 0 & \cdots & 0 \\ C_{2,2} & C_{2,1} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ C_{N,N} & C_{N,N-1} & \cdots & C_{N,1} \end{bmatrix} \tag{20}$$
Then from Equation (18) it follows that
$$\hat{Y}_{k,N} = C_{k,N} U_{k,N} + \hat{Y}^f_{k,N}$$
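Continuing the same sketch (again an illustration, not the authors' code), the matrices of Equation (20) can be assembled from the estimated $\Theta_i$; the parameter layout assumed is the one produced by fit_multistep_models above.

```python
import numpy as np

def assemble_prediction(models, u_past, y_past, m, n, N):
    """Build C_{k,N} and the free response of Equation (20) from Theta_1..Theta_N.

    models : [Theta_1, ..., Theta_N] as returned by fit_multistep_models
    u_past : [u_{k-1}, u_{k-2}, ..., u_{k-m+1}]  (most recent first, m-1 values)
    y_past : [y_k, y_{k-1}, ..., y_{k+1-n}]      (most recent first, n values)
    Returns (C, yf) such that Y_hat = C @ U + yf with U = [u_k, ..., u_{k+N-1}].
    """
    C = np.zeros((N, N))
    yf = np.zeros(N)
    for i in range(1, N + 1):
        theta = np.asarray(models[i - 1])
        c = theta[:i]                              # C_{i,1}, ..., C_{i,i}
        f = theta[i:i + m - 1]                     # F_{i,1}, ..., F_{i,m-1}
        d_coef = theta[i + m - 1:i + m - 1 + n]    # D_{i,1}, ..., D_{i,n}
        d_i = theta[-1]
        # forced-response row: the coefficient of u_{k+j} is C_{i,i-j}, j = 0..i-1
        C[i - 1, :i] = c[::-1]
        # free response: past inputs, past outputs and the constant term
        yf[i - 1] = f @ np.asarray(u_past[:m - 1]) + d_coef @ np.asarray(y_past[:n]) + d_i
    return C, yf
```

With C and yf in hand, any candidate input sequence U can be scored through Ŷ = C @ U + yf, which is what the MB controller of the next section needs.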

4. Prediction model based MB control

Suppose $U_{k,N}$ is constrained to lie in a set $\mathcal{U}_{k,N}$, and the output space is defined to be
$$\mathcal{Y}_{k,N} = \{\hat{Y}_{k,N} : U_{k,N} \in \mathcal{U}_{k,N}\}$$
and the output bias space is defined to be
$$\mathcal{B}_{k,N} = \{|\hat{Y}_{k,N} - Y^*_{k,N}| : \hat{Y}_{k,N} \in \mathcal{Y}_{k,N}\}$$
where, for a vector $v$, $|v|$ denotes a vector obtained by replacing each component of $v$ with its absolute value. Each control action $U_{k,N} \in \mathcal{U}_{k,N}$ corresponds to a bias in $\mathcal{B}_{k,N}$. If the control action $U_{k,N}$ is such that the corresponding bias $|\hat{Y}_{k,N} - Y^*_{k,N}|$ is a noninferior point in the output bias space $\mathcal{B}_{k,N}$, then the vector $u_k$ contained in this $U_{k,N}$ (see Equation (20)) is called the minimum-bias (MB) control. The problem of finding the MB control can also be expressed as the following multiobjective optimization problem:
$$\min \;|\hat{Y}_{k,N} - Y^*_{k,N}| \quad \text{subject to} \quad \hat{Y}_{k,N} \in \mathcal{Y}_{k,N}$$
or equivalently
$$\min \;|C_{k,N} U_{k,N} + \hat{Y}^f_{k,N} - Y^*_{k,N}| \quad \text{subject to} \quad U_{k,N} \in \mathcal{U}_{k,N}$$
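The MB control is defined above through noninferiority of the bias vector; one common way to pick a single noninferior point is to minimize one norm of the bias. The sketch below is such a scalarized stand-in (an assumption, not the paper's algorithm): it minimizes the Euclidean norm of $C_{k,N} U_{k,N} + \hat{Y}^f_{k,N} - Y^*_{k,N}$ over a box-constrained input sequence with SciPy's bounded linear least-squares solver, and applies only the first move in receding-horizon fashion.

```python
import numpy as np
from scipy.optimize import lsq_linear

def mb_control_action(C, yf, y_ref, u_min, u_max):
    """Scalarized MB-style move: minimize ||C @ U + yf - y_ref||_2 over a box."""
    N = len(yf)
    res = lsq_linear(np.asarray(C), np.asarray(y_ref) - np.asarray(yf),
                     bounds=(u_min * np.ones(N), u_max * np.ones(N)))
    U = res.x          # candidate input sequence [u_k, ..., u_{k+N-1}]
    return U[0], U     # apply only u_k (receding horizon)
```

For example, u_k, _ = mb_control_action(C, yf, y_ref, 0.0, 100.0) with C and yf from the previous sketch would give a valve position in an assumed 0-100 % range; rate constraints, as used in the experiment below, would enter as additional linear constraints.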

5. Multiple prediction model based MB control

The multiple prediction model based MB control scheme is described by the diagram shown in Figure 2. Each "prediction model set" in Figure 2 contains N separate equations (as described above) to predict the trajectory $\hat{y}_{k+1}, \ldots, \hat{y}_{k+N}$. Suppose the total number of prediction model sets is $\alpha$. Each prediction model is associated with a prediction error. The "online model evaluation" includes calculating the average prediction error $\bar{e}_i$ of the prediction models in the $i$-th model set and renumbering all the model sets such that

$$\bar{e}_1 \le \bar{e}_2 \le \cdots \le \bar{e}_\alpha \tag{21}$$
$\beta$ is a user specified number which defines the number of controllers (or, equivalently, the number of model sets) to be used. The $\beta$ controllers (each using a different "prediction model set") generate $\beta$ MB control actions:
$$u_k^{(1)}, u_k^{(2)}, \ldots, u_k^{(\beta)} \tag{22}$$
The final control action $u_k$ is obtained by combining the control actions in (22). In this paper a weighted average of the $\beta$ calculated control actions is used (23).

Fig. 2. Multiple prediction models based MB controller.
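A minimal sketch of the model-set evaluation and combination step is given below. Because the exact ranking statistic and the weights of Equation (23) are not reproduced in the text, inverse average prediction error weights are used here purely as an assumption.

```python
import numpy as np

def combine_mb_actions(model_set_errors, candidate_actions, beta):
    """Select the beta best model sets and blend their MB control actions.

    model_set_errors  : average prediction error of each of the alpha model sets
    candidate_actions : MB control action u_k computed from each model set
    beta              : number of model sets / controllers actually used
    Weighting by inverse average prediction error stands in for the paper's
    Equation (23), whose exact form is not reproduced here.
    """
    order = np.argsort(model_set_errors)[:beta]   # renumbering as in (21)
    errors = np.asarray(model_set_errors)[order]
    actions = np.asarray(candidate_actions)[order]
    weights = 1.0 / np.maximum(errors, 1e-12)     # better models weigh more
    weights /= weights.sum()
    return weights @ actions                      # weighted average, cf. (23)
```

With α = 4 and β = 1, as in the experiment below, this reduces to simply applying the action from the model set with the smallest average prediction error.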

Fig. 3. A pilot scale 2 x 2 level control system.

6. Real-time experiment evaluation

The proposed control scheme was applied experimentally to the pilot scale 2 x 2 real-time level control system shown in Figure 3. The objective was to control the liquid levels in Tank 2 ($y_1$) and Tank 3 ($y_2$). The two manipulated variables are the positions of Valve 1 and Valve 2. Because of the existence of a pipe that links Tank 2 with Tank 3, this plant represents a multivariable system with strong interaction. The sampling interval is T = 20 seconds and the prediction horizon is N = 4. Let $k$ be the discrete time (i.e., the number of samples). The setpoints for the two levels $y_1$ and $y_2$ are given as
$$y_1^* = \begin{cases} 50\% & 60 \le k < 120 \\ 75\% & \text{otherwise} \end{cases} \qquad y_2^* = \begin{cases} 75\% & 60 \le k < 180 \\ 50\% & \text{otherwise} \end{cases}$$
The manipulation of the two valves is subject to two kinds of constraints: the magnitude constraints and the maximum allowable rate constraints (24). In the multiple prediction model based controller, a total of four model sets was used. The orders $(m, n)$ of the four model sets are as follows:
$$(1,1), \quad (1,2), \quad (2,1), \quad (2,2) \tag{25}$$


Fig. 4. Experimental control results: level $y_1$ in Tank 2 (%); level $y_2$ in Tank 3 (%); position of Valve 1 (%); position of Valve 2 (%).

To reduce online computation only one of the four model sets is selected to calculate the control action $u_k$. Therefore, in this case $\alpha = 4$ and $\beta = 1$. The control of the levels is excellent, as shown in Figure 4. The "ringing" in the control action could be reduced by specifying tighter input rate constraints in (24).
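Because the constraint values of Equation (24) are not reproduced in the text, the following sketch only illustrates the mechanics of enforcing magnitude and rate limits on a commanded valve position; the 0-100 % range and the rate limit value are assumptions.

```python
def constrain_move(u_desired, u_previous, u_min=0.0, u_max=100.0, du_max=5.0):
    """Clip a commanded valve position to magnitude and rate constraints.

    The 0-100 % magnitude range and the 5 %-per-sample rate limit are
    illustrative; the actual limits of Equation (24) are not given here.
    """
    # rate constraint: limit the change per sampling interval
    u = max(u_previous - du_max, min(u_previous + du_max, u_desired))
    # magnitude constraint: keep the valve position inside its physical range
    return max(u_min, min(u_max, u))
```

Tightening du_max is the "tighter input rate constraints" remedy mentioned above for the ringing in the control action.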

7. Conclusions

The conventional method of obtaining the multi-step output predictions is based on the repeated use of the one-step prediction model. In this paper a multi-model, multi-step prediction method is described. The main idea in this method is that N separate prediction models for $\hat{y}_{k+j}$, $j = 1, 2, \ldots, N$, are constructed simultaneously from the same input-output data. The one-step prediction model is used only to give the one-step prediction, the two-step prediction model is used only to give the two-step prediction, and so on. A simulated example is used to show that the prediction errors given by the proposed method are much smaller than those given by the conventional method. A direct application of the multi-step prediction model method is the prediction model based, minimum-bias (MB) control scheme described in this paper. This scheme is then extended to the multiple prediction model based MB control scheme, in which a subset of "good" models is selected from a set of prediction models. The final control action is obtained as the weighted average output of the predictive controllers designed using these selected prediction models. The experimental evaluation presented in this paper shows the excellent performance obtained using this control scheme.


8. References

Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). Generalized predictive control. Automatica 23, 137-160.
Cutler, C. R. and B. L. Ramaker (1980). Dynamic matrix control - a computer control algorithm. Proc. JACC, San Francisco.
Dekeyser, R. M. C. and A. R. Van Cauwenberghe (1985). Extended prediction self-adaptive control. Proc. 7th IFAC Symposium on Identification and System Parameter Estimation, York, UK.
Fisher, D. G. (1991). Process control: an overview and personal perspective. The Canadian Journal of Chemical Engineering 69, 5-26.
Kishi, F. H. (1964). On-line computer control techniques and their applications to re-entry aerospace vehicle control. In: Advances in Control Systems Theory and Applications (C. T. Leondes, Ed.). Vol. 1, pp. 245-257. Academic Press, New York.
Liu, D., S. L. Shah, D. G. Fisher and X. Liu (1997). Multimodel-based minimum bias control of a benchmark paper machine process. The Canadian Journal of Chemical Engineering 75, 152-160.
Lu, W. and D. G. Fisher (1990). Nonminimal, model-based, long range predictive control. Proc. American Control Conference, San Diego, USA, 2, 1607-1613.
Mosca, E., G. Zappa and J. M. Lemos (1989). Robustness of multipredictor adaptive regulator: MUSMAR. Automatica 25, 521-529.
Peterka, V. (1984). Predictor-based self-tuning control. Automatica 20, 39-50.
Richalet, J., A. Rault, J. L. Testud and J. Papon (1978). Model predictive heuristic control: applications to industrial processes. Automatica 14, 413-428.
Rouhani, R. and R. K. Mehra (1972). Model algorithm control: basic theoretic perspectives. Automatica 18, 401-414.
Shook, D. S., C. Mohtadi and S. L. Shah (1990). Adaptive filtering and GPC. Proc. American Control Conference, 1, 556-561.
Shook, D. S., C. Mohtadi and S. L. Shah (1992). A control-relevant identification strategy for GPC. IEEE Transactions on Automatic Control 37, 975-980.
Sripada, N. R. and D. G. Fisher (1985). Multivariable optimal constrained control algorithm (MOCCA): Part I. Formulation and application. Proc. Int. Conf. on Industrial Process Modelling and Control, Hangzhou, China.
Ydstie, B. E. (1984). Extended horizon adaptive control. Proc. 9th IFAC World Congress, Budapest, Hungary, pp. 133-137.
