
Copyright © IFAC Advanced Control of Chemical Processes, Pisa, Italy, 2000

ROBUST DISTURBANCE REJECTION FOR FIR SYSTEMS WITH BOUNDED PARAMETERS

Sameer Ralhan and Thomas A. Badgwell

Aspen Technology, Inc., 1293 Eldridge Parkway, Houston, Texas 77077

Abstract: A robust MPC algorithm is formulated for systems described by a finite impulse response (FIR) model with a constant output disturbance. Uncertainty in the system is parameterized by ellipsoidal bounds on the FIR parameters. Cost function constraints are used to provide offset-free control. The steady-state properties of the algorithm are analyzed and a simulation example is provided to illustrate advantages of the proposed approach. Copyright © 2000 IFAC

Keywords: robust control, model predictive control, FIR model

1. INTRODUCTION

In this paper we consider control of a linear, stable, single-input single-output (SISO) system with nonzero gain, described by a finite impulse response (FIR) model and subjected to a constant disturbance d at the output:

    y_k = \sum_{i=1}^{M} h_i u_{k-i} + d = \theta^T x_k + d    (1)

    \theta \triangleq [h_1 \; h_2 \; \cdots \; h_M]^T    (2)

    x_k \triangleq [u_{k-1} \; u_{k-2} \; \cdots \; u_{k-M}]^T    (3)

where y_k ∈ ℝ and u_k ∈ ℝ are the output and input of the plant at time k, respectively.

We wish to design a controller that brings the output of the system to zero from an initial nonzero value when the FIR parameters of the true system θ̃ are known only to lie in an ellipsoidal set Θ:

    \Theta \triangleq \{ \theta : (\theta - \bar{\theta})^T W (\theta - \bar{\theta}) \le 1 \}    (4)

The matrix W = W^T > 0 defines the size and orientation of the ellipse, and the vector θ̄, which we refer to as the nominal model, defines its center. The set Θ can normally be obtained from a system identification experiment. Using standard statistical assumptions (Ljung, 1987), the nominal model θ̄ is the best estimate of the true system θ̃, and the set Θ is a joint confidence region for the system parameters. In this context the matrix W corresponds to the inverse of the covariance matrix for the system parameters, scaled to a desired level of confidence.

The standard model predictive control solution to this problem (Qin and Badgwell, 1997) involves optimizing the behavior of a single nominal model θ̄ ∈ Θ. Offset-free control is achieved by appending an estimated disturbance to the model, based on the measured output at the current time step:

    y_k = \bar{\theta}^T x_k + \hat{d}_k    (5)

    \hat{d}_k = \tilde{y}_k - \bar{\theta}^T x_k    (6)

Meadows and Rawlings (1997) have shown that this disturbance estimate provides offset-free control for any plant dynamics, provided that the closed-loop system is stable and the input is unconstrained at steady state.

Here we propose an alternative solution that makes direct use of the known model uncertainty. A two-stage algorithm is developed in which the first stage searches for a steady-state input that minimizes steady-state offset for all plants in the uncertainty domain. The steady-state input target is then passed to a second-stage dynamic optimization that drives all systems in the uncertainty domain to the same steady state. A similar two-stage approach was suggested by Lee and Cooley (2000) in the context of min-max control for integrating systems. Because it makes use of the model uncertainty, the two-stage algorithm proposed here is guaranteed to provide offset-free control in response to a constant output disturbance, so long as the true plant lies within the uncertainty set and all systems in the uncertainty set have nonzero gains. In addition, for at least some cases, the proposed algorithm provides better closed-loop performance than the standard solution.
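For concreteness, the relationships in (1)-(6) can be sketched in a few lines of Python. This is an illustrative sketch only; the numerical values and function names (fir_output, in_ellipsoid, disturbance_estimate) are our own assumptions and are not taken from the paper.

```python
import numpy as np

def fir_output(theta, u_past, d):
    """Plant output y_k = theta^T x_k + d, eq. (1), with x_k = [u_{k-1}, ..., u_{k-M}]."""
    return float(theta @ u_past) + d

def in_ellipsoid(theta, theta_bar, W):
    """Membership test for the ellipsoidal set Theta of eq. (4)."""
    e = theta - theta_bar
    return float(e @ W @ e) <= 1.0

def disturbance_estimate(y_meas, theta_bar, u_past):
    """Standard nominal-MPC output bias estimate, eq. (6)."""
    return y_meas - float(theta_bar @ u_past)

# Toy example: a 4-coefficient FIR model with a constant output disturbance.
theta_true = np.array([0.5, 0.8, 0.4, 0.1])   # hypothetical "true" plant
theta_bar  = np.array([0.4, 0.7, 0.5, 0.2])   # nominal model (ellipsoid center)
W = np.diag([50.0, 50.0, 50.0, 50.0])         # shapes the confidence ellipsoid
u_past = np.array([1.0, 1.0, 1.0, 1.0])       # x_k: most recent input first
d = 0.3

y = fir_output(theta_true, u_past, d)
print(in_ellipsoid(theta_true, theta_bar, W), disturbance_estimate(y, theta_bar, u_past))
```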

2. COST FUNCTIONS

The robust controller considered here uses two cost functions. The stage I cost function Ψ : ℝ^M × ℝ × ℝ × Ω → ℝ penalizes only steady-state output error, while the stage II cost function Φ : ℝ^M × ℝ^N × ℝ × ℝ × Ω → ℝ penalizes deviations of the output and input from their steady-state targets over the whole horizon (Ω denotes the model uncertainty set, here taken to be the ellipsoid Θ):

    \Psi(x, w, y, \theta) = z_{N+M}^2    (7a)

    \Phi(x, \pi, w, y, \theta) = \sum_{j=0}^{N+M} \left( z_j^2 + \lambda (v_j - w)^2 \right)    (7b)

    z_j = \sum_{i=1}^{\min(j,M)} h_i v_{j-i} + \sum_{i=j+1}^{M} h_i u_{j-i} + \hat{d}    (7c, 7d)

    v_j = w \quad \forall \; j \ge N    (7e)

    \hat{d} = y - \theta^T x    (7f)

    x \triangleq [u_{-1} \; u_{-2} \; \cdots \; u_{-M}]^T    (7g)

    \pi \triangleq [v_0 \; v_1 \; \cdots \; v_{N-1}]^T    (7h)

    \theta \triangleq [h_1 \; h_2 \; \cdots \; h_M]^T    (7i)

The sequence {z_j} represents future predicted outputs, and the sequence {v_j} contains the future inputs considered by the controller. The input weight λ > 0 defines a trade-off between the costs of input and output regulation. The vector x defines the state of the system, π contains the future inputs, and the FIR model parameters are stored in θ.

The stage I cost function can be written as:

    \Psi(x, w, y, \theta) = \left[ (\theta^T 1_d)\, w + \hat{d} \right]^2    (8a)

    1_d \triangleq [1 \; 1 \; \cdots \; 1]^T    (8b)

This is a strictly convex function of w, so long as there are no systems in Ω with zero gain. The stage II cost function can be written as:

    \Phi(x, \pi, w, y, \theta) = z_0^2 + Z^T Z + \lambda (\bar{W} - \pi)^T (\bar{W} - \pi)    (9a, 9b)

    Z \triangleq [z_1 \; z_2 \; \cdots \; z_{N+M}]^T    (9c)

    \bar{W} \triangleq [w \; w \; \cdots \; w]^T    (9d)
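The two cost functions are straightforward to evaluate numerically. The sketch below is an illustrative Python implementation of (7a)-(7e) and (8a); the function names (predict_outputs, stage1_cost, stage2_cost) and the use of plain loops are our own choices, not the paper's.

```python
import numpy as np

def predict_outputs(theta, x, pi, w, d_hat, N):
    """Return z_0, ..., z_{N+M} from eq. (7c)-(7e): past inputs x, free moves pi, then w."""
    M = len(theta)
    u_past = list(x)                    # x = [u_{-1}, ..., u_{-M}]
    v = list(pi) + [w] * (M + 1)        # v_j = w for j >= N  (eq. 7e)
    z = []
    for j in range(N + M + 1):
        zj = d_hat
        for i in range(1, M + 1):
            k = j - i
            if k >= 0:
                zj += theta[i - 1] * v[k]            # future input v_{j-i}
            else:
                zj += theta[i - 1] * u_past[-k - 1]  # past input u_{j-i}
        z.append(zj)
    return np.array(z)

def stage1_cost(theta, x, w, y):
    """Psi of eq. (7a)/(8a): squared predicted steady-state output."""
    d_hat = y - float(theta @ x)                     # eq. (7f)
    return (float(theta.sum()) * w + d_hat) ** 2     # eq. (8a)

def stage2_cost(theta, x, pi, w, y, lam, N):
    """Phi of eq. (7b): output error plus weighted input deviation from the target w."""
    d_hat = y - float(theta @ x)
    z = predict_outputs(theta, x, pi, w, d_hat, N)
    v = np.concatenate([pi, np.full(len(z) - len(pi), w)])
    return float(z @ z) + lam * float(((v - w) ** 2).sum())
```

With θ = θ̄ these routines give the nominal costs Ψ̄ and Φ̄ of (13)-(14) below; with θ = θ̃ they give the plant costs (15)-(16).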

The vector of future output predictions can be written as:

    Z = A(\theta)\,\pi + B(\theta)\,x + C(\theta)\,w + D    (10a)

    A(\theta) = \begin{bmatrix}
    h_1    & 0       & \cdots & 0          \\
    h_2    & h_1     & \cdots & 0          \\
    \vdots & \vdots  & \ddots & \vdots     \\
    h_M    & h_{M-1} & \cdots & h_{M-N+1}  \\
    0      & h_M     & \cdots & h_{M-N+2}  \\
    \vdots & \vdots  & \ddots & \vdots     \\
    0      & 0       & \cdots & h_M        \\
    0      & 0       & \cdots & 0
    \end{bmatrix}    (10b)

    B(\theta) = \begin{bmatrix}
    h_2    & h_3    & \cdots & h_M    & 0      \\
    h_3    & h_4    & \cdots & 0      & 0      \\
    \vdots & \vdots &        & \vdots & \vdots \\
    h_M    & 0      & \cdots & 0      & 0      \\
    0      & 0      & \cdots & 0      & 0      \\
    \vdots & \vdots &        & \vdots & \vdots \\
    0      & 0      & \cdots & 0      & 0
    \end{bmatrix}    (10c)

    C(\theta) = \begin{bmatrix} 0 & \cdots & 0 & h_1 & h_1 + h_2 & \cdots & \sum_{i=1}^{M} h_i \end{bmatrix}^T    (10d)

    D \triangleq [\hat{d} \; \hat{d} \; \cdots \; \hat{d}]^T    (10e)

Here A(θ) ∈ ℝ^{(N+M)×N} contains the impulse response coefficients that multiply the free input moves in π, B(θ) ∈ ℝ^{(N+M)×M} contains those that multiply the past inputs in x, C(θ) ∈ ℝ^{N+M} (whose first N entries are zero) accounts for the inputs held at w beyond the control horizon, and D stacks the disturbance estimate d̂.
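Because the prediction matrices have a simple banded structure, they are easy to build directly from the definitions above. The following Python sketch constructs A(θ), B(θ), C(θ), and D for given horizons; the function name build_prediction_matrices is ours, introduced only for this illustration.

```python
import numpy as np

def build_prediction_matrices(theta, N, d_hat):
    """Build A, B, C, D of eq. (10a)-(10e) so that Z = A @ pi + B @ x + C * w + D.

    Rows correspond to z_1, ..., z_{N+M}; theta = [h_1, ..., h_M].
    """
    M = len(theta)
    n_rows = N + M
    A = np.zeros((n_rows, N))
    B = np.zeros((n_rows, M))
    C = np.zeros(n_rows)
    for j in range(1, n_rows + 1):          # prediction index of z_j
        for l in range(1, N + 1):           # column for free move v_{l-1}
            i = j - l + 1
            if 1 <= i <= M:
                A[j - 1, l - 1] = theta[i - 1]
        for m in range(1, M + 1):           # column for past input u_{-m}
            if j + m <= M:
                B[j - 1, m - 1] = theta[j + m - 1]
        C[j - 1] = theta[:min(max(j - N, 0), M)].sum()   # inputs frozen at w
    D = np.full(n_rows, d_hat)
    return A, B, C, D
```

As a quick sanity check, the last rows of A and B are zero and the last entry of C equals the sum of the impulse coefficients, so z_{N+M} = (θ^T 1_d) w + d̂, which recovers the stage I cost (8a).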

Substituting (10a) into (9a) gives:

    \Phi(x, \pi, w, y, \theta) = \pi^T (\lambda I + A^T A)\, \pi + 2 (A^T G - \lambda \bar{W})^T \pi + G^T G + \lambda \bar{W}^T \bar{W} + z_0^2    (11)

    G \triangleq B x + C w + D    (12)

From this it can be seen that the stage II cost function is strictly convex in π for all θ ∈ Ω, since λ > 0 makes λI + A^T A positive definite. For the sake of convenience, we now define plant and model cost functions for both stages. The stage I and stage II controllers will optimize cost functions based on the nominal model θ̄; we define these as the nominal model cost functions Ψ̄ and Φ̄:

    \bar{\Psi}(x, w, y) \triangleq \Psi(x, w, y, \bar{\theta})    (13)

    \bar{\Phi}(x, \pi, w, y) \triangleq \Phi(x, \pi, w, y, \bar{\theta})    (14)

Controller performance will be evaluated based on the controller cost for the true plant θ̃. For this purpose we define the plant cost functions Ψ̃ and Φ̃:

    \tilde{\Psi}(x, w, y) \triangleq \Psi(x, w, y, \tilde{\theta})    (15)

    \tilde{\Phi}(x, \pi, w, y) \triangleq \Phi(x, \pi, w, y, \tilde{\theta})    (16)
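Equation (11) is an ordinary quadratic in π, so for a fixed target w the unconstrained minimizer of the nominal cost Φ̄ has a closed form. The sketch below is our own illustration of that point (it is not the RLQRD controller defined in the next section); the name nominal_input_sequence is hypothetical and the sketch reuses build_prediction_matrices from the previous example.

```python
import numpy as np

def nominal_input_sequence(theta_bar, x, w, y, lam, N):
    """Minimize the quadratic (11) over pi for the nominal model theta_bar."""
    d_hat = y - float(theta_bar @ x)                 # eq. (7f)
    A, B, C, D = build_prediction_matrices(theta_bar, N, d_hat)
    G = B @ x + C * w + D                            # eq. (12)
    W_bar = np.full(N, w)
    H = lam * np.eye(N) + A.T @ A                    # positive definite for lam > 0
    g = A.T @ G - lam * W_bar
    return np.linalg.solve(H, -g)                    # pi* = -H^{-1} g
```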

3. ROBUST SISO LQR FOR DISTURBANCE REJECTION

We now define a robust linear quadratic regulator designed to stabilize system (1).

Definition: RLQRD. At time step k, the Robust Linear Quadratic Regulator for Disturbance rejection is defined as:

Stage I: Steady-State Error Minimization

    w_k^* = \arg\min_{w} \{ \bar{\Psi}(x_k, w, y_k) : \Psi(x_k, w, y_k, \theta) \le \Psi(x_k, \tilde{w}_k, y_k, \theta) \;\; \forall \theta \in \Omega \}    (17)

    \tilde{w}_k = w_{k-1}^*; \qquad \tilde{w}_0 = 0    (18)

Stage II: Dynamic Error Minimization

    \pi_k^* = \arg\min_{\pi} \{ \bar{\Phi}(x_k, \pi, w_k^*, y_k) : \Phi(x_k, \pi, w_k^*, y_k, \theta) \le \Phi(x_k, \tilde{\pi}_k, w_k^*, y_k, \theta) \;\; \forall \theta \in \Omega \}    (19)

    \tilde{\pi}_k = S \pi_{k-1}^* + T w_{k-1}^*; \qquad \tilde{\pi}_0 = [0 \; \cdots \; 0]^T    (20a)

    S = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}    (20b)

    T = [0 \; \cdots \; 0 \; 1]^T    (20c)

    u_k = [1 \; 0 \; \cdots \; 0] \, \pi_k^*    (20d)

Both the stage I and stage II optimizations are semi-infinite programs that minimize the model cost subject to an infinite-dimensional constraint. For each system in the set Ω, the constraint prevents the controller cost from increasing relative to the cost computed using the previous optimal input (w̃_k or π̃_k). Both stage I and stage II programs are strictly convex and have feasible solutions (w̃_k and π̃_k, respectively), ensuring the existence of a unique optimal solution for the RLQRD algorithm at each time step.
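To make the two-stage structure concrete, the sketch below outlines one RLQRD time step in Python, reusing stage1_cost and stage2_cost from the earlier sketch. It is a simplified illustration under our own assumptions: the semi-infinite constraints in (17) and (19) are enforced only on a finite sample of models standing in for Ω (the paper's own numerical treatment is the local-reduction method of Section 4), and all function and variable names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def rlqrd_step(theta_bar, theta_samples, x, y, w_prev, pi_prev, lam, N):
    """One RLQRD move: stage I target search (17)-(18), then stage II dynamics (19)-(20)."""
    # Stage I: minimize the nominal steady-state cost without increasing the
    # steady-state cost of any sampled model relative to the previous target.
    w_tilde = w_prev                                              # eq. (18)
    cons1 = [{'type': 'ineq',
              'fun': (lambda w, th=th: stage1_cost(th, x, w_tilde, y)
                                       - stage1_cost(th, x, w[0], y))}
             for th in theta_samples]
    res1 = minimize(lambda w: stage1_cost(theta_bar, x, w[0], y),
                    x0=[w_tilde], constraints=cons1, method='SLSQP')
    w_star = float(res1.x[0])

    # Stage II: shift the previous move plan and append the old target, eq. (20a)-(20c).
    pi_tilde = np.append(pi_prev[1:], w_prev)
    cons2 = [{'type': 'ineq',
              'fun': (lambda pi, th=th: stage2_cost(th, x, pi_tilde, w_star, y, lam, N)
                                        - stage2_cost(th, x, pi, w_star, y, lam, N))}
             for th in theta_samples]
    res2 = minimize(lambda pi: stage2_cost(theta_bar, x, pi, w_star, y, lam, N),
                    x0=pi_tilde, constraints=cons2, method='SLSQP')
    pi_star = res2.x
    return w_star, pi_star, pi_star[0]          # u_k = [1 0 ... 0] pi*, eq. (20d)
```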


We now state a theorem describing the convergence properties of the algorithm.

Theorem (Robust steady-state behavior of the RLQRD algorithm). If all systems in Ω have nonzero gains, then the RLQRD algorithm acting on system (1) brings the output asymptotically to zero (there is no steady-state offset).

Proof: First we show that the input target computed by the stage I optimization asymptotically approaches the correct value. Then we show that, given the correct input target, the stage II dynamic optimization forces the output to zero.

At each time step k, the disturbance estimate d̂ used in the stage I plant cost function is equal to the true disturbance d:

    \hat{d} = \tilde{y}_k - \tilde{\theta}^T x_k = d    (21)

Since d is constant, the stage I plant cost varies only with w:

    \tilde{\Psi}(x_k, w, y_k) = \left[ (\tilde{\theta}^T 1_d)\, w + d \right]^2    (22)

The constraint in (17) ensures that the sequence of stage I plant costs is non-increasing:

    \tilde{\Psi}(x_k, w_k^*, y_k) \le \tilde{\Psi}(x_{k-1}, w_{k-1}^*, y_{k-1})    (23)

This sequence is bounded below by zero and therefore converges. So in the limit as k → ∞:

    \left[ (\tilde{\theta}^T 1_d)\, w_k^* + d \right]^2 = \left[ (\tilde{\theta}^T 1_d)\, w_{k-1}^* + d \right]^2    (24)

Because the gain is nonzero, this equation can be satisfied in only two possible ways:

    w_k^* = w_{k-1}^*    (25)

or

    w_k^* = -w_{k-1}^* - \frac{2d}{\tilde{\theta}^T 1_d}    (26)

But the stage I optimization is strictly convex, so there can be only one solution. Solving (25) and (26) simultaneously one finds that w_k^* and w_{k-1}^* are equal, so the optimal input target converges to a constant value at steady state:

    w_k^* \rightarrow w_\infty^* = -\frac{d}{\tilde{\theta}^T 1_d} \quad \text{as} \quad k \rightarrow \infty    (27)

Now let us consider the behavior of the dynamic optimization. Assume that the stage I optimization has converged, so that:

    w_k^* = w_{k-1}^* = w_\infty^* = -\frac{d}{\tilde{\theta}^T 1_d}    (28)

With the target converged, every input beyond the control horizon is held at w_∞^*, so the terminal prediction under the true plant vanishes:

    z_{N+M|k} = (\tilde{\theta}^T 1_d)\, w_\infty^* + d = 0    (29)

The cost function constraint in (19) ensures that:

    \tilde{\Phi}(x_k, \pi_k^*, w_\infty^*, y_k) \le \tilde{\Phi}(x_k, \tilde{\pi}_k, w_\infty^*, y_k)    (30)

Using arguments similar to those put forth previously by Badgwell (1997) it can be shown that:

    \tilde{\Phi}(x_k, \tilde{\pi}_k, w_\infty^*, y_k) - \tilde{\Phi}(x_{k-1}, \pi_{k-1}^*, w_\infty^*, y_{k-1}) = -z_{0|k-1}^2 - \lambda (v_{0|k-1} - w_\infty^*)^2 + z_{N+M|k}^2    (31)

Combining (28)-(31) shows that the sequence of optimal plant costs is non-increasing:

    \tilde{\Phi}(x_k, \pi_k^*, w_\infty^*, y_k) \le \tilde{\Phi}(x_{k-1}, \pi_{k-1}^*, w_\infty^*, y_{k-1}) - z_{0|k-1}^2 - \lambda (v_{0|k-1} - w_\infty^*)^2    (32)

The sequence is bounded below by zero and therefore converges. This implies that the output converges to zero and the input converges to the optimal input target:

    y_k = z_0 \rightarrow 0 \quad \text{and} \quad u_k = v_0 \rightarrow w_\infty^* \quad \text{as} \quad k \rightarrow \infty    (33)
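As a small worked example of (27)-(29), with assumed numbers that are not taken from the paper: suppose the true plant gain and disturbance are

    \tilde{\theta}^T 1_d = 2, \quad d = 0.6 \;\Longrightarrow\; w_\infty^* = -\frac{d}{\tilde{\theta}^T 1_d} = -0.3, \qquad (\tilde{\theta}^T 1_d)\, w_\infty^* + d = 2(-0.3) + 0.6 = 0,

so the converged input target exactly cancels the disturbance at steady state, which is what (29) and (33) assert.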

4. NUMERICAL SOLUTION

The RLQRD algorithm requires the solution of two semi-infinite programs at each time step k. In this section we discuss the solution of the stage I optimization; a strictly analogous procedure can be used to solve the stage II program. Kassmann (1999) discusses several solution methods for semi-infinite programs; here we focus on the method known as local reduction (Hettich and Kortanek, 1993), in which the infinite-dimensional cost function constraint in (17) is replaced with a single constraint C computed using the most limiting system θ̂:

    w_k^* = \arg\min_{w} \{ \bar{\Psi}(x_k, w, y_k) : C(x_k, w, y_k) \le 0 \}    (34a)

    C(x, w, y) \triangleq \Delta\Psi(x, w, \tilde{w}, y, \hat{\theta})    (34b)

    \Delta\Psi(x, w, \tilde{w}, y, \theta) \triangleq \Psi(x, w, y, \theta) - \Psi(x, \tilde{w}, y, \theta)    (34c)

    \hat{\theta} = \arg\max_{\theta} \left[ \Delta\Psi(x, w, \tilde{w}, y, \theta) : \theta \in \Omega \right]    (34d)

The cost function difference ΔΨ can be simplified using (8a), so that the inner maximization problem (34d) can be written as:

    \hat{\theta} = \arg\max_{\theta} \left[ \theta^T H \theta + 2 g^T \theta : (\theta - \bar{\theta})^T W (\theta - \bar{\theta}) \le 1 \right]    (35a)

    H \triangleq (\bar{w} - x)(\bar{w} - x)^T - (\bar{\tilde{w}} - x)(\bar{\tilde{w}} - x)^T    (35b)

    g \triangleq (\bar{w} - \bar{\tilde{w}})\, y    (35c)

    \bar{w} \triangleq 1_d\, w, \qquad \bar{\tilde{w}} \triangleq 1_d\, \tilde{w}    (35d)

When selecting an optimization technique to solve this problem it is important to note that, although the constraint C is convex in w, it is not necessarily differentiable with respect to w, and standard gradient-based methods may perform poorly unless special precautions are taken. Lau et al. (1991) and Boyd and Barratt (1991) discuss this issue and offer alternative solution methods. We used a standard Sequential Quadratic Programming (SQP) algorithm but computed the gradients analytically using a value of θ̂ based on the most recent iterate of w.
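The inner problem (35a) maximizes an indefinite quadratic over an ellipsoid. The sketch below simply hands that problem to a multi-start local SLSQP solver, which is one plausible way to approximate θ̂; it is not the procedure prescribed by the paper and, being a local method, it is not guaranteed to find the global maximizer. All names here are ours.

```python
import numpy as np
from scipy.optimize import minimize

def most_limiting_model(H, g, theta_bar, W):
    """Approximate theta_hat of eq. (35a): maximize theta^T H theta + 2 g^T theta
    over the ellipsoid (theta - theta_bar)^T W (theta - theta_bar) <= 1.
    """
    obj = lambda th: -(th @ H @ th + 2.0 * g @ th)           # negate for a minimizer
    ellipsoid = {'type': 'ineq',
                 'fun': lambda th: 1.0 - (th - theta_bar) @ W @ (th - theta_bar)}
    # Start from several points on the ellipsoid boundary and keep the best local solution.
    best = None
    rng = np.random.default_rng(0)
    L = np.linalg.cholesky(np.linalg.inv(W))                 # maps the unit sphere onto the ellipsoid
    for _ in range(10):
        u = rng.standard_normal(len(theta_bar))
        x0 = theta_bar + L @ (u / np.linalg.norm(u))
        res = minimize(obj, x0, constraints=[ellipsoid], method='SLSQP')
        if best is None or res.fun < best.fun:
            best = res
    return best.x
```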

5. SIMULATION EXAMPLE

In this section we compare the performance of a standard nominal MPC (NMPC) controller with that of the RLQRD algorithm for control of a simulated fired heater. The control algorithm must adjust the set point of a fuel gas PID controller in order to maintain the heater outlet temperature at a desired setpoint. The heater is subject to occasional feed rate changes that dramatically affect the open-loop dynamics. Assume that the transfer function for low feed rate operation is given by g_l, and at high feed rates the heater responds according to g_h (time constants in minutes):

    g_l = \frac{3\, e^{-10 s}}{(1 + 4 s)^2} \;\; \mathrm{degC/(sm^3/s)}, \qquad g_h = \frac{1\, e^{-5 s}}{(1 + 5 s)^2} \;\; \mathrm{degC/(sm^3/s)}
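FIR models for these dynamics can be obtained by sampling the step responses and differencing. The sketch below does this analytically for a model of the form K e^{-θs}/(1+τs)^2 with a 5-minute sample time (the sample time quoted for Figure 1); the function names, and the choice to difference the step response and to average the two models for the nominal, are our own assumptions.

```python
import numpy as np

def step_response(K, tau, delay, t):
    """Unit-step response of K*exp(-delay*s)/(1 + tau*s)^2 at time t (minutes)."""
    tp = t - delay
    if tp <= 0.0:
        return 0.0
    return K * (1.0 - (1.0 + tp / tau) * np.exp(-tp / tau))

def fir_coefficients(K, tau, delay, Ts, M):
    """FIR coefficients h_i = S(i*Ts) - S((i-1)*Ts), i = 1..M."""
    S = [step_response(K, tau, delay, i * Ts) for i in range(M + 1)]
    return np.array([S[i] - S[i - 1] for i in range(1, M + 1)])

Ts, M = 5.0, 10                                         # 5-minute sample time, M = 10 coefficients
theta_low  = fir_coefficients(3.0, 4.0, 10.0, Ts, M)    # g_l: low feed rate
theta_high = fir_coefficients(1.0, 5.0,  5.0, Ts, M)    # g_h: high feed rate
theta_bar  = 0.5 * (theta_low + theta_high)             # a nominal model halfway between the two
print(theta_low.sum(), theta_high.sum())                # steady-state gains, approximately 3 and 1
```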

Fig. 1. Open-loop step response for heater in low feed rate (- -) and high feed rate (-) operation.

Fig. 2. Identified impulse response for the heater: nominal model θ̄ (-), upper bound θ_H (-.), and lower bound θ_L (- -).

Fig. 3. NMPC (-) and RLQRD (- -) responses for low feed rate operation.

Fig. 4. NMPC (-) and RLQRD (- -) responses for high feed rate operation.

Figure 1 illustrates the open-loop response of the heater to a step increase in fuel gas flow of 1 sm³/s for a 5 minute sample time. As the feed rate increases the gain decreases, the deadtime shrinks, and the dynamic response slows down.

Assume that the impulse response model shown in Figure 2 has been identified by step testing the heater during both high and low feed operation. The upper and lower limits on each coefficient reflect uncertainty in the process deadtime, gain, and dynamic response; this is due to using data at different feed rates. The nominal model used for control design passes halfway between the upper and lower limits.

NMPC and RLQRD controllers were designed using M = 10, N = 2, and λ = 0.001. The uncertainty description used by the RLQRD algorithm, containing the models between θ_H and θ_L in Figure 2, is defined by W = diag(100000, 5.73, 0.738, 0.534, 2.28, 15.1, 127, 1260, 14800, 100000); a quick check of how this choice bounds the individual coefficients is sketched below.
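Because W is diagonal, the ellipsoid (4) allows each FIR coefficient to deviate from its nominal value by at most ±1/√W_ii (with all other coefficients at their nominal values the bound is attained). The snippet below, which is our illustration rather than part of the paper, computes those half-widths for the W quoted above.

```python
import numpy as np

W_diag = np.array([100000, 5.73, 0.738, 0.534, 2.28, 15.1, 127, 1260, 14800, 100000])
half_widths = 1.0 / np.sqrt(W_diag)     # maximum deviation of each h_i from its nominal value
for i, hw in enumerate(half_widths, start=1):
    print(f"h_{i}: +/- {hw:.4f}")
```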

Simulations were performed to see how well both controllers reject an output disturbance during low and high feed operation. Matlab (version 5.3) was used for the calculations and the optimizations were solved using the default SQP routine constr from the Matlab Optimization Toolbox.

Figure 3 compares the performance of the NMPC and RLQRD algorithms for low feed rate operation. A step output disturbance is introduced at time 2. For this case the heater has a higher gain and longer deadtime than implied by the nominal model, causing the NMPC controller to cycle severely. The RLQRD algorithm also cycles, but is able to reject the disturbance much more quickly and with significantly less movement of the fuel gas setpoint.

Figure 4 shows a similar comparison for high feed rate operation. For this case the heater has a lower gain and shorter deadtime than implied by the nominal model, leading to oscillations for the NMPC controller. The RLQRD algorithm provides a less aggressive response, and is again able to reject the disturbance more quickly and with significantly less input movement.

Fig. 5. NMPC (-) and RLQRD (- -) responses for a perfect nominal model.

Figure 5 shows the same test for the case of a perfect nominal model. Note that this does not correspond to a real operating mode of the heater; the purpose here is to see the best possible performance for the NMPC controller. Both controllers are now able to reject the disturbance more quickly, but the RLQRD controller uses less aggressive input movement.

6. CONCLUSIONS

A robust disturbance rejection algorithm (RLQRD) has been formulated for unconstrained SISO FIR systems. The key idea is the introduction of a steady-state target optimization that searches for an input target that minimizes steady-state error for all systems in the uncertainty domain. The optimal input target is passed to a dynamic optimization that then drives all systems in the uncertainty domain to the same steady state. The proposed algorithm ensures asymptotic rejection of a constant output disturbance so long as all systems in the uncertainty description have a nonzero gain. A simulation example shows that for at least some cases, the RLQRD algorithm outperforms the standard nominal MPC controller with an output bias.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge the financial support of Aspen Technology, Inc., the National Science Foundation, and the Texas Higher Education Coordinating Board.

7. REFERENCES

Badgwell, Thomas A. (1997). Robust model predictive control of stable linear systems. Int. J. Control 68, 797-818.

Boyd, Stephen P. and Craig H. Barratt (1991). Linear Controller Design: Limits of Performance. Prentice Hall, Englewood Cliffs, NJ.


Hettich, R. and K. O. Kortanek (1993). Semi-infinite programming: theory, methods, and applications. SIAM Rev. 35, 380-429.

Kassmann, Dean E. (1999). Robust Model Predictive Control as a Class of Semi-Infinite Programming Problems. PhD thesis, Rice University.

Lau, M. K., S. Boyd, R. L. Kosut and G. F. Franklin (1991). A robust control design for FIR plants with parameter set uncertainty. In: Proceedings of the 1991 American Control Conference, pp. 83-88.

Lee, Jay H. and Brian L. Cooley (2000). Stable min-max control for state-space systems with bounded input matrix. Automatica 36, 463-473.

Ljung, Lennart (1987). System Identification: Theory for the User. Prentice-Hall, Englewood Cliffs, New Jersey.

Meadows, Edward S. and James B. Rawlings (1997). Model predictive control. In: Nonlinear Process Control (Michael A. Henson and Dale E. Seborg, Eds.), Chap. 5, pp. 233-310. Prentice Hall.

Qin, S. Joe and Thomas A. Badgwell (1997). An overview of industrial model predictive control technology. In: Fifth International Conference on Chemical Process Control (Jeffrey C. Kantor, Carlos E. Garcia and Brice Carnahan, Eds.), pp. 232-256. AIChE and CACHE.