
Copyright © IFAC Large Scale Systems. Rio Patras, Greece, 1998

INTEGRATED PREDICTIVE CONTROL AND ECONOMIC OPTIMISATION: DECOMPOSITION AND HIERARCHICAL STRUCTURE 1

V.M. Becerra*, P.D. Roberts*, G.W. Griffiths**

* City University, London, UK
** Aspentech UK, London, UK

Abstract: In industrial practice, constrained steady state optimisation and predictive control are separate, albeit closely related, functions within the control hierarchy. This paper presents a method which integrates predictive control with on-line optimisation with economic objectives. A receding horizon optimal control problem is formulated using linear state space models. This optimal control problem includes in its formulation a general steady state objective depending on the magnitudes of manipulated and measured output variables. This steady state objective may include regulatory and economic objectives. The solution of the receding horizon problem is achieved by means of decomposition and iteration on simplified sub-problems. The method is tested with simulations of a system of chemical reactors. Copyright © 1998 IFAC

Keywords: large-scale systems, optimisation, predictive control, process identification, hierarchical control

1. INTRODUCTION

In modern processing plants a typical hierarchy of control functions using predictive control is described below (Qin and Badgwell, 1997):

(1) At the top level, a steady state optimiser calculates optimal operating conditions based on a steady state model of the whole plant.
(2) The second level is located at each process unit. A separate local steady state optimiser calculates reference values for extra degrees of freedom (inputs and/or outputs) not specified by the upper level. A multivariable predictive controller acting as a constraint controller manipulates setpoints of local regulatory controllers at the level below.
(3) The regulatory control layer is typically integrated by PID controllers which receive their setpoints from the second level and directly manipulate the process actuators.

It is clear that in commercial approaches to steady state optimisation using linear model based predictive controllers there is a separation between setpoint regulation and local economic optimisation of the steady state. The objective function of commercial predictive controllers is a regulatory objective, while its reference values are computed by separate steady state optimisation algorithms.

Industrial applications of economic optimisation are most common in large scale continuous plants, where the main optimisation tools are based on steady-state process models. This approach has a major drawback: the need to ensure that the process data used for optimisation corresponds to steady state conditions. As a result, the performance of the plant under transient conditions may not be improved by optimisation and large amounts of information about the state of the plant are discarded. To overcome these shortcomings, it is desirable to use a dynamic framework for on-line economic optimisation (Perkins, 1994). To this effect, a number of researchers have attempted to integrate economic objectives into the predictive control formulation. For instance, see (Jang et al., 1987; Eaton and Rawlings, 1992; Balchen et al., 1992; Becerra and Roberts, 1995). All these approaches use nonlinear first principle models as the basis for prediction and optimisation. More recently, a predictive control algorithm has been presented based on linear empirical models of the process, which were periodically adapted using a moving horizon least squares approach, together with an optimal control algorithm to perform optimal predictions based on economic objectives and subject to hard and soft constraints (Becerra et al., 1996; Becerra et al., 1997; Becerra et al., 1998). This paper is an extension of that work in that the iterative solution has been decomposed, so that instead of solving a centralised optimal control problem, a number of smaller sub-problems may be solved in parallel, coordinated by an upper entity in the hierarchy. The hierarchical scheme the authors explore in this paper is shown in Figure 1, where at each local unit the predictive control algorithm integrates economic and regulatory objectives.

1 Work supported by the United Kingdom EPSRC, Grant No. GR/J73452.

2. FORMULATION

Consider an interconnected process that obeys the following discrete-time dynamic state space equations:

X(k+1) = f_s(X(k), u_m(k), u_d(k))
y(k) = h_s(X(k))   (1)

where k is an integer sampling index, X ∈ R^{n_x} is a state vector, u_m ∈ R^{n_m} is a vector of independent manipulated inputs, u_d ∈ R^{n_d} is a vector of independent measured but not manipulated inputs, y ∈ R^{n_y} is a vector of measured outputs, f_s : R^{n_x} × R^{n_m} × R^{n_d} → R^{n_x} is a probably unknown state transition mapping, and h_s : R^{n_x} → R^{n_y} is a probably unknown state/output mapping.

2.1 The steady state optimisation problem

Furthermore, assume that it is desired to optimise in the steady state a scalar performance index associated with the system represented by equations (1), subject to inequality constraints on the inputs and outputs. Thus define the steady state optimisation problem (SSOP) as follows:

SSOP:   min_{u_m, y} F(y, u_m)   (2)

subject to:

y = g_s(u_m, u_d)   (3)
u_m ∈ Ω   (4)
G(y) ≤ 0   (5)

where F : R^{n_y} × R^{n_m} → R is a steady state objective, g_s : R^{n_m} × R^{n_d} → R^{n_y} is the steady state input-output mapping corresponding to the dynamics given in equations (1), Ω = {u ∈ R^{n_m} | T(u) ≤ 0} is a set of admissible inputs, T : R^{n_m} → R^{n_T} is a mapping of input constraints, and G : R^{n_y} → R^{n_c} is a mapping of output constraints.
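As an illustration (not part of the original paper), an SSOP of the form (2)-(5) can be posed directly to a nonlinear programming solver whenever a steady state model g_s is available; the sketch below uses SciPy, and the definitions of g_s, F, the output constraint and the input bounds are placeholders chosen only to make the example self-contained.

```python
# Illustrative sketch only: solving an SSOP-like problem (2)-(5) with SciPy.
# g_s, F, the output limit and the bounds are hypothetical placeholders,
# not taken from the paper; any steady-state map and economic objective fit.
import numpy as np
from scipy.optimize import minimize, Bounds, NonlinearConstraint

def g_s(u_m, u_d):
    # placeholder steady-state input-output map y = g_s(u_m, u_d)
    return np.tanh(u_m) + 0.1 * u_d

def F(y, u_m):
    # placeholder economic objective: product value minus a small input cost
    return -np.sum(y) + 0.01 * np.sum(u_m**2)

u_d = np.array([0.2])                                  # measured disturbance, held fixed
bounds = Bounds(lb=[-1.0], ub=[1.0])                   # u_m in Omega (simple box, T(u) <= 0)
out_con = NonlinearConstraint(lambda u_m: g_s(u_m, u_d), -np.inf, 0.9)   # G(y) <= 0

res = minimize(lambda u_m: F(g_s(u_m, u_d), u_m),
               x0=np.array([0.0]), method="trust-constr",
               bounds=bounds, constraints=[out_con])
print(res.x, g_s(res.x, u_d))
```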

2.2 The receding horizon optimal control problem (RHOP)

As an attempt at driving system (1) towards its steady state optimum, the following optimal control problem may be solved in a receding horizon fashion:

RHOP:   min_{Δu_m(k)} J(i) = (1/2) Δx(i+N)^T Φ Δx(i+N) + Σ_{k=i}^{i+N-1} { F(y(k), u_m(k)) + (1/2) Δx(k)^T Q Δx(k) + (1/2) Δu_m(k)^T R Δu_m(k) + P(y(k)) }   (6)

subject to:

Δx(k+1) = A Δx(k) + B_m Δu_m(k) + δ(k−i) B_d Δu_d(i)   (7)
Δy(k) = C Δx(k)   (8)
u_m(k) ∈ Ω = {u_m ∈ R^{n_m} | u_l ≤ u_m ≤ u_h}   (9)
|Δu_m(k)| ≤ u_r   (10)
Δu_m(k) = 0,   M ≤ k ≤ N   (11)

Where:
Φ, Q, R - are weighting matrices of the appropriate dimensions
Δ - is the increment operator, i.e. Δu_m(k) = u_m(k) − u_m(k−1)
u_m - is the vector of manipulated variables
u_d - is the disturbance input
δ(k−i) - is 1 when k = i, and 0 otherwise
A, B_m, B_d - are matrices of the appropriate dimensions
N - is the prediction horizon in samples
M - is the control horizon in samples
i - denotes the current time sample.

Output constraint violations along the predictions are penalized in equation (6) by means of the term P(y(k)). This implies that output constraints are treated as soft constraints. See (Becerra et al., 1998) for more details.

Notice that the incremental state vector Δx corresponds with the identified linear state space model (see Section 2.3), and is not necessarily related in size or values to the probably unknown state vector X.

Assuming that the system settles to a steady state operating point under receding horizon control involving the solution of RHOP, conditions have been studied for the satisfaction of the necessary optimality conditions of the steady-state optimisation problem SSOP (Becerra et al., 1998).
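The paper does not list an implementation of RHOP; purely as an illustrative sketch, the cost (6) can be evaluated for a candidate sequence of input increments by propagating the incremental model (7)-(8), with the steady state objective F and the soft-constraint penalty P supplied as callables. All names below are chosen here for illustration.

```python
# Minimal sketch (not the authors' code): evaluate the RHOP cost (6) for a
# given sequence of input increments, using the incremental model (7)-(8).
import numpy as np

def rhop_cost(du_seq, dx0, du_d0, y_prev, u_prev,
              A, Bm, Bd, C, Phi, Q, R, F, P, N, M):
    """du_seq: (M, n_m) input increments; increments are zero for k >= M (11)."""
    dx, cost = dx0.copy(), 0.0
    y, u_m = y_prev.copy(), u_prev.copy()
    for k in range(N):
        du = du_seq[k] if k < M else np.zeros_like(u_prev)   # constraint (11)
        u_m = u_m + du                                       # u_m(k)
        y = y + C @ dx                                       # accumulate Δy(k), eq. (8)
        cost += F(y, u_m) + 0.5 * dx @ Q @ dx + 0.5 * du @ R @ du + P(y)
        d_dist = du_d0 if k == 0 else np.zeros_like(du_d0)   # δ(k-i) term in (7)
        dx = A @ dx + Bm @ du + Bd @ d_dist                  # Δx(k+1), eq. (7)
    return cost + 0.5 * dx @ Phi @ dx                        # terminal term in (6)
```

In practice this evaluation would sit inside an optimiser that also enforces the bound and rate constraints (9)-(10); here it only makes the structure of (6)-(11) explicit.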

2.3 Model identification and decomposition

For the model estimation problem in this paper, the authors have developed a non-iterative technique (Becerra et al., 1997; Becerra et al., 1998) based on the multivariable ARMAX model structure and the Least Squares method. The resulting ARMAX model is then transformed into a non-minimal state-space realization. In this way, the need for a separate linear state estimator is avoided since the state vector is formed from delayed values of the output and input vectors. The technique is based on the moving-horizon concept, with a data window length of N_d samples, an interval of N_u samples between updates and a sampling time T_s. The method exploits the displacement structure of the data window in order to reduce its computational load.

Assume that the output vector of the system at discrete time k is denoted as y_s(k) ∈ R^{n_y}, and the independent input vector at time k is u_s(k) ∈ R^{n_u}. Notice that in practice the vector of independent inputs should include both manipulated inputs and measured disturbances, so that u_s = [u_sm^T u_sd^T]^T. An ARMAX model of the system can be written as:

A*(q^{-1}) y_s(k) = B*(q^{-1}) u_s(k − m_d) + b + C*(q^{-1}) ε(k)   (12)

where A*(q^{-1}) = I + A_1 q^{-1} + ... + A_{n_a} q^{-n_a}, B*(q^{-1}) = B_1 q^{-1} + ... + B_{n_b} q^{-n_b}, C*(q^{-1}) = I + C_1 q^{-1} + ... + C_{n_c} q^{-n_c}, m_d is the minimum pure time delay in samples from inputs to outputs, the sequence ε(k) is assumed to be zero mean discrete white noise, and b ∈ R^{n_y} is an off-set parameter vector introduced to take into account non-zero levels in the signals involved.

An equivalent non-minimal state-space realization of the deterministic part of the ARMAX model (12) is as follows:

x_s(k+1) = A_s x_s(k) + B_s u_s(k) + c
y_s(k) = C_s x_s(k)   (13)

where

x_s(k) = [y_s(k)^T y_s(k−1)^T ... y_s(k−n_a+1)^T u_s(k−1)^T ... u_s(k−m_d)^T ... u_s(k−m_d−n_b+2)^T]^T   (14)

is a state vector which contains present and past values of the output, and past values of the input variables, dim x_s = n = n_y n_a + n_u (n_b + m_d − 2), A_s, B_s and C_s are matrices of the appropriate dimensions which are formed in terms of the ARMAX model polynomial coefficients, together with ones and zeros, and c ∈ R^n is an off-set vector.

Consider the possibility of decomposing the original system into N_s subsystems, assigning certain inputs and outputs to each subsystem. Define the input-output set for subsystem j as S_j^{yu} = {y_j, u_j}, and assume that the decomposition is such that every input or output is assigned to only one subsystem. Note that some or all subsystems may have multiple inputs and outputs. Given that the state vector (14) is given in terms of present and past outputs together with past inputs, it is possible to decompose and rearrange the state vector according to this decomposition and obtain the following state vector for the j-th subsystem:

x_j(k) = [y_j(k)^T y_j(k−1)^T ... y_j(k−n_a+1)^T u_j(k−1)^T ... u_j(k−m_d)^T ... u_j(k−m_d−n_b+2)^T]^T   (15)

The overall state, input and output vectors x, u, y, which are rearranged versions of x_s, u_s, y_s, may be written in terms of each subsystem's states, inputs and outputs as follows:

x(k) = [x_1(k)^T ... x_{N_s}(k)^T]^T
y(k) = [y_1(k)^T ... y_{N_s}(k)^T]^T
u(k) = [u_1m(k)^T ... u_{N_s}m(k)^T u_1d(k)^T ... u_{N_s}d(k)^T]^T = [u_m^T u_d^T]^T   (16)

Similarly, the system matrices A_s, B_s, C_s for the original sorting of states, inputs and outputs may be rearranged according to this decomposition to obtain the new system matrices A, B, C, where B = [B_m B_d]. Using the difference operator Δ = 1 − q^{-1}, the following incremental model is obtained using the re-arranged variables:

Δx(k+1) = A Δx(k) + B_m Δu_m(k) + B_d Δu_d(k)
Δy(k) = C Δx(k)   (17)

This model is the basis for the receding horizon problem described in Section 2.2.
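As an illustrative sketch (not the authors' implementation), the moving-window least-squares fit of the deterministic part of model (12) and the construction of the non-minimal state (14) from delayed inputs and outputs might be arranged as follows; the noise polynomial C* is ignored here, the off-set b is absorbed into a constant regressor, and the helper names are assumptions.

```python
# Illustrative sketch: moving-window least squares fit of the deterministic
# part of model (12) and the non-minimal state vector (14).
import numpy as np

def fit_arx_window(Y, U, na, nb, md):
    """Y: (Nd, ny) outputs, U: (Nd, nu) inputs over the current data window.
    Returns Theta such that y(k) ~ Theta @ phi(k), with an off-set column."""
    Nd = Y.shape[0]
    k0 = max(na, md + nb - 1)
    Phi, T = [], []
    for k in range(k0, Nd):
        past_y = [Y[k - i] for i in range(1, na + 1)]        # y(k-1)..y(k-na)
        past_u = [U[k - i] for i in range(md, md + nb)]      # u(k-md)..u(k-md-nb+1)
        Phi.append(np.concatenate(past_y + past_u + [np.ones(1)]))
        T.append(Y[k])
    Phi, T = np.asarray(Phi), np.asarray(T)
    Theta, *_ = np.linalg.lstsq(Phi, T, rcond=None)          # least squares estimate
    return Theta.T                                           # (ny, ny*na + nu*nb + 1)

def nonminimal_state(Y, U, k, na, nb, md):
    """State (14): present/past outputs and inputs back to u(k-md-nb+2)."""
    ys = [Y[k - i] for i in range(na)]
    us = [U[k - i] for i in range(1, md + nb - 1)]
    return np.concatenate(ys + us)
```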

2.4 DISOPE algorithm

In this section the main principles behind DISOPE, the algorithm used for solving RHOP, are briefly described. For more details, see (Becerra and Roberts, 1996). DISOPE stands for dynamic integrated system optimisation and parameter estimation. This algorithm was designed with two optimal control problems in mind. First, an original optimal control problem (OOP) which may be difficult to tackle directly for different reasons, such as nonlinearities, difficult objectives or large scale. Second, a simplified version of the problem, known as the modified simplified optimal control problem (MSOP), is instead solved iteratively such that the converged solution satisfies the necessary optimality conditions of the original problem. Define the original optimal control problem as follows:

OOP:   min_{u_c(k)} J* = φ*(x(N_f)) + Σ_{k=N_0}^{N_f−1} L*(x(k), u_c(k), k)   (18)

subject to:

x(k+1) = f*(x(k), u_c(k), k)
x(N_0) = x_0   (19)

where x ∈ R^n is the state vector, u_c ∈ R^{n_c} is the control vector, L* : R^n × R^{n_c} × R → R is a performance function, φ* : R^n → R is a terminal weighting function, and f* : R^n × R^{n_c} × R → R^n is a state transition mapping.

Define the modified simplified optimal control problem (MSOP) as follows:

MSOP:   min_{u_c(k)} J_m = φ(x(N_f), ξ) − Γ^T x(N_f) + Σ_{k=N_0}^{N_f−1} [ L(x(k), u_c(k), γ(k)) − λ(k)^T u_c(k) − β(k)^T x(k) ]   (20)

subject to:

x(k+1) = f(x(k), u_c(k), α(k))
x(N_0) = x_0   (21)

where L : R^n × R^{n_c} × R^{n_γ} → R is a simplified performance function, φ : R^n × R^{n_ξ} → R is a simplified terminal weighting function, f : R^n × R^{n_c} × R^{n_α} → R^n is a simplified state transition mapping, and α ∈ R^{n_α}, ξ ∈ R^{n_ξ}, λ ∈ R^{n_c}, β ∈ R^n, γ ∈ R^{n_γ} and Γ ∈ R^n are introduced to make the necessary optimality conditions of MSOP equivalent to those of OOP. Formulas for computing these parameters, together with the steps of the DISOPE algorithm, are given in (Becerra et al., 1998). It is possible, for computational convenience, to choose the simplified functions as follows:

L(x, u_c, γ) = (1/2) x^T Q x + (1/2) u_c^T R u_c   (22)
φ(x, ξ) = (1/2) x^T Φ x   (23)
f(x, u_c, α) = A x + B u_c + α   (24)

so that MSOP is linear quadratic and standard non-iterative methods may be employed for its solution.
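The modifier update formulas are given in (Becerra et al., 1998) and are not reproduced here; the sketch below only shows the shape of a DISOPE-style iteration (solve the linear quadratic MSOP, evaluate the original problem along the resulting trajectory, update the modifiers, repeat), with solve_lq_msop, evaluate_oop and update_modifiers as hypothetical helpers rather than the paper's routines.

```python
# Structural sketch of a DISOPE-style iteration (modifier formulas as in
# Becerra et al., 1998; the three helper functions are hypothetical).
import numpy as np

def disope_iterate(x0, modifiers, solve_lq_msop, evaluate_oop, update_modifiers,
                   tol=1e-6, max_iter=50):
    u_prev = None
    for _ in range(max_iter):
        x_traj, u_traj = solve_lq_msop(x0, modifiers)       # LQ MSOP (20)-(24)
        oop_info = evaluate_oop(x_traj, u_traj)             # real model / objective
        modifiers = update_modifiers(modifiers, x_traj,     # alpha, xi, lambda,
                                     u_traj, oop_info)      # beta, gamma, Gamma
        if u_prev is not None and np.max(np.abs(u_traj - u_prev)) < tol:
            break                                           # control iterates converged
        u_prev = u_traj
    return x_traj, u_traj
```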

2.5 Hierarchical structure

If matrices A, B, Q, R and Φ (see equations (22), (23) and (24)) are chosen to be block diagonal according to a decomposition into N_s subsystems, then it is possible to decompose MSOP into N_s independent sub-problems which may be solved in parallel.

The hierarchical scheme presented here for solving the RHOP is similar to the pseudo-model and prediction principle technique of (Singh et al., 1976).

This type of solution gives rise to a hierarchical structure comprising a coordinator in the upper level, and N_s local decision units at the lower level. The tasks associated with the coordinator are calculating α, ξ, λ, β, γ and Γ for the whole system, communicating with the local decision units, managing the iterations of the DISOPE algorithm and checking its convergence. The task associated with each local decision unit is to solve the corresponding sub-problem of MSOP.
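A minimal sketch of this coordinator/local-unit split is given below, assuming a hypothetical solve_local_msop routine for each sub-problem; parallelism is illustrated with a thread pool purely as an example of how the independent LQ sub-problems could be dispatched within one coordinator iteration.

```python
# Sketch of the coordinator / local-decision-unit split (hypothetical helper
# names; the coordinator's modifier calculations follow Becerra et al., 1998).
from concurrent.futures import ThreadPoolExecutor

def coordinate_iteration(subsystems, modifiers, solve_local_msop):
    """subsystems: per-unit data (A_j, B_j, Q_j, R_j, Phi_j, x0_j);
    modifiers: per-unit slices of alpha, xi, lambda, beta, gamma, Gamma."""
    with ThreadPoolExecutor() as pool:
        # Each local decision unit solves its own LQ sub-problem of MSOP.
        futures = [pool.submit(solve_local_msop, sub, mod)
                   for sub, mod in zip(subsystems, modifiers)]
        local_solutions = [f.result() for f in futures]
    # The coordinator would now update the modifiers and check convergence.
    return local_solutions
```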

2.6 Handling of constraints

The methods for handling the various constraints associated with RHOP are described in detail in (Becerra and Roberts, 1996; Becerra et al., 1998).

3. SIMULATION STUDY: CHEMICAL REACTION SYSTEM

This simulation study consists of a dynamic model of two continuous stirred tank reactors connected in series where an exothermic auto-catalytic reaction is taking place. Figure 2 shows a schematic diagram of the chemical reaction system. The units interact in both directions due to the recycle of a 50% fraction of the product stream into the first reactor. Regulatory controllers are used to control the temperature in both reactors, and the dynamics of these controllers are neglected. This steady-state optimisation problem has been previously solved by different methods (Garcia and Morari, 1981; Jang et al., 1987; Becerra and Roberts, 1995). The reaction is A + B → 2B. Notice that this system is intrinsically decomposed in two subsystems, each associated with one reactor. The model dynamic equations are as follows:

Reactor 1: material balances for C_a1 and C_b1, with the reactor inlet formed by the fresh feed (concentration C_a0) mixed with the recycled fraction of the product stream, residence time τ_1 and reaction rate r_1.   (25)

Reactor 2:

dC_a2/dt = C_a1/τ_2 − C_a2/τ_2 − r_2
dC_b2/dt = C_b1/τ_2 − C_b2/τ_2 + r_2   (26)

Where:
C_xi - is the concentration of species x in tank i
τ_1 = 30 min - is the residence time of reactor 1
τ_2 = 25 min - is the residence time of reactor 2
r_1 = k_1+ C_a1 C_b1 − k_1− C_b1²
r_2 = k_2+ C_a2 C_b2 − k_2− C_b2²
k_i± = A± exp(−E±/(R T_i)) - are the reaction rates
E+/R = 17,786 K, E−/R = 23,523 K
A+ = 9.73 × 10²² m³/kmol s, A− = 3.1 × 10²² m³/kmol s
C_a0 = 0.1 kmol/m³ - is the feed concentration
T_1 - is the temperature in reactor 1
T_2 - is the temperature in reactor 2.
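For illustration only, the reaction system can be simulated as sketched below. The reactor 2 balances follow equation (26); the reactor 1 balances and the initial A concentrations are assumptions (ideal mixing of the fresh feed with a recycle of equal flow rate) rather than the authors' exact equation (25), and should be read as placeholders.

```python
# Illustrative simulation sketch (not from the paper). Reactor 2 follows
# equation (26); reactor 1 uses an assumed ideal-mixing balance in place of
# equation (25), and the Ca initial values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

TAU1, TAU2 = 30.0 * 60, 25.0 * 60      # residence times [s]
EP_R, EM_R = 17786.0, 23523.0          # E+/R, E-/R [K]
AP, AM = 9.73e22, 3.1e22               # pre-exponential factors [m3/kmol s]
CA0 = 0.1                              # feed concentration [kmol/m3]

def rate(ca, cb, T):
    kp, km = AP * np.exp(-EP_R / T), AM * np.exp(-EM_R / T)
    return kp * ca * cb - km * cb**2

def cstr_train(t, c, T1, T2):
    ca1, cb1, ca2, cb2 = c
    r1, r2 = rate(ca1, cb1, T1), rate(ca2, cb2, T2)
    dca1 = (0.5 * (CA0 + ca2) - ca1) / TAU1 - r1   # assumed reactor 1 balance
    dcb1 = (0.5 * cb2 - cb1) / TAU1 + r1           # assumed reactor 1 balance
    dca2 = (ca1 - ca2) / TAU2 - r2                 # equation (26)
    dcb2 = (cb1 - cb2) / TAU2 + r2                 # equation (26)
    return [dca1, dcb1, dca2, dcb2]

# two hours at the initial setpoints T1 = 307 K, T2 = 302 K
sol = solve_ivp(cstr_train, (0.0, 7200.0),
                [0.04, 0.05165, 0.04, 0.05864], args=(307.0, 302.0))
print(sol.y[:, -1])
```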

The manipulated variables are the set-points of the temperature controllers in both reactors, which are bounded between upper and lower levels: 300 K ≤ T_1 ≤ 312 K, 300 K ≤ T_2 ≤ 312 K. In this example, the concentrations associated with species B are measured in both reactors and this information is fed into the identification and predictive control procedures, so that y = [C_b1 C_b2]^T and u = [T_1 T_2]^T. The input and output associated with sub-system 1 are u_1 = T_1 and y_1 = C_b1, while the variables associated with sub-system 2 are y_2 = C_b2 and u_2 = T_2. A pseudo random binary sequence of magnitude ±0.5 K was added to the manipulated set-points in order to enhance input excitation when model adaptation was enabled. The system was started from the steady state given by the setpoints T_1(0) = 307 K and T_2(0) = 302 K, which yield steady state values C_b1(0) = 0.05165 kmol/m³ and C_b2(0) = 0.05864 kmol/m³. For computational purposes, input and output variables were numerically scaled so that their scaled initial values were unity. The identifier tuning parameters were T_s = 60 s, N_d = 120, N_u = 5, n_a = 1, n_b = 3, n_c = 1, m_d = 1. The controller tuning parameters were: N = 25, M = 25, Q = I_2, Φ = 10 I_2, R = diag(20000, 20000), u_r = 0.17 K.

A linear steady state objective function was chosen which reflects the desire of maximising the amount of product B, F(y, u_m) = −(C_b1 + C_b2). Model adaptation was enabled until the process settled at a new steady state point. At 10 hours the adaptation was disabled. Figure 3 shows the trajectories of the manipulated and output variables in each subsystem, noting the presence of the added random signal and the final average values of the temperature setpoints T_1 = 312 K and T_2 = 309.4 K. The final values of the concentration of B in each reactor are C_b1 = 0.0644 kmol/m³ and C_b2 = 0.07248 kmol/m³. Notice that there is an initial identification period equal to the data length of the identifier (2 hours), during which the predictive controller is not in operation.

As an illustration of the decomposition procedure, consider the original state vector:

x(k) = [y(k)^T u(k−1)^T u(k−2)^T]^T = [C_b1(k) C_b2(k) T_1(k−1) T_2(k−1) T_1(k−2) T_2(k−2)]^T   (27)

Then, the corresponding state vectors used for each subsystem when solving MSOP were:

x_1(k) = [y_1(k)^T u_1(k−1)^T u_1(k−2)^T]^T = [C_b1(k) T_1(k−1) T_1(k−2)]^T   (28)

and

x_2(k) = [y_2(k)^T u_2(k−1)^T u_2(k−2)^T]^T = [C_b2(k) T_2(k−1) T_2(k−2)]^T   (29)
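As a small illustrative fragment (not from the paper), the overall and subsystem state vectors (27)-(29) can be assembled directly from the measured concentrations and past temperature setpoints; the sequence names below are placeholders.

```python
# Sketch: assembling the overall state (27) and the subsystem states (28)-(29)
# from stored measurement/setpoint sequences indexed by sample number k.
import numpy as np

def overall_state(cb1, cb2, T1, T2, k):
    return np.array([cb1[k], cb2[k], T1[k-1], T2[k-1], T1[k-2], T2[k-2]])  # (27)

def subsystem_states(cb1, cb2, T1, T2, k):
    x1 = np.array([cb1[k], T1[k-1], T1[k-2]])   # reactor 1 unit, eq. (28)
    x2 = np.array([cb2[k], T2[k-1], T2[k-2]])   # reactor 2 unit, eq. (29)
    return x1, x2
```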

4. CONCLUSIONS

In this paper a method has been presented which integrates predictive control with on-line optimisation with economic objectives. A receding horizon optimal control problem has been formulated using linear state space models. This optimal control problem is very similar to the one presented in many predictive control formulations, but the main difference is that it includes in its formulation a general steady state objective depending on the magnitudes of manipulated and measured output variables. This steady state objective may include the standard quadratic regulatory objective, together with economic objectives which are often linear. The method is based on adaptive linear state space models, which are obtained by using on-line identification techniques. The solution of the receding horizon problem was achieved by means of decomposition and iteration on simplified sub-problems. The discrete time DISOPE algorithm was used, which achieves the solution of the original optimal control problem by iterating on a number of simplified sub-problems corresponding to the decomposition, which may be solved in parallel. The decomposition may be based on functional criteria, where the process under consideration is divided in different units. The method was tested with simulations of a system of two chemical reactors connected in series.


Fig. 1. Hierarchical control scheme for integrated predictive control and economic optimisation

Fig. 2. Schematic process diagram

Fig. 3. Simulation results

5. REFERENCES

Balchen, J.G., D. Ljungquist and S. Strand (1992). State space predictive control. Chemical Engineering Science 47, 787-807.
Becerra, V.M. and P.D. Roberts (1995). Nonlinear predictive control using dynamic integrated system optimisation and parameter estimation. In: Proc. IFAC Symposium on Nonlinear Control Systems Design. pp. 795-800. Tahoe City, USA.
Becerra, V.M. and P.D. Roberts (1996). Dynamic integrated system optimisation and parameter estimation for discrete time optimal control of nonlinear systems. International Journal of Control 63, 257-281.
Becerra, V.M., P.D. Roberts and G.W. Griffiths (1997). Process optimisation using predictive control: application to a simulated reaction system. In: Proc. IFAC Symposium on Advanced Control of Chemical Processes. pp. 629-634. Banff, Canada.
Becerra, V.M., P.D. Roberts and G.W. Griffiths (1998). Novel developments in process optimisation using predictive control. Journal of Process Control 8, 117-138.
Becerra, V.M., P.D. Roberts, J. Lin and G.W. Griffiths (1996). New developments in process optimisation using predictive control. In: Proc. 13th IFAC World Congress. Vol. M. pp. 115-120. San Francisco, USA.
Eaton, J.W. and J.B. Rawlings (1992). Feedback control of chemical processes using on-line optimisation techniques. Chemical Engineering Science 14, 469-479.
Garcia, C.E. and M. Morari (1981). Optimal operation of integrated processing systems. AIChE Journal 27, 960-968.
Jang, S.S., B. Joseph and H. Mukai (1987). On-line optimisation of constrained multivariable chemical processes. AIChE Journal 33, 26-35.
Perkins, J. (1994). Trends in process systems engineering. Modeling, Identification and Control 15, 171-177.
Qin, J. and T.A. Badgwell (1997). An overview of industrial model-based predictive control. In: Proc. Chemical Process Control V. Tahoe City, USA.
Singh, M.G., M.F. Hassan and J.L. Calvet (1976). Hierarchical optimisation of nonlinear systems with application to a synchronous machine. International Journal of Systems Science 7, 1041-1051.