Fast distributed MPC based on active set method

Computers and Chemical Engineering 71 (2014) 158–170

Xing Cai (a), Michael James Tippett (b), Lei Xie (a,*), Jie Bao (b,**)

(a) Institute of Cyber-Systems and Control, National Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, 310027 Zhejiang, China
(b) School of Chemical Engineering, The University of New South Wales (UNSW), Sydney, NSW 2052, Australia

Article info

Article history: Received 29 May 2014; Received in revised form 30 July 2014; Accepted 1 August 2014; Available online 11 August 2014.
Keywords: Distributed model predictive control; Parallel computing; Fast MPC

Abstract

Modern chemical plants are characterized by their large scale, strong interactions and the presence of constraints. With its ability to systematically handle these issues, distributed model predictive control (DMPC) is a promising approach for the control of such systems. However, how to efficiently solve the resulting distributed optimization problem is still an open question. This paper develops a novel fast DMPC approach based on a distributed active set method and offline inversion of the Hessian matrix to efficiently solve a constrained distributed quadratic program. A dual-mode optimization strategy based on the value of the unconstrained optimal solution is developed to accelerate the computation of the control action. The proposed method allows the optimization to be terminated before convergence, to cope with fast sampling periods. Furthermore, a warm-start strategy based on the solution at the previous sampling instant is integrated with the approach to further improve convergence speed. The approach is highly parallelized, as constraints can be checked in parallel. The approach is demonstrated using an academic example as well as the control of a chemical process network. © 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Model predictive control (MPC) has become the most widely used advanced process control method since it was proposed in the 1970s (Richalet et al., 1978; Qin and Badgwell, 2003). Various forms of MPC have been proposed, such as dynamic matrix control (Cutler et al., 1982) for step response models and generalized predictive control (Clarke et al., 1987) for CARIMA models. The key advantage of MPC is the ability of the receding horizon framework to deal with process uncertainty, handle constraints and achieve high levels of performance. The key to this approach is the ability to solve the resulting online optimization problem. For linear systems with linear, convex constraints, the optimization problem constructed with a quadratic penalty is a quadratic programming (QP) problem. MPC is a mature technology both in theory (Mayne et al., 2000) and application (Qin and Badgwell, 2003). However, the increasing scale of, and operational and regulatory constraints on, modern chemical processes raise challenging problems in the application of model predictive control, and motivate the development of DMPC as well as fast (D)MPC strategies.

∗ Corresponding author. Tel.: +86 13071847893; fax: +86 571 87951200. ∗∗ Corresponding author. E-mail addresses: [email protected] (L. Xie), [email protected] (J. Bao). http://dx.doi.org/10.1016/j.compchemeng.2014.08.001 0098-1354/© 2014 Elsevier Ltd. All rights reserved.

Recently, the theoretical development (Venkat, 2006) as well as applications (Zafra-Cabeza et al., 2011; Hermans et al., 2012) of DMPC have received great attention. The main difference between decentralized MPC (Magni and Scattolini, 2006) and distributed MPC (Camponogara et al., 2002) is that in the latter the controllers communicate with one another. As a result, DMPC can cope with system interactions and constraints more effectively; to achieve better performance, more communication is needed. As presented by Necoara et al. (2011), a distributed algorithm for solving a coupled constrained problem involves a trade-off among convergence speed, communication load, and distributed computation architecture. Some DMPC algorithms have been proposed to reduce the communication or iteration number while limiting performance degradation. Farina and Scattolini (2012) treated the impact of the neighbors of each subsystem as a bounded disturbance to cut down communication; however, the algorithm is effective only under the assumption of decentralized stabilizability. Stewart et al. (2010) proposed a method whereby the subsystems are grouped in a hierarchy; in this way, the communication frequency between subsystems is reduced without adding new levels of coordinating controllers. Maestre et al. (2011) proposed a novel DMPC algorithm based on game theory, in which the controllers need only two communication steps to obtain a cooperative solution. Another way to cut down the communication burden is to adopt 'neighbor-to-neighbor' communication. Zheng et al. (2011) proposed a DMPC strategy where each subsystem exchanges a reduced set of information with its neighbors; the optimization index of each local MPC considers its own performance as well as the performance of its output neighbors. For more information about DMPC, the reader is directed to Scattolini (2009), Rawlings and Stewart (2008) and Christofides et al. (2013).

Some recent work on the stability of distributed MPC for nonlinear process networks includes Liu et al. (2009, 2010, 2013) and Christofides et al. (2011). In Liu et al. (2009), a framework for distributed MPC based on Lyapunov techniques was proposed; however, the communication at each sampling time is only sequential between two distributed controllers. In Liu et al. (2010), the method was extended to multiple controllers, which can communicate with one another sequentially or iteratively. It was shown that both communication structures yield computational advantages over centralized MPC, with iterative communication offering performance improvements at the cost of slightly increased computational effort (compared to sequential communication) for larger iteration numbers. To reduce the communication between controllers, a cooperative DMPC design in which the evaluations of the distributed controllers are triggered by the difference between the subsystem state measurements and their estimates was proposed in Liu et al. (2013). For a comprehensive study of networked and distributed predictive control based on Lyapunov techniques, the reader is directed to Christofides et al. (2011). Most of the existing DMPC results, as discussed above, focus on the design of the control system. What receives relatively less attention is how to solve the resulting distributed optimization problems efficiently, which is the subject of this paper. This is important, as in practical applications issues such as limited sampling time and bandwidth constraints may limit the application of DMPC.
One approach to overcome these issues is to develop fast algorithms which accelerate the online optimization (fast MPC). However, most fast algorithms have been developed for centralized multi-variable MPC applications. A well known fast MPC technique is explicit MPC, proposed by Bemporad et al. (2002) and Johansen et al. (2002), where the optimal solution is obtained online from a lookup table generated offline by multi-parametric quadratic programming. Its limitation is that it can only be applied to relatively low dimensional systems, to avoid excessively large lookup tables. To cope with this limitation, Ferreau et al. (2008) proposed an online active set method based on the assumption that the active set does not change greatly from one sampling time to the next. Wang and Boyd (2010) developed a custom optimization method that exploits the particular structure of the MPC problem. Partial enumeration (Pannocchia et al., 2007) has also been applied to large scale MPC, implemented using a table storage method combined with online optimization. Biegler and co-workers also developed efficient online optimization algorithms of MPC for large scale linear (Bartlett et al., 2002) and nonlinear (Biegler, 2000) systems. There are also some parallel QP algorithms which may be applied to DMPC. Giselsson et al. (2013) proposed a distributed optimization algorithm based on accelerated gradient methods using dual decomposition. Necoara et al. (2008) proposed a parallel algorithm based on a proximal center decomposition method, and Cairano et al. (2013) proposed a parallel QP algorithm which is projection-free. These algorithms parallelize existing methods without considering the specific form of the online QP problem or the similarity of the online optimization problems at two successive sampling instants, characteristics of the QP problem appearing in DMPC which may be exploited to improve efficiency.
Most of the above fast MPC approaches are applicable to centralized MPC, rather than DMPC. As mentioned above, there are existing algorithms to solve distributed QP problems; however, these do not exploit the specific structure of the distributed QP problem appearing in DMPC applications. Additionally, DMPC faces the added difficulties of potential geographic separation and strong interaction effects. These, along with the slow convergence speed of existing DMPC strategies, motivate the development in the current paper of a fast DMPC algorithm which exploits the specific structure of the QP problem appearing in DMPC for process networks.

A custom fast DMPC algorithm for linear systems with box constraints is proposed in this article. The active set method, which aims to find the correct active constraint set satisfying the Karush-Kuhn-Tucker (KKT) conditions, is used to deal with the constraints in a distributed manner; the distributed active set method is highly parallel. Moreover, a novel method of obtaining the initial iteration point for the active set method from the unconstrained solution is developed. The proposed algorithm makes use of the fact that the Hessian matrix remains constant during the entire control period to reduce the online computational load. To accelerate the algorithm, a dual-mode strategy based on the value of the unconstrained solution is designed as a tunable trade-off between convergence speed and performance. The above approach is integrated with a warm-start strategy (Yildirim and Wright, 2002) based on the optimal solution of the previous time instant to reduce the number of iterations further.

The remainder of this paper is organized as follows. In Section 2, the specific structure of the online QP problem is reviewed. In Section 3, the proposed algorithm is introduced, including the parallel computation of the unconstrained solution and active set as well as the dual-mode acceleration strategy; the warm-start strategy is also introduced. The effectiveness of the proposed algorithms is demonstrated in Section 4 through an academic example as well as a case study of the control of a chemical process. Finally, concluding remarks are presented in Section 5.

2. Conventional structure of the QP problem for centralized linear MPC

The optimization problem of centralized MPC encountered at each time instant may be expressed as follows:

\min_x \Phi(x) = \frac{1}{2} x^T H x + f^T x, \quad \text{s.t.} \quad L x \le d,    (1)

where x = [x_1^T, \ldots, x_M^T]^T \in U \subset R^{Nm} is the optimal control trajectory, i.e. the N(\sum_{i=1}^{M} m_i) = Nm dimensional vector of decision variables. There are m_i inputs for the ith subsystem, with M subsystems and altogether m inputs; the prediction horizon for the optimization is N. In contrast to the appendix, x is used as the decision variable instead of u, in keeping with the convention in the optimization literature. H = H^T \in R^{Nm \times Nm} is the Hessian matrix of the quadratic term, which is a function of the system model and weighting matrices, detailed in Appendix A.1 (Eq. (44)). It should be noted that H remains constant during the entire online moving horizon optimization procedure. The vector f \in R^{Nm} contains the coefficients of the linear term and depends on the current system state. The quadratic objective function is clearly convex, since H \ge 0 and the input constraints are box constraints, as described in the following.

The constraints here are assumed to be convex box constraints, which are quite common in control applications:

\begin{bmatrix} I_{Nm} \\ -I_{Nm} \end{bmatrix} x \le \begin{bmatrix} \bar{d} \\ -\underline{d} \end{bmatrix}.    (2)
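For concreteness, the following sketch (ours; the paper's exact expressions for H and f are in Appendix A.1, which is not reproduced here) shows one standard way H and f arise from a linear model x(k+1) = Ax(k) + Bu(k) via the condensed prediction form, assuming y = x and quadratic weights Q and R:

```python
import numpy as np

def condensed_qp(A, B, Q, R, N, x0):
    """Build H and f of eq. (1) for min sum y'Qy + u'Ru over a horizon N,
    using the standard condensed prediction X = Phi x0 + Gamma U.
    Illustrative only; not the paper's Appendix A.1 formulas."""
    n, m = B.shape
    # Stacked prediction of x(k+1), ..., x(k+N).
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)                # block-diagonal output weights
    Rbar = np.kron(np.eye(N), R)                # block-diagonal input weights
    H = Gamma.T @ Qbar @ Gamma + Rbar           # constant over the control run
    f = Gamma.T @ Qbar @ Phi @ x0               # depends on the current state
    return H, f

# One-state example: A = 0.5, B = 1, Q = R = 1, N = 2, x0 = 1.
H, f = condensed_qp(np.array([[0.5]]), np.array([[1.0]]),
                    np.array([[1.0]]), np.array([[1.0]]), N=2, x0=np.array([1.0]))
```

Note that only f changes with the state, which is what makes the offline inversion of H in Section 3 worthwhile.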


Remark 1. L = \begin{bmatrix} I_{Nm} \\ -I_{Nm} \end{bmatrix} \in R^{2Nm \times Nm} and d = \begin{bmatrix} \bar{d} \\ -\underline{d} \end{bmatrix} \in R^{2Nm} are the matrix and vector representing the box constraint, where \bar{d} and \underline{d} are the upper and lower bounds, respectively (the lower bound x \ge \underline{d} is written as -x \le -\underline{d}). As L has this special form, it is possible to design a specific fast algorithm to quickly find the optimal solution.

The index set of the active constraints at a specific solution is defined as:

I_{ac}(x) = \{ i \in 1, \ldots, 2Nm \mid L_i x = d_i \},    (3)

where L_i is the ith row of the matrix L and d_i is the ith element of the vector d. Using the above definition, the overall matrix of active constraints is L_{I_{ac}} and the corresponding bound vector is d_{I_{ac}}; they satisfy L_{I_{ac}} x = d_{I_{ac}}. The number of active constraints is N_{I_{ac}}. The following equation is satisfied:

L_{I_{ac}} L_{I_{ac}}^T = I_{N_{I_{ac}}}.    (4)
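The box-constraint structure can be illustrated as follows (a small sketch with our own numbers), building L and d of (2), the active-set index of (3), and checking the identity (4):

```python
import numpy as np

# Box constraints d_low <= x <= d_up written as L x <= d, eq. (2):
# L = [I; -I], d = [d_up; -d_low].
Nm = 4
d_up = np.array([1.5, 1.5, 2.0, 2.0])
d_low = -d_up
L = np.vstack([np.eye(Nm), -np.eye(Nm)])
d = np.concatenate([d_up, -d_low])

# Active set at a point x, eq. (3): indices i with L_i x = d_i.
x = np.array([1.5, 0.0, -2.0, 0.3])
I_ac = np.where(np.isclose(L @ x, d))[0]       # x_1 at its upper bound, x_3 at its lower

# Each row of L_Iac has a single nonzero entry (+1 or -1), so eq. (4) holds:
L_ac = L[I_ac]
assert np.allclose(L_ac @ L_ac.T, np.eye(len(I_ac)))
```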

The above result is obvious, as each row of the matrix L_{I_{ac}} has only one non-zero element, whose value is +1 or -1.

3. Fast parallel implementation of the active set method

In this section, the active set method (Wright and Nocedal, 1999) is selected to design a fast parallel algorithm for DMPC systems, since it is easy to obtain a good initial guess and to develop a warm-start strategy for iterating to the optimal solution. The inverse of the Hessian matrix, which is related to the system model, is computed offline to reduce the online computational load. The commonly encountered box constraints represented in (2) are considered. The proposed algorithm is suitable for large scale systems with a large number of decision variables, and can even be terminated before convergence. A dual-mode strategy as well as a warm-start strategy is designed to further accelerate the proposed algorithm.

Before starting the main part of this section, the KKT conditions for QPs with inequality and with equality constraints are introduced (Wright and Nocedal, 1999). The KKT conditions for optimization problem (1) are expressed by the following equations:

H x^* + L^T \lambda^* + f = 0,    (5a)
L x^* - d \le 0,    (5b)
\lambda_i^* (L_i x^* - d_i) = 0, \quad 1 \le i \le 2Nm,    (5c)
\lambda_i^* \ge 0, \quad 1 \le i \le 2Nm,    (5d)

where \lambda^* \in R^{2Nm} are the Lagrange parameters and x^* \in R^{Nm} is the optimal solution; (5c) are the complementarity conditions and (5d) represents dual feasibility. Moreover, the KKT conditions for an equality constrained QP are expressed as follows:

H x^* + L_{I_{ac}}^T \lambda^* = -f,    (6a)
L_{I_{ac}} x^* = d_{I_{ac}},    (6b)

where L_{I_{ac}} and d_{I_{ac}} represent the active constraints of the equality constrained QP.

3.1. Distributed unconstrained optimal solution

The Hessian matrix H depends only on the system model and the weighting matrices in the MPC design; as such, it remains constant during the entire period of MPC implementation. Therefore, each subsystem can store the inverse of the Hessian matrix, which is calculated offline. Under the above conditions, the unconstrained

Fig. 1. Selection of initial feasible solution by projection onto constrained set.

solution of optimization problem (1) can be expressed for subsystem i as:

x_i^{uc} = -P_i H^{-1} f, \quad i = 1, 2, \ldots, M,    (7)

where P_i \in R^{Nm_i \times Nm} is the picking matrix that selects the solution of each subsystem. As such, each subsystem only has to store the matrix P_i H^{-1}, so the storage requirement may be further reduced. Clearly, the unconstrained solution may violate some constraints. In order to obtain a feasible initial point for the iteration of the active set method, the unconstrained solution must be shifted; a good initial point for the active set method can be constructed based on the unconstrained solution. From the point of view of the box constraints (2), for each subsystem there exist the corresponding constraints:

\underline{d}_i \le x_i \le \bar{d}_i, \quad i = 1, 2, \ldots, M.    (8)
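A numerical sketch of the per-subsystem unconstrained solve (7); the sizes, partitioning and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
Nm = 6                                   # total decision dimension
A = rng.standard_normal((Nm, Nm))
H = A @ A.T + Nm * np.eye(Nm)            # H = H^T > 0, fixed over the horizon
H_inv = np.linalg.inv(H)                 # computed once, offline

# Picking matrices P_i select each subsystem's block of the stacked vector;
# each subsystem would store only its own P_i @ H_inv.
blocks = [slice(0, 3), slice(3, 6)]      # two subsystems, 3 variables each
P = [np.eye(Nm)[b] for b in blocks]

f = rng.standard_normal(Nm)              # changes with the current state
x_uc = [-Pi @ H_inv @ f for Pi in P]     # eq. (7), evaluated per subsystem

# Stacking the per-subsystem pieces recovers the centralized solution -H^{-1} f.
assert np.allclose(np.concatenate(x_uc), -H_inv @ f)
```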

Each subsystem can check the satisfaction of the above constraints in parallel, based on the unconstrained solution x_i^{uc} computed from (7). Fig. 1 shows how the initial feasible point is found in a two-dimensional case. If any element of x_i^{uc} is out of bounds, it is forced to move to the bound of the box constraint, i.e. it is 'clipped' to the upper or lower bound; otherwise, it remains unchanged. Alternatively, this can be seen as moving the unconstrained solution parallel to the satisfied constraint bounds onto the boundary of the unsatisfied constraint(s). In this way, an initial feasible point for the iteration is obtained as x^{(0)}. By collecting the indices of the violated constraint(s), the index set of the active constraints is obtained, denoted I_{ac}(x^{(0)}); it is a good initial guess for the optimal active set. In the following sections we will show that this initial guess can cut down the iteration number significantly. Since there is no coupling in the constraints of each subsystem, it is easy to check the satisfaction of the constraints in parallel.

3.2. Dual-mode constrained MPC strategy

In order to accelerate the proposed algorithm, in this subsection a dual-mode constrained DMPC strategy is proposed to deal with two cases distinguished by the unconstrained solution (7). Assume the unconstrained solution is outside the feasible region (see Appendix A.2 for more information). The first case is that the unconstrained solution is very far away (this will be made precise later) from the box constraint set. As illustrated in Fig. 2, the difference between the objective function values of the optimal solution and the initial point is then small, as the angle \theta will be quite small; here x^{(0)} is obtained by checking every constraint and moving to the constraint bound if it is violated, while x^* is the optimal solution. It is then reasonable to treat the initial point as an acceptable sub-optimal solution.
The second case is when the unconstrained solution is near the constraint set. As Fig. 3 shows, there exists the possibility that the difference between the objective function values of


Fig. 2. The unconstrained solution far away from the origin.

the optimal solution and the initial start point calculated using the approach in the previous subsection may be significant. In this case, iterations of the active set method to find a better solution are needed, which will be discussed in detail in the following part of this article. Here, we define a switching parameter \gamma as the ratio of the size of the switching boundary to that of the original feasible region; this yields a region with the same shape as the feasible region, shown by the red dashed lines in Figs. 2 and 3. If the unconstrained solution is outside this region, the online optimization problem applies the feasible solution obtained in the previous subsection; as it moves inside the region, the distributed active set method introduced in the following is used.
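The projection of Section 3.1 and the dual-mode switch of this subsection can be sketched as follows. This is our simplified reading of the \gamma ratio test (only the violated bounds are checked, and the bounds are assumed non-zero); all names are ours:

```python
import numpy as np

def initial_point_and_mode(x_uc, d_low, d_up, gamma):
    """Clip the unconstrained solution to the box (Section 3.1) and decide,
    via the switching parameter gamma, whether to keep the clipped point or
    to refine it with active-set iterations (Section 3.2). Sketch only."""
    x0 = np.clip(x_uc, d_low, d_up)                 # projection onto the box
    violated_up = np.where(x_uc > d_up)[0]          # active at the upper bound
    violated_low = np.where(x_uc < d_low)[0]        # active at the lower bound
    # Ratio of the unconstrained solution to the bound it violates; a large
    # ratio means the solution lies far outside the feasible region.
    ratios = np.concatenate([x_uc[violated_up] / d_up[violated_up],
                             x_uc[violated_low] / d_low[violated_low]])
    far_away = ratios.size > 0 and ratios.min() >= gamma
    mode = "clipped" if far_away else "active-set"
    return x0, mode

d_up = np.array([1.0, 1.0]); d_low = -d_up
x0, mode = initial_point_and_mode(np.array([8.0, 0.2]), d_low, d_up, gamma=5.0)
# x0 = [1.0, 0.2]; the solution is far outside the box, so the clipped point is kept
```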

3.3. Parallel solution to determine search direction

In the preceding, a feasible solution x^{(v)} and its corresponding active set I_{ac}(x^{(v)}) have been obtained. The iterative procedure to improve the solution is now considered. Denote v as the iteration number and I_{ac}^{(v)} as the active set (v = 0 is the case of the initial start point and initial active set presented in Section 3.1). The current active constraints are taken to hold with equality, whereas the remaining constraints may be disregarded. Set:

p = x^{(v)} - \tilde{x}^{(v+1)},    (9a)

f^{(v)} = -H x^{(v)} - f,    (9b)

where p is the search direction, f^{(v)} is the new linear term and \tilde{x}^{(v+1)} is the point reached after moving along the search direction. The new objective function is easily obtained as follows:

\Phi(\tilde{x}^{(v+1)}) = \Phi(x^{(v)} - p) = \frac{1}{2} p^T H p + (f^{(v)})^T p + \Phi(x^{(v)}).    (10)

The QP problem with equality constraints to be solved at iteration v + 1 becomes:

\min_p \Phi(p) = \frac{1}{2} p^T H p + (f^{(v)})^T p, \quad \text{s.t.} \quad L_{ac}^{(v)} \tilde{x}^{(v+1)} = d_{ac}^{(v)}, \ \text{i.e.} \ L_{ac}^{(v)} p = 0.    (11)

Fig. 3. The unconstrained solution near the origin.


By applying the KKT conditions (6), the following linear system is obtained:

\begin{bmatrix} H & (L_{ac}^{(v)})^T \\ L_{ac}^{(v)} & 0 \end{bmatrix} \begin{bmatrix} p^{(v)} \\ \lambda^{(v)} \end{bmatrix} = \begin{bmatrix} -f^{(v)} \\ 0 \end{bmatrix}.    (12)
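For a small dense instance, the step defined by (12) can be checked directly; a minimal sketch (ours, not the paper's distributed implementation), with illustrative H and active rows:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)              # positive definite Hessian
f_v = rng.standard_normal(n)             # linear term f^(v) of (11)
L_ac = np.eye(n)[[0, 3]]                 # two active box rows (unit rows of L)

# Assemble and solve the KKT system (12) for [p; lambda].
m = L_ac.shape[0]
KKT = np.block([[H, L_ac.T], [L_ac, np.zeros((m, m))]])
rhs = np.concatenate([-f_v, np.zeros(m)])
sol = np.linalg.solve(KKT, rhs)
p, lam = sol[:n], sol[n:]

assert np.allclose(L_ac @ p, 0)                    # p stays on the active face
assert np.allclose(H @ p + L_ac.T @ lam, -f_v)     # stationarity of (11)
```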

In the above equation, the extended KKT matrix cannot be calculated offline, as the active constraints may change at each iteration. The distributed computation of the optimal solution of (11) should therefore be based on the inverse of the Hessian matrix H. Here two approaches (Wright and Nocedal, 1999) are introduced for two extreme cases: (1) the number of active constraints is quite small, e.g. half of the prediction horizon, and (2) the number of active constraints is large. These are detailed in the following sections.

3.3.1. Range-space approach

In this approach, the Lagrange parameters are determined from the following Schur complement system, obtained by block Gaussian elimination, i.e. multiplying the first equation of (12) with L_{ac}^{(v)} H^{-1} and then subtracting the second equation:

L_{ac}^{(v)} H^{-1} (L_{ac}^{(v)})^T \lambda^{(v)} = -L_{ac}^{(v)} H^{-1} f^{(v)}.    (13)

The approach is effective when the number of active constraints is small, in which case the Lagrange parameters can be easily computed. The solution of the optimization problem (11) can then be computed in parallel, as in equation (7):

p_i^{(v)} = P_i H^{-1} \left( -f^{(v)} - (L_{ac}^{(v)})^T \lambda^{(v)} \right).    (14)

3.3.2. Null-space approach

This approach is most effective when the number of active constraints is large. Assume that L_{ac}^{(v)} has full row rank m^{(v)}. Define Z \in R^{Nm \times (Nm - m^{(v)})} as the matrix whose columns span \mathrm{Ker}(L_{ac}^{(v)}), i.e. L_{ac}^{(v)} Z = 0; in the box constraint case it is easy to find Z. The vector p^{(v)} can then be factorized as:

p^{(v)} = (L_{ac}^{(v)})^T w_Y + Z w_Z,    (15)

where w_Y \in R^{m^{(v)}} and w_Z \in R^{Nm - m^{(v)}}. Substituting (15) into the second equation of (12) and using (4), the following equation is obtained:

L_{ac}^{(v)} p^{(v)} = L_{ac}^{(v)} (L_{ac}^{(v)})^T w_Y + L_{ac}^{(v)} Z w_Z = w_Y = 0.    (16)

Substituting (15) into the first equation of (12):

H (L_{ac}^{(v)})^T w_Y + H Z w_Z + (L_{ac}^{(v)})^T \lambda^{(v)} = -f^{(v)},    (17)

and pre-multiplying by Z^T, using the result L_{ac}^{(v)} Z = 0 (so that Z^T (L_{ac}^{(v)})^T = 0):

Z^T H Z w_Z = -Z^T f^{(v)} - Z^T H (L_{ac}^{(v)})^T w_Y.    (18)

In this case, as the number of active constraints is large, most of the control action has already been determined, so the dimension of Z^T H Z is small. As such, the optimal solution can also be obtained easily.

Remark 2. In the cases of a small and a large active set, the range-space and null-space approaches, respectively, provide efficient solutions to the optimization problem (11). However, for a medium-sized set of equality constraints, it is unavoidable to compute the inverse of the KKT matrix or the inverse of Z^T H Z; this is the worst scenario for the proposed approach. In most cases the dimension of the active set is small compared with the dimension of the decision variable, so the range-space approach is expected to be used more frequently.

3.4. Parallel solution to determine a new point and active set

After the solution of (11) is obtained, we need to compute the new iteration point x^{(v+1)} and active set I_{ac}^{(v+1)}. The following two cases are considered.

Case 1: p^{(v)} = 0. Compute the Lagrange parameters according to the following equations:

-H x^{(v)} - f = \sum_{i \in I_{ac}^{(v)}} \lambda_i^{(v)} L_i^T,    (19)

i.e.,

(L_{ac}^{(v)})^T \lambda_{ac}^{(v)} = -H x^{(v)} - f.    (20)

Pre-multiplying (20) by L_{ac}^{(v)} and using the result of (4):

\lambda_{ac}^{(v)} = -L_{ac}^{(v)} \left( H x^{(v)} + f \right).    (21)

Based on the above equation, each subsystem computes its corresponding Lagrange parameters in parallel:

\lambda_{i,ac}^{(v)} = -L_{i,ac}^{(v)} \left( H x^{(v)} + f \right), \quad i = 1, 2, \ldots, M,    (22)

where L_{i,ac}^{(v)} contains the active constraints of the ith subsystem. After this step, each subsystem compares its corresponding Lagrange parameters to obtain the minimum:

\lambda_{i,ac}^{\min(v)} = \min_{j \in I_{i,ac}^{(v)}} \lambda_{j,ac}^{(v)}, \quad i = 1, 2, \ldots, M.    (23)

After each controller obtains its corresponding minimum value, the controllers communicate to find the global minimum Lagrange parameter, defined as follows:

\lambda_{\eta,ac}^{\min(v)} = \min_{j \in (1, \ldots, M)} \lambda_{j,ac}^{\min(v)}.    (24)

If \lambda_{\eta,ac}^{\min(v)} \ge 0, stop the algorithm; the optimal solution is x^{(v)}. Otherwise, determine \eta \in I_{ac}^{(v)} such that (24) is satisfied. Then set x^{(v+1)} = x^{(v)} and I_{ac}^{(v+1)} := I_{ac}^{(v)} \setminus \{\eta\}, where \setminus denotes set subtraction and := denotes assignment.

Case 2: p^{(v)} \ne 0. In this case, a proper step length \alpha_v \in [0, 1] must be found. Suppose the new iteration point is expressed as follows:

x^{(v+1)} = x^{(v)} - \alpha_v p^{(v)}, \quad \alpha_v \in [0, 1],    (25)

where \alpha_v is chosen to be the maximum value in [0, 1] such that x^{(v+1)} is feasible. For j \in I_{ac}^{(v)} the constraints can never become infeasible, since L_j p^{(v)} = 0; it only remains to check the cases j \in I_i \setminus I_{i,ac}^{(v)} (i = 1, \ldots, M) for each subsystem, where I_i represents the constraint indices of the ith subsystem. Each subsystem can check its corresponding constraints in parallel:

L_j x^{(v+1)} = L_j x^{(v)} - \alpha_v L_j p^{(v)}, \quad (j \in I_i \setminus I_{i,ac}^{(v)},\ i = 1, \ldots, M).    (26)

If L_j p^{(v)} \ge 0, then obviously L_j x^{(v+1)} \le L_j x^{(v)} \le d_j. However, if L_j p^{(v)} < 0, in order to guarantee feasibility, \alpha_v should satisfy the following inequalities:

\alpha_{v,i} \le \frac{L_j x^{(v)} - d_j}{L_j p^{(v)}}, \quad (j \in I_i \setminus I_{i,ac}^{(v)},\ i = 1, \ldots, M).    (27)

So the value of \alpha_v is:

\alpha_v = \min \left( 1,\ \min_{i \in [1, \ldots, M],\ j \in I_i \setminus I_{i,ac}^{(v)},\ L_j p^{(v)} < 0} \frac{L_j x^{(v)} - d_j}{L_j p^{(v)}} \right).    (28)

Each subsystem computes its corresponding \alpha_{v,i} and the subsystems then communicate to find the overall minimum defined by the above equation. Based on the above step, the index of the constraint attaining a minimum value of \alpha_v less than 1 is referred to as the blocking constraint, represented by I_{bl}(p^{(v)}). Clearly, the following relation is satisfied:

L_{j,ac} x^{(v+1)} = L_{j,ac} x^{(v)} - \alpha_v L_{j,ac} p^{(v)} = d_{j,ac}, \quad j \in I_{bl}(p^{(v)}).    (29)

Then set x^{(v+1)} = x^{(v)} - \alpha_v p^{(v)}, and in the case of a blocking constraint, i.e. 0 \le \alpha_v < 1, the new active set should be I_{ac}^{(v+1)} := I_{ac}^{(v)} \cup I_{bl}(p^{(v)}).

3.5. Proposed algorithm

The proposed algorithm is summarized below.

STEP 1: Each controller computes its corresponding unconstrained solution according to Eq. (7), then checks the satisfaction of the box constraints in order to obtain the index set of the active constraints I_{ac}(x^{(0)}) as well as the initial start point x^{(0)}. If the unconstrained solution is in the feasible set, then it is the optimal solution; otherwise, go to Step 2.

STEP 2: Check the distance of the unconstrained solution for each subsystem, \gamma_i = \min_{j \in I_i} (x_j^{uc} / d_j) (i = 1, \ldots, M), to find the smallest \gamma_i. Compare it to the switching parameter \gamma. If \gamma_{i,\min} \ge \gamma, directly apply the initial start point x^{(0)} as the solution of optimization problem (1); otherwise, go to Step 3.

STEP 3: For v \ge 0, proceed as follows. Compute p_i^{(v)} from equation (14), using the range-space approach or the null-space approach depending on the number of active constraints, and collect the p_i^{(v)} to obtain p^{(v)}.
Case 1: If p^{(v)} = 0, each subsystem computes \lambda_{i,ac}^{(v)} according to (22), and the results are compared to obtain the minimum as in (23). If \lambda_{\eta,ac}^{\min(v)} \ge 0, stop the algorithm; the optimal solution is x^{(v)}. Otherwise, determine \eta \in I_{ac}^{(v)} such that (24) is satisfied; then set x^{(v+1)} = x^{(v)} and I_{ac}^{(v+1)} := I_{ac}^{(v)} \setminus \{\eta\}.
Case 2: If p^{(v)} \ne 0, compute \alpha_{v,i} in parallel according to (27) and compare to obtain the minimum \alpha_v as in (28). Then set x^{(v+1)} = x^{(v)} - \alpha_v p^{(v)}, and in the case of a blocking constraint, i.e. 0 \le \alpha_v < 1, set the new active set I_{ac}^{(v+1)} := I_{ac}^{(v)} \cup I_{bl}(p^{(v)}).
If the algorithm has not terminated, set v = v + 1 and return to Step 3.

Remark 3. When the switching parameter is set to \gamma = +\infty, the proposed algorithm reduces to a distributed version of the centralized active set QP method; the difference is that the inverse of the Hessian matrix is obtained offline to decrease the online computational burden. Thus, the distributed solution coincides with the centralized solution.
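Steps 1-3 can be rendered as a compact single-machine sketch (range-space variant only, box constraints, our own names; the dual-mode switch and the per-subsystem parallelism and communication are omitted):

```python
import numpy as np

def box_qp_active_set(H, f, d_low, d_up, max_iter=100):
    """Active-set solution of min 0.5 x'Hx + f'x s.t. d_low <= x <= d_up,
    following Steps 1-3 (range-space steps, eqs. (13)-(14), (21), (25)-(28)).
    Centralized sketch of the parallel scheme; variable names are ours."""
    n = len(f)
    H_inv = np.linalg.inv(H)                       # 'offline' Hessian inverse
    x = np.clip(-H_inv @ f, d_low, d_up)           # Step 1: clip unconstrained solution
    # Active set as signed indices: (+1, j) -> x_j at upper bound, (-1, j) -> lower.
    act = [(s, j) for j in range(n) for s in (+1, -1)
           if (s == +1 and x[j] >= d_up[j]) or (s == -1 and x[j] <= d_low[j])]
    for _ in range(max_iter):
        L_ac = np.array([s * np.eye(n)[j] for s, j in act]).reshape(len(act), n)
        f_v = -H @ x - f                           # eq. (9b)
        if len(act):
            S = L_ac @ H_inv @ L_ac.T              # Schur complement, eq. (13)
            lam = np.linalg.solve(S, -L_ac @ H_inv @ f_v)
            p = H_inv @ (-f_v - L_ac.T @ lam)      # eq. (14)
        else:
            p = -H_inv @ f_v
        if np.allclose(p, 0):                      # Case 1: check multipliers
            if len(act) == 0:
                return x
            lam_orig = -L_ac @ (H @ x + f)         # eq. (21)
            if lam_orig.min() >= 0:                # KKT satisfied, cf. (5d)
                return x
            act.pop(int(np.argmin(lam_orig)))      # drop most negative multiplier
            continue
        # Case 2: step length over inactive bounds with L_j p < 0, eqs. (27)-(28).
        alpha, blocking = 1.0, None
        for s, j in ((s, j) for j in range(n) for s in (+1, -1)):
            if (s, j) in act:
                continue
            Ljp = s * p[j]
            dj = d_up[j] if s == +1 else -d_low[j]
            if Ljp < 0:
                a = (s * x[j] - dj) / Ljp
                if a < alpha:
                    alpha, blocking = a, (s, j)
        x = x - alpha * p                          # eq. (25)
        if blocking is not None:
            act.append(blocking)                   # add blocking constraint
    return x

# With H = I the optimum is simply the clipped unconstrained solution:
H = np.eye(2); f = np.array([-3.0, 0.4])
assert np.allclose(box_qp_active_set(H, f, -np.ones(2), np.ones(2)),
                   np.clip(-f, -1, 1))
```

The iteration terminates either at the KKT point or at the iteration cap, which mirrors the paper's option of stopping before convergence under tight sampling periods.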


3.6. Warm-start point strategy for iteration to accelerate convergence

In this subsection, the proposed approach is combined with a warm-start strategy to accelerate the convergence of the active set method and cut down the number of iterations of Step 3 of the algorithm in the previous section. During each iteration, the active set method removes or adds a constraint to the active set; as such, the total iteration number is related to the initial active set I_{ac}^{(0)}, and the more accurate the initial active set, the fewer iterations are required. Assume that between two successive sampling instants the system states do not change significantly. In other words, there is a small change in the linear term f of the quadratic programming problem in (44), which results in a small change in the unconstrained solution. Therefore, the change in the active set between two successive sampling instants may not be significant. The warm-start strategy uses the active set at the previous sampling instant as an estimate of the current one. Suppose at time k the optimal solution is expressed as follows:

x^*(k) = \left[ x_1^*(k)^T, \ldots, x_M^*(k)^T \right]^T,    (30)

where x_i^*(k) = \left[ x_i^*(k)^T, \ldots, x_i^*(k+N-1)^T \right]^T (i = 1, \ldots, M) and the corresponding active set is I_{i,ac}(k)^* for subsystem i. At sampling time instant k + 1, the warm initial point for subsystem i can be expressed as:

x_i^{(0)}(k+1) = \left[ x_i^*(k+1)^T, \ldots, x_i^*(k+N-1)^T, x_i^*(k+N-1)^T \right]^T \quad (i = 1, \ldots, M).    (31)
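The shift-and-repeat construction (31) can be sketched as follows (names are ours; x_prev stacks one subsystem's N input moves):

```python
import numpy as np

def warm_start(x_prev, N, m_i):
    """Warm start of eq. (31): shift one subsystem's previous optimal input
    trajectory forward one step and repeat the last block. Sketch only."""
    traj = x_prev.reshape(N, m_i)                 # N stacked input moves
    shifted = np.vstack([traj[1:], traj[-1:]])    # drop the first, repeat the last
    return shifted.ravel()

# Three moves of a one-input subsystem: [1, 2, 3] -> [2, 3, 3]
assert np.array_equal(warm_start(np.array([1.0, 2.0, 3.0]), N=3, m_i=1),
                      np.array([2.0, 3.0, 3.0]))
```

The previous active set is reused unchanged, as in I_{i,ac}(k+1)^{(0)} = I_{i,ac}(k)^*.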

Obviously, each controller constructs the warm initial point by removing the first element of the optimal solution vector of the previous sampling instant and repeating the last element. The corresponding initial active set should be I_{i,ac}(k+1)^{(0)} = I_{i,ac}(k)^* (i = 1, \ldots, M).

4. Case study

4.1. Academic example

Consider the following two-input, two-output system, which represents the model of a distillation column (Venkat, 2006); its transfer function matrix is:

G(s) = \begin{bmatrix} \dfrac{32.63}{(99.6s+1)(0.35s+1)} & \dfrac{-33.89}{(98.02s+1)(0.42s+1)} \\[2ex] \dfrac{34.84}{(110.5s+1)(0.03s+1)} & \dfrac{-18.85}{(75.43s+1)(0.3s+1)} \end{bmatrix}.    (32)
 

The system is described as \begin{bmatrix} T_{21} \\ T_{7} \end{bmatrix} = \begin{bmatrix} G_{11}(s) & G_{12}(s) \\ G_{21}(s) & G_{22}(s) \end{bmatrix} \begin{bmatrix} V \\ L \end{bmatrix}. The inputs of the system are L and V, which denote the reflux flow rate and the vapor boil-up flow rate to the distillation column, respectively, while the outputs T_{21} and T_{7} represent the temperatures of trays 21 and 7, respectively. The MPC design is based on the following parameters. The box input constraints considered in this case are expressed as:

-1.5 \le V \le 1.5, \quad -2 \le L \le 2.    (33)

The prediction and control horizons are both chosen as N = 25, and the sampling time is 0.5 s. The MPC objective function is J(x(k), u) = \sum_{i=1}^{N} y(k+i)^T Q_{yi} y(k+i) + \sum_{i=0}^{N-1} u(k+i)^T R_i u(k+i). The weighting matrices to penalize the inputs and outputs at each time


Fig. 4. The input and output trajectories for the cases ε = 7.5, ε = 6 and ε = ∞.

instants are chosen as R_i = diag(1, 1) (i = 0, . . ., N − 1) and Q_{y,i} = diag(50, 50) (i = 1, . . ., N − 1), where i is the time index. When i = N, Q_{y,N} = diag(5008.4, 5368.3), which is calculated from a discrete-time Lyapunov equation. The aim is to drive the system outputs from the origin to the steady-state outputs (−2, 2); control begins at time instant 15. Two simulations are performed to demonstrate the proposed algorithm.

Fig. 5. The iteration number for the cases ε = 7.5, ε = 6 and ε = ∞.

4.1.1. Simulation 1: Dual-mode strategy
In this simulation, we illustrate the proposed dual-mode constrained DMPC strategy, i.e., we evaluate the effect of different values of the switching parameter. Two control strategies are tested: (1) ε = 7.5 and ε = 6, in which the control mode changes when the distance to the constraint bound becomes small, and (2) ε = ∞, which implies that only a single mode is used. Fig. 4 shows the trajectories of the system inputs and outputs; Fig. 5 shows the iteration numbers of these approaches. It is clear from Fig. 5 that the control mode switches to the active set DMPC at time instant 22 when ε = 7.5 and at time instant 33 when ε = 6. Before that instant, the system uses the unconstrained solution, projected onto the constraint bound if necessary. The total iteration numbers are 902, 218 and 1481 for ε = 7.5, ε = 6 and ε = ∞, respectively. A lower number of iterations leads to faster computation and less communication between the two controllers; however, there is some loss of control performance, as shown in Fig. 4. The designer can choose ε to achieve the desired trade-off between computation time and performance: a larger ε results in better performance at the cost of increased computational effort.
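One plausible reading of the dual-mode switching rule can be sketched as follows; the violation-based distance test, the function names, and the `active_set_solve` callback are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def dual_mode_input(u_unc, lb, ub, eps, active_set_solve):
    """Hypothetical dual-mode rule: while the unconstrained optimum lies far
    beyond the box bounds (worst violation > eps), simply clip it to the box;
    once it comes within eps of the bounds, refine with the active-set QP,
    warm-started from the projected point."""
    violation = np.maximum(np.maximum(lb - u_unc, u_unc - ub), 0.0).max()
    clipped = np.clip(u_unc, lb, ub)
    if violation > eps:
        return clipped                    # cheap mode: projected unconstrained solution
    return active_set_solve(clipped)      # accurate mode: distributed active-set QP
```

This reproduces the qualitative behavior reported above: a larger ε makes the active-set mode engage earlier, giving better performance at higher computational cost.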

Fig. 6. The input and output trajectories with warm-start (WS) strategy and with unconstrained start.

Fig. 8. Reactor separator process.

Fig. 7. The iteration number with warm-start (WS) strategy and with unconstrained start.

Table 1
Computational time of each controller for three scenarios.

         Proposed approach (warm-start)   Proposed approach (warm-start and dual mode)   Centralized
N = 10   0.3712 s                         0.3125 s                                       0.5622 s
N = 25   0.6997 s                         0.5882 s                                       1.1261 s
N = 40   1.1943 s                         1.0217 s                                       1.7850 s

4.1.2. Simulation 2: Warm-start strategy combined with dual-mode strategy
In this simulation, we illustrate how the warm-start strategy can be integrated with the dual-mode strategy. The warm-start strategy, with and without the dual-mode strategy, is compared with the common active set DMPC with unconstrained warm-start, i.e., the initial point is the unconstrained solution moved to the constraint bound. Fig. 6 shows the trajectories of the system inputs and outputs for these three cases; Fig. 7 shows their iteration numbers. The total iteration numbers for the three cases are 135, 136 and 1481, respectively. The dual-mode strategy can be combined with the warm-start strategy to further reduce the iteration number, as shown in the case ε = 7.5.

4.1.3. Comparison of computational time
In order to illustrate the efficiency of the proposed method, the computational times of the proposed approach with warm-start, and with both warm-start and dual-mode strategies, are compared with that of a centralized MPC solution for horizon lengths N = 10, N = 25 and N = 40. When the dual-mode strategy is applied, the switching parameter is set as ε = 7.5. As shown in Table 1, the proposed approach reduces the computational time significantly (by 33.9%, 37.8% and 33.1%, respectively, for the three horizons) compared with a centralized MPC. With the dual-mode strategy, the computation time is further reduced, by 44.4%, 47.7% and 42.7%, respectively, compared with the centralized MPC, with a slight loss of performance as shown in Fig. 4.

4.2. Chemical process network
Consider the following chemical process with two CSTRs in series followed by a flash separator, with recycle of the top product of the separator; this process has been presented in the DMPC studies of Tippett and Bao (2013) and Liu et al. (2009). Fig. 8 shows the structure of this process. The following reactions occur in the reactors:

Reaction 1:  A → B
Reaction 2:  B → C.

The objective is to produce the product B while avoiding production of the by-product C. The mathematical model comprises a molar balance for each chemical species and an energy balance for each of the three units. For each unit there are three states: the mass fractions of A and B, and the corresponding temperature. Under assumptions of perfect mixing and constant physical properties, the system equations are given below.

Reactor 1:

ẋ_A1 = (F_10/V_1)(x_A10 − x_A1) + (F_r/V_1)(x_Ar − x_A1) − k_1 e^(−E_1/RT_1) x_A1
ẋ_B1 = (F_10/V_1)(x_B10 − x_B1) + (F_r/V_1)(x_Br − x_B1) + k_1 e^(−E_1/RT_1) x_A1 − k_2 e^(−E_2/RT_1) x_B1
Ṫ_1 = (F_10/V_1)(T_10 − T_1) + (F_r/V_1)(T_3 − T_1) − (ΔH_1/C_p) k_1 e^(−E_1/RT_1) x_A1 − (ΔH_2/C_p) k_2 e^(−E_2/RT_1) x_B1 + Q_1/(ρ C_p V_1)

Reactor 2:

ẋ_A2 = (F_1/V_2)(x_A1 − x_A2) + (F_20/V_2)(x_A20 − x_A2) − k_1 e^(−E_1/RT_2) x_A2
ẋ_B2 = (F_1/V_2)(x_B1 − x_B2) + (F_20/V_2)(x_B20 − x_B2) + k_1 e^(−E_1/RT_2) x_A2 − k_2 e^(−E_2/RT_2) x_B2
Ṫ_2 = (F_1/V_2)(T_1 − T_2) + (F_20/V_2)(T_20 − T_2) − (ΔH_1/C_p) k_1 e^(−E_1/RT_2) x_A2 − (ΔH_2/C_p) k_2 e^(−E_2/RT_2) x_B2 + Q_2/(ρ C_p V_2)

Separator:

ẋ_A3 = (F_2/V_3)(x_A2 − x_A3) − ((F_r + F_p)/V_3)(x_Ar − x_A3)
ẋ_B3 = (F_2/V_3)(x_B2 − x_B3) − ((F_r + F_p)/V_3)(x_Br − x_B3)
Ṫ_3 = (F_2/V_3)(T_2 − T_3) + Q_3/(ρ C_p V_3).
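As an illustration, the Reactor 1 balances can be coded as an ODE right-hand side suitable for a standard integrator; the dictionary-based parameter layout is a convenience of this sketch, and all parameter values used in the usage example below are arbitrary placeholders, not the paper's:

```python
import numpy as np

def reactor1_rhs(x, p):
    """Right-hand side of the Reactor 1 component and energy balances.
    x = (xA1, xB1, T1); p holds flows, volumes, kinetics and heat data."""
    xA1, xB1, T1 = x
    r1 = p["k1"] * np.exp(-p["E1"] / (p["R"] * T1)) * xA1   # A -> B
    r2 = p["k2"] * np.exp(-p["E2"] / (p["R"] * T1)) * xB1   # B -> C
    dxA1 = (p["F10"] / p["V1"]) * (p["xA10"] - xA1) \
         + (p["Fr"] / p["V1"]) * (p["xAr"] - xA1) - r1
    dxB1 = (p["F10"] / p["V1"]) * (p["xB10"] - xB1) \
         + (p["Fr"] / p["V1"]) * (p["xBr"] - xB1) + r1 - r2
    dT1 = (p["F10"] / p["V1"]) * (p["T10"] - T1) \
        + (p["Fr"] / p["V1"]) * (p["T3"] - T1) \
        - (p["dH1"] / p["Cp"]) * r1 - (p["dH2"] / p["Cp"]) * r2 \
        + p["Q1"] / (p["rho"] * p["Cp"] * p["V1"])
    return np.array([dxA1, dxB1, dT1])
```

The Reactor 2 and separator balances follow the same pattern with their own flows and volumes.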

Fig. 9. The state trajectories for the cases ε = 5 and ε = ∞.

The definitions of the variables can be found in Table 2. For the specific parameter values, the reader may consult Christofides et al. (2011). It has been assumed that all units have constant holdup and all species have constant relative volatility. The following relations define the separator overhead concentrations:

x_Ar = α_A x_A3 / (α_A x_A3 + α_B x_B3 + α_C x_C3)
x_Br = α_B x_B3 / (α_A x_A3 + α_B x_B3 + α_C x_C3)
x_Cr = α_C x_C3 / (α_A x_A3 + α_B x_B3 + α_C x_C3).   (34)

Table 2
Process variables.

x_A1, x_A2, x_A3    Mass fraction of A in vessels 1, 2, 3
x_B1, x_B2, x_B3    Mass fraction of B in vessels 1, 2, 3
x_Ar, x_Br, x_Cr    Mass fractions of A, B, C in the recycle
T_1, T_2, T_3       Temperatures in vessels 1, 2, 3
T_10, T_20          Feed stream temperatures to vessels 1, 2
F_1, F_2            Effluent flow rates from vessels 1, 2
F_10, F_20          Steady-state feed stream flow rates to vessels 1, 2
F_r, F_p            Flow rates of the recycle and purge
V_1, V_2, V_3       Volumes of vessels 1, 2, 3
E_1, E_2            Activation energies for reactions 1, 2
ΔH_1, ΔH_2          Heats of reaction for reactions 1, 2
α_A, α_B, α_C       Relative volatilities of A, B, C
Q_1, Q_2, Q_3       Heat inputs into vessels 1, 2, 3
C_p, R, ρ           Heat capacity, ideal gas constant and solution density

The objective of the DMPC is to stabilize the 9 states at the equilibrium from an initial point; the manipulated variables are the heat inputs to the three units, i.e., Q_i, i = 1, 2, 3. The sampling time is 0.02 h and the control horizon is N = 6. The model used in the DMPC controllers is the linearized model at the desired equilibrium point. In this simulation, the selected equilibrium point is shown in Table 3 and the corresponding equilibrium inputs are Q_1s = 12.6 × 10^5 kJ/h, Q_2s = 1.332 × 10^6 kJ/h and Q_3s = 1.18 × 10^8 kJ/h. The input constraints are designed as:

|ΔQ_1| ≤ 0.74 × 10^6 kJ/h
|ΔQ_2| ≤ 0.468 × 10^6 kJ/h
|ΔQ_3| ≤ 0.812 × 10^6 kJ/h,   (35)

where ΔQ_i = Q_i − Q_is, i = 1, 2, 3. The MPC objective function is J(x(k), u) = Σ_{i=1}^{N} x(k + i)^T Q_{x,i} x(k + i) + Σ_{i=0}^{N−1} u(k + i)^T R_i u(k + i). The weighting matrices are selected as:

Q_{x,i} = 5.2 × 10^12 diag([2000, 2000, 25, 2000, 2000, 25, 2000, 2000, 25]),   i = 1, . . ., N,
R_i = diag([1, 1, 1]),   i = 0, . . ., N − 1.   (36)

While not necessary in this case, a terminal constraint can be added to ensure stability.

Table 3
Equilibrium point.

x_A1s = 0.605   x_B1s = 0.386   T_1s = 425.9 K
x_A2s = 0.605   x_B2s = 0.386   T_2s = 422.6 K
x_A3s = 0.346   x_B3s = 0.630   T_3s = 427.3 K

4.2.1. Case 1: Dual-mode strategy
In this process, the switching parameter is set as ε = 5 and ε = ∞, respectively. Figs. 9 and 10 show the state and input trajectories for these two cases; Fig. 11 shows the iteration number at each time instant. From the input and output trajectories, it is clear that the performance of the two cases is nearly identical. However, the iteration count for ε = 5 is greatly reduced, as the iterations only begin at the fifth time instant. The total iteration numbers of the two cases are 21 and 83, respectively. Thus, in this case

Fig. 10. The input trajectories for the cases ε = 5 and ε = ∞.

the two-stage strategy can reduce the iteration number without any performance degradation.
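For reference, the separator overhead relations in (34) amount to a simple normalized weighting by relative volatility; the sketch below is an illustrative helper (the function name and the sample values are not from the paper):

```python
def overhead_fractions(xA3, xB3, xC3, aA, aB, aC):
    """Constant relative-volatility overhead (recycle) compositions, Eq. (34).
    The three returned fractions always sum to one."""
    denom = aA * xA3 + aB * xB3 + aC * xC3
    return aA * xA3 / denom, aB * xB3 / denom, aC * xC3 / denom
```

For example, with bottoms fractions (0.3, 0.6, 0.1) and volatilities (2.0, 1.0, 0.5), the recycle composition is (0.48, 0.48, 0.04).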

4.2.2. Case 2: Warm-start strategy combined with dual-mode strategy
In this simulation, the warm-start strategy is integrated with the dual-mode strategy. The state and input trajectories of the three cases are virtually identical, as shown in Figs. 12 and 13. However, the iteration numbers for the cases with warm-start are significantly reduced after the first and fifth time instants, as shown in Fig. 14. The total iteration numbers of the three cases are 20, 37 and 83, respectively. This simulation shows that the warm-start strategy can greatly improve the convergence speed, and that it can be combined with the dual-mode strategy to further reduce the iteration number.

Fig. 11. The iteration number for the cases ε = 5 and ε = ∞.

5. Discussion and conclusion

Fig. 12. The state trajectories for the case with warm-start (WS) and without warm-start.

Fig. 13. The input trajectories for the case with warm-start (WS) and without warm-start.

This paper develops a novel fast DMPC algorithm based on a distributed active set method. The relatively simple form of the constraints appearing in MPC applications means that the active set method is a natural choice. Because of the special form of the distributed QP problem considered in this paper (i.e., that appearing in DMPC applications), a direct comparison with other fast QP algorithms would not be fair; nevertheless, the simulations show that the proposed method is computationally fast, converging in far fewer iterations than the traditional centralized active set method. To develop this fast DMPC algorithm, the specific characteristics of the QP problem appearing in linear DMPC have been exploited. The inverse of the Hessian matrix is computed offline, and the initial iterate is obtained from the unconstrained solution projected onto the constraint bounds. A dual-mode strategy has been proposed to accelerate the computation of the control action. When the state is far away from the set point, i.e., the unconstrained solution is far away from the set point, the applied control law is the unconstrained solution moved to the constraint bound. Alternatively, when the unconstrained solution is near the constraint bound, the projected unconstrained solution is used as the initial point for the distributed active-set-based optimization. Finally, a warm-start strategy is proposed to further reduce the computational burden. Overall, the proposed algorithm is highly parallel and may significantly reduce


computation time, and can be integrated with a warm-start strategy to reduce the number of iterations further.

An important area of research is nonlinear distributed MPC; as such, the problem of nonlinear convex optimization is of interest. The application of the distributed optimization method developed in this paper to nonlinear convex optimization problems may be challenging. The proposed method fully exploits the special structure of the QP problem in linear distributed MPC, and its extension to nonlinear optimization problems is not trivial. Further work should focus on specific classes of nonlinear systems whose structure can be easily exploited.

Acknowledgements

This work is partially supported by the ARC Discovery Project DP130103330, National Natural Science Foundation of China 61320106009 and 61374121, and Zhejiang Key Discipline of Instrument Science and Technology JL130107. The first author wishes to acknowledge the financial support from Zhejiang University.

Appendix A.

A.1. Centralized model predictive control

The now-classical state space form of the standard linear MPC QP problem (see Rawlings and Mayne, 2009, for example) is developed below for completeness. Consider a large-scale process represented by the following linear state space model:

x(k + 1) = Ax(k) + Bu(k),   (37)

where x(k) ∈ R^n and u(k) ∈ R^m are the state and input vectors at time instant k. Suppose the full state information is available to the controller. The centralized MPC objective function is defined as:

Fig. 14. The iteration number for the case with warm-start (WS) and without warm-start.

φ(k, x(k)) = Σ_{i=1}^{N−1} (1/2) x(k + i)^T Q_i x(k + i) + Σ_{i=0}^{N−1} (1/2) u(k + i)^T R_i u(k + i) + (1/2) x(k + N)^T Q_f x(k + N)
           = (1/2) x_{1|N}^T Q x_{1|N} + (1/2) u_{1|N}^T R u_{1|N},   (38)

where the weighting matrices Q = diag(Q_1, Q_2, . . ., Q_{N−1}, Q_f) and R = diag(R_1, R_2, . . ., R_N) are positive definite; Q_f is the terminal penalty applied to stabilize the system; the predicted states and inputs are x_{1|N} = [x(k + 1)^T, . . ., x(k + N)^T]^T and u_{1|N} = [u(k)^T, . . ., u(k + N − 1)^T]^T, respectively; and N is the prediction horizon. Based on the system model (37), the relationship between the predicted states and inputs over a finite horizon can be expressed as:

x_{1|N} = F x(k) + E u_{1|N},   (39)

where

E = [ B           0           · · ·   0
      AB          B           · · ·   0
      ⋮           ⋮           ⋱       ⋮
      A^{N−1}B    A^{N−2}B    · · ·   B ],      F = [ A
                                                      A^2
                                                      ⋮
                                                      A^N ].   (40)
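The prediction matrices of (39)–(40) can be assembled mechanically; the following sketch (function name assumed for illustration) builds them for a given model (A, B) and horizon N:

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Build F and E of x_{1|N} = F x(k) + E u_{1|N}, Eqs. (39)-(40)."""
    n, m = B.shape
    # F stacks A, A^2, ..., A^N
    F = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    # E is block lower triangular with A^{i-j} B blocks
    E = np.zeros((N * n, N * m))
    for i in range(N):            # block row: prediction of x(k+i+1)
        for j in range(i + 1):    # block col: influence of u(k+j)
            E[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return F, E
```

A quick sanity check is to compare F x(k) + E u_{1|N} against a direct simulation of x(k + 1) = Ax(k) + Bu(k).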

Fig. 15. Original space.

Substituting the above equation into (38) to eliminate x_{1|N}, the following objective function is obtained for the centralized MPC:

φ(k, x(k)) = (1/2) u_{1|N}^T (E^T Q E + R) u_{1|N} + (E^T Q F x(k))^T u_{1|N} + constant.   (41)

In the DMPC framework, the control input of each subsystem is denoted by u_{1|N,i}, i = 1, 2, . . ., M, with M the total number of subsystems. To group the inputs according to the order of the subsystems, u_{1|N} is pre-multiplied by a permutation matrix T that interchanges its rows:

U_{1|N} = T u_{1|N},   (42)

where U_{1|N} = [u_{1|N,1}^T, . . ., u_{1|N,M}^T]^T and u_{1|N,i} = [u_i(k)^T, . . ., u_i(k + N − 1)^T]^T. Thus, the objective function can be rewritten as:

φ(k, x(k)) = (1/2) U_{1|N}^T H U_{1|N} + f^T U_{1|N} + constant,   (43)

where

H = T^{−T} (E^T Q E + R) T^{−1},      f = T^{−T} E^T Q F x(k).   (44)
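The offline/online split behind (44) can be sketched for the special case T = I (no reordering of inputs); the function name is an assumption of this sketch, and E, F, Q, R are taken as already-built stacked matrices:

```python
import numpy as np

def condensed_qp(E, F, Qbar, Rbar, x0):
    """Form H and f of Eqs. (43)-(44) with T = I, and return the
    unconstrained minimizer -H^{-1} f via the precomputable inverse."""
    H = E.T @ Qbar @ E + Rbar        # depends only on the model and weights:
    Hinv = np.linalg.inv(H)          # hence invertible offline, once
    f = E.T @ Qbar @ F @ x0          # depends on the current state x(k):
    return H, f, -Hinv @ f           # recomputed online each sampling instant
```

This separation is what allows the online computation to reduce to a matrix-vector product before any active-set iterations begin.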

In the above equation, it is clear that H is symmetric and positive definite. Additionally, the Hessian matrix depends only on the system model and the weighting matrices Q and R, while the vector f depends on the current state x(k).

A.2. Objective function of the online QP

Consider the following optimization problem with uncoupled constraints:

min_{U_{1|N}}   Φ(U_{1|N}) = (1/2) U_{1|N}^T H U_{1|N} + f^T U_{1|N}
s.t.            L U_{1|N} ≤ d.   (45)

Suppose only box constraints are considered here for each element of U_{1|N}. In the above optimization problem, the non-diagonal Hessian matrix H couples the decision variables of the subsystems through the objective function. However, there is no coupling in the constraints, as only independent box constraints are considered. In Fig. 15, the center of the ellipse represents the unconstrained solution of the optimization problem (45). The plane is divided into 9 regions according to which constraints are satisfied.
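For box constraints, projecting a candidate solution and identifying which bounds are hit can be done element-wise, which is also what makes the constraint checks parallelizable across subsystems; this is an illustrative sketch, not the paper's exact region-classification routine:

```python
import numpy as np

def project_and_classify(U, lb, ub):
    """Clip a candidate U to the box [lb, ub] and record which lower/upper
    bounds are active, mirroring the region picture of Fig. 15."""
    Up = np.clip(U, lb, ub)
    active_lower = set(np.where(U <= lb)[0])   # components pinned at lb
    active_upper = set(np.where(U >= ub)[0])   # components pinned at ub
    return Up, active_lower, active_upper
```

The two index sets together form a natural initial active set for the distributed active-set iterations.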

References

Bartlett RA, Biegler LT, Backstrom J, Gopal V. Quadratic programming algorithms for large-scale model predictive control. J Process Control 2002;12(7):775–95.
Bemporad A, Morari M, Dua V, Pistikopoulos EN. The explicit linear quadratic regulator for constrained systems. Automatica 2002;38(1):3–20.
Biegler LT. Efficient solution of dynamic optimization and NMPC problems. In: Nonlinear model predictive control. Springer; 2000. p. 219–43.
Cairano SD, Brand M, Bortoff SA. Projection-free parallel quadratic programming for linear model predictive control. Int J Control 2013;86(8):1367–85.
Camponogara E, Jia D, Krogh BH, Talukdar S. Distributed model predictive control. IEEE Control Syst 2002;22(1):44–52.
Christofides PD, Liu J, Muñoz de la Peña D. Networked and distributed predictive control: methods and nonlinear process network applications. Springer; 2011.
Christofides PD, Scattolini R, Muñoz de la Peña D, Liu J. Distributed model predictive control: a tutorial review and future research directions. Comput Chem Eng 2013;51:21–41.
Clarke DW, Mohtadi C, Tuffs P. Generalized predictive control. Part I: The basic algorithm. Automatica 1987;23(2):137–48.
Cutler CR, Prett DM, Ramaker BL. Dynamic matrix control method. US Patent 4,349,869; September 14, 1982.
Farina M, Scattolini R. Distributed predictive control: a non-cooperative algorithm with neighbor-to-neighbor communication for linear systems. Automatica 2012;48(6):1088–96.
Ferreau HJ, Bock HG, Diehl M. An online active set strategy to overcome the limitations of explicit MPC. Int J Robust Nonlinear Control 2008;18(8):816–30.
Giselsson P, Doan MD, Keviczky T, Schutter BD, Rantzer A. Accelerated gradient methods and dual decomposition in distributed model predictive control. Automatica 2013;49(3):829–33.
Hermans RM, Jokić A, Lazar M, Alessio A, Van den Bosch PP, Hiskens IA, et al. Assessment of non-centralised model predictive control techniques for electrical power networks. Int J Control 2012;85(8):1162–77.
Johansen TA, Petersen I, Slupphaug O. Explicit sub-optimal linear quadratic regulation with state and input constraints. Automatica 2002;38(7):1099–111.
Liu J, Muñoz de la Peña D, Christofides PD. Distributed model predictive control of nonlinear process systems. AIChE J 2009;55(5):1171–84.
Liu J, Chen X, Muñoz de la Peña D, Christofides PD. Sequential and iterative architectures for distributed model predictive control of nonlinear process systems. AIChE J 2010;56(8):2137–49.
Liu S, Zhang J, Liu J, Feng Y, Rong G. Distributed model predictive control with asynchronous controller evaluations. Can J Chem Eng 2013;91(10):1609–20.
Maestre J, Muñoz de la Peña D, Camacho E, Alamo T. Distributed model predictive control based on agent negotiation. J Process Control 2011;21(5):685–97.
Magni L, Scattolini R. Stabilizing decentralized model predictive control of nonlinear systems. Automatica 2006;42(7):1231–6.
Mayne DQ, Rawlings JB, Rao CV, Scokaert PO. Constrained model predictive control: stability and optimality. Automatica 2000;36(6):789–814.
Necoara I, Doan D, Suykens JA. Application of the proximal center decomposition method to distributed model predictive control. In: 47th IEEE Conference on Decision and Control (CDC 2008). IEEE; 2008. p. 2900–5.
Necoara I, Nedelcu V, Dumitrache I. Parallel and distributed optimization methods for estimation and control in networks. J Process Control 2011;21(5):756–66.
Pannocchia G, Rawlings JB, Wright SJ. Fast, large-scale model predictive control by partial enumeration. Automatica 2007;43(5):852–60.
Qin SJ, Badgwell TA. A survey of industrial model predictive control technology. Control Eng Pract 2003;11(7):733–64.
Rawlings JB, Mayne DQ. Model predictive control: theory and design. Nob Hill Publishing; 2009.
Rawlings JB, Stewart BT. Coordinating multiple optimization-based controllers: new opportunities and challenges. J Process Control 2008;18(9):839–45.
Richalet J, Rault A, Testud J, Papon J. Model predictive heuristic control: applications to industrial processes. Automatica 1978;14(5):413–28.
Scattolini R. Architectures for distributed and hierarchical model predictive control – a review. J Process Control 2009;19(5):723–31.
Stewart BT, Rawlings JB, Wright SJ. Hierarchical cooperative distributed model predictive control. In: American Control Conference (ACC), 2010. IEEE; 2010. p. 3963–8.
Tippett MJ, Bao J. Distributed model predictive control based on dissipativity. AIChE J 2013;59(3):787–804.
Venkat AN. Distributed model predictive control: theory and applications [Ph.D. thesis]. University of Wisconsin–Madison; 2006.
Wang Y, Boyd S. Fast model predictive control using online optimization. IEEE Trans Control Syst Technol 2010;18(2):267–78.
Wright S, Nocedal J. Numerical optimization. New York: Springer; 1999.
Yildirim EA, Wright SJ. Warm-start strategies in interior-point methods for linear programming. SIAM J Optim 2002;12(3):782–810.
Zafra-Cabeza A, Maestre J, Ridao MA, Camacho EF, Sánchez L. A hierarchical distributed model predictive control approach to irrigation canals: a risk mitigation perspective. J Process Control 2011;21(5):787–99.
Zheng Y, Li S, Li N. Distributed model predictive control over network information exchange for large-scale systems. Control Eng Pract 2011;19(7):757–69.