Recent developments in the control of constrained hybrid systems


Computers and Chemical Engineering 30 (2006) 1619–1631

Manfred Morari, Miroslav Barić*
ETH Zentrum, Automatic Control Lab, Physikstrasse 3, ETL K13.2, CH-8092 Zurich, Switzerland
Received 22 February 2006; received in revised form 4 May 2006; accepted 16 May 2006; available online 22 August 2006
*Corresponding author. Tel.: +41 1 632 6145; fax: +41 1 632 1211. E-mail address: [email protected] (M. Barić).
doi:10.1016/j.compchemeng.2006.05.041

Abstract

We review recently developed schemes for the constrained control of systems integrating logic and continuous dynamics. The control paradigm we focus on is model predictive control (MPC) and its derivatives, with the emphasis on the explicit solution. The exposition of the basic theory is supplemented by a number of application case studies showing the effectiveness as well as the limitations of the deployed algorithms. Current and future lines of research are briefly discussed.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Constrained systems; Hybrid systems; Model predictive control; Optimal control; Explicit solution

1. Introduction

In the 1980s model predictive control (MPC) under its various guises (dynamic matrix control, Cutler and Ramaker (1979); model algorithmic control, Richalet, Rault, Testud, and Papon (1978)) took the process industries by storm. From a practical point of view its main advantage (and the reason for its success) was that it was able to handle multivariable systems with constraints in a systematic and transparent manner. To the present day MPC remains the only control technology for which this is true. Indeed, many companies say that "for us multivariable control is MPC". From a control theoretic point of view MPC is characterized by two features: 1) an optimal control problem is solved over a finite horizon; the first one of the computed control moves is implemented, at the next time step the state of the system is determined, and the optimal control problem is solved again with the horizon shifted forward by one time step (receding horizon control, RHC); 2) the described optimal control problem is solved on-line in real time (MPC). We will distinguish these two characteristics: the first one is related to the formulation of the control problem and as such defines the control law; the second one is related to the implementation of the controller.


For example, generalized predictive control (GPC) (Clarke, Mohtadi, & Tuffs, 1987) is a RHC scheme, but the controller is determined explicitly off-line, involving no optimization or other significant computation on-line. On the other hand, dynamic matrix control (Cutler & Ramaker, 1979) is an MPC scheme involving the solution of a linear or quadratic program in real time. Clearly, the optimization is performed on-line if it is not feasible or inconvenient to solve it off-line in order to obtain an explicit representation of the controller. Over the last 25 years RHC and MPC have undergone many developments regarding the underlying theory, the implementation (on-line optimization) and the type of systems and applications that can be handled. This has been especially true in the five years since the last CPC, as was analyzed at a recent meeting (Findeisen, Biegler, & Allgöwer, in press), and will be summarized in this paper.

1) The algorithms for on-line optimization have been improved. New ideas have been proposed for solving large quadratic programs (Tenny, Wright, & Rawlings, 2004; Pannocchia, Rawlings, & Wright, in press) as they arise from the optimal control of constrained linear systems with a quadratic objective. Also much progress has been achieved in tailoring general nonlinear programming techniques for on-line use, making them suitable for the control of "fast" nonlinear systems (Diehl et al., 2002; Jockenhoevel, Biegler, & Waechter, 2003; Kameswaran & Biegler, in press). Finally, the classes of systems where MPC can be applied have been expanded to include systems with switches and logic (Bemporad & Morari, 1999; El-Farra, Mhaskar, & Christofides, 2005a, 2005b; Tyler & Morari, 1999).

2) Stability and infinite-time feasibility (invariance) of RHC have been studied extensively and are now thoroughly understood (Mayne, Rawlings, Rao, & Scokaert, 2000). A unified view has emerged of how the RHC problem needs to be formulated and what (sufficient) conditions need to be imposed such that the closed-loop system always admits a feasible control move and is stable. It should be noted that all these conditions are conservative in the sense that they usually require unnecessarily long horizons.

3) The mathematical properties of the RHC law, e.g. continuity and differentiability, have been analyzed for a wide range of objective functions and system classes (Borrelli, 2003).

4) Explicit RHC laws have been available for unconstrained linear systems minimizing a quadratic objective (the classic linear quadratic regulator) for decades, but for no other problem class until recently. Techniques have been developed and software is now available to calculate such explicit laws efficiently and numerically reliably for constrained linear systems of low order (Bemporad, Morari, Dua, & Pistikopoulos, 2002), hybrid systems (Borrelli, Baotic, Bemporad, & Morari, 2005) and even some simple nonlinear systems (Fotiou, Rostalski, Sturmfels, & Morari, 2005) using tools from computational and algebraic geometry. In this explicit representation the control law requires very little memory and computational effort for the on-line implementation and therefore makes the power of RHC accessible to very fast systems where the sampling times are in the range of microseconds.

5) The explicit control law representation also makes it possible for the first time to analyze the closed-loop properties of RHC, in particular stability. Thus, it is no longer necessary to impose sufficient conditions at the design stage that guarantee stability but at the same time require long horizons and therefore lead to excessive controller complexity. Stability can be analyzed after the design, and the design parameters, in particular the horizon and weights, can be adjusted iteratively until closed-loop stability is achieved.

In this perspective paper we will focus on several of the developments mentioned above. First, we will discuss the extension of the system classes that MPC can handle to include systems with logic and constraints (item 1). In the second part we will discuss the explicit RHC law representation (items 3–5).

2. MPC for hybrid systems

Most of the control theory and tools have been developed for systems whose evolution is described by smooth linear or nonlinear state transition functions. In many applications, however, the system to be controlled also comprises parts described by logic, such as on/off switches or valves, gears or speed selectors, and evolutions dependent on if-then-else rules. Often, the control of these systems is left to schemes based on heuristic rules inferred from practical plant operation.

In the 1990s, researchers started dealing with hybrid systems, namely hierarchical systems comprising dynamical components at the lower level, governed by upper-level logical/discrete components (Branicky, Borkar, & Mitter, 1998; Grossmann, Nerode, Ravn, & Rischel, 1993). Hybrid systems arise in a large number of application areas, and have been attracting much attention in both academic theory-oriented circles as well as in industry. First with Tyler and Morari (1999) and then with Bemporad and Morari (1999) we set out to establish a framework for modeling and controlling systems described by interacting physical laws, logical rules, and operating constraints. Grossmann (Raman & Grossmann, 1991) introduced us to the concepts of integer and logic programming that form the basis; we translated them to the dynamic system domain. According to techniques described, for example, by Williams (1993), Cavalier, Pardalos, and Soyster (1990) and Raman and Grossmann (1992), propositional logic can be transformed into linear inequalities involving integer and continuous variables. Combining the logic with the continuous system we obtain mixed logical dynamical (MLD) systems described by linear dynamic equations subject to linear mixed-integer inequalities, i.e. inequalities involving both continuous and binary (or logical, or 0-1) variables:

x_{k+1} = A x_k + B_1 u_k + B_2 δ_k + B_3 z_k,    (1a)
y_k = C x_k + D_1 u_k + D_2 δ_k + D_3 z_k,    (1b)
E_2 δ_k + E_3 z_k ≤ E_1 u_k + E_4 x_k + E_5,    (1c)

where the state x, the output y and the input u can have continuous as well as 0-1 components. The continuous variables z are introduced when translating some propositional logic expressions into mixed-integer inequalities. Conditions have been derived for an MLD system to be well posed (Bemporad & Morari, 1999), i.e. for a given state-input pair (x_k, u_k) to be mapped into a unique state x_{k+1} at the next time step. Though this may not be obvious at first sight, many practical systems can be described in the MLD framework. MLD systems generalize a wide set of models, among which are linear hybrid systems, finite state machines, some classes of discrete event systems, constrained linear systems, and nonlinear systems whose nonlinearities can be expressed (or, at least, suitably approximated) by piecewise linear functions. Indeed, when the described map is continuous, Heemels, De Schutter, and Bemporad (2001) have shown that MLD systems are entirely equivalent (under some mild assumptions) in their expressiveness to a wide range of other system descriptions in discrete time, in particular piecewise affine (PWA) systems, linear complementarity (LC) systems, extended linear complementarity (ELC) systems and max-min-plus-scaling (MMPS) systems. In general, the derivation of an MLD model on the basis of an engineering description is a tedious task, almost impossible to do by hand except for trivial example systems. Therefore, we have developed the modeling language HYSDEL (Torrisi & Bemporad, 2004) that makes the novel framework readily accessible to the engineering community.
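As an illustration of the kind of translation that HYSDEL automates (a hand-written sketch, not the tool's actual output), the standard big-M reformulation encodes the proposition [δ = 1] ⟺ [a^T x ≤ b] with two linear inequalities, assuming known bounds m ≤ a^T x − b ≤ M over the region of interest; the bounds M, m and the tolerance ε below are modeling choices, not quantities from the paper.

```python
import numpy as np

def big_m_equivalence(a, b, M, m, eps=1e-6):
    """Return (F, g) with F @ [x; delta] <= g encoding the proposition
    [delta = 1] <=> [a^T x <= b], given bounds m <= a^T x - b <= M."""
    a = np.asarray(a, dtype=float)
    # delta = 1  =>  a^T x <= b :    a^T x - b <= M (1 - delta)
    row1, g1 = np.concatenate([a, [M]]), b + M
    # delta = 0  =>  a^T x  > b :    a^T x - b >= eps + (m - eps) delta
    row2, g2 = np.concatenate([-a, [m - eps]]), -b - eps
    return np.vstack([row1, row2]), np.array([g1, g2])
```

HYSDEL performs this bookkeeping automatically for all logic/dynamics interconnections and collects the resulting inequalities in the form (1c).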

As detailed in Bemporad and Morari (1999), stability of MLD systems can be defined, and optimal control problems for MLD systems can be posed in the classic sense, where some objective function penalizing states and inputs is minimized over a finite horizon. Closed-loop stability can be guaranteed when the finite-time optimal controller with the appropriate terminal constraints is applied in a receding horizon fashion. When the objective function is linear or quadratic, the mathematical programming problem that needs to be solved is a mixed-integer linear or quadratic program (MILP or MIQP), for which free as well as commercial solvers are readily available (ILOG, 2003). In summary, we have described an approach to apply MPC to a general class of systems involving continuous dynamics, logic and constraints. Because MILPs or MIQPs have to be solved on-line, the approach is limited to slow systems of moderate complexity. In the following sections we will demonstrate the effectiveness of the proposed control technique on two examples of industrial importance. After that we will discuss its limitations and possible alternatives.
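A minimal sketch of this on-line scheme is given below: it builds one CFTOC instance for an MLD model of the form (1) with a quadratic cost and solves it as an MIQP with cvxpy. All matrices, weights and the horizon are placeholder assumptions, the output equation (1b) and proper terminal ingredients are omitted, and a mixed-integer-capable solver must be installed.

```python
import cvxpy as cp

def mld_mpc_step(x0, A, B1, B2, B3, E1, E2, E3, E4, E5, Q, R, N):
    """One receding-horizon step for the MLD model (1): build the CFTOC
    problem with a quadratic cost, solve it as an MIQP and return the
    first optimal input (simplified terminal penalty, output eq. omitted)."""
    nx, nu = A.shape[0], B1.shape[1]
    nd, nz = B2.shape[1], B3.shape[1]
    x = cp.Variable((nx, N + 1))
    u = cp.Variable((nu, N))
    d = cp.Variable((nd, N), boolean=True)   # the 0-1 delta variables
    z = cp.Variable((nz, N))                 # auxiliary continuous variables
    cost, cons = cp.quad_form(x[:, N], Q), [x[:, 0] == x0]
    for k in range(N):
        cons += [x[:, k + 1] == A @ x[:, k] + B1 @ u[:, k]
                 + B2 @ d[:, k] + B3 @ z[:, k],
                 E2 @ d[:, k] + E3 @ z[:, k]
                 <= E1 @ u[:, k] + E4 @ x[:, k] + E5]
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    cp.Problem(cp.Minimize(cost), cons).solve()  # needs an MIQP solver, e.g. GUROBI
    return u[:, 0].value
```

Only the first input u[:, 0] is applied; at the next sampling instant the problem is re-solved from the newly measured state, which is exactly the receding horizon policy whose on-line computational burden motivates the explicit solutions of Section 3.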

2.1. Control of co-generation power plants

In the last decade, the electric power industry has been subject to deep changes in structure and organization. On the one hand, market liberalization and the associated fierce competition have led to a strong focus on cost reduction and optimal operation strategies. On the other hand, stricter environmental legislation makes operational constraints tighter. In this context, the use of combined cycle power plants (CCPPs) has become more and more popular. They are more efficient and flexible than conventional configurations based on boilers and steam turbines, not to mention nuclear power plants. A typical CCPP is composed of a gas cycle and a steam cycle. The gas cycle is driven by some fossil fuel (usually natural gas) and produces electric power via expansion of hot gases in a (gas) turbine. The steam cycle is supplied with the still hot exhaust gases of the gas turbine and generates both electricity and steam for the industrial processes. Clearly, the liberalization of the energy market has promoted the need to operate CCPPs in the most efficient way, that is, by maximizing the profits due to the sales of steam and electricity and by minimizing the operating costs. We considered the problem of optimizing the short-term operation of a CCPP, i.e. optimizing the plant on an hourly basis over a time horizon that may vary from a few hours to one day (Ferrari-Trecate, Gallestey, Letizia, Morari, & Antoine, 2004). The usual paradigm is to recast the economic optimization into the minimization of a cost-minus-revenues functional and to account for the physical model of the plant through suitably defined constraints. The studied co-generation combined cycle power plant comprises four main components: a gas turbine, a heat recovery steam generator, a steam turbine, and a steam supply for a paper mill. The main features that require modeling the island power plant as a hybrid system are:

• the presence of the binary inputs denoting the switching of the steam turbines;
• the turbines have different start-up modes, depending on how long the turbines were off;
• electric power, steam flow and fuel consumption are continuous-valued quantities evolving with time;
• the CCPP is modeled as a piecewise affine system.

Furthermore, the following constraints have to be taken into account:

• the operating constraints on the minimum amount of time for which the turbines must be kept on/off (the so-called minimum up/down times);
• a priority constraint allowing the steam turbine to be switched only when the gas turbine is on. This condition, together with the previous one, leads to constraints on the sequences of logic inputs which can be applied to the system;
• the gas turbine load and the steam mass flow are bounded.

The complete MLD model capturing all hybrid features of the power plant involves 12 state variables, 25 binary δ-variables, 9 z-variables and 103 inequalities. In our case studies we considered horizons up to 24 h, yielding MILPs with up to 1100 decision variables, 650 of which are integers, and 2850 constraints. The computational times needed for solving the MILPs on a Pentium II 400 were less than 100 s, much shorter than the sampling time of 1 h, making the scheme attractive for on-line implementation. Independently, Gollmer, Nowak, Römisch, and Schultz (2000) proposed an optimal control system for a utility company comprising 34 thermal and 7 hydro plants. Their control is also based on hourly discretization and includes start-up costs for thermal plants. Computation times for solutions with a one-week horizon range between 1 and 8 min.

2.2. Supermarket refrigeration systems

The heart of any supermarket refrigeration system is a central compressor rack, comprising several compressors connected in parallel, that maintains the required flow of refrigerant to the refrigerated display cases located in the supermarket sales area. Each display case has an inlet valve for refrigerant that needs to be opened and closed such that the air temperature in the display case is kept within tight bounds to ensure a high quality of the goods. For many years, the control of supermarket refrigeration systems has been based on distributed control systems, which are flexible and simple. In particular, each display case used to be equipped with an independent hysteresis controller that regulates the air temperature in the display case by manipulating the inlet valve. The major drawback, however, is that the control loops are vulnerable to self-induced disturbances caused by the interaction between the distributed control loops. In particular, practice and simulations show that the distributed hysteresis controllers have the tendency to synchronize (Larsen, 2004), meaning that the opening and closing actions of the valves coincide.


Consequently, the compressor periodically has to work hard to keep up the required flow of refrigerant, which results in low efficiency, inferior control performance and high wear on the compressor. In a typical refrigeration system the compressors supply the flow of refrigerant in the system by compressing the low-pressure refrigerant from the suction manifold, which is returning from the display cases. The compressors maintain a specified constant pressure in the suction manifold, thus ensuring the desired evaporation temperature. From the compressors, the refrigerant flows to the condenser and further on to the liquid manifold. The evaporators inside the display cases are fed in parallel from the liquid manifold through expansion valves. The outlets of the evaporators lead to the suction manifold and back to the compressors, thus closing the circuit. The control problem is complicated by the fact that many of the control inputs are restricted to discrete values, such as the opening/closing of the inlet valves to the display cases and the on/off control of the individual compressors. Whenever one of the aforementioned switches is changed, the dynamics of the whole system change: a display case or a compressor disappears from the dynamic description. In summary, the refrigeration system is a complex hybrid system with discrete changes of the dynamics triggered by external inputs (discrete manipulated variables). In (Larsen, Geyer, & Morari, 2005) we studied a simple example system involving two compressors and two display cases. The compiler HYSDEL (Torrisi & Bemporad, 2004) was used to generate the matrices of the MLD system starting from a high-level textual description of the system. The MLD system has 8 states, 2 z-variables, 4 δ-variables and 52 inequality constraints. To solve the optimal control problem online at each time step, CPLEX 9.0 was run on a Pentium IV 2.0 GHz computer. For a sampling time of 1 min and a horizon of 10 steps, the computation time was on average 3.9 s and always less than 9.7 s, rendering an online implementation of the MPC scheme computationally feasible. The case study of the supermarket refrigeration system illustrated the performance limitations of traditional control schemes. MPC in connection with the MLD model proved to be better suited for handling the discrete control actions, taking into account the interactions between the display cases and the compressor, respecting the temperature constraints, minimizing the variations in the suction pressure, and reducing the switching of the compressors. This would lead to lower wear of the compressors and higher energy efficiency of the supermarket refrigeration system. Most importantly, the design and tuning of the cost function was easy and intuitive, and the extension to larger refrigeration systems would be straightforward.

3. Explicit receding horizon control

The approach to the implementation of RH control for hybrid systems described in the previous section suffers from two serious drawbacks:

• Solving an optimization problem on-line is a computationally demanding task and inherently non-real-time, which immediately rules out the application of an MPC scheme to processes which require a high sampling rate (kHz and above).

• Stability and invariance of the RH controller are difficult to verify, while enforcing them by additional constraints may become too conservative and lead to performance degradation or infeasibility of the optimization problem.

These two issues are addressed by a control design which pre-solves the RH optimization problems off-line and obtains the optimal control law in an explicit, closed form which is easy to evaluate. This control strategy is now commonly known as explicit RHC. It not only enables an extremely efficient, real-time implementation of the RH controller; the explicit solution also makes it possible to directly analyze properties of the controller, namely feasibility and stability. In this section we will give an overview of explicit RHC, while the analysis procedures based on the explicit solution will be addressed later. We will start with explicit RHC for constrained linear systems. This will introduce the key concepts and tools needed for the design of explicit controllers, most importantly the concept of parametric programming. Synthesis and analysis of explicit RH controllers for hybrid systems described by (1) will be introduced afterwards, since it strongly relies on the tools and concepts initially developed for the design of explicit controllers for constrained linear systems.

3.1. Explicit controller for constrained linear systems

Consider the discrete-time linear time-invariant (LTI) system:

x_{k+1} = A x_k + B u_k,    (2)

subject to constraints on the states x_k and inputs u_k defined by linear inequalities:

x_k ∈ X = {x | H_x x ≤ k_x},    (3)
u_k ∈ U = {u | H_u u ≤ k_u}.    (4)

In its standard formulation, RHC is based on the concept of constrained finite time optimal control (CFTOC). At each time instance t, given the current state vector x_t, CFTOC computes a control vector that is optimal with respect to a certain cost function:

J^*(x_{t|t}) := min_{U_t^{N-1}} J(U_t^{N-1}, x_{t|t}),    (5)

subj. to   x_{t+k+1|t} = A x_{t+k|t} + B u_{t+k|t},   x_{t+k|t} ∈ X,   u_{t+k|t} ∈ U,   x_{t+N|t} ∈ T,    (6)

where x_{t+k|t} denotes the model-based prediction of the state x_{t+k} when the state x_{t|t} = x_t is given. The prediction and the optimization are carried out over a finite time horizon N. The additional terminal state constraint in (6) defines the set T of admissible states at the time instance t + N. In a receding horizon scheme, only the first of the optimal control moves U_t^{*N-1} = [u_{t|t}^{*T}, ..., u_{t+N-1|t}^{*T}]^T ∈ R^{mN} is applied to the plant, and the procedure is repeated at the following sampling instance.


We will consider the cases where the cost function J(U_t^{N-1}, x_{t|t}) is chosen as:

• piecewise-linear (PWL):

  J(U_t^{N-1}, x_{t|t}) := ||P_N x_{t+N|t}||_p + \sum_{k=0}^{N-1} ( ||Q x_{t+k|t}||_p + ||R u_{t+k|t}||_p ),    (7)

  where the norm index p ∈ {1, ∞}; or

• quadratic:

  J(U_t^{N-1}, x_{t|t}) := x_{t+N|t}^T P_N x_{t+N|t} + \sum_{k=0}^{N-1} ( x_{t+k|t}^T Q x_{t+k|t} + u_{t+k|t}^T R u_{t+k|t} ),    (8)

  where P_N, Q, R ⪰ 0.

It is well known that with a piecewise-linear (1- or ∞-norm) or quadratic cost function J(·), the RHC task for a specific given state x_{t|t} reduces to solving a linear or quadratic program, respectively.
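To make this reduction concrete, the sketch below assembles problem (5)-(6) with the quadratic cost (8) for one measured state using cvxpy; the double-integrator data, weights, horizon and box constraints are illustrative assumptions, and the terminal set T is simply taken equal to X.

```python
import numpy as np
import cvxpy as cp

def cftoc_qp(x_t, A, B, Hx, kx, Hu, ku, Q, R, PN, N):
    """Solve the CFTOC problem (5)-(6) with quadratic cost (8) for a single
    measured state x_t and return the optimal input sequence.
    For simplicity the terminal set T is taken equal to X."""
    nx, nu = A.shape[0], B.shape[1]
    x = cp.Variable((nx, N + 1))
    u = cp.Variable((nu, N))
    cost = cp.quad_form(x[:, N], PN)
    cons = [x[:, 0] == x_t, Hx @ x[:, N] <= kx]     # terminal constraint
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 Hx @ x[:, k] <= kx,
                 Hu @ u[:, k] <= ku]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value

# Example with a double integrator (illustrative values only)
A = np.array([[1.0, 1.0], [0.0, 1.0]]); B = np.array([[0.5], [1.0]])
Hx = np.vstack([np.eye(2), -np.eye(2)]); kx = 5.0 * np.ones(4)
Hu = np.array([[1.0], [-1.0]]);          ku = np.ones(2)
U = cftoc_qp(np.array([1.0, 0.0]), A, B, Hx, kx, Hu, ku,
             np.eye(2), np.eye(1), np.eye(2), N=10)
print(U[:, 0])   # only the first move is applied (receding horizon)
```

In explicit RHC the same problem is solved parametrically in the initial state rather than numerically at every sampling instant.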

The idea of explicit RHC is to find a closed form of the optimal control law, i.e. the optimizer function u^*(x_{t|t}), which can be easily evaluated for a given state x_{t|t}, thus avoiding the need to solve an optimization problem on-line. As mentioned before, the idea of "pre-solving" the optimization problem is not new or unusual. In LQR control, the optimality conditions for the underlying quadratic optimization problem are used to derive the optimizer function u^*(x_{t|t}), which is linear in the state x_{t|t}. The idea behind explicit RHC for constrained systems is the same: derive the explicit, closed-form solution to the corresponding optimization problems by using general optimality conditions. This problem is known as parametric programming and is closely related to sensitivity analysis of the solution to an optimization problem. In a parametric/sensitivity setup, the state vector x_{t|t} is treated as a parameter or perturbation to the optimization problem (5), and the goal is to find the optimizer function u^*(x_{t|t}) for all admissible states, i.e. ∀ x_{t|t} ∈ X. In general, parametric optimization problems are hard to solve or intractable. However, parametric linear and (convex) quadratic programs can be readily solved, and a number of solvers have become available. Often in the literature a distinction is made between parametric programs with a single parameter and those with more than one parameter. The latter are commonly referred to as multi-parametric programs, hence the terms multi-parametric linear program (mp-LP) and multi-parametric quadratic program (mp-QP). Without going into details, the most important property of the explicit RH controller for constrained discrete-time LTI systems with the cost function defined by (7) or (8) is stated by the following theorem.

Theorem 3.1. For the optimal control problem (2)–(6) with the cost function J(U_t^{N-1}, x_{t|t}) defined by (7) or (8), the optimizer function u^*(x_{t|t}) is piecewise-affine over polyhedra, i.e. of the form:

u^*(x_{t|t}) = F_i x_{t|t} + G_i   if   x_{t|t} ∈ C_i,    (9)

where C_i, i = 1, ..., N_R, are polyhedra defining a polyhedral partition of the set X of feasible states x_{t|t}. Furthermore, if the cost is given by (8), the optimizer function u^*(x_{t|t}) is continuous, while in the case of the piecewise-linear cost (7) a continuous optimizer function always exists. These important properties of the optimizer function follow directly from the character of the solution to the corresponding mp-LP or mp-QP. Once computed, the explicit solution makes the implementation of RHC straightforward. Again we draw a parallel to LQR. In the case of LQR a single affine control law is optimal for all states. In the constrained case, characterized by Theorem 3.1, the set of feasible states is divided into polyhedral sets with an explicit, affine optimal control law defined over each polyhedron. Therefore, compared to LQR, one needs to perform the additional step of identifying the polyhedron C_j ⊆ X which contains the current state x_{t|t} and then evaluate the corresponding affine function u^*(x_{t|t}) = F_j x_{t|t} + G_j. The whole procedure is illustrated in Fig. 1.

Fig. 1. Explicit receding horizon control.

Identification of the polyhedral region C_j containing the current state is, therefore, the most demanding part of the on-line algorithm. The procedure itself is very simple, consisting of evaluating the linear inequalities which define the polyhedra C_i until the polyhedron which contains the current state vector is found, or infeasibility is detected. The number of polyhedra in an explicit controller may, however, become very large, and a number of algorithms have been developed to make the on-line search procedure efficient and applicable even for a large number of polyhedral regions (Borrelli, Baotić, Bemporad, & Morari, 2001; Jones, Grieder, & Raković, 2005; Tøndel, Johansen, & Bemporad, 2003). An important fact is that, no matter what procedure is used to identify the affine optimal control law, we can always provide a fixed upper bound on the computational complexity of the controller and thus guarantee its real-time performance.
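In code, the on-line evaluation just described amounts to the following minimal sketch, a sequential scan over a hypothetical list of regions; the tree-based search methods cited above replace the linear scan for large partitions.

```python
import numpy as np

def explicit_rhc(x, regions):
    """Evaluate an explicit PWA control law (9) by sequential search.
    `regions` is a list of tuples (H, h, F, G): x lies in the region
    if H @ x <= h, and the optimal input there is F @ x + G."""
    for H, h, F, G in regions:
        if np.all(H @ x <= h + 1e-9):      # point location
            return F @ x + G               # affine law of that region
    raise ValueError("x is outside the feasible set X")
```

For the hybrid controllers of Section 3.2 the regions may overlap; in that case each region additionally stores its value-function parameters and the law with the smallest cost among all regions containing x is returned.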

3.2. Explicit controller for hybrid systems

Computation of the explicit controller for hybrid systems represented in the MLD form (1) relies strongly on the computational tools developed for the explicit control of constrained linear systems. Thus we will consider it as a mere extension of the concepts presented in the previous subsection. Before proceeding further, we switch to another representation of discrete-time hybrid systems, namely the discrete-time piecewise affine (PWA) system:

x(k+1) = f_{PWA}(x(k), u(k)) = A^{i} x(k) + B^{i} u(k) + f^{i}   if   (x(k), u(k)) ∈ D^{i},    (10)
D^{i} := { (x, u) | P_x^{i} x + P_u^{i} u ≤ P_0^{i} },

where k ≥ 0, x ∈ R^n is the state vector, u ∈ R^m is the control vector and {D^{i}}_{i=1}^{D} is a bounded polyhedral partition of the (x, u) ⊂ R^{n+m} space. The constraints P_x^{i} x + P_u^{i} u ≤ P_0^{i} define both the regions in which a particular state update equation is valid and the constraints on the state and input variables. Under some technical assumptions, the PWA system representation is equivalent to the MLD form (1) and one can convert one into the other (Heemels et al., 2001). However, both for the purpose of computation and for the understanding of the explicit solution, the PWA representation is more suitable here. Assuming the same notation as before, consider the following CFTOC problem:

J_N^*(x_{t|t}) := min_{U_t^{N-1}} J_N(U_t^{N-1}, x_{t|t}),    (11a)

subj. to   x_{t+k+1|t} = f_{PWA}(x_{t+k|t}, u_{t+k|t}),   x_{t+N|t} ∈ T,    (11b)

where the cost function J_N(U_t^{N-1}, x_{t|t}) is defined by (7) or (8). For a given state vector x_{t|t} the solution to (11a) is obtained by solving a mixed-integer program. The output is a set of optimal control vectors U_t^{*N-1}, as well as an optimal switching sequence I^* = {i_t, ..., i_{t+N-1}}, which defines which of the D affine subsystems is active at each of the time instances t + k within the prediction horizon. Assume now that a switching sequence is given and fixed. Obviously, the problem of obtaining the optimal control vectors U_t^{*N-1} then becomes a standard linear or quadratic program. Thus, the optimal solution corresponding to a fixed switching sequence can be computed in explicit form by using the techniques of parametric programming previously introduced. Therefore, to obtain an explicit solution to the problem (11a), one needs to solve a parametric program, mp-LP or mp-QP, for every feasible switching sequence. Finally, when evaluating the optimal control law for a particular state x_{t|t}, one needs to select among finitely many affine functions the one for which the value function J_N^*(x_{t|t}) is the smallest. The procedure just described contains the principles of computation and operation of an explicit RH controller for hybrid systems representable in MLD, PWA or any other equivalent form. The explicit solutions to (11a) for piecewise-linear and quadratic cost are characterized by the following theorem (Borrelli, 2003).

Theorem 3.2 (Explicit control law). The solution to the optimal control problem (11a) with the cost function (7) or (8) is a piecewise affine control law of the form:

u^*_{t+k|t} = F_k^{i} x_{t+k|t} + g_k^{i}   if   x_{t+k|t} ∈ C_k^{i},    (12)

where the regions C_k^{i}, i = 1, ..., N_k, with ∪_i C_k^{i} = X_k and C_k^{i} ∩ C_k^{j} = ∅ for i ≠ j, define a partition of the set of feasible states X_k at the kth step. The explicit control law, both for the cost based on PWL norms and for the quadratic cost, has the same form of a piecewise-affine state feedback. However, there is an important difference: the solution for PWL norms is defined over a polyhedral partition of the set of feasible states, while in the case of the quadratic cost the regions C_k^{i} are in general not polyhedral. From this short description it is obvious that the problem of obtaining the explicit solution to RHC based on CFTOC for hybrid systems is highly combinatorial in nature, and the number of parametric linear or quadratic programs one needs to solve in order to compute the solution grows exponentially with the prediction horizon N. Therefore, the algorithms for the computation of the explicit solution for PWA and equivalent hybrid systems try to exploit as much as possible the specific structure of a particular problem to compute the explicit solution more efficiently. Currently, two conceptually different types of algorithms are available. The first one, multi-parametric mixed-integer linear programming (mp-MILP), introduced in (Dua & Pistikopoulos, 2000), relies on the MLD system representation and iteratively solves MILPs and LPs to obtain an explicit solution to the problem. In the second, newer group are algorithms based on the concept of dynamic programming (DP), where the problem is decomposed into a number of mp-LPs or mp-QPs and solved backwards in time (Borrelli, Baotić, Bemporad, & Morari, 2003; Baotić, Christophersen, & Morari, 2003; Kerrigan & Mayne, 2002).

3.3. Infinite horizon solution

In contrast to the aforementioned constrained finite time optimal control, the constrained infinite time optimal control (CITOC) problem focuses on the optimization problem defined over an infinite prediction/control horizon. The main advantages of the infinite horizon solution, compared to the corresponding RH implementation of the finite-time solution of the optimal control problem, are inherent stability and all-time feasibility (Sznaier & Damborg, 1987; Mayne et al., 2000). Unlike the case of unconstrained LTI systems, where strong conditions for the existence of the bounded infinite-horizon solution are available, no such result exists for constrained LTI or hybrid systems. Therefore, for these types of problems, no conclusion can be drawn about the existence of the CITOC solution merely from the properties of the system or problem definition. In the last couple of years algorithms for the computation of the explicit solution to CITOC problems for constrained discrete-time LTI and PWA systems have been developed, in particular for the computation of:

• CITOC for discrete-time LTI systems with quadratic cost (infinite-time LQR) (Grieder, Borrelli, Torrisi, & Morari, 2002),
• CITOC for discrete-time LTI and hybrid systems with PWL costs (Baotić, Christophersen, & Morari, 2003).


These algorithms use efficient recursive procedures based on techniques of parametric linear and quadratic programming to compute, for a particular problem, a bounded infinite horizon solution, i.e. a solution to the corresponding Bellman equation. The algorithms are guaranteed to converge in finite time, provided that the bounded infinite-horizon solution for the problem exists.

4. Alternative explicit RHC formulations: Robust design and low complexity

In this section we give a brief overview of alternative formulations of the explicit RHC which address two issues: robustness to disturbances/uncertainties and the complexity of the explicit control law. Since a widely cited paper (Kothare, Balakrishnan, & Morari, 1996), a number of authors have considered robust control designs for classical MPC, e.g. (Kouvaritakis, Rossiter, & Schuurmans, 2000; Kerrigan & Maciejowski, 2001, 2002; Mayne, Seron, & Rakovic, 2005; Scokaert & Mayne, 1998). The classical "Min–Max" approach described in (Scokaert & Mayne, 1998) assumes the worst-case scenario, in which the controller is designed to cope with all possible realizations of the disturbances and uncertainties. This design is overly conservative and computationally too demanding for longer horizons, and as such is rarely used in practice. Some authors have considered different relaxation schemes which lead to computationally tractable solutions (Kerrigan & Maciejowski, 2004; Löfberg, 2003). We have developed an algorithm for the computation of an explicit Min–Max RHC for constrained LTI systems affected by bounded additive disturbances and in the presence of parametric uncertainties lying in a bounded polytopic set (Bemporad, Borrelli, & Morari, 2003). The explicit Min–Max controller for the PWL cost was computed by parametric linear programming, resulting in an optimizer that is piecewise-affine over a polyhedral partition of the set of robustly feasible states. In general, the same procedure for PWA systems easily becomes too complex or computationally intractable. Unlike the classical Min–Max approach, where both the constraints and the cost are modified in order to cope with the disturbance, the algorithms presented in this section tackle the issue of robustness by including robust feasibility constraints, i.e. satisfaction of the nominal constraints in the presence of bounded additive disturbances, while preserving the nominal cost function. An issue related to the explicit solution paradigm that turned out to be the most limiting one is the complexity of the solution. The explicit solution of an RHC problem, though of a very simple structure, may become prohibitively large for many practical problems. Typically, the complexity of the solution, measured in the number of polyhedral regions, grows exponentially with the number of parameters (i.e. the order of the system) and with the prediction horizon. The explicit controller may exceed the allowable storage capacity, or the procedure of identifying the optimal affine control law on-line may take too long. The problem is more pronounced for hybrid systems.


We will briefly present two explicit RHC schemes, namely the minimum-time controller and the N-step controller. Both approaches generally yield solutions of lower complexity compared to the standard CFTOC approach and allow simple inclusion of the above-mentioned robust feasibility condition. Both algorithms use invariant sets as target sets for the RHC. The concept of invariance is very important in explicit RH control design, and the interested reader is referred to (Blanchini, 1999; Kerrigan, 2000; Raković, Kerrigan, Kouramas, & Mayne, 2004).
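To make the notion concrete, the sketch below computes the maximal positively invariant set of an autonomous closed-loop system x(k+1) = A_cl x(k) inside a polytope {x : Hx ≤ h} by the standard constraint-propagation iteration; it is only an illustration of the concept, with A_cl, H, h as assumed example data, and is not one of the algorithms of the cited references.

```python
import numpy as np
from scipy.optimize import linprog

def max_invariant_set(Acl, H, h, max_iter=50):
    """Compute the maximal positively invariant set of x+ = Acl x inside
    {x : H x <= h} by adding the constraints H Acl^k x <= h until the
    newly added constraints are redundant; returns (Hk, hk)."""
    Hk, hk = H.copy(), h.copy()
    M = Acl.copy()
    for _ in range(max_iter):
        Hnew, hnew = H @ M, h                # constraints one step further ahead
        redundant = True
        for c, d in zip(Hnew, hnew):
            # maximize c @ x over the current set (linprog minimizes, so negate c)
            res = linprog(-c, A_ub=Hk, b_ub=hk, bounds=(None, None), method="highs")
            if res.status != 0 or -res.fun > d + 1e-9:
                redundant = False
                break
        if redundant:
            return Hk, hk                    # converged: set is invariant
        Hk, hk = np.vstack([Hk, Hnew]), np.concatenate([hk, hnew])
        M = Acl @ M
    raise RuntimeError("no convergence within max_iter iterations")

# Example: an assumed stable closed loop inside the box |x_i| <= 1
Acl = np.array([[0.9, 0.3], [-0.2, 0.6]])
H = np.vstack([np.eye(2), -np.eye(2)]); h = np.ones(4)
Hinv, hinv = max_invariant_set(Acl, H, h)
```

Robust control invariant sets with inputs and disturbances follow the same pattern but require polytope projections; toolboxes such as MPT (Section 6) provide such constructions.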

4.1. Minimum time controller

The idea behind the minimum-time control design is to drive the state of a system to a prescribed target set in a minimal number of steps. A robust minimum-time state feedback controller was first presented in (Mayne & Schroeder, 1997). The algorithm for the computation of the explicit minimum-time controller for constrained linear and hybrid systems was presented in (Grieder, Kvasnica, Baotić, & Morari, 2004). The algorithm starts with the computation of the maximal admissible invariant set with the corresponding stabilizing feedback control law. Taking the obtained invariant set as a target set, the algorithm recursively computes the explicit solution to N one-step RH control problems. In the presence of additive uncertainty, robust feasibility is guaranteed by a straightforward change of the constraints in each iteration. Since the terminal set is invariant under the stabilizing control law, the minimum-time controller also ensures robust convergence. A "low-complexity" character of the minimum-time solution cannot be directly enforced. However, computational experiments and practical experience with the algorithm show that it generally yields an explicit controller of lower complexity (in the number of polyhedral control regions) compared to the standard CFTOC approach. The presented minimum-time control algorithm assumes that the computation of the invariant set around the origin with a stabilizing feedback law is a feasible task. When this is not the case, the condition of robust convergence has to be relaxed. Instead of enforcing robust convergence, we can only guarantee invariance and verify the convergence in an a posteriori analysis.

4.2. N-step controller: Minimum-time controller "Lite"

A computationally very effective algorithm resulting in robust explicit controllers of relatively low complexity, both for constrained linear and hybrid systems, was proposed recently (Grieder & Morari, 2003). The resulting explicit controller, named the N-step controller, in essence enforces a piecewise-affine control law on the maximal robust control invariant set. Naturally, this control law ensures all-time constraint satisfaction in the presence of additive bounded disturbances. The limiting property of the N-step controller is that it does not inherently guarantee the convergence of the closed-loop system within the maximal robust invariant set to a (smaller) subset around the origin (robust convergence). However, as will be shown in the following section, it may be possible to verify the robust convergence after the explicit controller is obtained.
5. Analysis of hybrid systems

In this section we give a brief overview of the methods developed for the analysis of different properties of hybrid systems. The class of hybrid systems we consider are discrete-time PWA or MLD systems and, therefore, all other equivalent representations. These tools have special importance from the perspective of the explicit control paradigm. If the explicit controller for a hybrid system is a PWA state-feedback law defined over a polyhedral partition of the set of feasible states, then using the analysis tools we can verify stability and constraint satisfaction (feasibility) for the closed loop. These two properties are hard to predict and analyze for the classical, on-line optimization based RHC. Instead, stability of the controller must be ensured by imposing constraints on the terminal state in the prediction horizon, an approach which may lead to a conservative setup and significantly reduce the set of feasible states for the RHC. With the explicit control design, constraints which enforce stability can be omitted, and stability and constraint satisfaction of the closed-loop system can be verified directly after the controller is computed. In this way the conservativeness and complexity of the controller design can be significantly reduced.

5.1. Stability analysis

Although stability of a linear system can be easily checked via the roots of the characteristic equation, nonlinear systems complicate matters enormously. Even for a constrained linear system, stability cannot be proven globally with these methods. In general, this problem is either NP-complete or undecidable. Moreover, it is also hopeless to deduce the stability/instability of a PWA system from the stability/instability of its affine subsystems. A wide range of methods with varying degrees of conservativeness have been developed for analyzing the stability of PWA systems based on Lyapunov theory. Since there is no standard method to construct Lyapunov functions, algorithms for a broad class of candidate functions have been developed, e.g. the computation of common quadratic, piecewise affine, piecewise quadratic and piecewise polynomial Lyapunov functions. These can be efficiently computed with linear programming, semi-definite programming (SDP) and sum-of-squares (SOS) techniques. Computational complexity grows rapidly as the order of the PWA system and the number of dynamics increase, depending strongly on the number of possible dynamics switches. A detailed comparison of the above-mentioned stability analysis techniques was reported recently in Biswas, Grieder, Löfberg, and Morari (2005).
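The simplest member of this family is the common quadratic Lyapunov function, which can be searched for with a small semidefinite program. The sketch below uses cvxpy with an SDP-capable solver, and the mode matrices A1, A2 are illustrative assumptions; such a function certifies stability under arbitrary switching and is therefore conservative for PWA systems with restricted switching.

```python
import numpy as np
import cvxpy as cp

def common_quadratic_lyapunov(A_list, eps=1e-6):
    """Find P = P^T > 0 with A_i^T P A_i - P < 0 for every mode i,
    i.e. a common quadratic Lyapunov function V(x) = x^T P x."""
    n = A_list[0].shape[0]
    P = cp.Variable((n, n), symmetric=True)
    cons = [P >> eps * np.eye(n)]
    for A in A_list:
        cons.append(A.T @ P @ A - P << -eps * np.eye(n))
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()                      # SDP, solved e.g. by SCS
    return P.value if prob.status == cp.OPTIMAL else None

# Two assumed stable modes of a switched/PWA system (illustrative only)
A1 = np.array([[0.8, 0.2], [0.0, 0.7]])
A2 = np.array([[0.7, 0.0], [-0.3, 0.8]])
P = common_quadratic_lyapunov([A1, A2])
print("common quadratic Lyapunov function found" if P is not None else "none")
```

Failure of this search does not imply instability; the piecewise quadratic and polynomial candidates compared in the cited reference are less conservative.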

5.2. Reachability analysis and verification

Reachability analysis (also known as safety analysis or formal verification) aims at detecting whether a hybrid system will eventually reach an unsafe state configuration or satisfy a temporal logic formula. Formal verification is closely related to the modeling framework used to describe the process whose safety properties we need to certify. Different models lead to different verification algorithms. For PWA and MLD systems, algorithms based on mixed-integer linear programming have been developed to decide finite-time reachability (Torrisi, Bemporad, & Giovanardi, in preparation). The reachability analysis relies on a reach-set computation, i.e. the computation of all reachable states starting within an initial set. Recent developments include the introduction of computationally more efficient approximative schemes (Girard, 2005) and the computation of barrier certificates based on sum-of-squares techniques (Prajna & Papachristodoulou, 2003; Prajna & Rantzer, 2005).
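As a sketch of the mixed-integer idea (an illustration of the general approach, not the algorithm of the cited reference), finite-time reachability of a polytopic target set {x : H_f x ≤ h_f} under MLD dynamics of the form (1) can be posed as a mixed-integer feasibility problem; the model matrices are placeholders and a MIP-capable solver is assumed.

```python
import cvxpy as cp

def reachable_at_step(x0, N, A, B1, B2, B3, E1, E2, E3, E4, E5, Hf, hf):
    """Return True if some admissible input sequence drives the MLD
    system (1) from x0 into {x : Hf x <= hf} at step N (reachability
    within N steps amounts to N such problems or extra binaries)."""
    nx, nu = A.shape[0], B1.shape[1]
    nd, nz = B2.shape[1], B3.shape[1]
    x = cp.Variable((nx, N + 1))
    u = cp.Variable((nu, N))
    d = cp.Variable((nd, N), boolean=True)
    z = cp.Variable((nz, N))
    cons = [x[:, 0] == x0, Hf @ x[:, N] <= hf]   # target set at step N
    for k in range(N):
        cons += [x[:, k + 1] == A @ x[:, k] + B1 @ u[:, k]
                 + B2 @ d[:, k] + B3 @ z[:, k],
                 E2 @ d[:, k] + E3 @ z[:, k]
                 <= E1 @ u[:, k] + E4 @ x[:, k] + E5]
    prob = cp.Problem(cp.Minimize(0), cons)      # pure feasibility MILP
    prob.solve()                                 # needs a MIP solver
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
```

If the problem is feasible, the optimizer provides a concrete input and switching sequence reaching the set, which can serve as a counterexample in safety verification.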

6. Software tools

Most algorithms and control paradigms presented in the previous sections have been formalized into software tools which enable their deployment by the broader community of researchers and control practitioners. We will briefly present two tools: HYSDEL, developed in our group, and the Multi-Parametric Toolbox (MPT), an open-source repository of tools from various groups active in the area. Closely related to MPT, but not open-source, less mature and more limited in scope, is the Hybrid Toolbox (Bemporad, 2003).

6.1. HYSDEL

HYSDEL, already mentioned in Section 2, allows modeling a class of hybrid systems described by the interconnection of linear dynamic systems, automata, if-then-else and propositional logic rules (Torrisi & Bemporad, 2004). HYSDEL can transform the description into the MLD form, which can be used immediately for optimization, to solve optimal control problems, or as an intermediate step to obtain other popular representations such as piecewise affine systems.

6.2. Multi-Parametric Toolbox

The Multi-Parametric Toolbox is a free Matlab toolbox for the design, analysis and deployment of optimal controllers for constrained linear and hybrid systems (Kvasnica, Grieder, Baotić, & Morari, 2003). The efficiency and robustness of the code are guaranteed by an extensive library of algorithms from the fields of computational geometry and multi-parametric optimization. The toolbox offers a broad spectrum of algorithms compiled in a user-friendly and accessible format, ranging from different performance objectives (linear, quadratic, minimum time) to the handling of systems with persistent additive and polytopic uncertainties. Among other tools, MPT includes the HYSDEL compiler. The models developed in HYSDEL can be used as a basis for the development of controllers in MPT. The resulting (sub)optimal control laws can either be embedded into user applications in the form of C code, or deployed to target platforms using Matlab's Real Time Workshop.



7. Applications

In this section we present some recently reported applications of the explicit RHC paradigm for hybrid systems. We would like to emphasize that the list of applications is rapidly expanding with the recent development of reliable software tools for modelling, analysis and synthesis of explicit controllers, presented in the previous section.

7.1. Traction control

Traction control systems are amongst the best studied mechatronic systems in automotive applications. They are used to improve a driver's ability to control a vehicle under adverse external conditions such as wet or icy roads. In particular, the traction controller aims at maximizing the traction force between the vehicle's tire and the road, thus reducing the slipping of the wheel and at the same time improving vehicle stability and steerability. Essentially, the problem of traction control is not "hybrid", i.e. the system does not exhibit switching behavior. However, the traction force is a nonlinear function which can be closely approximated by PWA segments, thus yielding a PWA dynamical model of the system. A controller was devised to optimally control the slip. The explicit solution to the optimal control problem was computed and verified in simulation and experiments (Borrelli, Bemporad, Fodor, & Hrovat, 2001).

7.2. Active vibration control

Smart materials have long been heralded as the dawn of a new era in the construction of automotive vehicles, airplanes and other structures which have to meet ever more demanding performance requirements. This has largely not happened, at least not in the commercial arena. In this context dynamic behavior is one main design criterion for many kinds of load-carrying structures, as undesirable large-amplitude vibrations often impede the effective operation of various types of mechanical systems, including antennae, spacecraft, rotorcraft, automobiles, and sophisticated instruments. It is therefore desirable to introduce structural damping into a system to achieve a more satisfactory response and to delay fatigue damage. The large instrumentation overhead of conventional vibration control can be significantly reduced by a new method that involves attaching an electrical shunt controller across the terminals of one piezoelectric transducer with the view to minimizing structural vibrations. This approach is referred to as piezoelectric shunt damping and is known as a simple, low-cost, lightweight and easy-to-implement method for vibration damping. For that purpose we have developed and implemented a completely autonomous switching shunt circuit. An optimal switching law was derived using the hybrid system framework. The obtained switching law was implemented with a few analog electronic components, such that the resulting circuit does not require power for operation. Additionally, the circuit can tune itself automatically. Experimental verification demonstrated a vibration suppression of 60–70% (Niederberger, 2005). A possible application currently studied is the suppression of brake squeal.

Fig. 2. Topology of the step-down synchronous converter.

7.3. Control of fixed frequency dc–dc converters

In fixed-frequency switch-mode dc–dc converters, the switching stage comprises a primary semiconductor switch S1 that is always controlled, and a secondary switch S2 that is operated dually to the primary one (Fig. 2). The switches are driven by a pulse sequence of constant frequency, the switching frequency f_s, corresponding to a switching period T_s. The dc component of the output voltage v_o can be manipulated through the duty cycle d, defined by d = t_on/T_s, where t_on represents the interval within the switching period during which the primary switch is in conduction. The control objective for dc–dc converters is to regulate the dc component of the output voltage v_o to its reference. This objective has to be achieved subject to the constraints that are present, resulting from the converter topology. In particular, the manipulated variable (duty cycle) is bounded between zero and one, and in the discontinuous current mode a state (the inductor current) is constrained to be non-negative. Additional constraints are imposed as safety measures, such as current limiting or soft-starting, where the latter constitutes a constraint on the maximal derivative of the current during start-up. Moreover, the regulation has to be maintained despite gross changes in the load and the input voltage. We derived a hybrid model that is valid for the whole operating regime and captures the evolution of the state variables within the switching period. Based on the converter's hybrid model, we formulated and solved an MPC problem that allows for a systematic controller design achieving the aforementioned control objectives. The developed scheme tackles the control problem in a complete manner and features very favorable performance properties. In particular, the control performance does not degrade for changing operating points. Regarding the implementability of the controller, we derived off-line the explicit PWA state-feedback control law, which can be easily stored in a look-up table and used for the practical implementation of the proposed control scheme. The a posteriori analysis of the closed-loop system shows that the set of feasible states with the obtained controller is invariant and that the closed-loop system is exponentially stable throughout the whole range of operating points (Geyer, Papafotiou, & Morari, 2004).

7.4. DTC

In adjustable-speed induction motor drives, dc–ac inverters are used to drive induction motors as variable-frequency three-phase voltage or current sources. The basic principle of DTC is to directly manipulate the stator flux vector such that the desired torque is produced.


This is achieved by choosing an inverter switch combination that drives the stator flux vector by directly applying the appropriate voltages to the motor windings. The choice of the appropriate voltage vector addresses a number of different control objectives. Primarily, the stator flux and the torque need to be kept within pre-specified bounds around their references. In high-power applications, where three-level inverters with Gate Turn-Off (GTO) thyristors are used, the control objectives are extended to the inverter and also include the minimization of the average switching frequency and the balancing of the inverter's neutral point potential around zero. Our goal was to derive model predictive control schemes that keep these three controlled variables (torque, stator flux, neutral point potential) within their given bounds, minimize the (average) switching frequency, and are conceptually and computationally simple yet yield a significant performance improvement with respect to the state of the art. More specifically, the term conceptually simple refers to controllers allowing straightforward tuning of the controller parameters or even a lack of such parameters, and easy adaptation to different physical setups and drives, whereas computationally simple implies that the control scheme does not require excessive computational power to allow the implementation on DTC hardware that is currently available or at least will be so within a few years. We have proposed three such novel control approaches to tackle the DTC problem, which are inspired by the principles of MPC and tailored to the peculiarities of DTC (Geyer & Papafotiou, 2005). Focusing on the third scheme, which ABB has decided to implement and has protected with a European Patent application, the major benefit achieved is its superior performance in terms of switching frequency, which is reduced over the whole range of operating points by up to 50%, while the average reduction amounts to 25%. Furthermore, the controller needs no tuning parameters. As the computation of an explicit solution is avoided, all quantities may be time-varying, including model parameters, set points and bounds. Those can be adapted on-line.

8. Discussion and future research

In this section we briefly discuss current research in the field related to modelling, computational aspects and applications to large-scale switched systems.

8.1. Modelling

As mentioned, the derivation of an MLD model on the basis of an engineering description is difficult to do by hand, but can easily be automated using the modeling language HYSDEL (Torrisi & Bemporad, 2004). Although HYSDEL was successfully used in many case studies, its modeling features were restricted to scalar variables, which can make the modeling of complex systems very time consuming. Therefore, we developed an extension called matrixHYSDEL, which allows the use of vector and matrix variables when modeling hybrid systems. In addition, repetitive tasks can be easily simplified by using matrixHYSDEL's support of nested FOR cycles.

FOR cycles are not only interesting because they save the human effort needed to model a particular system, but they also lead to more efficient problem formulations. As expressed by Eq. (1), MLD models establish a relation between the variables x_k, u_k, δ_k, and z_k and the value of the predicted state at the next time instance, x_{k+1}. When such a model is used in the MPC framework, predictions at time instances k = 1, ..., N need to be expressed by means of constraints. By doing so, the causality between two successive predictions is no longer maintained explicitly. matrixHYSDEL, however, allows the optimization problem to be formulated in such a way that the state vector contains the values of the predicted states at each step of the optimization problem, i.e. X = [x_0, x_1, ..., x_N]^T, U = [u_0, u_1, ..., u_{N-1}]^T, Y = [y_0, y_1, ..., y_{N-1}]^T, Δ = [δ_0, δ_1, ..., δ_{N-1}]^T, and Z = [z_0, z_1, ..., z_{N-1}]^T. The augmented MLD representation then becomes

X = \tilde{A} X + \tilde{B}_1 U + \tilde{B}_2 Δ + \tilde{B}_3 Z,    (13a)
Y = \tilde{C} X + \tilde{D}_1 U + \tilde{D}_2 Δ + \tilde{D}_3 Z,    (13b)
\tilde{E}_2 Δ + \tilde{E}_3 Z ≤ \tilde{E}_1 U + \tilde{E}_4 X + \tilde{E}_5,    (13c)

and the optimization is performed over the whole vector of input variables U. The advantage of this approach is that the logical causality dependencies between any of the predicted states (i.e. between x_k and x_{k+1}) are preserved explicitly. This is similar to posing an MPC problem in a standard optimization modeling language, such as AMPL (Fourer, Gay, & Kernighan, 1993). It was verified on a number of test studies that when the optimization problem is formulated in this way, it can be solved significantly faster compared to the formulation where the MLD model in the form of (1) (i.e. the one-step-ahead prediction model) was used. It was also observed that matrixHYSDEL's formulation of the MPC problems is, in general, more efficient compared to modeling the optimization problem in AMPL. In particular, we have investigated the three modeling approaches (HYSDEL, matrixHYSDEL, AMPL) for the solution of an MPC problem for the three tanks benchmark (Mignone & Monachino, 2001). Results in terms of the time needed to solve the optimal control problem for each modeling approach are summarized in Table 1. As can be seen, the matrixHYSDEL modeling approach is, on average, ten times faster than any of the other approaches.

Table 1
Time needed to solve an MPC problem for the three tanks benchmark for different values of the prediction horizon

Horizon   HYSDEL   matrixHYSDEL   AMPL
7         12 s     1 s            1 s
8         30 s     2 s            18 s
9         69 s     3 s            23 s
10        216 s    11 s           156 s
11        335 s    59 s           444 s

8.2. Explicit solution: Computational aspects

Explicit RHC offers the possibility to apply the methods of constrained optimal control to systems with a high sampling rate.


However, the limitations of the explicit solution became obvious with the first application examples. Due to the inherent combinatorial character of the underlying parametric optimization, the explicit solution may be prohibitively complex for many applications: both the required off-line computation and the size of the solution grow rapidly with the system dimension and the prediction/control horizon. Therefore, current research efforts in this field are focused on the development of more efficient and numerically reliable algorithms for the computation of explicit controllers. The research includes:


• development of computationally efficient algorithms with the goal of reducing the off-line computational complexity (Baric, Grieder, Baotic, & Morari, 2005), • improvement of numerical robustness and efficiency of algorithms for parametric programming and closely related computational geometry algorithms (Jones, 2004).
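For context, the on-line part of an explicit solution reduces to a point-location problem: find the polyhedral region containing the current state and apply the affine law stored for that region. The sketch below uses made-up region data and a naive sequential search (logarithmic-time schemes such as binary search trees exist, cf. Tøndel, Johansen, & Bemporad, 2003); it is only meant to show why the number of regions drives both storage and on-line evaluation time.

```python
import numpy as np

# Toy explicit PWA controller: u = F_i x + g_i whenever H_i x <= k_i.
# The region data here is purely illustrative.
regions = [
    {"H": np.array([[ 1.0, 0.0]]), "k": np.array([0.0]),
     "F": np.array([[-0.5, -1.0]]), "g": np.array([0.0])},
    {"H": np.array([[-1.0, 0.0]]), "k": np.array([0.0]),
     "F": np.array([[-0.2, -0.8]]), "g": np.array([0.1])},
]

def explicit_control(x, regions, tol=1e-9):
    # Sequential point location: O(number of regions) checks per sample.
    for reg in regions:
        if np.all(reg["H"] @ x <= reg["k"] + tol):
            return reg["F"] @ x + reg["g"]
    raise ValueError("state outside the controller's feasible set")

x = np.array([-0.3, 0.7])
print(explicit_control(x, regions))
```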


8.3. Control of switched systems

A focus of model predictive control research for hybrid systems is the reduction of complexity. Many of the examples presented in the previous sections require state-of-the-art computing facilities to be solvable within the allowed time frame. At the same time, industry calls for control solutions to increasingly complex control problems and for longer prediction horizons to attain better solutions. It is well known that a more detailed problem description or a longer prediction horizon results in exponential growth of the problem complexity. Thus, one of our goals is to counteract the growth in problem size by exploiting as much knowledge about the problem structure as possible, reducing the computational complexity to a minimum without sacrificing optimality.

Throughout the analysis of hybrid systems control problems, we found that many embody a common structure: the systems are controlled by discrete inputs, but evolve continuously. This setup is frequently found in practical applications, where binary variables represent the on-off state of system parts such as heaters, compressors, valves or other machinery. In the currently used control framework, where time is discretized, every future time step is assigned a binary variable for each on/off input. Clearly, for a fine discretization and a long prediction horizon the resulting problems become intractable: the restriction to {0, 1} values imposes a combinatorial optimization problem which may quickly become too large to be solved. To keep the problem size computable, either the prediction horizon is shortened, the sampling intervals are lengthened, or both. However, short prediction horizons may fall short of the next switching event and therefore poorly predict the system evolution, while lengthened sampling intervals result in infrequent switching possibilities and poor performance.

A possible way to circumvent these problems is to shift from a discretized-time to a continuous-time formulation. In such an approach, switching time points are computed instead of on-off inputs: each switching represents an event and is associated with a continuous variable, the event time. The resulting optimization problem becomes an LP in the event times. The effectiveness of such an approach has been demonstrated in scheduling applications (Floudas & Li, 2004; Méndez, Cerdá, Grossmann, Harjunkoski, & Fahl, 2005). Variations using a mixed continuous-time and discrete-time approach have also been applied successfully to the control of dc–dc converters (Geyer et al., 2004). A goal of current research is to formalize this approach and provide additional industrial examples to demonstrate its effectiveness.
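As a toy illustration of this event-driven viewpoint (not the LP formulation used in the cited work), the fragment below fixes a mode sequence "on, then off" for a scalar system and optimizes the single switching instant as a continuous variable, instead of assigning a binary decision to every sampling interval. The system data and cost are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Scalar switched system: dx/dt = -a*x + b*u, with u = 1 before t_s and u = 0 after.
a, b, x0, T, x_ref = 1.0, 2.0, 0.0, 5.0, 1.0

def cost(t_s, dt=1e-3):
    # Tracking cost integrated by forward Euler over [0, T] for switching time t_s.
    x, J, t = x0, 0.0, 0.0
    while t < T:
        u = 1.0 if t < t_s else 0.0
        x += dt * (-a * x + b * u)
        J += dt * (x - x_ref) ** 2
        t += dt
    return J

res = minimize_scalar(cost, bounds=(0.0, T), method="bounded")
print(res.x, res.fun)   # optimized switching instant and the associated cost
```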

9. Conclusion

We have briefly presented recent developments in the modeling, analysis and control of systems integrating logic and continuous dynamics, commonly referred to as hybrid systems. After the development of the modeling framework for hybrid systems, model predictive control was quickly recognized as a natural candidate for the corresponding control framework. One of the most important recent advances in this field was the introduction of the concept of explicit receding horizon control, which opened a whole new area of research and broadened the scope of application of the MPC paradigm. The knowledge in the field of modeling and control of hybrid systems has now reached a high level of maturity, and a number of efficient and reliable end-user software tools have been developed. A constantly growing number of new application examples and case studies have brought more experience, fulfilled some initial expectations and also revealed many limitations and new potential research directions.

References

Baotić, M., Christophersen, F. J., & Morari, M. (2003, September). A new algorithm for constrained finite time optimal control of hybrid systems with a linear performance index. In Proceedings of the European Control Conference (ECC).
Baotić, M., Christophersen, F. J., & Morari, M. (2003, December). Infinite time optimal control of hybrid systems with a linear performance index. In Proceedings of the Conference on Decision and Control.
Barić, M., Grieder, P., Baotić, M., & Morari, M. (2005, July). Optimal control of PWA systems by exploiting problem structure. In IFAC World Congress, Prague, Czech Republic.
Bemporad, A. (2003). Hybrid toolbox—User's guide [On-line]. Available: http://www.dii.unisi.it/hybrid/toolbox.
Bemporad, A., & Morari, M. (1999). Control of systems integrating logic, dynamics, and constraints. Automatica, 35(3), 407–427.
Bemporad, A., Morari, M., Dua, V., & Pistikopoulos, E. (2002). The explicit linear quadratic regulator for constrained systems. Automatica, 38(1), 3–20.
Bemporad, A., Borrelli, F., & Morari, M. (2003). Min–max control of constrained uncertain discrete-time linear systems. IEEE Transactions on Automatic Control, 48(9), 1600–1606.
Biswas, P., Grieder, P., Löfberg, J., & Morari, M. (2005). A survey on stability analysis of discrete-time piecewise affine systems. In IFAC World Congress, Prague, Czech Republic.
Blanchini, F. (1999). Set invariance in control—a survey. Automatica, 35(11), 1747–1767.
Borrelli, F., Bemporad, A., Fodor, M., & Hrovat, D. (2001). A hybrid approach to traction control. In A. Sangiovanni-Vincentelli & M. D. Benedetto (Eds.), Hybrid Systems: Computation and Control, ser. Lecture Notes in Computer Science. Springer Verlag.


Borrelli, F. (2003). Constrained optimal control of linear and hybrid systems, ser. Lecture Notes in Control and Information Sciences (Vol. 290). Springer.
Borrelli, F., Baotić, M., Bemporad, A., & Morari, M. (2001, December). Efficient on-line computation of constrained optimal control. In Proceedings of the 40th IEEE Conference on Decision and Control.
Borrelli, F., Baotić, M., Bemporad, A., & Morari, M. (2003, June). An efficient algorithm for computing the state feedback optimal control law for discrete time hybrid systems. In Proceedings of the 2003 American Control Conference.
Borrelli, F., Baotić, M., Bemporad, A., & Morari, M. (2005). Dynamic programming for constrained optimal control of discrete-time linear hybrid systems. Automatica, 41, 1709–1721.
Branicky, M., Borkar, V., & Mitter, S. (1998). A unified framework for hybrid control: model and optimal control theory. IEEE Transactions on Automatic Control, 43(1), 31–45.
Cavalier, T., Pardalos, P., & Soyster, A. (1990). Modeling and integer programming techniques applied to propositional calculus. Computers and Operations Research, 17(6), 561–570.
Cutler, C. R., & Ramaker, B. L. (1979, April). Dynamic matrix control—A computer control algorithm. In Proceedings of the AIChE 86th National Meeting.
Clarke, D. W., Mohtadi, C., & Tuffs, P. S. (1987). Generalized predictive control—I. The basic algorithm. Automatica, 23, 137–148.
Diehl, M., Bock, H., Schlöder, J., Findeisen, R., Nagy, Z., & Allgöwer, F. (2002). Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. Journal of Process Control, 12, 577–585.
Dua, V., & Pistikopoulos, E. (2000). An algorithm for the solution of multiparametric mixed integer linear programming problems. Annals of Operations Research, 99, 123–139.
El-Farra, N., Mhaskar, P., & Christofides, P. (2005a). Output feedback control of switched nonlinear systems using multiple Lyapunov functions. Systems and Control Letters, 54, 1163–1182.
El-Farra, N., Mhaskar, P., & Christofides, P. (2005b). Predictive control of switched nonlinear systems with scheduled mode transitions. IEEE Transactions on Automatic Control, 50, 1670–1680.
Ferrari-Trecate, G., Gallestey, E., Letizia, P., Morari, M., & Antoine, M. (2004, September). Modeling and control of co-generation power plants: A hybrid system approach. IEEE Transactions on Control Systems Technology, 12(5), 694–705.
Findeisen, R., Biegler, L., & Allgöwer, F. (Eds.). (2006). Assessment and future directions of nonlinear model predictive control, ser. Lecture Notes in Control and Information Sciences. Berlin: Springer-Verlag. To appear.
Floudas, C. A., & Li, X. (2004). Continuous-time versus discrete-time approaches for scheduling of chemical processes: a review. Computers & Chemical Engineering, 28(11), 2109–2129.
Fotiou, I., Rostalski, P., Sturmfels, B., & Morari, M. (2005, September). An algebraic geometry approach to nonlinear parametric optimization in control (Tech. Rep.). ETH Zurich. [On-line]. Available: http://control.ee.ethz.ch/index.cgi?page=publications;action=details;id.
Fourer, R., Gay, D. M., & Kernighan, B. W. (1993). AMPL: A modeling language for mathematical programming. Danvers, MA, USA: The Scientific Press.
Geyer, T., Papafotiou, G., & Morari, M. (2004). On the optimal control of switch-mode dc–dc converters. In O. Maler & A. Pnueli (Eds.), Hybrid Systems: Computation and Control, ser. Lecture Notes in Computer Science (pp. 342–356). Springer Verlag.
Geyer, T., & Papafotiou, G. (2005). Direct torque control for induction motor drives: A model predictive control approach based on feasibility. In M. Morari & L. Thiele (Eds.), Hybrid Systems: Computation and Control, ser. Lecture Notes in Computer Science (pp. 274–290). Springer Verlag.
Girard, A. (2005). Reachability of uncertain linear systems using zonotopes. In M. Morari & L. Thiele (Eds.), Hybrid Systems: Computation and Control, ser. Lecture Notes in Computer Science (pp. 291–305). Springer Verlag.
Grieder, P., & Morari, M. (2003, December). Complexity reduction of receding horizon control. In Proceedings of the 42nd IEEE Conference on Decision and Control.

Grieder, P., Borrelli, F., Torrisi, F., & Morari, M. (2003, June). Computation of the constrained infinite time linear quadratic regulator. In Proceedings of the 2003 American Control Conference.
Grieder, P., Kvasnica, M., Baotić, M., & Morari, M. (2004, June). Low complexity control of piecewise affine systems with stability guarantee. In American Control Conference, Boston, USA.
Gollmer, R., Nowak, M. P., Römisch, W., & Schultz, R. (2000). Unit commitment in power generation—A basic model and some extensions. Annals of Operations Research, 96, 167–189.
Grossmann, R. L., Nerode, A., Ravn, A. P., & Rischel, H. (Eds.). (1993). Hybrid systems (Vol. 736 in LNCS). New York: Springer Verlag.
Heemels, W., De Schutter, B., & Bemporad, A. (2001). Equivalence of hybrid dynamical models. Automatica, 37(7), 1085–1091.
ILOG, Inc. (2003). CPLEX 8.0 user manual. Gentilly Cedex, France. http://www.ilog.fr/products/cplex/.
Jockenhoevel, T., Biegler, L., & Waechter, A. (2003). Dynamic optimization of the Tennessee Eastman process using the OptControlCentre. Computers & Chemical Engineering, 27(11), 1513–1531.
Jones, C. (2004). Polyhedral tools for control. Ph.D. dissertation, Control Group, Department of Engineering, University of Cambridge.
Jones, C., Grieder, P., & Raković, S. (2005). A logarithmic-time solution to the point location problem for closed-form linear MPC. In Proceedings of the IFAC World Congress.
Kameswaran, S., & Biegler, L. (in press). Simultaneous dynamic optimization strategies: Recent advances and challenges. In Proceedings of CPC7.
Kerrigan, E. C. (2000). Robust constraint satisfaction: Invariant sets and predictive control. Ph.D. dissertation, Department of Engineering, University of Cambridge, Cambridge, England.
Kerrigan, E. C., & Maciejowski, J. M. (2001, December). Robust feasibility in model predictive control: Necessary and sufficient conditions. In Proceedings of the 40th IEEE Conference on Decision and Control.
Kerrigan, E. C., & Maciejowski, J. M. (2003, June). Robustly stable feedback min–max model predictive control. In Proceedings of the 2003 American Control Conference.
Kerrigan, E., & Maciejowski, J. (2004). Feedback min–max model predictive control using a single linear program: Robust stability and the explicit solution. International Journal of Robust and Nonlinear Control, 14(4), 395–413.
Kerrigan, E. C., & Mayne, D. Q. (2002, December). Optimal control of constrained, piecewise affine systems with bounded disturbances. In Proceedings of the 41st IEEE Conference on Decision and Control.
Kothare, M. V., Balakrishnan, V., & Morari, M. (1996). Robust constrained model predictive control using linear matrix inequalities. Automatica, 32(10), 1361–1379.
Kouvaritakis, B., Rossiter, J. A., & Schuurmans, J. (2000). Efficient robust predictive control. IEEE Transactions on Automatic Control, 45(8), 1545–1549.
Kvasnica, M., Grieder, P., Baotić, M., & Morari, M. (2003). Multi-Parametric Toolbox (MPT). In Hybrid Systems: Computation and Control, ser. Lecture Notes in Computer Science. Springer Verlag. [On-line]. Available: http://control.ee.ethz.ch/~mpt.
Larsen, L. F. S. (2004). Effects of synchronization of hysteresis controllers in a supermarket refrigeration system (Tech. Rep.). Danfoss A/S. [On-line]. Available: www.control.auc.dk/~lfsl.
Larsen, L. F. S., Geyer, T., & Morari, M. (2005, June). Hybrid model predictive control in supermarket refrigeration systems. In Proceedings of the IFAC World Congress.
Löfberg, J. (2003). Minimax approaches to robust model predictive control. Ph.D. dissertation, Linköping University, Linköping, Sweden, dissertation no. 812.
Mayne, D. Q., & Schroeder, W. R. (1997). Robust time-optimal control of constrained linear systems. Automatica, 33(12), 2103–2118.
Mayne, D. Q., Rawlings, J., Rao, C., & Scokaert, P. (2000, June). Constrained model predictive control: Stability and optimality. Automatica, 36(6), 789–814.
Mayne, D., Seron, M., & Raković, S. V. (2005). Robust model predictive control of constrained linear systems with bounded disturbances. Automatica, 41, 219–224.

Méndez, C. A., Cerdá, J., Grossmann, I. E., Harjunkoski, I., & Fahl, M. (2005). State of the art review of optimization methods for short term scheduling of batch processes (Tech. Rep.). Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, USA.
Mignone, D., & Monachino, N. (2001). The total three tank tutorial text (Tech. Rep.) [On-line]. Available: http://control.ee.ethz.ch/index.cgi?page=publications;action=details;id.
Niederberger, D. (2005). Design of optimal autonomous switching circuits to suppress mechanical vibration. In M. Morari & L. Thiele (Eds.), Hybrid Systems: Computation and Control, ser. Lecture Notes in Computer Science (pp. 511–525). Springer Verlag.
Pannocchia, G., Rawlings, J., & Wright, S. (in press). A partial enumeration strategy for fast large-scale linear model predictive control. In Proceedings of CPC7.
Prajna, S., & Papachristodoulou, A. (2003, June). Analysis of switched and hybrid systems—Beyond piecewise quadratic methods. In American Control Conference, Denver, Colorado, USA.
Prajna, S., & Rantzer, A. (2005). Primal-dual tests for safety and reachability. In M. Morari & L. Thiele (Eds.), Hybrid Systems: Computation and Control, ser. Lecture Notes in Computer Science (pp. 542–556). Springer Verlag.
Raković, S. V., Kerrigan, E. C., Kouramas, K. I., & Mayne, D. (2004, January). Invariant approximations of robustly positively invariant sets for constrained linear discrete-time systems subject to bounded disturbances (Tech. Rep. CUED/F-INFENG/TR.473). Department of Engineering, University of Cambridge. [On-line]. Available: http://www-control.eng.cam.ac.uk/eck21.
Richalet, J., Rault, A., Testud, J. L., & Papon, J. (1978). Model predictive heuristic control: Application to industrial processes. Automatica, 14, 413–428.


Raman, R., & Grossmann, I. E. (1991). Relation between MILP modeling and logical inference for chemical process synthesis. Computers & Chemical Engineering, 15(2), 73–84.
Raman, R., & Grossmann, I. E. (1992). Integration of logic and heuristic knowledge in MINLP optimization for process synthesis. Computers & Chemical Engineering, 16(3), 155–171.
Sznaier, M., & Damborg, M. J. (1987). Suboptimal control of linear systems with state and control inequality constraints. In Proceedings of the 26th IEEE Conference on Decision and Control (pp. 761–762).
Scokaert, P., & Mayne, D. Q. (1998). Min–max feedback model predictive control for constrained linear systems. IEEE Transactions on Automatic Control, 43(8), 1136–1142.
Tenny, M. J., Wright, S., & Rawlings, J. B. (2004). Nonlinear model predictive control via feasibility-perturbed sequential quadratic programming. Computational Optimization and Applications, 28(1), 87–121.
Torrisi, F., & Bemporad, A. (2004, March). HYSDEL—A tool for generating computational hybrid models. IEEE Transactions on Control Systems Technology, 12(2), 235–249.
Torrisi, F., Bemporad, A., & Giovanardi, L. (in preparation). Reach-set computation for analysis and optimal control of discrete hybrid automata (Tech. Rep.). Automatic Control Lab, ETH Zurich, Switzerland.
Tøndel, P., Johansen, T., & Bemporad, A. (2003). Evaluation of piecewise affine control via binary search tree. Automatica, 39(5), 945–950.
Tyler, M., & Morari, M. (1999). Propositional logic in control and monitoring problems. Automatica, 35(4), 565–582.
Williams, H. (1993). Model building in mathematical programming (3rd ed.). John Wiley & Sons.