Closing the gap between planning and control: A multiscale MPC cascade approach

Joseph Z. Lu
Honeywell Process Solutions, 1860 W. Rose Garden Lane, Phoenix, AZ 85027-2708, USA
Article history: Received 12 April 2015; Accepted 17 August 2015; Available online 18 November 2015

Keywords: Model predictive control; Multiscale; Multilevel; Multilayer; Multiscaleness; Cascade control; Planning; Plantwide optimization; Production planning; Integration
Abstract

A one-to-many, multiscale model predictive control (MPC) cascade is proposed for closing the gap between production planning and process control. The gap originates from the fact that planning and control use models at different scales, and it has existed since the first planning tool was deployed. Multiscaleness has been at the core of the challenge of coordinating heterogeneous solution layers, and there has been a lack of systematic treatment for multiscaleness in a control system. The proposed MPC cascade is devised as a plantwide master MPC controller cascading on top of multiple (n) slave MPC controllers.1 The master can use a coarse-scale, single-period planning model as the gain matrix of its dynamic model, and it can then control the same set of variables that are only monitored by the planning tool. Each slave controller, using a fine-scale model, performs two functions: (1) model predictive control for a process unit, and (2) computation of proxy limits that represent the current constraints inside the slave. The master's economic optimizer amends the single-period planning optimization in real time with the slaves' proxy limits, and the embedded planning model is thus reconciled with the MPC models for the process units in the sense that the master's optimal solution now honors the slaves' constraints. With this new approach, the proposed MPC cascade becomes a plantwide closed-loop control system that performs the reconciled planning optimization in its master controller and carries out the just-in-time production plan through its slave controllers.
1. Introduction

In the past 30 years, model predictive control (MPC) has become the standard multivariable control solution for many industries. According to two surveys published by Qin and Badgwell in 1996 and again in 2003, more than 90% of multivariable control implementations employ some form of MPC (Qin & Badgwell, 1996, 2003). In industries such as oil refining, competitive pressures have risen significantly over the last decade, as shown in Peter, Dick, and Allen (2012), and MPC has become a key business enabler for process plants to deliver better profit margins and compete effectively. The widespread use of MPC provides a solid foundation for an economically more significant advancement—closed-loop plantwide control and optimization. However, many technical, workflow, and user-experience challenges still exist in most industries (with a few exceptions, such as in Nath & Alzein, 2001). As a result, open-loop plantwide optimization, commonly known as production planning, is performed as a compromise alternative.
When a plantwide planning2 solution is employed, the cross-functional integration of different solution layers has always been one of the primary challenges, as reported by several authors (Bodington, 1995; Havlena & Lu, 2005; Iiro, Rasmus, & Alexander, 2009; Kulhavy, Lu, & Samad, 2001; Lu, 2001; Shobrys & White, 2000). Although progress has recently been made in plants where uniscale models can be used for both control and optimization (e.g., in ethylene plants and power plants) (Havlena & Lu, 2005; Kulhavy et al., 2001; Lu, 2001; Nath & Alzein, 2001), in large, complex plants such as oil refineries, the planning optimization layer is rarely, if at all, implemented as a part of the closed-loop control system. In fact, in many industries, planning results are almost always manually adjusted through mediating instruments such as daily operator instruction sheets. As a result, a significant amount of manufacturing profit remains unattained. For example, a recent benefit analysis for a North American refinery (see Section 4) estimates that more than US$20 million of additional profit per year could be captured in the diesel pool of a mid-sized refinery with improved real-time plantwide optimization. The refinery-wide profit improvement could be even greater.
1 The technical terms "master controller" and "slave controller" have long been used in practice and in the literature; no social connotation is intended here.
2 For better readability, the word “planning” is short for “production planning” herein.
In some industries, a mediating solution layer, which may consist of one or several open-loop production schedulers, has been devised to disaggregate production plans into smaller, more manageable pieces (Bodington, 1995). This approach assists in the translation of the planning solution into operator actions, but does not eliminate the need for manual adjustments. In some other industries, an open-loop production scheduler is used in place of production planning, but its output is often manually adjusted before implementation as well. Fundamentally, one of the main technical challenges is establishing solution consistency between the incongruent layers. Without a guarantee of solution consistency, the solution from the upper layer may not respect the constraints of the lower layer—hence the need for manual adjustment. In many industries, this manual adjustment involves burdensome activities including, but not limited to, (1) translating or revising high-level production targets to satisfy unit-level, possibly safety-related constraints, and (2) making adjustments in the solution to compensate for disturbances in production inventories or product quality (due to, for example, unplanned events, feed quality changes, etc.). There are other technical, workflow, and operational challenges in addition to the need for manual adjustments. Many of these challenges were documented as early as 1995 by Bodington (1995). Several formidable issues are further discussed by authors such as Iiro et al. (2009) and Shobrys and White (2000). This paper, however, will primarily focus on two challenges – solution consistency and disturbance compensation – that constitute the two major technical barriers between production planning and process control.

In addition to the traditional multilayer solutions above, in the last 10–15 years there have been several noteworthy advancements in the MPC realm which also aim to better solve large-scale problems. For example, distributed MPC (Christofides, Scattolini, Peña, & Liu, 2013; Rawlings, 2011) attempts to solve complex problems such as networked manufacturing problems by exploiting the problem's structure and decentralizing the MPC solution; coordinating MPC (Aske, Strand, & Skogestad, 2008) or hierarchical MPC (Bendtsen, Trangbaek, Klaus, & Stoustrup, 2010; Scattolini & Colaneri, 2007) attempts to better manage the underlying MPC applications and improve overall economic performance; and lastly, hierarchical and distributed MPC (HD-MPC for short) (Alvarado et al., 2011; Schutter, 2011) attempts to leverage the strengths of both while mitigating their drawbacks. Many suitable applications have been reported in Alvarado et al. (2011), Aske et al. (2008), Bendtsen et al. (2010), Christofides et al. (2013), Rawlings (2011), Scattolini and Colaneri (2007), and Schutter (2011), and some of the remaining challenges are discussed in Schutter (2011).

Challenges aside, many authors, such as Bodington (1995) and Shobrys and White (2000), point out that a significant amount of economic benefit could be obtained through improved integration of the different solution layers. For oil refining, that benefit is estimated to be on the order of US$1/barrel of processed crude (Bodington, 1995; Shobrys & White, 2000), which is staggering given the recent operating margins in typical oil refineries.
In the balance of this paper, a brief analysis of the widespread use of multiscale models in the process industries is first presented, which should also explain why the unattained benefits remain so significant. Next, a new multiscale MPC cascade solution is introduced, which can (as an option) reuse a planning model online, side by side with the existing MPC controllers, for just-in-time production optimization. Finally, key results of a gate-to-gate real-time optimization benefit study for a major oil refinery in North America are presented. The study was requested, paid for, and vetted by the operating company.
2. Multiscale models used in process industries and manufacturing facilities

An industrial plant, such as an oil refinery, LNG plant, or alumina refinery, often comprises multiple processing or manufacturing units. At the high level, overall material, component, and energy balances among all processing units must be established and then optimized for maximum operating profit. At the low level, each unit must be properly controlled to ensure the safety of the plant, as well as the smooth, efficient operation of each unit. Planning and MPC models are an example of a pair of multiscale models used to solve multilevel problems (for a broad background on multiscale modeling, see Tarja & Kim, 2013; Weinan, 2011).

A planning model looks at an entire plant with a bird's-eye view and thus represents each individual unit on a coarse scale. It focuses on the inter-unit steady-state relationships pertaining to the manufacturing activities, inter-unit material and energy balances, and overall profitability of the plant. The model variables may include production rates, product yield and quality, factors that influence the yield or quality, intermediate and final product inventories, etc. An MPC model, however, represents a process unit on a finer scale. It focuses on the intra-unit dynamic relationships between the controlled variables (CVs) and the manipulated variables (MVs) that pertain to the safe, smooth, and efficient operation of the unit. The model variables may include temperatures, pressures, flowrates, levels, valve openings, quality measurements, calculated variables, etc. In addition to the difference in the set of variables, the time scales of the two models are also different. An MPC model's time horizon typically ranges from minutes to hours, whereas a planning model's time horizon ranges from days to months.

Furthermore, a planning model can, for good reasons, exclude non-production or non-economic variables such as many of the ones in an MPC model. Hence it reduces a process unit to one or several columns of (material or energy) yield vectors. An MPC model, on the other hand, must include all of these operating variables for control purposes to ensure safe and effective unit operation. As a result, the scale difference in terms of the number of variables modeled for a unit is typically about one order of magnitude. For example, an MPC model for an oil refinery Fluidized Catalytic Cracking Unit (FCCU) would typically contain about 100 CVs (outputs) and 40 MVs (inputs). However, a planning model of the same unit justifiably focuses only on the key causal relationships between the feed rate and the product yield and quality. As a result, the planning model may have as few as 10 outputs and 2–3 inputs. This variable difference is a formidable barrier that prevents an effective integration of multilevel solutions, as discussed in Bodington (1995), Iiro et al. (2009) and Shobrys and White (2000). Other differences, as summarized in Table 1, can also introduce additional challenges to unifying these two multiscale solution layers into a consistent control system. There are pragmatic reasons why multiscale models are used inside a single plant.
In the absence of any unified solutions, these two-tier models offer at least two advantages: (a) coarse-scale planning models allow the plantwide economic optimization (e.g., production planning) to be succinctly formulated without getting entangled with less important and often obscuring details from any processing unit; and (b) a divide-and-conquer approach can be employed to solve the high-level, plantwide optimization problem first and then find a way to pass the solution down to each processing unit. While a compact, well-built, coarse-scale model makes the planning problem easy to set up, clear to view, and quick to solve, it comes with a handicap—it lacks visibility into the detailed operations of any given underlying unit. Although many of the unit's operating variables have little to do with the high-level planning problem, a small subset usually does. As a result, none of the yield-model-based optimization tools available in the market provide any guarantee that their solutions, optimal or not, will respect the low-level constraints in
Table 1. A comparison of planning model and MPC model.

Yield model (planning model)
• Users: planning personnel and others
• Purpose and advantages (also spatial scale): plantwide or multi-site; sharp focus on key economic factors and inter-unit material and energy balances; bird's-eye (compact) view of plantwide operations; free of unnecessary or obscuring details
• Typical tasks: produce the right amounts at the right specs; coordinate production among the different units; satisfy the inventory (and possibly other logistic) inter-unit constraints
• Typical decision variables: plant/unit material flow configurations; unit feed rates, operating modes and/or severities; feed choices; product mix; buy vs. sell decisions
• Execution (time scale): longer time horizon; less frequently executed; open-loop

MPC model
• Users: APC and process engineers
• Purpose and advantages (also spatial scale): intra-unit or multi-unit; sharp focus on the safety, operability and product quality of a process unit; unit-level efficiency is a secondary goal
• Typical tasks: control all safety- and operability-related CVs; control the product quality within the specs; operate the unit at a desirable operating point
• Typical decision variables: flows, including the unit feed rates; temperatures; pressures; levels; pumparounds
• Execution (time scale): shorter time horizon; more frequently executed; closed-loop
all units. However, these related operating constraints in the process units must be accommodated, and thus an off-line planning solution must be manually revised to provide the required constraint accommodation. In this workflow, a significant profit margin could be unknowingly lost. The same could be true for a scheduling solution if it uses a coarse-scale model. From a holistic perspective, an optimization (or control) problem formulated with a high-level yield-based model could benefit from its low-level peers—MPC models. The rationale is simple: The exact details required for the high-level solution to satisfy the unit constraints can be found in the MPC models, albeit these details may not necessarily be organized in the right format. Ideally, MPC models can be used to supplement the coarse-scale planning model on an as-needed basis. A self-evident question is then: in what structural framework can MPC models be used to amend the coarse-scale planning optimization with the needed fine-scale model information?
Fig. 1. The structure of the multiscale MPC cascade: along the time-scale axis, a plantwide primary MPC (the master), performing plant economic optimization and multivariable control, cascades onto n secondary MPC controllers (the slaves), each performing unit economic optimization and multivariable control.
3. An MPC cascade solution using multiscale models

3.1. MPC cascade structure

From a plantwide viewpoint, control and planning are almost always coupled—planning relies on control to establish the feasible region for optimization, while control relies on planning to coordinate the units and provide direction for the entire plant to move to the most profitable operating point. More specifically, planning depends on MPC controllers to push the constraints inside every unit to create a larger feasible region for plantwide optimization. Meanwhile, process MPC controllers need an indication from planning as to which constraints are true plantwide bottlenecks and should thus be pushed, and which constraints are not and should remain inactive. Obviously these two solution layers are co-dependent and should be treated simultaneously.

One potential way to solve this coupled problem is to design the control and plantwide optimization jointly. Since every MPC controller (industrial version) has an embedded economic optimizer, it is possible to devise a plantwide, monolithic MPC controller that performs both planning optimization and unit-level MPC. However, such an all-or-nothing, monolithic MPC solution would be too complex for the users to interpret any given part of the solution. It would have serious operability issues and thus low user acceptance. In fact, no such solution has been successfully implemented anywhere in the process industries. The true challenge of any joint design has always been simultaneously providing decentralized controls at the unit level and centralized optimization at the plant level. In other words, decentralized MPC solutions are much desired because of their autonomy, better operability and flexibility in dealing with unit upsets, equipment failures, and maintenance. However, centralized plantwide optimization is also much desired, and it is often compendiously set up with a coarse-scale planning model that represents the plant operations succinctly. Furthermore, such a coarse-scale model is easy to modify and maintain in response to frequent changes in unit configuration as well as in routing between the units.

The proposed 1-to-n MPC cascade strategy (see Fig. 1) aims to fill the void and satisfy these seemingly contradictory needs. The master controller can optionally use a pre-existing single-period planning yield model as the initial steady-state gain matrix of its MPC model, and the dynamic part can then be determined from (possibly historical) operating data, knowledge of vessel geometry, configurations of the process units, etc. It controls inter-unit variables such as production inventory, manufacturing activity, and product quality within an entire plant. Its embedded economic optimizer, which can be optionally furnished with the same planning model structure and
economics, then reproduces the single-period off-line planning optimization online in real time. The plantwide master MPC controller (the primary) cascades on top of multiple (n) slave MPC controllers (the secondaries) at the unit level. Note that this MPC cascade needs a two-way connection between the master and the slaves, unlike a traditional one-way PID-to-PID cascade. It is this two-way connection that also allows the slaves to provide the master with the operating constraint information inside every unit (see Sections 3.3 and 3.4 for more details). With this supplemental information, the real-time planning solution of the two-tier MPC cascade eliminates the aforementioned handicap and will honor all of the unit-level operating constraints. Jointly, the master and slave MPC controllers provide simultaneously decentralized controls at the low level with fine-scale MPC models and centralized real-time planning optimization at the high level with a coarse-scale yield model—all in one consistent cascade control system.

3.2. The similarities and differences in optimization

Although the main intent of the master's optimizer is to reproduce the planning optimization online, two noteworthy differences are its feedback mechanism and its use of proxy limits (the latter will be discussed in Section 3.3). The plantwide economic optimization now runs inside the master MPC with closed-loop feedback, as opposed to running standalone in open loop. Typically, open-loop planning solutions (and sometimes scheduling solutions as well) are updated from the plant measurements once per day at most. Thus, they lack an effective feedback mechanism to deal with inter-sample, let alone intra-sample, disturbances to production inventory and product quality, such as changes in feed quality, changes in unit operating conditions, process upsets, inter-unit heating or cooling limitations, maintenance, etc. Because of this lack of timely feedback, the process units can drift around an obsolete optimum and end up in undesirable operating conditions that can only be corrected by observant operations personnel. The optimizer embedded in the master MPC controller, however, runs at a user-specified frequency, typically ranging from once every several minutes to once every several hours. Both product quantity and quality of every process unit are measured or calculated at each execution. The prediction errors are estimated in the master controller as part of the standard MPC algorithm. If any deviations from the original plan are detected, a re-planning optimization takes place in the master immediately. Within the same execution period, the new optimal production targets are then sent to and implemented by the slave MPC controllers, completely eliminating the need for manual adjustment.

In addition to the feedback mechanism difference, certain planning optimization settings need to be modified to capture more benefits in real time. Some key similarities and differences between the traditional planning and this cascaded MPC solution are noted below:

(1) The economic objective function in the master MPC controller remains the same as that in the off-line planning counterpart.

(2) The prediction time horizon in the master is an online tuning parameter, ranging from several hours to several days or weeks, depending on the application. However, it is most likely shorter than that of an off-line (particularly multi-period) planning application.
(3) To capture more real-time benefits and achieve just-in-time manufacturing, the considerations for tuning should include, but are not limited to, the following: (a) how far in advance the product orders are placed; (b) the variance of product orders in either quantity or grade (or quality); (c) what additional buy/sell opportunities are available and in what time horizon;
(d) what semi-finished components can be exchanged with partners or bought/sold on the spot market.

(4) Production inventories and product properties are now controlled dynamically throughout the plant with real-time measurement feedback.

(5) Product orders are often known within the time horizon of the master controller—this allows the master to generate a produce-to-order plan in real time.

(6) The production capacity of each unit is maintained by a slave MPC controller, and the additional capacity is predicted by the slave and used by the master. Thus, the master can generate a just-in-time plan based on the actual capacity, as opposed to the assumed capacity in a typical planning application.

(7) The planning model used in the master MPC controller is updated in real time with a yield/quality validation mechanism that is associated with each of the slave MPC controllers. After being cross-validated for possible errors (e.g., metering), the measured yield of each process unit is used to update the master's model, and the master can generate a more accurate, and thus more profitable, production plan.

With the key differences noted above, it is conceivable that the proposed MPC cascade solution would significantly expand the space of Advanced Process Control (APC), which has traditionally been limited to unit or multi-unit controls. However, even with this expansion, it is important to note that the MPC cascade is not designed to replace the off-line planning solution. This point can be easily illustrated with a simple example: the planning solution typically makes raw feed purchase or product sale decisions one or several months in advance, whereas the proposed cascaded MPC solution is not suitable for making any purchase or sale decisions beyond its time horizon, which is typically one to two weeks. The two primary goals that this MPC cascade solution intends to achieve are: (1) maximizing the profitability of plant production in real time with the purchased feed; and (2) eliminating the need for frequent, manual solution adjustments to accommodate unit-level constraints and/or to respond to production disturbances. Planning (and sometimes scheduling) tools are currently used, directly or indirectly, to help make these manual adjustments and to achieve better profitability. But this is fundamentally a control task, for which MPC is a more suitable technology than open-loop steady-state optimization. In other words, a secondary objective of this MPC cascade is to provide relief to the open-loop planning/scheduling solutions when they are used as makeshift controllers.

3.3. Proxy limit—a conduit for reconciling multiscale models

A proxy limit is an alternative representation of a slave MPC controller's constraints in the master's space, and it can be viewed as a conduit for adding the slave's constraints to the master MPC. Proxy limits from all slave MPCs are combined and then included in the master's multivariable control and economic optimization formulations. Inherent relationships exist between the master and slave models—while the CVs of the master do not intermix with the CVs of any slave, the MV/DV variables are interconnected. To illustrate these interweaving relationships, the master model matrix is placed on top of the slave model matrices in Fig. 2. The top half of the diagram represents the master model matrix, which is partitioned into multiple column groups to align its MV indices with the slave MV indices.
The lower half contains the slave model matrices which are arranged block-diagonally. Each cell in the diagram represents a potential dynamic model between its corresponding input and output. Note well that an MV of the master may also be an MV of a slave, and vice versa. In the specific case depicted in Fig. 2, the third MV of
Fig. 2. Side-by-side view of master–slave models.
the master controller is connected to the first MV of slave MPC 2; they are the same manipulated variable. For illustration purposes, suppose this MV is the feed rate of Unit 2. In this case, the master controller needs to manipulate the feed rate for inter-unit production inventory and quality control, whereas the slave controller also needs it for local control objectives within Unit 2. These connected MVs are denoted as conjoint variables, for the obvious reason that these variables conjoin the master and slaves together. Wherever there is a conjoint variable, either the slave or the master loses one degree of freedom, since neither of them can manipulate the conjoint MV independently of the other. The non-conjoint MVs are denoted as free variables. They can be manipulated unilaterally by the slave for unit-level control or optimization. Whenever extra degrees of freedom remain, they can also be used by the slave to make more room for the conjoint variables, which can then be used by the master to further optimize the entire plant.

In the simplest case, in which a given slave controller has only one conjoint MV, the proxy limit can be expressed simply in terms of an MV limit in the master controller. This limit value can be determined by computing the maximum distance that the master can move the conjoint MV before one or more slave variables are pushed against their corresponding CV and MV limits. Take a Hydrocracking Unit (HCU) in an oil refinery for example. Suppose that the HCU feed is configured as MV3 in the master controller and as MV1 in the second slave MPC, and that the current feed rate is 33.5 MBPD. The master can then make an inquiry call to the slave and request the slave to maximize the feed rate in its steady-state optimizer. The result returned to the master indicates that a maximum feed rate of 38.1 MBPD can be requested before one or more slave CVs and MVs hit their limits. The returned maximum value for the conjoint MV3, 38.1 in this example, is then used as the proxy limit in the master for all of the constraints in the second slave MPC. If the low proxy limit is also needed, the master can make another inquiry call to minimize the conjoint MV, or the feed rate in the above example. The returned minimum value is then used as the low proxy limit in the master.

In this simplest case of one conjoint MV per slave, regardless of how many slave constraints can limit the master's MV, the master MPC only needs to know the point at which it must stop pushing the conjoint MV (otherwise some low-level constraint violations will result). This predicted stopping point is a proxy limit, which represents the entire set of potentially active slave constraints that could limit the master's MV. In this specific HCU example, no more than two proxy limits are required for representing the entirety of slave constraints!
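To make the inquiry-call mechanics concrete, the sketch below casts the single-conjoint-MV case as a small linear program. All gains and limits, and the use of SciPy's linprog as a stand-in for the slave's steady-state optimizer, are illustrative assumptions rather than the commercial implementation; only the current feed rate of 33.5 MBPD is taken from the example above.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical slave MPC steady-state model: 3 CVs, 2 MVs (MV1 = unit feed rate).
G = np.array([[ 0.8, -0.3],     # CV1 response per unit MV move
              [ 0.5,  1.2],     # CV2
              [-0.4,  0.6]])    # CV3
mv_now = np.array([33.5, 12.0])          # current MV values (feed at 33.5 MBPD)
cv_now = np.array([70.0, 41.0, 18.5])    # current steady-state CV predictions

mv_lo, mv_hi = np.array([30.0, 10.0]), np.array([40.0, 15.0])
cv_lo, cv_hi = np.array([65.0, 38.0, 15.0]), np.array([74.0, 45.0, 21.0])

def proxy_limit(direction):
    """Inquiry call: push the conjoint MV (index 0) as far as possible in the given
    direction (+1 = maximize, -1 = minimize) without violating any slave CV or MV
    limit. Returns the resulting proxy limit for the master."""
    c = -direction * np.array([1.0, 0.0])                # objective on the MV1 move
    A_ub = np.vstack([G, -G])                            # CV high and low limits
    b_ub = np.concatenate([cv_hi - cv_now, cv_now - cv_lo])
    bounds = list(zip(mv_lo - mv_now, mv_hi - mv_now))   # MV limits on the moves
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return mv_now[0] + res.x[0]

print("High proxy limit for the feed rate:", proxy_limit(+1))
print("Low  proxy limit for the feed rate:", proxy_limit(-1))
```

Running the call in the +1 direction yields the high proxy limit and in the -1 direction the low proxy limit for the master's conjoint MV.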
Fig. 3. Slave constraints as plotted in the master MV space and projections of inquiry calls.
As a result, the high and low conjoint MV limits of the master, unlike the MV limits of a regular MPC controller, are not set by the end users. Instead, they are set by the slave MPC controllers based on the current operating constraints in the process unit.

3.4. Proxy limits for multiple conjoint variables per unit

When a slave MPC controller has two or more conjoint variables with the master, the proxy limits for this slave can be multivariate in nature. In this general case, the proxy limit concept is extended and expressed in either the MV or CV space of the master controller. To do so, more inquiry calls in different directions are made to estimate the feasible region defined by the slave constraints. To explain the general concept of proxy limits, Fig. 3 illustrates the relationship between the master and slave constraints for two conjoint MVs. If the free variables of the slave controller are fixed at their current operating values, the (current) MV and (predicted) CV constraints of the slave can be expressed in the MV space of the master. For a linear, time-invariant slave controller, its MV and CV constraints form a convex region in the master MV space (the beige polygon in Fig. 3). The CV edges of the convex region can change—for any slave CV constraint, its projection in the master's MV space (as plotted in Fig. 3) can move as the free variables in the slave change, although its slope remains the same. The MV edges of the convex region (the four dashed lines) are brought directly from the slave MV constraints and thus will not move.

To estimate the feasible region of the conjoint MVs in the master, multiple inquiry calls in different directions are required. The turquoise dashed arrows inside the convex region illustrate how these inquiry calls work in a two-dimensional space. The first inquiry call is made with an objective function to maximize the first conjoint MV. This inquiry call "pushes" all the movable edges to the right, as depicted by the horizontal dashed arrow labeled "1st". The red point at the end of this arrow denotes the maximum limit to which the CV constraints can be pushed. Mathematically, the first inquiry call can be written as:
$$
\begin{aligned}
\min_{x} \quad & c^{T} x_{conj} \\
\text{s.t.} \quad & b_{lo} \le A \begin{bmatrix} x_{conj} \\ x_{free} \end{bmatrix} \le b_{hi}, \quad \text{where } A = \begin{bmatrix} G \\ I \end{bmatrix} \\
& d^{T} x_{conj} = 0
\end{aligned}
\tag{1}
$$
In Eq. (1), x denotes the slave MVs, G denotes the slave model gain matrix, and I denotes an identity matrix. In the first inquiry call, $c^{T} = [-1, 0]$ and $d^{T} = [0, 1]$.
Table 2. Example of c and d values for inquiry calls.

Call #   c^T        d^T
1        [-1, 0]    [0, 1]
2        [1, 0]     [0, 1]
3        [0, -1]    [1, 0]
4        [0, 1]     [1, 0]
5        [-1, 1]    [1, 1]
6        [1, -1]    [1, 1]
7        [1, 1]     [1, -1]
8        [-1, -1]   [1, -1]
The equality constraint, $d^{T} x_{conj} = 0$, is added to specify the direction in which the CV constraints are pushed out in the inquiry calls. Other examples of c and d values are listed in Table 2. Based on the values listed in Table 2, the second inquiry call will push all the CV constraints toward the left, which is labeled as "2nd Inquiry Call" in Fig. 3. The third will push the CV constraints upward. The remaining inquiry calls follow suit, and they are labeled in Fig. 3 according to their call numbers in Table 2. Each inquiry call returns an endpoint, which is marked as a red dot in Fig. 3. Connecting these eight endpoints, either clockwise or counterclockwise, demarcates an approximate feasible region for the master, and the lines that define the eight sides of the approximation are then added into the master's model as either CV or MV constraints.

For a given slave controller, any number of inquiry calls can be made. In general, increasing the number of unique inquiry calls in different directions improves the approximation of the actual feasible region. For a typical application, eight inquiry calls, such as those listed in Table 2, are sufficient. In general, if a slave controller has m conjoint MVs, and if for each conjoint variable p slope grid-points are selected for specifying the direction (p = 3 in Table 2), then $p^{m} - 1$ inquiry calls are needed to make an adequate estimation of the slave's feasible region. As the number of conjoint MVs, m, increases, the computational load increases exponentially. Fortunately, in the process industries m is typically between one and three, and in rare cases the number may reach four or five. The total number of inquiry calls required is therefore manageable.

The primary goal of proxy limits is to distill all of the CV and MV constraints in a slave MPC into a much smaller set of proxy limits for the coarser-scale master MPC. This is a critical reconciling mechanism that keeps the coarse-scale model intact in the plantwide optimization formulation inside the master while effectively merging the fine-scale slave MPC models into it. The above procedure is applied to each slave controller that is cascaded under the master, and the resultant proxy limits are added into the master's model at the appropriate rows or columns as either CV or MV constraints. With the combined proxy limits, this new MPC cascade makes it possible to keep the planning formulation in its original compact format while acquiring the much-needed visibility into the operations of the underlying process units. Note that the time constant of a plantwide planning problem (i.e., the master MPC model) is typically at least 5 times greater than that of the underlying unit control problem (i.e., the slave MPC model). Therefore, computing proxy limits at steady state is sufficient and cost-effective for embedding the planning formulation into the master MPC.
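A minimal sketch of this multi-direction procedure is given below. The gain matrix, the available constraint room, and the use of SciPy's linprog are all illustrative assumptions; the eight (c, d) direction pairs are the ones listed in Table 2, and each call is an instance of Eq. (1) with the free MVs held at their current values.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical slave steady-state model: 4 CVs driven by 2 conjoint MVs
# (the free MVs are held fixed, so only the conjoint moves are decision variables).
G = np.array([[ 0.9, -0.2],
              [ 0.4,  1.1],
              [-0.5,  0.7],
              [ 0.3,  0.3]])
cv_room_lo = np.array([-3.0, -4.0, -2.5, -2.0])  # allowable CV moves below current point
cv_room_hi = np.array([ 4.0,  3.0,  2.0,  2.5])  # allowable CV moves above current point
mv_room = list(zip([-5.0, -6.0], [6.0, 5.0]))    # allowable conjoint MV moves

def inquiry_call(c, d):
    """One inquiry call of Eq. (1): push the conjoint MVs in the direction set by the
    objective c and the equality constraint d^T x = 0, subject to the slave's CV and
    MV constraints. Returns one endpoint (a red dot in Fig. 3) of the feasible region."""
    A_ub = np.vstack([G, -G])                     # CV high and low limits
    b_ub = np.concatenate([cv_room_hi, -cv_room_lo])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.atleast_2d(d), b_eq=[0.0],
                  bounds=mv_room, method="highs")
    return res.x

# The eight (c, d) pairs of Table 2.
calls = [([-1, 0], [0, 1]), ([1, 0], [0, 1]), ([0, -1], [1, 0]), ([0, 1], [1, 0]),
         ([-1, 1], [1, 1]), ([1, -1], [1, 1]), ([1, 1], [1, -1]), ([-1, -1], [1, -1])]
endpoints = [inquiry_call(c, d) for c, d in calls]
# Connecting the endpoints approximates the slave's feasible region in the master's
# conjoint-MV space; its edges are then added to the master as proxy (CV/MV) limits.
for k, pt in enumerate(endpoints, 1):
    print(f"inquiry call {k}: endpoint {np.round(pt, 3)}")
```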
3.5. Model predictive constraint/range control

As discussed earlier, the master and slave controllers both require a form of MPC for dynamic control. It turns out that the two control problems defined by the master and by the slave are so similar that one MPC formulation can satisfy both. In this section, the MPC
algorithm is described for both the master and the slave, unless explicitly specified otherwise. Although many forms of MPC are described in various publications, only the ones that provide dynamic constraint control are suitable for the task. For the process industries, a majority of the CVs in a typical MPC application are constraint CVs, meaning that they need to be constrained within their high/low limits and that they should not be controlled (i.e., regulated) to an internal setpoint placed within the high and low limits. The importance of dynamic constraint control here can hardly be overstressed, particularly for the master controller, because more than 95% of its CVs in a typical plantwide production planning problem are constraint CVs. Moreover, the number of CVs in a typical application is about twice the number of MVs. Any artificial internal setpoint can potentially introduce undesirable behavior, for example when parallel or near-parallel CVs exist in the controller. A form of MPC that unifies both regulatory and constraint control, Model Predictive Range Control (MPRC) (Lu, 2001), was commercialized by Honeywell over 20 years ago. A brief summary is included below to provide the necessary background. Readers who are interested in more details are referred to Lu (2001), and those who are interested in a broad, theoretical treatment of MPC are referred to Rawlings and Mayne (2009). In many respects, Model Predictive Range Control is similar to other MPC algorithms used in the industry. One of the key differences lies in the control formulation. In the open-loop control solution step, constraint control is woven into the formulation by adding a slack variable for each pair of high/low constraints (or a range). Thus, Model Predictive Range Control can be described as follows:
$$
\begin{aligned}
\min_{x,\,y} \quad & \left\| A x - y \right\|_{W}^{2} + \left\| x \right\|_{\Lambda}^{2} \\
\text{s.t.} \quad & y_{lo} \le y \le y_{hi} \\
& x_{lo} \le x \le x_{hi} \\
& mv_{lo} \le S x \le mv_{hi}
\end{aligned}
\tag{2}
$$

$$
S = \operatorname{blockdiag}\!\left(
\begin{bmatrix} 1 & & & 0 \\ 1 & 1 & & \\ \vdots & & \ddots & \\ 1 & 1 & \cdots & 1 \end{bmatrix},
\;\ldots\;,
\begin{bmatrix} 1 & & & 0 \\ 1 & 1 & & \\ \vdots & & \ddots & \\ 1 & 1 & \cdots & 1 \end{bmatrix}
\right)
$$
where A is the step-response matrix of the process model; W is a diagonal weighting matrix; y_hi and y_lo are the high and low y limits; x_hi and x_lo are the high and low limits on x, which is the array of incremental (delta) control moves; mv_hi and mv_lo are the high and low MV limits; and S is an accumulating-sum matrix (block diagonal, with one lower-triangular block of ones per MV), which connects the incremental control moves to the MV limits. The term $\|x\|_{\Lambda}^{2} = x^{T}\Lambda x$ is the move-penalty term ($\Lambda$ is positive semi-definite). This term is optional, depending on the funnel design and user preference.
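To make the accumulating-sum structure concrete, the short sketch below (block sizes, MV values, and move sequences are arbitrary illustrations) builds S for a small problem and shows how it maps incremental moves onto the absolute MV trajectory that is checked against mv_lo and mv_hi.

```python
import numpy as np

def accumulating_sum_matrix(n_mv: int, n_moves: int) -> np.ndarray:
    """Block-diagonal S: one lower-triangular block of ones per MV, so that
    S @ delta_u gives the cumulative MV change at each move instant."""
    block = np.tril(np.ones((n_moves, n_moves)))
    return np.kron(np.eye(n_mv), block)

S = accumulating_sum_matrix(n_mv=2, n_moves=3)
delta_u = np.array([0.5, -0.2, 0.1,    # incremental moves of MV 1
                    1.0,  0.0, -0.4])  # incremental moves of MV 2
mv_now = np.repeat([33.5, 12.0], 3)    # current MV values, repeated per move instant
mv_traj = mv_now + S @ delta_u         # trajectory compared against mv_lo <= . <= mv_hi
print(mv_traj)
```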
Fig. 4. Constraint control via a funnel specification.

For each CV, such as the one depicted in Fig. 4, y_lo and y_hi are part of the funnel design. In a typical application, they can be shaped like a funnel. For a regulatory CV, the funnel's tail end narrows to a single line, as shown in Fig. 4, which is then set to a user-specified setpoint. For a constraint CV, the funnel's tail end opens up (not shown in Fig. 4) and forms two parallel tail lines, which are then set to the user-specified high and low CV limits. In either case, the funnel opening is always wider than the tail end. Since the
performance trajectory is not fully specified, the move-penalty term can be optional, and for tuning simplicity, it is often eliminated. As a result, the number of tuning knobs is reduced, and the control performance is then specified directly by the shape of the funnel on a per-CV basis. It is noteworthy that the formulation in Eq. (2) unifies regulatory control and constraint control. In fact, the standard regulatory control is treated as a special case of constraint control, where the high and low funnel bounds narrow to a single line that can then be used as a reference trajectory of any desirable shape. Economic optimization for both the master and slaves (see Fig. 1) can be specified as in Eq. (3) at the end of the prediction horizon, which is close to the steady state:
$$
\begin{aligned}
\min_{x_{ss}} \quad & f(x_{ss}) \\
\text{s.t.} \quad & cv_{lo} \le g(x_{ss}) \le cv_{hi} \\
& mv_{lo} \le x_{ss} \le mv_{hi}
\end{aligned}
\tag{3}
$$
where $f(x_{ss})$ is the economic objective function, which can be nonlinear; $g(x_{ss})$ is the nonlinear CV constraint function; and $x_{ss}$ is the steady-state control move. For a slave controller, $g(\cdot)$ is often a linear function, i.e., $g(x_{ss}) = G\,x_{ss}$, where G is the steady-state gain matrix of the model. For the master controller, a typical single-period planning problem can be described by Eq. (3), if so desired. Once the economically optimal solution to Eq. (3) is obtained, various algorithmic approaches can drive the slave MPC controllers to the economically optimal point. One approach is to assign the optimal CV solution, $y_{ss} = G\,x_{ss}$, as the internal targets of the MPC. Another approach is to use $x_{ss}$ directly in a manner that takes advantage of the inherent extra degrees of freedom in the MPRC formulation (Havlena & Lu, 2005; Kulhavy et al., 2001; Lu, 2001). The optimization target, $x_{ss}$, is implemented through an augmented form of Eq. (2). Note well that extra degrees of freedom may exist in Eq. (2) even with a non-zero move-penalty term, $\|x\|_{\Lambda}^{2}$. Moreover, the move-penalty term is recommended to be eliminated to provide simplicity in control tuning, as shown in Eq. (4). In this case, extra degrees of freedom will most likely exist, and additional care is needed in defining a suitable, unique solution.
$$
\begin{aligned}
\min_{x,\,y} \quad & \left\| A x - y \right\|_{W}^{2} \\
\text{s.t.} \quad & y_{lo} \le y \le y_{hi} \\
& x_{lo} \le x \le x_{hi} \\
& mv_{lo} \le S x \le mv_{hi}
\end{aligned}
\tag{4}
$$

In theory, this constraint (range) control formulation of Eq. (4) can be solved in two steps. First, all solution pairs (x*, y*) to Eq. (4) are collected into a solution set, S = {(x*, y*)}. The solution set S can be partitioned into two sets, S = {S1, S2}, where S1 = {x*} and S2 = {y*}. Next, the control move solution, x_sol, is chosen from the set S1 by performing a control-effort minimization (or minimum-move):

$$
x_{sol} = \arg\min_{x \,\in\, S_{1} = \{x^{*}\}} \left\| x \right\|_{2}^{2}
\tag{5}
$$

The 2-norm is used in Eq. (5) to provide a definition of minimum-effort control. Of course, other norms or a weighted 2-norm can also be used, depending on the specific situation in a given application. When the move-penalty term is non-zero in Eq. (2), the same two-step process can also be used to define the minimum-effort control for the constraint (range) control formulation of Eq. (2), albeit the set S1 might then contain only one solution. While this two-step procedure reveals the solution structure of Eq. (4) and defines the minimum-effort control, it is not an efficient method to use directly. A more efficient active-set quadratic programming (QP) algorithm, described below, was developed to combine the two steps into one, and it has been employed in Honeywell's commercial product.

Notice that Eq. (4) is a QP, though it is almost always a degenerate one. A special active-set QP algorithm with URV decomposition was developed to solve Eq. (4) efficiently. As in the standard active-set method, the active rows (and columns) of the A matrix are tracked in the search iterations, and at the kth iteration, the original problem Eq. (4) is partitioned as shown in Eq. (6):

$$
\min_{x,\,y} \;\left\| \begin{bmatrix} A_{act}^{(k)} \\ A_{free}^{(k)} \end{bmatrix} x - \begin{bmatrix} y_{act}^{(k)} \\ y_{free}^{(k)} \end{bmatrix} \right\|_{W}^{2}
\tag{6}
$$

Then, the active set of the A matrix is decomposed into the product of U, R, and V matrices,

$$
A_{act}^{(k)} = U_{k} R_{k} V_{k}^{T}
\tag{7}
$$

where U and V are orthonormal matrices and R is an upper-triangular matrix (Stewart, 1991). The size of $A_{act}^{(k)}$ changes as the search iteration increments. To avoid the inefficiency of re-decomposing a largely identical matrix in each iteration, a rank-1 (row or column) updating method can be utilized. The rank-1 update for the URV decomposition requires much less computational effort, and is thus much more efficient, than that of the Singular Value Decomposition (SVD) algorithm (Stewart, 1990, 1991), which provides one reason for choosing URV. Another reason, just as important, is that the URV decomposition preserves a valuable SVD feature—it always generates the minimum 2-norm solution when the solution is not unique. In this way, the two-step process for finding the minimum-effort control move can be combined into one online algorithm:

$$
x_{sol} = \arg\min_{x} \;\left\| \left[ U_{k} R_{k} V_{k}^{T} \right] x - y_{act}^{(k)} \right\|_{W}^{2}
\tag{8}
$$
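As a side illustration of the minimum-norm behavior referred to above, the toy example below is an unconstrained analogue built with NumPy; it is not the commercial active-set URV algorithm. It shows how an SVD-based least-squares solve returns the minimum 2-norm, i.e., minimum-effort, move when parallel CVs make the solution non-unique.

```python
import numpy as np

# Unconstrained analogue of the degenerate problem: two parallel CVs driven by
# three MVs, so the exact least-squares solution is not unique. lstsq (like SVD/URV)
# returns the minimum 2-norm solution, i.e., the minimum-effort control move.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],     # parallel to the first row
              [0.0, 0.0, 1.0]])
y = np.array([1.0, 2.0, 0.5])      # consistent targets for the two parallel CVs

x_min_norm, _, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("rank =", rank)              # 2 < 3: infinitely many exact solutions
print("minimum-effort move:", x_min_norm)
```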
Furthermore, at any given iteration, any component of x may reach its high or low limit. Just as in the standard active-set method, $A_{act}^{(k)}$ needs to be partitioned into a set of free columns that correspond to the free x-components, and a remaining set of fixed columns that correspond to the fixed, or constrained, x-components. A rank-1 URV column updating method is then used (instead of the row updating), and the algorithm proceeds as in the standard active-set QP algorithm.

As mentioned before, both the master and the slave can employ Eq. (4) as their MPC control formulation. A slight setup difference exists between the two: in the slave controllers, the mv_hi and mv_lo limits are set by the end user, whereas in the master, they are instead automatically set to the proxy limits provided by the slaves. With the proxy limits, the master's plantwide optimization solution will always respect the constraints posed in the underlying process units.

The two-tier MPC cascade solution enhances the current state-of-the-art multilevel solution in three ways: (1) its embedded real-time plantwide planning solution now runs in closed-loop and honors all of the unit-level operating constraints in the slave MPCs; (2)
the master MPC controls the same set of variables in closed-loop (e.g., production rates, inventories, quality, etc.) that its off-line planning counterpart merely monitors; and (3) manual adjustment or translation of the open-loop optimization solution is eliminated. By reconciling multiscale planning and control models online while keeping their original format intact for easy reuse, this MPC cascade provides the much-desired coarse-scale plantwide (production) optimization in closed-loop, and the equally desired decentralized MPC control for the different process units, all in one consistent MIMO cascade control system.

4. Benefit study and simulation results

4.1. Data collection and baselining

A detailed study of the benefits that could accrue from this new MPC cascade approach was commercially requested by an oil refinery in North America. The distillate pool (aka diesel pool, colloquially) of the refinery was selected as the process scope of the study. Fig. 5 provides a high-level overview of the process units. MPC has been installed on all the major units in this study scope.

Fig. 5. Flow diagram of the distillate production of an oil refinery.

This study uses the historical operating data of the distillate pool in a one-year operating window as the baseline for evaluating potential improvements. The simulation is established based on the additional optimization moves this new MPC cascade would have implemented online, had it been subject to the same historical operating constraints and other relevant conditions in the same time period. Over the study period in question, the sample (or recording) frequency for the required data varied. To maintain data consistency in the study, the simulation was carried out on a daily-average basis, which is accurate enough to identify the main sources of optimization benefits and provide solid estimates. The master MPC control
performance was also evaluated on a coarse time scale (once a day), though an online application would typically run at a much higher frequency and thus a better control performance could be expected. In the study year, six different distillate products were delivered. On a volumetric basis, the two or three most highly-demanded products accounted for a significant majority of the total product delivered. For each distillate product, several product qualities must be lab-tested and certified before the product can be shipped. Within the year, a small subset of these qualities was more difficult to meet than others. Most of the important qualities, however, were tightly controlled in the relevant process units under the existing advanced control and thus could be used by the master MPC as its MVs to optimize the product qualities within their proxy limits. The product qualities of particular interest in this study were Cloud Point, Sulfur Content, Distillation 90% Point and Flash Point; eight other qualities were also examined and were not found to constrain the master MPC controller. Within the study year, product property giveaways – the economic values that are given away due to the product quality overqualification – occurred due to a lack of coordination among the process units in the diesel pool. Manual adjustments on a daily basis of both the production and quality targets from the planning solution were not sufficient to streamline the operations among all the units. (A very difficult task indeed—the targets are heuristically adjusted without any assistance from a model matrix or any knowledge of proxy limits). In addition, only two component tanks were available for the diesel blending operation, which constrained the blending recipe choices and made the overall production coordination more difficult. As a result, in the highest-volume product, for example, only one product property had little giveaway throughout the majority of the year, while all other key properties had significant giveaways.
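As a simple illustration of what a quality "giveaway" means economically, the sketch below computes it for a max-limited property. The spec, daily values, volumes, and the per-ppm value factor are all hypothetical numbers for illustration, not figures from the study.

```python
import numpy as np

# Quality giveaway for a max-limited spec (e.g., Sulfur Content): value given away
# when the blended product is over-qualified, i.e., sits well below its limit.
spec_max = 15.0                                    # ppm sulfur limit (hypothetical)
daily_sulfur = np.array([9.8, 10.5, 11.2, 9.1])    # certified blend results
daily_volume = np.array([48.0, 52.0, 50.0, 49.0])  # thousand barrels shipped per day
value_per_ppm_bbl = 0.002                          # $/bbl recoverable per ppm of slack

giveaway_ppm = np.clip(spec_max - daily_sulfur, 0.0, None)
lost_value = (giveaway_ppm * daily_volume * 1e3 * value_per_ppm_bbl).sum()
print(f"average giveaway: {giveaway_ppm.mean():.1f} ppm, value: ${lost_value:,.0f}")
```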
The study year included a five-day refinery shutdown and two unit shutdowns—the Unifiner for five days and the Hydrocracker for 12 days, respectively. The shutdowns were included to test how well the new MPC cascade solution would handle them. These process unit shutdowns undoubtedly introduced large disturbances to the product inventories and quality.

4.2. Modeling stages and cross-validation

The master MPC simulation model was built in stages, starting with a smaller scope covering the blending operation and the product delivery schedule. Building the plantwide model stage-wise helped to vet the model more thoroughly. As the model scope gradually increased, each newly obtained model was cross-checked against the older model of smaller scope. Material balances and product qualities were also cross-checked between the models of different stages. The study was divided into four stages to cross-validate the benefit estimation. For the small-scope models, some auxiliary assumptions were provisionally made—for example, the hydrotreating unit feed rates were assumed to be independent for the small models without any crude units. From an optimization perspective, as the model scope increases, and as the provisional assumptions become unnecessary and are thus removed, the intermediate benefit should decrease stage by stage—as it did in the study. In this paper, due to space limitations, only the final results are presented. Readers who are interested in more details of the benefit study and the methodology used are referred to Lu and Piccolo (in press).

4.3. Simulation setup

In the benefit study, the master MPC was designed to control the individual flows of the various crude types subject to an incremental limit of 5 MBPD, which amounted to a small fraction of the total crude charge. There were 14 possible crude types within the study year, but only two or three of them were available at any given time. The master MPC was only allowed to use the same crude types that were available at the time in the refinery. The master MPC was designed to control the 90% points of the distillate products from the crude units as well. The crude unit naphtha cutpoints were fixed at their historical values to align with the baseline of gasoline production and to keep the gasoline pool unchanged. The yield vectors for the three crude units were modelled as functions of the crude types being processed and their product quality targets. Because all of the products were measured in volume (and some in liquid equivalents), a strict mass balance enclosing each of the crude units must be established to guarantee a zero volumetric gain3 under any circumstance and at all times.

From the crude units on down, the master MPC was permitted to manipulate the feed rates of all three hydro-processing units. In addition, the reactor severities and the distillate 90% point, when they were available in the designated seasons, were also adjusted. Further downstream, the blending recipes were also optimized by the master, which generated significantly different blending recipes from the baseline. The yield vectors for the three hydrotreating units were modelled as functions of the unit feed rate and the product quality targets. These units have different average volumetric gains: the Unifiner has a gain of 1%; the diesel HTR has a gain of 1–4%; and the hydrocracker has the highest gain of 20%.
A strict mass balance was established in each of the hydrotreating unit models to ensure that its volumetric gain equaled the measured volumetric gain in the unit at the time.
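The interplay between a strict mass balance and a non-zero volumetric gain can be illustrated with a tiny calculation; all numbers below are hypothetical and chosen only to show the mechanics described in footnote 3.

```python
import numpy as np

# Illustrative mass balance around a reactive unit (hypothetical numbers).
feed_vol = 33.5                                 # MBPD of feed
feed_density = 0.90                             # mass per unit volume (consistent units)
feed_mass = feed_vol * feed_density

mass_yields = np.array([0.05, 0.25, 0.70])      # product mass fractions (sum to 1)
prod_density = np.array([0.60, 0.78, 0.86])     # lighter products, lower density

prod_mass = feed_mass * mass_yields             # strict mass balance: sums to feed_mass
prod_vol = prod_mass / prod_density
vol_gain = prod_vol.sum() / feed_vol - 1.0      # volume swell from density reduction
print(f"mass closure error: {prod_mass.sum() - feed_mass:.6f}")
print(f"volumetric gain: {vol_gain:.1%}")
```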
3 Volumetric gain (aka volume swell) is caused by a reduction of average product density in a reactive unit. Crude oil distillation is a non-reactive unit and hence has a zero gain.
Fig. 6. Control performance of component tank 3.
4.4. Simulation results

Due to the vast amount of simulation results and space limitations, only one simulation figure is included for each aspect of the overall control and optimization results. More simulation results can be found in Lu and Piccolo (in press).

Fig. 6 shows the control performance of selected property variables in blending component tank 3, which receives distillate products from two hydrotreating units. The properties in the tank are controlled within their high and low limits. The limits are shown as red lines in the plots. Note that there are no setpoints in any of the component tanks. The blue curves represent the tank properties after the master MPC is put in closed-loop control. The purple stars represent the baseline performance under the slave MPC control only. As shown in the lower right graph, Sulfur Content is now controlled much closer to its high limit than before, which leads to a significant reduction in both operating and catalyst costs. Note that the blip in the middle of the plot (around Day 250) shows an instance where the disturbances from a unit shutdown were rejected. The master MPC also pushes the Cloud Point higher—more low-cost, heavier components are mixed into the tank, which reduces the average cost of the tank components. The master MPC also pushes the Flash Point lower—more low-value, lighter components are mixed into the tank, further reducing the average cost. Note that the Distillation 90% Point is pushed higher, along with the Cloud Point, and the average cost is further reduced.

The property shift and its resultant cost reduction in component tank 3 provide a glimpse of the overall optimization results in the distillate pool after the master MPC is cascaded on the slave MPCs. A similar change also takes place in the other component tank. The amount of product quality giveaway is significantly reduced; some giveaways are cut by 40% (Sulfur Content, for example). As a result, the component production costs are reduced.

Of the changes that the master MPC makes, the most interesting one is the material routing change. Since the three hydrotreating units have very different volumetric gains, a back-of-the-envelope calculation would call for an increase in the hydrocracker conversion and possibly its feed rate, too. But those moves cannot be computed locally without knowing the feed conditions from upstream and the product delivery requirements from downstream. The master MPC controller is uniquely suitable for optimizing a large, complex plant (or a subset of it) as well as coordinating the operations of all the units involved. In this benefit study, it finds the
Fig. 7. Individual and combined crude feed changes.
optimal path (i.e., the material routing amounts) to further leverage the overall volumetric gain of the diesel pool, subject to the operating constraints in each of the units and the final product delivery requirements. The new path indicates that, to produce the same volume of distillate products, a slightly smaller amount of crude is required. Fig. 7 shows the incremental feed rates of the three crude units (the light blue, green and red curves) and the total combined feed rate (the blue curve). As shown in the plot, the combined crude consumption for producing the same amount of distillate products in the study year is reduced by ∼0.5%. The refinery ran about 12 crude types during the study year, and the incremental crude feed change is of the same type of crude that was running in the unit at the time.

Not surprisingly, the optimal path for maximizing the economic effect of volumetric gains changed from day to day, since different crudes had different distillate yields and product properties, and each operating unit had different constraint conditions at different times. The master MPC controller can compensate for dynamic disturbances from the operating units and maintain the production inventories and product quality within their high and low limits, and hence it is able to capture transient economic benefits as they present themselves.

The total additional benefit for the entire distillate pool operation, including the crude saving, operating and catalyst cost savings, giveaway reductions, etc., is approximately $1.35 per barrel of distillate product. A large percentage of this comes from a slight reduction in overall crude consumption. The new optimal routing path contains small daily revisions to the open-loop path from the planning tool. And each daily revision comprises simultaneous adjustments to the distillate product cuts of all three crude units, the feed rates of all three hydro-treating units, and the blending recipes, based on the crude feed available and the final product delivery requirements. This is just-in-time manufacturing in action.
5. Other applications of MPC cascade

The proposed MPC cascade was primarily motivated by the prospect of closing the gap between planning and control, thus delivering significant economic benefits. However, from a technical viewpoint, multiscaleness in the system is not a prerequisite for applying this MPC cascade. If the models in the master and the slave are of the same or similar scale, the proposed MPC cascade may work just as well as in the multiscale cases.

One universal requirement, just as in the case of the PID cascade, is the time scale separation between the master and the slave controllers. In PID cascade control, it is commonly recommended that the master PID be tuned at least three times slower than the slave PID. In the case of the MPC cascade, the master's time horizon likewise needs to be at least 3 (preferably 5 or more) times longer than the slave's, to maintain the soundness of the steady-state-only proxy limit calculation. Otherwise, the proxy limits at multiple future time intervals may need to be considered. But this, too, is not out of the realm of possibility if a large family of applications can be identified. There are many potential applications in the process industries that satisfy this time scale separation requirement. For example, one fitting application is a multi-bed reactor temperature profile MPC controller that serves as a slave to the unit MPC master controller, which can then manipulate the reactor bed average temperature to control the yield profile of the unit. Another suitable application is a furnace MPC controller that serves as a slave to the unit MPC master controller, in which the master and the slave have the same relationship as in the previous case. A similar approach, for a single conjoint variable, was proposed in Aske et al. (2008), and other similar methods have also been practiced in different industries. The proposed MPC cascade controller can unify many of these existing methods, and many multilayer control problems can be effectively solved in this unified framework of the MPC cascade.

6. Future work

Although the number of conjoint MVs per process unit, m, rarely exceeds five in the process industries, m can be larger in other industries. In the general case, p^(m−1) inquiry calls are needed to estimate the slave's feasible region, as discussed in Section 3.4 (a rough count is sketched at the end of this section). When m is large, for example m > 10, the procedure proposed in this paper would become cumbersome to use in a real-time controller, and a more efficient method would be desirable for estimating the feasible region defined by the slave constraints.

Furthermore, the proposed MPC cascade structure can be cascaded multiple times, meaning that a three-level (or deeper) MPC cascade is conceivable for solving even bigger problems, such as supply chain optimization across multiple facilities. The same proxy limit technique can be applied at each level of the MPC cascade. However, inclusion of certain existing elements (transportation networks, for example) may make the problem more difficult to solve, and new challenges may also arise as the scope of optimization increases drastically. Additional research is needed to further study the potential strengths and possible weaknesses of the multi-level MPC cascade and to identify its most suitable application domains beyond plantwide optimization.

Lastly, more research is needed for cases where the time scale separation requirement is not satisfied or where the master and slaves are more strongly coupled dynamically. One approach is to combine all of the economic optimizers from the slaves into a coordinating master, as proposed in Havlena and Lu (2005), Lu (2001) and Nath and Alzein (2001). Another possible approach is to employ multiple proxy limits at different future time intervals.
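As a rough illustration of the inquiry-call count mentioned above, the short sketch below tabulates p^(m−1) for a few values of m and p and converts it to wall-clock time under an assumed per-call cost. The per-call time and the chosen values of p are hypothetical; the point is only that the inquiry procedure scales exponentially with m, which is why a more efficient feasible-region estimate is left as future work.

```python
# Rough illustration (hypothetical per-call time and p values) of why the
# p**(m - 1) feasible-region inquiry count becomes impractical as m grows.
CALL_TIME_S = 0.05   # assumed cost of one steady-state inquiry call to a slave MPC

for p in (3, 5):
    for m in (3, 5, 10, 12):
        calls = p ** (m - 1)
        hours = calls * CALL_TIME_S / 3600.0
        print(f"p={p}  m={m:>2}  calls={calls:>12,}  ~{hours:10.3f} h per master cycle")
```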
7. Summary

A novel one-to-many multiscale MPC cascade strategy is proposed to close the gap between planning and control. It is devised as a plantwide master MPC controller cascading on top of multiple slave MPC controllers at the unit level. The master can use a coarse-scale, single-period planning model as the gain matrix of its dynamic model, and it then can control the same set of variables that are only monitored by the current planning tool. Each slave controller, using a fine-scale model, performs two functions: (1) model predictive
control for a process unit, and (2) computation of proxy limits that represent the current constraints inside the slave. The master's economic optimizer amends the single-period planning optimization in real time with the slave's proxy limits. Therefore, the planning solution embedded in the two-tier MPC cascade now honors all the unit-level operating constraints, and manual solution adjustments can be eliminated. By reconciling the multiscale planning and control models online, the proposed MPC cascade becomes the plantwide control system that performs the improved planning optimization in the master controller and carries out the just-in-time production plan in closed loop through its slave controllers, with much improved agility, accuracy, efficiency and profitability. Hence, the solution consistency gap between planning and control is eliminated.

References

Alvarado, I., Limon, D., Muñoz, D., Maestre, J. M., Ridao, M. A., Scheu, H., et al. (2011, June). A comparative analysis of distributed MPC techniques applied to the HD-MPC four-tank benchmark. Journal of Process Control, 21(5), 800–815 (Special Issue on Hierarchical and Distributed Model Predictive Control).
Aske, E., Strand, S., & Skogestad, S. (2008). Coordinator MPC for maximizing plant throughput. Computers and Chemical Engineering, 32, 195–204.
Bendtsen, J., Trangbaek, K., & Stoustrup, J. (2010). Hierarchical model predictive control for resource distribution. In Proceedings of the 2010 IEEE conference on decision and control (CDC).
Bodington, E. (Ed.). (1995). Planning, scheduling, and control integration in the process industries. New York: McGraw-Hill.
Christofides, P., Scattolini, R., Peña, D., & Liu, J. (2013). Distributed model predictive control: A tutorial review and future research directions. Computers & Chemical Engineering, 51(5), 21–41 (CPC VIII).
Havlena, V., & Lu, J. (2005). A distributed automation framework for plant-wide control, optimisation, scheduling and planning. In Proceedings of the 16th triennial world congress of the International Federation of Automatic Control (pp. 80–94).
Iiro, H., Rasmus, N., & Alexander, H. (2009, December). Integration of scheduling and control—Theory or practice? Computers & Chemical Engineering, 33(12), 1909–1918.
Kulhavy, R., Lu, J., & Samad, T. (2001). Emerging technologies for enterprise optimization in the process industries. In Proceedings of chemical process control VI.
Lu, J. (2001). Challenging control problems and emerging solutions in enterprise control and optimization. In IFAC DYCOPS 2001 (also in Control Engineering Practice, 11, 2003, 847–858).
Lu, J., & Piccolo, M. (in press). A detailed benefit study of a novel MPC cascade technology. Hydrocarbon Processing.
Nath, R., & Alzein, Z. (2001). Dynamic real-time optimization and process control of twin olefins plants at DEA Wesseling Refinery. In Proceedings of the 13th annual ethylene producers conference.
Peter, R., Dick, H., & Allen, A. (2012). Global market research study of advanced process control & on-line optimization: ARC Report 2012.
Qin, S. J., & Badgwell, T. A. (1996, January). An overview of industrial model predictive control technology. Chemical process control—CPC V. Tahoe City, CA: CACHE.
Qin, S. J., & Badgwell, T. A. (2003). A survey of industrial model predictive control technology. Control Engineering Practice, 11, 733–764.
Rawlings, J. (2011). An overview of distributed model predictive control. In IFAC workshop: Hierarchical and distributed model predictive control, algorithms and applications.
Rawlings, J., & Mayne, D. (2009). Model predictive control: Theory and design. Madison, WI: Nob Hill Publishing.
Scattolini, R., & Colaneri, P. (2009). Hierarchical model predictive control. In 46th IEEE conference on decision and control.
Schutter, B. D. (2011). Distributed and hierarchical MPC: Main concepts and challenges. In HD-MPC industrial workshop, June 24, 2011.
Shobrys, D. E., & White, D. C. (2000). Planning, scheduling and control systems: Why cannot they work together. Computers and Chemical Engineering, 24, 163.
Stewart, G. W. (1990). An updating algorithm for subspace tracking: Technical Report CS-TR 2494 (also in IEEE Transactions on Signal Processing, 40, 1992).
Stewart, G. W. (1991). Updating a rank-revealing ULV decomposition: Computer science technical report series (also in SIAM Journal on Matrix Analysis and Applications, 14, 1993).
Stewart, G. W. (1991). On an algorithm for refining a rank-revealing URV decomposition: Technical Report CS-TR 2626. Department of Computer Science, University of Maryland (to appear in Linear Algebra and its Applications).
Tarja, L., & Kim, W. (Eds.). (2013). Multiscale modelling and design for engineering application. VTT Technical Research Centre of Finland.
Weinan, E. (2011). Principles of multiscale modeling (1st ed.). Cambridge University Press.

Joseph Lu received his Ph.D. degree in Chemical Engineering from the University of Washington in 1990. He joined Honeywell in 1990, has held various research and development roles, and is currently Senior Fellow and Chief Scientist of Advanced Solutions. His research interests are primarily in the areas of advanced control, robust control, and multiunit or plantwide optimization for the process industries. He is the recipient of the 2010 Control Engineering Practice Award from the American Automatic Control Council and is a member of IEEE.