Journal of Process Control 24 (2014) 239–249
Model predictive control for batch processes: Ensuring validity of predictions

D. Laurí a,∗, B. Lennox a, J. Camacho b

a Control Systems Group, School of Electrical and Electronic Engineering, The University of Manchester, Manchester, UK
b Signal Theory, Telematics and Communications Department, CITIC, University of Granada, Granada, Spain
Article info

Article history: Received 16 January 2013; Received in revised form 26 July 2013; Accepted 5 November 2013; Available online 25 December 2013.

Keywords: Batch optimization; Model predictive control; Latent variable; Validity of predictions.
Abstract

The intuitive and simple ideas that support model predictive control (MPC), along with its capabilities, have been the key to its success both in industry and academia. The contribution this paper makes is to further enhance the capabilities of MPC by easing its application to industrial batch processes. Specifically, this paper addresses the problem of ensuring the validity of predictions when applying MPC to such processes. Validity of predictions can be ensured by constraining the decision space of the MPC problem. The performance of the MPC control strategy relies on the ability of the model to predict the behaviour of the process; using the model in the region in which it is valid improves the resulting performance. In the proposed approach four validity indicators on predictions are defined: two of them consider all the variables in the model, and the other two consider only the degrees of freedom of the controller. The validity indicators are defined from the latent variable model of the process. Further to this, they are incorporated as constraints in the MPC optimization problem to bound the decision space and ensure the proper use of the model. Finally, the MPC cost function is modified to enable fine case-specific tuning if desired. Provided the indicators are quadratic, the controller yields a quadratically constrained quadratic programming problem, for which efficient solvers are commercially available. A fed-batch fermentation example shows how MPC ensuring validity of predictions improves performance and eases tuning of the controller. The target in the example provided is end-point control accounting for variations in the initial measurable conditions of the batch.
1. Introduction

Batch and fed-batch1 processes have widespread applications in chemical and life science industries for the production of products with high added value (e.g. medicines, enzymes, high-performance polymers). The primary control objective in batch processes is to reach a specified product quality at the point of batch termination. Accurately controlling the final quality of a batch process is challenging in that physical measurements of the quality parameters that can be used to predict the final quality are often not available on-line [1,2]. Data-driven models provide an answer to this problem and are widely used for process monitoring [3–5]. The common approach to controlling the quality of a batch process is end-point-based MPC [6]. In end-point-based MPC the control sequence from the current point until the end of the batch is determined such that product quality at the end of the batch reaches a desired value.
∗ Corresponding author. Tel.: +34 645834681. E-mail addresses: [email protected], [email protected], David.LauriPla@pfizer.com (D. Laurí).
1 In the sequel, batch refers to both batch and fed-batch processes.
Depending on the strategy used, the trajectory is either determined at the start of the batch, with on-line local controllers used to track it (batch-to-batch) [7–10]; or the MPC problem is solved at each decision point within the batch and applied in a receding-horizon policy (within-batch) [11,12]. The former is easier to deploy, whereas the latter may be less affected by process noise and disturbances. Intermediate solutions can also be defined, in which the period to recalculate the MPC trajectory is larger than that used in the local controllers that track the calculated trajectory [13]. All these strategies have in common that an MPC problem is solved, either off-line or on-line. The MPC problem calculates the control trajectory from the calculation instant to the end of the batch, using a model of the process. For the control results to be successful, the model should approximate the behavior of the process to an acceptable level, which for any non-linear process requires it to be used in the region in which it is valid. An important source of variability in predictions is the initial state, e.g. variability in the raw material. Consequently, all measurable variables available at the decision points should be considered for inclusion in the model so that the accuracy of the predictions can be improved. All end-point MPC optimization techniques need to overcome the following challenges to ensure high-quality control is achieved:
(I) Cope with disturbances: unmeasured variables that affect quality, or changes in measurable variables after the decision point, e.g. a feed property change.
(II) Cope with faults: measurement errors or faults in actuators or sensors.
(III) Acceptable computational complexity: make sure the time needed to solve the MPC problem is smaller than the available time.
(IV) Account for process-model mismatch: the model used in MPC is an approximation of the real process, hence the model should be used in a region in which it is known to be valid.

There are two approaches to cope with disturbances, challenge (I). One approach is to recompute the trajectory of the manipulated variables (MVs) to account for the disturbance using a moving-horizon MPC strategy [12]. Another approach is to improve the predictor so that it actually accounts for the disturbance, e.g. by using batch-to-batch iterative learning control (ILC). In [10], an ILC approach based on moving-window re-identification is proposed. There are also approaches that combine both ILC and moving-horizon MPC [14,15].

There are two approaches to cope with faults, challenge (II). In the active approach the fault is detected and actions are taken, mainly by controller reconfiguration [16,17]. In the passive approach the end-point-based MPC formulation explicitly accounts for model uncertainty. One solution is to use min-max based robust MPC, in which the goal is to maximize the performance of the MPC by minimizing the worst-case tracking error (the largest difference between the prediction and the actual measurement). However, according to [18], these approaches are often computationally prohibitive. Alternatively, [18] proposes a robust reverse-time reachability region based MPC approach to ensure states can be driven inside a desired end-point neighborhood if the fault is repaired sufficiently fast.

There are two branches of solutions to attain an acceptable computational complexity, challenge (III). One branch increases the available time for computation. This can be achieved for batch processes by either solving the MPC at the beginning of the batch in a batch-to-batch optimization policy, or by reducing the number of decision points during the batch at which the trajectory is recomputed [19]. The other branch tries to solve the problem faster. The common approach is to use a linear model or a set of linear models instead of a non-linear model [20,21], or to use a linear MPC controller over an input-output feedback-linearized process [22]. Additionally, it is common practice to reduce the d.o.f. (degrees of freedom) to reduce computation time. Move-blocking strategies reduce the d.o.f. by fixing the input or its derivatives to be constant over several time steps; a survey of various move-blocking strategies is presented in [23]. An alternative approach is to use Laguerre functions to approximate the control sequence in a large control window, but using a reduced number of d.o.f. [24]. Another solution is to reduce complexity by using latent variable methods in the identification stage and performing the minimization in the space of the latent variables [11,25].

This paper focuses on challenge (IV), accounting for process-model mismatch. The different approaches to account for process-model mismatch include:

• Minimize process-model mismatch. A more complex model or an adaptive approximation can be used to minimize process-model mismatch.
• Weight changes of the control sequence from the nominal trajectory in the MPC cost function by means of λu. This is the simplest and most common approach, in which λu is a design parameter that weights the control effort. Small values of λu give the controller freedom to propose control trajectories far from the nominal conditions, which, assuming the model is a linear approximation of the process around a fixed trajectory, can lead to poor predictions and reduced control performance. Large values of λu provide biased control, as the deviation-from-target term in the MPC cost function loses importance versus the movement suppression term. The most common solution is to tune λu by trial and error until the desired performance is obtained.

• Robust MPC: model uncertainty is considered in the MPC cost function and the problem is solved using a min-max optimization. Robust MPC is case dependent and can be challenging to apply successfully [26]. In [18] a reverse-time reachability region based non-linear MPC for batch processes is presented to cope with model uncertainty. The disadvantage of robust approaches is computational complexity.

• Constrain the MPC problem to ensure the model is used in the region in which it is valid. Receding-horizon MPC with hard validity constraints has been defined for continuous processes in [27], and for quality by design in [28–30]. In [31], validity indicators are weighted in the MPC cost function for batch processes, but they are not set as hard constraints.

This paper is an extension of previous work of the authors [27] that implements hard validity constraints for batch processes. The novelty of this paper is that it defines a systematic approach for the inclusion of validity constraints that ensures feasibility. The validity indicators are included as two hard constraints and two hard constraints relaxed with slack variables to ensure feasibility. Additionally, the validity indicators are weighted in the cost function to enable further optional fine tuning of the controller.

Summing up, this paper formulates an MPC framework for batch processes that ensures validity of predictions by constraining the decision space. The performance of the MPC control strategy relies on the ability of the model to predict the behavior of the process. Hence, using the model only in the region in which it is valid will improve the resulting control performance. The novelty proposed in this paper is the methodology used to implement hard and softened constraints to ensure the validity of predictions. The controller proposed in this paper could be formulated in the latent variable space as in [27], where the same control results would be obtained but with reduced computational complexity. However, in this paper it is assumed that the computational complexity of the optimization problem is acceptable, and the controller is formulated in the original space of the MVs for the sake of readability. Both the model and the validity indicators are obtained in the latent variable space and then formulated in the original space of the MVs to be included in the controller. Although the solution proposed in this paper focuses on challenge (IV), current solutions for the other three challenges could be combined with the proposed methodology to cope with challenges (I)–(III).

The structure of this paper is as follows: the traditional MPC methodology applied for end-point control of batch processes is briefly summarized in Section 2. The indices on validity of predictions considered in this paper are introduced in Section 3. A solution for further improvement of end-point control results is provided in Section 4.
In Section 5, a fed-batch fermentation example shows how ensuring validity of predictions can improve end-product quality while simplifying deployment. The paper ends with concluding remarks in Section 6.

2. MPC for end-point control in batch processes

This section briefly describes the MPC methodology with one decision point for the batch, also known as the mid-course correction methodology.
The decision point can be set towards the beginning of the batch so that the trajectories of the MVs for the rest of the batch are decided. The trajectories are determined with the aim of reaching the desired end-point quality, accounting for measurable initial conditions. In this section: first, the structure of the model used is provided; second, the strategy for model identification is described; and finally, the traditional MPC approach for end-point control in batch processes is described (Fig. 1).

Fig. 1. MPC for end-point control in batch processes.

2.1. Model structure

In this paper, batch-wise unfolded partial least squares (BW-PLS) is used to infer the final batch quality from the on-line measurements [32,33]. BW-PLS consists of unfolding the data to cater for the three-dimensional data structure specific to batch processes, and then applying a standard partial least squares model. The linear structure of the model is set such that the end-point quality variables are estimated from a regression vector multiplied by the matrix of parameters of the model. The regression vector x(i) is composed of two parts: the first part contains all known data for the batch up to the instant at which the MPC controller will be evaluated2; and the second part contains the variables that will be decided by the MPC controller, the trajectory for the MVs.

\[ y(i) = \overbrace{[x_p(i) \;\; x_d(i)]}^{x(i)}\,\theta \tag{1} \]

where y(i) ∈ R^{1×ny}, quality variables; i, batch number; x_p(i) ∈ R^{1×np}, the np variables available before the decision point; x_d(i) ∈ R^{1×nd}, the nd d.o.f. to be decided by the MPC controller at the decision point; θ ∈ R^{nx×ny}, matrix of parameters of the model; and nx = np + nd, number of columns in the regression vector. Assuming there are ni MVs and K time points from the decision point until the end of the batch, then nd = ni K.

2.2. Identification using PLS

Identification data matrices can be obtained from Eq. (1) for i ∈ [1 . . . I], and PLS is used for identification [34,35]. Prior to identification, matrices X and Y are mean-centered and, if necessary, properly scaled (data are not always scaled; it depends on the nature of the variables).

\[ X = TP^{T} + E; \qquad T = XW(P^{T}W)^{-1} = XZ \tag{2} \]
\[ Y = UQ^{T} + F \tag{3} \]
\[ U = TB \tag{4} \]
\[ Y = XZBQ^{T} \tag{5} \]

where X ∈ R^{I×nx}, input space; I, number of batches in the identification data set; T ∈ R^{I×nlv}, input scores; nlv, number of latent variables; P ∈ R^{nx×nlv}, input loadings; W ∈ R^{nx×nlv}, input weights to get orthogonal scores [36]; E ∈ R^{I×nx}, input residuals; Y ∈ R^{I×ny}, output space; U ∈ R^{I×nlv}, output scores; Q ∈ R^{ny×nlv}, output loadings; F ∈ R^{I×ny}, output residuals.
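To make the identification step concrete, the following is a minimal sketch of Eqs. (1)–(5) in Python, using scikit-learn's PLSRegression on synthetic data with the dimensions of the example in Section 5 (20 batches, 4 initial-state variables, 100 MV samples). The variable names mirror the equations; the data and library choice are illustrative and not part of the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

I, n_p, n_d, n_y, n_lv = 20, 4, 100, 1, 14
rng = np.random.default_rng(0)

# Batch-wise unfolding: one row per batch, regression vector as in Eq. (1)
Xp = rng.normal(size=(I, n_p))      # data known before the decision point
Xd = rng.normal(size=(I, n_d))      # unfolded MV trajectories (ni MVs x K points)
X = np.hstack([Xp, Xd])
Y = X @ rng.normal(size=(n_p + n_d, n_y)) + 0.01 * rng.normal(size=(I, n_y))

# Mean-center (and, if necessary, scale) prior to identification
Xc, Yc = X - X.mean(0), Y - Y.mean(0)

pls = PLSRegression(n_components=n_lv, scale=False).fit(Xc, Yc)
W, P, Q = pls.x_weights_, pls.x_loadings_, pls.y_loadings_
Z = pls.x_rotations_                # Z = W (P^T W)^{-1}, so T = Xc Z as in Eq. (2)
T = Xc @ Z

# Inner relation U = TB, Eq. (4); B Q^T fitted by least squares, which gives
# theta = Z B Q^T, the model matrix of Eq. (1)
BQt = np.linalg.lstsq(T, Yc, rcond=None)[0]
theta = Z @ BQt

assert np.allclose(T, pls.x_scores_)   # sanity check on the score space
```

The matrices Z, P and theta extracted here are the quantities reused by the validity indicators of Section 3.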
2.3. MPC end-point

The quadratic cost function with linear constraints that defines the end-point MPC approach is [hereafter the argument i is omitted for the sake of readability]:

\[ J_C = \min_{x_d} \; \|r - y\|_F^2 + \lambda_u \|x_d\|_F^2 \quad \text{s.t.} \quad \begin{cases} A x_d^T \le b \\ y = [x_p \;\; x_d]\,\theta \end{cases} \tag{6} \]

where r, desired end-point quality defined accordingly to y; λu, movement suppression factor3; x_d, MVs; and A and b define linear constraints on the MVs.
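As an illustration of Eq. (6): if the linear inequality constraints A x_d^T ≤ b are dropped, the minimizer has a closed form. The sketch below (the names and the row-vector convention are ours, not the paper's) also makes the null space discussed in Section 4 visible: for λu = 0 and nd much larger than ny, the matrix to invert is singular, and the pseudo-inverse returns the minimum-norm solution among the many that fit the target.

```python
import numpy as np

def endpoint_mpc_unconstrained(theta, xp, r, n_p, lam_u=0.0):
    """xd minimizing ||r - [xp xd] theta||_F^2 + lam_u ||xd||_F^2 (row vectors)."""
    theta_p, theta_d = theta[:n_p, :], theta[n_p:, :]  # split theta between xp and xd
    r_tilde = r - xp @ theta_p                         # target minus known-past part
    n_d = theta_d.shape[0]
    lhs = theta_d @ theta_d.T + lam_u * np.eye(n_d)
    # pinv gives the minimum-norm solution; with lam_u = 0 and n_d >> n_y the
    # matrix is singular and the flat directions form the null space of Section 4
    return (np.linalg.pinv(lhs) @ theta_d @ r_tilde.T).T

# e.g. xd = endpoint_mpc_unconstrained(theta, Xc[:1, :4], np.array([[1.0]]), n_p=4)
```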
3. Ensuring validity of predictions

The MPC in Section 2 relies on a linear model to perform predictions. Some batch processes can exhibit non-linear behavior, and linear approximations should be used within their range of validity. Using the model outside this region can deteriorate the performance of predictions and consequently reduce control performance [37]. This section is devoted to redefining the MPC problem so that the model is used more appropriately. The common approach to achieve this is to use a movement suppression term in the MPC cost function to discourage the controller from making decisions outside the validity range of the model. By tuning λu in Eq. (6), the designer can decide how much freedom is given to the controller. The approach proposed in this paper to ensure validity of predictions is to impose constraints on validity indicators in the MPC formulation. The advantages of setting constraints on validity indicators versus using the movement suppression term weighted by λu are:

• The controller is easier to tune. Control results are highly sensitive to λu; consequently, replacing this term with constraints eases tuning of the controller.4
• The decisions of the controller are explicitly maintained in the region in which the model is considered to be valid.
• The validity threshold can be adjusted if a certain degree of extrapolation is acceptable. This can be advantageous in processes that have a known linear behavior in a region larger than that used for identification.

This section is divided into three parts: first, the most common indicators of validity are provided; then, two validity indicators on the degrees of freedom are defined; and finally, the MPC problem is reformulated to account for such indicators.
2 The known data should contain all data that is not to be decided by the controller. If trajectories of measurable variables are added to the model to improve predictions, they should also be contained in the part of known data. Note that at the decision point some of the measurable variables in the model may not be available; such data can be imputed using missing-value methods.
3 Note the data is mean-centered prior to identification, so x_d represents the difference versus the nominal trajectory.
4 Large values of λu provide biased control. Small values of λu give the controller freedom to propose control trajectories far from the nominal conditions, which leads to poor predictions and thus poor control results.
3.1. Indices on validity of predictions

Two indicators commonly used for detection of an out-of-control situation are Hotelling's T², and the Q-statistic on the residuals in the input space [5,38,39]. Hotelling's T² measures the Mahalanobis distance to the origin in the latent variable space. The Q-statistic accounts for the residual variability. An observation exceeding the T² or Q limit defined by the identification data set suggests the model is being used in extrapolation. Note that there exist alternative indicators and modeling approaches for monitoring [5,40]. However, for the sake of simplicity and without loss of generality, the most common indicators are used in this paper, and these prove to provide excellent results in the example presented.

According to [41], the standardization of the indices improves sensitivity. In this paper the indices are defined to be normalized to the identification data set. Consequently, a value of an indicator below one suggests the model is being used in the region in which it has been identified. The threshold used to normalize is of importance because it will constrain the decision space of the controller. The common data-based limit approach to defining the threshold is adopted in this paper: the threshold is defined as the value of the index that accounts for 95% of the samples in the identification data set.

The normalized Hotelling's T² yields:

\[ J_t = \frac{1}{J_{t\max}}\; t\,(S_a^{2})^{-1} t^T \tag{7} \]
where, from Eqs. (2) and (1),

\[ t = [x_p \;\; x_d]\,Z \tag{8} \]
is the projection of the input space onto the latent variable space; S_a² is a diagonal matrix such that element j is the variance of the score t_j in the identification data set; and J_tmax is the threshold for Hotelling's T². Provided J_t is normalized, J_t ≤ 1 suggests the model is being used in the region in which it has been identified.

The normalized Q yields:

\[ J_e = \frac{1}{J_{e\max}}\; e\,e^T \tag{9} \]
where, from Eqs. (2) and (8),

\[ e = x - tP^T = [x_p \;\; x_d](I - ZP^T) \tag{10} \]
is the error of projection in the input space; and J_emax is the threshold for Q. Provided J_e is normalized, J_e ≤ 1 suggests the model is being used in the region in which it has been identified.

3.2. Indices on validity of predictions: d.o.f.

The two indices measuring the validity of predictions defined in Section 3.1 depend on both x_p and x_d. x_p is measured and known at the time of solving the MPC optimization problem. x_d contains the d.o.f. of the controller and defines the future trajectory of the MVs. Provided the optimization is performed in terms of x_d, and known measured data cannot be changed, other indices based on those in Section 3.1, but considering only the actual degrees of freedom of the controller, can be defined. Using such indices in the optimization can force the trajectory determined for the MVs to lie in the region spanned by the trajectories contained in the identification data set.

Constraining validity in terms of the d.o.f. can be of help in situations in which past known data may lie outside the identification region. In this case the indicators provided in Section 3.1 may provide a value for the indices that is above the threshold regardless of what future trajectories, x_d, are determined. Consequently, there may be no solution that provides a value for the indicators below the
threshold, because the past is causing the indicators to be out of range. If the indices only account for x_d, the validity indicators define which future trajectories are acceptable compared to those used for identification.

The normalized Hotelling's T² on the d.o.f. yields:

\[ \check{J}_t = \frac{1}{\check{J}_{t\max}}\; \check{t}\,(\check{S}_a^{2})^{-1} \check{t}^T \tag{11} \]
where, from Eq. (8) and considering only the variables which are d.o.f.,

\[ \check{t} = [0 \;\; x_d]\,Z \tag{12} \]
Š_a² is a diagonal matrix such that element j is the variance of the score ť_j in the identification data set; and J̌_tmax is the threshold defined analogously to J_tmax in Section 3.1. Provided J̌t is normalized, J̌t ≤ 1 suggests the determined trajectory for the MVs lies in the region defined by the trajectories used in the identification data set.

The normalized Q on the d.o.f. yields:

\[ \check{J}_e = \frac{1}{\check{J}_{e\max}}\; \check{e}\,\check{e}^T \tag{13} \]

where, from Eq. (10) and considering only the d.o.f.,

\[ \check{e} = [0 \;\; x_d](I - ZP^T) \tag{14} \]

and J̌_emax is the threshold defined analogously to J_emax in Section 3.1. Provided J̌e is normalized, J̌e ≤ 1 suggests the determined trajectory for the MVs maintains the same level of residual variance as the trajectories in the identification data set.

3.3. Constrain the MPC problem

There are two options for applying the validity indicators within the LV-MPC problem: weight them in the cost function or introduce them as hard constraints. Weighting the validity indicators in the cost function is used in [31] for batch processes, in [25] for continuous processes, and in [30] for product design. The hard-constraints approach is used in [27] for continuous processes, and for quality by design in [29]. The main advantage of weighting the validity indicators is that it is easier to implement, as the resulting controller is formulated as a QP. The drawbacks of the weighting approach are: although by trial and error one can tune the weights for validity of predictions in a specific situation, validity of predictions in general cannot be ascertained; the shape of the cost function is modified, which can lead to a biased end-point result; and more time for deployment is needed, provided the weights for the validity indicators are tuned by trial and error. Using hard constraints overcomes such limitations. The main drawback with hard constraints is that the validity indicators are quadratic in x_d, so the controller yields a quadratically constrained quadratic program (QCQP). A QCQP can be solved using available solvers, but it is more computationally demanding than a QP. Additionally, if any of the validity indicators cannot meet its constraint, the problem can be infeasible, which introduces further complications.

The approach proposed in this paper is to set hard constraints for the four validity indicators defined in Sections 3.1 and 3.2. The main drawback of hard constraints, computational complexity, is not considered a limitation in this paper, which deals with batch processes that often present slow dynamics. The second drawback, feasibility, is addressed as follows:

• J_t and J_e account for both the d.o.f. of the controller and past data. If J_t and/or J_e are used as hard constraints, the part of the indices that depends on the d.o.f. of the controller is not a problem in terms of
feasibility. However, the part that depends on past data may not lie in the region of the identification data set due to the presence of disturbances or abnormal operation. If past data does not lie in the region of the identification data set, the constraints on J_t and/or J_e may be violated regardless of the decision of the controller, which results in an infeasible problem. In particular, J_e can exceed the threshold defined for identification data because the normal region defined by the Q control limit includes residual components that are mainly noise [21]. For this reason, hard constraints on these two indices should be relaxed whenever the constraints cannot be met. Constraint relaxation is accomplished by introducing slack variables that allow some level of constraint violation if no feasible solution exists. The slack variables are such that they are non-zero only if the corresponding constraints are violated.

• J̌t and J̌e only account for the d.o.f. of the controller. These two indices ensure the trajectory for the MVs lies in the region of those contained in the identification data set. Provided these two indices only depend on variables that are decided in the optimization, there will always be a feasible solution. Consequently, hard constraints can be imposed on these two indices.

The minimization problem in Eq. (6) augmented with validity constraints yields:
\[ \min_{x_d,\,\varepsilon}\; \|r - y\|_F^2 + p\,\varepsilon\varepsilon^T \quad \text{s.t.} \quad \begin{cases} A x_d^T \le b \\ y = [x_p \;\; x_d]\,\theta \\ [J_t \;\; J_e]^T \le \mathbf{1} + \varepsilon^T \\ \varepsilon^T \ge 0 \\ [\check{J}_t \;\; \check{J}_e]^T \le \mathbf{1} \end{cases} \tag{15} \]
where ε ∈ R^{1×2} are the slack variables that soften the constraints on the validity indicators that contain past data; and p is a value large enough to force the slack-variable term added to the MPC cost function to be zero whenever the softened constraints can hold. There are approaches to set the lower bound of p such that the soft-constrained problem produces the same result as the hard-constrained problem whenever the latter is feasible [42]. However, for the sake of clarity, in this paper p is set to 1e10. Note that the limits for the constraints are set to a value of 1, which corresponds to the boundary defined by the calibration data set, given that the indices are normalized. However, if the user wants to allow some excursion in some of the indices, the corresponding limit can be increased.

The above problem is a QCQP. In a QCQP a convex quadratic function is minimized over a feasible region that is the intersection of ellipsoids [43]. There are numerous solvers that can solve QCQP problems; in this paper the Matlab fmincon function with the active-set algorithm is used.
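The following is a minimal sketch of Eqs. (7)–(15). The indicators are assembled from the identification quantities of Section 2 (Z, P, the score variances, and the 95% thresholds), and the QCQP is solved with SciPy's SLSQP as a stand-in for the MATLAB fmincon active-set solver used in the paper. Two simplifications relative to the text: the linear constraints A x_d^T ≤ b are omitted, and the same score variances are reused for the d.o.f.-only indices.

```python
import numpy as np
from scipy.optimize import minimize

def make_indicators(Z, P, var_t, lims):
    """Build a function computing the four normalized indices, Eqs. (7)-(14).
    var_t holds the score variances (diagonal of S_a^2); lims the 95% thresholds."""
    M = np.eye(Z.shape[0]) - Z @ P.T                 # residual projector, Eq. (10)
    Jt_max, Je_max, Jtc_max, Jec_max = lims

    def indicators(xp, xd):
        x = np.hstack([xp, xd])                      # full regression vector
        xc = np.hstack([np.zeros_like(xp), xd])      # d.o.f.-only vector, Eq. (12)
        t, tc = x @ Z, xc @ Z
        e, ec = x @ M, xc @ M
        Jt = float(t ** 2 @ (1.0 / var_t)) / Jt_max      # Eq. (7)
        Je = float(e @ e) / Je_max                       # Eq. (9)
        Jtc = float(tc ** 2 @ (1.0 / var_t)) / Jtc_max   # Eq. (11)
        Jec = float(ec @ ec) / Jec_max                   # Eq. (13)
        return Jt, Je, Jtc, Jec
    return indicators

def solve_eq15(theta, xp, r, n_d, indicators, p=1e10):
    """Eq. (15) via SLSQP; decision vector v = [xd, eps1, eps2].
    In practice a smaller p may be needed for numerical robustness."""
    def cost(v):
        xd, eps = v[:n_d], v[n_d:]
        y = np.hstack([xp, xd]) @ theta
        return float((r - y) @ (r - y)) + p * float(eps @ eps)

    def cons(v):
        xd, eps = v[:n_d], v[n_d:]
        Jt, Je, Jtc, Jec = indicators(xp, xd)
        return np.array([1.0 + eps[0] - Jt,   # Jt <= 1 + eps1 (softened)
                         1.0 + eps[1] - Je,   # Je <= 1 + eps2 (softened)
                         1.0 - Jtc,           # hard constraint on d.o.f. T^2
                         1.0 - Jec,           # hard constraint on d.o.f. Q
                         eps[0], eps[1]])     # eps >= 0

    res = minimize(cost, np.zeros(n_d + 2), method="SLSQP",
                   constraints=[{"type": "ineq", "fun": cons}],
                   options={"maxiter": 500, "ftol": 1e-9})
    return res.x[:n_d], res
```

Starting the solver at v = 0 corresponds to the mean (nominal) trajectory in the normalized space, which is always feasible for the two hard constraints.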
4. Further improvement of end-point control performance

The proposed constrained MPC problem in Eq. (15) provides a reliable and systematic solution for the MVs to attain the desired end-point quality. The solution is reliable in that the decision space is restricted to the region in which the model is known to be valid. The solution is systematic in that no parameters need to be tuned, except for the weight on the slack variables, which can be assumed to take a large value. However, the approach in Eq. (15) presents the following two limitations:
• The solution of the MPC problem can be understood as inverting the latent variable model while keeping the solution within a region for which predictions are known to be valid. Whenever a model is inverted there is the possibility of a null space: regions of the decision space that are almost flat in terms of the cost function, in which multiple solutions are possible.

• There are no fine-tuning parameters. Once the control strategy has been on the process for some time, it can be beneficial to have some fine-tuning parameters that can be manipulated to adjust the performance of the control system.

These two limitations are overcome in this section by weighting the validity indicators in the cost function. The resulting cost function that replaces J_C in (15) yields:

\[ J = J_C + \lambda_t J_t + \lambda_e J_e + \check{\lambda}_t \check{J}_t + \check{\lambda}_e \check{J}_e \]

Weighting the Q indices, λe and λ̌e, reduces the error of projection, seeking better predictions. Weighting the Hotelling's T² indices, λt and λ̌t, modifies the shape of the cost function to overcome the null-space problem and provides more conservative solutions by reducing the amount of exploration allowed. However, weighting introduces some bias into the original cost function and thus should be used with caution. As guidance for tuning, all the weights can initially be set to zero and then tuned by trial and error to improve performance (a sketch of this weighted cost is given below). Note that all the weights have to be positive. λe is expected to be the most relevant parameter because it weights Q: from experience in monitoring, Q is often the most sensitive indicator for out-of-range situations.

This approach trades some bias in the original cost function, J_C, to further reduce the validity indicators below the valid threshold and to shape the cost function to overcome the null space. This strategy complements the systematic approach provided in Section 3, enabling fine case-specific tuning. As shown in the example in Section 5, this further improvement, trading some bias for validity indicators, can improve the performance of the strategy.
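A sketch of the weighted cost of this section, written as a drop-in replacement for the cost() function in the Eq. (15) sketch above; the indicators(...) helper and all names come from that sketch, and the four weights correspond to λt, λe, λ̌t, λ̌e.

```python
def weighted_cost(v, theta, xp, r, n_d, indicators, p=1e10,
                  lam_t=0.0, lam_e=0.0, lam_tc=0.0, lam_ec=0.0):
    """Section 4 cost: J = J_C + lam_t*Jt + lam_e*Je + lam_tc*Jt_dof + lam_ec*Je_dof.
    All weights must be non-negative; all zero recovers Eq. (15) exactly."""
    xd, eps = v[:n_d], v[n_d:]
    y = np.hstack([xp, xd]) @ theta
    Jc = float((r - y) @ (r - y)) + p * float(eps @ eps)
    Jt, Je, Jtc, Jec = indicators(xp, xd)
    return Jc + lam_t * Jt + lam_e * Je + lam_tc * Jtc + lam_ec * Jec
```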
5. Simulation results and discussion

In this section batch trajectory optimization of a fermentation process is performed: first, a description of the process is provided; second, the predictor is obtained and validated; and finally, batch end-point optimization is performed accounting for initial conditions.

5.1. Process description

The process to be controlled is the Saccharomyces cerevisiae cultivation process presented in [44], which can be downloaded with the MP-toolbox from http://wdb.ugr.es/~josecamacho/downloads.php. The model includes 12 stoichiometry reactions and nine states with complex dynamics, namely: glucose, pyruvate, acetaldehyde, acetaldehyde dehydrogenase, acetate, ethanol and biomass concentrations, volume, and active cell material. The target is the end-point biomass produced for a fixed processing time of 10 h; the MV is the feed-rate, discretized into 100 intervals; and the assumed measurable initial state contains four variables.

• CV:
– y1: Biomass concentration (dry weight) at the end of the batch; operating point 10 g l−1.
• MV:
– u1 . . . u100: Glucose concentration in the feed solution (g l−1).
• Initial state:
– m1: Initial biomass concentration; operating point 1 g l−1.
– m2: Active cell material comprising the protein synthesis system apparatus (RNA, ribosomes), various anabolic and catabolic enzymes, transport enzymes, precursors and active metabolites; operating point 0.3 g g−1.
– m3: Acetaldehyde dehydrogenase, proportional to the measured activity; operating point 0.0075 g g−1.
– m4: Volume; operating point 7 l.
Fig. 2. Initial state in the 20 identification batches.
Fig. 3. Trajectory of MVs for the 20 identification batches.
Fig. 4. End-point quality of the 20 identification batches.
Fig. 5. Validation results for y1 with nlv = 14; RMSEP = 0.45.
5.2. Identification and validation of the model

The identification set is composed of 20 batches. 20% variability has been added to the feed law and initial state, and 1% noise has been added to the output. The initial state for the 20 identification batches is shown in Fig. 2, the MV in Fig. 3, and the quality variable in Fig. 4. All these data are mean-centered and scaled prior to identification. The linear approximation of the process can be expressed as:

\[ \hat{y}_1 = [\underbrace{m_1 \dots m_4}_{x_p} \;\; \underbrace{x_1 \dots x_{100}}_{x_d}]\,\theta \]
Provided PLS is used to obtain θ, the number of latent variables to be used needs to be decided. nlv can take any value between 1 and the rank of X. Thus, provided there are 20 batches, nlv cannot be larger than 20: nlv ∈ [1, 20]. From the root mean squared error of prediction (RMSEP) evaluated in leave-many-out cross-validation (sketched below), nlv = 14. A large number of latent variables is anticipated, as the data set is highly excited, which introduces significant independence between the variables.5

The predictive performance of the obtained model is assessed using a validation data set, consisting of 20 batches run in the same conditions as the identification batches. Fig. 5 compares the predicted and actual values for the external validation data set.6 The RMSEP for the model in external validation is 0.45. From the value of the RMSEP and the predictions in Fig. 5, the predictor is considered to approximate the process well enough to be suitable for end-point control.

5 Although control results in this paper are shown for nlv = 14, similar conclusions from the comparison of the control methodologies considered can be drawn with smaller values of nlv.
6 Note the model has been fit with normalized data, so the results shown in the figure are the predictions considering the normalization: the predictions are multiplied by the standard deviation and the mean observed in the identification data set is added.
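A sketch of the nlv selection just described: leave-many-out cross-validated RMSEP over candidate numbers of latent variables, using scikit-learn. The fold size and random seed are illustrative choices, not the paper's; Xc and Yc are the mean-centered identification matrices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def cv_rmsep(Xc, Yc, n_lv, n_splits=4, seed=0):
    """Leave-many-out cross-validated RMSEP for a given number of latent variables."""
    sq_err = []
    for tr, te in KFold(n_splits, shuffle=True, random_state=seed).split(Xc):
        pls = PLSRegression(n_components=n_lv, scale=False).fit(Xc[tr], Yc[tr])
        sq_err.append((pls.predict(Xc[te]) - Yc[te]) ** 2)
    return float(np.sqrt(np.concatenate(sq_err).mean()))

# n_lv is bounded by the rank of the training partitions (15 batches per fold here):
# rmsep = [cv_rmsep(Xc, Yc, k) for k in range(1, 16)]  -> pick the minimizer (14 here)
```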
5.3. Control results

The target is to define the trajectory of the MV, [u1 . . . u100], so that the predicted quality at the end of the batch, ŷ1, reaches a desired set point. The MV trajectory is determined given the initial state of the batch, [m1 . . . m4], and the model identified in the previous section.

Fig. 6. Trajectory of the manipulated variable. [Five rows of panels, one per control strategy A–E, and three columns for the lower, middle and upper references; each panel shows the MV trajectory over the duration of the batch.]
Three different set points for quality are evaluated: 9.5, 10, and 10.8. These values cover the range of the output measured in the identification batches, as shown in Fig. 5, where both the extreme points and mid points are covered. The following control strategies are implemented for the sake of comparison:

A. No control: the mean trajectory of the MV in the identification data set is used.
B. End-point MPC as defined in Section 2 with λu = 0.
C. End-point MPC as defined in Section 2 with λu = 1. Different values of λu have been tested with no significant improvement in closed-loop results versus λu = 1.
D. End-point MPC ensuring validity of predictions as defined in Section 3.
E. Improved end-point MPC ensuring validity of predictions as defined in Section 4. The best results were obtained for λe = 1 with all the other lambdas equal to zero. Consequently, in this particular example, penalization of Q in the cost function provides better results in terms of closed-loop performance.7 For further understanding of the effect of the weights, a sensitivity analysis is provided in Section 5.4.

7 To tune the different lambdas, four control experiments were performed; in each experiment one lambda was assigned a value of 1 while the others remained zero. This coarse tuning is enough in this case to demonstrate the advantage of trading some bias to improve predictions.
Fig. 7. (a–h) Control results for 20 batches evaluating 5 strategies for 3 set points.
Hard constraints were imposed in all cases on u so that 0 ≤ uj ≤ 0.5, ∀ j ∈ [1, 100]. The 5 control strategies at the 3 set points were evaluated for 20 batches. The initial conditions of the batches were in the range of those batches used for identification. The results obtained are represented as follows:
• Fig. 6 displays the trajectories for the 20 batches for each control strategy (row) and set point (column). The gray area represents the region spanned during identification.
• Fig. 7(a) displays the end-point value of y1 and is divided into 5 zones; each zone corresponds to a different control strategy, A–E. In each zone there are 3 different set points, drawn as thick circles. Each control strategy and set point is evaluated for 20 batches and shown as a box plot. The median of the 20 batches is represented as a dot. The edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points not considered outliers, and outliers are plotted individually as thin circles. The gray area represents the region of the identification data set.
• Fig. 7(b) displays the root squared error of prediction (RSEP). This evaluates the actual quality of the predictions obtained in the optimization. The gray area shows the RMSEP obtained in external validation (0.45); note that 0.45 was the mean value of the RSEP.
• Fig. 7(c) displays J_t. The valid region is represented as the gray area. Similarly, Fig. 7(d)–(f) contain J_e, J̌t, and J̌e.
• Fig. 7(g) displays the value of the term ‖r − y‖_F² in J_C in Eq. (15). If this value is zero, the controller believes it is actually attaining the target at the end of the batch, so any deviation is due to modeling error.
• Fig. 7(h) displays all the terms in the MPC cost function, with the exception of the term ‖r − y‖_F².

Each of the control strategies is analyzed separately:

A. If no control is implemented, variability in the initial conditions results in variability in the end-point quality. The same result is obtained for the 3 set points, as no control is implemented. Validity indicators J_t and J_e lie approximately within the gray region defined by the identification batches. Validity indicators J̌t and J̌e
are exactly zero because there is no deviation from the mean trajectory; this is equivalent to setting the decision equal to zero in the normalized space. The other indicators, such as the costs and the RSEP, are not evaluated, as no control moves or predictions are made in this strategy. This strategy is presented to provide a benchmark with which to compare the performance of the different advanced control strategies against the simplest approach.

B. MPC with λu = 0 allows the controller to make decisions outside the validity range of the model. The validity indicators are outside the valid region, especially for the extreme set points, and hence the model is used in extrapolation, resulting in RSEP values outside the gray region. The cost term cost_y equals 0, which indicates that the controller believes it is reaching the desired set point; the deviation from the set point is caused by biased predictions. From the RSEP, the error is seen to be larger for the extreme set points, hence the resulting end-point quality is more biased for these runs.

C. MPC with λu = 1 constrains the trajectory to stay close to the mean trajectory obtained during identification. The trajectories for the MV in Fig. 6 for C can be seen to be very similar to those for A. This means λu = 1 is so large that it has restricted any actions from the controller to account for changes in the initial conditions. The validity indicators are similar to those obtained in A.

D. This approach replaces λu with constraints on the validity indicators. The resulting controller outperforms all the previous results, as shown in Fig. 7(a): the mean of the achieved quality is closer to the desired set points, and the box defining the variability has shrunk considerably. According to the mean value of the RSEP, the quality of predictions lies within the valid region, which agrees with all the validity indicators lying in the gray region; however, the RSEP is above that of C. cost_y is zero, which means the controller believes it is attaining the target, so deviations from the target are caused by errors in the predictions.

E. The deviations from the desired set point and the variability in approach D are caused by errors in the predictions. In this strategy J_e is weighted in the MPC cost function so that trajectories that provide a smaller J_e are obtained. The validity indicators remain in the gray area and J_e gets closer to zero. Improved predictions are made when compared to D, as shown by the RSEP in Fig. 7(b). The risk in this approach is that cost_y can take values larger than zero, which can lead to biased end-point results. For λe = 1, cost_y remains almost at zero, indicating no bias is present, but predictions are improved. This results in better end-point results, as shown in
Fig. 7(a). One could consider further increasing λe, but there is not much more room for improvement in the RSEP, and cost_y could increase, yielding biased end-point results.

Summing up:

• The traditional approach without validity constraints relies on λu to ensure the model is only used in operating regions within which it was designed. In the case study evaluated above, λu = 0 was too small, which resulted in deterioration of the quality of predictions; and λu = 1 was too large, resulting in biased control. Intermediate values have been tested and do not provide significantly improved results. This can be confirmed from the trajectories for the MV in Fig. 6: trajectories for D and E provide desirable end-point results, and there is no intermediate trajectory between B and C that can provide a similar behavior to D or E. Consequently, with the traditional approach to batch control and for the example evaluated, only the set points in the middle of the identification region were attainable.

• Ensuring validity of predictions provides a systematic approach to attaining end-point specifications accounting for initial conditions. The approach is systematic in that no case-specific tuning is required.

• The proposed approaches (D and E) can attain end-point quality set points in the middle of the range of the identification batches with minimal variability. Extreme set points can be attained with some variability. This variability is significantly less than that of the more traditional MPC-based approaches using λu.

• The proposed approach E, which trades bias for prediction error, further improves the results of D.

5.4. Sensitivity analysis of the tuning parameters

In this section the sensitivity of the attained end-point value against each of the four tuning parameters (λt, λe, λ̌t, λ̌e) defined in Section 4 is analyzed. Fig. 8 is composed of four sub-figures; in each sub-figure one weight is analyzed for 5 different values, [0, 0.1, 1, 10, 100], with the other three weights set to 0. Each sub-figure displays the end-point value of y1 and is divided into 5 zones; each zone corresponds to a different value of the weight being analyzed. In each zone there are 3 different set points, drawn as thick circles. Each control strategy and set point is evaluated for 20 batches and shown as a box plot. The median of the 20 batches is represented
Fig. 8. (a–d) Sensitivity analysis: each of the 4 sub-figures shows the end-point results for 5 values of a specific tuning parameter setting the other 3 tuning parameters to 0.
as a dot. The edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points not considered outliers, and outliers are plotted individually as thin circles. The gray area represents the region of the identification data set.

The following observations can be drawn from the sensitivity analysis in Fig. 8:

• In the four cases, setting the weight to 0 is equivalent to case D in Section 5.3 and yields the same results.
• Large weights on the Hotelling's T²-based indices, λt and λ̌t, provide the same results obtained with no control strategy (case A in Section 5.3). The reason is that the optimization focuses on minimizing movements in the T space, and the minimum implies taking no movement. Consequently, the average trajectory is used for all batches, which is the same as when no control strategy is implemented.
• Weighting λt and λ̌t does not improve end-point results. The reason is that λt and λ̌t are movement suppression terms in the T space, which can be advantageous to shape the cost function to reduce the null-space problem. The null space consists of regions of the decision space that are almost flat in terms of the cost function; multiple solutions in those flat regions are possible. Since λt = λ̌t = 0 is the best option in this example, it can be inferred that the null-space problem is not dominant here.
• There is no significant difference in terms of sensitivity between λt and λ̌t. This means the contribution of the past (known data) to the region spanned by the identification data set in the T space is not significant compared to the actual degrees of freedom in the optimization.
• Increasing the weights on the Q-based indices, λe and λ̌e, improves the end-point results by minimizing the error of prediction. Excessively large values of the weights can deteriorate end-point performance by being overly conservative. The deterioration in end-point performance for large weights is expected to worsen if more latent variables are used in the model, because the normalized Q increases with the number of latent variables used in the model.
• Weighting the Q index that contains the past, λe, provides better results than weighting the Q-based index that neglects the past, λ̌e. This is expected to be the general case as long as predictions depend on both the initial state and the control trajectory.

From the sensitivity analysis the following rule of thumb to tune the weights can be drawn: the four weights can initially be set to 0. If the null-space problem is present, weighting the Hotelling's T² indices can help shape the cost function and mitigate the problem. Predictions can be improved by weighting the Q-based indices, especially the Q that also accounts for past data. Better predictions can lead to better end-point results, because the MPC relies on those predictions to compute the control trajectory.

6. Conclusions

This paper proposes an MPC-based end-point optimization strategy for batch processes. The strategy has been defined to be implemented at one decision point towards the beginning of the batch, but it can be straightforwardly extended to mid-course corrections. The computed trajectories for the MVs consider the initial conditions and are determined to enable the quality set point at the end of the batch to be achieved. The strategy computes the trajectories for several manipulated variables simultaneously. The main challenge tackled in this paper is to account for process-model mismatch in the optimization procedure.
Making appropriate use of the model has been shown to be highly important. This is because all processes, and particularly batch processes, are non-linear, while
the model used for predictions in the optimization is linear. Consequently, the model should only be used in the region in which it is known to be valid. Using the model outside such a region leads to extrapolation, which, given process-model mismatch, leads to poor predictions and hence poor control performance. Traditionally, movement suppression terms are utilised in MPC to restrict the control action that is taken. However, in this paper hard and soft constraints utilising measures of the validity of the predictions have been incorporated into the optimization problem. These validity constraints ensure that the optimization algorithm restricts the control action to lie within the conditions experienced when identifying the model. The major advantage of the proposed approach is that it provides a method for ensuring the validity of predictions without any need for trial-and-error tuning, and control performance is significantly improved. The major drawback of the approach is that the resulting optimization problem is QCQP in form, which is more computationally demanding than the traditional QP. It has been shown through a fed-batch example that improved end-point results are obtained using the proposed strategy. The validity approach provides improved end-point quality results and there is no need for trial-and-error tuning. Consequently, a consistent methodology has been proposed that is easy to deploy and provides parameters for further fine tuning if desired. Future work will focus on combining the proposed strategy with existing solutions to tackle the other existing challenges in MPC: handling disturbances or faults, and reducing the computational complexity.
Acknowledgments

The first author is a research associate at The University of Manchester, funded by the pharmaceutical company Pfizer.
References [1] H. Zhang, B. Lennox, Integrated condition monitoring and control of fed-batch fermentation processes, Journal of Process Control 14 (2004) 41–50. [2] C. Ündey, S. Ertunc, T. Mistretta, B. Looze, Applied advanced process analytics in biopharmaceutical manufacturing: Challenges and prospects in real-time monitoring and control, Journal of Process Control 20 (2010) 1009–1018. [3] M. Kano, Y. Nakagawa, Data-based process monitoring, process control, and quality improvement: recent developments and applications in steel industry, Computers and Chemical Engineering 32 (2008) 12–24. [4] A. AlGhazzawi, B. Lennox, Model predictive control monitoring using multivariate statistics, Journal of Process Control 19 (2009) 314–327. [5] S. Qin, Survey on data-driven industrial process monitoring and diagnosis, Annual Reviews in Control 36 (2012) 220–234. [6] A.S. Kumar, Z. Ahmad, Model predictive control (MPC) and its current issues in chemical engineering, Chemical Engineering Communications 199 (2012) 472–511. [7] N. Aziz, M. Hussain, I. Mujtaba, Performance of different types of controllers in tracking optimal temperature profiles in batch reactors, Computers & Chemical Engineering 24 (2000) 1075–1096. [8] J. Camacho, J. Pico, A. Ferrer, Self-tuning run to run optimization of fed-batch processes using unfold-PLS, AIChE Journal 53 (7) (2007) 1789–1804. [9] O. Osunnuyi, O. Marjanovic, J. Wan, B. Lennox, Trajectory tracking of exothermic batch reactor using NIR spectroscopy, in: UKACC International Conference on Control 2012, 2012. [10] J. Jewaratnam, J. Zhang, A. Hussain, J. Morris, Batch-to-batch iterative learning control using updated models based on a moving window of historical data, Procedia Engineering 42 (2012) 206–213. [11] J. Flores-Cerrillo, J. MacGregor, Latent variable MPC for trajectory tracking in batch processes, Journal of Process Control 15 (2005) 651–663. [12] M. Golshan, J. MacGregor, M. Bruwer, P. Mhaskar, Latent variable model predictive control (LV-MPC) for trajectory tracking in batch processes, Journal of Process Control 20 (2010) 538–550. [13] J. Flores-Cerrillo, J. MacGregor, Within-batch and batch-to-batch inferentialadaptive control of semibatch reactors: a partial least squares approach, Industrial & Engineering Chemistry Research 42 (2003) 3334–3345. [14] Z. Xiong, J. Zhang, C. Ren, M. Chen, Integrated tracking control of batch processes with model uncertainties by updating a linear perturbation model, in: 18th IFAC World Congress, 2011. [15] I. Chin, S. Qin, K. Lee, M. Cho, A two-stage iterative learning control technique combined with real-time feedback for independent disturbance rejection, Automatica 40 (2004) 1913–1922.
[16] M. Kettunen, S. Jamsa-Jounela, Data-based, fault-tolerant model predictive control of a complex industrial dearomatization process, Industrial & Engineering Chemistry Research 50 (2011) 6755–6768. [17] J. MacGregor, A. Cinar, Monitoring, fault diagnosis, fault-tolerant control and optimization: Data driven methods, Computers and Chemical Engineering 47 (2012) 111–120. [18] S. Aumi, P. Mhaskar, Robust model predictive control and fault handling of batch processes, AIChE 57 (2010) 1796–1808. [19] J. Flores-Cerrillo, J. MacGregor, Control of particle size distributions in emulsion semibatch polymerization using mid-course correction policies, Industrial & Engineering Chemistry Research 41 (2002) 1805–1814. [20] N. Blet, D. Megías, J. Serrano, C. de Prada, Non linear MPC versus MPC using on-line linearisation – a comparative study, in: IFAC, 2002. [21] S.J. Qin, T.A. Badgwell, A survey of industrial model predictive control technology, Control Engineering Practice 11 (2003) 733–764. [22] J. Chen, Y. Lin, Multibatch model predictive control for repetitive batch operation with input–output linearization, Industrial & Engineering Chemistry Research 51 (2012) 9598–9608. [23] R. Cagienard, P. Grieder, E. Kerrigan, M. Morari, Move blocking strategies in receding horizon control, Journal of Process Control 17 (2007) 563–570. [24] L. Wang, Model Predictive Control System Design and Implementation Using Matlab, Springer-Verlag, London, 2009. [25] D. Laurí, J. Rossiter, J. Sanchis, M. Martínez, Data-driven latent-variable model-based predictive control for continuous processes, Journal of Process Control 20 (2010) 1207–1219. [26] A. Bemporad, M. Morari, Robust model predictive control: a survey, in: Lecture Notes in Control and Information Sciences, 1999, pp. 207–226. [27] D. Laurí, J. Sanchis, M. Martínez, A. Hilario, Latent variable based model predictive control: ensuring validity of predictions, Journal of Process Control 23 (2013) 12–22. [28] F. Yacoub, J. MacGregor, Product optimization and control in the latent variable space of nonlinear PLS models, Chemometrics and Intelligent Laboratory Systems 70 (2004) 63–74. [29] F. Yacoub, J. Lautens, L. Lucisano, W. Banh, Application of quality by design principles to legacy drug products, Journal of Pharmaceutical Innovation 6 (2011) 61–68. [30] E. Tomba, M. Barolo, S. García-Muñoz, General framework for latent variable model inversion for the design and manufacturing of new products, Industrial & Engineering Chemistry Research 51 (2012) 12886–12900. [31] J. Flores-Cerrillo, J. MacGregor, Control of batch product quality by trajectory manipulation using latent variable models, Journal of Process Control 14 (2004) 539–553. [32] P. Nomikos, J. MacGregor, Multi-way partial least squares in monitoring batch processes, Chemometrics and Intelligent Laboratory Systems 30 (1995) 97–108. [33] J. Camacho, J. Pico, A. Ferrer, Bilinear modelling of batch processes. Part I: theoretical discussion, Journal of Chemometrics 22 (5) (2008) 299–308. [34] P. Geladi, B.R. Kowalski, Partial least-squares regressions: a tutorial, Analytica Chimica Acta 185 (1986) 1–17. [35] S. Wold, M. Sjöström, L. Eriksson, PLS-regression: a basic tool of chemometrics, Chemometrics and Intelligent Laboratory Systems 58 (2001) 109–130. [36] H. Martens, Reliable and relevant modelling of real world data: a personal account of the development of PLS regression, Chemometrics and Intelligent Laboratory Systems 58 (2001) 85–95.
[37] G. Gins, J. Vanlaer, P. Van den Kerkhof, J. Van Impe, Extending discrete batch-end quality optimization to online implementation, in: 8th IFAC Symposium on Advanced Control of Chemical Processes, 2012. [38] S.J. Qin, H. Yue, R. Dunia, Self-validating sensors with application to air emission monitoring, Industrial & Engineering Chemistry Research 36 (1997) 1675–1685.
[39] P. Nomikos, J. MacGregor, Multivariate SPC charts for monitoring batch processes, Technometrics 37 (1995) 1. [40] S. Qin, Y. Zheng, Quality-relevant and process-relevant fault monitoring with concurrent projection to latent structures, AIChE 59 (2013) 496–504. [41] J. Westerhuis, S. Gurden, A. Smilde, Standardized Q-statistic for improved sensitivity in the monitoring of residuals in MSPC, Journal of Chemometrics 14 (2000) 335–349. [42] E. Kerrigan, J. Maciejowski, Soft constraints and exact penalty functions in model predictive control, in: Control 2000 Conference, Cambridge, 2000. [43] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, New York, 2004. [44] F. Lei, M. Rotboll, S. Jorgensen, A biochemically structured model for Saccharomyces cerevisiae, Journal of Biotechnology 88 (2001) 205–221.
D. Laurí is an associate researcher at The University of Manchester. In 2012, he finished his PhD at the Department of Systems Engineering and Control at the Polytechnic University of Valencia (UPV). In 2008, he obtained a master's degree in Automatics and Industrial Informatics at UPV. He received a training grant at ALCOA (Aluminum Company of America) as a process engineer for one year. He received the BSc degree in Control and Automation Engineering in 2007 at UPV, honored with various academic efficiency awards. His research interests include model-based predictive control relevant identification, identification of MIMO processes in the presence of co-linearity in the data set, and control in the space of latent variables.

B. Lennox is professor of Applied Control and Head of the Control Systems Group in the School of Electrical and Electronic Engineering in The University of Manchester. He is a Senior Member of the IEEE, Fellow of the IET, Member of the InstMC and a Chartered Engineer. He received a BEng in Chemical Engineering and a PhD from Newcastle University. He worked as a Research Associate and Fellow at Newcastle and Monash Universities and was appointed Lecturer at the University of Manchester in 1997. His research interests are in the control and monitoring of batch processes, particularly those in the pharmaceutical industry, and in flow assurance in the oil and gas industry.

J. Camacho is associate professor in the Department of Signal Theory, Networking and Communications and researcher in the Information and Communication Technologies Research Centre (CITIC, for its initials in Spanish) at the University of Granada, Spain. He belongs to the Scientific Advisory Board of ReKom Biotech, a spin-off of the University of Granada, and to the Network Engineering and Security Group. He holds a degree in Computer Science from the University of Granada (2003) and a PhD from the Technical University of Valencia (2007). His PhD was awarded the second Rosina Ribalta Prize for the best PhD projects in the field of Information and Communication Technologies (ICT) from the EPSON Foundation, and the D.L. Massart Award in Chemometrics from the Belgian Chemometrics Society. He also worked as a post-doctoral fellow at the University of Girona, granted by the Juan de la Cierva program. His research interests include exploratory data analysis, pattern recognition and optimization with multivariate data analysis techniques applied to data of very different nature, including chemometrics, medicine and most especially networking.