Computers and Chemical Engineering 43 (2012) 91–107
Optimal design and control of dynamic systems under uncertainty: A probabilistic approach
Luis A. Ricardez-Sandoval
Department of Chemical Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
Article info
Article history: Received 15 November 2011; Received in revised form 19 March 2012; Accepted 21 March 2012; Available online 11 April 2012
Keywords: Process design; Process control; Monte Carlo sampling; Uncertainty

Abstract
This paper presents a new methodology for the simultaneous design and control of systems under random realizations in the disturbances. The key idea in this work is to perform a distribution analysis on the worst-case variability. Normal distribution functions, which approximate the actual distribution of the worst-case variability, are used to estimate the largest variability expected for the process variables at a user-defined probability limit. The resulting estimates of the worst-case variability are used to evaluate the process constraints, the system's dynamic performance and the process economics. The methodology was applied to simultaneously design and control a Continuous Stirred Tank Reactor (CSTR) process. A study on the computational demands required by the present method is presented and compared with a dynamic optimization-based methodology. The results show that the present methodology is a computationally efficient and practical tool that can be used to propose attractive (economical) process designs under uncertainty.
© 2012 Elsevier Ltd. All rights reserved.
1. Introduction

Dynamic systems are subject to sudden and unpredictable changes in the process characteristics and to model parameter uncertainties that affect the operability and performance of the process. While time-invariant (deterministic) uncertainties can be accounted for at the process design stage, time-varying (random) fluctuations in the disturbances affecting the process are not usually considered in the initial stages of the design of a system. This is because process design has traditionally been formulated as an optimization problem that aims to estimate the optimal design parameters and operating conditions at steady state. Once the design parameters have been fixed, a controllability study is conducted to analyze the system's dynamics and the effect of the disturbances that may affect the process during its normal operation. This sequential approach imposes restrictions on the flexibility, i.e., the ability of the process to move from one operating point to another; the feasibility, i.e., the ability of the system to operate within allowable limits in the presence of parameter uncertainties and time-varying disturbances; and the controllability of the process, since the optimal process design, obtained at steady state, may limit, or may not satisfy, the minimum performance requirements specified for the process when it is operated in transient mode. Therefore, there is a motivation to obtain the optimal design of a process that minimizes an annual
cost function and satisfies both the system's dynamic performance requirements and the process constraints in the presence of random changes in the disturbances and uncertainty in the system's parameters, i.e., to assess the optimal design of a dynamic system under uncertainty. This activity has been referred to as simultaneous design and control or the integration of design and control. A key challenge in a simultaneous design and control methodology is the estimation of the largest (worst-case) variability in a process variable due to critical realizations in the disturbance and uncertainty in the system's parameters, i.e., the worst-case scenario. The computation of this condition plays a central role in simultaneous design and control methodologies because it is used to evaluate the process constraints and the system's dynamic performance. Accordingly, the current integration of design and control methodologies can be classified based on the method used to estimate the worst-case scenario. This classification is as follows:
• Controllability index approach: These methodologies evaluate the variability in the process variables using open-loop controllability metrics such as the condition number (Luyben & Floudas, 1994; Nguyen, Barton, Perkins, & Johnston, 1988), the integral error criterion (Lenhoff & Morari, 1982), singular value decomposition (Palazoglu & Arkun, 1986, 1987) and the relative gain array (Alhammadi & Romagnoli, 2004). The process output variability estimated from these indicators was then used to assess the optimal process design. Linear systems or steady-state models were mostly used by these methods to represent the behavior of the system to be designed.
• Dynamic optimization approach: These methods proposed a formal dynamic optimization problem that aims to estimate the critical profile in the disturbance that produces the worst-case scenario (Bahri, Bandoni, & Romagnoli, 1997; Bansal, Perkins, & Pistikopoulos, 2002; Bansal, Perkins, Pistikopoulos, Ross, & Van Schijndel, 2000; Mohideen, Perkins, & Pistikopoulos, 1996). The resulting optimization formulations, which include the rigorous mechanistic (nonlinear) process model, are computationally demanding even for simple process systems. Thus, the applicability of these techniques to optimally design a large-scale dynamic system, e.g., a chemical plant, in the presence of disturbances and parameter uncertainty is challenging or even prohibitive. Alternative design and control methodologies based on approximations to the rigorous dynamic optimization strategies have also been proposed in the literature (Kookos & Perkins, 2001; Malcolm, Polan, Zhang, Ogunnaike, & Linninger, 2007; Swartz, 2004).
• Model-based approach: These methods approximate the dynamic (nonlinear) behavior of the process using suitable model structures that enable the efficient computation of the worst-case scenario (Chawankul, Ricardez-Sandoval, Budman, & Douglas, 2007; Gerhard, Monnigmann, & Marquardt, 2005; Gerhard, Marquardt, & Monnigmann, 2008; Ricardez-Sandoval, Budman, & Douglas, 2008, 2009a). Thus, these methods alleviate the computational burden imposed by the dynamic optimization methods and can be used to perform the optimal design of large-scale dynamic systems (Monnigmann & Marquardt, 2005; Munoz, Gerhard, Hannemann, & Marquardt, 2011; Ricardez-Sandoval, Budman, & Douglas, 2010, 2011). Also, a model-based strategy that follows a systematic procedure to assess the simultaneous design and control has been recently reported (Alvarado-Morales et al., 2010; Hamid, Sin, & Gani, 2010).
A comprehensive review of the integration of design and control methodologies can be found elsewhere (Ricardez-Sandoval, Budman, & Douglas, 2009b; Sakizlis, Perkins, & Pistikopoulos, 2004; Seferlis & Georgiadis, 2004). One common aspect of the current simultaneous design and control methodologies is the lack of an analysis of the worst-case scenario. That is, the current methods only estimate the largest variability in the system, i.e., the actual worst-case scenario, and use this result to evaluate both the process constraints and the system's dynamic performance to assess the optimal process design. However, the previous methods do not provide any insight regarding the probability of occurrence of the worst-case variability in the process variables. Thus, conservative (expensive) designs usually result from the application of those methodologies since the critical realizations in the disturbance that produce the worst-case scenario may rarely (or never) occur during the normal operation of the process, i.e., the probability (or the chance) that the worst-case scenario occurs may be very low. Consequently, there is a need to develop economically attractive designs that can meet the key (high-level) system requirements during the normal operation of the process, e.g., safety and environmental constraints, whereas other (low-level) design and control requirements, such as a liquid level constraint in a storage tank or a saturation limit on a valve, may be allowed to exceed their operational limits a few times during normal operation. The current simultaneous design and control methodologies are limited in the sense that they cannot assign probabilities to the worst-case variability of each of the output variables involved in the evaluation of the process constraints, the system's dynamic performance and the cost function. Consequently, the ability of the user to make decisions regarding the optimal design and control of a dynamic system under uncertainty is currently limited to the computation of the worst-case scenario.
This work presents a new practical simultaneous design and control methodology that evaluates the worst-case variability of the process variables based on a user-defined probability of occurrence. The key idea in this methodology is to approximate the probability distribution of the worst-case variability of the process variables by a normal probability distribution. Estimates of the worst-case variability at a given (user-defined) probability are computed from the approximated normal probability distribution function and are used by the present method to evaluate the process constraints and the system's cost function. Therefore, the optimal design obtained by the present approach is subject to a probability of occurrence of the worst-case variability expected for the process variables. The present probabilistic approach is intended to give the user the freedom to rank the key goals of the system to be designed and assign probabilities to each of the goals based on their level of significance. Thus, the present methodology enables the assessment of the optimal process design by assigning different probability levels to the process variables used to evaluate the process constraints and the process economics. Consequently, the user will have the ability with the present methodology to evaluate different process designs and select the design that is most suitable for their needs. Hence, the present approach provides a practical tool to support decisions on the optimal process design in the presence of uncertainty. A non-isothermal (nonlinear) Continuous Stirred Tank Reactor (CSTR) process is used as a case study to test the present simultaneous design and control methodology. This case study was also used to illustrate the expected computational burden to simultaneously design and control large-scale systems using the present methodology. This paper is organized as follows: Section 2 presents the mathematical formulation that is proposed in this work to simultaneously design and control dynamic systems under uncertainty using a probabilistic approach. The simultaneous design and control of a CSTR process using the present methodology is presented in Section 3. Section 4 presents the computational load required to optimally design and control dynamic systems with the present methodology. Concluding remarks and future directions for this research are presented at the end of this paper.

2. Simultaneous design and control methodology

This section describes the methodology proposed to perform the optimal design and control of dynamic systems under uncertainty. The present methodology assumes that a model that represents the dynamic behavior of the process is available for simulations. In the present method, the process inputs (ζ) are classified as follows:

ζ = [η, d]    (1)

where η represents the inputs that can be manipulated for on-line control. The set η can be further partitioned as follows:

η = [u, g]    (2)

where u represents the set of manipulated variables used by the control structure to maintain the operation of the process within the operational limits and g are the remaining adjustable variables that must be kept at a constant (optimal) value during the operation of the process. Following (1), d represents the set of disturbances that will be considered in the analysis for optimal design and control. The process outputs (ξ) can be classified as follows:

ξ = [y, χ]    (3)
where y represents variables that are in closed-loop or variables that are controlled through another process variable that is under closed-loop control, i.e., inferential control. The degree of variability in a few of the variables y can be used to measure the dynamic performance of the system (variability costs) or to evaluate the process
constraints in the presence of disturbances. Furthermore, the set χ represents the remaining output variables that are not controlled.

2.1. Worst-case variability analysis

The current simultaneous design and control methodologies that include the worst-case scenario return an optimal design and control scheme that is able to accommodate the critical realizations in the disturbances while maintaining the feasible operation of the process, e.g., Mohideen et al. (1996), Bansal et al. (2000), Sakizlis et al. (2004) and Ricardez-Sandoval et al. (2010). Although the worst-case scenario approach returns a robust process design that can accommodate any time-varying profile in the disturbance variables, this method is usually conservative because the maximum expected variability in the process variables may occur only a few times, or may never occur, during the normal operation of the process. The application of simultaneous design and control methodologies that include the worst-case scenario is attractive when the critical disturbance profile is known to occur very often in the process' normal operation, e.g., a combination of step changes in the disturbance variable, or when safety or environmental constraints are critical and must be satisfied for any realization in the disturbance variables, e.g., to ensure that the temperature in a reactor involving a highly exothermic reaction does not exceed its maximum allowable working temperature at any time. On the other hand, there are applications where the critical disturbance profile is so rare that the probability that this critical trajectory occurs is relatively low, and even if it occurs, it may not produce sustained damage to the surroundings or significant economic losses, e.g., the overflow of a storage tank containing a recyclable material. Moreover, the disturbance is usually defined as an unmeasured random perturbation bounded by upper and lower limits. Although this definition is suitable to estimate the worst-case scenario, there are applications where information about the disturbance characteristics is available from previous experience with similar processes or from process design heuristics. Therefore, expensive overdesigned processes may be obtained when methodologies based on the worst-case scenario are applied to systems where violations of the process constraints may be accepted up to a certain probability limit, where the probability of occurrence of the critical disturbance trajectory is low, or where information about the disturbance dynamic behavior is known a priori. To address the points mentioned above, the present work performs a distribution analysis on the worst-case variability of the process variables that may occur due to sudden and unpredictable changes in the disturbance. The resulting analysis can be used to estimate the worst-case variability of a process variable at a given (user-defined) probability. Thus, the present worst-case variability analysis provides additional degrees of freedom to the user that result in practical and more economical process designs than those obtained by simultaneous design and control methodologies that are solely based on the computation of the worst-case scenario.

2.1.1. Disturbance description

Each of the disturbances considered in the set d is assumed to be an unmeasured time-varying variable that follows a specific probability density function (PDF) for a given period of time, i.e.,

dq(t) = {dq | dq ~ PDF(αq)};  0 ≤ t ≤ tf    (4)

where dq and αq represent the qth disturbance in the set d and its corresponding probability distribution parameters. Note that the description in (4) makes no assumptions about the individual disturbance values at any time t. That is, the disturbance is considered to be a random variable whose values at any time t (0 ≤ t ≤ tf) are uncertain and not known a priori. However, the collection of
the disturbance values for a given period of time will be described by a PDF with probabilistic parameters α. Hence, dq is assumed to be a time trajectory formed by random values that are correlated by PDF(αq). The description given in (4) represents a formal mathematical definition for this variable. However, dq(t) is a continuous variable with an infinite number of values between the initial and the final time (tf). Although direct numerical integration schemes and efficient sampling techniques have been developed to address continuous distributions for time-invariant uncertainty (Bernardo, Pistikopoulos, & Saraiva, 1999a, 1999b; Diwekar & Kalagnanam, 1997), the application of these techniques to time-varying uncertain parameters is computationally intensive. To make (4) computationally tractable, the present work approximates dq(t) with m piecewise constant random values propagated through a fixed period of time. That is, the disturbance variable is discretized in m time intervals as follows:

dq(t) ≈ Σ_{i=0}^{m} dq(i);  dq(i) ~ PDF(αq),  t = iΔt,  tf = mΔt,  m = tf/Δt    (5)

where Δt is the sampling interval. Accurate descriptions of dq(t) are obtained when Δt → 0. The discrete disturbance description shown in (5) represents an input to the present methodology. Thus, the user needs to define the probability distribution for each disturbance included in the set d. If no a priori information is available for d, then it can be defined as a uniform distribution with lower and upper limits to represent the definition commonly used to estimate the worst-case scenario, i.e., a random perturbation bounded by lower and upper limits.

2.1.2. Worst-case variability distribution

The disturbance description shown in (5) is used to obtain the largest deviations in the process variables (worst-case variability) through simulations of the closed-loop process model. However, the disturbance specification given in (5) can accommodate a relatively large set of realizations for each disturbance included in the set d. Also, the worst-case variability may vary from one disturbance realization to another. Thus, the resulting set of values of the worst-case variability can be classified in a frequency histogram to determine the distribution of the worst-case variability arising from random realizations in the disturbance set. This will be referred to hereafter as the worst-case variability distribution. In principle, the complete set of realizations that complies with the disturbance specification shown in (5) needs to be tested to obtain a complete description of the worst-case variability of a process variable. Although this approach may return the most accurate description of the worst-case variability distribution, its application results in lengthy, potentially prohibitive, computational times even for simple processes. To reduce the computational demands, the present work applies a Monte Carlo (MC) sampling technique to generate time trajectories for the disturbances that comply with the description given in (5). This method was selected because it is a general and widely known technique used in engineering applications to sample from a probability distribution. Although other sampling techniques have been developed, e.g., Latin hypercube sampling (Iman, Helton, & Campbell, 1981; McKay, Beckman, & Conover, 2000), Hammersley sequence sampling (Diwekar & Kalagnanam, 1997) and orthogonal sampling (Owen, 1992), the efficiency of these techniques depends on the uncertain model parameter specifications and the corresponding process model structure, thus limiting their applicability to general process systems applications. Based on the above, the procedure proposed to generate a sample of the worst-case variability distribution is as follows:
Fig. 1. Illustrative representation of the steps to obtain the worst-case variability distribution of a process variable, p.
1. Define tf and m for each disturbance considered in the set d. Also, define the number of MC realizations to be evaluated, i.e., MCmax. Set j = 1.
2. For each disturbance included in the set d, e.g., dq, perform the following tasks:
   (a) Using a pseudorandom number generator, produce m random numbers that fit the PDF with the specific αq parameters. Assume that dq(t = 0) = dq,nom, where dq,nom represents an initial nominal value for the qth disturbance, e.g., the disturbance's steady-state value.
   (b) Propagate the set of random numbers in the time domain to generate a trajectory for dq, i.e., each of the random numbers represents a disturbance value in the time domain, dq(i) (see Eq. (5)).
3. Using the time trajectories in the set d as inputs, simulate the closed-loop process model to obtain

Θp(j) = max p    (6)

where Θp(j) represents the largest variability in a process variable p for the jth realization of the disturbance set d. The process variable p may be an output variable (ξ) that is used to measure the process economics or the system's dynamic performance, or a manipulated variable (u) that needs to be within limits to maintain the feasible operation of the process. For example, p may represent the temperature in a reactor, whose maximum variability needs to be within limits, or the mass flowrate in a product stream, whose variability can be used to measure the performance of a process system. Note that (6) computes the largest variability of p in the positive direction. Problem (6) can be readily reformulated to compute the largest variability of p in the negative direction by searching for the lowest (minimum) value of p due to changes in the set d.
4. Set j = j + 1 and go back to step 2. This procedure is repeated until j = MCmax.

The procedure outlined above assumes that Θp (Θp ∈ R^MCmax) is identified around a nominal operating condition defined by the process variables g, the set points for the controlled variables, y*, and the nominal steady-state values of the manipulated variables u. Thus, Θp is only valid around a specific nominal operating point. Consequently, Θp must be updated for each new operating condition that is tested by the proposed design and control methodology. In the present approach, a subset of the process variables (λ) that define the system's operating condition will be selected as optimization (decision) variables. A degrees of freedom analysis on the closed-loop process model is used in the present method to select the most suitable optimization variables for a given system, i.e., λ. The resulting process variables (λ) together with the controller tuning parameters (Λ) represent the set of optimization variables for the present methodology, i.e., δ = [λ, Λ]. Accordingly, the MC sampling procedure described above must be performed for each new set of values considered in the decision variable set, δ.
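As an illustration of steps 1–4, the following Python sketch shows how the worst-case variability samples could be collected. This is a minimal sketch, not the implementation used in the paper: the names sample_worst_case, simulate_closed_loop, sample_disturbance and uniform_disturbance are hypothetical placeholders for the user's own closed-loop process model and disturbance description, and numpy is assumed to be available.

import numpy as np

def sample_worst_case(simulate_closed_loop, sample_disturbance, mc_max, m, seed=0):
    # Monte Carlo sampling of the worst-case variability (steps 1-4 above).
    # simulate_closed_loop(d_traj) -> closed-loop time response of the monitored variable p
    # sample_disturbance(rng, m)   -> m + 1 piecewise-constant disturbance values, Eq. (5)
    rng = np.random.default_rng(seed)
    theta_p = np.empty(mc_max)
    for j in range(mc_max):
        d_traj = sample_disturbance(rng, m)      # step 2: one random realization of d
        p_traj = simulate_closed_loop(d_traj)    # step 3: simulate the closed-loop process model
        theta_p[j] = np.max(p_traj)              # Eq. (6): largest variability of p (positive direction)
    return theta_p                               # sample of the worst-case variability distribution

# Default disturbance description when no a priori information on d is available:
# a uniform PDF between lower and upper bounds (a bounded random perturbation).
def uniform_disturbance(rng, m, low=-1.0, high=1.0):
    return rng.uniform(low, high, size=m + 1)

Replacing np.max with np.min in the loop returns the largest variability in the negative direction, as noted for problem (6).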
The vector Θp describes the worst-case variability expected for p due to random realizations in d. Frequency histograms are generated for Θp to evaluate the worst-case variability distribution expected for p. Fig. 1 shows a schematic representation of the procedure used by the present methodology to generate the worst-case variability distribution arising from random variations in the disturbance set d. The resulting histograms are useful to determine the expected (mean) values of the worst-case variability in p due to random realizations in the set d, which follow specific probability distributions (see Eq. (5)). Also, the tails of the histograms represent the most critical worst-case values expected for the process variable p during the normal operation of the process. Hence, the histogram can be used to analyze the largest variability expected for the process variable p and its corresponding probability of occurrence. Nevertheless, the histograms obtained from this analysis are expected to be non-Gaussian because most of the engineering applications of interest are described by nonlinear mechanistic process models. However, the present approach computes the worst-case variability of the process variables in closed-loop, i.e., the simulations to obtain Θp are performed in closed-loop. Thus, the controllers' actions are expected to maintain the dynamic operation of the process around the nominal operating condition specified by δ. Hence, it is expected that the distributions of the worst-case variability of the process variables lie in a region near a nominal (steady-state) operating condition, i.e., δ. That is, the distribution of the worst-case variability in the process variables is expected to be narrow and centered on an expected (mean) value. Also, the central limit theorem states that the distribution of a process variable, generated from independent random realizations in the input variables, will approximate a normal (Gaussian) distribution regardless of the distribution of the input variables (Montgomery & Runger, 2007). Therefore, the present work considers that the histograms that describe the distribution of the worst-case variability can be approximated by a normal (Gaussian) probability distribution. Although other probability distribution models can also be used in the present methodology to describe the distribution of the worst-case variability of a process variable, e.g., exponential or lognormal distributions, the normal probability distribution is the most widely used model to describe the probability distribution of a random variable because it is often regarded as an appropriate approximation in most engineering applications. Also, the normal distribution can be used to approximate other types of probability distributions, e.g., the binomial or Poisson distributions (Montgomery & Runger, 2007). The central limit theorem also states that the size of the sample used to describe the distribution of a random variable determines the quality of the sampling distribution and thus the accuracy of the normal approximation. While a large sample will accurately describe the distribution of a random variable, large computational costs are usually required to produce a large sample for dynamic systems. Thus, there is a trade-off between accuracy in the sampled distribution and computational costs. Although a large number of samples can be generated for the random variable
of interest, there is no guarantee that the sample will capture the complete characteristics of the distribution of a random variable. However, an appropriate number of MC realizations can identify the key characteristics that describe the probability distribution of the process variable to be sampled. To address this trade-off, the present work embeds the MC sampling procedure described above within an iterative algorithm to obtain a sampling distribution that captures the key statistical properties of the distribution of the actual worst-case variability at minimum computational cost. The algorithm proposed to approximate the actual worst-case variability distribution by a normal probability distribution is as follows:

1. Define a tolerance criterion (tol) that will be used to approximate the worst-case variability distribution by a normal probability distribution. Also, set n = 1.
2. Perform the MC sampling procedure described above to obtain the worst-case variability distribution for a process variable p (Θp) due to random realizations in the disturbance set d. Set Θp,n = Θp, where the array Φp is used to store current and past worst-case variability distributions for p. The array Φp is defined as follows:

Φp = [Θp,1, Θp,2, . . . , Θp,n]^T;  Φp ∈ R^(n·MCmax)    (7)
3. Generate a frequency histogram for Φp. Fit the histogram of the distribution of Φp, i.e., f(Φp), to a normal probability distribution model, i.e.,

f(Φp) ≈ (1/√(2π σp,n²)) exp(−(Φp − μp,n)² / (2σp,n²))    (8)

where μp,n represents the nominal (mean) value of the worst-case variability distribution of variable p estimated at the nth iteration step. Similarly, σp,n represents the standard deviation of the worst-case variability of variable p estimated at the nth iteration step. The standard deviation (σp,n) indicates the dispersion of the worst-case variability expected for the pth process variable. The model parameters for the normal probability distribution function are estimated using the minimum variance unbiased estimator (Montgomery & Runger, 2007):

μp,n = (1/(n·MCmax)) Σ_{j=1}^{n·MCmax} Φp(j)

σp,n = sqrt( (1/(n·MCmax − 1)) Σ_{j=1}^{n·MCmax} (Φp(j) − μp,n)² )    (9)
4. Compare the current mean and standard deviation model parameters with those obtained at the previous iteration (n − 1) for each process variable p, i.e.,

μerror = [μerror,1, . . . , μerror,p, . . . , μerror,P]
σerror = [σerror,1, . . . , σerror,p, . . . , σerror,P]
μerror,p = (μp,n − μp,n−1) / μp,n
σerror,p = (σp,n − σp,n−1) / σp,n    (10)

where μerror and σerror represent vectors that collect the approximate relative tolerance errors in the mean and the standard deviation model parameters at the current iteration step. The parameter P in (10) denotes the complete set of process variables that require a worst-case variability analysis to assess the
optimal design with the present method, i.e., μerror and σerror are of length P.
5. If ‖μerror‖∞ > tol or ‖σerror‖∞ > tol, then at least one of the normal probability distributions obtained at the current iteration step does not capture the key statistical properties of its corresponding worst-case variability distribution at the given degree of accuracy, tol. Thus, the MC sampling procedure needs to be redone to obtain a normal probability distribution that is representative of the actual worst-case variability of those process variables that did not meet the tolerance error criterion (tol) at the current iteration step. Accordingly, the procedure described above is redone for all the process variables, including those that may have satisfied the relative tolerance error criterion. Although the previous procedure could be repeated only for those variables that did not meet the tolerance error criterion, the computational costs associated with the implementation of the MC sampling procedure, i.e., the simulation of the closed-loop process model for random realizations in the set d, are the most intensive calculations that need to be performed in the present methodology. That is, the computational time required to do the worst-case variability analysis for a single process variable is several orders of magnitude smaller than the computational time needed to perform the MC sampling procedure described above. The results from the MC sampling procedure provide new samples of the worst-case variability for all the process variables included in the analysis. Therefore, these new estimates, together with the worst-case variability samples obtained at previous iterations, are used to identify normal probability distribution models for those process variables that did not meet the tolerance criterion in the current iteration step. Also, these new estimates are used to improve the accuracy of the normal probability distribution models of those process variables that already met the tolerance criterion at the current iteration step. Based on the above, if the tolerance error criterion specified above is not satisfied, then update n = n + 1 and go back to step 2.
6. If ‖μerror‖∞ ≤ tol and ‖σerror‖∞ ≤ tol, then STOP; normal probability distributions that capture the key statistical properties of the distribution of the actual worst-case variability of the process variables have been identified. The model parameters for each of the normal probability distributions at the nth iteration (μp,n, σp,n) are redefined as μp* and σp*, respectively.

The identification of a normal probability distribution function with mean μp* and standard deviation σp*, i.e., N(μp*, σp*), represents the key step in the present optimal design and control methodology since it enables the computation of the worst-case variability of a process variable at a given (user-defined) probability limit. That is, the present approach provides a systematic method to estimate the worst-case variability at different levels of confidence. Therefore, an optimal design and control methodology based on the probability of occurrence of the worst-case variability in the process variables can be formulated. The present approach enables the user to define probabilities for the worst-case variability of those process variables that are included in the evaluation of the process constraints and the system's dynamic performance.
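A compact Python sketch of this iterative fitting loop, and of the subsequent evaluation of the worst-case variability at a user-defined probability, is given below. It is only an illustration under stated assumptions: run_mc_pass is a hypothetical callable that returns one batch of MCmax worst-case variability samples (for example, the sample_worst_case sketch shown earlier), and scipy.stats.norm supplies the inverse normal cumulative distribution, which plays the same role as the norminv calculation mentioned in Section 2.2.

import numpy as np
from scipy.stats import norm

def fit_until_converged(run_mc_pass, tol=0.01, max_iter=20):
    # Iterative normal fit of the worst-case variability distribution (steps 1-6).
    phi_p = np.array([])                                # accumulated sample array, Eq. (7)
    mu_prev, sigma_prev = None, None
    for n in range(1, max_iter + 1):
        phi_p = np.concatenate([phi_p, run_mc_pass()])  # step 2: append a new MC batch
        mu = phi_p.mean()                               # Eq. (9): mean estimate
        sigma = phi_p.std(ddof=1)                       # Eq. (9): unbiased standard deviation
        if mu_prev is not None:
            mu_err = abs((mu - mu_prev) / mu)           # Eq. (10): relative parameter changes
            sigma_err = abs((sigma - sigma_prev) / sigma)
            if max(mu_err, sigma_err) <= tol:           # step 6: both tolerances satisfied
                return mu, sigma
        mu_prev, sigma_prev = mu, sigma
    return mu, sigma                                    # parameters of N(mu_p*, sigma_p*)

def worst_case_at_probability(mu_star, sigma_star, pr_p):
    # Worst-case variability of p at the user-defined probability Pr_p (inverse normal CDF).
    return norm.ppf(pr_p, loc=mu_star, scale=sigma_star)

For example, worst_case_at_probability(mu, sigma, 0.99) returns the variability value that the closed-loop response is expected to remain below 99% of the time, whereas a probability of 0.5 simply recovers the mean worst-case variability, μp*.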
Accordingly, high probabilities can be assigned to those process variables that need to be within limits at all times in the presence of sudden or unpredictable changes in the disturbance, whereas low probabilities can be specified for those process variables that can exceed their operational limits due to critical realizations in the disturbances. The normal probability distribution function identified from the present method can also be used to provide an estimate of the actual worst-case scenario by setting a high probability for the worst-case variability of a process variable. Moreover, the present approach also provides additional information that
Fig. 2. Schematic representation of the worst-case variability analysis.
can be used to reduce the conservatism and the costs of the process design in the presence of random realizations in the disturbances. For example, the normal probability distribution function can provide the largest variability that is most likely to occur for a process variable arising from random changes in the disturbances. Thus, a more economically attractive design can be obtained if the expected (mean) value of the worst-case output variability (μp*) is used in the simultaneous design and control calculations, at the expense that the final process design may exceed operational limits or fail to meet the performance requirements approximately half of the time. Accordingly, the user has the ability to simultaneously design and control a process that meets their expectations by assigning levels of confidence to the key aspects involved in the process design. The computation of the worst-case variability at a given probability level using the identified normal probability distribution function model is as follows:

Θp* = {Θp* : F(Θp*, μp*, σp*) = Prp} = F⁻¹(Prp, μp*, σp*)    (11)

Prp = F(Θp*, μp*, σp*) = ∫_{−∞}^{Θp*} f(v) dv = (1/(σp* √(2π))) ∫_{−∞}^{Θp*} exp(−(v − μp*)² / (2σp*²)) dv

where Θp* represents the worst-case variability of process variable p at the probability limit Prp, which represents the user-defined probability at which the worst-case variability distribution, represented here by the normal probability distribution function, is evaluated. The function F(Θp*, μp*, σp*) in (11) is the cumulative normal probability distribution function evaluated at Θp*. The numerical evaluation of F⁻¹(Prp, μp*, σp*) is performed rapidly using subroutines in standard computational packages, e.g., the norminv built-in function in MATLAB. Note that the computation of Θp* is only a calculation, i.e., it is not an optimization problem. Fig. 2 illustrates the key benefits of the present approach. While conservative and expensive designs can be obtained using only the actual worst-case scenario (Θwc), the present approach enables the user to select probabilities (Prp) at which the worst-case variability expected for p can be evaluated (Θp*). Thus, the present method provides a systematic approach to back off from the actual worst-case scenario calculation. This is a key feature introduced by the present work since economically attractive designs, which meet the user's expectations in the presence of random variations in the disturbances, can be obtained with this method. Accordingly, the present worst-case scenario distribution method may return more realistic, economical and less conservative design and control schemes than those obtained with previous simultaneous design and control methodologies that are only based on the computation of the worst-case scenario, i.e., Θwc.

2.2. Optimal design and control methodology

Normal probability distribution functions describing the worst-case variability distribution can be identified for each of the process variables that need to be within limits to maintain the feasible operation of the system or for the variables that are used to measure the system's dynamic performance and the cost function. Thus, the probabilistic-based simultaneous design and control methodology proposed in this work is formulated as follows:

min_{δ=[λ,Λ]} CF = CC(y, χ, Θ*) + OC(y, χ, Θ*) + VC(y, χ, Θ*, Λ)

s.t.
lk(y, χ, Θ*) ≤ 0,  k ∈ K
Θp* = F⁻¹(Prp, μp*, σp*),  Θp* ∈ Θ*
eig(A(x)|δ) < 0
δl ≤ δ ≤ δu    (12)
where lk represents the kth process constraint considered in the simultaneous design and control formulation. Each process constraint may be a function of the input (ζ) or the output (ξ) process variables. The worst-case variabilities of the time-varying process variables considered in each constraint lk are collected in Θ*. Each of the elements in Θ* is estimated using the algorithm described in the previous section, i.e., compute the worst-case variability for p at a given probability level (Prp) using a normal probability distribution function model, which describes the distribution of the worst-case variability expected for p due to random changes in the set d. As mentioned above, the process variable p can be a variable that is controlled in closed-loop or using inferential control (y) or a manipulated variable used by the control structure (u). That is, p can be any variable that changes in the time domain due to changes in d and that is needed to evaluate problem (12). The cost function (CF) in problem (12) is defined as the sum of the capital costs (CC), the operating costs (OC) and the variability costs (VC). The capital costs (CC) represent the costs of the units included in the process flowsheet. Cost correlation functions, which provide estimates of the cost of a unit given its dimensions, e.g., the diameter and height of a mixing tank, have been developed and are available in the literature, e.g., Towler and Sinnott (2012), Seider, Seader, Lewin, and Widagdo (2009) and Peters, Timmerhaus, and West (2003). The operating (utility) costs (OC) represent the costs
incurred from the services or utilities that are needed to run the process, e.g., cooling water, steam and electricity. The annual operating costs are estimated from the expected annual consumption of a utility multiplied by its price. Factors and indexes that specify the prices of the most common utilities per unit of time are available in the literature, e.g., Seider et al. (2009) and Peters et al. (2003). Although the CC and the OC costs are often considered steady-state costs, these functions may also be affected by process variables that are changing in time due to changes in the disturbances. For example, a key variable that directly impacts the capital cost in the design of a vessel is the volume holdup. The variability in the vessel's liquid level, and thus the volume holdup, is affected by the critical time-varying realizations in the inlet flowrate, which can be considered as the disturbance. Thus, the capital cost in this example, which is a direct function of the vessel's size, has to account for the worst-case variability expected in the volume holdup due to the critical realizations in the inlet flowrate. To apply the formulation presented in (12) to this example, a normal probability distribution function that describes the worst-case variability distribution for the volume holdup in the presence of critical realizations in the disturbances needs to be identified. The resulting normal probability distribution function, which captures the key statistical properties of the actual worst-case variability of the tank's volume holdup, can be used to estimate the worst-case variability in this variable at a given (user-defined) probability, i.e., Θp*. The worst-case variability in the volume holdup of the vessel can then be used to estimate the capital cost for this system. Following the cost function in (12), the variability cost (VC) represents a cost that measures in economic terms the dynamic performance (variability) of the design in closed-loop. The variability cost function is problem specific and is specified based on the goals to be attained by the process design and control configuration. Variability cost functions are usually defined by assigning a dollar value to the key process variables that are used to measure the dynamic performance of the system. Variability cost functions can be specified to minimize the variability of product rate and product quality, minimize product loss from a purge stream during the transient operation of the system, or minimize variability in key process or environmental constraints that have a direct effect on the process economics, e.g., reduce penalty costs for the emissions of a pollutant by maintaining the variability of that pollutant within specs in the presence of disturbances. Case studies that show the specification of variability cost functions for different applications are available in the literature (Gutierrez, Ricardez, Budman, & Prada, 2011; Ricardez-Sandoval, Budman, & Douglas, 2009c; Ricardez-Sandoval et al., 2011). The optimization variables (δ) considered in (12) are the controller tuning parameters (Λ) and a subset of process variables (λ), selected from the nominal values of the output variables that are under inferential control (y), the nominal values of the manipulated variables (u), the set points for the controlled variables (y*) and the time-independent input variables (g). Note that the set Θ* is only valid for a specific operating condition defined by the values in δ. Thus, the set Θ* needs to be recalculated for every new set of values in δ tested by the optimization algorithm. Consequently, new normal probability distribution functions around δ need to be identified for each process variable p at each optimization iteration step. The process model equations and the controller equations that describe the closed-loop dynamic behavior of the system do not explicitly appear in problem (12). However, they are implicitly considered in the formulation since they are needed to generate samples of the worst-case variability from simulations of the closed-loop process model. The resulting samples are then used to build frequency histograms of the actual worst-case variability, f(Φp). The function f(Φp) represents the key input data that need to be generated to identify the normal
probability distribution function, which is used to estimate the worst-case variability of variable p (Θp*) at a given probability level (Prp). As shown in (12), a nominal stability test has been included in the present formulation to ensure that the optimal design and control scheme obtained with the present method is nominally stable. To perform this test, the sensitivity matrix of the states of the closed-loop system, i.e., A(x), must be obtained around the system's nominal operating point defined by δ. The system is said to be nominally stable around δ if all the eigenvalues of matrix A are less than zero. Otherwise, it can be concluded that the system is unstable. Matrix A can be obtained using available numerical differentiation techniques (Maly & Petzold, 1996). Moreover, a second stability check is included in the present optimal design and control formulation to test the system's dynamic stability in the presence of random perturbations. This stability test is as follows:

Θp*(iter) / Θp*(iter − 1) ≤ β    (13)
where Θp*(iter) and Θp*(iter − 1) represent the estimates of the worst-case variability obtained for the variable p at the current and the previous iteration steps of the optimization algorithm, respectively. Similarly, β represents a tolerance criterion used to test whether the system's current operating point defined by δ is stable in the presence of random realizations in the disturbance set d. The selection of β is problem-specific and represents an input to the present methodology. Preliminary testing using stable and unstable operating conditions for different systems indicates that a suitable value for this parameter is 100. That is, the present methodology assumes that if the estimate of a process variable at the current iteration step is at least two orders of magnitude larger than that obtained in the previous optimization iteration step, then the system's current state may be unstable. To rule out instability, a formal internal stability test must be conducted. In the present work, a robust stability test based on the construction of a quadratic Lyapunov function is used to determine the system's asymptotic stability. The procedure to implement the robust stability test has been explicitly described in a previous study (Ricardez-Sandoval et al., 2009c). In principle, the robust stability test can be applied at every iteration step in the present formulation; however, this test can be computationally intensive for large-scale systems since it involves the identification of an uncertain state space model that captures the system's closed-loop nonlinear behavior. Thus, the robust stability test is only applied in the present methodology when inequality (13) is not satisfied for any of the p process variables considered in the optimal design and control formulation. Nevertheless, the present methodology applies the Quadratic Lyapunov (QL) stability test on the final solution obtained by this method to ensure that the system's final design and control configuration is asymptotically stable. In the theoretical case that the optimal solution is found to be unstable, problem (12) must be modified to include the QL stability test as a constraint that needs to be evaluated at every iteration step. Then, problem (12) must be re-solved with the added stability constraint. However, the latter situation did not occur in the scenarios studied with the present methodology, i.e., the optimal solution obtained with the present method returned design and control configurations that satisfy the QL stability test and thus are assumed to be asymptotically stable. Problem (12) can be defined as a nonlinear constrained stochastic optimization problem combined with dynamic simulations. Thus, the present work solved problem (12) using a genetic algorithm (GA), which is an optimization algorithm that simulates the process of natural evolution using a random selection method (Holland, 1992). Although the GA is a global optimization algorithm, this method is computationally intensive because it needs to be
Fig. 3. Algorithm for the probability-based optimal design and control methodology.
restarted many times to obtain reliable estimates of the decision variables. As will be discussed in Section 4, the formulation presented in (12) is computationally parallelizable, thus reducing the computational costs associated with the GA method. Fig. 3 summarizes the steps proposed in this work to conduct the optimal design and control of dynamic systems under uncertainty using a distribution analysis of the worst-case variability of the process variables. As shown in Fig. 3, the current optimal design and control approach offers the advantage that the two most time consuming steps in the method can be executed in parallel, i.e., the generation of the samples needed to build f(Φp) and the corresponding computation of Θp*. As illustrated in Fig. 3, the nominal stability test is the first calculation performed in the formulation. This is done to reduce the computational time in the case that the operating condition specified by δ is found to be nominally unstable.

2.3. Remarks

The accuracy of the actual worst-case variability distribution function, f(Φp), is directly related to MCmax, i.e., the number of MC simulations used to generate the set Θp (see Section 2.1.2). In fact, there is no guarantee that a complete representation of the true worst-case variability distribution can be obtained by setting MCmax to a large value. However, an appropriate number of MC simulations can capture the basic characteristics of the actual worst-case variability distribution. Additionally, the iterative approach proposed in this work improves the estimates of Θp* since it ensures that the key statistical properties of the actual worst-case variability distribution are captured by the normal probability distribution function model shown in (8). Moreover, the selection of the number of MC trials (MCmax) is not trivial since there is no generic procedure that determines its optimal value. The specification of the number of trials involves a compromise between accuracy in the results and computational load, i.e., a large number of MC trials produces more accurate process descriptions at the expense of requiring lengthy computational
times. The number of MC realizations is commonly selected based on preliminary simulation tests, process heuristics and the computational resources available. Using an MC approach with a large number of trials may be critical or even prohibitive for on-line applications such as real-time optimization (RTO) or on-line control. However, the present methodology is implemented off-line, thus allowing the use of a sufficiently large number of MC realizations. Furthermore, the MC procedure proposed in the previous section to generate estimates for Θp is fully parallelizable since the MC realizations are independent of each other (see Fig. 3). Thus, a larger number of MC realizations can be reached in reasonable simulation times since the proposed MC simulation procedure can be implemented in parallel on a CPU that has multiple processors or on a High Performance Computing (HPC) cluster. The methodology proposed in this work assumes that the control structure and the process flowsheet are known a priori and remain fixed during the calculations. That is, the present methodology only addresses the equipment sizing, the optimal operating conditions and the optimal controller tuning parameters that will meet the process design goals in the presence of random realizations in the disturbance set. In principle, iterative decomposition algorithms can be proposed to integrate structural decisions within the present optimal design and control methodology. The iterative algorithms can include a dynamic flexibility analysis, a feasibility analysis and a stability analysis. The dynamic flexibility analysis seeks the most suitable process design and control structure for a given realization in the process disturbances. Therefore, the dynamic flexibility analysis can be considered a mixed-integer stochastic optimization problem since it involves the evaluation of multiple process design and control structures. Note that the design obtained from the dynamic flexibility analysis is only valid for the disturbance profile used in the calculations. Hence, a feasibility analysis is needed in the design algorithm to ensure that the optimal design obtained from the dynamic flexibility analysis remains valid in the presence of other critical realizations in the disturbances. In the feasibility analysis, the optimal process design and control structure obtained from the dynamic flexibility
analysis is fixed during the calculations; therefore, the feasibility analysis can be defined as a nonlinear stochastic optimization problem. Moreover, a stability analysis is also needed in the iterative decomposition algorithm to guarantee that the optimal process and control design configuration, which satisfies the feasibility analysis, remains stable in the presence of external perturbations. Although the addition of integer parameters can be done to account for structural changes in the design and control configurations, the solution of the resulting mixed-integer nonlinear stochastic optimization problem in the dynamic flexibility analysis stage may be computationally challenging. Due to the computational demands, most of the optimization methods currently available to solve this type of problem have only been applied to small problems (Sahinidis, 2004). Thus, the application of stochastic integer methods to solve large-scale optimal design and control problems using the methodology proposed above may be highly computationally intensive or even prohibitive. Future work on this research topic includes the development of an efficient optimal design and control methodology that can account for integer decisions while using the present probabilistic approach.

3. Case study: optimal design and control of a non-isothermal CSTR process

Fig. 4. Flowsheet for the CSTR process.

The methodology presented in the previous section was applied to simultaneously design and control a CSTR process. Fig. 4 presents the CSTR process flowsheet. Due to changes in the upstream processes, the inlet stream's flowrate, qF(t), is assumed to be an unmeasured random perturbation whose fluctuations in the time domain can be described with a probability density function. Thus, qF(t) is assumed to be the time-varying disturbance (d) in this process and it is defined according to (5). An exothermic irreversible reaction that transforms reactant A into product B, i.e., A → B, is assumed to be occurring in the CSTR process. The reaction rate is considered to be first order, with the reaction rate constant defined in terms of the Arrhenius expression, i.e.,

rA = k0 CA exp(−E/RT)    (14)

where CA (mol/L) is the concentration of reactant A inside the reactor at any time t, k0 is the pre-exponential factor (7.2 × 10^10), E is the activation energy (83145 J/mol) and R is the gas constant (8.3144 J/mol K). The CSTR process includes a cooling jacket that maintains the temperature inside the reactor within limits. The temperature and the concentration of reactant A in the inlet stream (TF and CA,F) are initially assumed to remain constant during the operation of the CSTR process, i.e., TF = 350 K and CA,F = 1 mol/L. As will be shown in Section 3.2, these two variables will also be considered as disturbances within the analysis. The mechanistic model that describes the dynamic behavior of this process is as follows:

V̇ = qF − q

Ṫ = (qF(TF − T) − Qc)/V + H0 k0 CA exp(−E/RT)/(ρ Cp)

ĊA = qF(CA,F − CA)/V − k0 CA exp(−E/RT)    (15)

Qc = 48.1909 u1

q = 10 u2 √V
where the CSTR's outlet flowrate (q) is defined as a function of the volume (V) inside the reactor at any time t. Likewise, the cooling heat flow (Qc) regulates the amount of heat transferred to the process. The variables u1 and u2 represent the available manipulated variables for this process, i.e., u in definition (2). Therefore, q and Qc can be adjusted to control the operation of the CSTR process. As shown in (15), the states of the system are given by the volume (V), the temperature (T) and the concentration of reactant A (CA) inside the reactor at any time t. These variables also represent the system's output variables that can be used for on-line control, i.e., y in definition (3). Following (15), ρ is the fluid's density (1000 g/L), Cp is the fluid's heat capacity (0.239 J/g K) and H0 is the heat of reaction (4.78 × 10^4 J/mol). A multi-loop structure composed of two PI controllers has been considered to control this process. The PI controllers are defined as follows:

u1 = ū1 + Kc1 e1 + (Kc1/τ1) ∫0^t e1 dt
u2 = ū2 + Kc2 e2 + (Kc2/τ2) ∫0^t e2 dt
e1 = T* − T
e2 = V* − V        (16)
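For illustration, a minimal discrete-time sketch of the two PI laws in (16) is given below; the variable names and the rectangular integration of the error are choices of this sketch, not of the paper, and the pairing of u1 and u2 with the physical manipulated variables is left generic:

```python
def pi_outputs(T, V, T_sp, V_sp, u1_bar, u2_bar,
               Kc1, tau1, Kc2, tau2, ie1, ie2, dt):
    """One evaluation of the two PI laws in (16).
    ie1, ie2 are the running integrals of the errors and are updated here."""
    e1 = T_sp - T            # error of the loop driven by T* (Eq. (16))
    e2 = V_sp - V            # error of the loop driven by V* (Eq. (16))
    ie1 += e1 * dt           # rectangular integration of the error
    ie2 += e2 * dt
    u1 = u1_bar + Kc1 * e1 + (Kc1 / tau1) * ie1
    u2 = u2_bar + Kc2 * e2 + (Kc2 / tau2) * ie2
    return u1, u2, ie1, ie2
```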
That is, the outlet flowrate (q) and the cooling heat flow (Qc) are used to control the liquid level and the temperature inside the reactor, respectively. Therefore, V and T represent the controlled variables for this process. This pairing selection was made using a relative gain array (RGA) analysis complemented with closed-loop simulations. The remaining output variable (CA) is an output (y) that is maintained within limits through the temperature control loop, i.e., CA is controlled inferentially. Following (16), e1 and e2 are the errors for each controller, defined in terms of the set points, i.e., T* and V*, and the corresponding measurements of T and V, respectively. The terms Kc1, τ1, Kc2 and τ2 represent the controller tuning parameters, whereas ū1 and ū2 represent the nominal (steady-state) operating condition for the manipulated variables u1 and u2, respectively. The objective for the present CSTR case study is to estimate the reactor's size, the process operating conditions and the controller tuning parameters that minimize the annualized costs for this system while maintaining feasible operation of the CSTR process in the presence of random fluctuations in the inlet stream flowrate, qF(t). The operational constraints specified for this process are as follows:

400 ≤ T(t) ≤ 480
CA(t) ≤ 0.05        (17)
While the first constraint imposes limits on the minimum and the maximum allowed working temperature in the reactor, the second constraint indicates that the reactant concentration at the outlet stream may not exceed 0.05 mol/L during the continuous operation of the system. That is, an outlet stream with a high
content of product B needs to be obtained for this process. Thus, the performance of this system can be measured in terms of the ability of the system to continuously produce, in the presence of random realizations in the inlet stream flowrate, qF(t), a product stream with a low content of component A. Hence, the worst-case variability in the molar flowrate of component A in the product stream, wA (mol/min), is used in the present analysis to measure the dynamic performance of the CSTR process. Based on the above, the optimal design and control methodology shown in problem (12) is reformulated for the present case study as follows:

min CFCS
s.t.
400 − Φ*T min ≤ 0
Φ*T max − 480 ≤ 0
Φ*CA max − 0.05 ≤ 0
Φ*V max = F⁻¹(Pr V max, μ*V max, σ*V max)
Φ*T max = F⁻¹(Pr T max, μ*T max, σ*T max)
Φ*T min = F⁻¹(Pr T min, μ*T min, σ*T min)
Φ*wA max = F⁻¹(Pr wA max, μ*wA max, σ*wA max)
Φ*CA max = F⁻¹(Pr CA max, μ*CA max, σ*CA max)
eig(A(V, T, CA)) < 0, evaluated at the nominal operating point
decision variables [T*, C̄A, Kc1, τ1, Kc2, τ2], each bounded between lower and upper limits        (18)
where C̄A represents the nominal (steady-state) concentration of component A in the product's stream. To obtain the set point for the volume holdup inside the reactor (V*) and the nominal values for ū1 and ū2, the mechanistic process model shown in (15) is solved at steady-state using as inputs the current estimates of the decision variables selected by the optimization algorithm. The annualized cost function (CFCS) considered for the CSTR process is as follows:

CFCS = CCCS + VCCS
CCCS = 1917 r (Diam^1.066 H^0.802)
VCCS = 5.256 × 10^5 β Φ*wA max
Diam = [ 0.03532 Φ*V max / ((π/4)·2) ]^(1/3)
H = [ 0.03532 Φ*V max · 4 / (π/4) ]^(1/3)        (19)

where the annualized capital cost function (CCCS) is defined as a direct function of the reactor's size. The reactor is assumed to be a pressurized vessel of a given height, H, and diameter, Diam. As shown in (19), the computation of the capital cost requires the evaluation of the worst-case variability in the volume holdup, Φ*V max. To obtain an estimate for Φ*V max, a normal probability distribution function that describes the actual worst-case variability in the reactor's volume holdup due to random perturbations in the inlet stream flowrate is identified following the procedure presented in Section 2.1.2. The resulting normal probability distribution function is then used to estimate the worst-case variability expected for the reactor's volume holdup at a given (user-defined) probability level, Pr V max (see Eq. (11)). Thus, the capital cost function considered in the present analysis takes into account the worst-case fluctuations in the reactor's liquid level due to critical realizations in qF(t). The term r in CCCS represents the annualized rate of return on the investment (r = 0.1). Following (19), the variability cost (VCCS) assigns a dollar value to the dynamic performance of the closed-loop CSTR process. As mentioned above, the performance of this system is measured in terms of the variability in the molar flowrate of component A in the product's stream. Therefore, VCCS is defined as a function of the largest variability expected in the molar flowrate of component A at the outlet stream, i.e., Φ*wA max. A normal probability distribution function, which describes the actual worst-case variability of component A's molar flowrate in the product's stream, needs to be identified to estimate Φ*wA max at a specified probability limit, Pr wA max. The parameter β in (19) represents a conversion factor used to measure the controllability of the process in economic terms. For the present case study, β represents a cost assigned to Φ*wA max and is set to $0.0002/mol of A. The energy consumption (operating) costs have been neglected for the CSTR process because they are orders of magnitude smaller than the capital and the dynamic performance (variability) costs. The optimal design and control formulation presented in (18) includes the operational constraints considered for the CSTR process. To evaluate the worst-case variation expected in the process constraints due to changes in the disturbance, qF(t), normal probability distribution functions representing the actual worst-case variability of each of the process variables included in the process operational constraints need to be identified.
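A minimal sketch of how the worst-case estimates and the cost function (19) could be evaluated is given below. The normal fit and the inverse cumulative distribution follow the description above (scipy's norm.ppf plays the role of F⁻¹), while the vessel aspect ratio H = 2·Diam used for sizing is an assumption of this sketch rather than a statement taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def worst_case_estimate(samples, prob):
    # Fit a normal distribution to the sampled worst-case variability values and
    # return the value that covers `prob` of the fitted distribution (cf. Eq. (11)).
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return norm.ppf(prob, loc=mu, scale=sigma)

def annualized_cost(phi_V_max, phi_wA_max, r=0.1, beta=2e-4):
    # Evaluation of (19); the sizing below assumes an aspect ratio H = 2*Diam.
    V_ft3 = 0.03532 * phi_V_max                       # reactor holdup, L -> ft^3
    diam = (V_ft3 / ((np.pi / 4.0) * 2.0)) ** (1.0 / 3.0)
    height = 2.0 * diam
    cc = 1917.0 * r * diam ** 1.066 * height ** 0.802  # annualized capital cost, CCCS
    vc = 5.256e5 * beta * phi_wA_max                   # variability cost, VCCS
    return cc + vc
```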
Estimates of the worst-case variability of each of the process variables, i.e., Φ*T max, Φ*T min and Φ*CA max, can be obtained from the process variables' normal probability distribution functions evaluated at a given confidence level, i.e., Pr T max, Pr T min and Pr CA max. The estimates of the worst-case variability obtained for each of the process variables are calculated around a nominal operating condition specified by the current estimates of the decision variables. Thus, the procedure described in Section 2.1.2 to identify the normal distribution functions used to represent the actual worst-case variability distribution of the process variables needs to be repeated for each new set of values assigned to the decision variables by the optimization algorithm. Although the CSTR process model and the controller equations shown in (15) and (16) do not explicitly appear in problem (18), they are implicitly included in the formulation since they are used to generate the frequency histograms of the worst-case variability for each of the process variables considered in the analysis. Following problem (18), the state matrix A is defined as a function of the states of the CSTR system, i.e., V, T and CA. The state matrix A is evaluated around a nominal operating condition defined by the current values of the decision variables. Consequently, the nominal stability test shown in (18) must be performed at each optimization step. In the present analysis, the elements of the sensitivity matrix were estimated using finite differences; however, efficient methods for the computation of the sensitivities are available in the open literature (Kookos, 2003). Problem (18) is a stochastic nonlinear constrained optimization problem that also requires dynamic simulations of the closed-loop CSTR process model. This problem is stochastic because the process variables included in the process constraints and the cost function are evaluated from normal probability distribution functions, estimated from samples of the process variables' worst-case variability distribution, which are obtained with the MC sampling technique presented in Section 2.1.2. The formulation shown in (18) was programmed in MATLAB™ and solved using the ga subroutine, which implements the GA optimization method. The CSTR formulation presented in (18) was solved under different scenarios to show the key features of the present optimal design and control methodology. These scenarios are discussed next. A study of the computational costs associated with the present simultaneous design and control approach is presented in Section 4.
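The paper solves (18) with MATLAB's ga. The sketch below mimics that outer loop in Python, with scipy's differential_evolution standing in for the GA, a penalty treatment of the probabilistic constraints, and a cheap algebraic surrogate in place of the MC-based worst-case analyses; all three choices are assumptions of this sketch made only so that the loop can be executed end to end:

```python
from scipy.optimize import differential_evolution

def worst_case_surrogate(theta):
    # Cheap stand-in for the MC worst-case analyses of Section 2.1.2.
    T_sp, CA_bar, Kc1, tau1, Kc2, tau2 = theta
    spread = 5.0 / (1.0 + abs(Kc1))          # tighter control -> smaller variability
    return {"T_max": T_sp + spread, "T_min": T_sp - spread,
            "V_max": 60.0 + spread, "CA_max": CA_bar + 0.005,
            "wA_max": 200.0 * (CA_bar + 0.005)}

def objective(theta):
    phi = worst_case_surrogate(theta)
    # Simplified stand-ins for CCCS and VCCS in (19).
    cost = 191.7 * phi["V_max"] ** 0.6 + 105.12 * phi["wA_max"]
    # Penalties enforcing the probabilistic constraints of (18).
    pen = 1e4 * max(0.0, 400.0 - phi["T_min"]) \
        + 1e4 * max(0.0, phi["T_max"] - 480.0) \
        + 1e6 * max(0.0, phi["CA_max"] - 0.05)
    return cost + pen

# Bounds on [T*, C_A_bar, Kc1, tau1, Kc2, tau2]; the numbers are illustrative only.
bounds = [(400, 480), (0.0, 0.05), (-10.0, -0.01), (0.05, 2.0),
          (-10.0, -0.01), (0.05, 2.0)]
res = differential_evolution(objective, bounds, popsize=10, maxiter=50, seed=1)
print(res.x, res.fun)
```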
3.1. Disturbance description and output probability levels

The two key inputs that need to be specified for the present optimal design and control methodology are the description of the disturbances affecting the system (see Eq. (5)) and the probabilities assigned to the process variables' worst-case variability (Prp) that are used to evaluate the process constraints, the system's dynamic performance and the cost function. The disturbance description can be specified based on process heuristics, e.g., assigning a probability distribution that is more likely to occur during the operation of the process, or from previous experience with similar process systems. However, a uniform distribution bounded by upper and lower limits can also be specified if no a priori information is available for the disturbance. The latter probability distribution may result in conservative designs because it represents an approximation to the computation of the worst-case scenario, i.e., the largest variability in a process variable due to the most critical realization in the disturbance set. Nevertheless, the resulting design will also be able to accommodate the most critical realizations in the disturbance set that may affect the process. On the other hand, the probability levels assigned to each of the process variables (Prp) represent the degree of confidence, assigned by the user, at which the process variables that appear in the process constraints and the cost function are evaluated. Thus, this parameter represents a compromise between conservative (expensive) designs that will be satisfied at all times, i.e., Prp → 1, and attractive (economic) designs for which violations in the process constraints or the system's dynamic performance are expected to occur during the normal operation of the process, i.e., Prp < 1. Therefore, both the disturbance description and Prp are decision-making input parameters that will directly affect the process design obtained by the present method. Based on the above, the optimal design and control formulation shown in (18) for the CSTR process was solved using different disturbance descriptions and probability limits for the process variables' worst-case variability. The disturbance, i.e., the inlet feed flowrate, was defined using two different probability distributions: a normal probability distribution with mean 200 and standard deviation 3, i.e., qF(t)∼N(200, 3), and a uniform probability distribution with an upper and a lower limit of 220 L/min and 180 L/min, respectively, i.e., qF(t)∼U(220, 180). For each disturbance specification, the probability of occurrence of the worst-case variability in the process variables (Prp) that appear in the process constraints and the cost function in (19) was set to either 50% or 99.73%. Problem (18) was solved for the four different scenarios considered in the present analysis. Each scenario represents a combination of the disturbance descriptions and the probability assigned to the worst-case variability of the process variables. Problem (18) was solved using the GA method with a relatively large population. Multiple starts were performed to validate the results. Likewise, MCmax and m were set to 1000 and 100, respectively. The process constraints and the cost function described in (18) and (19) require the estimation of the worst-case variability of five process variables at a given probability limit, i.e., Φ*V max, Φ*wA max, Φ*T max, Φ*T min, and Φ*CA max. Thus, five normal probability distribution functions, representing the probability distribution of the worst-case variability of each of these process variables, need to be identified to completely evaluate the formulation shown in (18). The resulting normal probability distribution functions were used to estimate the worst-case variability of each of the process variables using the probabilities specified for the process variables, i.e., Pr V max, Pr wA max, Pr T max, Pr T min and Pr CA max. Table 1 shows the results obtained for the scenarios considered in the present analysis. The results indicate that the key process variables that directly affect the CSTR's design are the volume holdup in the reactor (V), the reactant's molar flowrate at the outlet stream (wA) and the liquid level controller's tuning
Table 1
Optimal design and control schemes obtained for a single disturbance.

                           qF(t) ∼ N(200, 3)              qF(t) ∼ U(220, 180)
Prp (scenario no.)         50% (Sc1)     99.73% (Sc2)     50% (Sc3)     99.73% (Sc4)

Process variables
  Φ*V max                  63.83         66.12            72.49         74.21
  Φ*wA max                 10.26         10.39            10.89         10.30
  CA,ss                    0.0474        0.0464           0.0436        0.0426
  T*                       479.50        479.30           478.71        478.69
  Kc,1                     −1.4536       −1.2475          −1.5829       −2.7876
  Kc,2                     −0.3940       −0.3717          −0.5841       −1.2578
  τ1                       0.9320        0.7056           0.3280        0.7053
  τ2                       0.9638        0.9989           0.9961        0.9667

Cost breakdowns
  CCCS                     740.21        901.86           1510.72       1723.51
  VCCS                     1079.42       1092.21          1145.13       1083.39
  CFCS ($/year)            1819.63       1993.07          2655.85       2806.90
parameters (Kc,1 and τ1), respectively. This is because the only disturbance considered in the present analysis is the inlet stream's flowrate, qF(t), which has a direct effect on the process variables and the controller tuning parameters mentioned above. As shown in Table 1, the costs of the CSTR designs obtained when the disturbance was specified as a uniform distribution are higher than those obtained when the disturbance was assumed to follow a normal probability distribution. This is because any value between the upper and lower limits specified for the uniform distribution has the same probability of occurring, whereas the values that are more likely to occur in the normal distribution approximation are those near the mean (expected) value. Thus, larger deviations (variability) in the process variables are expected when a uniform distribution is used to describe the disturbance's fluctuations in the time domain. Accordingly, conservative and robust designs are obtained when the disturbance follows a uniform probability distribution because those process designs are able to accommodate drastic and sudden realizations in the disturbance set. On the other hand, the worst-case deviations in the process variables are smaller when the disturbance is described with a normal probability distribution, which translates into less conservative and economically attractive process designs. Following Table 1, the costs of the CSTR designs obtained when Prp was set to 50% are approximately 8.7% (qF(t)∼N(200, 3)) and 5.8% (qF(t)∼U(220, 180)) lower than those obtained when the probabilities assigned to the process variables' worst-case variability were set to 99.73%. These results indicate how spread out the distributions of the worst-case variability values for the CSTR's process variables are. Also, the standard deviation (σ*p) in the normal probability distribution function is an indicator of the dispersion in the distribution of the worst-case variability expected for each process variable. Table 1 also shows that the capital cost function (CCCS) proposed for the CSTR process is more sensitive to sudden and drastic fluctuations in the inlet feed flowrate (uniform distribution) than to random perturbations that follow a normal probability distribution. As shown in (19), CCCS is defined as a function of the worst-case variability expected in the CSTR's volume holdup (Φ*V max), which is the variable used to estimate the size of the CSTR. Thus, larger variability in the volume holdup is expected when drastic fluctuations in the inlet feed flowrate are considered, which are more likely to occur when qF(t) is approximated with a uniform probability distribution. Accordingly, setting qF(t)∼U(220, 180) in the CSTR's optimal design and control formulation returns a capital cost that is 50% higher than that obtained when the inlet stream flowrate is described with a normal probability distribution, i.e., qF(t)∼N(200, 3).
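A self-contained sketch of the sampling step referred to above (the procedure of Section 2.1.2) is shown below: MCmax random realizations of the discretized disturbance are generated, each realization is propagated through a closed-loop model, the extreme value of the monitored variable is recorded, and a normal distribution is fitted to the recorded extremes. A simple first-order lag is used here as a surrogate for the closed-loop CSTR model, so the numbers produced are illustrative only:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
MC_max, m, dt = 1000, 100, 1.0      # MC trials, discretization points, sampling interval

def closed_loop_max_deviation(d_profile, tau_cl=5.0):
    # First-order surrogate used in place of a full simulation of the CSTR model (15)
    # with the PI controllers (16); it only illustrates the sampling procedure.
    y, y_max = 0.0, 0.0
    for d in d_profile:
        y += dt * (d - y) / tau_cl
        y_max = max(y_max, abs(y))
    return y_max

samples = np.empty(MC_max)
for i in range(MC_max):
    qF = rng.normal(200.0, 3.0, size=m)                 # one realization of qF(t) ~ N(200, 3)
    samples[i] = closed_loop_max_deviation(qF - 200.0)  # worst-case deviation for this run

mu, sigma = samples.mean(), samples.std(ddof=1)   # normal approximation of the histogram
phi_50 = norm.ppf(0.50, loc=mu, scale=sigma)      # worst-case estimate at Pr = 50%
phi_9973 = norm.ppf(0.9973, loc=mu, scale=sigma)  # worst-case estimate at Pr = 99.73%
```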
Fig. 5. Simulation results for Sc3. (a) Reactor's volume holdup, (b) reactor's temperature and (c) molar flowrate of component A at the outlet stream. Dashed lines: optimal solution point. (Figure in color is available on the web version of this article.)
To validate the results obtained by the present analysis, the optimal design and control configurations obtained for each scenario were simulated for 100,000 profiles in the inlet stream flowrate, qF (t). The realizations in qF (t) were randomly selected following the criteria defined for this variable, i.e., qF (t)∼N(200, 3) or qF (t)∼U(220, 180). Fig. 5 shows the actual distribution of the worst-case variability expected for the reactor’s volume holdup, the temperature inside the reactor and the reactant’s molar flowrate at the outlet stream obtained when qF was set to a uniform distribution and Prp was set to 0.5 (Sc3 in Table 1). The red dashed lines in the figure denote the optimal solution obtained with the present methodology. As shown in Fig. 5, the optimal design and control solution obtained for Sc3 covers approximately 53%, 49% and 48% of the worst-case variability expected for the volume holdup (V),
the molar flowrate of component A at the outlet stream (wA) and the temperature inside the reactor (T), respectively, which is in reasonable agreement with the probability limits assigned to those process variables, i.e., Pr V max = Pr wA max = Pr T max = 0.5. Likewise, Fig. 6 shows the distributions of the worst-case variability in V, wA and T obtained for Sc4 (see Table 1). As shown in Fig. 6, the optimal design obtained with the present method covers all the scenarios obtained from the simulations. This result was expected since the probability limit assigned to the key process variables in this system was set to a relatively high value, i.e., Pr V max = Pr wA max = Pr T max = 0.9973. This result shows that the present methodology can also be used to simultaneously design and control dynamic systems using a worst-case scenario approach by setting the disturbance specification to a uniform distribution and assigning probabilities close to unity to the worst-case variability in the process variables. The other two scenarios considered in the present analysis, Sc1 and Sc2, were also validated in the same fashion and are not discussed here for brevity.
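The validation described above amounts to checking the empirical coverage of each estimate against its assigned probability level. A minimal sketch, with placeholder numbers standing in for the 100,000 simulated worst-case values, is:

```python
import numpy as np

rng = np.random.default_rng(1)
# `observed` stands for the worst-case values collected from the validation runs of one
# process variable; the distribution parameters below are placeholders, not paper data.
observed = rng.normal(loc=72.45, scale=0.08, size=100_000)
phi_star = 72.49                           # worst-case estimate returned by the method
coverage = np.mean(observed <= phi_star)   # empirical fraction covered by the estimate
print(f"empirical coverage: {coverage:.1%} (compare with the assigned Pr level)")
```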
Fig. 6. Simulation results for Sc4. (a) Reactor's volume holdup, (b) reactor's temperature and (c) molar flowrate of component A at the outlet stream. Dashed lines: optimal solution point. (Figure in color is available on the web version of this article.)
3.2. Effect of multiple disturbances

Table 2
Cases considered with multiple disturbances.

                     Case 1          Case 2
Disturbances
  qF(t)              N(200, 3)       N(200, 3)
  TF(t)              N(350, 5)       N(350, 5)
  CA,F(t)            N(1, 0.2)       N(1, 0.2)
Probability levels
  Pr V max           0.50            0.6827
  Pr T max           0.50            0.9973
  Pr T min           0.50            0.9973
  Pr CA max          0.50            0.6827
  Pr wA max          0.50            0.6827

Table 3
Optimal design and control schemes obtained for multiple disturbances.

                     Case 1          Case 2
Process variables
  Φ*V max            96.46           101.76
  Φ*wA max           9.9212          9.9614
  CA,ss              0.0334          0.0326
  T*                 478.33          477.66
  Kc,1               −3.4033         −3.5402
  Kc,2               −1.0036         −1.5482
  τ1                 0.9755          0.9763
  τ2                 0.9839          0.8856
Cost breakdowns
  CCCS               7487.10         10,109.69
  VCCS               1043.23         1047.14
  CFCS ($/year)      8530.33         11,156.83

The aim of this section is to perform the optimal design and control of the CSTR process under the effect of multiple disturbances. That is, the present scenario expands the formulation presented in the previous section by adding two additional disturbances to the optimal design and control formulation of the CSTR process: the temperature of the inlet stream (TF) and the concentration of reactant A in the inlet stream (CA,F). Note that the model parameters that were set to constant values in this case study, e.g., E, k0 and H0, can also be treated in the same way as the process disturbances by providing specific probability distributions that describe the realizations of those parameters. Table 2 shows the disturbance specifications and the probability levels assigned to the worst-case variability for the two cases considered in this analysis. As shown in Table 2, the first case (Case 1) sets the probability of the CSTR's process variables to 50% whereas the second case (Case 2) presents a situation where the process variables are assigned different probability limits, i.e., the minimum probability assigned to a process variable for Case 2 is 68.27%. For comparison purposes, the same disturbance descriptions were specified for Case 1 and Case 2 (see Table 2). The optimal design and control formulation shown in (18) for the CSTR process was solved for the two cases considered in the present analysis. As in the previous scenario, problem (18) was solved using the ga built-in function in MATLAB™ that implements the GA algorithm. The size of the population in the GA algorithm was set to 60, i.e., 10 times the length of the decision variable vector. This population size was found to provide similar results when problem (18) was solved using multiple starts. Similarly, MCmax and m were set to 3000 and 100, respectively. As mentioned in the previous section, there is no general framework to select the optimal number of MC samples required in a simulation. The maximum number of MC samples is usually assigned from experience or process heuristics. In the present analysis, different numbers of MC samples were tested from simulations of the CSTR process. The values assigned to MCmax and m were found to be suitable for the present methodology since they returned representative results of the probability distribution of the worst-case variability for the process variables considered for the CSTR process. As in the previous scenario, the distribution of the worst-case variability of each of the process variables included in the formulation presented in (19) was approximated by a normal probability distribution function following the procedure described in Section 2.1.2. As shown in Table 3, the optimal design obtained for Case 2 is 23.5% more expensive than that obtained for Case 1. This result was expected because the probabilities assigned to the CSTR's process variables for Case 1 are all lower (50%) than those assigned for Case 2. Similarly, the
optimal design obtained for Case 1 is almost 79% more expensive than that obtained for Sc1 (see Table 1), where the only disturbance considered in the analysis was the inlet stream flowrate. By inspecting the variability and the capital costs obtained for Sc1 and Case 1, it can be observed that the variability costs did not change significantly whereas the capital costs changed by more than one order of magnitude. As shown in (19), the variability costs are a direct function of the variability in component A's molar flowrate at the outlet stream, which is directly related to the random changes in the concentration of component A in the inlet stream. Thus, these results show that the disturbance description used in the present analysis to represent the random changes in the concentration of component A at the inlet stream does not significantly affect the operation of the process. Consequently, the random fluctuation in the inlet stream's temperature, TF(t), is the key disturbance that significantly affects the operation of this process. The results show that this disturbance has significant effects on the worst-case variability of the reactor's temperature and the reactor's volume holdup, where the latter is the key variable used to estimate the capital costs for this process. Moreover, the results in Tables 1 and 3 indicate that the optimal operation of the CSTR process is near the maximum allowable working temperature. As shown in Table 1, the reactor's temperature set point (T*) obtained for Sc1 is very close to the maximum allowable working temperature specified for this process (480 K). However, Case 1 reported a T* that is slightly lower than that reported for Sc1. This is because Case 1 was solved under the assumption that TF(t) is a time-varying disturbance, which produces a variability in the reactor's temperature that is larger than that expected for Sc1, where TF was assumed to be a constant parameter. Accordingly, Case 1 specified a reactor set point temperature that is slightly farther from the maximum temperature constraint to be able to accommodate the worst-case variability expected for this process variable. Consequently, the costs to operate the CSTR process under the assumptions given in Case 1 are higher than those obtained for Sc1 because the reactor's set point temperature needs to be specified farther from the maximum temperature limit, which represents the condition where the CSTR process can be operated at minimum cost. Following Table 3, the solution obtained for Case 2 returned a T* that is lower than that obtained from Case 1's optimal design. As shown in Table 2, the probability assigned to the worst-case variability of the reactor temperature for Case 1 and Case 2 is 0.5 and 0.9973, respectively. Thus, in order for the optimal design to meet its goals, the reactor's set point temperature for Case 2 needs to be specified farther from the maximum temperature constraint because it needs to cover a wider range of the worst-case variability expected for this process variable (99.73%) than that considered for Case 1 (50%). Based on the above, it is clear that the reactor's temperature needs to be under closed-loop control to ensure that the CSTR process remains near its optimal operating condition.
Fig. 7. Simulation of the optimal design and control configuration obtained for Case 1. (a) Reactor's volume holdup, (b) reactor's temperature and (c) molar flowrate of component A at the outlet stream. The dashed lines denote the optimal solution obtained by the present method. (Figure in color is available on the web version of this article.)
Accordingly, tight control of the reactor's temperature is needed to maintain the variability of this variable close to its nominal set point value. This action will enable the specification of a set point for the reactor temperature that is closer to the maximum temperature constraint, i.e., operation of the CSTR at minimum cost. As shown in Tables 1 and 3, the temperature controller for Case 2 is more aggressive (larger magnitude of the proportional gain) than those obtained for Case 1 and Sc1, respectively. This is because the variability in the reactor's temperature expected for Case 2 is larger than that expected for Case 1 and Sc1, respectively. Defining a less aggressive temperature controller for Case 2 may result in larger variability in the reactor's temperature, which would produce an expensive design because T* would have to be set to a value farther away from the maximum allowed temperature. Additionally, the temperature controller specified for Case 1 is more aggressive than that specified for Sc1 for the same reasons explained above. The optimal designs obtained for Case 1 and Case 2 were also validated using the approach followed in the previous section, i.e., performing extensive simulations of the optimal designs due to random realizations in the disturbance set. Fig. 7 shows the distribution of the worst-case variability for the reactor's volume holdup and temperature, and the molar flowrate of component A at the outlet stream obtained for Case 1. As shown in the figure, the process variables satisfied the probability limits assigned to their corresponding worst-case variability. Fig. 8 shows the simulation results obtained for Case 2. As shown in this figure, the reactor's temperature worst-case variability is satisfied in 99.84% of the cases whereas the worst-case variability in the reactor's volume holdup and in component A's molar flowrate at the outlet stream is satisfied in 53% and 52% of the cases tested for Case 2, which agrees with the probability limits assigned to the worst-case variability of these process variables (see Table 2). These results show that the worst-case variability analysis introduced in this work can be used to conduct optimal design and control studies using different probability limits for the worst-case variability of the key process variables. The designs obtained from this study will allow the user to select the optimal process design that will satisfy the system's operational demands and constraints up to a user-defined probability limit and in the presence of random realizations in the disturbance set. Therefore, the user will have the flexibility to make a robust process design decision under uncertainty.

Fig. 8. Simulation of the optimal design and control configuration obtained for Case 2. (a) Reactor's volume holdup, (b) reactor's temperature and (c) molar flowrate of component A at the outlet stream. Dashed lines: optimal solution. (Figure in color is available on the web version of this article.)

4. Computational costs

One of the main limitations of the current simultaneous design and control methodologies concerns the computational time needed to assess the optimal design of large-scale process systems. The goal of this section is to present the expected computational costs that may be required to implement the present methodology for the simultaneous design and control of large-scale systems in the presence of random realizations in the disturbances. As shown in Section 2, the calculation of the worst-case variability at a given probability limit (Φ*p) represents the main calculation that needs to be repeated as many times as process variables are included in the analysis. Since Φ*p is also a function of the decision variables, the calculations to obtain Φ*p must be performed at each iteration step in the optimization algorithm. Thus, Φ*p represents the core calculation that needs to be performed by the present method to assess the optimal design of process systems under uncertainty. As shown in Fig. 3, computer parallelization methods can be used to estimate Φ*p since the calculation of each of its elements is independent from the others. Therefore, the parallelization of the computations needed to obtain Φ*p can significantly decrease the computational demands required by the present method. The only restriction imposed by a parallelization method is the need for a CPU with multiple processors. However, personal computers with multiple cores or HPC systems are becoming standard computational tools in academia and in industrial R&D departments. An alternative method that
can be used to obtain estimates for Φ*p is to solve the rigorous worst-case scenario formulation (a dynamic optimization problem), which can be defined as follows:

max over dq ∈ d of Φwc,p
s.t.
f(ẋ, x, d, t) = 0
h(ċ, c, u, t) = 0
t ∈ [0, tf)        (20)

where Φwc,p represents the largest variability in a process variable p due to the critical (worst-case) realization in the disturbance set, d. Problem (20) searches for the critical profile in the set d that generates the worst-case variability for p. While Φwc,p represents the actual worst-case variability, Φ*p provides an estimate of the worst-case variability expected for the variable p at a given probability, Prp. Thus, Φ*p can be obtained at different probability limits whereas problem (20) only returns the worst-case variability (Φwc,p), i.e., an estimate of the worst-case scenario subject to a probability of occurrence cannot be obtained from problem (20). Nevertheless, Φ*p approaches Φwc,p when Prp → 1. Thus, both approaches are directly related when the latter condition is used to estimate Φ*p. Consequently, the computational costs required by both methods can be compared when this condition is met. Based on the above, the case study presented in Section 3.1 was used to evaluate the computational demands required to estimate Φ*p and Φwc,p, respectively. Accordingly, the present analysis estimated the computational time required to compute the variability in the CSTR's volume holdup using both approaches, i.e., Φ*V max and Φwc,V. The only disturbance considered in the analysis was the inlet stream's flowrate, i.e., qF(t)∼N(200, 3). Likewise, the reactor's volume holdup variability was estimated for a specific operating condition, i.e., the decision variables were fixed in the calculations. The key input parameters that have a direct effect on the computational time required to estimate Φ*V max and Φwc,V are the number of
discrete points used to describe qF(t), i.e., m in (5), and the number of MC realizations used to generate the nonlinear probability distribution in the reactor's liquid level, i.e., MCmax (see the procedure in Section 2.1.2). While the parameter m has an effect on the computational time needed to estimate both Φ*p and Φwc,p, the parameter MCmax only affects the computational costs associated with Φ*p. As shown in Fig. 9, the calculation of Φwc,V using (20) is only computationally attractive when the number of discretization points used to describe the disturbance is sufficiently low. Although setting m to a low value is suitable for describing sluggish processes, i.e., processes whose maximum variability is observed at the settling time, most closed-loop dynamic systems present overshoots and nonlinearities, e.g., inverse responses, which require the use of
small sampling intervals (Δt) or a large number of discretization points. Fig. 9 also shows that, at high values of m, the computational costs required to estimate Φwc,V are at least 8 times higher than those needed for Φ*V max. In fact, the inner plot in Fig. 9 shows that the computational demands required to estimate Φwc,V grow at a high exponential rate whereas the computation of Φ*V max has a computational cost that grows at a low exponential rate. Furthermore, Fig. 9 shows that there is roughly a 1:1 ratio between the computational costs needed to estimate Φ*V max and MCmax. This ratio suggests that the computational costs are expected to increase by one order of magnitude when MCmax is also increased by an order of magnitude. The computational efforts required to obtain the worst-case variability for n process variables are expected to increase n times if the computations are performed in a serial fashion.

Fig. 9. Effect of the discretization points (m) and the number of MC trials (MCmax) on the computational time required to estimate Φ*V max and Φwc,V.

As shown in Fig. 3, a key feature of the present method is that it can be parallelized to reduce the computational costs. To show the benefits of implementing a parallelization technique to solve the optimal design and control formulation proposed in this work, the computational time required to perform a single evaluation of the optimization problem shown in (18) was estimated for different MCmax values. As shown in problem (18), five worst-case variability analyses need to be performed to evaluate the process constraints and the cost function considered in the CSTR case study. To conduct this study, the Parallel Computing Toolbox in MATLAB™ was used to parallelize the worst-case variability analyses needed to evaluate problem (18). For the present analysis, m was set to 100 and qF(t)∼N(200, 3). This study was performed on an Intel Xeon @ 2.27 GHz (12 GB RAM) with 4 physical cores. Fig. 10 presents the results obtained for the CSTR case study using multiple processors. As shown in this figure, the computational time decreases asymptotically as the number of processors used for the evaluation of the optimization function shown in (18) is increased. The results show that the computational load can be reduced to almost a quarter when the number of processors used to evaluate the optimization function shown in (18) goes from 1 (serial calculation) to 4. It is expected that the use of more physical cores will reduce the computational load required by the present methodology even further. Fig. 10 shows that the 1:1 ratio observed between the CPU time and MCmax (see discussion above) also holds when parallel programming techniques are used to solve the simultaneous design and control formulation proposed in the present work.

Fig. 10. Computational times required for a single function evaluation of problem (18) using multiple processors (MCmax = 100, 1000 and 10,000).
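The paper uses MATLAB's Parallel Computing Toolbox; the same idea can be expressed in Python with the standard library, as in the skeleton below. The helper function is a placeholder standing in for the MC procedure of Section 2.1.2 applied to one process variable:

```python
from concurrent.futures import ProcessPoolExecutor

VARIABLES = ["V_max", "T_max", "T_min", "CA_max", "wA_max"]

def worst_case_analysis(var_name):
    # Placeholder: in the actual method this would run MCmax closed-loop simulations
    # for `var_name` and return the fitted (mean, standard deviation) of its
    # worst-case variability distribution.
    return (0.0, 1.0)

def evaluate_all(n_workers=4):
    # The five analyses are mutually independent, so they can run in parallel.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        fitted = dict(zip(VARIABLES, pool.map(worst_case_analysis, VARIABLES)))
    return fitted

if __name__ == "__main__":
    print(evaluate_all(n_workers=4))
```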
The results shown in this section can be used as a guideline to estimate the expected computational demands that may be required to conduct the simultaneous design and control of large-scale systems in the presence of random realizations in the disturbances. Also, these results show that the present methodology is more computationally efficient than a dynamic optimization-based methodology since the present method can be executed using parallel programming techniques.
5. Conclusions
A new methodology that applies a probabilistic approach for the simultaneous design and control of dynamic systems in the presence of random perturbations has been presented in this work. The key idea in this methodology is to perform a distribution analysis of the worst-case variability expected during the normal operation of the system. The systematic method developed in this work enables the evaluation of the worst-case variability of a process variable at a given (user-defined) probability limit. That is, the optimal design attained by the present methodology is subject to the probability of occurrence of the worst-case variability expected for the process variables that determine the process feasibility, the system's dynamic performance or the process economics. This represents a novel feature introduced by the present approach since the current simultaneous design and control methodologies are mostly based on the computation of the worst-case scenario, which usually leads to conservative and expensive designs. The present methodology offers the flexibility to assign probabilities to the worst-case variability expected in the system based on the goals to be attained by the design. Therefore, multiple process designs, each evaluated at a different level of significance, can be obtained by the present method. Accordingly, the simultaneous design and control approach presented in this work is a practical tool that can be used to select the optimal process design that meets the user's minimum operational and economic requirements in the presence of sudden and unpredictable realizations in the disturbances. The proposed methodology was implemented to simultaneously design and control a CSTR process. This case study was solved under different scenarios to show the key features and benefits of the proposed probabilistic approach. The results show that the present methodology can reduce the conservatism in the process design by assigning different probabilities of occurrence to the worst-case variability of the process variables that are used to evaluate the process constraints, the system's dynamic performance and the process economics. This case study was also used to show the computational costs that may be expected when applying the present approach to other process systems. This study showed that the present approach is at least 8 times faster than a dynamic optimization-based method. Therefore, the present methodology is a computationally efficient and practical tool that can be used to simultaneously design and control large-scale systems under uncertainty. Future work in this research is focused on improving the computational speed of the present approach by designing a new pre-screening method that reduces the number of random disturbance samples needed in the analysis. Also, future work in this research includes the application of the present probability-based approach to simultaneously design and control a large-scale chemical process, e.g., the Tennessee Eastman process, and the development of an efficient probability-based methodology that accounts for structural changes in the process flowsheet and the control structure.

References

Alvarado-Morales, M., Abd Hamid, M. K., Sin, G., Gernaey, K. V., Woodley, J. M., & Gani, R. (2010). A model-based methodology for simultaneous design and control of a bioethanol production process. Computers and Chemical Engineering, 34, 2043–2050.
Alhammadi, H. Y., & Romagnoli, J. A. (2004). Process design and operation incorporating environmental, profitability, heat integration and controllability considerations. In P. Seferlis, & M. C. Georgiadis (Eds.), The integration of process design and control (pp. 264–270). Amsterdam: Elsevier. Bahri, P. A., Bandoni, J. A., & Romagnoli, J. A. (1997). Integrated flexibility and controllability analysis in design of chemical processes. AIChE Journal, 43, 997–1000. Bansal, V., Perkins, J. D., Pistikopoulos, E. N., Ross, R., & Van Schijndel, J. M. G. (2000). Simultaneous design and control optimisation under uncertainty. Computers and Chemical Engineering, 24, 261–270. Bansal, V., Perkins, J. D., & Pistikopoulos, E. N. (2002). A case study in simultaneous design and control using rigorous, mixed-integer dynamic optimization models. Industrial and Engineering Chemistry Research, 41, 760–770. Bernardo, F. P., Pistikopoulos, E. N., & Saraiva, P. M. (1999a). Integration and computational issues in stochastic design and planning optimization problems. Industrial and Engineering Chemistry Research, 38, 3056–3060. Bernardo, F. P., Pistikopoulos, E. N., & Saraiva, P. M. (1999b). Robustness criteria in process design optimization under uncertainty. Computers and Chemical Engineering, 23, S459–S460. Chawankul, N., Ricardez Sandoval, L. A., Budman, H., & Douglas, P. L. (2007). Integration of design and control: A robust control approach using MPC. Canadian Journal of Chemical Engineering, 85, 433–440. Diwekar, U. M., & Kalagnanam, J. R. (1997). Efficient sampling technique for optimization under uncertainty. AIChE Journal, 43, 440–450. Gerhard, J. J., Marquardt, W., & Monnigmann, M. (2008). Normal vectors on critical manifolds for robust design of transient processes in the presence of fast disturbances. SIAM Journal on Applied Dynamical Systems, 7, 461–470. Gerhard, J., Monnigmann, M., & Marquardt, W. (2005). Constructive nonlinear dynamics – Foundations and application to robust nonlinear control. In Control and observer design for nonlinear finite and infinite dimensional systems. Springer-Verlag, pp. 165–182. Gutierrez, G., Ricardez, L. A., Budman, H., & Prada, C. (2011). Integration of design and control using an MPC-based superstructure for control structure selection. Preprints of the 18th IFAC world congress, Milan, Italy. pp. 7648–7650. Hamid, M. K. A., Sin, G., & Gani, R. (2010). Integration of process design and controller design for chemical processes using model-based methodology. Computers and Chemical Engineering, 34, 683–690. Holland, J. H. (1992). Adaptation in natural and artificial systems (new edition). Cambridge: MIT Press. Iman, R. L., Helton, J. C., & Campbell, J. E. (1981). An approach to sensitivity analysis of computer models. II. Ranking of input variables, response surface validation, distribution effect and technique synopsis. Journal of Quality Technology, 13, 232–240. Kookos, I. K., & Perkins, J. D. (2001). An algorithm for simultaneous process design and control. Industrial and Engineering Chemistry Research, 40, 4079–4080. Kookos, I. K. (2003). Optimal operation of batch processes under uncertainty: A Monte Carlo simulation-deterministic optimization approach. Industrial and Engineering Chemistry Research, 42, 6815–6820. Lenhoff, A. M., & Morari, M. (1982). Design of resilient processing plants – 1. Process design under consideration of dynamics aspects.
Chemical Engineering Science, 37, 245–250. Luyben, M. L., & Floudas, C. A. (1994). Analyzing the interaction of design and control – 1. A multiobjective framework and application to binary distillation synthesis. Computers and Chemical Engineering, 18, 933–940. Malcolm, A., Polan, J., Zhang, L., Ogunnaike, B., & Linninger, A. A. (2007). Integrating systems design and control using dynamic flexibility analysis. AIChE Journal, 53, 2048–2050. Maly, T., & Petzold, L. R. (1996). Numerical methods and software for sensitivity analysis of differential-algebraic systems. Applied Numerical Mathematics, 20, 57–79.
McKay, M. D., Beckman, R. J., & Conover, W. J. (2000). Comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 42, 55–61. Mohideen, M. J., Perkins, J. D., & Pistikopoulos, E. N. (1996). Optimal design of dynamic systems under uncertainty. AIChE Journal, 42, 2251–2260. Monnigmann, M., & Marquardt, W. (2005). Steady-state process optimization with guaranteed robust stability and flexibility: Application to HDA reaction section. Industrial and Engineering Chemistry Research, 44, 2737–2740. Montgomery, D. C., & Runger, G. C. (2007). Applied statistics and probability for engineers (4th ed.). Hoboken: Wiley & Sons. Munoz, D. A., Gerhard, J., Hannemann, R., & Marquardt, W. (2011). Integrated process and control design by the vector normal approach: Application to the Tennessee-Eastman process. In E. N. Pistikopoulos, M. C. Georgiadis, & A. C. Kokossis (Eds.), Computer-aided chemical engineering (pp. 668–670). Elsevier. Nguyen, T. C., Barton, G. W., Perkins, J. D., & Johnston, R. (1988). Condition number scaling policy for stability robustness analysis. AIChE Journal, 34, 1200–1210. Owen, A. B. (1992). Orthogonal arrays for computer experiments, integration and visualization. Statistica Sinica, 2, 439–440. Palazoglu, A., & Arkun, Y. (1987). Design of chemical plants with multiregime capabilities and robust dynamic operability characteristics. Computers and Chemical Engineering, 11, 205–210. Palazoglu, A., & Arkun, Y. (1986). Multiobjective approach to design chemical plants with robust dynamic operability characteristics. Computers and Chemical Engineering, 10, 567–570. Peters, M. S., Timmerhaus, K. D., & West, R. E. (2003). Plant design and economics for chemical engineers (5th ed.). New York: McGraw-Hill. Ricardez Sandoval, L. A., Budman, H. M., & Douglas, P. L. (2008). Simultaneous design and control of processes under uncertainty: A robust modelling approach. Journal of Process Control, 18, 735–740. Ricardez-Sandoval, L. A., Budman, H. M., & Douglas, P. L. (2009a). Simultaneous design and control of chemical processes with application to the Tennessee Eastman process. Journal of Process Control, 19, 1377–1380. Ricardez-Sandoval, L. A., Budman, H. M., & Douglas, P. L. (2009b). Integration of design and control for chemical processes: A review of the literature and some recent results. Annual Reviews in Control, 33, 158–160. Ricardez-Sandoval, L. A., Budman, H. M., & Douglas, P. L. (2009c). Application of robust control tools to the simultaneous design and control of dynamic systems. Industrial and Engineering Chemistry Research, 48, 801–810. Ricardez-Sandoval, L. A., Budman, H. M., & Douglas, P. L. (2010). Simultaneous design and control: A new approach and comparison with existing methodologies. Industrial and Engineering Chemistry Research, 49, 2822–2830. Ricardez-Sandoval, L. A., Budman, H. M., & Douglas, P. L. (2011). A methodology for the simultaneous design and control of large-scale systems under process parameter uncertainty. Computers and Chemical Engineering, 35, 307–310. Sahinidis, N. V. (2004). Optimization under uncertainty: State-of-the-art and opportunities. Computers and Chemical Engineering, 28, 971–980. Sakizlis, V., Perkins, J. D., & Pistikopoulos, E. N. (2004). Recent advances in optimization-based simultaneous process and control design. Computers and Chemical Engineering, 28, 2069–2070. Seider, W. D., Seader, J. D., Lewin, D. R., & Widagdo, S. (2009).
Product and process design principles: Synthesis, analysis and evaluation (3rd ed.). Hoboken: Wiley & Sons. Seferlis, P., & Georgiadis, M. C. (2004). The integration of process design and control (1st ed.). Amsterdam: Elsevier Ltd. Swartz, C. L. E. (2004). The use of controller parametrization in the integration of design and control. In P. Seferlis, & M. C. Georgiadis (Eds.), The integration of process design and control (pp. 239–240). Elsevier. Towler, G., & Sinnott, R. (2012). Chemical engineering design: Principles, practice and economics of plant and process design (2nd ed.). Oxford: Elsevier.