Dynamic Model Development: Methods, Theory and Applications S.P. Asprey and S. Macchietto (editors) © 2003 Elsevier Science B.V. All rights reserved
Process Design Under Uncertainty: Robustness Criteria and Value of Information
F. P. Bernardo(a), P. M. Saraiva(a) and E. N. Pistikopoulos(b)
(a) Department of Chemical Engineering, University of Coimbra, Polo II, Pinhal de Marrocos, 3030 Coimbra, Portugal
(b) Centre for Process Systems Engineering, Department of Chemical Engineering, Imperial College, London SW7 2BY, United Kingdom
In the last three decades, process design under uncertainty has evolved as an optimization-based tool to systematically handle uncertainty at an early design stage, thus avoiding overdesign, underdesign or other suboptimal decisions. In this chapter, we address this issue by presenting a generic framework to guide the decision-maker on the design problem definition, one that systematises several considerations and assumptions, namely regarding constraint violation, uncertainty classification and modelling, operating policy and decision-making criteria under uncertainty. This generic framework is then explored for handling process robustness and value of information features. A case study of a reactor and heat exchanger system illustrates the several formulations presented, namely optimal process design accounting for product quality and optimal R&D decisions in order to selectively reduce uncertainties.
1. INTRODUCTION
Decision-making in the presence of uncertainty is a key issue in process systems design, since at this early stage decisions have to be made with limited knowledge, whether concerning the assumed process model (kinetic constants, transfer coefficients, etc.) or the external environment (product demand, raw material availability, etc.), among other sources of uncertainty. Before the development of systematic tools to handle such uncertainties, the traditional approach relied on a deterministic paradigm, where the optimal design values were determined based on nominal values of the uncertain parameters, which were thus considered to be perfectly known and to assume fixed values over the plant lifetime. The solution thus obtained was then corrected by applying, for instance, empirical overdesign factors to the sizes of equipment units. Ranges for these factors can be found in the literature for different kinds of equipment (Rudd and Watson, 1968), but their general application is quite arguable, since they are only based
on accumulated experience, and extrapolation to a new specific situation may lead to severe over or underdesign. In practice, corrections were made, and are still being made, mainly based on the decision-maker's experience and intuition. In the last three decades, however, several systematic approaches have been proposed, based on optimization formulations and the application of decision-making theories under uncertainty to process design problems (see Grossmann et al., 1983 and Pistikopoulos, 1995 for a review). The exponential growth of computational resources and the development of efficient numerical tools have obviously stimulated research in this area (Grossmann and Halemane, 1982; Diwekar and Kalagnanam, 1997; Acevedo and Pistikopoulos, 1998; Bernardo, Pistikopoulos and Saraiva, 1999a, amongst others), and made it possible to handle problems of practical interest. In this chapter, we present a generic optimization framework for process design under uncertainty, mainly focussing on process robustness issues and the value of information regarding uncertain parameters. The proposed formulations are able to identify optimal solutions and corresponding accurate overdesign factors relative to a fully deterministic approach, and furthermore take into account process variability and R&D investments in order to selectively reduce uncertainties. It is thus an attempt to formally integrate our previously published work, namely robustness criteria in process design under uncertainty (Bernardo, Pistikopoulos and Saraiva, 1999b, 2001) and the value of information in process design decisions (Bernardo, Saraiva and Pistikopoulos, 2000), giving a general perspective of design problem formulations and a package of alternative or complementary design criteria that may be considered by the decision-maker.
Therefore, the remaining parts of this chapter are organised as follows: in section 2 we highlight some concepts and criteria for process design decision-making under uncertainty; then, section 3 provides a quick tour through process design under uncertainty developments observed in the last three decades, using for that purpose an illustrative case study that comprises a chemical reactor and a heat exchanger; next, section 4 introduces a generic framework for process design problem formulations under uncertainty, and explains how it can be explored for handling robustness and value of information issues; finally, section 5 establishes some concluding remarks and points towards some possible lines for future work.
2. DECISION-MAKING CRITERIA UNDER UNCERTAINTY
In order to introduce some basic concepts and criteria for decision-making under uncertainty, let us start by briefly recalling a homely example taken from Rudd and Watson (1968), who took these issues into consideration for process design problems as early as the sixties: An entrepreneur has contracted to paint a set of buildings and offered a one-year guarantee that the paint will not fade. If the paint fades, the entrepreneur must repaint the building at his own expense. The entrepreneur receives $500 for the contract and has available paints A, B and C at a cost of $200, $100 and $5, respectively. Paint A will never fade, paint B will fade after 250 days of sunshine, and paint C will fade after 50 days of direct sun. The cost of
labour is $200 regardless of the paint used. The contract did not say when the repainting job must be done, so the entrepreneur can use paint C to repaint any time within 50 days of the expiration date of the contract and fulfil the guarantee according to the letter of the law, if not the spirit. Which paint should the entrepreneur choose?
The problem here is basically to decide on the paint in face of the uncertainty regarding next year's weather, and therefore a decision-making criterion is needed. Let us first consider the max-min profit criterion, which leads us to choose the paint maximizing the minimum profit that can possibly happen in face of the different weather scenarios. This is a pessimistic criterion, protecting the decision-maker against the worst case that can possibly happen. Three different scenarios should be discriminated: if θ stands for the number of days of sunshine next year, we will have scenario θ(1) for 0 ≤ θ < 50, θ(2) for 50 ≤ θ < 250 and θ(3) for 250 ≤ θ ≤ 365. Table 1 shows the profit matrix, constructed by computing the net profit that corresponds to each paint under each θ scenario. According to the max-min criterion, paint A is the best, resulting in a minimum profit of $100.
Table 1. Profit and regret matrices ($).
             Profit matrix                Regret matrix
           θ(1)   θ(2)   θ(3)          θ(1)   θ(2)   θ(3)
Paint A     100    100    100           195    100      0
Paint B     200    200     -5            95      0    105
Paint C     295     90     90             0    110     10
If the entrepreneur focuses attention on the loss of opportunity associated with a decision, he might prefer to adopt the min-max regret criterion, with the regret being computed as the difference between the profit that might have been made in the absence of uncertainty and the profit made in the given uncertain environment. In this case, we construct a regret matrix (Table 1), where for instance the use of paint C under scenario θ(2) has an associated regret value of $110, since only a $90 profit is achieved, while if paint B had been chosen a $200 profit could have been obtained. Looking at this regret matrix, we conclude that paint B is the one that should be used.
The above two criteria are both based on extreme uncertainty scenarios. If one has additional information about the probability associated with each one of the scenarios, i.e., if an uncertainty model can be constructed, then perhaps a decision based on average or expected profit is preferable. Suppose that past weather records indicate probabilities of 0.52, 0.28 and 0.20 for scenarios θ(1), θ(2) and θ(3), respectively. Then, the maximum expected profit occurs for paint C: E(P) = 0.52×295 + 0.28×90 + 0.20×90 = $196.6. Note that this expected profit corresponds to the average profit to be realised over a large number of trials.
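The three criteria just discussed can be verified numerically. The sketch below (in Python with NumPy, used here purely for illustration; the matrices are transcribed from Table 1) recomputes the recommendation of each criterion:

```python
import numpy as np

# Profit matrix from Table 1 (rows: paints A, B, C; columns: scenarios
# theta(1), theta(2), theta(3)) and the assumed scenario probabilities.
profit = np.array([[100.0, 100.0, 100.0],
                   [200.0, 200.0,  -5.0],
                   [295.0,  90.0,  90.0]])
prob = np.array([0.52, 0.28, 0.20])
paints = ["A", "B", "C"]

# Max-min profit: protect against the worst scenario.
maxmin = paints[int(np.argmax(profit.min(axis=1)))]

# Min-max regret: regret = best profit attainable in a scenario minus
# the profit actually obtained with a given paint.
regret = profit.max(axis=0) - profit
minmax_regret = paints[int(np.argmin(regret.max(axis=1)))]

# Maximum expected profit.
expected = profit @ prob
max_expected = paints[int(np.argmax(expected))]

print(maxmin, minmax_regret, max_expected)  # A B C
```

Running it recovers paints A, B and C for the max-min, min-max regret and expected profit criteria, respectively.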
Therefore, this very simple example provides enough evidence to support that different best solutions can be found depending on the particular way uncertainties are handled. Ignoring that they exist, as is commonly done in many process design problems, is thus very likely to result in wrong or at least suboptimal solutions regarding equipment dimensions, operating conditions, forecasted economic performance, etc.
Let us now analyse in more detail the concept of regret mentioned above. The regret may also be interpreted as the value of perfect information (VPI) about future events, i.e., in this case, next year's weather. This VPI is computed as the difference between two extreme behavioural models of action under future uncertainty: the wait-and-see model, where the decision is made after uncertainty realisation, and the here-and-now model, where the decision is taken prior to uncertainty resolution (Ierapetritou et al., 1996). For instance, the here-and-now decision on paint C, supposing that scenario θ(2) will take place, has an associated VPI of $110, which is the difference between the profit of the best wait-and-see decision under the θ(2) scenario (paint B, $200) and the profit of the here-and-now decision on paint C ($90). Fig. 1 shows the VPI associated with paint C across the several θ scenarios.
[Figure] Fig. 1. Here-and-now versus wait-and-see decisions (painting example): profit ($) against the number of sunshine days θ, comparing the wait-and-see decisions (each point corresponding to a different paint) with the here-and-now decision on paint C; the gap between the two is the VPI.
The expectation operator can also be applied to VPI: the decision on paint C, for instance, has an associated expected value of perfect information of EVPI = 0.52×0 + 0.28×110 + 0.20×10 = $32.8. A minimum EVPI criterion can then be constructed, leading to a decision with a minimum expected regret, which is precisely paint C in this case.
The above concepts and criteria can also be used in the context of process design problems under uncertainty, which may be represented mathematically as follows:
optimize_{d,z,x}  Φ[f(d,z,x,θ)]
s.t.  h(d,z,x,θ) = 0
      g(d,z,x,θ) ≤ 0
      d ∈ D, z ∈ Z, x ∈ X, θ ∈ Θ,          (1)
where d, z and x are the vectors of design, control and state variables, respectively, θ represents the vector of uncertain parameters over the domain Θ, and h and g are the vectors of process model equality and inequality constraints. The decision-making criterion is here to optimize Φ, where Φ is a function of the scalar f that defines a process performance metric (generally an economic indicator). For instance, if f is a profit function and an expected value criterion is adopted, Φ is the average profit obtained over the uncertainty space Θ, or the feasible part of Θ (the topic of feasibility in face of Θ will be addressed in section 3.3). In the case of uncertainty described by a joint probability density function (PDF) j(θ), the expectation operator applied to a general scalar function f is given by the following n-dimensional integral, where n is the number of uncertain parameters:

E_θ{f(θ)} = ∫_Θ f(θ) j(θ) dθ.          (2)
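When j(θ) can be sampled, the integral in (2) can be approximated by a sample-average (Monte Carlo) estimate. The toy performance function and distribution below are our own choices for illustration, not part of the case study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: theta ~ N(mu, sigma^2) and f(theta) = theta**2,
# for which Eq. (2) gives E{f} = mu**2 + sigma**2 exactly.
mu, sigma = 2.0, 0.5
theta = rng.normal(mu, sigma, size=200_000)

# Sample-average approximation of the n-dimensional integral (here n = 1).
estimate = float(np.mean(theta ** 2))

print(round(estimate, 2))  # close to the exact value 4.25
```

More sophisticated cubature and quadrature schemes trade samples for accuracy, as discussed in the reference cited below.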
Although we will not cover here the associated numerical challenges, for complex problems it is well known that the computation of reliable estimates for a multidimensional integral such as the above can be rather time consuming, although different specific techniques are available for that purpose (for more on this topic, see Bernardo, Pistikopoulos and Saraiva, 1999a).
Problem (1) is commonly formulated under the optimistic assumption that during process operation the control variables z are optimally adjusted to the uncertain parameter realisations, according to the observed state of the system (Watanabe et al., 1973; Grossmann and Sargent, 1978; Halemane and Grossmann, 1983; Pistikopoulos and Ierapetritou, 1995). Such an operating policy will be here designated as perfect control operation. Given this assumption, problem (1) is formulated in two stages: the first (design), where design variables are selected (here-and-now decisions), and the second (operating), where a perfect control strategy is assumed (control variables as wait-and-see decisions):

Design stage:     optimize_d  Φ[f*(d,θ)],   d ∈ D, θ ∈ Θ

Operating stage:  f*(d,θ) = max_{z,x} f(d,z,x,θ)
                  s.t.  h(d,z,x,θ) = 0
                        g(d,z,x,θ) ≤ 0
                        z ∈ Z, x ∈ X.          (3)
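The two-stage structure of (3) can be sketched on a deliberately simple toy problem (all functions and numbers hypothetical, not the case study): the design d buys an operating range, and the control z is chosen after θ is revealed:

```python
import numpy as np

# Toy two-stage problem: the design d buys an operating range [-d, d] at a
# cost; the control z is a wait-and-see decision chosen after theta is
# revealed (perfect control).
thetas = np.array([-1.0, 0.5, 2.0])   # scenario realisations of theta
probs = np.array([0.3, 0.4, 0.3])     # scenario probabilities
unit_cost = 0.4                       # assumed cost per unit of range

def inner_profit(d, theta):
    # Operating stage: max_z -(z - theta)**2  s.t.  -d <= z <= d.
    z = np.clip(theta, -d, d)         # perfect-control optimum
    return -(z - theta) ** 2

def design_objective(d):
    # Design stage: expected operating profit minus investment cost.
    return probs @ np.array([inner_profit(d, t) for t in thetas]) - unit_cost * d

grid = np.linspace(0.0, 3.0, 3001)
best_d = grid[np.argmax([design_objective(d) for d in grid])]
print(round(best_d, 3))  # ~1.333 (the analytic optimum is d = 4/3)
```

The inner maximisation here has a closed form (clipping θ to the range), so only the outer design search is done numerically; in a real problem both stages require an NLP solver.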
In the context of decision theory, the problem of process design under uncertainty can be interpreted as a two-person game between nature and the designer, where the uncertain parameters θ are nature's strategy, while the decisions d and z form the strategy of the designer (Watanabe et al., 1973). According to this interpretation, perfect control operation supposes that when the process is brought into the stage of operation, nature's strategy θ becomes clear and, consequently, the controls z are adjusted so as to maximize f in the presence of each possible θ realisation.
According to formulation (3), Table 2 summarises the possible decision criteria mentioned above in reference to the painting example, where the min-max regret criterion needs a formal definition of VPI(d,θ). In the case of problem (3), when perfect control operation is assumed, the value of perfect information will be
VPI(d,θ) = f°(θ) − f*(d,θ),          (4)
where f°(θ) is the process performance assuming the design variables as wait-and-see decisions:

f°(θ) = max_{d,z,x} f(d,z,x,θ)
s.t.  h(d,z,x,θ) = 0
      g(d,z,x,θ) ≤ 0
      d ∈ D, z ∈ Z, x ∈ X,          (5)

and f*(d,θ) is as defined in (3) (design variables as here-and-now decisions).
Table 2. Decision criteria in process design under uncertainty problems.

max-min performance:        max_d min_θ f*(d,θ)
min-max VPI:                min_d max_θ VPI(d,θ)
max expected performance:   max_d E_θ{f*(d,θ)}
min expected VPI:           min_d E_θ{VPI(d,θ)}
The choice of the most suitable metric depends on the problem at hand and the decision-maker's own judgement. From the strictly mathematical and decision theory point of view, however, it is usual to assume that good criteria should verify a number of common sense properties, such as transitivity and strong domination. As pointed out by Rudd and Watson (1968), among the first three criteria in Table 2 only the expected performance criterion meets all these tests. In the literature of process design under uncertainty, this is precisely the most widely used criterion, although exceptions may be cited: Nishida et al. (1974) proposed
a min-max cost criterion for the optimal synthesis of process systems; Watanabe et al. (1973) suggested an intermediate strategy between the min-max cost (pessimistic) and the minimum expected cost (optimistic) criteria; Ierapetritou and Pistikopoulos (1994) and Ierapetritou et al. (1996) presented formulations of operational planning under uncertainty where a restriction of maximum allowed regret is incorporated. In Watanabe et al. (1973) a study of different decision strategies is presented from the viewpoint of decision theory: the first three criteria in Table 2 and also other more sophisticated criteria are applied to a simple case study, illustrating the different corresponding solutions thus obtained.
3. PROCESS DESIGN UNDER UNCERTAINTY: A QUICK TOUR
Taking into account the previous considerations and the process design decision-making criteria identified, we will now enumerate several possible approaches for problem formulation and solution, through a simple case study that will be described next.
3.1 An illustrative case study
Fig. 2 presents a flowsheet consisting of a reactor and a heat exchanger (RHE), where a first order exothermic reaction A → B takes place (Halemane and Grossmann, 1983; Chacon-Mondragon and Himmelblau, 1996). Table 3 describes the system mathematical model, including mass and heat balances, process constraints and quality specifications, while parameter nominal values are shown in Table 4. Process performance is quantified through the total plant annual cost, including investment and operating costs. The model variables can be classified as follows: design variables d = {V, A}, control variables z = {F1, Fw} and state variables x = {xA, T1, T2, Tw2}.
[Flowsheet figure] Fig. 2. Reactor (R) and heat exchanger (HE) system.
Table 3. Reactor and heat exchanger system mathematical model (a).

Reactor material balance:        F0 xA − V kR exp[−(E/R)/T1] cA0 (1 − xA) = 0,  with xA = (cA0 − cA1)/cA0
Reactor heat balance:            F0 Cp (T0 − T1) − F1 Cp (T1 − T2) + (−ΔHR) F0 xA = 0
Heat exchanger design equation:  F1 Cp (T1 − T2) = A U ΔTlm,  ΔTlm = [(T1 − Tw2) − (T2 − Tw1)] / ln[(T1 − Tw2)/(T2 − Tw1)]
Heat exchanger energy balance:   F1 Cp (T1 − T2) = Fw Cpw (Tw2 − Tw1)
Temperature bounds (K):          311 ≤ T1 ≤ 389
Heat exchanger operation
constraints:                     T1 − Tw2 ≥ 11.1,  T2 − Tw1 ≥ 11.1,  Tw2 − Tw1 ≥ 0
Quality constraint:              xA ≥ 0.9
Cost function ($/year):          C = 691.2 V^0.7 + 873.6 A^0.6 + 1.76 Fw + 7.056 F1
Profit function ($/year):        P = 15000 xA − C

(a) Variables: V, reactor volume (m³); A, heat transfer area of the heat exchanger (m²); F1, reactant flowrate in the heat exchanger (kmol/h); Fw, cooling water flowrate (kg/s); xA, conversion of A in the reactor; T1, reactor temperature (K); T2, reactant temperature after cooling (K); Tw2, cooling water outlet temperature (K).
The design problem, considering the model parameters to be exactly known and equal to the nominal values shown in Table 4, is to determine the best values for V and A and also the optimal nominal operating point {F1, Fw} (and corresponding state), so as to minimize the annual plant cost C. Given the process model, this design problem is a non-linear non-convex optimization problem whose solution, using GAMS/MINOS5, results in an overall cost of 12 230 $/year, corresponding to the optimal deterministic design {V, A} = {4.429 m³, 5.345 m²}. As expected, the optimizer moves the reactor temperature T1 to its upper bound (389 K), in order to increase reaction rates. This fact, by itself, reveals some limitations of such a deterministic paradigm, since with the system operating around the solution thus obtained, with an active constraint, the reactor temperature upper limit is likely to be violated: for instance, if the operational heat transfer coefficient becomes slightly smaller than the assumed value or the feed temperature increases momentarily. The traditional procedure adopted to prevent situations like this one is to apply empirical overdesign factors, which may however be inaccurate, therefore leading to infeasible or too conservative design solutions.
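The fragility of the active-constraint deterministic solution can also be checked directly from the reactor material balance in Table 3: at the fixed design V = 4.429 m³, with T1 held at its 389 K bound, a 20% drop in kR pushes the achievable conversion below the 0.9 specification. A minimal sketch (parameter values from Table 4; the rearrangement of the balance is ours):

```python
import math

# Reactor material balance from Table 3 rearranged for the conversion:
# xA = k*cA0*V / (F0 + k*cA0*V),  with  k = kR*exp(-(E/R)/T1).
cA0, F0, E_R = 32.04, 45.36, 555.6    # Table 4 values
V, T1 = 4.429, 389.0                  # deterministic design, T1 at its bound

def conversion(kR):
    k = kR * math.exp(-E_R / T1)
    return k * cA0 * V / (F0 + k * cA0 * V)

print(round(conversion(12.0), 3))  # 0.9   (nominal kR: spec exactly met)
print(round(conversion(9.6), 3))   # 0.878 (kR 20% lower: spec violated)
```

Since the quality constraint is exactly active at the nominal optimum, any unfavourable kR realisation makes the deterministic design infeasible, which is precisely what motivates the formulations that follow.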
Table 4. Parameter values (RHE system).

cA0   Concentration of A in the feed stream                  32.04 kmol/m³
F0    Feed flowrate (pure A)                                 45.36 kmol/h
T0    Feed temperature                                       333 K
Tw1   Cooling water inlet temperature                        293 K
kR    Arrhenius rate constant                                12 h⁻¹
U     Overall heat transfer coefficient                      1635 kJ/(m².h.K)
E/R   Ratio of activation energy to the perfect gas constant 555.6 K
ΔHR   Molar heat of reaction                                 −23 260 kJ/kmol
Cp    Reactant heat capacity                                 167.4 kJ/(kmol.K)
Cpw   Cooling water heat capacity                            4.184 kJ/(kg.K)
For this reason, in the last three decades several developments have tried to address in an explicit and systematic way, based upon optimization formulations, the topic of uncertainty in process design. In this section we highlight some of these efforts, without being exhaustive and giving special emphasis to the assumptions and objectives underlying each one of the formulations, and to their application to our RHE case study, rather than to the respective mathematical details. All the different optimization problem formulations that we describe here and in the following sections have been solved using GAMS together with the solvers MINOS5 or CONOPT (Brooke et al., 1992).
3.2 Basic assumptions in the formulation of design problems under uncertainty
The formulation of a design problem under uncertainty like (1) needs to address the following items: (i) hard/soft constraints, (ii) uncertainty classification and modelling, (iii) assumed operating policy in face of uncertainty and (iv) decision criteria. Decision criteria in face of uncertainty were already discussed in section 2, but it should be noticed that the generic function Φ(f) may cover other design objectives, besides strict process economics, such as flexibility, robustness, quality, safety, environmental concerns and value of information issues. We will consider some of them in forthcoming sections, and for now focus our attention on points (i), (ii) and (iii) above.
3.2.1 Hard/soft constraints
One of the key concepts in process design under uncertainty, as we will see in section 3.3, is process flexibility, which is basically the probability of feasible operation in face of uncertainty. Complete feasibility over the entire Θ space is a usual assumption (for instance, Halemane and Grossmann, 1983) that can be, however, too conservative, especially concerning those constraints whose verification is not a strict demand and which are only violated for a barely probable θ scenario. Furthermore, at a design stage there may be some uncertainty associated with the true upper/lower limits for a number of constraints. Finally, the Φ(f) gains
deriving from a solution with a reduced violation of a constraint under unlikely θ realisations may lead the decision-maker to prefer such a solution, rather than forcing strict compliance with all of the inequality constraints. Thus, one needs to make a clear distinction between hard constraints, i.e., those which must always be satisfied, and soft constraints, which may be violated for some realisations of the uncertain parameters. A limit such as the reactor temperature upper bound (T1 ≤ 389 K) may be considered as a hard constraint if one knows for sure that above this value a reactant decomposition or phase transition occurs, safety issues arise, etc. However, especially at an early decision stage, this limit may itself be uncertain, and if a considerable benefit may result from its violation for given θ realisations, then perhaps a soft constraint provides a better problem representation. Quality constraints, such as xA ≥ 0.9, should usually be treated as soft, with the performance function f penalised with a quality loss term (Bernardo, Pistikopoulos and Saraiva, 2001), even though it is still common practice to adopt strict product specification limits as hard constraints to be verified.
3.2.2 Uncertainty classification and modelling
Based on the sources of uncertainty, Pistikopoulos (1995) proposes an uncertainty classification with four categories (Table 5a), where for each category an example referring to our case study is also provided, together with information sources that may be used for reducing the corresponding present levels of uncertainty. A second possible classification is based on the uncertainty nature and the models adopted to describe it (Table 5b). The so-called "deterministic" uncertain parameters, here designated categorical parameters, are well described by a set of Ns discrete scenarios θ^i (or periods, in the case of uncertainty along time), each with a given probability of occurrence (for instance, Grossmann and Sargent, 1978; Halemane and Grossmann, 1983).
The seasonal variation of product B demand, the treatment of different raw materials A, or operation at different levels of capacity F0 are specific instances of this kind of uncertainty. The so-called stochastic uncertainties, on the other hand, have a continuous random variability, described by a joint probability density function (PDF) (Pistikopoulos and Ierapetritou, 1995; Bernardo and Saraiva, 1998). This model seems to be more adequate to describe, for instance, model-inherent uncertainties or the variability of an operating variable in a steady-state process. The choice of an adequate PDF obviously depends on the available information, with a uniform distribution representing a maximum degree of ignorance and any other distribution function assuming greater knowledge about the uncertain parameters (Rudd and Watson, 1968). From the modelling point of view, the distinction between categorical and stochastic uncertain parameters should not be taken in a strict way, since, for instance, intrinsically continuous stochastic parameters may be approximated by a scenario-based model and a given discrete PDF may also be chosen to describe a seasonal periodic fluctuation. A third approach for handling uncertainty, and probably the most ambitious one, is simply not to model it, but rather to solve problem (1) parametrically, in the space of uncertain parameters (Acevedo and Pistikopoulos, 1996, 1997; Pertsinidis et al., 1998). The resulting
solution is then itself a function of the uncertain parameter realisations, providing a full map of optimal decisions over Θ.
Another relevant distinction is between hard and soft uncertainties (Table 5c). Like hard and soft constraints, this classification refers to process flexibility: hard uncertain parameters are those for which feasible operation must be ensured over the entire domain Θ, while for soft parameters a design decision that guarantees feasibility only over a part of the Θ domain is allowed. This classification is closely related to two distinct design objectives in face of process flexibility (see section 3.4). Hard parameters are usually also categorical, while a continuous stochastic model commonly describes soft parameters.

Table 5. Uncertainty classification.

a. Based on the source of uncertainty
Category                        Examples (RHE case study)                    Source of information
Model-inherent uncertainty      Kinetic constants, heat transfer             Experimental and pilot plant data
                                coefficients (kR, U)
Process-inherent uncertainty    Flowrate and temperature                     (On-line) measurements,
                                variations (F1, T1)                          equipment specifications
External uncertainty            Raw-material (A) availability,               Historical data, market indicators
                                equipment cost coefficients
Discrete uncertainty            Equipment availability,                      Supplier's specifications,
                                seasonal product (B) demand                  operational and marketing data

b. Based on uncertainty nature/uncertainty model
Category                          Examples (RHE case study)                  Uncertainty model
Categorical parameters            Seasonal variation of product              Variability described by a set of scenarios
                                  (B) demand                                 Θ = {θ : θL ≤ θ^i ≤ θU, i = 1, ..., N}
Continuous stochastic parameters  F1 fluctuations about a                    Continuous random variability
                                  steady-state nominal point                 Θ = {θ : θ ~ j(θ)}

c. Based on feasibility in face of Θ
Category                          Examples (RHE case study)                  Description
Hard parameters (usually also     Seasonal variation of product              Complete feasibility over Θ is required
categorical)                      B demand
Soft parameters (usually also     Model-inherent uncertainties               Feasibility is only required over R ⊆ Θ
continuous)                       (kR, U)

d. Based on eventual uncertainty reduction
Category                          Examples (RHE system)                      Description
"Reducible" parameters            Model-inherent uncertainties               Present uncertainty level can be further reduced
"Non-reducible" parameters        Product B demand                           Additional uncertainty reduction is beyond our control
Regarding information about uncertain parameters, a fourth uncertainty classification is proposed by Bernardo, Saraiva and Pistikopoulos (2000), who distinguish between parameters whose present knowledge can be improved through further experimentation and parameters whose uncertainty reduction at the present time is believed to be beyond our control (Table 5d). Uncertain process model parameters, such as the kinetic parameter kR or the heat transfer coefficient U, fall into the first category, since further information about them can be obtained, although at a certain cost associated with laboratory or pilot plant experiments. On the other hand, parameters such as product B demand may belong to the second category, assuming that their variability is due to market fluctuations that cannot be further reduced or forecasted with more accuracy than at present.
3.2.3 Operating policy in face of uncertainty
The selection of an operating policy in the presence of uncertainty is another issue to be considered when addressing problem (1). Perfect control operation, already formulated in section 2, corresponds to the most optimistic assumption. On the other hand, the most conservative policy is to assume fixed setpoints for the control variables, regardless of the operating information that will become available. Under this perspective, here designated rigid control operation, both design and control variables are treated as here-and-now decisions that remain constant during process operation. The work by Diwekar and Rubin (1994) and Bernardo and Saraiva (1998) can be included in this category. While the rigid control policy is too conservative, the perfect control assumption is rather optimistic. A more realistic policy should fall somewhere between these two extreme approaches, selecting an operating policy that makes use of plant data through available supervisory control systems. The work by Bhatia and Biegler (1997) points in this direction, by considering that available information about uncertainty is subject to an assumed feedback control law relating state and control variables.
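The gap between the two extreme policies can be illustrated on a toy operating problem (all functions and numbers hypothetical, not the RHE model): with a quadratic operating loss, perfect control incurs no loss, while the best rigid setpoint z = E[θ] still pays the full variance penalty:

```python
import numpy as np

# Toy operating problem: operating profit -(z - theta)**2, with theta
# taking three scenario values.
thetas = np.array([-1.0, 0.5, 2.0])
probs = np.array([0.3, 0.4, 0.3])

# Perfect control: z is adjusted to theta after it is revealed, so the
# quadratic loss vanishes in every scenario.
perfect = 0.0

# Rigid control: a single here-and-now setpoint; for a quadratic loss the
# best fixed z is E[theta], and the expected profit equals -Var(theta).
z_fixed = float(probs @ thetas)
rigid = float(probs @ (-(z_fixed - thetas) ** 2))

print(round(z_fixed, 3), round(rigid, 3))  # 0.5 -1.35
```

Any intermediate policy (e.g. a feedback law using partial state information) would score between these two bounds.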
3.3 Flexibility Analysis
One of the first questions that arises when trying to solve the general formulation (1) is process feasibility in face of Θ. Process flexibility is thus the ability of a process to operate under feasible conditions in face of the considered uncertainty, for a fixed value of the design variables d. Two distinct problems may then be formulated in a flexibility analysis: (i) the flexibility test, where one determines whether the process is feasible or not in face of Θ; (ii) the flexibility index, where the goal is to determine the extent of process flexibility, according to a given measure. In both cases, perfect control operation is usually assumed. Referring to our case study, let us consider the design {V, A} = {5.3 m³, 5.5 m²}, with parameters kR and U uncertain and described by a range of possible values: kR = 12(1 ± 0.2) h⁻¹ (20% variation around the nominal value) and U = 1635(1 ± 0.3) kJ/(m².h.K) (30% variation around
the nominal value). A feasibility test for a given design d and parameter realisation θ can be formulated as follows:

ψ(d,θ) = min_{u,z,x} u,
s.t.  h(d,z,x,θ) = 0 ∧ g(d,z,x,θ) ≤ u,          (6)

with the projection of our feasible region in the θ space being represented by the condition ψ(d,θ) ≤ 0 (see the qualitative illustration in Fig. 3). Assuming that this region is one-dimensional convex (let us call C this convexity condition), the flexibility test reduces to feasibility tests conducted over all of the Θ vertices (Swaney and Grossmann, 1985a). Solving (6) for the four vertices of our Θ rectangle, one verifies that the process is infeasible in face of the uncertainty level considered, since the maximum value of ψ is positive, occurring for both vertices (14.4, 1144.5) and (9.6, 1144.5), where the larger constraint violations occur. It should be noticed, however, that it is quite difficult, if not impossible, to verify whether condition C holds, since that would require making the x variables explicit in the h equations.
[Figure] Fig. 3. Illustration of flexibility index F and stochastic flexibility (SF): feasible region ψ(d,θ) ≤ 0 plotted in the (kR, U) space, with the Θ rectangle spanning kR ∈ [9.6, 14.4] h⁻¹ and U ∈ [1144.5, 2125.5] kJ/(m².h.K) around the nominal point (12, 1635).
Several flexibility indices have been proposed, two of them being also illustrated in Fig. 3: the index F of Swaney and Grossmann (1985a) and the stochastic flexibility (SF) (Pistikopoulos and Mazzuchi, 1990; Straub and Grossmann, 1990). Index F corresponds to inscribing within the feasible region the largest possible rectangle (hyperrectangle in the case of n uncertain
188 parameters), whilG SF is a less conservative measure, that corresponds qualitatively to the fraction of 0 space that lies within the feasible region (striped area). Under the convexity condition C, index F can be calculated as the minimum of the allowed deviations S along each one of the directions defined between the nominal point 0^ and 0 vertices. For each direction. Sis computed by a problem of the type: S = maxu, s.t. h(d,z,x,d) = OAg{d,z,x,0)
= 0^-huA0,u>O,
(7)
u,z,x
where A^ is the expected deviation along that direction. Considering our assumed deviations (positive and negative) for parameters kR and U, the minimum of J occurs along the direction of the vertex (14.4,1144.5) and equals F = 0.5356. That is, the process is feasible within the rectangle defined by kR = 12(1 ± 0.2F) = 12(1 ± 0.11) h'^ h"^ and U= ^ = 1635(1 ± 0.3F) = 1635(1 ± 0.16) kJ/(mlh.K), qualitatively represented in Fig. 3 by the shaded area. When the number n of uncertain parameters increases, the above formulations, based on vertex enumeration, become computationally intensive. Swaney and Grossmann (1985b) propose two algorithms that avoid this explicit enumeration (an heuristic for direct vertex search and an implicit enumeration algorithm), maintaining however the limitative hypothesis of condition C. More sophisticated formulations, that are able to identify non-vertex solutions, have also been proposed (Grossmann and Floudas, 1987; Ostrovsky et al., 1994, 1999) Stochastic flexibility should only be defined in the case where a stochastic uncertainty model is available, that is, when the uncertain parameters are described by a joint PDFX^- SF is then the w-dimensional integral ofj(0) over the region i?(J) (striped area in Fig. 3), which represents the portion of 0 lying within the feasible region, i.e., the probability for a given design J of having feasible operation: SF(d)=l^^^j(0)d6, R(d) = {0ee\3(z,x):h(d,z,x,0)
(8) = OAg{d,z,x,0)
(9)
SF is thus dependent upon the values taken by the variables d, which correspond to a particular process design solution. The major difficulty in evaluating SF, besides the numerical problems that arise for high values of n, is that the integration region R(d) is only implicitly known. Straub and Grossmann (1990) propose an integration technique (here designated the collocation technique) that overcomes this difficulty, based on a product Gauss formula obtained by applying a Gaussian quadrature to each dimension of Θ, with points placed within R(d).
Let us now consider kR and U described by independent normal PDFs, truncated at 3.09 sigma between the limits shown in Fig. 3. This means that, taking for instance kR, the probability of 9.6 ≤ kR ≤ 14.4 is about 0.998. To collocate quadrature points within R(d), one first evaluates the feasible interval of kR for the given design d by solving:

kL = min_{kR,U,z,x} kR, s.t. h(d,z,x,θ) = 0 ∧ g(d,z,x,θ) ≤ 0
kU = max_{kR,U,z,x} kR, s.t. h(d,z,x,θ) = 0 ∧ g(d,z,x,θ) ≤ 0    (10)

resulting in the feasible interval [10.03, 14.4]. One then collocates 5 quadrature points within this interval and, for each one of them, solves problems similar to (10), but relative to parameter U, evaluating the limits of R(d) along the U direction. Using also 5 quadrature points along this direction, a grid with a total of 5×5 = 25 points is obtained, and the integration formula based on this grid gives the estimate SF = 0.9610, which means that the probability of feasible operation under the uncertainty of kR and U is about 0.96.
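When the feasibility test is cheap to evaluate, SF can also be estimated by straightforward sampling instead of collocation. The sketch below assumes a purely illustrative feasible-region boundary in place of solving problem (6) at each sample point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Truncated-normal sampler via rejection; 3.09-sigma truncation as in the text.
def sample_trunc(mean, sigma, lo, hi, n):
    out = []
    while len(out) < n:
        x = rng.normal(mean, sigma, n)
        out.extend(x[(x >= lo) & (x <= hi)])
    return np.array(out[:n])

# Hypothetical feasibility check standing in for solving problem (6);
# infeasibility at low U, worst near the kR extremes (illustrative only).
def feasible(kR, U):
    return U >= 1144.5 + 250.0 * np.abs(kR - 12.0)

n = 200_000
kR = sample_trunc(12.0, 12.0 * 0.2 / 3.09, 9.6, 14.4, n)
U = sample_trunc(1635.0, 1635.0 * 0.3 / 3.09, 1144.5, 2125.5, n)
SF = feasible(kR, U).mean()   # fraction of joint samples inside the feasible region
print(f"estimated SF = {SF:.3f}")
```

Sampling avoids the need to collocate points within R(d), at the cost of many feasibility evaluations, which is why the collocation technique is preferred when each test requires an optimization run.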
3.4 Optimal Design Formulations Although flexibility analysis does not strictly establish an optimal design approach, since it is performed for a fixed value of the design variables d, it is, as we'll see, an important tool to formulate optimal process design problems under uncertainty. The concept of flexibility gives rise to two distinct design objectives (Grossmann et al., 1983; Pistikopoulos, 1995): (i) design for a fixed degree of absolute flexibility, ensuring feasible operation for any possible realisation of the uncertain parameters; (ii) design for an optimal level of flexibility, exploring the trade-offs between flexibility and economics. Regarding Fig. 3, in objective (i) the triangle (feasible region) must enclose the rectangle Θ, while in objective (ii) part of the rectangle may be outside the triangle, especially if the corresponding solution is interesting enough from other perspectives (such as expected profit). Objective (i) is associated with hard uncertain parameters, for which complete feasibility must be ensured along Θ. In this case, the uncertainty space Θ is usually approximated by a set of discrete scenarios with given probabilities, and, as a result, the original problem (1) is transformed into a multiperiod optimization problem (Grossmann and Sargent, 1978; Halemane and Grossmann, 1983). The main difficulty is then to select a finite number of scenarios θ^i so as to ensure feasible operation over the entire continuous space Θ. This guarantee exists if Θ is a hyperrectangle, the convexity condition C holds and all the vertices of Θ are included in the set of points considered. Let us now revisit our case study with kR and U described by normal distributions truncated to the rectangle Θ of Fig. 3. Although this corresponds to a continuous stochastic model, it
can be incorporated within objective (i), where feasible operation is ensured over the entire Θ rectangle. Using the Gaussian formula mentioned above, now with the 25 points having a fixed location in Θ, adding to this set the 4 vertices of Θ so as to ensure complete feasibility, and considering an expected profit criterion, one obtains the following optimization problem:

max_{d,z^i,x^i} Σ_{i=1..29} w^i P(d,z^i,x^i,θ^i), s.t. h(d,z^i,x^i,θ^i) = 0 ∧ g(d,z^i,x^i,θ^i) ≤ 0, i = 1,...,29    (11)
Note that the assumption of perfect control operation is also present here, since the z and x variables are indexed over i. The weights w^i corresponding to the integration points are calculated according to the quadrature and the respective j(θ) values, while for the 4 vertices w^i = 0. This problem formulation results in the final optimal solution that corresponds to column A in Table 7 (section 4.1). Objective (ii) is associated with the so-called soft uncertain parameters, for which entire feasibility over Θ is not a strict demand, and which are usually described by continuous stochastic models. Two distinct formulations may be followed: maximize stochastic flexibility subject to a cost upper limit (Straub and Grossmann, 1993) or maximize a profit integral over the feasible region R(d) ⊂ Θ (Pistikopoulos and Ierapetritou, 1995). The basic idea behind this second formulation is to explore the trade-off between flexibility and profitability, considering design solutions whose feasible region does not cover the entire Θ space but which are associated with large average profit scores. Mathematically, this can be formulated as maximizing the integral of the profit function over the region R(d) ⊂ Θ, which is equivalent to maximizing the product between the profit expected value over R(d) and the corresponding stochastic flexibility:

max_d ∫_{R(d)} P*(d,θ) j(θ) dθ = max_d {E_{R(d)}[P*(d,θ)] SF(d)}    (12)

The integrand in (12), according to the assumption of perfect control, is itself an optimization problem that corresponds to the operating stage (just like in problem (3)):

P*(d,θ) = max_{z,x} P(d,z,x,θ), s.t. h(d,z,x,θ) = 0 ∧ g(d,z,x,θ) ≤ 0    (13)
Since R(d) is only implicitly known, the two-stage design problem (design stage (12) plus operating stage (13)) has to be solved by a decomposition strategy. Making use of Generalised Benders Decomposition (GBD) and the collocation technique mentioned above, the original problem is converted into a sequence of smaller problems. For a fixed value of d, feasibility subproblems like (10) are solved and integration points collocated within R(d). For each of these points, problem (13) is then solved and the respective profit integral estimated, which is a lower bound for the solution of problem (12). An upper bound and a new d estimate can be obtained solving a suitable master problem constructed based upon GBD principles. Repeating this procedure, the upper bound converges to the greatest lower bound, and the optimal design solution of (12) is then obtained.
The application of this strategy to our case study, considering the profit function and the stochastic model for kR and U mentioned above, results in solution B of Table 7 (section 4.1).
4. A GENERIC FRAMEWORK FOR PROCESS DESIGN UNDER UNCERTAINTY Given the set of decision criteria and problem formulations presented in the previous sections, we will now revisit the generic formulation (1) for a process design problem under uncertainty:

optimize_{d,z,x} Φ[f(d,z,x,θ)]
s.t. h(d,z,x,θ) = 0    (1)
g(d,z,x,θ) ≤ 0
d ∈ D, z ∈ Z, x ∈ X, θ ∈ Θ

where Φ denotes the adopted decision criterion under uncertainty. The complete problem definition and solution comprises, given an available process model, the following steps: Step 1. Distinguish hard from soft constraints. Step 2. Identify and classify the significant uncertainties that are present. Step 3. Define an assumed operating policy in face of these uncertainties. Step 4. Establish adequate decision criteria. Step 5. Formulate the specific optimization problem according to the previous steps. Step 6. Characterise the obtained formulation in terms of optimization and numerical details. This generic framework for process design problem formulation will now be illustrated for the situations already introduced in section 3.4, while in the following sections we will see how it can cover situations where robustness (4.1) and value of information (4.2) issues are also to be taken into account. Formulation A: Complete feasibility (Grossmann and Sargent, 1978; Halemane and Grossmann, 1983) Step 1. All constraints are hard. Step 2. Model-inherent, continuous stochastic, hard and non-reducible uncertainties (kR and U). The assumption of hard parameters guarantees an optimal design with SF = 1. Step 3. Perfect control operation. Step 4. Maximize expected performance over the Θ space. Step 5. Problem formulation
max_d E_Θ{f*(d,θ)}
f*(d,θ) = max_{z,x} f(d,z,x,θ), s.t. h(d,z,x,θ) = 0 ∧ g(d,z,x,θ) ≤ 0, ∀θ ∈ Θ    (14)
Step 6. Problem (14) is a two-stage optimization problem, but using an integration formula with N_I points along Θ it can be formulated as a single-level multiperiod problem like (11):

max_{d,z^i,x^i} Σ_{i=1..N} w^i f(d,z^i,x^i,θ^i), s.t. h(d,z^i,x^i,θ^i) = 0 ∧ g(d,z^i,x^i,θ^i) ≤ 0, i = 1,...,N    (15)
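A minimal numerical sketch of a multiperiod problem of this form is given below; the profit function, scenario values and the use of Gauss-Hermite weights are all assumptions for illustration, not the RHE model. Each scenario i has its own control z^i (perfect control), while the design d is shared:

```python
import numpy as np
from scipy.optimize import minimize

# Quadrature scenarios theta_i with weights w_i (Gauss-Hermite standing in
# for the product formula of the text), here for a single uncertain kR.
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
theta = 12.0 + (12.0 * 0.2 / 3.09) * nodes     # kR scenarios
w = weights / weights.sum()                    # normalised probability weights

def profit(d, z, th):
    # Illustrative concave profit: revenue grows with conversion, minus costs.
    conv = 1.0 - np.exp(-th * d * z / 12.0)
    return 100.0 * conv - 8.0 * d - 2.0 * z**2

def neg_expected(v):
    # v = [d, z_1, ..., z_N]: one design variable, one control per scenario.
    d, z = v[0], v[1:]
    return -sum(wi * profit(d, zi, th) for wi, zi, th in zip(w, z, theta))

res = minimize(neg_expected, x0=np.ones(6), bounds=[(0.01, None)] * 6)
d_opt = res.x[0]
print(f"optimal design d = {d_opt:.3f}, expected profit = {-res.fun:.2f}")
```

Stacking the scenario controls into one decision vector is exactly what makes (15) a single-level, if larger, problem.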
Complete feasibility is ensured (subject to the convexity condition C) by adding the Θ vertices to the set of integration points, leading to a total number N equal to N_I + 2^n, where in the objective function a zero weight w^i is assigned to the Θ vertices. In the case of our RHE example, no numerical difficulties were encountered since two uncertain parameters are considered, thus resulting in only 25 quadrature points. For higher values of n (number of uncertain parameters) more efficient integration tools should be used, such as sampling techniques (Diwekar and Kalagnanam, 1997) or specialised cubatures (Bernardo, Pistikopoulos and Saraiva, 1999a). Another difficulty that arises when n increases is the dimension of the associated optimization problem. In that case, the use of a decomposition strategy may be considered (Grossmann and Halemane, 1982). Recent advances in directly solving large multiperiod problems like (11) have been reported by van den Heever and Grossmann (1999). Formulation B: Explicit feasible region evaluation (Pistikopoulos and Ierapetritou, 1995) Step 1. All constraints are hard. Step 2. Model-inherent, continuous stochastic, soft and non-reducible uncertainties (kR and U). Step 3. Perfect control operation. Step 4. Maximize the integral of the performance function f over the feasible region R(d) ⊂ Θ (the trade-off between profitability and flexibility is explored). Step 5.

max_d ∫_{R(d)} f*(d,θ) j(θ) dθ
f*(d,θ) = max_{z,x} f(d,z,x,θ), s.t. h(d,z,x,θ) = 0 ∧ g(d,z,x,θ) ≤ 0    (16)
R(d) = {θ ∈ Θ | ∃(z,x): h(d,z,x,θ) = 0 ∧ g(d,z,x,θ) ≤ 0}
Step 6. Problem (16) is a two-stage optimization problem, with R(d) only implicitly known, decomposable into a sequence of smaller problems using Generalised Benders Decomposition together with a collocation technique. No global solution is guaranteed, since the limits of R(d) are a function of d with unknown convexity properties. In our case study, no significant numerical difficulties were encountered since only two uncertain parameters are considered, thus resulting in only 25 quadrature points. For higher values of n, integration using a more efficient technique than product formulae has to be performed over Θ (with points outside R(d) being rejected), since there is no method (as far as we know) to collocate integration points within R(d). The above formulation B deserves some remarks, namely regarding the decision criterion adopted. The profit integral over R(d), as stated before, is the product between the profit expected value over R(d) and the corresponding stochastic flexibility. In this way, formulation B does explore the trade-off between profitability and flexibility, although a simple product may not be the most suitable way of doing so. Furthermore, for the design solution thus obtained, there is no guarantee about the flexibility level reached, nor control over which constraints are being violated, and to what extent, over the portion of Θ that lies outside R(d). A possible strategy to cover these issues is to replace a soft uncertainty approach by a soft constraint approach. Indeed, formulation B can be seen as a relaxation of formulation A, assuming soft uncertainty, and thus may lead to design solutions that do not ensure complete feasibility over Θ, that is, with SF < 1. If we instead relax formulation A assuming that some of the constraints are soft, we are then able to supervise their violation through a penalty term in the objective function or by explicitly limiting the probability and/or expected extent of violation, while simultaneously optimizing the expected process performance over Θ. A formulation of this type (formulation C) is presented in the following section, where quality constraints are considered soft and a continuous loss is associated with their violation.
4.1 Process robustness criteria The formulations presented so far do not explicitly take into account process performance variability. For instance, perfect control operation, assumed in (16), may lead to excessive dispersion in relevant quality variables. Criteria that make decisions sensitive to process robustness may thus be quite helpful. Here, we define robustness as the ability that a process has to operate under changing conditions with a relatively constant performance, that is, process insensitivity to uncertainty realisations. Let us designate by y a set of quality-related process variables (usually a simple function of state and control variables), with desired values y*. Regarding our case study, the conversion of A in the reactor can be considered a quality variable with x_A* = 0.9. One way to directly control process variability would be to include in the problem formulation a restriction of maximum allowed variance for some or all of the y variables. Although this simple criterion will also be considered, there are more meaningful forms of penalising variability, namely according to Taguchi's perspective of continuous quality loss (Phadke, 1989). The deviation of y from y* is thus penalised through an economic quality loss, which may also be designated as quality cost C_q, usually given by a quadratic function of the type:
C_q = k(y − y*)²    (17)

where k is a penalty constant, also known as the quality loss coefficient. It can be easily demonstrated (Phadke, 1989) that the expected value of the loss function (17) is:

E_Θ(C_q) = k[σ² + (μ − y*)²]    (18)

where μ and σ are the mean and standard deviation of the quality variable y. Equation (18) clearly shows the two components taken into account: the loss associated with variability (kσ²) and the loss resulting from deviation of the mean value from the desired target (k(μ − y*)²). Fig. 4 illustrates two different perspectives in view of the constraint y ≥ y*.
Fig. 4. Quality cost models according to (a) Taguchi's perspective and (b) the hard constraint perspective.
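Equation (18) is easy to check by simulation; in the sketch below, the loss coefficient and the moments of y are illustrative values, not case-study results:

```python
import numpy as np

rng = np.random.default_rng(1)
k, y_star = 1000.0, 0.9          # illustrative loss coefficient and target
mu, sigma = 0.9014, 0.0055       # illustrative mean and std of the quality variable y

y = rng.normal(mu, sigma, 1_000_000)
mc_loss = (k * (y - y_star) ** 2).mean()          # sampled E[C_q]
analytic = k * (sigma**2 + (mu - y_star) ** 2)    # equation (18)
print(mc_loss, analytic)
```

The two numbers agree to sampling accuracy, confirming the variance and mean-offset decomposition of the expected loss.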
Taguchi's perspective of continuous quality loss can be formulated by simply considering quality constraints as soft and thus relaxing them. Process robustness may be guaranteed by incorporating in the problem formulation two different types of elements: (i) a penalty term in the objective function, such as a Taguchi loss function like (17); (ii) an explicit restriction over process robustness metrics, such as an upper bound hard constraint over the variance of a quality-related variable (Bernardo, Pistikopoulos and Saraiva, 2001). For the first case, Table 6 provides four kinds of loss functions based on the quadratic form (17), together with relevant application examples. In the second case, a general robustness metric can be defined as a function r of the statistical moments m_y of the quality variable y, with the following constraint being added to the problem formulation: r(m_y) ≤ γ. The statistical moments m_y are easily obtained using the expectancy operator, with the first three (mean μ_y, variance σ_y² and skewness ξ_y) given by:

μ_y = E_Θ(y)
σ_y² = E_Θ[(y − μ_y)²]    (19)
ξ_y = E_Θ[((y − μ_y)/σ_y)³]
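With samples (or quadrature points) for y in hand, the moments in (19) follow directly from the expectancy operator; the distribution below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(0.9014, 0.0055, 500_000)   # illustrative quality-variable realisations

mu_y = y.mean()                                        # mean, first line of (19)
var_y = ((y - mu_y) ** 2).mean()                       # variance
skew_y = (((y - mu_y) / np.sqrt(var_y)) ** 3).mean()   # skewness
print(mu_y, np.sqrt(var_y), skew_y)
```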
The generic robustness criterion r(m_y) ≤ γ can represent several situations, namely hard quality constraints of the form μ(y) ≥ γ, σ(y) ≤ γ, σ(y)/μ(y) ≤ γ, or constraints for six-sigma performance (see section 4.1.1). It can also describe more sophisticated criteria, such as one-sided robustness criteria, where for instance only variability above or below a specification is penalised (Ahmed and Sahinidis, 1998), or where an upper limit is imposed on the probability of a soft inequality constraint violation and/or its expected extent of violation (Samsatli et al., 1998).

Table 6. Taguchi loss functions based on the quadratic form C_q = k(y − y*)².
(L1) Nominal-the-best [symmetric]: same k for all y. Example: product stream with a target composition y*.
(L2) Nominal-the-best [asymmetric]: k = k1 if y < y*, k = k2 if y > y*. Example: product stream with a minimum purity requirement y* (k2 = 0).
(L3) Larger-the-better: k = k1 if y < y*, k = 0 if y > y*. Example: product stream purity with maximum possible value y*.
(L4) Smaller-the-better: k = 0 if y < y*, k = k2 if y > y*. Example: concentration of a pollutant in a waste stream where the minimum concentration that can be achieved is y*.
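The four entries of Table 6 differ only in how the coefficient k depends on the side of the target; a minimal implementation:

```python
def taguchi_loss(y, y_star, k1, k2):
    """Quadratic quality loss C_q = k (y - y*)^2 with a side-dependent coefficient.

    k1 applies below the target and k2 above it: k1 == k2 gives (L1),
    k1 != k2 gives (L2), k2 = 0 gives (L3) and k1 = 0 gives (L4)."""
    k = k1 if y < y_star else k2
    return k * (y - y_star) ** 2

# One-sided penalty for a minimum-purity-type variable (k2 = 0):
print(taguchi_loss(0.88, 0.9, k1=100.0, k2=0.0))   # shortfall below target is penalised
print(taguchi_loss(0.95, 0.9, k1=100.0, k2=0.0))   # no loss above target
```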
Using our previous generic problem formulation framework, and adding to it the above robustness considerations, one obtains the following problem statement, which can be seen as a relaxation of formulation A, assuming that some of the constraints are soft. Formulation C: Robust formulation with complete feasibility (Bernardo, Pistikopoulos and Saraiva, 2001) Step 1. Quality constraints are soft (x_A ≥ 0.9 is soft); all the other constraints are hard. Step 2. Model-inherent, continuous stochastic and non-reducible uncertainties (kR and U). Parameters are considered to be soft with respect to quality constraints and hard relative to hard constraints. Step 3. Perfect control operation. Step 4. 4.1. Robustness criteria: quality constraints are relaxed, with process performance f being penalised through a Taguchi loss function; if desired, an additional hard quality constraint of the form r(m_y) ≤ γ can be added, where r is a function of the statistical moments m_y of the quality variable y. In the case of our example, we relax x_A ≥ 0.9 (it simply vanishes) and penalise profit with a one-sided loss function (asymmetric nominal-the-best loss function with k2 = 0):

C_q = k1(x_A − 0.9)² if x_A < 0.9
C_q = 0 if x_A ≥ 0.9

with k1 = 6.4x10^. 4.2 Decision criterion: maximize expected performance over the Θ space (the trade-off between profitability and robustness is explored). Step 5. Problem formulation similar to (14) but incorporating the robustness criteria mentioned in 4.1. In our case study, the profit function is penalised with C_q and the following constraints are added to the operating stage, in order to switch between penalty values above and below x_A* = 0.9:

C_q ≥ k1(Δx_A)², Δx_A ≥ x_A* − x_A    (C1)
Δx_A ≥ 0    (C2)

Since C_q is being minimized, when x_A < x_A*, (C1) is active; otherwise, (C2) is active and the quality cost is therefore equal to zero. Step 6. Same as in formulation A. Formulation C can be further relaxed by not including the Θ vertices in the multiperiod optimization problem. The stochastic flexibility for the design solution thus obtained should then be verified a posteriori. This provides a quick way to estimate the loss of opportunity
associated with the assumptions of hard uncertain parameters and hard constraints. We designate this variation as the C-soft problem formulation. As stated before, formulation B can be seen as a relaxation of formulation A assuming soft uncertainty, while C results from A by relaxing quality constraints. Considering the same robustness issues as in C and the same decision criterion as in B, we now present a mixed formulation where both types of relaxation are present, that is, soft quality constraints and soft uncertainties. Formulation D: Robust formulation with explicit feasible region evaluation Step 1. Quality constraints are soft (x_A ≥ 0.9 is soft); all the other constraints are hard. Step 2. Model-inherent, continuous stochastic, soft and non-reducible uncertainties (kR and U). Step 3. Perfect control operation. Step 4. 4.1 Robustness criteria: same as in formulation C. 4.2 Decision criterion: maximize the integral of the performance function f over the feasible region R(d) ⊂ Θ (the trade-off between profitability, flexibility and robustness is explored simultaneously). Step 5. Problem formulation similar to (16) and incorporating the robustness criteria described in formulation C. Step 6. Same as in formulation B. Table 7 shows the solutions obtained according to the five alternative formulations mentioned so far: A, B, C, C-soft and D. Overdesign factors (odf) for both the reactor volume V and the heat exchanger area A are also shown; they were computed as the ratio between the solutions obtained and the one corresponding to a fully deterministic approach (whose solution is {V,A} = {4.429 m³, 5.345 m²}). The respective SF, μ and σ values, estimated by numerical integration using the product Gauss formula mentioned earlier with 25 points along Θ (solutions A, C and C-soft) or R(d) (solutions B and D), are also included in Table 7.
Table 7. Optimal design solutions considering kR and U uncertainties (RHE system).

                 A          B          C          C-soft     D
E(P) ($/year)    708        961*       999        1021       1022*
V (m³)           5.545      4.513      4.583      4.513      4.898
odf(V)           1.25       1.02       1.03       1.02       1.11
A (m²)           6.739      6.175      6.224      6.199      6.495
odf(A)           1.26       1.15       1.16       1.16       1.22
SF               1          0.9335     1          0.9920†    0.9919†
μ(x_A)           0.9182     0.9092     0.9014     0.9014     0.9027
σ(x_A)           0.004676   0.004568   0.005533   0.005465   0.005533

* Ratio between the profit integral over R(d) and SF.
† Estimated using 36 quadrature points; the infeasible region is a narrow band for low values of U and kR near 12 h⁻¹.
Solution A is the most conservative one, presenting the largest overdesign factors. The other formulations are obviously less conservative due to soft uncertainty and/or soft constraint relaxations. The comparison between different solutions requires a careful analysis of the assumptions underlying the respective formulations. For instance, when solutions A and B are compared, one should bear in mind that their respective objective functions are equivalent only if in formulation A one assigns a zero profit to θ points outside R(d). If negative profits are observed within R(d), which is the case for this example, the direct comparison of expected profit solutions is not fair. The same remark applies when solutions C and D are compared. Formulations with similar assumptions, on the other hand, can be directly confronted. For instance, looking at solutions A and C, a 41% raise in expected profit can be associated with the relaxation of the x_A ≥ 0.9 constraint. This increase indicates that the quality constraint, when assumed to be hard, is a significant source of infeasibility and consequent overdesign. The expected profit increase is not so significant when solutions B and D are compared, indicating that the problem relaxation corresponding to the assumption of soft uncertainty makes the quality constraint less critical. Comparing solutions C and C-soft, one can see that the hard uncertainty and constraint assumptions are not a significant source of overdesign. Regarding process variability, looking at solutions A and C we realise that the quality constraint relaxation results in a greater x_A standard deviation, together with a smaller average value, very close to the constraint lower limit. Larger quality loss coefficients will result in optimal solutions with reduced x_A standard deviations, and a parametric study may be conducted in this regard.
4.1.1 Design formulation for six-sigma quality Six-sigma quality is both a statistical standard metric for process performance and a philosophy of customer satisfaction. As a statistical standard, six-sigma refers to keeping critical-to-quality characteristics (the y variables) within plus or minus six standard deviations of their mean values, with these means having a maximum deviation from the corresponding target values of plus or minus 1.5 standard deviations (Craig, 1993). Based on the assumption that the random y variables are normally distributed, six-sigma quality guarantees a defect rate of no more than 3.45 parts per million (Deshpande, 1998). As a philosophy, six-sigma is the commitment to continuous improvement by reducing variation and increasing robustness of processes and products (Craig, 1993; Harry and Shroeder, 2000). The six-sigma quality metric can be easily incorporated into either of formulations C or D presented above, under the form of a hard quality constraint r(m_y) ≤ γ. The process capability indices Cp and Cpk (Deshpande, 1998) are usually used to quantify six-sigma performance. For a one-sided lower specification y^L, and considering a 3-sigma variation below the target, Cp is defined as:
Cp = (y* − y^L)/(3σ_y)    (20)
The Cp index can be interpreted as the inverse of a normalised standard deviation and is thus inversely proportional to the y variability. The Cpk index, on the other hand, is a measure of how close the y distribution mean is to the target value y*, and in the same case of a one-sided lower specification y^L it can be defined as:

Cpk = Cp[1 − |y* − μ_y|/(y* − y^L)]    (21)
If the distribution mean equals the target value y*, then Cpk = Cp; otherwise Cpk increases as the distribution mean gets closer to it. Similar definitions can be established in the case of a one-sided upper or a two-sided specification. According to the above equations, six-sigma performance can be guaranteed by adding to our design formulation the following hard quality constraints:

Cp ≥ 2, Cpk ≥ 1.5    (22)

Referring to our case study, when a lower specification x_A^L = 0.87 is considered, a C-type formulation incorporating constraints (22) results in a slightly more severe solution, mainly due to an increase in the operational costs: E_Θ(P) = 920 $/year, V = 4.510 m³, A = 6.494 m², μ(x_A) = 0.9010, σ(x_A) = 0.005000, Cp = 2.000 and Cpk = 1.936.
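The indices can be checked against the values reported for the case-study solution; the function below assumes the one-sided-lower forms of (20) and (21) with target x_A* = 0.9:

```python
def capability_lower(mu, sigma, y_star, y_low):
    """Cp and Cpk of (20)-(21) for a one-sided lower specification y_low."""
    cp = (y_star - y_low) / (3.0 * sigma)
    cpk = cp * (1.0 - abs(y_star - mu) / (y_star - y_low))
    return cp, cpk

# Reported case-study values: mu = 0.9010, sigma = 0.005, y* = 0.90, x_A^L = 0.87
cp, cpk = capability_lower(0.9010, 0.005000, 0.90, 0.87)
print(cp, cpk)   # close to the reported Cp = 2.000 and Cpk = 1.936
```

The small residual difference in Cpk is consistent with rounding of the reported mean.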
4.2 Value of information regarding uncertain parameters In the case of perfect control operation, the value of perfect information for a given design decision d was already defined in section 2 as being given by:

VPI(d,θ) = f°(θ) − f*(d,θ)    (4)

where f°(θ) stands for the process performance assuming both design and control variables as wait-and-see decisions (wait-and-see performance) and f*(d,θ) stands for the same process performance but with the design variables decided prior to the uncertainty realisation (here-and-now performance). In section 3.2.2 we introduced the distinction between reducible uncertain parameters, whose present knowledge can be improved at a certain R&D cost, and non-reducible parameters, whose uncertainty reduction is believed to be beyond our control. In both cases, the above
definition of VPI (4) applies, if perfect control operation is assumed. Ierapetritou et al. (1996) have addressed this question focusing on non-reducible parameters and their expected VPI in the context of operational planning problems, while Bernardo, Saraiva and Pistikopoulos (2000) have considered the case of reducible uncertain parameters at an early process design stage, where an optimal investment in R&D activities should be decided in order to reduce uncertainty in the most valuable and selective way. In this last case, the expected VPI receives the name of expected value of eliminating uncertainty (VEU), since it refers to parameters for which perfect information is usually impossible to achieve, even in future process operation. A possible decision criterion is based on expected values of VPI or VEU. Namely, one can establish process design decisions with a maximum allowed expected VPI relative to a given parameter uncertainty, or R&D decisions supported by expected VEU values (Bernardo, Saraiva and Pistikopoulos, 2000). Therefore, we will now establish a generic framework to incorporate the evaluation of expected VPI/VEU values within process design formulations of the types presented in the previous section. For that purpose, we must first subdivide the vector θ into two disjoint subsets θ1 ∈ Θ1 and θ2 ∈ Θ2, where θ2 are the uncertain parameters whose value of information is to be computed. Considering decision criteria of expected performance, equation (4) then takes the following form:

E_{Θ2}[VPI(d,θ2)] = E_{Θ2}{E_{Θ1}[f°(θ1,θ2)]} − E_{Θ2}{E_{Θ1}[f*(d,θ1,θ2)]}    (23)

Please notice that, in the case of formulations with explicit feasible region evaluation, expected values should be taken over R1(d) and R2(d) instead of Θ1 and Θ2, respectively. The second term in (23) is simply the expected value over Θ of the here-and-now performance, while the first term is the expectancy over Θ2 of the expected wait-and-see performance over Θ1, itself a function of θ2. In other words, the expected VPI relative to θ2 is the expected value over Θ2 of a wait-and-see solution (itself a function of θ2, designated by S_ws(θ2)) minus a here-and-now solution (S_hn):

EVPI(d) = E_{Θ2}[S_ws(θ2)] − S_hn    (24)
The first term in (24) can be estimated solving wait-and-see problems for each one of the integration points in Θ2. Thus, a generic procedure to evaluate the EVPI can be stated as: Step 1. Identify the subset θ2 of parameters whose EVPI is to be evaluated. Step 2. Solve the optimization problem of the type (14) or (16), deciding the design variables d for the present level of uncertainty, thus obtaining S_hn. Step 3. For each integration point in the Θ2 space, solve the same problem, obtaining a set of solutions S_ws(θ2). Estimate the expected value of these solutions using an integration technique over Θ2. Step 4. Compute EVPI(d) as in (24). We also applied this procedure to our case study, considering a wider uncertainty model than before (Table 8). All of the 6 uncertainties are described by independent normal PDFs truncated at 3.09 sigma, such that ε = 3.09σ/μ. The last three parameters clearly belong to the category of reducible parameters, and the goal here is to find, for the here-and-now optimal design, the expected VEU values associated with each one of them, in order to then select those parameters whose uncertainty reduction is most value adding, and around which R&D efforts should be allocated.

Table 8. Uncertainty model (RHE system).
F0 (feed flowrate): mean μ = 45.36 kmol/h, error ε = 0.20
T0 (feed temperature): mean μ = 333 K, error ε = 0.04
Tw1 (cooling water inlet temperature): mean μ = 293 K, error ε = 0.04
kR (Arrhenius rate constant): mean μ = 12 h⁻¹, error ε = 0.30
U (overall heat transfer coefficient): mean μ = 1635 kJ/(m².h.K), error ε = 0.30
E/R (activation energy over perfect gas constant): mean μ = 555.6 K, error ε = 0.30
Within the scope of our generic problem formulation framework, introduced in section 4, and adopting a C-soft type of approach, the problem in this context can be defined as follows: Step 1. The quality constraint x_A ≥ 0.9 is soft; all the other constraints are hard. Step 2. Process-inherent, continuous stochastic, non-reducible uncertainties (F0, T0, Tw1); model-inherent, continuous stochastic and reducible uncertainties (kR, U, E/R). Parameters are considered to be soft with respect to the quality constraint and hard relative to hard constraints. Step 3. Perfect control operation. Step 4. 4.1. Robustness criteria: x_A ≥ 0.9 is relaxed, with cost being penalised through a one-sided loss function (x_A* = 0.9, k1 = 6.4x10^ and k2 = 0); a hard quality constraint σ(x_A)/μ(x_A) ≤ 0.01 is also considered. 4.2 Decision criterion: minimize expected cost over the Θ space. Step 5. The design problem is formulated as a one-stage problem of the form (15), not including the Θ vertices, adding constraints to switch between penalty values above and below x_A*, as well as the hard quality constraint. The here-and-now solution of the above formulation results in an overall expected cost of 14 596 $/year (column A of Table 9). Expected values over Θ are estimated using a specialised cubature formula with 2^6 + 2×6 = 76 points. In Fig. 5 we plot wait-and-see solutions referring
to E/R uncertainty elimination, whose expected value, according to the E/R normal PDF and using a specialised quadrature with only 3 points, is 13 317 $/year. The expected VEU for this parameter, according to (24), is then 14 596 − 13 317 = 1279 $/year. Doing the same calculations for the other reducible parameters, one obtains expected VEU values of 379 and 0 $/year, respectively for parameters kR and U. This reveals that the activation energy is indeed the parameter whose uncertainty reduction is most relevant for achieving overall plant cost savings and that, on average, there is no benefit associated with increasing our knowledge about the heat transfer coefficient.
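The EVPI calculation above can be sketched with a toy cost model (the quadratic cost below is purely illustrative and is not the RHE model): the here-and-now design minimises expected cost over all realisations, each wait-and-see solution re-optimises for a known parameter value, and EVPI is the difference of the two expected costs:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative toy: cost(d, theta) with one design variable d and one
# reducible uncertain parameter theta (a stand-in for E/R, NOT the RHE model).
def cost(d, theta):
    return (d - theta) ** 2 + 0.1 * d

# Gauss-Hermite quadrature for expectations over theta ~ N(mu, sigma^2);
# a 3-point rule, as in the text for the E/R quadrature.
mu, sigma = 555.6, 0.30 * 555.6 / 3.09
nodes, w = np.polynomial.hermite_e.hermegauss(3)
theta_pts = mu + sigma * nodes
weights = w / w.sum()

def expected(f):
    return sum(wi * f(ti) for wi, ti in zip(weights, theta_pts))

# Here-and-now: one design d for all theta realisations.
E_hn = minimize_scalar(lambda d: expected(lambda t: cost(d, t))).fun

# Wait-and-see: re-optimise d for each known theta, then average.
E_ws = expected(lambda t: minimize_scalar(lambda d: cost(d, t)).fun)

evpi = E_hn - E_ws      # expected value of perfect information, always >= 0
print(evpi)
```

For this toy quadratic, EVPI reduces exactly to the variance σ² of the uncertain parameter, which makes it a convenient sanity check for the quadrature and optimiser plumbing.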
Fig. 5. Wait-and-see solutions (solid line) and E/R PDF (dotted line) as a function of the E/R true but presently unknown value.
Suppose now that, after selecting the most value adding parameters, say the subset θ₃ ⊆ θ₂, we want to decide the optimal investment that should be allocated to R&D around this subset, together with the corresponding optimal uncertainty levels that we will end up with. This can be done by exploring the trade-off between an assumed information cost and the associated benefits due to uncertainty reduction. The following annual information cost function is then considered for each parameter θ_j ∈ θ₃:
C_I,j = b_j C_If,j + a_j (1/ε_j − 1/ε_j⁰)    (25)
where C_If,j represents a fixed cost (associated, for instance, with the investment in laboratory or pilot scale equipment needed to run experiments), while the second term corresponds to variable costs (e.g. reactant and operation costs). If no experiment takes place, the binary variable b_j affecting the fixed cost equals zero and the relative error associated with parameter θ_j remains at its current nominal level (ε_j = ε_j⁰). Otherwise, b_j = 1 and the information cost grows as the relative error decreases, since more precise knowledge about θ_j becomes
available; in the limit, an infinite R&D investment would lead to perfect knowledge about this parameter. The total cost of information, C_I, is simply the sum of C_I,j over all θ₃ parameters. When a certain amount C_I is spent in R&D, our knowledge about the θ₃ parameters increases, with the vector μ₃ assuming a more accurate value in the domain Θ₃ and the errors ε₃ being reduced. In order to find the optimal errors ε₃, the corresponding information cost, and the best design d, one has to solve parametrically, for different possible realisations of μ₃, a design problem where (25) is incorporated in the objective function, with b₃ and ε₃ as additional decision variables. Adopting a C-soft type of formulation, omitting the operating stage and eventual hard quality constraints, the problem is:
S(μ₃) = max_{d, b₃, ε₃}  E_θ { f(d, θ) − C_I(b₃, ε₃) }    (26)
        s.t.  0 < ε_j ≤ ε_j⁰,  θ_j ∈ θ₃
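The information cost (25) appearing in this objective can be written directly in code. The sketch below uses the chapter's case-study values C_If = 100 $/yr and a = 90 (the function name and argument names are our own):

```python
def information_cost(eps, eps0, c_fixed=100.0, a=90.0):
    """Annual R&D information cost of Eq. (25) for one parameter.

    eps      target relative error after the R&D programme
    eps0     current nominal relative error (no experiment: eps == eps0)
    c_fixed  fixed cost C_If if any experiment is run ($/yr)
    a        variable-cost coefficient ($/yr)
    """
    if not 0.0 < eps <= eps0:
        raise ValueError("need 0 < eps <= eps0")
    b = 0 if eps == eps0 else 1   # binary: is an experiment run at all?
    return b * c_fixed + a * (1.0 / eps - 1.0 / eps0)

print(information_cost(0.30, 0.30))    # no experiment: zero cost
print(information_cost(0.133, 0.30))   # error level of solution C
```

With ε = 0.133 and ε⁰ = 0.30 this evaluates to roughly 475–477 $/yr, consistent with the annual R&D depreciation reported for solution C of the case study.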
Given the set of solutions S(μ₃), a possible decision criterion is to select the worst case scenario over all possible μ₃ values (e.g., the point where the largest optimal information costs are obtained). However, this decision may be too conservative and overestimate R&D resource allocation. Thus, one may instead want to adopt an average criterion, identifying the errors ε₃, associated information costs, and design d that minimize the objective function expected value over Θ₃. Adopting a C-soft type of formulation, the design problem according to the above average criterion can be formulated as follows (operating stage and eventual hard quality constraints are once more omitted):
max_{d, b₃, ε₃}  E_{μ₃} { E_θ [ f(d, θ) − C_I(b₃, ε₃) ] }    (27)
s.t.  0 < ε_j ≤ ε_j⁰,  θ_j ∈ θ₃
Although the above formulations are defined considering a subset θ₃ of the most valuable reducible parameters, optimal levels of uncertainty can also be decided for the entire set θ₂. The problem complexity, however, when a realistic number of parameters is considered, motivates the above two-step approach:
Step 1. Select the subset θ₃ of the most value adding reducible parameters, based on their expected VEU.
Step 2. For the reduced subset thus found, decide the optimal R&D investments and corresponding levels of uncertainty based on an information cost function.

Going back to our case study, and focussing only on the most valuable parameter, the activation energy, Fig. 6 shows the minimum expected cost and the corresponding optimal level of uncertainty ε(E/R), when a problem similar to (26) is solved for different E/R mean values (the same 76-point cubature is used). The initial nominal error considered is ε⁰ = 0.30, together with an assumed fixed research cost C_If = 100 $/year and a = 90. As the activation energy mean increases, optimal information costs become larger and, therefore, the corresponding parameter relative error ε(E/R) smaller. This indicates that greater R&D spending is justified if the parameter true mean value happens to correspond to the less favourable scenarios of high E/R values. The worst case scenario (solution B, Table 9) corresponds to an optimal error of only 0.099 around the rather improbable mean value of 722.3 K.
Fig. 6. Expected cost (solid line) and optimal error (dotted line) as a function of E/R mean values.
A less conservative decision may be obtained using an average criterion weighted by the E/R normal PDF. A problem similar to (27) is then solved, using a 3-point specialised quadrature over Θ₂ and a 76-point specialised cubature over Θ, resulting in a total of 3 × 76 = 228 integration points. The optimal solution obtained (column C, Table 9) indicates that, on average, it is profitable to launch an R&D programme with an annual depreciation value of 475 $/yr, in order to increase the currently available knowledge about the activation energy, up to the point where an error of 0.133 is achieved.
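The trade-off behind formulations (26) and (27) can be illustrated with a one-dimensional toy: assume the expected process cost falls as the error ε(E/R) is reduced (the linear benefit model below is our own stand-in, not the RHE design problem) while the information cost (25) rises, and let an optimiser pick the ε that minimises the total:

```python
from scipy.optimize import minimize_scalar

EPS0 = 0.30                # current nominal error on E/R
C_FIXED, A = 100.0, 90.0   # information-cost data used in the case study

def info_cost(eps):
    # Eq. (25) with b = 1 (an experiment is run whenever eps < EPS0)
    return C_FIXED + A * (1.0 / eps - 1.0 / EPS0)

def expected_process_cost(eps):
    # Toy stand-in: expected cost grows linearly with the uncertainty level.
    # In the chapter this value comes from solving the stochastic design problem.
    return 13000.0 + 6000.0 * eps

res = minimize_scalar(lambda e: expected_process_cost(e) + info_cost(e),
                      bounds=(1e-3, EPS0), method="bounded")
print(res.x, res.fun)   # optimal error and minimum total expected cost
```

For these toy coefficients the optimum sits where the marginal benefit of reducing ε equals the marginal information cost, i.e. at ε = (A/6000)^½ ≈ 0.12, strictly inside (0, ε⁰), mirroring the interior optimum found for solution C.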
Table 9. Optimal design solutions considering the present level of kR, U and E/R uncertainty (solution A) and optimizing the E/R uncertainty level (solutions B and C).

                   A        B^a      C^b
E(C) ($/year)    14 596   15 169   14 252
V (m³)            8.254    6.193    8.667
A (m²)            7.962    7.442    7.804
ε(kR)             0.300    0.300    0.300
ε(U)              0.300    0.300    0.300
ε(E/R)            0.300    0.099    0.133
μ(x_A)            0.9250   0.9184   0.9421
σ(x_A)/μ(x_A)     0.01     0.01     0.01

^a Worst case scenario: μ(E/R) = 722.3 K.
^b Expected values in face of μ(E/R) uncertainty.
5. CONCLUSIONS AND FUTURE WORK

Process design problems under uncertainty are inherently underdefined problems whose complete definition requires a set of considerations and assumptions, namely regarding constraint violation, uncertainty classification and modelling, operating policy and decision-making criteria. In this chapter we have analysed the above issues and integrated them under a generic framework for optimal process design, including a simple procedure to guide the decision-maker through the problem definition and the associated optimization formulations. This generic framework was then used to systematise some design formulations presented in the literature (mainly focusing on process flexibility), including also some of the authors' previously published work on robustness criteria and the value of information about uncertain parameters.

Starting with a complete feasibility formulation, where all the constraints are forced to be satisfied for every possible realisation of the uncertain parameters, we have derived three other design problem formulations by relaxing constraints and/or uncertainties (uncertainty relaxation corresponds to allowing feasibility over a subset R(d) of the uncertainty space Θ). Although both types of relaxation can be formulated, we believe that constraint relaxation approaches (C type formulations) may provide a more effective way to define design problems, since they allow us to selectively supervise constraint violations, through penalty terms in the objective function or by explicitly limiting the probability and/or expected extent of violation, while simultaneously optimizing the expected process performance and ensuring full feasibility for hard constraints. On the other hand, formulations with a soft uncertainty assumption (B or D types), even with an explicit lower limit for stochastic flexibility, do not allow us to control which constraints are being violated, and to what extent, over the portion of Θ outside R(d).
Regarding the value of information about uncertainty, we have proposed a two-step procedure to selectively allocate, at an early design stage, optimal investments in R&D to increase the present knowledge about uncertain parameters. In the first step the most value
adding parameters are identified, based on their expected value of perfect information, while in the second step the best design, together with the optimal R&D investments, is decided by exploring the trade-offs between the economic added value deriving from uncertainty reduction and the associated information costs. A case study comprising a reactor and a heat exchanger has illustrated the usefulness of the presented formulations, namely regarding the following features: accurate computation of overdesign factors; optimal design accounting for product quality; trade-offs between profit, flexibility and robustness; and identification of the activation energy as the most valuable model parameter, about which it is profitable to conduct R&D experiments in order to increase our present knowledge.

In the future we intend to further investigate and materialise some of the issues handled here, such as: inclusion in a C type formulation of constraint violation limits; definition of a design objective that covers safety and environmental issues; use of decision-making criteria based on the value of perfect information; and definition of a more effective operating policy somewhere between perfect and rigid control operation.

Process design has always been one of the most challenging and noble activities of chemical engineering, where a combination of art and science results in the conception of new or revamped plants. Some of its key concepts were defined a number of decades ago. However, due to the lack of adequate computational tools, some of these issues have since remained largely unexplored, including the ways used (or not) to include and handle several kinds of relevant uncertainties.
The past three decades, and the nineties in particular, have shown that new capabilities, tools and frameworks are now available to address in a consistent way, and consider explicitly, uncertainty sources as key issues in achieving design solutions that will result in competitive plants under environments of increasing randomness and volatility. Uncertainty handling therefore becomes more and more critical, and we now have a number of tools available for making process design more of a science and less of an art. Nevertheless, new and unexplored paths remain to be carefully studied in the future, making the next 30 years in this regard at least as promising and rich as the past 30 years that we have tried to portray in this chapter.
REFERENCES
1. Acevedo, J. and Pistikopoulos, E. N., A Parametric MINLP Algorithm for Process Synthesis Problems under Uncertainty, Ind. Eng. Chem. Res., 35 (1996), 147.
2. Acevedo, J. and Pistikopoulos, E. N., A Multiparametric Programming Approach for Linear Process Engineering Problems under Uncertainty, Ind. Eng. Chem. Res., 36 (1997), 717.
3. Ahmed, S. and Sahinidis, N. V., Robust Process Planning under Uncertainty, Ind. Eng. Chem. Res., 37 (1998), 1883.
4. Bernardo, F. P., Pistikopoulos, E. N. and Saraiva, P. M., Integration and Computational Issues in Stochastic Design and Planning Optimization Problems, Ind. Eng. Chem. Res., 38 (1999a), 3056.
5. Bernardo, F. P., Pistikopoulos, E. N. and Saraiva, P. M., Robustness Criteria in Process Design Optimization under Uncertainty, Comp. Chem. Eng., 23, Suppl. (1999b), S459.
6. Bernardo, F. P., Pistikopoulos, E. N. and Saraiva, P. M., Quality Costs and Robustness Criteria in Chemical Process Design Optimization, Comp. Chem. Eng., 25 (2001), 27.
7. Bernardo, F. P. and Saraiva, P. M., A Robust Optimization Framework for Process Parameter and Tolerance Design, AIChE J., 44 (1998), 2007.
8. Bernardo, F. P., Saraiva, P. M. and Pistikopoulos, E. N., Inclusion of Information Costs in Process Design Optimization under Uncertainty, Comp. Chem. Eng., 24 (2000), 1695.
9. Bhatia, T. K. and Biegler, L. T., Dynamic Optimization for Batch Design and Scheduling with Process Model Uncertainty, Ind. Eng. Chem. Res., 36 (1997), 3708.
10. Brooke, A., Kendrick, D. and Meeraus, A., GAMS: A User's Guide, Release 2.25, The Scientific Press Series, 1992.
11. Chacon-Mondragon, O. L. and Himmelblau, D. M., Integration of Flexibility and Control in Process Design, Comp. Chem. Eng., 20 (1996), 447.
12. Craig, R. J., Six Sigma Quality, the Key to Customer Satisfaction, 47th Annual Quality Congress, May 1993, Boston, 206 (1993).
13. Diwekar, U. M. and Kalagnanam, J. R., Efficient Sampling Technique for Optimization under Uncertainty, AIChE J., 43 (1997), 440.
14. Diwekar, U. M. and Rubin, E. S., Parameter Design Methodology for Chemical Processes Using a Simulator, Ind. Eng. Chem. Res., 33 (1994), 292.
15. Deshpande, P. B., Emerging Technologies and Six Sigma, Hydrocarbon Processing, 77 (1998), 55.
16. Grossmann, I. E. and Floudas, C. A., Active Constraint Strategy for Flexibility Analysis in Chemical Processes, Comp. Chem. Eng., 11 (1987), 675.
17. Grossmann, I. E. and Halemane, K.
P., Decomposition Strategy for Designing Flexible Chemical Plants, AIChE J., 28 (1982), 686.
18. Grossmann, I. E., Halemane, K. P. and Swaney, R. E., Optimization Strategies for Flexible Chemical Processes, Comp. Chem. Eng., 7 (1983), 439.
19. Grossmann, I. E. and Sargent, R. W. H., Optimum Design of Chemical Plants with Uncertain Parameters, AIChE J., 37 (1978), 517.
20. Halemane, K. P. and Grossmann, I. E., Optimal Process Design under Uncertainty, AIChE J., 29 (1983), 425.
21. Harry, M. and Schroeder, R., Six Sigma: The Breakthrough Management Strategy, Currency, New York, 2000.
22. van den Heever, S. A. and Grossmann, I. E., Disjunctive Multiperiod Optimization Methods for Design and Planning of Chemical Process Systems, Comp. Chem. Eng., 23 (1999), 1075.
23. Ierapetritou, M. G. and Pistikopoulos, E. N., Simultaneous Incorporation of Flexibility and Economic Risk in Operational Planning under Uncertainty, Comp. Chem. Eng., 18 (1994), 163.
24. Ierapetritou, M. G., Pistikopoulos, E. N. and Floudas, C. A., Operational Planning under Uncertainty, Comp. Chem. Eng., 20 (1996), 1499.
25. Nishida, N., Ichikawa, A. and Tazaki, E., Synthesis of Optimal Process Systems with Uncertainty, Ind. Eng. Chem. Proc. Des. Dev., 13 (1974), 209.
26. Ostrovsky, G. M., Volin, Y. M., Barit, E. I. and Senyavin, M. M., Flexibility Analysis and Optimization of Chemical Plants with Uncertain Parameters, Comp. Chem. Eng., 18 (1994), 775.
27. Ostrovsky, G. M., Achenie, L. E. K. and Gomelsky, V., A New Approach to Flexibility Analysis, PRES'99 Proceedings, Budapest, Hungary (1999).
28. Pertsinidis, A., Grossmann, I. E. and McRae, G. J., Parametric Optimization of MILP Programs and a Framework for the Parametric Optimization of MINLPs, Comp. Chem. Eng., 22, Suppl. (1998), S205.
29. Phadke, M. S., Quality Engineering Using Robust Design, Prentice Hall, New Jersey, 1989.
30. Pistikopoulos, E. N., Uncertainty in Process Design and Operations, Comp. Chem. Eng., 19, Suppl. (1995), S553.
31. Pistikopoulos, E. N. and Ierapetritou, M. G., Novel Approach for Optimal Process Design under Uncertainty, Comp. Chem. Eng., 19 (1995), 1089.
32. Pistikopoulos, E. N. and Mazzuchi, T. A., A Novel Flexibility Approach for Processes with Stochastic Parameters, Comp. Chem. Eng., 14 (1990), 991.
33. Rudd, D. F. and Watson, C. C., Strategy of Process Engineering, John Wiley & Sons, New York, 1968.
34. Samsatli, N. J., Papageorgiou, L. G. and Shah, N., Robustness Metrics for Dynamic Optimization Models under Parameter Uncertainty, AIChE J., 44 (1998), 1993.
35. Straub, D. A. and Grossmann, I. E., Integrated Stochastic Metric of Flexibility for Systems with Discrete State and Continuous Parameter Uncertainties, Comp. Chem. Eng., 14 (1990), 967.
36. Straub, D. A. and Grossmann, I. E., Design Optimization of Stochastic Flexibility, Comp. Chem. Eng., 17 (1993), 339.
37. Swaney, R. E. and Grossmann, I. E., An Index for Operational Flexibility in Chemical Process Design: Formulation and Theory, AIChE J., 31 (1985a), 621.
38. Swaney, R. E. and Grossmann, I.
E., An Index for Operational Flexibility in Chemical Process Design: Computational Algorithms, AIChE J., 31 (1985b), 631.
39. Watanabe, N., Nishimura, Y. and Matsubara, M., Optimal Design of Chemical Processes Involving Parameter Uncertainty, Chem. Eng. Sci., 28 (1973), 905.