CHAPTER 1
Basic features and concepts of optimization

Chapter outline
1.1 Introduction
1.2 Basic features
1.2.1 Optimization and its benefits
1.2.2 Scope for optimization
1.2.3 Illustrative examples
1.2.4 Essential requisites for optimization
1.3 Basic concepts
1.3.1 Functions in optimization
1.3.2 Interpretation of behavior of functions
1.3.3 Maxima and minima of functions
1.3.4 Region of search for constrained optimization
1.4 Classification and general procedure
1.4.1 Classification of optimization problems
1.4.2 General procedure of solving optimization problems
1.4.3 Bottlenecks in optimization
1.5 Summary
References
Optimization is the process of finding the set of conditions required to achieve the best solution in a given situation. Optimization is of great interest and finds widespread use in engineering, science, economics, and operations research. This introductory chapter presents the basic features and concepts that set the stage for the development of optimization methods in the subsequent chapters.
Stochastic Global Optimization Methods and Applications to Chemical, Biochemical, Pharmaceutical and Environmental Processes. https://doi.org/10.1016/B978-0-12-817392-3.00001-6 Copyright © 2020 Elsevier Inc. All rights reserved.
1.1 Introduction
A wide variety of problems in the design, operation, and analysis of engineering and technological processes can be resolved by optimization. This chapter provides the motivation for the topic of optimization by presenting its basic features along with its scope, examples of its applications, and its essential components. Furthermore, its basic concepts are described in terms of functions, the behavior of functions, and the maxima and minima of functions. The chapter further deals with the region of search within the constraints, the classification of optimization problems, the general solution procedure, and the obstacles to optimization.
1.2 Basic features
Optimization, with its mathematical principles and techniques, is used to solve a wide variety of quantitative problems in many disciplines. In an industrial environment, optimization can be used to make decisions at different levels. It is useful to begin the subject of optimization with its basic features and concepts.
1.2.1 Optimization and its benefits
Optimization is the process of selecting the best course of action from the available resources. Optimization problems are made up of three basic components: an objective function, a set of unknowns or decision variables, and a set of constraints. An objective function can be of the maximization or minimization type. In an industrial system, decisions are made either to minimize the cost or to maximize the profit. Profit maximization or cost minimization is expressed by means of a performance index. Decision variables are the variables that engineers or managers adjust in a technological or managerial system to achieve the desired objective. The task of optimization is to find the values of the decision variables that yield the best value of the performance criterion. Constraints are the restrictions imposed on the system within which the decision variables are chosen to maximize the benefit or minimize the effort.

Optimization has widespread applications in engineering and science and has become a major contributor to the growth of industry. In plant operations, optimization provides improved plant performance in terms of improved yields of valuable products, reduced energy consumption, and higher processing rates. Optimization can also benefit plants through reduced maintenance costs, less equipment wear, and better staff utilization. It helps in planning and scheduling the efficient construction of plants. With the systematic identification of the objective, constraints, and degrees of freedom in processes or plants, optimization provides improved quality of design, faster and more reliable troubleshooting, and faster decision-making. It helps in minimizing inventory charges and increases overall efficiency through the allocation of resources or services among various processes or activities. It also helps reduce transportation charges through strategic planning of distribution networks for products and the procurement of raw materials from different sources.
1.2.2 Scope for optimization
Optimization can be applied to an entire company, a plant, a process, a single unit operation, or a single piece of equipment. In a typical industrial environment, optimization can be used in making decisions at the management level, the plant design level, and the plant operations level [1].
Management level: At the management level, optimization helps in making decisions concerning project evaluation, product selection, corporate budgets, investment in sales, research and development, and new plant construction. At this stage the available information is qualitative and uncertain, as these decisions are made well in advance of plant design.
Plant design level: Decisions made at this level are concerned with the choice of process (batch or continuous), nominal operating conditions, the configuration of the plant, the size of individual units, the use of flow sheeting programs, and the aid of process design simulators.
Plant operations level: Decisions at this stage include the allocation of raw materials on a weekly/daily basis, day-to-day optimization of a plant to minimize steam and cooling water consumption, operating controls that keep a given unit at certain temperatures and pressures, and costs of shipping, transportation, and distribution of products.
1.2.3 Illustrative examples
The basic applications of optimization are explained through different illustrative examples drawn from industry.
(a) Optimum pipe diameter for pumping fluid
One typical example is the problem of determining the optimum pipe diameter for pumping a given amount of fluid from one point to another. The fluid can be pumped between the two points through pipes of different diameters. However, there is one particular pipe diameter that minimizes the total cost, which comprises the cost of pumping the liquid and the cost of the installed piping system, as shown in Fig. 1.1 [2]. From the figure it can be observed that the pumping cost increases as the pipe diameter decreases because of frictional effects, whereas the fixed charges for the pipeline become lower with smaller pipe diameters because of the reduced capital investment. The optimum diameter is located at point E, where the sum of the pumping costs and the fixed costs for the pipeline is a minimum.
(b) Optimizing air supply to a combustion process
Combustion occurs when fuels such as natural gas, fuel oil, or coal react with oxygen in the air to form carbon dioxide (CO2) and generate heat, which can be used in industrial processes [3]. In a combustion process involving a typical fuel,
FIGURE 1.1 Determination of optimum pipe diameter. Cost, $/(year)(ft of pipe length), versus pipe diameter: the power cost for pumping falls and the capital investment for installed pipe rises with diameter; the total cost is minimized at point E, the optimum pipe diameter.
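The cost tradeoff of Fig. 1.1 can be sketched numerically. The cost model below is a made-up illustration (pumping cost falling as 1/D**5, installed-pipe charges growing linearly in D, with arbitrary coefficients a and b), not the correlation used in the text:

```python
# Illustrative sketch of the optimum-pipe-diameter tradeoff in Fig. 1.1.
# The cost model is hypothetical: pumping cost is taken to fall as 1/D**5
# (frictional losses) and installed-pipe cost to grow linearly in D.
# The coefficients a and b are made-up numbers for demonstration only.

def total_cost(D, a=1.0, b=10.0):
    """Total annual cost per foot of pipe as a function of diameter D."""
    pumping = a / D**5        # power cost for pumping (falls with D)
    fixed = b * D             # capital charges for installed pipe (rises with D)
    return pumping + fixed

def optimum_diameter(lo=0.1, hi=2.0, steps=100000):
    """Locate the diameter minimizing total_cost by a simple scan."""
    best_D, best_C = lo, total_cost(lo)
    for k in range(1, steps + 1):
        D = lo + (hi - lo) * k / steps
        C = total_cost(D)
        if C < best_C:
            best_D, best_C = D, C
    return best_D, best_C

D_opt, C_opt = optimum_diameter()
```

For this toy model the scan agrees with the analytical optimum D = (5a/b)**(1/6) obtained by setting dC/dD = 0, which is the graphical condition at point E.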
when a small quantity of air is supplied to the burner, there is not enough oxygen to completely react with all the carbon in the fuel to form CO2. Some of the oxygen combines with the carbon in the fuel to form carbon monoxide (CO). CO is a highly toxic gas associated with combustion, and its formation has to be minimized. The most efficient use of the fuel is achieved when the CO2 concentration in the exhaust is maximized. This happens only when there is sufficient oxygen (O2) in the air to react with all the carbon in the fuel. The theoretical air required for the combustion reaction depends on the fuel composition and the rate of fuel supply. As the air level is increased up to 100% of the theoretical air, the concentration of CO decreases rapidly to a minimum and the CO2 and O2 concentrations attain their maximum levels. A further increase in air supply begins to dilute the exhaust gases, causing the CO2 concentration to decrease. These situations are depicted in Fig. 1.2. Optimizing the air supply, or the air-fuel ratio, is desired to increase the efficiency of the combustion process.
FIGURE 1.2 Efficiency of a combustion process as a function of air supply (% CO2, % O2, and % CO in the exhaust plotted against % theoretical air, roughly over the range 80-120%).
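The statement that the theoretical air depends on the fuel composition can be made concrete with a standard stoichiometric calculation. The sketch below assumes methane as the fuel (CH4 + 2 O2 -> CO2 + 2 H2O) and air containing 21 mol% O2; neither value comes from the text:

```python
# A minimal sketch of the theoretical-air calculation mentioned above,
# for complete combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O.
# Air is taken as 21 mol% O2, a standard approximation.

O2_FRACTION_IN_AIR = 0.21

def theoretical_air(mol_fuel, o2_per_mol_fuel):
    """Moles of air that supply exactly the stoichiometric O2."""
    return mol_fuel * o2_per_mol_fuel / O2_FRACTION_IN_AIR

def percent_theoretical_air(air_supplied, mol_fuel, o2_per_mol_fuel):
    """Air supply expressed as % of theoretical, the x-axis of Fig. 1.2."""
    return 100.0 * air_supplied / theoretical_air(mol_fuel, o2_per_mol_fuel)

# 1 mol CH4 needs 2 mol O2, i.e. about 9.52 mol air.
air_needed = theoretical_air(1.0, 2.0)
```

Supplying, say, 1.2 times this amount corresponds to 120% theoretical air, the right-hand side of Fig. 1.2 where dilution of the exhaust begins.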
(c) Optimal dilution rate of a chemostat
A chemostat is a completely mixed continuous stirred tank reactor used for the cultivation of cells. The steady-state mass balances on substrate and cells in a chemostat are described by the Monod chemostat model [4]. Under the sterile feed condition, the cell mass concentration in the feed is zero. The dilution rate (D) is an important parameter characterizing the processing rate of the chemostat. The steady-state values of the substrate concentration (s) and cell mass concentration (x) in the chemostat are influenced by the dilution rate (D). When the dilution rate is low, such that D → 0 and s → 0, x increases to a high value. As D increases, s increases at a faster rate and x tends to decline. As D approaches its maximum rate Dmax, x becomes zero and s attains a high value. The condition of loss of cells at steady state, where x = 0 and D = Dmax, is called the near-washout condition, and when D > Dmax, total washout of cells occurs. Near washout, the chemostat is very sensitive to variations in D: a small change in D gives a relatively large shift in x and s. These situations are shown in Fig. 1.3. The rate of cell production per unit volume of chemostat is expressed as Dx, which is a function of D as shown in Fig. 1.4.
(d) Optimizing the design condition to deal with model uncertainty
Model uncertainty is an important issue in product or process design. Failure to account for model uncertainty may lead to degradation in process performance [5]. The inputs or parameters of the model are to be selected such that the output responses are robust, or insensitive, to variations in the model parameters. In a multiresponse optimization problem, the optimal operating condition chosen for the variables has to simultaneously deal with robustness and performance for multiple responses. In such a case, the multiple-response problem is converted into a single objective function expressed as a loss function, which takes into account the correlation among the responses and the process economics. This function represents the variability in response, which in turn depends on the operating condition chosen for the variables. The response variability, or model uncertainty, is illustrated in Fig. 1.5, where the
FIGURE 1.3 Variation of x and s (mass/volume) with respect to D; washout occurs as D approaches Dmax.
FIGURE 1.4 Optimum dilution rate (Doptimum) that maximizes cell production Dx.
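The steady-state behavior described above can be sketched with the Monod chemostat model. All parameter values below (mu_max, Ks, Y, s_f) are illustrative assumptions, and the closed-form expression for the productivity-maximizing dilution rate is the standard result for this model rather than a formula quoted in the text:

```python
import math

# A sketch of the Monod chemostat steady state discussed above.
# Parameter values are illustrative, not taken from the text.
MU_MAX = 1.0   # maximum specific growth rate, 1/h (assumed)
KS = 0.2       # saturation constant, g/L (assumed)
Y = 0.5        # yield of cells on substrate, g/g (assumed)
S_F = 10.0     # feed substrate concentration, g/L (assumed)

def steady_state(D):
    """Steady-state substrate s and cell mass x at dilution rate D (sterile feed)."""
    D_max = MU_MAX * S_F / (KS + S_F)   # washout dilution rate
    if D >= D_max:
        return S_F, 0.0                 # total washout: x = 0, s at its feed value
    s = D * KS / (MU_MAX - D)           # from mu(s) = D at steady state
    x = Y * (S_F - s)
    return s, x

def productivity(D):
    """Cell production rate per unit volume, Dx (Fig. 1.4)."""
    _, x = steady_state(D)
    return D * x

# Known analytical optimum of Dx for the Monod model:
D_opt = MU_MAX * (1.0 - math.sqrt(KS / (KS + S_F)))
```

Evaluating productivity(D) on either side of D_opt reproduces the single-peaked curve of Fig. 1.4, and any D beyond the washout value returns x = 0.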
FIGURE 1.5 Response behavior due to model uncertainty: performance of the true model versus the fitted model, with design points A and B.
dashed curve represents the true response and the solid curve corresponds to the fitted model. If the goal is to maximize the response performance, the model should provide the optimal value of the design variable at point A. If the fitted model instead provides point B as the design variable, the response exhibits much lower performance than the true optimum. Thus, even a slight deviation of the fitted model from the true model might result in unacceptable performance. It can be observed that the difference between the predicted and true values of the response is larger at design point B than at A.
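The effect shown in Fig. 1.5 can be reproduced with a toy example. Both response curves below are invented quadratics; the point is only that optimizing the fitted model (point B) sacrifices true performance relative to the true optimum (point A):

```python
# A toy illustration of Fig. 1.5: a "true" response and a slightly
# misfitted model whose maximizer (point B) differs from the true
# optimum (point A). Both curves are invented for demonstration.

def true_response(x):
    """Hypothetical true performance, maximized at x = 2."""
    return -(x - 2.0) ** 2 + 5.0

def fitted_response(x):
    """Hypothetical fitted model, whose maximum is shifted to x = 3."""
    return -0.8 * (x - 3.0) ** 2 + 4.8

x_A = 2.0            # optimum suggested by the true model
x_B = 3.0            # optimum suggested by the fitted model
loss = true_response(x_A) - true_response(x_B)   # performance lost by using B
```

Even though the two curves look similar, designing at B rather than A gives up a finite amount of true performance, which is the degradation the text warns about.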
1.2.4 Essential requisites for optimization The goal of optimization is to find the values of decision variables that yield the best values of the performance criterion or objective function. The process of optimization considered for a particular application depends on the nature of the process, its mathematical description, its constraints, and the type of objective function. Thus, the basic components that are essential for formulation of a mathematical optimization problem are the objective function, the decision variables, and the constraints.
Objective function: This is the mathematical function that has to be either minimized or maximized. For example, in a manufacturing process, the aim may be to maximize the profit or minimize the cost. In comparing the data produced by a user-defined model with observed data, the objective is to determine the decision variables that minimize the deviation between the model predictions and the observed data. The objective function is usually defined by taking into account the type of optimization application.
Decision variables: These are the set of unknowns or variables that control the value of the objective function. In a manufacturing problem, the variables may include the amounts of different resources used or the time spent on each activity. In fitting data to a model, the unknowns can be the parameters of the model.
Constraints: Constraints allow the unknowns or variables to take on certain values but exclude others. For example, in a manufacturing problem, one cannot spend a negative amount of time on any activity, so one constraint is that the "time" variables be nonnegative. The optimization problem is then to find values of the variables that minimize or maximize the objective function while satisfying the constraints.
1.3 Basic concepts
The basic concepts of optimization are described in terms of functions, the behavior of functions, maxima and minima of functions, and the region of search.
1.3.1 Functions in optimization
(a) Continuous, discontinuous, and discrete functions
A function is continuous at some point x if

f(x) = lim_{h→0} f(x + h)   (1.1)

A function of one variable may be represented by an unbroken continuous curve, as shown in Fig. 1.6. A discontinuous function is one in which a discontinuity occurs at some point x = x0. The common form of discontinuity is the jump discontinuity, in which the limit of f(x0 + h) as h → 0 depends on the side of approach: the function approaches f(x0+) when h > 0 and a different value f(x0−) when h < 0. A discontinuous function is shown in Fig. 1.7. Discrete functions are discontinuous functions that are valid only at discrete values. For example, the pressure drop (ΔP) of a fluid flowing at a fixed flow rate through a pipe of fixed length is a function of the pipe diameter (D), as shown in Fig. 1.8. This results in a discrete function because pipes are normally available only in standard diameters.
FIGURE 1.6 Continuous function: f(x) = (x + 5)(x − 5); f(x) = 0 for x = ±5.
FIGURE 1.7 Discontinuous function: f(x) = 1/(x − 5), with a discontinuity at x = 5.
FIGURE 1.8 Discrete function: ΔP = f(D), defined only at standard diameters D1, D2, D3, D4.
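A discrete function like that of Fig. 1.8 can be sketched as a lookup over catalog sizes. The proportionality ΔP ~ 1/D**5 at fixed flow rate and the list of standard diameters are illustrative assumptions of this sketch:

```python
# A sketch of the discrete function of Fig. 1.8: pressure drop evaluated
# only at standard pipe diameters. The relation dP ~ 1/D**5 at fixed flow
# rate and the diameter list below are illustrative assumptions.

STANDARD_DIAMETERS = [1.0, 1.5, 2.0, 3.0, 4.0]   # assumed catalog sizes

def pressure_drop(D, k=100.0):
    """Pressure drop at fixed flow rate; k lumps fluid and pipe constants."""
    return k / D ** 5

def smallest_feasible_diameter(dp_limit):
    """Smallest standard size whose pressure drop is within the limit."""
    for D in STANDARD_DIAMETERS:          # sorted smallest to largest
        if pressure_drop(D) <= dp_limit:
            return D
    return None                           # no catalog size satisfies the limit
```

Intermediate diameters such as D = 1.7 are not purchasable, so the optimization variable ranges over a discrete set, which is what makes the problem discrete rather than continuous.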
(b) Monotonic functions
Monotonic functions can be of the increasing, nondecreasing, decreasing, or nonincreasing type, as shown in Fig. 1.9. A function is termed monotonic increasing along a path when f(x2) > f(x1) for x2 > x1. A function is said to be monotonic
FIGURE 1.9 Monotonic functions. (A) Monotonic increasing; (B) monotonic nondecreasing; (C) monotonic decreasing; and (D) monotonic nonincreasing.
nondecreasing when f(x2) ≥ f(x1) for x2 > x1. A function is termed monotonic decreasing when f(x2) < f(x1) for x2 > x1. A function is said to be monotonic nonincreasing when f(x2) ≤ f(x1) for x2 > x1.
(c) Unimodal and multimodal functions
When the values of a function are plotted against its independent variable, the function may initially increase up to a maximum and then fall away. Similarly, the function may fall to a minimum and then increase as its variable changes. Such functions are termed unimodal functions. These functions possess a local maximum or local minimum represented by a single peak, as shown in Fig. 1.10. Functions with two peaks representing maxima or minima are called bimodal functions, and functions with more than two peaks are referred to as multimodal functions. Such functions are shown in Fig. 1.11.

FIGURE 1.10 Unimodal functions: (i) one minimum, (ii) one maximum.

Unimodality: The interpretation of unimodality is useful in the application of numerical search techniques. For the one-dimensional case, unimodality is defined as

f(x1,1) < f(x1,2) < f(x1*)  if  x1,1 < x1,2 < x1*
f(x1*) > f(x1,3) > f(x1,4)  if  x1* < x1,3 < x1,4   (1.2)

This definition satisfies unimodality for a maximum at x1*, as shown in Fig. 1.12. Similarly, unimodality for a minimum can be defined as shown in Fig. 1.13.
(d) Convex and concave functions
Convex function: This function has a single peak denoting the minimum, as shown in Fig. 1.14. The function f(x) is said to be convex if, for any two points x1 and x2 over the convex set and for 0 ≤ a ≤ 1,

f[a x2 + (1 − a) x1] ≤ a f(x2) + (1 − a) f(x1)   (1.3)
FIGURE 1.11 Multimodal functions: (i) bimodal function, (ii) multimodal function.
FIGURE 1.12 Unimodality for maximum (points x1,1, x1,2, x1*, x1,3, x1,4).
FIGURE 1.13 Unimodality for minimum.
FIGURE 1.14 Convex function (minimum at x*).
A convex set is a set in which every point on the interval joining any two points x1 and x2, given by x = (1 − a)x1 + a x2 for 0 ≤ a ≤ 1, also lies in the set. A convex function passes below the straight line joining the points x1 and x2, as shown in Fig. 1.15.
Concave function: This function has a single peak representing the maximum, as shown in Fig. 1.16. The function f(x) is said to be concave if, for any two points x1 and x2 over the set and for 0 ≤ a ≤ 1,

f[a x2 + (1 − a) x1] ≥ a f(x2) + (1 − a) f(x1)   (1.4)

Here too, every point in the interval joining x1 and x2 is defined by x = (1 − a)x1 + a x2 for 0 ≤ a ≤ 1. A concave function passes above the straight line joining the points x1 and x2, as shown in Fig. 1.17.
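Definitions (1.3) and (1.4) suggest a simple numerical check: sample pairs of points and weights a in [0, 1] and compare the function against the chord. A finite sample can refute convexity but not prove it, so this is only a sketch:

```python
# Numerical check of definitions (1.3) and (1.4): test whether the chord
# between sampled points lies above (convex) or below (concave) the
# function. Sampling can refute convexity but cannot strictly prove it.

def is_convex(f, points, weights):
    """True if f[a*x2 + (1-a)*x1] <= a*f(x2) + (1-a)*f(x1) on all samples."""
    tol = 1e-12
    for x1 in points:
        for x2 in points:
            for a in weights:
                lhs = f(a * x2 + (1 - a) * x1)
                rhs = a * f(x2) + (1 - a) * f(x1)
                if lhs > rhs + tol:
                    return False
    return True

def is_concave(f, points, weights):
    """Concave iff -f is convex, mirroring definition (1.4)."""
    return is_convex(lambda x: -f(x), points, weights)

pts = [-2.0, -1.0, 0.0, 0.5, 1.0, 2.0]
wts = [0.0, 0.25, 0.5, 0.75, 1.0]
```

On these samples x**2 passes the convexity test, -(x**2) passes the concavity test, and x**3 fails both, matching the geometric picture of Figs. 1.15 and 1.17.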
FIGURE 1.15 Convex function over the convex set.
FIGURE 1.16 Concave function (maximum at x*).
FIGURE 1.17 Representation of a concave function.

1.3.2 Interpretation of behavior of functions
(a) Increasing function and its first derivative
A function y = f(x) is termed an increasing function if y increases with the increase in x. If x takes the values x1 and x2 in the given interval with x2 > x1, then for an increasing function f(x2) > f(x1), as shown in Fig. 1.18A. The first derivative of an increasing function can be depicted from Fig. 1.18B. Let x and x + Δx be two values of x with corresponding function values y = f(x) and y + Δy = f(x + Δx), so that Δy = f(x + Δx) − f(x). As x + Δx > x, Δx > 0, and for an increasing function f(x + Δx) > f(x), so Δy > 0. As Δx and Δy are both positive, Δy/Δx > 0. The derivative of the function f(x) is the limit of Δy/Δx as Δx → 0:

lim_{Δx→0} Δy/Δx = dy/dx   (1.5)

As dy/dx > 0, y′(x) = f′(x) ≥ 0; thus, if the increasing function is differentiable, f′(x) ≥ 0. Similarly, if Δx < 0, then x + Δx < x and f(x + Δx) < f(x); with Δx and Δy both negative, Δy/Δx > 0 again.

FIGURE 1.18 (A) Increasing function and (B) depiction of its first derivative.
FIGURE 1.19 (A) Decreasing function and (B) depiction of its first derivative.
(b) Decreasing function and its first derivative
A function y = f(x) is said to be a decreasing function over the domain of x if y decreases with the increase of x. For this case, x2 > x1 and f(x2) < f(x1), as shown in Fig. 1.19A. The first derivative of the decreasing function can be depicted from Fig. 1.19B. Let x and x + Δx be two values of x with corresponding function values y = f(x) and y + Δy = f(x + Δx). For a decreasing function, Δx > 0 and f(x + Δx) < f(x). Consequently, Δy < 0 and Δy/Δx < 0. In Eq. (1.5), the limit of Δy/Δx as Δx → 0 is defined as dy/dx. As Δy/Δx < 0, y′(x) = f′(x) ≤ 0; thus, if the decreasing function is differentiable, f′(x) ≤ 0. Similarly, if Δx < 0 and Δy > 0, the ratio Δy/Δx is negative and hence f′(x) ≤ 0.
(c) Increasing-decreasing function
A function y = f(x) is termed an increasing-decreasing function if f′(x) > 0 for all x in some interval and f′(x) < 0 for all x in some other interval, as shown in Fig. 1.20. For f′(x) > 0, the tangent to the graph at any point has a positive slope, tending upward to the right. If f′(x) < 0, the slope of the function lies downward to the right, indicating the decrease in y with the increase of x.
FIGURE 1.20 Depiction of an increasing-decreasing function in terms of its first derivative.
1.3.3 Maxima and minima of functions
Finding the maximum or minimum of a function is important in optimization applications.
Local maximum: A function f(x) is said to have a local maximum at x = x0 if f(x0) > f(x) for all x in the neighborhood of x0. The point P in Fig. 1.21 corresponds to the local maximum of the function.
Local minimum: A function f(x) is said to have a local minimum at x = x0 if f(x0) < f(x) for all x in the neighborhood of x0. The point Q in Fig. 1.22 corresponds to the local minimum of the function.
Stationary point: The point at x = x0 where the tangent to the function y = f(x) is horizontal is referred to as a stationary point or critical point. For a continuous function, the condition for a critical point is f′(x0) = 0, as indicated in Fig. 1.23. The critical point refers to a well-defined function for which the derivative exists.
Saddle point: The point at which the function indicates neither a local maximum nor a local minimum is called a saddle point or inflection point. The point R at x = x0 refers to the saddle point condition, where the tangent is flat and the derivative f′(x0) = 0 (Fig. 1.24).
The same optimal point for maximum and minimum: If the point x* represents the minimum value of a function f(x), the same x* represents the maximum of the negative of the function, −f(x), as shown in Fig. 1.25. The optimization can be considered as a problem of minimization of the function

FIGURE 1.21 Local maximum: f(x0) > f(x1) and f(x0) > f(x2).
FIGURE 1.22 Local minimum: f(x0) < f(x1) and f(x0) < f(x2).
FIGURE 1.23 Stationary point of a function (f′(x0) = 0).
FIGURE 1.24 Saddle point of a function (point R, f′(x0) = 0).
and the maximum of a function can be obtained by seeking the minimum of the negative of the same function.
First derivative test for an optimum: The stationary point for the maximum or minimum of a function can be tested by using the first derivative test. (i) If f′(x) alters sign from negative to positive at x = x0, so that f′(x) < 0 for x < x0 and f′(x) > 0 for x > x0, then the function f(x) has a minimum at x = x0 (Fig. 1.26). (ii) If f′(x) alters sign from positive to negative at x = x0, so that f′(x) > 0 for x < x0 and f′(x) < 0 for x > x0, then the function f(x) has a maximum at x = x0 (Fig. 1.27).
FIGURE 1.25 The same point x* representing the minimum of f(x) and the maximum of −f(x).
FIGURE 1.26 First derivative test for the minimum of a function.
FIGURE 1.27 First derivative test for the maximum of a function.
FIGURE 1.28 First derivative test for saddle point representation.
(iii) If f′(x) does not alter sign at x = x0, then the function f(x) exhibits an inflection point at x = x0 (Fig. 1.28).
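The three cases of the first derivative test can be sketched numerically by estimating f′ on either side of a stationary point with a central difference. The step sizes h and dx are arbitrary choices of this sketch:

```python
# A sketch of the first derivative test, cases (i)-(iii): estimate f'(x)
# by a central difference just before and just after a stationary point
# x0 and classify the point by the sign change. Steps h and dx are
# arbitrary choices of this sketch.

def fprime(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def classify_stationary_point(f, x0, dx=1e-3):
    """Classify x0 by the sign of f' on either side of it."""
    left, right = fprime(f, x0 - dx), fprime(f, x0 + dx)
    if left < 0 < right:
        return "minimum"         # case (i): sign changes from - to +
    if left > 0 > right:
        return "maximum"         # case (ii): sign changes from + to -
    return "inflection"          # case (iii): no sign change
```

For example, x**2 is classified as a minimum at 0, -(x**2) as a maximum, and x**3 as an inflection, matching Figs. 1.26-1.28.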
1.3.4 Region of search for constrained optimization
Any optimization problem consists of an objective function subject to constraints. The objective function, also termed the cost function or profit function, is expressed as

Maximize/Minimize f(x)   (1.6)

subject to the constraints

g_j(x) ≤ 0,  j = 1, 2, …, m
h_i(x) = 0,  i = 1, 2, …, p   (1.7)

where x = [x1, x2, …, xn]. The constraints g_j(x), j = 1, …, m are inequality constraints, and h_i(x), i = 1, …, p are equality constraints. Fig. 1.29 shows a two-dimensional design space that depicts the region of search, in which the feasible region is denoted by hatched lines. The set of values of x that satisfy g_j(x) = 0 forms a boundary surface in the design space called a constraint surface. The constraint surface divides the design space into two regions: (1) g_j(x) < 0 and (2) g_j(x) > 0. The constraints can be linear or nonlinear, and the design space will be bounded by the corresponding curves. The points lying in the region where g_j(x) < 0 are feasible or acceptable. The points that lie on the constraint hypersurface satisfy the constraints critically, whereas the points lying in the region g_j(x) > 0 are infeasible or unacceptable. A design point that lies on a constraint surface is called a bound point, and the associated constraint is called an active constraint. Design points that do not lie on any constraint surface are known as free points. The design points can thus be classified as (i) free and acceptable, (ii) free and unacceptable, (iii) bound and acceptable, and (iv) bound and unacceptable (Fig. 1.29).
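The classification of design points against inequality constraints of the form g_j(x) ≤ 0 can be sketched as a feasibility check. The tolerance and the two example constraints g1 and g2 below are invented for illustration:

```python
# A sketch of classifying design points against inequality constraints
# g_j(x) <= 0. A point is "bound" when some g_j(x) is (numerically) zero,
# i.e. the constraint is active. Tolerance and constraints are invented.

def classify_point(x, inequality_constraints, tol=1e-9):
    """Return 'unacceptable', 'bound', or 'free and acceptable' for point x."""
    values = [g(x) for g in inequality_constraints]
    if any(v > tol for v in values):
        return "unacceptable"            # lies in a region with g_j(x) > 0
    if any(abs(v) <= tol for v in values):
        return "bound"                   # lies on a constraint surface (active)
    return "free and acceptable"         # strictly inside the feasible region

# Two hypothetical constraints on x = (x1, x2):
g1 = lambda x: x[0] + x[1] - 2.0         # linear: x1 + x2 <= 2
g2 = lambda x: x[0] ** 2 - x[1]          # nonlinear: x1**2 <= x2
```

A point such as (0.5, 1.0) is free and acceptable, (1.0, 1.0) lies on both constraint surfaces and is bound, and (2.0, 2.0) violates g1 and is unacceptable, mirroring the categories of Fig. 1.29.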
FIGURE 1.29 Search region for constrained optimization: feasible and infeasible regions in the (x1, x2) plane bounded by linear and nonlinear inequality constraints, showing a free point and bound acceptable and unacceptable points.
1.4 Classification and general procedure
This section covers the classification of optimization problems, the general solution procedure, and the bottlenecks in optimization.
1.4.1 Classification of optimization problems
Classifying an optimization problem is an important step in the optimization process, as algorithms for solving optimization problems are tailored to a particular type of problem. Optimization problems can be classified into different types as discussed below [6].
(a) Continuous optimization versus discrete optimization
Optimization problems involving functions that can take on any real value of the variables are called continuous optimization problems. Certain functions make sense only if the variables take on values from a discrete set; optimization problems involving such variables are referred to as discrete optimization problems. Continuous optimization problems are easier to solve than discrete optimization problems. However, discrete optimization problems can also be solved efficiently owing to improvements in algorithms coupled with advances in computing technology.
(b) Unconstrained optimization versus constrained optimization
Another important classification is based on whether the problem involves constraints on the variables. Unconstrained optimization problems arise directly in many practical applications. They may also arise from the replacement of constraints by a penalty term in the objective function of a constrained optimization problem. Constrained optimization problems occur in applications in which there are explicit constraints on the variables. The constraints can vary widely, from simple bounds to systems of equalities and inequalities that model complex relationships among the variables. Constrained optimization problems can be further classified according to the nature of the constraints (e.g., linear, nonlinear, convex) and the smoothness of the functions (e.g., differentiable or nondifferentiable).
(c) Classification of constrained optimization problems
Based on the nature of the equations for the objective function and the constraints, optimization problems can be classified as linear, nonlinear, geometric, and quadratic programming problems. This classification is very useful from a computational point of view, because many special-purpose methods are available for the effective solution of each particular type of problem.
(i) Linear programming problem
If the objective function and all the constraints are linear functions of the design variables, the optimization problem is called a linear programming problem (LPP). A linear programming problem is often stated in the standard form:

Find X = [x1, x2, …, xn]^T   (1.8)

which maximizes

f(X) = sum_{i=1}^{n} c_i x_i   (1.9)

subject to the constraints

sum_{j=1}^{n} a_ij x_j ≤ b_i,  i = 1, 2, …, m
x_j ≥ 0,  j = 1, 2, …, n   (1.10)

where c_i, a_ij, and b_i are constants.
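A tiny numerical instance of the standard form (1.8)-(1.10) can be solved by checking the vertices of the feasible region, which suffices for two variables. The coefficients below are invented for illustration; practical problems would use a solver such as scipy.optimize.linprog:

```python
# A toy instance of the LP standard form (1.8)-(1.10), with invented data:
#   maximize f = 3*x1 + 2*x2
#   subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x1 >= 0, x2 >= 0.
# For two variables the optimum lies at a vertex of the feasible polygon,
# so enumerating constraint-line intersections is enough here.

from itertools import combinations

c = [3.0, 2.0]                      # objective coefficients c_i
A = [[1.0, 1.0], [1.0, 3.0]]        # constraint coefficients a_ij
b = [4.0, 6.0]                      # right-hand sides b_i

def solve_2var_lp(c, A, b):
    """Intersect pairs of constraint lines (and axes), keep feasible
    vertices, and return (f_max, x1, x2) at the best vertex."""
    lines = [(row[0], row[1], bi) for row, bi in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # axes x1 = 0, x2 = 0
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue                               # parallel lines
        x1 = (r1 * b2 - r2 * b1) / det             # Cramer's rule
        x2 = (a1 * r2 - a2 * r1) / det
        feasible = (x1 >= -1e-9 and x2 >= -1e-9 and
                    all(row[0] * x1 + row[1] * x2 <= bi + 1e-9
                        for row, bi in zip(A, b)))
        if feasible:
            val = c[0] * x1 + c[1] * x2
            if best is None or val > best[0]:
                best = (val, x1, x2)
    return best

f_max, x1_opt, x2_opt = solve_2var_lp(c, A, b)
```

The vertex enumeration mirrors the geometric fact exploited by the simplex method: an LP optimum, when it exists, is attained at a vertex of the feasible region.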
(ii) Nonlinear programming problem If any of the functions among the objectives and constraint functions of the optimization problem is nonlinear, the problem is called a nonlinear programming (NLP) problem. This is the most general form of a programming problem and all other problems can be considered as special cases of the NLP problem. (iii) Geometric programming problem A geometric programming (GMP) problem is one in which the objective function and constraints are expressed as polynomials in X. A function h(X) is called a polynomial (with m terms) if h can be expressed as hðXÞ ¼ c1 xa111 xa221 ; .; xann1 þ c2 xa112 xa222 ; .; xann2 þ; /; þcm xa11m xa22m ; .; xannm
(1.11)
where cj (j ¼ 1,., m) and aij (i ¼ 1,., n and j ¼ 1,., m) are constants with cj 0and xi 0. Thus, GMP problems can be posed as follows: Find X, which minimizes ! N0 n X a aij f ðXÞ ¼ cj xi ; cj > 0; xi > 0 (1.12) j¼1
subject to gk ðXÞ ¼
Nk X j¼1
ajk
n a
i¼1
! q xi ijk
> 0;
ajk > 0;
xi > 0;
k ¼ 1; 2; .; m
(1.13)
i¼1
where N_0 and N_k denote the number of terms in the objective function and in the kth constraint function, respectively.

(iv) Quadratic programming problem
A quadratic programming problem is the best-behaved nonlinear programming problem, with a quadratic objective function and linear constraints, and is concave (for maximization problems). It can be solved by suitably modifying linear programming techniques. It is usually formulated as follows:

f(X) = c + \sum_{i=1}^{n} q_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} Q_{ij} x_i x_j    (1.14)

subject to

\sum_{i=1}^{n} a_{ij} x_i = b_j, \quad j = 1, 2, \ldots, m

x_i \ge 0, \quad i = 1, 2, \ldots, n

where c, q_i, Q_{ij}, a_{ij}, and b_j are constants.
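As a minimal sketch with hypothetical constants (and with the linear constraints of Eq. (1.14) dropped for simplicity), an unconstrained quadratic objective with symmetric positive-definite Q has a closed-form minimizer: setting the gradient q + 2Qx to zero gives x* = -(1/2) Q^{-1} q.

```python
# Hypothetical unconstrained quadratic: f(x) = c + q.x + x'Qx
c = 1.0
q = [-4.0, -6.0]
Q = [[2.0, 0.0],
     [0.0, 1.0]]  # symmetric positive definite

# Solve the stationarity condition (2Q) x = -q with a hand-rolled
# 2x2 Cramer's-rule solve.
M = [[2 * Q[0][0], 2 * Q[0][1]],
     [2 * Q[1][0], 2 * Q[1][1]]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
x_star = [
    (M[1][1] * -q[0] - M[0][1] * -q[1]) / det,
    (M[0][0] * -q[1] - M[1][0] * -q[0]) / det,
]
f_star = (c
          + sum(qi * xi for qi, xi in zip(q, x_star))
          + sum(Q[i][j] * x_star[i] * x_star[j]
                for i in range(2) for j in range(2)))
print(x_star, f_star)  # minimizer [1.0, 3.0], minimum value -10.0
```

With the equality constraints of Eq. (1.14) restored, the same idea extends to solving the KKT system, which is what dedicated quadratic programming solvers do.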
(d) Deterministic optimization versus stochastic optimization
In deterministic optimization, the data of a given problem are defined accurately. In many practical problems, the data cannot be known with certainty and are associated with random, noisy behavior. Problems involving such data are called stochastic optimization problems.

(e) Integer programming problem versus real-valued programming problem
If some or all of the design variables of an optimization problem are restricted to take only integer (or discrete) values, the problem is called an integer programming problem. A real-valued programming problem is one in which the values of real variables within an allowed set are systematically chosen to minimize or maximize a real function.

(f) No objective, one objective, and multiple objectives
Feasibility problems are problems in which the goal is to find values for the variables that satisfy the constraints of a model, with no particular objective to optimize; that is, the goal is to find a solution that satisfies the complementarity conditions. Such problems give rise to no-objective optimization problems. A single-objective problem is one in which a single objective function is involved. Many applications in engineering and science involve multiple objectives, where some of the objectives are to be maximized while others are minimized. A set of optimal conditions needs to be established to satisfy all the objectives. Such problems are called multiobjective optimization problems.

(g) Classification based on separability of the functions
Optimization problems can be classified as separable or nonseparable programming problems according to the separability of the objective and constraint functions.

Separable programming problems: In this type of problem, the objective function and the constraints are separable.
A function is said to be separable if it can be expressed as the sum of n single-variable functions f_1(x_1), f_2(x_2), \ldots, f_n(x_n). A separable programming problem can be expressed in standard form as: Find X which minimizes

f(X) = \sum_{i=1}^{n} f_i(x_i)    (1.15)

subject to

g_j(X) = \sum_{i=1}^{n} g_{ij}(x_i) \le b_j, \quad j = 1, 2, \ldots, m

where b_j is a constant.
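Separability is what makes such problems tractable: when the constraints reduce to simple bounds on each variable, each single-variable term can be minimized on its own. A minimal sketch with hypothetical functions:

```python
# Hypothetical separable objective f(X) = f1(x1) + f2(x2) with
# f1(x) = (x - 2)**2, f2(x) = (x + 1)**2, subject only to bounds
# 0 <= x_i <= 3. Each term is minimized independently.
fs = [lambda x: (x - 2) ** 2, lambda x: (x + 1) ** 2]

def minimize_on_grid(f, lo, hi, steps=3001):
    """Crude one-dimensional minimization by dense grid search."""
    pts = [lo + (hi - lo) * k / (steps - 1) for k in range(steps)]
    return min(pts, key=f)

x_star = [minimize_on_grid(f, 0.0, 3.0) for f in fs]
print(x_star)  # f1 is minimized at x1 = 2; f2 at the lower bound x2 = 0
```

Note that f2's unconstrained minimizer (x = -1) lies outside the bounds, so its constrained minimum lands on the boundary, a situation that coordinate-wise reasoning handles naturally for separable problems.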
Nonseparable programming problems: These problems are considerably more difficult to solve than separable problems. Any nonseparable objective function and/or constraint in a nonlinear programming problem can be approximately expressed in terms of separable functions. Moreover, various classes of nonseparable problems, such as nonseparable convex continuous problems and nonseparable quadratic convex continuous problems, can be solved efficiently by using a collection of relevant techniques.
(h) Optimal control problems
An optimal control problem is a mathematical programming problem involving a number of stages, where each stage evolves from the preceding stage in a prescribed manner. It is defined by two types of variables: the control (or design) variables and the state variables. The control variables define the system and govern how one stage evolves into the next. The state variables describe the behavior or status of the system at any stage. The problem is to find a set of control variables such that the total objective function (performance index) over all stages is minimized, subject to a set of constraints on the control and state variables.
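A minimal sketch of such a staged problem, with all dynamics, costs, and control sets hypothetical: a scalar state evolves over three stages, and brute force over the control sequences minimizes a performance index that sums stage costs plus a terminal cost.

```python
from itertools import product

# Hypothetical discrete optimal control problem: state evolves as
# x_{k+1} = x_k + u_k over 3 stages, with controls drawn from a small
# finite set, and a performance index summing stage costs x**2 + u**2
# plus a terminal penalty on the final state.
controls = [-1.0, -0.5, 0.0, 0.5, 1.0]
x0, stages = 2.0, 3

def cost(us):
    x, total = x0, 0.0
    for u in us:
        total += x ** 2 + u ** 2   # stage cost on state and control
        x = x + u                  # state transition to the next stage
    return total + 10.0 * x ** 2   # terminal cost on the final state

# Brute force over all control sequences (only practical for tiny problems;
# dynamic programming exploits the stage structure for larger ones).
best_u = min(product(controls, repeat=stages), key=cost)
print(best_u, cost(best_u))  # (-1.0, -0.5, -0.5) with cost 6.75
```

The optimal sequence drives the state from 2 to 0 in decreasing steps, trading control effort against the heavy terminal penalty, which is exactly the control/state trade-off described above.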
1.4.2 General procedure of solving optimization problems
In the design and operation of an engineering system, it is necessary to obtain a suitable model to represent the system, to choose a suitable objective function to guide the decisions, and to select an appropriate method of optimization. Once the model is selected and the required solution is obtainable, an appropriate method of optimization needs to be chosen to determine the information required in the optimization problem. The following general procedure is used for the analysis and solution of optimization problems.

1. Examine the process and identify the process variables and process characteristics of interest. Make a list of all the variables.
2. Determine the criterion for optimization and define the objective function in terms of the process variables.
3. Develop a valid model of the process or equipment relating the input and output variables and parameters. Define the equality and inequality constraints. Identify the dependent and independent variables and obtain the number of degrees of freedom.
4. If the problem formulation is too large, break it up into manageable parts and simplify the objective function and the model.
5. Apply a suitable optimization method to solve the problem.
6. Determine the optimum solution for the system. Check the sensitivity of the results to changes in the parameters of the problem.
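The six steps above can be sketched on a toy sizing problem; all model equations, parameter values, and names here are hypothetical illustrations, not part of the text.

```python
# Steps 1-2: one design variable d and a cost criterion to minimize.
# Step 3: a hypothetical model in which capital cost grows with d while
# operating cost falls with d, plus simple bound constraints on d.

def total_cost(d, price=4.0):
    return price * d + 50.0 / d  # hypothetical cost model

def feasible(d):
    return 0.5 <= d <= 10.0      # bound constraints

def optimize(price=4.0, steps=10000):
    # Step 5: a crude one-dimensional grid search over the feasible range
    # stands in for a proper optimization method.
    candidates = [0.5 + (10.0 - 0.5) * k / (steps - 1) for k in range(steps)]
    return min((d for d in candidates if feasible(d)),
               key=lambda d: total_cost(d, price))

# Step 6: find the optimum, then check sensitivity by perturbing a
# model parameter and re-solving.
d_star = optimize()
d_pert = optimize(price=4.4)
print(d_star, d_pert)  # the optimum shrinks as the price parameter rises
```

For this cost model the analytic optimum is d* = sqrt(50/price), so the grid result can be checked directly, which mirrors the validation mindset of step 6.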
1.4.3 Bottlenecks in optimization
Many optimization problems in engineering and science are characterized by nonconvexity of the feasible domain or of the objective function, and may involve continuous and/or discrete variables. For problems with well-behaved objective functions and constraints, optimization presents no difficulty. However, for problems involving complicated objective functions and constraints, some optimization procedures may be inappropriate and may fail to provide the desired solution. The following characteristics can cause a failure in obtaining the desired optimal solution.
1. In certain optimization problems, gradients do not exist at every point or at the optimal point. Difference approximations to the gradient may not be useful and may lead to failure. The algorithm may not converge or may converge to a nonoptimal point.
2. The objective function or the constraint functions may be nonlinear functions of the variables. Linear approximation of nonlinear functions may lead to loss of accuracy in the solution.
3. The objective function or the constraint functions may have finite discontinuities in the continuous parameter values. For example, the pressure drop of a fluid flowing at a fixed flow rate through a pipe of fixed length is not a continuous function of pipe diameter, as pipes are normally available only in standard diameters.
4. There are certain stiff problems that are analytically smooth but numerically nonsmooth. Such problems may pose difficulties in obtaining an optimal solution.
5. The objective function and the constraint functions may involve complicated interactions of the variables. One such interaction is the temperature and pressure dependence in the design of a pressure vessel. Such interactions between the variables can prevent the determination of unique values of the variables.
6. The objective function and the constraint functions may exhibit almost flat behavior over some ranges of the variables. Such functions may not be sensitive to certain ranges of values of the variables.
7. For functions exhibiting multiple local optima, the solution obtained in one region may be less acceptable than the solution in another region. The better solution may be reached only by initiating the search for the optimum from a different starting point.
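Item 3 above can be made concrete with a small sketch; the 1/d**5 scaling and the list of standard sizes are hypothetical simplifications. Rounding a requested diameter up to the nearest standard size turns a smooth pressure-drop model into a step function of the design variable:

```python
import bisect

# Hypothetical standard pipe diameters (m) and a simplified pressure-drop
# model in which, at fixed flow rate, the drop scales as 1/d**5.
standard_d = [0.025, 0.040, 0.050, 0.080, 0.100]

def pressure_drop(d, k=1e-9):
    return k / d ** 5  # smooth underlying model

def pressure_drop_standard(d_requested):
    # Round up to the next available standard diameter; the objective
    # actually seen by the optimizer jumps at each standard size.
    i = bisect.bisect_left(standard_d, d_requested)
    i = min(i, len(standard_d) - 1)
    return pressure_drop(standard_d[i])

# A small change in the requested diameter can leave the objective
# unchanged (same pipe is selected) or jump it discontinuously
# (the next standard size is selected):
print(pressure_drop_standard(0.041), pressure_drop_standard(0.049))
print(pressure_drop_standard(0.050), pressure_drop_standard(0.051))
```

Gradient-based methods assume smoothness, so step functions like this one are a typical reason they stall or converge to a nonoptimal point, motivating the discrete and derivative-free methods discussed later in the book.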
1.5 Summary
In this chapter, the basic features of optimization, along with its scope, illustrative examples, and prerequisites, are explained. The basic concepts of optimization are also described in terms of functions, the behavior of functions, and the maxima and minima of functions. Furthermore, the region of search within the constraints, the classification of optimization problems, the general solution procedure, and the obstacles to optimization are illustrated. The features, concepts, and benefits of optimization described in this chapter motivate readers to explore the classical and advanced optimization methods and their applications described in subsequent chapters of this book.