Applied Mathematics and Computation 219 (2013) 8632–8645
Contents lists available at SciVerse ScienceDirect
Applied Mathematics and Computation
journal homepage: www.elsevier.com/locate/amc

Open-loop optimal controller design using variational iteration method

Ahmed Maidi (a), J.P. Corriou (b,*)

(a) Laboratoire de conception et conduite des systèmes de production, Université Mouloud MAMMERI, 15 000 Tizi-Ouzou, Algérie
(b) Laboratoire Réactions et Génie des Procédés, UPR 3349-CNRS, Nancy Université, ENSIC-INPL, 1 rue Grandville, BP 20451, 54001 Nancy Cedex, France
Keywords: Optimal control; Pontryagin's minimum principle; Hamilton–Jacobi; Variational calculus; Lagrange multiplier; Variational iteration method
Abstract. This article presents a design approach for a finite-time open-loop optimal controller based on Pontryagin's minimum principle. The resulting equations constitute a two-point boundary-value problem, which is generally impossible to solve analytically; furthermore, the numerical solution is difficult to obtain because of the coupled nature of the solutions. In this paper, the variational iteration method is adopted to solve Hamilton equations easily, by means of iteration formulas derived from the correction functionals corresponding to Hamilton equations. The proposed approach yields the numerical solution of the optimal control problem, but, depending on the nature of the control problem, an analytical or approximate analytical expression of the optimal control law as a function of time can often be obtained, which is simple to implement. The different possible forms of control law that can be attained following the proposed design approach are illustrated by four application examples. © 2013 Elsevier Inc. All rights reserved.
1. Introduction

Optimal control is the process of finding a control law that minimizes a performance index for a dynamic system over a period of time [1–3]. Optimal control has found applications in many different fields and represents one of the most challenging problems in the context of industrial, medical and economical applications. The enormous advances in computing power have enabled the application of optimal control methods to complex problems [3]. To solve an optimal control problem (OCP), several approaches have been proposed in the literature [3–8]. However, two main approaches can be distinguished, direct and indirect methods [4]. Direct methods are based on collocation or a parametrization of control and possibly state variables [9], or iterative dynamic programming [4,6,10], and the OCP is solved by efficient nonlinear programming, which presents good convergence properties and conveniently handles constraints. Thus, direct methods are purely numerical [3,4,11,6]. Indirect methods are based on the calculus of variations [12], from which Hamilton–Jacobi theory and Pontryagin's minimum (or maximum) principle [13,14] are derived for systems described by continuous state-space models, and dynamic programming based on Bellman's principle of optimality [15,1,16] for systems described by discrete state-space models. This results in a nonlinear two-point boundary-value problem which is difficult to solve. Thus, indirect methods possess an analytical basis, and in some rare cases fairly simple optimal control problems can be solved analytically, for instance when the performance index is quadratic and the dynamic system is linear [11,17].

* Corresponding author. E-mail address: [email protected] (J.P. Corriou).
0096-3003/$ - see front matter © 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.amc.2013.02.075

Furthermore, the presence of active constraints on the inputs or on the states renders the problem more difficult to handle than in
the case of direct methods [18]. Thus, in most cases, indirect methods need extensive numerical calculation together with advanced numerical methods. To implement an optimal controller, two configurations are possible: open-loop optimal control (OLOC) and closed-loop optimal control (CLOC). Following [11], these configurations are equivalent, the difference being related to the form of the control law. In the case of OLOC, the control law is obtained as a time function, whereas in the case of CLOC, the control law is obtained as a state feedback. The implementation of OLOC only requires a digital machine to perform the necessary numerical calculations, generally off-line. Note that CLOC is generally difficult to obtain, except in the case of the infinite-time optimal control problem of a linear system with a quadratic performance index, known as the LQ problem [11]. In the case of the finite-time LQ problem, a matrix differential Riccati equation, i.e., a set of nonlinear differential equations, must be solved, and the solution is generally carried out numerically [19,20]. In addition, the practical implementation of CLOC requires the estimation of the entire state of the dynamic system, as in LQG control, which makes observability of the system a primary question [21]. Among analytical methods developed for dynamic optimization problems, Pontryagin's minimum principle [14] is the most powerful and attractive approach. Nevertheless, the optimal control law is determined by solving a two-point boundary-value problem, given by a set of state and costate relations called Hamilton equations with appropriate boundary conditions, which is difficult to solve analytically or numerically using indirect methods because of the opposite directions of integration for these coupled equations and possible constraints. The shooting method is commonly used to solve this two-point boundary-value problem [10,22].
Its principle consists in finding the unknown initial boundary conditions of the costate variables so that Hamilton equations can be integrated together in the same direction. The main drawback of shooting methods is that convergence is difficult to obtain, because the initial conditions of the costate variables must be guessed [10]. In addition, the numerical integration of Hamilton equations in the same direction is very often unstable [23]. To overcome these difficulties, direct methods are well adapted, even if the accuracy of indirect methods is claimed to be better. The minimum principle remains the most used tool for solving both unconstrained and constrained optimal control problems. Several approaches have been proposed in the literature to deal with optimal control problems with constraints, depending on their nature, using the minimum principle. The main idea of the different proposed approaches consists in converting the original optimal control problem with various constraints [24] into a new one without constraints [18], and then applying the minimum principle. Equality constraints can be taken into account in the criterion to be minimized by use of Lagrange multipliers [25,26], and inequality constraints by use of Karush–Kuhn–Tucker multipliers or slack variables [27,11]. Then the usual Hamilton–Jacobi method or Pontryagin's minimum principle can be used with an augmented number of variables. Graichen and Petit [18] transform a constrained optimal control problem into an unconstrained one by a modification of the system dynamics under a normal form which incorporates the constraints. Graichen et al. [28] transform an inequality-constrained optimal control problem into an equality-constrained one using saturation functions. Numerically, penalty functions [29,26,27,11] and interior-point methods [30] can also be used in the case of inequality constraints. To solve ordinary differential equations, various approximate analytical methods have been proposed [31].
The commonly used methods are the Adomian decomposition method [32], the homotopy perturbation method [33] and the variational iteration method [34]. The comparison of the variational iteration method with the Adomian method [32] and homotopy perturbation [33] reveals that the variational iteration method, which is nothing else than the Picard–Lindelöf iterative procedure [31,35], is more powerful and gives approximations of higher accuracy, and exact solutions if they exist. Thus, in this paper, a new design approach of OLOC using Pontryagin's minimum principle is proposed. The main idea consists in using the variational iteration method [34,36] to get a solution, which can be analytical, approximate analytical or numerical, depending on the nature of the nonlinearities of the optimal control problem. Thus, an approximate optimal control law, as a time function, can be easily obtained. Note that few applications of the variational iteration method to solve optimal control problems have been reported in the literature. The authors of [37] used the variational iteration method to solve the Riccati differential equation in order to find the finite-time CLOC, i.e., under a state feedback form, for a linear system with quadratic performance index, known as the linear quadratic regulator (LQR). The same problem has been studied by Olotu and Adekunle [38]. Using the variational iteration method, Kucuk [39] proposed an active optimal control of the Korteweg–de Vries equation, which minimizes a quadratic functional, by considering the direct control parameterization approach. The main contribution of the present paper consists in using the variational iteration method to solve iteratively the two-point boundary-value problem (Hamilton equations with their boundary conditions); the unknown initial conditions of the costate variables can then be simply determined by solving a set of nonlinear algebraic equations, which makes this approach an interesting alternative to the shooting method.
In addition, although Hamilton equations are only necessary optimality conditions, the proposed approach provides the global solution. The choice of the variational iteration method is motivated by the fact that this method provides the solution of the differential equations, even with time-variant parameters, in terms of a rapidly convergent infinite series, using correction functionals. Thus, an iterative procedure makes it possible to overcome the instability of the direct numerical integration of Hamilton equations. The design method is illustrated by application examples that show the different possible forms (analytical, approximate analytical and numerical) of the optimal control law that can be obtained following the proposed design approach. The structure of the article is as follows. Section 2 is devoted to Pontryagin's minimum principle used for solving the optimal control problem. Section 3 is dedicated to the proposed design approach to solve an open-loop optimal control problem based on the variational iteration method. The convergence of the method is demonstrated in this section. Section 4 illustrates the proposed design approach by some application examples. Finally, a conclusion ends the article.
2. Minimum principle

An optimal control problem is stated as follows. Find the optimal control law u(t) : [0, t_f] \to \mathbb{R} that minimizes the performance index

J = \psi(x(t_f), t_f) + \int_0^{t_f} \phi(x(t), u(t), t)\,dt    (1)

subject to

\dot{x}(t) = f(x(t), u(t), t),    (2)

x(0) = x_0    (3)

and terminal constraints

\ell(x(t_f), t_f) = 0,    (4)
where x(t) \in \mathbb{R}^n is the state vector (n is the system order) and u(t) \in \mathbb{R} is the control variable. x(t) and u(t) are measurable functions defined on [0, t_f] \subset \mathbb{R}^+. \psi : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R} is a terminal cost function, \phi : \mathbb{R}^n \times \mathbb{R} \times \mathbb{R} \to \mathbb{R} is the functional, and \ell : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n and f : \mathbb{R}^n \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}^n are smooth vector fields. \psi, \phi, \ell and f are C^1. x(0) is the known initial state, t_f is the fixed final time, and the final state x(t_f) can be fixed or free. Note that the terminal constraints (4) can be incorporated [3] in the algebraic part of the criterion (1), rewritten as the augmented criterion
\psi(x(t_f), t_f) + \nu^T \ell(x(t_f), t_f) + \int_0^{t_f} \phi(x(t), u(t), t)\,dt,    (5)

where \nu are Lagrange multipliers that should be calculated in order to satisfy the terminal constraints (4). The Hamiltonian H(x(t), p(t), u(t), t) is defined as

H(x(t), p(t), u(t), t) = \phi(x(t), u(t), t) + p^T(t)\, f(x(t), u(t), t),    (6)
where p(t) is the costate or adjoint vector. The following Hamilton equations are necessary conditions:

\dot{x}(t) = \frac{\partial H(x(t), p(t), u(t), t)}{\partial p(t)},    (7)

\dot{p}(t) = -\frac{\partial H(x(t), p(t), u(t), t)}{\partial x(t)}.    (8)
The first Eq. (7) is the system model (2), whereas the second equation is called the costate or adjoint equation. Together with (3), they constitute a two-point boundary-value problem with the transversality boundary condition at the fixed final time t_f, written as [26]

\left[ \frac{\partial \psi(x(t_f), t_f)}{\partial x(t_f)} + \left( \frac{\partial \ell(x(t_f), t_f)}{\partial x(t_f)} \right)^T \nu - p(t_f) \right]^T \delta x_f = 0.    (9)
According to Pontryagin's minimum principle, if u^*(t) is a solution of the optimal control problem (1)–(3), then

u^*(t) = \arg\min_{u(t)} H(x(t), p(t), u(t), t).    (10)

Consequently, in the absence of constraints other than (2)–(4),

\frac{\partial H(x(t), p(t), u(t), t)}{\partial u(t)} = 0 \;\Rightarrow\; u^*(t) = U(x(t), p(t), t),    (11)
which is referred to as the weak Pontryagin's minimum principle. The solution of the optimal control problem exists if

\frac{\partial^2 H(x^*(t), p^*(t), u^*(t), t)}{\partial u^2(t)} > 0,    (12)
where u^*(t), p^*(t) and x^*(t) satisfy the Hamilton conditions (7) and (8). The optimal value of the Hamiltonian is

H^*(x(t), p(t), t) = H(x(t), p(t), u(t), t)\big|_{u(t) = U(x(t), p(t), t)}.    (13)

The solution of Eqs. (7) and (8), with boundary conditions (3) and (9), gives the optimal trajectories x^*(t) and p^*(t). Then, by substituting these trajectories in the optimal control law (11), the open-loop optimal control u^*(t) results.
The solution of Hamilton equations (7) and (8) requires 2n boundary conditions. The first n boundary conditions are related to the initial state x(0), which is always known, and the n other boundary conditions are related to the final state. As the final time t_f is fixed, if the final state x(t_f) is specified, the n other boundary conditions are

x(t_f) = x_f;    (14)

otherwise, the final state x(t_f) being free, the remaining n boundary conditions are given by the boundary condition (9) at t_f, yielding the terminal constraint on the costate variable p(t_f) as

p(t_f) = \frac{\partial \psi(x(t_f), t_f)}{\partial x(t_f)} + \left( \frac{\partial \ell(x(t_f), t_f)}{\partial x(t_f)} \right)^T \nu.    (15)
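The symbolic steps of this section (building H from (6), applying the weak minimum principle (11), and deriving (7) and (8)) can be sketched with SymPy. The scalar problem below, with \phi = (x^2 + u^2)/2 and f = -x + u, is a hypothetical illustration of ours, not one of the paper's examples:

```python
import sympy as sp

t = sp.symbols('t')
u = sp.symbols('u')
x, p = sp.Function('x')(t), sp.Function('p')(t)

# hypothetical scalar problem: phi = (x^2 + u^2)/2, f = -x + u
phi = (x**2 + u**2) / 2
f = -x + u

H = phi + p * f                           # Hamiltonian (6)
u_star = sp.solve(sp.diff(H, u), u)[0]    # weak minimum principle (11): dH/du = 0
H_star = H.subs(u, u_star)                # optimal Hamiltonian (13)

x_dot = sp.diff(H_star, p)                # state equation (7)
p_dot = -sp.diff(H_star, x)               # costate equation (8)
print(u_star)                             # -p(t)
```

Note that differentiating H^* after substituting u^* gives the same Hamilton equations as differentiating H at fixed u, because \partial H / \partial u = 0 along u^*.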
Note that the analytical solution of Hamilton equations (7) and (8) can be obtained only for some special types of optimal control problems. The numerical solution of this two-point boundary-value problem is difficult because of the coupled nature of the solutions x(t) and p(t) [11]: the state x(t) has to be solved by forward integration of (2) starting from its initial condition x(0), whereas the costate p(t) has to be solved by backward integration of (8) starting from its final condition p(t_f), since its initial condition p(0) is unknown. When a guessed p(0) is used for forward integration of (8), this corresponds to the shooting method [10,22]. In the present work, the variational iteration method is adopted to solve easily the two-point boundary-value problem defined by Hamilton equations (7) and (8), which makes it possible to obtain the optimal control law u^*(t).

3. Proposed design approach

The variational iteration method (VIM), developed rather recently, has been successfully applied to solve both ordinary and partial differential equations, even with symbolic coefficients [34,36]. The method gives rapidly convergent successive approximations of the exact solution if such a solution exists, or an approximate solution of high accuracy even if only a few iterations are used [34,36,40,41]. Compared to numerical methods, the VIM provides analytic and numerical solutions without linearization or discretization, and the obtained result is not affected by rounding errors. The VIM constitutes an efficient mathematical tool and an interesting alternative approach to solve nonlinear coupled ordinary differential equations [40,41]. In the following, the VIM is used to solve the OLOC problem, that is, to solve Hamilton equations (7) and (8). According to the VIM principle [34,36,40,41,31], the correction functionals for Hamilton equations (7) and (8) are
x^{(N+1)}(t) = x^{(N)}(t) + \int_0^t \lambda_x(\xi) \left[ \dot{x}^{(N)}(\xi) - \nabla_{p(\xi)} H\left( x^{(N)}(\xi), p^{(N)}(\xi), \xi \right) \right] d\xi,    (16)

p^{(N+1)}(t) = p^{(N)}(t) + \int_0^t \lambda_p(\xi) \left[ \dot{p}^{(N)}(\xi) + \nabla_{x(\xi)} H\left( x^{(N)}(\xi), p^{(N)}(\xi), \xi \right) \right] d\xi,    (17)

with \lambda_x(\xi) = \mathrm{diag}(\lambda_{x_1}(\xi), \ldots, \lambda_{x_n}(\xi)) and \lambda_p(\xi) = \mathrm{diag}(\lambda_{p_1}(\xi), \ldots, \lambda_{p_n}(\xi)), where the \lambda_{x_i}(\xi) and \lambda_{p_i}(\xi) (i = 1, \ldots, n) are the Lagrange multipliers, which can be identified using the stationary conditions from variational theory, as explained in the following. Now set
w(t) = \begin{bmatrix} x(t) \\ p(t) \end{bmatrix}, \qquad \lambda(\xi) = \begin{bmatrix} \lambda_x(\xi) & 0_{n \times n} \\ 0_{n \times n} & \lambda_p(\xi) \end{bmatrix}, \qquad \nabla H(w(\xi), \xi) = \begin{bmatrix} \nabla_{p(\xi)} H \\ -\nabla_{x(\xi)} H \end{bmatrix}.    (18)
The term \nabla H can be written as the sum of two contributions, linear and nonlinear,

\nabla H(w(\xi), \xi) = L\, w(\xi) + N(w(\xi), \xi),    (19)

and the matrix L can be decomposed as L = A + B, with A = \mathrm{diag}(L_{11}, \ldots, L_{nn}). Hence,

\nabla H(w(\xi), \xi) = A\, w(\xi) + B\, w(\xi) + N(w(\xi), \xi).    (20)
Consequently, the correction functionals (16) and (17) can be written as

w^{(N+1)}(t) = w^{(N)}(t) + \int_0^t \lambda(\xi) \left[ \dot{w}^{(N)}(\xi) - A\, w^{(N)}(\xi) - B\, \tilde{w}^{(N)}(\xi) - N\left( \tilde{w}^{(N)}(\xi), \xi \right) \right] d\xi,    (21)
where \tilde{w}(t) is a restricted variation [42], i.e. \delta \tilde{w}(t) = 0. Taking the variation of Eq. (21) gives

\delta w^{(N+1)}(t) = \delta w^{(N)}(t) + \int_0^t \lambda(\xi) \left[ \delta \dot{w}^{(N)}(\xi) - A\, \delta w^{(N)}(\xi) \right] d\xi    (22)

= \delta w^{(N)}(t) + \left[ \lambda(\xi)\, \delta w^{(N)}(\xi) \right]_0^t - \int_0^t \left[ \dot{\lambda}(\xi) + \lambda(\xi) A \right] \delta w^{(N)}(\xi)\, d\xi.    (23)

As the variation \delta w^{(N)}(0) = 0, it follows that

\delta w^{(N+1)}(t) = \left[ I_{n \times n} + \lambda(\xi)\big|_{\xi = t} \right] \delta w^{(N)}(t) - \int_0^t \left[ \dot{\lambda}(\xi) + \lambda(\xi) A \right] \delta w^{(N)}(\xi)\, d\xi.    (24)
Considering the requirement of a stationary correction functional, i.e. \delta w^{(N+1)}(t) = 0 whatever the variation \delta w^{(N)}(t), the following stationary conditions result:

\dot{\lambda}(\xi) + \lambda(\xi) A = 0,    (25)

I_{n \times n} + \lambda(\xi)\big|_{\xi = t} = 0,    (26)

which gives

\lambda(\xi) = -e^{(t - \xi) A}    (27)
and the following iteration formula results:

w^{(N+1)}(t) = w^{(N)}(t) - \int_0^t e^{(t - \xi) A} \left[ \dot{w}^{(N)}(\xi) - \nabla H\left( w^{(N)}(\xi), \xi \right) \right] d\xi.    (28)
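As a quick sanity check of ours (not from the paper), the multiplier (27) makes the iteration (28) reproduce the exact solution of the purely linear diagonal part in a single correction. For a hypothetical scalar case \dot{w} = a w with a = 3 and w(0) = 2, initialized with w^{(0)}(t) = w(0):

```python
import sympy as sp

t, xi = sp.symbols('t xi')
a, c = sp.Integer(3), sp.Integer(2)     # hypothetical data: w' = 3 w, w(0) = 2

w0 = c                                  # initialization w^(0)(t) = w(0)
lam = -sp.exp((t - xi) * a)             # Lagrange multiplier (27), scalar case
residual = (sp.diff(w0, t) - a * w0).subs(t, xi)
w1 = sp.simplify(w0 + sp.integrate(lam * residual, (xi, 0, t)))   # iteration (28)
print(w1)                               # 2*exp(3*t), the exact solution
```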
Note that, as demonstrated in the following subsection, to guarantee the convergence of the variational iteration method when applied to solve Hamilton equations (7) and (8), the vector function \nabla H(w^{(N)}(\xi), \xi) must be Lipschitz-continuous.

3.1. Convergence of the VIM for solving Hamilton equations

The convergence of the variational iteration method has been thoroughly investigated in the literature [43,44]. Here, it is studied in the framework of the solution of Hamilton equations. The exact solution w(t) of Hamilton equations (7) and (8) verifies

w(t) = w(t) - \int_0^t e^{(t - \xi) A} \left[ \dot{w}(\xi) - \nabla H(w(\xi), \xi) \right] d\xi.    (29)
Introduce the error function

e^{(N+1)}(t) = w^{(N+1)}(t) - w(t);

hence, from (28) and (29), it follows that

e^{(N+1)}(t) = e^{(N)}(t) - \int_0^t e^{(t - \xi) A} \left[ \dot{e}^{(N)}(\xi) - \left( \nabla H(w^{(N)}(\xi), \xi) - \nabla H(w(\xi), \xi) \right) \right] d\xi    (30)

= e^{(N)}(t) - \left[ e^{(t - \xi) A} e^{(N)}(\xi) \right]_{\xi = 0}^{\xi = t} - \int_0^t e^{(t - \xi) A} A\, e^{(N)}(\xi)\, d\xi + \int_0^t e^{(t - \xi) A} \left[ \nabla H(w^{(N)}(\xi), \xi) - \nabla H(w(\xi), \xi) \right] d\xi.    (31)
Now, as the initialization of the VIM is w^{(0)}(t) = w(0), we have w^{(N)}(0) = w(0), that is, e^{(N)}(0) = 0; therefore

e^{(N+1)}(t) = -\int_0^t e^{(t - \xi) A} A\, e^{(N)}(\xi)\, d\xi + \int_0^t e^{(t - \xi) A} \left[ \nabla H(w^{(N)}(\xi), \xi) - \nabla H(w(\xi), \xi) \right] d\xi    (32)
and

\left\| e^{(N+1)}(t) \right\|_2 \le \|A\|_2 \int_0^t \left\| e^{(t - \xi) A} \right\|_2 \left\| e^{(N)}(\xi) \right\|_2 d\xi + \int_0^t \left\| e^{(t - \xi) A} \right\|_2 \left\| \nabla H(w^{(N)}(\xi), \xi) - \nabla H(w(\xi), \xi) \right\|_2 d\xi.    (33)
Since \nabla H(w^{(N)}(\xi), \xi) is a Lipschitz-continuous function,

\left\| e^{(N+1)}(t) \right\|_2 \le \|A\|_2 \int_0^t \left\| e^{(t - \xi) A} \right\|_2 \left\| e^{(N)}(\xi) \right\|_2 d\xi + \int_0^t \left\| e^{(t - \xi) A} \right\|_2\, \gamma \left\| w^{(N)}(\xi) - w(\xi) \right\|_2 d\xi
= \left( \|A\|_2 + \gamma \right) \int_0^t \left\| e^{(t - \xi) A} \right\|_2 \left\| e^{(N)}(\xi) \right\|_2 d\xi,    (34)
where \gamma is the Lipschitz constant of \nabla H. For 0 \le \xi \le t \le t_f, we have

\left\| e^{(t - \xi) A} \right\|_2 \le e^{\left\| (t - \xi) A \right\|_2} \le e^{(t - \xi)\, \|A\|_2} \le e^{t_f \max_i |a_{ii}|},    (35)

hence

\left\| e^{(N+1)}(t) \right\|_2 \le M \int_0^t \left\| e^{(N)}(\xi) \right\|_2 d\xi,    (36)
with M = e^{t_f \max_i |a_{ii}|} \left( \|A\|_2 + \gamma \right). Therefore,

\left\| e^{(1)}(t) \right\|_2 \le M \int_0^t \left\| e^{(0)}(\xi) \right\|_2 d\xi \le M \max_{\xi_0 \in [0, t_f]} \left\| e^{(0)}(\xi_0) \right\|_2\, t,    (37)

\left\| e^{(2)}(t) \right\|_2 \le M \int_0^t \left\| e^{(1)}(\xi) \right\|_2 d\xi \le M^2 \max_{\xi_0 \in [0, t_f]} \left\| e^{(0)}(\xi_0) \right\|_2\, \frac{t^2}{2!},    (38)

\left\| e^{(3)}(t) \right\|_2 \le M \int_0^t \left\| e^{(2)}(\xi) \right\|_2 d\xi \le M^3 \max_{\xi_0 \in [0, t_f]} \left\| e^{(0)}(\xi_0) \right\|_2\, \frac{t^3}{3!}.    (39)

From this development, it follows that

\left\| e^{(N)}(t) \right\|_2 \le \max_{\xi_0 \in [0, t_f]} \left\| e^{(0)}(\xi_0) \right\|_2\, \frac{(M t)^N}{N!} \le \max_{\xi_0 \in [0, t_f]} \left\| e^{(0)}(\xi_0) \right\|_2\, \frac{(M t_f)^N}{N!}.    (40)
As \lim_{N \to \infty} x^N / N! = 0 for any fixed x > 0, it follows that \| e^{(N)}(t) \|_2 \to 0 as N \to \infty, i.e., the sequence w^{(1)}, w^{(2)}, \ldots converges to the exact solution w(t).

3.2. Solving the optimal control problem via VIM

Under the assumption that \nabla H(w^{(N)}(\xi), \xi) is a Lipschitz-continuous function, the successive approximations x^{(N+1)}(t) and p^{(N+1)}(t) of the exact solutions x(t) and p(t) are readily obtained by the VIM from any selected initialization functions x^{(0)}(t) and p^{(0)}(t). Thus, in order to simplify the calculations, it is proposed to take
x^{(0)}(t) = x(0) \quad \text{and} \quad p^{(0)}(t) = p(0),    (41)

since the initial state x(0) is always known. With respect to the choice of p^{(0)}(t), the initial costate vector p(0) is an unknown vector, denoted in the following as \theta, of dimension n. As such, the integration of Hamilton equations with these initial conditions becomes a well-posed Cauchy problem. Nevertheless, using the correction functionals (16) and (17), the solutions x^{(N)}(t) and p^{(N)}(t) can be computed by considering p(0) as a vector of unknown constants. The actual value of p(0) is determined, after a convergence process, from the satisfaction of either the boundary condition (14) or (15). Consequently, the solutions are given by

x(t) = \lim_{N \to \infty} x^{(N)}(t) \quad \text{and} \quad p(t) = \lim_{N \to \infty} p^{(N)}(t).    (42)
Notice that, by using the variational iteration method, the solution is easily computed using recursive formulas. Thus, if the exact solutions x(t) and p(t) exist, the sequences (16) and (17) converge toward these solutions; in this case, the optimal control law is given by (11). If the exact solutions cannot be reached, approximate solutions x^{(N)}(t) and p^{(N)}(t) of high accuracy can be obtained by considering an appropriate value of N, denoted by N^*. The value N^* is taken as the smallest value of N above which the convergence of the approximate solutions is reached within a threshold, which can be estimated from (40) as

\max_{\xi_0 \in [0, t_f]} \left\| e^{(0)}(\xi_0) \right\|_2\, \frac{(M t_f)^{N^*}}{N^*!}.    (43)
Once the approximate solutions x^{(N^*)}(t) and p^{(N^*)}(t) are determined, the optimal control law is given by

u^*(t) \approx u^{(N^*)}(t) = U\left( x^{(N^*)}(t), p^{(N^*)}(t), t \right).    (44)
For the choice of N^*, it is proposed to evaluate the performance index (1) at each iteration N, by considering the approximate solutions x^{(N)}(t) and u^{(N)}(t), i.e.

J^{(N)} = \psi\left( x^{(N)}(t_f), t_f \right) + \int_0^{t_f} \phi\left( x^{(N)}(t), u^{(N)}(t), t \right) dt,    (45)
and, after the iteration N = N^*, the variational iteration method converges to an approximate solution, so the performance index (45) also converges toward a given limit for any value of N \ge N^*. Consequently, all values N \ge N^* are appropriate, but to obtain a simple optimal control expression, one can simply take N = N^*. This is explained by the fact that, after the iteration N = N^*, the contributions of the integral parts in the correction functionals (16) and (17) become insignificant. Thus, by neglecting these insignificant contributions, the solutions can be approximated as

x(t) \approx x^{(N^*)}(t) \quad \text{and} \quad p(t) \approx p^{(N^*)}(t).    (46)

Notice that, when exact solutions exist, these contributions tend to zero after the iteration N = N^*. Consequently, in that case,

x^{(k+1)}(t) = x^{(k)}(t) \quad \text{and} \quad p^{(k+1)}(t) = p^{(k)}(t) \quad \forall k \ge N^*.    (47)
This proposed design approach can be applied both to singular and to nonsingular optimal control problems. For a nonsingular control problem, the different steps of the proposed design approach can be summarized as:

Step 1. Define the Hamiltonian H(x(t), p(t), u(t), t) according to (6).
Step 2. Derive Hamilton equations (7) and (8), and the boundary conditions.
Step 3. Determine the expression of the optimal control law u^*(t) as explained by (11).
Step 4. Define H^*(x(t), p(t), t) and Hamilton equations by substituting the obtained optimal control law u^*(t), respectively, in (6)–(8).
Step 5. Define the correction functionals (16) and (17).
Step 6. Identify the Lagrange multiplier vectors \lambda_x(\xi) and \lambda_p(\xi).
Step 7. Set N = 0, x^{(0)}(t) = x(0), and p^{(0)}(t) = \theta, where \theta is an unknown constant vector to be determined by an iterative procedure.
Step 8. Calculate x^{(N+1)}(t) and p^{(N+1)}(t) using (16) and (17). The integral parts involved can be evaluated either analytically or numerically, depending on the complexity of Hamilton equations (7) and (8).
Step 9. Determine \theta = p^{(N)}(0) by imposing either condition (14) or (15), depending on the nature of the final state x(t_f), which can be fixed or free. If there is no solution for \theta, go to Step 8; else go to Step 10.
Step 10. Evaluate the performance index J^{(N)}, given by (45), for all possible solutions of \theta, and keep the solution that provides the best performance index and verifies the optimality condition (12).
Step 11. If |J^{(N)} - J^{(N-1)}| \ge \epsilon (where \epsilon > 0 is a desired threshold), set N = N + 1 and go to Step 8; else go to Step 12.
Step 12. Set N^* = N and deduce the open-loop optimal control law as in (44).
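A minimal end-to-end sketch of Steps 7 to 12 (ours, on a hypothetical scalar problem): for min (1/2) \int_0^1 (x^2 + u^2)\,dt with \dot{x} = u, x(0) = 1 and free final state, the weak principle gives u^* = -p, \dot{x} = -p, \dot{p} = -x, and the multipliers reduce to \lambda = -1 since the diagonal part A is zero. The unknown \theta = p(0) is then fixed by the free-final-state condition p(1) = 0, i.e. (15) with \psi = \ell = 0:

```python
import sympy as sp

t, xi, theta = sp.symbols('t xi theta')

# Step 7: x^(0)(t) = x(0) = 1, p^(0)(t) = theta (unknown constant)
x, p = sp.Integer(1), theta

# Step 8: correction functionals with lambda = -1 (Hamilton eqs: x' = -p, p' = -x)
for N in range(12):
    x_new = x - sp.integrate((sp.diff(x, t) + p).subs(t, xi), (xi, 0, t))
    p_new = p - sp.integrate((sp.diff(p, t) + x).subs(t, xi), (xi, 0, t))
    x, p = sp.expand(x_new), sp.expand(p_new)

# Step 9: free final state, so impose p(1) = 0 and solve for theta
theta_star = sp.solve(sp.Eq(p.subs(t, 1), 0), theta)[0]
print(float(theta_star))   # ≈ 0.761594, i.e. tanh(1), the exact p(0)
```

Substituting the converged \theta into u^{(N^*)}(t) = -p^{(N^*)}(t) then gives the open-loop law of Step 12; here twelve iterations already reproduce the exact \theta = \tanh(1) to about ten digits.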
4. Application examples

In this section, examples of optimal control are considered to illustrate the proposed approach. The first example refers to a case where the proposed design approach leads to the exact solution. The second example concerns a singular optimal control problem with inequality constraints. In the third example, the approach provides an approximate analytical solution, both in the case of a free final state and in the case of a terminal constraint. The last example shows that, when it is difficult to evaluate analytically the integral parts of the correction functionals, a numerical solution can be obtained. The calculations are performed symbolically using Maple for the first three problems and numerically using Matlab for the last one.

4.1. Problem 1: case of exact analytical solution

Consider the optimal control of a second-order system with a fixed final state:
\min_{u(t)} J = \frac{1}{2} \int_0^2 \left[ x_1(t) + u(t) \right]^2 dt    (48)

subject to:

\dot{x}_1(t) = x_2(t),    (49)

\dot{x}_2(t) = x_1(t) + u(t),    (50)

x(0) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad x(2) = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.    (51)
For this example, condition (12) holds, and the expression of the control law is

u^*(t) = -x_1(t) - p_2(t).    (52)
Hamilton equations (7) and (8) corresponding to this optimal control problem can be easily derived, and their correction functionals are

x_1^{(N+1)}(t) = x_1^{(N)}(t) - \int_0^t \left[ \dot{x}_1^{(N)}(\xi) - x_2^{(N)}(\xi) \right] d\xi,    (53)

x_2^{(N+1)}(t) = x_2^{(N)}(t) - \int_0^t \left[ \dot{x}_2^{(N)}(\xi) + p_2^{(N)}(\xi) \right] d\xi,    (54)

p_1^{(N+1)}(t) = p_1^{(N)}(t) - \int_0^t \dot{p}_1^{(N)}(\xi)\, d\xi,    (55)

p_2^{(N+1)}(t) = p_2^{(N)}(t) - \int_0^t \left[ \dot{p}_2^{(N)}(\xi) + p_1^{(N)}(\xi) \right] d\xi,    (56)
with

x^{(0)}(t) = x(0) = [0, 0]^T, \qquad x^{(N)}(t_f) = x(t_f) = [2, 1]^T, \qquad p^{(0)}(t) = \theta = [\theta_1, \theta_2]^T \; (\text{unknown}).

The iteration formulas give, for N = 1,

x_1^{(1)}(t) = 0,    (57)

x_2^{(1)}(t) = -\theta_2\, t,    (58)

p_1^{(1)}(t) = \theta_1,    (59)

p_2^{(1)}(t) = -\theta_1\, t + \theta_2.    (60)
The solutions x_1^{(1)}(t) and x_2^{(1)}(t) clearly do not satisfy the final condition on x(t_f), thus a new iteration is necessary, which yields

x_1^{(2)}(t) = -\frac{\theta_2}{2} t^2,    (61)

x_2^{(2)}(t) = \frac{\theta_1}{2} t^2 - \theta_2\, t,    (62)
p_1^{(2)}(t) = p_1^{(1)}(t) = \theta_1,    (63)

p_2^{(2)}(t) = p_2^{(1)}(t) = -\theta_1\, t + \theta_2.    (64)

The final conditions on x(t_f) can now be satisfied:

x_1^{(2)}(2) = -2\,\theta_2 = 2,    (65)

x_2^{(2)}(2) = 2\,\theta_1 - 2\,\theta_2 = 1,    (66)

which lead to \theta_1 = -1/2 and \theta_2 = -1, and

x_1^{(2)}(t) = \frac{t^2}{2}, \qquad u^{(2)}(t) = -\frac{t^2}{2} - \frac{t}{2} + 1, \qquad J^{(2)} = \frac{1}{2} \int_0^2 \left[ x_1^{(2)}(t) + u^{(2)}(t) \right]^2 dt = 0.3333,
and for N = 3 and N = 4:

x_1^{(4)}(t) = x_1^{(3)}(t) = \frac{\theta_1}{6} t^3 - \frac{\theta_2}{2} t^2,    (67)

x_2^{(4)}(t) = x_2^{(3)}(t) = \frac{\theta_1}{2} t^2 - \theta_2\, t,    (68)

p_1^{(4)}(t) = p_1^{(3)}(t) = \theta_1,    (69)

p_2^{(4)}(t) = p_2^{(3)}(t) = -\theta_1\, t + \theta_2.    (70)

Imposing the final condition on x^{(3)}(t_f) yields

x_1^{(3)}(2) = \frac{4}{3}\,\theta_1 - 2\,\theta_2 = 2,    (71)

x_2^{(3)}(2) = 2\,\theta_1 - 2\,\theta_2 = 1,    (72)

which gives \theta_1 = -3/2 and \theta_2 = -2, and

x_1^{(4)}(t) = x_1^{(3)}(t) = -\frac{t^3}{4} + t^2,
u^{(4)}(t) = u^{(3)}(t) = \frac{t^3}{4} - t^2 - \frac{3t}{2} + 2,

J^{(4)} = J^{(3)} = \frac{1}{2} \int_0^2 \left[ x_1^{(3)}(t) + u^{(3)}(t) \right]^2 dt = 1.
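This solution can be cross-checked numerically (our verification, not part of the paper): integrating (49) and (50) under u^{(3)}(t) with an RK4 scheme, while accumulating the criterion as an extra state, should return x(2) = [2, 1]^T and J = 1.

```python
import numpy as np

def u_star(t):                      # u^(3)(t) = t^3/4 - t^2 - 3t/2 + 2
    return t**3/4 - t**2 - 1.5*t + 2.0

def rhs(t, z):
    x1, x2, J = z
    u = u_star(t)
    # system (49)-(50), with J = 1/2 * integral of (x1 + u)^2 as a third state
    return np.array([x2, x1 + u, 0.5 * (x1 + u)**2])

z, t = np.zeros(3), 0.0             # x(0) = [0, 0], J(0) = 0
n = 4000
h = 2.0 / n
for _ in range(n):                  # classical RK4 steps
    k1 = rhs(t, z)
    k2 = rhs(t + h/2, z + h/2 * k1)
    k3 = rhs(t + h/2, z + h/2 * k2)
    k4 = rhs(t + h, z + h * k3)
    z += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h
print(z)                            # ≈ [2.0, 1.0, 1.0]
```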
The previous results show that, above N = N^* = 3, the performance index becomes constant, and the variational iteration method converges to the exact solution x^*(t) = x^{(3)}(t). Thus, the optimal control is u^*(t) = u^{(3)}(t). The optimal value of the objective function results as J^* = J^{(3)} = 1. At convergence, the value of the costate vector at t = 0 is p(0) = \theta = [-1.5, -2]^T.

4.2. Problem 2: case of inequality constraints and singular control

Consider the following optimal control problem [24]:
\min_{u(t)} J = \int_0^3 x_1(t)\, dt,    (73)

subject to

\dot{x}_1(t) = u(t), \qquad x_1(0) = x_1(3) = 1,    (74)

-1 \le u(t) \le +1,    (75)

x_1(t) \ge 0.    (76)
By introducing a slack variable x_2(t) [27,11], the inequality constraint (76) is transformed into the equality

x_1(t) - \frac{1}{2} x_2^2(t) = 0,    (77)
which yields, by differentiation,

\dot{x}_1(t) - x_2(t)\, \dot{x}_2(t) = 0,    (78)

or: u(t) = x_2(t)\, \dot{x}_2(t).    (79)

A new control variable v(t) = \dot{x}_2(t) is introduced, so that the optimal control problem is converted into

\min_{u(t)} J = \int_0^3 x_1(t)\, dt    (80)

subject to

\dot{x}_1(t) = x_2(t)\, v(t), \qquad x_1(0) = x_1(3) = 1,    (81)

\dot{x}_2(t) = v(t), \qquad x_2(0) = \sqrt{2},    (82)

-1 \le u(t) \le +1.    (83)
The initial condition x_2(0) results from

x_1(0) - \frac{1}{2} x_2^2(0) = 0 \;\Rightarrow\; x_2(0) = \pm\sqrt{2\, x_1(0)} = \pm\sqrt{2},    (84)

where it should be noticed that the sign has no influence on the trajectories and on the criterion. For this singular optimal control problem, the correction functionals corresponding to Hamilton equations are:
x_1^{(N+1)}(t) = x_1^{(N)}(t) - \int_0^t \left[ \dot{x}_1^{(N)}(\xi) - x_2^{(N)}(\xi)\, v(\xi) \right] d\xi,    (85)

x_2^{(N+1)}(t) = x_2^{(N)}(t) - \int_0^t \left[ \dot{x}_2^{(N)}(\xi) - v(\xi) \right] d\xi,    (86)

p_1^{(N+1)}(t) = p_1^{(N)}(t) - \int_0^t \left[ \dot{p}_1^{(N)}(\xi) + 1 \right] d\xi,    (87)

p_2^{(N+1)}(t) = p_2^{(N)}(t) - \int_0^t \left[ \dot{p}_2^{(N)}(\xi) + p_1^{(N)}(\xi)\, v(\xi) \right] d\xi,    (88)

where v(\xi) = -M\,\eta with \eta = \mathrm{sgn}\left( p_1(\xi)\, x_2(\xi) + p_2(\xi) \right) and M > 0 to be determined. To determine the control law following the proposed approach, assume that \eta is constant. The VIM converges after two iterations (N^* = 2) and yields the following exact solutions:
x_1(t) = 1 - \eta\, M \sqrt{2}\, t + \frac{1}{2} \eta^2 M^2 t^2,    (89)

x_2(t) = \sqrt{2} - \eta\, M\, t,    (90)

p_1(t) = \theta_1 - t,    (91)

p_2(t) = \theta_2 + \eta\, M \left( \theta_1\, t - \frac{1}{2} t^2 \right).    (92)

Imposing x_1(3) = 1 gives M = 2\sqrt{2}/(3\,\eta), and, as M > 0, \eta must be equal to 1. In addition, x_2(3) is free, thus p_2(3) = 0, which yields \theta_2 = \sqrt{2}\,(3 - 2\,\theta_1). As, at t = t_f = 3, \eta = 1 = \mathrm{sgn}\left( (\theta_1 - 3)(\sqrt{2} - 3M) \right), consequently \theta_1 < 3, and the unconstrained optimal control law is

u^*(t) = -\eta\, M \left( \sqrt{2} - \eta\, M\, t \right)    (93)
and, taking into account the constraint on u(t), the constrained optimal control law is

u^*(t) = \begin{cases} -1 & \text{if } -\eta M (\sqrt{2} - \eta M t) < -1, \\ -\eta M (\sqrt{2} - \eta M t) & \text{if } -1 \le -\eta M (\sqrt{2} - \eta M t) \le +1, \\ +1 & \text{if } -\eta M (\sqrt{2} - \eta M t) > +1. \end{cases}    (94)
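For reference, the classical solution of this state-constrained problem reported in [24] is bang-singular-bang: u = -1 on [0, 1], u = 0 on [1, 2] (where x_1 rides the constraint x_1 = 0), and u = +1 on [2, 3]. A quick numerical check of ours confirms that it gives J = 1:

```python
# piecewise trajectory of the bang-singular-bang solution of [24]
def x1(t):
    if t <= 1.0:
        return 1.0 - t      # u = -1
    if t <= 2.0:
        return 0.0          # singular arc on the constraint x1 = 0
    return t - 2.0          # u = +1

n = 30000                   # midpoint rule for J = integral of x1 over [0, 3]
h = 3.0 / n
J = sum(x1((k + 0.5) * h) for k in range(n)) * h
print(J)                    # ≈ 1.0
```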
The optimal value of the performance index is J^* = 1, as in [24], although the control law expressions are different. The obtained optimal trajectories are plotted in Fig. 1.

4.3. Problem 3: case of an approximate analytical solution

(3a) Consider a single-input second-order system with a free final state:
\min_{u(t)} J = \frac{1}{2} \int_0^{\pi/2} \left[ 4\, x_2^2(t) + u^2(t) \right] dt,    (95)

subject to

\dot{x}_1(t) = u(t), \qquad x_1(0) = 2,    (96)

\dot{x}_2(t) = x_1(t), \qquad x_2(0) = 1.    (97)
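Since this problem is linear-quadratic, the two-point boundary-value problem can be cross-checked independently (a sketch of ours, not the paper's procedure): with u^* = -p_1 substituted, the Hamilton equations are \dot{x}_1 = -p_1, \dot{x}_2 = x_1, \dot{p}_1 = -p_2, \dot{p}_2 = -4 x_2, and p(\pi/2) depends affinely on \theta = p(0), so single shooting reduces to one 2-by-2 linear solve:

```python
import numpy as np

# Hamilton equations with u* = -p1 substituted; state z = [x1, x2, p1, p2]
M = np.array([[0., 0., -1., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., -1.],
              [0., -4., 0., 0.]])

def shoot(z0, tf=np.pi/2, n=4000):
    """RK4 integration; also accumulates J = 1/2 * int (4 x2^2 + u^2) dt."""
    h = tf / n
    z = np.array(z0, float)
    J = 0.0
    for _ in range(n):
        g0 = 0.5 * (4*z[1]**2 + z[2]**2)      # u = -p1, so u^2 = p1^2
        k1 = M @ z
        k2 = M @ (z + h/2 * k1)
        k3 = M @ (z + h/2 * k2)
        k4 = M @ (z + h * k3)
        z = z + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        J += h * (g0 + 0.5 * (4*z[1]**2 + z[2]**2)) / 2   # trapezoidal rule
    return z, J

# free final state: p(pi/2) = base + C @ theta must vanish
base = shoot([2., 1., 0., 0.])[0][2:]
C = np.column_stack([shoot([0., 0., 1., 0.])[0][2:],
                     shoot([0., 0., 0., 1.])[0][2:]])
theta = np.linalg.solve(C, -base)
zf, J = shoot([2., 1., theta[0], theta[1]])
print(theta, J)    # theta ≈ [5.6694, 7.6686], J ≈ 9.5037
```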
For this problem, the final state x(t_f) is free and \psi(x(t_f), t_f) = 0. Starting from x^{(0)}(t) = x(0) and p^{(0)}(t) = \theta = [\theta_1, \theta_2]^T unknown, and considering a threshold \epsilon = 10^{-6}, the convergence is reached after 14 iterations (N^* = 14), as shown by Fig. 2, where the normalized values of \theta_1, \theta_2 and J, defined with respect to their converged values as

\theta_1' = \frac{\theta_1}{\theta_1^{(N^*)}}, \qquad \theta_2' = \frac{\theta_2}{\theta_2^{(N^*)}}, \qquad J' = \frac{J}{J^{(N^*)}},

are represented.
Fig. 1. Optimal trajectories for the constrained problem.
Fig. 2. Evolution of the normalized values over the 14 iterations.
The approximate optimal control obtained is
$$u^*(t) \approx u^{(14)}(t) = -5.6694 + 7.6686\,t - 2\,t^2 - 1.3333\,t^3 + 0.94490\,t^4 - 0.25562\,t^5 + 0.02222\,t^6 + 0.00635\,t^7 - 0.00225\,t^8 + 0.00034\,t^9 - 0.18\times 10^{-4}\,t^{10} - 0.32067\times 10^{-5}\,t^{11} + 0.75750\times 10^{-6}\,t^{12} - 0.78816\times 10^{-7}\,t^{13} + 0.29365\times 10^{-8}\,t^{14} \qquad (98)$$
and the approximate optimal state trajectories are

$$x_1^*(t) \approx x_1^{(14)}(t) = 2 - 5.6694\,t + 3.8343\,t^2 - 0.66667\,t^3 - 0.33333\,t^4 + 0.18898\,t^5 - 0.042603\,t^6 + 0.0031746\,t^7 + 0.79365\times 10^{-3}\,t^8 - 0.24997\times 10^{-3}\,t^9 + 0.33812\times 10^{-4}\,t^{10} - 0.16033\times 10^{-5}\,t^{11} - 0.26722\times 10^{-6}\,t^{12} + 0.58269\times 10^{-7}\,t^{13} - 0.56297\times 10^{-8}\,t^{14}, \qquad (99)$$

$$x_2^*(t) \approx x_2^{(14)}(t) = 1 + 2\,t - 2.8347\,t^2 + 1.2781\,t^3 - 0.16667\,t^4 - 0.066667\,t^5 + 0.031497\,t^6 - 0.60862\times 10^{-2}\,t^7 + 0.39683\times 10^{-3}\,t^8 + 0.88183\times 10^{-4}\,t^9 - 0.24997\times 10^{-4}\,t^{10} + 0.30738\times 10^{-5}\,t^{11} - 0.13361\times 10^{-6}\,t^{12} - 0.20556\times 10^{-7}\,t^{13} + 0.41619\times 10^{-8}\,t^{14}. \qquad (100)$$
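These values can be cross-checked independently of the VIM. The sketch below (an illustrative re-derivation, not the authors' code) solves the two-point boundary-value problem $\dot x_1 = -p_1$, $\dot x_2 = x_1$, $\dot p_1 = -p_2$, $\dot p_2 = -4x_2$ with $x(0) = (2, 1)$ and $p(\pi/2) = (0, 0)$ by simple shooting on $\theta = p(0)$; since the Hamiltonian system is linear, the shooting residual is affine in $\theta$ and one finite-difference Newton step suffices.

```python
import math

T = math.pi / 2

def rhs(y):
    x1, x2, p1, p2 = y
    # Hamilton equations of Problem 3a with u* = -p1
    return (-p1, x1, -p2, -4.0 * x2)

def simulate(theta, n=2000):
    """RK4 on [0, T]; returns terminal costate (p1(T), p2(T)) and the cost J."""
    h = T / n
    y = [2.0, 1.0, theta[0], theta[1]]
    J = 0.0
    g = lambda y: 0.5 * (4.0 * y[1] ** 2 + y[2] ** 2)  # u = -p1 so u^2 = p1^2
    for _ in range(n):
        g0 = g(y)
        k1 = rhs(y)
        k2 = rhs([y[i] + 0.5 * h * k1[i] for i in range(4)])
        k3 = rhs([y[i] + 0.5 * h * k2[i] for i in range(4)])
        k4 = rhs([y[i] + h * k3[i] for i in range(4)])
        y = [y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(4)]
        J += 0.5 * h * (g0 + g(y))        # trapezoidal rule for the cost
    return (y[2], y[3]), J

# Shooting: find theta = p(0) such that p(T) = (0, 0).  The system is linear,
# so the residual is affine in theta and one finite-difference Newton step is exact.
r0, _ = simulate([0.0, 0.0])
(r10, r11), _ = simulate([1.0, 0.0])
(r20, r21), _ = simulate([0.0, 1.0])
a, c = r10 - r0[0], r11 - r0[1]          # Jacobian column for theta_1
b, e = r20 - r0[0], r21 - r0[1]          # Jacobian column for theta_2
det = a * e - b * c
theta = [-( e * r0[0] - b * r0[1]) / det,
         -(-c * r0[0] + a * r0[1]) / det]

_, J = simulate(theta)
print(theta, J)   # theta close to (5.6694, 7.6686), J close to 9.5037
```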
The quasi-optimal value of the criterion is $J^* = J^{(14)} = 9.503675426$ and the initial costate vector is found as $p^{(14)}(0) = \theta = [5.6694,\ 7.6686]^T$.

(3b) Consider the same optimal control problem for which the objective is to transfer the system from $x(0)$ to the terminal constraint
$$x_1\!\left(\frac{\pi}{2}\right) + x_2\!\left(\frac{\pi}{2}\right) = 1. \qquad (101)$$
To take this terminal constraint into account, it is incorporated in the performance index using the Lagrange multiplier $\nu$ (see Eq. (5)). Thus, from the transversality boundary condition (9), it follows that

$$p_1\!\left(\frac{\pi}{2}\right) - p_2\!\left(\frac{\pi}{2}\right) = 0. \qquad (102)$$
By considering $\varepsilon = 10^{-6}$, the proposed approach converges from $N^* = 14$ and provides $J^* = J^{(14)} = 9.715571368$, $p^{(14)}(0) = \theta = [5.5017,\ 8.0051]^T$, $p^{(14)}(\pi/2) = [0.4220,\ 0.4220]^T$ and $x^{(14)}(\pi/2) = [-0.2001,\ 1.2002]^T$.
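As an illustrative cross-check of these values (not the authors' implementation), one can solve the Hamilton equations $\dot x_1 = -p_1$, $\dot x_2 = x_1$, $\dot p_1 = -p_2$, $\dot p_2 = -4x_2$ by shooting on $\theta = p(0)$ with terminal residuals $x_1(\pi/2) + x_2(\pi/2) - 1$ and $p_1(\pi/2) - p_2(\pi/2)$; the system being linear, one finite-difference Newton step again suffices.

```python
import math

T = math.pi / 2

def rhs(y):
    x1, x2, p1, p2 = y
    return (-p1, x1, -p2, -4.0 * x2)

def simulate(theta, n=2000):
    """RK4 on [0, T]; returns the terminal vector (x1, x2, p1, p2) and the cost J."""
    h = T / n
    y = [2.0, 1.0, theta[0], theta[1]]
    J = 0.0
    g = lambda y: 0.5 * (4.0 * y[1] ** 2 + y[2] ** 2)  # u = -p1
    for _ in range(n):
        g0 = g(y)
        k1 = rhs(y)
        k2 = rhs([y[i] + 0.5 * h * k1[i] for i in range(4)])
        k3 = rhs([y[i] + 0.5 * h * k2[i] for i in range(4)])
        k4 = rhs([y[i] + h * k3[i] for i in range(4)])
        y = [y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(4)]
        J += 0.5 * h * (g0 + g(y))
    return y, J

def residual(theta):
    y, _ = simulate(theta)
    # terminal constraint x1 + x2 = 1 and transversality p1 = p2
    return (y[0] + y[1] - 1.0, y[2] - y[3])

r0 = residual([0.0, 0.0])
a, c = [(x - y) for x, y in zip(residual([1.0, 0.0]), r0)]
b, e = [(x - y) for x, y in zip(residual([0.0, 1.0]), r0)]
det = a * e - b * c
theta = [-( e * r0[0] - b * r0[1]) / det,
         -(-c * r0[0] + a * r0[1]) / det]

y, J = simulate(theta)
print(theta, J, y[0], y[1])  # theta near (5.5017, 8.0051); J near 9.7156; x(T) near (-0.2001, 1.2002)
```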
4.4. Problem 4: case of a numerical solution

Consider a batch reactor in which the following parallel reaction mechanism occurs:

$$A \xrightarrow{k_1} B, \qquad (103)$$

$$A \xrightarrow{k_2} C. \qquad (104)$$
Both reactions are first order and irreversible. The objective is to find the temperature-time profile $T(t)$ that maximizes the yield of species B at final time $t_f$. The mass balance relations for species A ($x_1(t)$) and B ($x_2(t)$) in the batch reactor are
$$\dot{x}_1(t) = -\left(k_1(t) + k_2(t)\right) x_1(t), \qquad (105)$$

$$\dot{x}_2(t) = k_1(t)\, x_1(t), \qquad (106)$$

with $k_i = k_{i0}\, e^{-E_i/(R\,T(t))}$ ($i = 1, 2$), where $k_{10} = 10^6\ \mathrm{s^{-1}}$, $k_{20} = 5\times 10^{11}\ \mathrm{s^{-1}}$, $E_1 = 10^4\ \mathrm{cal/gmol}$ and $E_2 = 2\times 10^4\ \mathrm{cal/gmol}$ [45]. It can be noticed that the numerical value (noted $v$) of $k_2$ is such that $v(k_2) = 0.5\, v^2(k_1)$. The final time is $t_f = 1$. This optimal control problem has been studied by Biegler [9], Logsdon and Biegler [46] and England et al. [45] by considering a constraint on the temperature $T(t)$. Here, the optimal control law is designed by assuming that the temperature is not constrained. The model can be simplified by choosing as manipulated variable $u(t) = k_1(t)$, which amounts to manipulating the temperature. Thus, the optimal control problem is given by
$$\max_{u(t)} x_2(t_f) \iff \min_{u(t)} J = -x_2(t_f), \qquad (107)$$

subject to

$$\dot{x}_1(t) = -\left(u(t) + 0.5\,u^2(t)\right) x_1(t), \qquad (108)$$

$$\dot{x}_2(t) = u(t)\, x_1(t), \qquad (109)$$

$$x_1(0) = 1, \qquad (110)$$

$$x_2(0) = 0. \qquad (111)$$
For this problem, $\psi(x(t), t_f) = -x_2(t)$, thus according to (15): $p(t_f) = [0,\ -1]^T$. Starting with the initial state $x^{(0)}(t) = x_0$ and the initial costate vector $p^{(0)}(t) = [\theta_1,\ \theta_2]^T$ with $\theta$ unknown, and considering a threshold $\varepsilon = 10^{-6}$, the proposed approach converges after 16 iterations ($N = N^* = 16$). Note that the integral parts involved in the iteration formulas are evaluated numerically using the Gauss–Legendre integration method [47]. The values of the performance index and of the initial costate vector $p(0)$ are shown in Fig. 3. The optimal value of the performance index is $J^{(16)} = -0.57663881$. Fig. 4 shows the optimal state and costate trajectories and the optimal control. It must be noted that, when $t \to 1$, the control tends to infinity. This comes from the optimal control law
$$u^*(t) = -1 + \frac{p_2(t)}{p_1(t)} \qquad (112)$$
together with the final condition on $p$, which is

$$p(1) = \begin{bmatrix} p_1(1) \\ p_2(1) \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \end{bmatrix}. \qquad (113)$$

Hence, as $p_1(t) \to 0$ when $t \to 1$, $u^*(t) \to +\infty$.

Fig. 3. Evolution of the parameter vector and the criterion for the batch reactor.
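Under the reconstructed equations above, the costate system decouples: $\dot p_2 = 0$ with $p_2(1) = -1$ gives $p_2 \equiv -1$, and substituting $u = -1 + p_2/p_1$ into $\dot p_1$ yields the scalar equation $\dot p_1 = -0.5\,p_1 - 1 - 0.5/p_1$. With $w = p_1^2$ and the backward variable $s = 1 - t$, the singular terminal condition $p_1(1) = 0$ becomes the regular initial-value problem $dw/ds = (1 - \sqrt{w})^2$, $w(0) = 0$. The sketch below (an illustrative re-derivation, not the authors' VIM code) integrates this backward problem, recovers $u(t) = -1 + 1/\sqrt{w}$, and propagates the states, closely reproducing the reported yield $x_2(1) \approx 0.576$.

```python
import math

# Backward variable s = 1 - t: w(s) = p1(1-s)^2 satisfies dw/ds = (1 - sqrt(w))^2, w(0) = 0.
n = 20000                      # state-integration steps on [0, 1 - 1/n]
m = 2 * n                      # w stored on a grid twice as fine (for RK4 midpoints)
hs = 1.0 / m

w = [0.0] * (m + 1)
f = lambda w_: (1.0 - math.sqrt(max(w_, 0.0))) ** 2
for j in range(m):
    k1 = f(w[j])
    k2 = f(w[j] + 0.5 * hs * k1)
    k3 = f(w[j] + 0.5 * hs * k2)
    k4 = f(w[j] + hs * k3)
    w[j + 1] = w[j] + hs / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def u_at(j):
    # fine-grid time index j (t = j * hs); the matching s-index is m - j
    return -1.0 + 1.0 / math.sqrt(w[m - j])

# Forward RK4 for x1' = -(u + 0.5 u^2) x1, x2' = u x1, stopping one step
# short of t = 1, where u blows up like (1 - t)^(-1/2).
h = 1.0 / n
x1, x2 = 1.0, 0.0
def g(x1_, x2_, u_):
    a = u_ + 0.5 * u_ * u_
    return (-a * x1_, u_ * x1_)
for k in range(n - 1):
    u0, um, u1 = u_at(2 * k), u_at(2 * k + 1), u_at(2 * k + 2)
    a1 = g(x1, x2, u0)
    a2 = g(x1 + 0.5 * h * a1[0], x2 + 0.5 * h * a1[1], um)
    a3 = g(x1 + 0.5 * h * a2[0], x2 + 0.5 * h * a2[1], um)
    a4 = g(x1 + h * a3[0], x2 + h * a3[1], u1)
    x1 += h / 6.0 * (a1[0] + 2 * a2[0] + 2 * a3[0] + a4[0])
    x2 += h / 6.0 * (a1[1] + 2 * a2[1] + 2 * a3[1] + a4[1])

print(x2, -math.sqrt(w[m]))   # yield ~ 0.576; p1(0) equals -yield by a homogeneity argument
```

Because the dynamics are linear in $x$ and the costate problem does not involve the state, $p_1(0) = \partial J^*/\partial x_1(0) = -x_2(1)$, which the printed values confirm.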
Fig. 4. Optimal trajectories for the batch reactor.
5. Conclusion

In this article, the variational iteration method is adopted to design a finite-time open-loop optimal control using the minimum principle for both unconstrained and constrained optimal control problems. The main idea of the proposed approach consists in solving the Hamilton equations iteratively, by means of practical iteration formulas in which the unknown initial costate vector is treated as a set of constants; by then imposing the boundary conditions, a system of nonlinear algebraic equations is obtained. The solution of this set of equations yields the unknown initial costate vector, from which the global optimal open-loop control law can easily be determined. Solving the Hamilton equations with iteration formulas, i.e. correction functionals, makes it possible to overcome the instability encountered with direct numerical integration of these coupled differential equations. The convergence of the method has been demonstrated. In addition, optimal control problems for which the analytical solution of the Hamilton equations is difficult to determine, or does not exist, can be handled by this approach, and approximate solutions can be obtained with high accuracy. Notice that the correction functionals are simple to implement as a computer program using either symbolic or numerical programming. The proposed design approach is illustrated by application examples. It is shown that the variational iteration method either converges to the exact solution (Examples 1 and 2), provides an approximate analytical solution (Example 3), or provides an approximate numerical solution (Example 4).

References

[1] D.E. Kirk, Optimal Control Theory. An Introduction, Prentice-Hall, 1970.
[2] F.L. Lewis, Optimal Control, John Wiley, 1986.
[3] A.E. Bryson, Dynamic Optimization, Addison Wesley, 1999.
[4] R.W.H. Sargent, Optimal control, Journal of Computational and Applied Mathematics 124 (2000) 361–371.
[5] S.R. Upreti, A new robust technique for optimal control of chemical engineering processes, Computers and Chemical Engineering 28 (2004) 1325–1336.
[6] J.P. Corriou, Process Control – Theory and Applications, Springer, London, 2004.
[7] M. Diehl, J. Gerhard, W. Marquardt, M. Mönnigmann, Numerical solution approaches for robust nonlinear optimal control problems, Computers and Chemical Engineering 28 (2008) 1279–1292.
[8] M. Miranda, J.M. Reneaume, X. Meyer, M. Meyer, F. Szigeti, Integrating process design and control: an application of optimal control to chemical processes, Chemical Engineering and Processing 47 (2008) 2004–2018.
[9] L.T. Biegler, Solution of dynamic optimization problems by successive quadratic programming and orthogonal collocation, Computers and Chemical Engineering 8 (1984) 243–248.
[10] E. Trélat, Optimal control and applications to aerospace: some results and challenges, Journal of Optimization Theory and Applications 154 (2012) 713–758.
[11] D. Naidu, Optimal Control Systems, CRC Press, Boca Raton, FL, 2003.
[12] B. Van Brunt, The Calculus of Variations, Springer, NY, USA, 2004.
[13] L.S. Pontryagin, V.G. Boltyanskii, R.V. Gamkrelidze, E.F. Mishchenko, The Mathematical Theory of Optimal Processes, Pergamon Press, New York, 1964.
[14] M.J. Sewell, Maximum and Minimum Principles: A Unified Approach, with Applications, Cambridge University Press, NY, USA, 1987.
[15] R. Bellman, Dynamic Programming, Princeton University Press, Princeton, NJ, 1957.
[16] J.L. Troutman, Variational Calculus and Optimal Control. Optimization with Elementary Convexity, Springer, NY, USA, 1996.
[17] D.J. Chmielewski, V. Manousiouthakis, On constrained infinite-time linear quadratic optimal control with stochastic disturbances, Journal of Process Control 15 (2005) 383–391.
[18] K. Graichen, N. Petit, Incorporating a class of constraints into the dynamics of optimal control problems, Optimal Control Applications and Methods 20 (2009) 537–561.
[19] W.F. Arnold, A.J. Laub, Generalized eigenproblem algorithms and software for algebraic Riccati equations, IEEE Proceedings 72 (12) (1984) 1746–1754.
[20] A.J. Laub, A Schur method for solving algebraic Riccati equations, IEEE Transactions on Automatic Control AC-24 (6) (1979) 913–921.
[21] B.D.O. Anderson, J.B. Moore, Optimal Control. Linear Quadratic Methods, Prentice-Hall International, USA, 1989.
[22] J.T. Betts, Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, Society for Industrial and Applied Mathematics, 2009.
[23] W.H. Ray, Advanced Process Control, Butterworth, Boston, 1989.
[24] R.F. Hartl, S.P. Sethi, R.G. Vickson, A survey of the maximum principle for optimal control problems with state constraints, SIAM Review 37 (2) (1995) 181–218.
[25] R.F. Stengel, Optimal Control and Estimation, Dover Publications, 1994.
[26] W.F. Ramirez, Process Control and Identification, Academic Press, 1994.
[27] W.F. Feehery, P.I. Barton, Dynamic optimisation with state variable path constraints, Computers and Chemical Engineering 22 (9) (1998) 1241–1256.
[28] K. Graichen, A. Kugi, N. Petit, F. Chaplais, Handling constraints in optimal control with saturation functions and system extension, Systems & Control Letters 50 (2010) 671–679.
[29] J.F. Bonnans, T. Guilbaud, Using logarithmic penalties in the shooting algorithm for optimal control problems, Optimal Control Applications and Methods 24 (2003) 257–278.
[30] S.J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997.
[31] J.I. Ramos, On the variational iteration method and other iterative techniques for nonlinear differential equations, Applied Mathematics and Computation 199 (1) (2008) 39–69.
[32] A.M. Wazwaz, A comparison between the variational iteration method and Adomian decomposition method, Journal of Computational and Applied Mathematics 207 (2007) 129–136.
[33] L. Jin, Application of variational iteration method and homotopy perturbation method to the modified Kawahara equation, Mathematical and Computer Modelling 49 (2009) 573–578.
[34] J.H. He, Variational iteration method – a kind of non-linear analytical technique: some examples, International Journal of Non-Linear Mechanics 34 (1999) 699–708.
[35] J.I. Ramos, On the Picard–Lindelöf method for nonlinear second-order differential equations, Applied Mathematics and Computation 203 (1) (2008) 238–242.
[36] J.H. He, Variational iteration method for autonomous ordinary differential systems, Applied Mathematics and Computation 114 (2000) 115–123.
[37] S.A. Yousefi, M. Dehghan, A. Lotfi, Finding the optimal control of linear systems via He's variational iteration method, International Journal of Computer Mathematics 87 (5) (2010) 1042–1050.
[38] O. Olotu, A.I. Adekunle, Analytic and numeric solutions of discretized constrained optimal control problem with vector and matrix coefficients, Advanced Modeling and Optimization 12 (1) (2010) 119–131.
[39] I. Kucuk, Active optimal control of the KdV equation using the variational iteration method, Mathematical Problems in Engineering (2010), Article ID 929103, 10 p. doi:10.1155/2010/929103.
[40] J. Biazar, H. Ghazvini, He's variational iteration method for solving linear and non-linear systems of ordinary differential equations, Applied Mathematics and Computation 191 (2007) 287–297.
[41] A.D. Ganji, M. Nourollahi, E. Mohseni, Application of He's method to nonlinear chemistry problems, Computers and Mathematics with Applications 54 (2007) 1122–1132.
[42] B.A. Finlayson, The Method of Weighted Residuals and Variational Principles, Academic Press, New York, 1972.
[43] M. Tatari, M. Dehghan, On the convergence of He's variational iteration method, Journal of Computational and Applied Mathematics 207 (2007) 121–128.
[44] Z.M. Odibat, A study on the convergence of variational iteration method, Mathematical and Computer Modelling 51 (2010) 1181–1192.
[45] R. England, S. Gómez, R. Lamour, Expressing optimal control problems as differential algebraic equations, Computers and Chemical Engineering 29 (2005) 1720–1730.
[46] J.S. Logsdon, L.T. Biegler, Accurate solution of differential-algebraic optimization problems, Industrial & Engineering Chemistry Research 28 (11) (1989) 1628–1639.
[47] J.P. Corriou, Méthodes Numériques et Optimisation – Théorie et pratique pour l'ingénieur, Lavoisier, Tec & Doc, Paris, France, 2010.