Applied Mathematics and Computation 200 (2008) 529–536
www.elsevier.com/locate/amc

Reduced gradient method combined with augmented Lagrangian and barrier for the optimal power flow problem

Esdras Penêdo de Carvalho a,*, Anésio dos Santos Júnior b, To Fu Ma c

a Departamento de Matemática, Universidade Estadual de Maringá, 87020-900 Maringá, PR, Brazil
b Faculdade de Engenharia Elétrica e de Computação, Universidade Estadual de Campinas, 13083-970 Campinas, SP, Brazil
c Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, 13560-970 São Carlos, SP, Brazil
Abstract

A new approach for solving the optimal power flow (OPF) problem is established by combining the reduced gradient method and the augmented Lagrangian method with barriers, and by exploiting specific characteristics of the relations between the variables of the OPF problem. Computer simulations on the IEEE 14-bus and IEEE 30-bus test systems illustrate the method.
© 2007 Elsevier Inc. All rights reserved.

Keywords: Optimal power flow; Nonlinear optimization; Gradient methods; Augmented Lagrangian; Barrier method
* Corresponding author. E-mail address: [email protected] (E.P. de Carvalho).
0096-3003/$ - see front matter © 2007 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2007.11.025

1. Introduction

Optimal power flow (OPF) is one of the key problems in power system management. Its mathematical formulation, as a nonlinear programming problem, was proposed by Carpentier [4] in 1962 as an extension of the economic dispatch problem, and since then it has become a major subject in nonlinear optimization. OPF is a steady-state nonlinear optimization problem that calculates a set of optimal variables from the network state, load data, and system parameters. The optimal values are computed in order to minimize a certain goal, such as generation cost or transmission line power losses, under equality and inequality constraints. The OPF problem can be expressed as follows:

    minimize    F(X)
    subject to  g_i(X) = 0,    1 ≤ i ≤ m,
                h_j(X) ≤ 0,    1 ≤ j ≤ q,                 (1.1)
                X^min ≤ X ≤ X^max,

where X ∈ R^n, n > m, is the vector of state variables, representing voltage magnitudes and phase angles. The objective function F : R^n → R represents the system performance, considered here as real power losses in
the transmission. The restrictions g_i : R^n → R represent the power flow equations, which are obtained by means of the conservation of energy principle at all buses of the system. The functions h_j : R^n → R, q < m, represent the functional constraints on the power flow in the transmission lines and buses. The vector inequality X^min ≤ X ≤ X^max is understood component by component.

A large number of methods have been developed or employed to solve the OPF problem, for instance reduced gradient, projected Lagrangian, sequential quadratic programming, augmented Lagrangian and interior point methods. A review of the literature on solving the OPF problem up to 1993 can be found in Momoh et al. [13,14]. Our approach is based on the reduced gradient combined with augmented Lagrangian and barrier methods.

The reduced gradient method was proposed by Wolfe [19] to solve nonlinear programming problems with linear constraints. It was extended by Abadie and Carpentier [1] to problems with nonlinear constraints through approximate linearized problems. This extension, called the generalized reduced gradient method (GRG), was applied to the OPF problem by Dommel and Tinney [7], combined with penalty techniques on the dependent variables and functional constraints. However, when the penalty parameter becomes large, the problem tends to be ill-conditioned, making it hard to solve. This difficulty was one of the motivations for the development of the augmented Lagrangian method by Hestenes [10] and Powell [16] for general nonlinear optimization problems. The idea is to add a quadratic penalty term to the usual Lagrangian, obtaining an augmented Lagrangian function. This method was applied specifically to OPF problems in, for instance, [2,5,17,18]. Another important method for solving the OPF problem is the barrier method. Essentially, a constrained problem is approximated by an unconstrained or partially unconstrained problem through the addition of a barrier term to the objective function; see for instance [8,9].

In this work our objective is to combine the GRG method with techniques from the augmented Lagrangian and log-barrier methods, by re-grouping the restrictions in a suitable form and analyzing the specific relations between the variables of the OPF problem. The resulting method exploits the strengths of each technique and avoids some of their individual weaknesses.

The paper is organized as follows: In Section 2 we outline the GRG method and the GRG method combined with the augmented Lagrangian. In Section 3 we present our main contribution: the GRG combined with log-barrier and augmented Lagrangian. Section 4 describes the IEEE test systems and shows some computer simulations. Conclusions are presented in Section 5.

2. The GRG and the GRG combined with augmented Lagrangian method

In this section we outline the GRG method and the combined GRG–augmented Lagrangian method, which will be used as references in our study.

2.1. The GRG method

The GRG method was developed to solve nonlinear programming problems of the form

    minimize    f(X)
    subject to  g_i(X) = 0,    1 ≤ i ≤ m,                 (2.1)
                X^min ≤ X ≤ X^max,

where f : R^n → R is the objective function and g_i : R^n → R (m < n) are the nonlinear constraints. First, through linear approximation of the restrictions, the problem is transformed into a sequence of linearized subproblems. At the optimal point, the approximated problem possesses the same solution as the original problem (e.g. [1,11]). Each subproblem with linear constraints is then solved with the reduced gradient method. The method reduces the dimension of the problem by representing part of the variables, called basic, by means of a subset of independent variables, called non-basic. The splitting of the variable X into (x, u), where x is the vector of basic variables and u is the vector of non-basic variables, follows the non-degeneracy hypotheses:
1. The vector x has dimension m and the vector u has dimension n − m.
2. The Jacobian matrix J^g_x of g = (g_1, ..., g_m), with respect to x, is non-singular at X = (x, u).

Let X̄ = (x̄, ū) be the current feasible point (i.e., X̄ satisfies the constraints of (2.1)). Then the specific variables to be chosen as basic must be selected so that J^g_x, evaluated at x̄, is non-singular. In this case the constraint g(x, u) = 0 can be solved (at least conceptually) for x in terms of u, to yield the basic variables as a function of the non-basic variables, x(u). This representation is valid for all u sufficiently near ū (see [11,12]). The objective function is then reduced to a function of u only, f(x(u), u) = f_r(u), and the original problem (2.1) (at least in a neighborhood of x̄) is transformed into the simpler reduced problem

    minimize    f_r(u)
    subject to  u^min ≤ u ≤ u^max,                        (2.2)

where u ∈ R^{n−m}. The function f_r(u) is called the reduced objective and its gradient, ∇f_r(u), the reduced gradient. We now summarize the GRG algorithm, which solves (2.1) by means of (2.2).

Algorithm 1 (GRG).

Initialization: Let X^0 ∈ R^n be an arbitrary initial point. Split X^0 into m basic variables x^0 and n − m non-basic variables u^0 such that J^g_x(x^0, u^0) is a non-singular matrix. Set k = 0 and go to the next step.

Step 1. Solve g(x^k, u^k) = 0 for x^k by Newton's method.

Step 2. Compute the reduced gradient

    ∇f_r = ∂f(X^k)/∂u − ∂f(X^k)/∂x [J^g_x(X^k)]^{−1} J^g_u(X^k).

Step 3. Compute a feasible descent direction d^k by projecting −∇f_r onto the feasible domain, that is, for j = 1, ..., n − m,

    d^k_j = 0            if ∇f_{r,j} > 0 and u^k_j = u^min_j,
    d^k_j = 0            if ∇f_{r,j} < 0 and u^k_j = u^max_j,       (2.3)
    d^k_j = −∇f_{r,j}    otherwise.

If d^k is null, stop: X^k is a KKT point. Otherwise, go to the next step.

Step 4. Perform a one-dimensional search to compute the step length a^k with

    a^k = arg min_{0 ≤ a ≤ a_max} f_r(u^k + a d^k).

Step 5. Update X^k: X^{k+1} = X^k + a^k d^k. Replace k by k + 1 and go back to Step 1.

Remarks. In Step 4, the line search is done in two stages: a bracketing phase finds an interval [0, a_max] containing desirable step lengths, and an interpolation phase computes a good step length within this interval. A condition stipulates that a^k should first of all give sufficient decrease in the reduced objective function f_r, as measured by Armijo's condition (see [3,6,15])

    f_r(u^k + a d^k) ≤ f_r(u^k) + c_1 a ∇f_r(u^k)^T d^k             (2.4)

for some constant c_1 ∈ (0, 1). In the case of nonlinear constraints, the one-dimensional search can terminate in two different ways. (a) If the Newton method converges and a^k produced decrease in f_r, then the one-dimensional search is finished and the resolution of a new reduced problem is initialized. (b) If a^k did not produce
decrease, then a quadratic interpolation of f_r is performed on the three values f_r(0), f_r′(0) and f_r(u^k + a^k d^k), and f_r is evaluated at the minimum of this parabola. If we get decrease of f_r, then the search is finished; otherwise, a similar cubic interpolation is performed. This procedure stops when Armijo's condition (2.4) is satisfied.

2.2. Combined GRG and augmented Lagrangian

We now present the classical approach of Dommel and Tinney [7], which combines the GRG with penalty, and the GRG with the augmented Lagrangian method. Rewriting the OPF problem (1.1) in terms of basic variables (x) and non-basic variables (u), one has the equivalent problem

    minimize    F(x, u)
    subject to  g(x, u) = 0,
                h^min ≤ h(x, u) ≤ h^max,                  (2.5)
                x^min ≤ x ≤ x^max,
                u^min ≤ u ≤ u^max,

where vector inequalities mean inequalities component to component. Next we construct an augmented Lagrangian functional to deal with the constraints on h(x, u) and x. The constraint g(x, u) = 0 and the bounds on u are treated by the reduced gradient method. Let λ_{h_j}, λ̄_{h_j}, λ_{x_j} and λ̄_{x_j} denote the Lagrange multipliers for the restrictions on h(x, u) and x in (2.5). Then we define the following terms:

    p_a[x] = Σ_{j=1}^{m}  −λ_{x_j}² / (2 c_x)                                         if λ_{x_j} + c_x (x_j^min − x_j) < 0,
                          λ_{x_j} (x_j^min − x_j) + ½ c_x (x_j^min − x_j)²            otherwise;

    p̄_a[x] = Σ_{j=1}^{m}  −λ̄_{x_j}² / (2 c_x)                                        if λ̄_{x_j} + c_x (x_j − x_j^max) < 0,
                          λ̄_{x_j} (x_j − x_j^max) + ½ c_x (x_j − x_j^max)²           otherwise;

    p_a[h] = Σ_{j=1}^{q}  −λ_{h_j}² / (2 c_h)                                         if λ_{h_j} + c_h (h_j^min − h_j(X)) < 0,
                          λ_{h_j} (h_j^min − h_j(X)) + ½ c_h (h_j^min − h_j(X))²      otherwise;

    p̄_a[h] = Σ_{j=1}^{q}  −λ̄_{h_j}² / (2 c_h)                                        if λ̄_{h_j} + c_h (h_j(X) − h_j^max) < 0,
                          λ̄_{h_j} (h_j(X) − h_j^max) + ½ c_h (h_j(X) − h_j^max)²     otherwise;

where c_x and c_h are penalty parameters for x and h, respectively. For simplicity we sometimes use the vector notation λ_h = (λ_{h_1}, ..., λ_{h_q}), with similar meanings for λ̄_h, λ_x and λ̄_x. Now we can define a sequence of partially constrained augmented Lagrangian problems,

    minimize_{(x,u)}   L_a(x, u; λ^k_h, λ̄^k_h, λ^k_x, λ̄^k_x)
    subject to         g(x, u) = 0,                       (2.6)
                       u^min ≤ u ≤ u^max,
where

    L_a(x, u; λ^k_h, λ̄^k_h, λ^k_x, λ̄^k_x) = F(x, u) + p_a[x] + p̄_a[x] + p_a[h] + p̄_a[h].        (2.7)
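For concreteness, each piecewise term in the penalty functions above can be sketched as a small scalar function. The sketch below is a minimal illustration of this standard Hestenes–Powell treatment of a one-sided constraint, not the authors' code (their implementation was in MATLAB); it evaluates one term of p_a[h] for the lower-bound constraint h_j^min − h_j(X) ≤ 0, and the other three terms have the same shape with the appropriate argument.

```python
def penalty_term(lam, c, viol):
    """One term of the augmented Lagrangian penalty p_a.

    lam  : current Lagrange multiplier for this constraint
    c    : penalty parameter (c_x or c_h)
    viol : signed constraint value, e.g. h_min - h(X) for a lower bound
           (viol <= 0 means the constraint is satisfied)
    """
    if lam + c * viol < 0:
        # Constraint comfortably inactive: the term flattens out to a constant.
        return -lam**2 / (2.0 * c)
    # Active or nearly active: linear multiplier term plus quadratic penalty.
    return lam * viol + 0.5 * c * viol**2


# A constraint far inside its bound contributes the constant -lam^2/(2c);
# a violated one is penalized quadratically.
print(penalty_term(lam=1.0, c=10.0, viol=-5.0))  # inactive branch: -0.05
print(penalty_term(lam=1.0, c=10.0, viol=0.2))   # active branch: 0.2 + 0.2 = 0.4
```

Because the inactive branch is constant, the term is continuously differentiable across the switch, which is what distinguishes this construction from a plain quadratic penalty.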
Then the KKT conditions for (2.5) are obtained by means of the sequence of problems (2.6). This sequence is initialized with a problem in which an arbitrary value is taken for all penalty parameters and the Lagrange multipliers λ^k_{h_j}, λ̄^k_{h_j}, λ^k_{x_j} and λ̄^k_{x_j} are null. From the solution of this first problem, one can generate a family of problems (2.6) in the following two different ways:

Procedure P1. Apply the classical penalty method of Dommel and Tinney [7], where, after each iteration, if necessary, the penalty parameters c_x and c_h are updated through a condition like

    c^{k+1} = η c^k     if η c^k ≤ c^lim,
              c^lim     if η c^k > c^lim,                 (2.8)

where η > 1, c^lim is an upper bound for c and k = 0, 1, 2, ...

Procedure P2. First update the Lagrange multipliers:

    λ^{k+1}_{h_j} = λ^k_{h_j} + c_h (h_j^min − h_j(x, u)),
    λ̄^{k+1}_{h_j} = λ̄^k_{h_j} + c_h (h_j(x, u) − h_j^max),       (2.9)
    λ^{k+1}_{x_j} = λ^k_{x_j} + c_x (x_j^min − x_j),
    λ̄^{k+1}_{x_j} = λ̄^k_{x_j} + c_x (x_j − x_j^max),
and then update the penalty parameters with (2.8).

The method to generate the sequence of problems (2.6) and to obtain the KKT conditions of problem (2.5) is summarized in the following algorithm.

Algorithm 2 (Combined GRG).

Initialization: Let X^0 = (x^0, u^0) ∈ R^n be an arbitrary initial point. Solve problem (2.6) by using Algorithm 1 and verify the feasibility of the resulting solution X^1 = (x^1, u^1) for problem (2.5). If it is feasible, stop: X^1 is a solution of the OPF problem. Otherwise set k = 1, set c^k_x and c^k_h, set λ̄^k_{x_j} = λ^k_{x_j} = λ̄^k_{h_j} = λ^k_{h_j} = 0 and go to the next step.

Step 1. Solve problem (2.6) through Algorithm 1 to obtain a solution X^k. If X^k satisfies the constraints on h and x in (2.5), stop: X^k is an optimal solution of the OPF problem. Otherwise, go to the next step.

Step 2. Update λ̄^k_{x_j}, λ^k_{x_j}, λ̄^k_{h_j}, λ^k_{h_j}, c^k_x and c^k_h with Procedure P1 or P2. Set k = k + 1 and repeat Step 1.

3. Reduced gradient with barrier and augmented Lagrangian

In this section we present our main contribution, an efficient algorithm to solve the OPF problem (2.5) based on a new procedure (Procedure P4 below). The key point is to use a log-barrier to treat the constraints on the independent variables (u), that is, to transform the OPF problem (2.5) into

    minimize    F(x, u) + b(u, β)
    subject to  g(x, u) = 0,
                h^min ≤ h(x, u) ≤ h^max,                  (3.1)
                x^min ≤ x ≤ x^max,

where

    b(u, β) = −β Σ_{j=1}^{n−m} [ln(u_j − u_j^min) + ln(u_j^max − u_j)]        (3.2)
and β is the barrier parameter. For a given parameter β, problem (3.1) is solved through a sequence of partially constrained augmented Lagrangians. More precisely, we define the sequence of problems

    minimize_{(x,u)}   L_ab(x, u; λ^k_h, λ̄^k_h, λ^k_x, λ̄^k_x, β^k)
    subject to         g(x, u) = 0,                       (3.3)

where

    L_ab(x, u; λ^k_h, λ̄^k_h, λ^k_x, λ̄^k_x, β^k) = F(x, u) + b(u, β^k) + p_a[x] + p̄_a[x] + p_a[h] + p̄_a[h],

with p_a[x], p̄_a[x], p_a[h] and p̄_a[h] defined as before. As in Algorithm 2, the sequence of problems (3.3) is initiated with a first problem in which arbitrary values are taken for the penalty and barrier parameters and the Lagrange multipliers are null. From this initial problem, we construct a sequence of problems (3.3) by using two procedures.

Procedure P3. Apply Procedure P1 to (3.3) and then update the barrier parameter using

    β^{k+1} = β^k / ρ     if β^k / ρ ≥ β^lim,
              β^lim       if β^k / ρ < β^lim,             (3.4)
where ρ > 1, β^lim is a lower bound for β and k = 0, 1, 2, ...

Now we present a new approach which is specifically dedicated to the OPF problem.

Procedure P4. Apply Procedure P2 to (3.3) and then update the barrier parameter using (3.4).

The strategy to generate the sequence of problems (3.3) and to obtain the KKT conditions of problem (2.5) is summarized in the following algorithm.

Algorithm 3 (GRG with log-barrier).

Initialization: As in Algorithm 2.

Step 1. Solve problem (3.3) through Algorithm 1 to obtain a solution X^k. If X^k satisfies the constraints on h and x in (2.5), stop: X^k is an optimal solution of the OPF problem. Otherwise, go to Step 2.

Step 2. Update λ̄^k_{x_j}, λ^k_{x_j}, λ̄^k_{h_j}, λ^k_{h_j}, c^k_x, c^k_h and β^k with Procedure P3 or P4. Set k = k + 1 and repeat Step 1.

4. Numerical simulations

This section presents some numerical results obtained by comparing the proposed approach P4 with P1 (the classical approach of Dommel and Tinney), P2 and P3. The algorithms were tested on two IEEE standard test systems [20], namely, the IEEE 14-bus and IEEE 30-bus systems. The computer implementation was done in MATLAB® and Table 1 shows the main characteristics of the systems.

Table 1
Electric systems characteristics

IEEE system   Buses   Branches   Generators   Variables   Equality constraints   Inequality constraints
14            14      20         4            27          22                     18
30            30      41         6            59          52                     36

The tests were carried out with the following initial conditions for the optimization process: 1.0 p.u. (100 MVA base) for voltage magnitudes (V) and 0.0 rad for voltage angles (θ). Minimum and maximum voltages are specified at 0.9 p.u. and 1.1 p.u., respectively. The Lagrange multipliers related to the constraints on x and h are all null, that is, λ_x = λ̄_x = λ_h = λ̄_h = 0. The initial penalties are defined as c_x = c_h = 10 and the initial barrier parameter is defined as β = 1. The penalty and barrier update parameters were set to η = 1.1 and ρ = 1.1, respectively.

The optimization processes (P1–P4) for both test systems are summarized in Table 2. Fig. 1a and b shows the number of external and accumulated iterations on the IEEE 14-bus system, considering, respectively, the projected reduced gradient in u and the projected reduced gradient with barrier in u, both combined with the following options: penalty, augmented Lagrangian and augmented Lagrangian with log-barrier. The external iterations correspond to the total number of iterations executed in Step 2 of Algorithms 2 or 3. The accumulated iterations correspond to the total number of iterations executed in the initialization and Step 1 of Algorithms 2 or 3. Fig. 2a and b has the analogous meaning, but for the IEEE 30-bus system.

Table 2
Number of iterations for the electric systems

IEEE system   P1    P2    P3    P4
14            55    33    59    25
30            158   122   177   107
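For illustration, the parameter schedules (2.8) and (3.4) driving the outer iterations can be sketched in a few lines of Python. This is a hypothetical sketch, not the authors' MATLAB code; the starting values c = 10, β = 1 and η = ρ = 1.1 come from the test setup above, while the caps c^lim and β^lim are assumed placeholder values, since the paper does not report them.

```python
def penalty_schedule(c0, eta, c_lim, steps):
    """Penalty parameter updates following (2.8): c <- eta * c, capped at c_lim."""
    cs = [c0]
    for _ in range(steps):
        c = eta * cs[-1]
        cs.append(c if c <= c_lim else c_lim)
    return cs


def barrier_schedule(b0, rho, b_lim, steps):
    """Barrier parameter updates following (3.4): b <- b / rho, floored at b_lim."""
    bs = [b0]
    for _ in range(steps):
        b = bs[-1] / rho
        bs.append(b if b >= b_lim else b_lim)
    return bs


# Initial values from the tests: c_x = c_h = 10, beta = 1, eta = rho = 1.1.
# The caps 1e4 and 1e-6 are illustrative assumptions.
print(penalty_schedule(10.0, 1.1, 1e4, 5))
print(barrier_schedule(1.0, 1.1, 1e-6, 5))
```

With η = ρ = 1.1 both parameters change slowly, so the ill-conditioning associated with a rapidly growing penalty is avoided, at the cost of more outer iterations.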
[Figure 1 omitted: plots of accumulated iterations versus external iterations.]
Fig. 1. IEEE 14 – Number of external and accumulated iterations considering (a) projected reduced gradient in u with penalty in x and h (P1) and augmented Lagrangian in x and h (P2); (b) projected reduced gradient with barrier in u with penalty in x and h (P3) and augmented Lagrangian in x and h (P4).
[Figure 2 omitted: plots of accumulated iterations versus external iterations.]
Fig. 2. Same procedure as in Fig. 1, but using the IEEE 30-bus system.
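To make the barrier term (3.2) used by P3 and P4 concrete, a minimal Python sketch follows. It is an illustration, not the authors' MATLAB implementation; the 0.9 and 1.1 p.u. voltage bounds from the tests are reused as example data.

```python
import math


def log_barrier(u, u_min, u_max, beta):
    """Log-barrier term b(u, beta) of (3.2).

    Returns -beta * sum(ln(u_j - u_min_j) + ln(u_max_j - u_j)).  The value
    grows without bound as any u_j approaches one of its bounds, which is
    what keeps the iterates strictly inside the box u_min < u < u_max.
    """
    total = 0.0
    for uj, lo, hi in zip(u, u_min, u_max):
        if not (lo < uj < hi):
            raise ValueError("barrier is only defined strictly inside the bounds")
        total += math.log(uj - lo) + math.log(hi - uj)
    return -beta * total


# Voltage-magnitude bounds from the tests: 0.9 and 1.1 p.u.
lo, hi = [0.9, 0.9], [1.1, 1.1]
print(log_barrier([1.0, 1.0], lo, hi, beta=1.0))    # mild value at the midpoint
print(log_barrier([1.099, 1.0], lo, hi, beta=1.0))  # much larger near a bound
```

As β is driven toward β^lim by (3.4), the influence of this term fades and the solution of (3.3) approaches a solution of the bound-constrained problem.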
5. Conclusion

This work presents a new strategy, based on the reduced gradient and the augmented Lagrangian methods with log-barrier, to solve the OPF problem. The tested strategy takes advantage of the characteristics of the OPF variables, as used in Procedure P4. Tests on the IEEE 14-bus and IEEE 30-bus test systems were performed to compare Procedure P4 with Procedure P1 (classical) and the alternative Procedures P2 and P3. From a closer look at Table 2 and Figs. 1 and 2, one infers that:

(a) Procedure P2 (reduced gradient + augmented Lagrangian) performed better than P1 (classical penalty method), as expected.
(b) Procedure P3 (penalty + barrier) had the worst performance in both cases.
(c) Our Procedure P4 (augmented Lagrangian + barrier) was the most efficient in all considered cases.

Therefore the numerical results shown in the tests indicate the viability of using the augmented Lagrangian function combined with the log-barrier and the reduced gradient method to solve the OPF problem.

Acknowledgement

The authors thank the Brazilian agencies CNPq and FAPESP for financial support.

References

[1] J. Abadie, J. Carpentier, Generalization of the Wolfe reduced gradient method to the case of nonlinear constraints, in: R. Fletcher (Ed.), Optimization (Proceedings of a Symposium held at University of Keele, 1968), Academic Press, London, 1969, pp. 37–47.
[2] M.M. Adibi et al., Optimal transformer tap selection using modified barrier-augmented Lagrangian method, IEEE Transactions on Power Systems 18 (2003) 251–257.
[3] D.P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Computer Science and Applied Mathematics, Academic Press, New York, 1982.
[4] J. Carpentier, Contribution à l'étude du dispatching économique, Bulletin de la Société Française des Électriciens 13 (1962) 431–447.
[5] G.R.M. Costa, K. Langona, D.A. Alves, A new approach to the solution of the optimal power flow problem based on the modified Newton's method associated to an augmented Lagrangian function, in: Proceedings of the International Conference on Power System Technology, vol. 2, Beijing, 1988, pp. 909–913.
[6] J.E. Dennis Jr., R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Classics in Applied Mathematics 16, SIAM, Philadelphia, 1996.
[7] H.W. Dommel, W.F. Tinney, Optimal power flow solutions, IEEE Transactions on Power Apparatus and Systems 87 (1968) 1866–1876.
[8] A.V. Fiacco, G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, John Wiley, New York, 1968 (Classics in Applied Mathematics 4, second edition, SIAM, Philadelphia, 1990).
[9] A. Forsgren, P.E. Gill, M.H. Wright, Interior methods for nonlinear optimization, SIAM Review 44 (2002) 525–597.
[10] M.R. Hestenes, Multiplier and gradient methods, Journal of Optimization Theory and Applications 4 (1969) 303–320.
[11] L.S. Lasdon, A.D. Waren, A. Jain, M. Ratner, Design and testing of a generalized reduced gradient code for nonlinear programming, ACM Transactions on Mathematical Software 4 (1978) 34–50.
[12] L.S. Lasdon, A.D. Waren, Large scale nonlinear programming, Computers and Chemical Engineering 7 (1983) 595–604.
[13] J.A. Momoh, M.E. El-Hawary, R. Adapa, A review of selected optimal power flow literature to 1993. Part I: Nonlinear and quadratic programming approaches, IEEE Transactions on Power Systems 14 (1999) 96–104.
[14] J.A. Momoh, M.E. El-Hawary, R. Adapa, A review of selected optimal power flow literature to 1993. Part II: Newton, linear programming and interior point methods, IEEE Transactions on Power Systems 14 (1999) 105–111.
[15] J. Nocedal, S.J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer-Verlag, New York, 1999.
[16] M.J.D. Powell, A method for nonlinear constraints in minimization problems, in: R. Fletcher (Ed.), Optimization (Proceedings of a Symposium held at University of Keele, 1968), Academic Press, London, 1969, pp. 283–298.
[17] A. Santos Jr., S. Deckmann, S. Soares, A dual augmented Lagrangian approach for optimal power flow, IEEE Transactions on Power Systems 3 (1988) 1020–1025.
[18] A. Santos Jr., G.R.M. da Costa, Optimal-power-flow solution by Newton's method applied to an augmented Lagrangian function, IEE Proceedings – Generation, Transmission and Distribution 142 (1995) 33–36.
[19] P. Wolfe, Methods of nonlinear programming, in: Recent Advances in Mathematical Programming, McGraw-Hill, New York, 1963, pp. 67–86.
[20] Power Systems Test Case Archive, University of Washington.