A Combined Homotopy Interior Point Method for General Nonlinear Programming Problems

Lin Zhenghua, Li Yong, and Yu Bo
Department of Mathematics
Jilin University
Changchun 130023, People's Republic of China
ABSTRACT

A combined homotopy interior point method for solving general nonlinear programming problems is proposed. Under a "normal cone condition" on the constraints, and without any convexity assumption, the algorithm generated by this method is proved to converge globally to Kuhn-Tucker points of the general nonlinear programming problem. © Elsevier Science Inc., 1996
1. INTRODUCTION
As is well known, the homotopy method established by Kellogg et al. [1], Smale [2], and Chow et al. [3] has become a powerful tool for finding solutions of various nonlinear problems, for example, zeros and fixed points of maps. A distinctive advantage of the method is that the algorithms it generates exhibit global convergence. For a good introduction and a complete survey of the method, we refer to the books [4, 5]. However, before 1988, homotopy methods for constrained optimization had seldom appeared in the literature (see [5]). In 1988, Megiddo [6] and Kojima et al. [7] discovered that the attractive Karmarkar interior point method [8] for linear programming is a kind of path-following method. Since then, the homotopy method for mathematical programming has become an active research subject (see [6, 9-11]). Recently, McCormick [12], Monteiro and Adler [13], Zhu [14], and Wang et al. [15] have generalized the Karmarkar interior point method for linear programming to convex
programming. Other interior point methods have also been presented by Sonnevend [16], Sonnevend and Stoer [17], Kortanek et al. [18], Jarre [19], and other authors. In all of these works, convergence of the algorithms is obtained under the assumption that the logarithmic barrier functions are strictly convex. Such assumptions are somewhat strong, even for linear programming. For general nonlinear programming, the situation becomes more difficult still, and applications of the homotopy method and the interior point method to this area remain untouched.

In this paper, in order to obtain an algorithm for solving general problems, in particular nonconvex programming problems, we present a combined homotopy interior point method. This method combines the main advantages of the interior point method and the homotopy method; hence, the algorithm it generates is globally convergent for an extensive class of programming problems. It should also be pointed out that, by using our method, one finds a Kuhn-Tucker point of the nonlinear programming problem considered. It has been proved that, in the convex case, from any interior point of the feasible set the algorithm converges to a solution of the convex programming problem without the assumption that the logarithmic barrier function is strictly convex (see [20]). Here, instead of convexity conditions on the feasible set, we use a more general "normal cone condition"; for details, see Section 2 below. Simple path-following experiments show that the formulated algorithm is effective and convenient (see Section 3).
2. THE COMBINED HOMOTOPY INTERIOR POINT METHOD AND ITS CONVERGENCE

Consider the following general nonlinear programming (GNLP) problem:
\[
\begin{aligned}
\min \quad & f(x), \\
\text{s.t.} \quad & g_i(x) = 0, \qquad i \in \{1, \dots, m\}, \\
& h_i(x) \le 0, \qquad i \in \{1, \dots, l\},
\end{aligned} \tag{2.1}
\]
where $x \in R^n$ and $f$, $g_i$, $h_i$ are sufficiently smooth functions. We write $g = (g_1, g_2, \dots, g_m)^T$ and $h = (h_1, h_2, \dots, h_l)^T$. A point $x^*$ that satisfies
\[
\begin{aligned}
& \nabla f(x) + \nabla g(x)\,y + \nabla h(x)\,z = 0, \\
& g(x) = 0, \qquad Z h(x) = 0, \\
& h(x) \le 0, \qquad z \ge 0,
\end{aligned} \tag{2.2}
\]
where $y \in R^m$, $\nabla f(x) = (\partial f(x)/\partial x)^T \in R^n$, $\nabla g(x) = (\nabla g_1(x), \dots, \nabla g_m(x)) \in R^{n \times m}$, $\nabla h(x) = (\nabla h_1(x), \dots, \nabla h_l(x)) \in R^{n \times l}$, and $Z = \operatorname{diag}(z) \in R^{l \times l}$, is called a Kuhn-Tucker point of (2.1).

Let $\Omega_1 = \{x \in R^n : g(x) = 0,\ h(x) < 0\}$ be the strictly feasible set of (2.1). In general, $\Omega_1$ is composed of several connected subsets; in the rest of this paper, $\Omega_1$ denotes one connected part of the strictly feasible set. Let
\[
\Omega = \Omega_1 \times R^m \times R^l_{++} = \Omega_1 \times R^m \times \{z \in R^l : z > 0\},
\]
and let $\partial\Omega_1 = \bar{\Omega}_1 \setminus \Omega_1$ and $\partial\Omega = \bar{\Omega} \setminus \Omega$ be the boundary sets of $\Omega_1$ and $\Omega$, respectively, where $\bar{\Omega}$ and $\Omega^0$ denote the closure and the interior of a set $\Omega$, respectively.

Denote the binding set at $x$ by
\[
B(x) = \{i \in \{1, \dots, l\} : h_i(x) = 0\}, \tag{2.4}
\]
and let $\#B(x)$ be the number of elements of $B(x)$, i.e., $\#B(x) = \#\{i \in \{1, \dots, l\} : h_i(x) = 0\}$.
The following two basic conditions are commonly used in the literature.

(C1) $\Omega_1$ is nonempty and bounded.
(C2) For every $x \in \bar{\Omega}_1$, the matrix
\[
(\nabla g(x), \nabla h_i(x) : i \in B(x)) = (\nabla g(x), \nabla h_{i_1}(x), \dots, \nabla h_{i_{\#B(x)}}(x)) \in R^{n \times (m + \#B(x))},
\]
where $\{i_1, \dots, i_{\#B(x)}\} = B(x)$, has full column rank (the regularity of the constraints).

For $x \in \Omega_1$, $(\nabla g(x), \nabla h_i(x) : i \in B(x)) = \nabla g(x)$; therefore, $\Omega_1$ is an $(n-m)$-dimensional smooth manifold. In this paper, besides conditions (C1) and (C2), we also use the following "normal cone condition."

(C3) (The normal cone condition of $\Omega_1$) For every $x \in \partial\Omega_1$, the normal cone of $\bar{\Omega}_1$ at $x$ meets $\bar{\Omega}_1$ only at $x$, i.e.,
\[
\Bigl\{ x + \nabla g(x)\,y + \sum_{i \in B(x)} z_i \nabla h_i(x) : y \in R^m,\ z_i \ge 0 \Bigr\} \cap \bar{\Omega}_1 = \{x\}. \tag{2.5}
\]

The normal cone condition of a set is a generalization of convexity: if $\Omega_1$ is convex, then it satisfies the normal cone condition. On the other hand, if $\Omega_1$ satisfies the normal cone condition, the outer normal cone of $\bar{\Omega}_1$ at a boundary point $x$ cannot meet $\Omega_1$; it meets $\bar{\Omega}_1$ only at $x$ itself.
To solve (2.2), we construct a homotopy as follows:
\[
H(w, w^{(0)}, \mu) =
\begin{pmatrix}
(1-\mu)\bigl(\nabla f(x) + \nabla h(x)\,z\bigr) + \nabla g(x)\,y + \mu\,(x - x^{(0)}) \\
g(x) \\
Z h(x) - \mu\, Z^{(0)} h(x^{(0)})
\end{pmatrix} = 0, \tag{2.6}
\]
where $w = (x, y, z) \in R^{n+m+l}$ and $w^{(0)} = (x^{(0)}, y^{(0)}, z^{(0)}) \in \Omega$. We refer to (2.6) as the "combined homotopy" and to the corresponding algorithm as the "combined homotopy interior point method," because the first component of (2.6) is a linear homotopy, while the third component, which makes the method an interior point method, is a Newton homotopy.

When $\mu = 1$, the homotopy equation (2.6) becomes
\[
\begin{aligned}
& \nabla g(x)\,y + x - x^{(0)} = 0, \\
& g(x) = 0, \\
& Z h(x) - Z^{(0)} h(x^{(0)}) = 0.
\end{aligned} \tag{2.7}
\]
By (2.5) and $x^{(0)} \in \Omega_1$, we see that if $\nabla g(x)\,y + x - x^{(0)} = 0$ and $g(x) = 0$, then $x = x^{(0)} \in \Omega_1$ and, because $\nabla g(x)$ is a matrix of full column rank, $y = 0$. Moreover, since $x^{(0)} \in \Omega_1$, we have $h(x^{(0)}) < 0$, and the third equality of (2.7) then gives $z = z^{(0)}$. It follows that, with $y^{(0)} = 0$, the solution of (2.7) is $w = w^{(0)}$. When $\mu = 0$, $H(w, w^{(0)}, \mu) = 0$ turns into (2.2).

For a given $w^{(0)}$, rewrite $H(w, w^{(0)}, \mu)$ as $H_{w^{(0)}}(w, \mu)$. The zero-point set of $H_{w^{(0)}}$ is
\[
H_{w^{(0)}}^{-1}(0) = \{(w, \mu) \in \Omega \times (0, 1] : H_{w^{(0)}}(w, \mu) = 0\}.
\]
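To make the construction concrete, here is a minimal sketch (in Python with NumPy) that evaluates the combined homotopy map (2.6) at a given $(w, \mu)$. The callables grad_f, g, jac_g, h, jac_h are illustrative placeholders, not part of the paper: they supply the problem data and its first derivatives, with $\nabla g(x)$ equal to jac_g(x) transposed.

```python
import numpy as np

def combined_homotopy(x, y, z, mu, x0, z0, grad_f, g, jac_g, h, jac_h):
    """Evaluate the combined homotopy map H(w, w0, mu) of (2.6).

    grad_f(x) -> (n,), g(x) -> (m,), jac_g(x) -> (m, n),
    h(x) -> (l,),      jac_h(x) -> (l, n).
    """
    r1 = ((1.0 - mu) * (grad_f(x) + jac_h(x).T @ z)
          + jac_g(x).T @ y + mu * (x - x0))          # linear homotopy component
    r2 = g(x)                                        # equality constraints
    r3 = z * h(x) - mu * z0 * h(x0)                  # Newton (interior point) component
    return np.concatenate([r1, r2, r3])
```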
The inverse image theorem (see Naber [21]) tells us that, if $0$ is a regular value of the map $H_{w^{(0)}}$, then $H_{w^{(0)}}^{-1}(0)$ consists of some smooth curves. And the regularity of $H_{w^{(0)}}$ can be obtained by the following transversality theorem.
TRANSVERSALITY THEOREM (see [3]). Let $Q$, $N$, and $P$ be smooth manifolds of dimensions $q$, $m$, and $p$, respectively. Let $W \subset P$ be a submanifold of codimension $p$ (that is, $\dim P = p + \dim W$). Consider a smooth map $\phi : Q \times N \to P$. If $\phi$ is transversal to $W$, then for almost every $a \in Q$, $\phi_a(\cdot) = \phi(a, \cdot) : N \to P$ is transversal to $W$. Recall that a smooth map $h : N \to P$ is transversal to $W$ if
\[
\operatorname{Range}\bigl(Dh(x)\bigr) + T_y W = T_y P \qquad \text{whenever } y = h(x) \in W,
\]
where $Dh$ is the Jacobian matrix of $h$, and $T_y W$ and $T_y P$ denote the tangent spaces of $W$ and $P$ at $y$, respectively.
In this paper, $W = \{0\}$, so the Transversality Theorem reduces to the following Parameterized Sard Theorem on smooth manifolds.
PARAMETERIZED SARD THEOREM ON SMOOTH MANIFOLDS. Let $Q$, $N$, and $P$ be smooth manifolds of dimensions $q$, $m$, and $p$, respectively, and let $\Phi : Q \times N \to P$ be a $C^r$ map, where $r > \max\{0, m - p\}$. If $0 \in P$ is a regular value of $\Phi$, then for almost all $a \in Q$, $0$ is a regular value of $\Phi_a = \Phi(a, \cdot)$.
LEMMA 2.1. Let $H$ be defined as in (2.6), let $f$, $g_i$ ($i = 1, \dots, m$), and $h_i$ ($i = 1, \dots, l$) be three times continuously differentiable functions, and let conditions (C1)-(C3) hold. Then for almost all $w^{(0)} \in \Omega$, $0$ is a regular value of $H_{w^{(0)}} : \Omega \times (0, 1] \to R^{n+m+l}$, and $H_{w^{(0)}}^{-1}(0)$ consists of some smooth curves. Among them, a smooth curve, say $\Gamma_{w^{(0)}}$, starts from $(w^{(0)}, 1)$.
PROOF. Denote the Jacobian matrix of $H(w, w^{(0)}, \mu)$ by $DH(w, w^{(0)}, \mu)$. For any $w^{(0)} \in \Omega$ and $\mu \in (0, 1]$, we have
\[
\frac{\partial H(w, w^{(0)}, \mu)}{\partial(x, x^{(0)}, z^{(0)})}
= \begin{pmatrix}
Q & -\mu I & 0 \\
\nabla g(x)^T & 0 & 0 \\
Z \nabla h(x)^T & -\mu\, Z^{(0)} \nabla h(x^{(0)})^T & -\mu\, H(x^{(0)})
\end{pmatrix}, \tag{2.8}
\]
where
\[
Q = (1-\mu)\Bigl(\nabla^2 f(x) + \sum_{i=1}^{l} z_i \nabla^2 h_i(x)\Bigr) + \sum_{i=1}^{m} y_i \nabla^2 g_i(x) + \mu I,
\qquad H(x^{(0)}) = \operatorname{diag}\bigl(h(x^{(0)})\bigr).
\]
Because $h(x^{(0)}) < 0$ and $\nabla g(x)$ is a matrix of full column rank, $\partial H(w, w^{(0)}, \mu)/\partial(x, x^{(0)}, z^{(0)})$ is a matrix of full row rank. Thus, $DH(w, w^{(0)}, \mu)$ is a matrix of full row rank; that is, $0$ is a regular value of $H(w, w^{(0)}, \mu)$. By the Parameterized Sard Theorem on smooth manifolds, for almost all $w^{(0)} \in \Omega$, $0$ is a regular value of the map $H_{w^{(0)}} : \Omega \times (0, 1] \to R^{n+m+l}$. By the inverse image theorem, $H_{w^{(0)}}^{-1}(0)$ consists of some smooth curves, and because $H_{w^{(0)}}(w^{(0)}, 1) = 0$, there must be a smooth curve $\Gamma_{w^{(0)}}$ starting from $(w^{(0)}, 1)$.  •
LEMMA 2.2. Suppose $f$, $g_i$, $h_i$ are three times continuously differentiable functions and conditions (C1)-(C3) hold. For a given $w^{(0)} \in \Omega$, if $0$ is a regular value of $H_{w^{(0)}}$, then $\Gamma_{w^{(0)}}$ is a bounded curve in $\Omega \times (0, 1]$.
PROOF. From (2.6), it is easy to see that $\Gamma_{w^{(0)}} \subset \Omega \times (0, 1]$. If $\Gamma_{w^{(0)}}$ were an unbounded curve, then there would exist a sequence of points $\{(w^{(k)}, \mu_k)\} \subset \Gamma_{w^{(0)}}$ such that $\|(w^{(k)}, \mu_k)\| \to \infty$. Because $\bar{\Omega}_1$ and $(0, 1]$ are bounded sets and, by (C2), for any $x \in \bar{\Omega}_1$ the matrix $(\nabla g(x), \nabla h_i(x) : i \in B(x))$ has full column rank, it follows from the first and second equalities of (2.6) that the $y$-component of $\Gamma_{w^{(0)}}$ is bounded. Therefore, there exists a subsequence $\{(w^{(k_i)}, \mu_{k_i})\}$ such that $x^{(k_i)} \to x^* \in \bar{\Omega}_1$, $y^{(k_i)} \to y^* \in R^m$, $\mu_{k_i} \to \mu^* \in [0, 1]$, and $\|z^{(k_i)}\| \to \infty$ as $k_i \to \infty$.
From the third equality of (2.6), it follows that $h(x^{(k_i)}) = \mu_{k_i} (Z^{(k_i)})^{-1} Z^{(0)} h(x^{(0)})$. So the binding set
\[
B(x^*) = \Bigl\{ j \in \{1, \dots, l\} : \lim_{k_i \to \infty} z_j^{(k_i)} = +\infty \Bigr\}
\]
is nonempty, i.e., $x^* \in \partial\Omega_1$, where $z_j^{(k_i)}$ denotes the $j$-th component of $z^{(k_i)}$.

By the first equality of (2.6), we have
\[
(1 - \mu_{k_i})\bigl(\nabla f(x^{(k_i)}) + \nabla h(x^{(k_i)})\,z^{(k_i)}\bigr) + \nabla g(x^{(k_i)})\,y^{(k_i)} + \mu_{k_i}\bigl(x^{(k_i)} - x^{(0)}\bigr) = 0. \tag{2.9}
\]
(1) For $\mu^* = 1$, rewrite (2.9) as
\[
\sum_{j \in B(x^*)} (1 - \mu_{k_i})\, z_j^{(k_i)} \nabla h_j(x^{(k_i)}) + \nabla g(x^{(k_i)})\, y^{(k_i)} + x^{(k_i)} - x^{(0)}
= (1 - \mu_{k_i}) \Bigl( - \sum_{j \notin B(x^*)} z_j^{(k_i)} \nabla h_j(x^{(k_i)}) - \nabla f(x^{(k_i)}) + x^{(k_i)} - x^{(0)} \Bigr). \tag{2.10}
\]
Because $\bar{\Omega}_1$ and $\{z_j^{(k_i)}\}$, $j \notin B(x^*)$, are bounded, we have
\[
\lim_{k_i \to \infty} \Bigl( \sum_{j \in B(x^*)} (1 - \mu_{k_i})\, z_j^{(k_i)} \nabla h_j(x^{(k_i)}) + \nabla g(x^{(k_i)})\, y^{(k_i)} + x^{(k_i)} - x^{(0)} \Bigr) = 0. \tag{2.11}
\]
Using $x^{(k_i)} \to x^*$ and $y^{(k_i)} \to y^*$ ($k_i \to \infty$), we obtain from (2.11) that
\[
x^{(0)} = \sum_{j \in B(x^*)} \lim_{k_i \to \infty} \bigl( (1 - \mu_{k_i})\, z_j^{(k_i)} \bigr) \nabla h_j(x^*) + \nabla g(x^*)\, y^* + x^*, \tag{2.12}
\]
which contradicts condition (C3), because $x^* \in \partial\Omega_1$ by the third equality of (2.6) and the fact that $\|z^{(k_i)}\| \to \infty$ ($k_i \to \infty$): indeed, (2.12) exhibits $x^{(0)} \in \Omega_1$ as a point of the normal cone set in (2.5) at $x^* \in \partial\Omega_1$, so (2.5) would force $x^{(0)} = x^* \in \partial\Omega_1$.
which contradicts the condition (C3), because x * ~ a ~ 1 by the third equality of (2.6) and the fact that [[ z(k')[[ --* oo (k i --, oo). (2) For/z* < 1, rewrite (2.9) as
(1-- ~k~)( V f( x(k~)) +
~ ~hj( x(ki)) Z~ki)) + ~ki( X(ki) -- x(O)) j~ B(x*)
+ Vg( x (k~)) y(k,) + (1 - ~k,)
Vh;(x (k')) z~k') =
~
0.
(2.13)
j~ B(z*) From z~k') ~ oo, j ~ 13(x*) and the conditions (C2) and (C3), as k i --* 0% the fourth part in the left-hand side of (2.13) tends to infinity, but the first, second, and third parts are bounded. This is impossible. From (1) and (2), we conclude that F~(0) is bounded. •
THEOREM 2.3 (Convergence of the method). If $f$, $g_i$, $h_i$ are three times continuously differentiable functions and conditions (C1)-(C3) hold, then (2.2) has at least one solution. For almost all $w^{(0)} \in \Omega$, the zero-point set $H_{w^{(0)}}^{-1}(0)$ of the homotopy map (2.6) contains a smooth curve $\Gamma_{w^{(0)}} \subset \Omega \times (0, 1]$, which starts from $(w^{(0)}, 1)$. As $\mu \to 0$, the limit set $T \times \{0\} \subset \bar{\Omega} \times \{0\}$ of $\Gamma_{w^{(0)}}$ is nonempty, and every point in $T$ is a solution of (2.2). Specifically, if the length of $\Gamma_{w^{(0)}}$ is finite and $(w^*, 0)$ is the end point of $\Gamma_{w^{(0)}}$, then $w^*$ is a solution of (2.2).
PROOF. By Lemma 2.1, for almost all $w^{(0)} \in \Omega$, $0$ is a regular value of $H_{w^{(0)}}$, and $H_{w^{(0)}}^{-1}(0)$ contains a smooth curve $\Gamma_{w^{(0)}} \subset \Omega \times (0, 1]$ starting from $(w^{(0)}, 1)$. By the classification theorem of one-dimensional smooth manifolds (see Naber [21]), $\Gamma_{w^{(0)}}$ is diffeomorphic either to a unit circle or to the unit interval $(0, 1]$. Noticing that
\[
\frac{\partial H_{w^{(0)}}(w^{(0)}, 1)}{\partial w}
= \begin{pmatrix}
I & \nabla g(x^{(0)}) & 0 \\
\nabla g(x^{(0)})^T & 0 & 0 \\
Z^{(0)} \nabla h(x^{(0)})^T & 0 & H(x^{(0)})
\end{pmatrix}
\]
is nonsingular, we know that $\Gamma_{w^{(0)}}$ is not diffeomorphic to a unit circle; that is, $\Gamma_{w^{(0)}}$ is diffeomorphic to $(0, 1]$.

Let $(\bar{w}, \bar{\mu})$ be a limit point of $\Gamma_{w^{(0)}}$. Only the following four cases are possible:
(i) $(\bar{w}, \bar{\mu}) \in \Omega \times \{1\}$;
(ii) $(\bar{w}, \bar{\mu}) \in \partial\Omega \times \{1\}$;
(iii) $(\bar{w}, \bar{\mu}) \in \partial\Omega \times (0, 1)$;
(iv) $(\bar{w}, \bar{\mu}) \in \bar{\Omega} \times \{0\}$.

Because the equation $H_{w^{(0)}}(w, 1) = 0$ has only one solution $(w^{(0)}, 1)$ in $\Omega$, case (i) is impossible. In cases (ii) and (iii), there must exist a sequence $(w^{(k)}, \mu_k) \in \Gamma_{w^{(0)}}$ such that $h_j(x^{(k)}) \to 0$ for some $1 \le j \le l$. From the third equality of (2.6), we then have $\|z^{(k)}\| \to \infty$, which contradicts Lemma 2.2. In conclusion, (iv) is the only possible case, and hence $\bar{w}$ is a solution of (2.2).  •
REMARK 1. By Theorem 2.3, for almost all $w^{(0)} \in \Omega$ the homotopy (2.6) generates a smooth curve $\Gamma_{w^{(0)}}$, which we call the homotopy path. Tracing $\Gamma_{w^{(0)}}$ numerically from $(w^{(0)}, 1)$ until $\mu \to 0$, one can find a solution of (2.2). Let $s$ be the arclength of $\Gamma_{w^{(0)}}$. We can parameterize $\Gamma_{w^{(0)}}$ with respect to $s$; that is, there exist continuously differentiable functions $w(s)$, $\mu(s)$ such that
\[
H_{w^{(0)}}\bigl(w(s), \mu(s)\bigr) = 0, \tag{2.14}
\]
\[
w(0) = w^{(0)}, \qquad \mu(0) = 1. \tag{2.15}
\]
Differentiating (2.14), we obtain the following theorem.
THEOREM 2.4. The homotopy path $\Gamma_{w^{(0)}}$ is determined by the following initial value problem for the ordinary differential equation
\[
DH_{w^{(0)}}\bigl(w(s), \mu(s)\bigr) \begin{pmatrix} \dot{w}(s) \\ \dot{\mu}(s) \end{pmatrix} = 0,
\qquad w(0) = w^{(0)}, \quad \mu(0) = 1. \tag{2.16}
\]
The $w$-component of the solution point $(w(s^*), \mu(s^*))$ of (2.14), for $\mu(s^*) = 0$, is a solution of (2.2).
3. TRACING THE HOMOTOPY PATH
In this section, we discuss how to trace the homotopy path $\Gamma_{w^{(0)}}$ numerically. A standard procedure is the predictor-corrector method, which uses an explicit difference scheme for numerically solving (2.16) to obtain a predictor point, and then uses a locally convergent iterative method for solving the nonlinear system of equations (2.14) to obtain a corrector point. We formulate a simple predictor-corrector procedure as follows.
ALGORITHM 3.1 (NLP's Euler-Newton method).

Step 0: Give an initial point $(w^{(0)}, \mu_0) \in \Omega \times \{1\}$, an initial steplength $h_0 > 0$, and three small positive numbers $\varepsilon_1, \varepsilon_2, \varepsilon_3 > 0$. Set $k := 0$.

Step 1: Compute the direction $\eta^{(k)}$ of the predictor step:
(a) Compute a unit tangent vector $\xi^{(k)} \in R^{n+m+l+1}$, i.e., a unit vector spanning the null space of $DH_{w^{(0)}}(w^{(k)}, \mu_k)$;
(b) Determine the direction $\eta^{(k)}$ of the predictor step:
if the sign of the determinant
\[
\begin{vmatrix} DH_{w^{(0)}}(w^{(k)}, \mu_k) \\ \xi^{(k)T} \end{vmatrix}
\]
is $(-1)^{m+l+1}$, then $\eta^{(k)} = \xi^{(k)}$; if the sign of this determinant is $(-1)^{m+l}$, then $\eta^{(k)} = -\xi^{(k)}$.

Step 2: Compute a corrector point $(w^{(k+1)}, \mu_{k+1})$:
\[
(\bar{w}^{(k)}, \bar{\mu}_k) = (w^{(k)}, \mu_k) + h_k \eta^{(k)},
\]
\[
(w^{(k+1)}, \mu_{k+1}) = (\bar{w}^{(k)}, \bar{\mu}_k) - DH_{w^{(0)}}(\bar{w}^{(k)}, \bar{\mu}_k)^+ H_{w^{(0)}}(\bar{w}^{(k)}, \bar{\mu}_k).
\]
If $\|H_{w^{(0)}}(w^{(k+1)}, \mu_{k+1})\| \le \varepsilon_1$, set $h_{k+1} = \min\{h_0, 2h_k\}$ and go to Step 3;
if $\|H_{w^{(0)}}(w^{(k+1)}, \mu_{k+1})\| \in (\varepsilon_1, \varepsilon_2)$, set $h_{k+1} = h_k$ and go to Step 3;
if $\|H_{w^{(0)}}(w^{(k+1)}, \mu_{k+1})\| \ge \varepsilon_2$, set $h_{k+1} = \max\{2^{-2}h_0, h_k/2\}$, $k := k+1$, and go to Step 2.

Step 3: If $\mu_{k+1} < \varepsilon_3$, then stop; else set $k := k+1$ and go to Step 1.

In Algorithm 3.1,
\[
DH_{w^{(0)}}(w, \mu)^+ = DH_{w^{(0)}}(w, \mu)^T \bigl( DH_{w^{(0)}}(w, \mu)\, DH_{w^{(0)}}(w, \mu)^T \bigr)^{-1} \tag{3.1}
\]
is the Moore-Penrose inverse of $DH_{w^{(0)}}(w, \mu)$.
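A minimal sketch of such an Euler-Newton tracer is given below (Python/NumPy). It reuses the oriented_tangent routine sketched after Theorem 2.4; the functions H and DH are assumed to evaluate $H_{w^{(0)}}$ and its Jacobian, and the steplength control is simplified relative to Step 2. It is an illustration under these assumptions, not the authors' code.

```python
import numpy as np

def trace_path(H, DH, w0, m, l, h0=0.2, eps1=0.01, eps3=1e-3, max_steps=500):
    """Minimal Euler-Newton path tracer in the spirit of Algorithm 3.1.

    H(wmu)  -> residual of H_{w0} at wmu = (w, mu), shape (n+m+l,)
    DH(wmu) -> Jacobian of H_{w0} at wmu, shape (n+m+l, n+m+l+1)
    w0      -> starting point w^(0); the path starts at (w0, mu = 1).
    """
    wmu = np.append(w0, 1.0)                   # (w^(0), mu_0 = 1)
    h = h0
    for _ in range(max_steps):
        t = oriented_tangent(DH(wmu), m, l)    # predictor direction (Step 1)
        pred = wmu + h * t                     # Euler predictor step
        # Newton corrector using the Moore-Penrose inverse (3.1):
        # x+ = x - DH^T (DH DH^T)^{-1} H, the least-norm Newton step.
        Jp = DH(pred)
        corr = pred - Jp.T @ np.linalg.solve(Jp @ Jp.T, H(pred))
        if np.linalg.norm(H(corr)) <= eps1:
            wmu, h = corr, min(h0, 2 * h)      # accept, possibly enlarge step
        else:
            h = max(h / 2, 1e-8)               # reject, shrink step (simplified)
            continue
        if wmu[-1] <= eps3:                    # mu small enough: stop (Step 3)
            break
    return wmu[:-1], wmu[-1]                   # approximate w*, final mu
```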
REMARK 2. In Algorithm 3.1, the arclength parameter $s$ is not computed explicitly. The tangent vector at a point on $\Gamma_{w^{(0)}}$ has two opposite directions: one (the positive direction) makes $s$ increase, and the other (the negative direction) makes $s$ decrease. The negative direction leads back to the initial point, so we must go along the positive direction. The criterion in Step 1(b) of Algorithm 3.1 that determines the positive direction is based on a basic result of homotopy method theory [5]; namely, the positive direction $\eta$ at any point $(w, \mu)$ on $\Gamma_{w^{(0)}}$ keeps the sign of the determinant
\[
\begin{vmatrix} DH_{w^{(0)}}(w, \mu) \\ \eta^{T} \end{vmatrix}
\]
invariant. We have the following proposition.
PROPOSITION 3.1. If $\Gamma_{w^{(0)}}$ is smooth, then the positive direction $\eta^{(0)}$ at the initial point $w^{(0)}$ satisfies
\[
\operatorname{sign} \begin{vmatrix} DH_{w^{(0)}}(w^{(0)}, 1) \\ \eta^{(0)T} \end{vmatrix} = (-1)^{m+l+1}.
\]

PROOF. From
\[
DH_{w^{(0)}}(w, \mu) = \frac{\partial H_{w^{(0)}}(w, \mu)}{\partial(w, \mu)}
= \begin{pmatrix}
Q & \nabla g(x) & (1-\mu)\nabla h(x) & a \\
\nabla g(x)^T & 0 & 0 & 0 \\
Z \nabla h(x)^T & 0 & H(x) & b
\end{pmatrix}, \tag{3.2}
\]
where $Q = \mu I + (1-\mu)\bigl(\nabla^2 f(x) + \sum_{i=1}^{l} z_i \nabla^2 h_i(x)\bigr) + \sum_{i=1}^{m} y_i \nabla^2 g_i(x)$, $a = x - x^{(0)} - \nabla f(x) - \nabla h(x)\,z$, and $b = -Z^{(0)} h(x^{(0)})$, and using $y^{(0)} = 0$, we obtain
\[
DH_{w^{(0)}}(w^{(0)}, 1)
= \begin{pmatrix}
I & \nabla g(x^{(0)}) & 0 & a^{(0)} \\
\nabla g(x^{(0)})^T & 0 & 0 & 0 \\
Z^{(0)} \nabla h(x^{(0)})^T & 0 & H(x^{(0)}) & b^{(0)}
\end{pmatrix}
= (M_1 \;\; M_2),
\]
where $M_1 \in R^{(n+m+l) \times (n+m+l)}$ and $M_2 \in R^{(n+m+l) \times 1}$. The tangent vector $\xi^{(0)}$ of $\Gamma_{w^{(0)}}$ at $(w^{(0)}, 1)$ satisfies
\[
(M_1 \;\; M_2)\, \xi^{(0)} = 0,
\]
where, writing $\xi^{(0)} = (\xi_1^{(0)T}, \xi_2^{(0)})^T$ with $\xi_1^{(0)} \in R^{n+m+l}$ and $\xi_2^{(0)} \in R$, a simple computation gives $\xi_1^{(0)} = -M_1^{-1} M_2\, \xi_2^{(0)}$. The determinant of
\[
\begin{pmatrix} DH_{w^{(0)}}(w^{(0)}, 1) \\ \xi^{(0)T} \end{pmatrix}
\]
is
\[
\begin{vmatrix} DH_{w^{(0)}}(w^{(0)}, 1) \\ \xi^{(0)T} \end{vmatrix}
= \begin{vmatrix} M_1 & M_2 \\ \xi_1^{(0)T} & \xi_2^{(0)} \end{vmatrix}
= |M_1| \bigl( \xi_2^{(0)} - \xi_1^{(0)T} M_1^{-1} M_2 \bigr)
= |M_1|\, \xi_2^{(0)} \bigl( 1 + M_2^T M_1^{-T} M_1^{-1} M_2 \bigr).
\]
By the definition of $M_1$, we have
\[
|M_1| =
\begin{vmatrix}
I & \nabla g(x^{(0)}) & 0 \\
\nabla g(x^{(0)})^T & 0 & 0 \\
Z^{(0)} \nabla h(x^{(0)})^T & 0 & H(x^{(0)})
\end{vmatrix}
= |H(x^{(0)})| \cdot \begin{vmatrix} I & \nabla g(x^{(0)}) \\ \nabla g(x^{(0)})^T & 0 \end{vmatrix}
= |H(x^{(0)})| \cdot \bigl| -\nabla g(x^{(0)})^T \nabla g(x^{(0)}) \bigr|
= (-1)^m\, |H(x^{(0)})| \cdot \bigl| \nabla g(x^{(0)})^T \nabla g(x^{(0)}) \bigr|.
\]
Hence,
\[
\begin{vmatrix} DH_{w^{(0)}}(w^{(0)}, 1) \\ \xi^{(0)T} \end{vmatrix}
= |M_1|\, \xi_2^{(0)} \bigl( 1 + M_2^T M_1^{-T} M_1^{-1} M_2 \bigr)
= (-1)^m\, |H(x^{(0)})| \cdot \bigl| \nabla g(x^{(0)})^T \nabla g(x^{(0)}) \bigr|\, \xi_2^{(0)} \bigl( 1 + M_2^T M_1^{-T} M_1^{-1} M_2 \bigr).
\]
Because $h(x^{(0)}) < 0$, the determinant $|H(x^{(0)})| = \prod_{i=1}^{l} h_i(x^{(0)})$ has sign $(-1)^l$, while $|\nabla g(x^{(0)})^T \nabla g(x^{(0)})| > 0$ and $1 + M_2^T M_1^{-T} M_1^{-1} M_2 > 0$. Since the last component $\eta_2^{(0)}$ of the positive direction $\eta^{(0)}$ (the $\mu$-component) must be negative, the sign of
\[
\begin{vmatrix} DH_{w^{(0)}}(w^{(0)}, 1) \\ \eta^{(0)T} \end{vmatrix}
\]
is $(-1)^m (-1)^l (-1) = (-1)^{m+l+1}$. This proves the proposition.  •
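The block-determinant identity used above, $|M_1| = (-1)^m\,|H(x^{(0)})|\cdot|\nabla g(x^{(0)})^T\nabla g(x^{(0)})|$, is easy to check numerically. The following sketch (Python/NumPy; randomly generated arrays stand in for $\nabla g(x^{(0)})$, $\nabla h(x^{(0)})$, $h(x^{(0)}) < 0$, and $z^{(0)} > 0$, so all names are illustrative) builds $M_1$ and verifies the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, l = 5, 2, 3
G = rng.standard_normal((n, m))          # stands in for  nabla g(x^(0)), full column rank
Dh = rng.standard_normal((n, l))         # stands in for  nabla h(x^(0))
hx0 = -rng.uniform(0.1, 1.0, size=l)     # h(x^(0)) < 0
z0 = rng.uniform(0.1, 1.0, size=l)       # z^(0) > 0
H0 = np.diag(hx0)                        # H(x^(0)) = diag(h(x^(0)))

M1 = np.block([
    [np.eye(n),           G,                 np.zeros((n, l))],
    [G.T,                 np.zeros((m, m)),  np.zeros((m, l))],
    [np.diag(z0) @ Dh.T,  np.zeros((l, m)),  H0],
])

lhs = np.linalg.det(M1)
rhs = (-1) ** m * np.linalg.det(H0) * np.linalg.det(G.T @ G)
assert np.isclose(lhs, rhs)              # confirms the identity in the proof
```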
REMARK 3. The homotopy method is a globally convergent method. It can be combined with a locally convergent method for (2.2), e.g., Newton's method, to obtain a faster local convergence rate.

In the following, two simple numerical examples are given. In each of them we set $y^{(0)} = 0.0$, $z^{(0)} = (1.0, 1.0)$, $h_0 = 0.2$, $\varepsilon_1 = 0.01$, $\varepsilon_2 = 1.0$, $\varepsilon_3 = 10^{-3}$, and we use the locally convergent Newton method for (2.2) until
\[
\left\| \begin{pmatrix} \nabla f(x) + \nabla g(x)\,y + \nabla h(x)\,z \\ g(x) \\ Z h(x) \end{pmatrix} \right\| < 10^{-6}.
\]
Numerical results are listed in the tables. In each of the two tables, $x^{(0)}$ is the initial value of the $x$-component, $N_1$ the number of predictor-corrector steps, $N_2$ the number of local (Newton) iteration steps, $(x^*, y^*, z^*)$ the approximate Kuhn-Tucker point of the problem considered, and $f(x^*)$ the value of the objective function at $x^*$. The numerical results are computed in double precision, and only four decimal digits are listed.
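The local refinement of Remark 3 is ordinary Newton iteration on the square Kuhn-Tucker system $F(w) = (\nabla f(x) + \nabla g(x)\,y + \nabla h(x)\,z,\; g(x),\; Zh(x)) = 0$, stopped when the residual norm drops below $10^{-6}$. A minimal sketch follows (Python/NumPy; the finite-difference Jacobian and all names are for illustration only).

```python
import numpy as np

def newton_refine(F, w, tol=1e-6, max_iter=50, fd_eps=1e-7):
    """Refine an approximate Kuhn-Tucker point by Newton's method on F(w) = 0,
    where F stacks the stationarity, equality, and complementarity residuals of (2.2)."""
    for _ in range(max_iter):
        r = F(w)
        if np.linalg.norm(r) < tol:
            break
        # Forward-difference Jacobian of F at w (illustration only).
        J = np.empty((r.size, w.size))
        for j in range(w.size):
            e = np.zeros_like(w)
            e[j] = fd_eps
            J[:, j] = (F(w + e) - r) / fd_eps
        w = w - np.linalg.solve(J, r)
    return w
```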
EXAMPLE 3.1.
\[
\begin{aligned}
\min \quad & (x_1 + 3)^2 + x_2^2, \\
\text{s.t.} \quad & (x_1 - 0.9)^2 + x_2^2 = 1, \\
& x_1^2 + x_2^2 \le 4, \\
& 4 - (x_1 - 2)^2 - x_2^2 \le 0.
\end{aligned}
\]
(See Table 1 for numerical results.)

EXAMPLE 3.2.
\[
\begin{aligned}
\min \quad & -(x_1 + 3)^2 - x_2^2, \\
\text{s.t.} \quad & (x_1 - 0.9)^2 + x_2^2 = 1, \\
& x_1^2 + x_2^2 \le 4, \\
& 4 - (x_1 - 2)^2 - x_2^2 \le 0.
\end{aligned}
\]
(See Table 2 for numerical results.)
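For concreteness, the data of Examples 3.1 and 3.2 may be coded as follows (a sketch; these functions supply $f$, $g$, $h$ and their first derivatives in the form consumed by the earlier homotopy and tracer sketches, and the names are illustrative).

```python
import numpy as np

# Example 3.1: min (x1+3)^2 + x2^2; Example 3.2 uses the negated objective.
def f_31(x):      return (x[0] + 3.0) ** 2 + x[1] ** 2
def grad_f_31(x): return np.array([2.0 * (x[0] + 3.0), 2.0 * x[1]])

def f_32(x):      return -f_31(x)
def grad_f_32(x): return -grad_f_31(x)

# Common constraints: g(x) = 0 (one equality), h(x) <= 0 (two inequalities).
def g(x):     return np.array([(x[0] - 0.9) ** 2 + x[1] ** 2 - 1.0])
def jac_g(x): return np.array([[2.0 * (x[0] - 0.9), 2.0 * x[1]]])        # shape (1, 2)

def h(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,
                     4.0 - (x[0] - 2.0) ** 2 - x[1] ** 2])
def jac_h(x):
    return np.array([[ 2.0 * x[0],           2.0 * x[1]],
                     [-2.0 * (x[0] - 2.0),  -2.0 * x[1]]])               # shape (2, 2)

# A strictly feasible starting point used in Table 1, with z^(0) as in Remark 3:
x0 = np.array([-0.1, 0.0])
z0 = np.array([1.0, 1.0])
```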
TABLE 1
NUMERICAL RESULTS OF EXAMPLE 3.1

x^{(0)}         N_1   N_2   x*            y*    z*           f(x*)
(-0.1, 0.0)     56    2     (-0.1, 0.0)   2.9   (0.0, 0.0)   8.41
(0.0,  · )      63    2     (-0.1, 0.0)   2.9   (0.0, 0.0)   8.41
(0.05, · )      67    2     (-0.1, 0.0)   2.9   (0.0, 0.0)   8.41
(0.05, · )      67    2     (-0.1, 0.0)   2.9   (0.0, 0.0)   8.41
TABLE 2
NUMERICAL RESULTS OF EXAMPLE 3.2

x^{(0)}         N_1   N_2   x*                  y*       z*              f(x*)
(-0.1, 0.0)     81    2     (-0.1, 0.0)         -2.9     (0.0, 0.0)      -8.41
(0.05, -· )     108   2     (0.0863, -0.5813)   4.5454   (0.0, 3.5454)   -9.8636
REFERENCES
1. R. B. Kellogg, T. Y. Li, and J. A. Yorke, A Constructive Proof of the Brouwer Fixed-Point Theorem and Computational Results, SIAM J. Numerical Analysis 18:473-483 (1976).
2. S. Smale, A Convergent Process of Price Adjustment and Global Newton Method, J. Math. Econ. 3:1-14 (1976).
3. S. N. Chow, J. Mallet-Paret, and J. A. Yorke, Finding Zeros of Maps: Homotopy Methods that are Constructive with Probability One, Math. Comput. 32:887-899 (1978).
4. E. L. Allgower and K. Georg, Numerical Continuation Methods: An Introduction, Springer-Verlag, Berlin, New York, 1990.
5. C. B. Garcia and W. I. Zangwill, Pathways to Solutions, Fixed Points and Equilibria, Prentice-Hall, Englewood Cliffs, N.J., 1981.
6. N. Megiddo, Pathways to the optimal set in linear programming, in Progress in Mathematical Programming, Interior Point and Related Methods (N. Megiddo, Ed.), Springer, New York, 1988, pp. 131-158.
7. M. Kojima, S. Mizuno, and A. Yoshise, A primal-dual interior point algorithm for linear programming, in Progress in Mathematical Programming, Interior Point and Related Methods (N. Megiddo, Ed.), Springer, New York, 1988, pp. 29-47.
8. N. Karmarkar, A New Polynomial-Time Algorithm for Linear Programming, Combinatorica 4:373-395 (1984).
9. I. Adler, M. G. C. Resende, G. Veiga, and N. Karmarkar, An Implementation of Karmarkar's Algorithm for Linear Programming Problems, Mathematical Programming 44:297-335 (1989).
10. R. D. C. Monteiro and I. Adler, Interior Path Following Primal-Dual Algorithms. Part I: Linear Programming, Mathematical Programming 44:27-41 (1989).
11. R. D. C. Monteiro and I. Adler, Interior Path Following Primal-Dual Algorithms. Part II: Convex Quadratic Programming, Mathematical Programming 44:43-66 (1989).
12. G. P. McCormick, The Projective SUMT Method for Convex Programming, Mathematics of Operations Research 14:203-223 (1989).
13. R. D. C. Monteiro and I. Adler, An Extension of Karmarkar Type Algorithm to a Class of Convex Separable Programming Problems with Global Linear Rate of Convergence, Mathematics of Operations Research 15:408-422 (1990).
14. J. Zhu, A Path-Following Algorithm for a Class of Convex Programming Problems, ZOR—Methods and Models of Operations Research 86:359-377 (1992).
15. Y. Wang, G. C. Feng, and T. Z. Liu, Interior Point Algorithm for Convex Nonlinear Programming Problems, Numerical Mathematics, J. Chinese Universities 1(1) (1992).
16. G. Sonnevend, An analytical centre for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming, in Lecture Notes in Control and Information Sciences 83 (A. Prekopa, Ed.), Springer-Verlag, 1985, pp. 866-876.
17. G. Sonnevend and J. Stoer, Global Ellipsoidal Approximation and Homotopy Methods for Solving Convex Analytic Programming, Appl. Math. Optim. 21:139-165 (1990).
18. K. O. Kortanek, F. Potra, and Y. Ye, On Some Efficient Interior Point Algorithms for Nonlinear Convex Programming, Linear Algebra and Its Applications 152:169-189 (1991).
19. F. Jarre, Interior-Point Methods for Convex Programming, Applied Mathematics and Optimization 26:287-311 (1992).
20. Z. Lin, B. Yu, and G. Feng, A Combined Homotopy Interior Point Method for Convex Programming Problems, Applied Mathematics and Computation, to appear.
21. G. L. Naber, Topological Methods in Euclidean Space, Cambridge University Press, London, 1980.