INNER APPROXIMATION ALGORITHMS FOR OPTIMIZATION OVER THE WEAKLY EFFICIENT SET

Syuuji Yamada, Tetsuzo Tanino and Masahiro Inuiguchi

Department of Electronics and Information Systems, Graduate School of Engineering, Osaka University, 2-1 Yamada-Oka, Suita, Osaka 565-0871, Japan.
E-mail: [email protected]  Fax: +81-6-6879-7939

Abstract: In this paper, to minimize a convex cost function over the weakly efficient set of a multiobjective programming problem, an inner approximation method is proposed. The proposed method transforms such a problem into a sequence of approximate problems. Every approximate problem is reduced to a finite number of convex minimization problems, each of which can be solved easily. Copyright © 1999 IFAC

Keywords: Global Optimization, Algorithms, Optimization, Duality, Efficiency, Multiobjective Programming
1. INTRODUCTION

Let us consider the following multiobjective programming problem:

(P)  maximize ⟨c^i, x⟩, i = 1, ..., k,  subject to x ∈ X ⊂ R^n,

where X is a compact convex set and ⟨·, ·⟩ denotes the Euclidean inner product in R^n. The objective functions ⟨c^i, x⟩, i = 1, ..., k, express the criteria which the decision maker wants to maximize. A feasible vector x ∈ X is said to be weakly efficient if there is no feasible vector y ∈ X such that ⟨c^i, x⟩ < ⟨c^i, y⟩ for all i = 1, ..., k.

Throughout this paper the following assumptions are imposed:

(A1) X = {x ∈ R^n : p_j(x) ≤ 0, j = 1, ..., t}, where p_j : R^n → R, j = 1, ..., t, are differentiable convex functions satisfying p_j(0) < 0 (whence 0 ∈ int X). Let p(x) := max_{j=1,...,t} p_j(x). Then X = {x ∈ R^n : p(x) ≤ 0} and int X = {x ∈ R^n : p(x) < 0} ≠ ∅ (Slater's constraint qualification).

(A2) {x ∈ R^n : ⟨c^i, x⟩ < 0 for all i ∈ {1, ..., k}} ≠ ∅.

In this paper, two solution algorithms are presented for a convex cost function minimization problem over the weakly efficient set. An example of such a problem is furnished by the portfolio optimization problem in capital markets: a fund manager may look for a portfolio which minimizes the transaction cost on the efficient set. In case X is a polytope, Konno, Thach and Tuy (1997) have proposed a cutting plane method for solving the problem. In contrast to their method, the inner approximation algorithms presented in this paper are effective for a more general problem where X is not necessarily a polytope but a compact convex set.

The organization of this paper is as follows. In Section 2, a convex function minimization problem over the weakly efficient set is explained. In Section 3, an inner approximation algorithm for the problem is formulated and its convergence is confirmed. In Section 4, an inner approximation algorithm using penalty functions is discussed.

Throughout this paper, int X, bd X, co X and X^c denote the interior of a set X ⊂ R^n, its boundary, its convex hull and its complement, respectively, and R̄ := R ∪ {-∞, +∞}. For a set Y ⊂ R^n, Y° := {v ∈ R^n : ⟨v, x⟩ ≤ 1 for all x ∈ Y} denotes the polar set of Y, and V(Y) denotes the vertex set of a polytope Y.
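As a small numerical illustration (not part of the original paper), the following Python sketch checks weak efficiency by brute force on a sampled feasible set. The instance (the unit disk with two linear criteria), the sample size and the tolerance are assumptions made only for this example.

```python
import numpy as np

# Toy instance of (P): maximize <c^i, x> over the unit disk X = {x : ||x||^2 - 1 <= 0}.
c = np.array([[1.0, 0.0],   # c^1
              [0.0, 1.0]])  # c^2

rng = np.random.default_rng(0)
# Sample feasible points of X (rejection sampling in the square [-1, 1]^2).
pts = rng.uniform(-1.0, 1.0, size=(20000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]

def is_weakly_efficient(x, feasible, crit, tol=1e-9):
    """x is weakly efficient if no sampled feasible y improves ALL criteria strictly."""
    vals_x = crit @ x            # criterion values at x, shape (k,)
    vals_y = feasible @ crit.T   # criterion values at sampled points, shape (m, k)
    dominating = np.all(vals_y > vals_x + tol, axis=1)
    return not np.any(dominating)

print(is_weakly_efficient(np.array([1.0, 0.0]), pts, c))    # True: lies on the efficient arc
print(is_weakly_efficient(np.array([-0.5, -0.5]), pts, c))  # False: strictly dominated, e.g. by the origin
```

This sampling-based check is only approximate; it is meant to make the definition of weak efficiency concrete, not to serve as a computational tool.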
2. MINIMIZING A CONVEX FUNCTION OVER THE WEAKLY EFFICIENT SET

Let us consider the following problem, which minimizes a function f over the weakly efficient set of (P):

(OES)  minimize f(x)  subject to x ∈ X_e,

where f : R^n → R satisfies the following assumptions:

(B1) f is a convex function,
(B2) arg min{f(x) : x ∈ R^n} = {0}.

Let C := {x ∈ R^n : ⟨c^i, x⟩ ≤ 0 for all i ∈ {1, ..., k}}. Then the weakly efficient set X_e of problem (P) is formulated as X_e = X \ int(X + C).

By using the indicator function δ(·|X) of X, defined by

δ(x|X) := 0 if x ∈ X,  δ(x|X) := +∞ if x ∉ X,

problem (OES) can be reformulated as

(MP)  minimize g(x)  subject to x ∈ (int(X + C))^c,

where g(x) := f(x) + δ(x|X).

Given a function f : R^n → R, the function f^H : R^n → R̄ defined by

f^H(u) := -sup{f(x) : x ∈ R^n}  if u = 0,
f^H(u) := -inf{f(x) : ⟨u, x⟩ ≥ 1}  if u ≠ 0,

is called the quasi-conjugate of f. For any function f : R^n → R and any α ∈ [-∞, +∞], L_f(α) := {x ∈ R^n : f(x) ≤ α} is called the (lower) level set of f.

The dual problem of problem (MP) is formulated as

(DP)  maximize g^H(x)  subject to x ∈ (X + C)°.

Note that problem (DP) is a quasi-convex maximization problem over a compact convex set in R^n, because g^H is a quasi-convex function and (X + C)° is a compact convex set. Denote by inf(MP) and sup(DP) the optimal values of the objective functions in (MP) and (DP), respectively. It follows from the duality relation between problems (MP) and (DP) that inf(MP) = -sup(DP) (cf. Konno, Thach and Tuy (1997)).
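As a numerical sanity check on the quasi-conjugate (not part of the paper), the sketch below evaluates f^H(v) = -inf{f(x) : ⟨v, x⟩ ≥ 1} for f(x) = ‖x‖², which satisfies (B1) and (B2) and has the closed form f^H(v) = -1/‖v‖². The solver choice (SLSQP) and the starting point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def quasi_conjugate(f, v, x0=None):
    """Numerically evaluate f^H(v) = -inf{ f(x) : <v, x> >= 1 } for v != 0."""
    v = np.asarray(v, dtype=float)
    if x0 is None:
        x0 = v / (v @ v)          # feasible start: <v, x0> = 1
    cons = [{"type": "ineq", "fun": lambda x: v @ x - 1.0}]  # <v, x> - 1 >= 0
    res = minimize(f, x0, method="SLSQP", constraints=cons)
    return -res.fun

f = lambda x: float(x @ x)        # f(x) = ||x||^2, arg min f = {0} as in (B2)
v = np.array([2.0, 1.0])
print(quasi_conjugate(f, v))      # approximately -1/||v||^2 = -0.2
print(-1.0 / (v @ v))             # closed-form value for comparison
```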
3. AN INNER APPROXIMATION METHOD

3.1 An Inner Approximation Algorithm

A solution algorithm for problem (MP) based on an inner approximation method is as follows:

Algorithm IAM

Initialization. Let ε > 0 be the termination scalar. Generate a polytope S_1 such that S_1 ⊂ X and 0 ∈ int S_1. Compute the vertex set V((S_1 + C)°). Set x(0) := 0 and k ← 1, and go to Step 1.

Step 1. Consider the following problem (P_k):

(P_k)  minimize g(x)  subject to x ∈ (int(S_k + C))^c.

Choose v^k ∈ V((S_k + C)°) such that v^k solves the following dual problem of problem (P_k):

(D_k)  maximize g^H(x)  subject to x ∈ (S_k + C)°.

Let x(k) be an optimal solution of the following convex minimization problem:

minimize f(x)  subject to ⟨v^k, x⟩ ≥ 1, x ∈ X.   (1)

Step 2. Solve the following problem:

minimize φ(x; v^k)  subject to x ∈ R^n,   (2)

where φ(x; v^k) := max{p(x), h(x, v^k)} and h(x, v) := -⟨v, x⟩ + 1. Let z^k and α_k denote an optimal solution and the optimal value of problem (2), respectively. It will be proved later in Lemma 4 and Lemma 5 that problem (2) has an optimal solution and that z^k ∈ X, respectively.

a. If α_k = 0, then stop; v^k and x(k) are optimal solutions of problems (DP) and (MP), respectively.
b. 1. If f(x(k)) - f(x(k-1)) < ε, then stop; v^k and x(k) are compromise solutions of problems (DP) and (MP), respectively.
   2. Otherwise, set S_{k+1} = co({z^k} ∪ S_k) and compute the vertex set V((S_{k+1} + C)°). Set k ← k + 1 and go to Step 1.

It will be discussed later in Subsection 3.4 that the algorithm terminates after finitely many iterations and that, for sufficiently small ε > 0, the compromise solution x(k) provides an approximate solution of problem (MP). At every iteration k of the algorithm, problems (1) and (2) are convex minimization problems.
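The following Python sketch illustrates how the two subproblems of one iteration might be solved for a concrete instance. It is an assumption-laden illustration (toy f and X, SLSQP as the solver, and problem (2) rewritten with an epigraph variable t to handle the nonsmooth max), not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data: X = unit disk, f(x) = ||x||^2, single constraint p_1(x) = ||x||^2 - 1.
p_list = [lambda x: x @ x - 1.0]
f = lambda x: float(x @ x)

def solve_problem_1(v, x0):
    """Problem (1): minimize f(x) subject to <v, x> >= 1 and p_j(x) <= 0 (i.e. x in X)."""
    cons = [{"type": "ineq", "fun": lambda x: v @ x - 1.0}]
    cons += [{"type": "ineq", "fun": (lambda p: lambda x: -p(x))(p)} for p in p_list]
    res = minimize(f, x0, method="SLSQP", constraints=cons)
    return res.x, res.fun                       # x(k) and f(x(k))

def solve_problem_2(v, x0):
    """Problem (2): minimize max{p(x), 1 - <v, x>} via an epigraph variable t, y = (x, t)."""
    cons = [{"type": "ineq", "fun": lambda y: y[-1] - (1.0 - v @ y[:-1])}]
    cons += [{"type": "ineq", "fun": (lambda p: lambda y: y[-1] - p(y[:-1]))(p)} for p in p_list]
    t0 = max([p(x0) for p in p_list] + [1.0 - float(v @ x0)])
    res = minimize(lambda y: y[-1], np.append(x0, t0), method="SLSQP", constraints=cons)
    return res.x[:-1], res.fun                  # z^k and alpha_k

v = np.array([1.5, 0.0])                        # a hypothetical vertex of (S_k + C)°
print(solve_problem_1(v, np.array([1.0, 0.0])))
print(solve_problem_2(v, np.array([1.0, 0.0])))
```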
Let {S_k} be generated by the algorithm. Then S_k + C, k = 1, 2, ..., are convex polyhedral sets and satisfy 0 ∈ int(S_k + C). Hence, from the principle of duality, (S_k + C)° is a polytope. Moreover, the following assertions are valid.

• For any k,

(S_k + C)° = (S_k)° ∩ C° = {x ∈ R^n : ⟨z, x⟩ ≤ 1 for all z ∈ V(S_k), ⟨u, x⟩ ≤ 0 for all u ∈ E(C)},

where E(C) is a finite set of extreme directions of C satisfying C = {x ∈ R^n : x = Σ_{u∈E(C)} λ_u u, λ_u ≥ 0}.

• For any k, S_k + C = {x ∈ R^n : ⟨v, x⟩ ≤ 1 for all v ∈ V((S_k + C)°)}.

Since problem (D_k) is a quasi-convex maximization problem over (S_k + C)°, there exists a vertex of (S_k + C)° which is an optimal solution of problem (D_k). Denote by inf(P_k) and sup(D_k) the optimal values of problems (P_k) and (D_k), respectively. Since (D_k) is the dual problem of problem (P_k), inf(P_k) = -sup(D_k) (Konno, Thach and Tuy (1997)).
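A hedged sketch of the data structure suggested by the first assertion above (not from the paper): (S_k + C)° can be stored as halfspace coefficients, one row ⟨z, x⟩ ≤ 1 per vertex z of S_k and one row ⟨u, x⟩ ≤ 0 per extreme direction u of C, and each refinement S_{k+1} = co({z^k} ∪ S_k) only appends one row. The instance data (a square S_1 and C generated by -c^1, -c^2 with c^1 = (1, 0), c^2 = (0, 1)) are assumptions for illustration; vertex enumeration of the resulting polytope is a separate task not shown here.

```python
import numpy as np

def polar_halfspaces(S_vertices, C_directions):
    """Halfspace representation A x <= b of (S_k + C)°:
    <z, x> <= 1 for every vertex z of S_k and <u, x> <= 0 for every extreme direction u of C."""
    S_vertices = np.atleast_2d(S_vertices)
    C_directions = np.atleast_2d(C_directions)
    A = np.vstack([S_vertices, C_directions])
    b = np.concatenate([np.ones(len(S_vertices)), np.zeros(len(C_directions))])
    return A, b

def add_cut(A, b, z):
    """Refinement S_{k+1} = co({z^k} ∪ S_k): the polar shrinks by one extra row <z^k, x> <= 1."""
    return np.vstack([A, z]), np.append(b, 1.0)

def contains(A, b, x, tol=1e-9):
    return bool(np.all(A @ x <= b + tol))

# Illustrative data in R^2: S_1 = co{(±0.5, ±0.5)}, C generated by directions -c^1, -c^2.
A, b = polar_halfspaces([[0.5, 0.5], [0.5, -0.5], [-0.5, 0.5], [-0.5, -0.5]],
                        [[-1.0, 0.0], [0.0, -1.0]])
print(contains(A, b, np.array([0.5, 0.5])))   # True
print(contains(A, b, np.array([-1.0, 0.0])))  # False: violates <u, x> <= 0 for u = (-1, 0)
```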
3.2 A Relationship between Problem (P_k) and Problem (1)

Remark 1. If S_k ⊂ X, then S_k + C ⊂ X + C. Moreover, by the principle of duality, (S_k)° ⊃ X° and (S_k + C)° ⊃ (X + C)°.

Lemma 2. At iteration k of Algorithm IAM, v^k ∉ int X°.

Assume that S_k ⊂ X at iteration k of Algorithm IAM. The validity of this assumption will be proved later in Lemma 5. Under this assumption it follows that an optimal solution x(k) of problem (1) solves (P_k) (see Theorem 3 below).

Theorem 3. At iteration k of Algorithm IAM, let v^k be an optimal solution of problem (D_k) and x(k) an optimal solution of problem (1). Then
(i) X ∩ {x ∈ R^n : ⟨v^k, x⟩ ≥ 1} ≠ ∅,
(ii) x(k) is contained in the feasible set of problem (P_k),
(iii) x(k) solves problem (P_k).

Lemma 4. For any v ∈ R^n, problem (2) has an optimal solution.

Lemma 5. At iteration k of Algorithm IAM, assume S_k ⊂ X. Then α_k ≤ 0 and z^k ∈ X.

From Lemma 5, it follows that S_k + C ⊂ S_{k+1} + C ⊂ X + C and (S_k + C)° ⊃ (S_{k+1} + C)° ⊃ (X + C)° for any k. Now, note that sup(D_k) ≥ sup(D_{k+1}) for any k, that is,

g^H(v^1) ≥ g^H(v^2) ≥ ... ≥ g^H(v^k) ≥ ... ≥ sup(DP),   (3)

and that inf(P_{k-1}) ≤ inf(P_k) for any k ≥ 2, that is,

f(x(1)) ≤ f(x(2)) ≤ ... ≤ f(x(k)) ≤ ... ≤ inf(MP).   (4)

3.3 Stopping Criterion of Algorithm IAM

In this subsection, the validity of the stopping criterion of Algorithm IAM will be verified. If the algorithm terminates at iteration k, then, from the following theorems, v^k is an optimal solution of problem (DP).

Theorem 6. At iteration k of Algorithm IAM, α_k = 0 if and only if v^k ∈ X°.

Proof. To show that v^k ∈ X° if α_k = 0, suppose that v^k ∉ X°. Then there is x̄ ∈ X such that ⟨v^k, x̄⟩ > 1. Moreover, since {x ∈ R^n : ⟨v^k, x⟩ > 1} is an open set, there exists ε > 0 such that B(x̄, ε) ⊂ {x ∈ R^n : ⟨v^k, x⟩ > 1}, where B(x̄, ε) := {y ∈ R^n : ‖y - x̄‖ < ε}. This implies that (int X) ∩ B(x̄, ε) ≠ ∅. Let x' ∈ (int X) ∩ B(x̄, ε); then

α_k = min_{x∈R^n} φ(x; v^k) = min_{x∈R^n} max{p(x), -⟨v^k, x⟩ + 1} ≤ max{p(x'), -⟨v^k, x'⟩ + 1} < 0.

Therefore, it follows that α_k < 0 if v^k ∉ X°. Consequently, v^k ∈ X° if α_k = 0.

Next, to show that α_k = 0 if v^k ∈ X°, suppose that v^k ∈ X°. Then X ⊂ {x ∈ R^n : ⟨v^k, x⟩ ≤ 1}. Therefore, X ∩ {x ∈ R^n : ⟨v^k, x⟩ > 1} = ∅, that is, there exists no x ∈ R^n such that p(x) < 0 and -⟨v^k, x⟩ + 1 < 0. Hence, for any x ∈ R^n, φ(x; v^k) ≥ 0, that is, α_k ≥ 0. Consequently, by Lemma 5, α_k = min_{x∈R^n} φ(x; v^k) = 0. □

Theorem 7. At iteration k of Algorithm IAM, if α_k = 0, then
(i) v^k is an optimal solution of problem (DP),
(ii) x(k) is an optimal solution of problem (MP).
Proof. (i) Suppose that α_k = 0. Then, by Theorem 6, v^k ∈ X°. Furthermore, v^k ∈ X° ∩ C° = (X + C)° because v^k ∈ (S_k + C)° ⊂ C°. Therefore, g^H(v^k) ≤ sup(DP). Since v^k is an optimal solution of (D_k) and (S_k + C)° ⊃ (X + C)°, g^H(v^k) ≥ sup(DP). Hence g^H(v^k) = sup(DP). Consequently, v^k is an optimal solution of problem (DP).

(ii) Since ⟨v^k, x(k)⟩ ≥ 1 and v^k ∈ (X + C)°, x(k) ∉ int(X + C). Therefore, x(k) is contained in the feasible set of problem (MP). By Theorem 3,

f(x(k)) = g(x(k)) = inf(P_k) = -sup(D_k) = -g^H(v^k) = -sup(DP) = inf(MP).

Consequently, x(k) is an optimal solution of problem (MP). □

3.4 Convergence of Algorithm IAM

In this subsection, assume that infinite sequences {x(k)} and {v^k} are generated by Algorithm IAM. Then it will be shown that every accumulation point of {x(k)} is an optimal solution of problem (MP) and that lim_{k→∞} f(x(k)) = inf(MP).

At iteration k of Algorithm IAM, ⟨v^k, z^k⟩ > 1 if α_k < 0. Hence S_{k+1} + C = co(S_k ∪ {z^k}) + C ≠ S_k + C, because S_k + C ⊂ {x ∈ R^n : ⟨v^k, x⟩ ≤ 1}. Moreover, since V(S_{k+1}) ⊂ V(S_k) ∪ {z^k},

(S_{k+1} + C)° = (S_k + C)° ∩ {x ∈ R^n : ⟨z^k, x⟩ ≤ 1}.   (5)

Lemma 8. There exists an accumulation point of {v^k}.

Lemma 9. Assume that {α_k} is an infinite sequence such that, for all k, α_k is the optimal value of problem (2) at iteration k of Algorithm IAM. Then lim_{k→∞} α_k = 0.

Proof. Let v̄ be an accumulation point of {v^k} and {v^{k_q}} a subsequence of {v^k} satisfying v^{k_q} → v̄ as q → ∞. Let z^{k_q} be an optimal solution of problem (2) at iteration k_q of the algorithm. Since {z^{k_q}} belongs to the compact set X, it has an accumulation point z̄. By taking a further subsequence if necessary, it can be assumed without loss of generality that {z^{k_q}} converges to z̄. By Theorem 6, for all q,

0 > α_{k_q} = max{p(z^{k_q}), h(z^{k_q}, v^{k_q})} ≥ -⟨v^{k_q}, z^{k_q}⟩ + 1.

Therefore, lim_{q→∞} ⟨v^{k_q}, z^{k_q}⟩ ≥ 1. Moreover, since z^{k_q} ∈ S_{k_q+1} for every q and (S_{k_q+1} + C)° = (S_{k_q} + C)° ∩ {x ∈ R^n : ⟨z^{k_q}, x⟩ ≤ 1}, lim_{q→∞} ⟨v^{k_q+1}, z^{k_q}⟩ = ⟨v̄, z̄⟩ ≤ 1. Hence,

lim_{k→∞} ⟨v^k, z^k⟩ = 1.   (6)

By Lemma 5, lim sup_{k→∞} α_k ≤ 0. Moreover, according to (6),

lim inf_{k→∞} α_k = lim inf_{k→∞} max{p(z^k), h(z^k, v^k)} ≥ lim inf_{k→∞} h(z^k, v^k) = 0.

Consequently, lim_{k→∞} α_k = 0. □

Theorem 10. Let v̄ be an accumulation point of {v^k}. Then v̄ belongs to (X + C)°.

Proof. In order to obtain a contradiction, suppose that v̄ ∉ X°. Then there exists x' ∈ X such that h(x', v̄) = -⟨v̄, x'⟩ + 1 < 0. Since h(·, v̄) is a continuous function over R^n, there exists ε > 0 satisfying B(x', ε) ⊂ {x ∈ R^n : h(x, v̄) < 0}. This implies that, for any x̂ ∈ (int X) ∩ B(x', ε), p(x̂) < 0 and h(x̂, v̄) < 0, because int X ≠ ∅. Then there exists δ > 0 such that

h(x̂, v) < (1/2) h(x̂, v̄) < 0 for all v ∈ B(v̄, δ),

and, for any v ∈ B(v̄, δ),

min_{x∈R^n} φ(x; v) = min_{x∈R^n} max{p(x), h(x, v)} ≤ max{p(x̂), h(x̂, v)} ≤ max{p(x̂), (1/2) h(x̂, v̄)} < 0.   (7)

Let {v^{k_q}} be a subsequence of {v^k} satisfying v^{k_q} → v̄ as q → ∞. Then, by Lemma 9 and (7),

0 = lim_{q→∞} α_{k_q} = lim_{q→∞} min_{x∈R^n} φ(x; v^{k_q}) ≤ max{p(x̂), (1/2) h(x̂, v̄)} < 0.

This is a contradiction. Hence v̄ ∈ X°. Moreover, since {v^{k_q}} ⊂ (S_1 + C)° ⊂ C° and C° is a closed set, lim_{q→∞} v^{k_q} = v̄ ∈ C°. Therefore, v̄ ∈ (X + C)° = X° ∩ C°. □

Corollary 11. Let v̄ be an accumulation point of {v^k}. Then v̄ ∉ int X°.
Theorem 12. Let v̄ be an accumulation point of {v^k}. Then v̄ solves problem (DP).

Proof. Let a subsequence {v^{k_q}} ⊂ {v^k} converge to v̄. Since f is continuous over R^n, h is continuous over R^n × R^n, X is a compact set and {x ∈ R^n : ⟨v, x⟩ ≥ 1, x ∈ X} = {x ∈ R^n : -h(x, v) ≥ 0, x ∈ X} ≠ ∅ for any v ∈ C°\(int X°), the function g^H is upper semi-continuous over C°\(int X°) (Hogan (1973)). Therefore, by (3),

g^H(v̄) ≥ lim sup_{q→∞} g^H(v^{k_q}) ≥ sup(DP).

By Theorem 10, v̄ ∈ (X + C)°. Hence, g^H(v̄) ≤ sup(DP). Consequently, v̄ is an optimal solution of problem (DP). □
Remark 13. At iteration k of Algorithm IAM, since 0 ∈ {x ∈ R^n : ⟨v^k, x⟩ < 1}, from assumption (B2), every optimal solution of problem (1) belongs to {x ∈ R^n : ⟨v^k, x⟩ = 1}.

Theorem 14. Let x̄ be an accumulation point of {x(k)}. Then x̄ belongs to R^n\int(X + C) and solves problem (MP).

Proof. Let a subsequence {x(k_q)} ⊂ {x(k)} converge to x̄. Then there is a sequence {v^{k_q}} such that v^{k_q} is an optimal solution of (D_{k_q}) at iteration k_q of the algorithm. By Remark 13, ⟨v^{k_q}, x(k_q)⟩ = 1 for all q. Therefore, for every accumulation point v̄ of {v^{k_q}}, ⟨v̄, x̄⟩ = 1. By Theorem 10, since v̄ ∈ (X + C)°, x̄ ∈ bd(X + C). Consequently, x̄ ∉ int(X + C). Since {x(k_q)} ⊂ X and, for any q, x(k_q) is an optimal solution of problem (1) at iteration k_q of the algorithm, g^H(v^{k_q}) = -g(x(k_q)) = -f(x(k_q)) for all q. Therefore, by Theorem 12 and the continuity of f,

inf(MP) = -sup(DP) = -lim_{q→∞} g^H(v^{k_q}) = lim_{q→∞} f(x(k_q)) = f(x̄).

The proof is complete. □

Note that, from Theorem 14 and the continuity of f, lim_{k→∞} f(x(k)) = inf(MP). Therefore, for any ε > 0, Algorithm IAM terminates after finitely many iterations. Moreover, since every accumulation point of {x(k)} is an optimal solution of problem (MP), for sufficiently small ε > 0, the compromise solution x(k) generated by the algorithm is an approximate solution of problem (MP).

4. AN INNER APPROXIMATION METHOD USING PENALTY FUNCTIONS

In order to obtain an optimal solution of problem (P_k), problem (1) has to be solved for each v ∈ V((S_k + C)°)\{0} at every iteration of Algorithm IAM discussed in Section 3. In this section, by using penalty functions, problem (1) is transformed into an unconstrained convex minimization problem.

4.1 An Inner Approximation Algorithm Using Penalty Functions

An inner approximation algorithm for problem (OES) incorporating an exterior penalty method is as follows:

Algorithm IAP

Initialization. Let ε > 0 be the termination scalar. Generate a polytope S_1 such that S_1 ⊂ X and 0 ∈ int S_1. Compute the vertex set V((S_1 + C)°). Choose a penalty parameter μ_1 > 0, a scalar β > 1 and s ≥ 1. For convenience, let V((S_0 + C)°) := {0}. Set x(0) := 0 and k ← 1, and go to Step 1.

Step 1. For any v ∈ V((S_k + C)°)\V((S_{k-1} + C)°), let A_v and x_v be the optimal value and an optimal solution, respectively, of the following problem:

(SP(v))  minimize f(x) + μ_k θ(x, v)  subject to x ∈ R^n,

where

θ(x, v) := Σ_{j=1}^{t} [max{0, p_j(x)}]^s + [max{0, -⟨v, x⟩ + 1}]^s.

Step 2. Choose v^k ∈ V((S_k + C)°)\{0} satisfying the following condition:

A_{v^k} = min{A_v : v ∈ V((S_k + C)°)\{0}}.

Set x(k) ← x_{v^k}.

Step 3. For v^k, solve problem (2). Let z^k and α_k denote an optimal solution and the optimal value of problem (2), respectively.

a. If α_k = 0 and θ(x(k), v^k) = 0, then stop; v^k and x(k) are optimal solutions of problems (DP) and (MP), respectively.
b. 1. If f(x(k)) - f(x(k-1)) < ε and θ(x(k), v^k) = 0, then stop; v^k and x(k) are compromise solutions of problems (DP) and (MP), respectively.
   2. Otherwise, set S_{k+1} = co({z^k} ∪ S_k) and set

      μ_{k+1} := βμ_k if θ(x(k), v^k) > 0,  μ_{k+1} := μ_k if θ(x(k), v^k) = 0.

      Compute the vertex set V((S_{k+1} + C)°). Replace k by k + 1 and return to Step 1.
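A minimal sketch of the penalty subproblem (SP(v)) and of the parameter update in Step 3.b.2, under illustrative assumptions (the same toy f and X as in the earlier sketches, s = 2 so that θ(·, v) is continuously differentiable, and BFGS as the unconstrained solver); it is not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative instance: f(x) = ||x||^2, X = {x : p_1(x) = ||x||^2 - 1 <= 0}, exponent s = 2.
p_list = [lambda x: x @ x - 1.0]
f = lambda x: float(x @ x)
s = 2

def theta(x, v):
    """Exterior penalty theta(x, v) = sum_j [max{0, p_j(x)}]^s + [max{0, -<v, x> + 1}]^s."""
    pen = sum(max(0.0, p(x)) ** s for p in p_list)
    return pen + max(0.0, 1.0 - v @ x) ** s

def solve_sp(v, mu, x0):
    """(SP(v)): minimize f(x) + mu * theta(x, v) over R^n (unconstrained)."""
    res = minimize(lambda x: f(x) + mu * theta(x, v), x0, method="BFGS")
    return res.fun, res.x        # A_v and x_v

v = np.array([1.5, 0.0])         # a hypothetical vertex of (S_k + C)°
mu, beta = 10.0, 10.0            # mu_k and beta > 1
A_v, x_v = solve_sp(v, mu, np.zeros(2))
print(A_v, x_v, theta(x_v, v))

# Step 3.b.2 update: enlarge mu only while x(k) is still infeasible for problem (1).
mu_next = beta * mu if theta(x_v, v) > 0 else mu
```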
Lemma 15. At iteration k of Algorithm IAP, A_{v^k} ≤ inf{g(x) : x ∉ int(S_k + C)}.

Note that inf(P_k) = inf{g(x) : x ∉ int(S_k + C)}. Therefore, by Lemma 15, A_{v^k} ≤ inf(P_k) for any k.

4.2 Stopping Criterion of Algorithm IAP

In this subsection, it will be shown that x(k) and v^k solve problems (OES) and (DP), respectively, if α_k = 0 and θ(x(k), v^k) = 0 at iteration k of Algorithm IAP.

Lemma 16. At iteration k of Algorithm IAP, x(k) belongs to X ∩ {x ∈ R^n : ⟨v^k, x⟩ ≥ 1} if and only if θ(x(k), v^k) = 0.

Lemma 17. At iteration k of Algorithm IAP, if α_k = 0 and θ(x(k), v^k) = 0, then x(k) is contained in the weakly efficient set X_e and solves problem (OES).

4.3 Convergence of Algorithm IAP

In this subsection, the convergence of Algorithm IAP will be verified.

Lemma 18. Let {x(k)} and {v^k} be infinite sequences generated by Algorithm IAP. Then θ(x(k), v^k) → 0 as k → ∞.

Theorem 19. Let {x(k)} be an infinite sequence generated by Algorithm IAP. Then every accumulation point x̄ of {x(k)} belongs to the weakly efficient set X_e.

Proof. Let {x(k_q)} be a subsequence of {x(k)} satisfying x(k_q) → x̄ as q → ∞, and let {v^{k_q}} be the corresponding subsequence of {v^k}. Then, by Lemma 18, lim_{q→∞} θ(x(k_q), v^{k_q}) = 0. Therefore,

0 = lim_{q→∞} θ(x(k_q), v^{k_q}) ≥ lim_{q→∞} Σ_{j=1}^{t} [max{0, p_j(x(k_q))}]^s + lim_{q→∞} [max{0, -⟨v^{k_q}, x(k_q)⟩ + 1}]^s ≥ Σ_{j=1}^{t} [max{0, p_j(x̄)}]^s.

This implies that x̄ belongs to X. Next, let {v^l} be a subsequence of {v^{k_q}} satisfying v^l → v̄ as l → ∞, where v̄ is an accumulation point of {v^{k_q}}, and let {x(l)} be the corresponding subsequence of {x(k_q)}. Obviously, {x(l)} converges to x̄ as l → ∞. By Theorem 10, v̄ belongs to (X + C)°; therefore X + C ⊂ {x ∈ R^n : ⟨v̄, x⟩ ≤ 1}. Hence, by Lemma 18,

0 = lim_{l→∞} θ(x(l), v^l) ≥ lim_{l→∞} [max{0, -⟨v^l, x(l)⟩ + 1}]^s = [max{0, -⟨v̄, x̄⟩ + 1}]^s.

This implies that ⟨v̄, x̄⟩ = 1. Therefore, x̄ is not contained in int(X + C). Consequently, since x̄ ∈ X and x̄ ∉ int(X + C), x̄ belongs to X_e = X\int(X + C). □

Theorem 20. Let {x(k)} be an infinite sequence generated by Algorithm IAP. Then every accumulation point of {x(k)} solves problem (OES).

Theorem 21. Let {v^k} be an infinite sequence generated by Algorithm IAP. Then every accumulation point v̄ of {v^k} belongs to (X + C)° and solves problem (DP).
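To show how Steps 1 and 2 of Algorithm IAP fit together, the following self-contained sketch caches (A_v, x_v) per vertex, solves (SP(v)) only for newly generated vertices, and selects v^k with the smallest A_v. The toy data, the solver and the fact that vertex enumeration of (S_k + C)° is abstracted into a given list are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative instance (same toy data as in the earlier sketches).
p_list = [lambda x: x @ x - 1.0]
f = lambda x: float(x @ x)
s = 2

def theta(x, v):
    return sum(max(0.0, p(x)) ** s for p in p_list) + max(0.0, 1.0 - v @ x) ** s

def iap_steps_1_2(vertices_new, cache, mu):
    """Steps 1-2 of Algorithm IAP: solve (SP(v)) only for newly generated vertices,
    keep (A_v, x_v) in a cache, and pick v^k with the smallest A_v over all known vertices."""
    for v in vertices_new:                       # V((S_k + C)°) \ V((S_{k-1} + C)°)
        res = minimize(lambda x: f(x) + mu * theta(x, np.asarray(v)), np.zeros(2), method="BFGS")
        cache[tuple(v)] = (res.fun, res.x)       # A_v and x_v
    v_k = min(cache, key=lambda v: cache[v][0])  # argmin of A_v
    A_vk, x_k = cache[v_k]
    return np.asarray(v_k), x_k, A_vk

cache = {}
v_k, x_k, A_vk = iap_steps_1_2([(1.5, 0.0), (0.0, 1.5)], cache, mu=10.0)
print(v_k, x_k, A_vk)
```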
5. CONCLUSION

In this paper, two solution algorithms for problem (OES) have been presented based on an inner approximation method. From the viewpoint of computational effort, the algorithm incorporating an exterior penalty method has an advantage over the other algorithm.

REFERENCES

Aubin, J.P. (1977). Applied Abstract Analysis, John Wiley, New York.
Hogan, W.W. (1973). Point-to-Set Maps in Mathematical Programming, SIAM Review, Vol. 15, No. 3.
Konno, H., P.T. Thach and H. Tuy (1997). Optimization on Low Rank Nonconvex Structures, Kluwer Academic Publishers, Dordrecht.
Sawaragi, Y., H. Nakayama and T. Tanino (1985). Theory of Multiobjective Optimization, Academic Press, Orlando.