An inner approximation method incorporating a branch and bound procedure for optimization over the weakly efficient set


European Journal of Operational Research 133 (2001) 267–286

www.elsevier.com/locate/dsw

An inner approximation method incorporating a branch and bound procedure for optimization over the weakly efficient set

Syuuji Yamada *, Tetsuzo Tanino, Masahiro Inuiguchi

Department of Electronics and Information Systems, Graduate School of Engineering, Osaka University, 2-1 Yamada-oka, Suita, Osaka 565-0871, Japan

Abstract

In this paper, we consider an optimization problem which aims to minimize a convex function over the weakly efficient set of a multiobjective programming problem. From a computational viewpoint, we may compromise this aim by accepting an approximate solution of such a problem. To find an approximate solution, we propose an inner approximation method for such a problem. Furthermore, in order to enhance the efficiency of the solution method, we propose an inner approximation algorithm incorporating a branch and bound procedure. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Weakly efficient set; Global optimization; Dual problem; Inner approximation method; Branch and bound procedure

1. Introduction

We consider the following multiobjective programming problem:

(P)  maximize ⟨c^i, x⟩, i = 1, …, K,
     subject to x ∈ X ⊂ R^n,

where c^i ∈ R^n (i = 1, …, K), X is a compact convex set and ⟨·,·⟩ denotes the Euclidean inner product in R^n. The objective functions ⟨c^i, x⟩, i = 1, …, K, express the criteria which the decision maker wants to maximize. A feasible vector x ∈ X is said to be weakly efficient if there is no feasible vector y such that ⟨c^i, x⟩ < ⟨c^i, y⟩ for every i ∈ {1, …, K}. The set X_e of all weakly efficient feasible vectors is called the weakly efficient set. For problem (P), we shall assume the following conditions throughout this paper:

* Corresponding author. Tel.: +81-6-6879-7787; fax: +81-6-6879-7939. E-mail addresses: [email protected] (S. Yamada), [email protected] (T. Tanino), [email protected] (M. Inuiguchi).
0377-2217/01/$ - see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0377-2217(00)00297-6


(A1) X = {x ∈ R^n : p_j(x) ≤ 0, j = 1, …, t}, where p_j : R^n → R, j = 1, …, t, are differentiable convex functions satisfying p_j(0) < 0 (whence 0 ∈ int X),
(A2) {x ∈ R^n : ⟨c^i, x⟩ < 0 for all i ∈ {1, …, K}} ≠ ∅.

Let p(x) := max_{j=1,…,t} p_j(x). Then X = {x ∈ R^n : p(x) ≤ 0} and int X = {x ∈ R^n : p(x) < 0} ≠ ∅ (Slater's constraint qualification).

In this paper, we consider minimization of a convex cost function over the weakly efficient set. An example of such a problem is furnished by the portfolio optimization problem in capital markets: a fund manager may look for a portfolio which minimizes the transaction cost over the efficient set. When X is a polytope, Konno et al. [1] have proposed a cutting plane method for solving the problem. In order to solve the problem for a compact convex set X, not necessarily a polytope, we propose an inner approximation method in this paper.

The organization of the paper is as follows. In Section 2, we deal with minimization of a convex function over the weakly efficient set in R^n, and we consider the dual problem of the original problem. In Section 3, we formulate an inner approximation algorithm and establish its convergence. In Section 4, in order to terminate the algorithm after finitely many iterations, we propose another stopping criterion for the algorithm. In Section 5, we propose an inner approximation algorithm incorporating a branch and bound procedure.

Throughout the paper, we use the following notation: for X ⊂ R^n, int X, bd X and conv X denote the interior, the boundary and the convex hull of X, respectively.
Let R̄ = R ∪ {−∞} ∪ {+∞}, and let X* = {u ∈ R^n : ⟨u, x⟩ ≤ 1 for all x ∈ X}, which is called the polar set of X. Let D(X) = max{‖x − y‖ : x, y ∈ X}, which is the diameter of X, and let δ(· | X) denote the indicator function of X, the extended-real-valued function defined by

  δ(x | X) = 0 if x ∈ X;  +∞ if x ∉ X.

Given a convex polyhedral set (or polytope) Y ⊂ R^n, V(Y) denotes the set of all vertices of Y. Given a function f : R^n → R, f^H : R^n → R̄ is called the quasi-conjugate of f if f^H is defined by

  f^H(u) = −sup{f(x) : x ∈ R^n} if u = 0;  −inf{f(x) : ⟨u, x⟩ ≥ 1} if u ≠ 0.

Given a differentiable function f : R^n → R, ∇f(x) denotes the gradient of f at x ∈ R^n. Given a convex function f : R^n → R, ∂f(x) denotes the subdifferential of f at x ∈ R^n.

2. Minimizing a convex function over the weakly efficient set

Let us consider the following problem, which minimizes a function f over the weakly efficient set X_e of (P):

(OES)  minimize f(x)  subject to x ∈ X_e,

where f : R^n → R satisfies the following assumptions:
(B1) f is a convex function,
(B2) f is regular at the origin, that is, f(0) = inf{f(x) : x ∈ R^n \ {0}},
(B3) argmin{f(x) : x ∈ R^n} = {0}.

Let C = {x ∈ R^n : ⟨c^i, x⟩ ≤ 0 for all i ∈ {1, …, K}}. Then, by assumption (A2), int C ≠ ∅. It follows from the following lemma that the weakly efficient set X_e of problem (P) can be written as X_e = X \ int(X + C).


Lemma 2.1. X_e = X \ int(X + C).

Proof. For any x̄ ∈ X \ int(X + C),

  there is no x ∈ X such that x̄ ∈ int({x} + C).  (1)

For any x ∈ X, {x} + C is formulated as

  {x} + C = {x} + {y ∈ R^n : ⟨c^i, y⟩ ≤ 0 for all i ∈ {1, …, K}}
          = {z ∈ R^n : z = x + y, ⟨c^i, y⟩ ≤ 0 for all i ∈ {1, …, K}}
          = {z ∈ R^n : ⟨c^i, z − x⟩ ≤ 0 for all i ∈ {1, …, K}}
          = {z ∈ R^n : ⟨c^i, z⟩ ≤ ⟨c^i, x⟩ for all i ∈ {1, …, K}}.

Hence, for any x ∈ X, int({x} + C) = {z ∈ R^n : ⟨c^i, z⟩ < ⟨c^i, x⟩ for all i ∈ {1, …, K}}. By condition (1), for any x̄ ∈ X \ int(X + C),

  there is no x ∈ X such that ⟨c^i, x̄⟩ < ⟨c^i, x⟩ for all i ∈ {1, …, K}.  (2)

Consequently, every point x̄ ∈ X \ int(X + C) is a weakly efficient solution of problem (P), that is, X_e ⊃ X \ int(X + C). Conversely, since every x_0 ∈ X_e is a weakly efficient solution of problem (P), we get that x_0 ∈ X and there is no x ∈ X such that ⟨c^i, x_0⟩ < ⟨c^i, x⟩ for all i ∈ {1, …, K}. Therefore, for any x_0 ∈ X_e, there is no x ∈ X such that x_0 ∈ int({x} + C). Consequently, since x_0 ∈ X and x_0 ∉ int(X + C) for any x_0 ∈ X_e, X_e ⊂ X \ int(X + C). □

By using the indicator function of X, problem (OES) can be reformulated as

(MP)  minimize g(x)  subject to x ∈ R^n \ int(X + C),

where g(x) := f(x) + δ(x | X). The dual problem of problem (MP) is formulated as

(DP)  maximize g^H(x)  subject to x ∈ (X + C)*.

By assumptions (A1) and (A2), 0 ∈ int(X + C), so that (X + C)* is compact. Since g^H is a quasi-convex function, problem (DP) is a quasi-convex maximization problem over a compact convex set in R^n. Denote by inf(MP) and sup(DP) the optimal values of the objective functions of (MP) and (DP), respectively. It follows from the duality relations between problem (MP) and problem (DP) that inf(MP) = −sup(DP) (cf. [1]).

3. An inner approximation method for (MP)

3.1. An inner approximation method for (MP)

We propose an inner approximation method for problem (MP) as follows:


Algorithm IA.

Initialization. Generate a polytope S_1 such that S_1 ⊂ X and 0 ∈ int S_1. Set k = 1 and go to Step 1.

Step 1. Consider the following problem:

(P_k)  minimize g(x)  subject to x ∈ R^n \ int(S_k + C).

Choose v^k ∈ V((S_k + C)*) such that v^k solves the following dual problem of problem (P_k):

(D_k)  maximize g^H(x)  subject to x ∈ (S_k + C)*.

Let x(k) be an optimal solution of the following convex minimization problem:

  minimize f(x)  subject to x ∈ X ∩ {x ∈ R^n : ⟨v^k, x⟩ ≥ 1}.  (3)

Step 2. Solve the following problem:

  minimize φ(x, v^k) = max{p(x), h(x, v^k)}  subject to x ∈ R^n,  (4)

where h(x, v^k) = −⟨v^k, x⟩ + 1. Let z^k and α_k denote an optimal solution and the optimal value of problem (4), respectively. It will be proved later, in Theorem 3.2 and Lemma 3.4, that problem (4) attains its optimal value and that z^k ∈ X, respectively.
(a) If α_k = 0, then stop; v^k solves problem (DP). Moreover, x(k) solves problem (MP) and the optimal value of problem (3) is the optimal value of problem (MP).
(b) Otherwise, set S_{k+1} = conv({z^k} ∪ S_k). Set k ← k + 1 and go to Step 1.

At every iteration k of the algorithm, problems (3) and (4) are convex minimization problems. Let {S_k} be generated by the algorithm. Then S_k + C, k = 1, 2, …, are convex polyhedral sets satisfying 0 ∈ int(S_k + C). Hence, by the principle of duality, (S_k + C)* is a polytope. Moreover, the following assertions are valid:
· for any k, (S_k + C)* = (S_k)* ∩ C* = {x ∈ R^n : ⟨z, x⟩ ≤ 1 for all z ∈ V(S_k), ⟨u, x⟩ ≤ 0 for all u ∈ E(C)}, where E(C) is a finite set of extreme directions of C satisfying C = {x ∈ R^n : x = Σ_{u∈E(C)} λ_u u, λ_u ≥ 0},
· for any k, S_k + C = {x ∈ R^n : ⟨v, x⟩ ≤ 1 for all v ∈ V((S_k + C)*)}.
Since problem (D_k) is a quasi-convex maximization problem over (S_k + C)*, we can choose an optimal solution of problem (D_k) from the set of all vertices of (S_k + C)*. Denote by inf(P_k) and sup(D_k) the optimal values of problems (P_k) and (D_k), respectively. Since (D_k) is the dual problem of problem (P_k), inf(P_k) = −sup(D_k) [1].
In Sections 3.2–3.5, we shall discuss the suitability of the algorithm:
Section 3.2: We propose a procedure generating an initial polytope S_1.
Section 3.3: At iteration k of the algorithm, the feasible set of problem (3) is not empty. Moreover, an optimal solution x(k) of problem (3) belongs to the feasible set of problem (P_k) and solves problem (P_k).
Section 3.4: First, at every iteration of the algorithm, the objective function of problem (4) attains its minimal value. Secondly, at every iteration of the algorithm, an optimal solution z^k of problem (4)


is contained in X. Lastly, if the algorithm terminates after finitely many iterations, we obtain optimal solutions of problems (MP) and (DP).
Section 3.5: If the algorithm does not terminate after finitely many iterations, every accumulation point of the sequence {v^k} is an optimal solution of problem (DP), and every accumulation point of the sequence {x(k)} is an optimal solution of problem (MP).
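Before turning to these details, the overall loop of Algorithm IA can be condensed into the following sketch. This is a minimal skeleton of our own, not the authors' implementation: the solvers for the subproblems (D_k), (3) and (4) and the polytope update conv(S_k ∪ {z^k}) are user-supplied callbacks, and all function names are hypothetical.

```python
def algorithm_ia(S1, solve_dual, solve_primal, solve_phi, extend,
                 max_iter=100, tol=1e-8):
    """Skeleton of Algorithm IA with user-supplied subproblem solvers.

    S1           -- initial polytope with S1 in X and 0 in int(S1)
    solve_dual   -- S_k -> v^k, a vertex of (S_k + C)* solving (D_k)
    solve_primal -- v^k -> x(k): min f(x) s.t. x in X, <v^k, x> >= 1  (problem (3))
    solve_phi    -- v^k -> (z^k, alpha_k): min_x max{p(x), -<v^k, x> + 1}  (problem (4))
    extend       -- (S_k, z^k) -> conv(S_k ∪ {z^k})
    """
    S = S1
    for k in range(1, max_iter + 1):
        v = solve_dual(S)          # Step 1: solve the dual problem (D_k)
        x = solve_primal(v)        # x(k), optimal for problem (3)
        z, alpha = solve_phi(v)    # Step 2: optimal solution/value of problem (4)
        if abs(alpha) <= tol:      # alpha_k = 0: v^k solves (DP), x(k) solves (MP)
            return v, x, k
        S = extend(S, z)           # Step 2(b): S_{k+1} = conv(S_k ∪ {z^k})
    return v, x, max_iter

# Smoke run with trivial stub solvers (illustrative only): alpha_k becomes 0
# on the second iteration, so the loop stops with k = 2.
def _phi_stub(v, _n=[0]):
    _n[0] += 1
    return (2.0, 0.0), (-0.5 if _n[0] == 1 else 0.0)

v, x, k = algorithm_ia([(0.1, 0.0)],
                       solve_dual=lambda S: (1.0, 0.0),
                       solve_primal=lambda v: (1.0, 0.0),
                       solve_phi=_phi_stub,
                       extend=lambda S, z: S + [z])
```
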

3.2. Generating an initial polytope S_1

In this section, we propose a procedure for generating an initial polytope S_1 such that S_1 ⊂ X and 0 ∈ int S_1. Let

  T = {e^1, e^2, …, e^n, e^{n+1}},

where, for all i ∈ {1, …, n}, e^i = (e^i_1, …, e^i_n)^t ∈ R^n satisfies e^i_i = 1 and e^i_j = 0 for all j ≠ i, and e^{n+1} ∈ R^n satisfies e^{n+1}_j = −1 for all j. Then 0 ∈ conv T and conv T is a polytope. However, conv T is not always contained in X. To get a polytope contained in X, let, for all i ∈ {1, …, n + 1},

  λ_i = 1 if p(e^i) ≤ 0;  λ_i = p(0)/(p(0) − p(e^i)) if p(e^i) > 0,

and let T̄ := {λ_1 e^1, λ_2 e^2, …, λ_{n+1} e^{n+1}}. Then λ_i e^i ∈ X if p(e^i) ≤ 0 (i ∈ {1, …, n + 1}). Moreover, since p is a convex function with p(0) < 0, by the definition of λ_i (i = 1, …, n + 1) we get that, for all i ∈ {1, …, n + 1} such that p(e^i) > 0,

  p(λ_i e^i) = p(λ_i e^i + (1 − λ_i) 0) ≤ λ_i p(e^i) + (1 − λ_i) p(0) = 0.

Therefore p(λ_i e^i) ≤ 0 for all i ∈ {1, …, n + 1}, that is, conv T̄ ⊂ X. Obviously, 0 ∈ conv T̄. Consequently, we obtain S_1 and V(S_1) as conv T̄ and T̄, respectively.

3.3. A relationship between problem (P_k) and problem (3)

Assume that S_k ⊂ X at iteration k of Algorithm IA. The validity of this assumption will be proved later in Lemma 3.4. Under this assumption it follows that an optimal solution x(k) of problem (3) solves (P_k) (see Theorem 3.1 below).
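The construction of S_1 in Section 3.2 is easy to implement. The sketch below is our own illustration (the quadratic constraint is a made-up example, not one from the paper); it computes the scaled vertex set T̄ = {λ_1 e^1, …, λ_{n+1} e^{n+1}}:

```python
def initial_vertices(p, n):
    """Vertex set of the initial polytope S1 = conv(T_bar) of Section 3.2.

    T = {e^1, ..., e^n, e^{n+1}} with e^{n+1} = (-1, ..., -1); each e^i is
    scaled by lambda_i = 1 if p(e^i) <= 0 and lambda_i = p(0)/(p(0) - p(e^i))
    otherwise, so that p(lambda_i * e^i) <= 0 by convexity of p.
    """
    E = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    E.append([-1.0] * n)
    p0 = p([0.0] * n)
    assert p0 < 0, "assumption (A1) requires p(0) < 0"
    T_bar = []
    for e in E:
        pe = p(e)
        lam = 1.0 if pe <= 0 else p0 / (p0 - pe)
        T_bar.append([lam * ej for ej in e])
    return T_bar

# Toy instance X = {x : x1^2 + 4*x2^2 - 2 <= 0}: every scaled vertex lies in X,
# hence conv(T_bar) is contained in X.
p = lambda x: x[0] ** 2 + 4.0 * x[1] ** 2 - 2.0
V1 = initial_vertices(p, 2)
assert all(p(v) <= 0.0 for v in V1)
```
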

Remark 3.1. If S_k ⊂ X, then S_k + C ⊂ X + C. Moreover, by the principle of duality, (S_k)* ⊃ X* and (S_k + C)* ⊃ (X + C)*.

Lemma 3.1. At iteration k of Algorithm IA, v^k ≠ 0.

Proof. For any y ∈ (bd X*) ∩ C*, there is x_0 ∈ X such that ⟨y, x_0⟩ = 1; since g(x_0) = f(x_0) < +∞, we have g^H(y) > −∞ for any y ∈ (bd X*) ∩ C*. Since (S_k + C)* ⊃ (X + C)* ⊃ (bd X*) ∩ C* and v^k is an optimal solution of (D_k), we obtain g^H(v^k) > −∞. Moreover, since g^H(0) = −∞, we have v^k ≠ 0. □

Lemma 3.2. At iteration k of Algorithm IA, for any v ∈ V((S_k + C)*) \ {0}, v ∉ int X*.


Proof. Suppose to the contrary that there exists v ∈ V((S_k + C)*) \ {0} satisfying v ∈ int X*. Since v is a vertex of (S_k + C)* and (S_k + C)* = {x ∈ R^n : ⟨z, x⟩ ≤ 1 for all z ∈ V(S_k), ⟨u, x⟩ ≤ 0 for all u ∈ E(C)}, there exist a^1, …, a^r ∈ V(S_k) ∪ E(C) such that dim{a^1, …, a^r} = n and

  ⟨a^i, v⟩ = b_i,  i = 1, …, r,  (5)

where, for all i ∈ {1, …, r}, b_i = 1 if a^i ∈ V(S_k) and b_i = 0 if a^i ∈ E(C). Note that y ∉ int X* if a point y ∈ R^n satisfies ⟨z, y⟩ ≥ 1 for some z ∈ V(S_k), because X* ⊂ (S_k)* = {x ∈ R^n : ⟨z, x⟩ ≤ 1 for all z ∈ V(S_k)}. Hence, by the assumption on v, {a^1, …, a^r} ⊂ E(C). Then v is the origin of R^n because ∩_{i=1}^r {x ∈ R^n : ⟨a^i, x⟩ = 0} = {0}. This is a contradiction, and hence v ∉ int X* for any v ∈ V((S_k + C)*) \ {0}. □

Lemma 3.3. At iteration k of Algorithm IA, v^k ∉ int X*.

Proof. Recall that v^k is a vertex of (S_k + C)*. By Lemmas 3.1 and 3.2, v^k ∉ int X*. □

Theorem 3.1. At iteration k of Algorithm IA, let v^k be an optimal solution of problem (D_k) and x(k) an optimal solution of problem (3). Then
1. X ∩ {x ∈ R^n : ⟨v^k, x⟩ ≥ 1} ≠ ∅,
2. x(k) is contained in the feasible set of problem (P_k),
3. x(k) solves problem (P_k).

Proof.
1. By Lemma 3.3, v^k ∉ int X*. Therefore there is x_0 ∈ X such that ⟨v^k, x_0⟩ ≥ 1, that is, X ∩ {x ∈ R^n : ⟨v^k, x⟩ ≥ 1} ≠ ∅.
2. Since v^k ∈ (S_k + C)*, x(k) ∈ {x ∈ R^n : ⟨v^k, x⟩ ≥ 1} and ((S_k + C)*)* = S_k + C, we get x(k) ∉ int(S_k + C). Therefore x(k) is contained in the feasible set of problem (P_k).
3. Since v^k is an optimal solution of (D_k) and (D_k) is the dual problem of problem (P_k),

  inf(P_k) = −sup(D_k) = −g^H(v^k) = inf{g(x) : ⟨v^k, x⟩ ≥ 1}  (by Lemma 3.1)
           = inf{f(x) : ⟨v^k, x⟩ ≥ 1, x ∈ X} = inf(3),

where inf(3) is the optimal value of problem (3). Therefore f(x(k)) = g(x(k)) = inf(P_k). □

3.4. Stopping criterion of Algorithm IA

In this section, we examine the suitability of the stopping criterion of Algorithm IA.

Theorem 3.2. For any v ∈ R^n, the objective function φ(x, v) of problem (4) attains its minimal value over R^n.


Proof. Since p is a continuous function and X = {x ∈ R^n : p(x) ≤ 0} is compact by assumption, the minimal value of p over R^n exists [2]. Let a := min{p(x) : x ∈ R^n}. Then inf_{x∈R^n} φ(x, v) = inf_{x∈R^n} max{p(x), h(x, v)} ≥ inf_{x∈R^n} p(x) = a. This implies that φ(·, v) has a finite infimum over R^n. Let b := inf_{x∈R^n} φ(x, v). Since p is a proper convex function and X is compact, for any c ≥ a the level set L_p(c) = {x ∈ R^n : p(x) ≤ c} is compact ([3], Corollary 8.7.1). We note that b ≥ a and that L_p(c) ⊃ L_φ(c) ≠ ∅ for any c ≥ b. Hence L_φ(c) is compact for any c ≥ b. Consequently, the minimal value of φ(·, v) over R^n exists ([2], Theorem 2.1). □

Lemma 3.4. At iteration k of Algorithm IA, assume that S_k ⊂ X. Then
1. α_k ≤ 0,
2. z^k ∈ X.

Proof. By Lemma 3.3, v^k ∉ int X*. Therefore there is x̂ ∈ X such that ⟨v^k, x̂⟩ ≥ 1. Furthermore,

  α_k = min_{x∈R^n} φ(x, v^k) ≤ φ(x̂, v^k) = max{p(x̂), −⟨v^k, x̂⟩ + 1} ≤ 0.

Since α_k ≤ 0, we obtain p(z^k) ≤ α_k ≤ 0. Consequently, z^k ∈ X. □

From Lemma 3.4 and Section 3.2, we have
· S_1 + C ⊂ S_2 + C ⊂ ⋯ ⊂ S_k + C ⊂ ⋯ ⊂ X + C,
· (S_1 + C)* ⊃ (S_2 + C)* ⊃ ⋯ ⊃ (S_k + C)* ⊃ ⋯ ⊃ (X + C)*.
Moreover, we note that sup(D_{k−1}) ≥ sup(D_k) for any k ≥ 2, that is,

  g^H(v^1) ≥ g^H(v^2) ≥ ⋯ ≥ g^H(v^k) ≥ ⋯ ≥ sup(DP),  (6)

and that inf(P_{k−1}) ≤ inf(P_k) for any k ≥ 2, that is,

  f(x(1)) ≤ f(x(2)) ≤ ⋯ ≤ f(x(k)) ≤ ⋯ ≤ inf(MP).  (7)

If the algorithm terminates after finitely many iterations, then it is guaranteed that an optimal solution of problem (DP) is obtained, as follows.

Theorem 3.3. At iteration k of Algorithm IA, α_k = 0 if and only if v^k ∈ X*.

Proof. We shall show that v^k ∈ X* if α_k = 0. Suppose that v^k ∉ X*. Then there is x̂ ∈ X such that ⟨v^k, x̂⟩ > 1. Moreover, since {x ∈ R^n : ⟨v^k, x⟩ > 1} is an open set, there exists ε > 0 such that B(x̂, ε) ⊂ {x ∈ R^n : ⟨v^k, x⟩ > 1}, where B(x̂, ε) = {y ∈ R^n : ‖y − x̂‖ < ε}. This implies that (int X) ∩ B(x̂, ε) ≠ ∅. Let x_0 ∈ (int X) ∩ B(x̂, ε); then

  α_k = min_{x∈R^n} φ(x, v^k) = min_{x∈R^n} max{p(x), −⟨v^k, x⟩ + 1} ≤ max{p(x_0), −⟨v^k, x_0⟩ + 1} < 0.

Therefore α_k < 0 if v^k ∉ X*. Consequently, v^k ∈ X* if α_k = 0.
Next, we shall show that α_k = 0 if v^k ∈ X*. Suppose that v^k ∈ X*. Then, since X is a closed convex set, (X*)* = X, so that X ⊂ {x ∈ R^n : ⟨v^k, x⟩ ≤ 1}. Therefore X ∩ {x ∈ R^n : ⟨v^k, x⟩ > 1} = ∅, that is, there is no x ∈ R^n such that p(x) < 0 and −⟨v^k, x⟩ + 1 < 0. Hence, for any x ∈ R^n, φ(x, v^k) ≥ 0, that is, α_k ≥ 0. Consequently, by Lemma 3.4, min_{x∈R^n} φ(x, v^k) = α_k = 0. □
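Theorem 3.3 can be illustrated numerically on a toy instance of our own (not from the paper): let X be the closed unit disk, p(x) = ‖x‖² − 1, so that X* is again the unit disk. For this instance the minimization in problem (4) reduces to a one-dimensional convex problem along the ray spanned by v, because projecting x onto span(v) preserves ⟨v, x⟩ and never increases ‖x‖. A ternary search then exhibits the sign behaviour: α < 0 when v ∉ X* (‖v‖ > 1), α = 0 when v ∈ bd X* (‖v‖ = 1), and α > 0 for v ∈ int X* (a case that, by Lemma 3.3, never arises in the algorithm).

```python
def alpha(vnorm, lo=0.0, hi=4.0, iters=200):
    """Optimal value of problem (4) for X the unit disk and ||v|| = vnorm.

    phi(x, v) = max{ ||x||^2 - 1, 1 - <v, x> }.  The minimizer lies on the
    ray t * v/||v||, t >= 0, so phi reduces to the convex one-dimensional
    function phi(t) = max{ t*t - 1, 1 - t*vnorm }, minimized by ternary search.
    """
    phi = lambda t: max(t * t - 1.0, 1.0 - t * vnorm)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if phi(m1) <= phi(m2):
            hi = m2
        else:
            lo = m1
    return phi(0.5 * (lo + hi))

# Sign of alpha versus the position of v relative to X* (the unit disk):
a_out, a_bd, a_in = alpha(1.5), alpha(1.0), alpha(0.5)
```
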


Theorem 3.4. At iteration k of Algorithm IA, if α_k = 0, then
1. v^k is an optimal solution of problem (DP),
2. x(k) is an optimal solution of problem (MP).

Proof. Suppose that α_k = 0. Then, by Theorem 3.3, v^k ∈ X*. Furthermore, v^k ∈ X* ∩ C* = (X + C)* because v^k ∈ (S_k + C)* ⊂ C*. Therefore g^H(v^k) ≤ sup(DP). Since v^k is an optimal solution of (D_k) and (S_k + C)* ⊃ (X + C)*, we get g^H(v^k) ≥ sup(DP). Hence g^H(v^k) = sup(DP). Consequently, v^k is an optimal solution of problem (DP).
Since ⟨v^k, x(k)⟩ ≥ 1 and v^k ∈ (X + C)*, we have x(k) ∉ int(X + C). Therefore x(k) is contained in the feasible set of problem (MP). By Theorem 3.1,

  g^H(v^k) = −f(x(k)) = sup(DP) = −inf(MP).

Consequently, x(k) is an optimal solution of problem (MP). □

At iteration k of Algorithm IA, ⟨v^k, z^k⟩ > 1 if α_k < 0. Hence S_{k+1} + C = conv(S_k ∪ {z^k}) + C ≠ S_k + C, because S_k + C ⊂ {x ∈ R^n : ⟨v^k, x⟩ ≤ 1}. Moreover, since V(S_{k+1}) ⊂ V(S_k) ∪ {z^k}, we have

  (S_{k+1} + C)* = (S_k + C)* ∩ {x ∈ R^n : ⟨z^k, x⟩ ≤ 1} ≠ (S_k + C)*  (8)

(see Fig. 1).

Remark 3.2. At iteration k of Algorithm IA, for any v ∈ V((S_{k+1} + C)*) \ V((S_k + C)*), ⟨z^k, v⟩ = 1.

3.5. Convergence of Algorithm IA

Algorithm IA with the stopping criterion discussed in Section 3.4 does not necessarily terminate after finitely many iterations. In this section, we consider the case in which an infinite sequence {v^k} is generated by the algorithm.

Lemma 3.5. Assume that {v^k} is an infinite sequence such that, for all k, v^k is an optimal solution of (D_k) at iteration k of Algorithm IA. Then {v^k} has an accumulation point.

Proof. Since {v^k} ⊂ (S_1 + C)* and (S_1 + C)* is compact, there exists an accumulation point of {v^k}. □

Fig. 1. Construction of S_{k+1} + C and (S_{k+1} + C)*; H(z^k) = {x : ⟨z^k, x⟩ = 1}.


The following theorem assures that every accumulation point of {v^k} belongs to the feasible set of problem (DP).

Theorem 3.5. Assume that {v^k} is an infinite sequence such that, for all k, v^k is an optimal solution of (D_k) at iteration k of Algorithm IA, and that v̄ is an accumulation point of {v^k}. Then v̄ belongs to (X + C)*.

Proof. Let a subsequence {v^{k_q}} ⊂ {v^k} converge to v̄. Then there is a sequence {z^{k_q}} such that z^{k_q} is an optimal solution of problem (4) at iteration k_q of the algorithm. Since {z^{k_q}} belongs to the compact set X, there are an accumulation point z̄ and a subsequence {z^l} ⊂ {z^{k_q}} such that z^l → z̄ as l → ∞. Moreover, corresponding to {z^l} there is the subsequence {v^l} ⊂ {v^{k_q}}; obviously {v^l} converges to v̄. Since v^l ∉ X*, by Theorem 3.3,

  0 > α_l = max{p(z^l), h(z^l, v^l)} ≥ −⟨v^l, z^l⟩ + 1  for all l.

Therefore lim_{l→∞} ⟨v^l, z^l⟩ = ⟨v̄, z̄⟩ ≥ 1. On the other hand, since v^{l'} ∈ (S_{l+1} + C)* for all l' > l and (S_{l+1} + C)* = (S_l + C)* ∩ {x ∈ R^n : ⟨z^l, x⟩ ≤ 1}, we obtain lim_{l→∞} ⟨v^{l+1}, z^l⟩ = ⟨v̄, z̄⟩ ≤ 1. Hence lim_{l→∞} ⟨v^l, z^l⟩ = ⟨v̄, z̄⟩ = 1. Furthermore, we get lim_{q→∞} ⟨v̄, z^{k_q}⟩ = 1 ([4], Proposition 1, Section 6, Chapter 1). Since X is compact, there is μ > 0 such that ‖x‖ < μ for all x ∈ X. Then we have

  lim sup_{q→∞} ⟨v^{k_q}, z^{k_q}⟩ ≤ lim sup_{q→∞} ⟨v^{k_q} − v̄, z^{k_q}⟩ + lim sup_{q→∞} ⟨v̄, z^{k_q}⟩
    = lim sup_{q→∞} ⟨v^{k_q} − v̄, z^{k_q}⟩ + 1
    ≤ lim sup_{q→∞} ‖v^{k_q} − v̄‖ · ‖z^{k_q}‖ + 1
    ≤ lim sup_{q→∞} ‖v^{k_q} − v̄‖ · μ + 1 = 0 + 1 = 1,

and

  lim inf_{q→∞} ⟨v^{k_q}, z^{k_q}⟩ ≥ lim inf_{q→∞} ⟨v^{k_q} − v̄, z^{k_q}⟩ + lim inf_{q→∞} ⟨v̄, z^{k_q}⟩
    = lim inf_{q→∞} ⟨v^{k_q} − v̄, z^{k_q}⟩ + 1
    ≥ lim inf_{q→∞} (−‖v^{k_q} − v̄‖ · ‖z^{k_q}‖) + 1
    ≥ lim inf_{q→∞} (−‖v^{k_q} − v̄‖ · μ) + 1 = 0 + 1 = 1.

Consequently,

  lim_{q→∞} ⟨v^{k_q}, z^{k_q}⟩ = 1.  (9)

By Lemma 3.4, lim sup_{q→∞} α_{k_q} ≤ 0. Moreover, according to condition (9),

  lim inf_{q→∞} α_{k_q} = lim inf_{q→∞} max{p(z^{k_q}), h(z^{k_q}, v^{k_q})} ≥ lim_{q→∞} h(z^{k_q}, v^{k_q}) = 0.

Consequently, lim_{q→∞} α_{k_q} = 0. In order to obtain a contradiction, suppose that v̄ ∉ X*. Then there is x_0 ∈ X such that

  h(x_0, v̄) = −⟨v̄, x_0⟩ + 1 < 0.


Since h(·, v̄) is a continuous function over R^n, there exists ε > 0 such that B(x_0, ε) ⊂ {x ∈ R^n : h(x, v̄) < 0}, where B(x_0, ε) = {x ∈ R^n : ‖x − x_0‖ < ε}. Since int X ≠ ∅, this implies that (int X) ∩ B(x_0, ε) ≠ ∅, and for any x ∈ (int X) ∩ B(x_0, ε), p(x) < 0 and h(x, v̄) < 0. Fix such an x. Then there exists δ > 0 such that

  h(x, v) < (1/2) h(x, v̄) < 0  for all v ∈ B(v̄, δ),

and, for any v ∈ B(v̄, δ),

  min_{x'∈R^n} φ(x', v) = min_{x'∈R^n} max{p(x'), h(x', v)} ≤ max{p(x), h(x, v)} ≤ max{p(x), (1/2) h(x, v̄)} < 0.

Consequently, lim_{q→∞} α_{k_q} ≤ max{p(x), (1/2) h(x, v̄)} < 0. This is a contradiction; hence v̄ ∈ X*. Moreover, since {v^{k_q}} ⊂ (S_1 + C)* ⊂ C* and C* is a closed set, we have lim_{q→∞} v^{k_q} = v̄ ∈ C*. Therefore v̄ ∈ (X + C)* = X* ∩ C*. □

Corollary 3.1. Assume that {v^k} is an infinite sequence such that, for all k, v^k is an optimal solution of problem (D_k) at iteration k of Algorithm IA, and that v̄ is an accumulation point of {v^k}. Then v̄ ∉ int X*.

Proof. Let a subsequence {v^{k_q}} ⊂ {v^k} converge to v̄. By Lemma 3.3, v^{k_q} ∉ int X* for all q. Since R^n \ int X* is a closed set, we have lim_{q→∞} v^{k_q} = v̄ ∈ R^n \ int X*. □

Moreover, we get from the following theorem that every accumulation point of {v^k} solves problem (DP).

Theorem 3.6. Assume that {v^k} is an infinite sequence such that, for all k, v^k is an optimal solution of (D_k) at iteration k of Algorithm IA, and that v̄ is an accumulation point of {v^k}. Then v̄ solves problem (DP).

Proof. Let a subsequence {v^{k_q}} ⊂ {v^k} converge to v̄. Since f is continuous over R^n, h is continuous over R^n × R^n, X is a compact set and {x ∈ R^n : ⟨v, x⟩ ≥ 1, x ∈ X} = {x ∈ R^n : h(x, v) ≤ 0, x ∈ X} ≠ ∅ for any v ∈ C* \ int X*, we obtain that g^H is upper semi-continuous over C* \ int X* [5]. Therefore, by condition (6),

  g^H(v̄) ≥ lim sup_{q→∞} g^H(v^{k_q}) ≥ sup(DP).

By Theorem 3.5, v̄ ∈ (X + C)*. Hence g^H(v̄) ≤ sup(DP). Consequently,

  g^H(v̄) = lim_{q→∞} g^H(v^{k_q}) = sup(DP). □

By Theorems 3.5 and 3.6, every accumulation point of {v^k} belongs to the feasible set of problem (DP) and solves problem (DP).

Remark 3.3. At iteration k of Algorithm IA, from assumptions (B2) and (B3), and since 0 ∈ {x ∈ R^n : ⟨v^k, x⟩ < 1}, every optimal solution of problem (3) belongs to {x ∈ R^n : ⟨v^k, x⟩ = 1}.

Remark 3.4. For the feasible set R^n \ int(X + C) of (MP), we have


  R^n \ int(X + C) ⊃ X \ int(X + C) ≠ ∅.

Theorem 3.7. Assume that {x(k)} is an infinite sequence such that, for all k, x(k) is an optimal solution of problem (P_k) at iteration k of Algorithm IA, and that x̄ is an accumulation point of {x(k)}. Then x̄ belongs to R^n \ int(X + C) and solves problem (MP).

Proof. Let a subsequence {x(k_q)} ⊂ {x(k)} converge to x̄. Then there is a sequence {v^{k_q}} such that v^{k_q} is an optimal solution of (D_{k_q}) at iteration k_q of the algorithm. By Remark 3.3, ⟨v^{k_q}, x(k_q)⟩ = 1 for all q. Therefore lim_{q→∞} ⟨v^{k_q}, x(k_q)⟩ = lim_{q→∞} ⟨v^{k_q}, x̄⟩ = 1. Moreover, for every accumulation point v̄ of {v^{k_q}}, ⟨v̄, x̄⟩ = 1. By Theorem 3.5, since v̄ ∈ (X + C)*, we obtain x̄ ∈ bd(X + C); consequently, x̄ ∉ int(X + C). Since {x(k_q)} ⊂ X and, for any q, x(k_q) is an optimal solution of problem (3) at iteration k_q of the algorithm, we get g^H(v^{k_q}) = −g(x(k_q)) = −f(x(k_q)) for all q. Therefore, by Theorem 3.6 and the continuity of f,

  inf(MP) = −sup(DP) = −lim_{q→∞} g^H(v^{k_q}) = lim_{q→∞} f(x(k_q)) = f(x̄). □


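The duality relation inf(MP) = −sup(DP) that underlies Theorems 3.4–3.7 can be checked on a small instance of our own (not from the paper): n = 2, f(x) = ‖x‖², X the closed unit disk and C = {x : x_1 ≤ 0, x_2 ≤ 0} (criteria c^1 = (1, 0), c^2 = (0, 1)). For this instance z ∈ X + C iff the componentwise positive part z_+ lies in the disk, g^H(u) = −1/‖u‖² for u ≠ 0, and (X + C)* = X* ∩ C* is the quarter disk {u ≥ 0, ‖u‖ ≤ 1}. A crude grid search recovers both optimal values, approximately 1 and −1:

```python
# Toy instance: f(x) = ||x||^2, X = unit disk, C = {x : x1 <= 0, x2 <= 0}.
def in_X_plus_C(z1, z2):
    # z ∈ X + C  iff  the positive part (max(z1,0), max(z2,0)) is in the disk
    zp1, zp2 = max(z1, 0.0), max(z2, 0.0)
    return zp1 * zp1 + zp2 * zp2 <= 1.0

grid = [i / 100.0 for i in range(-250, 251)]

# inf(MP): minimize ||x||^2 outside int(X + C)  (grid approximation).
inf_mp = min(x1 * x1 + x2 * x2
             for x1 in grid for x2 in grid
             if not in_X_plus_C(x1, x2))

# sup(DP): maximize g^H(u) = -1/||u||^2 over the quarter disk (X + C)*.
sup_dp = max(-1.0 / (u1 * u1 + u2 * u2)
             for u1 in grid for u2 in grid
             if u1 >= 0.0 and u2 >= 0.0 and 0.0 < u1 * u1 + u2 * u2 <= 1.0)
```
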

4. A stratagem terminating Algorithm IA after finitely many iterations

In this section, in order to terminate Algorithm IA after finitely many iterations, we propose another stopping criterion for the algorithm. Since every objective function of problem (P) is an affine function, the following theorem holds.

Theorem 4.1 (See Sawaragi, Nakayama and Tanino [6], Theorems 3.5.3 and 3.5.4). Assume that problem (P) satisfies the Kuhn–Tucker constraint qualification at x̄ ∈ X. Then a necessary and sufficient condition for x̄ to be a weakly efficient solution of problem (P) is that there exist μ ∈ R^K and λ ∈ R^t such that
(i) −Σ_{i=1}^K μ_i c^i + Σ_{j=1}^t λ_j ∇p_j(x̄) = 0,
(ii) Σ_{j=1}^t λ_j p_j(x̄) = 0,
(iii) μ ≥ 0, λ ≥ 0 and μ_{i_0} > 0 for some i_0 ∈ {1, …, K}.

Remark 4.1. For any x ∈ X such that p(x) > p(0), problem (P) satisfies the Kuhn–Tucker constraint qualification at x because problem (P) satisfies assumption (A1).

Let J(x) := {j : p_j(x) = p(x), j = 1, …, t}. Then, for any x ∈ X, the subdifferential ∂p(x) of p at x is formulated as

  ∂p(x) = { y ∈ R^n : y = Σ_{j∈J(x)} λ_j ∇p_j(x), Σ_{j∈J(x)} λ_j = 1, λ_j ≥ 0 for all j ∈ J(x) }.

By the principle of duality, C* is formulated as

  C* = { x ∈ R^n : x = Σ_{i=1}^K μ_i c^i, μ_i ≥ 0, i = 1, …, K } = { x ∈ R^n : ⟨u, x⟩ ≤ 0 for all u ∈ E(C) }.
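For a single differentiable constraint (t = 1) we have J(x̄) = {1} and ∂p(x̄) = {∇p_1(x̄)}, so checking whether C* meets ∂p(x̄) reduces to testing max_{u∈E(C)} ⟨u, ∇p_1(x̄)⟩ ≤ 0. The following sketch applies this to a toy instance of our own (not from the paper): X the unit disk and criteria c^1 = (1, 0), c^2 = (0, 1), for which E(C) = {(−1, 0), (0, −1)}; it combines the test with the boundary condition p(x̄) = 0 of the forthcoming Theorem 4.2.

```python
def weakly_efficient(x, tol=1e-9):
    """Theorem 4.2 check for p(x) = x1^2 + x2^2 - 1 and criteria
    c^1 = (1, 0), c^2 = (0, 1), for which E(C) = {(-1, 0), (0, -1)}.

    With one constraint, J(x) = {1} and dp(x) = {grad p(x)}, so condition (b)
    holds iff beta0 = max_{u in E(C)} <u, grad p(x)> <= 0.
    """
    p = x[0] ** 2 + x[1] ** 2 - 1.0
    gx, gy = 2.0 * x[0], 2.0 * x[1]
    beta0 = max(-gx, -gy)                  # <(-1,0), grad p> and <(0,-1), grad p>
    return abs(p) <= tol and beta0 <= tol  # (a) p(x) = 0 and (b) C* meets dp(x)

# For this instance the weakly efficient set is the arc {(cos t, sin t) : 0 <= t <= pi/2}.
```
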


Theorem 4.2. Assume that problem (P) satisfies the Kuhn–Tucker constraint qualification at x̄ ∈ X. Then a necessary and sufficient condition for x̄ ∈ X to be a weakly efficient solution of problem (P) is that x̄ satisfies the following conditions:
(a) p(x̄) = 0,
(b) C* ∩ ∂p(x̄) ≠ ∅.

Proof. We shall show that x̄ ∈ X satisfies conditions (a) and (b) if and only if there exist μ̄ ∈ R^K and λ̄ ∈ R^t satisfying conditions (i), (ii) and (iii) of Theorem 4.1.

Suppose that x̄ satisfies conditions (a) and (b). By condition (a), for any λ̄ ≥ 0 such that λ̄_j = 0 for all j ∉ J(x̄), we have

  Σ_{j=1}^t λ̄_j p_j(x̄) = 0.  (10)

Moreover, by condition (b), there exist μ̄ ≥ 0 and λ'_j ≥ 0, j ∈ J(x̄), such that

  Σ_{i=1}^K μ̄_i c^i = Σ_{j∈J(x̄)} λ'_j ∇p_j(x̄) ≠ 0  and  Σ_{j∈J(x̄)} λ'_j = 1  (11)

(note that 0 ∉ ∂p(x̄), since p(x̄) = 0 > p(0)). Let, for all j ∈ {1, …, t}, λ̄_j := λ'_j if j ∈ J(x̄) and λ̄_j := 0 if j ∉ J(x̄). Then, by conditions (10) and (11), μ̄ and λ̄ satisfy conditions (i) and (ii) of Theorem 4.1. Moreover, since Σ_{i=1}^K μ̄_i c^i ≠ 0 by condition (11), μ̄_{i_0} > 0 for some i_0 ∈ {1, …, K}. That is, μ̄ and λ̄ satisfy condition (iii) of Theorem 4.1.

Conversely, suppose that for x̄ ∈ X there exist μ̄ ∈ R^K and λ̄ ∈ R^t satisfying conditions (i), (ii) and (iii) of Theorem 4.1. Since x̄ ∈ X, p_j(x̄) ≤ 0 for all j ∈ {1, …, t}, and p_j(x̄) < 0 for all j ∉ J(x̄). By conditions (ii) and (iii),

  λ̄_j = 0 for all j ∉ J(x̄).  (12)

Hence Σ_{j=1}^t λ̄_j ∇p_j(x̄) = Σ_{j∈J(x̄)} λ̄_j ∇p_j(x̄). By condition (iii) together with assumption (A2), Σ_{i=1}^K μ̄_i c^i ≠ 0. Moreover, by conditions (i) and (iii),

  0 ≠ Σ_{i=1}^K μ̄_i c^i = Σ_{j=1}^t λ̄_j ∇p_j(x̄) = Σ_{j∈J(x̄)} λ̄_j ∇p_j(x̄).  (13)

This implies that λ̄_j > 0 for some j ∈ J(x̄), that is,

  Σ_{j∈J(x̄)} λ̄_j > 0.  (14)

Let Λ := Σ_{j∈J(x̄)} λ̄_j and λ'_j := λ̄_j/Λ for all j ∈ J(x̄). Then, since Σ_{j∈J(x̄)} λ'_j = 1 and λ'_j ≥ 0 for all j ∈ J(x̄), we have Σ_{j∈J(x̄)} λ'_j ∇p_j(x̄) ∈ ∂p(x̄). We notice that C* is a cone. Therefore, by condition (13),

  Σ_{j∈J(x̄)} λ'_j ∇p_j(x̄) = (1/Λ) Σ_{i=1}^K μ̄_i c^i ∈ C*.


Consequently, x̄ satisfies condition (b). We recall that p(x̄) ≤ 0 because x̄ ∈ X. Therefore, by conditions (ii), (iii) and (12), and the definition of J(x̄),

  0 = Σ_{j=1}^t λ̄_j p_j(x̄) = Σ_{j∈J(x̄)} λ̄_j p_j(x̄) = (Σ_{j∈J(x̄)} λ̄_j) p(x̄) ≤ 0.

By condition (14), we obtain p(x̄) = 0. Consequently, x̄ satisfies condition (a). □



Note that, for any x ∈ R^n, ∂p(x) is a compact convex set ([3], Theorem 23.4). Moreover, we note that 0 ∉ ∂p(x) for any x ∈ R^n such that p(x) > p(0), because x is not a minimum point of the convex function p over R^n if p(x) > p(0). For x̄ ∈ X with p(x̄) > p(0), we consider the following problem:

  minimize max_{u∈E(C)} ⟨u, y⟩  subject to y ∈ ∂p(x̄).  (15)

Notice that E(C) is a finite set (see Section 3.1). Let β_0 be the optimal value of problem (15) for x̄. Then the following theorem holds.

Theorem 4.3. Assume that x̄ ∈ X satisfies p(x̄) > p(0) and β_0 ≤ 0. Then C* ∩ ∂p(x̄) ≠ ∅.

Proof. Let ȳ be an optimal solution of problem (15) for x̄. It is obvious that ȳ belongs to ∂p(x̄). Since max_{u∈E(C)} ⟨u, ȳ⟩ = β_0 ≤ 0, ȳ ∈ C* = {y ∈ R^n : ⟨u, y⟩ ≤ 0 for all u ∈ E(C)}. Consequently ȳ ∈ C* ∩ ∂p(x̄). □

Let E(C) = {u^1, …, u^m}. Then problem (15) can be transformed into the following equivalent linear programming problem:

  minimize η
  subject to ⟨u^i, Σ_{j∈J(x̄)} λ_j ∇p_j(x̄)⟩ ≤ η, i = 1, …, m,  (16)
             Σ_{j∈J(x̄)} λ_j = 1, λ_j ≥ 0 for all j ∈ J(x̄).

Let η_0 be the optimal value of problem (16) for x_0 ∈ X such that p(x_0) > p(0). Then η_0 is equal to the optimal value of problem (15) for x_0. It follows from Theorems 4.2 and 4.3 that x_0 ∈ X is a weakly efficient solution of problem (P) if and only if x_0 satisfies p(x_0) = 0 and η_0 ≤ 0.

By Theorem 3.1, the sequence {x(k)} generated by Algorithm IA belongs to X. Therefore p(x(k)) ≤ 0 for all k. Moreover, from Theorem 3.7, every accumulation point of {x(k)} is contained in the feasible set of problem (MP); that is, every accumulation point of {x(k)} is a weakly efficient solution of problem (P). Hence we have

  p(x̄) = 0 and η̄ ≤ 0,  (17)

where x̄ denotes any accumulation point of {x(k)} and η̄ is the optimal value of problem (16) for x̄. Since p is continuous, it follows from (17) that lim_{k→∞} p(x(k)) = 0. Hence, since p(0) < 0, there exists k' such that p(x(k')) > p(0). Let {s_k} be a sequence of positive numbers such that lim_{k→∞} s_k = 0 and Σ_{k=1}^∞ s_k = ∞, and let η_k denote the optimal value of problem (16) for x(k). Then lim sup_{k→∞} s_k η_k ≤ 0. Therefore, for any ε_1, ε_2 > 0, there exists k' such that

  p(x(k')) > max{p(0), −ε_1} and s_{k'} η_{k'} ≤ ε_2.  (18)

According to (18), using admissible tolerances ε_1, ε_2 > 0, the algorithm terminates after finitely many iterations. By doing this, the weak


eciency condition p…x† ˆ 0 and g…x† 6 0 is relaxed to p…x† > max fp…0†; c1 g and sk g…x† 6 c2 . From this point of view, we replace Step 2 of the algorithm by the following: Step 2. Solve problem (16) for x…k†. Let gk be the optimal value of the problem. (a) If p…x…k†† > max fp…0†; c1 g and sk gk 6 c2 , then stop; vk and x…k† are compromise solutions for problems (DP) and (MP), respectively. (b) Otherwise, solve problem (4) for vk . Let zk be an optimal solution for the problem. Set Sk‡1 ˆ conv …Sk [ fzk g† and k k ‡ 1. Go to Step 1. 5. An inner approximation method incorporating a branch and bound procedure 5.1. An inner approximation method incorporating a branch and bound procedure To execute the algorithm proposed in Section 3, it is necessary to solve problem (3) at every iteration. In this section, we propose now another inner approximation algorithm incorporating a branch and bound procedure. At every iteration of the algorithm, it is not necessary to solve problem (3). The algorithm is formulated as follows: Algorithm IA-BB. Initialization. Generate a polytope S1  Rn and a hyper rectangular M1  Rn such that S1  X , 0 2 int S1 and X  M1 . Choose s P 1 and penalty parameters l > 0, 1 < B < 2. Let M1 ˆ fM1 g and  t…M1 † ˆ 1. For convenience, let V ……S0 ‡ C† † ˆ f0g and M0 ˆ ;. Set k 1 and go to Step 1. Step 1. For every M 2 Mk n Mk 1 , choose xM 2 V …M† satisfying FlM …xM † ˆ max fFlM …z† : z 2 V …M†g; Pt s where FlM …x† ˆ f …x† ‡ lM h…x†, h…x† ˆ jˆ1 ‰max f0; pj …x†gŠ , lM ˆ lBt…M† . Moreover, for every M 2 Mk n Mk 1 , let b…M† ˆ FlM …xM †

…krf …xM †k ‡ lM kdM k† D…M†;

where dM 2 oh…xM †. Select Mk 2 Mk satisfying b…Mk † ˆ min fb…M† : M 2 Mk g and let x…k† ˆ xMk . Choose vk  from V ……Sk ‡ C† † such that hvk ; x…k†i P 1. k Step 2. For v , let ak and zk be the optimal value of problem (4) and an optimal solution, respectively. If ak ˆ 0, h…x…k†† ˆ 0, b…Mk † ˆ FlMk …x…k††, then stop; vk and x…k† solve problems (DP) and (MP), respectively. Otherwise, go to Step 3. Step 3. Let  conv …Sk [ fzk g† if ak < 0; Sk‡1 ˆ Sk if ak ˆ 0; and let Hk be a partition which is generated by dividing Mk (described latter in Section 5.2). For every M 2 Hk , let t…M† ˆ t…Mk † ‡ 1. Set Mk‡1 ˆ …Mk n fMk g† [ …Hk n Qk †; where Qk ˆ fM 2 Pk : M  X [ …Sk‡1 ‡ C†g. Replace k by k ‡ 1 and return to Step 1. It often occurs that we do not get an optimal solution of problem (MP) within ®nite iterations. In order to terminate the algorithm within ®nite iterations, we shall present another stopping condition in Section 5.3.
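For illustration, the skeleton of Algorithm IA-BB (selection of the box with the smallest lower bound, midpoint bisection, and a bound-gap stopping test) can be sketched in Python on a toy instance. The objective f, the box, and the tolerance below are hypothetical stand-ins; the penalty term h is omitted (h = 0), and the dual steps involving problems (4), (DP) and the sets S_k + C are not modelled, so this is only the branch and bound skeleton, not the full method:

```python
import heapq
import itertools
import math

def f(x):
    # Hypothetical smooth convex objective with minimum 0 at (0.3, 0.7).
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

def grad_norm(x):
    return math.hypot(2.0 * (x[0] - 0.3), 2.0 * (x[1] - 0.7))

def vertices(lo, hi):
    return list(itertools.product(*zip(lo, hi)))

def beta(lo, hi):
    """Step 1 style bound: f(x_M) - ||grad f(x_M)|| * D(M) <= min f over the box."""
    xM = max(vertices(lo, hi), key=f)
    D = math.dist(lo, hi)                      # diameter of the box
    return f(xM) - grad_norm(xM) * D

def bisect(lo, hi):
    """Step 3 style partition: split the box through its midpoint into 2**n parts."""
    mid = tuple((a + b) / 2.0 for a, b in zip(lo, hi))
    return [(tuple(map(min, c, mid)), tuple(map(max, c, mid)))
            for c in vertices(lo, hi)]

def toy_branch_and_bound(lo, hi, tol=1e-3, max_iter=5000):
    heap, best = [], math.inf

    def push(a, b):
        nonlocal best
        # Incumbent from vertex evaluations; bound stored once per box.
        best = min(best, min(f(v) for v in vertices(a, b)))
        heapq.heappush(heap, (beta(a, b), a, b))

    push(lo, hi)
    for _ in range(max_iter):
        lb, a, b = heapq.heappop(heap)         # box with the smallest bound
        if best - lb <= tol:                   # bound gap small enough: stop
            return best, lb
        for sa, sb in bisect(a, b):            # refine the selected box
            push(sa, sb)
    return best, lb
```

On the unit box the returned incumbent approaches the true minimum 0, with the incumbent and the smallest bound agreeing up to the tolerance on termination.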


It will be described later in Section 5.2 that, for each k, ℳ_k and ℋ_k satisfy the following conditions:
(I) D(M) = D(M_k)/2 for all M ∈ ℋ_k,
(II) X_e ⊂ ∪_{M∈ℳ_k} M and M ⊄ X for all M ∈ ℳ_k.
In Step 1 of the algorithm, for every M ∈ ℳ_k, we have
  min{F_{μ_M}(x) : x ∈ M} ≤ F_{μ_M}(x_M).
Since f and h are convex functions ([7], Chapter 9), for every x ∈ M we get
  F_{μ_M}(x) = f(x) + μ_M h(x)
    ≥ f(x_M) + ⟨∇f(x_M), x − x_M⟩ + μ_M h(x_M) + μ_M ⟨d_M, x − x_M⟩
    ≥ f(x_M) + μ_M h(x_M) − (‖∇f(x_M)‖ + μ_M ‖d_M‖) D(M)
    = F_{μ_M}(x_M) − (‖∇f(x_M)‖ + μ_M ‖d_M‖) D(M) = β(M).
This implies min{F_{μ_M}(x) : x ∈ M} ≥ β(M). Moreover, since F_{μ_M}(x) = f(x) for any x ∈ M ∩ X, M ∈ ℳ_k, it follows from condition (II) that
  min_{M∈ℳ_k} β(M) ≤ inf(OES) = inf(MP).   (19)
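As a numerical sanity check of this underestimation property, the following sketch forms β(M) for a sample box and verifies β(M) ≤ F_{μ_M}(x) on a grid of points of M. The functions f and p_j, the parameters μ and s, and the box are hypothetical stand-ins, not taken from the paper:

```python
import itertools
import math

mu, s = 1.5, 2                                  # hypothetical penalty parameters

def f(x):
    return x[0] ** 2 + x[1] ** 2                # convex objective

def grad_f(x):
    return (2.0 * x[0], 2.0 * x[1])

def p(x):                                       # constraint functions p_j
    return (x[0] + x[1] - 1.0, -x[0])

def h(x):                                       # h(x) = sum_j max(0, p_j(x))**s
    return sum(max(0.0, pj) ** s for pj in p(x))

def subgrad_h(x):
    # For s = 2, h is differentiable; gradient of the two penalty terms,
    # hard-coded for the p_j chosen above.
    t1 = 2.0 * max(0.0, x[0] + x[1] - 1.0)
    t2 = 2.0 * max(0.0, -x[0])
    return (t1 - t2, t1)

def F(x):
    return f(x) + mu * h(x)

lo, hi = (0.5, 0.5), (1.5, 1.5)                 # sample box M
verts = list(itertools.product(*zip(lo, hi)))
xM = max(verts, key=F)                          # vertex choice as in Step 1
D = math.dist(lo, hi)                           # diameter D(M)
gF = math.hypot(*grad_f(xM))
gH = math.hypot(*subgrad_h(xM))
beta = F(xM) - (gF + mu * gH) * D               # the bound beta(M)

grid = [(lo[0] + i * 0.125, lo[1] + j * 0.125)
        for i in range(9) for j in range(9)]    # sample points of M
```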

In Step 2 of the algorithm, x(k) ∈ X_e if α_k = 0 and h(x(k)) = 0, so that F_{μ_{M_k}}(x(k)) = f(x(k)) ≥ inf(MP). Moreover, from (19), f(x(k)) = inf(MP) if β(M_k) = F_{μ_{M_k}}(x(k)). That is, x(k) is an optimal solution of problem (MP).

5.2. Construction of a partition

At Initialization of the algorithm, since X is compact and nonempty, we can generate a hyperrectangle M₁ defined as follows:
  M₁ = {x ∈ Rⁿ : a₁ ≤ x ≤ b₁},
where a₁, b₁ ∈ Rⁿ satisfy a₁ⱼ ≤ min{⟨eⱼ, x⟩ : x ∈ X} and b₁ⱼ ≥ max{⟨eⱼ, x⟩ : x ∈ X} (j = 1, …, n). Since int X ≠ ∅, a₁ < b₁. Then ℳ₁ = {M₁} satisfies condition (II).
At iteration k of the algorithm, suppose that M_k selected in Step 1 is defined as follows:
  M_k = {x ∈ Rⁿ : a_k ≤ x ≤ b_k},
where a_k, b_k ∈ Rⁿ (a_k < b_k). Since M_k is a hyperrectangle, it has 2ⁿ vertices. Denote by V(M_k) = {y¹, …, y^{2ⁿ}} the vertex set of M_k, and let y(M_k) = (a_k + b_k)/2. Then M_k can be divided into 2ⁿ hyperrectangles as follows:
  M(yⁱ) = {x ∈ Rⁿ : a(yⁱ) ≤ x ≤ b(yⁱ)}, i = 1, …, 2ⁿ,   (20)
where a(yⁱ)ⱼ = min{yⁱⱼ, y(M_k)ⱼ} and b(yⁱ)ⱼ = max{yⁱⱼ, y(M_k)ⱼ} (j = 1, …, n). Let ℋ_k = {M(yⁱ) : i = 1, …, 2ⁿ}. From (20), the following equation holds:
  D(M(yⁱ)) = D(M_k)/2 for all i ∈ {1, …, 2ⁿ}.   (21)
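The construction (20) and property (21) can be checked directly in code; the box below is a hypothetical example in R³:

```python
import itertools
import math

def bisect_box(lo, hi):
    """Divide the box {x : lo <= x <= hi} through its midpoint y(M) into
    2**n sub-boxes M(y^i), as in (20)."""
    mid = tuple((a + b) / 2.0 for a, b in zip(lo, hi))
    return [(tuple(map(min, v, mid)), tuple(map(max, v, mid)))
            for v in itertools.product(*zip(lo, hi))]

def interiors_overlap(p, q):
    # Open boxes overlap iff they strictly overlap in every coordinate.
    (a1, b1), (a2, b2) = p, q
    return all(max(a1[j], a2[j]) < min(b1[j], b2[j]) for j in range(len(a1)))

lo, hi = (0.0, 0.0, 1.0), (2.0, 4.0, 3.0)       # a sample box in R^3
parts = bisect_box(lo, hi)
D = math.dist(lo, hi)                           # diameter of the original box
```

One sub-box is generated per vertex, each has exactly half the diameter (property (21)), and distinct sub-boxes share at most a face, which is the source of condition (C2).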


Hence ℋ_k satisfies condition (I). Moreover, from the definition of Q_k, ℳ_{k+1} satisfies condition (II) if ℳ_k satisfies condition (II). Furthermore, if M_k is divided as in (20) at every iteration, then for any k,
(C1) int M ≠ ∅ for any M ∈ ℳ_k,
(C2) int M′ ∩ int M″ = ∅ for any M′, M″ ∈ ℳ_k (M′ ≠ M″).

5.3. Convergence of the algorithm

Let {x(k)} and {v^k} be sequences generated by the algorithm. Then, by Theorem 3.5, every accumulation point v̄ of {v^k} belongs to (X + C)*. Since {x(k)} ⊂ M₁ and M₁ is compact, {x(k)} has an accumulation point. In this subsection, it will be shown that every accumulation point x̄ of {x(k)} is contained in X_e and that x̄ solves problem (MP).

Lemma 5.1. Let {x(k)} and {v^k} be generated by the algorithm, and let x̄ and v̄ be accumulation points of {x(k)} and {v^k}, respectively. Then x̄ belongs to Rⁿ \ int(X + C).

Proof. Let {x(k_q)} ⊂ {x(k)} converge to x̄, and let {v^{k_q}} be the corresponding subsequence of {v^k}. Without loss of generality, we can assume that {v^{k_q}} converges to v̄. Since ⟨v^{k_q}, x(k_q)⟩ ≥ 1 for all q, lim_{q→∞} ⟨v^{k_q}, x(k_q)⟩ = ⟨v̄, x̄⟩ ≥ 1. By Theorem 3.5, v̄ ∈ (X + C)*. Therefore x̄ ∉ int(X + C). □

Lemma 5.2. Let x̄ be an accumulation point of {x(k)}. Then, for every k, there exists M′_k ∈ ℳ_k such that x̄ ∈ M′_k.

Proof. In order to obtain a contradiction, suppose that there exists k′ ∈ {1, 2, …} such that x̄ ∉ M for all M ∈ ℳ_{k′}. Then, since ∪_{M∈ℳ_{k′}} M is closed, there exists δ > 0 such that B(x̄, δ) ∩ (∪_{M∈ℳ_{k′}} M) = ∅. Since {x(k)}_{k≥k′} ⊂ ∪_{M∈ℳ_{k′}} M, we have B(x̄, δ) ∩ {x(k)}_{k≥k′} = ∅. This contradicts the fact that x̄ is an accumulation point of {x(k)}. Consequently, for every k, there exists M′_k ∈ ℳ_k such that x̄ ∈ M′_k. □

For any x ∈ Rⁿ, let T_k(x) = {M ∈ ℳ_k : x ∈ M}. Then the following lemma holds:

Lemma 5.3. Let x̄ be an accumulation point of {x(k)}. Then, for any k′ ∈ {1, 2, …}, there exists k ≥ k′ such that M_k ∈ T_{k′}(x̄).

Proof. By Lemma 5.2, T_k(x̄) ≠ ∅ for any k. In order to obtain a contradiction, suppose that for some k′ there is no k ≥ k′ such that M_k ∈ T_{k′}(x̄). From this assumption and the definition of ℳ_{k+1}, T_{k′}(x̄) ⊂ ℳ_k for any k ≥ k′. By (C1), each M ∈ ℳ_k (k ∈ {1, 2, …}) has an interior point and is a hyperrectangle, so
  M = cl(int M).   (22)
Hence, by (22) and (C2), for any k ≥ k′,
  M′ ∩ int M″ = ∅ for all M′ ∈ T_{k′}(x̄), M″ ∈ ℳ_k \ T_{k′}(x̄).
This implies that, for each M″ ∈ ℳ_k \ T_{k′}(x̄),
  int M″ ⊂ Rⁿ \ (∪_{M′∈T_{k′}(x̄)} M′) ⊂ Rⁿ \ int(∪_{M′∈T_{k′}(x̄)} M′).


Since Rⁿ \ int(∪_{M′∈T_{k′}(x̄)} M′) is closed and every M″ ∈ ℳ_k \ T_{k′}(x̄) satisfies (22),
  M″ ⊂ Rⁿ \ int(∪_{M′∈T_{k′}(x̄)} M′) for all M″ ∈ ℳ_k \ T_{k′}(x̄).
Thus,
  int(∪_{M′∈T_{k′}(x̄)} M′) ∩ (∪_{M″∈ℳ_k\T_{k′}(x̄)} M″) = ∅.   (23)
Moreover, by (C1), there exists ε > 0 such that
  B(x̄, ε) ⊂ int(∪_{M′∈T_{k′}(x̄)} M′) for all k ≥ k′.   (24)
From the assumption and the definitions of M_k and x(k), for any k ≥ k′,
  x(k) ∈ ∪_{M″∈ℳ_k\T_{k′}(x̄)} M″.   (25)
By (23)–(25),
  B(x̄, ε) ∩ {x(k)}_{k≥k′} = ∅.
This contradicts the fact that x̄ is an accumulation point of {x(k)}. Consequently, for each k′, there exists k ≥ k′ such that M_k ∈ T_{k′}(x̄). □

Lemma 5.4. Let a family A = {Aᵢ ⊂ Rⁿ : i ∈ K} (K an index set) satisfy the following:
(i) For each i ∈ K, the interior of Aᵢ is nonempty and Aᵢ is a hyperrectangle.
(ii) For each i₁, i₂ ∈ K (i₁ ≠ i₂), int A_{i₁} ∩ int A_{i₂} = ∅.
(iii) For some x ∈ Rⁿ, x ∈ Aᵢ for every i ∈ K.
Then |A| ≤ 2ⁿ.

Lemma 5.5. Let x̄ be an accumulation point of {x(k)}. Then there exists a subsequence {M_{k_i}} ⊂ {M_k} such that
  M_{k₁} ⊃ M_{k₂} ⊃ ⋯ ⊃ M_{k_i} ⊃ ⋯ ⊃ {x̄}.   (26)
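Lemma 5.4 can be illustrated (though of course not proved) numerically: in R², the 2² = 4 closed quadrant boxes around a common point satisfy conditions (i)-(iii), so the bound 2ⁿ is attained:

```python
import itertools

n = 2
# The 2**n unit boxes obtained by bisecting [-1, 1]^n at the origin: each
# box keeps the origin as a vertex, so all of them contain the common
# point 0, while their interiors are pairwise disjoint.
boxes = []
for signs in itertools.product((-1.0, 1.0), repeat=n):
    lo = tuple(min(0.0, s) for s in signs)
    hi = tuple(max(0.0, s) for s in signs)
    boxes.append((lo, hi))

def contains(box, x):
    lo, hi = box
    return all(lo[j] <= x[j] <= hi[j] for j in range(n))

def interiors_overlap(p, q):
    (a1, b1), (a2, b2) = p, q
    return all(max(a1[j], a2[j]) < min(b1[j], b2[j]) for j in range(n))

origin = (0.0,) * n
```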

Proof. From Lemma 5.3, for any k there exists k′ ≥ k such that M_{k′} ∈ T_k(x̄). Moreover, for k′ + 1 there exists k″ ≥ k′ + 1 such that M_{k″} ∈ T_{k′+1}(x̄). Therefore, without loss of generality, we can assume that x̄ ∈ M_k for any k. Next, we shall show that there exists a subsequence {M_{k_i}} ⊂ {M_k} satisfying
  M_{k₁} ⊃ M_{k₂} ⊃ ⋯ ⊃ M_{k_i} ⊃ ⋯   (27)
To obtain a contradiction, suppose that there is no subsequence {M_{k_i}} ⊂ {M_k} satisfying (27). From this assumption, there exists k(1) such that M_k ⊄ M_{k(1)} for any k > k(1). Moreover, there exists k(2) > k(1) such that M_k ⊄ M_{k(2)} for any k > k(2). Similarly, there exists a sequence {k(i)} such that k(j) > k(i) (j > i) and M_k ⊄ M_{k(i)} for any k > k(i), i ∈ {1, 2, …}. Note that for any i > 1 there exists M(i) ∈ ℳ_{k(1)} such that


M_{k(i)} ⊂ M(i). Since M_{k(i)} ⊄ M_{k(1)} for any i > 1, by (C2), int M(i) ∩ int M_{k(1)} = ∅. This implies that for any i > 1,
  int M_{k(i)} ∩ int M_{k(1)} = ∅.
Similarly, for any i′, i″ ∈ {1, 2, …} (i′ ≠ i″),
  int M_{k(i′)} ∩ int M_{k(i″)} = ∅.   (28)
Therefore,
  |{M_{k(i)} : i = 1, 2, …}| = +∞ > 2ⁿ.   (29)
However, since {M_{k(i)}} satisfies conditions (i) and (ii) in Lemma 5.4 and x̄ ∈ M_{k(i)} (i = 1, 2, …), it follows from Lemma 5.4 that |{M_{k(i)} : i = 1, 2, …}| ≤ 2ⁿ. This contradicts (29). Consequently, there exists a subsequence {M_{k_i}} ⊂ {M_k} satisfying (26). □

From (21), a subsequence {M_{k_i}} ⊂ {M_k} satisfying (26) also satisfies the following conditions:
  D(M_{k_{i+1}}) ≤ D(M_{k_i})/2 for all i,   (30)
  M_{k_i} → {x̄} as i → ∞.   (31)

Theorem 5.1. Let x̄ be an accumulation point of {x(k)}. Then x̄ belongs to X_e. Furthermore, x̄ solves problem (MP).

Proof. By Lemma 5.1, x̄ ∈ Rⁿ \ int(X + C). Hence, we shall show that x̄ ∈ X. In order to obtain a contradiction, suppose that x̄ ∉ X. For x̄, let {M_{k_i}} ⊂ {M_k} satisfy condition (26) in Lemma 5.5. Then lim_{i→∞} x(k_i) = x̄. Hence, we have
  lim inf_{i→∞} β(M_{k_i}) = lim inf_{i→∞} (F_{μ_{M_{k_i}}}(x(k_i)) − (‖∇f(x(k_i))‖ + μ_{M_{k_i}} ‖d_{M_{k_i}}‖) D(M_{k_i})).   (32)

Since X is a closed set, by (31), there exists i₀ such that
  M_{k_i} ⊂ Rⁿ \ X for all i ≥ i₀.   (33)
By (33), {x(k_i)}_{i≥i₀} ⊂ Rⁿ \ X. Without loss of generality, we can assume that {x(k_i)} ⊂ Rⁿ \ X. Then we have
  lim_{i→∞} h(x(k_i)) = h(x̄) > 0.   (34)

Since {M_{k_i}} satisfies condition (26),
  μ_{M_{k_{i+1}}} = μ B^{t(M_{k_{i+1}})} = μ_{M_{k₁}} B^{t(M_{k_{i+1}}) − t(M_{k₁})} ≥ μ_{M_{k₁}} B^i.
Hence,
  lim inf_{i→∞} F_{μ_{M_{k_i}}}(x(k_i)) = lim inf_{i→∞} (f(x(k_i)) + μ_{M_{k_i}} h(x(k_i))) ≥ f(x̄) + h(x̄) μ_{M_{k₁}} lim_{i→∞} B^{i−1} = +∞.   (35)


By (31), it follows that
  lim inf_{i→∞} (−‖∇f(x(k_i))‖ D(M_{k_i})) = −lim sup_{i→∞} ‖∇f(x(k_i))‖ D(M_{k_i}) ≥ −‖∇f(x̄)‖ D(M_{k₁}) lim_{i→∞} (1/2)^{i−1} = 0.   (36)

Since M₁ is compact, ∪_{x∈M₁} ∂h(x) is compact (Rockafellar [3], Theorem 24.7). Hence, there exists Ξ > 0 such that ‖d‖ ≤ Ξ for all d ∈ ∪_{x∈M₁} ∂h(x). Therefore ‖d_{M_{k_i}}‖ ≤ Ξ for all i, because {x(k_i)} ⊂ M₁. Since B < 2, we get that
  lim inf_{i→∞} (−μ_{M_{k_i}} ‖d_{M_{k_i}}‖ D(M_{k_i})) = −lim sup_{i→∞} (μ_{M_{k_i}} ‖d_{M_{k_i}}‖ D(M_{k_i})) ≥ −lim sup_{i→∞} (μ_{M_{k_i}} Ξ D(M_{k_i})) ≥ −μ_{M_{k₁}} Ξ D(M_{k₁}) lim_{i→∞} (B/2)^{i−1} = 0.   (37)
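The argument behind (37) is that the product μ_M D(M) is geometric with ratio B/2 < 1: the penalty parameter grows like B^{t(M)} while the diameter shrinks like 2^{−t(M)}. A short numeric sketch, in which the values of μ, B and D(M₁) are hypothetical choices satisfying the paper's conditions μ > 0 and 1 < B < 2:

```python
mu, B, D1 = 1.0, 1.5, 2.0        # hypothetical parameters, with 1 < B < 2

def mu_t(t):                     # penalty parameter mu_M = mu * B**t(M)
    return mu * B ** t

def D_t(t):                      # diameter after t - 1 halvings of D(M_1)
    return D1 / 2.0 ** (t - 1)

# mu_t * D_t = mu * B * D1 * (B/2)**(t-1): a geometric sequence with
# ratio B/2 < 1, so it decreases monotonically to 0.
products = [mu_t(t) * D_t(t) for t in range(1, 40)]
```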

By (32) and (35)–(37), lim inf_{i→∞} β(M_{k_i}) = +∞. This contradicts (19). Therefore, x̄ ∈ X_e = X \ int(X + C).
Next, we shall show that x̄ solves problem (MP). Since x̄ is a feasible solution of problem (MP), f(x̄) ≥ inf(MP). From the definitions of h and μ_M, μ_{M_{k_i}} h(x(k_i)) ≥ 0 for any i. By (19), (36) and (37), we have
  inf(MP) ≥ lim inf_{i→∞} β(M_{k_i})
    = lim inf_{i→∞} (F_{μ_{M_{k_i}}}(x(k_i)) − (‖∇f(x(k_i))‖ + μ_{M_{k_i}} ‖d_{M_{k_i}}‖) D(M_{k_i}))
    ≥ lim inf_{i→∞} F_{μ_{M_{k_i}}}(x(k_i))
    = lim inf_{i→∞} (f(x(k_i)) + μ_{M_{k_i}} h(x(k_i)))
    ≥ lim inf_{i→∞} f(x(k_i))
    = f(x̄).
Consequently, f(x̄) = inf(MP). □

From Theorem 5.1, every accumulation point of {x(k)} is included in X_e. Let γ_k be the optimal value of problem (16), and let {s_k} be a sequence of positive numbers such that lim_{k→∞} s_k = 0 and Σ_{k=1}^{∞} s_k = ∞. Then, from the discussion in Section 4, lim sup_{k→∞} s_k γ_k ≤ 0. Moreover, from Lemma 5.5 and (31), lim sup_{k→∞} (α(M_k) − β(M_k)) = 0. Hence, in order to terminate Algorithm IA-BB after finitely many iterations, using admissible tolerances c₁, c₂, c₃ > 0, we propose the following stopping criterion:
If p(x(k)) > max{p(0), −c₁}, s_k γ_k ≤ c₂ and α(M_k) − β(M_k) ≤ c₃, then stop; x(k) is an approximate solution of problem (MP).
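This stopping rule can be phrased as a small predicate. In the sketch below, the choice s_k = 1/k (which satisfies s_k → 0 and Σ s_k = ∞), the default tolerances, and all sample input values are hypothetical stand-ins:

```python
def s(k):
    # A valid step-size sequence for k >= 1: s_k -> 0 and sum s_k = infinity.
    return 1.0 / k

def should_stop(k, p_xk, p0, gamma_k, alpha_Mk, beta_Mk,
                c1=1e-4, c2=1e-4, c3=1e-4):
    """Relaxed three-part termination test for Algorithm IA-BB:
    near weak efficiency, small scaled dual value, small bound gap."""
    return (p_xk > max(p0, -c1)
            and s(k) * gamma_k <= c2
            and alpha_Mk - beta_Mk <= c3)
```

For instance, at k = 100 a point with p(x(k)) close to 0, a small scaled value s_k γ_k, and a bound gap below c₃ triggers termination, while a point with p(x(k)) far below 0 does not.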


6. Conclusion

In this paper, instead of solving problem (OES) directly, we have presented an inner approximation method for a dual problem. By accepting a compromise on the weak efficiency of the solution to problem (P), the algorithm terminates after finitely many iterations. To execute the algorithm, a convex minimization problem (4) is solved at each iteration. However, we note that it is not necessary to obtain an optimal solution of problem (4) at each step. At iteration k of the algorithm, it suffices to find a point which is contained in X and is not contained in S_k + C. That is, at each step, we can compromise solving problem (4) by finding a point z^k satisfying φ(z^k, v^k) < 0, because z^k belongs to X \ (S_k + C) whenever φ(z^k, v^k) < 0. Moreover, problem (D_k) is solved at each step. To solve problem (D_k), it is necessary to compute the objective function value g_H(v) for every vertex v of the feasible set of problem (D_k). This means that a convex minimization problem (3) is solved for every vertex of (S_k + C)*. By solving problem (D_k), x(k) is obtained. As mentioned above, by solving two kinds of convex minimization problems successively, it is possible to obtain an approximate solution of problem (OES). Moreover, we have proposed another inner approximation algorithm incorporating a branch and bound procedure; to execute this algorithm, it is not necessary to solve problem (3).

References

[1] H. Konno, P.T. Thach, H. Tuy, Optimization on Low Rank Nonconvex Structures, Kluwer Academic Publishers, Dordrecht, 1997.
[2] M.R. Hestenes, Optimization Theory: The Finite Dimensional Case, Wiley, New York, 1975.
[3] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[4] J.P. Aubin, Applied Abstract Analysis, Wiley, New York, 1977.
[5] W.W. Hogan, Point-to-set maps in mathematical programming, SIAM Review 15 (3) (1973).
[6] Y. Sawaragi, H. Nakayama, T. Tanino, Theory of Multiobjective Optimization, Academic Press, Orlando, FL, 1985.
[7] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and Algorithms, second ed., Wiley, New York, 1993.