Computers and Mathematics with Applications 57 (2009) 1196–1203
Approximate generalized proximal-type method for convex vector optimization problem in Banach spaces

Zhe Chen^{a,b,*}, Haiqiao Huang^c, Kequan Zhao^a

^a Department of Mathematics and Computer Science, Chongqing Normal University, Chongqing 400047, China
^b Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
^c School of Fashion Art and Engineering, Beijing Institute of Clothing Technology, Beijing 100029, China
Article history: Received 24 April 2008; received in revised form 15 October 2008; accepted 8 November 2008.

Keywords: Vector optimization; Proximal point algorithm; Lyapunov functional; Banach spaces; Weak convergence; Weak Pareto optimal solution; Error term
Abstract. In this paper, we consider a convex vector optimization problem of finding weak Pareto optimal solutions for an extended vector-valued map from a uniformly convex and uniformly smooth Banach space to a real Banach space, with respect to the partial order induced by a closed, convex and pointed cone with a nonempty interior. We propose an inexact vector-valued proximal-type point algorithm based on a Lyapunov functional, in which the iterates are computed approximately, and prove that the sequence generated by the algorithm converges weakly to a weak Pareto optimal solution of the vector optimization problem under some mild conditions. Our results improve and generalize some known results. © 2009 Elsevier Ltd. All rights reserved.
1. Introduction

Let H be a real Hilbert space with inner product ⟨·,·⟩ and let T : H → 2^H be a maximal monotone operator. Consider the following problem: find x ∈ H such that

0 ∈ T(x).
(1.1)
This problem is very important in both the theory and the methodology of optimization and some related fields. One of the most popular methods for solving it is the proximal point algorithm, which was first introduced by Martinet [1] and whose celebrated development was attained in the work of Rockafellar [2]. The classical proximal point algorithm generates a sequence {z^k} ⊂ H from an initial point z^0 through the iteration

z^{k+1} + c_k v^{k+1} = z^k
(1.2)
where v^{k+1} ∈ T(z^{k+1}) and {c_k} is a sequence of positive real numbers bounded away from zero. In [2] Rockafellar proved that, for a maximal monotone operator T, the sequence {z^k} converges weakly to a zero of T under some mild conditions. Since then many papers have been devoted to investigating various proximal point algorithms, their applications and modifications [3–13]. On the other hand, as noted by Eckstein [14], the ideal form of the proximal point algorithm is often impractical,
This work is partially supported by the National Science Foundation of China (10831009), a Research Grant of the Education Committee of Chongqing, and a Research Grant of the Key Lab of Operations Research and System Engineering of Chongqing.
* Corresponding author at: Department of Mathematics and Computer Science, Chongqing Normal University, Chongqing 400047, China.
E-mail addresses: [email protected], [email protected] (Z. Chen).
doi:10.1016/j.camwa.2008.11.017
since solving (1.2) exactly can be as difficult (or almost as difficult) as solving the original problem (1.1). Moreover, there seems to be little justification for the effort required to solve the subproblem accurately when the iterate is far from the solution set. In [2] Rockafellar also gave an inexact variant of the proximal point algorithm: find z^{k+1} ∈ H such that

z^{k+1} + c_k v^{k+1} − z^k = ε^k
(1.3)
where ε^k is regarded as an error term. It was shown that if ‖ε^k‖ → 0 quickly enough that ∑_{k=0}^∞ ‖ε^k‖ < ∞, then {z^k} converges weakly to some z̄ with 0 ∈ T(z̄). Because of its relaxed accuracy requirement, the inexact proximal point algorithm is more practical than the exact one. Thus the study of inexact proximal point algorithms has received extensive attention, and various forms of the algorithm have been developed for scalar-valued optimization problems and variational inequality problems. Meanwhile, the vector optimization model has found many important applications in decision making problems arising in economic theory, management science and engineering design. Because of these applications, a large literature has been devoted to optimality conditions, duality theories and topological properties of solutions of vector optimization problems [15–18]. Recently, some numerical methods for solving convex vector optimization problems have been proposed: the steepest descent method for multiobjective optimization was dealt with in [19], and an extension of the projected gradient method to the case of convex constrained vector optimization can be found in [20]. It is worth noticing that Bonnel et al. [21] constructed a vector-valued proximal point algorithm to investigate convex vector optimization problems in Hilbert space and generalized Rockafellar's famous results from the scalar case to the vector case. Very recently, Ceng and Yao [22] proposed an approximate proximal algorithm for solving convex vector optimization problems in Hilbert space. An important motivation for analyzing the convergence properties of various proximal point algorithms is related to the mesh independence principle [23–25]. The mesh independence principle relies on infinite dimensional convergence results to predict the convergence properties of the discrete finite dimensional method.
Furthermore, it can provide a theoretical foundation for the justification of refinement strategies and help to design the refinement process. As is well known, many practical problems in economics and engineering are modeled in infinite dimensional spaces. These include optimal control problems, shape optimization problems, and minimal area surface problems with obstacles, among many others. In many shape optimization problems the function space is only a Banach space and not a Hilbert space; thus it is important and necessary to analyze the convergence properties of algorithms in Banach spaces. Based on the above results and reasons, the purpose of this paper is to construct an inexact vector-valued proximal-type point algorithm based on a Lyapunov functional in Banach spaces, to carry out a convergence analysis of the method, and to prove that the sequence generated by the algorithm converges weakly to a weak Pareto optimal solution of the convex vector optimization problem under some mild conditions. The main results of this paper extend the corresponding results of [2,21] and [22] to a more general case. The paper is organized as follows. In Section 2, we present some concepts, basic assumptions and preliminary results. In Section 3, we introduce the generalized vector-valued proximal point algorithm and carry out its convergence analysis. In Section 4, we draw a conclusion and make some remarks.
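As a concrete illustration of iteration (1.3) (a toy sketch under assumptions not taken from the paper: a specific affine monotone operator on ℝ and a geometric error sequence), the inexact resolvent step can be computed in closed form and still converges to the zero of T:

```python
# Inexact proximal point iteration (1.3) for T(x) = x - 3 on the real line;
# T is maximal monotone with unique zero x = 3 (illustrative choice).
# (1.3) reads z_{k+1} + c_k T(z_{k+1}) = z_k + eps_k, which here solves to
# z_{k+1} = (z_k + eps_k + 3 c_k) / (1 + c_k).

def inexact_ppa(z0, steps=60, c=1.0):
    z = z0
    for k in range(steps):
        eps = 2.0 ** (-k)          # summable error sequence: sum |eps_k| < infinity
        z = (z + eps + 3.0 * c) / (1.0 + c)
    return z

print(inexact_ppa(10.0))           # approaches the zero of T at x = 3
```

The summability of the errors is exactly the condition ∑‖ε^k‖ < ∞ above; with a non-summable error sequence the iterates need not converge to a zero of T.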
2. Preliminaries

In this section, we present some basic definitions and propositions for the proof of our main results. A Banach space X is said to be strictly convex if ‖(x + y)/2‖ < 1 for all x, y ∈ X with ‖x‖ = ‖y‖ = 1 and x ≠ y. It is said to be uniformly convex if lim_{n→∞} ‖x_n − y_n‖ = 0 for any two sequences {x_n}, {y_n} in X such that ‖x_n‖ = ‖y_n‖ = 1 and lim_{n→∞} ‖(x_n + y_n)/2‖ = 1. It is known that a uniformly convex Banach space is reflexive and strictly convex. A Banach space X is said to be smooth if the limit

lim_{t→0} (‖x + ty‖ − ‖x‖)/t    (2.1)

exists for all x, y ∈ U, where U = {x ∈ X : ‖x‖ = 1}. It is said to be uniformly smooth if the limit (2.1) is attained uniformly for x, y ∈ U. We know that the spaces L^p (1 < p < ∞), l^p and the Sobolev spaces W^p_m are uniformly convex and uniformly smooth Banach spaces. In this paper, let X be a uniformly convex and uniformly smooth Banach space with norm ‖·‖_X, denote by X* its dual space with norm ‖·‖_{X*}, and let Y be a real Banach space ordered by a pointed, closed and convex cone C with nonempty interior int C, which defines a partial order ≤_C in Y, i.e., y_1 ≤_C y_2 if and only if y_2 − y_1 ∈ C for any y_1, y_2 ∈ Y, together with the relation ≤_{int C} given by y_1 ≤_{int C} y_2 if and only if y_2 − y_1 ∈ int C. The extended space is Ȳ = Y ∪ {−∞_C, +∞_C}, where +∞_C and −∞_C are two distinct elements not belonging to Y. We denote by Y* the dual space of Y and by C* the positive polar cone of C, i.e., C* = {z ∈ Y* : ⟨z, y⟩ ≥ 0, ∀y ∈ C}. Denote C^{*+} = {z ∈ Y* : ⟨y, z⟩ > 0 for all y ∈ C \ {0}}.
We state Lemma 1.1 of [26] as the following lemma.

Lemma 2.1. Let e ∈ int C be fixed and C^{*0} = {z ∈ C* : ⟨z, e⟩ = 1}; then C^{*0} is a weak*-compact subset of C*.

The normalized duality mapping J : X → 2^{X*} is defined by

Jx = {v ∈ X* : ⟨x, v⟩ = ‖x‖² = ‖v‖²},

where ⟨·,·⟩ stands for the duality pairing between X and X*. As is well known, the normalized duality mapping is the identity operator in Hilbert space, and it has the following important properties [27]:

A. ‖x‖² − ‖y‖² ≥ 2⟨x − y, j⟩ for all x, y ∈ X and j ∈ Jy;
B. If X is smooth, then J is single valued;
C. If X is smooth, then J is norm-to-weak* continuous;
D. If X is uniformly smooth, then J is uniformly norm-to-norm continuous on each bounded subset of X.
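For illustration (an assumption of this sketch, not a claim of the paper): in l^p with 1 < p < ∞ the normalized duality mapping acts coordinatewise as (Jx)_i = ‖x‖_p^{2−p} |x_i|^{p−1} sgn(x_i), and the defining identities ⟨x, Jx⟩ = ‖x‖² = ‖Jx‖² can be verified numerically in ℝⁿ equipped with the p-norm:

```python
import numpy as np

p = 4.0
q = p / (p - 1.0)                      # conjugate exponent, 1/p + 1/q = 1
x = np.array([1.0, -2.0, 0.5])

norm_p = np.sum(np.abs(x) ** p) ** (1.0 / p)
# normalized duality mapping on l^p: (Jx)_i = ||x||^{2-p} |x_i|^{p-1} sgn(x_i)
Jx = norm_p ** (2.0 - p) * np.abs(x) ** (p - 1.0) * np.sign(x)

norm_q = np.sum(np.abs(Jx) ** q) ** (1.0 / q)   # dual (q-)norm of Jx
# defining identities: <x, Jx> = ||x||_p^2 and ||Jx||_q = ||x||_p
print(np.isclose(x @ Jx, norm_p ** 2), np.isclose(norm_q, norm_p))
```

For p = 2 this formula reduces to Jx = x, recovering the Hilbert-space identity mapping mentioned above.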
Lemma 2.2 ([28]). Let X be a uniformly convex and uniformly smooth Banach space, let δ_X(ε) : [0, 2] → [0, 1] be the modulus of convexity and ρ_X(τ) : [0, ∞) → [0, ∞) the modulus of smoothness of the Banach space X. If x, y ∈ X are such that ‖x‖ ≤ C and ‖y‖ ≤ C, then we have

⟨Jx − Jy, x − y⟩ ≥ (2L)^{-1} C² δ_X(‖x − y‖/(2C)),    (2.2)

⟨Jx − Jy, x − y⟩ ≥ (2L)^{-1} C² δ_{X*}(‖Jx − Jy‖/(2C)),    (2.3)

and

‖Jx − Jy‖ ≤ 8C h_X(16L‖x − y‖/C),    (2.4)

where h_X(τ) = ρ_X(τ)/τ and L is the constant in Figiel's inequalities [29].
Definition 2.1 ([15]). Let S ⊂ X be convex. A map F : S → Y ∪ {+∞_C} is said to be C-convex if F((1 − λ)x + λy) ≤_C (1 − λ)F(x) + λF(y) for any x, y ∈ S and λ ∈ [0, 1]. F is said to be strictly C-convex if F((1 − λ)x + λy) ≤_{int C} (1 − λ)F(x) + λF(y) for any x, y ∈ S with x ≠ y and λ ∈ (0, 1).

Definition 2.2 ([15]). The set (F(x_0) − C) ∩ F(X) is said to be C-complete if for every sequence {α_n} ⊂ X with α_0 = x_0 such that F(α_{n+1}) ≤_C F(α_n) for all n ∈ ℕ, there exists α ∈ X such that F(α) ≤_C F(α_n) for all n ∈ ℕ.

Definition 2.3 ([17]). Let S ⊂ X be convex and let F : S → Y ∪ {+∞_C} be a vector-valued function. x* ∈ S is said to be a Pareto optimal solution of F on S if
(F(S) − F(x*)) ∩ (−C \ {0}) = ∅.

Suppose that int C ≠ ∅. x* ∈ S is said to be a weak Pareto optimal solution of F on S if

(F(S) − F(x*)) ∩ (−int C) = ∅.

Definition 2.4 ([21]). A map F : S ⊂ X → Y ∪ {+∞_C} is said to be positively lower semicontinuous if for every z ∈ C*, the scalar-valued function x ↦ ⟨F(x), z⟩ is lower semicontinuous.

From now on, let us consider the following vector optimization problem:

C − Min_w {F(x) | x ∈ X}    (VOP)

where X is a uniformly convex and uniformly smooth Banach space, Y is a real Banach space, F : X → Y ∪ {+∞_C} is C-convex, and dom(F) = {x ∈ X | F(x) ≠ +∞_C} denotes the effective domain of F. Notice that any constrained vector optimization problem

C − Min_w {F_0(x) | x ∈ X_0}
(CVOP)
where F_0 : X_0 → Y and X_0 is a nonempty closed convex subset of X, can be treated in this framework: it is easy to check that (CVOP) is equivalent to the unconstrained extended-valued problem (VOP) with

F(x) = F_0(x) if x ∈ X_0, and F(x) = +∞_C if x ∉ X_0,
in the sense that they have the same weak Pareto optimal solutions.
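Definition 2.3 can be checked directly on a discretized feasible set when Y = ℝ² and C = ℝ²₊, so that −int C is the open negative orthant. The bi-objective map and the grid below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# F maps R -> R^2; C = R^2_+, so int C is the open positive orthant.
# x* is a weak Pareto optimal solution iff no feasible x satisfies
# F(x) - F(x*) in -int C, i.e. no x strictly improves both components.
def F(x):
    return np.array([(x - 1.0) ** 2, (x + 1.0) ** 2])

def is_weak_pareto(x_star, feasible):
    fx = F(x_star)
    return not any(np.all(F(x) < fx) for x in feasible)

grid = np.linspace(-3.0, 3.0, 601)
print(is_weak_pareto(0.0, grid))   # x = 0 lies between the two minimizers
print(is_weak_pareto(2.0, grid))   # e.g. x = 1 strictly improves both components
```

For this F, the weak Pareto optimal solutions are exactly the points of the interval [−1, 1] between the minimizers of the two components.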
Lemma 2.3 ([30]). Let s > 0 and let X be a Banach space. Then X is uniformly convex if and only if there exists a continuous, strictly increasing and convex function g : [0, ∞) → [0, ∞) with g(0) = 0 such that

‖x + y‖² ≥ ‖x‖² + 2⟨y, j⟩ + g(‖y‖)

for all x, y ∈ {z ∈ X : ‖z‖ ≤ s} and j ∈ Jx.

Now we introduce a real-valued function on a Banach space which has properties similar to those of ‖x − y‖² in Hilbert space.

Definition 2.5 ([28]). Let X be a smooth Banach space. The Lyapunov functional L : X × X → ℝ_+ is defined by

L(x, y) = ‖x‖² − 2⟨x, Jy⟩ + ‖y‖²

for x, y ∈ X. It is easy to see that
(‖x‖ + ‖y‖)² ≥ L(x, y) ≥ (‖x‖ − ‖y‖)²    (2.5)

for any x, y ∈ X.

Proposition 2.1 ([28]). Let X be a uniformly convex and smooth Banach space and let {y_n} and {z_n} be two sequences in X. If L(y_n, z_n) → 0 and either {y_n} or {z_n} is bounded, then ‖y_n − z_n‖ → 0.

Proposition 2.2 ([28]). Let X be a reflexive, strictly convex and smooth Banach space and let C be a nonempty closed convex subset of X. Then there exists a unique element x* ∈ C such that

L(x*, x) = inf{L(z, x) : z ∈ C}.
(2.6)
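When X is a Hilbert space, J is the identity and the Lyapunov functional reduces to L(x, y) = ‖x − y‖². A quick numerical sanity check of this fact and of inequality (2.5) in ℝⁿ (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)

# Lyapunov functional L(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2; here J = identity
lyap = x @ x - 2.0 * (x @ y) + y @ y

nx, ny = np.linalg.norm(x), np.linalg.norm(y)
# In Hilbert space L(x, y) = ||x - y||^2, and the bounds (2.5) hold:
print(np.isclose(lyap, np.linalg.norm(x - y) ** 2))
print((nx - ny) ** 2 <= lyap <= (nx + ny) ** 2)
```

The two bounds in (2.5) are the squared triangle inequalities, which is why L(·,·) can play the role of ‖x − y‖² outside Hilbert space.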
For every nonempty closed convex subset C of a reflexive, strictly convex and smooth Banach space X, we can define a map P_C of X onto C by P_C x = x*, where x* is defined by (2.6). It is clear that the map P_C coincides with the metric projection when X is a Hilbert space.

3. Main results

Lemma 3.1. If S ⊂ X is a convex set and F : S → Y ∪ {+∞_C} is a C-convex proper map, then

C − Argmin_w {F(x) | x ∈ S} = ⋃_{z ∈ C^{*0}} argmin{⟨F(x), z⟩ | x ∈ S}
where C − Argmin_w {F(x) | x ∈ S} is the weak Pareto optimal solution set of F. This follows immediately from Theorem 2.1 in [21].

Now we make the following assumption on the map and the initial point x_0:

(A) the set (F(x_0) − C) ∩ F(X) is C-complete, which means that for every sequence {a_n} ⊂ X with a_0 = x_0 such that F(a_{n+1}) ≤_C F(a_n) for all n ∈ ℕ, there exists a ∈ X such that F(a) ≤_C F(a_n) for all n ∈ ℕ.

Here we propose the following inexact generalized proximal point algorithm based on a Lyapunov functional (GPPAL, for short):

Step (1): Take x_0 ∈ dom F;
Step (2): Given x_k, if x_k ∈ C − Argmin_w {F(x) | x ∈ X}, then x_{k+p} = x_k for all p ≥ 1 and the algorithm stops; otherwise go to Step (3);
Step (3): If x_k ∉ C − Argmin_w {F(x) | x ∈ X}, then compute x_{k+1} such that

x_{k+1} ∈ C − Argmin_w {F(x) + (λ_k/2)(L(x, x_k) − ⟨x, ε_{k+1}⟩) e_k | x ∈ θ_k},    (3.1)

then go to Step (2).

Here {e_k} ⊂ int C with ‖e_k‖ = 1, and {ε_k} ⊂ X* is regarded as an error sequence, which satisfies

∑_{k=0}^∞ ‖ε_k‖_{X*} < +∞,    (3.2a)

and

∑_{k=0}^∞ ⟨x_k, ε_k⟩ exists and is finite.    (3.2b)

Let θ_k = {x ∈ X | F(x) ≤_C F(x_k)} and λ_k ∈ (0, λ] with λ > 0. Now we will show the main results of this paper.
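In the Hilbert-space specialization (J = identity, so L(x, x_k) = ‖x − x_k‖²), one GPPAL step with a fixed scalarization z_k reduces to a strongly convex regularized subproblem. The toy sketch below is an assumption-laden illustration, not the algorithm's general form: a bi-objective F(x) = (‖x − a‖², ‖x − b‖²) with C = ℝ²₊, fixed weights (w, 1 − w), the factor ⟨e_k, z_k⟩ absorbed into λ_k, and a summable error sequence as in (3.2a):

```python
import numpy as np

# Toy GPPAL step in the Hilbert case: F(x) = (||x-a||^2, ||x-b||^2),
# scalarized with fixed weights (w, 1-w). The subproblem
#   min_x  w||x-a||^2 + (1-w)||x-b||^2 + (lam/2)(||x-x_k||^2 - <x, eps_k>)
# is strongly convex and has a closed-form minimizer.
a, b = np.array([0.0, 0.0]), np.array([2.0, 2.0])
w, lam = 0.25, 1.0

x = np.array([5.0, -4.0])                    # initial point x_0
for k in range(80):
    eps = 0.5 ** k * np.array([1.0, -1.0])   # summable errors, cf. (3.2a)
    x = (2*w*a + 2*(1 - w)*b + lam*x + 0.5*lam*eps) / (2.0 + lam)

print(np.round(x, 6))   # tends to w*a + (1-w)*b = (1.5, 1.5), a weak Pareto point
```

Different weight choices trace out different weak Pareto points on the segment [a, b], mirroring the role of the scalarizations z ∈ C^{*0} in Lemma 3.1.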
Theorem 3.2. Let F : X → Y ∪ {+∞_C} be a proper, C-convex and positively lower semicontinuous map. Then any sequence {x_k} generated by the algorithm (GPPAL) is well defined.

Proof. Let x_0 ∈ dom F be the initial point and assume that the algorithm has reached step k; we will show that the next iterate x_{k+1} exists. Take any z ∈ C^{*0} and define a new function φ_k : X → ℝ ∪ {+∞} as follows:

φ_k(x) = ⟨F(x), z⟩ + I_{θ_k}(x) + (λ_k/2)(L(x, x_k) − ⟨x, ε_{k+1}⟩)⟨e_k, z⟩    (3.3)

where I_{θ_k} : X → ℝ ∪ {+∞} is the (scalar-valued) indicator function of the set θ_k:

I_{θ_k}(x) = 0 if x ∈ θ_k;  I_{θ_k}(x) = +∞ if x ∉ θ_k.

It is clear that θ_k is a convex set by its definition. Since F is C-convex and positively lower semicontinuous, it follows that ⟨F(x), z⟩ + I_{θ_k}(x) is convex and lower semicontinuous with respect to x. From the fact that e_k belongs to int C and the definition of C^{*0}, we have ⟨e_k, z⟩ > 0. Moreover, it is easy to check that φ_k is strongly convex, so the subdifferential of φ_k is maximal monotone and strongly monotone. Thus, by virtue of Minty's theorem, the subdifferential of φ_k has a zero, which is a minimizer of φ_k. By Lemma 3.1, such a minimizer solves

min_{x ∈ θ_k} { ⟨F(x), z⟩ + (λ_k/2)(L(x, x_k) − ⟨x, ε_{k+1}⟩)⟨e_k, z⟩ },

hence satisfies (3.1) and can be taken as x_{k+1}.
Theorem 3.3. Let the assumptions of Theorem 3.2 hold and suppose further that assumption (A) holds. Then the sequence {x_k} generated by the algorithm (GPPAL) is bounded.

Proof. From the algorithm (GPPAL), we know that if the sequence stops at some iteration, it is constant thereafter. Now we assume that the sequence {x_k} does not stop after step k. Define E ⊂ X by

E = {x ∈ X | F(x) ≤_C F(x_k) ∀k ∈ ℕ}.

It is easy to check that E is nonempty, by the fact that the set (F(x_0) − C) ∩ F(X) is C-complete. Since x_{k+1} is a weak Pareto optimal solution of problem (3.1), there exists z_k ∈ C^{*0} such that x_{k+1} is a solution of the following problem

min {φ_k(x) | x ∈ X}    (P*)

with z = z_k; thus x_{k+1} satisfies the first-order necessary optimality condition of problem (P*). By the definition of θ_k, we have θ_k ⊂ dom F and ∅ ≠ dom(I_{θ_k}) ⊂ dom F; then we can derive that there exists μ_k ∈ ∂⟨F(·) + (λ_k/2)(L(·, x_k) − ⟨·, ε_{k+1}⟩)e_k, z_k⟩(x_{k+1}). By virtue of Theorem 3.23 of [31], one has
⟨μ_k, x − x_{k+1}⟩ ≥ 0    (3.4)
for any x ∈ θ_k. Now we can define another function as follows:

ϕ_k(x) = ⟨F(x), z_k⟩.

From (3.3) we know that there exists some γ_k ∈ ∂ϕ_k(x_{k+1}) such that

μ_k = γ_k + (λ_k/2)⟨e_k, z_k⟩(2Jx_{k+1} − 2Jx_k − ε_{k+1}).    (3.5)
Now taking x* ∈ E, it is clear that x* ∈ θ_k, and we can derive that

⟨x* − x_{k+1}, γ_k + (λ_k/2)⟨e_k, z_k⟩(2Jx_{k+1} − 2Jx_k − ε_{k+1})⟩ ≥ 0    (3.6)
and from the definition of the subgradient of ϕ_k, it follows that

⟨F(x*) − F(x_{k+1}), z_k⟩ ≥ ⟨x* − x_{k+1}, γ_k⟩.

By the fact that x* ∈ E and z_k ∈ C^{*0}, it is easy to check that ⟨F(x*) − F(x_{k+1}), z_k⟩ ≤ 0, and it follows that ⟨x* − x_{k+1}, γ_k⟩ ≤ 0. From (3.6) we can see that

(λ_k/2)⟨e_k, z_k⟩⟨x* − x_{k+1}, 2Jx_{k+1} − 2Jx_k − ε_{k+1}⟩ ≥ 0.    (3.7)
Now define η_k = (λ_k/2)⟨e_k, z_k⟩; it is easy to check that η_k > 0. By the definition of the Lyapunov functional and (3.7), we can derive the following inequalities:

L(x*, x_k) − L(x*, x_{k+1}) − L(x_{k+1}, x_k) − ⟨x* − x_{k+1}, ε_{k+1}⟩ ≥ 0,
L(x_{k+1}, x_k) ≤ L(x*, x_k) − L(x*, x_{k+1}) + ⟨x_{k+1} − x*, ε_{k+1}⟩.    (3.8)
So one obtains, for all l ≥ 0, that

L(x*, x_l) ≤ L(x*, x_0) − ∑_{k=0}^{l−1} L(x_{k+1}, x_k) + ∑_{k=0}^{l−1} ⟨x_{k+1} − x*, ε_{k+1}⟩.
From property (3.2a) of the error sequence, we know that

0 ≤ ∑_{k=0}^∞ |⟨x*, ε_{k+1}⟩| ≤ ∑_{k=0}^∞ ‖ε_{k+1}‖_{X*} ‖x*‖ < +∞,

so that ∑_{k=0}^∞ ⟨x*, ε_{k+1}⟩ is convergent. Combining this with (3.2b), one has that ∑_{k=0}^∞ ⟨x_{k+1} − x*, ε_{k+1}⟩ exists and is finite. It follows that

M = sup_{l ≥ 0} { ∑_{k=0}^{l−1} ⟨x_{k+1} − x*, ε_{k+1}⟩ }    (3.9)

is finite.
From property (2.5) of the Lyapunov functional and (3.9), we can see that

(‖x_l‖ − ‖x*‖)² ≤ L(x*, x_l) ≤ L(x*, x_0) + M ≤ (‖x_0‖ + ‖x*‖)² + M,

which implies that

‖x_l‖ ≤ √((‖x_0‖ + ‖x*‖)² + M) + ‖x*‖ ≤ K

for all l ≥ 0, where K is any real number larger than √((‖x_0‖ + ‖x*‖)² + M) + ‖x*‖. Since E is nonempty, we can conclude that {x_k} is bounded.
Meanwhile, summing up inequality (3.8) and using (3.9), we have

∑_{k=0}^∞ L(x_{k+1}, x_k) ≤ L(x*, x_0) + ∑_{k=0}^∞ ⟨x_{k+1} − x*, ε_{k+1}⟩ < +∞,

and it follows that

lim_{k→+∞} L(x_{k+1}, x_k) = 0.    (3.10)
Theorem 3.4. Let the assumptions of Theorem 3.3 hold and suppose further that X̄ is nonempty, where X̄ is the weak Pareto optimal solution set of F. Then every weak cluster point of {x_k} belongs to X̄.

Proof. Since {x_k} is bounded, it has weak cluster points. We will show that every weak cluster point is a weak Pareto optimal solution of problem (VOP). Let x̂ be a weak cluster point of {x_k} and let {x_{k_j}} be a subsequence of {x_k} which converges weakly to x̂. We define, for each z ∈ C^{*0}, the function ψ_z : X → ℝ ∪ {+∞} by ψ_z(x) = ⟨F(x), z⟩. Since F is positively lower semicontinuous and C-convex, ψ_z is also lower semicontinuous and convex; it follows that ψ_z(x̂) ≤ lim inf_{j→+∞} ψ_z(x_{k_j}). By the fact that x_{k+1} ∈ θ_k, we see that F(x_{k+1}) ≤_C F(x_k) for all k ∈ ℕ; thus ψ_z(x_{k+1}) ≤ ψ_z(x_k). Therefore,
ψ_z(x̂) ≤ lim inf_{j→+∞} ψ_z(x_{k_j}) = inf_k ψ_z(x_k).

Hence we have, for all z ∈ C^{*0},

ψ_z(x̂) ≤ ψ_z(x_k), implying F(x̂) ≤_C F(x_k).

Assume that x̂ is not a weak Pareto optimal solution of problem (VOP); then there exists x̄ ∈ X such that F(x̄) ≤_{int C} F(x̂). Taking z_k ∈ C^{*0}, from Lemma 2.1 we know that C^{*0} is weak*-compact. By virtue of the Banach–Alaoglu theorem, there exists z̄ ∈ C^{*0} which is a weak*-cluster point of {z_{k_j}}. Without loss of generality, we can assume that

w*-lim_{j→+∞} z_{k_j} = z̄;    (3.11)
thus we have

⟨F(x̄) − F(x̂), z_{k_j}⟩ ≥ ⟨F(x̄) − F(x_{k_j+1}), z_{k_j}⟩ = ϕ_{k_j}(x̄) − ϕ_{k_j}(x_{k_j+1}).    (3.12)

There exists some γ_{k_j} ∈ ∂ϕ_{k_j}(x_{k_j+1}) such that

ϕ_{k_j}(x̄) − ϕ_{k_j}(x_{k_j+1}) ≥ ⟨x̄ − x_{k_j+1}, γ_{k_j}⟩ = ⟨x̄ − x_{k_j+1}, μ_{k_j}⟩ − η_{k_j}⟨x̄ − x_{k_j+1}, 2Jx_{k_j+1} − 2Jx_{k_j} − ε_{k_j+1}⟩.

Since x̄ ∈ θ_{k_j}, it is clear that ⟨x̄ − x_{k_j+1}, μ_{k_j}⟩ ≥ 0, and we can see that

ϕ_{k_j}(x̄) − ϕ_{k_j}(x_{k_j+1}) ≥ −2η_{k_j}⟨x̄ − x_{k_j+1}, Jx_{k_j+1} − Jx_{k_j}⟩ − η_{k_j}⟨x_{k_j+1} − x̄, ε_{k_j+1}⟩
≥ −2η_{k_j}‖Jx_{k_j+1} − Jx_{k_j}‖_{X*} ‖x̄ − x_{k_j+1}‖ − η_{k_j}‖ε_{k_j+1}‖_{X*} ‖x_{k_j+1} − x̄‖.    (3.13)
By Proposition 2.1, (3.10) and the boundedness of {x_k}, it is easy to check that

lim_{j→+∞} ‖x_{k_j+1} − x_{k_j}‖ = 0.    (3.14)

From Lemma 2.2, inequality (2.4) and (3.13), we have

‖Jx_{k_j+1} − Jx_{k_j}‖_{X*} ‖x̄ − x_{k_j+1}‖ ≤ 8K h_X(16L‖x_{k_j+1} − x_{k_j}‖/K) ‖x̄ − x_{k_j+1}‖    (3.15)

where we use the fact that {x_k} is bounded by K. Since X is uniformly smooth, h_X(τ) tends to 0 as τ tends to 0. So from (3.2a), (3.14) and (3.15), together with the boundedness of {x_k}, we conclude that the lower bound in (3.13) vanishes as j → ∞; hence the right-hand expression in (3.12) has a nonnegative limit inferior, and from (3.11) it is clear that

⟨F(x̄) − F(x̂), z̄⟩ ≥ 0.    (3.16)
Then we can see that (3.16) contradicts the facts that z̄ ∈ C^{*0} and F(x̄) ≤_{int C} F(x̂); thus the assumption is false and x̂ is a weak Pareto optimal solution of problem (VOP).

Theorem 3.5. Under the assumptions of Theorem 3.4, assume in addition that the normalized duality mapping J is weak-to-weak continuous. Then the whole sequence {x_k} converges weakly to a weak Pareto optimal solution of problem (VOP).

Proof. In this part we will show that {x_k} has exactly one weak cluster point. To the contrary, assume that x̂ and x̃ are two distinct weak cluster points of {x_k}, and let {x_{k_j}} and {x_{k_i}} be two subsequences of {x_k} with

w-lim_{j→+∞} x_{k_j} = x̂,  w-lim_{i→+∞} x_{k_i} = x̃.
So by Theorem 3.4, we know that x̂ and x̃ are also weak Pareto optimal solutions of problem (VOP). From the proof of Theorem 3.3, the sequences L(x̂, x_k) and L(x̃, x_k) are bounded and convergent. Let l̄_1 and l̄_2 be their limits; then

lim_{k→+∞} (L(x̂, x_k) − L(x̃, x_k)) = l̄_1 − l̄_2.    (3.17)
On the other hand, from the definition of the Lyapunov functional we have

L(x̂, x_k) − L(x̃, x_k) = ‖x̂‖² − 2⟨x̂ − x̃, Jx_k⟩ − ‖x̃‖².    (3.18)

Let W be the limit of (3.18). Taking k = k_j in (3.18) and using the weak-to-weak continuity of the normalized duality mapping J, we can derive that W = −L(x̃, x̂). Repeating the argument with k = k_i, we get W = L(x̂, x̃). It follows that

L(x̂, x̃) + L(x̃, x̂) = 0.    (3.19)
By the definition of the Lyapunov functional, we can derive from (3.19) that x̂ = x̃, which proves the uniqueness of the weak cluster point of {x_k} and completes the proof.
4. Conclusion

In this paper, we consider a convex vector optimization problem of finding weak Pareto optimal solutions for an extended vector-valued map from a uniformly convex and uniformly smooth Banach space to a real Banach space, with respect to the partial order induced by a closed, convex and pointed cone with a nonempty interior. We introduce an inexact vector-valued proximal point algorithm based on a Lyapunov functional for the convex vector optimization problem, carry out a convergence analysis of the method, and prove that the sequence generated by the algorithm converges weakly to a weak Pareto optimal solution of the vector optimization problem under the C-completeness assumption. Our results extend some corresponding results of [2,21] and [22] to more general cases.

Acknowledgments

We thank an anonymous referee for carefully reading the original submission and supplying many helpful suggestions, which greatly improved the paper. The first author also thanks Prof. X.Q. Yang, The Hong Kong Polytechnic University, for his teaching and comments on the original manuscript.

References

[1] B. Martinet, Régularisation d'inéquations variationelles par approximations successives, Rev. Française Inf. Rech. Oper. 2 (1970) 154–159.
[2] R.T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim. 14 (1976) 877–898.
[3] A. Auslender, M. Haddou, An interior proximal point method for convex linearly constrained problems and its extension to variational inequalities, Math. Program. 71 (1995) 77–100.
[4] R.S. Burachik, A.N. Iusem, A generalized proximal point algorithm for the variational inequality problem in a Hilbert space, SIAM J. Optim. 8 (1) (1998) 197–216.
[5] R.S. Burachik, S. Scheimberg, A proximal point algorithm for the variational inequality problem in Banach space, SIAM J. Control Optim. 39 (2001) 1633–1649.
[6] Y. Censor, S.A. Zenios, The proximal minimization algorithm with D-functions, J. Optim. Theory Appl. 73 (1992) 451–464.
[7] G. Chen, M. Teboulle, Convergence analysis of a proximal-like minimization algorithm using Bregman functions, SIAM J. Optim. 3 (1993) 538–543.
[8] A. De Pierro, A.N. Iusem, A relaxed version of Bregman's method for convex programming, J. Optim. Theory Appl. 51 (1986) 421–440.
[9] O. Güler, On the convergence of the proximal point algorithm for convex minimization, SIAM J. Control Optim. 29 (1991) 403–419.
[10] K.C. Kiwiel, Proximal minimization methods with generalized Bregman functions, SIAM J. Control Optim. 35 (1997) 326–349.
[11] S. Kamimura, W. Takahashi, Strong convergence of a proximal-type algorithm in a Banach space, SIAM J. Optim. 13 (2003) 938–945.
[12] T. Pennanen, Local convergence of the proximal point algorithm and multiplier methods without monotonicity, Math. Oper. Res. 27 (2002) 170–191.
[13] M.V. Solodov, B.F. Svaiter, Forcing strong convergence of proximal point iterations in a Hilbert space, Math. Program. 87 (2000) 189–202.
[14] J. Eckstein, Approximate iterations in Bregman-function-based proximal algorithms, Math. Program. 89 (1998) 113–123.
[15] G.Y. Chen, X.X. Huang, X.Q. Yang, Vector Optimization: Set-Valued and Variational Analysis, Lecture Notes in Economics and Mathematical Systems, vol. 541, Springer-Verlag, Berlin, 2005.
[16] J. Jahn, Vector Optimization: Theory, Applications and Extensions, Springer-Verlag, Berlin, 2004.
[17] D.T. Luc, Theory of Vector Optimization, Lecture Notes in Economics and Mathematical Systems, vol. 319, Springer-Verlag, Berlin, 1989.
[18] Y. Sawaragi, H. Nakayama, T. Tanino, Theory of Multiobjective Optimization, Academic Press, New York, 1985.
[19] J. Fliege, B.F. Svaiter, Steepest descent methods for multicriteria optimization, Math. Methods Oper. Res. 51 (2000) 479–494.
[20] L.M. Graña Drummond, A.N. Iusem, A projected gradient method for vector optimization problems, Comput. Optim. Appl. 28 (2004) 5–30.
[21] H. Bonnel, A.N. Iusem, B.F. Svaiter, Proximal methods in vector optimization, SIAM J. Optim. 15 (2005) 953–970.
[22] L.C. Ceng, J.C. Yao, Approximate proximal methods in vector optimization, Eur. J. Oper. Res. 183 (2007) 1–19.
[23] E.L. Allgower, K. Böhmer, F.A. Potra, W.C. Rheinboldt, A mesh independence principle for operator equations and their discretizations, SIAM J. Numer. Anal. 23 (1986) 160–169.
[24] M. Heinkenschloss, Mesh independence for nonlinear least squares problems with norm constraints, SIAM J. Optim. 3 (1993) 81–117.
[25] M. Laumen, Newton's mesh independence principle for a class of optimal shape design problems, SIAM J. Control Optim. 37 (1999) 1142–1168.
[26] X.X. Huang, K.L. Teo, X.Q. Yang, Calmness and exact penalization in vector optimization with cone constraints, Comput. Optim. Appl. 35 (2006) 47–67.
[27] H.K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal. 16 (1991) 1127–1138.
[28] Y. Alber, I. Ryazantseva, Nonlinear Ill-posed Problems of Monotone Type, Springer, Dordrecht, 2006.
[29] T. Figiel, On the moduli of convexity and smoothness, Studia Math. 56 (1976) 121–151.
[30] D. Pascali, S. Sburlan, Nonlinear Mappings of Monotone Type, Sijthoff & Noordhoff International Publishers, The Netherlands, 1978.
[31] R.R. Phelps, Convex Functions, Monotone Operators and Differentiability, Lecture Notes in Mathematics, vol. 1364, Springer, Berlin, 1988.